BMP Testing Protocols
After Wisconsin passed new administrative rules for the prevention and management of polluted stormwater runoff in 2002, a chain reaction ensued. The implications: a requirement for 80% total suspended solids (TSS) removal in new development and 40% TSS removal in redevelopment areas; in existing urban areas, 20% by 2008 and 40% by 2013.

The state legislature had directed the state’s Department of Natural Resources (DNR) to develop performance standards for nonpoint-source pollution. The DNR modified its Wisconsin Pollutant Discharge Elimination System permit program, cross-referencing the adopted standards. “Our state stormwater program mirrors the federal program, but with our pollution prevention management measures, we now have numbers associated with needed solids level reduction,” explains John Pfender, a water resource specialist for the Wisconsin DNR.

Soon, Wisconsin municipalities voiced concerns centered on how best management practices (BMPs)–proprietary and nonproprietary–worked and could help them meet the new requirements. Pfender sympathized. “Our concern with the proprietary ones is that manufacturers’ claims are based on assumptions different from what we assumed when we implemented our state law,” he says. One example: a difference in particle size distribution. “It’s not that their testing is bad,” Pfender points out. “The assumptions are different.”

Another issue: The state had no way to tie manufacturers’ flow rate efficiency data into its standard of a 20% to 40% reduction on an average annual basis using specifically identified rainfall data.

Meanwhile, those building new projects asserted they often did not have much space for a BMP that would achieve 80% TSS removal, though vendors were telling them otherwise, says Roger Bannerman, an environmental specialist with the Wisconsin DNR. In attempting to understand how the BMPs are sized and what could be expected from them, Bannerman started running calculations.
He concluded that some claims did not make sense. “I have the responsibility to make sure taxpayers’ money isn’t wasted,” he says. “There’s stress on everyone’s part: City people, consulting engineers, and developers want to do the right thing, but they didn’t know.”

Vendors voiced their own frustrations over losing bids, with one vendor sometimes winning despite incorrectly sizing its units. “With technical standards, hopefully they won’t be able to get away with that,” says Bannerman.

Thus was set into motion Wisconsin’s initiative to develop testing protocols for proprietary BMPs, with the long-term goal of submitting the devices for independent third-party testing using the protocols. Such efforts are being welcomed throughout the stormwater industry. Proprietary stormwater devices are often favored for their small footprints, enabling them to be used in urban settings, points out Omid Mohseni, associate director of applied research at the University of Minnesota’s St. Anthony Falls Laboratory. Located on the banks of the Mississippi River, the laboratory can accommodate BMP testing with a flow rate of up to 300 cubic feet per second (cfs).

Vendors such as Hydro International’s Bob Andoh, director of innovation, see Wisconsin’s effort as a “necessary initiative” in the face of the diverse standards plaguing the stormwater industry. Yet Jim Mailloux, a test engineer for Alden Research Laboratory Inc. in Holden, MA, notes there are still some within what he describes as the “cutthroat” stormwater treatment industry unwilling to abide by the same rules.
“There are people who say, ‘Let’s all work together,’ but I don’t know if the entire industry ever will want to go the route of independent testing; they’ll want to do their own testing,” he says. “I can see why manufacturers are interested in full-scale empirical independent testing, because some people are running some calculations and saying a product works without any basis to prove it,” he adds.

A level playing field will eliminate the problem of “different manufacturers testing in different ways, making different claims,” says Jim Lenhart, chief technical officer for Contech Stormwater Solutions. “The regulatory marketplace is very confused about how these data represent the product relative to what they are trying to accomplish for their water-quality goals,” he says.

Monitoring remains the primary tool for evaluation but has inherent discrepancies, Mohseni says. Lack of information about input and the variations in some field measurement instruments can produce a wide range of results.

The stakes are high in the process. The cost to treat stormwater runoff using proprietary devices can run from $15,000 to $20,000 per acre and higher in a parking lot, Bannerman says. “As you get into something more sophisticated that’s going to get more than 20% or 30% TSS reduction, the number jumps–the lowest is about $50,000 per acre.”

Bringing Diverse Groups Together
Through evaluation by work groups, Wisconsin aims to develop a standard that includes all technical criteria for determining the stormwater treatment efficiencies each device offers, as well as installation guidance. The work groups fall under the umbrella of the Wisconsin Standards Oversight Council (SOC), comprising representatives of municipalities, consulting firms, and state agencies such as the DNR, the Department of Agriculture, the Natural Resources Conservation Service, and the Department of Commerce. The SOC was originally created years ago to address agricultural standards. “There are several state, federal, and county agencies managing agricultural runoff, and each agency was making modifications to standards to suit its own program,” Pfender explains. “It became apparent we needed to create a council to coordinate and update the standards.”

The SOC formed two ad hoc teams to assist in the standards project–one team with members from academia, and another consisting of proprietary device manufacturers, consulting engineers, those who work with government entities, and a Department of Transportation liaison. Manufacturers represented on the second team included Stormceptor, Contech Stormwater Solutions, BaySaver Technologies, Environment 21, Hydro International, and Advanced Drainage Systems. “Everyone is a little confused on how you can relate all these devices, their efficiencies, and how they work and come up with an easy way to compare results for them,” says Mike Murray, the SOC’s technical work team coordinator. “It’s not easy to do, but it’s the task at hand.”

The task began with a two-day meeting of the two groups in February 2006, where members reviewed the state’s initial findings. The DNR had already been conducting field verification studies on Vortechnics and Stormceptor units.
“It appeared if we modeled those devices in the locations where they were set using our Source Loading and Management Model [SLAMM, an urban area nonpoint-source water-quality model], we were able to predict reasonably well how they performed in the field based on detailed site testing,” Pfender says.

On the first day, each manufacturer individually addressed the standards team, talking about his or her company’s devices and testing procedures. They then met for a roundtable discussion during the second day to assist state employees with such issues as how to predict device performance with knowledge of a site’s drainage area, land use, and rainfall data.

Developing Protocols: What to Include
Other protocols do exist, and Wisconsin is using them for baseline information. Factors they seem not to share, however, include particle size and scour. In the past, primarily larger particle sizes–in the 50- to 100-micron range–have been used in testing, whereas stormwater tends to carry a greater share of smaller particles in the 10- to 20-micron range. “They assume a specific gravity of 2.6, where it could be as low as 1.4,” says Bannerman. “How fast it settles depends on its gravity.”

As it moves forward with the idea of third-party testing, the state is now working on figuring out what data are needed for the lab protocol to develop an efficiency curve, says Kevin Kirsch, a water resource engineer for the DNR. Subteams are considering other matters, including methodologies to allow for device scaling and questions on scouring, such as whether tests should include a preloaded amount of sediment.

Regarding scour testing of a device, Kirsch notes, “Running it until it is full and then running the scour test is best, because it more closely mimics what is occurring. But taking this mixture that’s been going in and preloading [the device] with that same mixture isn’t necessarily realistic, because the particle size distribution isn’t going to reflect what was actually trapped.” Options being considered are to preload the device with the mixture that has been trapped, and to define the design (maximum) capacity–which is not equal for all devices–and make that the point at which scour is initiated in the device, says Kirsch. For a device without a bypass feature, he says, a bypass would be placed at the front so any flow above the design rate is shunted around the device to prevent scour.

“Afterward, you look at the total amount of water treated and the total amount of water bypassed, and then you get your percent removal,” says Kirsch. “Then you can run it on an average annual basis and run a series of storms through.
The computer model is generating the hydrology, the runoff and the pollutant flows and then bringing it to the device. The technical standards get applied to how that performance curve of flow versus efficiency is put into the model,” Kirsch says. SLAMM or P8–a model that predicts how much and where stormwater runoff pollutants in urban watersheds will occur–will be used.
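The accounting Kirsch describes–flow above a device’s design rate is shunted around it, and overall removal is weighted by what is actually treated across a series of storms–can be sketched as follows. The efficiency curve, design flow, and storm values here are illustrative assumptions, not numbers from the Wisconsin protocol.

```python
# Sketch of the treated-vs-bypassed accounting Kirsch describes.
# The efficiency curve, design flow, and storm flows are hypothetical.

def removal_efficiency(flow_cfs):
    """Hypothetical flow-vs-efficiency curve from full-scale lab testing:
    removal drops as flow through the device rises."""
    return max(0.0, 0.90 - 0.15 * flow_cfs)

DESIGN_FLOW = 2.0  # cfs; flow above this is shunted around the device

def annual_percent_removal(storm_flows_cfs):
    """Average annual TSS removal over a series of storms, assuming
    pollutant load is proportional to runoff volume (a simplification)."""
    total_load = 0.0
    removed_load = 0.0
    for flow in storm_flows_cfs:
        treated = min(flow, DESIGN_FLOW)  # portion routed through the device
        # (flow - treated) is bypassed untreated and counts against removal
        total_load += flow
        removed_load += treated * removal_efficiency(treated)
    return 100.0 * removed_load / total_load

storms = [0.5, 1.2, 3.0, 0.8, 5.5, 1.0]  # hypothetical storm peak flows, cfs
print(f"Average annual removal: {annual_percent_removal(storms):.1f}%")
```

In practice a continuous model such as SLAMM or P8 would generate the storm series and loads; the point of the sketch is only that bypassed flow dilutes the device’s headline efficiency on an average annual basis.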
Photo: Alden Lab–A testing setup at Alden Research Laboratory in Massachusetts

Not everyone agrees with the idea of third-party verification for BMPs. Lenhart says third-party testing works “in theory” but in practice has been problematic in the industry for some time. “The work needs to be well documented, verifiable, and in a good scientific method,” he says. Contech Stormwater Solutions has engaged in its own monitoring because it is less expensive and because “we understand the operational characteristics of our products better than the typical third party does,” Lenhart says, adding that the procedures his company uses can “withstand scrutiny by anybody relative to the integrity and accuracy of the studies.” The testing included a technical advisor to scrutinize the work and ensure it adhered to scientific methods, he notes.

Lenhart further believes third-party testing is unnecessary based on previous experiences where “the third party will shut down communications with the manufacturer and not follow what we or other manufacturers would believe to be appropriate protocol,” he says. He contends many third-party testing organizations lack experience in stormwater sampling, particularly in understanding costs. “Somebody has $10,000 to monitor a system, and after they go through $10,000, they’ve barely gotten started because the samplers aren’t working and they’re not picking up storms. Rather than admitting they weren’t qualified to execute the studies, what typically happens is communications shut down and a report is issued saying there is something wrong with the technology,” says Lenhart.

The goal of 80% TSS removal also is controversial. Lenhart says this standard as a measure of performance is “very problematic because the methods for testing TSS are developed for wastewater, not stormwater, and are very different.” Rather than looking at percent removal capability, he says, regulators and others should focus on particle distribution and concentration.
He also believes resuspension of particles during extreme flows is an important issue to consider in testing. “The worst-case scenario is you get a storm so big it just cleans [the device] out,” he says. “You can have a device that is very good at removal but does not have the capacity to hold the sediments and pollutants it captures.” He recommends that testing be done not only with an empty system, but also at the sediment level at which the manufacturer recommends maintenance, so the device’s full efficiency can be monitored.

If smaller particle sizes are used in the testing, Bannerman asserts, no one “expects proprietary devices to be in the 80% range.” One challenge is getting a particle size distribution to match that typically found in stormwater runoff. Being fair and practical is another challenge, he adds. Stormwater treatment devices have varying features, which vendors claim offer different effects, he says. A modeling approach would not treat all devices as fairly as would full-scale laboratory testing, “which should account for different processes within these devices,” he says.

Michael Patterson, director of engineering for Environment 21, concurs and points out that testing should not mimic the conditions of wastewater treatment. In a wastewater treatment plant, he says, clarifiers are used to provide calm conditions “to give all pollutants that are going to sink a chance to sink and the ones that will flow a chance to flow. With stormwater treatment, you don’t have that luxury. The event comes in fast and hard, and in 10 minutes it’s gone. You make judgments on sizing and try to keep good hydraulics and not too much turbulence.” Patterson says his message to the SOC is to treat the devices like a clarifier in terms of hydraulic efficiency.
“With that information, you can apply any flow and pollutant composition you want and come up with the result,” he says.

He also points out that stormwater treatment capabilities focus on two types of conditions: the longer treatment and low turbulence of detention-based systems, such as underground or surface ponds, and the higher turbulence and shorter treatment time of proprietary devices. “An engineer is probably making it a priority to design a system to keep the flow distribution with minimum turbulence, but I don’t think the protocols are getting into that particular aspect,” Patterson says. Other factors that should be considered in developing testing protocols include low head loss at high flows, maintenance, and installation, Patterson says.

Bannerman points out other practical challenges to be addressed. A lab needs a holding tank to run enough water for a test; normal taps are insufficient, and fire hydrants are “tough,” he notes. The water needs to be cleaned to run it again, and the type of filter used to clean it is important. Additionally, there must be a mixing chamber, and whether a lab should add dry matter or use a liquid slurry also needs to be considered.

As Wisconsin takes measures to address BMP evaluations, so too are other states such as New Jersey. A summer symposium hosted by the New Jersey Corp. for Advanced Technology examined stormwater treatment technologies and discussed guidelines for selection, review, and approval of stormwater treatment systems. Also, the University of Massachusetts at Amherst recently created the Massachusetts Stormwater Technology Evaluation Project Web site (www.mastep.net) as a stormwater technologies clearinghouse detailing performance characteristics for proprietary stormwater treatment BMPs.
New Jersey and Massachusetts are two of eight states that have signed on with the Technology Acceptance and Reciprocity Partnership (TARP), a joint effort of those states plus California, Illinois, Maryland, New York, Pennsylvania, and Virginia, to reach uniformity in testing standards.

Patterson has concerns about the TARP protocol. “Engineers do not design storm sewers according to statistical analysis; they design them according to accepted empirical equations such as rainfall intensity and friction factor for a pipe. The TARP protocol implies it allows you to make good judgments, but it takes performance data and creates a statistical correlation which has no meaning.”

In developing its own protocol, Wisconsin is placing emphasis on such factors as particle sizes, scour, and soil. Mohseni says his lab is conducting tests for a range of particle sizes, as well as factors such as land use, soil type, and slope. “If we can get an idea of what type of suspended sediment we get into the stormwater, that becomes a very important factor in sizing these devices, because for the same watershed area and climate region, you may get from one area very small suspended sediments and from another region coarse suspended sediments. Then you have to change the size of the device, because where the particles become very small, the device is not going to do a very good job.”

Lenhart points out that water temperature is another variable. “We did a fundamental analysis and found you can have a 25% variation of the settling velocity of particles just based on water temperature,” he says. Scaling is another issue. “If you use a 50-gallon drum to build something to test a hydrodynamic separator and run 10 gallons a minute through it in a two-hour test, then scale it up to something that’s treating 100 cubic feet per second–that concerns me,” Lenhart says.
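Both the specific-gravity issue Bannerman raises and the temperature effect Lenhart describes follow from Stokes’ law, under which a fine particle’s settling velocity scales with (specific gravity − 1) times diameter squared, divided by water viscosity. The sketch below is illustrative only: it uses standard handbook viscosity values and an assumed 5°C-to-20°C range, so the exact percentage differs from Lenhart’s 25% figure, which depends on the temperature span he analyzed.

```python
# Stokes' law settling velocity: v = g * (SG - 1) * rho_w * d^2 / (18 * mu).
# Illustrates two points from the article: settling depends strongly on
# specific gravity (2.6 vs. 1.4) and on water temperature via viscosity.

G = 9.81        # m/s^2, gravitational acceleration
RHO_W = 1000.0  # kg/m^3, approximate water density

# Dynamic viscosity of water, Pa*s (standard handbook values)
MU = {5: 1.52e-3, 20: 1.00e-3}

def stokes_velocity(d_microns, specific_gravity, temp_c=20):
    """Settling velocity (m/s) of a particle of given diameter and
    specific gravity in water at the given temperature."""
    d = d_microns * 1e-6
    return G * (specific_gravity - 1.0) * RHO_W * d**2 / (18.0 * MU[temp_c])

# A 20-micron particle at specific gravity 1.4 settles at a quarter
# the speed of one at 2.6 -- the discrepancy Bannerman points to.
print(f"SG 2.6: {stokes_velocity(20, 2.6):.2e} m/s")
print(f"SG 1.4: {stokes_velocity(20, 1.4):.2e} m/s")

# Cold water is more viscous, so the same particle settles roughly a
# third slower at 5 C than at 20 C with these viscosity values.
slowdown = 1 - stokes_velocity(20, 2.6, temp_c=5) / stokes_velocity(20, 2.6)
print(f"Slowdown at 5 C: {100 * slowdown:.0f}%")
```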
Those who are testing the systems should be required to do so at various flow rates, rather than just the maximum flow rate, Lenhart believes. Also, while upflow velocity is a “good start,” it does not account for differences in device efficiency, according to a Hydro International statement.

And though scour is an important testing factor, it’s a difficult one to evaluate, says Bannerman. “These devices are basically a concrete box with a flow control structure, and depending on how they work, you can get more or less scouring,” he says. “The next storm that comes along is liable to stir everything up and leave out the outlet.”

Bannerman has embarked on another large study considering particle size distribution from many different source areas: parking lots, rooftops, lawns, and streets. The model he uses adjusts for particle sizes. “It took me a year just to get our lab up and running for particle size distribution,” he says. His lab added a $50,000 piece of equipment to do so. “It’s easy for a lab to do 50 microns or above, but not for those little particles. We know there is error involved, but it is the best we can do.” He says the goal is that when engineers apply test results to a parking lot that has a different particle size in the runoff than does a rooftop, they can adjust the sizing accordingly.

Bannerman says he’s not sure to what extent soil type affects BMP performance. “It has some [effect], but as long as we’ve got the particle size right, we could be OK,” he says. “We get a lot of static on that. You can’t study every parking lot in every city; you have to draw the line somewhere.”

Meanwhile, as states develop protocols, labs continue testing with methods they believe best address evaluation needs. St. Anthony Falls has been testing many systems after first being approached by BaySaver Technologies to test one of its smallest models. The lab spent months developing an easy way to evaluate the device’s performance, says Mohseni.
Testers fed the system with sediments and water so they’d know how much of each was being fed into the system and also know the sediment gradation. A test’s duration should be long enough to reduce error in collecting sediments from devices, Mohseni notes.

BaySaver spent thousands of dollars on the testing, but the lab helped the company improve the device’s efficiencies, Mohseni says. Since then, the lab has evaluated similar devices on behalf of the Minnesota Local Road Research Board and the Twin Cities Metropolitan Council using controlled flow from fire hydrants and a measured amount of sediments. The lab conducts 12 tests of four flows, with each test repeated three times. Sediments are collected after the test to determine particle sizes and how well the device removes them.

Mohseni notes that each manufacturer has different models with varying flow rate capacities, “so based on drainage area, land use, and type of climate, they would recommend a specific model.” Such differences can make testing difficult, even though many models share similar hydraulic and flow patterns, Mohseni points out. The lab plans to develop a performance function that can be applied to other model sizes.

In testing the BaySaver device, St. Anthony Falls used a Peclet number, which Mohseni explains is the ratio between advection and diffusion, or the ratio between the sedimentation of the particles and the turbulence generated by the inflow into the device. “As the turbulence increases, it does not allow particles to settle, and there would be more chance they would get to the system’s outlet,” he says. “As the particles become larger, the sediment becomes larger, so there is more chance the particles can settle. The parameters can be related to particle size, flow rate, and the type of operation of each product.”

With lab testing, “the important part is we are not testing these devices at two different locations with different sediment input or different flow rates,” Mohseni says.
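The Peclet-number idea Mohseni describes can be sketched numerically. One common dimensionless form is settling velocity times a characteristic depth divided by a turbulent diffusivity; St. Anthony Falls’ exact formulation may differ, and the numbers below are hypothetical.

```python
# Sketch of the Peclet number as a settling-vs-turbulence ratio.
# The form (Pe = settling velocity * depth / turbulent diffusivity) is one
# common definition, assumed here; all input values are hypothetical.

def peclet(settling_velocity_ms, depth_m, diffusivity_m2s):
    """Pe >> 1: settling dominates, particles are likely captured.
    Pe << 1: turbulence dominates, particles stay suspended and are
    more likely to reach the device's outlet."""
    return settling_velocity_ms * depth_m / diffusivity_m2s

# A coarse particle in a calm sump vs. a fine particle under high
# inflow turbulence:
print(peclet(3.5e-4, 1.5, 1e-4))  # coarse, calm: Pe > 1, mostly settles
print(peclet(2.0e-5, 1.5, 1e-3))  # fine, turbulent: Pe << 1, washes out
```

The appeal of a dimensionless ratio like this is exactly the scaling problem the article raises: it lets results from one model size be related to another with similar hydraulics.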
“We are applying the same input to all devices so we can compare them together.”

Other manufacturers’ devices are also now being tested. St. Anthony Falls Laboratory expects to complete 48 field tests by this fall. These tests require dry-weather conditions, because field personnel do not want to put wet sediment into a system or get water coming in from the street. Workers are aiming to perform two tests a day, which Mohseni notes requires extensive logistical efforts.

Wisconsin has been in communication with Alden Research Laboratory, where Mailloux supports standardized protocols for third-party testing. The lab is one of the oldest in the country that conducts stormwater testing and can accommodate tests with 100 cfs of flow; Mailloux says the stormwater facility is set up to handle 20 cfs for any unit. “Because various laboratories do things differently doesn’t mean they are wrong–it may be the way they think they should do it or because of the lab’s limitations.” The question to ask when considering different testing methodologies, he says, is whether they produce comparable data. For example, without standardized protocols, 80% TSS removal in one lab doesn’t necessarily correlate to 80% TSS removal in another.

Mailloux agrees that sediment size variation over different regions is a concern. “Everyone is so detail oriented to their specific needs that testing done for one particular municipality or department of environmental quality may not be carried over into another one,” he points out, adding that without a streamlined process, manufacturers can spend thousands of dollars to get their devices approved under one state’s protocol, only to learn they have to undergo testing again for another state.

Mailloux says some labs introduce sediment differently, either through dry injection or slurry. Each method has its benefits and drawbacks. Other differences in testing: the accuracy of measuring flow and sediment collection.
Some labs favor using a mass balance approach, Mailloux says. While he agrees it’s important, he says it’s more difficult to do with larger units. “How can you do a true mass balance when you are collecting material so fine it’s not settling out in the unit–it’s being carried into the effluent flow as a suspended solid?” he asks. “How are you able to get that to either settle out into some type of a basin so you can collect it, or capture it in some type of filtration device so that you can account for it when you have these very, very high flows?” He says testers end up using a modified mass balance; some effluent is not being accounted for because of accuracy limitations.

Mailloux includes direct sampling in testing. “If someone tells you it’s at 79.08% efficiency, it doesn’t mean 80%,” he says. “How accurate is this testing? How accurate does it need to be? If you are at 79%, is that not 80%? If you were to repeat that test three times, you may end up with 82% as your average, so not every manufacturer is going to want to spend money to repeat every test over and over again to try to get an average.”

The use of computational fluid dynamics (CFD) to predict performance has grown, but industry reaction is mixed. Mohseni says CFD is a strong tool but has its setbacks. He notes that when CFD entered the market, labs lost work because CFD was believed to provide all the answers. Two decades later, those using it realize it needs to be calibrated with reliable input data.

Lenhart believes CFD “is a great way to help a designer understand how the system is going to work and maybe tweak it a little bit,” but he agrees the results must be verified. While no one would design a spacecraft using only computer analysis and send it into space with people in it, there are those who suggest CFD can show how a stormwater treatment device will remove solids and who would “build it and sell it without testing,” Lenhart says.
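The modified mass balance Mailloux describes can be sketched simply: removal is computed from injected versus captured mass, but very fine material carried out in the effluent cannot all be recovered, so some mass goes unaccounted and the stated efficiency carries that uncertainty. All masses below are hypothetical.

```python
# Sketch of a modified mass balance for a lab removal test.
# All masses are hypothetical illustration values.

def mass_balance_efficiency(injected_kg, captured_kg, effluent_sampled_kg):
    """Removal efficiency from captured mass, plus the mass that could
    not be recovered from either the unit or the effluent samples."""
    unaccounted = injected_kg - captured_kg - effluent_sampled_kg
    efficiency = 100.0 * captured_kg / injected_kg
    return efficiency, unaccounted

eff, missing = mass_balance_efficiency(
    injected_kg=100.0,         # sediment fed into the unit
    captured_kg=79.0,          # recovered from the sump after the run
    effluent_sampled_kg=18.5,  # estimated from effluent sampling
)
print(f"{eff:.1f}% removal, {missing:.1f} kg unaccounted")
```

The unaccounted mass is the crux of Mailloux’s point: a reported 79.08% is only as meaningful as the lab’s ability to close the balance, and repeating the test can shift the average.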
“Using a design tool to verify something is not a good way to do it. Verification has to be done with experimentation that corroborates and statistically validates the model.”

Kirsch points out that CFD also isn’t adept at modeling fine particle sizes. And the inability to quantify scouring through CFD “led us toward full-scale testing where you would be looking at both performance of the device and some kind of scour routine,” he says. “Manufacturers can still use CFD in their design if it helps them size and analyze the devices,” Kirsch says. “There are certain things you can modify in the CFD: You can adjust your flow and the size of the cells the model is simulating and in some cases get a desired answer.”

Modeling to Predict Performance
Mohseni says all devices can perform well if sized correctly. “In order to size them correctly, we have to recap the annual performance of these devices,” he says. “They may not do a good job at very high flows, but it is important to make sure that on other flows–because we get floods of different frequencies and magnitudes throughout the year–they have to be able to remove quite a bit of sediment. Then we look at how much suspended sediment will be supplied throughout the year by all of the floods and, overall, what percentage these devices can remove.”

With proper knowledge of regional particle size distributions and land use, “we can do any kind of stormwater modeling and see how well any of these devices can remove sediments,” he says. Several models can be used to estimate runoff volumes and pollutant loads, Kirsch says, including SLAMM, P-8, and the USEPA’s SWMM (Storm Water Management Model), a dynamic rainfall-runoff simulation model used for single-event or long-term (continuous) simulation of runoff quantity and quality from primarily urban areas.

“Under the proposed approach, full-scale lab tests would be used to create a flow-versus-treatment efficiency curve that would then be inserted into the model,” he says. “The device’s overall treatment would then be calculated based on what is treated through the device and what is bypassed.”
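Since labs test at only a handful of flows (St. Anthony Falls, for example, tests four), applying a lab-derived flow-versus-efficiency curve to model-generated flows implies interpolating between tested points. The lab points below are hypothetical, and linear interpolation is one simple choice, not the standard’s prescription.

```python
# Sketch of turning discrete lab results into a flow-vs-efficiency curve
# usable by a continuous model. Lab points are hypothetical; linear
# interpolation between tested flows is an assumed, simple choice.

# (flow in cfs, measured TSS removal fraction) from lab runs
LAB_POINTS = [(0.5, 0.85), (1.0, 0.75), (2.0, 0.55), (4.0, 0.30)]

def efficiency_at(flow_cfs):
    """Linearly interpolate removal efficiency between tested flows;
    clamp to the end values outside the tested range."""
    pts = sorted(LAB_POINTS)
    if flow_cfs <= pts[0][0]:
        return pts[0][1]
    if flow_cfs >= pts[-1][0]:
        return pts[-1][1]
    for (f0, e0), (f1, e1) in zip(pts, pts[1:]):
        if f0 <= flow_cfs <= f1:
            return e0 + (e1 - e0) * (flow_cfs - f0) / (f1 - f0)

print(efficiency_at(1.5))  # halfway between the 1.0- and 2.0-cfs tests
```

A model such as SLAMM, P-8, or SWMM would then call a curve like this for each simulated flow, summing treated and bypassed loads to get the device’s overall annual treatment.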
Lenhart cautions that any model needs to be validated and verified through actual collected data that are statistically matched to the output of the model through laboratory or field testing. Laboratory testing enables one to control all variables, isolate one, and then test a hypothesis by comparing the model results and the device’s laboratory results.