Why officials should consider machine learning live modeling to prepare for flooding  

Dec. 8, 2023
For short-term flood forecasting in real time, officials are turning to machine learning live modeling.

According to the National Oceanic and Atmospheric Administration (NOAA), flooding claims more lives annually than tornadoes and hurricanes combined and costs $4.7 billion on average per event. What makes matters worse is that the frequency of flooding is increasing, which could push the NOAA-estimated $165 billion cost of weather and climate disasters in 2022 even higher in coming years.

 The U.S. Environmental Protection Agency (EPA) suggests in its Climate Change Indicators: Coastal Flooding that “the average number of flood events per year has progressively accelerated across decades since 1950.” The same report indicates that the locations experiencing the highest rate of increase in flood events are along the Gulf and East coasts. These findings make one thing clear: people and properties along the coast are at risk of facing graver consequences than ever before as more and more floods occur. However, officials can decrease that risk by implementing live predictive models that deliver near-real-time, short-term flood forecasts to help them prepare and respond.  

Until recently, local governments have had to rely on traditional hydrologic and hydraulic (H&H) modeling to produce flood forecasts based on statistically derived storms (e.g., 24-hour or 100-year design storms). These design storms have become so outdated that they no longer reflect actual risk. In fact, the First Street Foundation suggests that the 1-in-100 flood event is now expected to happen every eight years. Still, the results from these models have been the main data source used to develop emergency response plans. Traditional H&H models can offer extensive detail on how flooding will affect a general area and predict conditions at road crossings, houses and selected locations in the community. While useful for broad planning and capital improvement projects, in-depth flood forecasting with a traditional model is often impractical in a near-real-time scenario because of the model's extensive run times. A model that takes four or more hours to predict what will happen in two hours simply is not useful to officials.

 “These days, we want a lot of details from H&H live models,” said Steve Godfrey, modeling team leader at Woolpert. “However, that takes computation time to produce. It’s possible to get models done in 15 or 30 minutes for small areas, but some large, detailed models can take hours if not days to run.”  

To provide faster answers, researchers have turned to interpolation. Before a storm is even on the horizon, flood modeling engineers run models for many different design storms. Then, when a storm arrives in the forecast and officials want to know what, if any, flooding will occur, the engineers estimate an outcome by interpolating between the two pre-run storms that most closely bracket the forecast. This estimation is far from ideal, since real storm events rarely behave like probabilistic simulations: actual rainfall does not follow the smooth distribution curves of the design storms used for modeling. Consequently, the answer officials get is often inaccurate, which is why they usually prepare their communities for worst-case scenarios for nearly every predicted flood event (as it is understandably better to overprepare than underprepare).
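For readers who want to see the mechanics, the interpolation approach can be sketched in a few lines of Python. The rainfall totals, depths and crossing below are hypothetical illustrations, not results from any actual H&H model:

```python
# Illustrative sketch of interpolating between pre-run design-storm
# results to estimate peak flood depth at one site. All numbers are
# made up for demonstration.

def interpolate_peak_depth(forecast_rain_in, prerun_results):
    """Estimate peak depth by interpolating between the two pre-run
    storms whose rainfall totals bracket the forecast."""
    storms = sorted(prerun_results.items())  # (rainfall_in, peak_depth_ft)
    for (lo_rain, lo_depth), (hi_rain, hi_depth) in zip(storms, storms[1:]):
        if lo_rain <= forecast_rain_in <= hi_rain:
            frac = (forecast_rain_in - lo_rain) / (hi_rain - lo_rain)
            return lo_depth + frac * (hi_depth - lo_depth)
    raise ValueError("forecast outside the range of pre-run storms")

# Pre-run model results: 24-hour rainfall total (in) -> peak depth (ft)
prerun = {2.0: 0.5, 4.0: 1.8, 6.0: 3.2}
print(interpolate_peak_depth(5.0, prerun))  # halfway between 1.8 and 3.2
```

The weakness the article describes is visible here: the estimate assumes flood depth varies smoothly between the two bracketing storms, which a real storm's uneven rainfall pattern rarely honors.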

  Fortunately, the traditional way of doing things is no longer the only option. If officials want an accurate, short-term flood forecast that’s available in near-real time, they can choose a different method—machine learning live modeling.  

What are near-real-time machine learning-based forecasts?  

 Machine learning live modeling generates accurate flood forecasts within seconds. The level of detail differs from what H&H live models deliver; machine learning live modeling provides local governments with predictions about a specific site, not the entire system.  

 “Machine learning live modeling is generally done at a site-specific location,” Godfrey said. “If you want a quick answer about one road crossing, you’ll have to create a model for that one road. If you want to know an answer about another road, you’ll have to build a different model for that road. At that second location, you can do some interpolation between the first and second points, but for the best results, you need to have models for every desired location. Machine learning models run in seconds, though, so they’re very quick.”  

Speed benefits officials in countless ways, especially since these natural disasters are increasing in frequency and severity. There is rarely enough time between storms for officials to run detailed H&H live models to prepare their emergency response teams and residents, and consistently overpreparing for worst-case scenarios can exhaust officials and their communities. That is why a machine learning live model is a welcome advancement: it can deliver precise, short-term flood forecasts in near-real time so that officials know what is coming and can adequately prepare. The only thing officials need to consider when seeking to use machine learning live modeling is whether they are set up to get the insights the technology offers.

 Running a machine learning live model is challenging, primarily because officials need multiple things to bring the technology to life: data, databases, data processing infrastructure, and an online dashboard to showcase results. Woolpert recently developed the technology for local governments to implement machine learning live modeling—but for it to work, officials must help provide the first piece of the puzzle: data.   

Laying a data-driven foundation   

One of the critical components of machine learning live modeling is observation data. Local governments must determine the specific locations for which they want to collect data. At each location, Woolpert places two sensors: a rain gauge and a stream gauge. The former gathers rainfall data; the latter gathers water surface elevation and, potentially, velocity data at streams. Both data types fall into the larger category of observation data. Once the sensors are in place, the devices need to collect enough observation data to adequately “teach” the algorithm to generate the live model and provide accurate, short-term flood forecasts. As the gauges collect data, Woolpert stores it in a database.

“At five-minute intervals, the database updates and stores all the observation data in one place,” said Arash Karimzadeh, a water engineer at Woolpert. “We can call on that server and extract the observation data for the machine learning live model.”
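A minimal sketch of that kind of five-minute observation store follows. The table layout, site name and readings are illustrative assumptions, not Woolpert's production schema:

```python
import sqlite3

# Toy five-minute observation store; schema and values are invented
# for illustration, not drawn from any production database.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE observations (
    site_id TEXT, observed_at TEXT,
    rainfall_in REAL, stage_ft REAL)""")

# One polling cycle: each gauge reports its latest five-minute reading.
readings = [
    ("crossing_17", "2023-12-08T14:00:00", 0.04, 2.31),
    ("crossing_17", "2023-12-08T14:05:00", 0.06, 2.38),
]
conn.executemany("INSERT INTO observations VALUES (?, ?, ?, ?)", readings)

# The live model later pulls a site's recent history in one query.
rows = conn.execute(
    "SELECT observed_at, rainfall_in, stage_ft FROM observations "
    "WHERE site_id = ? ORDER BY observed_at", ("crossing_17",)).fetchall()
print(rows)
```

Keeping every gauge's readings in one queryable place is what lets the model "call on that server" for a site's full rainfall and stage history in a single step.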

 Along with observation data, the machine learning live model needs precipitation forecasts to predict flood events properly.  

“The precipitation forecast is radar data, and it primarily comes from NOAA,” Gil Inouye, PE, ASCE, NSPE, associate and engineer at Woolpert, said. “It’s similar to when you watch the news and see the weather. They show the weather forecast and then they show radar images so that you see the storm moving across your area. That’s the data we are downloading from NOAA and storing in a database so that we not only have observation data but also the predicted rainfall that will come. Also, those radar images are spatial and temporal, meaning they don’t simply indicate that a specific site will get an inch of rain. Instead, they indicate that a specific site will get an inch of rain in the first hour and another half inch of rain in the second hour. The radar data is spatially distributed and temporally distributed.”  
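The spatial-and-temporal point can be illustrated with a toy data structure. The site names and rainfall amounts here are hypothetical, not actual NOAA radar output:

```python
# Illustrative structure for a spatially and temporally distributed
# rainfall forecast: each site gets its own hour-by-hour increments.
forecast = {
    "crossing_17": [1.0, 0.5, 0.0],    # inches in hours 1, 2, 3
    "bridge_04":   [0.25, 0.5, 0.25],
}

# A storm total alone ("an inch of rain") loses the timing; summing
# the hourly series recovers the total when it is needed.
totals = {site: sum(hours) for site, hours in forecast.items()}
print(totals)
```

The hourly breakdown matters because, as Inouye notes, an inch of rain in the first hour followed by a half inch in the second produces a different flood response than the same total spread evenly.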

 

A glimpse into the next seven days  

 As multiple databases pull data from sensors and NOAA sources, Woolpert’s data processing infrastructure extracts, restructures and stores the data in the cloud. From there, the data is used to train multiple machine learning algorithms to generate a model that’s best at predicting accurate, short-term flood forecasts. Understanding how this works begins with knowing that machine learning is a subset of artificial intelligence. Flood modeling engineers can train the machine to learn from observations as well as understand patterns and correlations between parameters. After that training, the machine—with little human supervision—can indicate how much flooding will happen after a certain amount of rain falls.  

 Karimzadeh explained this process using a research study performed for a Woolpert client interested in machine learning live modeling. 

“I used the client’s 10-year observation data showing the amount of rain received and the water surface elevation associated with that rain,” Karimzadeh said. “Then, I used that data to train multiple machine learning algorithms. I used 10-13 different machine learning algorithms because each one has its pros and cons. From there, I used different prediction settings to finish building the machine learning model—that just means I specified the characteristics of the prediction. After training the machine learning algorithms, one of them proved to predict with the highest accuracy, so I chose that one to help the client get a near-real-time flood forecast.”  
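Karimzadeh's selection step can be sketched with two stand-in "algorithms" trained on synthetic data. The real study trained 10-13 algorithms on a decade of gauge observations, so treat this only as an outline of the idea:

```python
import random

# Sketch of the selection step: fit several candidate models on
# rainfall -> water-surface-elevation pairs, score each on held-out
# data, and keep the most accurate. Data are synthetic.
random.seed(42)
rain = [random.uniform(0, 6) for _ in range(200)]             # inches
stage = [1.0 + 0.6 * r + random.gauss(0, 0.1) for r in rain]  # feet

train_r, train_s = rain[:150], stage[:150]
test_r, test_s = rain[150:], stage[150:]

def fit_mean(xs, ys):                  # baseline: always predict the mean
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_linear(xs, ys):                # ordinary least squares line
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return lambda x: a + b * x

def mse(model, xs, ys):                # mean squared error on holdout data
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

candidates = {"mean_baseline": fit_mean, "linear": fit_linear}
scores = {name: mse(fit(train_r, train_s), test_r, test_s)
          for name, fit in candidates.items()}
best = min(scores, key=scores.get)
print(best)  # the model with the lowest holdout error wins
```

The design choice Karimzadeh describes is the same one shown here: because each algorithm has its pros and cons, the only way to know which predicts best for a given site is to train them all and compare their holdout accuracy.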

The chosen model, along with hourly updates of rainfall predictions from the NOAA database, delivers a seven-day flood forecast. This short-term forecast is shown on a user-friendly online dashboard accessible to all relevant municipal personnel. On the dashboard, the forecast is displayed as a graph of stream flow depth over the next seven days, with the vertical axis representing flood depth at the site being examined and the horizontal axis showing the date and time. This trend line rises and falls across the seven-day forecast according to the upcoming rainfall predictions. Officials can use this information to anticipate flooding volume and severity over the next seven days.
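The dashboard's trend line can be approximated in miniature: run the chosen model over each hour of forecast rainfall and inspect the resulting depths. The stand-in model and storm timing below are assumptions for illustration only:

```python
# Sketch of turning hourly rainfall predictions into a seven-day
# depth trend line. The function below is a hypothetical stand-in
# for the trained machine learning model.

def predicted_stage_ft(rain_in):
    # Placeholder relationship; the real model is learned from data.
    return 1.0 + 0.6 * rain_in

# Hourly rainfall forecast for the next 7 days (168 hours):
# dry everywhere except a hypothetical storm around hour 30.
hourly_rain = [0.0] * 168
hourly_rain[30:34] = [0.5, 1.2, 0.8, 0.2]

trend = [predicted_stage_ft(r) for r in hourly_rain]
peak_hour = max(range(len(trend)), key=trend.__getitem__)
print(peak_hour)  # the trend line crests at hour 31
```

Plotting `trend` against time reproduces the dashboard's basic picture: a flat baseline that rises and falls with the predicted rainfall, letting officials see not just how high the water will get but when.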

Practical ways to prepare for flood events  

Near-real-time, short-term forecasts make it easier for local governments to prepare for frequent flood events in practical ways. First, officials can allocate their resources better. With traditional modeling, this is not easily achievable: because flood forecasts are less accurate, officials may allocate barricades, pumps and first responders to a road or bridge they believe will flood when flooding never materializes, or fail to allocate resources to an area that does flood. With machine learning live modeling, these scenarios are less likely. Machine learning delivers higher-accuracy predictions of not only flood levels but also the times associated with those levels, enabling officials to allocate their resources to the right places at the right times.

Additionally, near-real-time, short-term flood forecasts enable coordination across departments, including emergency management, public works, resilience, transportation, fire and rescue, and law enforcement. The online dashboard enables all departments to review the same information at the same time. This cohesiveness breeds higher confidence—when officials and their personnel know what will happen and have the same insights for preparation, it assures everyone that they can better protect their community.  

However, officials and personnel are not the only ones who can gain confidence. Local government leaders can also instill greater confidence in their residents by sending public notifications about what will happen in the next 24 or 48 hours instead of broadcasting worst-case scenarios. With this information, residents will know with greater certainty which locations a flood event will affect and whether they are in one of them. Only affected residents will need to take precautions, decreasing the risk of roads clogging as everyone evacuates at once.

Embrace fast, accurate results   

Machine learning live modeling is a tool officials can benefit from as flood events increase. With near-real-time, short-term flood forecasts, local governments can gain quick insights into what is ahead instead of waiting hours or days for H&H modeling to provide results. 

That does not mean the traditional method does not have its place. If the end goal is to get nitty-gritty details on complete system capacities and how a flood will affect an entire area, traditional H&H modeling is still the way to go. It’s just critical to remember that the more detail required, the more time it will take to get results.  

“There’s a huge amount of data needed for hydrologic and hydraulic modeling and a huge amount of effort needed to make a model,” Inouye said. “It’s all very complex, whereas with machine learning, we just need historic observation data and then we can have a model up and running.”  

  

About the Author

Brian Bates | Engineering Program Director and Vice President at Woolpert

Brian T. Bates, PE, is an engineering program director and vice president at Woolpert. He works out of Woolpert’s Columbia, S.C., office. Bates can be reached at [email protected].