In 2017, hurricanes in the USA caused at least $850.5 billion in total damages, at an average of $22.4 billion per event. The frequency of natural disasters and their accompanying economic impact has steadily increased since the 1970s as a result of climate-related change, with disaster relief and infrastructural repair accounting for one of the highest average annual government expenditures around the world - not to mention the devastating cost in human lives and personal tragedy.
For the longest time, we’ve assumed that these are largely unforeseeable events. Systems have been developed over time to provide more efficient evacuation processes, while investment in shelter, infrastructure, and relief has provided some measure of respite. But perhaps a more valuable investment would be in researching and developing more advanced detection measures. Our understanding of what might be around the corner, and our ability to plan for floods, disease, and economic uncertainties, will help to make our societies more robust, healthier, and ultimately happier. So how can we predict the chaotic events that lie before us?
To date, meteorologists have relied on data collection from a range of instruments and methodologies, and leaned on historical patterns to make educated guesses at the outcomes of emerging conditions. Unfortunately, as anyone who watches the news knows, this approach has significant room for error. Forecast models frequently disagree with one another, and meteorologists must often make judgment calls about which, if any, to trust.
A more accurate means of simulating both regional forecasts and broader climate patterns is to use Monte Carlo simulation - that is, the repeated generation of an approximated set of outcomes based on historical data to reach a probability distribution of said outcomes. The more data the simulation can ingest, the more complex - and, in theory, accurate - its predictions can be.
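To make the idea concrete, here is a minimal sketch of a Monte Carlo estimate in Python. The historical mean and standard deviation, the 35°C threshold, and the single-variable Gaussian model are all invented for illustration - a real forecast would sample from far richer observational data.

```python
import random
import statistics

# Hypothetical historical July daily-high temperatures (°C) for one city.
HISTORICAL_MEAN, HISTORICAL_STD = 31.0, 2.5

def simulate_outcomes(n_runs=100_000, threshold=35.0):
    """Draw many plausible outcomes from the historical distribution and
    estimate the probability that the daily high exceeds the threshold."""
    outcomes = [random.gauss(HISTORICAL_MEAN, HISTORICAL_STD)
                for _ in range(n_runs)]
    p_exceed = sum(1 for t in outcomes if t > threshold) / n_runs
    return outcomes, p_exceed

random.seed(42)  # reproducible run
outcomes, p_heatwave = simulate_outcomes()
print(f"P(daily high > 35°C) ≈ {p_heatwave:.3f}")
print(f"simulated mean ≈ {statistics.mean(outcomes):.1f}°C")
```

Running more simulations tightens the estimated distribution, which is exactly why ingesting more data - and having the compute to process it - translates into better predictions.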
For the most accurate weather simulation, one would simulate all the interactions down to the subatomic level, but this is clearly far beyond both our computational capabilities and our ability to measure the inputs. Instead, weather simulations use fluid dynamics and thermodynamics, and the inputs are measured weather conditions at many locations around the globe. In the past several decades, weather simulations have gone from being useless for weather prediction (too slow, too imprecise) to being regularly used for forecasting as computers have gotten faster and inputs have become more exact (more weather measurement stations and increasing ability to measure conditions at multiple points vertically).
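The numerical core of such simulations can be illustrated with a toy example. Real models solve the full fluid-dynamics and thermodynamics equations on a three-dimensional global grid, but the same stepping idea appears in this one-dimensional heat-diffusion sketch (explicit finite differences); the grid values and the diffusion coefficient here are made up.

```python
def diffuse(temps, alpha=0.1, steps=100):
    """Advance a 1-D temperature profile by repeated explicit
    finite-difference steps of the heat equation.
    alpha must stay below 0.5 for numerical stability."""
    t = list(temps)
    for _ in range(steps):
        # Interior points relax toward their neighbours; the two
        # boundary temperatures are held fixed.
        t = [t[0]] + [
            t[i] + alpha * (t[i - 1] - 2 * t[i] + t[i + 1])
            for i in range(1, len(t) - 1)
        ] + [t[-1]]
    return t

# A hot spot in the middle of a cold domain gradually smooths out.
profile = diffuse([0, 0, 0, 10, 0, 0, 0])
```

Scaling this from seven grid points to the billions used by operational weather models is precisely where the computational cost explodes.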
Right now, the limits of our computational capabilities are a massive bottleneck. The proliferation of sensors and the ever-increasing amount of data being generated only further underscores this problem. When you have everything from sensors in farms that provide real-time analytics of the insects on your crops to sensor boxes mounted on moving vehicles, it’s easy for the volume of data to overwhelm even the sturdiest of supercomputers.
This is where distributed computing can make a huge difference. Imagine a democratised platform that can scale these complex algorithms across the cloud and edge; that could allow a meteorologist to extract actionable and timely data without relying on a devops team or having to be a low-level engineer themselves. At Hadean, we’ve built a platform that has provided similar accessibility to scientists working in the field of biomolecular engineering, and we’re constantly striving to level the playing field for other scientific endeavours.
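The pattern being distributed is simple at heart: independent simulation runs fan out to workers and their results are gathered back. The sketch below is a toy stand-in for that idea (it is not Hadean's API) - it uses a Python thread pool on one machine, where a real platform would schedule the same independent members across many machines; the stochastic "forecast" inside each member is invented for illustration.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def run_member(seed):
    """One ensemble member: a trivial stochastic 'forecast'
    standing in for a full simulation run."""
    rng = random.Random(seed)  # independent, reproducible randomness
    return sum(rng.gauss(0, 1) for _ in range(1000))

def run_ensemble(n_members=8):
    """Fan the independent members out to workers and gather results.
    On a distributed platform, each member could land on its own machine."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(run_member, range(n_members)))

results = run_ensemble()
print(f"{len(results)} ensemble members completed")
```

Because the members share no state, the same code shape scales from a laptop to a cluster - which is what makes ensemble forecasting such a natural fit for distributed infrastructure.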
More accurate predictions of natural disasters and expanded warning times to prepare can help save lives and spare billions of dollars per annum, stabilising oft-hit areas and providing developing countries in particular with a significant economic uplift. Improved simulation would provide ancillary benefits for a range of different industries - everything from the energy sector to agriculture, retail, travel, and entertainment. And at a global scale, it would allow humanity to make better and more informed decisions about the future of our planet, something that I’m sure we can all agree would be in our best interests.
HadeanOS is a cloud-first operating system that has been engineered and optimized for performance across massively distributed computing infrastructures. HadeanOS natively understands the dynamic scale and real-time demands of modern applications in the cloud and removes the need for complex operations and engineering.