January 27, 2022 (updated 12 Sep 2023, 9:59am)

Big Tech is coming for the weather

Investments in AI-powered weather forecasts by the likes of Microsoft and Google promise greater accuracy. They could also see a public good move into private ownership.

By Greg Noone

Bad weather threatens the future of a farm in a variety of ways. Rain, of course, is welcome; a prolonged downpour, however, is liable to drown or wash away a newly sown crop. Rapid changes in temperature are also dangerous. Cold snaps easily kill wheat, soybeans and corn, while heatwaves stunt growth. Then there are the less obvious hazards: the high winds that knock over flimsy steel-roofed outbuildings, or the freak lightning that kills livestock in their hundreds every year.

While many of these dangers cannot be avoided by your typical farmer, some can be anticipated by simple attention to the daily weather forecast – up to a point. These predictions, the product of complex physics-based simulations of the Earth’s atmosphere and the expertise of an army of meteorologists, are accurate to the day in plotting the movement of storm fronts and pressure systems over hundreds of miles. What they’re not good at, though, is ‘nowcasting’: predicting hour-by-hour changes in temperature or precipitation over areas measured in single square kilometres.

You don’t need weather models. All you need is your data.
Peeyush Kumar, Microsoft Research

Such forecasts would form a more effective early warning system for farmers than what they have right now – and it now looks like they could obtain one, thanks to a new AI model from Microsoft. Using machine learning and deep learning to parse historical weather data, mainstream forecasts and readings from dozens of IoT sensors, DeepMC is able to predict how the weather will change in a local area over a matter of hours. Tests of the model found that its temperature predictions were accurate up to 90% of the time, and 1,000 people and businesses are already making use of it. Its deployment in so many locations, explains one of its creators, Peeyush Kumar, is testament to how easy the system is to use.

“You don’t need weather models,” says the scientist from Microsoft Research. “All you need is your data. And you put your data into this model and this model can be entirely black box. You know, this can be entirely black box to the level where you’re just pushing on a few knobs to see which one works better.”
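
The pitch, in other words, is nowcasting as plain supervised learning: feed in whatever local data you have and let a black-box model do the rest. The sketch below illustrates that framing with synthetic data and scikit-learn’s gradient-boosted regressor. It is not DeepMC itself, whose internal architecture is not described here; every dataset, shape and parameter is invented purely for illustration.

```python
# A minimal, illustrative sketch of data-driven nowcasting -- not Microsoft's
# DeepMC. It treats the task as supervised regression: given recent on-farm
# sensor readings and the regional forecast, predict the local temperature.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: each row pairs six hours of hourly sensor
# temperatures with the regional forecast for the target hour.
n_samples, history_hours = 5000, 6
sensor_history = rng.normal(15.0, 5.0, size=(n_samples, history_hours))
regional_forecast = sensor_history[:, -1] + rng.normal(0.0, 2.0, n_samples)
# The "truth" drifts from the regional forecast -- the microclimate effect.
local_truth = (regional_forecast + 0.5 * np.sin(sensor_history[:, -1])
               + rng.normal(0.0, 0.5, n_samples))

X = np.column_stack([sensor_history, regional_forecast])
model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(X, local_truth)

# At prediction time the model is a black box: feed in the latest readings.
latest = np.concatenate([sensor_history[0], [regional_forecast[0]]])
print(f"nowcast: {model.predict(latest.reshape(1, -1))[0]:.1f} degrees C")
```

The point of the sketch is the workflow Kumar describes: no atmospheric physics appears anywhere, only local data and a handful of tunable knobs.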

DeepMC isn’t unique. Dozens of models have been released in recent years claiming to crack the problem of ‘nowcasting’ that conventional forecasting has hitherto failed to solve. What has held meteorologists back, explains Andrew Blum, author of The Weather Machine, is a lack of access to the kind of computing power capable of making such predictions. Self-learning models offer a quantum leap in post-processing for the field, allowing it to smash through its historical “day a decade” advance (roughly one additional day of useful forecast lead time gained every ten years) to something that could touch the lives of billions of people around the world. After all, the ability to predict rainfall with precise certainty doesn’t just inform when the washing gets hung on the line, but also when crops are planted, when planes fly and when evacuations are called.
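
‘Post-processing’ here means statistically correcting the raw output of a physics-based model against what was actually observed. Below is a minimal sketch of the classic version of that idea (often called model output statistics) rather than any particular vendor’s system; the numbers are synthetic and the warm bias is invented for illustration.

```python
# A minimal sketch of statistical post-processing ("model output statistics"):
# learn a correction that maps a physics model's raw output onto observations.
# All numbers here are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Pretend archive: raw model temperature forecasts vs. station observations.
raw_forecast = rng.normal(20.0, 6.0, size=(2000, 1))
# The imaginary physics model runs 1.5 degrees warm and slightly amplified.
observed = 0.9 * raw_forecast[:, 0] - 1.5 + rng.normal(0.0, 1.0, 2000)

correction = LinearRegression().fit(raw_forecast, observed)

# Post-process tomorrow's raw forecast before it reaches the user.
tomorrow_raw = np.array([[24.0]])
print(f"raw: {tomorrow_raw[0, 0]:.1f} C -> "
      f"corrected: {correction.predict(tomorrow_raw)[0]:.1f} C")
```

Self-learning models generalise this step, replacing a simple linear correction with networks that can pick up non-linear, location-specific patterns.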

Unsurprisingly, Big Tech has been eager to invest in such solutions, with firms such as Google, Raytheon and IBM all producing their own AI-assisted forecasting models. And yet, while these algorithms could trigger untold efficiencies across innumerable value chains, they could also accelerate a trend toward privatisation within weather forecasting that threatens to balkanise the profession. Since the early 1960s, national meteorological organisations have made a special effort to share data and improvements in forecasting capabilities. As the initiative in collecting data and developing capabilities passes to the private sector, more of both threatens to become proprietary – and to deepen inequalities within the overall system.


Atmospheric sensors forming part of a DeepMC deployment. The Microsoft system is designed to nowcast precise changes in microclimates, promising to grant farmers greater agency in how they manage their holdings. (Photo courtesy of Microsoft)

Stormy weather

Meteorology is hardly a field untouched by automation. “The amazing weather forecasts we have today are not because of machine learning, or AI,” explains Blum. Rather, they’re the result of “the work of atmospheric physicists to model the entire Earth’s atmosphere using equations.”

The first such simulations in the 1980s were crude by today’s standards, held back as they were by limited computing power and relatively thin sensor data. Present-day forecast models, though, can tap into supercomputers orders of magnitude more powerful than anything that came before. Even so, the framework underpinning these models has remained roughly the same. “There’s no self-learning about it,” says Blum. “On the contrary,” he adds, these models are “tuned very much by hand.”

That was still largely the case when the first edition of The Weather Machine was published in 2018. Since then, meteorology has been inundated by AI researchers trying to boost forecasting’s accuracy in space and time. And they’ve been embraced by national weather organisations. “We must use automation to handle the surge of observing platforms,” said Eric Kihn, director of the Center for Coasts, Oceans and Geophysics at the US meteorological agency NOAA, in a recent interview. That priority is fuelling a hiring spree for computer scientists and ML experts at the institution. “Whether inviting commercial and academics to join us, or embedding NOAA scientists with a partner, we’re hoping to harvest knowledge that exists outside of NOAA and embed it with our mission-focused teams.”

That enthusiasm has been matched at the UK’s Met Office. Last year, it collaborated with scientists at Alphabet’s subsidiary DeepMind to devise a model capable of predicting the timing and character of precipitation to within a couple of hours. Predicting rainfall to that level of accuracy is a fiendishly difficult task for conventional forecasting methods. “Between zero and four-ish hours, it takes a little bit of time for the model to stabilise,” explains Suman Ravuri, a scientist at DeepMind. “It also happens to be an area in which, if you’re a meteorologist at the Met Office that’s issuing flood warnings that might happen in the near future, you care about.”

After several months of research, DeepMind and the Met Office devised a deep learning model named DGMR capable of plugging that gap. A form of generative adversarial network (GAN), the system used before-and-after snapshots of radar readouts and other historical sensor inputs to learn the most likely direction and intensity of rainfall up to two hours ahead. Subsequent tests by a team of 58 meteorologists found DGMR to be more useful and accurate than conventional forecasting methods up to 89% of the time.
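
For readers unfamiliar with the mechanics, the sketch below shows the adversarial structure in miniature: a generator maps past radar frames to future ones, while a discriminator learns to tell real sequences from generated ones. It is a toy illustration of the GAN idea using PyTorch, not DGMR itself, which uses separate spatial and temporal discriminators, a regularisation term and far larger networks; all shapes, layers and data here are invented.

```python
# A structural sketch of GAN-based radar nowcasting, loosely in the spirit of
# DGMR but not DeepMind's actual architecture. Grid sizes and layer widths
# are illustrative; the "radar" data is random noise.
import torch
import torch.nn as nn

PAST, FUTURE, H, W = 4, 2, 64, 64  # context frames, predicted frames, grid

# Generator: maps a stack of past radar frames to a stack of future frames.
generator = nn.Sequential(
    nn.Conv2d(PAST, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, FUTURE, kernel_size=3, padding=1),
)

# Discriminator: scores a full sequence (past + future) for realism.
discriminator = nn.Sequential(
    nn.Conv2d(PAST + FUTURE, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
)

bce = nn.BCEWithLogitsLoss()
past = torch.rand(8, PAST, H, W)      # "before" radar snapshots
future = torch.rand(8, FUTURE, H, W)  # observed "after" snapshots

# Adversarial losses for one batch: the discriminator is trained to separate
# real sequences from generated ones; the generator is trained to fool it.
fake = generator(past)
d_loss = (bce(discriminator(torch.cat([past, future], dim=1)), torch.ones(8, 1))
          + bce(discriminator(torch.cat([past, fake.detach()], dim=1)),
                torch.zeros(8, 1)))
g_loss = bce(discriminator(torch.cat([past, fake], dim=1)), torch.ones(8, 1))
print(f"d_loss={d_loss.item():.3f}  g_loss={g_loss.item():.3f}")
```

Trained on archives of real radar, the adversarial pressure is what pushes the generator toward sharp, plausible rainfall fields rather than the blurry averages a plain regression loss tends to produce.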

As a recent investigation by Wired found, however, not all AI systems can beat the traditional one-two punch of physics-based models and the nous of a grizzled meteorologist. Such was the case in predicting waterspouts, spinning columns of air that appear above bodies of water, usually in tropical climates: one recent study concluded that human forecasters could predict them with greater accuracy than their AI counterparts. Research by NOAA also found that meteorologists’ rainfall predictions were 20 to 40% more accurate than those of conventional physics-based models, an ominous finding for AI systems trained on the latter’s outputs.

DGMR also has its limitations. One meteorologist who has researched nowcasting in Brazil recently criticised the model as having parameters unsuited to the climatic conditions of her region. “Many studies that change parameterisations inside the model, they are made in the higher latitudes,” Suzanna Maria Bonnet told Nature’s podcast. “It’s not applied for our tropical region. It changes a lot of the results.”

We’re quick to sing the praises of the possibilities of machine learning but when it comes to modelling the atmosphere, nothing beats traditional physics.
Andrew Blum, author

While Ravuri has stated previously that DGMR still needs work before it can be deployed on a wider scale, he says the problem of adaptation to different countries is eminently solvable with access to new sources of radar data. “I actually got in touch with that researcher on the Nature podcast, and she’s gotten me in touch with another person who might have access to Brazilian radar,” adds Ravuri. “I can’t say whether or not the model will work well, [but] I’m sneakily optimistic.”

Nevertheless, the episode touches on another problem afflicting AI-based weather forecasting: hype. Many press announcements and much of the coverage of AI breakthroughs in nowcasting, explains Blum, simply do not sufficiently acknowledge the innate strengths of local meteorological teams using conventional forecasting methods. “We’re quick to sing the praises of the possibilities of machine learning,” he says, “but when it comes to modelling the atmosphere, nothing beats traditional physics.”

A historical radar animation compared with DGMR’s prediction of the rainfall’s direction of travel. (Photo courtesy of DeepMind)

Private clouds

It was this awareness of its own lack of expertise, explains Ravuri, that prompted DeepMind to reach out to the Met Office in the first place. “Without them, we would have solved a problem that no one cared about,” he says. “The meteorologists, they don’t care what technology is behind XYZ. All they care about is [if] these predictions improve your decision-making.”

In time, these kinds of collaborations may be all to the good. For Blum, though, they’re also part and parcel of a much larger trend within weather forecasting toward privatisation. The past few decades have seen companies such as AccuWeather, Weather Underground and DTN mine climate data and repackage it into tailored forecasts sold to corporate clients and interested individuals. All of these firms provide a valuable service – but, like almost every other type of private organisation, they operate in the interest of shareholders and those willing to pay for their services.

This has always been at odds with the general spirit of weather forecasting shared by national meteorological organisations since the early 1960s. After all, a forecast for the West Coast of the United States doesn’t make much sense if it doesn’t incorporate sensor data on weather fronts in eastern China. Consequently, meteorologists from all over the world have made a special effort to pool their expertise and data through supranational organisations like the World Meteorological Organisation, creating what one of its former directors has described as “the most successful international system yet devised for sustained global cooperation for the common good in science or any other field.”

AccuWeather’s subscription-based forecasts haven’t toppled that system, but the growing collaboration between national weather organisations and far more powerful big tech firms like Microsoft, Google and Amazon might make it harder to hold the former accountable to principles of transparency and the free exchange of data. The proliferation of AI-based forecasting models could be the tip of the spear in that regard.

For his part, Kumar remains sceptical. The tradition of global cooperation and transparency in forecasting is more than matched in AI research, he explains. As a result, while there are cases where companies jealously guard their algorithms from public scrutiny, “it’s hard to hold IPs, or even protections, around specific models.”

The same cannot so easily be said about the nuts and bolts of forecasting. Since the 1980s, advances in forecasting have relied on successive generations of supercomputers, each more powerful than the last. Building and maintaining these vast machines, however, has become extremely expensive. And while organisations such as the ECMWF are still investing billions to do exactly that, privately owned cloud platforms maintained by the likes of Amazon and Microsoft have become increasingly attractive alternatives.

How using computing clouds to monitor natural ones will impact the wider profession of forecasting remains unclear to Blum. While the author acknowledges that the likes of AWS, Google and Microsoft Azure provide an important service to millions of customers on a daily basis, using their resources to perform research and analysis functions in forecasting means “the meat of the work is one step further away from the public scientists doing it”, leaving them with “a notch less control than they had once before.” Even if that results in more accurate predictions for everyone from farmers to airport traffic controllers, says Blum, it means putting “yet one more thing in the hands of Amazon and Google.”
