Hyperlocal weather forecasting

A recent Wall Street Journal article describes the evolution of weather forecasting. I hope Fortran will continue to have an important role.

The Future of Weather Forecasting Is Hyperlocal
Researchers and companies are tapping into new sources of data to predict conditions in an area as small as a backyard or a city block
By Thomas E. Weber
June 19, 2025
(archived)

Researchers and companies working on hyperlocal forecasts hope to tap a number of sources beyond traditional satellite and radar data. One is the Internet of Things—the billions of internet-connected devices that gather data, from barometric pressure sensors built into cellphones to webcam video feeds. Across the country, states have been building networks of weather stations called mesonets that can supplement the National Weather Service monitors typically found at airports. LIDAR technology, which can measure wind speed by bouncing lasers off particles in the air, is able to gather information with a small box on the ground instead of a weather tower.

Nearly every kind of weather forecasting relies on a mix of public and private data sources, but in the hyperlocal realm private companies are growing more prominent, in part because business customers have the most to gain from tailored forecasts. Microsoft, for instance, deploys its machine-learning technologies to refine weather predictions, which it then incorporates into broader cloud-computing products it markets to agriculture companies. Tomorrow.io provides hyperlocal forecasts to major airlines to help guide airport operations.


One of the take-home messages from a workshop I attended last week at the UK Met Office is that “traditional” simulations (as opposed to “novel”, machine-learning-powered ones) will still have a big role in providing training data. For weather models this is debatable, since we actually have a lot of observational data, but for ocean models it is true (as of today), because we cannot observe the biggest players in ocean circulation.

I think Fortran will stay relevant as long as these ocean models exist (and new competitors in the vein of Oceananigans.jl do not appear). The drawback is that they are custom builds, each doing only one job. The emergence of a community library for finite-difference calculations on quadrilateral grids would be a revolution.


I am pretty sure the role of process-based simulations (which is what I take “traditional” to mean here) will go beyond providing training data, since anything empirical (all of ML/AI) is fundamentally limited by the available observational data, and it would simply be foolish to drop a family of models that does not share that limitation. Circumventing it is arguably less important for short-term weather prediction than for climate prediction, but the latter is not becoming less relevant anytime soon, and the big numerical weather and climate models are the same at their core. They are not going away, and even new ones like ICON (the flagship weather and climate model of the German Weather Service and MPI) are written in modern Fortran. However, with the dawn of faster GPU supercomputers, I can imagine process-based atmospheric modelling changing in scope, to span process-based modelling on the micro-, meso- and synoptic scales.

Furthermore, while Fortran is more established in process-based atmospheric modelling than in ML/AI-based models, the latter is not unheard of, and there is a lot of potential here. In fact, I will be supervising the development of a new, specialised, GPU-optimised statistics/ML-based weather prediction model in Fortran for parts of the UK (and more is planned/already proposed). :slight_smile:

Bottom line: I do think Fortran will continue to play a big role in climate and weather prediction (as long as it keeps up with supercomputer developments).


Can you elaborate and/or point me in a direction for this? I love the idea of making reusable software.

If I had a penny for every time I’ve taken a Fortran refresher course and found myself alongside people who have to work with ICON, I would have something like $0.12 or $0.13, which is not much but frankly amazing. Jokes aside, ICON will keep Fortran relevant for a long time.

You have expressed perfectly what I think about the subject. However, market trends do not always align with the perspectives of scientists.

Well, I can count at least five ocean models (two versions of ROMS, MITgcm, CROCO, NEMO) following independent development paths that all perform basically the same set of base operations (i.e. finite differences on an Arakawa C-grid to solve some variation of the Boussinesq equations, with halo communication and a split time-stepping scheme). Imagine having a library like xgcm in Fortran, capable of defining the topology of the staggered grid (which here means the interdependence of fields under differential operations) and containing a set of basic but fast finite-difference, averaging and interpolation operations, as sketched below. That would be very practical, would bring together five different communities (i.e. enlarging the user base and thus the maintenance base), and would standardise a lot of things.
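
To make that concrete, here is a minimal, hypothetical sketch of what the 1-D core of such a library could look like. The names (`cgrid_t`, `diff_x`, `avg_x`) and the API are my own invention for illustration, not taken from any existing library; a real version would need the second dimension, halos, masks and metric terms.

```fortran
! Hypothetical sketch of an xgcm-like operator core in Fortran (1-D slice).
! On an Arakawa C-grid, differencing a cell-centre field naturally produces
! values on the faces in between; these operators make that staggering explicit.
module cgrid_ops
  implicit none
  private
  public :: cgrid_t, diff_x, avg_x

  type :: cgrid_t
     integer :: nx = 0                ! number of cell centres
     real, allocatable :: dx(:)       ! cell widths at centres, size nx
  end type cgrid_t

contains

  ! Centred difference: centre field (size nx) -> interior faces (size nx-1).
  pure function diff_x(g, phi) result(dphi)
    type(cgrid_t), intent(in) :: g
    real, intent(in)          :: phi(:)
    real                      :: dphi(g%nx - 1)
    integer :: i
    do i = 1, g%nx - 1
       dphi(i) = (phi(i+1) - phi(i)) / (0.5*(g%dx(i) + g%dx(i+1)))
    end do
  end function diff_x

  ! Two-point average onto the same interior faces.
  pure function avg_x(g, phi) result(phim)
    type(cgrid_t), intent(in) :: g
    real, intent(in)          :: phi(:)
    real                      :: phim(g%nx - 1)
    phim = 0.5*(phi(1:g%nx-1) + phi(2:g%nx))
  end function avg_x

end module cgrid_ops
```

Every model in the list above reimplements exactly these kernels; the hard (and interesting) design question is how to attach the staggering information to the fields themselves, which is what the derived-type experiment below is about.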

I’ve been trying to sketch this with derived data types and a jungle of pointers, but I would need to discuss it with someone who has much more pointer expertise than I do.
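
For what it’s worth, here is the kind of layout I would try first, again purely as a sketch with made-up names: rather than fields pointing at each other, each field carries a staggering tag and a single pointer to one shared grid object, which keeps the pointer jungle down to one pointer per field.

```fortran
! Hypothetical sketch: fields know where they live on the staggered grid
! via a position tag plus one pointer to shared grid metadata.
module staggered_fields
  implicit none
  private
  public :: grid_t, field_t, new_field, T_POINT, U_POINT

  integer, parameter :: T_POINT = 1   ! cell centre (tracers)
  integer, parameter :: U_POINT = 2   ! x-face (C-grid velocity point)

  type :: grid_t
     integer :: nx = 0
     real, allocatable :: dx(:)
  end type grid_t

  type :: field_t
     integer :: pos = T_POINT                  ! staggering of the data
     real, allocatable :: data(:)
     type(grid_t), pointer :: grid => null()   ! shared topology, never copied
  end type field_t

contains

  ! Note: the caller's grid needs the TARGET attribute for the pointer
  ! association to remain valid after this function returns.
  function new_field(g, pos) result(f)
    type(grid_t), intent(in), target :: g
    integer, intent(in)              :: pos
    type(field_t)                    :: f
    f%pos  =  pos
    f%grid => g
    if (pos == U_POINT) then
       allocate(f%data(g%nx + 1))   ! C-grid: nx centres, nx+1 x-faces
    else
       allocate(f%data(g%nx))
    end if
    f%data = 0.0
  end function new_field

end module staggered_fields
```

An operator like `diff_x` above then only needs to inspect `f%pos` to decide which staggering its result lives on, which is roughly what xgcm encodes with its axis metadata, and no field ever needs a pointer to another field.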

Well, while the AI hype definitely shows up in academia, the good thing about public sector research is that it’s less susceptible to market trends. :slight_smile:

True that, but less than a month ago a colleague was asked “Why didn’t you include AI in your research project?” at the oral examination for a permanent position. His research project aimed at developing a stochastic model of ocean transport (with Malliavin calculus, a generalisation of the Itô and Stratonovich calculi) for both the ocean and the ocean-atmosphere interaction. So on one side a very theoretical mathematical subject, and on the other something we do not observe very well. And yet: no AI, no funding; no AI, no positions.

As someone who sits on interview panels and editorial boards, I’ve never seen an example of what you describe (though I’m sure it does happen more frequently now), but I see and read a lot of “Justify the use of AI: what does it bring to the table for this problem?”, with rejections if that question cannot be answered, and that’s reassuring given what science is about. Similarly, you see funding opportunities in atmospheric science whose scope descriptions explicitly exclude AI, because it is not useful for the problem the scheme addresses. (There is no question that there is more funding for AI-related projects, though; this usually results in people just adding a “bit of AI” on top of the planned research to fit the funding call.) So the trend has an effect, but it doesn’t take over everything.