Chaos is a name for any order that
produces confusion in our minds.
— George Santayana
Weather forecasting using computer models is known as numerical weather prediction (NWP). It sits at the intersection of three areas of interest -- first, the prediction of events that seem largely unpredictable, in this case, the weather; following from that, non-linear systems and chaos theory; and third, computer modeling on very big, very fast machines.
Much of the first part of this inFAQ is loosely adapted from The Handy Weather Answer Book.
This overview of NWP tries to combine the story of its short history with its science and application. It takes the form of a FAQ, but that's only a convenience -- it's intended to be read straight through. The few pictures on the left are just decoration. The ones on the right are clickable, opening external links in new windows. All the links were last updated in November 2009.
Who formulated the ideas of fronts and air masses?
Because of the restrictions imposed by the warring nations of Europe during World War I, the meteorologists of neutral Norway were cut off from weather information from outside their country. In response, Norway established its own dense network of weather stations. Led by the father and son team of Vilhelm and Jacob Bjerknes, a group of scientists now known as the Bergen School went to work analyzing the resulting data.
From this work they developed the theory of air masses and the weather fronts between them. They studied instabilities on polar fronts (the boundaries between cold polar air masses and warmer tropical air masses) and from this developed the basic theory of mid-latitude storms. These theories were to become the very foundations of modern meteorology.
So what is numerical weather prediction?
Because of their academic backgrounds in the study of fluid dynamics, the Bergen School scientists understood that air is a heat-conducting fluid and as such obeys the fundamental physical laws that constitute the "primitive equations." These are just the equations representing the physical laws that we learned in school -- the laws of conservation of energy and mass, conservation of momentum, the laws of thermodynamics, the hydrostatic equation, and the ideal gas law.
We can construct a three-dimensional grid of the atmosphere and plug into this grid the data that we've gathered on the atmosphere's current state. When we use the equations to describe the behavior of the atmosphere in our grid, we've created a mathematical model. We can then run our model forward in time and solve the equations to predict a future state -- numerical weather prediction. But of course it's not nearly that simple.
In our atmospheric model, many of the equations must be expressed as nonlinear partial differential equations. These cannot be solved exactly by analytical methods; instead, approximate solutions are obtained using advanced numerical methods. Since numerical methods require far more calculation, a usable model with thousands of data-points forecasting over any significant period of time requires a huge amount of computation.
Further, because of the nonlinear nature of the equations, tiny differences in the data that are plugged in to these equations -- that define the initial state of the system -- will yield huge differences in the results. This "sensitive dependence on initial conditions" is the hallmark of any chaotic system. We'll come back to that idea again.
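To make the idea of stepping a gridded model forward concrete, here is a toy example in the same spirit: a single quantity (temperature, say) carried along by a constant wind, governed by the one-dimensional advection equation du/dt = -c du/dx and solved with a first-order upwind finite-difference scheme. The grid, wind speed, and scheme are illustrative choices only, vastly simpler than anything in a real NWP model.

```python
def step(u, c, dx, dt):
    """Advance the field one time step with a first-order upwind scheme."""
    n = len(u)
    # Each point is updated from itself and its upwind (left) neighbor;
    # the domain is periodic, so the field wraps around.
    return [u[i] - c * dt / dx * (u[i] - u[(i - 1) % n]) for i in range(n)]

# A small "warm bubble" on a 20-point periodic grid
u = [1.0 if 5 <= i <= 8 else 0.0 for i in range(20)]
c, dx, dt = 1.0, 1.0, 0.5        # dt chosen so that c*dt/dx <= 1 (stability)

for _ in range(10):              # step the model forward in time
    u = step(u, c, dx, dt)

# The bubble has drifted downstream by about c * (10 * dt) = 5 grid points,
# smeared out slightly by the scheme's numerical diffusion.
```

Even this toy makes the cost argument plain: every forecast step touches every grid point, so refining the grid or lengthening the forecast multiplies the arithmetic.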
Okay. So how's NWP work?
The actual mathematics involved are beyond the scope of this amateur essay. (But see attractor.html to get just a taste.) However, assuming you've already developed the mathematics for your model, the process goes something like this:
1. First, settle on the area to be looked at and define a three-dimensional grid with an appropriate resolution (20 to 200 kilometers on a side, say, and going maybe 10 kilometers up).
2. Then, gather weather readings (temperature, humidity, barometric pressure, wind speed and direction, precipitation, etc.) for each grid point.
3. Run your assimilation scheme so the data you've gathered actually fits the model you've designed.
4. Now, run your model by stepping it forward in time -- a few minutes, say.
5. Go back and repeat step 2 through step 4 again.
6. When you've finally stepped the model forward as far as your forecast outlook (from a day to maybe a week), publish your prediction to the world.
7. And finally, analyze and verify how accurately your model predicted the actual weather. Revise it accordingly.
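The cycle above can be sketched in code. Every function here is a placeholder of my own invention (real data-gathering, assimilation, and model-stepping schemes are large systems in their own right), but the control flow mirrors steps 1 through 6:

```python
def gather_observations(grid):
    # Step 2 (stub): in reality, readings from stations, balloons,
    # ships, radar, and satellites, most of which won't line up
    # neatly with the grid.
    return {point: {"T": 15.0, "p": 1013.0} for point in grid}

def assimilate(state, observations):
    # Step 3 (stub): blend the new observations into the model state.
    state.update(observations)
    return state

def step_model(state, dt):
    # Step 4 (stub): advance the primitive equations by one time step.
    return state

# Step 1: a (tiny) three-dimensional grid
grid = [(i, j, k) for i in range(4) for j in range(4) for k in range(2)]

state = {}
t, dt, outlook = 0.0, 600.0, 86400.0   # seconds: 10-minute steps, 1-day outlook

while t < outlook:                     # steps 2 through 5
    state = assimilate(state, gather_observations(grid))
    state = step_model(state, dt)
    t += dt

print(f"forecast valid {outlook / 3600:.0f} hours ahead")   # step 6
```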
All that produces a numerical prediction. An actual weather forecast takes much more work.
Who first tried NWP?
While the fundamental notions of numerical weather prediction were first stated by Vilhelm Bjerknes as early as 1904, in 1922 Lewis F. Richardson formally proposed that weather could be predicted by solving the "equations of atmospheric motion."
Called by Mandelbrot "a great scientist whose originality mixed with eccentricity," Richardson soon realized that the amount of calculation would be formidable. Quixotically, he proposed a weather prediction center in which a giant circular amphitheater holding some 26,000 accountants with calculators would perform their additions and subtractions as commanded by a sort of conductor.
Richardson's first attempts failed because the method predicted pressure changes far larger than any that had ever been observed. This unexpected result was years later found to be caused by the way he had approximated the solutions to the equations. He had the physics right but the numerical analysis wrong. Richardson's idea -- which, it should be emphasized, was basically sound -- was dismissed and forgotten. (His failure here was a minor setback in a distinguished career, and later the Richardson number was named after him.)
Numerical weather prediction would have to wait for the proper tool.
When were the first practical attempts at NWP?
The electronic computer was conceived in the 1940s, when mathematician John von Neumann developed the prototype of the stored program electronic machine, the forerunner of today's modern computers. He soon turned his interest to NWP and formed a meteorology project in 1946 at Princeton's Institute for Advanced Study. There meteorologist Jule Charney began working on the problem of numerical weather prediction.
After figuring out why Richardson's first attempts 25 years earlier had failed, Charney was able to formulate equations that could be solved on a modern digital computer. The first successful numerical prediction of weather was made in April 1950, using the ENIAC computer at Maryland's Aberdeen Proving Ground. Within several years research groups worldwide were experimenting with "weather by the numbers." The first operational weather predictions, using an IBM 701 computer, were begun in May 1955 in a joint Air Force, Navy, and Weather Bureau project.
What role do computers play now?
Because of the amount of computation required, meteorologists have used the fastest computers available to do their numerical modeling. NWP has advanced greatly in six decades, in large part due to the spectacular growth in speed and capacity of digital computers. To give you a feel for it, one of the first commercial computers used for atmospheric research was the IBM 1620, which could perform about a thousand (10^3) operations per second.
The majority of supercomputers today are designed as computer clusters, using thousands of processors (usually off-the-shelf), programmed to work together in parallel on increasingly complex problems. (Currently, since actual testing of nuclear weapons is no longer allowed, about half of the world's fastest supercomputers are used for simulations for atomic research. Maybe that's a good thing.)
The very fastest of today's massively parallel supercomputers clip along in petaflops (10^15 floating point operations per second). As of October 2012, the likely high-performance computing (HPC) leader is a Cray XK7 called the Titan at Oak Ridge National Laboratory. In addition to 16-core Opteron CPUs, each of its 18,688 nodes uses an NVIDIA Graphic Processing Unit (GPU) Accelerator to help keep down power consumption. Titan is capable of over 20 petaflops and has more than 700 terabytes of memory.
Atmospheric scientists and climate researchers (not just NWP people) use and generate huge amounts of data. The National Center for Atmospheric Research (NCAR) estimated that in 1997 their Mass Storage System maintained computer files totaling 30 terabytes (30x10^12 bytes) of data. By late 2000 data stored had grown to over 200 terabytes; by 2003 that number continued growing exponentially to over a petabyte (10^15 bytes); and by 2008, NCAR's mass storage surpassed 5 petabytes of data, with a net growth rate of 80 to 100 terabytes per month.
In the middle of 2006, Google's largest computer cluster was estimated to have 4 petabytes of random access memory! Its mass storage is far larger. And when fully operational, CERN's Large Hadron Collider is expected to generate 15 petabytes of data each year in particle physics experiments.
But there are inherent limits to numerical weather prediction that even the fastest computers can't overcome.
Is the weather even predictable or is the atmosphere chaotic?
The answer is both. We know that weather forecasters are right only part of the time, and that they often express their predictions as percentage probabilities. So can forecasters actually predict the weather, or are they not doing much more than just playing the odds?
Part of the answer seems easy. If the sun is shining, a pleasing breeze is from the north, and the only clouds in the sky are nice little puffy ones, then even we can predict that the weather for the afternoon will stay nice -- probably. So the weathermen are actually doing their jobs.
But in spite of the predictability of the weather -- at least in the short term -- the atmosphere is in fact chaotic: not in the usual sense of "random, disordered, and unpredictable," but in the technical sense of a deterministic chaotic system, that is, a system that is ordered and predictable, but in such a complex way that its patterns of order are revealed only with new mathematical tools.
Who discovered deterministic chaos?
Well, not too new. The French mathematical genius Poincaré studied the problem of determined but apparently unsolvable dynamic systems a hundred years ago while working on the three-body problem. And the Americans Birkhoff and Smale, in addition to many others world-wide, contributed greatly to the study of dynamic systems.
But its principles were rediscovered in the early 1960s by the meteorologist Edward Lorenz of MIT. While working with a simplified model in fluid dynamics, he solved the same equations twice using what he thought was identical data. But on the second computer run, trying to save time on his very slow machine (this was nearly fifty years ago) and thinking it would make no difference to the outcome, he truncated his data from six to three decimal places. He was surprised to get a totally different solution -- and had the wit to recognize its significance. He had serendipitously rediscovered "sensitive dependence on initial conditions."
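Lorenz's accident can be recreated in miniature. The sketch below integrates his famous three-variable convection system twice -- once from a six-decimal starting state, and once from that state truncated to three decimals -- and watches the trajectories part company. The parameter values are Lorenz's classic sigma=10, rho=28, beta=8/3; the starting state and the crude Euler integration are illustrative shortcuts of mine, not his method.

```python
def lorenz_step(x, y, z, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One crude Euler step of the Lorenz convection equations."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

def run(state, steps=10000, dt=0.002):
    for _ in range(steps):
        state = lorenz_step(*state, dt)
    return state

full = (1.234567, 2.345678, 20.0)
truncated = (1.234, 2.345, 20.0)   # "surely it will make no difference..."

a, b = run(full), run(truncated)
separation = max(abs(p - q) for p, q in zip(a, b))
print(f"separation after 20 time units: {separation:.1f}")
```

The two runs begin less than a thousandth apart and end on entirely different parts of the attractor: sensitive dependence on initial conditions.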
Lorenz went on to elaborate the principles of chaotic systems, and is considered to be the father of this area of study. He is often credited with having coined the term "butterfly effect" -- can the flap of a butterfly's wings in Brazil spawn a tornado in Texas? (But see the note.)
(James Yorke of the University of Maryland is usually credited with having spawned this somewhat misleading new use of the word "chaos.")
What are the characteristics of a chaotic system?
Deterministic chaotic behavior is found in phenomena throughout the natural world -- in dripping faucets and orbiting moons; in reacting chemicals and beating hearts; in spreading epidemics and predator-prey relationships; in financial markets and class grading patterns; and, of course, in the dynamics of the earth's atmosphere.
All these phenomena share common characteristics:
- Sensitive dependence on initial conditions, that is, starting from extremely similar but slightly different initial conditions, any complex system will rapidly move to different states. This behavior has two important implications:
- Exponential amplification of errors: Any mistakes in describing the initial state of a model of a complex system will guarantee completely erroneous results; and
- Unpredictability of long-term behavior: Even extremely accurate starting data will not allow long-term results from a model. The model has to be stepped forward in time just a bit, the resulting data measured and plugged back in, and the model restarted from there.
- Aperiodic: A complex system never, ever repeats itself exactly (although it may be seen to come close).
- Non-random: Although a complex system may at some level contain random elements (Brownian motion, say, or quantum effects), the system's behavior is not random. It is chaotic, in our sense of the word.
- Local instability, but global stability: At the smallest scale the behavior of a complex system is completely unpredictable, while at the large scale the behavior of the system "falls back into itself," that is, restabilizes.
Studying this list, one can see that there are both theoretical and practical constraints on what numerical modeling can do.
Some real-world problems NWP must overcome
There are many hundreds of world-wide weather stations, weather buoys, observations from ships and aircraft, weather balloons and radiosondes, information from doppler radar, plus satellites. In spite of that, some of the data needed to plug into the grid for initializing a computer run of a model is either missing or doesn't fit the model grid either in time or space. The methods developed to deal with this are referred to as data assimilation.
Further, atmospheric processes that happen on scales smaller than the model's grid scale but that significantly affect the atmosphere (such as the large amount of convection that can occur in thunderstorms, cloud formation and the release of latent heat, etc.) must be accounted for. The procedure to do this that is incorporated into the models is called parameterization. As the Meteorological Service of Canada has stated, "Parameterizations can be (and usually are) complex models in their own right."
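A toy "convective adjustment" gives the flavor of what a parameterization does: the grid cannot see individual thunderstorms, so when a column's temperature falls off too steeply with height (an unstable lapse rate), the sub-grid convection is represented by mixing heat upward until the column is stable again. The threshold and mixing rule here are invented for illustration; operational schemes are, as the text says, complex models in their own right.

```python
CRITICAL_LAPSE = 9.8     # K per km, roughly the dry adiabatic lapse rate

def convective_adjustment(temps_k, layer_km=1.0):
    """Relax any super-adiabatic layer pair back toward the critical lapse rate."""
    t = list(temps_k)                 # temperatures at model levels, bottom first
    adjusted = True
    while adjusted:
        adjusted = False
        for i in range(len(t) - 1):
            lapse = (t[i] - t[i + 1]) / layer_km
            if lapse > CRITICAL_LAPSE + 1e-6:     # unstable layer pair
                excess = (lapse - CRITICAL_LAPSE) * layer_km
                t[i] -= excess / 2                # move heat upward while
                t[i + 1] += excess / 2            # conserving the pair's mean
                adjusted = True
    return t

column = [305.0, 290.0, 282.0, 275.0]   # strongly heated surface: unstable
stable = convective_adjustment(column)
```

The adjustment conserves the column's total heat while removing the instability -- the kind of bookkeeping every parameterization must respect.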
How does resolution affect the model?
Now, obviously, using a finer resolution for the model grid will more accurately reflect the actual atmosphere, and, all else being equal, the prediction will more accurately forecast the weather. But the finer the resolution, the more data has to be gathered and the more numbers have to be crunched.
So, in practice, models that cover large areas (like the whole Northern hemisphere) have coarser resolution than those that cover smaller areas (like just the USA, say) and so are not going to be as accurate in the small scale. Further, it's worth noting that models that work with smaller areas can predict only for shorter time periods, since as time passes it's inevitable that weather from outside the model area (and therefore not accounted for in the model) will have influenced the weather inside the model area.
To overcome this limitation, a finer grid for a smaller area of interest can be nested inside a larger, coarser grid. This method is very widely used, but adds its own complications which must be accounted for in the models.
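The cost of that extra resolution can be put in rough numbers. Assuming refinement in the two horizontal directions only, plus a time step that must shrink along with the grid spacing (the CFL stability condition), halving the spacing costs about 2 x 2 x 2 = 8 times the work per forecast; refining the vertical levels as well pushes that toward 16. A back-of-the-envelope sketch:

```python
def relative_cost(dx_km):
    """Work per forecast ~ (points in x) * (points in y) * (time steps)."""
    horizontal = (1.0 / dx_km) ** 2   # grid points per unit area
    time_steps = 1.0 / dx_km          # CFL: dt must shrink in step with dx
    return horizontal * time_steps

# Halving the grid spacing from 20 km to 10 km:
ratio = relative_cost(10.0) / relative_cost(20.0)
print(f"{ratio:.0f}x the computation")    # 8x
```

This scaling is why hemispheric models run coarse and why fine grids get nested only over the areas that need them.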
How can NWP models be categorized?
Many models, or more specifically, the forecast products derived from them, don't fit into a neat scheme. Increasingly, the same models are being used to output many different products. For just two examples, the UKMET Unified Model is used to output predictions from global scale down to just the UK, and NOAA's Global Forecast System also outputs predictions for different scales and time-frames.
But it still can be useful to categorize forecast models by their three main characteristics:
- Forecast area, or the scale of the grid:
- global, but usually one hemisphere,
- regional, North America or Europe, for example,
- or relocatable;
- Resolution, or the size of the grid, from over a hundred kilometers on a side down to just a few kilometers.
- and Outlook, or time-frame:
- short-range, meaning one to two days out,
- or medium-range, going out from three to 10 days.
What models are in common use?
Worldwide, there are a couple of dozen computer forecast models being used. In the United States, while there are over a dozen models currently operational, about half that number are in common use. Note that each model typically outputs a number of different products. This list is by no means exhaustive, but here are some of them:
- The European Centre for Medium-Range Weather Forecasting (ECMWF) itself has half a dozen different models. Here's one current plot. (In the U.S., ECMWF refers to their 40 km grid, medium-range, global product.)
- In the U.S., the UKMET usually refers to the United Kingdom Meteorological Office 40 km, 6 day, global product of their configurable Unified Model (UM), which goes down to 4 km resolution for just the UK itself.
- NOAA's GFS, the Global Forecast System, replaced the AVN in 2002, which itself absorbed the MRF (Medium Range Forecast). The GFS is a 35 to 70 km resolution, medium-range, global model. Because all output from its various products is freely available over the Internet, it is widely used by commercial forecasters.
- The NAM, North American Mesoscale Model, is the former ETA, renamed in 2005. The ETA gets its name from the Eta coordinate system, a mathematical system that takes into account topographical features such as mountains. It uses a nested grid, and with its better resolution the ETA absorbed the otherwise similar NGM (Nested Grid Model). According to NCEP, the ETA has "outperformed all the others in forecasting amounts of precipitation."
- The GEM, Global Environmental Multiscale model, developed by the Meteorological Service of Canada, provides as one of its product families output for North America.
- NOAA's RUC, Rapid Update Cycle, provides very high frequency updates of current conditions and short-range forecasts, primarily for aviation users. It puts out 12 hour forecasts every hour.
- The HIRLAM, HIgh Resolution Limited Area Model, is a highly-configurable, high-resolution (down to 5 km), system of short-range models developed and maintained as a cooperative program by five Scandinavian nations, three Baltic countries, plus Ireland, the Netherlands, and Spain. (See the site for Denmark, for example.) In use since 1985, it's the main or only forecast model used by most of the program members.
- The MM5, Mesoscale Model version 5, is an experimental model developed at Penn State and NCAR and used at various universities. It is globally relocatable, accounts for terrain, allows for variable scaling from global down to a few kilometers, has multiple-nesting capabilities, and more. For an example at regional scale, see atmos.washington.edu/mm5rt/info.html, where it is producing high-resolution (down to 4 km), 48 hour forecasts. It can be run on as little as a single PC running Linux, although with a very coarse grid.
- The Weather Research and Forecasting Model (WRF) is a next-generation mesoscale NWP system "designed to serve both operational forecasting and atmospheric research needs." It is the product of a multi-agency and university collaboration, in use around the country and the world.
How do you get from a computer model to a weather forecast?
More work is needed to help overcome the inherent weaknesses of numerical models and turn model output into an actual forecast. Statistical post-processing develops relations between model predictions and observed weather. The use of Model Output Statistics is one long-standing technique.
These relations are then used to help translate the model output to specific forecast products. For example, two products of dozens produced each day are plots of forecast surface conditions from the NAM model, and 7-day maximum temperature predictions from the GFS.
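The idea behind Model Output Statistics can be shown in miniature: fit a relation between what the model predicted and what was actually observed, then use it to correct future raw model output. The sketch below uses a single predictor and made-up numbers from a model that runs consistently warm; real MOS uses many predictors and long observation archives.

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Archived pairs of (model-forecast temperature, observed temperature), deg C.
# This hypothetical model runs about 2 degrees warm.
model_temps    = [10.0, 14.0, 18.0, 22.0, 26.0]
observed_temps = [ 8.1, 11.9, 16.0, 20.1, 23.9]

slope, intercept = fit_line(model_temps, observed_temps)

raw_forecast = 20.0
corrected = slope * raw_forecast + intercept   # statistically adjusted forecast
```

The regression, trained on past performance, quietly removes the model's warm bias from each new forecast.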
No single run of a model can be useful beyond six or seven days -- remember "sensitive dependence on initial conditions." To produce a long-range forecast, ensemble forecasting is widely used, in which the output from many model runs is statistically combined, smoothing out the inevitable errors of a single model.
The various statistical techniques typically yield probabilities expressed as percentages, which explains why we so often hear the phrase, "Chance of rain..."
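Ensemble forecasting in miniature: run the same model many times from slightly perturbed initial states, then read a probability off the spread of outcomes. The "model" below is a deliberately trivial random walk of my own invention, and the rain threshold is hypothetical; only the ensemble logic is the point.

```python
import random

def toy_model(humidity, steps=50):
    """A stand-in for a real model: a noisy drift in a humidity index."""
    for _ in range(steps):
        humidity += random.gauss(0.2, 1.0)
    return humidity

random.seed(42)
base_humidity = 60.0
members = 100

# Perturb the uncertain initial condition for each ensemble member,
# then run the model forward from each perturbed state.
outcomes = [toy_model(base_humidity + random.gauss(0, 2.0))
            for _ in range(members)]

RAIN_THRESHOLD = 75.0   # hypothetical: "rain" if the index ends above this
chance_of_rain = sum(o > RAIN_THRESHOLD for o in outcomes) / members
print(f"Chance of rain: {chance_of_rain:.0%}")
```

A single run would give one yes-or-no answer; the ensemble's spread turns the model's sensitivity to initial conditions into a usable probability.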
How good are these models and the predictions based on them?
The short answer is, Not bad, and a lot better than forecasting without them. The longer answer is in three parts:
Some of the models are much better at particular things than others. For example, as a 2006 USA Today list of models pointed out, the AVN "tends to perform better than the others in certain situations, such as strong low pressure near the East Coast," and "the ETA has outperformed all the others in forecasting amounts of precipitation." For more on this subject, here's a slide show from NCEP.
The models are getting better and better as they are validated, updated, and replaced -- the "new" MRF replaced the old, was then absorbed by the AVN, which was itself replaced by the GFS. The NGM was replaced by the ETA, now renamed the NAM.
And that's why we'll always need the weatherman -- to interpret and collate the various computer predictions, add his local knowledge, look out the window, and come up with a real forecast.
Some online resources
Unfortunately NOAA does not seem to provide an overview of NWP. However, these Web resources about NWP are available and could be looked at in the order presented:
For a very brief introduction to weather forecasting in general, the University of Illinois at Urbana-Champaign (UIUC) (which does so much for the computing world) has a Dept of Atmospheric Sciences which puts out ww2010.atmos.uiuc.edu/(Gh)/guides/mtr/fcst/home.rxml, "Weather Forecasting, an online meteorology guide"
The ECMWF (European Centre for Medium-Range Weather Forecast) provides a short primer, ecmwf.int/about/overview/fc_by_computer.html, "Forecasting by computer"
Among dozens of others, the Wikipedia Category Numerical_climate_and_weather_models links to en.wikipedia.org/wiki/Numerical_Weather_Prediction which does a competent job, and provides more links.
UCAR (University Corporation for Atmospheric Research) provides the Meteorology Education and Training (MetEd) website for professionals. It makes quite a few technical training resources available either online or for download, including those for NWP, at meted.ucar.edu/topics_nwp.php
As for the models themselves and their output, in addition to the links to individual models found above,
- NOAA provides a PDF overview of its models at products.weather.gov/PDD/NCEPMAF.pdf.
- Forecast maps and documentation from the Environmental Modeling Center of National Weather Service is available at emc.ncep.noaa.gov.
- Current output from NOAA models can be found at nco.ncep.noaa.gov/pmb/nwprod/analysis.
- And weather.unisys.com, Unisys Weather, also provides their plots and maps from half a dozen models.
And for verification and statistics,
The NWS National Verification Program at nws.noaa.gov/mdl/synop/verification.htm gives an indication of how well the models are doing, with more MOS and verification info available from the Statistical Modeling Branch at weather.gov/mdl/synop/index.php.
Note: In The Essence of Chaos, University of Washington Press 1993, Lorenz publishes as Appendix 1 his previously unpublished paper which he had presented to the AAAS meeting in Washington, D.C., in 1972. The paper was titled "Predictability: Does the Flap of a Butterfly's Wings in Brazil Set off a Tornado in Texas?" On page 15 of the book he points out that Phillip Merilees (meeting session convenor of the Global Atmospheric Research Program) suggested the butterfly used in the paper's title.