There is an excellent guest post by Ryan Meyer on May 27, 2010 on my son’s weblog titled
The post is based on their paper
Zachary Pirtle, Ryan Meyer, and Andrew Hamilton, 2010: What does it mean when climate models agree? A case for assessing independence among general circulation models. Environmental Science & Policy, doi:10.1016/j.envsci.2010.04.004
The abstract of their paper reads
“Climate modelers often use agreement among multiple general circulation models (GCMs) as a source of confidence in the accuracy of model projections. However, the significance of model agreement depends on how independent the models are from one another. The climate science literature does not address this. GCMs are independent of, and interdependent on one another, in different ways and degrees. Addressing the issue of model independence is crucial in explaining why agreement between models should boost confidence that their results have basis in reality.”
I want to expand on this issue in this post.
As the Pirtle et al 2010 article discusses, agreement among models is often used to claim robustness in their predictions. However, it is actually quite easy to show that the models are very similar in their construction, and only differ in the details of how they are set up.
I decompose atmospheric models in my book
Pielke, R.A., Sr., 2002: Mesoscale meteorological modeling. 2nd Edition, Academic Press, San Diego, CA, 676 pp.
While the focus is on mesoscale atmospheric models, the set up for the atmospheric component of climate models uses the same framework. These models have:
- a fundamental physics part, which consists of the pressure gradient force, advection, and gravity. There are no tunable constants or functions.
- the remainder, which is parameterized physics (parameterized physics means that even when some of the equations of a physics formulation are used, tunable constants and functions are included that are based on observations and/or more detailed models). These parameterizations are almost always developed using just a subset of actual real-world conditions with one-dimensional (column) representations, yet are then applied in the climate models for all situations! The parameterized physics in the atmospheric model include long- and short-wave radiative fluxes; stratiform clouds and precipitation; deep cumulus clouds and associated precipitation; boundary layer turbulence; land-air interactions; and ocean-air interactions.
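The distinction between these two parts can be illustrated with a minimal sketch. This is purely illustrative Python, not code from any actual GCM; the bulk exchange coefficient `c_h` and all numerical values are assumptions chosen for the example:

```python
# Illustrative contrast between "fundamental physics" and "parameterized
# physics" terms in an atmospheric model (not any specific GCM's code).

RHO = 1.2    # air density, kg/m^3 (illustrative value)
CP = 1004.0  # specific heat of air at constant pressure, J/(kg K)

def pressure_gradient_accel(p_east, p_west, dx, rho=RHO):
    """Fundamental physics: zonal pressure-gradient acceleration,
    -(1/rho) * dp/dx. No tunable constants or functions -- only
    measurable quantities and a finite-difference approximation."""
    return -(p_east - p_west) / (dx * rho)

def sensible_heat_flux(t_surface, t_air, wind_speed, c_h=1.0e-3):
    """Parameterized physics: a bulk formula for surface sensible heat
    flux, H = rho * cp * C_H * U * (T_s - T_a). The exchange coefficient
    C_H is a tunable constant fit to a limited set of field observations."""
    return RHO * CP * c_h * wind_speed * (t_surface - t_air)
```

The key point is that a constant such as `c_h` has no unique value derivable from first principles; each modeling group fits it to its own limited set of observations, which is one concrete way otherwise similarly constructed models come to differ in their parameterized details.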
The other components of the climate system (ocean, land, continental ice) each also have parameterized physics. The ocean component also has the fundamental physics of the pressure gradient force, advection and gravity.
All of the climate models have this framework. They differ in the grid spacing used, the numerical solution techniques, and the details of their parameterized physics. The different results that they achieve are due just to these differences.
These models are also different from numerical weather prediction (NWP) models. NWP models are initialized with observed data, and their predictions for different time periods into the future are compared with the observations at those times in order to evaluate prediction skill. The multi-decadal climate models, however, are not started from such observed real world initial conditions. Only recently has it been proposed to run climate models in this manner; the so-called “seamless prediction” approach – see the post on this
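The NWP-style evaluation of prediction skill described above can be sketched as follows. This is a hedged illustration of one common skill measure (a mean-squared-error skill score against a reference forecast), not any operational verification system, and the sample numbers are made up:

```python
def mean_squared_error(predicted, observed):
    """Average squared difference between a forecast and the observations."""
    return sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(observed)

def skill_score(forecast, reference, observed):
    """Skill relative to a reference forecast (e.g., climatology or
    persistence): 1 is a perfect forecast, 0 means no better than the
    reference, and negative values mean worse than the reference."""
    return 1.0 - mean_squared_error(forecast, observed) / mean_squared_error(reference, observed)

# Made-up verification example: three observed temperatures (deg C),
# an NWP forecast, and a climatology reference.
obs = [10.0, 12.0, 11.0]
nwp = [10.5, 11.5, 11.0]
climatology = [11.0, 11.0, 11.0]
print(skill_score(nwp, climatology, obs))  # approx. 0.75: better than climatology
```

For a multi-decadal climate prediction, the `obs` series needed by such a calculation will not exist until the decades in question have passed, which is exactly why this kind of skill assessment cannot yet be applied to those models.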
There is further discussion of what the multi-decadal climate models involve in my post
In this post, there is an interesting statement by one of the lead authors on the WG1 IPCC report (David A. Randall) in a 1997 Bulletin of the American Meteorological Society paper:
“Measurements, Models, and Hypotheses in the Atmospheric Sciences” by David A. Randall, and Bruce A. Wielicki.
The abstract of the paper states,
“Measurements in atmospheric science sometimes determine universal functions, but more commonly data are collected in the form of case studies. Models are conceptual constructs that can be used to make predictions about the outcomes of measurements. Hypotheses can be expressed in terms of model results, and the best use of measurements is to falsify such hypotheses. Tuning of models should be avoided because it interferes with falsification. Comparison of models with data would be easier if the minimum data requirements for testing some types of models could be standardized.”
Roy Spencer also has a post on this topic titled
Among his insightful comments, he writes
“Where the IPCC has departed from science is that they have become advocates for one particular set of hypotheses, and have become militant fighters against all others.”
What this means is that multi-decadal climate model predictions are just hypotheses. They can only be falsified by comparison with observed data which, of course, will not be available until these decades have passed. Thus they are not only constructed with similar frameworks, but their predictive skill cannot yet even be tested.
Thus the real question regarding the climate models is how different they are in terms of what they really are - scientific hypotheses. A start to answering this question is i) to determine what climate forcings and feedbacks they leave out, and ii) to compare between each model the specific details of (a) the fundamental physics part and (b) the parameterizations of each physical, chemical and biological component of the models.
What is already known, of course, is that all of the climate models have similar formulations and only differ in the details of their numerical solution schemes, the grid meshes used, and their parameterizations.