A new article
Edwards, Paul N., 2011: History of climate modeling. WIREs Climate Change, Volume 2, January/February 2011, John Wiley & Sons, Ltd., 128–139. DOI: 10.1002/wcc.95
with the abstract [highlight added]
“The history of climate modeling begins with conceptual models, followed in the 19th century by mathematical models of energy balance and radiative transfer, as well as simple analog models. Since the 1950s, the principal tools of climate science have been computer simulation models of the global general circulation. From the 1990s to the present, a trend toward increasingly comprehensive coupled models of the entire climate system has dominated the field. Climate model evaluation and intercomparison is changing modeling into a more standardized, modular process, presenting the potential for unifying research and operational aspects of climate science.”
provides clear evidence of the current [in my view scientifically invalid] approach to climate science. An excerpt from his paper reads
“Climate models—theory-based representations of average atmospheric flows and processes—are the fundamental tools of modern climate science……Since the 1960s, GCMs—computer simulations of atmospheric flows and processes over long periods— have come to dominate climate science, although simpler models remain important both in their own right and as checks on sub-models included in GCMs.”
While I agree that climate models are an invaluable tool for data analysis (i.e. reanalyses), the assessment of climate processes (i.e. process studies), and the determination of predictability, the fundamental tools of modern climate science must be observed real-world data. Models are only hypotheses!
Indeed, Edwards implicitly recognizes that these climate models have a serious unresolved issue when he writes [highlight added]
“These trends have brought many disciplines together to seek realistic, potentially predictive models of climate change.”
“[P]otentially predictive” means that their skill has not yet been shown when compared with observations!
I discussed the appropriate use of models when I started my weblog in 2005; e.g. see
where I wrote
“There are three types of applications of these models: for process studies, for diagnosis and for forecasting.
Process studies: The application of climate models to improve our understanding of how the system works is a valuable application of these tools. In an essay, I used the term sensitivity study to characterize a process study. In a sensitivity study, a subset of the forcings and/or feedback of the climate system may be perturbed to examine its response. The model of the climate system might be incomplete and not include each of the important feedbacks and forcings.
Diagnosis: The application of climate models, in which observed data is assimilated into the model, to produce an observational analysis that is consistent with our best understanding of the climate system as represented by the manner in which the fundamental concepts and parameterizations are represented. Although not yet applied to climate models, this procedure is used for weather reanalyses (see the NCEP/NCAR 40-Year Reanalysis Project).
Forecasting: The application of climate models to predict the future state of the climate system. Forecasts can be made from a single realization, or from an ensemble of forecasts which are produced by slightly perturbing the initial conditions and/or other aspects of the model…..”
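The distinction between a process (sensitivity) study and an ensemble forecast in the quoted passage can be sketched with a toy zero-dimensional energy balance model. This is a minimal illustration only; the model form, parameter values, and function names are my own assumptions and come neither from Edwards' paper nor from the quoted post.

```python
# Toy zero-dimensional energy balance model: a sketch of a "sensitivity
# study" (perturb one forcing, examine the response) and an "ensemble"
# (perturb the initial conditions). All numbers are illustrative assumptions.
SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
EMISSIVITY = 0.61  # crude stand-in for the greenhouse effect

def equilibrium_temperature(albedo):
    """Equilibrium surface temperature (K) of the zero-D model."""
    absorbed = S0 * (1.0 - albedo) / 4.0
    return (absorbed / (EMISSIVITY * SIGMA)) ** 0.25

# Sensitivity (process) study: perturb the planetary albedo by +0.01
# and examine the response of the modeled temperature.
base = equilibrium_temperature(albedo=0.30)       # ~288 K
perturbed = equilibrium_temperature(albedo=0.31)  # slightly cooler
print(f"baseline  T = {base:.1f} K")
print(f"perturbed T = {perturbed:.1f} K")

# Ensemble "forecast": step the model forward from slightly perturbed
# initial temperatures and collect the resulting states.
def integrate(T0, albedo=0.30, years=50, heat_capacity=2.0e8):
    dt = 86400.0 * 365.0  # one-year Euler steps (coarse, illustrative)
    T = T0
    for _ in range(years):
        net_flux = S0 * (1.0 - albedo) / 4.0 - EMISSIVITY * SIGMA * T ** 4
        T += dt * net_flux / heat_capacity
    return T

ensemble = [integrate(288.0 + dT) for dT in (-0.5, 0.0, 0.5)]
```

In this simple model the ensemble members relax toward the same equilibrium; in a full GCM, by contrast, small initial-condition perturbations grow, which is exactly why ensemble spread is used to assess predictability.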
The funding of multi-decadal climate predictions by the NSF and other agencies (e.g. see) is an example of the current flawed approach that Edwards reports on in his paper. NSF and these other agencies are pouring research funding into an approach that does not follow the robust scientific method (e.g. see).