A Revolution in Climate Prediction? by Hendrik Tennekes
The World Climate Research Programme (WCRP), a programme run by the World Meteorological Organization (WMO), organized the World Modelling Summit for Climate Prediction at the European Centre for Medium-Range Weather Forecasts (ECMWF) in May 2008. The meeting produced a curious document entitled “The Climate Prediction Project,” which was posted on the WCRP website.
I don’t know what to make of this text. Is it a proposal? Is it a call to arms? Is it a trial balloon floated by computer modelers? Is it an attempt by WCRP brass to test the waters for an international facility of unprecedented size? Will the Secretary-General of WMO take this ball and run with it? Will any government be willing to stick its neck out? Did anyone in the circuit that produced and disseminated this text contemplate the ways in which a document of this type may backfire? Did anyone conceive of the complexity of the negotiations that would be needed?
I happen to know of an earlier trial balloon of the same type. The European contingent of CLIVAR, the climate variability subgroup of WCRP, launched a similar proposal in 1998. This group proposed to the European Commission that a European Climate Computing Facility be established. I quote:
“Reliable regional climate change predictions cannot be achieved without enhanced European collaboration and substantial increases in computing resources. These are needed so that multi-century simulations can be made with sufficient complexity that important climatic features, physical processes and regional details are resolved. In addition, ensembles of integrations must be made to estimate the impact on climate predictions of uncertainties in initial conditions and model formulation. The computational requirements for such simulations cannot be met from purely national resources. It is therefore strongly recommended that a European Climate Computing Facility be established.”
To me, it is evident that this proposal was doomed from the start. A European computing facility for weather forecasting, ECMWF, had already been established in 1975. The board of directors of ECMWF consists of the Directors of the National Weather Services in Europe. They must have interpreted the 1998 CLIVAR proposal either as a hare-brained attempt to greatly expand both the size and the core tasks of ECMWF, or as an attempt to create a second European facility with an even larger budget, one that would strain their resources even more. I assume they were not enthused by the idea that the ECMWF staff was evidently conducting climate research on the sly. And they must have been quite annoyed that their own scientists had colluded with those at ECMWF without thorough in-house discussions. It is not hard to imagine how they would have responded to a phone call from a bureaucrat at the European Commission in Brussels. Any proposal floated without explicit support from on high deserves to be shot down without compunction. It was.
The current trial balloon (if that is what it is) is of yet grander scale. Europe is too small for the aspirations of computer modelers. The WCRP crowd apparently dreams of multi-petaflop computing and of a facility substantially bigger than those of high-energy physics. So it invented language meant to impress diplomats and politicians. I quote from the first paragraph of the document:
“The development of reliable science-based adaptation and mitigation strategies will only be possible through a revolution in regional climate prediction.” Really? Do the physical sciences have a monopoly on the truth, much as religion used to have? Why should any strategy for dealing with climate change rely primarily on the physics of atmosphere and ocean, not on biology, psychology, sociology, land-use and water management, or even intergovernmental negotiations or raw politics?
The idea that climate policy should be “science-based” was promoted by the Intergovernmental Panel on Climate Change (IPCC) from the start of its work some twenty years ago. The kind of knowledge obtained by the physical sciences was taken to be much more reliable than all other kinds of knowledge. The proponents of this viewpoint deliberately organized the IPCC process such that Working Group I had to provide the “Scientific Basis” for climate policy. The so-called “Human Dimensions” of global warming received some lip service, but were otherwise substantially ignored.
This strategy has backfired and will continue to backfire. The “science-based” work of IPCC has been the underpinning of the Kyoto Protocol, but the chances for agreement on a successor to Kyoto are now slimmer than ever. Diplomats realize that Global Warming has been stalling since 1998, and that IPCC appears incapable of providing a convincing explanation. The negotiators merely have to listen to daily weather forecasts in order to realize that there has been no substantial progress in weather prediction since the inception of IPCC, let alone a revolution. Weather and climate are linked in the prevailing methodology, which relies exclusively on General Circulation Models (modelers call this “seamless prediction”). However, the spread in the forecasts for fifty years ahead is as large as it was twenty years ago. On top of that, no information of any kind has been generated on the effective prediction horizon of climate forecasts, on the causes of the evident regional failures of climate models, or on methods by which the reliability of climate runs can be assessed. The “revolution in regional climate predictions” promised in the WCRP document must be considered wishful thinking.
The hoped-for revolution in climate modeling can be achieved only if the Grand Strategy of climate research no longer focuses on improving predictive skills as such, but on the development of experimental and theoretical tools for the scientific assessment of predictive skills. The assumption that a massive escalation of computer power will substantially expand the prediction horizon, or the understanding of predictive skills, is not supported by any evidence I am aware of. Ensemble forecasting is the current way of obtaining a crude, provisional idea of the reliability of model runs, but climate modelers are now toying with ideas labeled “stochastic-dynamic forecasting.” That sounds promising, but it also reeks of the customary ignorance of physicists concerning the nature of the problems of turbulence theory. All earlier attempts by physicists to crack the statistical problems of turbulence, including famous ones such as Werner Heisenberg’s, have failed. A notorious example is Robert Kraichnan’s Direct-Interaction Approximation (DIA), launched in 1958, when I studied at the Johns Hopkins University. In 1987, at a Summer School at NCAR, Kraichnan finally admitted that DIA and all of its descendants were to be seen as efficient computer codes that had contributed nothing to the understanding of the dynamics of turbulence. I conclude that the chances of success for stochastic-dynamic prediction methods are very slim indeed. Is this a solid scientific basis for a billion-dollar facility? And is this the way to go if the primary goal is not to improve predictive skills, but to finally understand why and how predictive skills are limited? I submit it is not.
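The ensemble idea mentioned above can be illustrated in a few lines: run the same model many times from slightly perturbed initial conditions and watch how fast the members diverge. A minimal sketch, using the toy Lorenz-63 system as a stand-in for a climate model (all parameter values and function names here are illustrative, not taken from any operational forecasting system):

```python
import math
import random

def lorenz63_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the chaotic Lorenz-63 toy system."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def ensemble_spread(n_members=20, n_steps=500, perturbation=1e-3, seed=0):
    """Integrate an ensemble started from slightly perturbed initial
    conditions and return the standard deviation of x across members
    at the final step -- a crude gauge of forecast reliability."""
    rng = random.Random(seed)
    base = (1.0, 1.0, 1.0)
    # Perturb only the x-coordinate of each member's initial state.
    members = [(base[0] + rng.gauss(0.0, perturbation), base[1], base[2])
               for _ in range(n_members)]
    for _ in range(n_steps):
        members = [lorenz63_step(m) for m in members]
    xs = [m[0] for m in members]
    mean = sum(xs) / len(xs)
    return math.sqrt(sum((x - mean) ** 2 for x in xs) / len(xs))
```

At short lead times the spread stays close to the initial perturbation; at longer lead times chaos amplifies it toward the size of the attractor itself, which is the prediction-horizon effect discussed in this essay. Note that the spread measures only sensitivity to initial conditions, not errors in the model formulation, which is precisely why it remains a crude, provisional reliability estimate.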
What climate research needs most is a solid body of knowledge on the reliability of climate models, a scientific basis for decisions on the direction of research programs and for the delineation of the role of scientists in advising governments. The revolution I envisage does not depend on computer power, but on thinking power. Rapidly escalating computer power diffuses the central issue; thinking power may be able to focus it.
The World Summit text repeatedly focuses on regional climate prediction. I wonder why. Let me start with a few quotes:
“The Summit was organized to develop a strategy to revolutionize prediction of the climate through the 21st century to help address the threat of global climate change, particularly at the regional level.”
“Despite tremendous progress in climate modelling and the capability of high-end computers in the past 30 years, our ability to provide robust estimates of the risk to society, particularly from possible catastrophic changes in regional climate, is constrained by limitations in computer power and scientific understanding.”
“The goal of the project is to provide improved global climate information to underpin global mitigation negotiations and for regional adaptation and decision-making in the 21st century.”
Exactly what might be meant here? What is “global climate change at the regional level”? Global mitigation negotiations don’t need further “underpinning”; they are stalling for entirely different reasons than any real or imagined climate threat. Conflicts of interest concerning fossil fuel policy and unwillingness to address the skewness in the distribution of wealth dominate the agenda of the endless string of meetings.
Also, the mayor of New Orleans, the US Army Corps of Engineers, and the Governor of the State of Florida surely do not need more climate information. The damage done by hurricanes is caused primarily by the unconstrained build-up of coastal areas. I cannot conceive of any “possible catastrophic change in regional climate” around the Gulf of Mexico. It is not the climate that is threatening; it is urban sprawl and inadequate investment in coastal defense technology.
Let me now discuss the last paragraph of the World Summit text and the final paragraph of a message I received from one of its authors. First the official text:
“The Climate Prediction Project will help humanity’s efforts to cope with the consequences of climate change. Because the intellectual challenge is so large, there is great excitement within the scientific community, especially among the young who want to contribute to make the world a better place. It is imperative that the world’s corporations, foundations, and governments embrace the Climate Prediction Project. This project will help sustain the excitement of the young generation, build global capacity, especially in developing countries, and better prepare humanity to adapt to and mitigate the consequences of climate change.”
Now the position taken by one of the authors:
“As far as I am concerned, the main achievement of the Summit was to get a consensus statement from modelers around the world that computational constraints were a significant roadblock to improving global climate models. Others will use these statements to pursue possible funding initiatives.”
Was that all? Does everything boil down to a routine song-and-dance for a massive increase in computer power? Is this why bloated words about Humanity and Climate Threat were considered necessary? Is this the way to “make the world a better place”?
In 1990, I wrote a column protesting against similar fantasies. I quote:
“I worry about the arrogance of scientists who blithely claim that they can help solve the climate problem, provided their research receives massive increases in funding. I worry about the lack of sophistication and the absence of reflection in the way climate modellers covet new supercomputers (…) My worries multiply when I contemplate possible side effects. Expansion of research tends to support the illusion that science and technology can solve nearly every problem, given enough resources. Research supports the progress myth that pervades modern society, but that very myth seduces us into ignoring our responsibility for the state of the planet. Therefore, I want to restrain myself. I want to avoid making promises I cannot keep. I want to keep my expansive instincts in check. Above all, I try to be a scientist: I wish to think before I act.”
My message clearly has not lost any urgency.
We should think before we act.