UPDATE: The papers are now available (h/t to Dan Hughes); see the blue “Free Access” links at the right edge and under the titles.
Demetris Koutsoyiannis has provided information on a very informative debate on the quality of models to predict climate in the future. Demetris is the recipient of the 2009 Henry Darcy Medal for his outstanding contributions to the study of hydrometeorological variability and to water resources management.
A year ago I posted about a paper that his group published in Hydrological Sciences Journal:
In late 2009 I had posted about the editorial processing of this paper:
Last week a discussion paper was published in Hydrological Sciences Journal:
Huard, D. (2011) A black eye for the Hydrological Sciences Journal. Discussion of ‘A comparison of local and aggregated climate model outputs with observed data’ by G.G. Anagnostopoulos et al. (2010, Hydrol. Sci. J. 55(7), 1094–1110). Hydrol. Sci. J. 56(7), 1330–1333.
The abstract reads
“A paper published by Anagnostopoulos et al. in volume 55 of the Hydrological Sciences Journal (HSJ) concludes that climate models are poor based on temporal correlation between observations and individual simulations. This interpretation hinges on a common misconception, that climate models predict natural climate variability. This discussion underlines fundamental differences between hydrological and climatological models, and hopes to clear misunderstandings regarding the proper use of climate simulations.”
The Reply is entitled
Koutsoyiannis, D., Christofides, A., Efstratiadis, A., Anagnostopoulos, G.G. and Mamassis, N. (2011) Scientific dialogue on climate: is it giving black eyes or opening closed eyes? Reply to “A black eye for the Hydrological Sciences Journal” by D. Huard. Hydrol. Sci. J. 56(7), 1334–1339.
These are two very important communications. It is anticipated that the publisher will make them open access by next week, so everyone can read the entire exchange and comment on the publisher’s website [which I highly recommend!].
In the interim, below are excerpts from each paper [highlight added].
From Huard 2011
The main issue with the AKCEM paper is that it is based on a false premise, namely that the selected climate simulations predict (forecast) climate in a deterministic sense.
Global climate models (GCM) are externally driven by solar radiation and planetary orbital parameters. An additional external forcing is the manmade emission of greenhouse gases (GHG) and aerosols. None of these external forcings, however, can explain the inter-annual variability present in all climate variables. Variability at the annual and decadal scales emerges spontaneously from the dynamics of the climate system and is only weakly influenced by external forcing (massive volcanic eruptions are an exception). At the multi-decadal scale, variability is caused by a mix of natural variability and changes in external forcing conditions. Murphy et al. (2009) provide a clear and crisp discussion around these concepts.
One of the other metrics used by AKCEM to evaluate model performance is the correlation between the 30-year running mean of simulations and observations. This case is much more interesting, since we expect external forcing by GHGs to play a role at that time scale, and, thus, to explain a portion of the observed variability. Note that this is very different from saying that climate models predict climate; under constant external forcing, TAR and AR4 simulations have no predictive skill whatsoever on the chronology of events beyond the annual cycle. A climate projection is thus not a prediction of climate, it is an experiment probing the model’s response to change in GHG concentrations.
AKCEM expected individual models to show some skill in predicting multi-decadal climate variations. They do, but their skill is limited to the small fraction of climate’s variability driven by external forcing. To evaluate model performance, it is fundamental to extract the model’s response to the external forcing from the background natural variability (Randall et al. 2007). Failing to do this, AKCEM have merely shown that climate models display chaotic behaviour at small and long time scales, not that they are poor.
So why was the paper published even though its methodology is naive and the conclusion misleading? I asked Frances Watkins from the HSJ Editorial Office for a copy of the anonymous reviews, the author’s response and the editor’s decision letter. The authors, editors and reviewers all agreed to make these documents available and with those I could make some sense of what happened.
The paper received three evaluations. Reviewer A provided a solid review which identified unsubstantiated or false claims and methodological shortcomings. The evaluation included a comment on the general lack of rigour of the paper with a recommendation not to publish. Reviewer B rated the paper as “Very good to excellent” and made three superficial suggestions for improvements. Reviewer C rated the paper as “Poor to fair” and specifically stated: “This paper is misleading as it is based on a wrong assumption related to the climate system predictability.” Reviewer C also criticized the methodology as inappropriate and recommended the paper be rejected outright.
….the editorial piece (Kundzewicz and Stakhiv 2010) indicates the editor shares the same misguided assumptions about climate simulations as AKCEM and there is little hope that an additional review would have made any difference. This is in my view a black eye for HSJ coming out as lacking the discrimination required to identify poor science.
From Koutsoyiannis et al 2011
“….we tested whether the model outputs are consistent with reality (which reflects the entire variability, due to combined natural and anthropogenic effects). Our results extend Huard’s statements further. Specifically, we show that climate models are not only unable to predict the variability of climate, but they are also unable to reproduce even the means of temperature and rainfall in the past. For example, as we stated in our paper, “In some [models], the annual mean temperature of the USA is overestimated by about 4–5◦C and the annual precipitation by about 300–400 mm”.
Given our results, an interesting question would be: Under what premise could one, in order to derive meaningful results for the future, use models that fail to reproduce the known past, in terms of both mean level and variability? Huard does not ask this straight question. Yet he admits no predictive skill of models for the past. In his own words, “under constant external forcing, TAR and AR4 simulations have no predictive skill whatsoever on the chronology of events beyond the annual cycle”, and quotes Smith et al. (2007): “Previous climate model projections of climate change accounted for external forcing from natural and anthropogenic sources but did not attempt to predict internally generated natural variability”. Thus, he implies a skill for the future, regardless of poor behaviour in the past.
IPCC …. uses climate model outputs as predictions. Calling these by another name, such as “credible quantitative estimates of future climate change” (Randall et al., 2007, p. 591) does not change the essence. For example, in IPCC (2007, Fourth Assessment Report—AR4; Summary for policymakers, p. 15), we read (our emphasis): “It is very likely that hot extremes, heat waves and heavy precipitation events will continue to become more frequent”. This is one of a total of six occurrences of the word “will” in a similar context (in the three next pages of the section “Projections of future changes in climate”), the last one being “… anthropogenic carbon dioxide emissions will continue to contribute to warming and sea level rise for more than a millennium”— not to mention the over 20 appearances of expressions such as “it is expected”, “it would”, etc.
Huard writes: “The natural variability of the climate system is largely chaotic” and thus “unpredictable”. Not only do we endorse this statement, and not only have we presented research results on this issue (Koutsoyiannis 2003, 2006, 2010, Koutsoyiannis et al. 2009, Christofides and Koutsoyiannis 2011), but we have also pointed to this problem in the second paragraph of the conclusions of our paper, the one that begins: “However, we think that the most important question is not whether GCMs can produce credible estimates of future climate, but whether climate is at all predictable in deterministic terms.” It is climate modellers who say or imply otherwise; for example Schmidt (2007, our emphasis):
Weather is chaotic; imperceptible differences in the initial state of the atmosphere lead to radically different conditions in a week or so. Climate is instead a boundary value problem— a statistical description of the mean state and variability of a system, not an individual path through phase space. Current climate models yield stable and nonchaotic climates, which implies that questions regarding the sensitivity of climate to, say, an increase in greenhouse gases are well posed and can be justifiably asked of the models.
Therefore, again we are not the right recipients of Huard’s warning that climate is chaotic.
Near the end of his Discussion, Huard makes an appeal to the “mutual respect and trust in the professionalism of our peers”, which makes an interesting contrast with several of his statements referring to us, the editor, the reviewers and other authors, and ultimately the “… HSJ coming out as lacking the discrimination required to identify poor science”.
Whether the HSJ got “a black eye” is for the reader to judge, as is whether “reviewers A and C rejected the paper on technical and methodological grounds, not philosophy”, since the entire review file is now public [As it has already been available to Huard, it is annexed also to this Reply as a Supplementary Information on the HSJ online site.].
This Comment/Reply illustrates, in my view, the continued pressure on Editors not to publish papers that conflict with the IPCC perspective of the climate system and the ability of global climate models to provide skillful predictions decades into the future. Instead of showing in a quantifiable manner any flaws in the work by Demetris Koutsoyiannis and colleagues, Huard 2011 resorts to semantics and criticisms of the review process. Whenever authors resort to such arguments, it illustrates that they cannot refute the substance of the research study.