A New AGU EOS Article Titled “Guidelines For Constructing Climate Scenarios” By Mote Et Al 2011 Which Inadvertently Highlights This Flawed Climate Science Approach


There is an astonishing new article in the August 2, 2011 issue of EOS which lays open the serious flaws in the use of multi-decadal global climate models, even when downscaled to regions, to provide value-added information to the impacts community.

The article is

Mote et al., 2011: Guidelines for Constructing Climate Scenarios. EOS, Volume 92, Number 31, 2 August 2011.

I will have comments after subsections of the text extracted from their paper.

The article starts with the text

Scientists and others from academia, government, and the private sector increasingly are using climate model outputs in research and decision support. For the most recent assessment report of the Intergovernmental Panel on Climate Change, 18 global modeling centers contributed outputs from hundreds of simulations, coordinated through the Coupled Model Intercomparison Project Phase 3 (CMIP3), to the archive at the Program for Climate Model Diagnostics and Intercomparison (PCMDI; http://pcmdi3.llnl.gov) [Meehl et al., 2007]. Many users of climate model outputs prefer downscaled data—i.e., data at higher spatial resolution—to direct global climate model (GCM) outputs; downscaling can be statistical [e.g., Maurer et al., 2007] or dynamical [e.g., Mearns et al., 2009]. More than 800 users have obtained downscaled CMIP3 results from one such Web site alone (see http://gdo-dcp.ucllnl.org/downscaled_cmip3_projections/, described by Maurer et al. [2007]).

My Comment: This summary emphasizes that multi-decadal global model predictions are being actively promoted and funded for creating climate scenarios for the coming decades. This source of model-generated information is widely used but, as summarized in my comments below, it is a waste of research funding.

Their Figure 1 caption includes the evaluation criteria:

Fig. 1. Projected change (in percent) in summer precipitation for the 2080s in the U.S. Pacific Northwest from a variety of climate models (open circles, as used by Mote and Salathé [2010]), for scenario A2 of the Intergovernmental Panel on Climate Change’s Special Report on Emissions Scenarios. The x axis shows the bias factor of Giorgi and Mearns [2002]; models with simulated 1970–2000 precipitation close to the observed precipitation, within the range of natural variability, are given a skill factor of 1. Linear fit to the data is indicated (sloping line). There is little difference among changes calculated with all models unweighted (horizontal line), with only the “best” models (models with skill factor >0.9, solid circle), or with weighting the models by their skill factor (plus sign).

My Comment: This is a remarkably weak criterion to justify a claim of skill: all that they say is needed is for the model simulation to fall within the range of natural variability.
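
To make concrete how little this kind of weighting matters, here is a minimal sketch of the three ensemble averages compared in their Figure 1: all models unweighted, only the “best” models (skill factor > 0.9), and models weighted by their skill factor. This is my own illustration, not the authors’ code, and the projected changes and skill factors are entirely hypothetical numbers.

```python
# Minimal sketch (hypothetical numbers) of the three ensemble averages in Fig. 1:
# unweighted mean, "best" models only (skill factor > 0.9), and skill-weighted mean.
import numpy as np

proj_change = np.array([-12.0, -5.0, 3.0, -8.0, 1.0, -15.0, -2.0])  # % change in summer precip, hypothetical
skill = np.array([0.95, 0.40, 0.99, 0.85, 0.92, 0.30, 0.97])        # hypothetical Giorgi-Mearns-style skill factors

unweighted = proj_change.mean()                       # all models, equal weight
best_only = proj_change[skill > 0.9].mean()           # cull: keep only the "best" models
weighted = np.average(proj_change, weights=skill)     # weight each model by its skill factor

print(f"unweighted mean change : {unweighted:+.1f}%")
print(f"'best' models only     : {best_only:+.1f}%")
print(f"skill-weighted mean    : {weighted:+.1f}%")
```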

Additional extracts read

Descriptions of future climate change should include both a central estimate and some representation of uncertainty. Major contributors to uncertainty are imperfect knowledge of (1) the drivers of change, chiefly the sources and sinks of anthropogenic greenhouse gases and aerosols; (2) the response of the climate system to those drivers; and (3) how unforced variability may mask the forced response to drivers. Quantifying uncertainty in greenhouse gas emissions and other forcings—the drivers of change—remains problematic, and although some studies have attempted to assign probabilities, many instead simply choose among the three forcing scenarios that were widely used for CMIP3. Between now and about 2050 this source of uncertainty is less important than others, because concentration scenarios diverge substantially only after that and because changes before then include a substantially delayed response to previous emissions.

My Comment: This paragraph includes more significant confessions regarding the robustness of their model scenarios. First, they do not even include all of the first-order climate forcings (e.g., see NRC, 2005); land use change effects, for example, are omitted. The admission that there is “imperfect knowledge” of the response of the climate system to these drivers should also raise a red flag regarding the robustness of the scenarios.

The third important source of uncertainty—how unforced variability masks effects by known drivers of climate change—involves the fact that historical climate simulations do not, and are not intended to, reproduce the exact monthly values of climate variables.

My Comment: This is an amazing statement. If the historical climate simulations do not reproduce the monthly statistics of the climate variables, the results should not be accepted as skillful. I am unclear what is meant by the term “exact” in this context, but it is clear that the statistics should be replicated within some quantified range of error; model results are never “exact”.
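
What a quantified test of whether the monthly statistics are replicated might look like is sketched below. This is my own illustration, not anything from the paper: the monthly precipitation climatologies and the acceptance tolerances are hypothetical, and a real evaluation would use actual observations and defensible error thresholds.

```python
# Minimal sketch: compare a model's simulated monthly climatology against an
# observed one and accept it only if bias and RMSE fall within stated tolerances.
# All values and thresholds are hypothetical.
import numpy as np

obs_monthly = np.array([80, 65, 60, 45, 35, 25, 15, 18, 30, 55, 90, 95], float)  # mm/month, hypothetical obs
mod_monthly = np.array([70, 60, 55, 50, 40, 30, 20, 22, 35, 60, 80, 85], float)  # mm/month, hypothetical model

bias = (mod_monthly - obs_monthly).mean()
rmse = np.sqrt(((mod_monthly - obs_monthly) ** 2).mean())

# Hypothetical acceptance thresholds, expressed relative to the observed mean.
bias_ok = abs(bias) <= 0.10 * obs_monthly.mean()
rmse_ok = rmse <= 0.20 * obs_monthly.mean()

print(f"bias = {bias:+.1f} mm, rmse = {rmse:.1f} mm, skillful = {bias_ok and rmse_ok}")
```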

Using climate projections for impact assessments depends on being able to separate forced responses from natural climate variability [e.g., Giorgi, 2005], which is often accomplished by analyzing the mean and range in an ensemble of simulations differing only in initial conditions. One thing to note on the uncertainty in climate projections is that on the regional to local scale, where effects are felt, studies may include extremes like cold or heat, storms, and droughts, and detection and attribution of such changes to specific causes (e.g., rising greenhouse gases) becomes more difficult. Consequently, estimating uncertainty in future changes in these local quantities has little theoretical basis. Further, it must be emphasized that the range of available model results does not, and is not intended to, represent the true physical uncertainty of the quantity in question, although many studies implicitly assume that it does.

My Comment: If the estimate of the uncertainty in future changes in the local effects of drought, heat, etc. “has little theoretical basis”, what good is it? This is a remarkable admission of the lack of value of their scenarios.
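
For readers unfamiliar with the procedure described in the extract above (an ensemble of simulations differing only in initial conditions, whose mean is taken as the forced response and whose spread as unforced variability), here is a minimal sketch with entirely synthetic data; nothing in it comes from the paper.

```python
# Minimal sketch (synthetic data): ensemble mean as the "forced response",
# ensemble spread as internal (unforced) variability, and their ratio.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2000, 2051)
forced_trend = 0.02 * (years - years[0])              # hypothetical forced warming signal (degC)
n_members = 10
ensemble = forced_trend + rng.normal(0.0, 0.3, size=(n_members, years.size))

forced_estimate = ensemble.mean(axis=0)               # ensemble mean ~ forced response
internal_spread = ensemble.std(axis=0)                # spread ~ internal variability

signal_to_noise = forced_estimate[-1] / internal_spread[-1]
print(f"end-of-period signal-to-noise ratio: {signal_to_noise:.2f}")
```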

To distill the large number of model simulations into a small group of scenarios, it seems logical to focus on simulations that seem more credible, culling or weighting the results on the basis of some measure of skill…. Furthermore, while many efforts have focused on ranking climate models based on how they simulate the time-averaged regional climate during a historical period [e.g., Gleckler et al., 2008; Brekke et al., 2008], for impact assessments, in particular, a better basis for model ranking might be their ability to simulate regional climate sensitivity to a change in global climate forcing, provided that a theoretical and observational basis for such analysis can be established.

My Comment: The ranking of models by their ability to simulate regional climate statistics during the historical period should be the requirement for documenting model skill. To rank by “regional climate sensitivity” to a change in global climate forcing is just a model-prescribed evaluation! It tosses out the need to compare with real-world observations.
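
A minimal sketch of the ranking I am arguing for, ordering models by how well they reproduce observed regional climate statistics over the historical period, is given below. The model names and error values are hypothetical and for illustration only.

```python
# Minimal sketch (hypothetical values): rank models by their error against
# observed historical regional climate statistics (smallest error ranks first).
import numpy as np

model_names = ["ModelA", "ModelB", "ModelC", "ModelD"]   # hypothetical models
rmse_vs_obs = np.array([0.6, 1.4, 0.9, 0.3])             # hypothetical RMSE vs. observations

order = np.argsort(rmse_vs_obs)                          # best (smallest error) first
for rank, i in enumerate(order, start=1):
    print(f"{rank}. {model_names[i]} (RMSE vs. obs = {rmse_vs_obs[i]:.1f})")
```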

In any case, while some studies have shown that ranking models has led to a separation in future responses [e.g., Walsh et al., 2008], others have shown that considering metrics of model skill has generally made little difference either to detection and attribution studies or for representing likely future change.

My Comment: This sentence is really nonsensical. There is no observational validation data set presented to verify what constitutes a “likely future change”, for example. Nor do they present their “metrics of model skill“.

In summary, and based on the evaluations cited above, it seems justifiable to forgo culling or weighting climate projections based on perceptions of credibility.

My Comment: This is a remarkable confession. They write that “perceptions of credibility” are not to be used to assess model prediction skill.

Results from new evaluations of models including CMIP5 (see http://cmip-pcmdi.llnl.gov/cmip5/) and the North American Regional Climate Change Assessment Program [Mearns et al., 2009] are arriving, along with new downscaled data repositories.

My Comment: This means that vast additional financial and human resources are going to be spent on what is really a scientifically flawed approach.

It may be worth the effort to evaluate the relevant variables against observations, just to be cognizant of model biases, but recognize that most studies have found little or no difference in culling or weighting model outputs.

My Comment: In other words, observations are devalued as an important part of assessing the skill of model predictions. This is such a deviation from the scientific method that I find it puzzling why funding agencies such as the NSF do not reject projects that make such a claim. Not only do they ignore the need to properly validate model skill using historical data, but they also ignore the need to validate the model predictions of the CHANGES in climate statistics that result from human and natural climate forcings.
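
A minimal sketch of this second validation step, comparing each model’s hindcast CHANGE in a regional climate statistic against the observed change rather than only its time-averaged climatology, is given below. The observed and modeled changes are hypothetical numbers for illustration only.

```python
# Minimal sketch (hypothetical numbers): validate hindcast CHANGES against the
# observed change, not just the time-averaged historical climatology.
import numpy as np

observed_change = 0.8                                    # degC, hypothetical observed regional change
model_changes = np.array([0.2, 1.5, 0.7, -0.1, 0.9])     # degC, hypothetical hindcast changes per model

errors = model_changes - observed_change
print("change errors (model - obs):", np.round(errors, 2))
print("models within +/-0.25 degC of the observed change:",
      np.where(np.abs(errors) <= 0.25)[0].tolist())
```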

My Conclusion: The Mote et al. 2011 EOS article provides documentation, in the authors’ own words, of why the creation of climate scenarios using multi-decadal global (and downscaled regional) model predictions does not add value for use by the impacts community [the research and policy communities who assess risks to key societal and environmental resources]. Even worse, these scenarios mislead policymakers on what is achievable in terms of climate forecast skill.

