In the January 2012 issue of the Bulletin of the American Meteorological Society, there is an article
Ho, C.K., D.B. Stephenson, M. Collins, C.A.T. Ferro, and S.J. Brown, 2012: Calibration strategies: A source of additional uncertainty in climate change projections. Bulletin of the American Meteorological Society, 93, No. 1, 21-26, doi:10.1175/2011BAMS3110.1
which has remarkable confessions regarding the level of skill of multi-decadal regional climate predictions; what we refer to as Type 4 downscaling, which we discuss in
Pielke Sr., R.A., R. Wilby, D. Niyogi, F. Hossain, K. Dairuku, J. Adegoke, G. Kallos, T. Seastedt, and K. Suding, 2012: Dealing with complexity and extreme events using a bottom-up, resource-based vulnerability perspective. AGU Monograph on Complexity and Extreme Events in Geosciences, in press.
Pielke Sr., R.A., and R.L. Wilby, 2012: Regional climate downscaling – what’s the point? Eos Forum, 93, No. 5, 52-53, doi:10.1029/2012EO050008.
The authors clearly thought they were working to solve this problem, but even a brief reading of the paper shows that implicit in their comments is the failure of the regional climate predictions even to skillfully reproduce the current climate, much less accurately predict changes in the regional climate statistics.
I have extracted text from their paper to illustrate this [highlight added]
They start the paper with
“Reliable projections of weather variables from climate models are required for the assessment of future climate change impacts (e.g., flooding, drought, temperature-related mortality, and crop yield).”
My Comment: The claim that stakeholders “require” their predictions is incorrect. While stakeholders would certainly use skillful forecasts if they were available, they can still make informed decisions on how to respond to risks without definitive predictions. In fact, claiming predictive skill, when none exists, is not an honest way to communicate with stakeholders.
Their text includes these other excerpts
“Assessments of such impacts are made by driving impact models with relevant weather variables from climate model simulations (e.g., daily temperature for temperature-related mortality assessment). However, it is generally necessary to adjust (calibrate) the variables to correct for climate model biases rather than to drive impact models with raw climate model output.
Various calibration methods have been used in climate change studies, but few of the published studies carefully investigate the sensitivity of their results to the choice of calibration method.”
My Comment: Without a comparison of changes in regional climate statistics in the model results to those in the real world, there is absolutely no way they can calibrate (tune) the model predictions to obtain accurate predictions of changes in climate statistics in the coming decades.
They also make the remarkable statement that
“The effect of trend over these short 30-yr time slice periods is negligible compared to natural day-to-day variability, and so the values may be assumed to be almost identically distributed with constant location and scale.”
Although the spatial patterns of observed mean temperatures are generally well simulated by HadRM3…., the regional climate model has a warm bias of around 2°–4°C over southern Europe and has a small cold bias over parts of Scandinavia …. The model also overestimates the present-day variance over most parts of continental Europe, especially in the south, where the standard deviation of modeled daily mean temperatures is more than 50% greater than that of observed temperatures…. This comparison confirms the need for calibration of both location and scale before HadRM3 temperature projections are used for any impact assessments.
Because of the warm biases in the climate model simulations, the increases in the mean of daily mean temperatures in the period 2070–2099 relative to present-day observations estimated by the two calibration strategies are both lower than the raw model projections, especially in southern Europe…”
My Comment: These findings are just for the current climate, not even for the changes in the climate statistics. Indeed, their admission that the “trend over these short 30-yr time slice periods is negligible compared to natural day-to-day variability” means that there was no statistically significant trend! Yet they expect us to accept (believe) their changes for the 2070-2099 time period once “calibrated”.
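For readers unfamiliar with what “calibration of both location and scale” involves, it amounts to shifting and rescaling the model output so that its mean and standard deviation match observations over a reference period. Here is a minimal sketch in Python, using synthetic data; the numbers are illustrative, chosen only to mimic the warm, over-dispersed bias described above, and are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily mean temperatures (deg C) over a 30-yr reference period:
# the "model" has a +3 deg C warm bias and ~50% inflated variability.
obs   = rng.normal(loc=15.0, scale=4.0, size=30 * 365)
model = rng.normal(loc=18.0, scale=6.0, size=30 * 365)

def calibrate(x, obs_ref, model_ref):
    """Location-and-scale calibration: shift and rescale a model series so
    its reference-period mean and standard deviation match observations."""
    scale = obs_ref.std() / model_ref.std()
    return obs_ref.mean() + (x - model_ref.mean()) * scale

# Applying the calibration to the reference-period simulation itself
# reproduces the observed mean and standard deviation by construction.
corrected = calibrate(model, obs, model)
```

Note that this procedure guarantees agreement with observations in the calibration period by construction; it says nothing about whether the model's simulated changes in the climate statistics are skillful, which is the point at issue.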
The following excerpt is included in their conclusion
Given the importance of calibration in impact assessments, further research in this area is clearly required. For example, it can be argued that the simple assumptions underlying both bias correction and change-factor strategies are rather unrealistic, and so more rigorous statistical frameworks need to be developed and tested (e.g., Bayesian models that are capable of predicting true climate by properly accounting for model discrepancy and observational errors). Furthermore, the bias correction strategy presented here assumes no changes in climate model biases with time.
My Comment: This is just a call for more funding for what is an inappropriately framed approach to assisting stakeholders in making key resources more resilient to the threats they face from weather and other aspects of the climate system in the coming decades.
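For context, the two strategies named in the excerpt differ in what they adjust: bias correction removes the present-day model error from the future simulation, while the change-factor approach adds the model-simulated change to present-day observations. A minimal sketch with synthetic data (all numbers are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily mean temperatures (deg C): observations, plus model
# output for a present-day and a future period.
obs           = rng.normal(15.0, 4.0, 30 * 365)
model_present = rng.normal(18.0, 6.0, 30 * 365)   # warm, over-dispersed bias
model_future  = rng.normal(21.0, 6.5, 30 * 365)

# Bias-correction strategy: subtract the present-day model bias
# from the future simulation.
bias_corrected = model_future - (model_present.mean() - obs.mean())

# Change-factor strategy: add the model-simulated change ("delta")
# to the observations.
change_factor = obs + (model_future.mean() - model_present.mean())
```

In this simple mean-only form the two strategies yield projections with identical means; they differ in which series supplies the day-to-day variability (the model for bias correction, the observations for the change factor). Both rest on the assumption that present-day biases carry over unchanged to the future, the very assumption the excerpt itself flags as questionable.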
As I have repeatedly reported on my weblog, these studies are not only a waste of money and time, but are also misleading stakeholders and policymakers about our actual level of knowledge of the climate system. In terms of the image that I posted at the top, this means that the model runs used to create that figure (and others like it) are not scientifically robust.