Category Archives: Climate Models

Guest Post By Robert Pollock On Global Climate Modeling

I was sent an informative e-mail by Robert Pollock on the global climate response to volcanic eruptions, which I am presenting below with his permission.  Robert is a retired physicist with training in radiation dosimetry. He started a company to measure radon in the environment and sold it a few years ago.

His excellent analysis follows.

On Tue, 11 Sep 2012, Robert Pollock wrote:

Roger, I don’t know if you have an interest in volcanic eruptions, but they are often cited as an example of the efficacy of GCMs and are very important when looking at ocean heat content.

Gleckler et al. modeled the effect of volcanic eruptions on ocean heat content. Using 12 climate models, they showed that Krakatoa in 1883 made its presence felt well into the 20th century in the form of reduced sea level rise and less ocean warming (both at the surface and at depth). As stated in the AR4, including volcanic eruptions improved the models' match to reality, and the cooling from volcanoes offset a considerable fraction of anthropogenic ocean warming.

Figure 1 from Gleckler shows the difference with and without volcanic forcing between 1880 and 2000: www.nature.com/nature/journal/v439/n7077/fig_tab/439675a_F1.html

At the end of the 20th century, the simulations with (blue) and without (green) volcanic forcings differ by some 18×10^22 J out of roughly 60×10^22 J, leaving the run with volcanic forcing at about 70% of the value without it.

The authors wrote

“Inclusion of volcanic forcing from Krakatoa (and, by implication, from even earlier eruptions) is important for a reliable simulation of historical increases in ocean heat content and sea-level change due to thermal expansion.”

However, in a 2010 paper Gregory notes that ‘even earlier eruptions’ were not included in the Gleckler modeling work, and that if they had been, the conclusion would have been quite different. If each eruption produced a cooling and a reduction in sea level rise lasting decades (if not centuries), then each new eruption would lead to further decreases indefinitely.

Such is not the case: Gregory modeled a steady-state condition resulting from eruptions before Krakatoa, whereas in other climate models the background natural conditions do not include volcanic eruptions. With a volcanic background included, the impact of a new eruption (as part of a series) becomes smaller and does not lead to a long-term trend in ocean heat content.
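To make the steady-state argument concrete, below is a minimal one-box ocean heat content sketch (an illustration only, not the Gleckler or Gregory model; the heat capacity, feedback parameter and eruption schedule are all assumed). When eruptions are measured against a volcano-free control, the ocean settles into a persistent heat deficit; when the control already contains the long-term mean volcanic forcing, the same eruptions produce no lasting offset.

```python
# Minimal one-box ocean heat content sketch (illustrative only; not the
# Gleckler or Gregory model). Parameter values are assumptions.
import numpy as np

C = 14.0     # effective ocean heat capacity, W yr m^-2 K^-1 (assumed)
lam = 1.2    # climate feedback parameter, W m^-2 K^-1 (assumed)
dt = 0.1     # time step in years
years = np.arange(0.0, 600.0, dt)

def volcanic_forcing(t, period=30.0, amp=-3.0, decay=1.0):
    """Negative forcing pulse after each eruption, decaying over ~1 year."""
    return amp * np.exp(-(t % period) / decay)

def integrate(forcing):
    """Integrate dT/dt = (F - lam*T)/C and return the heat content anomaly C*T."""
    T = 0.0
    heat = np.empty_like(forcing)
    for i, f in enumerate(forcing):
        T += dt * (f - lam * T) / C
        heat[i] = C * T
    return heat

F_volc = volcanic_forcing(years)

# Case 1: eruptions treated as anomalies against a volcano-free control run
H_no_background = integrate(F_volc)

# Case 2: anomalies measured against a control that already contains the
# long-term mean volcanic forcing (the steady-state background)
H_with_background = integrate(F_volc - F_volc.mean())

print("final OHC anomaly, volcano-free control: %+.2f" % H_no_background[-1])
print("final OHC anomaly, volcanic background:  %+.2f" % H_with_background[-1])
```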

Most GCMs overestimate the cooling effect of volcanoes and thus also overestimate the forcing from greenhouse gases needed to reproduce the climate and ocean heat content of the 20th century.

Gleckler et al. Volcanoes and climate: Krakatoa’s signature persists in the ocean www.nature.com/nature/journal/v439/n7077/abs/43975a.html

Gregory Long-term effect of volcanic forcing on ocean heat content www.agu.org/pubs/crossref/2010/2010GL045507.shtml

Driscoll et al. now have a paper in press that looks at the current generation of models used for the AR5 (13 CMIP5 models) and their ability to model large tropical eruptions. The abstract lists a number of problems and

“raises concern for the ability of current climate models to simulate the response of a major mode of global circulation variability to external forcings. This is also of concern for the accuracy of geoengineering modeling studies that assess the atmospheric response to stratosphere-injected particles.”

Coupled Model Intercomparison Project 5 (CMIP5) simulations of climate following volcanic eruptions www.agu.org/pubs/crossref/pip/2012JD017607.shtml

Robert Pollock



Filed under Climate Change Forcings & Feedbacks, Climate Models

More CMIP5 Regional Model Shortcomings


In my post

CMIP5 Climate Model Runs – A Scientifically Flawed Approach

I discussed the Coupled Model Intercomparison Project Phase 5 (CMIP5), which is an integral part of the upcoming IPCC assessment.  Two of its goals are to

  • evaluate how realistic the models are in simulating the recent past, and
  • provide projections of future climate change on two time scales: near term (out to about 2035) and long term (out to 2100 and beyond).

In that post I presented a number of peer-reviewed model comparisons with real-world observations that document the failure of the multi-decadal global climate models to provide skillful regional climate predictions to the impacts communities.

I concluded my post with the text

These studies, and I am certain more will follow, show that the multi-decadal climate models are not even skillfully simulating current climate statistics, as are needed by the impacts communities, much less CHANGES in climate statistics.  At some point, this waste of money to make regional climate predictions decades from now is going to be widely recognized.

Jos de Laat of KNMI has provided us with further examples that document the serious limitations of the CMIP5 model results. I have presented this list below [with highlighting]. I am pleased that the model hindcast predictions are being reported, as this is clearly information that the impact and policy communities need.

L. Goddard, A. Kumar, A. Solomon, D. Smith, G. Boer, P. Gonzalez, V. Kharin, W. Merryfield, C. Deser, S. J. Mason, B. P. Kirtman, R. Msadek, R. Sutton, E. Hawkins, T. Fricker, G. Hegerl, C. A. T. Ferro, D. B. Stephenson, G. A. Meehl, T. Stockdale, R. Burgman, A. M. Greene, Y. Kushnir, M. Newman, J. Carton, I. Fukumori, T. Delworth. (2012) A verification framework for interannual-to-decadal predictions experiments. Climate Dynamics Online publication date: 24-Aug-2012.

Abstract

Decadal predictions have a high profile in the climate science community and beyond, yet very little is known about their skill. Nor is there any agreed protocol for estimating their skill. This paper proposes a sound and coordinated framework for verification of decadal hindcast experiments. The framework is illustrated for decadal hindcasts tailored to meet the requirements and specifications of CMIP5 (Coupled Model Intercomparison Project phase 5). The chosen metrics address key questions about the information content in initialized decadal hindcasts. These questions are: (1) Do the initial conditions in the hindcasts lead to more accurate predictions of the climate, compared to un-initialized climate change projections? and (2) Is the prediction model’s ensemble spread an appropriate representation of forecast uncertainty on average? The first question is addressed through deterministic metrics that compare the initialized and uninitialized hindcasts. The second question is addressed through a probabilistic metric applied to the initialized hindcasts and comparing different ways to ascribe forecast uncertainty. Verification is advocated at smoothed regional scales that can illuminate broad areas of predictability, as well as at the grid scale, since many users of the decadal prediction experiments who feed the climate data into applications or decision models will use the data at grid scale, or downscale it to even higher resolution. An overall statement on skill of CMIP5 decadal hindcasts is not the aim of this paper. The results presented are only illustrative of the framework, which would enable such studies. However, broad conclusions that are beginning to emerge from the CMIP5 results include (1) Most predictability at the interannual-to-decadal scale, relative to climatological averages, comes from external forcing, particularly for temperature; (2) though moderate, additional skill is added by the initial conditions over what is imparted by external forcing alone; however, the impact of initialization may result in overall worse predictions in some regions than provided by uninitialized climate change projections; (3) limited hindcast records and the dearth of climate-quality observational data impede our ability to quantify expected skill as well as model biases; and (4) as is common to seasonal-to-interannual model predictions, the spread of the ensemble members is not necessarily a good representation of forecast uncertainty. The authors recommend that this framework be adopted to serve as a starting point to compare prediction quality across prediction systems. The framework can provide a baseline against which future improvements can be quantified. The framework also provides guidance on the use of these model predictions, which differ in fundamental ways from the climate change projections that much of the community has become familiar with, including adjustment of mean and conditional biases, and consideration of how to best approach forecast uncertainty.
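As a rough illustration of the deterministic comparison the framework calls for (this is not the authors' verification code, and the arrays below are synthetic placeholders), a mean-squared-error skill score can quantify whether the initialized hindcasts beat the uninitialized projections when both are verified against observations:

```python
# Sketch of a deterministic verification metric in the spirit of Goddard
# et al. (2012): an MSE skill score of initialized hindcasts relative to
# uninitialized projections. All data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_starts = 40   # hypothetical number of hindcast start dates

obs = rng.normal(size=n_starts)                              # observed anomalies
uninit = 0.3 * obs + rng.normal(scale=1.0, size=n_starts)    # forced-only projection
init = 0.7 * obs + rng.normal(scale=0.7, size=n_starts)      # initialized hindcast

def mse(forecast, reference):
    return np.mean((forecast - reference) ** 2)

# Positive values mean initialization adds skill over the uninitialized runs
msss = 1.0 - mse(init, obs) / mse(uninit, obs)
print(f"MSE skill score (initialized vs. uninitialized): {msss:.2f}")
```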

Driscoll, S., A. Bozzo, L. J. Gray, A. Robock, and G. Stenchikov (2012), Coupled Model Intercomparison Project 5 (CMIP5) simulations of climate following volcanic eruptions, J. Geophys. Res., 117, D17105, doi:10.1029/2012JD017607. published 6 September 2012.

Abstract

The ability of the climate models submitted to the Coupled Model Intercomparison Project 5 (CMIP5) database to simulate the Northern Hemisphere winter climate following a large tropical volcanic eruption is assessed. When sulfate aerosols are produced by volcanic injections into the tropical stratosphere and spread by the stratospheric circulation, it not only causes globally averaged tropospheric cooling but also a localized heating in the lower stratosphere, which can cause major dynamical feedbacks. Observations show a lower stratospheric and surface response during the following one or two Northern Hemisphere (NH) winters, that resembles the positive phase of the North Atlantic Oscillation (NAO). Simulations from 13 CMIP5 models that represent tropical eruptions in the 19th and 20th century are examined, focusing on the large-scale regional impacts associated with the large-scale circulation during the NH winter season. The models generally fail to capture the NH dynamical response following eruptions. They do not sufficiently simulate the observed post-volcanic strengthened NH polar vortex, positive NAO, or NH Eurasian warming pattern, and they tend to overestimate the cooling in the tropical troposphere. The findings are confirmed by a superposed epoch analysis of the NAO index for each model. The study confirms previous similar evaluations and raises concern for the ability of current climate models to simulate the response of a major mode of global circulation variability to external forcings. This is also of concern for the accuracy of geoengineering modeling studies that assess the atmospheric response to stratosphere-injected particles.

Mauritsen, T., et al. (2012), Tuning the climate of a global model, J. Adv. Model. Earth Syst., 4, M00A01, doi:10.1029/2012MS000154. published 7 August 2012.

Abstract

During a development stage global climate models have their properties adjusted or tuned in various ways to best match the known state of the Earth’s climate system. These desired properties are observables, such as the radiation balance at the top of the atmosphere, the global mean temperature, sea ice, clouds and wind fields. The tuning is typically performed by adjusting uncertain, or even non-observable, parameters related to processes not explicitly represented at the model grid resolution. The practice of climate model tuning has seen an increasing level of attention because key model properties, such as climate sensitivity, have been shown to depend on frequently used tuning parameters. Here we provide insights into how climate model tuning is practically done in the case of closing the radiation balance and adjusting the global mean temperature for the Max Planck Institute Earth System Model (MPI-ESM). We demonstrate that considerable ambiguity exists in the choice of parameters, and present and compare three alternatively tuned, yet plausible configurations of the climate model. The impacts of parameter tuning on climate sensitivity was less than anticipated.
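The tuning practice described in the abstract can be illustrated with a toy sketch (the "model" below is a made-up monotonic function, not MPI-ESM, and the target imbalance and parameter bounds are assumptions): one uncertain parameter is adjusted until the global-mean top-of-atmosphere radiation balance matches a target value.

```python
# Toy illustration of closing the TOA radiation balance by tuning one
# uncertain parameter. The "model" is a stand-in function, not MPI-ESM.
import numpy as np
from scipy.optimize import brentq

def toa_imbalance(cloud_parameter):
    """Stand-in for a model's global-mean TOA imbalance (W m^-2) as a
    monotonic function of one uncertain cloud-related parameter."""
    return 4.0 - 6.5 * np.tanh(cloud_parameter)

target = 0.5  # desired net TOA imbalance in W m^-2 (assumed)

# Root-find the parameter value at which the imbalance hits the target
tuned = brentq(lambda p: toa_imbalance(p) - target, 0.01, 3.0)
print(f"tuned parameter: {tuned:.3f}, imbalance: {toa_imbalance(tuned):.2f} W m^-2")
```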

Jiang, J. H., et al. (2012), Evaluation of cloud and water vapor simulations in CMIP5 climate models using NASA “A-Train” satellite observations, J. Geophys. Res., 117, D14105, doi:10.1029/2011JD017237. published 18 July 2012.

Using NASA’s A-Train satellite measurements, we evaluate the accuracy of cloud water content (CWC) and water vapor mixing ratio (H2O) outputs from 19 climate models submitted to the Phase 5 of Coupled Model Intercomparison Project (CMIP5), and assess improvements relative to their counterparts for the earlier CMIP3. We find more than half of the models show improvements from CMIP3 to CMIP5 in simulating column-integrated cloud amount, while changes in water vapor simulation are insignificant. For the 19 CMIP5 models, the model spreads and their differences from the observations are larger in the upper troposphere (UT) than in the lower or middle troposphere (L/MT). The modeled mean CWCs over tropical oceans range from ∼3% to ∼15× of the observations in the UT and 40% to 2× of the observations in the L/MT. For modeled H2Os, the mean values over tropical oceans range from ∼1% to 2× of the observations in the UT and within 10% of the observations in the L/MT. The spatial distributions of clouds at 215 hPa are relatively well-correlated with observations, noticeably better than those for the L/MT clouds. Although both water vapor and clouds are better simulated in the L/MT than in the UT, there is no apparent correlation between the model biases in clouds and water vapor. Numerical scores are used to compare different model performances in regards to spatial mean, variance and distribution of CWC and H2O over tropical oceans. Model performances at each pressure level are ranked according to the average of all the relevant scores for that level.

From the conclusions: “Tropopause layer water vapor is poorly simulated with respect to observations. This likely results from temperature biases.”

Sakaguchi, K., X. Zeng, and M. A. Brunke (2012), The hindcast skill of the CMIP ensembles for the surface air temperature trend, J. Geophys. Res., 117, D16113, doi:10.1029/2012JD017765. published 28 August 2012

Linear trends of the surface air temperature (SAT) simulated by selected models from the Coupled Model Intercomparison Project (CMIP3 and CMIP5) historical experiments are evaluated using observations to document (1) the expected range and characteristics of the errors in hindcasting the ‘change’ in SAT at different spatiotemporal scales, (2) if there are ‘threshold’ spatiotemporal scales across which the models show substantially improved performance, and (3) how they differ between CMIP3 and CMIP5. Root Mean Square Error, linear correlation, and Brier score show better agreement with the observations as spatiotemporal scale increases but the skill for the regional (5° × 5° – 20° × 20° grid) and decadal (10 – ∼30-year trends) scales is rather limited. Rapid improvements are seen across 30° × 30° grid to zonal average and around 30 years, although they depend on the performance statistics. Rather abrupt change in the performance from 30° × 30° grid to zonal average implies that averaging out longitudinal features, such as land-ocean contrast, might significantly improve the reliability of the simulated SAT trend. The mean bias and ensemble spread relative to the observed variability, which are crucial to the reliability of the ensemble distribution, are not necessarily improved with increasing scales and may impact probabilistic predictions more at longer temporal scales. No significant differences are found in the performance of CMIP3 and CMIP5 at the large spatiotemporal scales, but at smaller scales the CMIP5 ensemble often shows better correlation and Brier score, indicating improvements in the CMIP5 on the temporal dynamics of SAT at regional and decadal scales.
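A sketch of the kind of scale-dependent scoring described in this abstract (using synthetic trend fields, not the Sakaguchi et al. data) shows how RMSE and correlation between hindcast and observed trends can be recomputed as the grid is aggregated into coarser boxes:

```python
# Synthetic illustration of scoring surface-temperature trend hindcasts at
# increasingly coarse spatial aggregation (not the Sakaguchi et al. analysis).
import numpy as np

rng = np.random.default_rng(1)
nlat, nlon = 36, 72   # hypothetical 5-degree grid
obs_trend = rng.normal(0.2, 0.1, (nlat, nlon))                 # "observed" trends
model_trend = obs_trend + rng.normal(0.0, 0.15, (nlat, nlon))  # hindcast trends

def block_average(field, factor):
    """Average a 2-D field over factor-by-factor blocks."""
    ny, nx = field.shape
    return field.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

for factor in (1, 2, 4, 6):   # 5-, 10-, 20- and 30-degree boxes
    o = block_average(obs_trend, factor).ravel()
    m = block_average(model_trend, factor).ravel()
    rmse = np.sqrt(np.mean((m - o) ** 2))
    corr = np.corrcoef(m, o)[0, 1]
    print(f"{5 * factor:>3d}-degree boxes: RMSE = {rmse:.3f}, r = {corr:.2f}")
```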


Filed under Climate Models, Research Papers

Another Paper That Documents The Limitations Of Skillful Multi-Decadal Regional Climate Predictions “Urban Precipitation Extremes: How Reliable Are Regional Climate Models?” By Mishra Et Al 2012

I was alerted to another paper that documents the limitations of multi-decadal regional climate predictions [h/t Robert Pollock].

The paper is

Mishra, V., F. Dominguez, and D. P. Lettenmaier (2012), Urban precipitation extremes: How reliable are regional climate models?, Geophys. Res. Lett., 39, L03407, doi:10.1029/2011GL050658.

The abstract reads [highlight added]

We evaluate the ability of regional climate models (RCMs) that participated in the North American Regional Climate Change Assessment Program (NARCCAP) to reproduce the historical season of occurrence, mean, and variability of 3 and 24-hour precipitation extremes for 100 urban areas across the United States. We show that RCMs with both reanalysis and global climate model (GCM) boundary conditions behave similarly and underestimate 3-hour precipitation maxima across almost the entire U.S. RCMs with both boundary conditions broadly capture the season of occurrence of precipitation maxima except in the interior of the western U.S. and the southeastern U.S. On the other hand, the RCMs do much better in identifying the season of 24-hour precipitation maxima. For mean annual precipitation maxima, regardless of the boundary condition, RCMs consistently show high (low) bias for locations in the western (eastern) U.S. Our results indicate that RCM-simulated 3-hour precipitation maxima at 100-year return period could be considered acceptable for stormwater infrastructure design at less than 12% of the 100 urban areas (regardless of boundary conditions). RCM performance for 24-hour precipitation maxima was slightly better, with performance acceptable for stormwater infrastructure design judged adequate at about 25% of the urban areas.
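The acceptability test referred to in the abstract can be sketched as follows (synthetic annual maxima rather than NARCCAP output; the GEV fit and the ±10% criterion follow common engineering practice and are not necessarily the authors' exact procedure):

```python
# Sketch: fit annual precipitation maxima with a GEV distribution, estimate
# the 100-year return level, and check whether the model's estimate is
# within +/-10% of the observed one. Data are synthetic placeholders.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(2)
obs_maxima = genextreme.rvs(c=-0.1, loc=60.0, scale=15.0, size=33, random_state=rng)
rcm_maxima = genextreme.rvs(c=-0.1, loc=50.0, scale=12.0, size=33, random_state=rng)

def return_level(maxima, return_period=100.0):
    """GEV-based return level for the given return period (years)."""
    c, loc, scale = genextreme.fit(maxima)
    return genextreme.isf(1.0 / return_period, c, loc=loc, scale=scale)

obs_rl = return_level(obs_maxima)
rcm_rl = return_level(rcm_maxima)
bias = (rcm_rl - obs_rl) / obs_rl

print(f"observed 100-yr level: {obs_rl:.1f} mm, RCM: {rcm_rl:.1f} mm, bias: {bias:+.0%}")
print("acceptable for design (|bias| <= 10%):", abs(bias) <= 0.10)
```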

Their experimental design is explained as

We used RCM-simulated precipitation output from participating models in the North American Regional Climate Change Assessment Program (NARCCAP) [Mearns et al., 2009]. For most of the NARCCAP RCMs, two distinct simulations were made: the first simulation forced the RCMs with output from the National Center for Environmental Prediction/Department of Energy (NCEP/DOE) reanalysis [Kanamitsu et al., 2002] at the boundaries for the 1979–2000 period (RCM-reanalysis henceforth). For the second simulation, output from selected GCMs was used to provide the RCM boundary conditions both in the historical (1968–2000) and future (2038–2080) periods (RCM-GCM henceforth). In this study, we focus only on the RCM reanalysis and RCM-GCM for the historical period, because our objective is to evaluate model skill when compared to observations.

Thus, the downscaling runs using the reanalysis are a Type 2 downscaling as defined in

Castro, C.L., R.A. Pielke Sr., and G. Leoncini, 2005: Dynamical downscaling:  Assessment of value retained and added using the Regional Atmospheric  Modeling System (RAMS). J. Geophys. Res. – Atmospheres, 110, No. D5, D05108,  doi:10.1029/2004JD004721.

The runs with the GCMs for the period 1968-2000 appear to be a Type 3 downscaling (i.e., the SSTs are prescribed over this time period, although their paper is not clear on this). If the SSTs, and all other aspects of the GCM runs, were predicted rather than prescribed, this would be a Type 4 downscaling simulation run in hindcast mode.

Their conclusions include the summary

1. RCM performance is satisfactory in simulating the seasonality of 24-hour precipitation extremes across most of the U.S. However, for most urban areas in the western and southeastern U.S., the seasonality of 3-hour precipitation extremes was not successfully reproduced by the RCMs with either reanalysis or GCM boundary conditions. Specifically, the RCMs tended to predict 3-hour precipitation maxima in winter, whereas the observations indicated summer.
2. RCMs largely underestimated 3-hour precipitation maxima means and 100-year return period magnitudes at most locations across the United States for both reanalysis and GCM boundary conditions. However, performance was better for 24-hour precipitation maxima (means and 100-year events), although there were generally overestimates in the west, and underestimates in the east.
3. For both 3- and 24-hour annual precipitation maxima, RCMs underestimated interannual variability with reanalysis boundary conditions and overestimated it with GCM boundary conditions.
4. At only a very small number of locations was the bias in RCM-simulated 3 and 24-hour 100 year return period precipitation maxima within +/-10% of the observed estimates, which might be deemed acceptable for stormwater infrastructure design purposes.

This is an informative study. Using reanalyses, in which real-world observations constrain the regional climate model predictions (through lateral boundary conditions and nudging), provides the benchmark that the multi-decadal climate forecasts must improve upon.

Papers that we have completed on extreme rainfall events in urban areas, e.g.,

Lei, M., D. Niyogi, C. Kishtawal, R. Pielke Sr., A. Beltrán-Przekurat, T. Nobis, and S. Vaidya, 2008: Effect of explicit urban land surface representation on the simulation of the 26 July 2005 heavy rain event over Mumbai, India. Atmos. Chem. Phys. Discussions, 8, 8773–8816.

show that landscape effects must also be considered in planning for extreme rainfall events.

See also, for Atlanta, research on this subject by Marshall Shepherd and by Dev Niyogi:

Atlanta Thunderstorms by J. Marshall Shepherd

News Report On The Role of Landscape Processes On Weather and Climate

The Mishra et al. 2012 paper shows that the participating models in the North American Regional Climate Change Assessment Program (NARCCAP) have not provided evidence that their predictions would have the required skill for the future time period (2038–2080).

They have biases for the recent climate, and have not even been tested in this paper with respect to their ability to skillfully predict changes in urban climate statistics over the period 1968 to 2000.  If they are being provided to urban planners as robust estimates of the envelope of what could occur during 2038-2080, they are misleading those policymakers.



Filed under Climate Models, Research Papers

New Paper “Parameterization Of Instantaneous Global Horizontal Irradiance At The Surface. Part II: Cloudy-Sky Component” By Sun Et Al 2012

There is yet another paper that documents the lack of skill of the multi-decadal global climate models in predicting climate conditions in the coming years. This paper examines the accuracy lost when radiation parameterizations are called at time intervals that are long compared to other physical processes in the models.  The paper is

Sun, Z., J. Liu, X. Zeng, and H. Liang (2012), Parameterization of instantaneous global horizontal irradiance at the surface. Part II: Cloudy-sky component. J. Geophys. Res., doi:10.1029/2012JD017557, in press. [the full paper is available at the JGR site by clicking PIP PDF – h/t Victor Venema]

The abstract reads [highlight added]

Radiation calculations in global numerical weather prediction (NWP) and climate models are usually performed in 3-hourly time intervals in order to reduce the computational cost. This treatment can lead to an incorrect Global Horizontal Irradiance (GHI) at the Earth’s surface, which could be one of the error sources in modelled convection and precipitation. In order to improve the simulation of the diurnal cycle of GHI at the surface a fast scheme has been developed in this study and it can be used to determine the GHI at the Earth’s surface more frequently with affordable costs. The scheme is divided into components for clear-sky and cloudy-sky conditions. The clear-sky component has been described in Part I. The cloudy-sky component is introduced in this paper. The scheme has been tested using observations obtained from three Atmospheric Radiation Measurement (ARM) stations established by the U.S. Department of Energy. The results show that a half-hourly mean relative error of GHI under all-sky conditions is less than 7%. An important application of the scheme is in global climate models. The radiation sampling error due to infrequent radiation calculations is investigated using this scheme and ARM observations. It is found that these errors are very large, exceeding 800 W m-2 at many non-radiation time steps due to ignoring the effects of clouds. Use of the current scheme can reduce these errors to less than 50 W m-2.

These errors are clearly larger than the few W m-2 that are due to human climate forcings, and are large even relative to the natural variations of radiative fluxes.  This is yet another example of why the IPCC models are not robust tools for predicting changes in global, regional, and local climate statistics.
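To see where such sampling errors come from, here is a highly simplified sketch comparing surface irradiance updated every model time step with irradiance computed only at 3-hourly radiation calls and held fixed in between (clear-sky solar geometry only; the latitude, declination and transmittance are assumed, and the Sun et al. scheme itself is not reproduced). The largest errors reported in the paper arise from clouds changing between radiation calls, which this geometric sketch does not include.

```python
# Illustration of the radiation-sampling error from 3-hourly radiation calls,
# using only clear-sky solar geometry (not the Sun et al. scheme).
import numpy as np

S0 = 1361.0                # solar constant, W m^-2
lat = np.deg2rad(36.6)     # hypothetical ARM Southern Great Plains latitude
decl = np.deg2rad(10.0)    # assumed solar declination for the chosen day
transmittance = 0.75       # crude clear-sky atmospheric transmittance (assumed)

def ghi(hour):
    """Very simple clear-sky global horizontal irradiance from solar geometry."""
    hour_angle = np.deg2rad(15.0 * (hour - 12.0))
    cosz = np.sin(lat) * np.sin(decl) + np.cos(lat) * np.cos(decl) * np.cos(hour_angle)
    return S0 * transmittance * np.maximum(cosz, 0.0)

dt_model = 0.25                          # model time step of 15 minutes
hours = np.arange(0.0, 24.0, dt_model)

ghi_every_step = ghi(hours)
# Radiation computed only every 3 h and held constant until the next call
ghi_3hourly = ghi(np.floor(hours / 3.0) * 3.0)

error = ghi_3hourly - ghi_every_step
print(f"max instantaneous GHI error from 3-hourly calls: {np.abs(error).max():.0f} W m^-2")
```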



Filed under Climate Models, Research Papers

Guest Post “Modeled European Precipitation Change Smaller Than Observed” By Ronald van Haren, Geert Jan van Oldenborgh, Geert Lenderink and Wilco Hazeleger

 

Modeled European precipitation change smaller than observed

by Ronald van Haren, Geert Jan van Oldenborgh, Geert Lenderink, and Wilco Hazeleger of the Royal Dutch Meteorological Institute (KNMI)

Introduction

Now is an exciting time to do climate research. In many areas of the world climate change is emerging from the noise of natural variability. This opens the opportunity to compare the observed changes to the changes that are simulated by climate models. Climate models are mathematical representations of the climate system and should in principle give a physics-based response to increased concentrations of CO2 and other greenhouse gases, different types of aerosols, and solar and volcanic forcings. However, many processes are too small-scale or complex to be physically represented in the model and are parameterized: the average or expected effect of such processes is specified. Examples are clouds, thunderstorms, fog, and ocean mixing. The necessity to parameterize these processes adds model uncertainty to the simulations. Projections of the climate are also dependent on uncertainties in the forcings. Aerosol emissions and concentrations in the past are poorly known, and future socio-economic developments that affect emissions of greenhouse gases, aerosols and land use change are uncertain. Finally, we should always keep in mind that the climate system also shows natural variations on different timescales.

To deal with these uncertainties, use is often made of multiple climate models: a multi-model ensemble. The spread between the model results of such an ensemble is a combination of model uncertainty and natural climate variability. Note that even when natural variability is low, the model uncertainty is not equal to the spread of the ensemble. It can be either larger (if all models fail to represent an essential process) or smaller (if the ensemble contains models of lower quality). For some models multiple realizations are available, which allow an estimation of the natural variability from the spread within the model.
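A minimal sketch of this decomposition (synthetic trends, not the ENSEMBLES data; the numbers of models and realizations are placeholders): internal variability can be estimated from the spread of realizations within each model, and model uncertainty from the spread of the model means.

```python
# Simple variance-decomposition sketch: within-model spread (internal
# variability) versus between-model spread. Data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(3)
n_models, n_realizations = 8, 3
# Synthetic precipitation trends [% per century]: a model-specific mean plus
# realization-to-realization noise
trends = rng.normal(loc=8.0, scale=4.0, size=(n_models, 1)) \
         + rng.normal(scale=3.0, size=(n_models, n_realizations))

internal_var = trends.var(axis=1, ddof=1).mean()   # within-model spread
model_var = trends.mean(axis=1).var(ddof=1)        # between-model spread
# Note: the between-model estimate still contains internal_var / n_realizations

print(f"internal variability (std):  {np.sqrt(internal_var):.1f} %/century")
print(f"model-to-model spread (std): {np.sqrt(model_var):.1f} %/century")
```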

To come back to our goal: to have confidence in future climate projections, a correct representation of trends in the past is necessary (but not sufficient). In a recent article (van Haren et al., Clim. Dyn., 2012) we investigated whether modeled changes in precipitation over Europe are in agreement with the observed changes.

Results & Discussion

Clear precipitation trends have been observed in Europe over the past century. In winter (October – March), precipitation has increased in north-western Europe. In summer (April – September), there has been an increase along many coasts in the same area. Over the second half of the past century precipitation also decreased in southern Europe in winter (figures 1a and 1d). By comparing different analyses of precipitation, we checked that the differences between modeled and observed precipitation changes discussed in this article are much larger than the analysis uncertainty in the observations, except for some countries in eastern Europe that do not share much data. These analyses are partly based on the same station observations, but agreement between precipitation changes calculated over the second half of the past century and over the complete past century gives further confidence that the observed changes are physical and not artifacts of changes in the observational methods.

An investigation of precipitation trends in an ensemble of regional climate models (RCMs) of the ENSEMBLES project shows that these models fail to reproduce the observed trends (figures 1b and 1e). In many regions the observed trend is larger than in any of the models. Similar results are obtained for the entire last century in a comparison of the observed trends with trends in global climate models (GCMs) from the CMIP3 co-ordinated modeling experiment. The models should cover the full range of natural variability, so that the result that the observed trend is outside the ensemble implies that either the natural variability is underestimated, or the trend itself. We compared the natural variability over the last century between the models and observations. The GCMs were indeed found to underestimate the variability somewhat, but the RCMs actually overestimate natural variability on the interannual time scale. In Europe, there is very little evidence of low-frequency variability over land beyond the integrated effects of interannual variability: both the observations and the models are compatible with white noise once the trend has been subtracted.
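The two checks described here can be sketched with synthetic data (the trend values, ensemble size and noise levels below are placeholders, not the observations or the ENSEMBLES runs): compare the observed trend with the range of modeled trends, and test whether the detrended observations behave like white noise.

```python
# Sketch: is the observed trend outside the ensemble of modeled trends, and
# are the detrended residuals consistent with white noise? Synthetic data.
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(1950, 2001)
obs = 0.4 * (years - years[0]) / 10.0 + rng.normal(scale=1.5, size=years.size)
ensemble = 0.15 * (years - years[0]) / 10.0 \
           + rng.normal(scale=1.5, size=(17, years.size))   # 17 hypothetical runs

def trend_per_decade(series):
    return np.polyfit(years, series, 1)[0] * 10.0

obs_trend = trend_per_decade(obs)
model_trends = np.array([trend_per_decade(m) for m in ensemble])
print(f"observed trend {obs_trend:.2f}/decade; ensemble range "
      f"[{model_trends.min():.2f}, {model_trends.max():.2f}]")
print("observed trend outside ensemble:",
      obs_trend < model_trends.min() or obs_trend > model_trends.max())

# White-noise check: lag-1 autocorrelation of the detrended observations
residuals = obs - np.polyval(np.polyfit(years, obs, 1), years)
lag1 = np.corrcoef(residuals[:-1], residuals[1:])[0, 1]
print(f"lag-1 autocorrelation of detrended observations: {lag1:.2f}")
```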

We also have available from ENSEMBLES regional climate model experiments in which the large scale circulation and sea surface temperatures are prescribed from reanalysis data, which are close to the observations. These simulations reproduce the observed precipitation trends much better (figures 1c and 1f). The observed trends are largely compatible with the (smaller) range of uncertainties spanned by the ensemble, indicating that the prescribed factors in regional climate models, large scale circulation and sea surface temperatures, are responsible for large parts of the trend biases in the GCM-forced ensemble and the GCMs themselves.

Figure 1: Comparison of observed and modeled precipitation trends over 1961-2000 [%/century]. (a) Relative trends in observed summer precipitation. (b) Mean relative trends of summer precipitation of the GCM-forced RCM ensemble. (c) Mean relative trends of summer precipitation of the RCM ensemble forced by reanalysis data. (d-f) As in (a-c), but for winter precipitation.

Using a simple statistical model we next investigated the relative importance of these two prescribed factors. We find that the main factor in setting the trend in winter is the large-scale atmospheric circulation (as we found earlier for the winter temperature trends). The air pressure over the Mediterranean area has increased much more strongly in the observations than in the models. In the summer season, sea surface temperature (SST) changes are important in setting precipitation trends along the North Sea and Atlantic coasts. Climate models underestimate the SST trends along the Atlantic coast, the North Sea and other coastal areas (if represented at all). This leads to lower evaporation trends and reduced trends in coastal precipitation.
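A sketch of such a simple statistical model (synthetic series; the predictors and coefficients are placeholders, not the regression actually used in the paper): regress precipitation on a circulation index and a coastal SST series, then attribute the precipitation trend to the trends in the two predictors.

```python
# Sketch of attributing a precipitation trend to trends in large-scale
# circulation and SST via multiple linear regression. Synthetic data only.
import numpy as np

rng = np.random.default_rng(5)
n = 50                                                  # hypothetical years
circulation = rng.normal(size=n).cumsum() * 0.1         # e.g. a pressure index
sst = 0.02 * np.arange(n) + rng.normal(scale=0.1, size=n)   # warming coastal SST
precip = 5.0 * circulation + 30.0 * sst + rng.normal(scale=2.0, size=n)

X = np.column_stack([np.ones(n), circulation, sst])
coefs, *_ = np.linalg.lstsq(X, precip, rcond=None)

def trend(series):
    return np.polyfit(np.arange(n), series, 1)[0]

print(f"precip trend explained by circulation: {coefs[1] * trend(circulation):+.2f} per yr")
print(f"precip trend explained by SST:         {coefs[2] * trend(sst):+.2f} per yr")
print(f"total precip trend:                    {trend(precip):+.2f} per yr")
```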

Conclusions

The results of this study show that climate models are only partly capable of reproducing the details in observed precipitation changes: the local observed trends are often much larger than modeled in Europe. Because it is not clear (yet) whether the trend biases in SST and large scale circulation are due to greenhouse warming, their importance for future climate projections needs to be determined. Processes that give rise to the observed trends may very well be relatively unimportant for climate projection for the end of the century. Therefore, a straightforward extrapolation of observed trends to the future is not possible. A quantitative understanding of the causes of these trends is needed so that climate model based projections of future climate can be corrected for these trend biases.

References:

- Ronald van Haren, Geert Jan van Oldenborgh, Geert Lenderink, Matthew Collins and Wilco Hazeleger, SST and circulation trend biases cause an underestimation of European precipitation trends, Clim. Dyn. (2012), doi:10.1007/s00382-012-1401-5. preprint

- G. J. van Oldenborgh, S. Drijfhout, A. van Ulden, R. Haarsma, A. Sterl, C. Severijns, W. Hazeleger, and H. Dijkstra, Western Europe is warming much faster than expected, Clim. Past, 5, 1-12, 2009, doi:10.5194/cp-5-1-2009. full text

- van der Linden, P. and Mitchell, J. F. B. (Eds), ENSEMBLES: Climate Change and its Impacts: Summary of research and results from the ENSEMBLES project. Met Office Hadley Centre, 2009. book

- Meehl, Gerald A., Curt Covey, Karl E. Taylor, Thomas Delworth, Ronald J. Stouffer, Mojib Latif, Bryant McAvaney, and John F. B. Mitchell, 2007: The WCRP CMIP3 Multimodel Dataset: A New Era in Climate Change Research. Bull. Amer. Meteor. Soc., 88, 1383–1394, doi:10.1175/BAMS-88-9-1383. Full text


Filed under Climate Change Metrics, Climate Models, Guest Weblogs

Comments On The Paper “Evaluating Explanatory Models Of The Spatial Pattern of Surface Climate Trends Using Model Selection And Bayesian Averaging Methods” By McKitrick and Tole 2012

 

There is a new paper which further documents the lack of skill of multi-decadal climate model predictions. This paper has also been commented on by Judy Curry in the post

Three new papers on interpreting temperature trends

and by Anthony Watts at

New modeling analysis paper by Ross McKitrick.

As I summarized in my post

Kevin Trenberth Was Correct – “We Do Not Have Reliable Or Regional Predictions Of Climate”

these climate model predictions are failing to accurately simulate fundamental aspects of the climate system.

The paper is

McKitrick, Ross R. and Lise Tole (2012) “Evaluating Explanatory Models of the Spatial Pattern of Surface Climate Trends using Model Selection and Bayesian Averaging Methods” Climate Dynamics, 2012, DOI: 10.1007/s00382-012-1418-9

with the abstract [highlight added]

We evaluate three categories of variables for explaining the spatial pattern of warming and cooling trends over land: predictions of general circulation models (GCMs) in response to observed forcings; geographical factors like latitude and pressure; and socioeconomic influences on the land surface and data quality. Spatial autocorrelation (SAC) in the observed trend pattern is removed from the residuals by a well-specified explanatory model. Encompassing tests show that none of the three classes of variables account for the contributions of the other two, though 20 of 22 GCMs individually contribute either no significant explanatory power or yield a trend pattern negatively correlated with observations. Non-nested testing rejects the null hypothesis that socioeconomic variables have no explanatory power. We apply a Bayesian Model Averaging (BMA) method to search over all possible linear combinations of explanatory variables and generate posterior coefficient distributions robust to model selection. These results, confirmed by classical encompassing tests, indicate that the geographical variables plus three of the 22 GCMs and three socioeconomic variables provide all the explanatory power in the data set. We conclude that the most valid model of the spatial pattern of trends in land surface temperature records over 1979-2002 requires a combination of the processes represented in some GCMs and certain socioeconomic measures that capture data quality variations and changes to the land surface.

The text starts off with

General Circulation Models (GCMs) are the basis for modern studies of the effects of greenhouse gases and projections of future global warming. Reliable trend projections at the regional level are essential for policy guidance, yet formal statistical testing of the ability of GCMs to simulate the spatial pattern of climatic trends has been very limited. This paper applies classical regression and Bayesian Model Averaging methods to test this aspect of GCM performance against rival explanatory variables that do not contain any GCM-generated information and can therefore serve as a benchmark.
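The flavor of this Bayesian Model Averaging search can be conveyed with a small sketch (synthetic data and placeholder variable names; BIC-based approximate posterior weights are used here rather than the priors and spatial-autocorrelation treatment of McKitrick and Tole): enumerate all subsets of a few candidate explanatory variables, weight each linear model, and compute posterior inclusion probabilities.

```python
# BIC-weighted model averaging sketch over all subsets of candidate
# explanatory variables for a trend field. Synthetic placeholders throughout.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(6)
n = 200   # hypothetical number of grid cells
predictors = {
    "gcm_trend": rng.normal(size=n),
    "abs_latitude": rng.uniform(0, 90, n),
    "surface_pressure": rng.normal(size=n),
    "gdp_growth": rng.normal(size=n),
}
names = list(predictors)
y = 0.8 * predictors["gcm_trend"] + 0.4 * predictors["gdp_growth"] \
    + rng.normal(scale=1.0, size=n)   # synthetic observed trend pattern

def bic(subset):
    """BIC of an ordinary least-squares fit using the given predictor subset."""
    X = np.column_stack([np.ones(n)] + [predictors[v] for v in subset])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + X.shape[1] * np.log(n)

models = [s for r in range(len(names) + 1) for s in combinations(names, r)]
bics = np.array([bic(m) for m in models])
weights = np.exp(-0.5 * (bics - bics.min()))
weights /= weights.sum()               # approximate posterior model probabilities

for v in names:
    p_incl = sum(w for m, w in zip(models, weights) if v in m)
    print(f"posterior inclusion probability of {v}: {p_incl:.2f}")
```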

This paper supports the viewpoint of the papers

Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr.,  J.R. Christy, and R.T. McNider, 2009: An alternative explanation for differential temperature trends at the  surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841.

Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr.,  J.R. Christy, and R.T. McNider, 2010: Correction to: “An alternative explanation for differential temperature trends at the  surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841″, J. Geophys. Res.,  115, D1, doi:10.1029/2009JD013655.

where we showed that the multi-decadal surface and lower-tropospheric temperature trends are diverging from one another, with much greater differences over land areas than over ocean areas. The socioeconomic influences on the land surface and the data quality issues identified in the McKitrick and Tole 2012 paper are reasons such a divergence should be expected.

In a paper in press (on which I am a co-author) on the subject of surface temperature trends, we document in depth why there is a warm bias in the minimum temperature trends that are used to construct annual, global average multi-decadal temperature trends. I will be posting on this paper as soon as it appears on the journal website.  It provides even more support for the findings of McKitrick and Tole 2012 on the importance of socioeconomic influences on the land surface and data quality as factors in long-term temperature trends.


Filed under Climate Models

New Paper “Climate Physics, Feedbacks, And Reductionism (And When Does Reductionism go Too Far?)” By Dick Lindzen

I was alerted to an important, informative new paper by Dick Lindzen (h/t to Anthony Watts) on the issue of climate feedbacks and sensitivity. The paper is

R.S. Lindzen, 2012: Climate physics, feedbacks, and reductionism (and when does reductionism go too far?). Eur. Phys. J. Plus (2012) 127: 52, DOI 10.1140/epjp/i2012-12052-8.

The introduction reads (there is no abstract) [highlight added]

The public perception of the climate problem is somewhat schizophrenic. On the one hand, the problem is perceived to be so complex that it cannot be approached without massive computer programs. On the other hand, the physics is claimed to be so basic that the dire conclusions commonly presented are considered to be self-evident. Consistent with this situation, climate has become a field where there is a distinct separation of theory and modeling. Commonly, in traditional areas like fluid mechanics, theory provides useful constraints and tests when applied to modeling results. This has been notably absent in current work on climate. In principle, climate modeling should be closely associated with basic physical theory. In practice, it has come to consist in the almost blind use of obviously inadequate models.

In this paper, I would like to sketch some examples of potentially useful interaction with specific reference to the issue of climate sensitivity. It should be noted that the above situation is not strictly the fault of modelers. Theory, itself, has been increasingly idealized and esoteric with little attempt at real interaction. Also, theory in atmospheric and oceanic dynamics consists in conceptual frameworks that are generally not mathematically rigorous. Perhaps, we should refer to it as physical or conceptual reasoning instead. As we shall see, when reductionism goes beyond the constraints imposed by these frameworks, it is probably going too far though reductionism remains an essential tool of analysis.

The concluding remarks read

This paper considers approaches to estimating climate sensitivity involving the basic physics of the feedback processes rather than attempting to estimate climate sensitivity from time series of temperature. The latter have to assume a perfect knowledge of all sources of climate variability —something generally absent. The results of a variety of independent approaches all point to relatively low sensitivities. We also note that when climate change is due to regional and seasonal forcing, the concept of one dimensional climate sensitivity may, in fact, be inappropriate. Finally, it should be noted that I have not followed the common practice of considering the feedback factor to be the sum of separate feedback factors from water vapor, clouds, etc. The reason for this is that these feedback factors are not really independent. For example, in fig. 2, we refer to a characteristic emission level that is one optical depth into the atmosphere. For regions with upper level cirrus, this level is strongly related to the cloud optical depth (in the infrared), while for cloud-free regions the level is determined by water vapor. However, as shown by Rondanelli and Lindzen [30], and Horvath and Soden [31], the area covered by upper level cirrus is both highly variable and temperature dependent. The water vapor feedback is dependent not only on changes in water vapor but also on the area of cloud-free regions. It, therefore, cannot readily be disentangled from the cloud feedback.
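For reference, the “common practice” referred to in the last sentences is the standard linear feedback analysis, in which the no-feedback response is amplified by a total feedback factor that is often written as a sum of individual contributions (textbook notation, not Lindzen’s own equations):

```latex
% Standard linear feedback form (textbook notation)
\Delta T = \frac{\Delta T_0}{1 - f},
\qquad
f = f_{\text{water vapor}} + f_{\text{clouds}} + f_{\text{lapse rate}} + \cdots
```

Lindzen’s point above is that these individual feedback factors are not independent, so the additive decomposition can be misleading.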

One interesting statement in the paper is that, with respect to regional climate features,

“……current models do not simulate the PDO [Pacific Decadal Oscillation]. We are currently beginning such a study.”

The article is an important new contribution to the climate science discussion by a well-respected colleague.  I recommend reading it in its entirety.

My one substantive comment concerns the use of the terminology “climate sensitivity”.  I recognize that much of the literature focuses on the response of the global, annual-averaged surface temperature to an imposed global-averaged forcing (such as the radiative effect of added CO2) and calls this “climate sensitivity”.   However, this is but a very small part of true climate sensitivity. While I completely agree with Dick that there is a fundamental problem with “one-dimensional thinking”, as he discusses in section 4 of his paper, it is an even higher-dimensional (and more complex) issue than presented in the paper.

As I have often presented on my weblog, the climate system can be schematically illustrated below from NRC (2005).

The real world climate sensitivity is the influence of natural and human climate forcings on each of the components of the climate system.  Research is only just beginning to examine this issue, which needs to be completed using the bottom-up, contextual vulnerability approach that we discuss in our paper

Pielke Sr., R.A., R. Wilby, D. Niyogi, F. Hossain, K. Dairuku, J. Adegoke, G. Kallos, T. Seastedt, and K. Suding, 2011: Dealing  with complexity and extreme events using a bottom-up, resource-based  vulnerability perspective. AGU Monograph on Complexity and  Extreme Events in Geosciences, in press.



Filed under Climate Models, Research Papers