Category Archives: Research Papers

New Paper “An Empirical Study Of The Impact Of Human Activity On Long-Term Temperature Change In China: A Perspective From Energy Consumption” By Li And Zhao 2012

Figure from Li and Zhao (2012) – Spatial distribution of the high, mid, and low energy consumption regions in China. Data for Tibet and Taiwan are absent. Green spots mark the provincial capital cities of China.

Jos de Laat has alerted us to a new paper. It is

Li, Y. and X. Zhao (2012), An empirical study of the impact of human activity on long-term temperature change in China: A perspective from energy consumption, J. Geophys. Res., 117, D17117, doi:10.1029/2012JD018132.

The abstract reads [highlight added]

Human activity is an important contributor to local temperature change, especially in urban areas. Energy consumption is treated here as an index of the intensity of human induced local thermal forcing. The relationship between energy consumption and temperature change is analyzed in China by Observation Minus Reanalysis (OMR) method. Temperature trends for observation, reanalysis and OMR are estimated from meteorological records and 2 m-temperature from NCEP/NCAR Reanalysis 1 for the period 1979–2007. A spatial mapping scheme based on the spatial and temporal relationship between energy consumption and Gross Domestic Production (GDP) is developed to derive the spatial distribution of energy consumption of China in 2003. A positive relationship between energy consumption and OMR trends is found in high and mid energy consumption region. OMR trends decline with the decreasing intensity of human activity from 0.20°C/decade in high energy consumption region to 0.13°C/decade in mid energy consumption region. Forty-four stations in high energy consumption region that are exposed to the largest human impact are selected to investigate the impact of energy consumption spatial pattern on temperature change. Results show human impact on temperature trends is highly dependent on spatial pattern of energy consumption. OMR trends decline from energy consumption center to surrounding areas (0.26 to 0.04°C/decade) and get strengthened as the spatial extent of high energy consumption area expands (0.14 to 0.25°C/decade).
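The OMR calculation itself is conceptually simple. The premise of the method is that the NCEP/NCAR Reanalysis 1 does not ingest surface station temperatures, so subtracting the reanalysis 2 m temperature trend from the observed station trend leaves a residual trend attributable largely to local land-surface and anthropogenic influences. A minimal sketch of the idea, assuming hypothetical annual-mean series as NumPy arrays (an illustration only, not the authors' code):

    import numpy as np

    def trend_per_decade(series, years):
        # Least-squares linear trend of an annual series, in degC/decade
        return 10.0 * np.polyfit(years, series, 1)[0]

    years = np.arange(1979, 2008)  # the 1979-2007 analysis period
    # Placeholder series standing in for a station record and the co-located
    # NCEP/NCAR Reanalysis 1 2 m temperature (synthetic, not real data)
    t_obs = 14.0 + 0.030 * (years - 1979) + np.random.normal(0, 0.2, years.size)
    t_rean = 14.0 + 0.015 * (years - 1979) + np.random.normal(0, 0.2, years.size)

    omr = trend_per_decade(t_obs, years) - trend_per_decade(t_rean, years)
    print(f"OMR trend: {omr:+.2f} degC/decade")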

Excerpts from this paper include

Besides the impact of land use change on climate, the thermal impact induced by human activity within city plays significant role and should not be ignored. One of them is the anthropogenic heat released from energy consumption. Several studies have shown that anthropogenic heat is important to the development of UHI. Simulation results from a case study in Philadelphia suggested that anthropogenic heat contributes about 2–3°C to the nighttime heat island in winter [Fan and Sailor, 2005].

The conclusion contains the text

Our results show significant warming has occurred for most stations in China and the magnitude of warming is closely related to energy consumption, which represents the intensity of human activity. For high and mid energy consumption group, OMR trends decline with the decrease of energy consumption. OMR trends for high and mid energy consumption group is 0.20 and 0.13°C/decade respectively. Stronger warming is observed for station with high energy consumption, which usually locates in or near cities. Therefore, the strong warming is more likely a consequence of the local thermal forcing induced by human activity.

It seems that stations belong to high and mid energy consumption group in this study are affected by human impact to a discernible extent. Just as De Laat [2008] demonstrated, anthropogenic heat released from energy consumption may very well have contributed to the observed temperature change patterns. Thus, it may raise more attention to consider the influence of human activity on surface temperature records in the past and next decades.

This study provides even more motivation for Anthony Watts to expand his station siting quality project to the entire globe!


Filed under Climate Change Metrics, Research Papers

E-Mail To Linda Mearns On The 2012 BAMS Article On Dynamic Downscaling


With respect to my post

“The North American Regional Climate Change Assessment Program: Overview of Phase I Results” By Mearns Et Al 2012 – An Excellent Study But It Overstates Its Significance In The Multi-Decadal Prediction Of Climate

I have sent the lead author, Linda Mearns, the e-mail below [and copied to her other co-authors and to several other colleagues who work on downscaling]. I will post her reply, if I receive one and have her permission.

Subject: Your September 2012 BAMS

Hi Linda

I read with considerable interest your paper

Linda O. Mearns, Ray Arritt, Sébastien Biner, Melissa S. Bukovsky, Seth McGinnis, Stephan Sain, Daniel Caya, James Correia, Jr., Dave Flory, William Gutowski, Eugene S. Takle, Richard Jones, Ruby Leung, Wilfran Moufouma-Okia, Larry McDaniel, Ana M. B. Nunes, Yun Qian, John Roads, Lisa Sloan, and Mark Snyder, 2012: The North American Regional Climate Change Assessment Program: Overview of Phase I Results. Bull. Amer. Meteor. Soc., September issue, pp. 1337–1362.

It is a very much needed, effective analysis of the level of regional dynamic downscaling skill when forced by reanalyses. In

Castro, C.L., R.A. Pielke Sr., and G. Leoncini, 2005: Dynamical downscaling: Assessment of value retained and added using the Regional Atmospheric Modeling System (RAMS). J. Geophys. Res. – Atmospheres, 110, No. D5, D05108, doi:10.1029/2004JD004721.

and summarized in

Pielke Sr., R.A., and R.L. Wilby, 2012: Regional climate downscaling – what’s the point? Eos Forum, 93, No. 5, 52-53, doi:10.1029/2012EO050008.

Pielke, R. A., Sr., R. Wilby, D. Niyogi, F. Hossain, K. Dairuku, J. Adegoke, G. Kallos, T. Seastedt, and K. Suding (2012), Dealing with complexity and extreme events using a bottom-up, resource-based vulnerability perspective, in Extreme Events and Natural Hazards: The Complexity Perspective, Geophys. Monogr. Ser., vol. 196, edited by A. S. Sharma et al., pp. 345–359, AGU, Washington, D. C., doi:10.1029/2011GM001086. [copy available from https://pielkeclimatesci.files.wordpress.com/2011/05/r-365.pdf]

you are evaluating the skill and value-added of Type 2 downscaling.

However, you are misleading the impacts communities by indicating that your results apply to regional climate change (i.e. Type 4 downscaling).

I have posted on my weblog today

“The North American Regional Climate Change Assessment Program: Overview of Phase I Results” By Mearns Et Al 2012 – An Excellent Study But It Overstates Its Significance In The Multi-Decadal Prediction Of Climate

which is critical of how you present the implications of your findings.

As you wrote at the end of your paper,

“Our goal was to provide an overview of the relative performances of the six models both individually and as an ensemble with regard to temperature and precipitation. We have shown that all the models can simulate aspects of climate well, implying that they all can provide useful information about climate change. In particular, the results from phase I of NARCCAP will be used to establish uncertainty due to boundary conditions as well as final weighting of the models for the development of regional probabilities of climate change.”

You write

“We have shown that all the models can simulate aspects of climate well, implying that they all can provide useful information about climate change.”

What you have actually accomplished (and it is significant) is to document the upper bound of simulation skill, in terms of the value added relative to the reanalyses, using dynamic downscaling. However, you have not shown that this study provides skillful information in terms of changes in regional climate statistics on multi-decadal time scales.

I would like to post on my weblog a response from you (and your co-authors if they would like to) that responds to my comments. I will also post this e-mail query.

I have also copied this e-mail to other colleagues of ours who are working on dynamic downscaling.

With Best Regards

Roger


Filed under Climate Models, Research Papers

“The North American Regional Climate Change Assessment Program: Overview of Phase I Results” By Mearns Et Al 2012 – An Excellent Study But It Overstates Its Significance In The Multi-Decadal Prediction Of Climate

There is a new paper

Linda O. Mearns, Ray Arritt, Sébastien Biner, Melissa S. Bukovsky, Seth McGinnis, Stephan Sain, Daniel Caya, James Correia, Jr., Dave Flory, William Gutowski, Eugene S. Takle, Richard Jones, Ruby Leung, Wilfran Moufouma-Okia, Larry McDaniel, Ana M. B. Nunes, Yun Qian, John Roads, Lisa Sloan, and Mark Snyder, 2012: The North American Regional Climate Change Assessment Program: Overview of Phase I Results. Bull. Amer. Meteor. Soc., September issue, pp. 1337–1362.

that provides further documentation of the level of skill of dynamic downscaling. It is a very important new contribution which will be widely cited. The participants in the North American Regional Climate Change Assessment Program  are listed here.

However, it significantly overstates the significance of its findings in terms of its application to the multi-decadal prediction of regional climate.

The paper is even highlighted on the cover of the September 2012 issue of BAMS, with the cover caption in the Table of Contents reading

“Regional models are the foundation of research and services as planning for climate change requires more specific information than can be provided by global models. The North American Regional Climate Change Assessment Programs (Mearns et al., page 1337) evaluates uncertainties in using such models….”

Actually, as outlined below, the Mearns et al 2012 paper, while providing valuable new insight into one type of regional dynamic downscaling, is misrepresenting what these models can skillfully provide with respect to “climate change”.

The study uses observational data (from a reanalysis) to drive the regional models. Using the classification we have introduced in our papers (see below), this is a Type 2 dynamic downscaling study.

The Mearns et al 2012 paper only provides an upper bound on what is possible with respect to their stated goal to investigate the

“uncertainties in regional scale projections of future climate and produce high resolution climate change scenarios using multiple regional climate models (RCMs) nested within atmosphere ocean general circulation models (AOGCMs) forced with the A2 SRES scenario.”

The type of downscaling used in a study is a critically important point that needs to be emphasized when dynamic downscaling studies are presented.  Indeed, the new paper seeks to just replicate the current climate, NOT changes in climate statistics over the time period of the model runs.

It is even more challenging to skillfully predict CHANGES in regional climate which is what is required if the RCMs are to add any value for predicting climate in the coming decades.

The abstract and their short capsule read [highlight added]

The North American Regional Climate Change Assessment Program is an international effort designed to investigate the uncertainties in regional scale projections of future climate and produce high resolution climate change scenarios using multiple regional climate models (RCMs) nested within atmosphere ocean general circulation models (AOGCMs) forced with the A2 SRES scenario, with a common domain covering the conterminous US, northern Mexico, and most of Canada. The program also includes an evaluation component (Phase I) wherein the participating RCMs, with a grid spacing 50 km, are nested within 25 years of NCEP/DOE global reanalysis II.

We provide an overview of our evaluations of the Phase I domain-wide simulations focusing on monthly and seasonal temperature and precipitation, as well as more detailed investigation of four sub-regions. We determine the overall quality of the simulations, comparing the model performances with each other as well as with other regional model evaluations over North America.  The metrics we use do differentiate among the models, but, as found in previous studies, it is not possible to determine a ‘best’ model among them. The ensemble average of the six models does not perform best for all measures, as has been reported in a number of global climate model studies. The subset ensemble of the 2 models using spectral nudging is more often successful for domain wide root mean square error (RMSE), especially for temperature. This evaluation phase of NARCCAP will inform later program elements concerning differentially weighting the models for use in producing robust regional probabilities of future climate change.

Capsule

This article presents overview results and comparisons with observations for temperature and precipitation from the six regional climate models used in NARCCAP driven by NCEP/DOE Reanalysis II (R2) boundary conditions for 1980 through 2004.

Using the types of dynamic downscaling that we present in the articles

Castro, C.L., R.A. Pielke Sr., and G. Leoncini, 2005: Dynamical downscaling: Assessment of value retained and added using the Regional Atmospheric Modeling System (RAMS). J. Geophys. Res. – Atmospheres, 110, No. D5, D05108, doi:10.1029/2004JD004721.

Pielke Sr., R.A., and R.L. Wilby, 2012: Regional climate downscaling – what’s the point? Eos Forum,  93, No. 5, 52-53, doi:10.1029/2012EO050008.

the Mearns et al 2012 paper is a Type 2 downscaling. It provides an upper bound on the skill possible from Type 3 and Type 4 downscaling, since real world observations are used to constrain the model simulations (through the lateral boundary conditions, and from interior nudging if used).

These types of downscaling are defined in the Castro et al 2005 and Pielke and Wilby 2012 papers as

Type 1 downscaling is used for short-term, numerical weather prediction. In dynamic type 1 downscaling the regional model includes initial conditions from observations. In type 1 statistical downscaling the regression relationships are developed from observed data and the type 1 dynamic model predictions.

Type 2 dynamic downscaling refers to regional weather (or climate) simulations [e.g., Feser et al., 2011] in which the regional model’s initial atmospheric conditions are forgotten (i.e., the predictions do not depend on the specific initial conditions) but results still depend on the lateral boundary conditions from a global numerical weather prediction where initial observed atmospheric conditions are not yet forgotten or are from a global reanalysis. Type 2 statistical downscaling uses the regression relationships developed for type 1 statistical downscaling except that the input variables are from the type 2 weather (or climate) simulation. Downscaling from reanalysis products (type 2 downscaling) defines the maximum forecast skill that is achievable with type 3 and type 4 downscaling.

Type 3 dynamic downscaling takes lateral boundary conditions from a global model prediction forced by specified real world surface boundary conditions such as seasonal weather predictions based on observed sea surface temperatures, but the initial observed atmospheric conditions in the global model are forgotten [e.g., Castro et al., 2007]. Type 3 statistical downscaling uses the regression relationships developed for type 1 statistical downscaling except using the variables from the global model prediction forced by specified real-world surface boundary conditions.

Type 4 dynamic downscaling takes lateral boundary conditions from an Earth system model in which coupled interactions among the atmosphere, ocean, biosphere, and cryosphere are predicted [e.g., Solomon et al., 2007]. Other than terrain, all other components of the climate system are calculated by the model except for human forcings, including greenhouse gas emissions scenarios, which are prescribed. Type 4 dynamic downscaling is widely used to provide policy makers with impacts from climate decades into the future. Type 4 statistical downscaling uses transfer functions developed for the present climate, fed with large scale atmospheric information taken from Earth system models representing future climate conditions. It is assumed that statistical relationships between real-world surface observations and large-scale weather patterns will not change. Type 4 downscaling has practical value but with the very important caveat that it should be used for model sensitivity experiments and not as predictions [e.g., Pielke, 2002; Prudhomme et al., 2010].

Because real-world observational constraints diminish from type 1 to type 4 downscaling, uncertainty grows as more climate variables must be predicted by models, rather than obtained from observations.

The Mearns et al 2012 study concludes with the claim that

Our goal was to provide an overview of the relative performances of the six models both individually and as an ensemble with regard to temperature and precipitation. We have shown that all the models can simulate aspects of climate well, implying that they all can provide useful information about climate change. In particular, the results from phase I of NARCCAP will be used to establish uncertainty due to boundary conditions as well as final weighting of the models for the development of regional probabilities of climate change.

First, as documented in the article, the differences between the models and the observations are actually significant. To claim that

“all the models can simulate aspects of climate well”

is not a robust claim.  What is meant by “well”?  The tables and figures in the article document significant biases in the temperatures and precipitation even for the current climate type 2 downscaling simulations.

Even more significantly, their type 2 downscaling study does NOT imply

“that they all can provide useful information about climate change”!

The Mearns et al 2012 study did not look at the issue of their skill in predicting CHANGES in climate statistics. For that, one must examine type 4 downscaling skill, which they did not do.

In the context of the skill achieved with type 2 dynamic downscaling, this is an important, useful study.  However, to use the results of this type 2 downscaling study by Mearns et al 2012 to provide

“….final weighting of the models for the development of regional probabilities of climate change”

is a gross overstatement of what they accomplished. One cannot use type 2 downscaling to make claims about the accuracy of type 4 downscaling.

I am e-mailing the authors of the Mearns et al 2012 paper to request their response to my comments. Each of them is a well-respected colleague, and I will post their replies when they respond.



Filed under Climate Models, Climate Science Misconceptions, Research Papers

Our Chapter “Dealing with complexity and extreme events using a bottom-up, resource-based vulnerability perspective” By Pielke Sr Et Al 2012 Has Appeared

Our article

Pielke, R. A., Sr., R. Wilby, D. Niyogi, F. Hossain, K. Dairuku, J. Adegoke, G. Kallos, T. Seastedt, and K. Suding (2012), Dealing with complexity and extreme events using a bottom-up, resource-based vulnerability perspective, in Extreme Events and Natural Hazards: The Complexity Perspective, Geophys. Monogr. Ser., vol. 196, edited by A. S. Sharma et al., pp. 345–359, AGU, Washington, D. C., doi:10.1029/2011GM001086. [the article can also be obtained from here]

has appeared in

Sharma, A. S., A. Bunde, P. Dimri, and D. N. Baker (Eds.) (2012), Extreme Events and Natural Hazards: The Complexity Perspective, Geophys. Monogr. Ser., vol. 196, 371 pp., AGU, Washington, D. C., doi:10.1029/GM196.

The description of the book is given on the AGU site as [highlight added]

Extreme Events and Natural Hazards: The Complexity Perspective examines recent developments in complexity science that provide a new approach to understanding extreme events. This understanding is critical to the development of strategies for the prediction of natural hazards and mitigation of their adverse consequences. The volume is a comprehensive collection of current developments in the understanding of extreme events. The following critical areas are highlighted: understanding extreme events, natural hazard prediction and development of mitigation strategies, recent developments in complexity science, global change and how it relates to extreme events, and policy sciences and perspective. With its overarching theme, Extreme Events and Natural Hazards will be of interest and relevance to scientists interested in nonlinear geophysics, natural hazards, atmospheric science, hydrology, oceanography, tectonics, and space weather.

The abstract of our article reads

“We discuss the adoption of a bottom-up, resource-based vulnerability approach in evaluating the effect of climate and other environmental and societal threats to societally critical resources. This vulnerability concept requires the determination of the major threats to local and regional water, food, energy, human health, and ecosystem function resources from extreme events including climate, but also from other social and environmental issues. After these threats are identified for each resource, then the relative risks can be compared with other risks in order to adopt optimal preferred mitigation/adaptation strategies.

This is a more inclusive way of assessing risks, including those from climate variability and climate change, than using the outcome vulnerability approach adopted by the IPCC. A contextual vulnerability assessment, using the bottom-up, resource-based framework, is a more inclusive approach for policymakers to adopt effective mitigation and adaptation methodologies to deal with the complexity of the spectrum of social and environmental extreme events that will occur in the coming decades, as the range of threats is assessed, beyond just the focus on CO2 and a few other greenhouse gases as emphasized in the IPCC assessments.”

In the assessment of climate risks, the approach we recommend is an inversion of the IPCC process: the threats from climate, and from other environmental and social risks, are assessed first, rather than starting, inappropriately and inaccurately, by running global climate models to provide the envelope of future risks to key resources.


Filed under Research Papers, Vulnerability Paradigm

Review of Humlum Et Al 2012 “The Phase Relation Between Atmospheric Carbon Dioxide And Global Temperature” By Donald Rapp

Commentary by Donald Rapp on the paper “The phase relation between atmospheric carbon dioxide and global temperature” by Ole Humlum, Kjell Stordahl, and Jan-Erik Solheim, accepted for publication in Global and Planetary Change.

This paper analyzed data on annual variations in carbon dioxide concentration, various measures of earth temperature, and rate of emissions of carbon dioxide for the period 1980 to 2011. They compared the rate of change of CO2 concentration with measures of the rate of change of global temperature. While both CO2 and temperature generally increased during this 31-year period, the rates of change varied significantly during the period. They showed that changes in CO2 correlated somewhat with changes in sea surface temperature (SST) but the CO2 change lagged the SST change by about 11-12 months. They concluded that “A main control on atmospheric CO2 appears to be the ocean surface temperature”. They mentioned possible connection to the giant 1998 El Niño but did not elaborate on the connection of the entire sequence of data to El Niño indices.

In the present posting I desire to make a few comments on this paper by Humlum et al. Of course, as noted by the authors, the common belief is that rising CO2 produces an increase in the rate of warming, not vice versa. Their data suggests quite the opposite.

Consider the figure at the top of this post.

The uppermost curve shows the NINO3.4 index from 1980 to 2011. Peak El Niños are labeled with letters A to F.

The middle curve shows the change in CO2 concentration per year plotted on a monthly basis. The peaks in this curve are also subjectively labeled A to F. The average change in CO2 concentration per year can be interpreted either as a ramp or as a step-function. Arbitrarily adopting the step function, the average change in CO2 concentration per year fluctuated from year to year around roughly 1.5 ppm/yr prior to the 1998 El Niño, and around roughly 2.0 ppm/yr after it. These levels are depicted as horizontal dashed lines x and y.

The lowermost curve shows the annual change in anthropogenic CO2 emissions plotted on a per month basis.

A rough rule of thumb is that each Gt of carbon (3.67 Gt of CO2) produces the equivalent of about 0.5 ppm of CO2 in the atmosphere if none of it is absorbed. The figure below shows that annual variations in global emissions of carbon are typically about 2 × 10^4 metric tons per year, which, if unabsorbed, would produce annual changes in CO2 that are far too small to account for the observed variations in the average change in CO2 concentration per year.
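To make the rule of thumb explicit: the atmosphere holds roughly 2.13 GtC per ppm of CO2 (a standard approximation, not a number from the paper), so the conversion can be checked in a few lines of Python:

    # Approximate conversion: the atmosphere holds ~2.13 GtC per ppm of CO2
    # (equivalently ~7.8 Gt of CO2 per ppm, using the 44/12 = 3.67 mass ratio)
    GTC_PER_PPM = 2.13

    def ppm_if_unabsorbed(gtc):
        # ppm rise in atmospheric CO2 from gtc gigatonnes of carbon, absent uptake
        return gtc / GTC_PER_PPM

    print(ppm_if_unabsorbed(1.0))    # ~0.47 ppm per GtC, i.e. the ~0.5 ppm rule of thumb
    print(ppm_if_unabsorbed(2e-5))   # 2 x 10^4 metric tons of carbon: ~1e-5 ppm, negligible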

The point made by Humlum et al. is that the average change in CO2 concentration per year lags the change in ocean temperature by about 11-12 months. As Tisdale showed in his book, El Niños leave behind them a pool of warm surface waters. As a result, the average change in CO2 concentration per year tends to lag the NINO3.4 index by a bit more than a year. This correlation is far from perfect but it seems to have some validity, particularly for the major El Niño that started toward the end of 1997. The data suggest that the ability of the oceans to absorb CO2 emitted by human activity responds to the state of the NINO3.4 index with a delay of a bit over a year.
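That lag can be estimated with a simple lagged-correlation diagnostic: shift the CO2-change series against the NINO3.4 series and find the lag that maximizes the correlation. A sketch of the diagnostic, with placeholder arrays standing in for the real monthly series (my illustration, not the authors' exact method):

    import numpy as np

    def best_lag(x, y, max_lag=24):
        # Lag in months at which y correlates best with x (positive: y lags x)
        corrs = [np.corrcoef(x[:len(x) - k], y[k:])[0, 1] for k in range(max_lag + 1)]
        k = int(np.argmax(corrs))
        return k, corrs[k]

    # Placeholder monthly series for 1980-2011 (384 months); the real inputs
    # would be NINO3.4 and the 12-month change in CO2 concentration
    rng = np.random.default_rng(0)
    nino34 = rng.standard_normal(384)
    dco2 = np.roll(nino34, 12) + 0.5 * rng.standard_normal(384)  # built-in 12-month lag

    lag, r = best_lag(nino34, dco2)
    print(f"CO2 change lags NINO3.4 by ~{lag} months (r = {r:.2f})")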

Human activity is presently emitting roughly 8 Gt/yr of carbon, which, if unabsorbed, would be sufficient to increase the atmospheric concentration of CO2 by about 4 ppm per year. Over a period of years, (very) roughly half of that CO2 is absorbed by earth systems (oceans, biosphere, …) and the other half ends up in the atmosphere, raising the atmospheric concentration by about 2 ppm per year. However, on a year-by-year basis, the proportion of emitted CO2 that is absorbed by the earth systems varies considerably, mainly due to the presence of warm surface waters in the Pacific produced quasi-periodically by El Niños. According to the graphical data below, the annual increase in CO2 concentration can be as high as 3 ppm (following the 1998 El Niño) or as low as 1 ppm (between peaks B and C). During the most recent period, after the 1998 El Niño, the annual increase in CO2 concentration has varied roughly as 2 ± 0.5 ppm, or ±25%. These results suggest that while roughly half of emissions end up in the atmosphere over an extended period, annual variations in the distribution of emitted CO2 between the atmosphere and the earth system are significant, and strongly dependent on the prevalence of El Niños.
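That bookkeeping amounts to an airborne-fraction calculation: the observed annual ppm increase divided by the ppm-equivalent of that year's emissions. Using the round numbers quoted above (a sketch, not a reproduction of the paper's analysis):

    GTC_PER_PPM = 2.13   # approx. GtC per ppm of atmospheric CO2

    def airborne_fraction(d_ppm_obs, emissions_gtc):
        # Fraction of a year's emitted carbon that remains in the atmosphere
        return d_ppm_obs / (emissions_gtc / GTC_PER_PPM)

    # 8 GtC/yr would raise CO2 by ~3.8 ppm/yr if none were absorbed, so the
    # observed 1-3 ppm/yr rises imply airborne fractions of roughly 0.3-0.8
    for d_ppm in (1.0, 2.0, 3.0):
        print(d_ppm, "ppm/yr ->", round(airborne_fraction(d_ppm, 8.0), 2))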

Tisdale showed that from 1976 to about 2005, there was a pronounced prevalence of El Niños over La Niñas. He argued that this could account for all of the warming of the earth during that period without invoking the greenhouse effect. However, it seems likely that during this period, a greater proportion of emitted CO2 ended up in the atmosphere due to the prevalence of El Niños, and this might have amplified the natural El Niño warming effect via greenhouse gas forcing. McLean et al. (2009) estimated that 70% was due to El Niños, while Foster et al. (2010) fell back on climate models that attribute only 15-30% of temperature variation in the 20th century to variability of the El Niño index. As is usual in climate matters, one has only to glance at the authors to know in advance what spin the results are likely to show. The Foster paper included the crème de la crème of climategate characters, while the McLean paper was written by skeptics.

The proportion of global heating from 1976 to 2005 due to prevalence of El Niños over La Niñas vs. greenhouse gas forcing remains uncertain. Nevertheless, the state of the Pacific Ocean is clearly important, not only for its impact on the atmospheric temperature, but also because it regulates the annual rise in CO2 concentration.

Tisdale, Bob (2012) “Who turned on the heat?”, http://bobtisdale.wordpress.com/

McLean, J. D., C. R. de Freitas, and R. M. Carter (2009) “Influence of the Southern Oscillation on tropospheric temperature” Journal of Geophysical Research, 114, D14104.

Foster, G., J. D. Annan, P. D. Jones, M. E. Mann, J. Renwick, J. Salinger, G. A. Schmidt and K. E. Trenberth (2010) “Comment on “Influence of the Southern Oscillation on tropospheric temperature” by J. D. McLean, C. R. de Freitas, and R. M. Carter”, Journal of Geophysical Research, 115, D09110.


Filed under Guest Weblogs, Research Papers

New Paper “Observations Of Increased Tropical Rainfall Preceded By Air Passage Over Forests” By Spracklen Et al 2012

Chris Taylor has alerted us to another very important paper. It is

D. V. Spracklen, S. R. Arnold, and C. M. Taylor, 2012: Observations of increased tropical rainfall preceded by air passage over forests. Nature, 489, 282–285 (13 September 2012), doi:10.1038/nature11390.

 The abstract reads [highlight added]

Vegetation affects precipitation patterns by mediating moisture, energy and trace-gas fluxes between the surface and atmosphere. When forests are replaced by pasture or crops, evapotranspiration of moisture from soil and vegetation is often diminished, leading to reduced atmospheric humidity and potentially suppressing precipitation. Climate models predict that large-scale tropical deforestation causes reduced regional precipitation, although the magnitude of the effect is model and resolution dependent. In contrast, observational studies have linked deforestation to increased precipitation locally but have been unable to explore the impact of large-scale deforestation. Here we use satellite remote-sensing data of tropical precipitation and vegetation, combined with simulated atmospheric transport patterns, to assess the pan-tropical effect of forests on tropical rainfall. We find that for more than 60 per cent of the tropical land surface (latitudes 30 degrees south to 30 degrees north), air that has passed over extensive vegetation in the preceding few days produces at least twice as much rain as air that has passed over little vegetation. We demonstrate that this empirical correlation is consistent with evapotranspiration maintaining atmospheric moisture in air that passes over extensive vegetation. We combine these empirical relationships with current trends of Amazonian deforestation to estimate reductions of 12 and 21 per cent in wet-season and dry-season precipitation respectively across the Amazon basin by 2050, due to less-efficient moisture recycling. Our observation-based results complement similar estimates from climate models, in which the physical mechanisms and feedbacks at work could be explored in more detail.
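The core diagnostic behind these numbers is a compositing exercise: for each rainfall observation, accumulate the vegetation (e.g., leaf area index) encountered along the air mass's back trajectory over the preceding days, then compare rainfall between high-exposure and low-exposure air. A schematic of that step with synthetic placeholder data (the study itself used satellite rainfall and vegetation fields with simulated transport patterns):

    import numpy as np

    rng = np.random.default_rng(1)
    # Placeholders: per-event rainfall (mm) and the cumulative leaf area index
    # each air parcel encountered over the preceding ~10 days (synthetic values)
    lai_exposure = rng.uniform(0, 50, 5000)
    rainfall = rng.gamma(2.0, 1.0, 5000) * (1 + 0.02 * lai_exposure)

    lo, hi = np.percentile(lai_exposure, [25, 75])
    rain_low = rainfall[lai_exposure <= lo].mean()
    rain_high = rainfall[lai_exposure >= hi].mean()
    print(f"mean rain, high vs low vegetation exposure: {rain_high / rain_low:.1f}x")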

In their conclusions, they write

Our analysis explores the role of regional-scale vegetation patterns on precipitation. Through evapotranspiration, forests maintain atmospheric moisture that can return to land as rainfall downwind. These processes operate on timescales of days over distances of 100–1,000km such that large-scale land-use change may alter precipitation hundreds to thousands of kilometres from the region of vegetation change. Land-use patterns and small-scale deforestation may also alter precipitation locally, through changes in the thermodynamic profile and the development of surface-induced mesoscale circulations. Natural and pyrogenic emissions from vegetation can also have a role in rainfall initiation over tropical forest regions. The impact of cloud microphysical processes on precipitation is highly uncertain, and biogenic emissions could contribute to our observed relationship between rainfall and exposed vegetation. However, our water-balance calculations imply that cumulative increases in evapotranspiration over upstream forested regions more than account for the increase in downstream rainfall.

What this paper means is that the atmosphere is enriched with water vapor and convective potential energy as it is transported across a region of transpiring vegetation. The removal of this vegetation results in an atmosphere that is less conducive to precipitation. While the paper focuses on the effects of tropical deforestation, the converse would be expected in locations where transpiration increases, such as under irrigation. Several of our papers that have examined this issue include

Hossain, F., I. Jeyachandran, and R.A. Pielke Sr., 2010: Dam safety effects due to human alteration of extreme precipitation. Water Resources Research, 46, W03301, doi:10.1029/2009WR007704.

Degu, A. M., F. Hossain, D. Niyogi, R. Pielke Sr., J. M. Shepherd, N. Voisin, and T. Chronis, 2011: The influence of large dams on surrounding climate and precipitation patterns. Geophys. Res. Lett., 38, L04405, doi:10.1029/2010GL046482.

Woldemichael, A., F. Hossain, R.A. Pielke Sr., and A. Beltrán-Przekurat, 2012: Understanding the impact of dam-triggered land use/land cover change on the modification of extreme precipitation, Water Resour. Res., doi:10.1029/2011WR011684.

Pielke, R.A. and X. Zeng, 1989: Influence on severe storm development of irrigated land. Natl. Wea. Dig., 14, 16-17.

Pielke Sr., R.A., 2001: Influence of the spatial distribution of vegetation and soils on the prediction of cumulus convective rainfall. Rev. Geophys., 39, 151-177.

In Figure 9 of Pielke (2001), for example, we show the major impact on the potential for deep cumulonimbus clouds (and thus rainfall) of an increase in the surface air dew point temperature of just one degree Celsius.
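The leverage of dew point on convection is easy to demonstrate with a back-of-envelope calculation: at warm temperatures, a 1 °C rise in surface dew point raises the boundary layer's equivalent potential temperature (a standard proxy for convective instability) by roughly 3 °C. A sketch using the Magnus saturation-vapor-pressure formula and a simplified theta-e approximation (illustrative only, with an assumed 30 °C surface temperature; not the calculation behind Figure 9):

    import numpy as np

    def mixing_ratio(dewpoint_c, p_hpa=1000.0):
        # Water vapor mixing ratio (kg/kg) from dew point via the Magnus formula
        e = 6.112 * np.exp(17.67 * dewpoint_c / (dewpoint_c + 243.5))  # vapor pressure, hPa
        return 0.622 * e / (p_hpa - e)

    def theta_e_approx(t_c, dewpoint_c, p_hpa=1000.0):
        # Simplified equivalent potential temperature (K): theta + (Lv/cp) * r
        return (t_c + 273.15) + (2.5e6 / 1004.0) * mixing_ratio(dewpoint_c, p_hpa)

    t_air = 30.0  # assumed surface air temperature, deg C
    d_theta_e = theta_e_approx(t_air, 24.0) - theta_e_approx(t_air, 23.0)
    print(f"+1 C dew point -> theta-e increases by ~{d_theta_e:.1f} K")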

The new Spracklen et al 2012 paper is yet another example of why land use/land cover change is a first-order human climate forcing.



Filed under Climate Change Forcings & Feedbacks, Research Papers

“The Hindcast Skill Of The CMIP Ensembles For The Surface Air Temperature Trend” By Sakaguchi Et Al 2012

Figure caption: These maps show the observed (left) and model-predicted (right) air temperature trend from 1970 to 1999. The climate model developed by the National Center for Atmospheric Research (NCAR) is used here as an example. More than 50 such simulations were analyzed in the published study. (Illustration: Koichi Sakaguchi)

I was alerted to a new paper that examines the predictive skill of the multi-decadal global climate predictions; h/t to Anthony Watts in his post

Climate Models shown to be inaccurate less than 30 years out

Actually, the article also informs us about their value for even longer time periods. The article is

Sakaguchi, K., X. Zeng, and M. A. Brunke (2012), The hindcast skill of the CMIP ensembles for the surface air temperature trend, J. Geophys. Res., 117, D16113, doi:10.1029/2012JD017765.

[as a side comment, Xubin Zeng was one of my Ph.D. students (and an outstanding one!) with whom I have published, and I have also published with Mike Brunke].

The abstract reads [highlight added]

Linear trends of the surface air temperature (SAT) simulated by selected models from the Coupled Model Intercomparison Project (CMIP3 and CMIP5) historical experiments are evaluated using observations to document (1) the expected range and characteristics of the errors in hindcasting the ‘change’ in SAT at different spatiotemporal scales, (2) if there are ‘threshold’ spatiotemporal scales across which the models show substantially improved performance, and (3) how they differ between CMIP3 and CMIP5. Root Mean Square Error, linear correlation, and Brier score show better agreement with the observations as spatiotemporal scale increases but the skill for the regional (5° × 5° – 20° × 20° grid) and decadal (10 – ∼30-year trends) scales is rather limited. Rapid improvements are seen across 30° × 30° grid to zonal average and around 30 years, although they depend on the performance statistics. Rather abrupt change in the performance from 30° × 30° grid to zonal average implies that averaging out longitudinal features, such as land-ocean contrast, might significantly improve the reliability of the simulated SAT trend. The mean bias and ensemble spread relative to the observed variability, which are crucial to the reliability of the ensemble distribution, are not necessarily improved with increasing scales and may impact probabilistic predictions more at longer temporal scales. No significant differences are found in the performance of CMIP3 and CMIP5 at the large spatiotemporal scales, but at smaller scales the CMIP5 ensemble often shows better correlation and Brier score, indicating improvements in the CMIP5 on the temporal dynamics of SAT at regional and decadal scales.
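The scale dependence described in the abstract can be probed with a very simple diagnostic: compute the trend fields on the native grid, average both observations and model to progressively coarser boxes, and recompute the error statistics at each scale. A toy sketch with synthetic fields and simple block averaging (my illustration; not the paper's code or data):

    import numpy as np

    def block_average(field, k):
        # Average a 2-D field over non-overlapping k x k blocks
        ny, nx = field.shape
        f = field[:ny - ny % k, :nx - nx % k]
        return f.reshape(ny // k, k, -1, k).mean(axis=(1, 3))

    rng = np.random.default_rng(2)
    lat = np.linspace(-np.pi / 2, np.pi / 2, 36)
    lon = np.linspace(0.0, 2 * np.pi, 72)
    signal = 0.3 * np.add.outer(np.sin(3 * lat), np.cos(2 * lon))  # smooth large-scale "trend"
    obs_trend = signal
    mod_trend = signal + 0.3 * rng.standard_normal((36, 72))       # model = signal + grid-scale error

    for k in (1, 2, 4, 6, 12):   # progressively coarser spatial aggregation
        o, m = block_average(obs_trend, k), block_average(mod_trend, k)
        rmse = np.sqrt(np.mean((m - o) ** 2))
        corr = np.corrcoef(o.ravel(), m.ravel())[0, 1]
        print(f"aggregation x{k:2d}: RMSE = {rmse:.2f}, r = {corr:.2f}")

In this toy setup the grid-scale model error averages out while the large-scale signal survives, so RMSE falls and correlation rises with aggregation, qualitatively the behavior the paper reports across its spatiotemporal scales.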

The conclusions contain the informative caution

The spatiotemporal scales with more reliable model skills as identified in this study are consistent with previous studies [Randall et al., 2007] and suggest caution in directly using the outputs of long-term simulations for regional and decadal studies.

This is reminiscent of the statement by Kevin Trenberth, who wrote a piece for Nature entitled

Predictions of climate

that

“…..we do not have reliable or regional predictions of climate.”

Clearly, the CMIP5 model results do not have the skill needed by the impacts communities, whether taken directly from the global model or dynamically or statistically downscaled, on any multi-decadal time scale, as we summarized in our article

Pielke Sr., R.A., and R.L. Wilby, 2012: Regional climate downscaling – what’s the point? Eos Forum,  93, No. 5, 52-53, doi:10.1029/2012EO050008.

If they do not have sufficient skill for surface temperature over time periods of less than 30 years, and longer time periods are made up of 30-year periods, they certainly will not have added skill for any multi-decadal time period. Moreover, since other climate metrics (e.g. precipitation) are even more difficult to predict, the lack of value of the CMIP5 model runs for the impacts communities is actually well (although subtly) documented in the Sakaguchi et al 2012 paper.

There is a major oversight, however, in the Sakaguchi et al 2012 paper. It neglected to include available peer-reviewed papers that document a serious lack of skill in the CMIP5 model runs. I have summarized these in my posts

Comments On The Nature Article “Afternoon Rain More Likely Over Drier Soils” By Taylor Et Al 2012 – More Rocking Of The IPCC Boat

More CMIP5 Regional Model Shortcomings

CMIP5 Climate Model Runs – A Scientifically Flawed Approach

By neglecting the peer-reviewed papers I listed in those posts [most of which were available to the authors], the Sakaguchi et al 2012 paper, even with its critical assessment of CMIP3 and CMIP5 model predictive skill, still has not completely assessed the actual capabilities of the CMIP3 and CMIP5 models.


Filed under Climate Models, Research Papers