Monthly Archives: August 2011

Study Of Value-Added By Type 2 Dynamic Downscaling – “Regional Climate Models Add Value To Global Model Data” By Feser Et Al 2011

UPDATE #2 September 6 2011 I have corrected the journal where the Feser et al paper is in press. It will appear in the Bulletin of the American Meteorological Society (BAMS).

UPDATE September 2 2011

I sent the e-mail below to the authors of the paper, and the reply follows


From: Roger Pielke Date: Sat, Aug 27, 2011 at 5:47 PM
Subject: Your in press J of Climate paper Feser et al 2011
To: Hans von Storch , Burkhardt Rockel, Frauke.Feser@hzxx

Hi Dr. Feser, Hans and Burkhardt

I plan to post the information below later this coming week. Your
paper is an important contribution but I am concerned that readers
will interpret that your conclusions apply to the value of dynamic
downscaling from multi-decadal global climate predictions. I would be
glad if you could respond and I will post as a Reply on my weblog.

With Best Regards



From: Frauke Feser Frauke.Feser@xxx  to pielkesr@xxx cc Hans von Storch , Burkhardt Rockel

date: Fri, Sep 2, 2011 at 2:21 AM
subject :  Your in press J of Climate paper Feser et al 2011

Dear Prof. Pielke,

thanks a lot for your comments on our article. So far it has to our knowledge not been shown that the RCM added value results also apply for future scenario simulations of Type 4. We will take this as a motivation to look into this more systematically soon, until then our results are valid only for Type 2 RCM hindcast simulations.

With best regards,

Frauke Feser


There is a new paper which adds to our understanding of the value added by Type 2 dynamic downscaling. Type 2 dynamic downscaling is defined in Castro et al (2005) and has recently been defined further in a new submitted paper with Rob Wilby [which I will post on in the future] as

Type 2 dynamic downscaling refers to regional weather (or climate) simulations. In this case, the regional model’s initial atmospheric conditions are forgotten, but the results still depend on the lateral boundary conditions from a numerical global model weather prediction (in which the initial observed atmospheric conditions have not yet been forgotten), or a global reanalysis, along with the land surface boundary conditions. Reanalyses such as ERA-40, NCEP, and JRA-55 assimilate spatially discontinuous weather observations in order to estimate temperature, humidity, wind speeds and so forth at grid points covering the entire globe. Downscaling from reanalysis products defines the maximum forecast skill that is achievable with Types 3 and 4 downscaling.

Type 2 dynamic downscaling is quite distinct from Type 4 downscaling [Type 1 is with respect to numerical weather prediction, and Type 3 is with respect to seasonal weather prediction, for example, where certain parts of the climate system, such as sea surface temperatures, are prescribed]. Type 4 is defined below.

Type 4 dynamic downscaling takes lateral boundary conditions from an earth system model in which coupled interactions between the atmosphere, ocean, biosphere and cryosphere are predicted. Other than terrain, all components of the climate system are predicted except for human forcings, including greenhouse gas emission scenarios, which are prescribed. Type 4 downscaling is widely used to provide policymakers with estimates of climate impacts decades into the future.

Unfortunately, the new paper does not highlight that its findings do not apply to Type 4 dynamic downscaling (which is what the IPCC multi-decadal climate predictions use), except as an upper bound on the value-added that is possible.

Type 2 dynamic downscaling from reanalyses, in particular, has the advantage that the lateral boundary conditions and interior nudging are based on sampling of the continuous real-world atmospheric conditions. In stark contrast, Type 4 dynamic downscaling knows nothing about spatial scales smaller than those resolved by the global model.

The new paper is

Frauke Feser, Burkhardt Rockel, Hans von Storch, Jörg Winterfeldt, and Matthias Zahn, 2011: Regional Climate Models add Value to Global Model Data – A Review and selected Examples. Bull. Amer. Meteor. Soc., in press.

The abstract reads [highlight added]

An important challenge in current climate modeling is to realistically describe small scale weather statistics such as topographic precipitation, coastal wind patterns or regional phenomena like polar lows. Global climate models simulate atmospheric processes with increasingly higher resolutions, but still regional climate models have a lot of advantages. They consume less computation time due to their limited simulation area and thereby allow for higher resolution both in time and space as well as for longer integration times. Regional climate models can be used for dynamical downscaling purposes, as their output data can be processed to produce higher resolved atmospheric fields, allowing the representation of small-scale processes and a more detailed description of physiographic details (such as mountain ranges, coastal zones, and details of soil properties).

But does higher resolution add value when compared to global model results? Most studies implicitly assume that dynamical downscaling leads to output fields superior to the driving global data, but little work has been carried out to substantiate these expectations. Here, we review a series of articles that evaluate the benefit of dynamical downscaling by explicitly comparing results of global and regional climate model data to observations. These studies show that the regional climate model generally performs better for the medium spatial scales, but not always for the larger spatial scales.

We conclude that regional models can add value, but only for certain variables and locations; particularly those influenced by regional specifics such as coasts or mesoscale dynamics such as polar lows. Therefore, the decision of whether a regional climate model simulation is required depends crucially on the scientific question being addressed.

The following text documents that this study is about Type 2 downscaling, not Type 4 downscaling:

“In this article, efforts to determine such added value in case studies as well as in multi-decadal simulations with different RCMs are summarized and evaluated. The simulations presented here comprise mostly ‘reconstructions’, e. g. simulations of the weather dynamics since 1948 until today of Western Europe or the Northwestern Pacific. Most of these simulations use a grid distance of about 50 km, have been constrained with spectral nudging (von Storch et al., 2000) and use global NCEP/NCAR reanalysis (Kalnay et al. 1996; hereafter referred to as the NCEP reanalysis) as forcing data.”

Among their findings is

The regional model does not add value over the open ocean, due to the lack of orographic details and infrequent meso-scale phenomena here. It may even be worse than the reanalyses, which is reflected by the negative BSSs [Brier Skill Score].
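To make the negative Brier Skill Scores concrete, here is a minimal sketch of the conventional BSS, computed as 1 minus the ratio of the model's Brier score to that of a reference forecast (here, the driving reanalysis). The specific skill-score variant and all numbers below are illustrative assumptions, not taken from the Feser et al paper; the point is only that a BSS below zero means the regional model scores worse than its reference.

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between probabilistic forecasts (0..1)
    and binary observed outcomes (0 or 1)."""
    n = len(forecasts)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / n

def brier_skill_score(model_fc, reference_fc, outcomes):
    """BSS > 0: model beats the reference; BSS < 0: model is worse."""
    bs_model = brier_score(model_fc, outcomes)
    bs_ref = brier_score(reference_fc, outcomes)
    return 1.0 - bs_model / bs_ref

# Hypothetical open-ocean case: the reanalysis forecasts sharply,
# while the regional model hedges toward climatology.
outcomes = [1, 0, 1, 1, 0]
reanalysis = [0.9, 0.2, 0.8, 0.7, 0.1]
rcm = [0.6, 0.5, 0.6, 0.5, 0.4]

print(brier_skill_score(rcm, reanalysis, outcomes))  # negative: no added value
```

A perfect forecast gives a Brier score of zero, so the skill score is bounded above by 1 but unbounded below, which is why a poorly performing regional model can produce strongly negative values.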

Their statement that

“We conclude that RCMs do indeed add value to global models for a number of applications, variables, and areas. If examined only at the regional scale, added value emerges very distinctly for many model variables, justifying the additional computational effort of RCM simulations.”

is correct but ONLY for Type 2 dynamic downscaling.


Filed under Climate Models, Research Papers

New Paper “Observed Changes In Surface Atmospheric Energy Over Land” By Peterson Et Al 2011

Several years ago, we proposed the use of surface moist enthalpy as the preferred metric for diagnosing the heat content of the near-surface air:

Pielke Sr., R.A., C. Davey, and J. Morgan, 2004: Assessing “global warming” with surface heat content. Eos, 85, No. 21, 210-211.

As we write in that paper [highlight added]

Surface air temperature alone does not capture the real changes in surface air heat content of the Earth system. Even using the limited definition of the term “global warming,” the moisture content of the surface air must be included. Future assessments should include trends and variability of surface heat content in addition to temperature.

In our follow-up paper

Davey, C.A., R.A. Pielke Sr., and K.P. Gallo, 2006: Differences between near-surface equivalent temperature and temperature trends for the eastern United States – Equivalent temperature as an alternative measure of heat content. Global and Planetary Change, 54, 19–32.

we wrote

There is currently much attention being given to the observed increase in near-surface air temperatures during the last century. The proper investigation of heating trends, however, requires that we include surface heat content to monitor this aspect of the climate system. Changes in heat content of the Earth’s climate are not fully described by temperature alone. Moist enthalpy or, alternatively, equivalent temperature, is more sensitive to surface vegetation properties than is air temperature and therefore more accurately depicts surface heating trends. The microclimates evident at many surface observation sites highlight the influence of land surface characteristics on local surface heating trends. Temperature and equivalent temperature trend differences from 1982–1997 are examined for surface sites in the Eastern U.S. Overall trend differences at the surface indicate equivalent temperature trends are relatively warmer than temperature trends in the Eastern U.S. Seasonally, equivalent temperature trends are relatively warmer than temperature trends in winter and are relatively cooler in the fall. These patterns, however, vary widely from site to site, so local microclimate is very important.

A new paper has appeared which examines this issue on a global scale [which I read about on Climate Abyss in a comment from Peter Thorne in response to my comment to John Nielsen-Gammon on his latest post on the Texas drought and heat]

Peterson, T. C., K. M. Willett, and P. W. Thorne (2011), Observed changes in surface atmospheric energy over land, Geophys. Res. Lett., 38, L16707, doi:10.1029/2011GL048442

with the abstract

“The temperature of the surface atmosphere over land has been rising during recent decades. But surface temperature, or, more accurately, enthalpy which can be calculated from temperature, is only one component of the energy content of the surface atmosphere. The other parts include kinetic energy and latent heat. It has been advocated in certain quarters that ignoring additional terms somehow calls into question global surface temperature analyses. Examination of all three of these components of atmospheric energetics reveals a significant increase in global surface atmospheric energy since the 1970s. Kinetic energy has decreased but by over two orders of magnitude less than the increases in both enthalpy and latent heat which provide approximately equal contributions to the global increases in heat content. Regionally, the enthalpy or the latent heat component can dominate the change in heat content. Although generally changes in latent heat and enthalpy act in concert, in some regions they can have the opposite signs.”

The Peterson et al article is an effective examination using the current data analyses from the HadCRUH land dataset and the Global Historical Climatology Network-Monthly (GHCN-M) Version 3. It should, of course, be reevaluated when all of the uncertainties and biases we have identified, for example, in Pielke et al 2009, Klotzbach et al 2009 and Fall et al 2011 are remedied.

They show that global warming, as diagnosed from surface measurements over land, is actually larger than that diagnosed from the dry bulb temperature trends alone (as we also found in the Davey et al 2006 study).  The analysis of moist enthalpy should also be extended into the lower troposphere in order to see if the divergence between the surface and lower tropospheric temperature trends that we identified in Klotzbach et al 2009 can be explained when the trends in water vapor are included.

I disagree, however, with the following text in Peterson et al

The heat content of the upper ocean has become a heavily utilized metric of global climate change [e.g., Palmer et al., 2010]. Some authors argue that the heat content of the surface atmosphere should also be a key metric. Indeed, the “concept of ‘global warming’ requires assessments of units of heat (that is, Joules)” according to Pielke et al. [2004]. Davey et al. [2006] argue that global surface temperature is not a “proper” measure of the heat content of the Earth’s climate system; which is true as it is just a measure of temperature. But Pielke et al. [2007] go even further to claim that “ignoring concurrent trends in surface air absolute humidity therefore introduces a bias in the analysis of surface air temperature trends” and that we “need to include absolute humidity in order to describe observed temperature trends.”

Temperature and humidity are distinctly different physical parameters as implied by their units of K and g kg−1, and they are measured by different instrumentation. Therefore, we do not understand how ignoring humidity could bias an analysis of temperature trends or why an assessment of humidity would be required in order to describe trends in temperature. We do, however, have concerns about the potential for the general public to misinterpret heat content analysis.  Figure 1 shows that heat content tends to be decreasing in Australia despite increases in surface temperature. Presenting heat content as the primary metric for global warming could lead lay readers to erroneously perceive Australia as cooling – after all, its heat (content) is decreasing. Our concern is not just nomenclature. Heat content by any other name if used as a global warming metric has the potential to imply cooling even in places with increasing temperature simply because the location is becoming dryer.

The Peterson et al paper is incorrect in asserting that temperature and humidity are distinctly different physical parameters. Both are associated with heat, as they clearly show in their equation 3; i.e.

H = c_p T + L q

where c_p is the specific heat of air at constant pressure, T is the dry bulb temperature, L is the latent heat of vaporization, and q is the specific humidity.

A more important issue is that they have not recognized that using surface measurements to diagnose the radiative imbalance of the climate system requires identifying where the heat from this imbalance goes. Their paper is quite effective at bringing in the concept of the Bowen ratio, but they do not recognize that we need to monitor both types of heat (sensible, i.e. dry bulb, and latent) in order to diagnose the heat content changes in the surface air, and to properly interpret the observed trends in the dry bulb temperatures.
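The two-term structure of the moist enthalpy equation makes this monitoring requirement easy to illustrate. The sketch below partitions a hypothetical change in surface-air heat content into its sensible (c_p ΔT) and latent (L Δq) contributions; the constants are standard textbook values and the trend numbers are invented for illustration, not drawn from any of the papers discussed here.

```python
# Standard approximate constants (not site-specific values):
CP = 1004.0   # specific heat of dry air at constant pressure, J kg^-1 K^-1
LV = 2.5e6    # latent heat of vaporization, J kg^-1

def heat_content_change(d_temp_k, d_q):
    """Split a change in surface-air heat content (per kg of air) into
    its sensible and latent parts, per H = c_p*T + L*q.
    Returns (sensible, latent, total) in J/kg."""
    sensible = CP * d_temp_k
    latent = LV * d_q
    return sensible, latent, sensible + latent

# A hypothetical period with +0.2 K of warming and +0.0002 kg/kg of moistening:
s, l, total = heat_content_change(0.2, 2e-4)
print(s, l, total)  # latent term (~500 J/kg) exceeds sensible term (~200 J/kg)
```

Even in this mild example the latent term dominates, which is why a dry bulb temperature trend alone can misrepresent how much heat the surface air has actually gained.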

Perhaps this would be clearer to them if, instead of writing that we

“need to include absolute humidity in order to describe observed temperature trends,”

we had written that we

“need to include absolute humidity in order to correctly explain observed temperature trends.”

Finally,  they write

“Presenting heat content as the primary metric for global warming could lead lay readers to erroneously perceive Australia as cooling – after all, its heat (content) is decreasing.”

However, if the heat content is decreasing because of drying, even though the dry bulb temperature trend is positive, it IS cooling in terms of Joules per kilogram of air! There is less heat energy in this air than when the moist enthalpy was higher.

I suggest that both dry bulb temperature and moist enthalpy trends and anomalies be presented in climate analyses and in model predictions. This will provide a more complete diagnosis of the climate (and of surface global warming) of the atmosphere than using the dry bulb temperatures alone.


Filed under Climate Change Metrics, Research Papers

Hurricanes And Nor’easters In The Late Holocene – Evidence For Large Natural Variability

We have been alerted by Ken Haapala of the Science and Environmental Policy Project to an interesting paper on paleo-climate which documents a large natural variation in the occurrence of hurricanes and nor’easters along the coast of North Carolina.

It is

Mallinson, D.J., Smith, C.W., Mahan, S., Culver, S.J. and McDowell, K. 2011. Barrier island response to late Holocene climate events, North Carolina, USA. Quaternary Research 76: 46-57

The abstract reads [highlight added]

The Outer Banks barrier islands of North Carolina, USA, contain a geologic record of inlet activity that extends from ca. 2200 cal yr BP to the present, and can be used as a proxy for storm activity. Optically stimulated luminescence (OSL) dating (26 samples) of inlet-fill and flood tide delta deposits, recognized in cores and geophysical data, provides the basis for understanding the chronology of storm impacts and comparison to other paleoclimate proxy data. OSL ages of historical inlet fill compare favorably to historical documentation of inlet activity, providing confidence in the technique. Comparison suggests that the Medieval Warm Period (MWP) and Little Ice Age (LIA) were both characterized by elevated storm conditions as indicated by much greater inlet activity relative to today. Given present understanding of atmospheric circulation patterns and sea-surface temperatures during the MWP and LIA, we suggest that increased inlet activity during the MWP responded to intensified hurricane impacts, while elevated inlet activity during the LIA was in response to increased nor’easter activity. A general decrease in storminess at mid-latitudes in the North Atlantic over the last 300 yr has allowed the system to evolve into a more continuous barrier with few inlets.

The conclusion reads

This study demonstrates that OSL is a viable tool for dating subtidal to intertidal barrier island inlet and flood-tide delta facies, and can provide valuable insight into barrier evolution and coastal response to varying climate conditions. Ages of inlet facies along the North Carolina Outer Banks indicate a period of large-scale inlet activity concentrated between Rodanthe and Ocracoke during the MWP, which we attribute to hurricane impacts, and a later period of elevated inlet activity during the Little Ice Age, which we attribute to an increase in nor’easter activity. Closure of most of these inlets occurred over the last 300 yr, probably reflecting more stable climate conditions, fewer storm impacts (both hurricane and nor’easter), and a decrease in the average wind intensity and wave energy field in the mid-latitudes of the North Atlantic.

This study suggests that we have been in a period of reduced storminess along the coast of North Carolina relative to what is typical of the longer time period. It also illustrates the difficulty of extracting a human-caused climate change component from this large natural variability.


Filed under Research Papers

Follow On Comment To The Post – New Paper “Why Do Tornados And Hail Storms Rest On Weekends” By Rosenfeld and Bell 2011

I received an e-mail comment on the post

New Paper “Why Do Tornados And Hail Storms Rest On Weekends” By Rosenfeld and Bell 2011

The very informative comment is

Not all aerosols peak during the week. It appears that elemental (also called black) carbon, mainly from diesels, is at its lowest concentrations on Sunday and Monday in rural middle America. Natural dusts are also lower on Sundays and Mondays – perhaps also due to less driving generally, less stirring up of road dust?  Or, is it possible that farmers do less dirt-moving on the weekends, or at least on Sunday?  Is the black carbon missing on Sundays possibly due to less use of farm equipment?  Nitrates (mainly but not exclusively a vehicular emission) exhibit the same weekly cycle. Surprisingly, power plant particle emissions do not, although sulfur dioxide levels (a gas) are lower on weekends. See this link:

Here are parts of the Abstract:

“Data from the Interagency Monitoring of Protected Visual Environments (IMPROVE) network of aerosol samplers and NOAA monitoring sites are examined for weekly cycles. At remote and rural sites, fine particle elemental carbon, crustal elements, and coarse particle mass had pronounced (up to 20%) weekly cycles with minima on Sunday or Monday. Fine particle organic carbon and mass had smaller amplitude cycles, also with Sunday or Monday minima. There was no statistically significant weekly cycle in fine particle sulfate despite a 5 to 15% weekly cycle in power plant SO2 emissions. Although results for nitrate may be more susceptible to sampling artifacts, nitrate also showed a pronounced weekly cycle with an amplitude similar to elemental carbon……These results support a large role of diesel emissions in elemental carbon aerosol over the entire United States and suggest that a large fraction of the airborne soil dust is anthropogenic. They also suggest that studies of  weekly cycles in temperature, cloudiness, precipitation, or other meteorological variables should look for causes more in light-absorbing particles and possible ice nucleation by dust rather than sulfate or total aerosol….”

Hope this is of interest,

Tom Grahame


Filed under Climate Change Forcings & Feedbacks, Guest Weblogs, Research Papers

An Example Of The Need For A Bottom-Up Resource-Based Perspective Of Vulnerability With Respect To Electric Power



In our paper

Pielke Sr., R.A., R. Wilby, D. Niyogi, F. Hossain, K. Dairuku, J. Adegoke, G. Kallos, T. Seastedt, and K. Suding, 2011: Dealing with complexity and extreme events using a bottom-up, resource-based vulnerability perspective. AGU Monograph on Complexity and Extreme Events in Geosciences, in press. 

we present a bottom-up, resource-based approach to assess risks to society from extreme events, including hurricanes. We have concluded this is a much more robust approach than relying on a top-down global climate model prediction of changes in climatology (such as hurricane frequency) in the coming decades.

Hurricane Irene presents an example of why we need this bottom-up approach as the primary framework for the reduction of risk.

Hurricane Irene has caused large losses of electric power. For future hurricanes (regardless of how they might change) a program to reduce these outages should be a high priority.

An example of a news article on this vulnerability of the electrical system is by Childs Walker of the Baltimore Sun on August 26 2011, titled

Irene’s greatest aggravation: power outages

Its subtitle reads

Marylanders try to accept outages gracefully, admit their patience might wane after a few days

Excerpts read [highlight added]

Though Irene did not cause widespread flooding in Maryland or smash buildings to the degree many feared, the storm left as many as 800,000 businesses and households without power.

Maryland appeared to rank second in outages among states hit by Irene. Virginia reported about 2.5 million residents without power, the second-most in state history. North Carolina, where Irene made landfall, reported more than 400,000 customers without power. Pennsylvania, New York, New Jersey, Connecticut and Massachusetts each reported between 300,000 and 500,000 outages at various times Sunday.

Since hurricanes will continue to occur along the east coast, improving the resiliency of the electric power grid should be a goal. We do not need to know anything about how the climatology of these storms might change in the coming decades. Just pruning trees before each hurricane season (as homeowners do in the western USA before the fire season) would be a cost-effective way to reduce risk.


Filed under Vulnerability Paradigm

Set #3 Of The Photographs Of Surface Climate Observing Sites

This post presents the next three photographs of surface climate observing sites that I introduced in my post on August 11 2011

Quality Of Global Climate Surface Observing Sites

Some of these sites are reasonably well-sited while others are not. There is, however, a clear need to document each of the sites that are used in the Global Climate Reference Network.

1. Veracruz, Mexico

2. Tampico, Mexico

3. Salina Cruz, Mexico


Filed under Climate Change Metrics

Kudos To The National Weather Service And The National Hurricane Center For An Excellent Forecast Of Hurricane Irene!

The successful forecasting of Hurricane Irene as it traveled across the Bahamas and up the east coast of the United States deserves to be widely appreciated and applauded!

This forecast was based on a suite of numerical weather prediction models which provided a solid basis for predicting its track and intensity.  Radar and satellite provided detailed real-time information.

Many lives have been saved by the outstanding service of these federal agencies.

Congratulations to the National Weather Service and the National Hurricane Center for a job well done!


Filed under Climate Science Reporting