Monthly Archives: November 2008

Real Climate Misunderstanding Of Climate Models

Real Climate has introduced a weblog titled FAQ on climate models. There are quite a few issues that can be raised with their answers, but I will focus on just one here. It is their answer to the question “What is tuning”. They write:

“What is tuning?

We are still a long way from being able to simulate the climate with a true first principles calculation. While many basic aspects of physics can be included (conservation of mass, energy etc.), many need to be approximated for reasons of efficiency or resolutions (i.e. the equations of motion need estimates of sub-gridscale turbulent effects, radiative transfer codes approximate the line-by-line calculations using band averaging), and still others are only known empirically (the formula for how fast clouds turn to rain for instance). With these approximations and empirical formulae, there is often a tunable parameter or two that can be varied in order to improve the match to whatever observations exist. Adjusting these values is described as tuning and falls into two categories. First, there is the tuning in a single formula in order for that formula to best match the observed values of that specific relationship. This happens most frequently when new parameterisations are being developed.

Secondly, there are tuning parameters that control aspects of the emergent system. Gravity wave drag parameters are not very constrained by data, and so are often tuned to improve the climatology of stratospheric zonal winds. The threshold relative humidity for making clouds is tuned often to get the most realistic cloud cover and global albedo. Surprisingly, there are very few of these (maybe a half dozen) that are used in adjusting the models to match the data. It is important to note that these exercises are done with the mean climate (including the seasonal cycle and some internal variability) – and once set they are kept fixed for any perturbation experiment.”

They make the following remarkable claims:

1. “With these approximations and empirical formulae, there is often a tunable parameter or two that can be varied in order to improve the match to whatever observations exist.”

First, there are always tunable parameters within each parameterization, and there are always quite a few more than one or two.

In my class on modeling, the students have documented the number of tunable parameters for a range of parameterizations, and 10 or more are common for each individual parameterization (e.g., see the class powerpoint presentations at ATOC 7500 for my most recent class).

Second, the only basic physics in the models are the pressure gradient force, advection and the acceleration due to gravity. These are the only physics in which there are no tunable coefficients. Climate models are engineering codes and not fundamental physics.

The framework of all climate models is illustrated in one of my powerpoint talks for weather models (see slides 3 and 4):

Pielke, R.A., Sr., 2003: The Limitations of Models and Observations. COMET Symposium on Planetary Boundary Layer Processes, Boulder, Colorado, September 12, 2003.

2. “Adjusting these values is described as tuning and falls into two categories. First, there is the tuning in a single formula in order for that formula to best match the observed values of that specific relationship. This happens most frequently when new parameterisations are being developed.

Secondly, there are tuning parameters that control aspects of the emergent system. “

The claim that there is a single formula is incorrect; the parameterizations of the physics involve subroutines (or their equivalent) that comprise quite a few lines of code. More importantly, the matching with observations used to tune the parameterizations is typically completed for ideal situations (such as a field observational campaign or a high-resolution model) and then applied to climate model situations which quite frequently fall outside of the conditions that were used to tune the parameterization.

 Some parameters (such as the von Karman “constant”) are assumed to be universal, but most are just values that provide the best fit of a parametrization with the observed data used in its construction.
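As an illustration of this kind of best-fit tuning, the sketch below adjusts a single coefficient in a hypothetical empirical rain-formation formula by least squares against synthetic observations. The formula, the threshold, and the data are all inventions for this example, not taken from any actual climate model parameterization; the point is only that the coefficient has no fundamental value and is simply fit to the data used in the parameterization's construction.

```python
import random

# Illustrative sketch (not from any actual climate model): "tuning" the single
# coefficient c in a hypothetical empirical rain-formation formula,
#     rain_rate = c * max(q - q_crit, 0),
# by least squares against synthetic "observations". The formula, threshold,
# and data are all assumptions made for this example.
random.seed(0)

q_crit = 1.0      # assumed cloud-water threshold (arbitrary units)
true_c = 0.8      # coefficient the synthetic "observations" are built from

qs = [i * 0.1 for i in range(51)]                  # cloud water, 0.0 .. 5.0
xs = [max(q - q_crit, 0.0) for q in qs]
obs = [true_c * x + random.gauss(0.0, 0.05) for x in xs]

# One-parameter least squares (regression through the origin):
c_tuned = sum(x * y for x, y in zip(xs, obs)) / sum(x * x for x in xs)
print(round(c_tuned, 2))  # close to 0.8
```

A real parameterization repeats this exercise for many such coefficients at once, which is why the count per parameterization is so much larger than one or two.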

The second type of tuning is the same as the first (their division into two types is artificial), except that there are no observational data with which to perform the tuning. Besides gravity wave drag and a threshold of relative humidity for the onset of precipitation, a good example of a parameterization without any observational tuning is horizontal smoothing (which represents horizontal subgrid-scale mixing).

The conclusion with respect to the Real Climate posting on “What is tuning” is that they inaccurately presented the actual limitations of parameterizations. They also substantially understated the number of tunable coefficients that tuning involves.

Their sentence that

“Surprisingly, there are very few of these (maybe a half dozen) that are used in adjusting the models to match the data”

is incorrect. The students in each of my modeling classes (see the modeling classes, and scroll to the bottom of each for the students’ class presentations, where they decomposed parameterizations in order to quantify the number of tunable parameters) have documented the large number of tunable parameters within each of the parameterizations. There are no exceptions; all parameterizations involve a number of tunable parameters.

Real Climate is “surprised” that there are “maybe a half dozen” tunable parameters. They should not have been surprised, but instead should have looked in more depth to ascertain whether their conclusion was correct (it is not). Climate Science would be glad to post a guest weblog from Real Climate if they disagree with the Climate Science conclusions.

Readers who want an in-depth analysis of the number of parameters used in selected parameterizations in atmospheric modeling can view this in chapters 7 to 9 of my book

Pielke, R.A., Sr., 2002: Mesoscale meteorological modeling. 2nd Edition, Academic Press, San Diego, CA, 676 pp.

 

Comments Off

Filed under Climate Models

Wind Changes over Time and Space as a Climate Metric to Diagnose Temperature Trends

In Pielke et al. 2001: Analysis of 200 mbar zonal wind for the period 1958-1997. J. Geophys. Res., 106, D21, 27287-27290, we demonstrated that temporal and spatial trends in upper tropospheric winds can be used to diagnose the trends in the tropospheric temperatures below the level of the wind observations. This concept uses what is called the “thermal wind relation” and is a robust, well-established relationship between the change of wind with altitude and the horizontal temperature gradient.

In that paper, we showed as an example, that a surface (1000 hPa) to 200 hPa layer-mean horizontal north-south temperature gradient of 1 degree Celsius (using an average latitude of 43 degrees) would produce a 200 hPa wind speed increase of 4.6 meters per second. This means that if there were a 0.1-0.2 degree Celsius decrease in the zonally-averaged gradient between the high- and mid-latitudes over a period of a decade or more, we would see a 0.46-0.92 meter per second decrease in the wind speed over the same time period. Such a magnitude of change in the tropospheric layer-averaged temperatures has been observed (e.g., see http://vortex.nsstc.uah.edu/data/msu/t2lt/uahncdc.lt).
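As a cross-check on this arithmetic, the thermal wind relation can be evaluated directly in a few lines of Python. The sketch below takes the 1 degree Celsius layer-mean gradient to be expressed per 1000 kilometers (the length scale is my assumption for this example, not stated in the text above); with that assumption the 4.6 meters per second figure is reproduced.

```python
import math

# Thermal wind relation in pressure coordinates (geostrophic wind shear):
#     delta_u = (R / f) * ln(p_lower / p_upper) * |dT/dy|
# Assumption: the 1 degree Celsius layer-mean gradient is taken as
# 1 K per 1000 km (the length scale is assumed for this sketch).
R = 287.0                                        # dry-air gas constant, J/(kg K)
OMEGA = 7.292e-5                                 # Earth's rotation rate, 1/s
f = 2.0 * OMEGA * math.sin(math.radians(43.0))   # Coriolis parameter at 43 deg

p_lower, p_upper = 1000.0, 200.0                 # layer bounds, hPa
dT_dy = 1.0 / 1.0e6                              # 1 K per 1000 km, in K/m

delta_u = (R / f) * math.log(p_lower / p_upper) * dT_dy
print(round(delta_u, 1))  # -> 4.6 m/s
```

Scaling the gradient down to 0.1-0.2 degrees Celsius scales the shear linearly to the 0.46-0.92 meter per second range given above.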

There are important messages in these numbers. First, the wind speed change that must result from the observed reduction of the zonally-averaged layer-average temperature gradient is quite small in comparison to the typical upper tropospheric wind speeds associated with synoptic weather features, which can be 50 meters per second and more. This small trend places the use of global- and zonally-averaged tropospheric temperature trends in an appropriate perspective, in that they are a poor metric to diagnose climate variability and change.

Secondly, in order to assess the accuracy of satellite and radiosonde assessments of tropospheric temperature trends, the monitoring of the trends in wind speed and direction provides an independent metric to assess the temperature trends. If the Arctic troposphere is really warming relative to the mid- and tropical latitudes, we should see a weakening of the zonally-averaged wind speeds. However, for the period 1958-1997, in Chase et al. 2002: A proposed mechanism for the regulation of minimum midtropospheric temperatures in the Arctic. J. Geophys. Res., 107(D14), doi:10.1029/2001JD001425, we actually found that the 200 hPa winds had become somewhat stronger at higher latitudes.

The finding of the relatively small magnitude of observed zonally- and globally-averaged tropospheric temperature trends with respect to the upper tropospheric winds further illustrates why we need to focus on regional tropospheric temperature changes. Figure 11 in Pielke et al. (2005) shows spatial trends for 1979-2001 in the 300 hPa winds from the NCEP and ERA-40 Reanalyses. Areas with relatively large anomalies are diagnosed. It is the larger regional trends that have the much more direct effect on our weather. The stronger winds across the North Pacific are just one example of a trend in regional circulation patterns that directly affects the climate of North America.

Such regional assessments of tropospheric temperature trends should be a major initiative within the IPCC and other climate assessments. This needs to be completed on seasonal as well as annually averaged time scales. Our July 28, 2005 blog on What is the Importance to Climate of Heterogeneous Spatial Trends in Tropospheric Temperatures? provides further discussion as to why the regional spatial scale is so important to our understanding of climate variability and change.

Comments Off

Filed under Climate Change Metrics

Is Global Warming Spatially Complex?

Originally Posted on September 25, 2005.

The short answer is Yes.

As discussed in Heat storage within the Earth system, the appropriate climate metric to assess global warming is ocean heat content in Joules. As was shown in that 2003 paper, the radiative imbalance of the climate system can be effectively assessed by monitoring changes in Joules of the ocean heat content over time, as the other stores of heat in the climate system are small. For example, in that paper, between the mid-1950s and the mid-1990s, a global radiative imbalance of +0.3 Watts per meter squared was diagnosed, with half of this heating (+0.15 Watts per meter squared) above 300 m and the remainder between 300 m and 3 km. Since that study, the analysis of Willis et al. 2004 provides more recent ocean heat storage changes. As we diagnosed from their data (see Pielke and Christy), the radiative imbalance for the period 1993 to mid-2003 was about 0.62 Watts per meter squared.
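The arithmetic behind such a diagnosis is straightforward: divide the ocean heat content change in Joules by the Earth's surface area and the elapsed time. The sketch below uses a round-number heat gain of 1 x 10^23 Joules over a decade; that value is chosen for illustration rather than taken from Willis et al. 2004, but it yields an imbalance close to the 0.62 Watts per meter squared diagnosed above.

```python
import math

# Hedged sketch: convert an ocean heat-content change (Joules) into a
# global-mean radiative imbalance (W per square meter of Earth's surface).
# The heat-content value below is an illustrative round number, not a
# figure taken from Willis et al. (2004).
SECONDS_PER_YEAR = 3.156e7
EARTH_RADIUS = 6.371e6                         # meters
earth_area = 4.0 * math.pi * EARTH_RADIUS**2   # ~5.1e14 m^2

delta_ohc = 1.0e23   # assumed ocean heat gain over the period, Joules
years = 10.0         # roughly a decade, as in 1993 to mid-2003

imbalance = delta_ohc / (earth_area * years * SECONDS_PER_YEAR)
print(round(imbalance, 2))  # -> 0.62 W/m^2
```

Note that the imbalance is expressed per unit of the whole Earth's surface area; expressing the same heat gain per unit of ocean area alone would give a proportionally larger number.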

However, these estimates are based on a global ocean average heat storage change. The spatial trends in ocean heat content are actually quite complex (i.e., see Figure 4 in Willis et al. 2004). They found that most of the heating was in the southern hemisphere mid-latitude ocean down to a depth of 750 m. The sea surface temperature (SST) anomalies mirror this spatial complexity at the surface of the oceans (see http://www.osdpd.noaa.gov/PSB/EPS/SST/climo.html for a current map of the anomalies). On September 24, 2005 large areas of cool SST anomalies are evident in the southern hemisphere oceans, while large areas of warm SST anomalies are seen in the northern hemisphere Atlantic Ocean.

An advantage of using Joules as the climate metric of global heat changes is that an adequately sampled snapshot at any moment in time is all that is needed to monitor the heating within the climate system. Unlike surface air temperature by itself (which has been the main climate metric used to assess global warming), for which there is a lag between a radiative imbalance and an equilibrium temperature (e.g., see Equation 1-1 in NRC 2005: Radiative Forcing of Climate Change: Expanding the Concept and Addressing Uncertainties), there is no lag between a radiative imbalance and the amount of Joules in the climate system.

We can, therefore, apply an assessment of the current anomalies in ocean heat content to determine where the global warming signal is most pronounced. Data provided by the European Centre for Medium Range Forecasting of near-current ocean heat content anomalies (presented as ocean temperature anomalies) can be used to illustrate the current spatial complexity of the ocean heating. The ECMWF presents data for the following slices through the oceans:

1. Equatorial depth temperature anomaly

2. Latitude-depth temperature anomaly at 165 E

3. Latitude-depth temperature anomaly at 140 W

4. Latitude-depth temperature anomaly at 109.7 W [currently 11/25/2008 no longer available]

5. Latitude-depth temperature anomaly at 30 W

The near-surface temperature anomaly (5 m depth) [now the averaged temperature in the upper 300 m] is also available from the ECMWF.

There is an issue as to whether all of the important spatial scales of the heat anomalies are sampled in these analyses. Nonetheless, there are several important conclusions from even a cursory examination of these slices even if we still need improved spatial monitoring.

A significant portion of the warming is at depth. The portion of this heat that is at a depth below the thermocline is not readily available to heat the atmosphere above or to contribute to enhanced evaporation of water vapor from the ocean surface. This heat is “sequestered” for an unknown period of time.

The anomalies have significant horizontal, as well as vertical variations. Such horizontal structure could be a result of the heterogeneous character of a number of the climate forcings, as we discussed in the weblog entry for July 28, 2005 (What is the Importance to Climate of Heterogeneous Spatial Trends in Tropospheric Temperatures?) and/or related to the complexity of ocean dynamical and thermodynamic processes. A number of these anomalies are cooler than the long-term average.

Thus, the answer to the question posed in this weblog is that global warming has significant spatial variations. Global warming is not a more-or-less uniform warming spread across the oceans. Such a spatially complex warming pattern further supports the claim that a multiple set of climate forcings, in addition to the more homogeneous radiative forcing of the well-mixed greenhouse gases, is altering our climate. The reconstruction of the observed temporal evolution of the spatial pattern over the last several decades by the global climate models remains an unrealized goal.

Comments Off

Filed under Climate Change Metrics

Is There a Human Effect on the Climate System?

Originally Posted on August 1, 2005.

As discussed in depth in the NRC (2005) report, the human influences on the climate system are diverse and include, in addition to the radiative effect of the well-mixed greenhouse gases such as carbon dioxide and methane, diverse influences from aerosols, land-use/land-cover change, the biogeochemical effects of enhanced CO2 and of nitrogen deposition. As concluded in the multi-authored paper Nonlinearities, Feedbacks, and Critical Thresholds within the Earth’s Climate System:

“The Earth’s climate system is highly nonlinear: inputs and outputs are not proportional, change is often episodic and abrupt, rather than slow and gradual, and multiple equilibria are the norm…. It is imperative that the Earth’s climate system research community embrace this nonlinear paradigm if we are to move forward in the assessment of the human influence on climate.”

The IPCC studies have convincingly shown that there are long-term effects on the climate system due to the human input of carbon dioxide into the atmosphere. Indeed, it should be no surprise that when we change the composition of the Earth’s atmosphere, we alter its energy budget, and thus other aspects of the climate system. The IPCC conclusions should be interpreted, however, as process studies as discussed in the July 15 blog on this website (What are Climate Models? What do They Do?). In that context they tell us that elevated atmospheric concentrations of well-mixed greenhouse gases due to human activity do affect our climate system.

However, carbon dioxide is just one human climate forcing. It is not the only first-order climate forcing, as clearly articulated in the NRC (2005) report. The term “global warming” has been used as a synonym for climate change, and is now the basis for extensive economic activity (for an example, see the July 7, 2005 article on carbon permits in the Economist; subscription required). Such a narrow focus, and such use of the term “global warming,” fails to recognize that other first-order climate forcings exist, in addition to the radiative forcing of carbon dioxide. The issuance of carbon permits will not satisfactorily address the more complex (and realistic) questions as to how human activity is altering the climate system. “Global warming” is a grossly inadequate term to characterize the actual human effect on the climate system.

Comments Off

Filed under Climate Change Forcings & Feedbacks

Why is Land Use/Land Cover Change a First-Order Climate Forcing?

Originally posted on August 5, 2005.

As recognized by the National Research Council in 2005, land-use/land-cover change is a first-order climate forcing. However, its role as a regional and global climate influence is not widely recognized, except as it affects the atmospheric concentration of carbon dioxide and the global average surface albedo. In the summary figure from the IPCC Statement for Policymakers (see Figure ES-2 here), in terms of the global mean radiative forcing, only albedo effects of land use/land cover change are identified.

However, numerous studies have shown that the effect of land-cover/land-use change is to alter temperatures and precipitation in regions where the change occurs, as well as weather globally through teleconnections (see, for example, The influence of land-use change and landscape dynamics on the climate system: relevance to climate-change policy beyond the radiative effect of greenhouse gases and The climatic impacts of land-surface change and carbon management, and the implications for climate change mitigation policy).

The reason for this influence is described in a presentation I gave entitled “Land-Use/Land-Cover Change as a Major Climate Forcing: Evidence and Consequences for Climate Research.” In the talk, I asked the question “why should landscape effects, which cover only a fraction of the Earth’s surface, have global circulation effects?” The answer can be summarized as follows:

  1. Land-use/land-cover change alters the surface fluxes of heat and water vapor from what they were before the change. This alteration in the fluxes affects the atmospheric boundary layer, and the energy available for thunderstorms.
  2. As shown in pioneering work by Joanne Simpson and Herbert Riehl, globally some 1500-5000 thunderstorms (which are referred to as “hot towers”) are the conduit that transports heat, moisture and wind energy to higher latitudes. Since thunderstorms occur over only a relatively small percentage of the Earth’s surface, a change in their spatial patterns would be expected to have global consequences.
  3. Most thunderstorms (by a ratio of about 10 to 1) occur over land.
  4. The regional alteration in tropospheric diabatic heating has a large influence on the climate system (see my July 28th blog).
  5. Global climate effects occur with ENSO events since they are of large magnitude, have long persistence, and are spatially coherent. Regional land-use/land-cover changes share these characteristics of large magnitude, long persistence, and spatial coherence, on the same or larger spatial scales (see Australian Land Clearing, A Global Perspective: Latest Facts & Figures for changes in landscape in the 1990s).

We should, therefore, expect global climate effects from land-use/land-cover change. The next IPCC needs to focus more on this first-order climate forcing than they have in the past. The question of searching for a “discernible effect on the climate system” misses the obvious, in that we have been altering regional and global climate by land-use/land-cover change for decades. The goal of “preventing dangerous anthropogenic interference with the climate system” (from the UN Framework Convention on Climate Change, Article 2, 1992), by focusing on CO2, has overlooked the first-order climate forcing of land-use/land-cover change in altering the surface heat and water vapor fluxes.

Comments Off

Filed under Climate Change Forcings & Feedbacks

Is CO2 a Pollutant?

Originally posted on August 9, 2005.

A recent news article illustrates a popular understanding of carbon dioxide as a pollutant. Referring to carbon permit trading it reports:

“These brokers don’t trade stocks or bonds or gold or oil. What they trade is pollution. To be exact, they buy and sell the right to foul the air with carbon dioxide, a greenhouse gas that the U.S. National Academy of Sciences says causes global warming.”

The term “foul” has a number of definitions according to the Webster New World Dictionary, but the most appropriate in the context of the above quote is that it means:

“so offensive to the senses as to cause disgust; stinking; loathsome” and “extremely dirty or impure; disgustingly filthy.”

A “pollutant” is defined as:

“a harmful chemical or waste material discharged into the water or atmosphere.”

To “pollute” is to:

“make unclean, impure, or corrupt; defile; contaminate; dirty.”

The American Meteorological Society’s Glossary lists the definition as:

air pollution: The presence of substances in the atmosphere, particularly those that do not occur naturally. These substances are generally contaminants that substantially alter or degrade the quality of the atmosphere. The term is often used to identify undesirable substances produced by human activity, that is, anthropogenic air pollution. Air pollution usually designates the collection of substances that adversely affects human health, animals, and plants; deteriorates structures; interferes with commerce; or interferes with the enjoyment of life. Compare airborne particulates, designated pollutant, particulates, criteria pollutants.

The question is: How does atmospheric carbon dioxide fit into this definition? Carbon dioxide does occur naturally, of course, and is essential to life on Earth, as it is an essential chemical component in the photosynthesis process of plants. This is in contrast with other trace gases in the lower atmosphere, such as carbon monoxide, ozone, and sulfur dioxide, which have direct health and environmental effects on humans and vegetation. Indeed, when combustion is optimized, less carbon monoxide and more carbon dioxide are produced. I am aware of no positive effects of these pollutants at any level in the lower atmosphere.

Thus, it is more informative to define anthropogenic inputs of carbon dioxide as a climate forcing, as was done in the 2005 National Research Council Report. This provides the recognition that carbon dioxide does not have direct health effects as implied by the news article that carbon dioxide “fouls” the air, but it does significantly affect our climate. Of course, carbon monoxide, ozone, and sulfur dioxide are also climate forcings. When these other atmospheric constituents are referred to in news articles and elsewhere, we would benefit by a distinction between an “air pollutant” and a “climate forcing” depending on the context.

Comments Off

Filed under Climate Change Forcings & Feedbacks

Linear Climate Trends or Sudden Transitions of Climate – Which is More Likely?

Originally posted on August 19, 2005.

A recent paper in Geophysical Research Letters by K. Zickfeld and colleagues (“Is the Indian summer monsoon stable against global change?”) provides an example of investigating multiple climate forcings. According to their study, sulfur emissions and/or land-use changes as they affect planetary albedo, or natural variations in insolation and CO2 concentrations, could trigger abrupt transitions between different monsoon regimes. While the paper uses a simple box model of the tropical atmosphere, it is a start at investigating a set of multiple climate forcings as causing rapid transitions of climate in India. Such rapid transitions are already part of the natural system; see Rial, J., R.A. Pielke Sr., M. Beniston, M. Claussen, J. Canadell, P. Cox, H. Held, N. de Noblet-Ducoudre, R. Prinn, J. Reynolds, and J.D. Salas, 2004: Nonlinearities, feedbacks and critical thresholds within the Earth’s climate system. Climatic Change, 65, 11-38.

In contrast to the nearly linear and monotonic predicted trends from the anthropogenic increases of CO2, such as reported by the IPCC, the occurrence of such sudden changes is more the norm and would have major societal impacts. Such nonlinear climate system responses, however, are likely to be impossible to skillfully predict. This provides further impetus to adopt the vulnerability perspective such as promoted by Jon Foley (subscription required) and here on this Weblog (see our July 19 and 26 and August 16, 2005 postings).

The August 11, 2005 paper published in Science Express Reports (Sherwood, S., J. Lanzante, and C. Meyer: Radiosonde daytime biases and late-20th century warming; subscription required), for example, perpetuates the emphasis on large-scale linear trend analysis in its study of tropospheric temperature trends. This is because the General Circulation Models focus on global- and zonally-averaged temperature trends, and linear trends from the observations are being compared with linear trends from the GCMs. As one of the authors (Sherwood) was quoted in a New York Times interview (free subscription required):

“Things being debated now are details about the models,” said Steven Sherwood, the lead author of the paper on the balloon data and an atmospheric physicist at Yale. “Nobody is debating any more that significant climate changes are coming.”

This statement is based on a linear analysis.

However, while their study of the accuracy of linear trends determined from radiosondes is scientifically interesting, if the Geophysical Research Letters study by Zickfeld et al. has merit, it is to show us that assessing large-scale linear trends is of little practical use in estimating our real threat from future climate change.

Comments Off

Filed under Climate Change Forcings & Feedbacks

What is a “Teleconnection”? Why are Teleconnections Important in Climate Science?

Originally posted on August 25, 2005.

Teleconnections are defined by the American Meteorological Society as:

“1. A linkage between weather changes occurring in widely separated regions of the globe. 2. A significant positive or negative correlation in the fluctuations of a field at widely separated points. Most commonly applied to variability on monthly and longer timescales, the name refers to the fact that such correlations suggest that information is propagating between the distant points through the atmosphere.”

This linkage can be accomplished by alterations of regional tropospheric temperatures which create changes in the large-scale pressure and wind fields, and/or by the advection of material from one region to another (such as from blowing dust or emissions of pollutants that are advected by the wind). The National Research Council report discusses teleconnections as related to radiative forcings.
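The second part of the definition, a significant correlation between fluctuations at widely separated points, can be made concrete with a short sketch. The two monthly anomaly series below are synthetic, with both driven by a shared signal standing in for a common large-scale influence; the point is only how such a teleconnection correlation is computed.

```python
import math
import random

# Hedged sketch of definition 2: a teleconnection as a significant correlation
# between fluctuations of a field at widely separated points. Both "monthly
# anomaly" series are synthetic and share a common driving signal.
random.seed(1)

shared = [random.gauss(0.0, 1.0) for _ in range(120)]    # 10 years of months
site_a = [s + random.gauss(0.0, 0.5) for s in shared]    # anomaly at point A
site_b = [s + random.gauss(0.0, 0.5) for s in shared]    # distant point B

def correlation(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = correlation(site_a, site_b)
print(round(r, 2))  # strongly positive, since both sites share one driver
```

With independent noise at each site, the correlation falls well below 1 even though the shared signal dominates, which is why teleconnection indices are assessed for statistical significance rather than expected to be perfect.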

Two recent papers provide examples of the teleconnection associated with alterations in regional tropospheric temperatures (see Lu, Riyu, and Buwen Dong, 2005. Impact of Atlantic sea surface temperature anomalies on the summer climate in the western North Pacific during 1997-1998. J. Geophys. Res. – Atm., 110, D16102, doi:10.1029/2004JD005676, August 19, 2005, and Wang, D., C. Wang, X. Yang, and J. Lu, 2005. Winter Northern Hemisphere surface air temperature variability associated with the Arctic Oscillation and North Atlantic Oscillation. Geophys. Res. Lett., 32, L16706, doi:10.1029/2005GL022952, August 20, 2005). This work further illustrates the importance of climate patterns in one region affecting the climate elsewhere through alterations in the large-scale pressure field. Work that Chris Castro of our research group has completed has also illustrated how sea surface temperature anomalies in the Pacific Ocean affect the summer rainfall patterns in western North America by teleconnections.

Sea surface temperature anomaly patterns acting as a surface climate forcing that affects the weather at large distances are, of course, an accepted teleconnection effect. Indeed, this teleconnection effect is why there are major global climate anomalies when an El Niño occurs.

The influence of spatially heterogeneous climate forcing by land-use/land-cover change and by aerosol clouds as they produce teleconnections, however, is less accepted by the climate community, despite the clear parallel between climate forcing from sea surface temperature anomalies and these forms of climate forcing. Each of these climate forcings is spatially coherent, persists for long time periods, and significantly affects the fluxes of heat, moisture, and momentum into and out of the atmosphere. We discussed the role of spatially focused climate forcings in our July 28th blog “What is the Importance to Climate of Heterogeneous Spatial Trends in Tropospheric Temperatures?” The two new papers by Lu and Dong, and by Wang and colleagues, clearly show that it is the regional variations of the climate system that exert a major influence on the weather we experience. The focus of the climate community on globally-averaged and zonally-averaged surface and tropospheric temperature changes is a distraction from the dominant spatial scales of climate forcing, as exemplified by these two new papers.

Comments Off

Filed under Climate Change Forcings & Feedbacks

What is Climate? Why Does it Matter How We Define Climate?

Originally Posted on July 11, 2005.

The title of this weblog is “Climate Science,” so the first thing we need to do is define “climate.” For many, the term refers to long-term weather statistics. However, on this blog we are adopting the definition that is provided in the 2005 National Research Council (NRC) report, where the climate is the system consisting of the atmosphere, hydrosphere, lithosphere, and biosphere. Physical, chemical, and biological processes are involved in interactions among the components of the climate system. Figures 1-1 and 1-2 in the report illustrate this definition of climate very clearly. In the NRC report, climate forcings were extended beyond the radiative forcing of carbon dioxide to include not only the biogeochemical influence of carbon dioxide, but also a variety of aerosol forcings (see Table 2-2 in the report), nitrogen deposition, and land-cover changes. Each of these forcings has been determined to influence long-term weather statistics as well as other aspects of the climate.

However, this concept of climate, and its alterations by humans, has been generally ignored. The NRC report listed above certainly appears to have been largely missed by policymakers. As an example, at the G-8 meeting, the term “climate change” is used interchangeably with “global warming.” However, the human influence on climate is much more complex and multi-dimensional than captured by the term “global warming” (see, for example, http://www.climatesci.org/publications/pdf/R-260.pdf; http://www.nap.edu/books/0309095069/html/15.html and http://www.climatesci.org/pdf/R-225.pdf). The term “global warming” is generally used to refer to an increase in the globally-averaged surface temperature in response to the increase of well-mixed greenhouse gases, particularly CO2.

If, however, we are interested in atmospheric and ocean circulation changes, which, after all, are what create our weather, we need to focus on how humans are altering these circulations. In any case, ocean heat content change is a much more appropriate metric than a globally-averaged surface temperature when evaluating “global warming” (http://www.climatesci.org/publications/pdf/R-247.pdf).

Thus it matters how we define climate and climate forcing (http://www.nap.edu/books/0309095069/html/15.html). By ignoring a number of the other first-order climate forcings, we are not properly addressing the threat we face in the future, but are instead relying on the overly simplistic view that reducing carbon dioxide emissions is the way to reduce our “dangerous intervention” in the climate. With respect to changes in circulations, and therefore in weather, we need to identify and quantify the role of spatially heterogeneous climate forcings, such as those from aerosols and land-cover change, in addition to the influence of the well-mixed greenhouse gases. These heterogeneous climate forcings could represent a more significant threat to our future climate system than the risk of an increase in the atmospheric concentration of CO2.

Hopefully, this blog will stimulate discussion, as well as illuminate reasons why this broader perspective on climate variability and change has been mostly ignored.

Filed under Definition of Climate

Are Multi-decadal Climate Forecasts Skillful?

Originally posted on July 22, 2005.

In one of our July 11, 2005 posts, climate was defined so that climate forecasts are forecasts of the future state of the atmosphere, oceans, land, and continental glaciers, as defined using physical, chemical, and biological variables that we can measure. We can apply local, regional, or global averages over any time period we choose to characterize the future state of the climate. Weather forecasts are a subset of climate forecasts, in that we limit our forecasts to weather conditions, averaged over 12-hour periods, for example, out to a week or more, and generally assume a number of climate variables, such as vegetation and sea-surface temperatures, are invariant over this time period. It is important to note that the averaging time is not what distinguishes weather from climate (e.g., although called “seasonal climate predictions”, these forecasts are more accurately “seasonal-averaged weather predictions”).

As a necessary condition, climate forecasts must be able to skillfully reconstruct the observed temporal and spatial variability and change of local, regional, and global climate variables, when the forecast models are only given the external forcings (such as solar irradiance, volcanic eruptions, CO2 concentrations) as illustrated in Figure 1-2 in Radiative Forcing of Climate Change: Expanding the Concept and Addressing Uncertainties (2005). See also Tables 1 and 2 in Dynamical downscaling: Assessment of value retained and added using the Regional Atmospheric Modeling System (RAMS) where climate forecasts are called a Type 4 model simulation.
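One standard way to quantify this kind of reconstruction skill is a mean-square-error skill score measured against a climatological baseline. The sketch below illustrates the idea with invented anomaly values; it is not drawn from any of the papers or reports cited here:

```python
# Mean-square-error skill score: SS = 1 - MSE(forecast) / MSE(climatology).
# SS near 1 means the reconstruction closely tracks the observations;
# SS <= 0 means it does no better than the climatological mean.
# All anomaly values below are invented, purely for illustration.

def mse(pred, obs):
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

obs      = [0.10, 0.25, 0.05, 0.30, 0.20]   # "observed" anomalies
forecast = [0.12, 0.20, 0.10, 0.28, 0.18]   # "model" reconstruction

climo_mean  = sum(obs) / len(obs)           # climatological baseline
climatology = [climo_mean] * len(obs)

skill = 1.0 - mse(forecast, obs) / mse(climatology, obs)
print(round(skill, 2))                      # 0.86: beats climatology
```

A model that merely reproduces the long-term mean would score zero by this measure, which is why reconstructing the observed variability, and not just the average, is the relevant test.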

In 2000, we published a paper which demonstrated that the general circulation models were unable to skillfully reconstruct even the globally-averaged mid-tropospheric temperature trend during the 1979-2000 time period. Thus, as of that date, the climate prediction models were shown to be unable to skillfully forecast the future climate even with respect to a single globally-averaged climate variable. (I am on a CCSP committee entitled “Temperature Trends in the Lower Atmosphere: Steps for Understanding and Reconciling Differences” and will update our assessment of the issue of climate prediction skill as soon as the report is public.)

Mike MacCracken, in his essay response to my Climatic Change essay, seeks to distinguish a “prediction” from a “projection.” However, this only obscures the discussion, as GCM results are obviously packaged as forecasts in that specific time periods in the future are presented (see, as just one example, the 2070-2100 forecasts of the United Kingdom Hadley Centre). Even Mike, in his paper entitled “Reliable regional climate model not yet on horizon,” recognizes that there is no regional predictive skill.

A conclusion of our evaluation is that papers in the literature that present future values of some (or all) climate variables misrepresent their results by implying that they are forecasts. They should be presented as sensitivity studies (i.e., as process studies; see my July 15, 2005 post on the types of model applications).

We can illustrate their misuse as forecasts with an analog. If we ran a numerical weather prediction model to produce a forecast of rainfall for tomorrow and published a paper on it today, would this be considered sound science justifying a paper? Of course not. First we would want to wait to see whether the forecast was skillful. This is possible with weather forecasts for tomorrow, but we cannot yet verify a climate forecast model’s skill for decadal-averaged weather conditions decades into the future.

The climate modeling community runs ensembles of multi-decadal predictions (with different initial conditions and different models) and averages the results over decadal time periods, which they claim distinguishes their simulations from the numerical weather prediction community’s application. Of course, the numerical weather prediction community also runs ensembles of simulations. The fundamental difference is that the weather community can validate their model results thousands of times; there is no such ability with multi-decadal climate prediction models.
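The gap in verification opportunities can be made concrete with a toy counting sketch (all the numbers and "annual" values below are invented, only to illustrate the arithmetic): a daily forecast verified over a 20-year record yields thousands of forecast/observation pairs, while a multi-decadal ensemble run collapses, after ensemble and decadal averaging, to a handful of numbers for decades that have not yet been observed.

```python
# Toy illustration of the verification gap (all values invented).

# Daily weather forecasts: one forecast/observation pair per day.
years_of_record = 20
weather_checks = years_of_record * 365          # 7300 verification pairs

# Multi-decadal climate prediction: average across ensemble members,
# then average over the decade, leaving a single number per decade.
runs = [  # three ensemble members, ten "annual" values each
    [0.1, 0.2, 0.1, 0.3, 0.2, 0.4, 0.3, 0.5, 0.4, 0.5],
    [0.0, 0.1, 0.2, 0.2, 0.3, 0.3, 0.4, 0.4, 0.5, 0.6],
    [0.2, 0.1, 0.3, 0.2, 0.4, 0.3, 0.5, 0.4, 0.6, 0.5],
]
ensemble_mean   = [sum(year) / len(year) for year in zip(*runs)]
decadal_average = sum(ensemble_mean) / len(ensemble_mean)

print(weather_checks)             # thousands of independent skill checks
print(round(decadal_average, 3))  # one decadal number, verifiable only
                                  # after the decade has been observed
```

The averaging step itself is unremarkable; the point is that it reduces the entire run to a single verifiable quantity, whereas the weather forecaster accumulates a new verification every day.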

Our conclusions are the following:

  1. Peer-reviewed papers, and national and international assessments, which present model results for decades into the future, or which provide impact studies based on these model simulations, should never be interpreted as skillful forecasts (or skillful projections). They should be interpreted as process (sensitivity) studies, even though the authors use definitive language (such as stating that a change “will” occur) and display model output for specific time periods in the future.
  2. The US National Assessment, which provided model simulations on regional scales for the coming decades, is inaccurately portrayed when its results are given to stakeholders with the interpretation that they bracket what is expected in the future. This is misleading when transmitted to policymakers, as process studies are inappropriately interpreted as forecasts.
  3. Climate forecasts (projections) decades into the future have not demonstrated skill in forecasting local, regional, and global climate variables. They have shown that human climate forcing has the capacity to alter the climate system, but we should not present these model simulations as forecasts. To present them as forecasts is misleading to policymakers and others who use this information.

Filed under Climate Models