Monthly Archives: October 2005

Another Problem With Using Surface Air Temperatures To Assess Long-Term Temperature Trends. Should Light Wind And Windy Nights Have The Same Temperature Trends At Individual Levels Even If The Boundary Layer Averaged Heat Content Change Is The Same?

The answer to this question is NO.

An October 9, 2005 article in the Seattle Times included the following:

“American researchers examined the possibility that urban heat was masquerading as global warming in 1997, by comparing data from all over the globe with measurements made only in rural areas. The warming was the same. Last year, David Parker, of Britain’s Hadley Centre for Climate Prediction and Research, settled the question emphatically by comparing measurements taken on calm and windy nights (Parker, D.E. 2004: Large-scale warming is not urban. Nature, November 18, 2004). If urbanization was making the planet look hotter than it really is, the effect should be more pronounced when there’s no wind to dissipate the heat from sweltering cities. But rates of warming were the same whether the wind was blowing or not.”

Our new paper in press at Geophysical Research Letters (GRL), “Should light wind and windy nights have the same temperature trends at individual levels even if the boundary layer averaged heat content change is the same?”, shows that the answer to this question must be no, and that the conclusions reported in the Seattle Times are incorrect. This GRL paper shows that the Parker Nature finding that the temperature trends were the same on windy and light wind nights actually means that the heat content changes in the boundary layer were different!

We find that

“…if the nocturnal boundary layer heat fluxes change over time, the trends of temperature under light winds in the surface layer will be a function of height, and that the same trends of temperature will not occur in the surface layer on windy and light wind nights.”

The abstract of this new GRL paper states

“Long-term climate trends of surface air temperature should not be expected to have the same trends for light wind and stronger wind nights, even if the trends in the boundary layer heat fluxes were the same. Parker [2004] segmented observed surface temperature data into lighter and stronger wind terciles in order to assess whether the reported large-scale global-averaged temperature increases are attributable to urban warming. We conclude, however, that trends at an individual height depend on wind speed, thermodynamic stability, aerodynamic roughness, and the vertical gradient of absolute humidity. We present an analysis to illustrate why temperature values at specific levels will depend on wind speed, and with the same boundary layer heat content change, trends in temperature should be expected to be different at every height near the surface when the winds are light, as well as different between light wind and stronger wind nights. This introduces a complexity into the assessment of long-term surface temperature trends that has not been previously recognized.”
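The physical point can be sketched with a toy calculation (my own illustration; the numbers and mixing depths are assumed, not taken from the GRL paper). With an identical heat content change per unit area, a shallow light-wind boundary layer and a deep windy boundary layer must show different layer-mean temperature changes:

```python
# Toy sketch: identical heat content change, different mixing depths.
# Constants and depths are illustrative assumptions, not from the paper.
RHO = 1.2    # air density, kg m^-3
CP = 1004.0  # specific heat of air at constant pressure, J kg^-1 K^-1

def mean_dT(dQ, depth):
    """Layer-mean temperature change (K) for a heat content change
    dQ (J m^-2) spread over a mixed layer of the given depth (m)."""
    return dQ / (RHO * CP * depth)

dQ = 1.0e5  # the same heat content change on both nights, J m^-2
light_wind = mean_dT(dQ, depth=100.0)   # shallow nocturnal boundary layer
windy = mean_dT(dQ, depth=1000.0)       # deep, well-mixed boundary layer
print(light_wind / windy)  # the shallow layer warms 10x as much
```

The converse also holds: equal temperature trends at a fixed observation height on light-wind and windy nights, as Parker reported, imply different boundary layer heat content changes.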

This research raises further questions as to the value of using surface air temperature data to assess global warming as well as the conclusions of the Parker Nature paper. See the weblog of July 11, 2005 entitled The Globally-Averaged Surface Temperature Trend – Incompletely Assessed? Is It Even Relevant? for other serious problems with using surface temperature data for this purpose.

Comments Off

Filed under Climate Change Metrics

Has The 2005 National Research Council Report “Radiative Forcing Of Climate Change: Expanding The Concept And Addressing Uncertainties” Been Reported In The Media?

Apparently not. While I have referred to this report numerous times in my weblog, an on-line search does not find a single reference to this important national report in the media. The major conclusion, listed below, represents a significant broadening of climate change science that is very policy relevant, but it is being ignored by the media.

“Despite all these advantages, the traditional global mean TOA radiative forcing concept has some important limitations, which have come increasingly to light over the past decade. The concept is inadequate for some forcing agents, such as absorbing aerosols and land-use changes, which may have regional climate impacts much greater than would be predicted from TOA radiative forcing. Also, it diagnoses only one measure of climate change—global mean surface temperature response—while offering little information on regional climate change or precipitation. These limitations can be addressed by expanding the radiative forcing concept and through the introduction of additional forcing metrics. In particular, the concept needs to be extended to account for (1) the vertical structure of radiative forcing, (2) regional variability in radiative forcing, and (3) nonradiative forcing.” (http://www.nap.edu/books/0309095069/html/4.html).

The only interpretation I arrive at for the neglect of this report by the media is that this National Research Council report, which further confirms the complexity of the climate system, including the human influence on it, is inconvenient for those who view the radiative effect of human-caused CO2 increases as the dominant forcing of climate change.

Comments Off

Filed under Climate Science Reporting

Climate Modeling Questions

The website RealClimate had a very informative set of questions from Tom Cole, with answers from Gavin Schmidt. RealClimate provides a valuable service by framing these issues in a Q&A format. I provide my perspective on the questions below in order to add to this discussion.

1. What schemes are you using for solving the partial differential equations? Are they free of numerical errors?

No model is free of numerical errors.

Climate models require the accurate simulation of the ocean, atmosphere, land, and continental ice. Physical, chemical, and biological processes must be included. In the atmospheric and ocean components of these models, only the pressure gradient force and advection are represented in terms of fundamental concepts. This part of the models is referred to as the “dynamic core.” All other processes in these models are parameterized (e.g., turbulence, cloud and precipitation, short- and long-wave radiative fluxes).
The dynamical core of the models has been represented with finite difference and spectral methods; the latter is typical for global models, while regional climate models generally have applied finite differencing. For spatial scales less than 4 grid increments (or the equivalent in a spectral model), there is always serious numerical error (in terms of the preservation of amplitude and/or phase). For finite difference models, this is discussed in detail in Chapter 10 “Methods of Solution” of Pielke, R.A., Sr., 2002: Mesoscale meteorological modeling. 2nd Edition, Academic Press, San Diego, CA, 676 pp.

This inability of models to skillfully simulate the smallest features within the grid structure is why the term “resolution” should be reserved to refer to spatial scales of at least 4 grid increments in each direction. This limitation applies to both finite difference and spectral models (see Pielke, R.A., 1991: A recommended specific definition of “resolution”, Bull. Amer. Meteor. Soc., 12, 1914; Pielke Sr., R.A., 2001: Further comments on “The differentiation between grid spacing and resolution and their application to numerical modeling”. Bull. Amer. Meteor. Soc., 82, 699; and Laprise, R., 1992: The resolution of global spectral models. Bull. Amer. Meteor. Soc., 9, 1453-1454).
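A minimal sketch of this numerical error (my illustration, not taken from the cited references): for linear advection, a second-order centered finite-difference scheme propagates a wave of wavenumber k at the fraction sin(kΔx)/(kΔx) of the true phase speed, so the 2Δx wave does not propagate at all, and even the 4Δx wave moves well below the correct speed:

```python
import math

# Relative phase speed of a second-order centered-difference advection
# scheme: c_numerical / c_true = sin(k*dx) / (k*dx).
def phase_speed_ratio(wavelength_in_dx):
    """Ratio of numerical to true phase speed for a wave whose
    wavelength is the given multiple of the grid increment dx."""
    k_dx = 2.0 * math.pi / wavelength_in_dx
    return math.sin(k_dx) / k_dx

for n in (2, 4, 10, 20):
    # prints ~0.0 for the 2*dx wave and about 0.637 for the 4*dx wave;
    # only wavelengths of many grid increments approach the true speed
    print(n, round(phase_speed_ratio(n), 3))
```

This is one simple reason why features near the grid scale should not be regarded as “resolved.”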

Parameterizations used in the models have been vertical (i.e., one-dimensional) column or box models, and always include adjustable, tunable coefficients and functions. They are engineering codes which are calibrated against observations, sometimes in conjunction with higher resolution models, usually from what are often referred to as “golden days.” Golden days are selected for their ideal conditions in order to best fit the theoretical framework of the parameterization. Since parameterizations are applied in the climate models to situations for which they were not calibrated, there certainly are errors, but of an unknown magnitude.

A PowerPoint presentation that overviews these issues is available (Pielke, R.A., Sr., 2004: The Limitations of Models and Observations. COMET COMAP Symposium 04-1 on Planetary Boundary Layer Processes, Boulder, Colorado, June 21-25, 2004).

2. Have you made tests to determine if the model results depend on resolution? In other words, have you increased the detail sufficiently so that the results are no longer dependent upon the size of an individual grid box?

Model results are always dependent on the grid increments used. It is unreasonable to expect the one-dimensional column and box parameterizations to accurately represent real-world three-dimensional features that are spatially smaller than can be resolved by the model grid increments.

That resolution matters is shown quantitatively in the paper Castro, C.L., R.A. Pielke Sr., and G. Leoncini, 2005: Dynamical downscaling: Assessment of value restored and added using the Regional Atmospheric Modeling System (RAMS). J. Geophys. Res. – Atmospheres, 110, No. D5, D05108, doi:10.1029/2004JD004721. In that paper, Table 5 presents a suite of regional climate experiments for horizontal grid intervals with 50 km, 100 km and 200 km spacing. The degradation in model skill as the horizontal increment is increased is shown.

Another issue that the global climate models have not adequately addressed is how well they perform when initialized in a numerical weather prediction mode with respect to such atmospheric features as extratropical and tropical cyclogenesis. This test is a necessary condition of the accuracy of both the dynamics and the parameterizations within a model. Since some global climate models have a parentage in numerical weather prediction code, this would be straightforward for them to evaluate with the code as adapted for long-term climate simulations. Clearly, a global model that is superior to others when run as a weather forecast, with observed initial conditions, will be a superior climate model, as this means its dynamics and parameterizations are more accurate. Such comparison experiments starting from initial conditions have not, to my knowledge, been performed and documented in the literature. This approach would be an extension of the Atmospheric Model Intercomparison Project (AMIP) comparisons (i.e., as discussed, for example, in Research Activities in Atmospheric and Oceanic Modeling, J. Cote, Ed.).

3. What are the dominant external forcing functions?

Figure 1-2 in the 2005 National Academy report defines natural forcings as those from the Sun, from the Earth’s orbital characteristics, and from volcanoes. “Natural” as defined here is meant to include forcings which reside external to the climate system. With this definition, the human climate forcings are not external forcings.

4. What are the sources of intrinsic variability?

We do not know all of the reasons for the intrinsic (internal) variability of the climate system. Gavin has clearly identified some of them. However, the paper by Rial, J., R.A. Pielke Sr., M. Beniston, M. Claussen, J. Canadell, P. Cox, H. Held, N. de Noblet-Ducoudre, R. Prinn, J. Reynolds, and J.D. Salas, 2004: Nonlinearities, feedbacks and critical thresholds within the Earth’s climate system. Climatic Change, 65, 11-38, illustrates observed examples, on a variety of time scales, of sudden and rapid climate transitions which we do not adequately understand. One major conclusion, however, is that intrinsic, complex variability results from interfacial nonlinear interactions between the components.

5. How do errors in estimating the forcing functions, or in simulating the internal variability impact the results?

I agree with Gavin that this is a good question. However, until we include all of the first-order climate forcings and feedbacks, as well as successfully model sudden climate transitions, we have large remaining errors of an unknown magnitude. We also have to show prediction skill in the quasi-linear global and regional long-term trends of important climate metrics (regional precipitation, regional layer-averaged tropospheric temperatures, etc). Whether or not we agree that the models have shown skill in reproducing global temperature averages, they certainly have not demonstrated regional skill for the spectrum of important climate metrics.

These first-order climate forcings are identified in the National Research Council report “Radiative Forcing of Climate Change: Expanding the Concept and Addressing Uncertainties”, while the first-order climate feedbacks are identified in “Understanding Climate Change Feedbacks”. Sudden climate transitions are discussed in the National Research Council report “Abrupt Climate Change: Inevitable Surprises” .

6. The minimum amount of observed data that you have to reproduce in order to gain some confidence in your model is that you have to reproduce periods of time when temperatures are increasing and when they are decreasing. Have you queried the model as to what the dominant mechanism(s) is/are that caused the cooling? If so, is/are the mechanism(s) plausible? Can they be verified independently?

Gavin stated “This isn’t much of a test. The models are pretty stable in the absence of forcing changes (although there is some centennial variability as noted above, related mostly to ocean circulation/sea ice interactions).”

As illustrated in the National Research Council reports and in the Rial et al. paper, however, the observations show that the climate system is not “pretty stable” even without clear changes in the external forcing. If the models are unable to skillfully simulate and explain abrupt regional climate change, they are of limited use in describing our real risk from human-caused and natural climate change and variability. We need to move beyond “linear climate change” thinking.

Also, we need to move beyond global mean surface temperature (and even global mean tropospheric temperature) as the primary climate change metrics. This was a clear conclusion of the 2005 National Research Council report. We need to focus on climate metrics such as drought, growing season, and floods, for example, which are climate effects that directly impact society and the environment.

7. Have you tested the model against simplified analytical solutions? Are you able to accurately reproduce analytical results?

Gavin’s answer is correct. We need to use observations to test the models. This is one reason that there is considerable interest in using global and regional climate models to simulate prehistoric and historic climates.

8. How do you address the issue that models cannot be used to predict the future? In other words, models can only predict what might happen under a given set of conditions, not what will happen in the future.

The IPCC and US National Assessment results have been interpreted as predictions. To use the word ‘projection’ to indicate that they are different from a ‘prediction’ is a nuance that is lost on almost everyone. Indeed, Webster’s New World Dictionary (1988 edition) has as one definition of a projection “a prediction or advance estimate based on known data or observations; extrapolations.” A projection is a prediction! See also my July 15 and 22, 2005 weblogs entitled What Are Climate Models? What Do They Do? and Are Multi-decadal Climate Forecasts Skillful?, and my 2002 Climatic Change essay Overlooked issues in the U.S. National Climate and IPCC assessments on this subject.

We do have a serious problem in climate science in that the same individuals who perform the research are completing the climate assessment reports. This is equivalent to the authors of a research paper, and their close collaborators, reviewing their own submitted paper! When I served as Chief Editor of the Monthly Weather Review and Co-Chief Editor of the Journal of the Atmospheric Sciences, this type of procedure was never permitted. It should certainly not be allowed for the CCSP and IPCC reports, and, to the extent it is, those reports should be interpreted as advocacy documents and not a balanced review of climate science (see, for example, my October 4, 2005 weblog entitled “Overlooked Issues in Prior IPCC Reports and the Current IPCC Report Process: Is There a Change From the Past?”).

10. I have been working on the same code for over 27 years, and I can guarantee that it is not bug free. A debugger’s job is never done. How long has your code been in development?

The more serious error in the models is their incomplete representation of the climate system, including the accurate representation of all first-order climate forcings and feedbacks. We also need to know the sensitivity of the model results to the uncertainty in the parameterizations and to the spatial resolution used. Coding bugs, which, as anyone who has written code realizes, never completely disappear as a model is applied to new situations, are not a major problem with climate model simulations that I am aware of.

As my final comment, I want to add to Gavin’s closing remarks, reproduced below

“On a final note, an implicit background to these kinds of questions is often the perception that scientific concern about global warming is wholly based on these (imperfect) models. This is not the case. Theoretical physics and observed data provide plenty of evidence for the effect of greenhouse gases on climate. The models are used to fill out the details and to make robust quantitative projections, but they are not fundamental to the case for anthropogenic warming. They are great tools for looking at these problems though.”

Models are a powerful tool to better understand the climate system and to assess the sensitivity of the climate system to human and natural climate forcings. They have shown us that the radiative effect of the addition of greenhouse gases is a first-order climate forcing that alters our climate.

However, where I and others disagree with Gavin is with the statement that “The models are used to fill out the details and to make robust quantitative projections…”. What “details”, and what demonstration of “robust quantitative projections”? This blanket statement needs to be clarified. Even Mike MacCracken and colleagues, for example, published a paper in Nature in 2004 entitled “Reliable regional climate model not yet on horizon.” Unfortunately, the overselling of regional and global models as providing skillful (robust) projections, rather than as sensitivity simulations, adds to the existing politicization of climate science and provides justifiable criticism of the assessment reports that are published.

Comments Off

Filed under Q & A on Climate Science

Can Regional Models Be Used To Obtain Skillful Higher Spatial Resolution Climate Forecasts Decades Into The Future?

The answer is certainly NO.

An article on-line from the National Geographic entitled “No winter by 2105? New study offers grim forecasts for U.S.” is of considerable relevance to this question. This news report is based on the Proceedings of the National Academy of Sciences (PNAS) November 1, 2005 paper by Diffenbaugh et al. entitled “Fine-scale processes regulate the response of extreme events to global climate change”. This paper uses a regional model forced by a global climate model to produce “fine-scale” forecasts over the next century.

Should we accept the predictions from this study as skillful, given that to test against reality we have to wait 100 years?

As we have posted in our weblogs of July 15 and 22, 2005 (“What are Climate Models? What do they do?” and “Are Multi-decadal Climate Forecasts Skillful?”), a necessary condition for skillful long-term climate forecasts is that all first-order climate forcings and feedbacks be included in the global climate model. This is not the case with the PNAS article. Thus the global model is inadequate for producing skillful predictions, even before downscaling with a regional climate model.

There is another serious problem with this modeling approach, which is true even if we had accurate large-scale information. Not only is the global model deficient in its inclusion of climate forcings and feedbacks, such that information that is passed into the regional model through the lateral boundaries is deficient, but the regional model itself will not be able to retain the large-scale structure of the global model unless the regional model domain is small such that the lateral boundary conditions are close to the center of the regional model. The only value-added, therefore, of the regional model is its improved spatial resolution of surface forcing including terrain. However, to the extent these smaller-scale features are dependent on the lateral boundary conditions, they will degrade in accuracy.

In our paper,

Castro, C.L., R.A. Pielke Sr., and G. Leoncini, 2005: Dynamical downscaling: Assessment of value restored and added using the Regional Atmospheric Modeling System (RAMS). J. Geophys. Res. – Atmospheres, 110, No. D5, D05108, doi:10.1029/2004JD004721,

we concluded that regional climate models (in which the initial conditions are forgotten; Type 2 through Type 4 regional model applications as presented in Table 1 in Castro et al.) do not retain the larger-scale atmospheric structure except for very small regional domains, even when real-world observed larger-scale analyses (a Type 2 application) are inserted as lateral boundary conditions into the regional model.

The utility of the regional climate model is to resolve smaller-scale features which have a greater dependency on the surface boundary, but skill degrades if this surface forcing is significantly dependent on the lateral boundaries. In contrast, a Type 1 regional model application has observed atmospheric initial conditions, which provide skillful fine-scale forecasts until the initial conditions are forgotten (a numerical weather prediction application). Type 1 forecasts must, therefore, be more skillful than Type 2-4 regional model applications.

Thus downscaling from a multi-decadal global climate forecast using a regional climate model does not add appropriate insight for use by policymakers. While the higher spatial resolution features shown in model plots present the appearance of greater accuracy, there is no added skill over what is achieved from the global model. Unfortunately, the authors of this paper neglected to refer to (or seek to refute) our peer-reviewed paper in presenting their results and conclusions. This oversight by the climate community of the limitations of using regional climate models to downscale from multi-decadal global climate model predictions needs to be remedied.

Comments Off

Filed under Climate Models

Is Climate Prediction Sensitive To Initial Conditions?

Since there has been so much interest in the topic of the “butterfly effect”, a weblog on the sensitivity of climate prediction to initial conditions is warranted.

The answer to the question posed on today’s weblog, of course, is YES.

With respect to weather prediction, the importance of initial conditions is universally accepted. As just one example, we can refer to their importance in hurricane track forecasts, where the size of the initial hurricane vortex, its initial motion, and its intensity each matter in terms of its subsequent motion. These are large enough perturbations to upscale (unlike a butterfly’s flapping wings!). Weather also exhibits chaotic behavior such as when slight differences in large-scale flow patterns can determine whether baroclinic cyclogenesis occurs or not.

For climate prediction, however, the existence of two definitions of “climate” complicates the discussion. The term “climate” has been used to mean long-term weather statistics, but also the coupled atmosphere, hydrosphere, lithosphere, and biosphere (see the weblog posting for July 29th entitled “What is climate change?”).

The use of long-term weather statistics to mean “climate”, however, is an atmospheric-centric view. Weather statistics, as the definition of “climate”, have traditionally been limited to physical variables such as temperature and precipitation, and do not even include atmospheric chemical composition (see the AMS definition of “climate”).

The distinction is important. In the atmospheric-centric view, the ocean, land, and continental ice are often treated as prescribed boundaries. This places a constraint on the “climate” prediction, since the interactions with these surfaces are reduced or even ignored. With the more inclusive definition of climate, there are interfacial, nonlinear fluxes between the atmosphere, oceans, land, and continental ice. That is, there are no true boundaries.

This subject is discussed in my essay – Pielke, R.A., 1998: Climate prediction as an initial value problem. Bull. Amer. Meteor. Soc., 79, 2743-2746. In that essay, I concluded that “as a result of the variety of significant ocean-atmosphere-land surface interactions, model-based forecasts of future climate should be viewed as sensitivity analyses rather than as reliable predictions.”
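As a minimal sketch of sensitivity to initial conditions (my own illustration, using the classic Lorenz (1963) system rather than anything from the essay), two trajectories started a tiny distance apart diverge by many orders of magnitude:

```python
# Lorenz (1963) system integrated with forward Euler; the parameters are
# the standard sigma=10, rho=28, beta=8/3. An illustrative toy only, not
# a climate model.
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)   # perturbed initial condition, 1e-8 away
for _ in range(3000):        # integrate 30 time units
    a, b = lorenz_step(a), lorenz_step(b)
separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print(separation)  # many orders of magnitude larger than the initial 1e-8
```

The same qualitative behavior in the coupled climate system is why the initial state of the non-atmospheric components matters for prediction.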

A specific example, on a seasonal time scale, of the sensitivity of a climate prediction to the initial soil moisture content (i.e., a non-atmospheric variable) as it affects growing-season weather is presented in Pielke Sr., R.A., G.E. Liston, J.L. Eastman, L. Lu, and M. Coughenour, 1999: Seasonal weather prediction as an initial value problem. J. Geophys. Res., 104, 19463-19479. In this paper, we concluded

“…that the seasonal evolution of weather is dependent on the initial soil moisture and landscape specification. Coupling this model to a land-surface model, the soil distribution and landscape are shown to cause a significant nonlinear interaction between the vegetation growth and precipitation. These results demonstrate that seasonal weather prediction is an initial value problem. Moreover, on seasonal and longer term timescales the surface characteristics such as soil moisture, leaf area index, and landcover type must be treated as dynamically evolving dependent variables, instead of prescribed variables.”

See also Lu, L., R.A. Pielke, G.E. Liston, W.J. Parton, D. Ojima, and M. Hartman, 2001: Implementation of a two-way interactive atmospheric and ecological model and its application to the central United States. J. Climate, 14, 900-919.

What is missing from the IPCC and US National Assessments is the recognition that climate is not atmospheric-centric (or even physical ocean-atmosphere centric), but significantly involves the other components of the climate system as both forcings and feedbacks. Ocean plankton distributions, fresh water river and sediment discharge into the oceans, and land-cover/land-use are just a few examples of climate variables that need to be initialized in the non-atmospheric components and that involve interfacial, nonlinear fluxes, but whose importance has been ignored or understated.

When we learn of “projections”, “forecasts”, and “predictions” of climate decades into the future (e.g., see “No Winter by 2105? New Study Offers Grim Forecast for the U.S.”), we should first assess whether the suite of model simulations used to create the envelope of predicted future climate has included the spectrum of initial climate conditions, which must include the non-atmospheric components. If they have not (which is the case for all existing modeling studies of this type), the value of such studies is as sensitivity experiments, and they should not be presented, as the National Geographic has done, as forecasts.

Comments Off

Filed under Climate Science Misconceptions

Is Global Warming the Same as Climate Change?

Is Global Warming the same as Climate Change?

Readers of the Climate Science weblog know that the answer to this question is a definitive NO.

However, the media frequently use the two terms interchangeably. A Google search provides ready examples of the intermixing of the two terms. For example, see http://www.cln.org/themes/global_warming.html where a “global warming/climate change theme page” is presented for educational purposes.

At http://www.ec.gc.ca/climate/home-e.html, the Kyoto Protocol has been promoted as a priority with respect to climate change, where it is clear that global warming and climate change are being interpreted as interchangeable. From this Environment Canada website,

“Climate change is one of the most significant environmental challenges the world has ever faced. We are already seeing the effects of climate change in Canada. The potential impacts on our health, economy and environment require us to take action.
With the ratification of the Kyoto Protocol, the Government of Canada has made climate change a national priority, and is working closely with Canadians and the global community to meet this challenge.”

The focus of this effort to “control” global warming is with respect to the reduction of well-mixed greenhouse gas emissions, particularly CO2.

However, as shown in the 2005 National Research Council report “Radiative forcing of climate change: Expanding the concept and addressing uncertainties. Committee on Radiative Forcing Effects on Climate Change, Climate Research Committee, Board on Atmospheric Sciences and Climate, Division on Earth and Life Studies, The National Academies Press, Washington, D.C.”, which has been mentioned repeatedly on our web site, climate change is much broader than global warming.

Global warming is defined by a positive accumulation of heat (Joules) in the climate system, most of which occurs in the oceans (see Pielke Sr., R.A., 2003: Heat storage within the Earth system. Bull. Amer. Meteor. Soc., 84, 331-335). While surface temperature has also been used to define this heating, its use in this context has a range of problems, as we have discussed in our papers and in earlier weblogs, with more to follow.
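As a simple hedged sketch (my own illustration; the anomaly profile is invented), the heat content change of an ocean column, in Joules per square meter, is the seawater density times its specific heat times the depth-integrated temperature anomaly:

```python
# Column heat content anomaly: Q = rho * c_p * sum(dT * dz), in J m^-2.
# The density, specific heat, and anomaly profile are rough assumed values.
RHO_SEA = 1025.0  # seawater density, kg m^-3
CP_SEA = 3990.0   # seawater specific heat, J kg^-1 K^-1

def column_heat_anomaly(dT_layers, dz_layers):
    """Heat content anomaly (J m^-2) from per-layer temperature
    anomalies (K) and layer thicknesses (m)."""
    return RHO_SEA * CP_SEA * sum(t * z for t, z in zip(dT_layers, dz_layers))

# A uniform 0.1 K warming over the top 700 m, in seven 100 m layers:
q = column_heat_anomaly([0.1] * 7, [100.0] * 7)
print(q)  # about 2.9e8 J per square meter of ocean surface
```

A temperature trend alone, without the depth over which it applies, does not determine the Joules accumulated, which is one reason heat content rather than surface temperature is the more direct measure of global warming.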

Human-caused climate change, however, involves forcings beyond the radiative forcing of the well-mixed greenhouse gases. As summarized in the 2005 NRC report, this includes the multiple influences of aerosols and of biogeochemically active gases, and land-cover changes. The regional changes from these forcings must be considered, even if there were no global warming from these forcings.

By conflating the terms “global warming” and “climate change”, we misinform policymakers, by leading them to believe that the radiative effect of the well-mixed greenhouse gases is the only major forcing of human-caused climate change. It is not. Dealing with climate change is a much more difficult issue than is captured by focusing on global warming.

Comments Off

Filed under Climate Science Misconceptions

A New Paper on the Role of Vegetation Dynamics in the Climate System

As we discussed in our weblogs of October 14 and September 12, for example, vegetation dynamics exerts a major effect on the climate system. A new paper by Notaro et al., entitled “Simulated and Observed Preindustrial to Modern Vegetation and Climate Changes”, has just appeared in the Journal of Climate and provides further support for this perspective. An extract from the conclusions states

“A fully coupled atmosphere-ocean-land-surface model with dynamic vegetation has been used here to simulate changes in climate and vegetation due to rising CO2 from preindustrial to modern times, as well as to diagnose the separate radiative and physiological effects. The model reproduced broad aspects of the natural (excluding land use) biome distribution as well as seasonal shifts in vegetation, despite overprediction of forest in many areas and an excessive simulated area of polar desert. Despite its biases, the fully coupled model represents an advance compared with models of intermediate complexity (e.g., Brovkin et al. 2002) in simulating a wide range of feedbacks among the atmosphere, biosphere, ocean, and cryosphere, including biogeophysical feedbacks associated with the effects of CO2 on plant physiology.”

Also, that

“In addition, the observed climate and vegetation records contain signatures of the effects of anthropogenic land use and aerosols, making it difficult to determine the specific impact of rising carbon dioxide levels and climate change. “

This paper provides additional evidence of the complexity of the climate system response to the diversity of human-caused and natural climate forcings.

Comments Off

Filed under Climate Change Forcings & Feedbacks

Will 2005 Be The Hottest Year On Record?

The short answer is that, through September, 2005 certainly is at least among the warmest years when we evaluate the tropospheric and ocean heat anomalies. However, it is not yet distinctly the warmest on record with respect to the tropospheric data discussed below.

The following headline motivated this weblog. It appeared on October 13, 2005 in a Washington Post article by Juliet Eilperin, “World Temperatures Keep Rising With a Hot 2005”, with the starting text

“New international climate data show that 2005 is on track to be the hottest year on record, continuing a 25-year trend of rising global temperatures.”

This claim is based on the surface air temperature record which, as we have reported on this weblog (see the July 11, 2005 entry entitled “The Globally-Averaged Surface Temperature Trend- Incompletely Assessed? Is It Even Relevant?”; a new posting on this subject will appear soon), has major problems when used to assess long-term temperature trends. Moreover, it is not even the most appropriate metric to evaluate global warming (the ocean heat content changes are; see our weblog of September 25, 2005 entitled “Is Global Warming Spatially Complex?”).

Nonetheless, this weblog expands on the Washington Post article and asks whether the tropospheric temperatures are the warmest on record. We report here on tropospheric temperature trends and on the ranking of 2005 (through September) relative to earlier years, using the information freely available from the Climate Diagnostics Center’s (CDC) website. (As we have stated before, this is an excellent, readily available climate resource.) The analysis we present below, of course, should also be compared with other Reanalyses (ERA-40) and satellite assessments (e.g., the UAH and RSS MSU satellite evaluations).

From the CDC web site, we are using the NCEP/NCAR Reanalysis product. We have utilized this Reanalysis for several of our papers in order to assess global and regional tropospheric temperature trends as well as to compare with other analysis products (e.g., Chase, T.N., R.A. Pielke Sr., J.A. Knaff, T.G.F. Kittel, and J.L. Eastman, 2000: A comparison of regional trends in 1979-1997 depth-averaged tropospheric temperatures. Int. J. Climatology, 20, 503-518.) We have developed confidence in its robustness.

In our previous evaluations we concluded that layer-averaged tropospheric temperatures are a more appropriate tool to evaluate trends than the temperature at a single level. We used the Reanalysis data only from 1979 onward, since this is when satellite-derived temperature information became available globally. The earlier period (back to 1948) was cooler, as is clearly evident in the NCEP/NCAR Reanalysis and as we reported in Pielke et al. (1998a) and Pielke et al. (1998b); but we cannot be certain that this cooler period was not, at least in part, also a result of differences in the available data.

The difference in the heights of two pressure surfaces (referred to as “thickness”) is dependent on the layer-averaged temperature between them, which is calculated from all of the available temperatures between the pressure height levels. The warmer the layer between the two pressure surfaces, the greater the thickness. Since the CDC website does not conveniently provide thickness, however, the discussion below uses pressure altitude as the metric to determine the layer mean temperature between that level and the surface of the Earth. When averaging globally, this does provide a robust measure of the layer-averaged temperature below the selected pressure altitude.
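The thickness-temperature relationship described above is the hypsometric equation. As a sketch of the proportionality (the physical constants are standard, but the temperature values are illustrative and are not taken from the Reanalysis):

```python
import math

# Hypsometric equation: the thickness of the layer between two pressure
# surfaces is proportional to the layer-mean temperature of that layer:
#   dz = (Rd * T_mean / g) * ln(p_lower / p_upper)
RD = 287.05   # gas constant for dry air, J kg^-1 K^-1
G0 = 9.80665  # standard gravity, m s^-2

def thickness(p_lower_hpa, p_upper_hpa, t_mean_k):
    """Thickness (m) between two pressure levels for a given layer-mean temperature (K)."""
    return (RD * t_mean_k / G0) * math.log(p_lower_hpa / p_upper_hpa)

# A warmer layer gives a greater thickness: 1 K of layer-mean warming
# raises the 1000-500 hPa thickness by roughly 20 m.
z_cold = thickness(1000.0, 500.0, 252.0)  # illustrative layer-mean temperatures
z_warm = thickness(1000.0, 500.0, 253.0)
print(round(z_warm - z_cold, 1))
```

This is why, when averaged globally, a pressure altitude serves as a robust proxy for the layer-averaged temperature below it.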

For our analysis below, the rankings of several globally-averaged pressure altitude anomalies are presented for the period since 1979.

For the January-September time period, 2005 was the second warmest (1998 was warmer) for the layers from 300 hPa to the surface and from 500 hPa to the surface. The layer from 700 hPa to the surface was tied with 1998 as the warmest. The layer from 850 hPa to the surface was tied with 1998 and 2003 (although further back in the record, the early 1950s and late 1940s were appreciably warmer than the more recent years).

At 500 hPa, the highest globally-averaged pressure altitude in January-September 2005 was about 0.26% higher than the long-term highest values of the 1980s and 1990s before 1998. As a reference, the globally-averaged monthly 500 hPa pressure altitudes varied by about 1% within the January to September 2005 period.

The change between the highest globally-averaged 500 hPa pressure altitude in 2005 and the longer-term highest average in the 1980s and 1990s before 1998 corresponds to about a 0.75°C layer-averaged warming. To place this into context, the change during January to September 2005 in the maximum globally-averaged 500 hPa pressure altitude corresponds to about a 3°C variation within this 9-month time period. That is, the globally-averaged tropospheric temperatures below 500 hPa varied within this past year by over 4 times what is recorded as the longer-term change.
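A back-of-envelope check of these numbers follows from the hypsometric relation: the fractional change in a pressure altitude equals the fractional change in the layer-mean temperature below it. Assuming an illustrative layer-mean temperature of 270 K (this value is an assumption for the sketch, not a number taken from the Reanalysis), the quoted percentage changes translate into temperature changes of the same order as those given above:

```python
# From the hypsometric relation Z = (Rd * T_mean / g) * ln(p_surface / p),
# a fractional change in pressure altitude equals the fractional change
# in the layer-mean temperature: dT / T_mean = dZ / Z.
T_MEAN = 270.0        # assumed layer-mean temperature below 500 hPa, K (illustrative)
frac_trend = 0.0026   # ~0.26% long-term change in the 500 hPa pressure altitude
frac_annual = 0.01    # ~1% variation within January-September 2005

dT_trend = frac_trend * T_MEAN    # ~0.7 K, same order as the ~0.75°C quoted above
dT_annual = frac_annual * T_MEAN  # ~2.7 K, same order as the ~3°C quoted above
print(round(dT_trend, 2), round(dT_annual, 2))
```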

(The 500 hPa plot was obtained from the NOAA CDC website; the other data for the other pressure altitudes can be extracted from the CDC website in a similar manner).

In terms of long-term trends, there is no significant warming evident in the pressure height at 850 hPa. This contradicts the surface warming trend that the Washington Post article reported. At 700 hPa and 500 hPa, there is a warming trend, particularly in the last few years. At 300 hPa, the warming trend is very slight, with only 2005 clearly warmer than the last few years.

This overview of the globally-averaged tropospheric layer-averaged temperatures presents a more complex picture than the simple conclusion reported in the Washington Post. A balanced article would not focus only on the surface data to assess long-term temperature trends.

Moreover, as we have discussed on this weblog (e.g., see “What is the Importance to Climate of Heterogeneous Spatial Trends in Tropospheric Temperatures” ), it is not the globally-averaged surface, nor globally-averaged tropospheric temperature trends that are important in terms of how the weather, and other aspects of the climate system, are affected. It is the regional anomaly pattern.

The CDC website provides this perspective. A plot of the regional anomalies for the January to September 2005 time period with respect to the long-term averages, for instance, can be constructed from http://www.cdc.noaa.gov/cgi-bin/Composites/comp.pl (note that you need to set the time period (January to September), select the pressure altitude (check 500 hPa to relate to the globally-averaged values referred to earlier in the weblog), and click “anomaly”). This regional assessment shows large areas of both positive and negative temperature anomalies, including large areas of significant cold anomalies in January-September 2005. These anomalies clearly are associated with regional weather variations, which are what we should be focusing on in terms of long-term climate trends, rather than surface globally-averaged temperature trends.

This important climate change issue, unfortunately, is consistently ignored in media reports such as the Washington Post article, despite the subject being highlighted in the 2005 National Research Council report “Radiative forcing of climate change: Expanding the concept and addressing uncertainties”, Committee on Radiative Forcing Effects on Climate Change, Climate Research Committee, Board on Atmospheric Sciences and Climate, Division on Earth and Life Studies, The National Academies Press, Washington, D.C.

Comments Off

Filed under Climate Change Metrics

Is the Biogeochemical Effect of Increased CO2 on the Climate System a First-Order Climate Forcing? Is it at Least as Important as the Radiative Forcing of CO2 in Influencing Climate Change?

Biogeochemical forcing involves changes in vegetation biomass and soils (http://www.nap.edu/books/0309095069/html/96.html) which result in changes in the climate.

The answer to the first question is a definitive YES on both the regional and global scales. The answer to the second question also appears to be YES, although further investigation is required to confirm this.

The evidence for this conclusion on the global scale comes from published papers (e.g., Cox, P. M., R. A. Betts, C. D. Jones, S. A. Spall, and I. J. Totterdell, 2000: Acceleration of global warming due to carbon-cycle feedbacks in a coupled climate model. Nature, 408, 184-187; Friedlingstein, P., L. Bopp, P. Ciais, J.-L. Dufresne, L. Fairhead, H. LeTreut, P. Monfray, and J. Orr, 2001: Positive feedback between future climate change and the carbon cycle. Geophys. Res. Lett., 28, 1543-1546). This conclusion, based on model process studies (see my weblog of July 15 entitled “What Are Climate Models? What Do They Do?”), is clearly articulated in the 2005 National Research Council report. These experiments tell us that this is an important climate forcing, but, of course, the published papers should not be interpreted as predictions.

On the regional scale, we found (Eastman, J.L., M.B. Coughenour, and R.A. Pielke, 2001: The effects of CO2 and landscape change using a coupled plant and meteorological model. Global Change Biology, 7, 797-815) that the radiative effect of doubling CO2 was swamped by the combined biophysical (changes in the fluxes of trace gases and heat between vegetation, soils, and the atmosphere) and biogeochemical effects of doubling CO2. The significant climate influences included decreased maximum surface air temperatures and increased minimum surface air temperatures during this growing-season simulation of the central Great Plains.

The reason for this large effect was that the increased atmospheric concentration of CO2 permitted the individual stomata on the leaves to be more water-efficient, which resulted in more plant growth than occurred in the model study when the current atmospheric concentrations of CO2 were prescribed. This effect was immediate, whereas the radiative effect of CO2 requires a feedback with a warming ocean so that the more important greenhouse gas, H2O, increases in the atmosphere. The doubled biophysical/biogeochemical CO2 experiment, however, resulted in greater atmospheric concentrations of H2O without any ocean feedback, since the larger amount of vegetation transpired more water vapor into the atmosphere than otherwise would have occurred. This added water vapor produced a greenhouse effect, which resulted in warmer nights. The greater amount of incident solar radiation that went into latent heat flux during the day, however, resulted in cooler daytime maximum temperatures.

This conclusion, of course, comes only from a process study for just one landscape type. It does strongly support, however, a first-order effect on the climate system due to the biogeochemical effect of increased atmospheric CO2. Clearly more work is needed to understand the multi-faceted effects of biogeochemical climate forcings, but we already know that this additional first-order effect further complicates our ability to provide skillful climate predictions to policymakers.

Readers of the weblog are invited to add published papers which support (or refute) the role of the biogeochemical effect of CO2 as a first-order climate forcing.

Comments Off

Filed under Climate Change Forcings & Feedbacks

More on the Butterfly Effect

In response to the variety of comments on the weblog of October 6, 2005 entitled “What is the Butterfly Effect”, I asked Associate Professor Richard Eykholt of the Department of Physics at Colorado State University to provide his perspective on the discussion. Professor Eykholt is an internationally respected expert on chaos and nonlinear dynamical systems. His website provides information on his excellent professional and academic credentials.

His response to my request (dated October 11, 2005) is reproduced here with his permission:

“Roger:

I think that you captured the key features and misconceptions pretty well. The butterfly effect refers to the exponential growth of any small perturbation. However, this exponential growth continues only so long as the disturbance remains very small compared to the size of the attractor. It then folds back onto the attractor. Unfortunately, most people miss this latter part and think that the small perturbation continues to grow until it is huge and has some large effect. The point of the effect is that it prevents us from making very detailed predictions at very small scales, but it does not have a significant effect at larger scales.

Richard Eykholt”

This summary should put to rest the misconception about the “butterfly effect.” In answer to the question posed in the original weblog on this subject, “Predictability: Does the Flap of a Butterfly’s Wings in Brazil Set Off a Tornado in Texas?”, the answer is absolutely no.
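Professor Eykholt’s point, that a tiny perturbation grows roughly exponentially only until it folds back onto the bounded attractor, can be sketched with a simple chaotic system. The logistic map below is, of course, only an illustrative analogue of chaotic dynamics, not an atmospheric model:

```python
# Two trajectories of the chaotic logistic map x -> 4 x (1 - x),
# started a tiny "butterfly-sized" distance apart. Their separation
# grows roughly exponentially at first, but it saturates: both states
# remain on the bounded attractor [0, 1], so the separation can never
# exceed the attractor's size.
def logistic(x):
    return 4.0 * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-12   # two states differing by a tiny perturbation
seps = []
for _ in range(60):
    x, y = logistic(x), logistic(y)
    seps.append(abs(x - y))

early_growth = seps[10] / seps[0]  # amplification over the first iterations
print(early_growth > 100.0)        # rapid, exponential-like initial growth
print(max(seps) <= 1.0)            # separation stays bounded by the attractor
```

The perturbation destroys detailed predictability at small scales, but its influence remains confined to the attractor, which is the distinction Professor Eykholt draws above.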

Roger A. Pielke Sr.

Comments Off

Filed under Climate Science Misconceptions