Monthly Archives: June 2008

The Effect Of Landscape Change Within The Climate System – A New Workshop

Our workshop report on the role of humans in the climate system appeared in late 2007:

Mahmood, R., K. G. Hubbard, and R. Pielke Sr. (2007), Effect of Human Activities on the Atmosphere, Eos Trans. AGU, 88(52), 580, doi:10.1029/2007EO520007.

The abstract reads,

“Detecting the Atmospheric Response to the Changing Face of the Earth: A Focus on Human-Caused Regional Climate Forcings, Land-Cover/Land-Use Change, and Data Monitoring; Boulder, Colorado, 27–29 August 2007; Human activities continue to significantly modify the environment. The impacts of these changes are highlighted, for example, in local-, regional-, and global-scale trends in modern atmospheric temperature records and other relevant atmospheric indicators. Studies using both modeled and observed data have documented these impacts. Thus, it is essential that we detect these changes accurately to better understand the impacts on climate and provide improved assessment of the predictability of future climate.”

The full EOS workshop report is also available (see).

A new meeting, scheduled for November 2008, will further present assessments of the role of land surface processes within the climate system. It is

“LUCID – Land-Use and Climate, IDentification of robust impacts”, organized by Nathalie de Noblet-Ducoudré and Andy Pitman.

This meeting will be a major, much-needed further report on this issue, but only if the participants assess climate metrics more completely than was done in the 2007 IPCC report.

In assessing the role of land use change, I have urged them to consider the three climate metrics that we have proposed:

1. The magnitude of the spatial redistribution of latent and sensible heating as we presented in

Chase, T.N., R.A. Pielke, T.G.F. Kittel, R.R. Nemani, and S.W. Running, 2000: Simulated impacts of historical land cover changes on global climate in northern winter. Climate Dynamics, 16, 93-105.

2. The magnitude of the spatial distribution of precipitation and moisture convergence as we reported in

Pielke, R.A. Sr., and T.N. Chase, 2003: A Proposed New Metric for Quantifying the Climatic Effects of Human-Caused Alterations to the Global Water Cycle. Presented at the Symposium on Observing and Understanding the Variability of Water in Weather and Climate.

and

3. The normalized gradient of regional radiative heating changes (which we did for aerosols in

Matsui, T., and R.A. Pielke Sr., 2006: Measurement-based estimation of the spatial gradient of aerosol radiative forcing. Geophys. Res. Letts., 33, L11813, doi:10.1029/2006GL025974).

In the aerosol evaluation, we found that, in terms of the gradient of atmospheric radiative heating, the role of human aerosol inputs was 60 times greater than the role of the human increase in the well-mixed greenhouse gases. This means that, with respect to the effect on atmospheric circulations, the aerosol effect (and, I anticipate, land use change also) has a much more significant role in the climate than is inferred when using global average metrics.
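To make the gradient metric concrete, here is a toy calculation. This is my own illustrative sketch, not the Matsui and Pielke (2006) analysis: the forcing fields, their amplitudes, and the simple mean-absolute-gradient measure are all invented for illustration. It shows how a regionally concentrated forcing produces a far larger spatial-gradient metric than a nearly well-mixed forcing with the same global mean.

```python
# Illustrative sketch: compare the spatial-gradient magnitude of a
# regionally concentrated forcing (e.g. aerosols) with a nearly
# well-mixed one (e.g. CO2) on a simple 1-D latitude band.
import numpy as np

lat = np.linspace(-90.0, 90.0, 181)   # degrees latitude, 1-degree grid
dy = 111e3                            # metres per degree of latitude (approx.)

# Hypothetical forcing fields in W/m^2 (invented for illustration):
regional = 3.0 * np.exp(-((lat - 30.0) / 10.0) ** 2)   # concentrated near 30N
well_mixed = regional.mean() + 0.05 * (lat / 90.0)     # same mean, tiny gradient

def mean_abs_gradient(field, spacing):
    """Mean absolute meridional gradient: a crude heterogeneity metric."""
    return np.abs(np.gradient(field, spacing)).mean()

ratio = mean_abs_gradient(regional, dy) / mean_abs_gradient(well_mixed, dy)
print(f"gradient ratio (regional / well-mixed): {ratio:.0f}")
```

The point of such a metric is that two forcings with identical global means can differ enormously in the horizontal heating gradients that drive atmospheric circulations.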

Also, the pre-industrial crop areas that are being used (see their figure of crop area), relative to current crops, are going to mute the signal in all areas, but most significantly in Asia, Africa and Europe. I have suggested that they complete a natural landscape set of runs. After all, the CO2 runs are made from “natural” to “current”, so the landscape change runs should be made consistent with that difference, even if not over the same time period. Both sets of experiments will be insightful, but the assessment is significantly incomplete if the natural landscape simulations are not also completed.

Their LUCID meeting should be a very important assessment. However, it is essential that they move beyond the standard global average procedure (as emphasized in the IPCC report) to evaluate the human role in the climate system, as well as complete the natural landscape simulations. Their assessment must also focus on how the heterogeneous effects of landscape change alter atmospheric and ocean circulation patterns (as suggested by items #1, #2 and #3). It is these regional responses, not a global average, that produce droughts, floods and other societally important climate impacts.

Comments Off

Filed under Climate Science Meetings

House Testimony of Roger A. Pielke Sr. “A Broader View of the Role of Humans in the Climate System is Required In the Assessment of Costs and Benefits of Effective Climate Policy”

This morning I testified on the climate issue to the House Subcommittee on Energy and Air Quality of the Committee on Energy and Commerce – the Honorable Rick Boucher, Chairman. The title of my presentation was “A Broader View of the Role of Humans in the Climate System is Required In the Assessment of Costs and Benefits of Effective Climate Policy”.

My oral testimony follows [the written testimony is available; see].

“The human addition of CO2 into the atmosphere is a first-order climate forcing. We need an effective policy to limit the atmospheric concentration of this gas. However, humans are significantly altering the climate system in a diverse range of ways in addition to CO2. The information that I am presenting will assist in properly placing CO2 policies into the broader context of climate policy.

Climate is much more than just long-term weather statistics; it includes all physical, chemical, and biological components of the atmosphere, oceans, land surface, and glacier-covered areas. In 2005, the National Research Council published the report “Radiative Forcing of Climate Change: Expanding the Concept and Addressing Uncertainties”, which documented that a human disturbance of any component of the climate system necessarily alters other aspects of the climate.

The role of humans within the climate system must, therefore, be one of the following three possibilities:

  • The human influence is minimal, and natural variations dominate climate variations on all time scales;
  • While natural variations are important, the human influence is significant and involves a diverse range of first-order climate forcings, including, but not limited to, the human input of CO2;
  • The human influence is dominated by the emissions into the atmosphere of greenhouse gases, particularly carbon dioxide.

My written testimony presents evidence that the correct scientific conclusion is that

The human influence on climate is significant and involves a diverse range of first-order climate forcings, including, but not limited to, the human input of CO2.

Modulating carbon emissions as the sole mechanism to mitigate climate change neglects the diversity of the other, important first-order human climate forcings. As a result, a narrow focus only on carbon dioxide to predict future climate impacts will lead to erroneous confidence in the ability to predict future climate, and thus costs and benefits will be miscalculated. CO2 policies need to be complemented by other policies focused on the other first-order climate forcings.

In addition, the 2005 National Research Council report concluded that a global average surface temperature trend offers little information on regional climate change. In other words, the concept of “global warming”, by itself, does not accurately communicate the regional responses to the diverse range of human climate forcings. Regional variations in warming and cooling, for example from tropospheric aerosols and landscape changes, have important regional and global impacts on weather, as concluded in the National Research Council report.

The human climate forcings that have been ignored, or are insufficiently presented in the IPCC [Intergovernmental Panel on Climate Change] and CCSP [US Climate Change Science Program] reports include:

  • The influence of human-caused aerosols on regional (and global) radiative heating
  • The effect of aerosols on clouds and precipitation
  • The influence of aerosol deposition (e.g., soot; nitrogen) on climate
  • The effect of land cover/land use on climate
  • The biogeochemical effect of added atmospheric CO2

Thus climate policy that is designed to mitigate the human impact on regional climate by focusing only on the emissions of CO2 is seriously incomplete unless these other first-order human climate forcings are included, or complementary policies for these other human climate forcings are developed. Moreover, it is important to recognize that climate policy and energy policy, while having overlaps, are distinctly different topics with different mitigation and adaptation options.

A way forward with respect to a more effective climate policy is to focus on the assessment of adaptation and mitigation strategies that reduce the vulnerability of important societal and environmental resources to both natural and human-caused climate variability and change. For example, restricting development in flood plains or in hurricane storm surge coastal locations is an effective adaptation strategy regardless of how climate changes.

In conclusion, humans are significantly altering the global climate, but in a variety of diverse ways beyond the radiative effect of carbon dioxide. The CCSP assessments have been too conservative in recognizing the importance of these human climate forcings as they alter regional and global climate. These assessments have also not communicated the inability of the models to accurately forecast future regional climate on multi-decadal time scales, since these other first-order human climate forcings are excluded. The forecasts, therefore, do not provide skill in quantifying the impact of different mitigation strategies on the actual climate response that would occur as a result of policy intervention with respect to only CO2.”


Filed under Climate Science Reporting

Comments On The Article By Palmer et al. 2008 “Toward Seamless Prediction: Calibration of Climate Change Projections Using Seasonal Forecasts”

UPDATE: JULY 10 2008: Unfortunately, Tim Palmer elected not to consider writing a guest weblog in response to the weblogs on his paper until possibly the Fall. This is yet another example of how, when science issues are raised regarding papers or assessments, constructive scientific discussion is avoided.  

Tim Palmer of the European Centre for Medium-Range Weather Forecasts (ECMWF) is an excellent scientist. He is Head of the Probability and Seasonal Forecasting Division at ECMWF. A brief overview of his credentials is that he “is a fellow of the Royal Society and of the American Meteorological Society, and has received awards from both of these societies. He is currently chairman of the Scientific Steering Group of the U.N. World Meteorological Organization’s Climate Variability and Predictability Project, and was lead author of the most recent assessment report of the Intergovernmental Panel on Climate Change.”

 Thus, any publication that he authors is worthy of discussion.

There is a new paper led by Dr. Palmer:

Palmer, T.N., F.J. Doblas-Reyes, A. Weisheimer, and M.J. Rodwell, 2008: Toward Seamless Prediction: Calibration of Climate Change Projections Using Seasonal Forecasts. Bull. Amer. Meteor. Soc., 89 (4), 459–470, doi:10.1175/BAMS-89-4-459.

Professor Hendrik Tennekes provided an excellent weblog on this paper yesterday (see), and I am adding my perspective today.

The Palmer et al paper is headlined with the text

“In a seamless prediction system, the reliability of coupled climate model forecasts made on seasonal time scales can provide useful quantitative constraints for improving the trustworthiness of regional climate change projections.”

The paper uses a schematic, reproduced below, to illustrate the subject

Figure 1: A schematic figure illustrating that the link between climate forcing and climate impact involves processes acting on different time scales. The whole chain is as strong as its weakest link. The use of a seamless prediction system allows probabilistic projections of climate change to be constrained by validations on weather or seasonal forecast time scales [reproduced with the permission of Tim Palmer].

“The figure shows a chain. One end of this chain represents humanity’s forcing of climate through emissions of greenhouse gases into the atmosphere. The other end of the chain represents the impact of this forcing in terms of regional climate change (temperature, precipitation, wind, and so on).”

“This is where the notion of seamless prediction can play a key role. It will be decades before climate change projections can be fully verified. However, our basic premise, illustrated by the schematic in Fig. 1, is that there are fundamental physical processes in common to both seasonal forecast and climate change time scales. If essentially the same ensemble forecasting system can be validated probabilistically on time scales where validation data exist, that is, on daily, seasonal, and (to some extent) decadal time scales, then we can modify the climate change probabilities objectively using probabilistic forecast scores on these shorter time scales.”
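The “probabilistic forecast scores” referred to here are scores such as the Brier score. The sketch below is my own illustration with invented hindcast data, not code from Palmer et al.; it shows how an ensemble's probabilistic seasonal forecasts might be scored against observed events and compared with a climatological baseline.

```python
# Illustrative Brier-score validation of probabilistic seasonal hindcasts.
# All data are invented; only the scoring logic is the point.
import numpy as np

rng = np.random.default_rng(0)

members, cases = 15, 40
# Hypothetical member outcomes: did each member forecast the event
# (e.g. seasonal-mean temperature in the upper tercile)?
member_events = rng.random((members, cases)) < 0.4
forecast_prob = member_events.mean(axis=0)   # ensemble probability per hindcast

# Hypothetical observed events (a fixed pattern, for reproducibility).
observed = (np.arange(cases) % 3 == 0)

def brier_score(p, o):
    """Mean squared error of probability forecasts; 0 is perfect, 1 is worst."""
    return float(np.mean((p - o.astype(float)) ** 2))

bs = brier_score(forecast_prob, observed)
bs_clim = brier_score(np.full(cases, observed.mean()), observed)
skill = 1.0 - bs / bs_clim                   # Brier skill score vs. climatology
print(f"Brier score: {bs:.3f}  (climatology: {bs_clim:.3f}, skill: {skill:.2f})")
```

A positive skill score means the ensemble beats a constant climatological probability; the paper's premise is that such scores, computed on time scales where verification data exist, can constrain confidence at longer ranges.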

However, the climate system has physical, biological and chemical interactions within and across each component of the system on all time scales (see Figure 1 above and Figure 2 below). As one seeks to predict, even probabilistically, further into the future, more of the slower feedbacks and forcings become important (as Professor Tennekes noted in his guest weblog). The forcings and nonlinear feedbacks that operate on time scales longer than seasonal cannot be tested by the methodology proposed in the BAMS paper.

Figure 2: Conceptual framework of climate forcing, response, and feedbacks. Examples of human activities, forcing agents, climate system components, and variables that can be involved in the climate response are provided in the lists in each box. (From NRC, 2005: Radiative Forcing of Climate Change: Expanding the Concept and Addressing Uncertainties. Board on Atmospheric Sciences and Climate, National Academy of Sciences, Washington, DC.)

Therefore, while I agree that evaluating model prediction performance on the seasonal time scale, by running the global models with initial conditions, is a very worthwhile and important goal (and deserving of support), it will not inform us of the skill of longer-term forecasts, where, for example, such aspects of the climate as sea surface temperature must be accurately predicted. With seasonal prediction, the sea surface temperatures retain a significant correlation with the values inserted at initialization.

Seasonal prediction itself is a difficult problem. We have examined this issue in our paper

Castro, C.L., R.A. Pielke Sr., J. Adegoke, S.D. Schubert, and P.J. Pegion, 2007: Investigation of the summer climate of the contiguous U.S. and Mexico using the Regional Atmospheric Modeling System (RAMS). Part II: Model climate variability. J. Climate, 20, 3866-3887.

As we conclude in this paper

“In order for RCMs [regional climate models] to be successful in a seasonal weather prediction mode for the summer season, it is required that the GCM [general circulation model] provide a reasonable representation of the teleconnections and have a climatology that is comparable to a global atmospheric reanalysis.”

In other words, unless the global model is realistic, as defined by comparing the model results to global reanalyses, a skillful regional-scale prediction is not possible.

This also means that longer-term predictions will be unable to produce skillful regional forecasts unless the global model accurately predicts the statistics of atmospheric and ocean circulation features (such as ENSO, the NAO, the PDO, etc.), as well as drought and other aspects of the climate system.

While I endorse the analysis of multimodel seasonal forecast reliability that Tim Palmer has been a pioneer in introducing, the claim that the verification of model skill on this time scale can provide confidence in the skill of longer-term (e.g., decadal and multi-decadal) climate prediction is not a robust scientific conclusion. If there is disagreement on this claim by Climate Science, then the only arbiter is to perform these long-term predictions for the coming years and validate whether or not they are reliable.


Filed under Climate Models

Seamless Prediction Systems by Hendrik Tennekes

Guest Weblog by Hendrik Tennekes, June 24, 2008

Roger Pielke graciously invited me to write a brief essay on an interesting technical detail in the World Summit document issued by WCRP. According to the document, all time scales, from hours to centuries, all regional details, everything related to prediction should be dealt with by GCM technology. In this context, the term “seamless prediction” is used. That caught my attention. Let me quote the relevant paragraph:

“Advances in climate prediction will require close collaboration between the weather and climate prediction research communities. It is essential that decadal and multi-decadal climate prediction models accurately simulate the key modes of natural variability on the seasonal and sub-seasonal time scales. Climate models will need to be tested in sub-seasonal and multi-seasonal prediction mode also including use of the existing and improved data assimilation and ensemble prediction systems. This synergy between the weather and climate prediction efforts will motivate further the development of seamless prediction systems.”

The current use of the concept of seamless prediction is explained in a recent paper by Tim Palmer and others, published in the Bulletin of the AMS (see Palmer, T.N., F.J. Doblas-Reyes, A. Weisheimer, and M.J. Rodwell, 2008: Toward Seamless Prediction: Calibration of Climate Change Projections Using Seasonal Forecasts. Bull. Amer. Meteor. Soc., 89, 459–470. ). I quote:

“If essentially the same ensemble forecasting system can be validated probabilistically on time scales where validation data exist, that is, on daily, seasonal, and (to some extent) decadal time scales, then we can modify the climate change probabilities objectively using probabilistic forecast scores on these shorter time scales.”

“We propose that if the same multimodel ensemble is used for seasonal prediction as for climate change prediction, then the validation of probabilistic forecasts on the shorter time scale can be used to improve the trustworthiness of probabilistic predictions on the longer time scale. This improvement would come from assessing processes in common to both the seasonal forecast and climate projection time scales, such as the atmospheric response to sea surface temperatures. To reiterate, our basic premise is that processes, such as air–sea coupling, that are relevant for the seasonal forecast problem also play a role in determining the impact of some given climate forcing, on the climate system itself. The calibration technique provides a way of quantifying the weakness in those links to the chain common to both seasonal forecasting and climate change time scales.”

Apparently, the idea behind this application of the seamless prediction paradigm is that the reliability of climate models can be improved if they are used as extended-range weather forecast models. Experimental verification, which is impossible in climate runs, then becomes feasible. With a bit of luck, certain types of shortcomings in the model formulation can be detected this way. This process may lead to climate codes with fewer systematic errors.

This sounds promising. Climate models are so complex that they cannot be verified or falsified until after the fact, if at all. Strictly speaking, they cannot be considered to be legitimate scientific products. Any methodology that would ameliorate this situation would be a step forward, however small and tentative. I am happy to grant Palmer et al. the benefit of the doubt as far as this point is concerned.

But I wonder how short-term calibration of a long-term tool might help to unravel the long-period irregularities in the climate system. The original meaning of the term “seamless prediction” was to express the idea that weather forecasting technology can be usefully extended to climate problems. The term was coined to consolidate the monopoly of GCM technology in all kinds of weather and climate forecasting. However, in the paper by Palmer et al. it refers to the reverse focus, where calibration is attempted by shrinking the time horizon. Alice gazing through the other side of the looking glass, as it were.

The tail wags the dog here. I know that dressed-up versions of weather forecast models are used to make climate prediction runs. I don’t mind too much, though this methodology hides a chronic, distressing lack of insight in the statistical dynamics of the General Circulation. I consider the seamless use of GCM technology a sign of intellectual poverty. Gone are the days of Jule Charney’s Geostrophic Turbulence, Ed Lorenz’ WMO monograph on the General Circulation, and Victor Starr’s early thoughts on Negative Eddy Viscosity Phenomena.

To turn the matter on its end is one step too far. Short- and medium-term forecast methods work quite well without an interactive ocean, interactive biosphere, interactive changes in the state of the world economy, and the like. I see no reason to burden a weather forecast model with the enormous complexity of climate models, and I see no way in which interactions of subordinate importance in weather forecasting can reliably be calibrated to improve crucial interactions in climate runs. I know I rub against the grain of the GCM paradigm, but so be it.

Palmer et al. also seem to forget that, though weather forecasting is focused on the rapid succession of atmospheric events, climate forecasting has to focus on the slow evolution of the circulation in the world ocean and slow changes in land use and natural vegetation. In the evolution of the Slow Manifold (to borrow a term coined by Ed Lorenz) the atmosphere acts primarily as stochastic high-frequency noise. If I were still young, I would attempt to build a conceptual climate model based on a deterministic representation of the world ocean and a stochastic representation of synoptic activity in the atmosphere.

One example I am familiar with is the North Atlantic storm track, which guides the surface winds that drive the Gulf Stream and help to sustain the thermohaline circulation in the world ocean. The kind of model I envisage deals with the slow evolution of the ocean circulation deterministically, but treats the convergence of the meridional flux of atmospheric eddy momentum the way turbulence modellers do. In this view, the individual extra-tropical cyclones that feed the momentum of the jet stream can be represented by stochastic parameterizations, but the jet stream itself is part of the deterministic code. In a more general sense, I claim that stochastic tools of the kind proposed by Palmer et al. will have to be developed on the basis of a better understanding of the dynamics of the climate system. Purely statistical methods, however sophisticated, can be compared with attempts to kill a songbird with a shotgun.
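The division of labour described in these paragraphs, slow deterministic dynamics forced by fast stochastic weather, is essentially a Hasselmann-type stochastic climate model. A minimal sketch follows (my own illustration; the damping time, noise amplitude, and time step are invented values):

```python
# Minimal Hasselmann-type sketch: a slow ocean anomaly T integrates fast
# synoptic "weather" treated as white noise. All parameter values are invented.
import numpy as np

rng = np.random.default_rng(42)

dt = 1.0        # time step (days)
tau = 300.0     # assumed ocean damping time scale (days)
sigma = 0.1     # assumed amplitude of the stochastic atmospheric forcing
n = 20000       # number of steps (~55 years)

T = np.zeros(n)  # slow ocean anomaly (e.g. an SST index)
for i in range(1, n):
    noise = sigma * np.sqrt(dt) * rng.standard_normal()
    # deterministic slow dynamics (linear damping) plus stochastic fast forcing
    T[i] = T[i - 1] - dt * T[i - 1] / tau + noise

# Integrating white noise through a slow component reddens the spectrum:
# the ocean anomaly wanders on time scales far longer than the forcing's.
print(f"standard deviation of the slow anomaly: {T.std():.2f}")
```

The theoretical stationary standard deviation here is sigma * sqrt(tau / 2), about 1.2, illustrating how weak day-to-day noise accumulates into substantial low-frequency variability.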

There is yet another principal shortcoming in the paper by Palmer et al. I will grant them that the approach they advocate may be of some use as far as the possible deleterious effects of greenhouse gases are concerned. These gases are rapidly mixed through the entire atmosphere. That’s what the turbulence in the general circulation is good at. But now think of slow forestation and deforestation, or the expected northward crawl of corn and wheat belts.  And what about large hydropower projects or land-use changes as the peoples of India and China become wealthier, drive more cars, and become more urbanized? Can the reverse use of seamless prediction methods help to calibrate the response of the climate system to these elements of the Slow Manifold? I would not know how.

I offer a solution to Palmer’s quandary. Seamless prediction may or may not have a glorious future, but it does have a history spanning almost twenty years. I propose that WCRP should initiate a Seamless Reprediction Program, as a kind of extension to the reanalysis efforts undertaken from time to time at ECMWF. That is, climate runs made in the past should be analyzed, restarted with the latest version of the stochastic feedback paradigm, and calibrated with accumulated observational evidence. Perhaps the latest versions of climate models cannot be investigated this way, but the great advantage is that working in a retrospective mode offers falsification prospects. Looking back, all data needed for calibration do exist. So do the computers and the software. Immediate, large-scale expansion of facilities is not needed if this path is taken. And I trust ECMWF will be permitted to participate in this effort.

Would Palmer not agree that evidence from such a Reprediction Program might turn out to become a cornerstone for the World Climate Computing Facility that he and the World Summit crowd are lobbying for? I wish them well.


Filed under Guest Weblogs

World Modelling Summit For Climate Prediction – Comments By Climate Science -Part II

Part I of the Climate Science weblog on the World Modelling Summit for Climate Prediction was presented on June 17, 2008 (see). The specific recommendations in their Statement are discussed here.

Following are their conclusions, each followed by a Climate Science comment.

The Summit

“The World Modelling Summit for Climate Prediction, jointly organized by the World Climate Research Programme, World Weather Research Programme, and the International Geosphere-Biosphere Programme, was held at the European Centre for Medium-Range Weather Forecasts on 6-9 May 2008. The Summit was organized to develop a strategy to revolutionize prediction of the climate through the 21st century to help address the threat of global climate change, particularly at the regional level.”

The recognition that regional scale information is needed is welcome. As discussed in depth many times on Climate Science, the use of a global average surface temperature trend provides little, if anything, of practical use to the impacts community. The claim, however, that they want to “revolutionize prediction of the climate through the 21st century” is quite a bold claim, as discussed previously by Professor Tennekes in a weblog posted June 19, 2008 entitled “A Revolution in Climate Prediction”.

“The Summit brought together the world’s leading scientists from a number of disciplines to discuss what must be done to address society’s urgent needs.

The Summit concluded:”

  • “Considerably improved predictions of the changes in the statistics of regional climate, especially of extreme events and high-impact weather, are required to assess the impacts of climate change and variations, and to develop adaptive strategies to ameliorate their effects on water resources, food security, energy, transport, coastal integrity, environment and health. Investing today in climate science will lead to significantly reduced costs of coping with the consequences of climate change tomorrow.”
  • “Despite tremendous progress in climate modelling and the capability of high-end computers in the past 30 years, our ability to provide robust estimates of the risk to society, particularly from possible catastrophic changes in regional climate, is constrained by limitations in computer power and scientific understanding. There is also an urgent need to build a global scientific workforce that can provide the intellectual power required to address the scientific challenges of predicting climate change and assessing its impacts with the level of confidence required by society.”

The value of improved predictions of regional climate is a worthy goal. However, the statement ignores that much can already be done without any skill at predicting future regional climate. Risks to water resources, food security, energy, transport, coastal integrity, environment and health can be reduced just by assessing impacts on today's demands on these resources (or estimates of future demands) based on the historical, recent paleo, and worst-case sequences of weather patterns that have already occurred. The threat from climate can then be contrasted with other risks. This approach was summarized in

Pielke, R.A. Sr., 2004: Discussion Forum: A broader perspective on climate change is needed. IGBP Newsletter, 59, 16-19.

The Summit failed to include a comparison of the benefits of their approach with that of a resource specific vulnerability assessment.

  • “Climate prediction is among the most computationally demanding problems in science. It is both necessary and possible to revolutionize regional climate prediction: necessary because of the challenges posed by the changing climate, and possible by building on the past accomplishments of prediction of weather and climate. However, neither the necessary scientific expertise nor the computational capability is available in any single nation. A comprehensive international effort is essential.”
  • “The Summit strongly endorsed the initiation of a Climate Prediction Project coordinated by the World Climate Research Programme, in collaboration with the World Weather Research Programme and the International Geosphere-Biosphere Programme, and involving the national weather and climate centres, as well as the wider research community. The goal of the project is to provide improved global climate information to underpin global mitigation negotiations and for regional adaptation and decision-making in the 21st century.”
  • “The success of the Climate Prediction Project will critically depend on significantly enhancing the capacity of the world’s existing weather and climate research centres for prediction of weather and climate variations including the prediction of changes in the probability of occurrence of regional high impact weather. This is particularly true for the developing countries whose national capabilities need to be increased substantially.”

These statements provide absolutely no detail on how it is “possible to revolutionize regional climate prediction”. What new science is needed to accomplish this task?

  • “An important and urgent initiative of the Climate Prediction Project will be a world climate research facility for climate prediction that will enable the national centres to accelerate progress in improving operational climate prediction at all time scales, especially at decadal to multi-decadal lead times. This will be achieved by increasing understanding of the climate system, building global capacity, developing a trained scientific workforce, and engaging the global user community.”
  • “The central component of this world facility will be one or more dedicated high-end computing facilities that will enable climate prediction at the model resolutions and levels of complexity considered essential for the most advanced and reliable representations of the climate system that technology and our scientific understanding of the problem can deliver. This computing capability acceleration, leading to systems at least a thousand times more powerful than the currently available computers, will permit scientists to strive towards kilometre-scale modelling of the global climate system, which is crucial to more reliable prediction of the change of convective precipitation, especially in the tropics.”

This is a call for a major new facility to complete these predictions. Why are the existing centers [such as NCAR (USA), Frontier (Japan) and ECMWF (Europe)] not adequate?

  • “Access to significantly increased computing capacity will enable scientists across the world to advance understanding and representation of the physical processes responsible for climate variability and predictability, and provide a quantum leap in the exploration of the limits in our ability to reliably predict climate with a level of detail and complexity that is not possible now. It will also facilitate exploration of biogeochemical processes and feedbacks that currently represent a major impediment to our ability to make reliable climate projections for the 21st century.”

The statement is correct that “biogeochemical processes and feedbacks…currently represent a major impediment to our ability to make reliable climate projections for the 21st century”. This is an important admission by the Summit of a major shortcoming of the 2007 IPCC report. The statement, however, claims that the new center would “enable scientists across the world to advance understanding and representation of the physical processes responsible for climate variability and predictability, and provide a quantum leap in the exploration of the limits in our ability to reliably predict climate.” This exclusive focus on physical processes is inconsistent with the biological and chemical forcings and feedbacks that they themselves recognize are needed!

The 2005 National Research Council report correctly identified the need to include physical, biological, and chemical climate forcings and feedbacks, and to include land, ocean, continental ice, and atmospheric processes, as illustrated in the figure below from that report.

 

Conceptual framework of climate forcing, response, and feedbacks. Examples of human activities, forcing agents, climate system components, and variables that can be involved in climate response are provided in the lists in each box. (From NRC, 2005: Radiative Forcing of Climate Change: Expanding the Concept and Addressing Uncertainties. Board on Atmospheric Sciences and Climate (BASC), National Academy of Sciences, Washington, DC)

  • “Sustained, long-term, global observations are essential to initialize, constrain and evaluate the models. Well documented and sustained model data archives are also essential for enabling a comprehensive assessment of climate predictions. An important component of the Climate Prediction Project will therefore be an accessible archive of observations and model data with appropriate user interface and knowledge-discovery tools.”
  • “To estimate the quality of a climate prediction requires an assessment of how accurately we know and understand the current state of natural climate variability, with which anthropogenic climate change interacts. All aspects of estimating the uncertainty in climate predictions pose an extreme burden on computing resources, on the availability of observational data and on the need for attribution studies. The Climate Prediction Project will enable the climate research community to make better estimates of model uncertainties and assess how they limit the skill of climate predictions.”

The comparison and testing of model predictions with observations is a central tenet of science. This is an excellent recommendation. However, the need is not to “estimate the quality of a climate prediction”  but to quantitatively evaluate model predictive skill.

  • “Advances in climate prediction will require close collaboration between the weather and climate prediction research communities. It is essential that decadal and multi-decadal climate prediction models accurately simulate the key modes of natural variability on the seasonal and sub-seasonal time scales. Climate models will need to be tested in sub-seasonal and multi-seasonal prediction mode also including use of the existing and improved data assimilation and ensemble prediction systems. This synergy between the weather and climate prediction efforts will motivate further the development of seamless prediction systems.”

Testing the global climate model predictions by running them in a weather prediction mode is an excellent idea. However, data assimilation cannot be used to adjust the model performance, since decadal and multi-decadal climate predictions are for a time period that has not yet occurred!

  • “The Climate Prediction Project will help humanity’s efforts to cope with the consequences of climate change. Because the intellectual challenge is so large, there is great excitement within the scientific community, especially among the young who want to contribute to make the world a better place. It is imperative that the world’s corporations, foundations, and governments embrace the Climate Prediction Project. This project will help sustain the excitement of the young generation, to build global capacity, especially in developing countries, and to better prepare humanity to adapt to and mitigate the consequences of climate change.”

The Statement does not communicate how their forecasts would “help humanity’s efforts to cope with the consequences of climate change”. Rather, it reads as a self-serving statement to justify the new computing and climate prediction center.

 


Filed under Climate Models, Climate Science Meetings

Another Example Of CCSP Bias In The Report “Weather and Climate Extremes in a Changing Climate”

I was alerted to another example of a bias in the CCSP report Weather and Climate Extremes in a Changing Climate, which Climate Science weblogged on earlier today (see). It is from the comments on the CCSP report

December 20, 2007 COMPILATION OF PUBLIC COMMENTS ON CCSP SYNTHESIS AND ASSESSMENT PRODUCT 3.3

Comment

Goklany, CH1-9, Pages 57, Lines 1188-1191: An alternative view of the European heatwave is provided in Chase et al. (2006). This should be discussed too.

Reference: Chase, T. N., K. Wolter, R. A. Pielke Sr., and I. Rasool, 2006: Was the 2003 European summer heat wave unusual in a global context? Geophysical Research Letters, 33, L23709, doi:10.1029/2006GL027470.

– Indur Goklany, Department of the Interior

CCSP Response: The Chase et al. (2006) paper found that “extreme warm anomalies equally, or more, unusual than the 2003 heat wave occur regularly.” This is in contrast to the paper we cited, as well as numerous other papers such as Stott et al. 2004, Trigo et al. 2005, Meehl and Tebaldi 2004, Menzel 2005, etc. which find 2003 to be a highly unusual event. The problem with Chase et al.’s analysis is that they used 1000 to 500 mb thickness anomalies as their metric. As pointed out in a comment on Chase et al., using the Chase et al. method but applying it to surface temperatures reveals that the summer of 2003 was indeed a unique record (Connolley, 2007). Mortality depends on surface temperature not the temperature averaged over 1000 mb to 500 mb which is a measure from near the surface up to about 5.5 km. Indeed, Kalkstein et al. (2007) analysis of analog European heat wave events for U.S. cities estimates that a similar magnitude heat wave in New York City would have a heat related mortality of 3,253. Since such high mortality does not occur regularly in the U.S., this analysis also indicates that the European heat wave of 2003 was an unusual event.

Climate Science Comment – June 20, 2008

The CCSP authors, as we have already noted in the weblog from earlier today, ignored peer reviewed research that conflicts with their viewpoint. This response to a Comment by Indur Goklany is yet another example. The CCSP authors conveniently ignored that the Connolley 2007 study confirmed the Chase et al study. In addition, the CCSP authors neglected to communicate that the anomalous surface temperatures could not be due to greenhouse gas warming, which necessarily must extend through most of the troposphere. As stated in the Chase et al reply

“the conclusion that the heat wave was a shallow phenomenon in terms of its unusualness argues against the point of view that it was a direct manifestation of the effects of increased atmospheric CO2.”

and

“…we also conclude that land surface conditions (low soil moisture) are the likely direct cause for such an ‘unusual’ event near the surface.”
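The dispute above turns on which layer of the atmosphere each metric samples. For context, the 1000–500 mb thickness used by Chase et al. is tied to the mean temperature of that layer through the standard hypsometric equation (stated here only as background; the symbols are the usual textbook ones, not notation from either paper):

$$
z_{500} - z_{1000} \;=\; \frac{R_d\,\overline{T_v}}{g}\,\ln\!\left(\frac{1000\ \mathrm{mb}}{500\ \mathrm{mb}}\right),
$$

where $R_d \approx 287\ \mathrm{J\,kg^{-1}\,K^{-1}}$ is the gas constant for dry air, $g \approx 9.8\ \mathrm{m\,s^{-2}}$, and $\overline{T_v}$ is the layer-mean virtual temperature. A thickness anomaly therefore measures the average temperature of roughly the lowest 5.5 km of the atmosphere, whereas heat wave mortality and the soil moisture feedback operate at the surface; this is precisely why the two metrics can give different answers about how “unusual” the summer of 2003 was.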

Yet again, the CCSP report process, at least when led by Tom Karl, presents a biased account of the diversity of conclusions in peer reviewed studies of the climate system.
 


Filed under Climate Change Metrics, Climate Science Misconceptions, Uncategorized

New CCSP Report Appears “Weather and Climate Extremes in a Changing Climate” – Unfortunately, Another Biased Assessment

There is another CCSP report that was made available yesterday. It is

CCSP, 2008: Weather and Climate Extremes in a Changing Climate. Regions of Focus: North America, Hawaii, Caribbean, and U.S. Pacific Islands. A Report by the U.S. Climate Change Science Program and the Subcommittee on Global Change Research. [Thomas R. Karl, Gerald A. Meehl, Christopher D. Miller, Susan J. Hassol, Anne M. Waple, and William L. Murray (eds.)]. Department of Commerce, NOAA’s National Climatic Data Center, Washington, D.C., USA, 164 pp.

It is led by the same individual, Tom Karl, Director of the National Climatic Data Center, who produced the CCSP Report “Temperature Trends in the Lower Atmosphere: Steps for Understanding and Reconciling Differences”, from which I resigned; the reasons are detailed in

Pielke Sr., Roger A., 2005: Public Comment on CCSP Report “Temperature Trends in the Lower Atmosphere: Steps for Understanding and Reconciling Differences”. 88 pp including appendices.

This report perpetuates the use of assessments to promote a particular perspective on climate change. For example, the authors write in the Executive Summary

“It is well established through formal attribution studies that the global warming of the past 50 years is due primarily to human-induced increases in heat-trapping gases. Such studies have only recently been used to determine the causes of some changes in extremes at the scale of a continent. Certain aspects of observed increases in temperature extremes have been linked to human influences. The increase in heavy precipitation events is associated with an increase in water vapor, and the latter has been attributed to human-induced warming.”

This claim conflicts with the 2005 National Research Council report

National Research Council, 2005: Radiative forcing of climate change: Expanding the concept and addressing uncertainties. Committee on Radiative Forcing Effects on Climate Change, Climate Research Committee, Board on Atmospheric Sciences and Climate, Division on Earth and Life Studies, The National Academies Press, Washington, D.C., 208 pp

where a diversity of human climate forcings were found to alter global average radiative warming, including from atmospheric aerosols and from the deposition of soot on snow and ice. The claim of an increase in atmospheric water vapor conflicts with a variety of observations, as summarized on Climate Science (e.g. see).

To further illustrate the bias in the report, the assessment chose to ignore peer reviewed research that raises serious questions about the temperature data used in the report. As just one example, they ignored research in which we have shown major problems with the use of surface air temperature measurements to diagnose long-term temperature trends, including temperature extremes. Our multi-authored paper

Pielke Sr., R.A., C. Davey, D. Niyogi, S. Fall, J. Steinweg-Woods, K. Hubbard, X. Lin, M. Cai, Y.-K. Lim, H. Li, J. Nielsen-Gammon, K. Gallo, R. Hale, R. Mahmood, S. Foster, R.T. McNider, and P. Blanken, 2007: Unresolved issues with the assessment of multi-decadal global land surface temperature trends. J. Geophys. Res., 112, D24S08, doi:10.1029/2006JD008229,

should have been included in the CCSP assessment. It was ignored. Yet the papers that use this land surface temperature data to claim changes in the extremes were included; e.g.

Easterling, D.R., B. Horton, P.D. Jones, T.C. Peterson, T.R. Karl, D.E. Parker, M.J. Salinger, V. Razuvayev, N. Plummer, P. Jamison, and C.K. Folland, 1997: Maximum and minimum temperature trends for the globe. Science, 277(5324), 364-367.

Peterson, T.C., 2003: Assessment of urban versus rural in situ surface temperatures in the contiguous United States: No difference found. Journal of Climate, 16(18), 2941-2959.

Peterson, T.C., X. Zhang, M. Brunet-India, and J.L. Vázquez- Aguirre, 2008: Changes in North American extremes derived from daily weather data. Journal Geophysical Research, 113, D07113, doi:10.1029/2007JD009453.

Since this assessment is so clearly biased, it should be rejected as a source of adequate climate information for policymakers. Questions should also be raised about having the same individuals prepare these reports when they use them to promote their own perspective on the climate and deliberately exclude peer reviewed papers and research that disagree with their viewpoint. This is a serious conflict of interest.


Filed under Climate Science Misconceptions, Climate Science Reporting

Diagnosis Of Global Sea Level And Upper Ocean Heat Content On Seasonal To Interannual Timescales – Willis et al 2008 Paper Published

The paper

Willis J. K., D. P. Chambers, R. S. Nerem (2008), Assessing the globally averaged sea level budget on seasonal to interannual timescales, J. Geophys. Res., 113, C06015, doi:10.1029/2007JC004517. 

has been published [thanks to Richard Hanson for alerting us].

The abstract reads

“Analysis of ocean temperature and salinity data from profiling floats along with satellite measurements of sea surface height and the time variable gravity field are used to investigate the causes of global mean sea level rise between mid-2003 and mid-2007. The observed interannual and seasonal fluctuations in sea level can be explained as the sum of a mass component and a steric (or density related) component to within the error bounds of each observing system. During most of 2005, seasonally adjusted sea level was approximately 5 mm higher than in 2004 owing primarily to a sudden increase in ocean mass in late 2004 and early 2005, with a negligible contribution from steric variability. Despite excellent agreement of seasonal and interannual sea level variability, the 4-year trends do not agree, suggesting that systematic long-period errors remain in one or more of these observing systems.”
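The budget test described in this abstract is conceptually simple: the observed (altimetric) sea level anomaly should equal the sum of the mass and steric components to within the combined error of the observing systems. A minimal sketch of such a closure check, using made-up numbers rather than the actual Willis et al. data (the helper functions and error bounds here are illustrative, not from the paper):

```python
# Illustrative sea level budget closure check. All numbers are hypothetical,
# standing in for altimetry (total), ocean-mass (gravity), and steric (Argo)
# anomalies in mm; they are NOT the Willis et al. (2008) values.

def budget_residual(total_mm, mass_mm, steric_mm):
    """Residual of the sea level budget at each time: total - (mass + steric)."""
    return [t - (m + s) for t, m, s in zip(total_mm, mass_mm, steric_mm)]

def closes(residual_mm, error_mm):
    """True if every residual lies within the combined observational error."""
    return all(abs(r) <= e for r, e in zip(residual_mm, error_mm))

# Hypothetical seasonally adjusted anomalies (mm) for four quarters:
total  = [1.0, 5.2, 4.1, 2.9]   # satellite altimetry
mass   = [0.5, 4.8, 3.6, 2.1]   # ocean mass component
steric = [0.3, 0.2, 0.6, 0.5]   # density-related (temperature/salinity) component
error  = [1.5, 1.5, 1.5, 1.5]   # assumed combined error bound per quarter

res = budget_residual(total, mass, steric)
print(res)              # per-quarter residuals
print(closes(res, error))
```

Willis et al. perform this kind of closure test with real Argo, gravity, and altimetry data; their central result is that the seasonal and interannual variability closes within errors while the 4-year trends do not.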

Climate Science has already weblogged on this important paper: see

Important New Paper In Press by Willis And Colleagues On Sea Level Rise And Ocean Heat Content Changes

A major finding from the Willis et al 2008 paper is

“Despite the short period of the present analysis, these results have important implications for climate. First, from 2004 to the present, steric contributions to sea level rise appear to have been negligible. This is consistent with observations of ocean surface temperature, which show relatively little change in the global average between 2003 and 2006 [Smith and Reynolds, 2005, see NCDC global surface temperature anomalies]. It is in sharp contrast, however, to historical analyses of thermal expansion over the past decade [Willis et al., 2004] and the past half-century [Antonov et al., 2005; Lombard et al., 2005; Ishii et al., 2006]. Although the historical record suggests that multiyear periods of little warming  (or even cooling) are not unusual, the present analysis confirms this result with unprecedented accuracy.”

Now that this paper has appeared, the global modelling community is challenged to accurately simulate and explain the absence of significant upper ocean heat changes during this time period. The new (June 19, 2008) Nature paper by Domingues et al., “Improved estimates of upper-ocean warming and multi-decadal sea-level rise“, unfortunately chose to end its analysis period five years ago (in 2003). The Editors should have required that they update their study. The Willis et al. paper supersedes their time period of analysis.


Filed under Climate Change Metrics

Guest Weblog by Hendrik Tennekes: A Revolution in Climate Prediction?

A Revolution in Climate Prediction? by Hendrik Tennekes

The World Climate Research Program (WCRP), a program run by the World Meteorological Organization (WMO), organized the World Modelling Summit for Climate Prediction at the European Centre for Medium-Range Weather Forecasts (ECMWF) in May 2008. This meeting produced a curious document entitled “The Climate Prediction Project,” which was posted on the WCRP website (see).

I don’t know what to make of this text. Is it a proposal? Is it a call to arms? Is it a trial balloon floated by computer modelers? Is it an attempt by WCRP brass to test the waters for an international facility of unprecedented size? Will the Secretary-General of WMO take this ball and run with it? Will any government be willing to stick its neck out? Did anyone in the circuit that produced and disseminated this text contemplate the ways in which a document of this type may backfire? Did anyone conceive of the complexity of the negotiations that would be needed?

I happen to know of an earlier trial balloon of the same type. The European contingent of CLIVAR, the climate variability subgroup of WCRP, launched a similar proposal in 1998. This group proposed to the European Commission that a European Climate Computing Facility be established. I quote:

“Reliable regional climate change predictions cannot be achieved without enhanced European collaboration and substantial increases in computing resources. These are needed so that multi-century simulations can be made with sufficient complexity that important climatic features, physical processes and regional details are resolved. In addition, ensembles of integrations must be made to estimate the impact on climate predictions of uncertainties in initial conditions and model formulation. The computational requirements for such simulations cannot be met from purely national resources. It is therefore strongly recommended that a European Climate Computing Facility be established.”

To me, it is evident that this proposal was doomed from the start. A European computing facility for weather forecasting (ECMWF) was established in 1979. The board of directors for ECMWF consists of the Directors of the National Weather Services in Europe. They must have interpreted the 1998 CLIVAR proposal either as a hare-brained attempt to greatly expand both the size and the core tasks of ECMWF or as an attempt to create a second European facility of yet greater budget, one that would strain their resources even more. I assume they were not enthused by the idea that the ECMWF staff evidently conducts climate research on the sly. And they must have been quite annoyed that their own scientists had colluded with those at ECMWF without thorough in-house discussions. It is not hard to imagine how they would have responded to a phone call from a bureaucrat at the European Commission in Brussels. Any proposal that is floated without the explicit support from up high deserves to be shot down without compunction. It was.

The current trial balloon (if that is what it is) is of yet grander scale. Europe is too small for the aspirations of computer modellers. The WCRP crowd apparently dreams of multipetaflop computing and of a facility substantially bigger than those of high-energy physics. So it invented language meant to impress diplomats and politicians. I quote from the first paragraph of this document:

“The development of reliable science-based adaptation and mitigation strategies will only be possible through a revolution in regional climate prediction.” Really? Do the physical sciences have a monopoly on the truth, much as religion used to have? Why should any strategy for dealing with climate change rely primarily on the physics of atmosphere and ocean, not on biology, psychology, sociology, land-use and water management, or even intergovernmental negotiations or raw politics?

The idea that climate policy should be “science-based” was promoted by the Intergovernmental Panel on Climate Change (IPCC) from the start of its work some twenty years ago. The kind of knowledge obtained by the physical sciences was taken to be much more reliable than all other kinds of knowledge. The proponents of this viewpoint deliberately organized the IPCC process such that Working Group I had to provide the “Scientific Basis” for climate policy. The so-called “Human Dimensions” of global warming received some lip service, but were otherwise substantially ignored.

This strategy has backfired and will continue to backfire. The “science-based” work of IPCC has been the underpinning of the Kyoto Protocol, but the chances for agreement on a successor to Kyoto are now slimmer than ever. Diplomats realize that Global Warming has been stalling since 1998, and that IPCC appears incapable of providing a convincing explanation. The negotiators merely have to listen to daily weather forecasts in order to realize that there has been no substantial progress in weather prediction since the inception of IPCC, let alone a revolution. Weather and climate are linked in the prevailing methodology, which relies exclusively on General Circulation Models (modelers call this “seamless prediction”). However, the spread in the forecasts for fifty years ahead is as large as it was twenty years ago. On top of that, no information of any kind has been generated on the effective prediction horizon of climate forecasts, on the causes of the evident regional failures of climate models, or on methods by which the reliability of climate runs can be assessed. The “revolution in regional climate predictions” promised in the WCRP-document must be considered wishful thinking.

The hoped-for revolution in climate modeling can be achieved only if the Grand Strategy of climate research focuses no longer on improving predictive skills as such, but on the development of experimental and theoretical tools for the scientific assessment of predictive skills. The assumption that massive escalation of computer power will substantially expand the prediction horizon or the understanding of predictive skills is not supported by any evidence I am aware of. Ensemble forecasting is the current way of obtaining a crude, provisional idea of the reliability of model runs, but climate modelers are now toying with ideas labeled “stochastic-dynamic forecasting.” That sounds promising, but also reeks of the customary ignorance of physicists concerning the nature of problems in turbulence theory. All earlier attempts by physicists, including some famous ones like Werner Heisenberg, have failed. A notorious example is Robert Kraichnan’s Direct-Interaction Approximation (DIA), launched in 1958, when I studied at the Johns Hopkins University. In 1987 Kraichnan finally admitted in a Summer School at NCAR that DIA and all of its descendants were to be seen as efficient computer codes, and had contributed nothing to the understanding of the dynamics of turbulence. I conclude that the chances of success for stochastic-dynamic prediction methods are very slim indeed. Is this a solid scientific basis for a billion-dollar facility? And is this the way to go if the primary goal is not to improve predictive skills, but to finally understand why and how predictive skills are limited? I submit it is not.
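The point that ensemble spread gives only a crude, provisional reliability estimate can be illustrated with a toy sketch. Here a chaotic logistic map stands in for a forecast model; the function names and all parameter values are illustrative, and the sketch says nothing about model error itself, which is precisely its limitation:

```python
# Toy illustration of ensemble forecasting: perturb the initial condition of a
# chaotic map and use the spread of outcomes as a crude confidence measure.
# The logistic map is a stand-in for a model, not a climate model.
import random

def toy_model(x0, steps, r=3.9):
    """Iterate the logistic map (chaotic for r = 3.9) from initial state x0."""
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

def ensemble_forecast(x0, n_members=50, perturbation=1e-4, steps=30, seed=0):
    """Run an ensemble from perturbed initial states; return mean and spread."""
    rng = random.Random(seed)
    members = [toy_model(x0 + rng.uniform(-perturbation, perturbation), steps)
               for _ in range(n_members)]
    mean = sum(members) / len(members)
    spread = (sum((m - mean) ** 2 for m in members) / len(members)) ** 0.5
    return mean, spread

mean, spread = ensemble_forecast(0.2)
print(f"ensemble mean = {mean:.3f}, spread = {spread:.3f}")
```

After only 30 iterations, initial perturbations of one part in ten thousand produce a wide spread of outcomes. The spread quantifies sensitivity to initial conditions under a fixed model; it cannot reveal errors in the model equations themselves, which is why it remains a provisional reliability measure.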

What climate research needs most is a solid body of knowledge on the reliability of climate models, a scientific basis for decisions on the direction of research programs and for the delineation of the role of scientists in advising governments. The revolution I envisage does not depend on computer power, but on thinking power. Rapidly escalating computer power diffuses the central issue; thinking power may be able to focus it.

The World Summit text repeatedly focuses on regional climate prediction. I wonder why. Let me start with a few quotes:

“The Summit was organized to develop a strategy to revolutionize prediction of the climate through the 21st century to help address the threat of global climate change, particularly at the regional level.”

“Despite tremendous progress in climate modelling and the capability of high-end computers in the past 30 years, our ability to provide robust estimates of the risk to society, particularly from possible catastrophic changes in regional climate, is constrained by limitations in computer power and scientific understanding.”

“The goal of the project is to provide improved global climate information to underpin global mitigation negotiations and for regional adaptation and decision-making in the 21st century.”

Exactly what might be meant here? What is “global climate change at the regional level”? Global mitigation negotiations don’t need further “underpinning”; they are stalling for entirely different reasons than any real or imagined climate threat. Conflicts of interest concerning fossil fuel policy and unwillingness to address the skewness in the distribution of wealth dominate the agenda of the endless string of meetings.

Also, the mayor of New Orleans, the US Corps of Engineers, and the Governor of the State of Florida surely do not need more climate information. The damage caused by hurricanes is caused primarily by the unconstrained build-up of coastal areas. I cannot conceive of any “possible catastrophic change in regional climate” around the Gulf of Mexico. It’s not the climate that is threatening, it is urban sprawl and inadequate investment in coastal defense technology.

Let me now discuss the last paragraph of the World Summit text and the final paragraph of a message I received from one of its authors. First the official text:

“The Climate Prediction Project will help humanity’s efforts to cope with the consequences of climate change. Because the intellectual challenge is so large, there is great excitement within the scientific community, especially among the young who want to contribute to make the world a better place. It is imperative that the world’s corporations, foundations, and governments embrace the Climate Prediction Project. This project will help sustain the excitement of the young generation, to build global capacity, especially in developing countries, and to better prepare humanity to adapt to and mitigate the consequences of climate change.”

Now the position taken by one of the authors:

“As far as I am concerned, the main achievement of the Summit was to get a consensus statement from modelers around the world that computational constraints were a significant roadblock to improving global climate models. Others will use these statements to pursue possible funding initiatives.”

Was that all? Does everything boil down to a routine song-and-dance for a massive increase in computer power? Is this why bloated words about Humanity and Climate Threat were considered necessary? Is this the way to “make the world a better place”?

In 1990, I wrote a column protesting against similar fantasies. I quote:

“I worry about the arrogance of scientists who blithely claim that they can help solve the climate problem, provided their research receives massive increases in funding. I worry about the lack of sophistication and the absence of reflection in the way climate modellers covet new supercomputers (….) My worries multiply when I contemplate possible side effects. Expansion of research tends to support the illusion that science and technology can solve nearly every problem, given enough resources. Research supports the progress myth that pervades modern society, but that very myth seduces us into ignoring our responsibility for the state of the planet. Therefore, I want to restrain myself. I want to avoid making promises I cannot keep. I want to keep my expansive instincts in check. Above all, I try to be a scientist: I wish to think before I act.”

My message clearly has not lost any urgency.

We should think before we act.


Filed under Guest Weblogs

World Modelling Summit For Climate Prediction – Comments By Climate Science -Part I

There was a meeting of a group of modellers from May 6–9, 2008, hosted by the European Centre for Medium-Range Weather Forecasts (ECMWF), entitled “World Modelling Summit for Climate Prediction”. The goal of the meeting was to provide

“society with reliable regional predictions of climate change at all timescales, necessary to develop mitigation and adaptation strategies.”

Presentations from the meeting are available from this link.

They issued a statement from this meeting, which is reproduced in part below with comments by Climate Science. A subsequent post will comment on their specific recommendations. Following is the framework that they have adopted for the Project:

“The Climate Prediction Project

Revolutionizing Global Climate Prediction for Regional Adaptation and Decision-Making in the 21st Century

The Challenge

The world recognizes that the consequences of global climate change constitute one of the most important threats facing humanity. The peoples, governments, and economies of the world must develop mitigation and adaptation strategies, which will require investments of trillions of dollars, to avoid the dire consequences of climate change. The development of reliable science-based adaptation and mitigation strategies will only be possible through a revolution in regional climate predictions supported by appropriate climate observations and assessment, and the delivery of this information to society.

This statement clearly presents the perspective of the organizers of this meeting: they view global climate change as

“one of the most important threats facing humanity”

requiring “investments of trillions of dollars”.

They also claim that

The development of reliable science-based adaptation and mitigation strategies will only be possible through a revolution in regional climate predictions supported by appropriate climate observations and assessment, and the delivery of this information to society.”

However, in order to determine the relative importance of the risk of human-caused climate change to the environment and society, the first step, which they have not taken, is to evaluate the spectrum of all risks to society and the environment and then to prioritize them! This is a focus that the IGBP itself, one of the organizers of the World Modelling Summit, had previously recommended (e.g. see Figure E.7 in

Kabat, P., Claussen, M., Dirmeyer, P.A., J.H.C. Gash, L. Bravo de Guenni, M. Meybeck, R.A. Pielke Sr., C.J. Vorosmarty, R.W.A. Hutjes, and S. Lutkemeier, Editors, 2004: Vegetation, water, humans and the climate: A new perspective on an interactive system. Springer, Berlin, Global Change – The IGBP Series, 566 pp.)

Bjorn Lomborg, in his book Cool It: The Skeptical Environmentalist’s Guide to Global Warming, comes to a similar conclusion. If the group of scientists who wrote the Climate Prediction Project statement wants us to spend trillions of dollars, they have an obligation to show quantitatively why the role of human climate forcing, with a specific focus on the emissions of CO2, has a higher priority for such funds than other environmental threats. They also need to define how what they predict with the models affects where this vast amount of money would be spent.

Thus the recommendations from this meeting are actually advocacy for a particular policy, namely that “reliable science-based adaptation and mitigation strategies…will only be possible through a revolution in regional climate predictions…”.

We should encourage support for climate modelling to better understand the climate system. However, to claim that the only way to provide reliable policy strategies is through regional climate predictions is unnecessarily narrow, and clearly self-serving.


Filed under Climate Models