Monthly Archives: May 2008

Welcome To A New Climate And Weather Weblog!

There is a new climate/weather weblog by John Nielsen-Gammon and Barry Lefer called

Atmo.Sphere Climate conversation with John Nielsen-Gammon and Barry Lefer

I have worked with John on two research papers (see and see), and we served together while I was State Climatologist for Colorado and he was (and is) State Climatologist for Texas. I have the highest respect for his professional capabilities and look forward to reading their weblog!

Welcome to the Blogosphere!

Comments Off on Welcome To A New Climate And Weather Weblog!

Filed under Climate Science Reporting

Review Of Cotton and Pielke 2nd Edition By Hans Von Storch

Below is a second review of our book Human Impacts on Weather and Climate 2nd Edition. The review, by the internationally respected Hans von Storch of the Institute for Coastal Research, GKSS Research Center, will appear in Meteorologische Zeitschrift. [see the first review].

The review is

“This is a book by two eminent meteorologists, who write about climate. They write about atmospheric physics, about processes, in quite some detail, with very many references. Not much math, but quite a few sketches and diagrams. For anybody who wants an overview of what processes, which issues, which paradigms and views prevail, this book is very useful. But it is not an easy read, not a book which can be read on the train from Bonn to Hamburg. Compared to the first version of the book, this 2nd edition is less good at telling a concept, at telling its “story”; instead it provides lots of information, of details – without being a textbook useful for the classroom.

The book is written in three “parts” and an epilogue.

Part I, “The rise and fall of the science of weather modification by cloud seeding”, tells about a once fancy science, which had and still has its merits, but suffered from “overselling”. In a sense, this part provides the moral of the whole book – overselling scientific knowledge and scientific potentials leads eventually to a crash; it is not the individual scientists engaged in overselling who pay for the short-term success, but the community as a whole. Obviously, this part is meant as an example, or analogy, of the present “global warming” debate, which the authors consider overheated.

Part II, “Inadvertent human impacts on regional weather and climate”, deals with the effects of urbanization and land-use/land-cover changes. Issues addressed are irrigation, deforestation and desertification. Atmospheric processes are dealt with in great detail, but the analysis of changing long-term statistics is not really taken into account. Disappointing is that the authors have not really dealt with the historical perspective. The impact of deforestation is an issue which has been dealt with since the late 18th century (e.g., Grove, R.H., 1975: Green Imperialism. Expansion, Tropical Islands Edens and the Origins of Environmentalism 1600 – 1860. Cambridge University Press; Stehr, N., and H. von Storch (Eds.), 2000: Eduard Brückner – The Sources and Consequences of Climate Change and Climate Variability in Historical Times. Kluwer Academic Publisher; Pfister, C., and D. Brändli, 1999: Rodungen im Gebirge – Überschwemmungen im Vorland: Ein Deutungsmuster macht Karriere. In R.P. Sieferle and H. Greunigener (Hrsg.) Natur-Bilder. Wahrnehmungen von Natur und Umwelt in der Geschichte. Campus Verlag Frankfurt/New York, 9-18). Such ideas are thus part of our (western) cultural fabric, and certainly also influence scientific thinking.

Part III, “Human impacts on global climate”, deals with the modifications of the atmospheric radiation budget due to changing concentrations of carbon dioxide, water vapor, aerosols and dust. The nuclear winter hypothesis is discussed; the knowledge about global effects of changing land surface conditions is reviewed. Again, the problem is looked at from the viewpoint of processes, while the angle of empirical evidence based on long-term statistics is almost entirely disregarded. No mention is made of the concept of “detection” of non-natural climate change and “attribution” of most likely causes. The short subsection on the IPCC is much too short; hidden in this section is a definition of what the authors consider to be a “prediction” – their definition includes projections and scenarios, so that the two words “forecast” and “prediction” become very different terms.

In the epilogue, the authors leave their “scientific sector of competence” and discuss the general societal issue of the process of science in a politically driven society. This is a thoughtful and interesting part of the book, a good read. In particular the chapter “Scientific credibility and advocacy” is interesting, albeit very short – a much deeper discussion, “The Honest Broker”, has been published by the son of the second author, Roger Pielke Jr., in 2007.

However, in the subsection “The dangers of overselling” the authors become inconsistent with their own definition of “predictions” – they claim that contemporary models are “not capable of predicting climate” – thus no realistic scenarios are possible? – and that they cannot be included “in quantitative forecast systems” – who is claiming the latter? Certainly, such models are capable of making “credible predictions of long term climate trends and regional impacts”, when the word “predictions” is understood as scenarios, i.e., “descriptions of plausible, possible, internally consistent but not necessarily probable futures”. That they are not capable of making credible forecasts (meaning specifying the most probable states at some future time) is right.

In summary – this book constitutes a good contribution to the present debate about humans’ influence on climate; it brings in many different and valid viewpoints. Bill Cotton and Roger Pielke Sr. widen the horizon of understanding and options, which we see limited by those who are zealous to use scientific knowledge in shaping culturally preferred policies, who prune scientific knowledge claims according to their political utility.”

Comments Off on Review Of Cotton and Pielke 2nd Edition By Hans Von Storch

Filed under Books

New Information From Josh Willis On Upper Ocean Heat Content

Josh Willis, in response to a request for information for a short article that I am writing, wrote to me on May 6 2008 the e-mail below [reproduced with his permission]. This e-mail contains important new insight into recent upper ocean heat variations, as well as the uncertainty in the data. The analyses performed by Josh Willis and colleagues should be the gold standard used to monitor global climate system heat changes (e.g., as discussed on the Climate Science weblog of May 26 2008).

Josh Willis’s e-mail [edited to focus on his new data analysis]

Hi Roger,

“…After thinking this over some, I really think that the best plot for you is the first one attached to this message. This is just a plot of ocean heat content with error bars, with the red lines illustrating the average 4-year rate of ocean warming (plus or minus one standard error in the slope). Note that in all plots and in the discussion below, I have used one standard error, NOT 95% confidence limits.

Although I’ve attached a couple of other plots to illustrate some of my points, I think the first plot is the best one to use for several reasons.

First, this pretty much just differs by a scale factor from the steric curve we published before, and I think this will make the explanation clearer. Also, the size of the error bars is of critical importance if we want to use these data to constrain the radiative imbalance, and this gives a clear illustration of how well we can measure the average temperature of the upper ocean in a single month. As you’ll see below, that one-month error is sort of the building block for how accurately we can use ocean temperature data for this problem and what time scales we can expect it to be useful over.

Finally, this plot does not use any other assumptions or contain any other data about the radiative imbalance, the area of the Earth, the time derivative or any other factors to confuse its meaning.

So, to equate this plot to a radiative imbalance, we need to do two things. First, we assume that all of the radiative imbalance at the top of the atmosphere goes toward warming the ocean (this is not exactly true of course, but we think it is correct to first order at these time scales). So in normalizing this, we divide by the surface area of the Earth instead of the surface area of the Ocean. For the purposes of this calculation, I’ve approximated this as 5.1 x 10^14 m^2. The second thing we have to do is take the time derivative of the heat content curve to arrive at the rate of warming, or radiative imbalance. In other words, we have to turn Joules into Watts.

This second part is where the estimate gets really noisy. If I do the simplest thing and take a first difference of my monthly time series of ocean heat content, the noise is very large, as you might expect. This is shown in the second plot, and you can see that errors in a single one-month difference often exceed 20 W/m^2. For all of the following time derivative calculations, I computed the errors as follows:

error_rate = sqrt( error(month 1)^2 + error(month 2)^2 ) / time_difference

We can do a bit better, if instead of taking a one-month difference, we take one-year differences. This has the added benefit of removing the seasonal cycle. In other words, we subtract:

warming rate = [hc(July, 2004) – hc(July, 2003)] / (1 year)

The error bar is now smaller because the time difference has gotten larger.

In the example above, the estimate of the one-year warming rate is centered on January, 2004. If we do this for every pair of points separated by one year in the time series, we get the last figure attached.

However, what you would really like to know is the 4-year warming rate over this period. We have to be careful with the second and third figures attached.

We cannot simply average over these and get a “mean warming rate” because we have taken a time derivative and the points are not independent (and neither are their errors). So, to get the 4-year rate illustrated by the red lines in the first plot, I took the 7 months in the time series that have 4-year pairs. That is, July, 2003 through Jan., 2004 and July, 2007 through Jan., 2008. Each of these pairs gives a 4-year rate and error bar as follows (in W/m^2):

-0.1132 +/- .5820
-0.4417 +/- .5807
-0.0102 +/- .5693
-0.0154 +/- .5673
-0.0231 +/- .5636
-0.0799 +/- .5410
0.1519 +/- .5468

Each one of these estimates is completely independent, so if we take their mean, we get an average 4-year warming rate of:

-0.075941 +/- 0.2139 W/m^2

Again, this is a one standard error estimate. Also, this is the number used for the slope of the red line in the first figure. I think it is very important to note that this is NOT a least squares fit of a straight line to the heat content curve. In my experience, whenever you fit a line to a time series, the meaning of the line and its error bars is often muddled (even for scientists). For this reason, I try to avoid fitting lines to time series whenever possible.

Another thing that is important to note is that this excludes any warming in the deep ocean. Over four years, it is reasonable to expect at least some warming (or cooling) below 900 m, as well as in regions that Argo does not sample, such as under the sea ice. Not to mention the small changes in heat content due to melting ice, warming land, and a warming atmosphere (with more or less water vapor). For these reasons, it would probably be wise to add at least a few more tenths of a W/m^2 to the error bar on this 4-year warming rate.

Anyway, I hope this is useful. Sorry for the long email, but I wanted to be clear about the errors and I think there are a few subtleties there.

Cheers, Josh
**********************************************************
Josh Willis, Ph.D.
Jet Propulsion Laboratory”
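
As a check on the arithmetic, the seven independent 4-year rates and their standard errors quoted in the e-mail can be combined in a few lines of Python (for independent estimates, the error of the mean adds in quadrature):

```python
import numpy as np

# The seven independent 4-year warming rates and one-standard-error
# uncertainties quoted in the e-mail above (W/m^2).
rates = np.array([-0.1132, -0.4417, -0.0102, -0.0154, -0.0231, -0.0799, 0.1519])
errors = np.array([0.5820, 0.5807, 0.5693, 0.5673, 0.5636, 0.5410, 0.5468])

mean_rate = rates.mean()
# For independent estimates, the error of the mean adds in quadrature.
mean_error = np.sqrt(np.sum(errors**2)) / len(errors)

print(f"{mean_rate:.6f} +/- {mean_error:.4f} W/m^2")
# ~ -0.075943 +/- 0.2134, matching the quoted -0.075941 +/- 0.2139
# to within the rounding of the listed inputs.
```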

My Follow-Up E-mail On May 9 2008

“Hi Josh, I have started to go through your analysis and have several comments and suggestions. I agree Figure 1 is the clearest for the …. article. It clearly shows, for example, the intra-annual variation in heat, with an interesting variation in the timing of the peaks and troughs between the years. Within a year, there is both a period with positive radiative imbalance and a period with negative radiative imbalance (e.g., see this also for the lower troposphere in the April 18, 2008 weblog).

There is also another way to assess the heat content change over the period: bin the data in yearly blocks and test statistically whether any of the blocks are different from each other. If the last block is not statistically different from the first block, then there is no statistically significant change. This would add more data points to the test (within the blocks) and also reduces the question to whether or not there has been a statistically significant addition or loss of heat over the time period.

With respect to how to allocate the heat changes to the global scale, I suggest that there is no need to scale up by area. The assumption would be that, since the oceans are about a 70% sample of the Earth, the computed value of the heat changes can be assumed to be the same for the other 30% (which, of course, is not sampled). This is an approximation, but in lieu of better ways to estimate, this seems reasonable.

On the other heat reservoirs, the global sea ice is actually near its long-term average; see
http://arctic.atmos.uiuc.edu/cryosphere/IMAGES/global.daily.ice.area.withtrend.jpg

In any case, this data can be used to estimate the heat changes in the sea ice by assuming a reasonable thickness (it will be a small contribution in any case).

The troposphere has actually been flat and has now cooled; e.g. see Figure 7 in
http://www.ssmi.com/msu/msu_data_description.html

I agree with you on the deeper ocean. The only other explanation for continuing sea level rise is a rise in the ocean bottom on these time scales (which is a topic outside of my expertise).

It also would be useful to compute the maximum and minimum global annually averaged radiative imbalance using the 95% confidence value. This would permit a direct comparison to the way the IPCC present their data (as global annually averaged TOA radiative forcing), in order to assess whether the sum of the radiative forcings and feedbacks is less than or greater than the IPCC estimates of the radiative forcing. If the sum is negative under any reasonable estimate, this means the radiative feedbacks are negative, which would conflict with the IPCC assumption of an amplification of warming by the water vapor feedback. If the feedbacks are positive, this supports the IPCC’s view….

Best Regards, Roger”
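
A minimal Python sketch of the yearly-binning test suggested in the e-mail above is given below; the heat content series here is a random placeholder for illustration only, not real Argo data:

```python
import numpy as np
from scipy import stats

# Placeholder series standing in for monthly upper-ocean heat content
# anomalies (Joules); real Argo-based values would be used in practice.
rng = np.random.default_rng(1)
hc = rng.normal(0.0, 1.0, size=60)

first_block = hc[:12]    # first yearly bin
last_block = hc[-12:]    # last yearly bin

# Two-sample t-test: is the last yearly block different from the first?
t_stat, p_value = stats.ttest_ind(first_block, last_block, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A large p-value would indicate no statistically significant heat change
# between the two periods at the chosen significance level.
```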

The analysis being completed by Josh Willis and colleagues is central to the issue of assessing global warming and cooling. Climate Science recommends that upper ocean heat changes in Joules become the primary assessment tool for global climate system heat changes, as the data, with the introduction of the Argo network, are now robust enough for this evaluation. A website with the latest upper ocean heat content analyses should be funded and made widely available to the climate community, policymakers and, of course, the public.

Comments Off on New Information From Josh Willis On Upper Ocean Heat Content

Filed under Climate Change Metrics

Error Growth Beyond The Hapless Butterfly

Climate Science is fortunate to have another guest weblog by the internationally respected scientist Professor Hendrik Tennekes [see also his excellent earlier guest weblogs].

Weblog: Error Growth Beyond The Hapless Butterfly by Henk Tennekes

In the minds of the general public, the sensitive dependence on initial conditions that many nonlinear systems exhibit is expressed vividly by Ed Lorenz’ description of a butterfly which, merely by flapping its wings, might cause a tornado far away. It is unfortunate that Lorenz’ poetry has been taken too literally, even by scientists. As far as I have been able to determine, Lorenz meant to illustrate error growth caused by data assimilation and initialization errors, not the possible upscale propagation of errors. In my mind, an undetected small-scale disturbance cannot cause an unexpected large-scale event. Even a million Monarch butterflies taking off from their winter roost in Mexico cannot cause a tornado in Kansas. Also, it takes considerable time for small-scale calculation errors to propagate toward the large-scale end of the energy spectrum, especially when, as in all turbulence, the flow is strongly dissipative. Errors that creep in through subtle deficiencies in the codes employed are most effective when they invade the large scales of motion directly. Aliasing between neighboring wave numbers is a good example. Also, the upscale transfer of error “energy” in the subgridscale realm is ruled out by parameterization. Whenever individual eddies are replaced by a parameterized estimate of the subgrid scale motion, the issue of sensitive dependence on small-scale errors in initial conditions is moot. Lorenz’ butterfly deserves a more intelligent treatment.

The matter of sensitive dependence on initial conditions addresses what might happen to individual realizations. Once ensembles are formed, as by averaging over time and/or space, one encounters the core of the turbulence problem: the dynamical properties of averages differ substantially from those of individual events. Error growth in ensembles is unlikely to parallel error growth in individual realizations. Turbulent pipe flow, for example, is stable in the mean, even if the individual eddies are not. Much further study is needed, but progress in this area is impeded by the conveniences offered by General Circulation Models as applied to climate projections. These are run in a quasi-forecasting mode and imitate features like cyclogenesis on average rather well, even if the timing and pathways of individual storms are poorly represented. It is unfortunate that no robust theory exists of the dynamics of the general circulation. Such a theory would offer a conceptual framework for the study of the many varieties of error growth in GCMs. Climate forecasting is far from being mature. No systematic work on the admittedly very complicated dynamics of error growth has been done. Even the relatively straightforward matter of estimating the prediction horizon of climate models has received no attention to speak of. If a reliable method for calculating the effective prediction horizon exists anywhere, it must have slipped past me unawares, though I have been anxiously waiting for it these past twenty years.

In view of the manifestly chaotic behavior of the weather, one should be suspicious of claims about the stability of the climate system. The idea that the climate might be well-behaved, even if the weather is not, is not supported by any investigations that I am aware of. The very claim that there exist no processes in the climate system that may exhibit sensitive dependence on initial conditions, or on misrepresentations of the large-scale environment in which these processes occur, is ludicrous. Just think of the many factors that promote the birth of a hurricane. It is not just the sea water temperature that may trip such an event, but also the presence or absence of wind shear, the upper atmosphere temperature field, and so on. In short, the climate would be stable only if there existed not a single potential “tipping point”. I consider that inconceivable.

In the absence of a theoretical framework, one has to investigate all possible causes of error growth. Data assimilation and initialization errors are but one source of trouble. What to think of errors caused by the unavoidable shortcomings in the parameterization of the “physics”? Parameterization always involves simplification and smoothing; in a complex nonlinear system like the climate one cannot assume offhand that these tricks will not lead to unexpected kinds of error growth. Also, any error in this category is not triggered by a single impulse at startup time. Instead, it is aggravated by new impulses at each time step in the calculations.

Let me illustrate this with the simple model Ed Lorenz used to popularize nonlinear behavior. The repeated iteration

x(n + 1) = x(n)^2 – 1.8

is sensitive to initial errors, but it is also sensitive to other kinds of mistakes. One might imagine that the exact value of the coefficient in front of x-squared is unknown, or that the additive term 1.8 is subject to a small parameterization defect, so that it is taken to be 1.82, a mere 1% off the “true” value 1.8. Now determine what will happen. If the iteration is started with x(0) = 1 and the additive constant equals 1.8, we obtain the sequence

1,  -0.8,  -1.16,  -0.4544,  -1.59352,  0.73931, and so on.

But if the additive constant is 1% off, we get

1,  -0.82,  -1.1476,  -0.50301,  -1.56698,  0.63542, and so on.

In just five steps, the 1% “parameterization error” has grown by a factor of sixteen!
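
The quoted sequences are easy to reproduce; here is a short Python sketch of the iteration for both values of the additive constant:

```python
# Iterate x(n+1) = x(n)**2 - c for the "true" and the 1%-perturbed
# additive constant, reproducing the two sequences quoted above.
def iterate(c, x0=1.0, steps=5):
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] ** 2 - c)
    return xs

exact = iterate(1.80)       # 1, -0.8, -1.16, -0.4544, -1.59352, 0.73931
perturbed = iterate(1.82)   # 1, -0.82, -1.1476, -0.50301, -1.56698, 0.63542

for a, b in zip(exact, perturbed):
    print(f"{a:9.5f}  {b:9.5f}  diff = {abs(a - b):.5f}")
# After five steps the difference is 0.10389, about 16% of the perturbed
# value 0.63542: the 1% parameter error has grown by a factor of sixteen.
```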

One can vary this theme in many ways. Imagine, for example, that one cannot be sure of the exponent in the algorithm. It is taken as two, but what would happen if one has to accept a 10% uncertainty because of inadequate knowledge of the “physics”? In climate modeling, several processes are modeled with parameterizations of questionable accuracy. The difference between clouds in the atmosphere and cloudiness in a model involves several conceptual simplifications of dubious reliability, including the lack of attention to the difference between the behavior of ensembles (“cloudiness” is an ensemble) and that of the clouds that pass my window at this moment. The standard trick of making models behave “realistically” by adding an overdose of numerical viscosity is, to put it mildly, unprofessional. The viscosity dampens unwanted behavior, but decisions as to what is wanted and what is not are made subjectively. If such choices are not open to public scrutiny, the science involved is probably substandard. I maintain, as I have for many years, that it is up to climate modelers to demonstrate by which methods the accuracy, reliability, and forecast horizons of their model runs can be assessed. Good intentions aren’t good enough.

Ed Lorenz is also famous for the attractor in his three-variable model for deterministic, nonperiodic flow (1963). That attractor has a shape vaguely reminiscent of butterfly wings, which was of great help in spreading the butterfly fairytale. In the youthful enthusiasm of the early years of chaos theory, many people were hunting for the dimension of the climate attractor. Numbers around nine were mentioned with some frequency. These days we know better. The climate attractor is incredibly complex; its multidimensional landscape of hills, valleys and “tipping points” has not yet been charted with any accuracy. Future generations of climate scientists will have to study the possible sensitive dependence of each feature in that landscape on assimilation, initialization, and parameterization errors. I dare to venture that they will find so many conceivable “tipping points” that they may decide to throw in the towel and give up on the idea of climate forecasting altogether. I did so many years ago, when I realized that sensitive dependence on initial conditions is not nearly as dangerous as the unwillingness to explore possible sensitive dependence on shortcomings in the codes employed and in the data assimilation software.

Let me conclude. I adhere to the Lorenz paradigm because I do not want to forget for a moment that small mistakes of whatever kind on occasion have large consequences. As far as I am concerned, the climate of our planet continuously balances on the verge of chaos. In my opinion, optimistic pronouncements about the stability of the climate system are unwarranted and unprofessional. I prefer modesty.

Comments Off on Error Growth Beyond The Hapless Butterfly

Filed under Guest Weblogs

A Short Explanation Of Why The Monitoring Of Global Average Ocean Heat Content Is The Appropriate Metric to Assess Global Warming

Climate Science has posted numerous weblogs (e.g. see and see) and several papers (e.g. see) on the value of using ocean heat content changes to assess climate system heat changes.  We have also presented evidence of major problems, including a significant warm bias, with the use of land temperature data at a single level to monitor these heat changes (e.g. see and see).

To concisely illustrate the issue, the definition of the global average surface temperature anomaly, T’, can be used. The equation for this in NRC (2005) is

dH/dt = f – T’/lambda

where H is the heat content of the climate system, T’ is the change in surface temperature in response to a change in heat content (the temperature anomaly), f is the radiative forcing at the tropopause, and lambda is called the “climate feedback parameter” [although, more accurately, it should be called the “surface temperature feedback parameter”!]. T’ is on the order of tenths of a degree C per decade and must be computed from a spatially heterogeneous set of temperature anomaly data, particularly over land.

Moreover, in this approach, there are four variables: H, f, T’ and lambda. This is clearly an unnecessarily complicated way to compute climate system heat changes.

The alternative is much more straightforward. Simply compute H at one time and H at a second time (using ocean heat content measurements; e.g. see). The uncertainty in the data needs to be quantified, of course, but within these uncertainty brackets, a robust evaluation of global warming can be obtained. For example, a time slice of ocean heat content at any particular time can be compared with an earlier time slice and, within the uncertainty of the observations and their spatial representativeness, used to document the change in H between these two time periods. There is also no “unrealized heating”, as is claimed when T’ is used.
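
As an illustration of how direct this is, here is a short Python sketch; the heat content values and uncertainties are hypothetical placeholders, and the conversion to a radiative imbalance uses the Earth's surface area of about 5.1 x 10^14 m^2:

```python
import numpy as np

# Hypothetical values for illustration only, not real ocean data:
# heat content (Joules) at two times, each with a one-standard-error
# uncertainty, e.g. from an Argo-based analysis.
H1, err1 = 1.00e23, 0.5e22
H2, err2 = 1.02e23, 0.5e22
years = 4.0

dH = H2 - H1
dH_err = np.hypot(err1, err2)   # independent errors add in quadrature

# Convert to an implied global-average radiative imbalance by dividing
# by the Earth's surface area (about 5.1e14 m^2) and the elapsed time.
denom = 5.1e14 * years * 3.15576e7
print(f"dH = {dH:.2e} +/- {dH_err:.2e} J")
print(f"implied imbalance = {dH / denom:.2f} +/- {dH_err / denom:.2f} W/m^2")
```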

The change in H can then be used to communicate to policymakers and others the magnitude of global warming in Joules, which, unlike temperature in degrees Celsius, is a physical unit of heat.

Why is this not a priority? There are two possible reasons. First, the time period of good data is much shorter than for the surface temperatures. However, since the IPCC models predict continuing warming, the emphasis on the data from the last decade is well placed. Second, the assumption still exists that the ocean is not well sampled or that there are large errors in the measurements. These concerns have been addressed by excellent global coverage of the oceans by the Argo network (see) and by recent corrections to the data (e.g. see).

Therefore, it is time to move beyond seeking to evaluate T’ and instead directly monitor values of H for different time periods as the primary metric of global warming. A reluctance to report on these values by the media and in upcoming climate change reports should be viewed as an attempt to quash an inclusive assessment of global warming.

Comments Off on A Short Explanation Of Why The Monitoring Of Global Average Ocean Heat Content Is The Appropriate Metric to Assess Global Warming

Filed under Climate Change Metrics

Challenge to Real Climate On The IPCC Global Climate Model Predictions Of Global Warming

SECOND UPDATE: May 27 2008

In seeking further to assess the agreement between the GISS model and observations, since Gavin would not help, I dug deeper into the paper

Hansen et al., 2005: Earth’s Energy Imbalance: Confirmation and Implications. Science, 3 June 2005: 1431-1435. DOI: 10.1126/science.1110252

and their supplement.

It is clear that the GISS model is consistent with global average upper ocean heat content changes in the 1990s and up to 2003, and that the Watts per meter squared estimate of radiative imbalance that is diagnosed is accurate based on the observed ocean heat data. One of the confusions in their paper, however, is that Figure 2 has incorrect units plotted on its left-hand axis. The units are plotted as Watts year per meter squared, when they should be 10**22 Joules.

The supplemental data present insightful plots of the spatial distribution of upper 750 m ocean heat content change. This is an effective format for them to use when updating the observational and modeling comparisons. This should include quantitative statistical comparisons of the spatial degree of agreement between the GISS model predictions and the observations of upper ocean heat content over the same time period.

The upper ocean heat content change data from the last 4 years, however, do not conform to the Watts per meter squared estimate of radiative heating reported in Hansen et al. 2005. This is a central issue that needs to be explained by the GISS group. We look forward to Gavin and Jim (and their other co-researchers) reporting to the climate community on this subject. We also look forward to the values of upper ocean heat content changes that the GISS model predicts for the next several years.

UPDATE: May 27 2008

Regretfully, Real Climate [Gavin Schmidt] has rejected the challenge [see http://www.realclimate.org/index.php/archives/2008/05/tropical-tropopshere-ii/#comment-88172].  The claim that I should do this analysis is off base, however, as Gavin works with the models and is in an ideal position to do this.

Moreover, Gavin and his colleague Jim Hansen already did this a few years ago, as I wrote in the first posting of this weblog; again see

Hansen et al., 2005: Earth’s Energy Imbalance: Confirmation and Implications. Science Express, April 28 2005

It is obviously time for GISS to repeat this analysis, since their conclusion on the radiative imbalance is significantly too high. The data were good enough for Gavin then, as they wrote in the abstract of their paper

“…..Our climate model, driven mainly by increasing human-made greenhouse gases and aerosols, among other forcings, calculates that Earth is now absorbing 0.85 ± 0.15 watts per square meter more energy from the Sun than it is emitting to space. This imbalance is confirmed by precise measurements of increasing ocean heat content over the past 10 years…..”

The data have improved even further since (e.g. see). The only conclusion is that completing this comparison between GISS model predictions and observations of updated upper ocean heat content trends would be an embarrassment to them.

*********************************************************************************************

Real Climate has offered a challenge (a bet) on their weblog on global cooling, but they use global average surface temperature trends as the metric [see].

As shown, for example, in

Pielke Sr., R.A., 2003: Heat storage within the Earth system. Bull. Amer. Meteor. Soc., 84, 331-335.

however, the monitoring of changes in the ocean heat content is a much more robust metric to assess global warming and cooling. The global average surface temperature trend has a number of unresolved issues with respect to its value to diagnose global climate system heat changes, including a warm bias (see and see).

Climate Science has proposed using Joules that accumulate within the oceans as the currency to assess climate system heat changes, rather than a global average surface temperature trend (e.g. see).

This viewpoint is supported by Jim Hansen and colleagues.  As reported by

Hansen et al., 2005: Earth’s Energy Imbalance: Confirmation and Implications. Science, 3 June 2005: 1431-1435. DOI: 10.1126/science.1110252

their abstract claims that

“Our climate model, driven mainly by increasing human-made greenhouse gases and aerosols, among other forcings, calculates that Earth is now absorbing 0.85 ± 0.15 watts per square meter more energy from the Sun than it is emitting to space. This imbalance is confirmed by precise measurements of increasing ocean heat content over the past 10 years. Implications include (i) the expectation of additional global warming of about 0.6°C without further change of atmospheric composition; (ii) the confirmation of the climate system’s lag in responding to forcings, implying the need for anticipatory actions to avoid any specified level of climate change; and (iii) the likelihood of acceleration of ice sheet disintegration and sea level rise.”

This means, using the conversion between a continuous rate of global average heating in Watts per square meter and the accumulated Joules of heat given in the Pielke 2003 paper, that the 0.85 Watts per meter squared reported in Hansen et al corresponds to an accumulation of heat of 1.38 * 10**22 Joules per year. Over ten years, this would be 13.8 * 10**22 Joules of heat accumulation. The use of this climate metric (Joules) to assess global warming and cooling will add to the resolution of the very appropriate issues raised on Prometheus and The Blackboard as to how to test the skill of the IPCC models.
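
The conversion itself is simple; a few lines of Python confirm the numbers, assuming the Earth's surface area of about 5.1 x 10^14 m^2 and a 365.25-day year:

```python
# Convert a sustained global-average radiative imbalance (W/m^2) into
# accumulated Joules, per the Pielke (2003) conversion cited above.
EARTH_AREA = 5.1e14            # Earth's surface area, m^2
SECONDS_PER_YEAR = 3.15576e7   # 365.25-day year

def joules_accumulated(imbalance_w_m2, years):
    return imbalance_w_m2 * EARTH_AREA * SECONDS_PER_YEAR * years

print(f"{joules_accumulated(0.85, 1):.2e} J in one year")    # ~1.37e22 J
print(f"{joules_accumulated(0.85, 10):.2e} J in ten years")  # ~1.37e23 J
# This matches the ~1.38 * 10**22 J per year quoted above to within rounding.
```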

On those websites, the discussion has focused on how many years temperatures at one level near the surface must be monitored in order to have confidence that model predictions of the global average surface temperature trend are consistent with the observations of temperature trends. The use of Joules, however, significantly reduces the time period required, since it is a change in a reservoir of heat (i.e., the ocean), with its much greater mass, that is sampled, rather than the trend of the global average near-surface air temperature anomaly at one level.

The challenge to Gavin Schmidt and Ray Pierrehumbert at Real Climate (since Real Climate has presented a bet), therefore, is to answer these questions:

  • what amount of heat in Joules was predicted by the IPCC models to accumulate within the climate system over the last five, ten and twenty years?
  • what is the best estimate of what actually accumulated based on the observations of ocean heat content changes for these three time periods?
  • what magnitude of heat accumulation in Joules in the next five, ten, and twenty years would cause Real Climate to conclude that the IPCC models are “inconsistent” with the observations?

We would be glad to post their reply to these questions as a guest weblog on Climate Science.

Comments Off on Challenge to Real Climate On The IPCC Global Climate Model Predictions Of Global Warming

Filed under Climate Change Metrics

Media Report On the Important Role Of Landscape Change On Climate

There is an interesting news article in The Telegraph on May 23 2008 by G.S. Mudur titled “Riders rained out, trees to blame – Jump in green cover over capital the reason for unseasonable showers, say scientists”. The article, referring to New Delhi and vicinity, includes the text

“The growth of forest and tree cover in the capital and its neighbouring regions — traditionally an arid zone — may be a key factor contributing to unseasonal local rain, atmospheric scientists said.

Rain in Delhi during the month of May has increased each year over the past three years — from 37mm in 2005 to 71mm in 2007, and 104mm recorded so far this month. Over the decades, forest and tree cover in the capital has bloomed — from a mere 22sqkm in 1993 to 283sqkm by 2005.

“We could call this unintentional climate engineering — on a local scale,” said Rengaswamy Ramesh, a senior scientist at the Physical Research Laboratory, Ahmedabad.

This is yet another example (on the local scale) that land use/land cover change is a first-order climate forcing (also see and see), and that this issue was ignored in the 2007 IPCC Summary for Policymakers. Finally, the media are starting to pay attention to this issue!

Comments Off on Media Report On the Important Role Of Landscape Change On Climate

Filed under Climate Science Reporting

Follow-up to The Response to Ray Pierrehumbert’s Real Climate Post by Roy Spencer

Follow-Up By Roy Spencer on Ray Pierrehumbert’s Real Climate Post

I’ve received the comment that I did not adequately address the last three graphs that Ray showed in his RealClimate.org post of May 21, 2008. These three plots represent what he calls Lessons 1, 2, and 3 on how to “cook a graph.”

As I mentioned in my May 22 post, the model I assumed was purposely simple. I did not claim that my results were necessarily the explanation for warming over the last century, only that climate researchers have largely ignored the possibility that natural modes of climate variability such as ENSO and the PDO could have very small associated non-feedback changes in cloudiness which might have caused a substantial part – maybe even most – of the low-frequency temperature variability since 1900.

In his “Lesson 1”, he used yearly averaging (rather than 5-year averaging, like I did) of the SOI and PDO indices in a simple model like the one I described. The yearly data then generated a temperature history that had much larger SST fluctuations than would be reasonable. But he is basically assuming here that there is a perfect mapping of the PDO and SOI indices to cloud cover at arbitrarily high time resolutions, which I never claimed. The relationship would likely be very noisy, with the underlying signal of 1 Watt or so of “internal radiative forcing” associated with the PDO or El Niño/La Niña only emerging over long periods of time.

In Ray’s “Lesson 2”, he claims that I used a much too deep mixed layer depth; I used 1,000 m, and he claimed it should be more like 50 m. Oh, really? Well, if the mixed layer depth of the ocean on multi-decadal time scales is only 50 m, then why are we waiting for the remaining “warming in the pipeline” from our CO2 emissions, as we are constantly told exists because of the huge heat capacity of the ocean?
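
A rough back-of-envelope check supports the point; with an assumed net feedback parameter of 3.5 W m-2 K-1 (the value used in the simple-model discussion above), the e-folding response time of a slab ocean scales directly with its depth:

```python
# Rough check: the e-folding response time of a slab ocean is
# tau = (rho * c_w * depth) / lambda, so it scales directly with depth.
RHO_CW = 4.18e6   # volumetric heat capacity of seawater, J m^-3 K^-1 (approx.)
LAM = 3.5         # assumed net feedback parameter, W m^-2 K^-1

for depth_m in (50.0, 1000.0):
    tau_years = RHO_CW * depth_m / LAM / 3.15576e7
    print(f"depth {depth_m:6.0f} m  ->  response time ~ {tau_years:4.1f} years")
# ~1.9 years for 50 m versus ~38 years for 1000 m: only a deep effective
# mixed layer leaves decades of "warming in the pipeline".
```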

Finally, in his third and final lesson of “How to Cook a Graph” he claims I used a totally unrealistic starting value for a temperature anomaly. I used about -0.5°C. Well, the temperature baseline is irrelevant, it’s the initial radiative imbalance that is important: I used about 0.25 W m-2.  Does Ray really think we know what the radiative balance at the top-of-atmosphere was in the year 1902, to 0.25 W m-2?  I sure don’t.

In conclusion, Ray’s claim that all cloud fluctuations are connected to surface temperature variations (i.e., the result of feedback) is an illustration of one of the problems that most climate modeling groups have: too many physicists from other disciplines, and not enough meteorologists who know how complex weather (and especially the processes that control cloud formation and dissipation) really is.

Comments Off on Follow-up to The Response to Ray Pierrehumbert’s Real Climate Post by Roy Spencer

Filed under Guest Weblogs

Model Verification – A Guest Weblog by Giovanni Leoncini

Giovanni Leoncini is finishing his Ph.D. with Dr. Pielke and is working at the Department of Meteorology of the University of Reading on convective ensembles. He can be contacted at g(dot)leoncini(at)reading(dot)ac(dot)uk. [Thanks to Timo Hämeranta for alerting us to this paper].

Giovanni Leoncini’s Guest Weblog

As a member of the mesoscale NWP community, I often find that climate modeling papers and seminars seem to have a different standard when it comes to verification. Whilst it is routine in the NWP community (see the last issue of Meteorological Applications on verification: http://www3.interscience.wiley.com/journal/113388504/home), I don’t perceive a similar effort in the climate modeling community. In the introduction of their paper “Performance metrics for climate models” (2008, J. Geophys. Res.), Gleckler et al. mention a few reasons for this discrepancy. The applications of climate models are very diverse in scale and parameters, and “a succinct set of measures that assess what is important to climate has yet to be identified.” A second reason is that the opportunities to test climate model skill are limited because of the slow evolution of climatologies. Furthermore, data are not error free and their uncertainties are often underestimated. Gleckler et al. also mention that models can be tuned to appear realistic for some features, “but as a result of compensating errors.” They go on to tackle very elegantly and thoroughly the complex issue of climate model verification, offering a methodology to condense the relevant information content as much as possible without oversimplification.

Both the methodology and the conclusions of the paper are very important, and the authors discuss them in the wider context of the complexities of climate modeling, which are laid out clearly along with their significance for model performance. Using different datasets and a variety of indices of model performance, Gleckler et al. rank several models based on their differences from observations. They also introduce two indices to evaluate the mean fields and their variability. Their main conclusions can briefly be summarized as follows:

  • The multi-model mean and median very often perform best.
  • Generally speaking there are models which tend to perform better than others, but a single model can change its ranking by 6 or 7 slots depending on the field analyzed, on whether the focus is on its mean or variance, on the geographical area (Northern Hemisphere, Tropics, etc.), and also on which dataset is used as reference.
  • The mean climate metrics can be “woefully inadequate for describing the multiple facets of model performance”, and a good agreement with observations for the mean climate does not necessarily imply a good representation of the climate variability.

Gleckler, Taylor, and Doutriaux deserve credit for this very important work, especially because they were able to look at model performance in depth, summarizing the wide variety of information that model output provides. Furthermore, the first two points mentioned above represent a formalization of some aspects of modeling in general that have been implicitly accepted for a while, at least in the NWP community. Because no single model can yet encompass the variety of processes and interactions that occur in the atmosphere, no model outperforms all other models in all cases, and a multimodel ensemble can capture additional uncertainties, and thus its mean performs best (for a more in-depth discussion see Hagedorn et al. 2005). The third point raises another important issue: what conclusions can be drawn from a model simulation that does reproduce a mean variable but fails to characterize its variability? Does this imply that the model is getting the mean value right for the wrong reasons? Most likely so, but we all agree that the significance of the error varies with the variable and the type of mean, and most of all this does not imply that the entire simulation is to be discarded for all applications, nor that it can’t be used as a forecast or as a sensitivity experiment. However, there is no quantitative analysis, at least to my knowledge, that tackles this issue in a general fashion, although multimodel ensembles are definitely a way of doing just that. While the significance of this problem for NWP is strongly limited by the constant verification of model performance, I think it is an issue for climate-type simulations, especially when they are used to establish policies. How significant is a hydrological balance for the future European climate as provided by a global model that does not simulate the North Atlantic Oscillation well? If the NAO is not well captured, the storm track might not be realistic, further decreasing the confidence in the precipitation fields over Europe. Being able to quantitatively assess this type of uncertainty is a very important step toward a realistic use of models.
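
One way to make the ranking idea concrete is a toy Python example: score a handful of synthetic "models" against a synthetic observed field by RMS error and rank them, multi-model mean included. Everything below is placeholder data; the point is only the mechanics, echoing the first conclusion listed above:

```python
import numpy as np

# Synthetic "observations" and "models": each model is the observed field
# plus noise of a different amplitude; the multi-model mean is included.
rng = np.random.default_rng(2)
obs = rng.normal(size=(72, 144))   # placeholder lat x lon field
models = {f"model_{i}": obs + rng.normal(scale=0.5 + 0.1 * i, size=obs.shape)
          for i in range(5)}
models["multimodel_mean"] = np.mean(list(models.values()), axis=0)

def rmse(field, reference):
    return float(np.sqrt(np.mean((field - reference) ** 2)))

# Rank by RMS difference from the observations, best first.
for name, field in sorted(models.items(), key=lambda kv: rmse(kv[1], obs)):
    print(f"{name:16s}  RMSE = {rmse(field, obs):.3f}")
# The multi-model mean comes out on top, echoing the first conclusion above.
```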

Reference:
Hagedorn, R., F.J. Doblas-Reyes, and T.N. Palmer, 2005: The rationale behind the success of multi-model ensembles in seasonal forecasting – I. Basic concept. Tellus Series A, 57:219-233.

Comments Off on Model Verification – A Guest Weblog by Giovanni Leoncini

Filed under Guest Weblogs

A Response to Ray Pierrehumbert’s Real Climate Post of May 21, 2008 by Roy Spencer

Guest Weblog By Roy Spencer on Ray Pierrehumbert’s Real Climate Post of May 21 2008

Since Ray Pierrehumbert has decided to critique some of my published work (and unpublished musings) on global warming over at RealClimate.org, I thought I’d offer some rebuttal. The main theme of his objections to our new paper and what it demonstrates is clearly wrong – and leading IPCC experts have agreed with me on this.

But first the big picture.  The bottom line of what I try to demonstrate these days is that the claimed high probability for the belief that mankind is responsible for most or all of the warming over the last century is grossly overstated.  Is an anthropogenic explanation plausible? Sure. But since virtually no serious work has been done to investigate natural variability on daily to decadal time scales and how it can influence lower frequency climate variability, it is far from ‘very probable’.  

Ray’s first objection is to our new paper, now in press in J. Climate (Potential Biases in Feedback Diagnosis from Observational Data: A Simple Model Demonstration, by Spencer and Braswell).  I quote:

 “In Spencer and Braswell (2008), and to an even greater extent in his blog article, Spencer tries to introduce the rather peculiar notion of “internal radiative forcing” as distinct from cloud or water vapor feedback. He goes so far as to say that the IPCC is biased against “internal radiative forcing,” in favor of treating cloud effects as feedback. Just what does he mean by this notion? And what, if any, difference does it make to the way IPCC models are formulated? The answer to the latter question is easy: none, since the concept of feedbacks is just something used to try to make sense of what a model does, and does not actually enter into the formulation of the model itself.”

Ray is quite simply wrong — and the reviewers of our paper (Piers Forster and Isaac Held) agree with me.  It matters a great deal whether radiative fluctuations are the result of feedback on surface temperature, versus the myriad other variables that control cloudiness.  Piers Forster was honest enough to admit that their neglect of the internal variability term in Eq. 3 of “The Climate Sensitivity and its Components Diagnosed from Earth Radiation Budget Data” (Forster and Gregory, J. Climate, 2006) was incorrect, and that it indeed can not be neglected in feedback diagnosis efforts using observational data.  He also stated that the climate modeling community needs to be made aware of this.

In fact, both Forster and Held had to construct their own simple models of the effect to understand what I was talking about so that they could convince themselves.  Now, I am not a modeler – I’m more of an observationalist.  Why did it take someone like me to point this out before anyone else in the modeling community discovered it?  I’m not funded to do this stuff – they are.

Our paper gives the very simple case of daily random cloud variability over the ocean (does Ray believe there is no such thing as stochastic variability?).  As the following figure demonstrates, this random behavior can cause decadal-scale SST variability that looks like positive feedback.

In this simple case, where the model noise and SST forcing match satellite-observed statistics from CERES (for reflected SW) and TRMM TMI (for SST), a positive feedback bias of 0.6 W m-2 K-1 resulted (the specified feedback, including the Planck temperature effect, was 3.5 W m-2 K-1).
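
The mechanism can be sketched in a few lines of Python. This is not the Spencer and Braswell model itself; it is a toy slab ocean with assumed parameter values, driven by daily radiative (cloud) noise plus non-radiative mixing noise, with "feedback" then diagnosed by regressing monthly-mean TOA flux on temperature:

```python
import numpy as np

rng = np.random.default_rng(0)

LAM = 3.5            # specified net feedback, W m^-2 K^-1 (value quoted above)
C = 4.18e6 * 50.0    # slab heat capacity for an assumed 50 m layer, J m^-2 K^-1
DT = 86400.0         # one-day time step, s
N_DAYS = 360 * 30    # thirty 360-day years

T = 0.0
temps, fluxes = np.empty(N_DAYS), np.empty(N_DAYS)
for i in range(N_DAYS):
    n_rad = rng.normal(0.0, 2.0)   # non-feedback radiative (cloud) noise, W/m^2
    n_mix = rng.normal(0.0, 2.0)   # non-radiative (mixing) noise, W/m^2
    flux = n_rad - LAM * T         # TOA flux anomaly a satellite would record
    T += (flux + n_mix) * DT / C   # slab ocean: C dT/dt = flux + mixing
    temps[i], fluxes[i] = T, flux

# Diagnose "feedback" the observational way: regress monthly-mean TOA flux
# anomalies on monthly-mean temperature anomalies.
T_mon = temps.reshape(-1, 30).mean(axis=1)
F_mon = fluxes.reshape(-1, 30).mean(axis=1)
slope = np.polyfit(T_mon, F_mon, 1)[0]

print(f"specified lambda = {LAM:.1f}, diagnosed = {-slope:.1f} W m-2 K-1")
# The diagnosed value comes out well below 3.5 (how far below depends on
# the assumed noise levels): non-feedback cloud noise masquerades as
# positive feedback, which is the bias described above.
```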

And if daily random cloud variations can do this, what might weekly, monthly, or yearly non-feedback fluctuations do?  Any cloud changes resulting from fluctuations in stability, wind shear, precipitation efficiency, etc. accompanying El Niño/La Niña, the Pacific Decadal Oscillation, or any other mode of internal variability will ALWAYS look like positive feedback – even if there is no feedback present.  The question of how the neglect of this effect has contaminated observational estimates of feedback has never even been addressed, let alone answered.

I repeat: to the extent that any non-feedback radiative fluctuations occur, their signature in climate data LOOKS LIKE positive feedback.  And when modelers use those relationships to help formulate cloud parameterizations, it can lead to models that are too sensitive.

Next, Ray objects to my simple example of using a different non-feedback source of variability: I assumed cloud changes proportional to the SOI and PDO indices as a potential low-frequency example of this behavior.  He shows that the resulting yearly radiative forcing would be much larger than what satellite radiative budget data have measured.  Well, the 5-year average forcing was only 1 or 2 W m-2, and any higher frequency (e.g., yearly) noise in the relationship could just be chalked up to the fact that something like the PDO index is not likely to be perfectly correlated to a cloud change.

And besides, the SOI/PDO example took me 1 hour on a weekend with a very simple single idea, internet access, and an Excel spreadsheet.  In stark contrast, the IPCC work represents many years and hundreds of millions of dollars of effort to connect the few degrees of freedom contained in the last 100 years of global temperature variations to an anthropogenic cause for those low-frequency signals.  What might we have learned if we put that kind of money and brainpower into looking for potential natural non-feedback sources of radiative variability?

Finally, Ray continues the popular ad hominem attack and revisionist history when referring to the fact that our (UAH) satellite temperature dataset contained errors (before Mears and Wentz at RSS developed their own analysis and discovered those errors). Well, contrary to Ray’s claim, we corrected those errors after they were demonstrated. For years now, our decadal temperature trends have been pretty close to those from RSS. This is how science progresses.

If there had been only one climate model up till now, would we be surprised if a new, second modeling group found errors in what the first modeling group had done?

And, Ray might be surprised to learn that we were not the last ones to make such an error.  The RSS satellite temperature record recently had spurious COOLING since early 2007 – which we helped RSS find the reason for.

Finally, I want to reiterate that I DO believe that an anthropogenic source for most of the warming over the last century is a plausible theory. But the claim that an anthropogenic source for the warming has been demonstrated to a high level of confidence cannot be supported…simply because so little work on potential natural causes has been done.

Comments Off on A Response to Ray Pierrehumbert’s Real Climate Post of May 21, 2008 by Roy Spencer

Filed under Guest Weblogs