Monthly Archives: April 2011

Climate Science Myths And Misconceptions – Post #4 On Climate Prediction As A Boundary Value Problem

I was alerted to a paper on climate as an initial and boundary value problem (h/t Jos de Laat). The paper inappropriately uses a model to draw its (incorrect) conclusions. As has been discussed numerous times on this weblog, models are hypotheses. Only real world observations can be used to test the skill of the models.

The paper is

Grant Branstator and Haiyan Teng, 2010: Two Limits of Initial-Value Decadal Predictability in a CGCM. Journal of Climate Volume 23, Issue 23 (December 2010) pp. 6292-6311 doi: 10.1175/2010JCLI3678.1

It is another example of the misuse of the scientific method.

The abstract of their paper reads [highlight added]

When the climate system experiences time-dependent external forcing (e.g., from increases in greenhouse gas and aerosol concentrations), there are two inherent limits on the gain in skill of decadal climate predictions that can be attained from initializing with the observed ocean state. One is the classical initial-value predictability limit that is a consequence of the system being chaotic, and the other corresponds to the forecast range at which information from the initial conditions is overcome by the forced response. These limits are not caused by model errors; they correspond to limits on the range of useful forecasts that would exist even if nature behaved exactly as the model behaves. In this paper these two limits are quantified for the Community Climate System Model, version 3 (CCSM3), with several 40-member climate change scenario experiments. Predictability of the upper-300-m ocean temperature, on basin and global scales, is estimated by relative entropy from information theory. Despite some regional variations, overall, information from the ocean initial conditions exceeds that from the forced response for about 7 yr. After about a decade the classical initial-value predictability limit is reached, at which point the initial conditions have no remaining impact. Initial-value predictability receives a larger contribution from ensemble mean signals than from the distribution about the mean. Based on the two quantified limits, the conclusion is drawn that, to the extent that predictive skill relies solely on upper-ocean heat content, in CCSM3 decadal prediction beyond a range of about 10 yr is a boundary condition problem rather than an initial-value problem. Factors that the results of this study are sensitive and insensitive to are also discussed.
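The "relative entropy from information theory" measure in the abstract can be illustrated with a minimal sketch. For the simplified case where both the forecast ensemble and climatology are Gaussian, the Kullback-Leibler divergence splits into a "signal" term (shift of the ensemble mean) and a "dispersion" term (change in spread), mirroring the abstract's contrast between ensemble-mean signals and the distribution about the mean. The Gaussian assumption and the variable names are my simplification, not taken from the paper.

```python
import math

def relative_entropy_gaussian(mu_f, var_f, mu_c, var_c):
    """KL divergence, in nats, of forecast N(mu_f, var_f) from climatology N(mu_c, var_c)."""
    # "Signal": predictability carried by the shift of the ensemble mean.
    signal = (mu_f - mu_c) ** 2 / (2.0 * var_c)
    # "Dispersion": predictability carried by the change in ensemble spread.
    dispersion = 0.5 * (var_f / var_c - 1.0 - math.log(var_f / var_c))
    return signal + dispersion
```

When the forecast distribution equals climatology the measure is zero (no information left from the initial conditions); a shifted mean or a narrowed spread each gives a positive value.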

The text starts with

“The scientific community is now taking on the challenge of using initialized models to produce time-evolving climate predictions for the next 10–30 yr (Smith et al. 2007; Keenlyside et al. 2008; Pohlmann et al. 2009). Such predictions will be a key component of the next Intergovernmental Panel on Climate Change (IPCC) assessment report (Taylor et al. 2009). Compared with traditional climate change experiments, the fundamental difference in these forecasts is that the initial ocean state is determined from observations, and the hypothesis is that the resulting forecasts will substantially benefit from this added information. But the duration of the influence of the ocean initial conditions remains unknown. Since the climate system is chaotic, inevitable errors in the initial conditions grow with time causing the initial signals to fade (Lorenz 1963). Eventually, the impact of the initial conditions becomes undetectable, placing a fundamental limit on its influence. If one considers a situation where the forcing of the climate system is changing, a second limit on initial condition influence should be introduced. For, if, as in the case with forcing by the ongoing changes in greenhouse gas (GHG) and aerosol concentrations, the system response increases with time, then at some point the influence of the initial conditions becomes of secondary importance compared to the forced response. In this paper, we quantify the forecast range at which these two limits are reached. Our results should help to determine the feasibility and value of decadal predictions (Meehl et al. 2009; Hurrell et al. 2009; Solomon et al. 2011).”
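The "classical initial-value predictability limit" traced to Lorenz (1963) can be illustrated with a toy sketch (my construction, not from Branstator and Teng; the forward-Euler scheme and standard Lorenz parameters are my choices): two trajectories of the chaotic Lorenz system that start almost identically diverge until their separation saturates at the size of the attractor, at which point the initial conditions carry no remaining information.

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz 1963 equations (standard parameters)."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

def separation(n_steps, eps=1e-8):
    """Distance between two runs whose initial x values differ by eps."""
    a = (1.0, 1.0, 1.0)
    b = (1.0 + eps, 1.0, 1.0)
    for _ in range(n_steps):
        a, b = lorenz_step(a), lorenz_step(b)
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
```

Printing `separation(n)` for increasing `n` shows roughly exponential error growth followed by saturation, which is exactly why skill from ocean initial conditions must eventually be lost even in a perfect model.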

The method to study this question is described as

“Here, we have analyzed several Community Climate System Model, version 3 (CCSM3) ensemble experiments specifically designed to make it possible to address this issue.”

This paper uses a modeling approach, based on models that have not shown skill at regional multi-decadal climate prediction, to make a statement about the real climate system. These models do not accurately represent atmospheric/ocean features such as ENSO, the PDO, and the NAO, nor longer-term changes in deep ocean circulations.

Thus

Misconception #4:  Multi-Decadal Climate Prediction Is A Boundary Value Problem

I have discussed this subject in my paper

Pielke, R.A., 1998: Climate prediction as an initial value problem. Bull. Amer. Meteor. Soc., 79, 2743-2746.

where I wrote

“weather prediction is a subset of climate prediction and that both are, therefore, initial value problems in the context of nonlinear geophysical flow.”

In our paper

Rial, J., R.A. Pielke Sr., M. Beniston, M. Claussen, J. Canadell, P. Cox, H. Held, N. de Noblet-Ducoudre, R. Prinn, J. Reynolds, and J.D. Salas, 2004: Nonlinearities, feedbacks and critical thresholds within the Earth’s climate system. Climatic Change, 65, 11-38

we concluded that

“The Earth’s climate system is highly nonlinear: inputs and outputs are not proportional, change is often episodic and abrupt, rather than slow and gradual, and multiple equilibria are the norm.”

Excerpts from our paper are

“Past records of climate change are perhaps the most frequently cited examples of nonlinear dynamics, especially where certain aspects of climate, e.g., the thermohaline circulation of the North Atlantic ocean, suggest the existence of thresholds, multiple equilibria, and other features that may result in episodes of rapid change (Stocker and Schmittner, 1997). As described in Kabat et al. (2003), the Earth’s climate system includes the natural spheres (e.g., atmosphere, biosphere, hydrosphere and geosphere), the anthrosphere (e.g., economy, society, culture), and their complex interactions (Schellnhuber, 1998). These interactions are the main source of nonlinear behavior, and thus one of the main sources of uncertainty in our attempts to predict the effects of global environmental change. In sharp contrast to familiar linear physical processes, nonlinear behavior in the climate results in highly diverse, usually surprising and often counterintuitive observations…”
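The "thresholds and multiple equilibria" described above can be illustrated with a toy zero-dimensional energy balance model with a crude ice-albedo feedback. All parameter values here are illustrative tunings for this sketch, not values from the Rial et al. paper; the point is only that the nonlinearity creates multiple equilibrium states.

```python
SOLAR = 1368.0   # solar constant, W/m^2
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)
EMISS = 0.62     # effective emissivity (illustrative tuning)

def albedo(t):
    """Planetary albedo: icy (0.7) when cold, dark (0.3) when warm, ramp between."""
    if t < 250.0:
        return 0.7
    if t > 280.0:
        return 0.3
    return 0.7 - 0.4 * (t - 250.0) / 30.0

def net_flux(t):
    """Absorbed solar minus outgoing longwave radiation, W/m^2, at temperature t (K)."""
    return SOLAR * (1.0 - albedo(t)) / 4.0 - EMISS * SIGMA * t ** 4

def equilibria(lo=200.0, hi=350.0, step=0.01):
    """Temperatures where net_flux changes sign: the model's equilibrium states."""
    roots, t = [], lo
    while t < hi:
        if net_flux(t) * net_flux(t + step) < 0.0:
            roots.append(round(t, 1))
        t += step
    return roots
```

This configuration yields three equilibria: a cold "snowball" state, a warm state, and an unstable state between them, so a small push across the threshold produces an abrupt transition rather than a proportional response, which is the episodic, nonlinear behavior the paper emphasizes.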

There is a paper

Giorgi, F., 2005: Climate Change Prediction. Climatic Change, 73, 239. DOI: 10.1007/s10584-005-6857-4

which discussed this subject, but its implications were ignored by Branstator and Teng (2010). Giorgi writes

“….because of the long time scales involved in ocean, cryosphere and biosphere processes a first kind predictability component also arises. The slower components of the climate system (e.g. the ocean and biosphere) affect the statistics of climate variables (e.g. precipitation) and since they may feel the influence of their initial state at multi decadal time scales, it is possible that climate changes also depend on the initial state of the climate system (e.g. Collins, 2002; Pielke, 1998). For example, the evolution of the THC in response to GHG forcing can depend on the THC initial state, and this evolution will in general affect the full climate system. As a result, the climate change prediction problem has components of both first and second kind which are deeply intertwined.”

 If the climate system is both a boundary value and an initial value problem (which I agree with), it is an initial value problem!

The  Branstator and Teng 2010 paper is an example of a study that has failed to properly follow the scientific method as was discussed in

Short Circuiting The Scientific Process – A Serious Problem In The Climate Science Community

where I wrote

“There has been a development over the last 10-15 years or so in the scientific peer reviewed literature that is short circuiting the scientific method.

The scientific method involves developing a hypothesis and then seeking to refute it. If all attempts to discredit the hypothesis fail, we start to accept the proposed theory as an accurate description of how the real world works.

A useful summary of the scientific method is given on the website sciencebuddies.org, where they list six steps:

  • Ask a Question
  • Do Background Research
  • Construct a Hypothesis
  • Test Your Hypothesis by Doing an Experiment
  • Analyze Your Data and Draw a Conclusion
  • Communicate Your Results

Unfortunately, in recent years papers have been published in the peer reviewed literature that fail to follow these proper steps of scientific investigation. These papers are short circuiting the scientific method.”

All the Branstator and Teng 2010 paper does is tell us how the models that they used behaved. This is an interesting classroom exercise, but it does not advance our understanding of the climate system, since their study is not properly evaluated against observed real world data. This study should not have appeared in a peer reviewed journal.

The reality is that Climate Prediction Is An Initial Value Problem.


Filed under Climate Models, Climate Science Misconceptions, Research Papers

Perceptive Article On The Sad State Of Research Funding By Toby N. Carlson

Toby N. Carlson of the Department of Meteorology at the Pennsylvania State University has shared with me two articles on the sad state of research funding. This sentiment fits with my impressions of NSF funding that I have posted on in my weblog; e.g. see

Is The NSF Funding Untestable Climate Predictions – My Comments On A $6 Million Grant To Fund A Center For Robust Decision–Making On Climate And Energy Policy

The National Science Foundation Funds Multi-Decadal Climate Predictions Without An Ability To Verify Their Skill

The two articles are

Carlson, T. N., 2010: Science by Proxy. The Chronicle of Higher Education. October 17, 2010.

and

Carlson, T. N., 2008: Current funding practices in academic science stifle creativity. Review of Policy Research (Dupont Summit issue), 25, 631-642.

Excerpts from Carlson (2010) read [highlight added]

“The agencies are also at fault. They are bureaucracies that promote top-down science to suit political and administrative ends. To begin with, there is the application process itself. Often, an agency’s request for proposal, or RFP, reads like a legal document, constricting the applicant to stay within very narrow and conventional bounds, with no profound scientific questions posed at all. Many RFP’s are so overly specific that they amount to little more than work for hire. Those who know how to play the game simply reply to RFP’s with parroted responses that echo the language in the proposal, in efforts to convince the reviewers that their programs exactly fit the conditions of the RFP. Thus many RFP’s inhibit good research rather than encourage it.

Program managers—who are even further removed from the forefront of their fields than overburdened principal investigators—also favor large, splashy research projects with plenty of crowd appeal, like fancy Web sites that look impressive but that no one actually uses. In other words, useless science.

Money is trumping creativity in academic science. This statement was previously given substance in an article I published, along with a companion paper by Mark Roulston in the Bulletin of the American Meteorological Society (Carlson, 2006a; Roulston, 2006) and in a subsequent address I gave to the Heads and Chairs meeting in Boulder, Colorado (Carlson, 2006b). Here, I expand further on the issues treated in these papers, and make a plea for changing the way funding is administered in academic science. Using examples I show that the present worsening situation places a dead hand on the spirit and creative output of academic scientists, especially junior faculty. I suggest a possible solution, which would enable academic scientists to function in a stable environment, free from spurious financial pressures and dictates from university administration and funding agencies.”

Excerpts from the Carlson 2008 paper read

I would like to suggest an alternate approach to addressing this crisis. One approach would be to award a sum of money based on the score received from the reviewers. This would insure that all but the poorest proposals would receive some funding. Another suggestion is more radical. For this, we need not be fixated on the numbers here, as expediting the idea would entail a thorough cost analysis of funds available from institutions and the numbers of potential recipients of that funding. I believe that were funding agencies to collaborate by agreeing to award each faculty member a nominal sum of money each year (let’s say $20,000) plus one graduate student, subject to a very short proposal justifying the research and citing papers published, the total amount of money handed out would be far less than at present and the time spent in fruitless chasing after funds reduced considerably. Importantly, the productivity and creativity of the scientist would increase and the burden placed on reviewers of papers and proposals and on editors of journals would decrease.

The proposal submitted by the scientist to the funding agency would be very short (e.g., one page), and be subjected to a nominal review and a pass/fail criterion: does this proposal seem worthwhile? The level of subsistence would be set low enough to eliminate greed (or complacency on the part of the recipient), high enough to allow scientists adequate funds to carry on a viable research program free of financial stresses. The allotment would also be set sufficiently low as to insure that funding agencies have sufficient money left over for some larger programs. The latter would be funded by the submission of conventional proposals, subject to the current review process, except that the research would be initiated from the working scientist rather than the funding agency. In other words, bottom up science.

The atmosphere being created by the present system in academic science is joyless. Good scientific research requires dedication, patience, and enthusiasm and a high degree of passion for the chosen subject. Overhearing conversations in the corridors of my own institution, I am struck by the fact that the topics are almost always related to proposal writing and funding and not to scientific ideas. Where is the inspiration; where is the passion?

Toby’s recommendation is excellent and should be encouraged. With respect to NSF funding in climate science, the current focus by the NSF on funding multi-decadal climate predictions fits his characterization that funding agencies “are bureaucracies that promote top-down science to suit political and administrative ends”.


Filed under Academic Departments, Politicalization of Science

Another Publication Of An Unverifiable Multi-Decadal Climate Prediction: “Cold Spells In A Warming World”

I was alerted to an article and news releases on the prediction of cold outbreaks decades from now [h/t Ned Niklov]. The researchers are affiliated with Oak Ridge National Laboratory. That is relevant to me, as I was on a science review panel at Oak Ridge several years ago where one of our major recommendations was that they assess the predictability of climate forecasts starting as an initial value problem. This would have been a robust scientific approach, as observations can be used to test the skill of the multi-decadal predictions.

However, this article (and the climate modeling research program at Oak Ridge National Laboratory, if this paper is typical) has been derailed from the proper assessment of the skill of climate prediction.

Instead, as illustrated in the paper below, they have adopted the scientifically flawed approach of making regional climate forecasts decades into the future. The journal Geophysical Research Letters, by accepting such a prediction paper, similarly compromises robust science.

I have discussed the failure of the scientific method with such studies in past posts; e.g. see

Is The NSF Funding Process Working Correctly?

Invited Letter Now Rejected By Nature Magazine

Comments On The Peer-Review Journal Publication Process And Recommendations For Improvement

See also

Guest Post By Ben Herman Of The University Of Arizona

This article, which fails as robust science, is

Kodra, E., K. Steinhaeuser, and A. R. Ganguly (2011), Persisting cold extremes under 21st-century warming scenarios, Geophys. Res. Lett., doi:10.1029/2011GL047103, in press.

The abstract reads [highlight added]

“Analyses of climate model simulations and observations reveal that extreme cold events are likely to persist across each land-continent even under 21st-century warming scenarios. The grid based intensity, duration and frequency of cold extreme events are calculated annually through three indices: the coldest annual consecutive three-day average of daily maximum temperature, the annual maximum of consecutive frost days, and the total number of frost days. Nine global climate models forced with a moderate greenhouse-gas emissions scenario compare the indices over 2091-2100 versus 1991-2000. The credibility of model-simulated cold extremes is evaluated through both bias scores relative to reanalysis data in the past and multi-model agreement in the future. The number of times the value of each annual index in 2091-2100 exceeds the decadal average of the corresponding index in 1991-2000 is counted. The results indicate that intensity and duration of grid-based cold extremes, when viewed as a global total, will often be as severe as current typical conditions in many regions, but the corresponding frequency does not show this persistence. While the models agree on the projected persistence of cold extremes in terms of global counts, regionally, inter-model variability and disparity in model performance tends to dominate. Our findings suggest that, despite a general warming trend, regional preparedness for extreme cold events cannot be compromised even towards the end of the century.”
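The three grid-based indices named in the abstract are straightforward to sketch for a single grid cell and one year of daily data. The 0 C frost threshold and the function names are my assumptions; the paper's exact definitions may differ.

```python
def coldest_3day_tmax(tmax):
    """Coldest consecutive three-day mean of daily maximum temperature."""
    return min(sum(tmax[i:i + 3]) / 3.0 for i in range(len(tmax) - 2))

def max_consecutive_frost_days(tmin, threshold=0.0):
    """Length of the longest run of days with minimum temperature below threshold."""
    best = run = 0
    for t in tmin:
        run = run + 1 if t < threshold else 0
        best = max(best, run)
    return best

def total_frost_days(tmin, threshold=0.0):
    """Total number of days with minimum temperature below threshold."""
    return sum(1 for t in tmin if t < threshold)
```

Applied to each model grid cell and year, these give the intensity, duration, and frequency measures whose 2091-2100 values the paper compares against the 1991-2000 decadal averages.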

An excerpt reads [boldface added].

“We find evidence from nine climate models that intensity and duration of cold extremes may occasionally, or in some cases quite often, persist at end-of-20th-century levels late into the 21st century in many regions. This is expected despite unanimous projections of relatively significant mean warming trends.”

The use of the term “evidence” with respect to climate models illustrates that this study incorrectly assumes that models can be used to test how the real world behaves.

Moreover, they write “[t]he credibility of model-simulated cold extremes is evaluated through both bias scores relative to reanalysis data in the past and multi-model agreement in the future.” The testing against reanalysis data for the period 1991-2000 is robust science. However, bias scores using “multi-model agreement in the future” is a fundamentally incorrect approach.

Models are hypotheses and need to be tested against real data. However, the climate models have not shown skill at predicting how the statistics of cold waves will change in response to human climate forcings during the 21st century. Indeed, there is no way to perform this test until those decades occur.


Filed under Climate Change Metrics

Repost Of Weblog Climatequotes.com “Climate Scientists Answer Question: Should Climate Sensitivity Be Measured By Global Average Surface Temperature Anomaly?”

There is an excellent collection of interviews posted by Sam Patterson on April 23, 2011 on the weblog Climatequotes.com titled

Climate Scientists Answer Question: Should climate sensitivity be measured by global average surface temperature anomaly?

I have reposted his very informative set of interviews and commentary below.

_________________________________________________

Note: I wrote this post many weeks ago and never posted it because I was waiting for some more feedback. However, Pielke Sr. has posted specifically on this issue recently and Watts ran it also, so I feel now is a good time to post it.

This post deals with the question of whether or not climate sensitivity should be measured by global average surface temperature anomaly. I asked multiple climate scientists their opinion, and their responses are below. First, some background.

Over at The Blackboard there is an interesting guest post by Zeke. He attempts to find areas where agreement can take place by laying out his beliefs and putting a certain confidence level on them. This idea was commented upon by several blogs and scientists. Judith Curry, Anthony Watts, Jeff Id, and Pielke Sr. all contributed. I want to focus on Pielke’s response, because he challenges a core assumption of the exercise.

In Zeke’s post, he gives his position on climate sensitivity:

Climate sensitivity is somewhere between 1.5 C and 4.5 C for a doubling of carbon dioxide, due to feedbacks (primarily water vapor) in the climate system…

Here is Pielke’s response to this claim:

The use of the terminology “climate sensitivity” indicates an importance of the climate system to this temperature range that does not exist. The range of temperatures of “1.5 C and 4.5 C for a doubling of carbon dioxide” refers to a global annual average surface temperature anomaly that is not even directly measurable, and its interpretation is even unclear…

Pielke goes on to explain that he has dealt with this issue previously in the paper entitled “Unresolved issues with the assessment of multi-decadal global land surface temperature trends.” Here is the main thrust of his response:

This view of a surface temperature anomaly expressed by “climate sensitivity” is grossly misleading the public and policymakers as to what are the actual climate metrics that matter to society and the environment. A global annual average surface temperature anomaly is almost irrelevant for any climatic feature of importance.

So we know Pielke’s position. He is adamantly opposed to using the surface temperature anomaly when discussing climate sensitivity, for various reasons, not the least of which is that it ignores metrics which actually matter to people.

I haven’t heard this view expressed very often, so I decided to contact other climate scientists and find out their opinions on this issue. I asked the following questions and invited them to give their general impressions:

1. Do you believe that global annual average surface temperature anomaly is the best available metric to discuss climate sensitivity?

If yes to Question 1, then:

2. Could you briefly explain why you consider global annual average surface temperature anomaly the best available metric to discuss climate sensitivity?

If no to question 1, then:

2. What do you believe is the proper metric to discuss climate sensitivity, and could you briefly explain why?

John Christy

1. Do you believe that global annual average surface temperature anomaly is the best available metric to discuss climate sensitivity?

No. The surface temperature, especially the nighttime minimum, is affected by numerous factors unrelated to the global atmospheric sensitivity to enhanced greenhouse forcing (I have several papers on this.) The ultimate metric is the number of joules of energy in the system (are they increasing? at what rate?). The ocean is the main source for this repository of energy. A second source, better than the surface, but not as good as the ocean, is the bulk atmospheric temperature (as Roy Spencer uses for climate sensitivity and feedback studies.) The bulk atmosphere represents a lot of mass, and so tells us more about the number of joules that are accumulating.

Patrick Michaels

I think it is a reasonable metric in that it integrates the response of temperature where it is important–i.e. where most things on earth live. However, it needs to be measured in concert with ocean measurements at depth and with both tropospheric and stratospheric temperatures. For example, if there were no stratospheric decline in temperature, then lower tropospheric or surface rises would be hard to attribute to ghg changes. Because we don’t have any stratospheric proxy (that I know of) for the early 20th century, when surface temperature rose about as much as they rose in the late 20th, we really don’t know the ghg component of that (though I suspect it was little to none).

Having said that, I suspect that where we do have such data, it is indicative that the sensitivity is lower than generally assumed, but not as low as has been hypothesized by some.

Gavin Schmidt

Your questions are unfortunately rather ill-posed. This is probably not your fault, but it is indicative of the confusion on these points that exist.

“Climate sensitivity” is *defined* as being the equilibrium response of the global mean surface temperature to a change in radiative forcing while holding a number of things constant (aerosols, ice sheets, vegetation, ozone) (c.f. Charney 1979, Hansen et al, 1984 and thousands of publications since). There is no ambiguity here, no choice of metrics to examine, and no room for any element of belief or non-belief. It is a definition. There are of course different estimates of the surface temperature anomaly, but that isn’t relevant for your question.

There are of course many different metrics that might be sensitive to radiative forcings that one might be interested in: Rainfall patterns, sea ice extent, ocean heat content, winds, cloudiness, ice sheets, ecosystems, tropospheric temperature etc. Since they are part of the climate, they will be sensitive to climate change to some extent. But the specific terminology of “climate sensitivity” or the slightly expanded concept of “Earth System Sensitivity” (i.e Lunt et al, 2010) (that includes the impact on the surface temperature of the variations in the elements held constant in the Charney definition), are very specific and tied directly to surface temperature.

People can certainly hold opinions about which, if any, of these metrics are of interest to them or are important in some way, and I wouldn’t want to prevent anyone from making their views known on this. But people don’t get to redefine commonly-understood and widely-used terms on that basis.
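As a numerical aside on the conventional definition Schmidt describes: combined with the simplified CO2 forcing fit of Myhre et al. (1998), a stated sensitivity per doubling converts directly into an equilibrium warming for any concentration. The 3.0 C default below is only the midpoint of the 1.5-4.5 C range quoted earlier in this post, not an endorsed value, and the function names are mine.

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Radiative forcing in W/m^2 for CO2 at c_ppm, relative to c0_ppm (Myhre et al. 1998 fit)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def equilibrium_warming(c_ppm, sensitivity_per_doubling=3.0, c0_ppm=280.0):
    """Equilibrium global-mean warming in C implied by the assumed sensitivity per doubling."""
    return (sensitivity_per_doubling
            * co2_forcing(c_ppm, c0_ppm) / co2_forcing(2.0 * c0_ppm, c0_ppm))
```

A doubling to 560 ppm returns the assumed 3.0 C by construction; roughly 390 ppm, the concentration around 2011, implies on the order of 1.4 C at equilibrium under the same assumption.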

I sent a response to Gavin clarifying my questions, and including Pielke Sr.’s comments. Here is his response to Pielke’s comments:

I disagree. Prof. Pielke might not find the global temperature anomaly interesting, but lots of other people do, and as an indicator for other impacts, it is actually pretty good. Large-scale changes in rainfall patterns, sea ice amount, etc. all scale more or less with SAT. (They can vary independently of course, and so ‘one number’ does not provide a comprehensive description of what’s happening).

Kevin Trenberth

1. Do you believe that global annual average surface temperature anomaly is the best available metric to discuss climate sensitivity?

This is not a well posed question. This relates to definition: the sensitivity is defined that way. It is not the best metric for climate change necessarily.

If yes to Question 1, then:

2. Could you briefly explain why you consider global annual average surface temperature anomaly the best available metric to discuss climate sensitivity?

I think the best metric overall is probably global sea level as it cuts down on weather and related noise. But global mean temperature can be carried back in time more reliably and it is reasonably good as long as decadal values are used.

If no to question 1, then:

2. What do you believe is the proper metric to discuss climate sensitivity, and could you briefly explain why?

However, it is all variables collectively that make a sound case.

Pielke Sr.

We have already discussed Pielke’s position, but I contacted him to find out what metrics he would prefer to use. Here is his response:

1. Do you believe that global annual average surface temperature anomaly is the best available metric to discuss climate sensitivity?

NO

If yes to Question 1, then:

2. Could you briefly explain why you consider global annual average surface temperature anomaly the best available metric to discuss climate sensitivity?

If no to question 1, then:

2. What do you believe is the proper metric to discuss climate sensitivity, and could you briefly explain why?

The term “climate sensitivity” is not an accurate term to define how the climate system responds to forcing, when it is used to state a response in just the global average surface temperature. This is more than a semantic issue, as the global average surface temperature trend has been the primary metric used to communicate climate effects of human activities to policymakers. The shortcoming of this metric (the global average surface temperature trend) was discussed in depth in

National Research Council, 2005: Radiative forcing of climate change: Expanding the concept and addressing uncertainties. Committee on Radiative Forcing Effects on Climate Change, Climate Research Committee, Board on Atmospheric Sciences and Climate, Division on Earth and Life Studies, The National Academies Press, Washington, D.C., 208 pp. http://www.nap.edu/openbook/0309095069/html/

but has been mostly ignored in assessments such as the 2007 IPCC WG1 report.

A more appropriate metric to assess the sensitivity of the climate system heat content to forcing is the response in Joules of the oceans, particularly where most of the heat changes occur. I discuss this metric in

Pielke Sr., R.A., 2008: A broader view of the role of humans in the climate system. Physics Today, 61, Vol. 11, 54-55.
http://pielkeclimatesci.files.wordpress.com/2009/10/r-334.pdf

Pielke Sr., R.A., 2003: Heat storage within the Earth system. Bull. Amer. Meteor. Soc., 84, 331-335. http://pielkeclimatesci.files.wordpress.com/2009/10/r-247.pdf

More generally, in terms of true climate sensitivity, more metrics are needed as we discussed in the 2005 NRC report. The Executive summary includes the text [http://www.nap.edu/openbook.php?record_id=11175&page=4]

“Despite all these advantages, the traditional global mean TOA radiative forcing concept has some important limitations, which have come increasingly to light over the past decade. The concept is inadequate for some forcing agents, such as absorbing aerosols and land-use changes, that may have regional climate impacts much greater than would be predicted from TOA radiative forcing. Also, it diagnoses only one measure of climate change (global mean surface temperature response) while offering little information on regional climate change or precipitation. These limitations can be addressed by expanding the radiative forcing concept and through the introduction of additional forcing metrics. In particular, the concept needs to be extended to account for (1) the vertical structure of radiative forcing, (2) regional variability in radiative forcing, and (3) nonradiative forcing. A new metric to account for the vertical structure of radiative forcing is recommended below. Understanding of regional and nonradiative forcings is too premature to recommend specific metrics at this time. Instead, the committee identifies specific research needs to improve quantification and understanding of these forcings.”

It is, therefore, time to move beyond the use of the global annual average surface temperature trend as the metric to define “climate sensitivity”.

Differing views

There are clearly differing views on this subject.

John Christy does not support the metric. He points out that the surface temperature is affected by numerous things other than greenhouse forcing, and then gives two metrics which he prefers. The first is the change in joules in the system, with particular emphasis on the oceans. The second is bulk atmospheric temperature.

Patrick Michaels supports using the metric. He points out that the metric is important because it addresses the area where people live. However, he emphasizes that the surface temperature must be taken in concert with measurements such as ocean temperature at depth, and tropospheric and stratospheric temperatures. Without these other measurements, it would be difficult to assess the impact of GHGs on surface temperature.

Gavin Schmidt supports the metric unreservedly. He and Trenberth rightly point out that climate sensitivity is defined by global average surface temperature anomaly. Of course, the point of my question is challenging whether or not this is the best definition. Gavin seems to think so, and points out that the metric is “commonly-understood and widely-used”. He states that other metrics such as rainfall patterns and sea ice amount track very well with surface air temperature.

Trenberth is very brief, but states that global average surface temperature anomaly is not necessarily the best metric to use for climate change. He considers that global sea level is a better metric because it cuts down on weather related noise. However, he also points out that global average surface temperature anomaly is useful because it can be applied to the past more reliably. He also states that all variables taken together make a sound case.

Pielke Sr. is adamantly opposed to using this metric. We’ve already discussed his reasons. He also proposes a different metric for assessing climate sensitivity, “A more appropriate metric to assess the sensitivity of the climate system heat content to forcing is the response in Joules of the oceans”. He supports these claims with several of his own papers as well as a NRC report.

Conclusion

Pielke and Christy want to stop assessing climate sensitivity by using global average surface temperature anomaly, and both recommend using a change of joules (particularly in the ocean) as a better metric.

Michaels and Trenberth support the metric while emphasizing that other metrics must also be taken into account. Schmidt does not mention any drawbacks and emphasizes that the metric is already widely used and it works well with other metrics.

It seems to me the main problem here isn’t the metric itself, but the emphasis placed on it. I don’t believe that Pielke or Christy believe the metric has no value at all, only that it is a poor choice to use as the main metric when discussing CO2’s impact on climate. In Pielke’s case, the emphasis on CO2 itself is a problem, as he believes that other human impacts are far more important.

Climate science so frequently focuses on CO2 and temperature that it seems natural climate sensitivity would be measured by global average surface temperature anomaly. A shift away from this metric seems unlikely. However, if it can be shown in the future that a change in joules in the ocean directly contradicts other metrics then I’m sure this discussion will come up again. Pielke’s paper mentions an apparent contradiction found by Joshua Willis of JPL, although the measurements are only taken over a four year period. Only time will tell which metric is most valuable.


Filed under Climate Change Metrics, Climate Science Reporting

“Water Vapor Feedback Still Uncertain” By Marcel Crok

Marcel Crok is an outstanding climate science writer and reporter (see also his book http://www.staatvanhetklimaat.nl/2011/04/13/de-staat-van-het-klimaat-nu-ook-als-ebook/ which is expected to soon be translated into English). This past week on his weblog he has published a very important post with respect to the water vapor feedback aspect of climate. His post is

Water vapor feedback still uncertain

I have reposted below critical new information on the water vapor trends which Marcel obtained from Tom Vonder Haar [who is a close colleague of mine at Colorado State University] as reported in Water vapor feedback still uncertain:

“We have most definitely never said the preliminary NVAP data show a negative trend and anyone who does is making a false scientific statement. All we can say at present is that the preliminary NVAP data, according to the Null Hypothesis, cannot disprove a trend in global water vapor either positive or negative.

In addition, there are good reasons based upon both Sampling / Signal Processing Theory and observed natural fluctuations of water vapor ( ENSO’s, Monsoons, volcanic events, etc. ) to believe that there are no sufficient data sets on hand with a long enough period of record from any source to make a conclusive scientific statement about global water vapor trends.

I believe discussion and informed speculation is healthy for Earth System Science when properly reported.  As you know the most recent IPCC assessment went into considerable detail about Uncertainty and I support even more of this work.  It helps focus our scientific attention on key areas where improvements should and can be made. Water vapor variability and feedback is one such area and that is the reason for the Re-analyses and improvements to the NVAP data set.  We are planning first release and discussion of our new results of NVAP-M ( sponsored by the NASA MEaSUREs research program ) at the World Climate Research Programme Science Conference in Denver, CO in October, 2011.  That will begin a period of more than one year wherein we will intercompare the NVAP-M results with independent estimates by colleagues in both the US and international community.  The checks and balances provided by such a collaborative effort should then produce a credible statement about our rapidly increasing knowledge of variability and trends of water vapor.

Now although of course he is very careful – as most scientists are – these statements express far less certainty about the observational evidence for a positive water vapor feedback than the IPCC did in AR4. There they wrote in the Summary for Policy Makers:

“The average atmospheric water vapour content has increased since at least the 1980s over land and ocean as well as in the upper troposphere. The increase is broadly consistent with the extra water vapour that warmer air can hold.”

Now this of course sounds much more certain than the remark of Vonder Haar that ‘there are no sufficient data sets on hand with a long enough period of record from any source to make a conclusive scientific statement about global water vapor trends’. So I think we all look forward to hearing more about this important data set next October in Denver.”

Marcel also reported on new valuable analyses of multi-decadal precipitation trends by Demetris Koutsoyiannis. As Marcel wrote 

“Koutsoyiannis recently presented an analysis about trends in extreme precipitation at the EGU conference concluding that especially since 1970 there is no trend at all. Also at EGU he showed that models underestimate extreme rainfall for some stations around the Mediterranean up to a factor of ten…..Koutsoyiannis found no trend in floods worldwide either.”

I will add here to Marcel’s insightful comments and very effective journalism (and to Demetris’s cutting edge analysis).

The IPCC view of the water vapor feedback, as Marcel discussed in his weblog post, is that as the ocean surface warms, evaporation increases, adding water vapor to the atmosphere and thereby amplifying the radiative warming.

However, there are a number of other studies which conclude that the multi-decadal global climate models as reported by the IPCC are incorrectly simulating the water cycle which includes the amount of water vapor in the atmosphere. These include, for example,

Stephens, G. L., T. L’Ecuyer, R. Forbes, A. Gettelman, J.-C. Golaz, A. Bodas-Salcedo, K. Suzuki, P. Gabriel, and J. Haynes (2010), Dreary state of precipitation in global models, J. Geophys. Res., 115, D24211, doi:10.1029/2010JD014532

who concluded that

“….models produce precipitation approximately twice as often as that observed and make rainfall far too lightly. This finding reinforces similar findings from other studies based on surface accumulated rainfall measurements. The implications of this dreary state of model depiction of the real world are discussed.”

and

Sun, De-Zheng, Yongqiang Yu, Tao Zhang, 2009: Tropical Water Vapor and Cloud Feedbacks in Climate Models: A Further Assessment Using Coupled Simulations. J. Climate, 22, 1287–1304.

who wrote

“…….extended calculation using coupled runs confirms the earlier inference from the AMIP runs that underestimating the negative feedback from cloud albedo and overestimating the positive feedback from the greenhouse effect of water vapor over the tropical Pacific during ENSO is a prevalent problem of climate models.”

They did write in their paper that

“We …. suggest that the two common biases revealed in the simulated ENSO variability may not be carried over to the simulated global warming, though these biases highlight the continuing difficulty that models have to simulate accurately the feedbacks of water vapor and clouds on a time-scale we have observations”.

but in the following question that I asked De-Zheng Sun (see)

“[I]t is not clear how such a bias could be removed when the models are applied in longer term model projections. Indeed, what is the data which says that the biases are removed?”

he replied

“You are right that no data have shown that those biases will not be removed. We are just mentioning the possibility that there could be error cancellation as global warming may involve more processes than those in ENSO, and the errors may cancel in such a way that prediction of global warming by these models that have these errors may actually get the answer right.  It is just a possibility worth mentioning.”

Our analysis of observed data also indicates that on the regional scale at least, the water vapor amplification feedback is not occurring as claimed in the 2007 IPCC WG1 report.  We reported on this in our paper

Wang, J.-W., K. Wang, R.A. Pielke, J.C. Lin, and T. Matsui, 2008: Towards a robust test on North America warming trend and precipitable water content increase. Geophys. Res. Letts., 35, L18804, doi:10.1029/2008GL034564

where we found that

‘…. atmospheric temperature and water vapor trends do not follow the conjecture of constant relative humidity over North America. We found that for the domain we evaluated ……. temperatures significantly increased (0.248 ± 0.0742 K/decade) according to the 27-year monthly data, but the precipitable water vapor (0.00619 ± 0.0755 kg/m2/decade) and total precipitable water (0.0108 ± 0.0782 kg/m2/decade) did not.’
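As an aside on method: trends reported in this form (value ± uncertainty per decade) are typically obtained by least-squares regression on the monthly series, with the standard error of the slope giving the uncertainty. Here is a minimal sketch with synthetic data (my illustration, not the Wang et al. code or data):

```python
import numpy as np

def decadal_trend(values, dt_years=1.0 / 12.0):
    """Least-squares linear trend of a monthly series, expressed per
    decade, along with the 1-sigma standard error of the slope."""
    t = np.arange(len(values)) * dt_years          # time in years
    A = np.vstack([t, np.ones_like(t)]).T
    slope, intercept = np.linalg.lstsq(A, values, rcond=None)[0]
    resid = values - A @ np.array([slope, intercept])
    dof = len(values) - 2
    se = np.sqrt(resid @ resid / dof / np.sum((t - t.mean()) ** 2))
    return 10.0 * slope, 10.0 * se                 # K per decade

# synthetic 27-year monthly series with a true 0.25 K/decade trend
rng = np.random.default_rng(0)
months = 27 * 12
series = 0.025 * np.arange(months) / 12.0 + 0.1 * rng.standard_normal(months)
trend, err = decadal_trend(series)
```

A trend whose uncertainty interval spans zero, like the precipitable water trends quoted above, is not statistically distinguishable from no trend at all.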

Marcel has succinctly and clearly presented information which conflicts with the IPCC statements on our current understanding of the climate system. We need  more journalists to adopt the high investigatory standards that Marcel exemplifies.


Filed under Climate Change Forcings & Feedbacks, Climate Science Reporting

La Niña and Tornado Outbreaks In The USA

There have been several excellent and very informative posts by Joe Bastardi and Joe D’Aleo at WeatherBell on the current weather pattern and its conduciveness to severe thunderstorm outbreaks including tornado family outbreaks. These weblog posts include their latest

Quick Look at Severe Weather ( Carolinas may get hit late Wed into Thur again)

Heavy rains, supercell tornadoes and cold in the news

We have looked at the issue of the relationship of La Niña to family outbreaks of tornadoes in our report

Knowles, J.B., and R.A. Pielke Sr., 2005: The Southern Oscillation and its effect on tornadic activity in the United States. Atmospheric Science Paper No. 755, Colorado State University, Fort Collins, CO 80523, 15 pp. (Originally prepared in 1993, published as an Atmospheric Science Paper in March 2005).

Our abstract reads

The Southern Oscillation has been shown in previous research to cause changes in the weather patterns over the continental United States. These changes, caused by either the warm El Niño or cold La Niña, could potentially affect numbers, locations, and strengths of tornadoes in the United States.

Using a variation of the Southern Oscillation Index, the seven strongest El Niño and five strongest La Niña events during the period 1953-1989 were examined to see what effect, if any, that they would have on: 1) Total tornado numbers, 2) Violent tornado track length, 3) Violent tornado numbers, and 4) >40 tornado outbreaks.

Little difference was found in total tornado numbers between El Niño and La Niña events. However, significant differences were found in the number of violent tornadoes, and in large number tornado outbreaks. La Niña event years were found to have longer than average track lengths, more violent tornadoes, and a good probability of having an outbreak of 40 or more tornadoes. El Niño event years were found to have shorter than average track lengths, less violent tornadoes, and only a slim possibility of having an outbreak.

Possible reasons for the above conclusions include: 1) Warmer than normal temperatures in the western U.S./Canada along with cooler than normal temperatures in the southern U.S. during El Niño years; and 2) Colder than normal temperatures in the western U.S./Canada along with warmer than normal temperatures in the southern U.S. during La Niña years. This would act to weaken/strengthen the interactions between warm and cold air in the midwest U.S. during El Niño/La Niña event years and decrease/increase the numbers and lengths of violent tornadoes.

The current Spring 2011 certainly fits this pattern.


Filed under Climate Change Forcings & Feedbacks, Research Papers

Implications Of A New Paper “Impact Of Polar Ozone Depletion On Subtropical Precipitation” By Kang Et Al 2011

There is a new paper (h/t to Steve Milloy) that further documents that

1. Carbon Dioxide is but one of a diverse range of human climate forcings

and

2. Changes in atmospheric and ocean circulation patterns are of much more importance than a long-term trend in the global annual average surface temperature.

We reported on these two issues, for example, in

Pielke Sr., R., K. Beven, G. Brasseur, J. Calvert, M. Chahine, R. Dickerson, D. Entekhabi, E. Foufoula-Georgiou, H. Gupta, V. Gupta, W. Krajewski, E. Philip Krider, W. K.M. Lau, J. McDonnell,  W. Rossow,  J. Schaake, J. Smith, S. Sorooshian,  and E. Wood, 2009: Climate change: The need to consider human forcings besides greenhouse gases. Eos, Vol. 90, No. 45, 10 November 2009, 413. Copyright (2009) American Geophysical Union.

where we wrote

In addition to greenhouse gas emissions,  other first-order human climate forcings are important to understanding the future behavior of Earth’s climate. These forcings are spatially heterogeneous and include the effect of aerosols on clouds and associated precipitation [e.g., Rosenfeld et al., 2008], the influence of aerosol deposition (e.g., black carbon (soot) [Flanner et al. 2007] and reactive nitrogen [Galloway et al., 2004]), and the role of changes in land use/land cover [e.g., Takata et al., 2009]. Among their effects is their role in altering atmospheric and ocean circulation features away from what they would be in the natural climate system [NRC, 2005].

Ozone depletion in the stratosphere can be added to this list.

The new paper is

S. M. Kang, L. M. Polvani, J. C. Fyfe, and M. Sigmond, 2011: Impact of Polar Ozone Depletion on Subtropical Precipitation. Science Express, 21 April 2011

The abstract reads [highlight added]

“Over the past half-century, the ozone hole has caused a poleward shift of the extratropical westerly jet in the Southern Hemisphere. Here, we argue that these extratropical circulation changes, resulting from ozone depletion, have substantially contributed to subtropical precipitation changes. Specifically, we show that precipitation in the Southern subtropics in austral summer increases significantly when climate models are integrated with reduced polar ozone concentrations. Furthermore, the observed patterns of subtropical precipitation change, from 1979 to 2000, are very similar to those in our model integrations, where ozone depletion alone is prescribed. In both climate models and observations, the subtropical moistening is linked to a poleward shift of the extratropical westerly jet. Our results highlight the importance of polar regions on the subtropical hydrological cycle.”

An excerpt from the paper reads

“In a broader perspective, the impact of polar ozone depletion on tropical precipitation discussed here provides one more instance of how changes in high latitudes are able to affect the tropics. Other well-known examples are the effect of Arctic sea ice … and of the Atlantic thermohaline circulation … on the position of the Intertropical Convergence Zone (ITCZ). Hence the need to deepen our understanding of polar to tropical linkages in order to accurately predict tropical precipitation.”

In a news article on this paper by Richard Black of the BBC,

Ozone hole has dried Australia, scientists find,

it is written

“This study does illustrate the important point that different mechanisms of global change are contributing to the climate impacts we’re seeing around the world,” observed Professor Myles Allen of Oxford University, a leading UK climate modeller.

“It’s very important to unpack them all rather than assuming that any impact we see is down simply to greenhouse gas-mediated warming.”

Myles Allen has succinctly and accurately summarized the significance of this study in his quote.


Filed under Climate Change Forcings & Feedbacks, Research Papers

Set Of Interviews Of Climate Scientists By Hans Von Storch

Hans von Storch has interviewed a number of climate scientists (including me) in the Atmospheric Sciences Section of the AGU Newsletter as well as in Interviews of eminent scientists prepared by Hans von Storch.

The scientists Hans has interviewed so far are:

  • von Storch, H. and K. Fraedrich, 1996: Interview mit Prof. Hans Hinzpeter, Eigenverlag MPI für Meteorologie, Hamburg; in German
  • von Storch, H., J. Sündermann and L. Magaard, 2000: Interview with Klaus Wyrtki. GKSS Report 99/E/74; in English
  • von Storch, H., and K. Hasselmann, 2003: Interview mit Reimar Lüst. GKSS Report 2003/16, 39 pp; in German
  • von Storch, H., G. Kiladis and R. Madden, 2005: Interview with Harry van Loon, GKSS Report 2005/8; in English
  • von Storch H., and D. Olbers, 2007: Interview with Klaus Hasselmann, GKSS Report 2007/5; in English, 67 pp
  • von Storch, H., and K. Hasselmann, 2010: Seventy Years of Exploration in Oceanography. A prolonged weekend discussion with Walter Munk. Springer Publisher, 137pp, DOI 10.1007/978-3-642-12087-9 

This interview series will be continued, but is limited to people who deserve respect because of their remarkable scientific and personal integrity.

An interesting interview, done by William Aspray in 1986 with Philip Thompson on the history of numerical weather prediction (mainly in the 1940s and 1950s) is available from the Charles Babbage Institute at the University of Minnesota.

  1. July 2009 Heinz Wanner
  2. September 2009 René Laprise
  3. November 2009 Raino Heino
  4. January 2010 Christoph Kottmeier
  5. June 2010 Aristita Busuioc
  6. August 2010 Roger A. Pielke, sr.
  7. November 2010 Nanne Weber
  8. December 2010 Alan Robock.
  9. March 2011 Gabriele Hegerl.

There is also an interview of Martin Claussen, Guy Brasseur and Stefan Rahmstorf reported on Hans’s weblog post Interview mit Rahmstorf, Brasseur und Claußen in ZEO2 but it is in German [Google translator, however, can be used to read the text].

Im Gespräch: Stefan Rahmstorf, Guy Brasseur, Martin Claußen [from this url].


Filed under Climate Science Reporting

Guest Post “Atlantic Multidecadal Oscillation And Northern Hemisphere’s Climate Variability” By Marcia Glaze Wyatt, Sergey Kravtsov, And Anastasios A. Tsonis

A very important new paper has been accepted for publication in Climate Dynamics that adds further substance to the topic of spatio-temporal chaos that was discussed on Judy Curry’s weblog Climate Etc in the post by Tomas Milanovic titled

Spatio-temporal chaos

This new paper is a part of the Ph.D. dissertation of Marcia Wyatt at the University of Colorado.

The new paper is

Wyatt, Marcia Glaze, Sergey Kravtsov, and Anastasios A. Tsonis, 2011: Atlantic Multidecadal Oscillation and Northern Hemisphere’s climate variability. Climate Dynamics, DOI: 10.1007/s00382-011-1071-8.

The abstract reads

Proxy and instrumental records reflect a quasi-cyclic 50-to-80-year climate signal across the Northern Hemisphere, with particular presence in the North Atlantic. Modeling studies rationalize this variability in terms of intrinsic dynamics of the Atlantic Meridional Overturning Circulation influencing distribution of sea-surface-temperature anomalies in the Atlantic Ocean; hence the name Atlantic Multidecadal Oscillation (AMO). By analyzing a lagged covariance structure of a network of climate indices, this study details the AMO-signal propagation throughout the Northern Hemisphere via a sequence of atmospheric and lagged oceanic teleconnections, which the authors term the “stadium wave”. Initial changes in the North Atlantic temperature anomaly associated with AMO culminate in an oppositely signed hemispheric signal about 30 years later. Furthermore, shorter-term, interannual-to-interdecadal climate variability alters character according to polarity of the stadium-wave-induced prevailing hemispheric climate regime. Ongoing research suggests mutual interaction between shorter-term variability and the stadium wave, with indication of ensuing modifications of multidecadal variability within the Atlantic sector. Results presented here support the hypothesis that AMO plays a significant role in hemispheric and, by inference, global climate variability, with implications for climate-change attribution and prediction.

The authors of the paper have graciously written the guest post below, which discusses their research findings.

“Atlantic Multidecadal Oscillation And Northern Hemisphere’s Climate Variability” by Marcia Glaze Wyatt, Sergey Kravtsov, And Anastasios A. Tsonis

Climate is ultimately complex. Complexity begs for reductionism. With reductionism, a puzzle is studied by way of its pieces. While this approach illuminates the climate system’s components, climate’s full picture remains elusive. Understanding the pieces does not ensure understanding the collection of pieces. This conundrum motivates our study.

Our research strategy focuses on the collective behavior of a network of climate indices. Networks are everywhere – underpinning diverse systems from the world-wide-web to biological systems, social interactions, and commerce. Networks can transform vast expanses into “small worlds”; a few long-distance links make all the difference between isolated clusters of localized activity and a globally interconnected system with synchronized [1] collective behavior; communication of a signal is tied to the blueprint of connectivity. By viewing climate as a network, one sees the architecture of interaction – a striking simplicity that belies the complexity of its component detail.

Considering index networks rather than raw three-dimensional climate fields is a relatively novel approach, with advantages of increased dynamical interpretability, increased signal-to-noise ratio, and enhanced statistical significance, albeit at the expense of phenomenological completeness. Climate indices represent distinct subsets of dynamical processes. One could consider these indices – the nodes of our network – to be climate oscillators, each node, by itself, an intrinsic, self-sustaining system. When coupled with other self-sustaining oscillators of the network, the collective choreography of interlinked nodes generates a hemispherically spanning, propagating teleconnection signal – our “stadium wave” – an atmospheric and lagged oceanic teleconnection sequence that communicates an Atlantic-born climate signal of multidecadal warming and cooling (superimposed upon longer-time-scale temperature trends) across the Northern Hemisphere. Significantly, a warm North Atlantic generates a decadal-scale lagged cooling hemispheric response; a cool Atlantic generates a warming one.

The devil is always in the detail. What are the mechanisms linking one node to the next? And what is the statistical significance of low-frequency alignment of a collection of regional climate time series, considering we are working with only the 20th century instrumental record in this study – a matter of only one hundred years? The bulk of our paper is devoted to these matters.

Using the network approach, data – raw variables such as sea-surface-temperature (SST) and sea-level-pressure (SLP), etc. – are compressed into indices, or into a subspace of dynamically and geographically distinct indices [2]. Our selection of indices was guided by extensive literature review regarding proxy records, instrumental data, and climate-model studies. We first tested eight indices (AMO, AT, NAO, NINO3.4, NPO, PDO, ALPI, and NHT). They represent a variety of oceanic and atmospheric processes, each of which, upon preliminary examination, appeared to have a multidecadal component, albeit not simultaneously timed. To these indices, we applied Multichannel Singular Spectrum Analysis (M-SSA) – a method well suited to identify a propagating signal. A leading pair of modes, well separated from all others, was considered to be our climate signal. We later added seven complementary indices to generate a larger, fifteen-member network. Our M-SSA results remained unchanged with this expanded network.
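For readers curious what M-SSA involves mechanically, here is a minimal sketch (my illustration, not the authors’ code): each standardized index series is embedded with a lag window, and the eigendecomposition of the resulting lag-covariance matrix yields space-time modes; a shared oscillatory signal shows up as a leading pair of near-equal eigenvalues, well separated from the noise floor.

```python
import numpy as np

def mssa_modes(X, window):
    """Minimal multichannel SSA. X is (n_time, n_channels), already
    standardized. Returns eigenvalues and space-time EOFs of the
    lag-covariance matrix built with embedding window `window`."""
    n, c = X.shape
    rows = n - window + 1
    # trajectory matrix: each row holds `window` consecutive samples
    # of every channel
    traj = np.empty((rows, window * c))
    for i in range(rows):
        traj[i] = X[i:i + window].ravel()
    cov = traj.T @ traj / rows
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]          # sort descending
    return evals[order], evecs[:, order]

# two noisy channels sharing one propagating (lagged) oscillation
rng = np.random.default_rng(1)
t = np.arange(400)
sig = np.sin(2 * np.pi * t / 64.0)
X = np.column_stack([sig + 0.3 * rng.standard_normal(400),
                     np.roll(sig, 7) + 0.3 * rng.standard_normal(400)])
X = (X - X.mean(0)) / X.std(0)
evals, evecs = mssa_modes(X, window=80)
# the oscillation appears as a leading pair of near-equal eigenvalues
```

In this synthetic example the two channels carry the same 64-step oscillation offset by a lag, so the leading eigenvalue pair stands far above the remaining (noise) eigenvalues, mimicking the “leading pair of modes, well separated from all others” described above.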

Statistical-significance testing showed the leading M-SSA pair – our climate signal – to be unlikely due to random temporal alignment of uncorrelated red-noise indices, the chance for such being less than three percent. It is not uncommon for geophysical time series to possess a strong low-frequency component – red-noise. This is due to slowly varying factors within geophysical systems that build in inertia, conveying a “memory” that manifests as a spurious low-frequency oscillatory temporal signal. Such red noise can contaminate a possible “real” low-frequency signal. This caveat can be minimized if a coherent spatial structure, distinct from noise, characterizes a quasi-periodic signal. Our stadium-wave signal, present in a set of indices representing geographically diverse regions – i.e. a coherent spatial structure – minimizes the likelihood the signal will reflect contamination. Separation of signal from noise, therefore, is more robust.  
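The red-noise comparison described above is, in spirit, a Monte Carlo test against AR(1) surrogates. Here is a minimal sketch of that idea (my illustration; the paper’s actual procedure is more involved and operates on the full index network):

```python
import numpy as np

def ar1_surrogate(x, rng):
    """One AR(1) (red-noise) surrogate matching the series' variance
    and lag-1 autocorrelation."""
    x = x - x.mean()
    r1 = np.corrcoef(x[:-1], x[1:])[0, 1]
    noise_sd = x.std() * np.sqrt(1.0 - r1 ** 2)
    s = np.empty_like(x)
    s[0] = x.std() * rng.standard_normal()
    for i in range(1, len(x)):
        s[i] = r1 * s[i - 1] + noise_sd * rng.standard_normal()
    return s

def lowfreq_fraction(x):
    """Fraction of variance at periods longer than 20 time steps."""
    f = np.fft.rfftfreq(len(x))
    p = np.abs(np.fft.rfft(x - x.mean())) ** 2
    return p[(f > 0) & (f < 1.0 / 20.0)].sum() / p[1:].sum()

def mc_pvalue(x, n_surr=200, seed=2):
    """Monte Carlo p-value: how often do red-noise surrogates match or
    exceed the observed low-frequency variance fraction?"""
    rng = np.random.default_rng(seed)
    obs = lowfreq_fraction(x)
    hits = sum(lowfreq_fraction(ar1_surrogate(x, rng)) >= obs
               for _ in range(n_surr))
    return hits / n_surr

# a 64-step oscillation plus white noise: its concentrated low-frequency
# variance is hard for pure red noise to reproduce
rng = np.random.default_rng(3)
t = np.arange(400)
series = np.sin(2 * np.pi * t / 64.0) + 0.5 * rng.standard_normal(400)
p = mc_pvalue(series)
```

The point the authors make about spatial coherence is exactly what a single-series test like this lacks: a genuinely multichannel signal is far harder for independent red-noise channels to imitate by chance.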

Using the identified climate signal as our spatiotemporal filter, normalized reconstructed components (RCs) were generated for all indices, each reflecting a multidecadal signal that centers on ~64 years. The long time scale suggests involvement of ocean dynamics. A substantial fraction of variance dominates AMO, AT, PDO, and NHT. AT is an atmospheric index. Its strong variance in the low-frequency spectrum speaks to an atmospheric response to ocean-induced multidecadal variations. Numerous studies cited in our paper address this less-well-known phenomenon of decadal-scale and longer forcing of the atmosphere by the oceanic heat flux. This phenomenon, most pronounced in the boreal winter – the interval of our focus – is believed to play a strong role in the stadium-wave teleconnection sequence [3].

Statistical results address only co-variability among nodes within the climate-index network, not causality. Interpretation of our results relies on a diverse collection of observational and modeling studies. Our paper details these studies, which suggest mechanisms that include: i) the ocean forcing the atmosphere throughout the troposphere on decadal and longer timescales, ii) stability changes in the tropical thermocline, and iii) latitudinal shifts in the Intertropical Convergence Zones of the Atlantic and Pacific.  

In addition to evaluating multidecadal behavior – the stadium wave – in a climate network, we also considered interannual-to-interdecadal-scale variability. For this, we evaluated the collective behavior of higher-frequency variability of the residual signal in the fifteen indices, from which the multidecadal signal had been removed. This line of inquiry was motivated by related previous research of Tsonis et al. (2007) and Swanson and Tsonis (2009), whose work identified five intervals throughout the 20th century during which certain high-frequency indices synchronized. Three of these five intervals coincided with multidecadal hemispheric climate-regime shifts, which were characterized by a switch between distinct atmospheric and oceanic circulation patterns, a reversal of NHT trend, and by altered character of ENSO variability. Our results provide a more detailed picture of these “successful” (~1916, ~1940, and ~1976) and “unsuccessful” (~1923 and ~1957) synchronizations among the higher-frequency indices. While a conclusion is far from clear, it appears the “successful” synchronizations tend toward a more symmetrical contribution from both the Atlantic and Pacific sectors. PNA participates in all synchronizations. It is intriguing to note a shared rhythm among the following: successful synchronizations of high-frequency indices, shifts between periods of alternating character of interannual variability, and the stadium-wave’s multidecadal tempo. This similar pacing suggests possible stadium-wave influence on synchronizations of interannual-to-interdecadally-varying indices within the climate network. Future research is required to determine the exact significance of these episodes.

In closing, results presented in our paper suggest that AMO teleconnections, as captured by our stadium-wave, have implications for decadal-scale climate-signal attribution and prediction. Potential mechanisms underlying the stadium-wave and related interdecadal variability are topics of active and controversial research, reliant upon technological leaps in data retrieval and computer modeling to advance them toward consensus.

Index Profile of the Stadium Wave:

  • Atlantic Multidecadal Oscillation (AMO) – a monopolar pattern of sea-surface-temperature (SST) anomalies in the North Atlantic Ocean.
  • Atmospheric-Mass Transfer anomalies (AT) – characterizing direction of dominant wind patterns over the Eurasian continent.
  • North Atlantic Oscillation (NAO) – reflecting atmospheric-mass distribution between subpolar and subtropical latitudes over the North Atlantic basin.
  • NINO3.4 – a proxy for El Niño behavior in the tropical Pacific Ocean.
  • North Pacific Oscillation (NPO) – the Pacific analogue for the Atlantic’s NAO.
  • Pacific Decadal Oscillation (PDO) – an SST pattern in the North Pacific Ocean.
  • Aleutian Low Pressure Index (ALPI) – a measure of intensity of the Aleutian Low over the Pacific Ocean mid-latitudes.
  • Northern Hemisphere Temperature (NHT) – anomalies of temperature across the Northern Hemisphere.


The “Stadium Wave”:

-AMO → (7 years) → +AT → (2 years) → +NAO → (5 years) → +NINO3.4 → (3 years) → +NPO/PDO → (3 years) → +ALPI → (8 years) → +NHT → (4 years) → +AMO → (7 years) → -AT → (2 years) → -NAO → (5 years) → -NINO3.4 → (3 years) → -NPO/-PDO → (3 years) → -ALPI → (8 years) → -NHT → (4 years) → -AMO
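As a quick arithmetic check (mine, not the authors’), the lags quoted in the sequence sum to a half cycle whose doubling matches the ~64-year multidecadal signal identified by the M-SSA analysis:

```python
# Lags (in years) between successive indices in the stadium-wave
# sequence quoted above:
# -AMO → +AT → +NAO → +NINO3.4 → +NPO/PDO → +ALPI → +NHT → +AMO
lags = [7, 2, 5, 3, 3, 8, 4]
half_cycle = sum(lags)       # years from -AMO back to +AMO
full_cycle = 2 * half_cycle  # years for a complete -AMO → ... → -AMO loop
```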

References:

Swanson K, Tsonis AA (2009) Has the climate recently shifted? Geophys Res Lett 36. doi:10.1029/2008GL037022.

Tsonis AA, Swanson K, Kravtsov S (2007) A new dynamical mechanism for major climate shifts. Geophys Res Lett 34: L13705. doi:10.1029/2007GL030288


[1] Synchronization refers to the matching of rhythms among self-sustained oscillators, although the motions are not exactly simultaneous. If two systems have different intrinsic oscillation periods, when they couple, they adjust their frequencies in such a way that their cadences match, yet always with a slight phase shift (lag).

[2] The original eight indices are: AMO, AT, NAO, NINO3.4, NPO, PDO, ALPI, and NHT. Please refer to the index profile at the article’s end, which details the dynamical character of each index.

[3] Refer to the stadium-wave sequence and the associated lags between indices, given after the index profile.


Filed under Climate Change Forcings & Feedbacks, Research Papers

New Article Titled “Bias In The Peer Review Process: A Cautionary And Personal Account” By Ross McKitrick

There is an informative article by Ross McKitrick:

McKitrick, Ross R. (2011) “Bias in the Peer Review Process: A Cautionary and Personal Account” in Climate Coup, Patrick J. Michaels ed., Cato Inst. Washington DC. 

This article appears in the book

Michaels, Patrick J., 2011: Climate Coup: Global Warming’s Invasion of Our Government and Our Lives. Cato Institute. ISBN: 978-1-935308447

with this summary of its content:

“A first-rate team of experts offers compelling documentation on the pervasive influence global warming alarmism now has on almost every aspect of our society, from national defense, law, trade, and politics to health, education, and international development.”

With respect to Ross’s chapter, Pat Michaels writes

“The second chapter in this volume goes to the core of what we consider to be the canon of science, which is the peer-reviewed, refereed scientific literature. McKitrick’s and my trials and tribulations over journal publication are similar to those experienced by many other colleagues. Unfortunately, the Climategate e-mails revealed that indeed there has been systematic pressure on journal editors to reject manuscripts not toeing the line about disastrous climate change. Even more unfortunate, my experience and that of others are that the post-Climategate environment has made this situation worse, not better. It is now virtually impossible to publish anything against the alarmist grain. The piles of unpublished manuscripts sitting on active scientists’ desks are growing into gargantuan proportions…”

Pat is correct that the peer-review process, and also the funding of research, have become very politicized and biased.

Ross starts his article with the text [highlight added]

“Showing that the IPCC claim is also false took some mundane statistical work, but the results were clear. Once the numbers were crunched and the paper was written, I began sending it to science journals. Having published several against-the-flow papers in climatology journals, I did not expect a smooth ride, but the process eventually became surreal. In the end, the paper was accepted for publication, but not in a climatology journal. Fortunately for me, I am an economist, not a climatologist, and my career doesn’t depend on getting published in climatology journals. If I were a young climatologist, I would have learned that my career prospects would be much better if I never wrote papers that question the IPCC. The skewing of the literature (and careers) can only be bad for society, which depends on scientists and the scientific literature for trustworthy advice for wise policy decisions.”

His conclusion has the text

“Some people might be tempted to defend climatology by saying that normal scientific procedures have broken down due to the intense policy fights and political interference. But in my opinion that confuses cause and effect. The policy community has aggressively intervened in climate science because of all the breaches of normal scientific procedures. The public has lost confidence in the ability of the major institutions of climatology, including the IPCC and the leading journals, to deal impartially with the evidence. It doesn’t have to be this way. My own field of economics constantly deals with policy-relevant topics with major public consequences. Of course, differences of opinion exist and vigorous disputes play out among opposing camps. But what is happening in climate science is very different, or at least is on a much more intense scale. I know of no parallels in modern economics. It appears to be a profession-wide decision that, due to the conjectured threat of global warming, the ethic of scientific objectivity has had an asterisk added to it: there is now the additional condition that objectivity cannot compromise the imperative of supporting one particular point of view.

This strategy is backfiring badly: rather than creating the appearance of genuine scientific progress, the situation appears more like a chokehold of indoctrination and intellectual corruption. I do not know what the solution is, since I have yet to see a case in which an institution or a segment of society, having once been contaminated or knocked off balance by the global warming issue, is subsequently able to right itself. But perhaps, as time progresses, climate science will find a way to do so. Now that would be progress.”

Both Pat and Ross are correct that a prejudice exists in the climate science community with respect to publication and in funding. My experiences have been similar to theirs.

I have posted on this subject a number of times. Several examples are:

My Comments For The InterAcademy Council Review of the IPCC

Is The NSF Funding Process Working Correctly?

Invited Letter Now Rejected By Nature Magazine

Comments On The Peer-Review Journal Publication Process And Recommendations For Improvement

It is important that policymakers become aware of the inappropriate control over the peer-review process and over the funding of research by the NSF and other agencies. I have summarized this for policymakers most recently in my testimony

Pielke Sr., R.A. 2011: Climate Science and EPA’s Greenhouse Gas Regulation.  Testimony to the House Subcommittee on Energy and Power

where I wrote, with respect to the CCSP assessment process [which is one of the sources of information for the 2007 IPCC report],

“The process for completing the CCSP Report excluded valid scientific perspectives under the charge of the Committee. The Editor of the Report [Tom Karl] systematically excluded a range of views on the issue of understanding and reconciling lower atmospheric temperature trends.

The Executive Summary of the CCSP Report ignores critical scientific issues and makes unbalanced conclusions concerning our current understanding of temperature trends.”

Ross’s article and Pat’s experiences further document that the exclusion of research papers from a number of major journals, and the biased allocation of research funding by the NSF and other agencies, are a systematic and serious problem that has compromised objective scientific inquiry into climate science.


Filed under Climate Science Reporting, Research Papers