Monthly Archives: July 2008

Comments On The Draft CCSP Report “Global Climate Change Impacts in the United States”

The draft report Global Climate Change Impacts in the United States has been released, along with an announcement of the Public Review Draft of the Unified Synthesis Product Global Climate Change in the United States. Public comments are due by August 14, 2008 [Climate Science readers are urged to submit comments].

This US Climate Change Science Program (CCSP) report is co-chaired by Thomas R. Karl, Jerry Melillo, and Thomas C. Peterson, with Susan J. Hassol as Senior Editor and Synthesis Team Coordinator. These are the same individuals who have led past CCSP reports (e.g. see and see), with Tom Karl and Tom Peterson deliberately excluding scientific perspectives that differ from their viewpoints (i.e. see). Susan Hassol was the writer of the HBO special “Too Hot Not to Handle”. This HBO show clearly had a specific perspective on the climate change issue and lacked a balanced perspective. The HBO Executive Producer was Ms. Laurie David.

A real conflict of interest is obvious.

As a result, this report continues the biased narrow perspective of the earlier CCSP reports, as has been reported on a number of times on Climate Science and in other communications (e.g. see and see). As just one example of the bias, the Karl et al report starts with the text

The Future is in Our Hands

“Human-induced climate change is affecting us now. Its impacts on our economy, security, and quality of life will increase in the decades to come. Beyond the next few decades, when warming is “locked in” to the climate system from human activities to date, the future lies largely in our hands. Will we begin reducing heat trapping emissions now, thereby reducing future climate disruption and its impacts? Will we alter our planning and development in ways that reduce our vulnerability to the changes that are already on the way? The choices are ours.”

This statement perpetuates the rejected perspective on the role of humans in the climate system that

the human influence [on the climate system] is dominated by the emissions into the atmosphere of greenhouse gases, particularly carbon dioxide.

The perspective that is, however, supported by a wide body of scientific evidence (e.g. see) is that

natural variations are more important than recognized in the Karl et al CCSP synthesis report and that the human influence involves a diverse range of first-order climate forcings, including, but not limited to the human input of CO2.

The remainder of the Karl et al CCSP report necessarily miscommunicates climate information since it is built on their incorrect focus on “reducing heat trapping emissions”, rather than also on the role of natural variations as observed in the past, and on the other first-order climate forcings such as the role of aerosols in precipitation, nitrogen deposition, and land use/land cover change (e.g. see).

For example, their claim that

“Historical climate and weather patterns are no longer an adequate guide to the future”

is not supported by the observational evidence (e.g. see where an example is presented of past data that we should use to plan for the future).

Thus the conclusion is that the US CCSP Program has failed in its mission. These reports have become stale and inbred, since the same people keep repeating their perspective on the climate issue.

The CCSP program, initiated within the Bush Administration, offered the opportunity to provide an independent assessment of the role of humans and natural variability in the climate system, as well as a comprehensive framework for reducing societal and environmental vulnerability to risk from climate variations and change through adaptation and mitigation. The CCSP process, however, has not succeeded in this goal.

As recommended in the Climate Science weblog [see] we need new scientists who are not encumbered by their prior advocacy positions on climate change to lead the preparation of balanced climate assessment reports.

The response of the media when this report is released in its final form will also be enlightening. Those reporters who parrot the synthesis without questioning its obvious bias and conflict of interest should be identified as sycophants. Those who adequately communicate the diversity of scientifically supported disagreements with the report should be lauded as the true journalists they are.


Comments Off on Comments On The Draft CCSP Report “Global Climate Change Impacts in the United States”

Filed under Climate Science Reporting

On the Credibility of Climate Predictions by Koutsoyiannis et al. 2008

An outstanding and very important new paper has appeared that raises further issues with respect to the inability of the IPCC multi-decadal global models to predict future climate. The paper is

Koutsoyiannis, D., A. Efstratiadis, N. Mamassis, and A. Christofides, 2008: On the credibility of climate predictions, Hydrological Sciences Journal, 53 (4), 671-684.

with the abstract

“Geographically distributed predictions of future climate, obtained through climate models, are widely used in hydrology and many other disciplines, typically without assessing their reliability. Here we compare the output of various models to temperature and precipitation observations from eight stations with long (over 100 years) records from around the globe. The results show that models perform poorly, even at a climatic (30-year) scale. Thus local model projections cannot be credible, whereas a common argument that models can perform better at larger spatial scales is unsupported.”

Here is an extract from the conclusions that effectively summarizes the implications of their results

“At the annual and the climatic (30-year) scales, GCM interpolated series are irrelevant to reality. GCMs do not reproduce natural over-year fluctuations and, generally, underestimate the variance and the Hurst coefficient of the observed series. Even worse, when the GCM time series imply a Hurst coefficient greater than 0.5, this results from a monotonic trend, whereas in historical data the high values of the Hurst coefficient are a result of large-scale over-year fluctuations (i.e. successions of upward and downward ‘trends’). The huge negative values of coefficients of efficiency show that model predictions are much poorer than an elementary prediction based on the time average. This makes future climate projections at the examined locations not credible. Whether or not this conclusion extends to other locations requires expansion of the study, which we have planned. However, the poor GCM performance in all eight locations examined in this study allows little hope, if any. An argument that the poor performance applies merely to the point basis of our comparison, whereas aggregation at large spatial scales would show that GCM outputs are credible, is an unproved conjecture and, in our opinion, a false one.”
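The “coefficients of efficiency” in the extract above refer to the Nash–Sutcliffe measure, which compares a model’s squared errors against those of the trivial prediction that always uses the observed time average; a value below zero means the model does worse than that average. A minimal sketch of the measure, using made-up series rather than the paper’s station data:

```python
import numpy as np

def nash_sutcliffe(obs, pred):
    """Coefficient of efficiency: 1 = perfect prediction; 0 = no better
    than the observed mean; negative = worse than the observed mean."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(0)
obs = 15.0 + rng.normal(0, 0.5, 100)   # hypothetical 100-year temperature series
good = obs + rng.normal(0, 0.1, 100)   # prediction that tracks the observations
bad = 15.0 + np.linspace(-2, 2, 100)   # monotonic trend unrelated to the observations

print(nash_sutcliffe(obs, good))  # close to 1
print(nash_sutcliffe(obs, bad))   # negative: poorer than the time average
```

The second case mirrors the authors’ point: a series whose only structure is a monotonic trend scores worse than simply predicting the long-term mean.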

A fundamental and societally relevant conclusion from this study is that the use of the IPCC model predictions as a basis for policy making is invalid and seriously misleading.

Comments Off on On the Credibility of Climate Predictions by Koutsoyiannis et al. 2008

Filed under Climate Models

Documentation Of Continued Significant Land Use Change

Timo Hämeranta has again provided us with important new research papers, this time on the latest information on deforestation [Thanks Timo!].

He has alerted us to the following:

Sukumar, Raman, 2008. Forest Research for the 21st Century. Science Editorial Vol. 320, No 5882, p. 1395, June 13, 2008

where the abstract reads

“Last month the United Nations (UN) concluded a biodiversity conference in Bonn, Germany, where delegates from 191 countries negotiated “access to and sharing of the benefits of the rich genetic resources of the world.” Many of these resources reside in forests, which cover 4 billion hectares or 30% of Earth’s land. Forests are decreasing at a rate of 7 million hectares annually, mostly in the tropics. How can research encompassing the ecological, social, economic, and political dimensions of forest conservation contribute to reducing forest destruction and maintaining biodiversity, climatic stability, and the livelihoods of the poor, 40 to 50% of whose resources come from forests?”

and

Hansen, Matthew C., Stephen V. Stehman, Peter V. Potapov, Thomas R. Loveland, John R. G. Townshend, Ruth S. DeFries, Kyle W. Pittman, Belinda Arunarwati, Fred Stolle, Marc K. Steininger, Mark Carroll, and Charlene DiMiceli, 2008. Humid tropical forest clearing from 2000 to 2005 quantified by using multitemporal and multiresolution remotely sensed data. PNAS Vol. 105, No 27, pp. 9439-9444

with the abstract

“Forest cover is an important input variable for assessing changes to carbon stocks, climate and hydrological systems, biodiversity richness, and other sustainability science disciplines. Despite incremental improvements in our ability to quantify rates of forest clearing, there is still no definitive understanding on global trends. Without timely and accurate forest monitoring methods, policy responses will be uninformed concerning the most basic facts of forest cover change. Results of a feasible and cost-effective monitoring strategy are presented that enable timely, precise, and internally consistent estimates of forest clearing within the humid tropics. A probability-based sampling approach that synergistically employs low and high spatial resolution satellite datasets was used to quantify humid tropical forest clearing from 2000 to 2005. Forest clearing is estimated to be 1.39% (SE [Standard Error] 0.084%) of the total biome area. This translates to an estimated forest area cleared of 27.2 million hectares (SE 2.28 million hectares), and represents a 2.36% reduction in area of humid tropical forest. Fifty-five percent of total biome clearing occurs within only 6% of the biome area, emphasizing the presence of forest clearing “hotspots.” Forest loss in Brazil accounts for 47.8% of total biome clearing, nearly four times that of the next highest country, Indonesia, which accounts for 12.8%. Over three-fifths of clearing occurs in Latin America and over one-third in Asia. Africa contributes 5.4% to the estimated loss of humid tropical forest cover, reflecting the absence of current agro-industrial scale clearing in humid tropical Africa. “

Timo summarized from the paper that

“…Forest clearing is estimated to be 1.39% (SE 0.084%) of the total biome area. This translates to an estimated forest area cleared of 27.2 million hectares (SE 2.28 million hectares), and represents a 2.36% reduction in area of humid tropical forest….”

“Current global deforestation is 0.28% of forest area per year, and forested area is 30% of the Earth’s land area, and the land area is 29% of the Earth’s surface…”

and concluded, as has Climate Science, that these huge landscape conversions must have important consequences for the global climate system. The importance of such land use change in the coming decades was expertly evaluated in the paper

Feddema et al., 2005: The importance of land-cover change in simulating future climates. Science, 310, 1674-1678.
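As a quick consistency check, the percentages and areas quoted in the Hansen et al. abstract imply one another. The biome and forest areas below are derived from those quoted numbers, not values stated in the paper:

```python
# Figures quoted from Hansen et al. (2008) for humid tropical forest, 2000-2005:
cleared_mha = 27.2     # million hectares cleared over the five years
biome_frac = 0.0139    # clearing as a fraction of total biome area (1.39%)
forest_frac = 0.0236   # clearing as a fraction of humid tropical forest area (2.36%)

# Implied areas (derived here, not stated in the abstract):
biome_mha = cleared_mha / biome_frac    # roughly 1,960 million ha of total biome
forest_mha = cleared_mha / forest_frac  # roughly 1,150 million ha of humid tropical forest

# Implied average clearing rate per year over the 2000-2005 window:
annual_pct = 100 * forest_frac / 5      # roughly 0.47% of humid tropical forest per year

print(round(biome_mha), round(forest_mha), round(annual_pct, 2))
```

The implied annual rate of about half a percent of the humid tropical forest per year is of the same order as the global 0.28%-per-year figure Timo cites, consistent with clearing being concentrated in the tropics.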


Comments Off on Documentation Of Continued Significant Land Use Change

Filed under Climate Change Forcings & Feedbacks

When Will They Ever Learn? By Hendrik Tennekes

The current special issue of Climatic Change, the journal edited by Stanford University professor Stephen Schneider, is devoted to Learning and Climate Change. This appealed to me, so I started browsing. Let me quote a few highlights.

 

On p. 6, University of Maryland professor Andreas Lange writes:

 

“Considering limits in applying the expected utility framework to climate change problems, we consider a more recent framework with ambiguity aversion, which accounts for situations with imprecise or multiple probability distributions. We discuss both the impact of ambiguity  aversion on decisions and difficulties in applying such a non-expected utility framework to a dynamic context.”

 

On p. 67, MIT professor Mort Webster writes:

 

“We model endogenous learning by calculating posterior distributions of climate sensitivity from Bayesian updating, based on temperature changes that would be observed for a given true climate sensitivity and assumptions about errors, prior distributions, and the presence of additional uncertainties.”

 

On p. 139, University of California at Santa Barbara professor Charles Kolstad writes:

 

“Uncertainty with Complete Learning leads to higher expected membership but lower expected aggregate net benefits than No Learning, while Partial Learning almost certainly leads to lower membership and even lower expected aggregate net benefits.”

 

I am tempted to counter these exquisite examples of Bayesian scholarship with a few lines from my contributions to turbulence theory:

 

“The approximately log-normal probability distribution of the microstructure of turbulent flows has given rise to the concept of microstructural intermittency. The smallest scales of motion are active only in a small fraction of the space-time domain, requiring a revision of  the classic Kolmogorov theory of small-scale turbulence. The universal use of Fourier decompositions, however, is a complicating factor, because it causes spurious kinetic energy dispersion  in wave-number space.”

 

And what about an example of my hermetic jargon on the North Atlantic storm track?

 

“The meridional convergence of the zonal transient eddy momentum flux drives the momentum of the jet stream in such a way that the suggestion is created that a negative eddy viscosity prevails. This concept, however, seems to contradict the Second Law of Thermodynamics, unless one is willing to make a thorough study of the energetics of the General Circulation.”

 

Obviously, it is not all that hard to poke fun at convoluted academic writing styles. I can do it too if I have to. But my mood changed when I came to the last paper in this issue of Climatic Change. It is a paper on Negative Learning by Michael Oppenheimer of Princeton University, Brian O’Neill of IIASA in Laxenburg, Austria, and Mort Webster. Negative Learning? The authors give a definition on p. 158:

 

“Negative Learning is a decrease or sustained divergence in the correspondence between the true outcome and the uncertainty characterization or belief over time.”

 

Their first alarm is sounded one page earlier:

 

“Our study suggests that twenty years of experience with large international assessments has failed to solve, and in some respects even aggravated, the problem posed by negative learning for policy makers.”

 

I am capable of distinguishing between a bugle call on the battlefield and the incomprehensible jargon used by the Bayesian crowd, so I started reading this paper in earnest. On p. 156, I found:

 

“Overconfidence is one likely cause of negative learning, but it is by no means the only one. The use of expert elicitation to assess knowledge and uncertainty among limited groups of experts sometimes involves a  reflexive revision of judgments that is known to consolidate beliefs, revealing that some group interactions can lead to negative learning.”

 

Now, where did I hear such sounds before? Certainly not from James Hansen and Paul Crutzen, nor from Gavin Schmidt at RealClimate, nor from Susan Solomon, nor from any of the recent 2007 Nobel Peace Prize winners at the IPCC. The message that Oppenheimer, O’Neill, and Webster want to convey is phrased in terms even a retired engineer like me can understand (pp. 167-169):

 

“Given past experience, we recommend that the assessment process should be overhauled such that characterizing uncertainty becomes a co-equal partner with consensus building. Attention should be devoted to the implications of poorly understood or hypothetical but plausible processes and alternative model structures. Recommendations for improvements that would minimize the possibility of negative learning in the production and use of such assessments include avoiding uncertainty assessment based only on model intercomparison and explicit reporting of disagreements among assessment authors. In addition, research funding from mission-oriented agencies also bears the potential to reinforce existing assumptions. Funding could be explicitly allocated to research that explores alternative processes or assumptions not present in current models.”

 

“Almost no effort has been devoted to understanding why learning in the global change arena, whether in basic science or assessment, sometimes goes awry. It would be timely to perform critical reviews of particular assessment case histories, not just to compare predictions to outcomes, but to understand how specific judgments were made. Accordingly, IPCC working group discussions should become much more transparent, so that the basis for particular decisions might be understood by non-participants and participants alike. With more than three decades of experience in hand, the scientific community should apply the same strict standards of scholarship to examining how assessments perform and how they might be improved that it applies to its own research.”

 

These conclusions warm my heart. Is this paper in Climatic Change, certainly not a publication vessel for climate skeptics, perhaps a harbinger of things to come? Does the fact that worldwide temperatures stopped climbing ten years ago finally penetrate even the stubborn and prejudiced minds that have been instrumental in forging the Consensus Doctrine? Is the tide finally turning?

 

The message from Oppenheimer et al. to the IPCC crowd is evidently that IPCC should stop its addiction to Negative Learning. I want to add a message of my own. It is:

 

When will you ever learn?

 

In 1964, Bob Dylan wrote one of his most memorable lyrics:

 

“Come gather ‘round people

Wherever you roam

And admit that the waters

Around you have grown

And accept it that soon

You’ll be drenched to the bone.

If your time is to you

Worth savin’

Then you better start swimmin’

Or you’ll sink like a stone

For the times they are a-changin’.

 

The line it is drawn

The curse it is cast

The slow one now

Will later be fast

As the present now

Will later be past

The order is

Rapidly fadin’.

And the first one now

Will later be last

For the times they are a-changin’.”

 

I, for one, hope that the times will yet change within my lifetime.

Comments Off on When Will They Ever Learn? By Hendrik Tennekes

Filed under Guest Weblogs

Important New Research On The Role Of Aerosols On Precipitation By Professor Chidong Zhang

Professor Chidong Zhang of the University of Miami presented an important talk on June 19, 2008, at NOAA’s Earth System Research Laboratory entitled “Climatic Effect of Aerosol on Tropical Rainfall: Evidence from Satellite Observations.” The abstract reads

“Many efforts have been made to investigate whether and how aerosol may affect precipitation. These efforts have yielded inconsistent and therefore controversial results. Especially, in the absence of observed aerosol effects on precipitation on climate scales, the null hypothesis “There is no climatic effect of aerosol on precipitation” has not been rejected. This has led the IPCC 2007 report to state “the sign of the global change in precipitation due to aerosols is not yet known”. In this study, we use long-term satellite observations of aerosol and rainfall to document large-scale co-variability of the two variables over the tropical Atlantic and West Africa. When influences due to known climate phenomena (e.g., ENSO, NAO, TAV) and meteorological factors (e.g., water vapor) are ruled out and analysis domains are carefully designed, such co-variability, especially its spatial patterns, suggests possible aerosol effects on rainfall. Large reductions in rainfall are found over the western tropical Atlantic Ocean and over the Gulf of Guinea in months of anomalously high aerosol concentration. Such reductions are statistically different from random and overall interannual variability. We propose that these reductions signify the climatic effect of aerosol on precipitation distribution and variability. Statistical results based on long-term satellite data are confirmed by consistent results based on more recent and higher quality satellite observations.”

This very important talk is based on the research of Professor Zhang in the following two papers

Huang, J., C. Zhang, and J. M. Prospero, 2008: Large-Scale Effects of African Aerosol on Precipitation of the West African Monsoon. Quart. J. Royal Meteor. Soc., in press.

The abstract reads

“We used multiyear satellite observations to study aerosol effects on the large-scale variability in precipitation of the West African Monsoon (WAM). We found a significant precipitation reduction associated with high aerosol concentration near the Guinea coast from late boreal autumn to winter. The largest aerosol-induced reduction (~ 1.5 mm d-1) is about 50% of the mean in the region and is mainly in the rain rate range of 2-17 mm day-1 under maritime environment off the northern coast of the Gulf of Guinea. This reduction cannot be attributed to known climate and weather factors such as El Niño-Southern Oscillation, North Atlantic Oscillation, Atlantic sea surface temperature, and water vapour. The fractional precipitation variance related to aerosol is about 13%, a value comparable to those attributed to the known climate factors. Aerosol responsible for the observed precipitation reduction can be traced back to various African sources where aerosol emissions have varied considerably over the past several decades, in part attributable to human activities.”

and

Huang, J., C. Zhang, and J. M. Prospero, 2008: Aerosol-Induced Large-Scale Variability in Precipitation over the Tropical Atlantic. J. Climate, submitted.

The abstract reads

“We used multiyear satellite observations to document a relationship between the large-scale variability in precipitation over the tropical Atlantic and aerosol traced to African sources. During boreal winter and spring, there is a significant reduction in precipitation south of the Atlantic marine intertropical convergence zone during months when aerosol concentrations are anomalously high over a large domain of the tropical Atlantic Ocean. This reduction cannot be attributed to known climate factors such as El Niño-Southern Oscillation, North Atlantic Oscillation, and zonal and meridional modes of tropical Atlantic sea surface temperature, or to meteorological factors such as water vapor. The fractional variance in precipitation related to aerosol is about 12% of the total interannual variance, which is of the same order of magnitude as that related to each of the known climate and weather factors. A backward trajectory analysis confirms the African origin of aerosols that directly affect the changes in precipitation. The reduction in mean precipitation mainly comes from decreases in moderate rain rates (10 – 20 mm/day), while light rain (<10 mm/day) can actually be enhanced by aerosol. Our results suggest aerosols have a clearly identifiable effect on climate variability in precipitation in the Pan-Atlantic region.”
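The “fractional variance related to aerosol” (~12-13%) that both abstracts cite is, in essence, the squared correlation between the aerosol series and precipitation once other factors are controlled for. An illustrative computation with synthetic series constructed so the population value is near 0.12; nothing here comes from the papers’ data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300

aerosol = rng.standard_normal(n)   # synthetic monthly aerosol anomaly
other = rng.standard_normal(n)     # stand-in for ENSO/NAO-like variability
noise = rng.standard_normal(n)

# Rain anomaly: total variance = 0.4**2 + 0.4**2 + 1 = 1.32, of which the
# aerosol term contributes 0.16 / 1.32 ~ 12%, comparable to the papers' estimates.
rain = -0.4 * aerosol + 0.4 * other + noise

r = np.corrcoef(aerosol, rain)[0, 1]   # negative: more aerosol, less rain here
frac_var = r ** 2                      # fraction of rain variance related to aerosol
print(round(frac_var, 3))
```

A fraction of this size is modest in absolute terms, which is why the papers stress that it is of the same order as the variance attributable to each of the known climate factors.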

The role of aerosols on precipitation (from industrial activity but also fires and blowing dust) was reported in my recent House subcommittee testimony (see) on overlooked and/or under reported issues in the IPCC and CCSP reports. Professor Zhang’s research provides further documentation of this critically important topic with respect to the human role on the climate system.

Comments Off on Important New Research On The Role Of Aerosols On Precipitation By Professor Chidong Zhang

Filed under Climate Change Forcings & Feedbacks

The Value Of Paleoclimate Records In Assessing Vulnerability to Drought: A New Paper Meko et al 2008

There is a seminal new study of drought in the western United States that extends the period of assessment back to A.D. 762 [and thanks to Connie Woodhouse for providing me a copy of their paper]. The paper is

Meko, D., C. A. Woodhouse, C. A. Baisan, T. Knight, J. J. Lukas, M. K. Hughes, and M. W. Salzer (2007), Medieval drought in the upper Colorado River Basin, Geophys. Res. Lett., 34, L10705, doi:10.1029/2007GL029988.

The abstract reads

“New tree-ring records of ring-width from remnant preserved wood are analyzed to extend the record of reconstructed annual flows of the Colorado River at Lee Ferry into the Medieval Climate Anomaly, when epic droughts are hypothesized from other paleoclimatic evidence to have affected various parts of western North America. The most extreme low-frequency feature of the new reconstruction, covering A.D. 762-2005, is a hydrologic drought in the mid-1100s. The drought is characterized by a decrease of more than 15% in mean annual flow averaged over 25 years, and by the absence of high annual flows over a longer period of about six decades. The drought is consistent in timing with dry conditions inferred from tree-ring data in the Great Basin and Colorado Plateau, but regional differences in intensity emphasize the importance of basin-specific paleoclimatic data in quantifying likely effects of drought on water supply.”

Figure 2 from that paper must be considered one of the most important illustrations of the natural variability of the climate system, as well as of the observation that the climate is never static.

Figure caption from Meko et al. (2007): Time series plot of the 25-year running mean of reconstructed flows of the Colorado River at Lee Ferry. Flows are plotted as a percentage of the 1906–2004 mean of observed natural flows (18.53 billion cubic meters, or 15.03 million acre-ft). Confidence interval derived from the 0.10 and 0.90 probability points of an ensemble of 1000 noise-added reconstructions. The horizontal dashed line is the lowest 25-year running mean of observed flows (1953–1977). [The Colorado River, of course, is a major source of water for much of the southwest United States.]

This figure tells us that

  • longer and more severe periods of drought occurred in the pre-instrumental period than in the modern record

and that

  • the climate of this region is never static but varies significantly over time.
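For readers who want to reproduce the kind of smoothing behind Figure 2: the plotted curve is a 25-year running mean of annual reconstructed flows, expressed as a percentage of the 1906–2004 observed mean (18.53 billion cubic meters). A sketch with synthetic annual flows standing in for the tree-ring reconstruction:

```python
import numpy as np

def running_mean(x, window=25):
    """Running mean over `window` consecutive years; 'valid' mode keeps
    only windows that lie fully inside the series."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

OBS_MEAN_BCM = 18.53                       # 1906-2004 mean of observed natural flows

rng = np.random.default_rng(1)
years = np.arange(762, 2006)               # A.D. 762-2005, the reconstruction's span
# Synthetic annual flows fluctuating ~15% around the observed mean:
flows = OBS_MEAN_BCM * (1 + 0.15 * rng.standard_normal(years.size))

smooth = running_mean(flows, 25)
pct_of_mean = 100 * smooth / OBS_MEAN_BCM  # the quantity on Figure 2's y-axis

print(pct_of_mean.size)                    # 24 fewer points than years.size
```

A sustained dip of more than 15% in this smoothed series, lasting a quarter century, is what the paper identifies as the mid-1100s hydrologic drought.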

One obvious conclusion is that, regardless of how humans are altering the climate system, the natural variations are significantly larger than stated in the 2007 IPCC assessment. This conclusion adds significant new support for the paper

Rial, J., R.A. Pielke Sr., M. Beniston, M. Claussen, J. Canadell, P. Cox, H. Held, N. de Noblet-Ducoudre, R. Prinn, J. Reynolds, and J.D. Salas, 2004: Nonlinearities, feedbacks and critical thresholds within the Earth’s climate system. Climatic Change, 65, 11-38.

where we report that

“The Earth’s climate system is highly nonlinear: inputs and outputs are not proportional, change is often episodic and abrupt, rather than slow and gradual, and multiple equilibria are the norm”

and that

“It is imperative that the Earth’s climate system research community embraces this nonlinear paradigm if we are to move forward in the assessment of the human influence on climate.”

The significance of these observational findings is that the regional multi-decadal predictions based on the current generation of IPCC models should not be used by the impacts and policy making communities. The IPCC models fail to skillfully predict climate features such as drought, as exemplified in the figure from the Meko et al paper. A more appropriate procedure for the impacts and policy communities would be to use historical and paleoclimate data to explore the consequences to society and the environment in the coming decades, if these extreme climate conditions were to recur. In terms of how humans are altering the climate system, we do not know if we are making such extremes more or less likely (despite scientifically flawed claims to the contrary; e.g. see and see), but we do know that such extreme weather conditions occurred in the past. Therefore, prudent policy management would plan for their eventual recurrence.

Comments Off on The Value Of Paleoclimate Records In Assessing Vulnerability to Drought: A New Paper Meko et al 2008

Filed under Vulnerability Paradigm

Climate Assessment Oligarchy – The IPCC

An oligarchy is a

“form of government in which all power is vested in a few persons or in a dominant class or clique; government by the few.”

This definition certainly fits the IPCC, as illustrated by the closed meeting that Gerald Meehl, Jonathan Overpeck, Susan Solomon, Thomas Stocker, and Ron Stouffer are organizing in Hawaii in March 2009. This meeting is reported at

Joint IPCC-WCRP-IGBP Workshop: New Science Directions and Activities Relevant to the IPCC AR5 [Tuesday, March 03, 2009 – Friday, March 06, 2009, at the University of Hawaii International Pacific Research Center, Honolulu, Hawaii].

While the meeting is to be mostly self-funded [which means federal contracts and grants and other such sources will be used to pay for the trip], it raises the question of why such a remote location was chosen. Presumably the participants should be concerned about the emission of CO2 into the atmosphere from the jet aircraft that will transport them to Hawaii.

The Workshop is also open only to the IPCC Working Group 1 Lead Authors [LAs] and Contributing Lead Authors [CLAs] from all four assessments. While the goals of the Workshop are appropriate scientific topics, the closed character of the Workshop and its location perpetuate the exclusiveness of the IPCC process.

This small community of climate scientists is controlling the agenda with respect to the assessment of climate change. This is an oligarchy.

Climate Science urges that a new group of climate scientists be empowered to lead the next IPCC report. The scientists slated to attend the Hawaii workshop, while most are excellent researchers, form an inbred group with a conflict of interest: they have already presented their viewpoints on the role of humans in the climate system [at the expense of excluding peer-reviewed scientific viewpoints; e.g. see the Appendix in Pielke 2008].

The next IPCC assessment should involve only scientists who have not taken a strong position on the IPCC reports, but who have outstanding scientific credentials. Among the first questions they should address are the three hypotheses, only one of which can be true:

  • The human influence is minimal, and natural variations dominate climate variations on all time scales;
  • While natural variations are important, the human influence is significant and involves a diverse range of first-order climate forcings (including, but not limited to, the human input of CO2);
  • The human influence is dominated by the emissions into the atmosphere of greenhouse gases, particularly carbon dioxide.

This research question has been discussed on Climate Science (e.g. see).

Without new scientists leading the IPCC process as LAs and CLAs, the next IPCC report is doomed to be completed by an oligarchy that is using its privileged position to advocate for a particular perspective on the role of humans within the climate system [the third hypothesis above]. The next IPCC report will not be a balanced assessment, but will continue to be policy advocacy in the guise of a scientific framework.


    Comments Off on Climate Assessment Oligarchy – The IPCC

    Filed under Climate Science Meetings, Climate Science Op-Eds

    Climate Change in Kansas City: A Guest Weblog By Dr. Lynwood Yarbrough

    Let me introduce myself.  I received a Ph.D. degree in Biochemistry and Molecular Biology (Purdue University) and did postgraduate training in Biophysics (The Albert Einstein College of Medicine).  I ran a research lab at a major university medical center for 32 years and recently retired.  I served as a research consultant for the National Institutes of Health for 8 of those years and am presently on the editorial board of a journal in my field. 

    Several years ago I began reading the literature on climate change that was appearing in Science, Nature, and other peer-reviewed journals.  I did so because I was concerned at the alarmism I was seeing in the media regarding “global warming” and the dire predictions of some in the scientific literature.  I consider myself a scientific skeptic and want to be convinced by the data before I accept something as “true” (see Freeman Dyson at edge.org on skepticism in science).

    http://www.edge.org/3rd_culture/dysonf07/dysonf07_index.html

    As a biologist, I am aware of a number of cases in which science has been led in directions not based on hard evidence.  Examples include Malthus and the Malthusian Theory, Lysenkoism in the old Soviet Union, and eugenics in the US and elsewhere (see the excellent archive at Cold Spring Harbor for examples of such “science”).  I suspect not one in fifty Americans alive today is aware that nearly 30,000 people were sterilized in the early part of the 20th century because they were deemed “genetically defective”.

    http://www.dnaftb.org/eugenics/

    To quote from the introduction to the Cold Spring Harbor website: “…It is important to remind yourself that the vast majority of eugenics work has been completely discredited. In the final analysis, the eugenic description of human life reflected political and social prejudices, rather than scientific facts.”  We must try to ensure that we don’t repeat this process with the issue of climate change.

    In my reading on climate and climate change I read the work of Stott, Shaviv, Hansen, Veizer, Feddema, Von Storch and numerous others, too many to mention.  I also discovered the work of the Pielkes, first that of Roger Junior and later that of his father, Roger Senior.  Subsequently, I have read the works of Prins, Rayner, Sarewitz, and others regarding the fallacy of focusing only on CO2, as does the Kyoto approach.  I believe their conclusion is correct: Kyoto is a failure and a new approach is badly needed.

    Earlier this year our local paper, the Kansas City Star, ran an article on temperature anomalies in Kansas City. I had read the definition by Roger Pielke Sr. of an anomaly as a value falling 2 or more standard deviations from the mean and wanted to examine the data for myself.  I then did some web research and found the NOAA data for Kansas City for April for the past 20+ years.  The graphed data from NOAA showed a trend line with a slight slope.

    This slope seemed of questionable significance and I wrote the following email to a NOAA contact:

    Lyn Yarbrough said the following on 4/13/2008 9:23 AM:

     1.  I did a plot of the April data from 1972 to 2007 and the trend line (green) showed a positive slope.  What is the correlation coefficient for the trend line for this time period?  I serve on the editorial board of a journal of Biochemistry and Biophysics and I would reject a paper that showed such a “trend” because it appears to me that the correlation coefficient would be so low as to be meaningless from a statistical point of view.  Why not show the correlation coefficient of any “trend” so that viewers could evaluate the significance for themselves?

     2.  Are means and standard deviations available for daily high and low temperatures for April over the above time period?  The local newspaper had an article today stating that the low temperatures we are seeing this year in KC are not anomalous.  From my reading about climate (the scientific literature and blogs such as that of Roger Pielke Sr.) I understand that an anomaly is defined as a temperature that falls outside two standard deviations of the mean.  Is this correct?

    3.  The table format presumably is intended to rank the data from both the highest and the lowest temperature.  Both rankings appear to be the same.

    thank you, Lynwood Yarbrough

    I received the following response to my inquiry:
     
    Lyn,

    1.  At present, we do not compute the correlation coefficient for the trend lines on these plots.  We have talked about including them for those who are more statistically savvy.  This website was originally designed to be interpreted by a layperson.  We will likely include more statistics with these plots in the future.

    2.  The Climate at a Glance website doesn’t utilize daily data, only monthly data.  So, I wouldn’t have these statistics handy. We utilize the term “anomaly” to mean anything which departs from the mean (positive or negative). So, certainly, 2 std from the mean would also be an anomaly, but I’m not sure if it is more accurate than the definition we use here at NCDC.

    3.  The table format allows a user to shorten the period of record for ranking purposes.  You will always have a full period of record comparison and whatever the user selects.  The default is the entire period of record.  So, in that case, you will see two sets of the same rankings for the same period of record.  I hope that makes sense.

    Thank you for your input.  It will help to make our products more useful in the future.

    Following this response, I downloaded the data and did a linear regression and the R value obtained was 0.157.

    As I understand it, the square of this value (the coefficient of determination, R²) represents the fraction of the observed variation attributable to the independent variable (year).  R² is thus equal to approximately 0.03.  I then did a t-test comparing the temperatures for the first ten years of the period (1973-1982) with those of the last 10 years (1998-2007).  The results are below.

    The results of an unpaired t-test performed at 16:12 on 7-MAY-2008

    t= -1.09

    sdev= 3.19

    degrees of freedom = 18

    The probability of this result, assuming the null hypothesis, is 0.29

    I am no statistician and have no formal training in statistics. However, it appears fair to conclude that, based on the above data, there has been no statistically significant change in average April temperature for Kansas City over the period for which data were obtained.  I have examined NOAA graphs for other months during this time period and, although I have not analyzed the data, most months show “trends” similar to that above.
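    The calculations described above are straightforward to reproduce.  The sketch below is a minimal Python implementation of the Pearson correlation coefficient and a pooled-variance unpaired t-test (18 degrees of freedom for two ten-year samples, as in the output above); the NOAA series itself is not reproduced in this post, so any inputs are hypothetical:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def pooled_t_test(a, b):
    """Unpaired two-sample t statistic with a pooled standard deviation.

    Returns (t, pooled_sd, degrees_of_freedom), with df = n1 + n2 - 2.
    """
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    t = (ma - mb) / (sp * math.sqrt(1 / na + 1 / nb))
    return t, sp, na + nb - 2
```

    Applied to the actual April data, `pearson_r(years, temps)` would give the R value to square, and `pooled_t_test(first_decade, last_decade)` the t statistic and degrees of freedom; the probability under the null hypothesis then comes from the t distribution with that df.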

    I think that these data show clearly that there should be changes in the way in which NOAA presents temperature data and any “trends” in those data.  At a minimum, the NOAA graph should show the calculated correlation coefficient for any trend line shown.  I believe that there are few, if any, journals in my field of science and medicine that would publish a report purporting to show a significant trend based on data such as these.

    Finally, the NOAA contact noted that “We utilize the term ‘anomaly’ to mean anything which departs from the mean (positive or negative).  So, certainly, 2 std from the mean would also be an anomaly, but I’m not sure if it is more accurate than the definition we use here at NCDC.”  Since the temperature data are presented to tenths of a degree, almost every year would be anomalous by this definition, since few years show exactly the same temperature for any given month.  Perhaps there needs to be a little more rigor in data analysis and in the definition of climate change at NOAA.  See:

    Pielke, R.A. and N. Waage, 1987: A definition of normal weather. Natl. Wea. Dig., 12, 20-22. 
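    The two-standard-deviation definition cited above is easy to state in code.  The sketch below, in Python with hypothetical monthly values for illustration, flags a temperature as anomalous only if it falls more than two sample standard deviations from the historical mean:

```python
import math

def is_anomalous(value, history):
    """True if value lies more than two sample standard deviations
    from the mean of the historical record (the two-sigma definition
    of an anomaly described in the post)."""
    n = len(history)
    mean = sum(history) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in history) / (n - 1))
    return abs(value - mean) > 2 * sd

# Hypothetical April mean temperatures, for illustration only
history = [10.0, 12.0, 11.0, 13.0, 9.0, 11.0, 12.0, 10.0, 11.0, 12.0]
```

    Under this definition most years in such a record are, by construction, not anomalous; under NCDC’s “any departure from the mean” usage, nearly every year would be.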

    View Dr. Lyn Yarbrough’s Bio

    Comments Off on Climate Change in Kansas City: A Guest Weblog By Dr. Lynwood Yarbrough

    Filed under Guest Weblogs

    Integrated Land Use Approach – An Example Of Applying The “Vulnerability Paradigm”

    There is an interesting concept in land management that relates directly to the integrated approach Climate Science has recommended with respect to the reduction of vulnerability (e.g. see) [and thanks to Ray Soper for alerting us to this!].

    As Ray has written

    “A pioneer in this part of the world is Peter Andrews.  A website www.naturalsequencefarming.com.au  presents Peter’s ideas, and a forum for discussion.  Peter’s main proposition is that farming practices over the past 150 years in Australia have progressively dehydrated and degraded much of the country.  He argues that ploughing the soil, draining swamps, taking willows out of the rivers, monoculture farming practices, construction of dams everywhere has led to the systematic dehydration of the landscape.

    He has developed strategies for rehydration, principally by restoring wetlands, swamps, floodplains by slowing down the water flows and keeping more of the water in the landscape.  As well, he respects nature’s strategies to rehabilitate degraded areas, and welcomes all vegetation (including what many of us call weeds) as part of that process.

    Channel 9 presented a program on Peter that shows directly the beneficial results of his approaches.  I think that this work adds an important new angle to the ways in which mankind can assist in managing climate, which allied to the new findings that show that trees, like animals, have ‘thermostat’ systems that maintain temperatures within a close range, can change the way we approach the problems.

    As written on a website that describes Natural Sequence Farming

    Peter Andrews is a grazier and race horse breeder from Bylong in the Upper Hunter Valley. He is a man who many believe is way ahead of his time. Peter has gained fundamental insights to the natural functioning of the Australian landscape that leave him almost without peer. He has applied these insights in restoring his and other properties to fertility levels that he says existed upon European arrival in this country.

    The model that Peter Andrews set up at Tarwyn Park was based on the principle of reintroducing natural landscape patterns and processes as they would have existed in Australia prior to European settlement. This included:

    • reintroduction of a natural valley flow pattern, reconnecting the stream to its flood plain, which would reintroduce a more natural hydrological and fertility cycle to that landscape.     

    • and that through a managed succession of the vegetation (mostly weeds back then), the natural fluvial pattern could be ‘regrown’, so that then nutrients and biomass harvested on the flood plain could be redistributed throughout the property and obviously through the stock.

    This type of integrated approach to reduce vulnerability to environmental threats, including from climate variability and change, is to be commended and encouraged. Rather than relying exclusively on controlling atmospheric concentrations of CO2, a much more scientifically integrated approach is needed, as exemplified by the work of Peter Andrews.

    Comments Off on Integrated Land Use Approach – An Example Of Applying The “Vulnerability Paradigm”

    Filed under Vulnerability Paradigm

    Oceanic Influences On Recent Continental Warming – An Important New Research Paper: Compo and Sardeshmukh, 2008

    Climate Science has previously weblogged on an important new perspective on the role of regional climate forcings on climate variability and change which involves ocean-atmosphere interactions (e.g. see and see). Now there is a very significant new paper on this subject by this research group which should attract major attention. It is

    Compo,G.P., and P.D. Sardeshmukh, 2008: Oceanic influences on recent continental warming. Climate Dynamics, in press.

    The abstract reads

    “Evidence is presented that the recent worldwide land warming has occurred largely in response to a worldwide warming of the oceans rather than as a direct response to increasing greenhouse gases (GHGs) over land. Atmospheric model simulations of the last half-century with prescribed observed ocean temperature changes, but without prescribed GHG changes, account for most of the land warming. The oceanic influence has occurred through hydrodynamic-radiative teleconnections, primarily by moistening and warming the air over land and increasing the downward longwave radiation at the surface. The oceans may themselves have warmed from a combination of natural and anthropogenic influences.”

    The conclusion of the paper reads

    “In summary, our results emphasize the significant role of remote oceanic influences, rather than the direct local effect of anthropogenic radiative forcings, in the recent continental warming. They suggest that the recent oceanic warming has caused the continents to warm through a different set of mechanisms than usually identified with the global impacts of SST changes. It has increased the humidity of the atmosphere, altered the atmospheric vertical motion and associated cloud fields, and perturbed the longwave and shortwave radiative fluxes at the continental surface. While continuous global measurements of most of these changes are not available through the 1961-2006 period, some humidity observations are available and do show upward trends over the continents. These include near-surface observations (Dai 2006) as well as satellite radiance measurements sensitive to upper tropospheric moisture (Soden et al. 2005).

     Although not a focus of this study, the degree to which the oceans themselves have recently warmed due to increased GHG, other anthropogenic, natural solar and volcanic forcings, or internal multi-decadal climate variations is a matter of active investigation (Stott et al. 2006; Knutson et al. 2006; Pierce et al. 2006). Reliable assessments of these contributing factors depend critically on reliable estimations of natural climate variability, either from the observational record or from coupled climate model simulations without anthropogenic forcings. Several recent studies suggest that the observed SST variability may be misrepresented in the coupled models used in preparing the IPCC’s Fourth Assessment Report, with substantial errors on interannual and decadal scales (e.g., Shukla et al. 2006, DelSole, 2006; Newman 2007; Newman et al. 2008). There is a hint of an underestimation of simulated decadal SST variability even in the published IPCC Report (Hegerl et al. 2007, FAQ9.2 Figure 1). Given these and other misrepresentations of natural oceanic variability on decadal scales (e.g., Zhang and McPhaden 2006), a role for natural causes of at least some of the recent oceanic warming should not be ruled out.

    Regardless of whether or not the rapid recent oceanic warming has occurred largely from anthropogenic or natural influences, our study highlights its importance in accounting for the recent observed continental warming. Perhaps the most important conclusion to be drawn from our analysis is that the recent acceleration of global warming may not be occurring in quite the manner one might have imagined. The indirect and substantial role of the oceans in causing the recent continental warming emphasizes the need to generate reliable projections of ocean temperature changes over the next century, in order to generate more reliable projections of not just the global mean temperature and precipitation changes (Barsugli et al. 2006), but also regional climate changes.”

    This is a major scientific conclusion, and the authors should be recognized for this achievement. If these results are robust, they further document that a regional perspective on climate variability and change must be adopted, rather than a focus on a global average surface temperature change, as emphasized in the 2007 IPCC WG1 report.  This work also provides support for the perspective on climate sensitivity that Roy Spencer reported on in his PowerPoint presentation last week (see).

    Comments Off on Oceanic Influences On Recent Continental Warming – An Important New Research Paper: Compo and Sardeshmukh, 2008

    Filed under Climate Change Forcings & Feedbacks