Monthly Archives: October 2010

Invited Input To The American Geophysical Union On Their Mission

I was invited to participate in a teleconference (the first of several) with colleagues from the American Geophysical Union on assessing how well the AGU is doing and recommendations of what to do differently. The attendees at the teleconference were nominated from the different discipline and focus groups within the AGU [I was nominated by the Natural Hazards focus group]. The President-Elect of the AGU Carol Finn is to be commended for reaching out to the membership of the AGU.

The discipline areas at the AGU are

  • Atmospheric Sciences
  • Biogeosciences
  • Geodesy
  • Geomagnetism and Paleomagnetism
  • Hydrology
  • Ocean Sciences
  • Planetary Sciences
  • Seismology
  • Space Physics and Aeronomy
  • Tectonophysics
  • Volcanology, Geochemistry, and Petrology

and the focus groups are

  • Atmospheric and Space Electricity
  • Cryosphere Sciences
  • Earth and Planetary Surface Processes
  • Earth and Space Science Informatics
  • Mineral and Rock Physics
  • Global Environmental Change (no Web site)
  • Natural Hazards
  • Near Surface Geophysics
  • Nonlinear Geophysics
  • Paleoceanography and Paleoclimatology
  • Study of the Earth’s Deep Interior
  • Societal Impacts and Policy Sciences (no Web site)

The AGU will release the information on this survey on their website when ready, as I understand. However, I want to use this post to summarize a few issues.

    1. First, in terms of the journals, they remain at the highest level of peer-reviewed professional journals. However, the decision of Geophysical Research Letters [GRL] to permit Editors to reject papers without any reviews is a recipe for Editors to act even more as gatekeepers of research papers than they already do. More appropriately, the Editor should be a judge who collects assessments of a paper from others who, generally, will have more expertise than he or she does.

    Except for submissions that were clearly outside the topic area of the journal or clearly inadequate in terms of English and/or format, I never permitted my Editors to reject papers without following the peer-review process when I was Chief Editor of the Monthly Weather Review and the Journal of the Atmospheric Sciences.

    2. I also recommended that the reviews (remaining anonymous) and the Editor’s decision on all papers (both accepted and rejected, the latter with the author’s permission) be made available on the AGU website. The rejected papers (as manuscripts) themselves would reside on the websites of the authors. If a paper was subsequently accepted elsewhere, this would be noted on the author’s website. The availability of this information would permit AGU members and others to better assess whether the Editors are being objective judges of whether to accept or reject a paper.

    3. One alternative to the above recommendation is an open journal sponsored by the AGU, in which papers would be submitted and the review process completed on-line in public. This type of approach has been adopted by the European Geosciences Union [EGU]; i.e. see where they write

    “The EGU has extended the traditional peer review process by adding the concepts of a “Public Peer Review”, i.e. the comments of the reviewers, anonymous or attributed, are published together with the article on the web, and of “Interactive Public Discussions”, i.e. after having passed a rapid access peer review process manuscripts submitted to EGU two-stage-journals will be published first of all in the “Discussions” part of the website of that journal being then subject to interactive public discussions initiated by alerting the corresponding scientific community. The results of the public peer-review and of the interactive public discussions are then used for the final evaluation of the manuscript by the Editor and, eventually, for its publication on the website of the actual journal.”

    4. There was also a strong feeling among all of the attendees at the teleconference that interdisciplinary research needs to be more encouraged. Earth Interactions is the one AGU journal of this type, and I recommended that its use should be encouraged.

    5. To identify the interdisciplinary topics within the AGU, I suggested that each member fill in a vertical (discipline) – horizontal (focus group) matrix indicating where they fit. The AGU discipline and focus areas are listed earlier in this post. AGU members would choose more than one matrix entry if their work is interdisciplinary. If a member finds a discipline and/or focus area missing, that is an area the AGU needs to add to its responsibilities.
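The matrix idea in item 5 can be sketched in a few lines of code. This is purely illustrative: the member entries and the subset of discipline and focus-group names below are hypothetical, chosen only to show how self-identified (discipline, focus group) cells could be tallied to locate interdisciplinary activity.

```python
# Hypothetical sketch of the discipline (rows) x focus-group (columns)
# self-identification matrix suggested in item 5. Names are a subset of
# the lists earlier in the post; the member entries are invented.
disciplines = ["Atmospheric Sciences", "Hydrology", "Ocean Sciences"]
focus_groups = ["Natural Hazards", "Cryosphere Sciences", "Nonlinear Geophysics"]

# One member marks every (discipline, focus group) cell where their work
# fits; an interdisciplinary member marks more than one cell.
member_entries = {
    ("Atmospheric Sciences", "Natural Hazards"),
    ("Hydrology", "Natural Hazards"),
}

# Tally how many of the member's entries fall in each focus group --
# a crude indicator of where interdisciplinary activity concentrates.
counts = {fg: sum(1 for (_, f) in member_entries if f == fg) for fg in focus_groups}
print(counts["Natural Hazards"])  # 2
```

Aggregated over the whole membership, empty rows or columns (or cells members wanted but could not find) would flag the missing areas mentioned above.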

    6. In the strategic plan for the AGU, I recommended that real world observations and comparison with models be elevated to a primary strategic goal.  Also, there is a need for a bottom-up, resource-based vulnerability focus as discussed in the weblog post

    A Way Forward In Climate Science Based On A Bottom-Up Resource-Based Perspective

    7. I also urged that all AGU talks be video recorded and provided for download, as well as made available on a venue such as Roku [which already includes lectures by faculty from major universities].

    8. Finally, I recommended that members vote on ALL AGU policy statements within their area of expertise [as defined by the discipline/focus matrix discussed under #5]. Instead of having just a small set of AGU members decide what the policy recommendation of the AGU is, a recorded vote would be required.

    Comments Off on Invited Input To The American Geophysical Union On Their Mission

    Filed under Climate Science Op-Eds

    The Role Of Fossil Water On Climate – An Important Climate Forcing Whose Influence Has Not Yet Been Properly Assessed

    There was an article in the October 9 2010 issue of The Economist titled

    Deep waters, slowly drying up

    which prompted me to consider the importance of non-replenished ground water to the atmospheric water vapor content when this deep water is transferred to the surface and evaporates. Non-replenished water is called “fossil water”.

    This is an important climate issue which seems to have been overlooked. The Economist article includes the text

    “….. aquifers are still poorly understood. Until a UNESCO inventory in 2008, nobody knew even how many transboundary aquifers existed. Experts are still refining the count: the American-Mexico border may include 8, 10, 18 or 20 aquifers, depending on how you measure them. Defining sustainability vexes hydrologists too, particularly with ancient fossil aquifers that will inevitably run dry eventually. Estimates for the life of the Nubian sandstone aquifer range from a century to a millennium.”

    Fossil water, as described in Wikipedia, is

    Fossil water or paleowater is groundwater that has remained sealed in an aquifer for a long period of time. Water can rest underground in “fossil aquifers” for thousands or even millions of years. When changes in the surrounding geology seal the aquifer off from further replenishing from precipitation, the water becomes trapped within, and is known as fossil water.

    The Ogallala Aquifer and Nubian Sandstone Aquifer System are among the most notable of fossil water reserves. Fossil aquifers also exist in the Sahara, the Kalahari, and the Great Artesian Basin. A further potential store of ancient water is Lake Vostok, a subglacial lake in Antarctica.

    Fossil water is a non-renewable resource.[1] Whereas most aquifers are naturally replenished by infiltration of water from precipitation, fossil aquifers get very little recharge.[2] The extraction of water from such non-replenishing groundwater reserves (known as low safe-yield reserves) is known in hydrology as “water mining”.[3] If water is pumped from a well at a withdrawal rate that exceeds the natural recharge rate (which is very low or zero for a fossil aquifer), the water table drops, forming a depression in the water levels around the well.[2] Water mining has been blamed for contributing to rising sea levels.

    An important climate question is the contribution of this fossil water, through irrigation, to local, regional and global transpiration and evaporation from soils and other surfaces. This is a flux of water vapor into the atmosphere that would otherwise not occur.

    Fossil water reservoirs can be quite large. For example, in the paper

    Issar, A, 1985: Fossil Water under the Sinai-Negev Peninsula. Scientific American Vol. 253, No. 1, p 104-110, July, 1985

    the abstract reads

    “A study of water issuing from springs and wells scattered across the Sinai (Egypt) and the Negev (Israel) deserts has identified a great aquifer formed during the last glacial age. Satellite photographs of the region reveal surface characteristics consistent with subterranean geology that could support an aquifer. Carbon-14 dating puts the age of the water from springs and wells at 20,000 to 30,000 yr. The ages determined from C-14 dating agree with results of hydrological flow models. Water samples from the ‘ Ayun Musa, from the abandoned oil-exploration well dug into the Nubian sandstone layer in Nakhel and from the artesian wells in the Nubian sandstone layer near the Dead Sea all have the same relative amounts of deuterium and oxygen-18. The chemical and isotopic studies in conjunction with archaeological evidence suggest that the aquifer holds rainwater that was trapped during the most recent ice age. It has been calculated that the Nubian sandstone aquifer under the Sinai and the Negev holds 200 billion cu m of water, 70 billion cu m of which is under the Negev. Agricultural settlements in the Negev demonstrate that the water is low enough in salt content to be suitable for irrigation.”

    Recently, there was a report on the Libyan government’s program to utilize fossil water for irrigation:

    Libya’s Qaddafi taps ‘fossil water’ to irrigate desert farms

    It is reported in the above news article

    The Libyan government says the 26-year project has cost $19.58 billion. Nearing completion, the Great Man-Made River is the largest irrigation project in the world and the government says it intends to use it to develop 160,000 hectares (395,000 acres) of farmland. It is also the cheapest available option to irrigate fields in the water-scarce country, which has an average annual rainfall of about one inch.

    I have asked my colleague Faisal Hossain of Tennessee Tech University, who is an internationally well-respected hydrologist, about this issue, and he replied that, based on the paper

    Freydank, K.  and Siebert, S. Towards mapping the extent of irrigation in the last century: a time series of irrigated area per country. Frankfurt Hydrology Paper 08, Institute of Physical Geography, University of Frankfurt, Frankfurt am Main, Germany, (2008).

    globally about 60% of irrigation water is from ground water, but the fraction that is from fossil water does not appear to have yet been assessed. Professor Hossain reminded me that the paper

    [UPDATED PM Oct 18 2010 to correct citation and with edits in the associated  text]

    DeAngelis, A., F. Dominguez, Y. Fan, A. Robock, M. D. Kustu, and D. Robinson (2010), Evidence of enhanced precipitation due to irrigation over the Great Plains of the United States, J. Geophys. Res., 115, D15115, doi:10.1029/2010JD013892.

    claims that groundwater in the Midwest from the Ogallala Aquifer (much of which is fossil water) doubled its contribution in terms of evaporation in the 20th century.

    There is another paper on the role of irrigation in the global climate. It is

    Puma, M. J., and B. I. Cook (2010), Effects of irrigation on global climate during the 20th century, J. Geophys. Res., 115, D16120, doi:10.1029/2010JD014122

    I posted on this paper in

    New Paper “Effects Of Irrigation On Global Climate During The 20th Century” By Puma and Cook (2010).

    Among the recommendations in the Puma et al (2010) paper is

    “Future efforts to understand irrigation in a climate model setting should not only carefully document the amount of irrigation water applied to the land, but also keep track of the relative amounts of surface water and groundwater used for irrigation.”

    Faisal also referred me to the paper

    Scanlon, B. R., I. Jolly, M. Sophocleous, and L. Zhang (2007), Global impacts of conversions from natural to agricultural ecosystems on water resources: Quantity versus quality, Water Resour. Res., 43, W03437, doi:10.1029/2006WR005486

    which includes these excerpts from the abstract:

    “Past land use changes have greatly impacted global water resources…… Since the 1950s, irrigated agriculture has expanded globally by 174%, accounting for ~90% of global freshwater consumption. ….. Long time lags (decades to centuries) between land use changes and system response (e.g., recharge, streamflow, and water quality), particularly in semiarid regions, mean that the full impact of land use changes has not been realized in many areas and remediation to reverse impacts will also take a long time…….”

    We recommend an extension to these studies: specifically, that ground water be further broken down into replenishable ground water and fossil water, and that the local, regional and global contribution of fossil water used for irrigation to the transfer of water vapor into the atmosphere be assessed.

    Fossil water, as with fossil fuels, involves the human insertion into the atmosphere of a gas (in this case additional water vapor) which would otherwise not be there, and is thus a climate forcing.

    Comments Off on The Role Of Fossil Water On Climate – An Important Climate Forcing Whose Influence Has Not Yet Been Properly Assessed

    Filed under Climate Change Forcings & Feedbacks

    Comment On The Science Paper “Atmospheric CO2: Principal Control Knob Governing Earth’s Temperature” By Lacis Et Al 2010

    A new paper has appeared in Science magazine that concludes that CO2 is the dominant control of the Earth’s climate system. It is also yet another model sensitivity study (climate process study) in which only a subset of the real-world climate system is simulated.

     The paper is

    Andrew A. Lacis, Gavin A. Schmidt, David Rind, and Reto A. Ruedy, 2010: Atmospheric CO2: Principal Control Knob Governing Earth’s Temperature. Science, Vol. 330, 15 October 2010.

    The abstract reads

    “Ample physical evidence shows that carbon dioxide (CO2) is the single most important climate-relevant greenhouse gas in Earth’s atmosphere. This is because CO2, like ozone, N2O, CH4, and chlorofluorocarbons, does not condense and precipitate from the atmosphere at current climate temperatures, whereas water vapor can and does. Noncondensing greenhouse gases, which account for 25% of the total terrestrial greenhouse effect, thus serve to provide the stable temperature structure that sustains the current levels of atmospheric water vapor and clouds via feedback processes that account for the remaining 75% of the greenhouse effect. Without the radiative forcing supplied by CO2 and the other noncondensing greenhouse gases, the terrestrial greenhouse would collapse, plunging the global climate into an icebound Earth state.”

    Lacis et al 2010 is correct that the human addition of CO2 is a first order climate forcing as we reported on in our paper

    Pielke Sr., R., K. Beven, G. Brasseur, J. Calvert, M. Chahine, R. Dickerson, D. Entekhabi, E. Foufoula-Georgiou, H. Gupta, V. Gupta, W. Krajewski, E. Philip Krider, W. K.M. Lau, J. McDonnell,  W. Rossow,  J. Schaake, J. Smith, S. Sorooshian,  and E. Wood, 2009: Climate change: The need to consider human forcings besides greenhouse gases. Eos, Vol. 90, No. 45, 10 November 2009, 413. Copyright (2009) American Geophysical Union.

    However, this paper perpetuates the narrow view that, since this gas does not condense and precipitate, it is THE dominant forcing because it “sustains the current levels of atmospheric water vapor and clouds via feedback processes.”

    Text in their paper includes

    It often is stated that water vapor is the chief greenhouse gas (GHG) in the atmosphere. For example, it has been asserted that “about 98% of the natural greenhouse effect is due to water vapour and stratiform clouds with CO2 contributing less than 2%”

    An improved understanding of the relative importance of the different contributors to the greenhouse effect comes from radiative flux experiments that we performed using Goddard Institute for Space Studies (GISS) ModelE

    “CO2 is the key atmospheric gas that exerts principal control over the strength of the terrestrial greenhouse effect. Water vapor and clouds are fast-acting feedback effects, and as such are controlled by the radiative forcings supplied by the noncondensing GHGs. There is telling evidence that atmospheric CO2 also governs the temperature of Earth on geological time scales…”

    The paper is an interesting model experiment, but it really does not present any new insight beyond what we already know. Quite frankly, this would be a good Master’s thesis study to show why CO2 is an important climate forcing, as well as to provide insight into the water cycle feedback. However, its presentation as a major new research insight by Science is puzzling, unless the magazine wants to promote the message at the end of the Lacis et al paper that

    “The anthropogenic radiative forcings that fuel the growing terrestrial greenhouse effect continue unabated. The continuing high rate of atmospheric CO2 increase is particularly worrisome, because the present CO2 level of 390 ppm is far in excess of the 280 ppm that is more typical for the interglacial maximum, and still the atmospheric CO2 control knob is now being turned faster than at any time in the geological record. The concern is that we are well past even the 300- to 350-ppm target level for atmospheric CO2, beyond which dangerous anthropogenic interference in the climate system would exceed the 25% risk tolerance for impending degradation of land and ocean ecosystems, sea-level rise, and inevitable disruption of socioeconomic and food producing infrastructure. Furthermore, the atmospheric residence time of CO2 is exceedingly long, being measured in thousands of years. This makes the reduction and control of atmospheric CO2 a serious and pressing issue, worthy of real-time attention.”

    My conclusion is that their paper does not present new scientific insight but is actually an op-ed presented in the guise of a research paper by Science magazine.

    They also do not present (and show why they should be refuted) alternative published perspectives, such as what we present in Pielke et al (2009):

    “In addition to greenhouse gas emissions, other first-order human climate forcings are important to understanding the future behavior of Earth’s climate. These forcings are spatially heterogeneous and include the effect of aerosols on clouds and associated precipitation [e.g., Rosenfeld et al., 2008], the influence of aerosol deposition (e.g., black carbon (soot) [Flanner et al. 2007] and reactive nitrogen [Galloway et al., 2004]), and the role of changes in land use/land cover [e.g., Takata et al., 2009]. Among their effects is their role in altering atmospheric and ocean circulation features away from what they would be in the natural climate system [NRC, 2005]. As with CO2, the lengths of time that they affect the climate are estimated to be on multidecadal time scales and longer.”

    The authors of this paper are welcome to present a guest post on my weblog (or could do that on Real Climate) as to what is actually new about their findings. Absent that rebuttal, we should interpret this article as simply a repackaging of their perspective that CO2 is the dominant climate forcing.

    It is well known that CO2 is a first order greenhouse gas and is essential for providing the Earth with a habitable climate. We discuss this in our book

    Cotton, W.R. and R.A. Pielke, 2007: Human impacts on weather and climate, Cambridge University Press, 330 pp.

    For example, on page 158 we wrote

    “In the absence of ….greenhouse gases, the average surface temperature of the Earth would be over 30C cooler than it is today.”


    “The major greenhouse gas is water vapor which varies naturally in space and time due to the Earth’s hydrological cycle….[the] second most important greenhouse gas is carbon dioxide. In contrast to water vapor, carbon dioxide is rather uniformly distributed throughout the troposphere, although the radiative forcing associated with it is more heterogeneous as a result of spatial (e.g. latitudinal) and temporal variations in tropospheric temperature and water vapor concentrations, and in surface emissions and absorption.”

    On page 174 in our Section “Water vapor feedbacks”, we write

    “…any changes in water vapor concentration in response to other greenhouse gases would substantially alter the net greenhouse heating”.

    In Section 8.2.8 on pages 166 to 175 of our book we present a section titled “Assessment of the relative effect of carbon dioxide and water vapor” with this text reproduced and discussed further in the weblog posts 

    Relative Roles of CO2 and Water Vapor in Radiative Forcing

    Further Analysis Of Radiative Forcing By Norm Woods

    In Cotton and Pielke and in these posts, we present an analysis of the role of 1X and 2X CO2 as a radiative forcing for three representative atmospheric soundings [tropical; subarctic summer; subarctic winter].

    We found, as should be expected since the radiative forcing of CO2 is a logarithmic function of its atmospheric concentration, that the largest effect of changing its concentration comes from going from zero CO2 to 1X CO2, relative to going from 1X CO2 to 2X CO2. For example, as we report in our book and in my weblog posts, Norm Woods found that for a tropical sounding

    “the downwelling longwave flux at the surface when the CO2 concentration changes from 360ppm to 560ppm is 0.09 Watts per meter squared, as contrasted with a change of 0.41 Watts per meter squared when the concentration changes to 360ppm from 0 ppm. The reason for this relative insensitivity to added CO2 in the tropics is due to the high concentrations of water vapor which results in additional long wave flux changes due to CO2 being very muted”

    For a subarctic summer sounding

    “the corresponding values are 2.94 Watts per meter squared when changing the CO2 concentrations to 360 ppm from 0, and 0.47 Watts per meter squared when changing the CO2 concentrations to 560 ppm from 360 ppm.”

    For a subarctic winter sounding

    “the change is 14.43 Watts per meter squared when the CO2 concentrations are changed to 360 ppm from 0, and 1.09 Watts per meter squared when the CO2 concentrations are changed to 560 ppm from 360 ppm.”
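The logarithmic dependence described above can be illustrated with the widely used simplified expression for CO2 radiative forcing, dF = 5.35 ln(C/C0) W m^-2 (Myhre et al. 1998). To be clear, this is a global top-of-atmosphere approximation, not the per-sounding surface downwelling fluxes computed by Norm Woods; it is included here only to show that equal concentration ratios yield equal forcing increments, so each successive addition of CO2 matters less.

```python
import math

# Simplified TOA CO2 forcing expression of Myhre et al. (1998):
# dF = 5.35 * ln(C / C0)  [W m^-2]
def co2_forcing(c0_ppm, c_ppm):
    return 5.35 * math.log(c_ppm / c0_ppm)

first_doubling = co2_forcing(280.0, 560.0)  # preindustrial doubling, ~3.7 W m^-2
half_to_280 = co2_forcing(140.0, 280.0)     # same ratio, identical increment
recent_step = co2_forcing(360.0, 560.0)     # the 1X-to-2X step in the text, ~2.4 W m^-2

print(round(first_doubling, 2), round(half_to_280, 2), round(recent_step, 2))
```

Because the forcing depends only on the concentration ratio, the formula cannot be applied from 0 ppm; the large zero-to-1X effect in the Woods calculations reflects the full line-by-line radiative transfer, which the logarithmic fit does not capture at very low concentrations.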

    The Lacis et al 2010 paper accurately reports that CO2 is a first-order climate forcing. However, this is not a new finding.

    There is a further (presumably unintended by the authors) bottom line message, however, from the Lacis et al 2010 Science paper.

    While they have reconfirmed the importance of CO2 as a first-order climate forcing, they have not added anything that is new. Thus, in terms of further model predictions using the GISS model (or other IPCC models), what are they going to add that is policy relevant beyond what has already been achieved with their model?

    Comments Off on Comment On The Science Paper “Atmospheric CO2: Principal Control Knob Governing Earth’s Temperature” By Lacis Et Al 2010

    Filed under Climate Change Forcings & Feedbacks, Climate Models

    Seminar – “Co-evolution of Climate and Life” By Gordon Bonan

    There will be an informative seminar by an internationally well-respected climate scientist, Gordon Bonan of NCAR, on the role of biological processes within the climate system. Indeed, the topic of the ecology of ecosystems [ecosystem function] is really just the biological community’s way to describe the climate system on Earth. The term “climate”, of course, originated in the physical sciences community, but is increasingly taking on the integrated perspective presented in the figure below from NRC (2005).


    The seminar is

    Subject: Astrobiology Seminar, Wednesday, Oct. 20, 2:00 PM

    “Co-evolution of Climate and Life”

    Presented by :  Gordon Bonan (NCAR)

    DATE:  Wednesday,  October 20
    TIME:   2:00 PM
    LOCATION:  LASP Conference Room D142, Duane Physics Building,
    University of Colorado-Boulder

    Refreshments available at 1:45 PM


    The physiology of plants and ecology of ecosystems are important determinants of climate change. I review the processes by which the biosphere affects climate and give examples of biosphere-atmosphere coupling in the paleoclimate record. Climate change over the 20th and 21st centuries is greatly influenced by the biosphere, through carbon cycle feedbacks and through anthropogenic land cover change. I give examples of these feedbacks and discuss the potential for ecosystems to mitigate 21st century climate change.

    Comments Off on Seminar – “Co-evolution of Climate and Life” By Gordon Bonan

    Filed under Climate Change Forcings & Feedbacks

    Comments On Judy Curry’s Post “The Culture Of Building Confidence In Climate Models”

    In Judy Curry’s post at Climate Etc

    The culture of building confidence in climate models

    she listed information from Knutti 2008 regarding why there should be confidence in the multi-decadal global climate models. I have reproduced this text from Judy’s post below.

    Knutti 2008 describes the reasons for having confidence in climate models as follows:

    • Models are based on physical principles such as conservation of energy, mass and angular momentum.
    • Model results are consistent with our understanding of processes based on simpler models, conceptual or theoretical frameworks.
    • Models reproduce the mean state and variability in many variables reasonably well, and continue to improve in simulating smaller-scale features
    • Models reproduce observed global trends and patterns in many variables.
    • Models are tested on case studies such as volcanic eruptions and more distant past climate states
    • Multiple models agree on large scales, which is implicitly or explicitly interpreted as increasing our confidence
    • Projections from newer models are consistent with older ones (e.g. for temperature patterns and trends), indicating a certain robustness.

    I will discuss each of these criteria below

    1. “Models are based on physical principles such as conservation of energy, mass and angular momentum.”

    Models actually include basic physics for only a subset of the processes. This basic physics includes the pressure gradient force (e.g. in the atmosphere and oceans), gravity, and advection (e.g. by winds, currents, and percolation of water into the soil) on the resolvable scale of the model (which is at least 4 grid increments, as I discuss in detail in Pielke (2002)). All other aspects of the physics, chemistry and biology are parameterized using tunable coefficients and functions. The multi-decadal global climate models (and indeed all numerical climate models) require the conservation of energy, mass and angular momentum, but the implication from the first bullet of Knutti 2008 that the climate models are basic physics code is incorrect.
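The resolvable-scale point can be made concrete with a trivial calculation. The four-grid-increment rule is from the text above (Pielke 2002); the example grid spacings are illustrative assumptions, not tied to any particular model.

```python
# A grid-point model cannot faithfully represent features smaller than
# roughly four grid increments (Pielke 2002): shorter wavelengths are
# badly distorted or damped by the numerics.
def smallest_resolvable_km(grid_spacing_km, increments=4):
    return increments * grid_spacing_km

# An assumed global-model grid of ~200 km therefore resolves nothing
# smaller than ~800 km; everything below that scale (convection, most
# terrain and land-use effects) must be parameterized.
print(smallest_resolvable_km(200.0))  # 800.0
print(smallest_resolvable_km(100.0))  # 400.0
```

This is why "based on physical principles" applies only to the large-scale dynamical core, while the sub-grid physics rests on tunable parameterizations.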

    2. “Model results are consistent with our understanding of processes based on simpler models, conceptual or theoretical frameworks.”

    This is certainly a necessary test of any complex code. However, the pertinent question is whether the model results are consistent with real-world observations. For time periods decades into the future, there is no way to test this requirement. In fact, even in hindcasts of past years, the multi-decadal climate models have no regional skill, as I posted on in When Is A Model a Good Model?

    3.  “Models reproduce the mean state and variability in many variables reasonably well, and continue to improve in simulating smaller-scale features” and “Models reproduce observed global trends and patterns in many variables”.

    These are erroneous claims. As shown, for example, in

    Koutsoyiannis, D., A. Efstratiadis, N. Mamassis, and A. Christofides, 2008: On the credibility of climate predictions, Hydrological Sciences Journal, 53 (4), 671-684

    where, among their conclusions, they write with respect to the global climate models that they 

    “…perform poorly, even at a climatic (30-year) scale. Thus local model projections cannot be credible, whereas a common argument that models can perform better at larger spatial scales is unsupported.”

    Even Kevin Trenberth has written with respect to these models (see)

    “…the science is not done because we do not have reliable or regional predictions of climate.”

    4. “ Models are tested on case studies such as volcanic eruptions and more distant past climate states”

    The testing of global climate models when there is a volcanic eruption is a much simpler evaluation than multi-decadal global model predictions associated with natural variability and the diverse range of human inputs into the climate system.

    Large volcanic eruptions result in the insertion of large quantities of ash into the stratosphere, which reduces the solar irradiance that reaches the surface. This produces a cooling, with the spatial distribution of the cooling dependent on where the volcanic emission into the stratosphere occurs. This use of the global climate models is an effective test of their skill in predicting global and regional climate over the period of seasons to a couple of years after a major eruption, as real-world data can be used to directly compare with the model forecasts. This is a valuable necessary (but not sufficient) test of the skill of global climate models.

    The model simulation of distant past climates is much more difficult, since observational verification of skill must depend on proxy data. As a result, temporal and spatial resolution is coarse, and only the larger climate perturbations can be resolved (not tenths of a degree in a global average temperature, for example). Moreover, skill of these models with respect to proxy data often occurs primarily due to the imposition of the different topography of earlier times as the bottom boundary condition. With respect to the last glacial maximum, for example, the insertion of continental ice sheets that are thousands of meters high over vast areas directly alters the wind circulations in their vicinity, such that finding proxies of cold-climate vegetation on their boundaries is expected even without a model simulation.

    The use of the global models, when there are major volcanic eruptions and for past climates, is a worthwhile scientific endeavor. However, it does not indicate if the models necessarily have skill with respect to predicting climate decades from now associated with changes in atmospheric greenhouse gas concentrations, land use/land cover change and other human and natural climate forcings.

    5. “Multiple models agree on large scales, which is implicitly or explicitly interpreted as increasing our confidence” and “Projections from newer models are consistent with older ones (e.g. for temperature patterns and trends), indicating a certain robustness”.

    Model to model comparisons, while interesting and necessary, are no substitute for comparisons with real world data. The models themselves are actually quite similar to each other in terms of their dynamical core (i.e. the pressure gradient force, advection) and their parameterizations of the physics. They are not independent tests of skill, as they are themselves hypotheses (expressed in a mathematical set of numerical code).

    Finally, the Knutti 2008 list is remarkably silent on what should be the most important test of the multi-decadal global climate models. This test is

    What is their quantitative skill at predicting climate variations and change on short (e.g. days), medium (e.g. seasons), and long (e.g. multi-decadal) time scales?

    Until this test is completed (the “seamless climate prediction”), policymakers should not have confidence in their forecasts (projections) decades into the future.

    For further relevant posts on this subject see

    How Independent Are Climate Models?

    Q&A “On GCMs, Weather, and Climate”

    Dissecting a Real Climate Text by Hendrik Tennekes

    Guest Weblog By Gerbrand Komen

    Comments On The Article By Palmer et al. 2008 “Toward Seamless Prediction: Calibration of Climate Change Projections Using Seasonal Forecasts”

    Comments Off on Comments On Judy Curry’s Post “The Culture Of Building Confidence In Climate Models”

    Filed under Climate Models

    Guest Post “Draining Away The Earth’s Coolant” by Tony Mount

    Today, we are fortunate to have a guest post by Anthony Blair Mount. It is titled

    Draining Away The Earth’s Coolant

    Introduction.  Local Tasmanian coastlines show increased erosion which seems to confirm the sea level has risen slightly. The recent arrival of fish from warmer climes indicates that it has also warmed. Thermal expansion probably accounts for most of the rise in sea levels but there could be some contributory agents as discussed below.

    Catchment Evaporation. When a year’s rain falls on an undisturbed dense forest catchment, about one quarter wets the canopy and is then evaporated. About half is used by surface evaporation plus transpiration through the vegetation from the soil moisture store. The latent heat required to evaporate these three quarters of the local annual rainfall helps cool both the canopy and the land it shades.

    Catchment Run-off.  The other quarter of annual rainfall runs off and is exported from the local catchment via streams, rivers, ponds and lakes to the sea, and some of it evaporates on the way.

    Dense Forest Conversion.  On land cleared of dense forest and then grassed or cropped, all the rain reaches the ground.  Surface evaporation and transpiration from regenerating plants together still use about half of it, so the other half runs off. In this way the catchment’s evaporative cooling is reduced by a third while run-off is doubled.  The accompanying removal of both tree-top cooling and canopy shade aggravates this reduction in catchment evaporative cooling.  The extra run-off only replaces what is already evaporating from existing water surfaces, so it cannot add to evaporative cooling, but it may contribute to sea level rise.
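The round-number water balance described above can be checked with a few lines of arithmetic (a sketch using the post's illustrative fractions, not measured data):

```python
# Sketch of the catchment water-balance arithmetic described above,
# using the post's round numbers (fractions of one year's rainfall).

rainfall = 1.0  # one year's rain, normalized

# Undisturbed dense forest: 1/4 intercepted by the canopy and evaporated,
# 1/2 used by surface evaporation plus transpiration, 1/4 runs off.
forest_evap = 0.25 + 0.50            # 3/4 of rainfall evaporated locally
forest_runoff = rainfall - forest_evap  # 1/4 runs off

# After clearing: evaporation + transpiration still use about half;
# the other half runs off.
cleared_evap = 0.50
cleared_runoff = rainfall - cleared_evap  # 1/2 runs off

cooling_reduction = (forest_evap - cleared_evap) / forest_evap
runoff_multiplier = cleared_runoff / forest_runoff

print(f"Evaporative cooling reduced by {cooling_reduction:.0%}")  # a third
print(f"Run-off multiplied by {runoff_multiplier:.1f}x")          # doubled
```

These two lines of output reproduce the post's "reduced by a third" and "doubled" figures directly from the stated fractions.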

    Open Forest Conversion.  Real but lesser reductions in local cooling and increases in run-off also occur when shorter and less dense forests in lower rainfall areas are cleared and cropped or grassed.

    Sealed and Drained Lands. Where natural vegetation is replaced by roads, roofs and runways that are sealed and drained, local cooling by transpiration and evaporation from a soil store are much reduced and most of the drainage water is usually exported to existing water surfaces – including the sea.  These reductions in local cooling are additional to the surface warming caused by power stations, transport, urban areas, air conditioners, etc.

    CO2 Increases. As surface evaporation plus plant transpiration hardly change with conversion of forests to low vegetation (once established), CO2 absorption may behave similarly. But ploughing aerates the soil, which produces extra CO2, and removes nearby plants; and all sealed and drained lands have relatively low CO2 absorption. However, glasshouse crops seem happy to absorb 1000 ppm CO2, so it is likely that this extra local CO2, and all other extra CO2, is quickly absorbed and increases plant growth. Whenever global temperatures rise, all land CO2 plant sinks should gain from all water degassing sources.

    Summary.  Forest clearing, and land sealing and draining, by humans all decrease local land evaporative cooling and increase local run-off.  It is probable that, at the world level, these two local effects of human land management make some contribution to both land warming and sea level rise.

    Bio.  Born in Canada, schooled in England, University at Adelaide, Canberra and Hobart (MSc). Started work at Forestry Tasmania’s Maydena Research Station Jan 1957. Invited to address the Tall Timbers Fire Ecology Conference at Tallahassee in 1969 and still active in that field (see the two 2009 broadsheets “Tasmania’s Ancient Bushfire Heritage” and “Managing Tasmania’s Fire Environment”). Lectured at Melbourne Uni Forestry School.

     Wrote the Australian Forestry Council’s booklet “Australian Bushfire Research” (1987). Published “The Forest Green and Other Poems” in 2000.

    Currently attends the University of the Third Age and orienteers regularly.

     Credentials re posted topic.

    Tony Mount (79) is a long-retired forest fire ecology researcher and fire manager. He is the author of the Soil Dryness Index (SDI) – see Mount, A.B. (1972) The derivation and testing of a soil dryness index using run-off data. Tasmanian Forestry Commission, Bull. 4 (points & Fahrenheit) and Mount, A.B. (1980) Estimation of evaporative losses from forests; a proven (=tested) simple model with wide applications. Institution of Engineers, Australia, Hydrology and Water Resources Symposium (mm & Centigrade).

     The SDI has been used in Tasmania for some 40 years for fire management and flood forecasting purposes and for fire management in Western Australia and elsewhere.

    The SDI was re-calibrated by Melbourne Metropolitan Board of Works to accurately model Melbourne’s run of the river water supply – see Langford, K.J., Duncan, H.P. and Heeps, D.P. (1977) Evaluation and use of a water balance model. Institution of Engineers, Australia, Hydrology Symposium.


    Comments Off on Guest Post “Draining Away The Earth’s Coolant” by Tony Mount

    Filed under Climate Change Forcings & Feedbacks, Guest Weblogs

    New Paper “Distribution Of Landscape Types In The Global Historical Climatology Network” By Montandon Et Al 2010

    In 2007, I taught the class Human Impacts on Weather and Climate at the University of Colorado in Boulder. One of my students, Laure Montandon, completed a term paper

    How representative of a global surface temperature average is the Global Historical Climatology Network (GHCN)?

    Over the following several years, in collaboration with Souleymane Fall and Dev Niyogi of Purdue University, she led the development of this unfunded research into a peer reviewed paper. This paper is now available:

    Montandon, L.M., S. Fall, R.A. Pielke Sr., and D. Niyogi, 2010: Distribution of landscape types in the Global Historical Climatology Network. Earth Interactions, in press, doi: 10.1175/2010EI371.1

    The abstract reads

    The Global Historical Climate Network version 2 (GHCNv.2) surface temperature dataset is widely used for reconstructions such as the global average surface temperature (GAST) anomaly. Because land use and land cover (LULC) affect temperatures it is important to examine the spatial distribution and the LULC representation of GHCNv.2 stations. Here, nightlight imagery, two LULC datasets, and a population and cropland historical reconstruction are used to estimate the present and historical worldwide occurrence of LULC types and the number of GHCNv.2 stations within each. Results show that the GHCNv.2 station locations are biased towards urban and cropland (>50% stations vs. 18.4% of the world’s land) and past century reclaimed cropland areas (35% stations vs. 3.4% land). However, widely occurring LULC such as open shrubland, bare, snow/ice, and evergreen broadleaf forests are under-represented (14% stations vs. 48.1% land) as well as non-urban areas that have remained uncultivated in the past century (14.2% stations versus 43.2% land). Results from the temperature trends over the different landscapes confirm that the temperature trends are different for different LULC and that the GHCNv.2 stations network might be missing on long term larger positive trends. This opens the possibility that the temperature increases of the Earth’s land surface in the last century would be higher than what the GHCNv.2 based GAST analyses reports.
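One way to read the figures quoted in the abstract is as a representation ratio: a landscape class's share of GHCNv.2 stations divided by that class's share of the world's land, with a ratio above 1 indicating over-representation. This sketch uses only the percentages quoted above (the class labels are shorthand, not the paper's terminology):

```python
# Representation ratio for the landscape classes quoted in the abstract:
# (fraction of GHCNv.2 stations) / (fraction of the world's land).
# Ratio > 1 means the class is over-represented in the station network.
classes = {
    "urban + cropland":              (0.50, 0.184),
    "past-century reclaimed cropland": (0.35, 0.034),
    "shrubland/bare/ice/broadleaf":  (0.14, 0.481),
    "uncultivated non-urban":        (0.142, 0.432),
}

for name, (station_frac, land_frac) in classes.items():
    ratio = station_frac / land_frac
    status = "over" if ratio > 1.0 else "under"
    print(f"{name}: ratio {ratio:.1f} ({status}-represented)")
```

The first two classes come out heavily over-represented and the last two under-represented, which is the sampling bias the paper documents.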

    Excerpts from the conclusion read

    Using over 5000 stations and different LULC datasets, a synthesis regarding the representation of the current GHCN-monthly temperature dataset (Version 2, GHCNv.2 hereafter) was successfully conducted…. Our results confirm the findings from Hansen et al. (2001) that the GHCNv.2 metadata is outdated.


    The scientific level of understanding on how LULC affect climate is low and the scientific community should focus on better understanding the related impacts, improving the global distribution of temperature stations network, and updating the descriptions of the LULC and other metadata for each station (including photographic documentation) in order to address this issue. The analysis presented in this paper should also be updated with more recent temperature datasets and land use metadata. The trend analysis exercise was undertaken to gain a perspective on the potential impact of the land cover distribution on the surface temperatures, and should be repeated in a more formal manner with historical land use change data, more detailed metadata and up-to-date datasets in a follow up study.

    Our finding, based on the GHCNv.2, that “temperature increases of the Earth’s land surface in the last century would be higher than what the GHCNv.2 based GAST analyses reports” results in an even greater divergence in long term trends between surface and lower tropospheric temperatures than we reported on in

    Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2009: An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841.

    Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2010: Correction to: “An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841″, J. Geophys. Res., 115, D1, doi:10.1029/2009JD013655.

    Our new paper also shows that a set of data (the GHCN) which is used in the construction of a multi-decadal global annual average surface temperature trend has serious quantitative errors. These errors are compounded by biases resulting from poor siting of the observations (often even in rural areas) and a systematic bias in the extrapolation of the data to a grid (i.e. “homogenization”), as we have found for the USHCN and will report on here once the review process of that paper is complete.

    Comments Off on New Paper “Distribution Of Landscape Types In The Global Historical Climatology Network” By Montandon Et Al 2010

    Filed under Climate Change Metrics, Research Papers

    When Is A Model a Good Model?

    I am reading the book The Grand Design by Stephen Hawking and Leonard Mlodinow. While the book involves a non-mathematical discussion of quantum physics and general relativity, among other topics, there is a concise summary on page 51 as to what constitutes a “good model”.

    They write

    A model is a good model if it:

    1. Is elegant
    2. Contains few arbitrary or adjustable elements
    3. Agrees with and explains all existing observations
    4. Makes detailed predictions about future observations that can disprove or falsify the model if they are not borne out.

    With respect to the multi-decadal global climate models, it is clear that they fail these requirements to be a “good model”. As candidly summarized, for example, by Kevin Trenberth in 2007 [an IPCC WG1 author] [highlighting added]

    “…there are no predictions by IPCC at all. And there never have been. The IPCC instead proffers “what if” projections of future climate that correspond to certain emissions scenarios. There are a number of assumptions that go into these emissions scenarios. They are intended to cover a range of possible self consistent “story lines” that then provide decision makers with information about which paths might be more desirable. But they do not consider many things like the recovery of the ozone layer, for instance, or observed trends in forcing agents. There is no estimate, even probabilistically, as to the likelihood of any emissions scenario and no best guess. Even if there were, the projections are based on model results that provide differences of the future climate relative to that today. None of the models used by IPCC are initialized to the observed state and none of the climate states in the models correspond even remotely to the current observed climate. In particular, the state of the oceans, sea ice, and soil moisture has no relationship to the observed state at any recent time in any of the IPCC models. There is neither an El Niño sequence nor any Pacific Decadal Oscillation that replicates the recent past; yet these are critical modes of variability that affect Pacific rim countries and beyond. The Atlantic Multidecadal Oscillation, that may depend on the thermohaline circulation and thus ocean currents in the Atlantic, is not set up to match today’s state, but it is a critical component of the Atlantic hurricanes and it undoubtedly affects forecasts for the next decade from Brazil to Europe. Moreover, the starting climate state in several of the models may depart significantly from the real climate owing to model errors. I postulate that regional climate change is impossible to deal with properly unless the models are initialized.

    The current projection method works to the extent it does because it utilizes differences from one time to another and the main model bias and systematic errors are thereby subtracted out. This assumes linearity. It works for global forced variations, but it can not work for many aspects of climate, especially those related to the water cycle. For instance, if the current state is one of drought then it is unlikely to get drier, but unrealistic model states and model biases can easily violate such constraints and project drier conditions. Of course one can initialize a climate model, but a biased model will immediately drift back to the model climate and the predicted trends will then be wrong. Therefore the problem of overcoming this shortcoming, and facing up to initializing climate models means not only obtaining sufficient reliable observations of all aspects of the climate system, but also overcoming model biases. So this is a major challenge.”

    The obvious answer to the questions posed regarding a “good model” in the Hawking and Mlodinow 2010 book is that the models used in the 2007 IPCC report are not “good models” as they fail all four of the requirements.

    This failure does not mean we should not be concerned about the human addition of greenhouse gases (or other human and natural climate forcings), but it should cause policymakers and funders of climate model researchers to realize that they have been oversold on the scientific rigor of the IPCC models. The funding of model predictions decades into the future using these tools is not money well spent.

    Comments Off on When Is A Model a Good Model?

    Filed under Climate Models, Q & A on Climate Science

    September 2010 Global Temperature Report From the University Of Alabama At Huntsville

    Thanks to Phil Gentry, I have posted below the University of Alabama at Huntsville’s Global Lower Tropospheric Temperature Analysis for September 2010.  What is very interesting in this latest analysis is that almost the entire globe has above average lower tropospheric temperatures. If this persists while we are in a La Niña pattern (when we expect cooling) it will provide strong support for those who expect a long term warming to occur as a result of the accumulation of greenhouse gases in the Earth’s atmosphere. On the other hand, if the temperatures cool to average or below average over large portions of the globe, this would indicate that the climate has a self regulation which mutes temperature excursions.

    The University of Alabama at Huntsville Report

    Vol. 20, No. 5

    For Additional Information: Dr. John Christy, (256) 961-7763  john.christy[at] Dr. Roy Spencer, (256) 961-7960 roy.spencer [at]

    Global Temperature Report: September 2010

    Sept. 2010 was hottest September in 32 years

    Global climate trend since Nov. 16, 1978: +0.14 C per decade

    September temperatures (preliminary): 

    Global composite temp.:  +0.60 C (about 1.08 degrees Fahrenheit) above 20-year average for September.

    Northern Hemisphere: +0.56 C (about 1.01 degrees Fahrenheit) above 20-year average for September.

    Southern Hemisphere: +0.65 C (about 1.17 degrees Fahrenheit) above 20-year average for September.

    Tropics: +0.28 C (about 0.50 degrees Fahrenheit) above 20-year average for September.

    August temperatures (revised):

    Global composite: +0.51 C above 20-year average

    Northern Hemisphere: +0.67 C above 20-year average

    Southern Hemisphere: +0.35 C above 20-year average

    Tropics: +0.36 C above 20-year average 

    All temperature anomalies are based on a 20-year average (1979-1998) for the month reported.
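The Fahrenheit figures in the report are conversions of anomalies, so only the 9/5 scale factor applies; the +32 offset is for absolute temperatures, not temperature differences. A quick check against the September numbers:

```python
# Converting a temperature ANOMALY (a difference) from C to F uses only
# the 9/5 scale factor; the +32 offset applies to absolute temperatures.
def anomaly_c_to_f(delta_c):
    return delta_c * 9.0 / 5.0

september = [("Global", 0.60), ("Northern Hemisphere", 0.56),
             ("Southern Hemisphere", 0.65), ("Tropics", 0.28)]

for label, delta_c in september:
    print(f"{label}: +{delta_c:.2f} C = +{anomaly_c_to_f(delta_c):.2f} F")
```

The output matches the report's parenthetical Fahrenheit values (1.08, 1.01, 1.17, and 0.50 degrees).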

    Notes on data released Oct. 8, 2010:

    September 2010 was the hottest September in the 32-year satellite-based temperature dataset, with a global temperature that was 0.14 C warmer than the previous record in September 1998, according to Dr. John Christy, professor of atmospheric science and director of the Earth System Science Center at The University of Alabama in Huntsville.

    With September setting records, 2010 is moving closer to tying 1998 as the hottest year in the past 32. Through September, the composite global average temperature for 2010 was 0.55 C above the 20-year average. That is just 0.04 C (about 0.07 degrees Fahrenheit) cooler than the January-through-September record set in 1998.

    The record September high was set despite the continued cooling of temperatures in the tropics as an El Niño Pacific Ocean warming event fades away.

    Color maps of local temperature anomalies may soon be available on-line at:

    The processed temperature data is available on-line at:

    As part of an ongoing joint project between UAHuntsville, NOAA and NASA, Christy and Dr. Roy Spencer, a principal research scientist in the ESSC, use data gathered by advanced microwave sounding units on NOAA and NASA satellites to get accurate temperature readings for almost all regions of the Earth. This includes remote desert, ocean and rain forest areas where reliable climate data are not otherwise available.

    The satellite-based instruments measure the temperature of the atmosphere from the surface up to an altitude of about eight kilometers above sea level. Once the monthly temperature data is collected and processed, it is placed in a “public” computer file for immediate access by atmospheric scientists in the U.S. and abroad.


    Neither Christy nor Spencer receives any research support or funding from oil, coal or industrial companies or organizations, or from any private or special interest groups. All of their climate research funding comes from federal and state grants or contracts.

    Comments Off on September 2010 Global Temperature Report From the University Of Alabama At Huntsville

    Filed under Climate Change Metrics

    On the Vertical Mixing Of Heat And Long Term Surface Temperature Trends

    There is a new paper [thanks to Jos de Laat for alerting us to it!]

    Somnath Baidya Roy and Justin J. Traiteur, 2010: Impacts of wind farms on surface air temperatures. PNAS.

    with the abstract

    “Utility-scale large wind farms are rapidly growing in size and numbers all over the world. Data from a meteorological field campaign show that such wind farms can significantly affect near-surface air temperatures. These effects result from enhanced vertical mixing due to turbulence generated by wind turbine rotors. The impacts of wind farms on local weather can be minimized by changing rotor design or by siting wind farms in regions with high natural turbulence. Using a 25-y-long climate dataset, we identified such regions in the world. Many of these regions, such as the Midwest and Great Plains in the United States, are also rich in wind resources, making them ideal candidates for low-impact wind farms.”

    While their conclusions are important with respect to how wind turbine farms can alter surface air temperatures, their results have an even broader importance.

    An excerpt from their paper is

    Data from the field campaign show that near-surface air temperatures downwind of the wind farm are higher than upwind regions during night and early morning hours, whereas the reverse holds true for the rest of the day (Fig. 2A). Thus, this wind farm has a warming effect during the night and a cooling effect during the day. The observed temperature signal is statistically significant for most of the day according to the results of a Mann–Whitney Rank Sum Test

    As we indicated in our paper

    Pielke Sr., R.A., and T. Matsui, 2005: Should light wind and windy nights have the same temperature trends at individual levels even if the boundary layer averaged heat content change is the same? Geophys. Res. Letts., 32, No. 21, L21813, 10.1029/2005GL024407.

     and reported by my colleague Dick McNider

    In the Dark of the Night – the Problem with the Diurnal Temperature Range and Climate Change by Richard T. McNider

    long term alterations in the vertical distribution of heat can result in long term temperature trends which are a function of height, even IF the boundary layer averaged heat content change is zero. The strength of the winds in the layer of the atmosphere near the surface has been proposed as one mechanism that influences the vertical distribution of heat in this layer.
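This point can be illustrated with a deliberately simple, hypothetical two-layer example (the temperature values are invented for illustration): mixing a stable nocturnal boundary layer leaves the layer-average temperature unchanged while warming the surface level.

```python
# Hypothetical two-layer nocturnal boundary layer: cool air near the
# surface lies under warmer air aloft (a stable inversion). Turbulent
# mixing (e.g. from wind or turbine rotors) homogenizes the two layers.
surface_t = 10.0   # deg C at the surface level (illustrative)
aloft_t = 14.0     # deg C in the layer above (illustrative)

mean_before = (surface_t + aloft_t) / 2.0

# Full mixing of two equal-mass layers: both end up at the mean.
mixed_t = mean_before
mean_after = mixed_t

print(f"Layer-average change: {mean_after - mean_before:+.1f} C")  # zero
print(f"Surface-level change: {mixed_t - surface_t:+.1f} C")       # warming
```

The layer-average heat content is unchanged, yet a surface thermometer records a warming, which is exactly why a trend measured at one height need not represent the trend of the layer as a whole.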

    The figure below, from the Roy and Traiteur 2010 paper, illustrates this effect very clearly


    Figure 2a: Observed near-surface air-temperature patterns in and near the San Gorgonio wind farm during the field campaign.

    Without the wind turbines, the temperature plots would presumably be identical.

    This study indicates an error in the findings of Parker in his paper

    Parker, D.E., 2004: Large-scale warming is not urban. Nature, 432, 290, doi:10.1038/432290a.

    Parker wrote in his abstract

    Controversy has persisted over the influence of urban warming on reported large-scale surface-air temperature trends. Urban heat islands occur mainly at night and are reduced in windy conditions. Here we show that, globally, temperatures over land have risen as much on windy nights as on calm nights, indicating that the observed overall warming is not a consequence of urban development.

    Since the figure presented above does show a dependence of temperature anomalies on wind speed (due to the wind turbines in this case), this raises serious issues with the conclusion in Parker (2004) that “temperatures over land have risen as much on windy nights as on calm nights”.

    We look forward to additional important research studies by Somnath Baidya Roy, and Justin J. Traiteur!

    Comments Off on On the Vertical Mixing Of Heat And Long Term Surface Temperature Trends

    Filed under Climate Change Metrics, Research Papers