Category Archives: Guest Weblogs

Announcement Of Second Edition “The Simple Science of Flight: From Insects to Jumbo Jets (Revised and Expanded Edition)” by Henk Tennekes

Henk Tennekes has published a second edition of his book, and I am pleased to announce it on my weblog. Following the announcement, Henk has also provided a review he has written of another book.

“The Simple Science of Flight: From Insects to Jumbo Jets (Revised and Expanded Edition)” by Henk Tennekes

From the smallest gnat to the largest aircraft, all things that fly obey the same aerodynamic principles. In The Simple Science of Flight, Henk Tennekes investigates just how machines and creatures fly: what size wings they need, how much energy is required for their journeys, how they cross deserts and oceans, how they take off, climb, and soar. Fascinated by the similarities between nature and technology, Tennekes offers an introduction to flight that teaches by association. Swans and Boeings differ in numerous ways, but they follow the same aerodynamic principles. Biological evolution and its technical counterpart exhibit exciting parallels. What makes some airplanes successful and others misfits? Why does the Boeing 747 endure but the Concorde now seem a fluke? Tennekes explains the science of flight through comparisons, examples, equations, and anecdotes.

The new edition of this popular book has been thoroughly revised and much expanded. Highlights of the new material include a description of the incredible performance of bar-tailed godwits (7,000 miles nonstop from Alaska to New Zealand), an analysis of the convergence of modern jetliners (from both Boeing and Airbus), a discussion of the metabolization of energy featuring Lance Armstrong, a novel treatment of the aerodynamics of drag and trailing vortices, and an emphasis throughout on evolution, in nature and in engineering. Tennekes draws on new evidence on bird migration, new wind-tunnel studies, and data on new airliners. And his analysis of the relative efficiency of planes, trains, and automobiles is newly relevant. (On a cost-per-seat scale, a 747 is more efficient than a passenger car.)

About the Author
Henk Tennekes is Director of Research Emeritus at the Royal Netherlands Meteorological Institute, Emeritus Professor of Meteorology at the Free University (VU) in Amsterdam, and Emeritus Professor of Aerospace Engineering at Pennsylvania State University. He is the coauthor of A First Course in Turbulence (MIT Press, 1972).

“This was a great little book when it came out in its original edition; this new version is even better, as it contains both Henk’s homage to his favorite flying machine (Boeing 747) and explanations based on some of the unexpected results of recent experiments with bird flight (including a phenomenal gliding jackdaw). Read it, then watch the birds and planes, and then dip into it again and again.”
-Vaclav Smil, University of Manitoba, and author of Global Catastrophes and Trends

“One gets a fine sense of how so much of aircraft design, whether by humans or by evolution, depends on size and mission. This new version of The Simple Science of Flight broadens the enlightenment that so many of us found appealing in its predecessor. It yields even more of that satisfying ‘now I understand what’s happening’ rather than the usual ‘how brilliant those designers must be.’ And I know of no book that derives such an awesome wealth of insight from such simple quantification. Beyond being informative, it provides pleasant reading for anyone who travels by air, watches animals fly, or dreams of learning to fly.”
-Steven Vogel, James B. Duke Professor, Emeritus, Duke University

Review By Henk Tennekes

Alexander’s Jumbo Jets

Alexander, David E. 2009. Why Don’t Jumbo Jets Flap Their Wings? Rutgers University Press. ISBN 978-0-8135-4479-3, hardback, 278 pp, figures. Price 28 euro.

David Alexander is the author of Nature’s Flyers (2002), a deservedly popular introductory biology text on flying insects, bats, and birds. Rutgers University Press recently released Alexander’s second book, Why Don’t Jumbo Jets Flap Their Wings? The new book is written for the general public, not primarily for professional biologists and engineers. “Science writing at its best,” says professor Sankar Chatterjee of Texas Tech, and I agree. This book is intended for birdwatchers who, like me, are fascinated by everything that flies, natural or technical.

In ten easygoing and enjoyable chapters, focused on the differences between flying animals and airplanes, Alexander deals successively with evolution, lift, power, maneuverability, the need for tail surfaces, flight instruments, soaring, hovering, aerial combat, and ornithopters. One major point of divergence: muscles excel at back-and-forth motion such as wing flapping, whereas aircraft engines rely on rotary motion. As far as maneuverability is concerned, the sophisticated interaction between nervous system and flying apparatus that insects, birds, and bats are capable of is a source of envy for pilots and aircraft designers. Bats have no need for tails because their nervous systems are so well integrated. The chapter on predation and aerial combat is a real treat. I knew of course that Eleonora’s falcon feeds on migrating passerines during its breeding season, but I didn’t know that the greater noctule bat does so too, taking advantage of the fact that most passerines are nocturnal migrants. And I was thrilled to learn that some insect-hawking bats “use their wings as tennis rackets, deftly tapping an insect to deflect it into their mouths.” Alexander deals at length with ornithopters. Considering the title of his book, he has to. Flapping wings are not the way to go when size and weight become too large. A jumbo jet does not flap its wings because the hinges, engines, and linkage systems needed to power it would be far too heavy. Also, flapping flight is like a roller-coaster ride: the upstroke of the wings delivers little or no lift, so the body falls until lifted again by the downstroke. All passengers riding a flapping jumbo jet would be airsick for the entire ride. On the other hand, flapping is the preferred solution when sizes are small. Miniature rotary engines cannot compete in that technological niche.

Alexander compares the slow evolution of flight in Nature with the rapid evolution of flight in human technology. “Natural selection works on a time scale of hundreds of thousands or even millions of years. When a one-in-a-million beneficial change does occur, it tends to spread through the species. Changes that might take hundreds of thousands of years of animal evolution can take place in less than a decade of technological development.” He recognizes other differences, too. Animals co-evolve with their environment; human technology often changes the environment. Wheels are unsuitable in rough terrain; the worldwide success of automobiles is due in no small part to the concurrent evolution of highway systems. I feel Alexander tends to underestimate how often technological breakthroughs resemble random genetic mutations in Nature, which, as he correctly states, are “almost always detrimental.” Airplane encyclopedias are filled with planes that can fairly be labeled evolutionary misfits, designs that did not live up to their designers’ dreams and disappeared within ten or twenty years. Some, like Howard Hughes’ Spruce Goose, made just one brief hop. Others, like the supersonic Concorde, are evolutionary mutants, products of the overheated preoccupations of their designers and sponsors. Even the ultimate aeronautical dream, human-powered flight, lovingly described in Alexander’s book, did not last long. Planes powered by human athletes are unfit for everyday use; they are in fact extinct now.

In the epilogue, Alexander returns to the central theme of his book: how flying animals differ from flying machines. “In the end, what truly sets birds apart from airplanes is versatility versus efficiency. Engineers design airplanes to carry out particular tasks, so airplanes tend to be quite specialized. A Boeing 747 can haul huge loads of passengers over enormous distances, but that is basically all it can do. Animals cannot afford to be so specialized.” I agree, but not without some reservations. Albatrosses are specialized in so-called dynamic soaring in wide-open environments with a uniform wind regime, bar-tailed godwits perform 11,000 km nonstop flights across the Pacific Ocean but have a barely adequate immune system, bats use very sophisticated echolocation equipment that is useless in daylight because insects can easily take evasive action, penguins use their wings exclusively for underwater swimming, and so on. And some kinds of airplanes, like the Piper Cub and the Cessna 172, are supreme generalists, much like sparrows and starlings. In fact, the early success of the Piper Cub was based on its usefulness for the US Army: it could land and take off almost anywhere, rough terrain or not. The task of evaluating the differences between biological evolution and its technological counterpart is far from being finished, but in Jumbo Jets Alexander takes a giant step in the right direction.

Henk Tennekes



Filed under Books, Guest Weblogs

Comments By Mike Smith of My Weblog “Debate Question For Professor Steve Schneider and Colleagues”

In response to my weblog Debate Question For Professor Steve Schneider and Colleagues, Mike Smith and I have exchanged e-mails on these three hypotheses. With Mike’s permission, I have extracted the text from our e-mails and reproduced it with minor edits below.

Mike Smith is CEO of WeatherData Services, Inc., an AccuWeather Company. Smith is a Fellow of the American Meteorological Society and a Certified Consulting Meteorologist. He is a recipient of the American Meteorological Society’s Award for Outstanding Contributions to Applied Meteorology, and WeatherData has received the Society’s Award for Outstanding Services to Meteorology by a Corporation.

Mike’s comments are in regular text, and mine are italicized.

Mike Smith’s first e-mail

Hi Roger,

I have been reading the exchange regarding the SF articles.  There is something I would like to circle back on.  If I am reading you correctly, you say that only one of these is true:

1. The human influence is minimal and natural variations dominate climate variations on all time scales;

2. While natural variations are important, the human influence is significant and involves a diverse range of first-order climate forcings (including, but not limited to the human input of CO2);

3. The human influence is dominated by the emissions into the atmosphere of greenhouse gases, particularly carbon dioxide.

I do agree with you that, 30 years from now, when we know much more, likely only one of the three contentions will be the “most correct” answer.  But, I don’t believe we are at that point.

Given our current knowledge, why can’t the most likely answer be, “Somewhere between 1 and 2”?  I believe the current state-of-the-science is telling us #3 is not correct.  I agree with you that there are many human forcings that influence climate, but it is not clear to me that the Wichita heat island (which I have informally documented) or the Reno heat island (see Anthony Watts’ website) have much influence on world climate (i.e., would the climate in Rome or Honolulu be different if the RNO and ICT heat islands did not exist?).  Does the deforestation in Brazil influence the climate in South Africa?  If the answer is “no”, then on a planetary scale #1 is the correct answer.

My best educated guess is that the most correct answer is about 70% #1 and 30% #2.  I realize you believe this answer would be incorrect.  Please tell me where you think I am off base.  If you wish to publish this question and your answer, that would be fine.  I believe we gain with open debate.

Thanks and best wishes,


Roger A. Pielke Sr. Reply and Mike Smith’s further response

Hi Mike

Thank you for your feedback. I agree that the three hypotheses need to be addressed with respect to scale. Our research (and that of others) indicates that there are well defined effects of land use/land cover change, the human input of aerosols (including both changes in atmospheric concentrations and deposition), and biogeochemical effects due to added trace gases including CO2 on local and regional scales. From your e-mail, it seems we both agree on this. If true, the first hypothesis is rejected for these spatial scales (as is the third hypothesis).

Mike Smith Response – I agree with this.

Roger A. Pielke Sr’s Comment

With respect to the global scale, the proper metrics include changes in atmospheric concentrations, alterations in circulation patterns, etc. There is no question that added CO2 is from human activities….

Mike Smith Response - I agree

Roger A. Pielke Sr’s Comment

……and this has altered the global average concentration of this gas.

Mike Smith Response

I agree, but I’m not sure we fully know the extent.  There is some evidence for natural variation in CO2 concentrations (i.e., do changes in ocean heat content significantly vary their contribution to atmospheric CO2 concentration?).

Roger A. Pielke Sr’s Comment

In terms of effects on circulations, there are now a number of papers that illustrate with models that there are changes due to several of the human climate forcings listed above.

Mike Smith Response

Yes, but are the models sufficiently robust to make this determination at this time?

Roger A. Pielke Sr’s Comment

I have concluded the first hypothesis is also rejected on the global scale, but agree this needs further investigation (models by themselves, of course, cannot be used to test hypotheses).

Mike Smith Response

I see your point and you may well be proven correct.  However, we seem to be in the early stages of testing the ‘natural variations’ hypothesis.  I am referring to the ‘blank sun.’  The very low levels of sunspot activity of the last two years, which seem to be continuing and which I would call a “natural” variation, may give us a chance to sort out natural from manmade forcings.  The IPCC has (I’m paraphrasing) rejected the hypothesis that variations in the sun’s output have a significant effect on earth’s climate.
The falling temperature trend since 1998 (and, at best, lack of warming in the oceans, about which you have written extensively) that seems to parallel the fall in solar output will give us a chance to test several of these hypotheses, especially in view of the record (for modern times) levels of CO2 concentration.  We seem to be getting close to the point where the IPCC’s hypothesis (CO2 is the dominant forcing) is rejected if temperatures and ocean heat content continue to fall while CO2 levels continue to rise.

Other credentialed climate scientists are invited to e-mail me their comments also, and, if appropriate, they can also be posted as a guest weblog.


Filed under Guest Weblogs

Have Changes In Ocean Heat Falsified The Global Warming Hypothesis? – A Guest Weblog by William DiPuccio

Climate Science encourages guest weblogs from all perspectives on the climate science issue. Following is a guest weblog by William DiPuccio, who, although not a published climate scientist, has provided a view on the global warming discussion that is worth reading.

Guest Weblog By William DiPuccio

The Global Warming Hypothesis

Albert Einstein once said, “No amount of experimentation can ever prove me right; a single experiment can prove me wrong.”  Einstein’s words express a foundational principle of science articulated by the philosopher of science Karl Popper:  falsifiability.  In order to verify a hypothesis, there must be a test by which it can be proved false.  A thousand observations may appear to verify a hypothesis, but one critical failure could result in its demise.  The history of science is littered with such examples.

A hypothesis that cannot be falsified by empirical observations is not science.  The current hypothesis on anthropogenic global warming (AGW), presented by the U.N.’s Intergovernmental Panel on Climate Change (IPCC), is no exception to this principle.  Indeed, it is the job of scientists to expose the weaknesses of this hypothesis as it undergoes peer review.  This paper will examine one key criterion for falsification: ocean heat.

Ocean heat plays a crucial role in the AGW hypothesis, which maintains that climate change is dominated by human-added, well-mixed greenhouse gases (GHG).  IR radiation that is absorbed and re-emitted by these gases, particularly CO2, is said to be amplified by positive feedback from clouds and water vapor.  This process results in a gradual accumulation of heat throughout the climate system, which includes the atmosphere, cryosphere, biosphere, lithosphere, and, most importantly, the hydrosphere.  The increase in retained heat is projected to result in rising atmospheric temperatures of 2-6°C by the year 2100.

In 2005 James Hansen, Josh Willis, and Gavin Schmidt of NASA coauthored a significant article (in collaboration with twelve other scientists) on the “Earth’s Energy Imbalance:  Confirmation and Implications” (Science, 3 June 2005, 1431-35).  This paper affirmed the critical role of ocean heat as a robust metric for AGW.  “Confirmation of the planetary energy imbalance,” they maintained, “can be obtained by measuring the heat content of the ocean, which must be the principal reservoir for excess energy” (1432).

Monotonic Heating.  Since the level of CO2 and other well-mixed GHG is on the rise, the overall accumulation of heat in the climate system, measured by ocean heat, should be fairly steady and uninterrupted (monotonic) according to IPCC models, provided there are no major volcanic eruptions.  According to the hypothesis, major feedbacks in the climate system are positive (i.e., amplifying), so there is no mechanism in this hypothesis that would cause a suspension or reversal of overall heat accumulation.  Indeed, any suspension or reversal would suggest that the heating caused by GHG can be overwhelmed by other human or natural processes in the climate system. 

A reversal of sufficient magnitude could conceivably reset the counter back to “zero” (i.e., the initial point from which a current set of measurements began).  If this were to take place, the process of heat accumulation would have to start again.  In either case, a suspension or reversal of heat accumulation (excepting major volcanic eruptions) would mean that we are dealing with a form of cyclical rather than monotonic heating. 

Most scientists who oppose the conclusions of the IPCC have been outspoken in their advocacy of cyclical heating and cooling caused primarily by natural processes, and modified by long-term human climate forcings such as land use change and aerosols.  These natural forcings include ocean cycles (PDO, AMO), solar cycles (sunspots, total irradiance), and more speculative causes such as orbital oscillations, and cosmic rays.

Temperature is not Heat! 

Despite a consensus among scientists on the use of ocean heat as a robust metric for AGW, near-surface air temperature (referred to as “surface temperature”) is generally employed to gauge global warming.  The media and popular culture have certainly equated the two.  But this equation is not simply the product of a naïve misunderstanding.  NASA’s Goddard Institute for Space Studies (GISS), directed by James Hansen, and the British Hadley Centre for Climate Change, have consistently promoted the use of surface temperature as a metric for global warming.  The highly publicized, monthly global surface temperature has become an icon of the AGW projections made by the IPCC. 

However, use of surface air temperature as a metric has weak scientific support, except, perhaps, on a multi-decadal or century time-scale.  Surface temperature may not register the accumulation of heat in the climate system from year to year.  Heat sinks with high specific heat (like water and ice) can absorb (and radiate) vast amounts of heat.  Consequently the oceans and the cryosphere can significantly offset atmospheric temperature by heat transfer, creating long time lags in surface temperature response.  Moreover, heat is continually being transported in the atmosphere between the poles and the equator.  This reshuffling can create fluctuations in average global temperature caused, in part, by changes in cloud cover and water vapor, both of which can alter the earth’s radiative balance.

Hype generated by scientists and institutions over short-term changes in global temperature (up or down) has diverted us from the real issue:  heat accumulation.  Heat is not the same as temperature.  Two liters of boiling water contain twice as much heat as one liter of boiling water even though the water in both vessels is the same temperature.  The larger container has more thermal mass, which means it takes longer to heat and cool.

Temperature measures the average kinetic energy of molecular motion at a specific point.  But it does not measure the total kinetic energy of all the molecules in a substance.  In the example above, there is twice as much heat in 2 liters of boiling water because there is twice as much kinetic energy.  On average, the molecules in both vessels are moving at the same speed, but the larger container has twice as many molecules.

Temperature may vary from point to point in a moving fluid such as the atmosphere or ocean, but its heat remains constant so long as energy is not added or removed from the system.  Consequently, heat, not temperature, is the only sound metric for monitoring the total energy of the climate system.  Heat is a function of mass, specific heat, and temperature change, and is measured in Joules (or calories):

Q = mc∆T

Where Q is heat (Joules)

m is mass (kg)

c is the specific heat capacity of the substance (J/kg/°C)

∆T is the change in temperature (°C)
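The boiling-water example above can be worked out from Q = mc∆T in a few lines of Python (a sketch; the specific heat value is the standard ~4186 J/kg/°C for liquid water, and the function name is mine):

```python
# Heat content via Q = m * c * dT: heat (in Joules) depends on mass
# as well as on temperature change.

C_WATER = 4186.0  # specific heat capacity of water, J/kg/degC (approx.)

def heat_added(mass_kg: float, delta_t: float, c: float = C_WATER) -> float:
    """Heat (Joules) needed to raise `mass_kg` of a substance by `delta_t` degC."""
    return mass_kg * c * delta_t

# Bring 1 L (~1 kg) and 2 L (~2 kg) of water from 20 degC to the boil (100 degC).
q_one = heat_added(1.0, 80.0)
q_two = heat_added(2.0, 80.0)

print(f"1 L: {q_one:.0f} J, 2 L: {q_two:.0f} J, ratio: {q_two / q_one:.1f}")
```

Both vessels end at the same temperature, yet the larger one holds exactly twice the heat, which is the point of the distinction drawn above.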

The Thermal Mass of the Oceans

Water is a more appropriate metric for heat accumulation than air because of its ability to store heat.  For this reason, it is also a more robust metric for assessing global warming and cooling.  Seawater has a much higher density than air (1030 kg/m3 vs. 1.20 kg/m3 at 20°C), and a higher specific heat (4.18 kJ/kg/°C vs. 1.01 kJ/kg/°C for air at 23°C and 41% humidity).  One kilogram of water can retain 4.18x the heat of an equivalent mass of air.  This amounts to a thermal mass which is nearly 3558x that of air per unit volume.

For any given area on the ocean’s surface, the upper 2.6m of water has the same heat capacity as the entire atmosphere above it!  Considering the enormous depth and global surface area of the ocean (70.5%), it is apparent that its heat capacity is greater than the atmosphere by many orders of magnitude.  Consequently, as Hansen et al. have concluded, the ocean must be regarded as the main reservoir of atmospheric heat and the primary driver of climate fluctuations.
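These two comparisons can be reproduced with round numbers (a sketch using the densities and specific heats quoted above; the equivalent-depth figure depends on the assumed air-column mass, so it comes out near 2.4 m here rather than exactly the 2.6 m quoted):

```python
# Rough check of the ocean/atmosphere thermal-mass comparison.
# Densities and specific heats are the approximate values quoted in the text.

RHO_SEAWATER = 1030.0   # kg/m^3
RHO_AIR = 1.20          # kg/m^3 at 20 degC
C_SEAWATER = 4180.0     # J/kg/degC
C_AIR = 1010.0          # J/kg/degC

# Heat capacity per unit volume (J per m^3 per degC).
cap_water = RHO_SEAWATER * C_SEAWATER
cap_air = RHO_AIR * C_AIR
print(f"Per-volume thermal mass ratio: {cap_water / cap_air:.0f}x")  # ~3550x

# Depth of seawater with the same heat capacity as the whole air column.
# Column mass of the atmosphere ~ surface pressure / g.
air_column_mass = 101_325.0 / 9.81           # kg per m^2 of surface
cap_air_column = air_column_mass * C_AIR     # J/degC per m^2
depth = cap_air_column / cap_water           # meters of seawater
print(f"Equivalent seawater depth: {depth:.1f} m")  # ~2.4 m with these round numbers
```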

 Heat accumulating in the climate system can be determined by profiling ocean temperature, and from precise measurements of sea surface height as they relate to thermal expansion and contraction of ocean water.  These measurements are now possible on a global scale with the ARGO buoy array and from satellite measurements of ocean surface heights.  ARGO consists of a world-wide network of over 3000 free-drifting platforms that measure temperature and salinity in the upper 2000m of ocean.  The robotic floats rise to the surface every 10 days and transmit data to a satellite which also determines their location. 

Pielke’s Litmus Test

In 2007 Roger Pielke, Sr. suggested that ocean heat should be used not just to monitor the energy imbalance in the climate system, but as a “litmus test” for falsifying the IPCC’s AGW hypothesis (Pielke, “A Litmus Test…”, April 4, 2007).  Dr. Pielke is a Senior Research Scientist in CIRES (Cooperative Institute for Research in Environmental Sciences), at the University of Colorado in Boulder, and Professor Emeritus of the Department of Atmospheric Science, Colorado State University, Fort Collins.  One of the world’s foremost atmospheric scientists, he has published nearly 350 papers in peer-reviewed journals, 50 chapters in books, and co-edited 9 books.

Pielke’s test compares the net anthropogenic radiative forcing projected by GISS computer models (Hansen, Willis, Schmidt et al.) with actual ocean heat as measured by the ARGO array.  To calculate the annual projected heat accumulation in the climate system or oceans, radiative forcing (Watts/m2) must be converted to Joules (Watt seconds) and multiplied by the total surface area of the oceans or earth:

      [#1]  Qannum = Ri × Pyear × Aearth × 0.80

 or, [#2]  Qannum = Ri × Pyear × Aocean × 0.85

Where Qannum is the annual heat accumulation in Joules

Ri is the mean global anthropogenic radiative imbalance in W/m2

Pyear is the period of time in seconds per year (31,557,600)

Aocean is the total surface area of the oceans in m2 (3.61132 x 10^14)

Aearth is the total surface area of the earth in m2 (5.10072 x 10^14)

0.80 and 0.85 are reduction factors for isolating upper ocean heat (see below)
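Both equations can be sketched directly in Python using the constants listed above (the 0.75 W/m2 input is the global radiative imbalance discussed below; the function names are mine):

```python
# Sketch of Eq. #1 and Eq. #2: projected annual upper-ocean heat
# accumulation from a mean anthropogenic radiative imbalance Ri.

SECONDS_PER_YEAR = 31_557_600.0   # Pyear
A_EARTH = 5.10072e14              # total earth surface area, m^2
A_OCEAN = 3.61132e14              # total ocean surface area, m^2

def q_annum_eq1(ri_w_m2: float) -> float:
    """Eq. #1: scale Ri to the whole earth, keep 80% for the upper ocean."""
    return ri_w_m2 * SECONDS_PER_YEAR * A_EARTH * 0.80

def q_annum_eq2(ri_w_m2: float) -> float:
    """Eq. #2: scale Ri to the ocean surface only, keep 85% for the upper ocean."""
    return ri_w_m2 * SECONDS_PER_YEAR * A_OCEAN * 0.85

# With the 0.75 W/m^2 global imbalance, Eq. #1 projects roughly
# 0.97 x 10^22 Joules per year accumulating in the upper ocean.
print(f"Eq. #1: {q_annum_eq1(0.75):.2e} J/yr")
print(f"Eq. #2: {q_annum_eq2(0.75):.2e} J/yr")
```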

Radiative Imbalance.  The IPCC and GISS calculate the global mean net anthropogenic radiative forcing at ~1.6 W/m2 (-1.0, +0.8) (see the 2007 IPCC Fourth Assessment Summary for Policy Makers, figure SPM.2, and Hansen, Willis, Schmidt et al., page 1434, Table 1).  This is the effective total of all anthropogenic forcings on the climate system.  Projected heat accumulation is not calculated from this number, but from the mean global anthropogenic radiative imbalance (Ri).  According to Hansen, Willis, Schmidt et al., the imbalance represents that fraction of the total net anthropogenic forcing which the climate system has not yet responded to due to thermal lag (caused primarily by the oceans).  The assumption is that since the earth has warmed, a certain amount of energy is required to maintain the current global temperature.  Continuing absorption will cause global temperatures to rise further until a new balance is reached.

Physically, the climate system responds to the entire 1.6 W/m2 forcing, not just a portion of it.  But while energy is being absorbed, it is also being lost by radiation.  The radiative imbalance is better described as the difference between the global mean net anthropogenic radiative forcing and its associated radiative loss.  The global radiative imbalance of 0.75 W/m2 (shown below) would mean that the earth system is radiating 0.85 W/m2 in response to 1.6 W/m2 of total forcing (1.6 - 0.85 = 0.75).  For a more detailed discussion of radiative equilibrium see Pielke Sr., R.A., 2003: “Heat storage within the Earth system.”  Bulletin of the American Meteorological Society, 84, 331-335.

Projected Ocean Heat.  Since observed heat accumulation is derived from measurements in the upper 700m-750m of the ocean, an “apples to apples” comparison with model projections requires some adjustments.  Eq. #1, used by the GISS model, assumes that nearly all of the energy from anthropogenic radiative forcing is eventually absorbed by the oceans (80%-90% according to Willis, U.S. CLIVAR, 1, citing Levitus et al.).  Based on modeling by Hansen, Willis, Schmidt, et al. (page 1432), upper ocean heat is thought to comprise 80% of the total as shown in the illustration.  So, the calculated heat must be multiplied by 0.8 to subtract deep ocean heat (below 750m) and heat storage by the atmosphere, land, and cryosphere (see discussion on deep ocean heat and melting ice below).

Another method for calculating heat accumulation is shown in Eq. #2.  This method assumes that only 71% (i.e., the fraction of the earth covered by oceans) of the energy from anthropogenic radiative forcing is absorbed by the oceans.  Hence, the net global anthropogenic radiative flux is scaled to ocean surface area.  To compare to upper ocean measurements, deep ocean heat must be subtracted by multiplying the results by ~0.85.  As shown in the illustration above, the deep ocean absorbs about 0.11 W/m2 of the total ocean flux of 0.71 W/m2 (estimates vary, see discussion on deep ocean heat, below).  Since this equation is not used by climate models, it is not included in the following tables.  But, it is displayed in the graph below as a possible lower limit of projected heat accumulation.

In his blog, “Update On A Comparison Of Upper Ocean Heat Content Changes With The GISS Model Predictions” (Feb. 9, 2009), Pielke projects heat accumulation based on an upper ocean mean net anthropogenic radiative imbalance of 0.6 W/m2, as shown below (see Hansen, Willis, Schmidt et al., 1432).  This is only a slight variance from his 2007 blog and affords the best opportunity for the GISS models to agree with observed data.  A failure to meet this benchmark would be a robust demonstration of systemic problems.

Observed Ocean Heat.  A comparison of these projections to observed data is shown below.  Despite expectations of warming, temperature measurements of the upper 700m of the ocean from the ARGO array show no increase from 2003 through 2008.  Willis calculates a net loss of -0.12 (±0.35) x 10^22 Joules per year (Pielke, Physics Today, 55) from mid-2003 to the end of 2008 (Dr. Pielke received permission from Josh Willis to extend the ARGO data to the end of 2008).

According to a recent analysis of ARGO data by Craig Loehle, senior scientist at the Illinois-based National Council for Air and Stream Improvement, the loss is -0.35 (±0.2) x 10^22 Joules per year from mid-2003 to the end of 2007 (see Loehle, 2009: “Cooling of the global ocean since 2003.” Energy & Environment, Vol. 20, No. 1&2, 101-104).  Loehle used a more complex method than Willis to calculate this trend, enabling him to reduce the margin of error.

My calculations for observed global heat, shown below, are based on observed upper ocean heat.  Since upper ocean heat is calculated to be 80% of the global total (Eq. #1), observed global heat equals approximately 125% (1/0.8) of the observed upper ocean heat.
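This 1/0.8 conversion can be checked in a couple of lines (a sketch; the inputs are the Willis and Loehle upper-ocean trends, in units of 10^22 Joules, and the function name is mine):

```python
# Scaling observed upper-ocean heat up to an observed global total,
# using the Eq. #1 assumption that the upper ocean holds 80% of it.

UPPER_OCEAN_FRACTION = 0.80

def global_from_upper(upper_heat: float) -> float:
    """Scale an upper-ocean heat figure (10^22 J) up to a global total."""
    return upper_heat / UPPER_OCEAN_FRACTION

print(global_from_upper(-0.66))  # Willis trend -> about -0.83 x 10^22 J
print(global_from_upper(-1.58))  # Loehle trend -> about -1.98 x 10^22 J
```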

GISS model projections vs. observations (Joules x 10^22):

Projected global heat accumulation: 7.26
Observed global heat accumulation: -0.83 (Willis, 5.5 yr); -1.98 (Loehle, 4.5 yr)
Projected upper ocean heat accumulation: 5.82
Observed upper ocean heat accumulation: -0.66 (Willis, 5.5 yr); -1.58 (Loehle, 4.5 yr)

Heat Deficit.  The graph below shows the increasing deficit of upper ocean heat from 2003 through 2008 based on GISS projections by Hansen, Willis, Schmidt, et al.  Actual heat accumulation is plotted from observed data (using ARGO) and shows the overall linear trend (after Willis and Loehle).  Seasonal fluctuations and error bars are not shown.

The projection displays a range representing the two ways of calculating heat accumulation discussed above.  The upper limit assumes that virtually all of the energy from anthropogenic radiative forcing is eventually absorbed by the oceans (Eq. #1).  The lower limit scales the total radiative imbalance to the surface area of the oceans (Eq. #2).  The upper limit represents the actual GISS model projection.


The 5.5 year accumulated heat deficit for GISS model projections (red line) ranges from 6.48 x 10^22 Joules (using Willis) to 7.92 x 10^22 Joules (Loehle, extrapolated to the end of 2008).  Pielke is more conservative in his calculations, given the substantial margin of error in Willis’ data (±0.35).  Accordingly, he assumes zero heat accumulation for the full 6 year period (2003-2008), yielding a deficit of 5.88 x 10^22 Joules (Pielke, “Update…”).  Loehle’s work, which was not yet known to Pielke in February of 2009, has a much smaller margin of error (±0.2).

Accumulated heat deficit (mid-2003 through 2008):

ARGO data analyzed by Willis: -6.48 x 10^22 Joules
ARGO data analyzed by Loehle (extrapolated to end of 2008): -7.92 x 10^22 Joules
Pielke (based on Willis): -5.39 x 10^22 Joules (-5.88 for 6 full years)

These figures reveal a robust failure on the part of the GISS model to project warming.  The heat deficit shows that from 2003-2008 there was no positive radiative imbalance caused by anthropogenic forcing, despite increasing levels of CO2.  Indeed, the radiative imbalance was negative, meaning the earth was losing slightly more energy than it absorbed.  Solving for Ri in Eq. #1, the average annual upper ocean radiative imbalance ranged from a statistically insignificant -0.07 W/m² (using Willis) to -0.22 W/m² (using Loehle).
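The imbalance figures in this paragraph can be reproduced in a few lines. This is a sketch, not the author's code: it assumes an earth surface area of about 5.1 x 10¹⁴ m² and, following Eq. #1 of the post, spreads the upper ocean heat change over the full surface of the earth:

```python
# Sketch: average radiative imbalance implied by an upper ocean heat change,
# spread over the full surface of the earth (per Eq. #1 of the post).
EARTH_SURFACE_M2 = 5.1e14   # assumed approximate surface area of the earth
SECONDS_PER_YEAR = 3.156e7

def imbalance_w_per_m2(delta_heat_joules, years):
    """Average imbalance in W/m^2 for a heat change accumulated over `years`."""
    return delta_heat_joules / (EARTH_SURFACE_M2 * SECONDS_PER_YEAR * years)

print(round(imbalance_w_per_m2(-0.66e22, 5.5), 2))  # Willis -> -0.07
print(round(imbalance_w_per_m2(-1.58e22, 4.5), 2))  # Loehle -> -0.22
```

Both values round to the figures quoted in the text.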

As Pielke points out (“Update…”), in order for the GISS model to verify by the end of 2012 (i.e., one decade of measurements), the annual radiative imbalance would have to increase to 1.50 W/m² for the upper ocean, which is 2.5x higher than the 0.6 W/m² projected by Hansen, Willis, Schmidt, et al. (1432).  This corresponds to an annual average accumulation of 2.45 x 10²² Joules in the upper ocean, or a 4 year total of 9.8 x 10²² Joules.

Using Loehle’s deficit, the numbers are even more remarkable.  Assuming that heating resumes for the next 4.5 years (2009 to mid 2013), the annual average accumulation of heat would need to be 2.73 x 10²² Joules in the upper ocean, for a 4.5 year total of 12.29 x 10²² Joules.  The derived radiative imbalance for the upper ocean would increase to 1.7 W/m², nearly 3x higher than the projected imbalance.
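Pielke's verification arithmetic can also be sketched in a few lines (variable names are mine; an earth surface area of about 5.1 x 10¹⁴ m² is assumed):

```python
# Sketch reproducing the verification arithmetic quoted from Pielke:
# assume zero accumulation for 2003-2008, so the remaining 4 years
# (2009-2012) must supply the full 10-year projected total.
EARTH_SURFACE_M2 = 5.1e14   # assumed approximate surface area of the earth
SECONDS_PER_YEAR = 3.156e7

decade_projection = 9.8e22  # J, 10 yr of projected upper ocean accumulation
years_remaining = 4.0
annual_needed = decade_projection / years_remaining
flux_needed = annual_needed / (EARTH_SURFACE_M2 * SECONDS_PER_YEAR)

print(annual_needed)          # -> 2.45e+22 J per year
print(round(flux_needed, 2))  # -> about 1.52 W/m^2, ~2.5x the projected 0.6
```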

Improbable Explanations for the Failure of Heat Accumulation

Hidden Heat.  A few explanations have been proposed for the missing heat accumulation.  One popular suggestion is that there is “hidden” or “unrealized” heat in the climate system.  This heat is being “masked” by the current cooling and will “return with a vengeance” once the cooling abates.

This explanation reveals a fundamental ignorance of thermodynamics, and it is disappointing to see scientists suggest it.  Since the oceans are the primary reservoir of atmospheric heat, there is no need to account for lag time involved with heat transfer.  By using ocean heat as a metric, we can quantify nearly all of the energy that drives the climate system at any given moment.  So, if there is still heat “in the pipeline”, where is it?  The deficit of heat after nearly 6 years of cooling is now enormous.  Heat can be transferred, but it cannot hide.  Without a credible explanation of heat transfer, the idea of unrealized heat is nothing more than an evasion.

Deep Ocean Heat.  Is it possible that “lost” heat has been transferred to the deep ocean, below the 700 meter limit of our measurements?  This appears unlikely.  According to Hansen, Willis, Schmidt, et al., model simulations of ocean heat flow show that 85% of heat storage occurs above 750 m on average (with the range stretching from 78 to 91%) (1432).  Moreover, if there is “buried” heat, widespread diffusion and mixing with bottom waters may render it statistically irrelevant in terms of its impact on climate.

The absence of heat accumulation in deep water is corroborated by a recent study of ocean mass and altimetric sea level by Cazenave et al.  Deep water heat should produce thermal expansion, causing sea level to rise.  Instead, steric sea level (which measures thermal expansion plus salinity effects) peaked near the end of 2005, then began to decline nearly steadily.  It appears that ocean volume has actually contracted slightly.

Melting Ice.  Another possibility is that meltwater from glaciers, sea ice, and ice caps is offsetting heat accumulation.  Perhaps the ocean temperature has plateaued as the ice undergoes a phase change from solid to liquid (heat of fusion). 

This explanation sounds plausible at first, but it is not supported by observed data or best estimates.  In a 2001 paper published in Science, Levitus et al. calculate that the absorption of heat due to melting ice amounts to only 6.85% of the total increase in ocean heat during the 41-year period from about 1955 to 1996:

Observed increase in ocean heat (1955-1996) = 1.82 x 10²³ J

Observed/estimated heat of fusion (1950’s-1990’s) = 1.247 x 10²² J

This work is quoted by Hansen, Willis, Schmidt, et al. and further supported by their calculations (1432), which are even more conservative.  Given a planetary energy imbalance of approximately +0.75 W/m², their simulations show that only 5.3% (0.04 W/m²) of the energy is used to warm the atmosphere and the land, and to melt ice.  The balance of energy is absorbed by the ocean above 750 m (~0.6 W/m²), with a small amount of energy penetrating below 750 m (~0.11 W/m²).

The absorption of heat by melting ice is so small that even if it were to quadruple, the impact on ocean heat would be minuscule.
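These fractions are easy to check against the cited figures. A minimal sketch using only numbers quoted in the text:

```python
# Sketch: fraction of ocean heat gain consumed by melting ice.
ocean_heat_gain = 1.82e23   # J, observed increase 1955-1996 (Levitus et al.)
heat_of_fusion = 1.247e22   # J, estimated melt heat, 1950s-1990s

levitus_fraction = heat_of_fusion / ocean_heat_gain
print(round(100 * levitus_fraction, 2))  # -> 6.85 (%)

# Hansen et al.: of a ~0.75 W/m^2 imbalance, only ~0.04 W/m^2 goes to
# warming the atmosphere and land and melting ice.
hansen_fraction = 0.04 / 0.75
print(round(100 * hansen_fraction, 1))   # -> 5.3 (%)
```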

Cold Biasing.  The ARGO array does not provide total geographic coverage.  Ocean areas beneath ice are not measured.  However, this would have a relatively small impact on total ocean heat, since ice-covered areas comprise less than 7% of the ocean.  As mentioned above, quality-controlled water temperature below 700 m is not available, though the floats operate to a depth of 2000 m.  Above 700 m, the analysis performed by Willis includes a quality check of raw data which revealed a cold bias in some instruments.  This bias was removed (Willis, CLIVAR, 1).

Loehle warns that the complexities of instrumental drift could conceivably create such artifacts (Loehle, 101), but concludes that his analysis is consistent with satellite and surface data which show no warming for the same period (e.g., see Douglass, D.H., J.R. Christy, 2009: “Limits on CO2 climate forcing from recent temperature data of Earth.” Energy & Environment, Vol. 20, No. 1&2, 178-189 (13)). So it is unlikely that cold biasing could account for the observed changes in ocean heat. 

In brief, we know of no mechanism by which vast amounts of “missing” heat can be hidden, transferred, or absorbed within the earth’s system.  The only reasonable conclusion (call it a null hypothesis) is that heat is no longer accumulating in the climate system and there is no longer a radiative imbalance caused by anthropogenic forcing.  This not only demonstrates that the IPCC models are failing to accurately predict global warming, but also presents a serious challenge to the integrity of the AGW hypothesis.

Analysis and Conclusion

Though other criteria, such as climate sensitivity (Spencer, Lindzen), can be used to test the AGW hypothesis, ocean heat has one main advantage: simplicity.  While work on climate sensitivity certainly needs to continue, it requires more complex observations and hypotheses, making verification more difficult.  Ocean heat touches on the very core of the AGW hypothesis: when all is said and done, if the climate system is not accumulating heat, the hypothesis is invalid.

Writing in 2005, Hansen, Willis, Schmidt, et al. suggested that GISS model projections had been verified by a solid decade of increasing ocean heat (1993 to 2003).  This was regarded as further confirmation of the IPCC’s AGW hypothesis.  Their expectation was that the earth’s climate system would continue accumulating heat more or less monotonically.  Now that heat accumulation has stopped (and perhaps even reversed), the tables have turned.  The same criterion used to support their hypothesis is now being used to falsify it.

It is evident that the AGW hypothesis, as it now stands, is either false or fundamentally inadequate.  One may argue that projections for global warming are measured in decades rather than months or years, so not enough time has elapsed to falsify this hypothesis.  This would be true if it were not for the enormous deficit of heat we have observed.  In other words, no matter how much time has elapsed, if a projection misses its target by such a large magnitude (6x to 8x), we can safely assume that it is either false or seriously flawed.

Assuming the hypothesis is not false, its proponents must now address the failure to skillfully project heat accumulation.  Theories pass through stages of development as they are tested against observations.  It is possible that the AGW hypothesis is not false, but merely oversimplified.  Nevertheless, any refinements must include causal mechanisms which are testable and falsifiable.  Arm waving and ad hoc explanations (such as large margins of error) are not sufficient.

One possibility for the breakdown may relate back to climate sensitivity.  It is assumed that most feedbacks are positive, amplifying the slight warming (0.3 to 1.2 ºC) caused by CO2.  This may only be partially correct.  Perhaps these feedbacks undergo quasi-cyclical changes in tandem with natural fluctuations in climate.  The net result might be a more punctuated increase in heat accumulation with possible reversals, rather than a monotonic increase.  The outcome would be a much slower rate of warming than currently projected.  This would make it difficult to isolate and quantify anthropogenic forcing against the background noise of natural climate signals.

On the other hand, the current lapse in heat accumulation demonstrates a complete failure of the AGW hypothesis to account for natural climate variability, especially as it relates to ocean cycles (PDO, AMO, etc.).  If anthropogenic forcing from GHG can be overwhelmed by natural fluctuations (which themselves are not fully understood), or even by other types of anthropogenic forcing, then it is not unreasonable to conclude that the IPCC models have little or no skill in projecting global and regional climate change on a multi-decadal scale.  Dire warnings about “runaway warming” and climate “tipping points” cannot be taken seriously.  A complete rejection of the hypothesis, in its current form, would certainly be warranted if the ocean continues to cool (or fails to warm) for the next few years.

Whether the anthropogenic global warming hypothesis is invalid or merely incomplete, the time has come for serious debate and reanalysis.  Since Dr. Pielke first published his challenge in 2007, no critical attempts have been made to explain these failed projections.  His blogs have been greeted by the chirping of crickets.  In the meantime, costly political agendas focused on carbon mitigation continue to move forward, oblivious to recent empirical evidence.  Open and honest debate has been marginalized by appeals to consensus.  But as history has often shown, consensus is the last refuge of poor science.


Cazenave, A., et al., 2008: “Sea level budget over 2003-2008: A reevaluation from GRACE space gravimetry, satellite altimetry and Argo,” Glob. Planet. Change, doi:10.1016/j.gloplacha.2008.10.004.

Douglass, D.H., J.R. Christy, 2009: “Limits on CO2 climate forcing from recent temperature data of Earth.” Energy & Environment, Vol. 20, No. 1&2, 178-189 (13).

Hansen, J., L. Nazarenko, R. Ruedy, Mki. Sato, J. Willis, A. Del Genio, D. Koch, A. Lacis, K. Lo, S. Menon, T. Novakov, Ju. Perlwitz, G. Russell, G.A. Schmidt, and N. Tausnev, 2005: “Earth’s energy imbalance: Confirmation and implications.” Science, 308, 1431-1435.

IPCC, 2007: Summary for Policymakers. In: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change [Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor and H.L. Miller (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.

Levitus, S., J.I. Antonov, J. Wang, T.L. Delworth, K.W. Dixon, and A.J. Broccoli, 2001: “Anthropogenic warming of Earth’s climate system.” Science, 292, 267-268.

Loehle, Craig, 2009: “Cooling of the global ocean since 2003.” Energy & Environment, Vol. 20, No. 1&2, 101-104 (4).

Pielke Sr., R.A., 2008: “A broader view of the role of humans in the climate system.” Physics Today, 61, No. 11, 54-55.

Pielke Sr., R.A., 2003: “Heat storage within the Earth system.”  Bulletin of the American Meteorological Society, 84, 331-335.

Pielke Sr., R.A., “A Litmus Test For Global Warming – A Much Overdue Requirement”, April 4, 2007.

Pielke Sr., R.A., “Update On A Comparison Of Upper Ocean Heat Content Changes With The GISS Model Predictions”, Feb. 9, 2009.

Willis, J.K., D. Roemmich, and B. Cornuelle, 2004: “Interannual variability in upper ocean heat content, temperature, and thermosteric expansion on global scales.”  J. Geophys. Res., 109, C12036.

Willis, J. K., 2008: “Is it Me, or Did the Oceans Cool?”, U.S. CLIVAR, Sept, 2008, Vol. 6, No. 2.

* William DiPuccio was a weather forecaster for the U.S. Navy, and a Meteorological/Radiosonde Technician for the National Weather Service.  More recently, he served as head of the science department for St. Nicholas Orthodox School in Akron, Ohio (closed in 2006).  He continues to write science curriculum, publish articles, and conduct science camps.


Filed under Guest Weblogs

Limits on CO2 Climate Forcing from Recent Temperature Data of Earth: A Guest Weblog by David Douglass and John Christy

Our paper

Limits on CO2 Climate Forcing from Recent Temperature Data of Earth

has just been published in Energy and Environment (Vol. 20, Jan. 2009). [Copies may be downloaded from . A preprint with figures in color is available at .]

We show in Figure 1 the well-established observation that the global atmospheric temperature anomalies of Earth reached a maximum in 1998.


This plot shows oscillations that are highly correlated with El Nino/La Nina and volcanic eruptions. There also appears to be a positive temperature trend that could be due to CO2 climate forcing.

We examined these data for evidence of CO2 climate forcing.  We start by assuming that CO2 forcing has the following signature.

1. The climate forcing of CO2 according to the IPCC varies as ln(CO2), which is nearly linear over the range of these data. One would expect the temperature response to follow this function.

2. The atmospheric CO2 is well mixed and shows a variation with latitude of less than 4% from pole to pole. Thus one would expect the latitude variation of the temperature anomalies from CO2 forcing to be small as well.

Thus, changes in the temperature anomaly T that are oscillatory, negative or that vary strongly with latitude are inconsistent with CO2 forcing.
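The near-linearity claimed in property #1 is easy to illustrate. A sketch with approximate endpoint concentrations (roughly 337 ppm in 1979 and 377 ppm in 2004; these round numbers are my assumption, not the paper's):

```python
import numpy as np

# Sketch: ln(CO2) is very nearly linear in time over 1979-2004.
years = np.arange(1979, 2005)                 # 26 annual points
co2 = np.linspace(337.0, 377.0, years.size)   # assumed linear ppm ramp
log_ratio = np.log(co2 / co2[0])              # ln(C/C0)

slope, intercept = np.polyfit(years, log_ratio, 1)
residual = log_ratio - (slope * years + intercept)

print(round(10 * slope, 3))              # slope per decade -> ~0.045
print(float(np.max(np.abs(residual))))   # tiny departure from linearity
```

The maximum departure from a straight line is about 0.1% of ln(C/C0), and the fitted slope per decade is close to the 0.044/decade value used later in the post.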

The latitude dependence of the UAH data is shown in Figure 2.


The anomalies are for NoExtropics, Tropics, SoExtropics, and Global. The average trends are 0.28, 0.08, 0.06, and 0.14 K/decade respectively. If the climate forcing were only from CO2, one would expect from property #2 a small variation with latitude.  However, the NoExtropics trend is 2 times the global trend and 4 times the tropical trend.  Thus one concludes that the climate forcing in the NoExtropics includes more than CO2 forcing. These non-CO2 effects include: land use [Pielke et al. 2007]; industrialization [McKitrick and Michaels 2007, Kalnay and Cai 2003, DeLaat and Maurellis 2006]; high natural variability; and daily nocturnal effects [Walters et al. 2007].

Thus we look to the tropical anomalies. If one can determine an underlying trend in the tropics, then, assuming that the latitude variation of the intrinsic CO2 effect is small (CO2 property #2), the global trend should be close to this value.

Figure 3 shows the tropical UAH data and the nino3.4 time-series. (Results consistent with these were found using RSS microwave temperatures, but evidence also presented here and elsewhere indicates RSS is less robust for trend calculations.)

One sees that the value at the end of the data series is less than at the beginning. However, one should not conclude from this observation that the trend is negative because of the obvious strong correlation between UAH and nino3.4.

The desired underlying trend, the ENSO effect, and the volcano effect can all be determined by a multiple regression analysis.  The regression analysis yields the underlying trend

            trend = 0.062 ± 0.010 K/decade; R² = 0.886.                 (1)
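The regression step can be illustrated with synthetic data. Everything below is a fabricated stand-in (indices, coefficients, noise level), meant only to show how a multiple regression separates an underlying trend from ENSO and volcano signals; it is not the paper's data or code:

```python
import numpy as np

# Sketch: multiple regression separating a linear trend from ENSO and
# volcano signals. All series are fabricated stand-ins.
rng = np.random.default_rng(0)
n_months = 312                              # 26 years of monthly anomalies
t = np.arange(n_months) / 120.0             # time in decades

true_trend = 0.062                          # K/decade, illustrative only
nino34 = np.sin(2 * np.pi * t / 0.4)        # stand-in ENSO index (4 yr period)
volcano = np.exp(-((t - 1.3) ** 2) / 0.01)  # stand-in aerosol pulse
temp = (true_trend * t + 0.1 * nino34 - 0.3 * volcano
        + 0.02 * rng.standard_normal(n_months))

# Regress temperature on [1, t, nino34, volcano]; the coefficient on t is
# the underlying trend.
X = np.column_stack([np.ones(n_months), t, nino34, volcano])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
print(coef[1])   # recovered trend, close to the assumed 0.062
```

With real anomaly and index series in place of the stand-ins, the same least-squares call returns the underlying trend together with the ENSO and volcano coefficients.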

Warming from CO2 forcing
How big is the effect from CO2 climate forcing?  From IPCC [2001]:

             ΔT(CO2) ≈ λ · ΔF(CO2)                                             (2)

             ΔF(CO2) ≈ 5.33 ln(C/C0)

where λ is the climate sensitivity parameter, whose value is 0.30 K/(W m⁻²) for no-feedback; C is the concentration of CO2, and C0 is a reference value. From the data, the mean value of the slope of ln(C(t)/C(t0)) vs. time from 1979 to 2004 is 0.044/decade.


                          ΔT(CO2) ≈ 0.070 K/decade                     (3)

This estimate is for no-feedback. If there is feedback leading to a gain g, then multiply Eq. 3 by g. The underlying trend  is consistent with CO2 forcing with no-feedback. It is frequently argued that the gain g is larger than 1, perhaps as large as 3 or 4. This possibility requires there to be some other climate forcing of negative sign to cancel the excess. From the results of Chylek [2007], this cancellation cannot come from aerosols. One candidate is the apparent negative feedback associated with changes in cirrus clouds when warmed [Spencer et al. 2007].
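The arithmetic leading to Eq. 3 can be checked directly from the quoted values:

```python
# Sketch: check the arithmetic of Eqs. (2)-(3) from the quoted values.
lam = 0.30            # K/(W m^-2), no-feedback climate sensitivity
forcing_coeff = 5.33  # W/m^2 per unit ln(C/C0), as quoted
ln_slope = 0.044      # slope of ln(C/C0) per decade, 1979-2004

dT_per_decade = lam * forcing_coeff * ln_slope
print(round(dT_per_decade, 3))       # -> 0.07 K/decade (Eq. 3)

# A feedback gain g simply multiplies Eq. 3:
g = 3.0
print(round(g * dT_per_decade, 2))   # -> 0.21 K/decade
```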


Filed under Climate Change Forcings & Feedbacks, Guest Weblogs

Guest Weblog By James E. Hansen

Jim Hansen has graciously agreed to write an invited response to the Climate Science weblog Comments On “Air Pollutant Climate Forcings Within The Big Climate Picture” By Hansen et al. 2009.

Guest Weblog By  Dr. Hansen

We include land-use changes as one of the climate forcings in our climate modeling, as discussed in the papers “Efficacy of climate forcings” (J. Geophys. Res. 110, D18104 doi:10.1029/2005JD005776, 2005) and “Dangerous human-made interference with climate: a GISS modelE study” (Atmos. Chem. Phys., 7, 2287-2312, 2007).

Those simulations used a global data set of Ramankutty and Foley for land use changes over the past two centuries. The effect of the land use changes was found to be quite large, even dominant, on local regions where a large fraction of the gridbox was affected by land use change. But on global average the effects of greenhouse gases and aerosols were much larger than the effects of land use changes.

As noted in those papers, we did not include irrigation, which can also have large regional effects, because of the absence of a good global data set for irrigation fluxes. Nevertheless, from the simulations that have been made, and from comparison of the climate forcings, these papers make it clear that the largest global climate forcings are changes of greenhouse gases and aerosols.


Filed under Climate Change Forcings & Feedbacks, Guest Weblogs, Research Papers

Guest Weblog By Professor Darin W. Toohey Of The University Of Colorado At Boulder

Professor Darin Toohey of the University of Colorado at Boulder has prepared a guest weblog for today. 

Guest Weblog

When I opened my local newspaper today, and read the Associated Press interview with John Holdren, it was refreshing to see that the Obama administration is actually considering geoengineering as a potential emergency option to deal with global warming. What’s refreshing isn’t the detailed schemes Holdren mentioned, so much, as the notion that we have a White House that is in tune with the issue enough to be willing to say in public that it’s considering such options. If nothing else, it’s an admission that this is a problem that the US is now taking very seriously.

You can’t be serious about a problem if you do not look into all the possible solutions, and geoengineering is, if still poorly understood, a possible solution to out-of-control GHG emissions.

But with some geoengineering schemes, there could be nasty side effects. It’s one thing to pull carbon dioxide out of the air and bury it. It may not be easy, but it makes sense if CO2 is the big problem. It’s something wildly different to inject particles into the stratosphere or to launch mirrors or giant solar power satellites into space. The problem is that stuff called “ozone”; it used to be really important. In fact, it was so important that in order to protect it from even a few percent loss (thereby saving hundreds of thousands of people from life-threatening UV-related skin cancers), we have the only successful international policy to protect the environment that this world has ever seen; it’s called the Montreal Protocol.

Now, it turns out the particles people are considering injecting into the stratosphere act as surfaces for chemical reactions that accelerate ozone-destroying reactions of chlorine and bromine that are present in the stratosphere both from man-made and natural sources. Rocket emissions (and I am assuming here that the best way to get payloads into orbit is to launch them on conventional rockets) destroy ozone by directly adding additional particles for these heterogeneous reactions (alumina, in the case of solid rocket motors, and soot, in the case of kerosene-fueled engines) or by influencing the abundances of trace gases like water and nitric acid that produce particles, particularly in the polar regions. The stratosphere is relatively safe from the sort of global launch activity that we see today, but scaling this up to a geoengineering scheme to save the planet from runaway global warming will pit ozone depletion against climate change. Which would you rather have? How would you know?

There is a new publication in the journal Astropolitics of which I am a co-author:

Ross, M., D. Toohey, M. Peinemann, and P. Ross, 2009: “Limits on the Space Launch Market Related to Stratospheric Ozone Depletion.” Astropolitics, 7, 50-82, doi:10.1080/14777620902768867

that makes the case that if we don’t start thinking about how to deal with the ozone-depletion caused by rockets, we may be in for a rough ride when the main source of ozone depletion in the stratosphere is from rocket emissions, and not from chlorofluorocarbons (CFCs), sometime in about 20-30 years. That’s probably around the time that geoengineering schemes will no longer be interesting after-AGU dinner conversation. On the other hand, the rocket industry is real; there are a few launches worldwide every week. Just about every aspect of our personal and professional lives depends on launching more and larger satellites for communication and remote sensing.

Currently, the amount of ozone-depleting material deposited in the stratosphere by a few space shuttle launches is comparable to the amount of propellants released from 50 million metered dose inhalers (MDIs) each year in the US. This past December, CFCs were banned from those MDIs. But rocket launches aren’t regulated at all. The Astropolitics paper doesn’t make the case that we should immediately start regulating rocket launches. What it says is that the space launch industry is in a unique position when it comes to ozone-depleting practices, because the amount of ozone depletion caused by rockets has been minuscule compared to that caused by CFCs. But as CFCs are purged from the atmosphere, and as space launch activities increase as forecast, something will eventually have to be done; either propellants will be altered to mitigate ozone losses or the number of launches will need to be restricted. We can be certain that solid propellant rockets are worse for the ozone layer than liquid propellants, but we do not know if they are a factor of 10 or 100 times worse. No matter from which angle you look at it, add to the mix geoengineering schemes that require large rockets, launched often, and we’ll be stuck between a rock and a hard place.


Filed under Guest Weblogs

Climatic Effects of 30 Years of Landscape Change over the Greater Phoenix AZ, Region: Part II by Georgescu et al. 2009

Guest Weblog By Matei Georgescu

Previously, the modeled effect of observed (from the early 1970s to the early 2000s) land use/land cover change (LULCC) over one of the most rapidly developing regions in the US, the semi-arid Greater Phoenix [AZ] region, was shown to have an important impact on the surface energy budget and the near-surface atmosphere (e.g., temperature, dewpoint temperature; see).  We address the role of these surface budget changes and subsequent repartitioning of energy on the mesoscale dynamics/thermodynamics of the region, their impact on convective rainfall, and the association with the synoptic scale North American Monsoon (NAM) circulation in a follow-up paper:

Georgescu, M., G. Miguez-Macho, L. T. Steyaert, and C. P. Weaver (2009), Climatic effects of 30 years of landscape change over the Greater Phoenix, Arizona, region: 2. Dynamical and thermodynamical response, J. Geophys. Res., doi:10.1029/2008JD010762, in press.  (subscription required)

Our modeling results show a systematic difference in total accumulated precipitation between the most recent (2001) and least recent (1973) landscape reconstructions: a rainfall enhancement for 2001 relative to the 1973 landscape. We note that while we see this similarity among the “dry” hydrometeorological seasons, the difference pattern for the “wet” seasons does not indicate such an effect on rainfall.

We find that changes in differential heating, resulting from the evolution of the underlying landscape, produce preferentially located mesoscale circulations (evident on most days) which were stronger for the most recent landscape representation (2001) as compared to the oldest (1973).  These enhanced circulations warm and dry the lower planetary boundary layer (PBL) – due to enhanced turbulent heating – and moisten the upper PBL and free atmosphere.  While these circulations are shown to alter the properties of the PBL in all Julys studied (the effect was larger during “dry” Julys as compared with “wet” Julys, and indeed, there was variability among the “dry” hydrometeorological months as well), direct dynamical forcing does not seem to be the explanation for the simulated precipitation enhancement (a signal we only observe during “dry” Julys and in two of the trio of months we investigated) resulting from the landscape’s evolution. 

While the cause of initial triggering of precipitation enhancement remains elusive, we do show that precipitation recycling plays an important role in sustaining and enhancing the initial difference in rainfall between the most recent (2001) and the least recent (1973) landscapes. 

Importantly, this work documents the interplay amongst the continuum of scales investigated [ranging from the turbulence scale (smallest) to the synoptic scale (largest)], and underscores some of the non-linearities and complexities involved in the coupled land-atmosphere system.


Filed under Guest Weblogs

Dissecting a Real Climate Text by Hendrik Tennekes

I understand that Gavin Schmidt was upset by my essay of January 29.  I admit that I neglected to mention that I responded to his long exposition of January 6 on Real Climate. The part of his text that deals with the difference between weather models and climate models reads:

“Conceptually they are very similar, but in practice they are used very differently. Weather models use as much data as there is available to start off close to the current weather situation and then use their knowledge of physics to step forward in time. This has good skill for a few days and some skill for a little longer. Because they are run for short periods of time only, they tend to have much higher resolution and more detailed physics than climate models (but note that the Hadley Centre for instance, uses the same model for climate and weather purposes). Weather models develop in ways that improve the short term predictions, though the impact for long term statistics or the climatology needs to be assessed independently. Curiously, the best weather models often have a much worse climatology than the best climate models. There are many current attempts to improve the short-term predictability in climate models in line with the best weather models, though it is unclear what impact that will have on projections.”

What to make of this? I will dissect this paragraph line by line.

“Conceptually they are very similar……”

In practice, they are. However, as I have argued time and again, this apparent similarity is a serious defect. A crude representation of the ocean is all that is needed for a weather model, but in a climate model the ocean should share center stage with deforestation and other land use changes.

“Weather models …use their knowledge of physics to step forward in time.”

What Gavin leaves unsaid here is that most of the physics in a weather model deals with the atmosphere. Also, most of the physics is parameterized and the reliability of the parameterizations continues to be debated. I don’t want to pick nits, else I would query how models can possess knowledge of any kind.

“This has good skill for a few days…….”

Yes, Gavin is aware of Lorenz’ butterfly. He fails to state, however, that the average prediction horizon of weather forecasts is comparable to the lifetime of synoptic weather systems. I would not mind this omission, were it not for the fact that the (unknown) prediction horizon of climate models is determined in part by the lifetime of circulation systems in the ocean, such as the Pacific Decadal Oscillation. Since weather models and climate models are conceptually similar, one must expect similar predictability problems.

“Because they are run for short periods of time only……”

The logic in this sentence is inverted. The development of weather and climate models is driven by the desire to employ the latest supercomputers available. It is conceptually a small matter to fill these computers with parameterizations operating at higher resolution. My interactions with Tim Palmer of ECMWF (see my weblog of June 24, 2008) focused on his claim for Seamless Prediction Systems. His advocacy boiled down to a quest for a computer facility that could run climate models at the resolution now feasible for weather models. I submit that no conceptual progress can be expected if the modeling community fails to reconsider the architecture of their software.

“Weather models develop in ways that improve…….”

This line ends with the need to independently assess the impact of model improvements on long-term statistics. I agree with the need, but not with Gavin’s off-hand way of letting this problem pass by without explaining how such assessments can or should be performed. Throughout this text Gavin avoids matters of methodology. That, to me, misleads all readers who are not professionals themselves.

“Curiously, the best weather models…….”

At this point, a Dutchman would say “Nu breekt mijn klomp” (now my clog breaks). Gavin Schmidt is a professional climate modeler, but he appears surprised that the climatology of weather models is inferior. Of course it is. Weather models deal with the atmosphere, climate models with the entire climate system.

“There are many current attempts to improve the short-term predictability …….”

Climate modelers are responding to public opinion and have chosen to develop “seamless” or “unified” prediction systems. The present skill of seasonal forecasts is marginal at best; why should the public and their governments have confidence in forecasts many tens of years ahead? Conceptually, this is indeed a crucial question. It cannot be answered by increasing computer power. Gavin admits as much:

“….. it is unclear what impact that will have on projections.”

So why should one base climate policy on forecasts made by climate models? 

Curiously, Gavin’s text is conceptually vague. He should be able to do better.

It is up to you, Gavin. I am waiting.


Filed under Guest Weblogs

Climatic Effects of 30 Years of Landscape Change over the Greater Phoenix, AZ, Region: Part I 2009 by Matei Georgescu

Guest Weblog By Matei Georgescu

In order to paint a more comprehensive picture of anthropogenic influence on climate, the National Research Council (NRC) has stressed the need to look beyond the oft-cited, traditional evaluation of global-scale forcing(s).  For example, taking into account the regional surface energy balance resulting from the heterogeneous patchwork that is the land surface (and, importantly, the modification of the energy balance due to changes in the surface cover) has important implications for proper attribution of surface temperature changes, regional changes in circulation, and perhaps teleconnections, all effects that cannot be explained solely by increases in well-mixed greenhouse gases.

Increased scientific focus on the changing climate has brought greater attention to areas witnessing rapid population growth and landscape conversion. Especially vulnerable may be areas located in semi-arid climates with naturally limited water resources.

A new paper investigating the climatic role of land cover change over the Greater Phoenix area, in particular the impact of documented landscape change from the 1970s through the early 2000s, is now in press at the Journal of Geophysical Research – Atmospheres:

Georgescu, M., G. Miguez-Macho, L. T. Steyaert, and C. P. Weaver (2009), Climatic Effects of 30 Years of Landscape Change over the Greater Phoenix, AZ, Region. Part I: Surface Energy Budget Changes, J. Geophys. Res., doi:10.1029/2008JD010745, in press.

The region’s extensive landscape evolution since the early 1970s is first documented from analyses of Landsat images and land-use/land-cover (LULC) datasets derived from aerial photography (1973) and Landsat (1992 and 2001). Results show nearly uninterrupted urban sprawl to the northwest and southeast of central Phoenix, replacing plots of irrigated agriculture as territorial expansion proceeded from the 1970s through the 1990s and into the early 2000s. Urban sprawl also occurred, to a lesser degree, at the expense of semi-natural shrubland just north and east of the central business district of Phoenix.

These derived land cover datasets, together with biophysical parameters appropriate for the region, were then used as surface boundary conditions on the innermost of three nested grids in high-resolution Regional Atmospheric Modeling System (RAMS) simulations (2-km grid spacing on the inner grid). A total of 18 month-long numerical experiments were performed, using the circa-1973, circa-1992, and circa-2001 “snapshot in time” landscape representations to quantify the impacts of intensive land-use change on July surface temperatures, dew-point temperatures, and the surface radiation and energy budgets.

Results illustrate a regional warming (centered over Greater Phoenix and averaged over the entire 204 × 204 km fine grid) of 0.12°C over the roughly three-decade period of landscape evolution. In other words, landscape change alone, separate from other forcings (which were not taken into account), has had a non-negligible warming impact over this region.

To better understand the physical mechanism behind this result, the effects of distinct land-use conversion themes (e.g., conversion from irrigated agriculture to urban land) were examined to evaluate how specific landscape changes have each contributed to the region’s changing climate. These conversion themes illustrate the impact of individual landscape changes on the components of the land surface radiation and energy budget, and the subsequent impact on near-surface climate, allowing us to better link particular LULC changes to their climatic effects.

The two urbanization themes studied in this paper ([1] conversion from shrubland to urban land, and [2] conversion from irrigated agriculture to urban land) both increased surface shortwave absorption (due to changes in albedo) and decreased the downward longwave radiative flux at the surface (due to the loss of low-level water vapor associated with decreased coverage of irrigated agriculture). Together, this pair of urbanization themes produced peak daytime temperature increases, averaged over the monthly timescale, of 1°C.
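A back-of-the-envelope sketch may help make the albedo mechanism concrete. The albedo and insolation values below are illustrative, textbook-style assumptions, not figures taken from the paper:

```python
# Illustrative surface shortwave absorption for two land covers.
# Albedo values are rough assumptions, NOT values from Georgescu et al. (2009).

SW_DOWN = 1000.0  # W/m^2, plausible clear-sky midday July insolation in Phoenix

def absorbed_shortwave(sw_down, albedo):
    """Shortwave radiation absorbed at the surface: SW_down * (1 - albedo)."""
    return sw_down * (1.0 - albedo)

albedo = {"irrigated agriculture": 0.20, "urban": 0.15}  # assumed values

for cover, a in albedo.items():
    print(f"{cover}: {absorbed_shortwave(SW_DOWN, a):.0f} W/m^2 absorbed")

# With these assumptions, a modest albedo decrease (0.20 -> 0.15) adds
# roughly 50 W/m^2 of absorbed shortwave, part of which is partitioned
# into sensible heat that warms the near-surface air.
```

The same one-line balance also shows why the vapor effect acts in the opposite direction: removing low-level water vapor reduces downward longwave, so the net daytime warming in the paper reflects the competition between these terms.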

This paper focused on the effects of landscape change on key components of the surface radiation and energy budgets and on their influence on near-surface climate. A companion paper (accepted by JGR) addresses the role of these surface budget changes in the resulting thermal gradients, the mesoscale dynamics and thermodynamics, convective rainfall, and the association with the large-scale North American Monsoon System environment.

Real Climate Suffers from Foggy Perception by Henk Tennekes

Roger Pielke Sr. has graciously invited me to add my perspective to his discussion with Gavin Schmidt at RealClimate. If this were not such a serious matter, I would have been amused by Gavin’s lack of knowledge of the differences between weather models and climate models. As it stands, I am appalled. Back to graduate school, Gavin!

A weather model deals with the atmosphere. Slow processes in the oceans, the biosphere, and human activities can be ignored or crudely parameterized. This strategy has been very successful. The dominant fraternity in the meteorological modeling community has appropriated this advantage, and made itself the lead community for climate modeling. Backed by an observational system much more advanced than those in oceanography or other parts of the climate system, they have exploited their lead position for all they can. For them, it is a fortunate coincidence that the dominant synoptic systems in the atmosphere have scales on the order of many hundreds of kilometers, so that the shortcomings of the parameterizations and the observation network, including weather satellite coverage, do not prevent skillful predictions several days ahead.

A climate model, however, has to deal with the entire climate system, which does include the world’s oceans. The oceans constitute a crucial slow component of the climate system. Crucial, because this is where most of the accessible heat in the system is stored. Meteorologists tend to forget that just a few meters of water contain as much heat as the entire atmosphere. Also, the oceans are the main source of the water vapor that makes atmospheric dynamics on our planet both interesting and exceedingly complicated. For these and other reasons, an explicit representation of the oceans should be the core of any self-respecting climate model. 
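The “few meters of water” claim is easy to check with standard physical constants; the arithmetic below is mine, not from the original post:

```python
# Depth of ocean water whose heat capacity per unit area equals that of
# the entire atmospheric column above it.

CP_AIR = 1004.0       # J/(kg K), specific heat of air at constant pressure
CP_WATER = 4186.0     # J/(kg K), specific heat of liquid water
RHO_WATER = 1000.0    # kg/m^3, density of water
P_SURFACE = 101325.0  # Pa, mean sea-level pressure
G = 9.81              # m/s^2, gravitational acceleration

# Hydrostatic balance: the mass of the air column per m^2 is p/g.
air_column_mass = P_SURFACE / G                  # ~1.0e4 kg/m^2
atm_heat_capacity = air_column_mass * CP_AIR     # J/(m^2 K)

equivalent_depth = atm_heat_capacity / (RHO_WATER * CP_WATER)
print(f"Equivalent water depth: {equivalent_depth:.1f} m")  # ~2.5 m
```

About two and a half meters of water, so Tennekes's "just a few meters" is, if anything, conservative given the kilometers-deep ocean.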

However, the observational systems for the oceans are primitive in comparison with their atmospheric counterparts. Satellites that can keep track of what happens below the surface of the ocean have limited spatial and temporal resolution. Also, the scale of synoptic motions in the ocean is much smaller than that of cyclones in the atmosphere, requiring a spatial resolution in numerical models and in the observation network beyond the capabilities of present observational systems and supercomputers. We cannot observe, for example, the vertical and horizontal structure of temperature, salinity and motion of eddies in the Gulf Stream in real time with sufficient detail, and cannot model them at the detail that is needed because of computer limitations. How, for goodness’ sake, can we then reliably compute their contribution to multi-decadal changes in the meridional transport of heat? Are the crude parameterizations used in practice up to the task of skillfully predicting the physical processes in the ocean several tens of years ahead? I submit they are not.

Since heat storage and heat transport in the oceans are crucial to the dynamics of the climate system, yet cannot be properly observed or modeled, one has to admit that claims about the predictive performance of climate models are built on quicksand. Climate modelers claiming predictive skill decades into the future operate in a fantasy world, where they have to fiddle with the numerous knobs of the parameterizations to produce results that have some semblance of veracity. Firm footing? Forget it!

Gavin Schmidt is not the only meteorologist with an inadequate grasp of the role of the oceans in the climate system. In my weblog of June 24, 2008, I addressed the limited perception that at least one other climate modeler appears to have. A few lines from that essay deserve repeating here. In response to a paper by Tim Palmer of ECMWF, I wrote: “Palmer et al. seem to forget that, though weather forecasting is focused on the rapid succession of atmospheric events, climate forecasting has to focus on the slow evolution of the circulation in the world ocean and slow changes in land use and natural vegetation. In the evolution of the Slow Manifold (to borrow a term coined by Ed Lorenz) the atmosphere acts primarily as stochastic high-frequency noise. If I were still young, I would attempt to build a conceptual climate model based on a deterministic representation of the world ocean and a stochastic representation of synoptic activity in the atmosphere.”
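The conceptual model sketched in that quote, a slow deterministic ocean driven by fast atmospheric noise, is essentially Hasselmann's stochastic climate model. A minimal one-variable sketch, with every parameter value chosen purely for illustration:

```python
import math
import random

# Minimal Hasselmann-type stochastic climate model: a slow oceanic
# mixed-layer temperature anomaly T with deterministic damping, forced
# by fast atmospheric "weather noise":
#     dT/dt = -lambda * T + sigma * xi(t)

random.seed(0)          # fixed seed for reproducibility

lam = 1.0 / 300.0       # 1/days: slow oceanic damping (~10-month timescale), assumed
sigma = 0.05            # K/day^0.5: strength of atmospheric noise, assumed
dt = 1.0                # days per step
n_days = 3650           # ten years

T = 0.0
series = []
for _ in range(n_days):
    # Euler-Maruyama step: ocean relaxation plus random atmospheric forcing
    T += -lam * T * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
    series.append(T)

# White-noise forcing integrated by the slow ocean yields red-noise
# variability: multi-year excursions arise with no external cause.
print(f"final anomaly: {series[-1]:+.2f} K")
```

Even this toy model produces slow, decade-scale wanderings purely from integrated weather noise, which is exactly why attributing low-frequency variability is so hard when the ocean is poorly observed.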

From my perspective it is more than a little alarming that the current generation of climate models cannot simulate such a fundamental phenomenon as the Pacific Decadal Oscillation. I will not trust any climate model until and unless it can accurately represent the PDO and other slow features of the world ocean circulation. Even then, I would remain skeptical about the potential predictive skill of such a model many tens of years into the future.
