
About pielkeclimatesci

Research Scientist, University of Colorado

Roger Pielke Sr. is now on Twitter!

You can now follow Roger on Twitter. You can find him at @RogerAPielkeSr


Filed under Uncategorized

2012 Climate Science Weblog in Review by Dallas Jean Staley – A Guest Post

I hope all of our readers have a great 2013, and I hope you enjoy reading the stats for 2012.  You can follow our publications at our research website.  Thanks for all your support over the years!  The stats helper monkeys prepared a 2012 annual report for this blog.

Here’s an excerpt:

About 55,000 tourists visit Liechtenstein every year. This blog was viewed about 440,000 times in 2012. If it were Liechtenstein, it would take about 8 years for that many people to see it. Roger Pielke Sr.'s blog had more visits than a small country in Europe!

Click here to see the complete report.


Filed under Uncategorized

“CO2 As A Carbon Fertilizer For Plants – Effects On Surface Global Temperatures” By Luigi Mariani

Today, we have a guest post by Luigi Mariani, a Senior Agrometeorologist with experience in applied meteorology, climatology, and mathematical modeling of agro-ecosystems at the Università degli Studi di Milano. His post follows.

Figure caption: Two examples of heroic vegetation in urban areas. At the top, the grass Portulaca oleracea L.; at the bottom, the tree Ulmus pumila L. (pictures taken in Milano, Italy).

CO2 as carbon fertilizer for plants – effects on surface global temperatures

By Luigi Mariani

Many things have increased almost monotonically since the end of the Little Ice Age (not only the atmospheric level of CO2, but also the global population, agricultural production, solar activity, global plant biomass, the number of cows, and so on). This post reports some reflections on the effects of the terrestrial plant biomass increase on global climate, and has been written in order to solicit suggestions and criticism.

The colonization of terrestrial environments by vascular plants began during the Cambrian, about 500 million years ago (at that time CO2 levels were 20-30 times the present values), and it is possible to hypothesize an active evolution of plant associations which modified the environment in order to maintain their dominant presence in a growing number of habitats, up to covering the greater part of the terrestrial areas during warm (greenhouse) phases. Taking Liebig's law of the minimum into account, it is possible to think that this expansion was locally limited by the availability of chemical elements (first of all nitrogen and phosphorus), but the only real global constraint on the expansion of vegetation has probably been the advent of the glacial periods, from the Carboniferous glaciation (about 380 million years ago) to the 15 Pleistocene glaciations of the last 2.5 million years.

A key element in interpreting the global vegetation expansion is homeostasis, the property of a system that regulates its internal environment and tends to maintain stable, constant conditions of properties like temperature or pH. Homeostasis is fundamental for vegetation, natural and cultivated, in order to achieve its final aim, which is reproduction. Clearly homeostatic, for example, are the effects of closed canopies, which maintain stable soil temperatures (avoiding extremes that are negative for roots and microbial activity) and exert a stabilizing effect on the atmospheric canopy layer (limiting evapotranspirational losses and favoring the stomatal uptake of CO2 released by soil microbial activity).

The above-mentioned processes are active at the microscale, but relevant effects are present at the macroscale due to the close coupling among scales. A naive expression of this phenomenon is the daisyworld example, with a planet that regulates its albedo by changing the proportions of black and white daisies, an example that pertains to the Gaia hypothesis. Moreover, an analogy could be drawn with ENSO, in which a boundary-layer phenomenon (the abrupt warming of the ocean surface) triggers deep convection that propagates the El Niño signal to the free atmosphere of the whole planet (the ITCZ, the Hadley cell, the westerlies, and the monsoons are affected, and the final result is, for example, the abrupt global warming of 1998).
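The daisyworld idea can be made concrete with a short numerical sketch. This is a minimal version of the classic Watson and Lovelock (1983) model with its standard published parameter values; it is offered here as an illustration of the homeostasis argument, not as part of the original post:

```python
# Minimal Watson-Lovelock (1983) daisyworld; standard published parameters.
S = 917.0          # solar flux constant (W/m^2)
SIGMA = 5.67e-8    # Stefan-Boltzmann constant (W/(m^2 K^4))
Q = 2.06e9         # heat-transfer parameter (K^4)
ALB_GROUND, ALB_WHITE, ALB_BLACK = 0.50, 0.75, 0.25
GAMMA = 0.3        # daisy death rate
T_OPT = 295.5      # optimal growth temperature (K)

def steady_state(lum, steps=20000, dt=0.01):
    """Integrate daisy cover fractions to equilibrium for a luminosity `lum`."""
    a_w = a_b = 0.01                          # seed fractions of white/black daisies
    for _ in range(steps):
        x = 1.0 - a_w - a_b                   # bare-ground fraction
        A = x * ALB_GROUND + a_w * ALB_WHITE + a_b * ALB_BLACK
        te4 = S * lum * (1.0 - A) / SIGMA     # planetary emission temperature^4
        # local patch temperatures (heat conduction between patches)
        t_w = (Q * (A - ALB_WHITE) + te4) ** 0.25
        t_b = (Q * (A - ALB_BLACK) + te4) ** 0.25
        beta_w = max(0.0, 1.0 - 0.003265 * (T_OPT - t_w) ** 2)
        beta_b = max(0.0, 1.0 - 0.003265 * (T_OPT - t_b) ** 2)
        a_w = max(0.01, a_w + dt * a_w * (x * beta_w - GAMMA))
        a_b = max(0.01, a_b + dt * a_b * (x * beta_b - GAMMA))
    return a_w, a_b, te4 ** 0.25

a_w, a_b, t_planet = steady_state(1.0)
print(a_w, a_b, t_planet)  # both daisy types persist; t_planet settles near T_OPT
```

A bare planet at the same luminosity would sit near 300 K; with the daisies active, the simulated planetary temperature settles near the 295.5 K growth optimum, illustrating the self-regulation described above.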

Having stated these general premises, I would like to list the following elements:

1) Simulations made with the low-resolution spectral GCM PUMA show that a world completely covered in vegetation would be much warmer (by several degrees) than a desert world (Fraedrich et al., 2005: The Planet Simulator: green planet and desert world). This simulation takes into account the following effects of vegetation on climate: surface albedo, surface roughness, and soil hydrology.

2) Obviously, the work of Fraedrich et al. does not take into account the mesoscale effects on cloud coverage, which are relevant to global climate because water vapor recycled through evapotranspiration is the main component of continental precipitation. These effects were analyzed in:

Pielke Sr., R.A., 2001: Influence of the spatial distribution of vegetation and soils on the prediction of cumulus convective rainfall. Rev. Geophys., 39, 151-177.

Pielke, R.A. Sr., J. Adegoke, A. Beltran-Przekurat, C.A. Hiemstra, J. Lin, U.S. Nair, D. Niyogi, and T.E. Nobis, 2007: An overview of regional land use and land cover impacts on rainfall. Tellus B, 59, 587-601.

Pielke, R.A. and R. Avissar, 1990: Influence of landscape structure on local and regional climate. Landscape Ecology, 4, 133-155.

4) Paleo-atmospheric ice-core measurements show an increase in global ecosystem productivity of about 25-40% between the Last Glacial Maximum (LGM) and the Pre-Industrial Holocene (PIH), and model simulations give a consistent value of about +30%. This increase probably refers only to terrestrial ecosystems, because marine ones show only marginal variations in the transition from the LGM to the PIH (Prentice I.C., Harrison S.P., Bartlein P.J., 2011. Global vegetation and terrestrial carbon cycle changes after the last ice age. New Phytologist 189: 988–998).

5) Simulations of ancient cereal production (Araus et al., 2003. Productivity in prehistoric agriculture: physiological models for the quantification of cereal yields as an alternative to traditional approaches. Journal of Archaeological Science 30, 681–693) show that the rise of CO2 from the pre-industrial 275 ppmv to 350 ppmv increases cereal production by 40% (and with it, I would guess, the production of many other natural or cultivated C3 species).

6) The above-mentioned increases in vegetation productivity are questioned by authors who hypothesize a limitation due to other nutrients such as nitrogen and phosphorus (Körner C., 2006. Plant CO2 responses: an issue of definition, time and resource supply. New Phytologist 172: 393–411). Nevertheless, a relevant global plant biomass increase is indicated by satellite data (global net ecosystem productivity increased by 6% from 1982 to 1999; source: Robert Simmon, NASA Earth Observatory, based on data provided by the University of Montana Numerical Terradynamic Simulation Group (NTSG)).

7) A NASA Earth Observatory diagram shows that the global warming is largely terrestrial.

8) A metric suggested by Roger Pielke Sr. for examining the energetic role of vegetation is moist enthalpy (also known as equivalent temperature). For example, daytime temperatures are generally reduced over crops during the growing season (even with a lower albedo), but the moist enthalpy is higher. See in particular:

Pielke Sr., R.A., C. Davey, and J. Morgan, 2004: Assessing “global warming” with surface heat content. Eos, 85, No. 21, 210-211.

Davey, C.A., R.A. Pielke Sr., and K.P. Gallo, 2006: Differences between near-surface equivalent temperature and temperature trends for the eastern United States – Equivalent temperature as an alternative measure of heat content. Global and Planetary Change, 54, 19–32.

Fall, S., N. Diffenbaugh, D. Niyogi, R.A. Pielke Sr., and G. Rochon, 2010: Temperature and equivalent temperature over the United States (1979–2005). Int. J. Climatol., DOI: 10.1002/joc.2094.
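The moist-enthalpy point can be made concrete with a small worked example using the equivalent temperature T_E = T + (L_v/c_p)·q. The numbers below are invented for illustration, not taken from the papers cited:

```python
# T_E = T + (L_v / c_p) * q : equivalent temperature as a measure of heat content.
L_V = 2.5e6    # latent heat of vaporization (J/kg)
C_P = 1004.0   # specific heat of air at constant pressure (J/(kg K))

def equivalent_temperature(t_celsius, q):
    """Equivalent temperature: air temperature plus the latent-heat content of
    its water vapour expressed in temperature units (q = specific humidity, kg/kg)."""
    return t_celsius + (L_V / C_P) * q

# hypothetical mid-day readings: a dry bare field vs. an irrigated growing crop
t_e_bare = equivalent_temperature(33.0, 0.006)   # warmer but drier air
t_e_crop = equivalent_temperature(29.0, 0.014)   # cooler but moister air

print(t_e_bare, t_e_crop)  # ~47.9 vs ~63.9: the crop air holds more heat
```

Although the crop air is 4 °C cooler, its equivalent temperature is about 16 °C higher: the surface is cooler in T while the heat content of the near-surface air is larger.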

A possible deduction from this evidence is that when CO2 increases, plant biomass also grows, so that:

1. More water vapour enters the atmosphere, so the latitudinal transport of energy toward the poles is enhanced, as is the greenhouse effect.

2. The global albedo is decreased (the albedo of a desert is higher than that of ground covered with vegetation).

3. The soil water reservoir is emptied faster, so summer drought begins earlier and the ratio of sensible to latent heat flux (H/LE, the Bowen ratio) also increases; on the other hand, mesoscale precipitation is enhanced by vegetation, which decreases H/LE.

As the final result of this causal chain, the sensible heat (H) term of the surface energy balance is emphasized, and accordingly an increase of air temperature is measured by ground weather stations, leading to the general deduction that CO2, acting as a “fertilizer” for plants, could give a positive feedback on global surface temperatures.
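The H/LE reasoning in this causal chain can be sketched with the surface energy balance, Rn - G = H + LE. The following minimal example uses invented flux values purely for illustration:

```python
# Surface energy balance: Rn - G = H + LE, with Bowen ratio B = H / LE.
def partition(avail, bowen):
    """Split available energy (Rn - G, W/m^2) into sensible (H) and latent (LE)
    heat flux for a given Bowen ratio."""
    le = avail / (1.0 + bowen)
    h = avail - le
    return h, le

AVAIL = 450.0  # hypothetical mid-day available energy, W/m^2

# moist vegetated surface: low Bowen ratio, most energy goes into evaporation
h_wet, le_wet = partition(AVAIL, 0.3)
# dried-out surface in late summer: high Bowen ratio, sensible heating dominates
h_dry, le_dry = partition(AVAIL, 2.0)

print(h_wet, le_wet)  # ~103.8 and ~346.2 W/m^2
print(h_dry, le_dry)  # 300.0 and 150.0 W/m^2
```

With the same available energy, the dried-out case sends roughly three times as much energy into sensible heat, which is what warms the near-surface air measured by weather stations.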

Two main questions arise from this reasoning:

1. Is it possible to have an idea of the significance and overall relevance of this phenomenon?

2. Do IPCC GCM simulations take into account the increase in plant biomass, which probably took place in the last 150 years?



Filed under Climate Change Forcings & Feedbacks, Guest Weblogs

Brief Response To Dr. Gerhard Kramm By Nicola Scafetta

Herein I will give a brief response to Dr. Gerhard Kramm’s response to my first reply to his comment on my latest publication:

N. Scafetta, “A shared frequency set between the historical mid-latitude aurora records and the global surface temperature” Journal of Atmospheric and Solar-Terrestrial Physics, in press. DOI: 10.1016/j.jastp.2011.10.013. 

I believe that Kramm is simply missing the point of my argument.

In his reply, Dr. Kramm showed a picture prepared by Solanki, which for convenience I report again below.

The above figure clearly shows that the solar record closely matches the temperature record. Similar results appear in numerous papers that I have authored since 2006, and in numerous papers by other authors as well.

For convenience, I show here again a figure discussed in one of my latest works. It is similar to Kramm's figure, but it compares the temperature record since 1600 against the empirical temperature signature of the solar forcing alone while taking into account the heat capacity of the climate system, which Solanki's figure did not:

N. Scafetta, “Empirical analysis of the solar contribution to global mean air surface temperature change,” Journal of Atmospheric and Solar-Terrestrial Physics 71 1916–1923 (2009), doi:10.1016/j.jastp.2009.07.007.

Both figures clearly suggest that most of the warming observed from the Little Ice Age of the 17th century to recent times can be associated with solar variation alone. As the figure shows, this would be true also for the last decades if the ACRIM total solar irradiance satellite composite is used in the model, given that the ACRIM composite presents an upward trend from 1980 to 2000 and a downward trend since 2000. The ACRIM pattern may be indicative of a 60-year modulation in solar activity, which would explain the two 60-year cycles observed in the climate system since 1850.

Of course, I have never claimed that the sun explains 100% of the observed warming since 1900. From the above figures it is evident that about 50-80% of the observed 1900-2000 warming can be related to the Sun, while the remainder may have alternative causes such as anthropogenic GHGs, urban heat island (UHI), and land use change (LUC) effects, where the UHI and LUC contributions may still be present in the data because of the limitations of the mathematical algorithms presently used to filter them out.

Thus, it is clear that the data show the existence of a very good correlation between solar records and temperature patterns for numerous centuries up to now, as shown in the above two figures. 

Kramm seems to argue that, despite such a good correlation, a strong solar contribution to the observed climate changes must be rejected because, he claims, there is not enough energy in the solar variations to explain the observed climate change.

I am sorry, but I still believe that Kramm is criticizing my works without reading them first.      

In fact, it is overwhelmingly clear in my work that I am arguing about the existence of an ADDITIONAL climate forcing which is related to solar/astronomical changes: a forcing that is not currently included in the climate models adopted by the IPCC. Essentially I am not talking only about a direct solar irradiance forcing, which is the only thing Kramm is thinking about.

My paper makes it overwhelmingly clear that I am referring to an additional solar/astronomical forcing that would directly act on the cloud system through a modulation of the cosmic rays and/or of the electric properties of the top atmosphere. I am referring in particular to the works by Kirkby, Svensmark and Tinsley, as referenced in my paper.

This cloud modulation effect would be driven mostly by changes in the magnetic properties of the heliosphere and magnetosphere (shown below), which can in turn be driven by solar variation and planetary motion. Indeed, contrary to what many people think, the Earth does not move through an empty space environment, as the figures below clearly show.


In my paper I show that if this astronomical forcing modulates the cloud system in such a way that the albedo oscillates with an amplitude of just 1-2%, this can explain most of the observed climate change also from an energetic point of view. The data show the existence of such a modulation in the cloud cover and in the periods of solar dimming and brightening.

Indeed, in my paper I show that if a solar change is accompanied by equivalent oscillations of the albedo of 1-2%, the climate sensitivity to solar changes would increase by 10 times relative to the climate sensitivity to solar irradiance alone. This would be more than enough to address Kramm's objections. In my paper I also show that even if the total solar irradiance were perfectly constant, but other related solar forcings caused an albedo oscillation of 1-2%, this too could be sufficient to explain the temperature oscillations.
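A rough plausibility check of the magnitudes involved (my own back-of-envelope arithmetic, not a calculation taken from the paper, and reading the 1-2% as a relative change of an assumed mean planetary albedo of about 0.30):

```python
S = 1361.0      # total solar irradiance (W/m^2)
ALBEDO = 0.30   # assumed mean planetary albedo

# globally averaged forcing from a 1% and a 2% *relative* oscillation of the albedo
df_albedo_1 = (S / 4.0) * ALBEDO * 0.01
df_albedo_2 = (S / 4.0) * ALBEDO * 0.02

# forcing from a typical ~1 W/m^2 TSI variation over a solar cycle
df_tsi = 1.0 * (1.0 - ALBEDO) / 4.0

r1 = df_albedo_1 / df_tsi
r2 = df_albedo_2 / df_tsi
print(r1, r2)  # roughly 6 and 12
```

Read this way, a 1-2% albedo oscillation corresponds to a globally averaged forcing several times larger than the direct solar-cycle TSI variation, which is the order of magnitude of the amplification discussed above.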

The exact physics of the mechanisms involved in these phenomena is still unknown. That is probably why Kramm does not find it in his textbooks and in current GCM papers. In my opinion, Kramm's comment suggests that he does not accept the idea that, after all, the science might still not be settled.

Essentially, in my paper I am arguing that the climate on the Earth can be influenced by what is known as Space Weather, which alters the electric and magnetic properties of the Earth's space environment (shown in the figure below). Space Weather can be influenced by the Sun and by planetary motion. Space Weather then alters the climate by activating a set of mechanisms that slightly modulate the cloud system, so that we have periods with a little more (less) cloud cover that cause a cooling (warming) at the surface. Because these forcings are quasi-cyclical, the climate approximately presents the same cycles and patterns that we find in the solar system.



Filed under Climate Change Forcings & Feedbacks, Guest Weblogs

Response From Nicola Scafetta On His New Paper on Astronomical Oscillations and Climate Oscillations.

Roger A Pielke Sr asked me to respond to a comment sent to him by Gerhard Kramm of the University of Alaska on my recent paper

N. Scafetta, “A shared frequency set between the historical mid-latitude aurora records and the global surface temperature” Journal of Atmospheric and Solar-Terrestrial Physics, in press. DOI: 10.1016/j.jastp.2011.10.013.

Kramm’s final argument is that “Since the sunspot number may be considered as an indication for the sun’s activity, this weak correlation does not notably support Scafetta’s hypothesis.”

I believe that Dr. Kramm may not be really familiar with the topics addressed in my paper. The issue is complex and I will try to respond, but only a detailed study of my papers and of the relevant scientific literature can fully satisfy an interested reader.

In brief, Dr. Kramm's argument is based on a total solar irradiance model built on the sunspot number record, proposed by Schneider and Mass in 1975, that is, 36 years ago! This proxy reconstruction claims that solar activity is practically constant apart from an 11-year cycle. Because such a reconstruction does not resemble the temperature record in any way, Kramm concluded that it does not support my hypothesis.

I fully agree with Kramm that the solar irradiance reconstruction proposed by Schneider and Mass in 1975 does not support my hypothesis. However, Kramm does not appear to have realized that this reconstruction is today considered severely obsolete.

Reconstructing the past total solar irradiance is not an easy task: there exist only proxy reconstructions, not direct measurements. What is known today is that the sunspot record by itself is not an accurate representation of solar activity and of the heliosphere dynamics.

The figure below shows some of the total solar irradiance reconstructions proposed during the last 15 years. Other records exist.

Figure:  Several proposed total solar irradiance (TSI) proxy reconstructions. (From top to bottom: Hoyt and Schatten, 1997; Lean, 2000; Wang et al., 2005; Krivova et al., 2007.)

As is evident from the figure, different models have produced different solar irradiance reconstructions. All of them differ from the Schneider and Mass model adopted by Kramm to criticize my paper.

Even the total solar irradiance records obtained with satellite measurements are not certain. At least two possible reconstructions have been proposed: the PMOD (top) and the ACRIM (bottom) TSI satellite composites.


In my past papers I have analyzed the relation between some of the above reconstructions and the climate records in great detail, and what I found, for example in

N. Scafetta, “Empirical analysis of the solar contribution to global mean air surface temperature change,” Journal of Atmospheric and Solar-Terrestrial Physics 71 1916–1923 (2009), doi:10.1016/j.jastp.2009.07.007.

is summarized in the following figure


The figure shows the climate signature of the solar component alone against a reconstruction of the climate since 1600. Since 1980, I adopt TSI reconstructions based on ACRIM and PMOD. The match with the climate records is quite good for 400 years, including the last 40 years if we use the ACRIM TSI composite. The temperature, though, presents an additional 0.2-0.3 °C warming that is probably the real net anthropogenic contribution (GHG + aerosol + UHI + LUC + errors in combining the temperature records, etc.) since 1900.

The figure above shows that the climate is mostly regulated by solar changes. However, the matching is not absolutely precise. The reason, in my opinion, is that the TSI proxy reconstructions proposed are not sufficiently accurate yet and there may be additional natural forcings.

So, in my more recent papers I have studied the oscillations of the solar system regulated by planetary orbits, which very likely are the primary external forcings acting on the Sun and the heliosphere. Very likely the Sun and the heliosphere oscillate in the same way, and the Earth's system will likely resonate with those oscillations too.

In my recent paper

N. Scafetta, “Empirical evidence for a celestial origin of the climate oscillations and its implications”. Journal of Atmospheric and Solar-Terrestrial Physics 72, 951–970 (2010), doi:10.1016/j.jastp.2010.04.015

I address the above issues, and I found that the climate system is indeed characterized by the same oscillations found in the astronomical motions driven by planetary and lunar harmonics, with major periods at 9, 10-10.5, 20, and 60 years.

In my latest paper

N. Scafetta, “A shared frequency set between the historical mid-latitude aurora records and the global surface temperature” Journal of Atmospheric and Solar-Terrestrial Physics, in press. DOI: 10.1016/j.jastp.2011.10.013

I show that the mid-latitude historical aurora records since 1700 are also characterized by the same frequencies as the climate system and the planetary system, with major periods of 9, 10-10.5, 20, and 60 years. The mid-latitude historical aurora records represent a direct observation of what was happening in the ionosphere and give us information complementary to what can be deduced from the sunspot record alone. The mid-latitude auroras from Europe and Asia, together with other available records from North America and Iceland, reveal an interesting oscillating dynamic: northern and southern aurora records (which should be understood relative to the magnetic north pole, not the geographic one) present a complementary 60-year cycle, for example, that matches the 60-year cycle observed in the temperature, as suggested in the figure below.


Figure: (A) The 60-year cyclical modulation of the global surface temperature obtained by detrending this record of the upward trend shown in Fig. 1. The temperature record has been filtered with an 8-year moving average. Note that removing a linear or parabolic trend does not significantly deform a 60-year wave on a 160-year record, which contains about 2.5 of these cycles, because first- and second-order polynomials are sufficiently orthogonal to a record containing at least two full cycles. On the contrary, removing higher-order polynomial trends would deform a 60-year modulation on a 160-year record and would be inappropriate. (B) Aurora records from the Catalogue of Polar Aurora <55N in the Period 1000–1900 (Krivsky and Pejml, 1988), for 1700 to 1900. Panel (B) also depicts the catalog of aurora observations from the Faroe Islands from 1872 to 1966. Both temperature and aurora records show a synchronized 60-year cyclical modulation, as shown by the fact that the 60-year periodic harmonic function superimposed on both records is the same. This 60-year cycle is in phase with the 60-61 year cycle associated with Jupiter and Saturn: see Figs. 6 and 7.
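The caption's orthogonality claim is easy to check numerically. Below is a quick synthetic test (a pure sine standing in for the 60-year modulation, not the actual temperature data):

```python
import numpy as np

t = np.arange(160.0)                 # a 160-year record
wave = np.sin(2 * np.pi * t / 60.0)  # a 60-year cycle (~2.7 full periods)

def detrend(y, deg):
    """Subtract the best-fit polynomial of the given degree."""
    return y - np.polyval(np.polyfit(t, y, deg), t)

resid2 = detrend(wave, 2)   # quadratic detrending: wave essentially intact
resid6 = detrend(wave, 6)   # high-order detrending: part of the wave is absorbed

corr2 = np.corrcoef(wave, resid2)[0, 1]
corr6 = np.corrcoef(wave, resid6)[0, 1]
print(corr2, corr6)  # the quadratic fit preserves the 60-year cycle far better
```

Raising the polynomial degree lets the fit absorb part of the 60-year wave itself, which is why the caption calls higher-order detrending inappropriate.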

Silverman (1992), for example, showed the complementary 60-year-cycle pattern in the Faroes and Iceland aurora records in this figure.


In that figure, the 60-year cycle in the Faroes is negatively correlated with the 60-year cycle in the temperature, while the 60-year cycle in Iceland is positively correlated with it, from 1880 to 1940. The same complementary dynamic exists between the mid-latitude European/Asian auroras (which are explicitly studied in my paper) and the American New England auroras (which occupy a northern region relative to the magnetic north pole despite their geographic latitude) for the 1800-1900 period.

This dynamic suggests harmonic changes in the physical properties of the magnetosphere, the ionosphere, and the upper atmosphere in general that appear to be directly linked to astronomical oscillations. It may also suggest a change in the magnetosphere/ionosphere sensitivity to the incoming cosmic ray flux, which can regulate the cloud system. Thus, my paper shows that a complex set of astronomical harmonic forcings of the upper atmosphere very likely exists and very likely alters the electric properties of the atmosphere, which are known to be able to regulate the cloud system, as discussed by Tinsley and Svensmark.

My hypothesis is that the Earth's albedo is likely oscillating with the same frequencies that we find in the solar system, and the temperature at the surface cannot but follow those oscillations too. In the paper, I show that this hypothesis fits the records we have showing cycles in the cloud system and in the solar dimming and brightening patterns, also from an energetic point of view.

For example, a recent paper by Soon et al. (Variation in surface air temperature of China during the 20th century, JASTP, 2011) showed a very good correlation between the 60-year cycle in the temperature record (in this specific case referring to China) and the sunshine duration in Japan, which may be due to a cloud cover oscillation.


Figure:  Annual mean China-wide surface air temperature time series by Wang et al. (2001, 2004)  from 1880 to 2004 correlated with the Japanese sunshine duration of Stanhill and Cohen (2008) from 1890 to 2002 (from Soon et al. 2011).

Other references to cloud and sunshine oscillations presenting a 60-year cycle can be found in my paper.

In fact, in my paper I have argued that small oscillations of the albedo equal to 1-2% may induce climate oscillations compatible with the observations.

The final result of my paper is summarized in the following figure


Figure: Astronomical harmonic constituent model reconstruction and forecast of the global surface temperature. (A) Four-year moving average of the global surface temperature against the climate reconstructions obtained by using the function F1(t)+P1(t) to fit the period 1850–2010 (black solid) and the period 1950–2010 (dashed), and the function F2(t)+P2(t) to fit the period 1850–1950 (dotted). (B) The functions P1(t) and P2(t) represent the periodic modulation of the temperature reproduced by the celestial model based on the five major decadal and multidecadal aurora frequencies. The arrows indicate the local decadal maxima where the good match between the data patterns and the models is observed. Note that in both figures the three model curves almost coincide for more than 200 years and reconstruct and forecast the temperature oscillations well.

The figure clearly shows that my harmonic model based on astronomical/lunar cycles, which is depicted in full in B, can reconstruct and forecast the observed climate oscillations with good accuracy. For example, in B the harmonic model is calibrated over the period 1850-1950 and then shown to forecast the climate oscillations (in red) observed from 1950 to 2011. The model is also calibrated over the period 1950-2011 and shown to hindcast the climate oscillations from 1850 to 1950. The upward trend in A is in part produced by the longer-term solar trend shown in a figure above and has not been added to the harmonic model yet. Indeed, looking at the forecasting results in figure B above, I need to say that they perform far better than the IPCC general circulation models, which have never succeeded in forecasting anything.
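The calibrate-then-forecast procedure described here can be illustrated with a generic harmonic-constituent regression, in the spirit of tidal analysis: fix the periods, fit amplitudes and phases by least squares on the calibration window, then extrapolate. This is a sketch on synthetic data (the periods are those quoted in the post; everything else is invented), not Scafetta's actual model or data:

```python
import numpy as np

PERIODS = [9.1, 10.4, 20.0, 60.0]    # years: the major cycles quoted in the post

def design_matrix(t):
    """Columns: a constant plus a cosine/sine pair for each fixed period."""
    cols = [np.ones_like(t)]
    for p in PERIODS:
        cols.append(np.cos(2 * np.pi * t / p))
        cols.append(np.sin(2 * np.pi * t / p))
    return np.column_stack(cols)

rng = np.random.default_rng(0)
t = np.arange(1850.0, 2011.0)
# synthetic "temperature": two of the cycles plus observational noise
truth = 0.1 * np.sin(2 * np.pi * t / 60.0) + 0.05 * np.cos(2 * np.pi * t / 20.0)
data = truth + 0.02 * rng.standard_normal(t.size)

calib = t < 1950                                   # calibrate on 1850-1949 only
coeffs, *_ = np.linalg.lstsq(design_matrix(t[calib]), data[calib], rcond=None)
forecast = design_matrix(t[~calib]) @ coeffs       # extrapolate to 1950-2010

rmse = np.sqrt(np.mean((forecast - truth[~calib]) ** 2))
print(rmse)  # small compared with the cycle amplitudes
```

The same design-matrix fit applied to the 1950-2010 window and extrapolated backward would play the role of the hindcast described above.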

Of course, I do not claim that my last papers respond to all questions and all related issues. On the contrary, many issues emerge and remain unexplained. This is perfectly normal in science, which is full of mysteries that wait to be explained. Also, my harmonic model may require other frequencies, for example the ocean tides are currently predicted with 35-40 harmonic constituents, while I used only four frequencies in my current model.

However, the merit of my present work, I believe, is to stress the importance of the natural variability of the climate, which has been mostly ignored by the IPCC 2007 modeling, and to show that climate variability contains an important harmonic component very likely linked to astronomical oscillations; therefore, the climate can in principle be forecast within certain limits.

An anthropogenic component also appears to be present, of course, but because the IPCC models do not reproduce the climate's natural variability, those models have significantly overestimated the anthropogenic component, by a large factor of between 2 and 4, as explained in my papers. This indirectly implies that the IPCC warming projections for the 21st century need to be reduced by a correspondingly large factor. Moreover, for the next 30 years the climate may remain steady instead of warming at the rate of 2.3 °C/century as predicted by the IPCC. Longer forecasts may require the addition of longer cycles not yet included in the current work.

Regarding Dr. Kramm's criticism based on the 36-year-old work of Schneider and Mass (1975), I cannot but stress that it reflects a severely poor understanding of present knowledge. Indeed, Dr. Kramm does not seem to have spent much time reading the relevant scientific literature published since 1975 and, in particular, my papers with their numerous references. It is clearly inappropriate to criticize a work without even reading it or trying to become familiar with its topics and arguments, which go far beyond the sunspot number record alone. But, apparently, not everybody understands such elementary logic.


Filed under Guest Weblogs

Schedule Of Presentations At The Third Santa Fe Conference On Global and Regional Climate Variability, October 31-November 4, 2011

This promises to be an interesting conference, with a diverse set of viewpoints presented. The schedule is given below. [The formatting is not entirely clean, but the titles and presenters should be clear enough.]


The Third Santa Fe Conference on Global and Regional Climate Variability, October 31-November 4, 2011

Schedule of Presentations 

Monday Morning, October 31, 2011
Registration and continental breakfast   7:20-8:20
Welcome: Duncan McBranch, LANL, Deputy Principal Associate Director   8:20-8:30
Introduction: Petr Chylek   8:30-8:40
M-I: Models, Forcing, and Feedbacks  (Chairs: Jerry North and  V. Ramaswamy)
M-1: P. Huybers (Harvard) Regional Temperature Predictions from a Minimalist Model   8:50-9:10
M-2: J. Curry (Georgia Tech) A Critical Look at the IPCC AR4 Climate Change Detection and Attribution   9:10-9:30
M-3: R. Lindzen (MIT) Climate v. Climate Alarm   9:30-9:50
M-4: A. Tsonis (Wisconsin) A new dynamical mechanism for major climate shifts   9:50-10:10

Discussion   10:10-10:25
Coffee and Refreshment   10:25-10:55
M-II: Aerosols and Clouds  (Chairs: Hans von Storch and Jon Reisner)  
M-5: P. Rasch (PNNL) Exploration of aerosol, cloud and dynamical feedbacks in the climate-cryosphere system   10:55-11:15
M-6: D. Rosenfeld (Hebrew U Jerusalem) Number of activated CCN as a key property in cloud-aerosol interactions   11:15-11:35
M-7: W. Cotton (CSU) Potential impacts of aerosols on water resources in the Colorado River Basin   11:35-11:55
M-8: B. Stevens (Max Planck Institute) The Cloud Conundrum   11:55-12:15

Discussion   12:15-12:30

Monday Afternoon, October 31
M-III: The Arctic (Chairs: Peter Webster and William Lipscomb)
M-9:  I. Polyakov (U Alaska) Recent and Long-Term Changes in the Arctic Climate System   2:00-2:20
M-10: J. Sedlacek (ETH Zurich) Impact of a reduced sea ice cover on lower latitudes   2:20-2:40
M-11: S. Mernild (LANL) Accelerated melting and disappearance of glaciers and ice caps.   2:40-3:00  
M-12: D. Easterbrook (Western Washington U) Ice core isotope data: The past is the key to the future   3:00-3:20

Discussion   3:20-3:35
Coffee and Refreshment     3:35-4:05

M-IV: Models, Forcing, and Feedbacks  (Chairs: Anastasios Tsonis and Anjuli Bamzai)
M-13: J-S von Storch (Max Planck Institute) Dynamical impact of warming pattern     4:05-4:25
M-14: Q. Fu (U Washington) Warming in the tropical upper troposphere: Models versus observation   4:25-4:45
M-15: S. Schwartz (BNL) Earth’s transient and equilibrium climate sensitivities   4:45-5:05
M-16: R. Salawitch (U Maryland) Impact of aerosols, ocean circulation, and internal feedbacks on climate   5:05-5:25
M-17: N. Andronova (U Michigan) Climate sensitivity and climate feedbacks   5:25-5:45
Discussion   5:45-6:00

Poster Session P-I  (with Refreshment)   6:00-8:00
Poster Session P-I
Monday, October 31
Chairs:  Graeme Stephens, Roger Davis, and Brad Flowers
Tim Garret, U Utah
Will a warmer Arctic be a cleaner Arctic?
H. von Storch, A. Bunde,
Inst. of Coastal Res., Germany
Examples of using long term memory in climate analysis
P. Chylek, C. Folland, et al
LANL, UK Met Office
Observed and model simulated 20th century Arctic temperature variability: Anthropogenic warming and natural climate variability
K. McKinnon, P. Huybers, Harvard U
The fingerprint of ocean on seasonal and inter-annual temperature change
Anthony Davis, JPL
Frontiers in Remote Sensing: Multi-Pixel and/or Time-Domain Techniques
Christopher Monckton
Is CO2 mitigation cost-effective?
H. Moosmuller, et al
Desert Res. Inst., U Nevada
A Development of a Super-continuum Photoacoustic Aerosol Absorption and Albedo Spectrometer for the Characterization of Aerosol Optics
H. Inhaber, Risk Concept
Will Wind Fulfill its Promise of CO2 Reductions?
M. Chen, J. Rowland, et al
Temporal and Spatial Patterns in Thermokarst Lake Area Change in Yukon Flats, Alaska: an Indication of Permafrost Degradation
M. Kafatos, H. El-Askary, et al
Schmid College, WMO
Multi-Model Simulations and satellite observations for Assessing Impacts of Climate Variability on the Agro-ecosystems
C. Xu, et al, LANL, NCAR
Toward a mechanistic modeling of nitrogen limitation on vegetation dynamics
H. Hayden, U Connecticut
Doing the Obvious: Linearizing
L. Hinzman, U Alaska
The Need for System Scale Studies in Polar Regions
X. Jiang, et al, LANL, NCAR
Regional-scale vegetation die-off in response to climate Change in the 21st century

Tuesday Morning, November 1
Registration and continental breakfast   7:30-8:30
T-I: Models, Forcing and Feedbacks  (Chairs: Peter Huybers and Joel Rowland)
T-1: V. Ramaswamy (NOAA GFDL) Addressing the leading scientific challenges in climate modeling,   8:30-8:50
T-2: P. Webster (Georgia Tech) Challenges in deconvoluting internal and forced climate change   8:50-9:10
T-3: H. von Storch (Institute for Coastal Research, Hamburg) Added value generated by regional climate models   9:10-9:30   
T-4: A. Solomon (U Colorado) Decadal predictability of tropical Indo-Pacific Ocean temperature trends   9:30-9:50
Discussion     9:50-10:05
Coffee and Refreshment   10:05-10:35
T-II: Observations (Chairs: Judy Curry and Manvendra Dubey)
T-5: S. Wofsy (Harvard) HIAPER Pole to Pole Observations (HIPPO) of climatically important gases and aerosols   10:35-10:55
T-6: R. Muller (UC Berkeley) The Berkeley Earth Surface Temperature Land Results     10:55-11:15
T-7: R. Rohde (Berkeley Temp Project) A new estimate of the Earth land surface temperature   11:15-11:35
T-8: F. Singer (SEPP) Is the reported global surface warming of 1979 to 1997 real?   11:35-11:55
T-9: J. Xu (NOAA) Evaluation of temperature trends from multiple Radiosondes and Reanalysis products   11:55-12:15
Discussion   12:15-12:30

Tuesday Afternoon, November 1
T-III: Cosmic Rays, and the Sun  (Chairs: Don Wuebbles and Anthony Davis)
T-10: P. Brekke (Space Center, Norway) Does the Sun Contribute to climate change? An update   2:00-2:20
T-11: G. Kopp (U Colorado) Solar irradiance and climate   2:20-2:40
T-12: A. Shapiro (World Radiation Center, Davos) Present and past solar irradiance: a quest for understanding     2:40-3:00  
T-13: B. Tinsley (U Texas) The effects of cosmic rays on CCN and climate     3:00-3:20
Discussion   3:20-3:35
Coffee and Refreshment   3:35-4:05

T-IV: Aerosols and Clouds (Chairs: William Cotton and Daniel Rosenfeld)
T-14:  J. Vernier (NASA Langley) Accurate estimate of the stratospheric aerosol optical depth for climate simulations     4:05-4:25
T-15: J. Coakley (Oregon SU) Knowledge gained about marine stratocumulus and the aerosol indirect effect   4:25-4:45
T-16: G. Stephens (NASA JPL) Clouds, aerosols, radiation, rain and climate   4:45-5:05
T-17: J. Augustine (NOAA) Surface radiation budget measurements from NOAA’s SURFRAD network   5:05-5:25
T-18: G. Jennings (Ireland National U) Direct Radiative Forcing over the North East Atlantic   5:25-5:40
Discussion   5:40-5:55
Banquet   6:30-8:00
B-1: Judy Curry (Georgia Tech) The uncertainty monster at the climate science-policy interface
B-2: Anjuli Bamzai (NSF) Global and regional climate change research at NSF: Current activity and future plans

Wednesday Morning, November 2
Registration and continental breakfast   7:10-8:10
W-I: Weather, Climate, and Arctic Terrestrial Processes (Chairs: Larry Hinzman and Cathy Wilson)
W-0: T. Schuur (U Florida) Vulnerability of Permafrost Carbon Research Coordination Network   8:10-8:30
W-1: H. Epstein (U Virginia) Recent dynamics of arctic tundra vegetation: Observations and modeling   8:30-8:50
W-2: E. Euskirchen (U Alaska) Quantifying CO2 fluxes across permafrost and soil moisture gradients in arctic Alaska   8:50-9:10
W-3: D. Lawrence (NCAR) High-latitude terrestrial climate change feedbacks in an Earth System Model   9:10-9:30   
W-4: D. Wuebbles (U Illinois) Severe weather in a changing climate   9:30-9:50

Discussion   9:50-10:05
Coffee and Refreshment   10:05-10:35
W-II: The Arctic  (Chairs: Qiang Fu and Keeley Costigan)
W-5: M. Flanner (U Michigan) Arctic climate: Unique vulnerability and complex response to aerosols   10:35-10:55
W-6: R. Stone (NOAA) Characterization and direct radiative impact of Arctic aerosols: Observed and modeled   10:55-11:15
W-7: M. Zelinka (LLNL) Climate feedbacks and poleward energy flux changes in a warming climate   11:15-11:35
W-8: G. De Boer (U Colorado) The present-day Arctic atmosphere in CCSM4   11:35-11:55
W-9: R. Peltier (U Toronto) Rapid climate change in the Arctic: the case of Younger-Dryas cold reversal     11:55-12:15

Discussion   12:15-12:30
Wednesday Afternoon, November 2
W-III: Arctic and Global Climate Variability (Chairs: Igor Polyakov and Sebestian Mernild)
W-10: G. North (Texas A&M) Looking for climate signals in ice core data   2:00-2:20
W-11: T. Kobashi (National Inst Polar Research, Tokyo) High variability of Greenland temperature over the past 4000 years   2:20-2:40
W-12: M. Palus (Inst Comp Sci, Prague) Phase coherence between solar/geomagnetic activity and climate variability     2:40-3:00  
W-13: N. Scafetta (Duke U) The climate oscillations: Analysis, implication and their astronomical origin   3:00-3:20

Discussion   3:20-3:35
Coffee and Refreshment   3:35-4:05
W-IV: Greenhouse Gases, Aerosols, and Energy Balance (Chairs: Steve Wofsy and James Coakley)
W-14: M. Dubey (LANL) Multiscale greenhouse gas measurements of fossil energy emissions and climate feedbacks   4:05-4:25
W-15: C. Loehle (Nat Council for Air Improvement) Climate change attribution using empirical decomposition     4:25-4:45
W-16: R. Davies (U Auckland) The greenhouse effect of clouds: Observation and theory   4:45-5:05
W-17: V. Grewe (Inst Atmos Physics, Oberpfaffenhofen) Attributing climate change to NOx emissions   5:05-5:25
Discussion   5:25-5:40
Poster Session P-II   5:40-7:00
Poster Session P-II

Wednesday November 2, 2011
Chairs: Mark Flanner, Hans Moosmuller, and Dave Higdon
Chris Borel-Donohue,
Air Force Institute of Technology
Novel Temperature/Emissivity Separation Algorithms for Hyperspectral Imaging Data
R. Stone, J. Augustine, E. Dutton,    NOAA, Earth System Res. Lab.
Radiative Forcing Efficiency of the Fourmile Canyon Fire Smoke: A Near-Perfect Ad Hoc Experiment
Fred Singer,
Are observed and modeled patterns of temperature trends ‘Consistent’? Comparing the ‘Fingerprints’
Brian A Tinsley,
University of Texas at Dallas
Charge Modulation of Aerosol Scavenging (CMAS): Causing Changes in Cyclone Vorticity and European Winter Circulation?
A. V. Shapiro, et al, World Rad. Center, Davos, Switzerland
The stratospheric ozone response to a discrepancy of the SSI data
M. Palus, et al, Inst. of Computer Science, Prague, Czech Republic
Discerning connectivity from dynamics in climate networks
Mark Boslough, SNL
Comparison of Climate Forecasts: Expert Opinions vs. Prediction Markets
C. Gangodagamage, et al
Clustering and Intermittency of Daily Air Temperature Fluctuations
in the North-Central Temperate Region of the U.S.
Michael LuValle,
OFS Laboratories
Suggested attribution of Irene’s flooding in New Jersey (2011) via statistical postdiction derived from chaos theory
A. Winguth, et al.,
University of Texas, Arlington
Climate Response at the Paleocene-Eocene Thermal Maximum to Greenhouse Gas Forcing – An Analog for Future Climate Change
David Mascarenas, et al
The development of Autonomous Mobile Sensor Nodes for CO2 Source/Sink                 Characterization
Richard Field, Paul Constantine, and Mark Boslough, SNL
Statistical Surrogate Models for Estimating Probability of High-Consequence Climate Change
Steve Schwartz, BNL
Earth’s transient and equilibrium climate sensitivities

Thursday Morning, November 3
Registration and continental breakfast   7:30-8:30
Th-I: Theory, Experiment, and Observations (Chairs: Brian Tinsley and Nick Hengartner)
Th-1: J. Curtius (Frankfurt U) Atmospheric aerosol nucleation in the CLOUD experiment at CERN   8:30-8:50
Th-2: E. Dunne (U Leeds) The influence of ion-induced nucleation on atmospheric aerosols in CERN CLOUD experiment   8:50-9:10
Th-3: W. Hsieh (UBC) Machine learning methods in climate and weather research   9:10-9:30
Th-4: C. Essex (U Western Ontario) Regime algebra and climate theory   9:30-9:50
Discussion   9:50-10:05
Coffee and Refreshment   10:05-10:35
Th-II: Atlantic Ocean and Climate (Chairs: Anastasios Tsonis and Nicola Scaffeta)
Th-5: M. Hecht (LANL) A perspective on some strengths and weaknesses of ocean climate models   10:35-10:55
Th-6: L. Frankcombe (Utrecht U) Atlantic multidecadal variability – a stochastic dynamical systems point of view   10:55-11:15
Th-7: S. Mahajan (ORNL) Impact of the AMOC on Arctic Sea-ice variability   11:15-11:35
Th-8: P. Chylek (LANL) Ice core evidence for a high spatial and temporal variability of the AMO   11:35-11:55
Th-9: M. Vianna (Oceanica, Brazil) On the 20 year sea level fluctuation mode in Atlantic Ocean and the AMO   11:55-12:15

Discussion   12:15-12:30

Thursday Afternoon, November 3

Th-III: Climate Change and Vegetation (Chairs: Michael Cai and Thom Rahn)
Th-10: N. McDowell (LANL) Climate, carbon, and vegetation mortality   2:00-2:20
Th-11: D. Gutzler (UNM) Observed and projected hydroclimatic variability and change in the southwestern United States     2:20-2:40
Th-12: C. Allen (USGS) Tree mortality and forest die-off response to climate change stresses at regional to global scales   2:40-3:00
Th-13: J. Chambers (LBL) Carbon balance of an old-growth Central Amazon forest   3:00-3:20
Discussion   3:20-3:35
Coffee and Refreshment   3:35-4:05
Th-IV: Climate Change and Economics (Chairs: Richard Lindzen and John Augustine)
Th-14: T. Garrett (U Utah) Thermodynamic constrains on long-term anthropogenic emission scenarios   4:05-4:25
Th-15: C. Monckton   Is CO2 mitigation cost-effective?   4:25-4:45
Th-16: D. Pasqualini (LANL) Does the climate change the economy? An investigation on local economic impact   4:45-5:05
Th-17: M. Boslough (SNL) Using prediction market to evaluate various global warming hypotheses   5:05-5:25
Discussion     5:25-5:40      

Friday Morning, November 4
Registration and continental breakfast   7:30-8:30
F-I: Observations (Chairs: Steve Love and Brad  Henderson)
F-1: A. Davis (NASA JPL) Cloud and aerosol remote sensing: Thinking outside the photon state-space box   8:30-8:50
F-2: H. Moosmuller (DRI U Nevada) Aerosol optics, direct radiative forcing, and climate change   8:50-9:10
F-3: N-A Morner (Paleogeophysics, Stockholm) Sea level changes in the Indian Ocean: Observational facts   9:10-9:30   
F-4: O. Kalashnikova (NASA JPL) MISR decadal aerosol observations   9:30-9:50
Discussion     9:50-10:05
Coffee and Refreshment   10:05-10:35
F-II: Models, Forcing, and Feedbacks  (Chairs: Tim Garrett and Chris Essex)
F-5: D. Lemoine (U Arizona) Formalizing uncertainty about climate feedbacks   10:35-10:55
F-6: P. Knappenberger, Short-term climate model projected trends of global temperature and observations   10:55-11:15
F-7: C. Keller (LANL) Solar forcing of climate: A review   11:15-11:35
F-8: W. Gray (CSU) Recent multi-century climate changes as a result of variation in the global ocean’s deep MOC   11:35-11:55
F-9: C. Folland (UK Met Office) Global surface temperature trends from six forcing and internal variability factors   11:55-12:15
Discussion   12:15-12:30
Conference ends   12:30


Comments Off

Filed under Climate Science Meetings

Global Annual Radiative Imbalance Relative To The Interannual Radiative Imbalance

The seminal paper

Ellis et al. 1978: The annual variation in the global heat balance of the Earth. J. Geophys. Res., 83, 1958-1962

provides an effective framework for comparing the magnitude of the global annual average radiative imbalance with the intra-annual variation in the global average radiative imbalance.

As evident in the figure from their paper, the variation within the year is ~27 Watts per meter squared.  These numbers certainly could have been updated since their 1978 paper (and I welcome e-mails that provide an updated intra-annual top of the atmosphere radiative imbalance), but we can use them for comparison with estimates of the annual global average radiative imbalance predicted by the multi-decadal global models.

Jim Hansen provides a succinct summary in his communication to me in 2005 (see)

Contrary to the claim of Pielke and Christy, our simulated ocean heat storage (Hansen et al., 2005) agrees closely with the observational analysis of Willis et al. (2004). All matters raised by Pielke and Christy were considered in our analysis and none of them alters our conclusions.

The Willis et al. measured heat storage of 0.62 W/m2 refers to the decadal mean for the upper 750 m of the ocean. Our simulated 1993-2003 heat storage rate was 0.6 W/m2 in the upper 750 m of the ocean. The decadal mean planetary energy imbalance, 0.75 W/m2, includes heat storage in the deeper ocean and energy used to melt ice and warm the air and land. 0.85 W/m2 is the imbalance at the end of the decade.

If we use the 0.85 W/m2 at the end of the 1990s as the estimate for the magnitude of global warming (as predicted by the GISS model), this is about 3% of the ~27 W/m2 variation in the global average radiative imbalance during the year. This explains why it is so difficult to skillfully measure this quantity: it is such a small fraction of the variation within the year.  It also explains (as everyone seems to agree) that natural climate variations can produce large enough interannual variations in the top of the atmosphere radiative imbalance that we need to look at longer time periods.

We can do that by assessing the observed and the modeled accumulation of heat over multiple years.  I have posted on this a number of times in the past; e.g. see my most recent in

Comments On the British Met Office Press Release “Pause In Upper Ocean Warming Explained”

Since a radiative imbalance of 0.85 Watts per meter squared corresponds to about 1.38 x 10^23 Joules per decade, as discussed in

Pielke Sr., R.A., 2003: Heat storage within the Earth system. Bull. Amer. Meteor. Soc., 84, 331-335.

the global warming signal should eventually emerge even in the intra-annual data.
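That conversion can be checked in a couple of lines (using the standard round value of 5.1 x 10^14 m^2 for the Earth's surface area):

```python
# Convert a global-mean radiative imbalance (W/m^2) into Joules accumulated
# per decade. 5.1e14 m^2 is the standard round value for Earth's surface area.
EARTH_AREA_M2 = 5.1e14
SECONDS_PER_DECADE = 10 * 365.25 * 86400

def joules_per_decade(imbalance_w_m2):
    """Heat accumulated by the Earth system over one decade [J]."""
    return imbalance_w_m2 * EARTH_AREA_M2 * SECONDS_PER_DECADE

print(f"{joules_per_decade(0.85):.2e} J per decade")  # ~1.4e23 J
```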

Recent analyses, however, which present the intra-annual variations in the radiative imbalance do not show a radiative imbalance of the magnitude reported by Jim Hansen, such as in the figure by Josh Willis in

Pielke Sr., R.A., 2008: A broader view of the role of humans in the climate system. Physics Today, 61, Vol. 11, 54-55.


Figure 1 in

R. S. Knox and David H. Douglass, 2010: Recent energy balance of Earth. International Journal of Geosciences, vol. 1, no. 3 (November), in press. doi:10.4236/ijg2010.00000

The recent paper

C. A. Katsman and G. J. van Oldenborgh, 2011: Tracing the upper ocean’s ‘missing heat’. Geophysical Research Letters.

unfortunately, does not present the model's intra-annual variations.

My recommendations to move forward on this include:

1. The multi-decadal global modelling groups should present the intra-annual variations of the global average radiative imbalance for each year of their predictions.

2. The observed intra-annual global average radiative imbalance should be made available in real-time, as are other climate metrics, such as sea ice (e.g. see) and the global average lower tropospheric temperature anomalies (see and see).

With this added information, we would be able to come closer to resolving the real magnitude of global warming over decadal and longer time scales.

Comments Off

Filed under Climate Change Forcings & Feedbacks

Tim Curtin’s Response to Jos De Laat’s Comments

On June 22, 2011 the post

Guest post by Dr. Jos de Laat, Royal Netherlands Meteorological Institute [KNMI]

was presented which commented on an earlier post by Tim Curtin titled

New Paper “Econometrics And The Science Of Climate Change” By Tim Curtin

Tim has provided a response to Jos’s post which is reproduced below.

Reply By Tim Curtin

I am very glad to have Jos de Laat’s comments on my paper, not least because I know and admire his work. I agree with much if not all of what he says, and fully accept his penultimate remark: “estimating the effect of anthropogenic H2O should include all the processes relevant to the hydrological cycle, which basically means full 3-D climate modelling”.  I begin by going through his points sequentially.

1.         Jos said “in the past I had done some back-of-the-envelope calculations about how much water vapour (H2O) was released by combustion processes. Which is a lot, don’t get me wrong, but my further calculations back then suggested that the impact on the global climate was marginal.  Since Curtin [2011] comes to a different conclusion, I was puzzled how that could be”. Well, using my paper’s equation (1) and its data for the outputs from hydrocarbon combustion, I found that combustion currently produces around 30 GtCO2 and 18 GtH2O per annum. Given that the former figure, with its much lower radiative forcing than that from H2O, is considered to be endangering the planet, I would have thought even only 18 GtH2O must also be relevant – not necessarily in terms of total atmospheric H2O (which I henceforth term [H2O]), but as part of the global warming supposedly generated by the 30 GtCO2 emitted every year by humans. To this should be added, as my paper notes, the 300 GtH2O of additions to [H2O] from the water vapor generated by the cooling systems of most thermal and nuclear power stations.
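The rough proportion of H2O to CO2 in combustion products follows directly from stoichiometry; a quick sketch (fuels idealized as pure methane, octane, and carbon) gives the mass ratios:

```python
# H2O:CO2 mass ratio of combustion products for idealized fuels.
M_CO2, M_H2O = 44.01, 18.02  # molar masses [g/mol]

def h2o_per_co2(mol_co2, mol_h2o):
    """Mass of H2O emitted per unit mass of CO2."""
    return (mol_h2o * M_H2O) / (mol_co2 * M_CO2)

# CH4 + 2 O2 -> CO2 + 2 H2O
print(f"methane: {h2o_per_co2(1, 2):.2f}")    # ~0.82
# 2 C8H18 + 25 O2 -> 16 CO2 + 18 H2O
print(f"octane:  {h2o_per_co2(16, 18):.2f}")  # ~0.46
# C + O2 -> CO2 (a hydrogen-free fuel emits no water)
print(f"carbon:  {h2o_per_co2(1, 0):.2f}")    # 0.00
```

On these ratios, an 18:30 proportion of H2O to CO2 (0.6) is roughly consistent with a fuel mix weighted toward hydrogen-rich fuels such as natural gas.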

2.         The next key point is not how much [H2O] there is across the surface of the globe, but how much at the infrared spectrum wavelengths, and how much of that varies naturally relative to the incremental annual extra fluxes generated by the total H2O emissions from hydrocarbon combustion and the cooling process of power generation.

3.         Then, if we do accept de Laat’s claim that the quantity of [H2O] per sq. metre is relevant, then that also applies to the annual NET increase in atmospheric [CO2] in 2008-2009 of just 14 GtCO2 (from TOTAL emissions, all sources including LUC, of 34.1 GtCO2) and that is much less than the total 33 GtH2O from just hydrocarbon combustion.[1] How much is the net increase in [CO2] per square metre? See Nicol (2011: Fig. 6, copy attached below).

4.         Pierrehumbert’s main omission is the [H2O] emitted during the cooling process. Let us recall what that involves, namely collection of water from lakes and rivers, using it to cool steam-driven generators, which produces emissions of steam (Kelly 2009), which is then released to the atmosphere through the cooling towers at the left of the photograph Roger put at the head of de Laat’s post, and it soon evaporates to form [H2O] and then precipitates back to earth after about 10 days, as de Laat notes. What is significant is the huge acceleration of the natural flux of evaporation of surface water to the atmosphere and then back again as rain after 10 days.  Natural evaporation is a very SLOW process, power station cooling towers speed that up enormously.  As my paper footnoted, cooling the power stations of the EU and USA would need at least 25% of the flow of the Rhine, Rhone and Danube rivers, but how much do those rivers contribute to ordinary evaporation over a year? For another order of magnitude, average daily evaporation in Canberra is around 2 mm, rather more than its annual mean rainfall of 600 mm. That is why we have to rely on dams for our water needs!

5.         My paper cites Pierrehumbert at some length, but I regret that his recent uncalled for attack on Steve McIntyre and Ross McKitrick has led me to change my opinion of him.

6.         The graph below is from John Nicol (with his permission); he is an Australian physics professor (James Cook University). It shows how [CO2], like [H2O], operates close to the surface of the globe, not in the stratosphere or upper troposphere as perhaps de Laat would have it.


Caption to Figure 6: John Nicol’s diagram shows the power absorbed by carbon dioxide within a sequence of 10 m thick layers up to a height of 50 metres in the troposphere. The curves represent the level of absorption for concentrations of CO2 equal to 100%, 200% and 300% of the reported current value of 380 ppm. As can be seen, the magnitude of absorption for the different concentrations is largest close to the ground, and the curves cross over at heights between 3 and 4 metres, reflecting the fact that for higher concentrations of CO2 more radiation is absorbed at the lower levels, leaving less power for absorption in the upper regions.

[1], and CDIAC for atmospheric CO2.


Comments Off

Filed under Climate Change Forcings & Feedbacks, Guest Weblogs

Uncertainty in Utah: Part 3 on The Hydrologic Model Data Set by Randall P. Julander

Uncertainty in Utah Hydrologic Data: Part 3 - The Hydrologic Model Data Set

A three-part series that examines some of the systematic bias in Snow Course, SNOTEL, and Streamflow Data and in Hydrologic Models

Randall P. Julander Snow Survey, NRCS, USDA


Hydrologic data collection networks – and, for that matter, all data collection networks – were designed, installed, operated, and maintained to solve someone’s problem. From the selection of sensors to the site location, every detail of a network is chosen to serve the network’s purpose. For example, the SNOTEL system was designed for water supply forecasting; while it is useful for avalanche forecasting, SNOTEL sites are in the worst locations for the data avalanche forecasters want, such as wind loading, wind speed/direction, and snow redistribution.

All data collection networks have bias, both random and systematic. Use of data from any network for any purpose – including the intended one, but especially for any other purpose – should begin with an evaluation of data bias; that is the first step in quality research. Research that links a specific observation or change to a relational cause can be severely compromised if the data set has unaccounted systematic bias. Many recent papers utilizing Utah hydrologic data have not identified or removed systematic bias from the data. The implicit assumption is one of data stationarity – that all things except climate are constant through time, so that observed change in any variable can be attributed directly to climate change.

Watersheds can be characterized as living entities that change fluidly through time. Streamflow is the last check paid in the water balance – it is the residual after all other bills have been paid, such as transpiration, evaporation, sublimation, and all other losses. Water yield from any given watershed can be affected by vegetation change and by watershed management such as grazing, forestry practices, mining, diversions, dams, and a host of related factors. In order to isolate and quantify changes in water yield due to climate change, these other factors must also be identified and quantified.
Operational hydrologic models for the most part grossly simplify the complexities of watershed response because of the lack of data. Typically they operate on snow and precipitation data as water-balance inputs, temperature as the sole energy input, gross estimates of watershed losses (mostly represented by a generic rule curve), and streamflow as an output to achieve a mass balance. Temperature is not the main energy driver in snowmelt; shortwave solar energy is. Hydrologic models using temperature as the sole energy input can therefore overestimate the impacts of warming.

Hydrologic Models

Operational hydrologic models on the whole are a very simplistic lot. They represent the huge complexity of watershed processes in a few lines of code by averaging or lumping inputs such as precipitation and temperature, and by defining a few relatively ‘homogeneous’ areas of supposedly similar characteristics. Aside from the systematically biased streamflow, snowpack, temperature, and precipitation data these models calibrate against – and the adage applies: garbage in, garbage out, or biased data in, bias continues in the output – many of these models have been simplified to the point where they may not be able to accurately quantify climate change outside the normal range of calibration.


This figure represents the workhorse of hydrologic models, the Sacramento model. It is a ‘tank’-based model in which tanks of various sizes hold water from inputs such as snowmelt and precipitation and then release it to streamflow. The basic tanks are an upper zone and a lower zone, with the lower zone divided into two separate tanks. The water level in each tank determines its outflow to the stream; in just a few lines of code – essentially adding up the flow from each tank – we obtain streamflow for a given time step. Precipitation, or snowmelt derived from a simple mean basin temperature, provides input to the surface tank, which in turn feeds the lower tanks. The energy input to the snowpack portion of the model is air temperature. In an operational context air temperature is the most widely used variable because it is normally the only data available, and it is highly correlated with total energy input over the normal range of model calibration. Some models, such as the Snowmelt Runoff Model of Martinec and Rango, have deliberately chosen the simplest inputs and outputs so as to be useful over a wide range of areas.

              Snowmelt Runoff Model Structure (SRM)

Each day, the water produced from snowmelt and from rainfall is computed, superimposed on the calculated recession flow and transformed into daily discharge from the basin according to Equation (1):

Qn+1 = [cSn · an (Tn + ΔTn) Sn + cRn Pn] · [(A · 10000)/86400] · (1 - kn+1) + Qn kn+1   (1)

where: Q = average daily discharge [m3s-1]

c = runoff coefficient expressing the losses as a ratio (runoff/precipitation), with cS referring to snowmelt and cR to rain

a = degree-day factor [cm oC-1d-1] indicating the snowmelt depth resulting from 1 degree-day

T = number of degree-days [oC d]

ΔT = the adjustment by temperature lapse rate when extrapolating the temperature from the station to the average hypsometric elevation of the basin or zone [oC d]

S = ratio of the snow covered area to the total area

P = precipitation contributing to runoff [cm]. A preselected threshold temperature, TCRIT, determines whether this contribution is rainfall and immediate. If precipitation is determined by TCRIT to be new snow, it is kept in storage over the hitherto snow-free area until melting conditions occur.

A = area of the basin or zone [km2]

This is the whole SRM hydrologic model – a synopsis of all the complexities of the watershed summarized in one short equation. It is a great model for its intended purpose. The energy portion of the model is a simple degree-day factor with an adjustment: if the average daily temperature, modified by the adjustment factor, is above zero by some amount, melt occurs and is processed through the model. The greater the temperature, the more melt occurs. So how and why do these very simple models work? Because streamflow is itself the result of many processes across the watershed that tend to blend and average over time and space. As long as the relationships between all those processes remain relatively constant, the models do a good job. Throw one factor into an anomalous condition, however – say, soil moisture – and model performance tends to degrade quickly.
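The SRM time step in Equation (1) can be sketched in a few lines; this is a minimal one-zone version, and all parameter values below are illustrative, not calibrated to any basin:

```python
# Minimal one-zone sketch of the SRM time step (Equation 1); parameter
# values below are illustrative, not calibrated to any basin.
def srm_step(q_n, t_n, dt_n, p_n, s_n,
             c_s=0.5, c_r=0.6, a=0.45, k=0.9, area_km2=100.0):
    """Advance daily discharge Q [m^3/s] by one time step.

    t_n: degree-days [deg C d]; dt_n: lapse-rate adjustment; p_n: rain [cm];
    s_n: snow-covered fraction; a: degree-day factor [cm/(deg C d)].
    """
    melt_cm = c_s * a * max(t_n + dt_n, 0.0) * s_n  # snowmelt depth
    rain_cm = c_r * p_n                             # rain contribution
    # depth [cm] over area [km^2] -> m^3/s: 1 cm over 1 km^2 = 10^4 m^3/day
    inflow = (melt_cm + rain_cm) * (area_km2 * 10000.0 / 86400.0)
    return inflow * (1.0 - k) + q_n * k             # recession blending

q = 5.0  # antecedent discharge [m^3/s]
for day in range(5):
    q = srm_step(q, t_n=4.0, dt_n=0.5, p_n=0.0, s_n=0.8)
print(f"{q:.2f} m^3/s")  # -> 6.79 m^3/s, rising toward ~9.4
```

Note how temperature enters only through the degree-day product a · (T + ΔT): any warming translates linearly into melt, which is exactly the sensitivity questioned later in this post.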

More technical hydrologic models utilize a broader spectrum of energy inputs to the snowpack such as solar radiation. These models more accurately represent energy input to snowmelt but are not normally used in an operational context because the data inputs are not available over wide geographic areas.

The energy balance to a snowpack can be summarized as follows:

Energy Balance

M = Qm / L

where Qm is the amount of heat available for the melt process, L is the latent heat of fusion, and M is melt.

Qs = Qis - Qrs - Qgs + Qld - Qlu + Qh + Qe + Qv + Qg - Qm

Qs is the increase in internal energy storage in the pack

Qis is the incoming solar radiation

Qrs is incoming energy loss due to reflection

Qgs is energy transferred to the soil

Qld is longwave energy to the pack

Qlu is longwave energy loss from the pack

Qh is the turbulent transfer of sensible heat from the air to the pack

Qe is the turbulent transfer of latent heat (evaporation or sublimation) to pack

Qv is energy gained by vertical advective processes (rain, condensate, mass removal via evaporation/sublimation)

Qg is the supply of energy from conduction with the soil, percolation of melt, and vapor transfer

During continuous melt, a snowpack is isothermal at 0 degrees C, so Qs is assumed negligible, as is Qgs. Therefore

Qm = Qn + Qh + Qe + Qv + Qg

where Qn = Qis - Qrs + Qld - Qlu is the net all-wave radiation to the pack.
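A quick numeric sketch of the M = Qm/L relation follows; the 100 W/m^2 net flux is an illustrative value, not a measurement:

```python
# Melt depth from M = Qm / L: energy available for melt divided by the
# latent heat of fusion. The 100 W/m^2 net flux is illustrative only.
LATENT_HEAT_FUSION = 334000.0  # L [J/kg]
RHO_WATER = 1000.0             # [kg/m^3]

def melt_depth_mm(net_flux_w_m2, hours):
    """Depth of melt water [mm] from a net flux sustained for `hours`."""
    q_m = net_flux_w_m2 * hours * 3600.0  # energy for melt [J/m^2]
    mass = q_m / LATENT_HEAT_FUSION       # melt per unit area [kg/m^2]
    return mass / RHO_WATER * 1000.0      # depth [m] -> [mm]

print(f"{melt_depth_mm(100.0, 24):.1f} mm of melt per day")  # -> 25.9 mm
```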

All of these processes in many hydrologic models are summarized in one variable – average air temperature for a given area. It is interesting to note that temperature works only on the surface of the snowpack, and that a snowpack, in order to melt, has to be isothermal from top to bottom, else melt from the surface refreezes in lower, colder pack layers. Snow is not mostly snow; it is mostly air. In cool continental areas such as Utah, snowpacks rarely exceed 50% density, which means that even at greatest density the pack is still 50% air. Because of this, snow is an outstanding insulator: you cannot pound temperature into a snowpack. Solar radiation, on the other hand, can penetrate and convey energy deep into the pack, and it is this mechanism that conveys by far the most energy into snow. This can be easily observed every spring when snowpacks start to melt. Observe south-facing aspects in any location, from a backyard to the mountains, and see on scales from micro to macro the direct influence of solar radiation relative to temperature.

Notice in this photo the patches of snow that have been solar sheltered; air temperature is very likely much the same over both the melted and the snow-covered areas in both time and space. South aspects melt first, north aspects last. Shaded areas hold snow longer than open areas.


This graph from Roger Bales shows the energy input in watts for the Senator Beck Basin in Colorado.  The black line is the net flux to the pack, the total energy input from all sources.  The red line is the net solar input, and the blue line represents all sensible energy. The difference between the red line and the black line is all energy input to the pack, negative and positive, other than net solar. From this one can readily appreciate the relative influence each individual energy source has on snowmelt. This graph is from a dusty snow surface, so the net solar is likely greater than would be expected for a normal snowpack. Even so, the contrast is stark: solar radiation is the driver of snowmelt, and air temperature is a long way back.  This, of course, has been known for many years, as illustrated in the following table from the Army Corps of Engineers’ lysimeter experiments at Thomas Creek in the 1960s.

Notice that shortwave radiation is the constant, day-in day-out energy provider; longwave radiation, temperature, and other sources pop up here and there.

So, what is the implication for hydrologic models that use air temperature as the primary energy input? The relationship between solar radiation and temperature will change. For a 2 degree C rise in temperature, the model would see an energy input increase equivalent to moving the calendar forward several weeks to a month – a huge energy increase. The question becomes: what will the watershed actually see? Will the snowpack see an energy increase of equivalent magnitude?
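The temperature-index logic these models use can be sketched in a few lines; the degree-day factor and temperatures below are purely illustrative, not calibrated values, but they show how strongly a 2 degree C rise inflates modeled melt:

```python
# Temperature-index (degree-day) snowmelt: melt = k * max(T - T_base, 0).
# The degree-day factor and temperatures are assumed values, not calibrated.
DDF = 3.0      # degree-day factor, mm of melt per degree C per day (assumed)
T_BASE = 0.0   # melt threshold, degrees C

def daily_melt_mm(t_avg_c, ddf=DDF, t_base=T_BASE):
    """Melt predicted by a purely temperature-driven model."""
    return ddf * max(t_avg_c - t_base, 0.0)

# A +2 C warming applied to a spring day averaging 4 C:
base = daily_melt_mm(4.0)   # 12 mm/day
warm = daily_melt_mm(6.0)   # 18 mm/day
print(f"modeled melt increase: {100 * (warm - base) / base:.0f}%")  # 50%
```

A watershed whose melt is actually driven by solar radiation would see nothing like a 50% jump in energy input from those same 2 degrees – this is the gap between what the model sees and what the snowpack sees.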


This chart for the Weber Basin of Utah illustrates the average May temperature for various elevations and what a plus 2 degree C increase would look like. This is what the hydrologic model will see. To ascertain what energy increase the watershed will actually see, we go back to the Bales graph – what the watershed will see is a more modest increase in the sensible heat line. Climate change won't increase the hours in a day nor the intensity of solar radiation, so the main energy driver to the snowpack – solar – will stay close to the same, all other things equal. The total energy to the snowpack will thus increase modestly, while the hydrologic model sees a much larger proportional increase. If this factor is not accounted for, the model is likely to overestimate the impacts that increased temperatures may have on snowpacks. Hydrologic models work well within their calibrated range because temperature is closely related to solar energy. With climate change warming, this relationship may not be the stable input it once was, and models may need to be adjusted accordingly. Research needs to move in the direction of total energy input to the watershed instead of temperature-based modeling. Then we can get a much clearer picture of the impacts climate change may have on water resources. Recent research by Painter et al regarding changes in snow surface albedo and accelerated runoff supports the solar vs temperature view of energy input to the pack: surface dust can accelerate snowmelt by as much as 3 weeks or more, whereas modest temperature increases would accelerate the melt by about a week.

Evapotranspiration and losses to groundwater

Operational hydrologic models incorporate evapotranspiration mostly as a wild guess. I say that because there is little to no data to support the 'rule curve' the models use to achieve these figures – some pan evaporation data here and there, sporadic in time and space, with no transpiration data at all. A rule curve is normally developed through the model calibration process. The general shape of the hydrograph is developed via precipitation/snow inputs, and then the mass balance is achieved through the subtraction of ET so that the simulated and observed curves fit and there is no water left over. As a side note, some water may be tossed into a deeper groundwater tank to make things somehow more realistic. So how are these curves derived? Mostly from mathematical calibration fit – one models the streamflow first with precipitation/snow input to get the desired shape of the hydrograph, then gets the final mass balance correct by increasing or decreasing the ET curve and the losses to deep groundwater. The bottom line is that these parameters have little basis in measured reality; they are mathematically derived to achieve the correct mass balance. We have no clue what either one actually is. This may seem like a minor problem until we see what part of the hydrologic cycle they comprise.
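The calibration shortcut described above amounts to treating ET as the leftover term in the mass balance. A minimal sketch, with hypothetical numbers:

```python
# ET as the residual of a calibrated mass balance - all numbers are
# hypothetical, in inches of water over the watershed.
precip_in = 30.0          # annual precipitation/snow input
observed_flow_in = 6.0    # observed streamflow
gw_loss_in = 1.0          # assumed loss to deep groundwater

# The "calibrated" ET is simply whatever is left over, not a measured value.
et_residual_in = precip_in - observed_flow_in - gw_loss_in
print(et_residual_in)  # 23.0
```

Note that the largest term in the balance is the one nobody measured – any error in the flow or groundwater terms lands silently in ET.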


In this chart we have a gross analysis of total watershed snowpack and annual streamflow. Higher elevation sites such as the Weber at Oakley have a much higher per-acre water yield than do lower elevation watersheds such as the Sevier River at Hatch. However, in many cases watershed losses are far greater than the snowpack runoff that actually makes it past a streamgage – typical of many western watersheds, where potential ET often exceeds annual precipitation. Streamflow, again, is a residual: the water that is left over after all watershed bills are paid. We model the small part of the water balance and grossly estimate ET and groundwater losses. At Lake Powell, between 5 and 15% of the precipitation/snow model input shows up as flow. Small changes to the watershed loss rates, or to our assumptions about those rates, can have huge implications for model results. The general assumption in a warming world is that these watershed losses will increase. Higher temperatures lead to higher evaporative losses, which are the small part of the ET function – but will transpiration increase? This question needs more investigation for several reasons: 1) higher CO2 can lead to more efficient water use in many plants, including trees, and 10% to 20% less transpiration could be a significant offsetting factor in water yield; and 2) watershed vegetative response to less water, whether through natural mechanisms (massive forest mortality such as we currently see) or mechanical means, could also alter the total loss to ET. The assumptions made on the energy input side of the model, together with the assumptions on watershed loss rates, are likely the key drivers of model output, and both have substantial problems in quantification.


Is average temperature a good metric to assess potential snow and streamflow changes?

Seeing that solar radiation is the primary energy driver of snow ablation, we observe that in winter the northern latitudes have very little of that commodity. Without the primary driver of snowmelt, solar radiation, snowpacks are unlikely to experience widespread melt. We then ask the question – is average temperature a good indicator of what might happen? There is at least a 50%–80% probability that any given storm on any given winter day will occur during the coldest part of the day – i.e. nighttime, early morning or late evening. The further north a given point is in latitude, the higher that probability. Once snowpack is on the ground in the mountains of Utah and similar high elevation, cool continental climate states, sensible heat exchange is not likely to melt it off. Thus minimum temperature, or some weighted combination below average and perhaps a bit above minimum temperature, might be a better metric.


In this graph, minimum average monthly temperatures for two SNOTEL sites, plus a 2 degree increase, are displayed. Little Grassy is the most southern low elevation (6,100 ft) site we have. It currently has a low snowpack (6 inches of SWE or less) in any given year and is melted out by April 1. Steel Creek Park is at 10,200 feet on the north slope of the Uintah Mountains in northern Utah – a typical cold site. As you can see, a 2 degree increase in temperature at Little Grassy could potentially shorten the snow accumulation/ablation season by a week or so on either end. This is an area which contributes to streamflow only in the highest snowpack years, and as such, a 2 week decrease in potential snow accumulation may be difficult to detect given the huge interannual variability in SWE. A two degree rise in temperature at Steel Creek Park is meaningless – it would have little to no impact on snow accumulation or ablation. Thus much of Utah, and similar areas west-wide, may have some temperature to 'give' in a climate warming scenario prior to seeing significant impacts to water resources. Supporting evidence for this concept comes from the observation that estimated temperature increases for Utah are about 2 degrees or so, and we have as yet not been able to document declines in SWE or its pattern of accumulation due to that increase. A question for further research would be: at what level of temperature increase could we anticipate temperature impacting snowpacks?

More rain, less snow

In the west, where snow is referred to as white gold, the potential of less snow has huge financial implications from agriculture to recreation. The main reason many streams in the west flow at all is that infiltration capacity and deeper soil moisture storage are exceeded by snowmelt of 1 to 2 inches per day over a 2 to 12 week period, keeping soils saturated and excess water flowing to the stream. In the cool continental areas of the west, it can be easily demonstrated that 60%, 70%, 80% and in some cases more than 95% of all streamflow originates as snow. Summer precipitation has to be of considerable extent and magnitude to have any impact on streamflow at all, and typically when it does, the hydrograph pops up for a day and immediately returns to base flow levels. So more rain, less snow has a very ominous tone, and over a long period of time, if snowpacks indeed dwindle to near nothing, very serious impacts could occur. In the short run, counterintuitively, rain on snow may actually increase flows. Let's examine how this might occur. Currently, the date snowpacks begin is hugely variable and dependent on elevation; it can range from mid September to as late as early December. If rain occurs in the fall months it is typically held by the soil through the winter months and contributes to spring runoff via soil saturation. Soils that are dry typically take more of an existing snowpack to bring them to saturation prior to generating significant runoff. Soils that are moist to saturated take far less snowmelt to reach saturation and are far more efficient in producing runoff.


In this chart of Parrish Creek along the Wasatch Front in Utah we see the relationship between soil moisture (red), snowpack (blue) and streamflow (black). In the first 3 years of daily data, peak SWE was identical in all years but soil moisture was very low the first year, very high the second year and average the third year – and the corresponding streamflows from identical snowpacks were low, high and average. In the fourth year snowpacks were low, as was soil moisture, and the resulting streamflow was abysmal. In the fifth year, snowpacks were high but soil moisture was very low, and streamflow was mediocre, having lost a major portion of the snowpack to bringing soils to saturation. Soil moisture can have a huge impact on runoff. Thus fall precipitation on the watershed as rain can be a very beneficial event – some of course is lost to evapotranspiration, but that would be the most significant loss.


In this chart of the Bear River's soil moisture we see exactly that case – large rain events in October brought soil moisture from 30% saturation up to 65%, where it remained through the winter months until spring snowmelt. This is a very positive situation that increases snowmelt runoff efficiency. Rain in the fall months is not necessarily a negative. Now let's look at rain in the spring time. These are typically rain-on-snow events and in fact, from a water supply viewpoint, they are also very positive. If for example a watershed is melting 2 inches of SWE per day and watershed losses are 1 inch per day, then we have 1 inch of water available for runoff. Now, say we have this same scenario plus a 1 inch rain-on-snow event. Then we have 2 inches of SWE melt plus 1 inch of rainfall for a total input of 3 inches, and the same loss rate of 1 inch per day yields 2 inches of runoff – double the runoff for that particular day. Twice the runoff means more water in reservoirs. Where this eventually breaks down is when the areal extent of watershed snowpack becomes so small toward the end of melt season that the total water yield becomes inconsequential. If temperature increases shorten the snow season by only a week, then more rain, less snow may not be a huge factor in water yield. When this does become a significant problem – i.e. when the snow season is shortened by 'X' weeks – should be a subject for further research. For the short term, 50 years or so, water yields in the Colorado may not see significant impacts from more rain/less snow, and watershed responses will likely be muddled and confused by vegetation mortality (Mountain Pine Beetle Activity May Impact Snow Accumulation And Melt, Pugh). For the short term, total precipitation during the snow accumulation/ablation season is likely a much more relevant variable than temperature.
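The rain-on-snow arithmetic above can be written out directly:

```python
def runoff_in(snowmelt, rain, loss):
    """Daily runoff (inches) as total water input minus watershed losses."""
    return max(snowmelt + rain - loss, 0.0)

print(runoff_in(2.0, 0.0, 1.0))  # 1.0 - melt alone
print(runoff_in(2.0, 1.0, 1.0))  # 2.0 - a 1 inch rain-on-snow day doubles runoff
```

The same 1 inch of loss is charged against a larger input, so every inch of rain on an actively melting pack passes straight through to runoff.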
Small increases in precipitation at the higher elevations may well offset any losses in water yield from the current marginal water yield producing areas at lower elevations. A decrease in this precipitation in combination with temperature increases would be the worst scenario.

SWE to Precipitation ratios

When trying to express this concept of more rain, less snow, the SWE to PCP ratio was conceived as a metric to numerically express the observed change. When developing a metric that purports to be related to some variable, it is important to make sure, mathematically and physically, that the metric does what it was intended to do and that other factors do not unduly influence the outcome. Simply said, the SWE to PCP metric was intended to show how increased temperatures have increased rain events and decreased snow accumulation. This metric should be primarily temperature related, with only minor influences from other factors. The fact is this metric is riddled with factors other than temperature that may preclude meaningful results. In reality it is a better indicator of drought than of temperature.



In these two graphs one can see that the SWE to PCP ratio is a function of precipitation magnitude and as such is influenced by drought more than temperature. The physical and mathematical reasons are detailed in the paper ‘Characteristics of SWE to PCP Ratios in Utah’ available at the link above.
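The sensitivity of the ratio to precipitation magnitude is easy to see with a toy calculation. Assume, purely for illustration, that a fixed 4 inches of each season's precipitation falls as rain in the warm shoulder seasons regardless of how wet the year is; the ratio then moves with total precipitation even at constant temperature:

```python
# Hypothetical: a fixed 4 inches of shoulder-season rain each year,
# independent of how wet the year is.
RAIN_IN = 4.0

def swe_to_pcp_ratio(total_pcp_in, rain_in=RAIN_IN):
    swe = total_pcp_in - rain_in   # everything else accumulates as snow
    return swe / total_pcp_in

for pcp in (10.0, 20.0, 40.0):    # drought year -> average -> wet year
    print(f"{pcp} in precip -> ratio {swe_to_pcp_ratio(pcp):.2f}")
# 10.0 -> 0.60, 20.0 -> 0.80, 40.0 -> 0.90
```

The ratio falls sharply in the drought year with no warming at all, which is exactly the confounding the text describes.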


Many hydrologic models have serious limitations on both the energy input side and the mass balance side of watershed yield with respect to ET and groundwater losses – limitations that can influence the modeled results of temperature increases. It is possible that many systematically overestimate the impacts of temperature increases on water yields. Systematic bias in the data used by models can also predispose an outcome. Which model is used in these kinds of studies matters – snowmelt models that incorporate energy balance components such as solar radiation in addition to temperature likely produce more realistic results than temperature-based models. Assumptions about ET and groundwater losses can have significant impacts on results. Metrics developed to quantify specific variables or phenomena need to be rigorously checked in multiple contexts to ensure they are not influenced by other factors.


Filed under Climate Change Metrics, Vulnerability Paradigm

Uncertainty in Utah Hydrologic Data: Part 2 On Streamflow Data by Randall P. Julander

Uncertainty in Utah Hydrologic Data – Part 2 – Streamflow Data

A three part series that examines some of the systematic bias in Snow Course, SNOTEL, Streamflow data and Hydrologic Models

Randall P. Julander Snow Survey, NRCS, USDA


Hydrologic data collection networks – and for that matter, all data collection networks – were designed, installed, operated and maintained to solve someone's problem. From the selection of sensors to the site location, all details of any network were designed to accomplish the purpose of the network. For example, the SNOTEL system was designed for water supply forecasting, and while it is useful for avalanche forecasting, SNOTEL site locations are the worst locations for the data avalanche forecasters want, such as wind loading, wind speed/direction and snow redistribution. All data collection networks have bias, both random and systematic. Use of any data from any network for any purpose, including the intended one but especially any other purpose, should include an evaluation for data bias as the first step in quality research. Research that links a specific observation or change to a relational cause could be severely compromised if the data set has unaccounted systematic bias. Many recent papers utilizing Utah hydrologic data have not identified or removed systematic bias from the data. The implicit assumption is one of data stationarity – that all things except climate are constant through time, so that observed change in any variable can be directly attributed to climate change. Watersheds can be characterized as living entities that change fluidly through time. Streamflow is the last check paid in the water balance – it is the residual after all other bills have been paid, such as transpiration, evaporation, sublimation and all other losses. Water yield from any given watershed can be impacted by vegetation change, watershed management such as grazing, forestry practices, mining, diversions, dams and a host of related factors. In order to isolate and quantify changes in water yield due to climate change, these other factors must also be identified and quantified.
Operational hydrologic models for the most part grossly simplify the complexities of watershed response due to the lack of data. For the most part they operate on some snow and precipitation data as water balance inputs, temperature as the sole energy input, gross estimations of watershed losses mostly represented by a generic rule curve, and streamflow as an output to achieve a mass balance. Temperature is not the main energy driver in snowmelt; shortwave solar energy is. Hydrologic models using temperature as the sole energy input can overestimate the impacts of warming.

Streamflow data

Water yield from the watershed is a residual. It is what is left over after all other processes have claimed their share of the annual input of precipitation. As these processes change, water yield is impacted. In order to quantify the impacts climate change may have on water yield, it is essential to identify, quantify and remove the impacts other processes may have had. As a first step, the accuracy of the data set needs to be defined: to what level of accuracy can we actually measure streamflow? The USGS uses a rating system to rank each gaged point. Streamflow data can be excellent, good, fair or poor. Each streamflow value is not really a point, such as 60 cfs, but should be thought of as a range depending on its rating.

In this chart, the point values are represented by the red line – these are the observed values. If this site were excellent, there is a high probability (90–95%) that the actual value is somewhere between the light green lines. Unfortunately, there are no sites in Utah that fit the excellent criteria. About 50% of the USGS sites in Utah are in the good category, and the actual value of any given point will be between the light blue lines. About 40% of the sites in Utah are classified as fair and fit between the magenta lines. The other 10% of sites are rated poor, and values may regularly fall outside the magenta lines. The point about data accuracy is that if one can only measure to the nearest 10% or 15%, then the limit of our ability to quantify change or trends in these data also resides within that accuracy. To say 'I have observed a 5% change' in a variable measured with 15% accuracy may or may not have validity. The assumption would be that all data error associated with these measurements is equally random in all directions, such that everything cancels to the observed value. These potential errors in measurement can compound as one tries to adjust streamflow records to obtain a natural flow, such as the inflow to Lake Powell. One must add interbasin diversions and reservoir change in storage back to the observed flow in order to calculate what the true observed flow would have been absent water management activities.
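Treating each gaged value as a range rather than a point can be expressed directly; the percentages below follow the common rule of thumb for the ratings described above (roughly 5% for excellent, 10% for good, 15% for fair – assumed here for illustration):

```python
# Rule-of-thumb accuracy bands for streamflow gage ratings (assumed values).
RATING_PCT = {"excellent": 0.05, "good": 0.10, "fair": 0.15}

def flow_range_cfs(observed, rating):
    """Return the range a gaged value should be read as, not a point."""
    err = RATING_PCT[rating] * observed
    return observed - err, observed + err

print(flow_range_cfs(60.0, "good"))  # (54.0, 66.0)
```

A "60 cfs" reading at a good site is really anywhere from 54 to 66 cfs – a span wider than many of the trends being claimed from these records.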

In this graph, the USGS observed flow for the Sevier River near Marysvale, Utah is adjusted for the change in storage of Piute and Otter Creek Reservoirs. Piute Reservoir is directly above the streamgage and Otter Creek is on the East Fork of the Sevier some 20 miles upstream, with very little agriculture between the reservoir and the stream gage. Each line represents a full year of monthly total acre feet of adjusted flow. Notice that fully 1/3 of the time from April to September, the adjusted flow goes negative, by as much as 15,000 acre feet. We know that this is an impossible figure, and clearly these data points are in gross error. As important is what we now don't know about every other data point – are they any better estimates of adjusted flow than the ones that are clearly in error? What about those that are close to negative, say in the zero to 5,000 acre foot range – are they accurate? Or even those that "look normal" – can we be sure they are? The only reason we don't suspect the Colorado River of this kind of data error is that its flow is large enough to mask out the errors.
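The adjustment that produces those impossible negatives is simple bookkeeping – observed flow plus upstream change in reservoir storage (plus any diversions). A sketch with hypothetical monthly values:

```python
# Natural flow = observed flow + upstream reservoir change in storage
# (+ diversions). Monthly values below are hypothetical, in acre feet.
def natural_flow_af(observed, delta_storage, diversions=0.0):
    return observed + delta_storage + diversions

months = [("May", 12000, 8000), ("Jun", 3000, -9000), ("Jul", 2000, -8000)]
for name, obs, d_storage in months:
    nat = natural_flow_af(obs, d_storage)
    flag = "  <- impossible, data error" if nat < 0 else ""
    print(f"{name}: {nat} af{flag}")
```

A negative reconstructed flow means the reported storage release exceeded everything the river could have carried – proof that at least one of the input records is wrong, without telling us which one.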

So, how is the official record for Lake Powell inflow adjusted? This, again is a case where a data set has been generated to serve a specific function – to allow us to make a reasonable inflow forecast for Lake Powell that has some meaning to the Water Management Community. It does not reflect the true natural inflow to Lake Powell.

We adjust the observed USGS streamflow at Cisco by 17 major diversions and 17 reservoirs, represented here by this schematic. What might the real inflow adjustments look like? On the Colorado side, there are 11,000 reservoirs, ponds and impoundments, 33,000 diverted water rights, ditches, canals, etc., and over 7,000 wells. On the Utah side, there are 2,220 reservoirs, ponds and impoundments, 485 diverted water rights, ditches and canals, as well as an unquantified number of wells and center pivots. Wyoming certainly has some number of these kinds of water infrastructure as well, but I don't have those numbers yet. As one can clearly see, this becomes an accounting nightmare – not just in trying to measure each of these diversions or reservoirs, but in how much evaporation is coming off of each reservoir and canal. In terms of surface area, canals and ditches may well have far greater evaporation than reservoirs. Reservoirs also have bank storage issues that alter hydrograph characteristics, storing water on the fill side and slowly releasing it (minus consumptive vegetative use) on the drawdown.

It is likely that for most diversions, the error associated with those data is positive. Why make such a declaration? Visit any water rights office and ask a simple question: "has anyone ever come in and complained that they got too much water and would like to give some back?" The complaint uniformly is "I am not getting my full allocation and I need/demand more." Check any water gate and you will find that the gate wheel has been turned to its maximum extent against the chain lock, either by the water master or by the farmer checking to see that it is. When water is life and the means of making a living, each user will try to maximize the amount taken. Many of these diversions are simple structures, easily altered. The assumption that all of this water management is consistent through time is not likely true. From this context, it is clear that any reconstructed inflow to Lake Powell will have the potential for serious deficiencies, especially as water use in the upper basins increases. Each 0.01 percent here and there, be it a well, pond evaporation or a diversion, slowly adds to the incremental error in the data set.

Changes to the Watersheds

Since the settlement of the west, there have been extensive changes in watershed characteristics. These changes can have a substantial impact on water yield and consequently have direct bearing on current trends. Let's start with grazing – there is a substantial body of literature documenting the impacts grazing can have on water yield. Overgrazing leads to less vegetation, soil compaction, greater water yield and soil erosion. In the 1870's, there were approximately 4,100,000 cows and 4,800,000 sheep in the 17 western states. By 1900, there were 19,600,000 cows and 25,100,000 sheep. This was the get-rich-quick scheme of the day – eastern and midwestern speculators could buy up herds, ship them west, graze for a couple of years, then ship back east for slaughter. No land purchase necessary, no regulations – simply fight for a spot to graze. Western watersheds were denuded and devastated. The Taylor Grazing Act, passed in the 1930's, was implemented in part because of a change in hydrology – people in the west, and Utah in particular, were the victims of annual floods, mud and debris flows brought on by snowmelt and precipitation events on damaged watersheds. This change in hydrology – increased flooding and flow – led to action to curtail grazing and heal damaged watersheds.

North Twin Lakes – 1920. Notice the erosional features, the lack of vegetation including trees.

Photos courtesy of the repeat photography project:


North Twin Lakes 1945. Notice that the erosional features are slowly filling in, sage and grasses are more abundant, trees are growing, the watershed is healing. Bottom line, less runoff, more consumptive use by vegetation.  Hydrologically, this watershed has changed dramatically.

North Twin Lakes – 2005. Notice the erosional features are pretty much gone, excellent stands of all kinds of vegetation. Now, what is the difference in water yield from the watershed today compared to 1920? Water yield has decreased and consumptive use increased.

Along with restricted grazing, watershed restoration programs were implemented to improve conditions such as seeding programs to restore vegetation, bank stabilization and other watershed improvements. One of these programs was designed to mechanically reduce streamflows via increased infiltration and water storage on the watershed. Contour trenches were installed on watersheds throughout the west to reduce streamflow, floods and debris flows.

In this photo above Bountiful, Utah, notice the extent and depth of the contour trenches installed in the 1930's by the Civilian Conservation Corps, by hand and by horse-drawn bucket scoops. These trenches are even today several feet deep and able to store significant amounts of snowmelt for infiltration.


Mining was an activity that impacted western watersheds in a way not typically appreciated from today's perspective. After all, mines and their associated infrastructure, even the tailings, comprise a tiny fraction of any watershed's geographic area. However, from the 1850's to basically the 1930's or even later, ore had to be refined on site. There was no infrastructure or capability to bring ore from the mine to central smelters, nor was there the ability to bring coal to the mine. Roads were steep and rugged, rail lines expensive if they could be built at all, and transportation was by wagon. Thus smelting was most often done at the mine via charcoal. The large mines would have 20,000 bushels of charcoal on site. Large charcoal kilns could take 45 cords of wood per week, which equates to roughly 36 million board feet of timber per decade. The famed Comstock Mine basically denuded the entire east side of Lake Tahoe. The cottage industry of the day was making charcoal for the mines; many farms and ranches had smaller kilns to generate additional cash flow.

The Annie Laurie in the Tushar Mountains of Utah.

The Annie Laurie today. Notice in this recent photo how vegetation, especially trees have grown, matured and how many more conifers there are today than in the past. In addition to charcoal, timber was necessary for the mine, for the buildings, for heating and cooking.

Locations of the estimated 20,000 abandoned mines in Utah. This represents a substantial amount of timber removed from Utah watersheds over a nearly 80 year period of time. Most assuredly enough to impact species composition and water yield across many watersheds. Fewer trees equals greater water yield.


Logging on western watersheds provided necessary timber for infrastructure such as homes, businesses, barns and other buildings. Timber was most often cut and milled on site with the rough cut timbers hauled from the watershed via horse and wagon.


The Equitable Sawmill, early century.

Where the Equitable Sawmill once was. Notice the dramatic change in forest cover – more trees equals less water yield and in this case, potentially much less.

Tie Hacking

Tie Hacking was a business that provided railroad ties to the industry, basically the same as logging but with a bigger product. As the railroad came through, tie hacks would cut trees and provide the necessary ties to keep the tracks moving forward. Ties at the time were not treated as they are now and needed to be replaced on a regular basis as the soft pine wood could rot quickly.

This is Blacks Fork Commissary – the Tie Hack central provisions location on the North Slope of the Uintahs in northern Utah.

Tie hacks high-graded all the Douglas Fir off the North Slope, leaving the Lodgepole Pine. The majority of the North Slope today is comprised of dense stands of Lodgepole Pine. The rail lines required 3,000 ties per mile over the 600 miles between western Colorado and the Sierras – at about 14 million board feet per decade.


The policy of fighting western fires has done more to change the landscape of western watersheds than possibly any other factor. At the turn of the century, fires burned 10 to 30 million acres of forest every year. With the advent of Smokey Bear, between 2 and 5 million acres burn annually. This huge reduction in burned area has changed the species composition, density and age of forests across the west. Watersheds that used to have 10 trees per acre now have 200 or more. Fewer trees produce more water yield.

Danish Meadows, 1900 – with frequent fires.

Danish Meadows, 2000. No fires for nearly 100 years. More trees, less water.

The Forest Service has done much research with paired watersheds and timber harvest. The Fool Creek experiment in Colorado is a classic – two watersheds of similar characteristics measured together for more than a decade. Then one watershed was kept pristine while the other was cut by 40%. The end result was an increase in water yield of 40% for 20 years, as well as a substantial 25% increase for the period of 30 to 50 years.

Note that the timing of annual snowmelt was also accelerated due to the fact that the forest cover was opened up to short wave solar radiation, the primary energy input to snowmelt.

A recent Duke University study confirms that Utah forests are basically very young, with the dominant age class in the 0–100 year old category. This confirms that following the mining/logging era of the 1850's to 1960's, a different watershed management policy has prevailed on Utah watersheds. Small trees not harvested early on are now the 100 year old trees, and seedlings at the time are now the 50 year old trees.

Species Composition Matters

With the virtual elimination of both fire and logging, species such as aspen are being steadily replaced by conifers. In paired plots comparing water consumption between aspens and conifers, LaMalfa and Ryel found much greater SWE accumulation in the aspen stands vs the conifers – a result already well known – but also that soil moisture under the aspen stands was much greater than under the conifers. Aspens terminate transpiration with the first frost of the season, and soil moisture starts to recover. Conifers, on the other hand, keep transpiring and pumping that moisture out of the ground.

LaMalfa/Ryel – 34% less SWE under the conifers than aspens.

LaMalfa/Ryel – nearly 4.5 inches less soil moisture in the conifers vs the aspens.

Overall, there was 42% less water in the conifer community vs the aspen community – a whopping 10.5 inches less total water potentially available from the conifers than the aspens. This area of northern Utah, near Monte Cristo, typically gets only 37 inches of annual precipitation, so the conifers could potentially produce far less runoff than the aspens. Utah and Colorado have lost 2.5 million acres of aspens to conifer encroachment, with approximately 1.5 million of that in the Colorado River Basin. That translates into about 125,000 acre feet of water lost per inch of water yield. From this single factor (aspen replaced by conifer), the April–July inflow to Lake Powell could be reduced by 2% to 17%.
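The acre feet figure follows from straight unit conversion (12 acre-inches to the acre foot):

```python
def yield_loss_af(acres, inches_of_yield):
    """Acre feet of water for a given change in water yield (inches)."""
    return acres * inches_of_yield / 12.0  # 12 acre-inches = 1 acre foot

# 1.5 million acres of aspen lost in the Colorado River Basin:
print(f"{yield_loss_af(1_500_000, 1.0):,.0f} acre feet per inch of yield")
# 125,000 - matching the figure in the text
```

Each additional inch of yield difference between aspen and conifer stands scales the loss linearly, which is why the estimated range for Lake Powell inflow is so wide.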


Ground Water Withdrawals

In the long term, groundwater is connected to surface water. This is an area that needs investigation, as groundwater withdrawals within the basin could be substantial, certainly on the order of thousands of acre-feet annually and potentially much greater. Over a period of many years, increased streamflow losses to groundwater are likely.

Agricultural Practices

In the early years, agriculture relied primarily on flood irrigation, in which a large share of the water applied to any field would run off back to the ditch and eventually become return flow to the river. Much of that flood irrigation has since been replaced by sprinkler irrigation, with much higher evaporative losses but more efficient crop production; nearly all of the water that hits the ground is now consumptively used by crops. How this affects streamflow is an issue for further research.

Paradigm Shift

For a century in the west, we indulged in watershed practices that increased streamflows. In the 1960s and 1970s the environmental movement began, and with it came significant changes in watershed management. Mining no longer requires vast amounts of timber, logging is but a scant fraction of its past, tie hacking is extinct, fires are extinguished, and grazing is tightly managed. Watersheds now carry huge amounts of vegetation, and in particular vastly more trees than they have ever seen in a historical context. Species composition has changed, with far fewer aspens and far more conifers. More conifers means less snow; more conifers means less soil moisture; more trees and vegetation in general means less streamflow. For 100 years we systematically increased flows from western watersheds, and for the past 50 we have done everything possible to reduce them.

Portent for the Future

Forest management that has removed fire and logging from much of the equation has vastly increased the number of trees per acre. Too many trees competing for a limited water resource has intensified the competition for water to the extent that the recent drought weakened the forests, and a huge pine beetle and spruce budworm infestation has killed hundreds of thousands of acres of trees. The analogy of 10 men on the edge of a desert with water for 5 is appropriate: if one sends all 10, they all die; if one sends 5, most will likely survive. Our forests sent all 10, and the result is massive forest mortality. In Utah, of 5 million acres of forested lands, nearly 1 million acres are standing dead, with the potential for greater mortality. One million acres of dead forest equates to roughly 83,000 acre-feet of additional water per inch of water yield, perhaps as much as 800,000 acre-feet in total. In the short run, Utah is likely to see greater water yield, not less, all other things being equal. Runoff will also likely come earlier, due to the opening of the canopy to shortwave solar radiation.
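The same depth-to-volume conversion, a sketch assuming only the standard acre-foot definition (acre-feet = acres x inches / 12), reproduces the mortality figures in this paragraph and shows roughly what water yield the 800,000 acre-foot upper estimate would imply:

```python
def acre_feet(acres, inches):
    # 1 acre-foot covers 1 acre to a depth of 1 foot = 12 inches
    return acres * inches / 12.0

dead_acres = 1_000_000                    # standing dead forest in Utah
per_inch = acre_feet(dead_acres, 1)       # acre-feet per inch of additional yield
implied_yield = 800_000 / per_inch        # inches implied by the 800,000 AF figure
print(round(per_inch), round(implied_yield, 1))  # 83333 9.6
```

So the quoted ~83,000 acre-feet per inch is consistent with 1 million dead acres, and the 800,000 acre-foot total corresponds to roughly 9.6 inches of additional water yield.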
There are many and complex reasons for west-wide declines in streamflows, of which climate change is but one. It is not a simple issue, and each contributing component is certainly not easily quantified.

Filed under Climate Change Metrics, Guest Weblogs