Monthly Archives: June 2011

An Interesting 1973 Paper “A Preliminary Study On The Climatic Fluctuations During The Last 5000 Years In China” By Chu Ko-Chen

Chu Ko-Chen (Co-Ching Chu), 1973: A Preliminary Study on the Climatic Fluctuations during the Last 5000 Years in China. cnki:ISSN:1006-9283.0.1973-02-005. pages 243-261.

This paper is interesting both for its political perspective, as it was published while Mao was still leading China, and, of more importance to climate, for its statement about temperature trends in China. I have a complete English version of the paper, republished from Scientia Sinica, Vol. XIV, No. 2, May 1973, in a journal called Cycles. If someone has the URL for that journal, please e-mail it to me and I will add it to this post.

The abstract reads [highlight added]

“The world climate during the historical times fluctuated. The numerous Chinese historical writings provide us excellent references in studying the ancient climate of China. The present author testifies, by the materials got from the histories and excavations, that during Yin-Hsu at Anyang, the annual temperature was about 2℃ higher than that of the present in most of the time. After that came a series of up and down swings of 2—3℃ with minimum temperatures occurring at approximately 1000 B.C. (about the end of the Yin Dynasty and the beginning of the Chou Dynasty), 400 A.D. (the Six Dynasties), 1200 A.D. (the South Sung Dynasty), and 1700 A.D. (about the end of the Ming Dynasty and the beginning of the Ching Dynasty). In the Han and the Tang Dynasties (200 B.C.—220 A.D. and 600—900 A.D.) the climate was rather warm. When the world climate turned colder than usual, it tended to begin at the Pacific coast of Eastern Asia, propagating as a wave westward, through Japan and China, to the Atlantic coast of Europe and Africa. When the world temperature recovered, it tended to propagate eastward from the west. A fuller knowledge of the climatic fluctuations in historical times and a good grasp of their laws would render better service to the long-range forecasting in climate.”

Source of image

Comments Off

Filed under Climate Change Metrics, Research Papers

Continued Bias Reporting On The Climate System By Tom Karl and Peter Thorne

Update: June 30 2011 The complete BAMS paper is available from

Blunden, J., D. S. Arndt, and M. O. Baringer, Eds., 2011: State of the Climate in 2010. Bull. Amer. Meteor. Soc., 92 (6), S1-S266.

*************************************************

Today (6/29/2011), there were news articles concerning the state of the climate system; e.g., see the Associated Press news release in the Washington Post

Climate change study: More than 300 months since the planet’s temperature was below average

The news article refers to the 2010 climate summary that will be published as a Bulletin of the American Meteorological Society article. The article will undoubtedly include useful information on the climate.

However, the news article itself erroneously reports on the actual state of the climate, as can easily be shown simply by extracting current analyses from the web. Two of the prominent individuals quoted in the news report are Tom Karl and Peter Thorne. They make the following claims:

“The indicators show unequivocally that the world continues to warm,” Thomas R. Karl, director of the National Climatic Data Center, said in releasing the annual State of the Climate report for 2010.

“There is a clear and unmistakable signal from the top of the atmosphere to the depths of the oceans,” added Peter Thorne of the Cooperative Institute for Climate and Satellites, North Carolina State University.

“Carbon dioxide increased by 2.60 parts per million in the atmosphere in 2010, which is more than the average annual increase seen from 1980-2010, Karl added. Carbon dioxide is the major greenhouse gas accumulating in the air that atmospheric scientists blame for warming the climate.”

Karl is correct on the increase in carbon dioxide, but otherwise he and Peter Thorne are not honestly presenting the actual state of the climate system. They focus on the surface temperature data, which, as we have reported in peer-reviewed papers, has major unresolved uncertainties and includes a systematic warm bias; e.g. see

Pielke Sr., R.A., C. Davey, D. Niyogi, S. Fall, J. Steinweg-Woods, K. Hubbard, X. Lin, M. Cai, Y.-K. Lim, H. Li, J. Nielsen-Gammon, K. Gallo, R. Hale, R. Mahmood, S. Foster, R.T. McNider, and P. Blanken, 2007: Unresolved issues with the assessment of multi-decadal global land surface temperature trends. J. Geophys. Res., 112, D24S08, doi:10.1029/2006JD008229.

Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2009: An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841.

The climate system has not warmed since about 2003, either in the upper ocean or in the lower troposphere, as shown in the three figures below.

Tom Karl is wrong in his first quote – the indicators DO NOT show unequivocally that the world continues to warm. This warming has stalled, at least for now, since about 2003. Peter Thorne is misrepresenting the actual data when he erroneously reports (assuming he means ‘unequivocal warming’) that “There is a clear and unmistakable signal from the top of the atmosphere to the depths of the oceans”.

Global Ocean Heat Content 1955-present

Second, the lower tropospheric data (from both the RSS and UAH MSU analyses) also do NOT show unequivocally that the world continues to warm! Indeed, that warming has also stalled since about 2002.

Channel TLT Trend Comparison

Figure caption: Global  average (70 south to 82.5 north) lower tropospheric temperatures (from RSS)

Figure caption: Global  average (70 south to 82.5 north) lower tropospheric temperatures (from UAH)
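The claim that warming has “stalled” is, operationally, a statement about the sign of a least-squares trend fitted over a chosen window. A minimal sketch of that calculation is below; the anomaly series is synthetic and purely illustrative, not the RSS, UAH, or ocean heat content data shown in the figures.

```python
import numpy as np

# Least-squares trend over a chosen window -- the kind of calculation the
# "stalled since ~2003" claim rests on. The anomaly series is invented:
# a steady rise to 2002, then a plateau, plus noise.
rng = np.random.default_rng(0)
years = np.arange(1979, 2011)
anoms = np.where(years < 2003, 0.015 * (years - 1979), 0.015 * (2003 - 1979))
anoms = anoms + rng.normal(0, 0.05, years.size)

def trend_per_decade(yr, series, start, end):
    """OLS slope of `series` (per decade) over years start..end inclusive."""
    mask = (yr >= start) & (yr <= end)
    slope = np.polyfit(yr[mask], series[mask], 1)[0]
    return 10 * slope

print(f"1979-2010 trend: {trend_per_decade(years, anoms, 1979, 2010):+.3f} per decade")
print(f"2003-2010 trend: {trend_per_decade(years, anoms, 2003, 2010):+.3f} per decade")
```

Note that trends over windows this short carry large sampling uncertainty, so the short-window number is much noisier than the full-period one.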

It should not be surprising that Tom Karl and Peter Thorne are not honestly reporting the actual state of the climate system, which involves a much more complex signal, in response to human and natural climate forcings and feedbacks, than they report; e.g. see

Christy, J.R., B. Herman, R. Pielke, Sr., P. Klotzbach, R.T. McNider, J.J. Hnilo, R.W. Spencer, T. Chase and D. Douglass, 2010: What do observational datasets say about modeled tropospheric temperature trends since 1979?  Remote Sensing, 2(9), 2148-2169.

Previous documentation of the biases and efforts to manage the information provided to policymakers by Tom Karl and Peter Thorne includes the following examples

Pielke Sr., Roger A., 2005: Public Comment on CCSP Report “Temperature Trends in the Lower Atmosphere: Steps for Understanding and Reconciling Differences“. 88 pp including appendices

The Selective Bias Of NOAA’s National Climate Data Center (NCDC) With Respect To The Analysis And Interpretation Of Multi-Decadal Land Surface Temperature Trends Under The Leadership Of Tom Karl and Tom Peterson

Erroneous Climate Science Statement By Tom Karl, Director Of The National Climate Data Center And President Of The American Meteorological Society

E-mail Documentation Of The Successful Attempt By Thomas Karl Director Of the U.S. National Climate Data Center To Suppress Biases and Uncertainties In the Assessment Of Surface Temperature Trends

Erroneous Statement By Peter A. Stott And Peter W. Thorne In Nature Titled “How Best To Log Local Temperatures?”

It is disappointing that the media do not properly question the claims made by Tom Karl and Peter Thorne. They are presenting a biased report on the actual state of the climate system.

Comments Off

Filed under Bias In News Media Reports

Tim Curtin’s Response to Jos De Laat’s Comments

On June 22, 2011 the post

Guest post by Dr. Jos de Laat, Royal Netherlands Meteorological Institute [KNMI]

was presented which commented on an earlier post by Tim Curtin titled

New Paper “Econometrics And The Science Of Climate Change” By Tim Curtin

Tim has provided a response to Jos’s post which is reproduced below.

Reply By Tim Curtin

I am very glad to have Jos de Laat’s comments on my paper, not least because I know and admire his work. I agree with much if not all of what he says, and fully accept his penultimate remark: “estimating the effect of anthropogenic H2O should include all the processes relevant to the hydrological cycle, which basically means full 3-D climate modelling”.  I begin by going through his points sequentially.

1.         Jos said: “in the past I had done some back-of-the-envelope calculations about how much water vapour (H2O) was released by combustion processes. Which is a lot, don’t get me wrong, but my further calculations back then suggested that the impact on the global climate was marginal. Since Curtin [2011] comes to a different conclusion, I was puzzled how that could be.” Well, using my paper’s equation (1) and its data for the outputs from hydrocarbon combustion, I found that combustion currently produces around 30 GtCO2 and 18 GtH2O per annum. Given that the former figure, with its much lower radiative forcing than that from H2O, is considered to be endangering the planet, I would have thought that even only 18 GtH2O must also be relevant – not necessarily in terms of total atmospheric H2O (which I henceforth term [H2O]) but as part of the global warming supposedly generated by the 30 GtCO2 emitted every year by humans. To this should be added, as my paper notes, the 300 GtH2O of additions to [H2O] from the water vapor generated by the cooling systems of most thermal and nuclear power stations.
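The per-fuel CO2 and H2O outputs of hydrocarbon combustion follow from simple stoichiometry. A minimal sketch is below; the fuel choices and the complete-combustion assumption are mine for illustration, not taken from the paper.

```python
# Rough stoichiometric check of CO2 and H2O produced per tonne of fuel
# burned, for a few representative hydrocarbons, assuming complete
# combustion: CnHm + (n + m/4) O2 -> n CO2 + (m/2) H2O.

# Molar masses (g/mol)
M_C, M_H, M_O = 12.011, 1.008, 15.999
M_CO2 = M_C + 2 * M_O          # ~44.01 g/mol
M_H2O = 2 * M_H + M_O          # ~18.02 g/mol

def combustion_products(n_c, n_h, fuel_mass_t=1.0):
    """Tonnes of CO2 and H2O from complete combustion of a CnHm fuel."""
    m_fuel = n_c * M_C + n_h * M_H        # g/mol of fuel
    moles = fuel_mass_t * 1e6 / m_fuel    # mol per tonne of fuel
    co2_t = moles * n_c * M_CO2 / 1e6
    h2o_t = moles * (n_h / 2) * M_H2O / 1e6
    return co2_t, h2o_t

for name, (nc, nh) in {"methane CH4": (1, 4),
                       "octane C8H18": (8, 18)}.items():
    co2, h2o = combustion_products(nc, nh)
    print(f"{name}: {co2:.2f} t CO2, {h2o:.2f} t H2O, H2O/CO2 = {h2o / co2:.2f}")
```

A mixed fossil-fuel diet gives an H2O/CO2 mass ratio between the methane and heavier-hydrocarbon extremes, which is broadly consistent with a global total of roughly 18 GtH2O alongside 30 GtCO2.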

2.         The next key point is not how much [H2O] there is across the surface of the globe, but how much at the infrared spectrum wavelengths, and how much of that varies naturally relative to the incremental annual extra fluxes generated by the total H2O emissions from hydrocarbon combustion and the cooling process of power generation.

3.         Then, if we do accept de Laat’s claim that the quantity of [H2O] per square metre is what is relevant, the same applies to the annual NET increase in atmospheric [CO2] in 2008-2009 of just 14 GtCO2 (from TOTAL emissions, all sources including LUC, of 34.1 GtCO2) – and that is much less than the total 33 GtH2O from just hydrocarbon combustion.[1] How much is the net increase in [CO2] per square metre? See Nicol (2011: Fig. 6, copy attached below).

4.         Pierrehumbert’s main omission is the [H2O] emitted during the cooling process. Let us recall what that involves: collecting water from lakes and rivers and using it to cool steam-driven generators, which produces emissions of steam (Kelly 2009). That steam is released to the atmosphere through the cooling towers at the left of the photograph Roger put at the head of de Laat’s post, soon forms [H2O], and then precipitates back to earth after about 10 days, as de Laat notes. What is significant is the huge acceleration of the natural flux of evaporation of surface water to the atmosphere and back again as rain after 10 days. Natural evaporation is a very SLOW process; power station cooling towers speed it up enormously. As my paper footnoted, cooling the power stations of the EU and USA would need at least 25% of the flow of the Rhine, Rhone and Danube rivers – but how much do those rivers contribute to ordinary evaporation over a year? For another order of magnitude, average daily evaporation in Canberra is around 2 mm (roughly 730 mm over a year), rather more than its annual mean rainfall of 600 mm. That is why we have to rely on dams for our water needs!

5.         My paper cites Pierrehumbert at some length, but I regret that his recent uncalled-for attack on Steve McIntyre and Ross McKitrick has led me to change my opinion of him.

6.         The graph below is from John Nicol (with his permission); he is an Australian physics professor (James Cook University). It shows how [CO2], like [H2O], operates close to the surface of the globe, not in the stratosphere or upper troposphere as perhaps de Laat would have it.

 

Caption to Figure 6: John Nicol’s diagram shows the power absorbed by carbon dioxide within a sequence of 10 m thick layers up to a height of 50 metres in the troposphere. The curves represent the level of absorption for concentrations of CO2 equal to 100%, 200% and 300% of the reported current value of 380 ppm. As can be seen, the magnitude of absorption is largest close to the ground for each concentration, and the curves cross over at heights between 3 and 4 metres, reflecting the fact that for higher concentrations of CO2, more radiation is absorbed at the lower levels, leaving less power for absorption in the upper regions.


[1]  www.globalcarbonproject.org for emissions, and CDIAC for atmospheric CO2.

 

Comments Off

Filed under Climate Change Forcings & Feedbacks, Guest Weblogs

Comments By Marcia Wyatt on CMIP Data

The CMIP3 multi-model dataset, led by the World Climate Research Programme’s (WCRP’s) Working Group on Coupled Modelling, is used extensively for climate model studies. Very recently Marcia Wyatt, as part of her continuing studies building on her paper

Wyatt, Marcia Glaze, Sergey Kravtsov, and Anastasios A. Tsonis, 2011: Atlantic Multidecadal Oscillation and Northern Hemisphere’s climate variability. Climate Dynamics, DOI: 10.1007/s00382-011-1071-8 (see also her guest weblog on the paper here)

found problems with this data. I asked her to summarize her experience for our weblog. Her summary is given below.

CMIP3 experience by Marcia Wyatt

CMIP3 (Coupled Model Intercomparison Project) provides a free and open archive of recent model output (netcdf files) for use by researchers. While convenient, it is not immune from data-crunching-induced complacency. Following is a cautionary tale.

My current research project involves processing CMIP model-datasets, converting “raw” variables into climate indices, and then applying statistical analysis to these reconstructed indices. The process has not been straightforward. With each new set of model data come new problems. For my particular project, slight differences in the formats of each CMIP model-dataset have required modification of computer coding.

Initial phases of a venture present the steepest learning curves; working with the CMIP data has been no exception.  But, with each successful processing and analysis of a dataset, my confidence in the various computer codes scripted for the specific dataset characteristics grew. This trend was not to last.

Early last week (June 21), as the last known glitch was being addressed, allowing progress to be made on several more not-yet-completed datasets, an oddity came to light. As I was putting the processed data – the reconstructed indices – through preparation steps for statistical analysis, I realized the processed values were odd. There were lots of repeated numbers, but with no discernible pattern. At first I suspected the codes; after all, I had encountered problems with each new dataset before. But the inconsistent performance of the codes on similarly formatted data implied the problem lay elsewhere.

Before concluding the problem lay elsewhere, I looked at the unprocessed data – the “raw” data “read” from the netcdf files. Clusters of zeroes filled certain regions of the huge matrices, but not all. Still I was not convinced beyond a doubt that this reflected a problem. I adopted a different strategy – to re-do the four model datasets already successfully completed. This took me back to square one. I selected data from the CMIP database, downloaded the needed data files, requested their transfer to my email address, and awaited their arrival. If I could repeat the analysis on these data with success, I would deduce the problem was with me, not with the data.

The emailed response from CMIP arrived. Instead of data files, I received messages that these files were unavailable. This was nothing new. I had seen this message on some requested data files before (before mid-June). At that time, I simply re-directed to a different model run, not suspecting an evolving problem. But this time was different. I had downloaded and processed these data successfully in the past. Now I was told they were unavailable. This made no sense.

I contacted the CMIP organization. They must have just discovered the problem themselves. Within a day, the site was frozen. Users of the site were notified that all data downloaded since June 15th were unreliable. (I had downloaded the problem-ridden data on the 16th.) The message on the CMIP site has since been updated to include a projected resolution date of mid-July. Lesson here – confidence should be embraced with caution.
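The “clusters of zeroes” that revealed the problem are exactly the kind of signature a simple automated screen can flag before any processing begins. A hypothetical sketch is below; the thresholds and the synthetic field are my own choices, not part of the CMIP3 workflow described above.

```python
import numpy as np

# Quick sanity screen for a "raw" model field before any processing:
# flag fields with suspicious fractions of exact zeros, heavily
# repeated values, or non-finite entries.

def screen_field(arr, zero_frac_max=0.05, repeat_frac_max=0.10):
    """Return a list of warning strings for a gridded field of floats."""
    a = np.asarray(arr, dtype=float).ravel()
    warnings = []
    zero_frac = np.mean(a == 0.0)
    if zero_frac > zero_frac_max:
        warnings.append(f"{zero_frac:.0%} exact zeros")
    vals, counts = np.unique(a, return_counts=True)
    top_frac = counts.max() / a.size
    if top_frac > repeat_frac_max:
        warnings.append(f"most common value covers {top_frac:.0%} of cells")
    if not np.all(np.isfinite(a)):
        warnings.append("non-finite values present")
    return warnings

good = np.random.default_rng(1).normal(288, 5, (72, 144))  # plausible temperature-like field (K)
bad = good.copy()
bad[:30, :50] = 0.0                                        # a cluster of zeros, as described above
print("good:", screen_field(good) or "ok")
print("bad: ", screen_field(bad) or "ok")
```

Running such a check on every downloaded file costs seconds and would catch the repeated-number symptom at the earliest possible stage.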

Comments Off

Filed under Climate Models

My Comments On “NSB/NSF Seeks Input on Proposed Merit Review Criteria Revision and Principles”

I have reproduced below my comments to the National Science Board  and National Science Foundation on the merit review process.

 

I am writing this e-mail to comment on

“NSB/NSF Seeks Input on Proposed Merit Review Criteria Revision and Principles”

First, my research credentials are summarized at

http://cires.colorado.edu/science/groups/pielke/

http://cires.colorado.edu/science/groups/pielke/people/pielke.html

I have had quite negative experiences with NSF with respect to climate proposals in recent years. I have posted several weblog discussions of my experience, as summarized in

http://pielkeclimatesci.wordpress.com/2011/06/27/nsbnsf-seeks-input-on-proposed-merit-review-criteria-revision-and-principles/

Based on my experience, I have concluded that the review process lacks sufficient accountability. To remedy this deficiency, I have the following recommendations:

-Guarantee that the review process be completed within 6 months (my most recent land use and climate proposal was not even sent out for review until 10 months after its receipt!).

-Retain all e-mail communications indefinitely (NSF staff can routinely delete e-mails, such that there is no record to check their accountability).

-Require external independent assessments, by a subset of scientists who are outside of the NSF, of the reviews and manager decisions, including names of referees. This review should be on all accepted and rejected proposals.

Information on my experiences with NSF climate research is provided in these weblog posts

My Experiences With A Lack Of Proper Diligence And Bias In The NSF Review Process For Climate Proposals
http://pielkeclimatesci.wordpress.com/2011/05/26/my-experiences-with-a-lack-of-proper-diligence-in-the-nsf-review-process-for-climate-proposals/

Is The NSF Funding Untestable Climate Predictions – My Comments On A $6 Million Grant To Fund A Center For Robust Decision-Making On Climate And Energy Policy
http://pielkeclimatesci.wordpress.com/2011/03/02/is-the-nsf-funding-untestable-climate-predictions-my-comments-on-a-6-million-grant-to-fund-a-center-for-robust-decision%e2%80%93making-on-climate-and-energy-policy/

The National Science Foundation Funds Multi-Decadal Climate Predictions Without An Ability To Verify Their Skill
http://pielkeclimatesci.wordpress.com/2010/10/21/the-national-science-foundation-funds-multi-decadal-climate-predictions-without-an-ability-to-verify-their-skill/

NSF Decision On Our Request For Reconsideration Of A Rejected NSF Proposal On The Role Of Land Use Change In The Climate System
http://pielkeclimatesci.wordpress.com/2010/06/11/nsf-decision-on-our-request-for-reconsideration-of-a-rejected-nsf-proposal/

Is The NSF Funding Process Working Correctly?
http://pielkeclimatesci.wordpress.com/2010/05/18/is-the-nsf-funding-process-working-correctly/

I would be glad to elaborate further on the lack of diligence and bias by the NSF review process with respect to climate research.

Sincerely

Roger A. Pielke Sr.

Comments Off

Filed under The Review Process

NSB/NSF Seeks Input on Proposed Merit Review Criteria Revision and Principles

The National Science Board has sent out a notice requesting input on the NSF review process. Their request is reproduced in this post. One glaring issue that is missing is accountability. I discussed this subject in my posts

My Experiences With A Lack Of Proper Diligence And Bias In The NSF Review Process For Climate Proposals

Is The NSF Funding Untestable Climate Predictions – My Comments On A $6 Million Grant To Fund A Center For Robust Decision–Making On Climate And Energy Policy

The National Science Foundation Funds Multi-Decadal Climate Predictions Without An Ability To Verify Their Skill

NSF Decision On Our Request For Reconsideration Of A Rejected NSF Proposal On The Role Of Land Use Change In The Climate System

Is The NSF Funding Process Working Correctly?

I have made the following recommendations:

  • Guarantee that the review process be completed within 6 months (my most recent land use and climate proposal was not even sent out for review until 10 months after its receipt!)
  • Retain all e-mail communications indefinitely (NSF staff can routinely delete e-mails, such that there is no record to check their accountability)
  • Require external independent assessments, by a subset of scientists who are outside of the NSF, of the reviews and manager decisions, including names of referees. This review should be on all accepted and rejected proposals. (As documented in the NSF letter at the end of this post, since they were so late sending out for review, they simply relied on referees of an earlier (rejected) proposal; this is laziness at best.)

The National Science Board request follows. I will be submitting my comments, based on the above text, and urge colleagues who read my weblog to do likewise.

NSB-11-42
NSB/NSF Seeks Input on Proposed Merit Review Criteria Revision and Principles

National Science Board
June 14, 2011

Over the past year, the National Science Board (NSB) has been conducting a review of the National Science Foundation’s merit review criteria (Intellectual Merit and Broader Impacts). At the Board’s May 2011 meeting, the NSB Task Force on Merit Review proposed a revision of the two merit review criteria, clarifying their intent and how they are to be used in the review process. In addition, the Task Force identified a set of important underlying principles upon which the merit review criteria should be based. We now seek your input on the proposed revision and principles.

The Task Force looked at several sources of data for information about how the criteria are being interpreted and used by the NSF community, including an analysis of over 190 reports from Committees of Visitors. The Task Force also reached out to a wide range of stakeholders, both inside and outside of NSF, to understand their perspectives on the current criteria. Members of NSF’s senior leadership and representatives of a small set of diverse institutions were interviewed; surveys about the criteria were administered to NSF’s program officers, division directors, and advisory committee members and to a sample of 8,000 of NSF’s Principal Investigators (PIs) and reviewers; and the NSF community at large was invited to provide comments and suggestions for improvements through the NSF web site ( http://www.nsf.gov/nsb/publications/2011/01_19_mrtf.jsp). The stakeholder responses were very robust—all told, the Task Force considered input from over 5,100 individuals.

One of the most striking observations that emerged from the data analyses was the consistency of the results, regardless of the perspective. All of the stakeholder groups identified similar issues, and often offered similar suggestions for improvements. It became clear that the two review criteria of Intellectual Merit and Broader Impacts are in fact the right criteria for evaluating NSF proposals, but that revisions are needed to clarify the intent of the criteria, and to highlight the connection to NSF’s core principles.

The two draft revised criteria, and the principles upon which they are based, are below. Comments are being collected through July 14—we invite you to send comments to meritreview@nsf.gov. It is expected that NSF will develop specific guidance for PIs, reviewers, and NSF staff on the use of these criteria after the drafts are finalized. Your comments will help inform development of that guidance, and other supporting documents such as FAQs.

The Foundation is the primary Federal agency supporting research at the frontiers of knowledge, across all fields of science and engineering (S&E) and at all levels of S&E education. Its mission, vision and goals are designed to maintain and strengthen the vitality of the U.S. science and engineering enterprise and to ensure that Americans benefit fully from the products of the science, engineering and education activities that NSF supports. The merit review process is at the heart of NSF’s mission, and the merit review criteria form the critical base for that process.

We do hope that you will share your thoughts with us. Thank you for your participation.

Ray M. Bowen
Chairman, National Science Board
Subra Suresh
Director, National Science Foundation


Merit Review Principles and Criteria
The identification and description of the merit review criteria are firmly grounded in the following principles:

  1. All NSF projects should be of the highest intellectual merit with the potential to advance the frontiers of knowledge.
  2. Collectively, NSF projects should help to advance a broad set of important national goals, including:
    • Increased economic competitiveness of the United States.
    • Development of a globally competitive STEM workforce.
    • Increased participation of women, persons with disabilities, and underrepresented minorities in STEM.
    • Increased partnerships between academia and industry.
    • Improved pre-K–12 STEM education and teacher development.
    • Improved undergraduate STEM education.
    • Increased public scientific literacy and public engagement with science and technology.
    • Increased national security.
    • Enhanced infrastructure for research and education, including facilities, instrumentation, networks and partnerships.
  3. Broader impacts may be achieved through the research itself, through activities that are directly related to specific research projects, or through activities that are supported by the project but ancillary to the research. All are valuable approaches for advancing important national goals.
  4. Ongoing application of these criteria should be subject to appropriate assessment developed using reasonable metrics over a period of time.

Intellectual merit of the proposed activity

The goal of this review criterion is to assess the degree to which the proposed activities will advance the frontiers of knowledge. Elements to consider in the review are:

  1. What role does the proposed activity play in advancing knowledge and understanding within its own field or across different fields?
  2. To what extent does the proposed activity suggest and explore creative, original, or potentially transformative concepts?
  3. How well conceived and organized is the proposed activity?
  4. How well qualified is the individual or team to conduct the proposed research?
  5. Is there sufficient access to resources?

Broader impacts of the proposed activity

The purpose of this review criterion is to ensure the consideration of how the proposed project advances a national goal(s). Elements to consider in the review are:

  1. Which national goal (or goals) is (or are) addressed in this proposal? Has the PI presented a compelling description of how the project or the PI will advance that goal(s)?
  2. Is there a well-reasoned plan for the proposed activities, including, if appropriate, department-level or institutional engagement?
  3. Is the rationale for choosing the approach well-justified? Have any innovations been incorporated?
  4. How well qualified is the individual, team, or institution to carry out the proposed broader impacts activities?
  5. Are there adequate resources available to the PI or institution to carry out the proposed activities?

Comments Off

Filed under Climate Proposal Review Process

Uncertainty in Utah: Part 3 on The Hydrologic Model Data Set by Randall P. Julander

Uncertainty in Utah Hydrologic Data: Part 3 – The Hydrologic Model Data Set

A three-part series that examines some of the systematic biases in Snow Course, SNOTEL, and streamflow data and in hydrologic models

Randall P. Julander, Snow Survey, NRCS, USDA

Abstract

Hydrologic data collection networks – indeed, all data collection networks – were designed, installed, and operated to solve someone’s problem. From the selection of sensors to the site locations, every detail of a network serves the purpose it was designed for. For example, the SNOTEL system was designed for water supply forecasting; while it is useful for avalanche forecasting, SNOTEL sites are in the worst locations for the data avalanche forecasters want, such as wind loading, wind speed/direction, and snow redistribution. All data collection networks have bias, both random and systematic. Use of data from any network for any purpose – including the intended one, but especially for any other purpose – should begin with an evaluation for data bias as the first step in quality research. Research that links a specific observation or change to a relational cause can be severely compromised if the data set has unaccounted-for systematic bias. Many recent papers utilizing Utah hydrologic data have not identified or removed systematic bias from the data. The implicit assumption is one of data stationarity – that all things except climate are constant through time, so that observed change in any variable can be attributed directly to climate change.

Watersheds can be characterized as living entities that change fluidly through time. Streamflow is the last check paid in the water balance – it is the residual after all the other bills, such as transpiration, evaporation, sublimation, and other losses, have been paid. Water yield from any given watershed can be affected by vegetation change and by watershed management such as grazing, forestry practices, mining, diversions, dams, and a host of related factors. In order to isolate and quantify changes in water yield due to climate change, these other factors must also be identified and quantified.
Operational hydrologic models for the most part grossly simplify the complexities of watershed response, owing to the lack of data. Mostly they operate on some snow and precipitation data as water-balance inputs, temperature as the sole energy input, gross estimates of watershed losses represented by a generic rule curve, and streamflow as an output to achieve a mass balance. Temperature is not the main energy driver in snowmelt; short-wave solar energy is. Hydrologic models using temperature as the sole energy input can overestimate the impacts of warming.

Hydrologic Models

Operational hydrologic models on the whole are a very simplistic lot. They represent huge complexities of watershed processes in a few lines of code by averaging or lumping inputs such as precipitation and temperature and by defining a few relatively ‘homogeneous’ areas of supposedly similar characteristics. Aside from the systematically biased streamflow, snowpack, temperature, and precipitation data these models calibrate from – and the adage applies: garbage in, garbage out, or biased data in, bias continues in output – many of these models have been simplified to the point where they may not be able to accurately quantify climate change outside the normal range of calibration.

 

This figure represents the workhorse of hydrologic models, the Sacramento Model. It is a basic ‘tank’ model in which ‘tanks’ of various sizes hold water from inputs such as snowmelt and precipitation and then release it to streamflow. The basic tanks are the upper zone and the lower zone, with the lower zone divided into two separate tanks. The water level in each tank determines the total outflow to the stream; in just a few lines of code, essentially adding up the flow from each tank gives us streamflow for a given time step. Precipitation, or snowmelt derived from a simple mean basin temperature, provides input to the surface tank, which in turn feeds the lower tanks. The energy input to the snowpack portion of the model is air temperature. In an operational context, air temperature is the most widely used variable because it is normally the only data available, and it is highly correlated with total energy input over the normal range of model calibration. Some models have deliberately chosen the simplest possible inputs and outputs so as to be useful in a wide range of areas, such as the Snowmelt Runoff Model of Martinec and Rango.
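The tank idea described above can be sketched in a few lines. The following is a deliberately minimal two-tank linear-reservoir toy, not the Sacramento model itself; all coefficients and inputs are invented for illustration.

```python
# Minimal two-tank ("upper zone" / "lower zone") sketch of the tank-model
# idea: each tank drains in proportion to its storage, the upper zone
# percolates into the lower zone, and streamflow is the sum of the tank
# outflows. Coefficients and inputs are invented for illustration.

def run_tank_model(inflow, k_upper=0.30, k_lower=0.05, percolation=0.20):
    upper = lower = 0.0
    flows = []
    for water_in in inflow:             # daily snowmelt + rain (mm)
        upper += water_in
        perc = percolation * upper      # upper zone feeds lower zone
        q_upper = k_upper * upper       # fast surface response
        upper -= perc + q_upper
        lower += perc
        q_lower = k_lower * lower       # slow baseflow response
        lower -= q_lower
        flows.append(q_upper + q_lower) # streamflow = sum of tank outflows
    return flows

melt = [0, 5, 12, 20, 15, 8, 3, 0, 0, 0]   # mm/day, a melt pulse
q = run_tank_model(melt)
print([round(v, 2) for v in q])
```

Even this toy reproduces the characteristic behavior: a fast flow peak tracking the melt pulse, followed by a slow recession as the lower tank drains.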

              Snowmelt Runoff Model Structure (SRM)

Each day, the water produced from snowmelt and from rainfall is computed, superimposed on the calculated recession flow and transformed into daily discharge from the basin according to Equation (1):

Qn+1 = [cSn · an (Tn + ΔTn) Sn + cRn Pn] · (A · 10000 / 86400) · (1 − kn+1) + Qn · kn+1     (1)

where: Q = average daily discharge [m3s-1]

c = runoff coefficient expressing the losses as a ratio (runoff/precipitation), with cS referring to snowmelt and cR to rain

a = degree-day factor [cm oC-1d-1] indicating the snowmelt depth resulting from 1 degree-day

T = number of degree-days [oC d]

ΔT = the adjustment by temperature lapse rate when extrapolating the temperature from the station to the average hypsometric elevation of the basin or zone [oC d]

S = ratio of the snow covered area to the total area

P = precipitation contributing to runoff [cm]. A preselected threshold temperature, TCRIT, determines whether this contribution is rainfall and immediate. If precipitation is determined by TCRIT to be new snow, it is kept on storage over the hitherto snow free area until melting conditions occur.

A = area of the basin or zone [km2]

k = recession coefficient indicating the decline of discharge in a period without snowmelt or rainfall

n = sequence of days during the discharge computation period

This is the whole SRM hydrologic model – a synopsis of all the complexities of the watershed summarized in one short equation. It is a great model for its intended purpose. The energy portion of this model consists of a simple degree-day factor with an adjustment: if the average daily temperature, modified by the lapse-rate adjustment, is above zero by some amount, melt occurs and is processed through the model. The greater the temperature, the more melt occurs. So how and why do these very simple models work? Because streamflow itself is the result of many processes across the watershed that tend to blend and average over time and space. As long as the relationships among all these processes remain relatively constant, the models do a good job. However, throw one factor into an anomalous condition – say, soil moisture – and model performance tends to degrade quickly.
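Equation (1) can be written as a one-day update step. This is a sketch only: the runoff coefficients, degree-day factor, and recession coefficient below are illustrative assumptions, not values calibrated for any basin.

```python
# Sketch of SRM Equation (1) for one day. Parameter defaults are invented
# for illustration, not a calibrated basin.

def srm_step(Q_n, T_n, dT_n, S_n, P_n, A,
             c_S=0.5, c_R=0.6, a=0.45, k=0.9):
    """Q_{n+1} [m3/s] from degree-days, snow-covered fraction, and precipitation [cm]."""
    melt_plus_rain = c_S * a * (T_n + dT_n) * S_n + c_R * P_n   # depth of water [cm/day]
    # (A * 10000 / 86400) converts cm/day over A km2 into m3/s
    return melt_plus_rain * (A * 10000 / 86400) * (1 - k) + Q_n * k

# e.g. a 500 km2 basin, 70% snow covered, 5 degree-days, no rain:
Q = srm_step(Q_n=20.0, T_n=4.0, dT_n=1.0, S_n=0.7, P_n=0.0, A=500.0)
```

Note that the only energy information entering the computation is the degree-day term (Tn + ΔTn); everything else is geometry, losses, and recession.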

More technical hydrologic models utilize a broader spectrum of energy inputs to the snowpack such as solar radiation. These models more accurately represent energy input to snowmelt but are not normally used in an operational context because the data inputs are not available over wide geographic areas.

The energy balance to a snowpack can be summarized as follows:

Energy Balance

M = Qm/L

where Qm is the amount of heat available for the melt process, L is the latent heat of fusion, and M is melt

Qs = Qis − Qrs − Qgs + Qld − Qlu + Qh + Qe + Qv + Qg − Qm

Qs is the increase in internal energy storage in the pack

Qis is the incoming solar radiation

Qrs is incoming energy loss due to reflection

Qgs is energy transferred to the soil

Qld is longwave energy to the pack

Qlu is longwave energy loss from the pack

Qh is the turbulent transfer of sensible heat from the air to the pack

Qe is the turbulent transfer of latent heat (evaporation or sublimation) to pack

Qv is energy gained by vertical advective processes (rain, condensate, mass removal via evaporation/sublimation)

Qg is the supply of energy from conduction with the soil, percolation of melt, and vapor transfer

During continuous melt, a snowpack is isothermal at 0 degrees C, and therefore Qs is assumed negligible, as is Qgs. Therefore

Qm=Qn+Qh+Qe+Qv+Qg

where Qn is the net all wave radiation to the pack.
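The melt computation M = Qm/L can be sketched directly from the terms above. The daily flux values in the example are invented for illustration; the latent heat of fusion and the 1 kg/m2 = 1 mm equivalence are standard.

```python
# Hedged sketch: daily melt depth from the net energy terms above,
# for an isothermal pack. Example flux values [MJ m-2 day-1] are invented.

LATENT_HEAT_FUSION = 0.334   # MJ/kg
THERMAL_QUALITY = 1.0        # assume pure ice; real packs are closer to 0.95-0.97

def melt_depth_mm(Qn, Qh, Qe, Qv, Qg):
    """Melt [mm water equivalent] for an isothermal pack: M = Qm / L."""
    Qm = Qn + Qh + Qe + Qv + Qg              # MJ/m2 available for melt
    if Qm <= 0:
        return 0.0                           # a cold-content deficit produces no melt
    kg_melted = Qm / (LATENT_HEAT_FUSION * THERMAL_QUALITY)  # kg/m2
    return kg_melted                         # 1 kg/m2 of water = 1 mm depth

# e.g. a sunny spring day dominated by net radiation:
m = melt_depth_mm(Qn=10.0, Qh=1.5, Qe=-0.5, Qv=0.2, Qg=0.1)
```

Notice that in this formulation air temperature never appears directly; it only enters through the sensible-heat term Qh, one of several fluxes and rarely the largest.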

All of these processes are summarized in many hydrologic models by one variable – average air temperature for a given area. It is interesting to note that temperature works only on the surface of the snowpack, and that in order to melt, a snowpack has to be isothermal from top to bottom; otherwise melt from the surface refreezes in lower, colder pack layers. Snow is not mostly snow; it is mostly air. In cool continental areas such as Utah, snowpacks rarely exceed 50% density, which means that even at their densest they are still 50% air. Because of this, snow tends to be an outstanding insulator – you cannot pound temperature into a snowpack. Solar radiation, on the other hand, can penetrate and convey energy deep into the pack, and it is this mechanism that conveys by far the most energy into snow. This can be easily observed every spring when snowpacks start to melt: observe south-facing aspects in any location, from a backyard to the mountains, and see on scales from micro to macro the direct influence solar radiation has relative to temperature.

Notice in this photo the patches of snow that have been solar sheltered; air temperature is very likely much the same over both the melted areas and the snow covered areas in both time and space. South aspects melt first, north aspects last. Shaded areas hold snow longer than open areas.

 

This graph from Roger Bales expresses the energy input in watts for the Senator Beck Basin in Colorado. The black line is the net flux to the pack, or the total energy input from all sources. The red line is the net solar input and the blue line represents all sensible energy. The difference between the red line and the black line is all other energy input to the pack combined, negative and positive, except net solar. From this, one can readily appreciate the relative influence each individual energy source has on snowmelt. This graph is from a dusty snow surface, so the net solar is likely greater than what would be expected from a normal snowpack. However, the contrast is stark – solar radiation is the driver of snowmelt, and air temperature is a long way back. This of course is information that has been known for many years, as illustrated in the following table from the Army Corps of Engineers lysimeter experiments at Thomas Creek in the 1960s.

Notice that shortwave radiation is the constant, day-in day-out energy provider; longwave, temperature, and other sources pop up here and there.

So, what is the implication for hydrologic models that use air temperature as the primary energy input? The relationship between solar radiation and temperature will change. For a 2 degree C rise in temperature the model would see an energy input increase equivalent to moving the calendar forward several weeks to a month – a huge energy increase. The question becomes: what will the watershed actually see – will the snowpack see an energy increase of equivalent magnitude?

 

This chart for the Weber Basin of Utah illustrates the average May temperature for various elevations and what a plus 2 degree C increase would look like. This is what the hydrologic model will see. To ascertain what energy increase the watershed will actually see, we go back to the Bales graph – what the watershed will see is a more modest increase in the sensible heat line. Climate change won’t increase the hours in a day or the intensity of solar radiation, so the main energy driver to the snowpack – solar – will stay close to the same, all other things being equal. The total energy to the snowpack will thus increase modestly, but the hydrologic model has seen a much larger proportional increase. If this factor is not accounted for, the model is likely to overestimate the impacts that increased temperatures may have on snowpacks. Hydrologic models work well within their calibrated range because temperature is closely related to solar energy; with climate warming, this relationship may not be the stable input it once was, and models may need to be adjusted accordingly. Research needs to move in the direction of total energy input to the watershed instead of temperature-based modeling. Then we can get a much clearer picture of the impacts climate change may have on water resources. Recent research by Painter et al. regarding changes in snow surface albedo and accelerated runoff supports the solar vs. temperature view of energy input to the pack: surface dust can accelerate snowmelt by as much as 3 weeks or more, whereas modest temperature increases would accelerate the melt by about a week.
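The proportionality problem can be made concrete with a small numerical sketch. All the numbers below are illustrative assumptions: a generic degree-day factor, invented flux magnitudes loosely in the spirit of the Bales graph, and a crude scaling of sensible heat with the air-snow temperature gradient.

```python
# Sketch of the proportionality problem: a +2 C shift in a degree-day model
# vs. the same shift applied only to the sensible-heat term of an energy
# balance. All numbers are illustrative assumptions.

a = 4.5  # degree-day factor, mm per C per day (generic value)

def degree_day_melt(T_mean, dT=0.0):
    """Melt [mm/day] in a temperature-index model."""
    return a * max(T_mean + dT, 0.0)

# Degree-day view: implied energy input rises ~67% when 3 C becomes 5 C
base, warmed = degree_day_melt(3.0), degree_day_melt(3.0, dT=2.0)
dd_increase = (warmed - base) / base          # about 0.67

# Energy-balance view: only the sensible-heat flux scales with the warming;
# net solar (the dominant term) is unchanged.
Q_solar, Q_sensible = 10.0, 1.5               # MJ/m2/day, invented magnitudes
Q_sensible_warmed = Q_sensible * (5.0 / 3.0)  # crude: flux ~ air-snow temperature gradient
eb_increase = (Q_solar + Q_sensible_warmed) / (Q_solar + Q_sensible) - 1  # under 10%
```

Under these assumptions the temperature-index model implies roughly a two-thirds increase in melt energy, while the energy balance sees well under a ten percent increase – the overestimation mechanism described above.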

Evapotranspiration and losses to groundwater

Operational hydrologic models incorporate evapotranspiration mostly as a wild guess. I say that because there is little to no data to support the ‘rule curve’ the models use to arrive at these figures – some pan evaporation data here and there, sporadic in time and space, and no transpiration data at all. A rule curve is normally developed through the model calibration process. The general shape of the hydrograph is developed via precipitation/snow inputs, and then the mass balance is achieved through the subtraction of ET so that the simulated and observed curves fit and there is no water left over. As an aside, some water may be tossed into a deeper groundwater tank to make things somehow more realistic. So how are these curves derived? Mostly from mathematical calibration fit – one models the streamflow first with precipitation/snow input to get the desired shape of the hydrograph, then gets the final mass balance correct by increasing or decreasing the ET curve and the losses to deep groundwater. The bottom line is that these parameters have little basis in reality and are mathematically derived to achieve the correct mass balance. We have no clue what either one actually is. This may seem like a minor problem until we see what part of the hydrologic cycle they comprise.

 

In this chart we have a gross analysis of total watershed snowpack and annual streamflow. Higher elevation sites such as the Weber at Oakley have a much higher per-acre water yield than lower elevation watersheds such as the Sevier River at Hatch. In many cases, however, watershed losses are far greater than the snowpack runoff that actually makes it past a streamgage – typical of many western watersheds, where potential ET often exceeds annual precipitation. Streamflow, again, is a residual: the water that is left over after all the watershed bills are paid. We most often model the small part of the water balance and grossly estimate ET and groundwater losses. At Lake Powell, between 5 and 15% of the precipitation/snow model input shows up as flow. Small changes to the watershed loss rates, or to our assumptions about those rates, can have huge implications for model results. The general assumption in a warming world is that these watershed losses will increase. Higher temperatures lead to higher evaporative losses, which are the small part of the ET function – but will transpiration increase? This question needs more investigation for several reasons: 1) higher CO2 can lead to more efficient water use in many plants, including trees, and 10% to 20% less transpiration could be a significant offsetting factor in water yield; and 2) watershed vegetative response to less water, either through natural mechanisms (such as the massive forest mortality we currently see) or mechanical means, could also alter the total loss to ET. The assumptions made on the energy input side of the model, together with the assumptions on watershed loss rates, are likely the key drivers of model output, and both have substantial problems in quantification.

 

Is average temperature a good metric to assess potential snow and streamflow changes?

Seeing that solar radiation is the primary energy driver of snow ablation, we observe that in winter the northern latitudes have very little of that commodity. Without the primary driver of snowmelt, snowpacks are unlikely to experience widespread melt. We then ask: is average temperature a good indicator of what might happen? There is at least a 50%-80% probability that any given storm on any given winter day will occur during the coldest part of the day – nighttime, early morning or late evening – and the further north a given point is in latitude, the higher that probability. Once snowpack is on the ground in the mountains of Utah and similar high elevation, cool continental climate states, sensible heat exchange is not likely to melt it off. Thus minimum temperature, or some weighted combination below the average and perhaps a bit above the minimum temperature, might be a better metric.

 

In this graph, minimum average monthly temperatures for two SNOTEL sites, plus a 2 degree increase, are displayed. Little Grassy is the most southern low elevation (6,100 ft) site we have. It currently has a low snowpack (6 inches of SWE or less) in any given year and is melted out by April 1. Steel Creek Park is at 10,200 feet on the north slope of the Uinta Mountains in northern Utah – a typical cold site. As you can see, a 2 degree increase in temperature at Little Grassy could potentially shorten the snow accumulation/ablation season by a week or so on either end. This is an area which contributes to streamflow only in the highest of snowpack years, and as such, a 2 week decrease in potential snow accumulation may be difficult to detect given the huge interannual variability in SWE. A two degree rise in temperature at Steel Creek Park is meaningless – it would have little to no impact on snow accumulation or ablation. Thus most/much/some of Utah and similar areas west-wide may have some temperature to ‘give’ in a climate warming scenario before significant impacts to water resources occur. Supporting evidence for this concept comes from the observation that estimates of temperature increases for Utah are about 2 degrees or so, and we have as yet not been able to document declines in SWE or changes in its pattern of accumulation due to that increase. A question for further research would be: at what level of temperature increase could we anticipate temperature impacting snowpacks?

More rain, less snow

In the west, where snow is referred to as white gold, the potential for less snow has huge financial implications, from agriculture to recreation. The main reason many streams in the west flow at all is that infiltration capacity and deeper soil moisture storage are exceeded by snowmelt of 1 to 2 inches per day over a 2 to 12 week period, keeping soils saturated and excess water flowing to the stream. In the cool continental areas of the west, it can easily be demonstrated that 60%, 70%, 80% and in some cases over 95% of all streamflow originates as snow. Summer precipitation has to be of considerable extent and magnitude to have any impact on streamflow at all, and typically when it does, flow pops up for a day and immediately returns to base levels. So more rain, less snow has a very ominous tone, and over a long period of time, if snowpacks indeed dwindle to near nothing, very serious impacts could occur. In the short run, counterintuitively, rain on snow may actually increase flows. Let’s examine how this might occur. Currently, the date snowpacks begin is hugely variable and dependent on elevation; it can range from mid September to as late as early December. If rain occurs in the fall months, it is typically held by the soil through the winter and contributes to spring runoff via soil saturation. Soils that are dry typically take more of an existing snowpack to bring them to saturation before generating significant runoff. Soils that are moist to saturated take far less snowmelt to reach saturation and are far more efficient at producing runoff.

 

In this chart of Parrish Creek along the Wasatch Front in Utah we see the relationship between soil moisture (red), snowpack (blue) and streamflow (black). In the first 3 years of daily data, peak SWE was identical in all years but soil moisture was very low the first year, very high the second year and average the third year, and the corresponding streamflows from identical snowpacks were low, high and average. In the fourth year snowpacks were low, as was soil moisture, and the resulting streamflow was abysmal. In the fifth year, snowpacks were high but soil moisture was very low, and streamflow was mediocre, a major portion of the snowpack having been lost to bringing soils to saturation. Soil moisture can have a huge impact on runoff. Thus fall precipitation on the watershed as rain can be a very beneficial event – some of course is lost to evapotranspiration, but that would be the most significant loss.
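The soil-saturation effect described above can be sketched as a simple bucket deficit. The storage capacity and the straight subtraction are illustrative assumptions, not a Parrish Creek calibration; the point is only that identical snowpacks over different antecedent soil moisture yield very different runoff.

```python
# Hedged sketch of the soil-saturation effect: dry soils absorb part of the
# snowpack before runoff begins. Capacity and the simple subtraction are
# illustrative assumptions.

SOIL_CAPACITY_MM = 150.0   # hypothetical root-zone storage

def seasonal_runoff_mm(peak_swe_mm, soil_moisture_frac):
    """Runoff remaining after the soil moisture deficit is satisfied from snowmelt."""
    deficit = SOIL_CAPACITY_MM * (1.0 - soil_moisture_frac)
    return max(peak_swe_mm - deficit, 0.0)

# Identical snowpacks, different fall soil moisture (as in years 1-3 above):
dry_fall, wet_fall = seasonal_runoff_mm(400, 0.3), seasonal_runoff_mm(400, 0.9)
```

Under these assumptions the wet-fall year delivers roughly 30% more seasonal runoff from the very same peak SWE.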

 

In this chart of the Bear River’s soil moisture we see exactly that case – large rain events in October brought soil moisture from 30% saturation up to 65%, where it remained through the winter months until spring snowmelt. This is a very positive situation that increases snowmelt runoff efficiency; rain in the fall months is not necessarily a negative. Now let’s look at rain in the spring. These are typically rain on snow events, and from a water supply viewpoint they are also in fact very positive. If, for example, a watershed is melting 2 inches of SWE per day and watershed losses are 1 inch per day, then we have 1 inch of water available for runoff. Now say we have this same scenario plus a 1 inch rain on snow event: 2 inches of SWE melt plus 1 inch of rainfall gives a total input of 3 inches, and the same loss rate of 1 inch per day yields 2 inches of runoff – double the runoff for that particular day. Twice the runoff means more water in reservoirs. Where this eventually breaks down is when the areal extent of watershed snowpack becomes so small toward the end of the melt season that the total water yield becomes inconsequential. If temperature increases shorten the snowmelt season by only a week, then more rain, less snow may not be a huge factor in water yield. When this does become a significant problem – i.e., when the snow season is shortened by ‘X’ weeks – should be a subject for further research. For the short term, 50 years or so, water yields in the Colorado are not likely to see significant impacts from more rain/less snow, and watershed responses will likely be muddled and confused by vegetation mortality (Mountain Pine Beetle Activity May Impact Snow Accumulation And Melt, Pugh). For the short term, total precipitation during the snow accumulation/ablation season is likely a much more relevant variable than temperature. Small increases in precipitation at the higher elevations may well offset any losses in water yield from the current marginal water-yield-producing areas at lower elevations. A decrease in this precipitation in combination with temperature increases would be the worst scenario.
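The rain-on-snow arithmetic above reduces to one line: daily runoff is inputs minus losses, floored at zero.

```python
# The worked rain-on-snow example above, as arithmetic:
# daily runoff = (melt + rain) - losses, floored at zero.

def daily_runoff_in(melt_in, rain_in, loss_in):
    """Daily runoff depth [inches] from melt and rain inputs less watershed losses."""
    return max(melt_in + rain_in - loss_in, 0.0)

no_rain  = daily_runoff_in(2.0, 0.0, 1.0)   # melt only: 1 inch of runoff
with_ros = daily_runoff_in(2.0, 1.0, 1.0)   # 1 inch rain on snow: runoff doubles
```

Because losses stay fixed for the day, every inch of rain on an actively melting pack goes straight to runoff, which is why the event is counterintuitively positive for water supply.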

SWE to Precipitation ratios

When trying to express this concept of more rain, less snow, the SWE to PCP ratio was conceived as a metric to numerically express the observed change. When developing a metric that purports to be related to some variable, it is important to make sure, mathematically and physically, that the metric does what it was intended to do and that other factors do not unduly influence the outcome. Simply said, the SWE to PCP metric was intended to show how increased temperatures have increased rain events and decreased snow accumulation. This metric should be primarily temperature related, with only minor influences from other factors. The fact is, this metric is riddled with factors other than temperature that may preclude meaningful results. In reality it is a better indicator of drought than of temperature.

http://www.ut.nrcs.usda.gov/snow/siteinfo/data_bias/Swe%20to%20precip%20ratios%20in%20utah.pdf

 

 

In these two graphs one can see that the SWE to PCP ratio is a function of precipitation magnitude and as such is influenced by drought more than temperature. The physical and mathematical reasons are detailed in the paper ‘Characteristics of SWE to PCP Ratios in Utah’ available at the link above.
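One way to see how the ratio can track precipitation magnitude rather than temperature is a toy calculation. The fixed per-storm loss below (catch differences, wind, sublimation between events) is an invented assumption standing in for the mechanisms detailed in the linked paper, not a measured value.

```python
# Illustrative sketch: why SWE/PCP can fall in dry years even with no
# temperature change. Assume a fixed per-storm depth that registers as
# precipitation but never accumulates as SWE; the loss value is invented.

PER_STORM_LOSS_IN = 0.3   # hypothetical inches per storm lost before accumulation

def swe_to_pcp_ratio(storm_sizes_in):
    """Peak-SWE proxy over total precipitation for a season of storms."""
    pcp = sum(storm_sizes_in)
    swe = sum(max(s - PER_STORM_LOSS_IN, 0.0) for s in storm_sizes_in)
    return swe / pcp

wet_year = swe_to_pcp_ratio([2.0] * 10)   # ten big storms
dry_year = swe_to_pcp_ratio([0.5] * 10)   # ten small storms, same temperature regime
```

With identical storm counts and no warming at all, the small-storm (drought) year produces a much lower ratio, illustrating why the metric reads as drought rather than temperature.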

Conclusions

Many hydrologic models have serious limitations on both the energy input side and the mass balance side of watershed yield, with respect to ET and groundwater losses, that can influence estimates of the effects of temperature increases. It is possible that many systematically overestimate the impacts of temperature increases on water yields. Systematic bias in the data used by models can also predispose an outcome. Which model is used in these kinds of studies matters – snowmelt models that incorporate energy balance components such as solar radiation in addition to temperature likely produce more realistic results than temperature-based models. Assumptions about ET and groundwater losses can have significant impacts on results. Metrics developed to quantify specific variables or phenomena need to be rigorously checked in multiple contexts to ensure they are not influenced by other factors.

Filed under Climate Change Metrics, Vulnerability Paradigm

Uncertainty in Utah Hydrologic Data: Part 2 On Streamflow Data by Randall P. Julander

A three part series that examines some of the systematic bias in Snow Course, SNOTEL, Streamflow data and Hydrologic Models

Randall P. Julander Snow Survey, NRCS, USDA

Abstract

Hydrologic data collection networks – and for that matter, all data collection networks – were designed, installed, operated and maintained to solve someone’s problem. From the selection of sensors to the site locations, all details of any network were designed to accomplish the purpose of the network. For example, the SNOTEL system was designed for water supply forecasting, and while it is useful for avalanche forecasting, SNOTEL site locations are the worst locations for the data avalanche forecasters want, such as wind loading, wind speed/direction and snow redistribution. All data collection networks have bias, both random and systematic. Use of any data from any network for any purpose, including the intended one but especially any other purpose, should include an evaluation of data bias as the first step in quality research. Research that links a specific observation or change to a relational cause could be severely compromised if the data set has unaccounted systematic bias. Many recent papers utilizing Utah hydrologic data have not identified or removed systematic bias from the data. The implicit assumption is one of data stationarity – that all things except climate are constant through time, and thus observed change in any variable can be directly attributed to climate change. Watersheds can be characterized as living entities that change fluidly through time. Streamflow is the last check paid in the water balance – it is the residual after all other bills have been paid, such as transpiration, evaporation, sublimation and all other losses. Water yield from any given watershed can be impacted by vegetation change, watershed management such as grazing, forestry practices, mining, diversions, dams and a host of related factors. In order to isolate and quantify changes in water yield due to climate change, these other factors must also be identified and quantified.
Operational hydrologic models for the most part grossly simplify the complexities of watershed response due to the lack of data. They typically operate on some snow and precipitation data as water balance inputs, temperature as the sole energy input, gross estimations of watershed losses mostly represented by a generic rule curve, and streamflow as an output to achieve a mass balance. Temperature is not the main energy driver in snowmelt; shortwave solar energy is. Hydrologic models using temperature as the sole energy input can overestimate the impacts of warming.

Streamflow Data

Water yield from the watershed is a residual. It is what is left over after all other processes have claimed their share of the annual input of precipitation. As these processes change, water yield is impacted. In order to quantify the impacts climate change may have on water yield, it is essential to identify, quantify and remove the impacts other processes may have had. As a first step, the accuracy of the data set needs to be defined: to what level of accuracy can we actually measure streamflow? The USGS uses a rating system to rank each gaged point. Streamflow data can be excellent, good, fair or poor. Each streamflow value is not really a point, such as 60 cfs, but should be thought of as a range depending on its rating.

In this chart, the point values are represented by the red line – these are the observed values. If this site were excellent, there would be a high probability (90-95%) that the actual value is somewhere between the light green lines. Unfortunately, there are no sites in Utah that fit the excellent criteria. About 50% of the USGS sites in Utah are in the good category, and the actual value of any given point will be between the light blue lines. About 40% of the sites in Utah are classified as fair and fit between the magenta lines. The remaining 10% of sites are rated poor, and values may regularly fall outside the magenta lines. The point about data accuracy is that if one can only measure to the nearest 10% or 15%, then the limit of our ability to quantify change or trends in these data also resides within that accuracy. To say I have observed a 5% change in a variable measured with 15% accuracy may or may not have validity. The assumption would be that all error associated with these measurements is equally random in all directions, such that everything cancels to the observed value. These potential measurement errors can compound as one tries to adjust streamflow records to obtain a natural flow, such as the inflow to Lake Powell. One must add interbasin diversions and reservoir change in storage back to the observed flow in order to calculate what the true flow would have been absent water management activities.
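The rating-to-range idea can be sketched directly. The band percentages below follow the usual interpretation of the USGS classes (excellent within about 5%, good within about 10%, fair within about 15% of the true value most of the time); treat them as an assumption, and note that a 'poor' rating carries no defined bound at all.

```python
# Sketch: turn a point discharge into a range using USGS-style rating
# classes. Band percentages are the usual rule of thumb, assumed here.

RATING_BAND = {"excellent": 0.05, "good": 0.10, "fair": 0.15}

def discharge_range(q_cfs, rating):
    """Return (low, high) bounds for an observed discharge value [cfs]."""
    band = RATING_BAND[rating]    # a 'poor' rating raises KeyError: no defined bound
    return q_cfs * (1 - band), q_cfs * (1 + band)

# the 60 cfs example at a 'good' site is really somewhere in 54 to 66 cfs:
low, high = discharge_range(60.0, "good")
```

A claimed 5% trend sits comfortably inside the 54-66 cfs band, which is the article's point: the signal being hunted can be smaller than the measurement range.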

In this graph, the USGS observed flow for the Sevier River near Marysvale, Utah is adjusted for the change in storage of Piute and Otter Creek Reservoirs. Piute Reservoir is directly above the streamgage, and Otter Creek is on the East Fork of the Sevier some 20 miles upstream, with very little agriculture between the reservoir and the stream gage. Each line represents a full year of monthly total adjusted flow in acre-feet. Notice that fully 1/3 of the time from April to September, the adjusted flow goes negative – by as much as 15,000 acre-feet. We know that this is an impossible figure, and clearly these data points are in gross error. Just as important is what we now don’t know about every other data point – are they any better estimates of adjusted flow than the ones that are clearly in error? What about those that are close to negative, say in the zero to 5,000 acre-foot range – are they accurate? Or even those that “look normal” – can we be sure they are? The only reason we don’t suspect the Colorado River of this kind of data error is that its flow is large enough to mask the errors.
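The naturalization adjustment itself is simple accounting, which is exactly why the negative months are so diagnostic. The monthly values below are invented for illustration, not Sevier River data; the sign convention (storage gains added back, releases subtracted) follows the description above.

```python
# Sketch of the naturalization adjustment described above:
# natural flow = observed flow + diversions + change in reservoir storage.
# Monthly values [acre-feet] are invented; a negative result flags gross error.

def naturalize(observed_af, diversions_af, delta_storage_af):
    """Monthly naturalized flow from observed flow plus management adjustments."""
    return [o + d + s for o, d, s in zip(observed_af, diversions_af, delta_storage_af)]

observed  = [12000, 30000, 8000]
diverted  = [3000, 5000, 4000]
d_storage = [-2000, 9000, -14000]            # fill (+) adds back, release (-) subtracts
adjusted  = naturalize(observed, diverted, d_storage)
suspect   = [a for a in adjusted if a < 0]   # a negative 'natural' flow is impossible
```

Each term carries its own measurement error, and because the adjustments can be as large as the observed flow itself, the errors do not have to be large in percentage terms to drive the result negative.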

So, how is the official record for Lake Powell inflow adjusted? This, again, is a case where a data set has been generated to serve a specific function – to allow us to make a reasonable inflow forecast for Lake Powell that has some meaning to the water management community. It does not reflect the true natural inflow to Lake Powell.

We adjust the observed USGS streamflow at Cisco by 17 major diversions and 17 reservoirs, represented here by this schematic. What might the real inflow adjustments look like? On the Colorado side, there are 11,000 reservoirs, ponds and impoundments; 33,000 diverted water rights, ditches, canals, etc.; and over 7,000 wells. On the Utah side, there are 2,220 reservoirs, ponds and impoundments and 485 diverted water rights, ditches and canals, as well as an unquantified number of wells and center pivots. Wyoming certainly has some number of these kinds of water infrastructure as well, but I don’t have those numbers yet. As one can clearly see, this becomes an accounting nightmare – not just in trying to measure each of these diversions or reservoirs, but in how much evaporation is coming off of each reservoir and canal. Relative to surface area, canals and ditches may well have far greater evaporation than reservoirs. Reservoirs also have bank storage issues that alter hydrograph characteristics by storing water on the fill side and slowly releasing it (minus consumptive vegetative use) on the drawdown.

It is likely that for most diversions, the error associated with those data is positive. Why make such a declaration? Visit any water rights office and ask a simple question: “has anyone ever come in and complained that they got too much water and would like to give some back?” The complaint uniformly is ‘I am not getting my full allocation and I need/demand more’. Check any water gate and you will find that the gate wheel has been turned to its maximum extent against the chain lock, either by the water master or by the farmer checking to see that it is. When water is life and the means of providing, each will try to maximize the amount taken. Many of these diversions are simple structures, easily altered. The assumption that all of this water management is consistent in time is not likely true. In this context, it is clear that any reconstructed inflow to Lake Powell has the potential for serious deficiencies, especially as water use in the upper basins increases. Each 0.01 percent here and there, be it a well or pond evaporation or a diversion, slowly adds to the incremental error in the data set.

Changes to the Watersheds

Since the settlement of the west, there have been extensive changes in watershed characteristics. These changes can have a substantial impact on water yield and consequently have direct bearing on current trends. Let’s start with grazing – there is a substantial body of literature documenting the impacts that grazing can have on water yield. Overgrazing leads to less vegetation, soil compaction, greater water yield and soil erosion. In the 1870s, there were approximately 4,100,000 cows and 4,800,000 sheep in the 17 western states. By 1900, there were 19,600,000 cows and 25,100,000 sheep. This was the get-rich-quick scheme of the day – eastern and midwestern speculators could buy up herds, ship them west, graze for a couple of years, then ship back east for slaughter – no land purchase necessary, no regulations, simply fight for a spot to graze. Western watersheds were denuded and devastated. The Taylor Grazing Act, passed in the 1930s, was implemented in part because of a change in hydrology – people in the west, and Utah in particular, were the victims of annual floods, mud and debris flows brought on by snowmelt and precipitation events on damaged watersheds. This change in hydrology – increased flooding and flow – led to action to curtail grazing and heal damaged watersheds.

North Twin Lakes – 1920. Notice the erosional features, the lack of vegetation including trees.

Photos courtesy of the repeat photography project:

http://extension.usu.edu/rra/

 

North Twin Lakes, 1945. Notice that the erosional features are slowly filling in, sage and grasses are more abundant, trees are growing – the watershed is healing. Bottom line: less runoff, more consumptive use by vegetation. Hydrologically, this watershed has changed dramatically.

North Twin Lakes – 2005. Notice the erosional features are pretty much gone, excellent stands of all kinds of vegetation. Now, what is the difference in water yield from the watershed today compared to 1920? Water yield has decreased and consumptive use increased.

Along with restricted grazing, watershed restoration programs were implemented to improve conditions such as seeding programs to restore vegetation, bank stabilization and other watershed improvements. One of these programs was designed to mechanically reduce streamflows via increased infiltration and water storage on the watershed. Contour trenches were installed on watersheds throughout the west to reduce streamflow, floods and debris flows.

In this photo above Bountiful, Utah, notice the extent and depth of the contour trenches installed in the 1930s by the Civilian Conservation Corps, by hand and by horse-drawn bucket scoops. These trenches are even today several feet deep and able to store significant amounts of snowmelt for infiltration.

Mining

Mining was an activity that impacted western watersheds in a way not typically thought of from today’s perspective. After all, mines and the associated infrastructure and even the tailings comprise a tiny fraction of any watershed’s geographic area. However, from the 1850’s to basically the 1930’s or even later, ore had to be refined on site. There was no infrastructure or capability to bring ore from the mine to central smelters, nor was there the ability to bring coal to the mine. Roads were steep and rugged, rail lines expensive if they could be built at all, and transportation was by wagon. Thus smelting was most often done at the mine via charcoal. The large mines would have 20,000 bushels of charcoal on site. Large charcoal kilns could take 45 cords of wood per week, which equates to 36 million board feet of timber per decade. The famed Comstock Mine basically denuded the entire east side of Lake Tahoe. The cottage industry of the day was making charcoal for the mines. Many farms and ranches had smaller kilns to generate additional cash flow.
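The kiln figure above can be sanity-checked with a quick sketch. The cord-to-board-foot conversion used here (128 cubic feet per cord, a nominal 12 board feet per cubic foot) is my assumption, not something stated in the post:

```python
# Rough check of the kiln figures quoted above: 45 cords of wood per week,
# sustained for a decade.
# Assumed conversions (not from the post): a cord is 128 cubic feet of
# stacked wood, and one cubic foot of timber is nominally 12 board feet.
CORDS_PER_WEEK = 45
WEEKS_PER_DECADE = 52 * 10
BOARD_FEET_PER_CORD = 128 * 12  # = 1536

board_feet_per_decade = CORDS_PER_WEEK * WEEKS_PER_DECADE * BOARD_FEET_PER_CORD
print(f"{board_feet_per_decade / 1e6:.1f} million board feet per decade")
```

Under those assumptions a single large kiln works out to roughly 36 million board feet per decade, consistent with the figure quoted above.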

The Annie Laurie in the Tushar Mountains of Utah.

The Annie Laurie today. Notice in this recent photo how vegetation, especially the trees, has grown and matured, and how many more conifers there are today than in the past. In addition to charcoal, timber was necessary for the mine itself, for the buildings, and for heating and cooking.

Locations of the estimated 20,000 abandoned mines in Utah. This represents a substantial amount of timber removed from Utah watersheds over a nearly 80 year period of time. Most assuredly enough to impact species composition and water yield across many watersheds. Fewer trees equals greater water yield.

Logging

Logging on western watersheds provided necessary timber for infrastructure such as homes, businesses, barns and other buildings. Timber was most often cut and milled on site with the rough cut timbers hauled from the watershed via horse and wagon.

 

The Equitable Sawmill, early in the century.

Where the Equitable Sawmill once was. Notice the dramatic change in forest cover – more trees equals less water yield and in this case, potentially much less.

Tie Hacking

Tie hacking was a business that provided railroad ties to the industry – basically the same as logging but with a bigger product. As the railroad came through, tie hacks would cut trees and provide the necessary ties to keep the tracks moving forward. Ties at the time were not treated as they are now and needed to be replaced on a regular basis, as the soft pine wood could rot quickly.

This is Blacks Fork Commissary – the Tie Hack central provisions location on the North Slope of the Uintahs in northern Utah.

Tie hacks high-grading all the Douglas Fir off the North Slope, leaving the Lodgepole Pine. The majority of the North Slope today is comprised of dense stands of Lodgepole Pine. The rail lines required 3000 ties per mile and 600 miles of track between western Colorado and the Sierras – at about 14 million board feet per decade.

Fire

The policy of fighting western fires has done more to change the landscape of western watersheds than possibly any other factor. At the turn of the century, fires burned 10 to 30 million acres of forest every year. With the advent of Smokey Bear, between 2 and 5 million acres burn annually. This huge reduction in burned area has changed the species composition, density and age of forests across the west. Watersheds that used to have 10 trees per acre now have 200 or more. Fewer trees produce more water yield.

Danish Meadows, 1900 – with frequent fires.

Danish Meadows, 2000. No fires for nearly 100 years. More trees, less water.

The Forest Service has done much research with paired watersheds and timber harvest. The Fool Creek experiment in Colorado is a classic – two watersheds of similar characteristics measured together for more than a decade. Then one watershed was kept pristine while the other was cut by 40%. The end result was an increase in water yield of 40% for 20 years, as well as a substantial 25% increase for the period of 30 to 50 years.

Note that the timing of annual snowmelt was also accelerated because the forest cover was opened up to shortwave solar radiation, the primary energy input to snowmelt.

A recent Duke University study confirms that Utah forests are basically very young, with the dominant age class in the 0-100 year old category. This basically confirms that after the mining/logging era of the 1850’s to 1960’s, a different watershed management policy has prevailed on Utah watersheds. Small trees not harvested early on are now the 100 year old trees, and seedlings at the time are now the 50 year old trees.

Species Composition Matters

With the virtual elimination of both fire and logging, species such as Aspen are being steadily replaced by Conifers. In paired plots comparing water consumption between Aspen and Conifers, LaMalfa and Ryel found much greater SWE accumulation in the Aspen stands vs the Conifers – research already well known – but also that soil moisture under the Aspen stands was much greater than under the Conifers. Aspens terminate transpiration with the first frost of the season, and soil moisture starts to recover. Conifers, on the other hand, keep transpiring and pumping that moisture out of the ground.

LaMalfa/Ryel – 34% less SWE under the conifers than aspens.

LaMalfa/Ryel – nearly 4.5 inches less soil moisture in the conifers vs the aspens.

Overall, there was 42% less water in the Conifer community vs the Aspen community – a whopping 10.5 fewer inches of total water potentially available from the Conifers than the Aspens. This area of northern Utah, near Monte Cristo, typically gets only 37 inches of annual precipitation, so the Conifers could potentially produce far less runoff than the Aspens. Utah and Colorado have lost 2.5 million acres of aspens to conifer encroachment, with approximately 1.5 million of that in the Colorado River Basin. That translates into about 125,000 acre feet of water lost per inch of water yield. From this single factor (Aspen replaced by Conifer), the April – July inflow to Lake Powell could be reduced by 2% to 17%.
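The acre-foot arithmetic behind the 125,000 figure can be laid out explicitly. An acre foot is one foot of water over one acre, so one inch of yield over one acre is 1/12 of an acre foot:

```python
# Water lost per inch of yield from aspen-to-conifer conversion in the
# Colorado River Basin, using the acreage quoted above.
acres_converted = 1_500_000  # aspen acres lost within the basin

# One inch of water over one acre = 1/12 acre foot.
acre_feet_per_inch_of_yield = acres_converted / 12
print(f"{acre_feet_per_inch_of_yield:,.0f} acre feet per inch of water yield")
```

This reproduces the roughly 125,000 acre feet per inch cited above; the 2% to 17% range for Lake Powell inflow then depends on how many inches of yield are assumed lost.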

 

Ground Water Withdrawals

In the long term, groundwater is connected to surface water. This is an area that needs investigation, as groundwater withdrawals within the basin could be substantial – certainly on the order of thousands of acre feet annually, and potentially much greater. Over a period of many years, increased streamflow losses to groundwater are likely.

Agricultural Practices

In the early years, agriculture was primarily flood irrigation, where a big share of the water applied to any specific field would run off back to the ditch and eventually become return flow to the river. Much of the flood irrigation has been replaced by sprinkler irrigation, with much higher evaporative losses but more efficient crop production. Nearly all of the water that hits the ground is consumptively used by crops. How this impacts streamflow is an issue for more research.

Paradigm Shift

For a century in the west, we indulged in watershed practices that increased streamflows. In the 60’s and 70’s came the beginnings of the environmental movement, and with it significant changes in watershed management. Mining no longer requires vast amounts of timber, logging is but a scant fraction of its past, tie hacking is extinct, fires are extinguished, grazing is tightly managed. Watersheds now have huge amounts of vegetation, and in particular vastly more trees than they have ever seen in a historical context. Species composition has changed, with far fewer aspens and far more conifers. More conifers equals less snow, more conifers equals less soil moisture, more trees and vegetation in general equals less streamflow. For 100 years we systematically increased flows from western watersheds, and for the past 50 we have done everything possible to reduce streamflows.

Portent for the Future

Forest management that has removed fire and logging from much of the equation has had the net effect of vastly increasing the number of trees per acre of land. Too many trees for the water resource has increased the competition for water to the extent that the recent drought weakened the forests, and a huge pine beetle and spruce budworm infestation has killed hundreds of thousands of acres of trees. The analogy of 10 men on the edge of a desert with water for 5 is appropriate. If one sends all 10, they all die. If one sends 5, most will likely survive. Our forests sent all 10, and the result is massive forest mortality. In Utah, of 5 million acres of forested lands, nearly 1 million acres is standing dead, with the potential for greater mortality. 1 million acres of dead forest equates to the potential of 83,000 acre feet of additional water per inch of water yield, perhaps as much as 800,000 acre feet in total. In the short run, Utah is likely to see greater water yield, not less – all other things equal. Also, runoff will likely be earlier due to the opening of the canopy to shortwave solar radiation.
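The same acre-foot arithmetic applies to the standing-dead acreage. The ten-inch yield figure in the sketch below is an illustrative assumption chosen to show how the ~800,000 acre-feet upper bound could arise, not a number from the post:

```python
# Potential extra water from Utah's ~1 million acres of standing-dead forest.
dead_acres = 1_000_000

# One inch of added yield over one acre = 1/12 acre foot.
acre_feet_per_inch = dead_acres / 12          # ~83,000 acre feet per inch
total_at_ten_inches = acre_feet_per_inch * 10  # illustrative assumption

print(f"{acre_feet_per_inch:,.0f} acre feet per inch of added yield; "
      f"{total_at_ten_inches:,.0f} acre feet at ten inches")
```

At roughly ten inches of added yield this approaches the ~800,000 acre feet upper bound mentioned above.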

 

Conclusions

There are many and complex reasons for the declines in streamflows west-wide, of which climate change is but one. It is not a simple issue, and each contributing component is certainly not easily quantified.

 

 

 

 



Filed under Climate Change Metrics, Guest Weblogs

Guest post by Dr. Jos de Laat, Royal Netherlands Meteorological Institute [KNMI]

Recently, Roger’s ClimateSci posted a blog entitled “New Paper ‘Econometrics and the Science of Climate Change’ By Tim Curtin”, which introduces a recent paper by Curtin [2011].

http://pielkeclimatesci.wordpress.com/2011/05/25/new-paper-econometrics-and-the-science-of-climate-change-by-tim-curtin/

The Curtin [2011] paper discusses – amongst other topics – the potential impact of “anthropogenic water vapor” on climate. Anthropogenic water vapor refers to

“water vapour produced by the combustion of hydrocarbon fuels, both by direct creation of water vapour in the combustion process (18 GtH2O per year.), and by the much larger volume of steam created by the power generation process.”

The paper then goes on to claim that this additional water vapour has a significant impact on climate.

Now, as I am always on the lookout for unusual or new findings, this paper caught my attention, also because in the past I had done some back-of-the-envelope calculations about how much water vapour (H2O) is released by combustion processes. Which is a lot, don’t get me wrong, but my further calculations back then suggested that the impact on the global climate was marginal. Since Curtin [2011] comes to a different conclusion, I was puzzled how that could be. Given that Curtin [2011] is a long paper with lots of information and calculations, it took a little bit of time to figure things out, but in the end I think I understand what the culprit of the Curtin [2011] calculation is, and also where – as far as I am concerned right now – a mistake is made.

I mailed Roger my comments and he graciously offered to post it as a weblog and ask Tim Curtin for a response.

So, here we go: here is where I think the Curtin [2011] calculation goes astray.

The first question that comes up is how much the addition of anthropogenic H2O by combustion adds to the global amount of H2O. Here follows the calculation I did several years back:

- Curtin [2011] mentions 17.5 Gtons of anthropogenic H2O (= 1.75 10^16 g) for 2008-2009 (I don’t know if that is for one or two years, but for the sake of argument let’s assume it is one year).
- Per unit area (area of Earth’s surface = 5.1 10^14 sq.metre) this becomes approximately 34 grams over one year.
- The total amount of atmospheric H2O – global average – per unit area is about 25 mm, or 2.5 kg/sq.metre or 2500 g/sq.metre, which to good approximation is entirely located in the troposphere. Just for reference, one mm of rainfall equals one litre of water per sq.metre.
- The residence time of tropospheric H2O is about 10 days, so the 2500 g/sq.metre is renewed every 10 days.
- The contribution of anthropogenic H2O per 10 days per unit area then becomes ~1 gram (34 grams times 10 days / 365 days).
- The 1 gram is what should be compared to the 2500 g/sq.metre that is already present, which is a change in H2O of 0.04%.

Thus, continuous anthropogenic H2O emissions lead to a continuous change in H2O of ~1 gram, or ~0.04% of the total amount of tropospheric H2O. That is the number to keep in mind, as explained next.
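The steps above condense into a few lines of arithmetic, using the same round numbers as the bullet list:

```python
# Back-of-the-envelope estimate of the anthropogenic contribution to
# tropospheric water vapour, following the bullet points above.
EARTH_AREA_M2 = 5.1e14           # surface area of the Earth
ANTHRO_H2O_G_PER_YEAR = 1.75e16  # 17.5 Gt of anthropogenic H2O per year
COLUMN_H2O_G_PER_M2 = 2500.0     # mean atmospheric H2O column (25 mm)
RESIDENCE_DAYS = 10              # residence time of tropospheric H2O

per_m2_per_year = ANTHRO_H2O_G_PER_YEAR / EARTH_AREA_M2      # ~34 g/m2/yr
per_residence_time = per_m2_per_year * RESIDENCE_DAYS / 365  # ~1 g/m2
relative_change = per_residence_time / COLUMN_H2O_G_PER_M2   # ~0.04 %

print(f"{per_m2_per_year:.0f} g/m2 per year, "
      f"{per_residence_time:.2f} g/m2 per residence time, "
      f"{100 * relative_change:.3f} % of the column")
```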

The Curtin paper goes on to refer to Pierrehumbert et al. [2007] in the following calculation (halfway down page 12 of Curtin [2011]):

“But using the first Pierrehumbert et al.[2007] figure above, the radiative forcing (RF) from this addition to [H2O] is 50 per cent higher than that of increased atmospheric CO2. According to the IPCC [Forster and Ramaswamy 2007], the radiative forcing per GtCO2 is 0.0019 Watts/sq.metre, so that changes in [H2O] is 0.0028 W/sq.metre.”

This 50% is actually stated in Pierrehumbert et al. [2007] as follows:

“…one finds that each doubling of water vapor reduces OLR by about 6 W/m2 (Pierrehumbert 1999). This is about 50% greater than the sensitivity of OLR to CO2.”

But what should be mentioned here as well is the sentence in Pierrehumbert [2007] before the statement above:

“… The logarithmic effect of water vapor is somewhat more difficult to cleanly quantify than is the case for well mixed greenhouse gases like CO2, but if one adopts a base-case vertical distribution and changes water vapor by multiplying this specific humidity profile by an altitude independent factor, one finds that each doubling of water vapor reduces OLR by about 6W/m2 [Pierrehumbert 1999]. This is about 50% greater than the sensitivity of OLR to CO2.”

So this 50% is only valid for the case where the amount of humidity THROUGHOUT the troposphere is doubled. Pierrehumbert [2007] wants to estimate how a doubling of CO2 compares to a doubling of H2O, but does not provide a value for similar changes in absolute amounts of H2O and CO2. This is important, because the amount of atmospheric H2O – typically a few % near the surface – is much larger than the absolute amount of CO2 – about 0.039%. The radiative forcing of, for example, methane is much larger than that of CO2, but one should consider that the concentrations of methane are currently much smaller than those of CO2, and the residence time of CH4 is also much shorter. This already indicates that care should be taken when comparing the radiative forcings of atmospheric gases with very different concentrations and residence times. And the absolute amounts of atmospheric CO2 and H2O also differ greatly.

For calculating the radiative effect of anthropogenic H2O using Pierrehumbert’s [2007] number, the relative change in total atmospheric H2O due to anthropogenic H2O should be used. The radiative forcing of anthropogenic H2O is then actually 0.04% of the 6 W/sq.metre from Pierrehumbert [1999], which really is close to negligible (0.0024 W/sq.metre). Even an order of magnitude more anthropogenic H2O still results in a very small radiative forcing.
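Scaling the 6 W/sq.metre-per-doubling sensitivity by a 0.04% relative change gives the 0.0024 W/sq.metre quoted above. That linear scaling matches the post; strictly, the logarithmic dependence gives a value about 1.4 times larger, equally negligible:

```python
import math

# Forcing from a ~0.04% relative increase in column H2O, scaled from the
# ~6 W/m2 reduction in OLR per doubling of water vapour (Pierrehumbert 1999).
W_PER_DOUBLING = 6.0
relative_change = 0.0004  # anthropogenic fraction of the H2O column

forcing_linear = W_PER_DOUBLING * relative_change              # 0.0024 W/m2
forcing_log = W_PER_DOUBLING * math.log2(1 + relative_change)  # ~0.0035 W/m2

print(f"linear: {forcing_linear:.4f} W/m2, "
      f"logarithmic: {forcing_log:.4f} W/m2")
```

For comparison, the total anthropogenic CO2 forcing is of order 1.7 W/sq.metre, some three orders of magnitude larger.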

Hence, where I think Curtin [2011] makes a mistake is in comparing the absolute change in anthropogenic H2O to the absolute change in anthropogenic CO2 for calculating its radiative effect, where instead the radiative effect of anthropogenic H2O should be calculated from the relative change in atmospheric H2O due to a change in anthropogenic H2O. The change in the absolute amount of anthropogenic H2O is quite similar to the change in the absolute amount of (anthropogenic) CO2, but the change in the absolute amount of anthropogenic H2O is very small compared to the absolute amount of H2O. Hence my much smaller estimate of its radiative effect.

Obviously such “back-of-the-envelope” calculations have limited use. The atmospheric H2O cycle is rather complex, as Roger has pointed out on many occasions on this blog, and the radiative forcing of H2O is assumed to be mainly related to what happens in the upper troposphere – where H2O concentrations are orders of magnitude smaller than at the surface. But then estimating the effect of anthropogenic H2O should include all the processes relevant to the hydrological cycle, which basically means full 3-D climate modelling.

For the moment I don’t see how such a relatively small increase in atmospheric water vapour could have such a large effect as claimed in Curtin [2011], but feel free to comment.

source of image


Filed under Climate Change Forcings & Feedbacks, Guest Weblogs

New Paper “Recent Wind Driven High Sea Ice Export In The Fram Strait Contributes To Arctic Sea Ice Decline” By Smedsrud Et Al 2011

In response to the post

New Paper Under Review “Changes In Seasonal Snow Cover In Hindu Kush-Himalayan Region” By Gurung Et Al 2011

Peter Williamson alerted us to a related paper that highlights the major role of regional circulation patterns in climate (this time for the Arctic). The paper is

Smedsrud, L. H., Sirevaag, A., Kloster, K., Sorteberg, A., and Sandven, S.: Recent wind driven high sea ice export in the Fram Strait contributes to Arctic sea ice decline, The Cryosphere Discuss., 5, 1311-1334, doi:10.5194/tcd-5-1311-2011, 2011

Arctic sea ice area decrease has been visible for two decades, and continues at a steady rate. Apart from melting, the southward drift through Fram Strait is the main loss. We present high resolution sea ice drift across 79° N from 2004 to 2010. The ice drift is based on radar satellite data and correspond well with variability in local geostrophic wind. The underlying current contributes with a constant southward speed close to 5 cm s−1, and drives about 33 % of the ice export. We use geostrophic winds derived from reanalysis data to calculate the Fram Strait ice area export back to 1957, finding that the sea ice area export recently is about 25 % larger than during the 1960’s. The increase in ice export occurred mostly during winter and is directly connected to higher southward ice drift velocities, due to stronger geostrophic winds. The increase in ice drift is large enough to counteract a decrease in ice concentration of the exported sea ice. Using storm tracking we link changes in geostrophic winds to more intense Nordic Sea low pressure systems. Annual sea ice export likely has a significant influence on the summer sea ice variability and we find low values in the 60’s, the late 80’s and 90’s, and particularly high values during 2005–2008. The study highlight the possible role of variability in ice export as an explanatory factor for understanding the dramatic loss of Arctic sea ice the last decades.


Filed under Climate Change Forcings & Feedbacks