Monthly Archives: June 2011

An Interesting 1973 Paper “A Preliminary Study On The Climatic Fluctuations During The Last 5000 Years In China” By Chu Ko-Chen

Chu Ko-Chen (Co-Ching Chu), 1973: A Preliminary Study on the Climatic Fluctuations during the Last 5000 Years in China. cnki:ISSN:1006-9283.0.1973-02-005, pages 243-261.

This paper is interesting because of its political perspective, as it was published while Mao was still leading China, but also, of more importance to climate, because of its statements about temperature trends in China. I have a complete English version of the paper, which was republished from Scientia Sinica, Vol. XIV, No. 2, May 1973, in a journal called Cycles. If someone has the url for that journal, please e-mail it to me and I will add it to this post.

The abstract reads [highlight added]:

“The world climate during the historical times fluctuated. The numerous Chinese historical writings provide us excellent references in studying the ancient climate of China. The present author testifies, by the materials got from the histories and excavations, that during Yin-Hsu at Anyang, the annual temperature was about 2℃ higher than that of the present in most of the time. After that came a series of up and down swings of 2—3℃ with minimum temperatures occurring at approximately 1000 B.C. (about the end of the Yin Dynasty and the beginning of the Chou Dynasty), 400 A.D. (the Six Dynasties), 1200 A.D. (the South Sung Dynasty), and 1700 A.D. (about the end of the Ming Dynasty and the beginning of the Ching Dynasty). In the Han and the Tang Dynasties (200 B.C.—220 A.D. and 600—900 A.D.) the climate was rather warm. When the world climate turned colder than usual, it tended to begin at the Pacific coast of Eastern Asia, propagating as a wave westward, through Japan and China, to the Atlantic coast of Europe and Africa. When the world temperature recovered, it tended to propagate eastward from the west. A fuller knowledge of the climatic fluctuations in historical times and a good grasp of their laws would render better service to the long-range forecasting in climate.”



Filed under Climate Change Metrics, Research Papers

Continued Bias Reporting On The Climate System By Tom Karl and Peter Thorne

Update (June 30, 2011): The complete BAMS paper is available from:

Blunden, J., D. S. Arndt, and M. O. Baringer, Eds., 2011: State of the Climate in 2010. Bull. Amer. Meteor. Soc., 92 (6), S1-S266.

*************************************************

Today (6/29/2011), there were news articles concerning the state of the climate system; e.g. see  the Associated Press news release in the Washington Post

Climate change study: More than 300 months since the planet’s temperature was below average

The news article refers to the 2010 climate summary that will be published as a Bulletin of the American Meteorological Society article. That article will undoubtedly include informative material on the climate.

However, the news article itself erroneously reports on the actual state of the climate, as can easily be shown simply by extracting current analyses from the web. Two of the prominent individuals quoted in the news report are Tom Karl and Peter Thorne. They make the following claims:

“The indicators show unequivocally that the world continues to warm,” Thomas R. Karl, director of the National Climatic Data Center, said in releasing the annual State of the Climate report for 2010.

“There is a clear and unmistakable signal from the top of the atmosphere to the depths of the oceans,” added Peter Thorne of the Cooperative Institute for Climate and Satellites, North Carolina State University.

“Carbon dioxide increased by 2.60 parts per million in the atmosphere in 2010, which is more than the average annual increase seen from 1980-2010, Karl added. Carbon dioxide is the major greenhouse gas accumulating in the air that atmospheric scientists blame for warming the climate.”

Karl is correct on the increase in carbon dioxide, but otherwise he and Peter Thorne are not honestly presenting the actual state of the climate system. They focus on the surface temperature data, which, as we have reported in peer-reviewed papers, has major unresolved uncertainties and includes a systematic warm bias; e.g. see

Pielke Sr., R.A., C. Davey, D. Niyogi, S. Fall, J. Steinweg-Woods, K. Hubbard, X. Lin, M. Cai, Y.-K. Lim, H. Li, J. Nielsen-Gammon, K. Gallo, R. Hale, R. Mahmood, S. Foster, R.T. McNider, and P. Blanken, 2007: Unresolved issues with the assessment of multi-decadal global land surface temperature trends. J. Geophys. Res., 112, D24S08, doi:10.1029/2006JD008229.

Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2009: An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841.

The climate system has not warmed since about 2003, either in the upper ocean or in the lower troposphere, as shown in the three figures below.

Tom Karl is wrong in his first quote – the indicators DO NOT show unequivocally that the world continues to warm. The warming has stalled, at least for now, since about 2003. Peter Thorne is misrepresenting the actual data when he erroneously reports (assuming he means ‘unequivocal warming’) that “There is a clear and unmistakable signal from the top of the atmosphere to the depths of the oceans”.

Global Ocean Heat Content 1955-present

Second, lower tropospheric temperatures (from both the RSS and UAH MSU data) also do NOT show unequivocally that the world continues to warm! Indeed, that warming has also stalled since about 2002.

Channel TLT Trend Comparison

Figure caption: Global  average (70 south to 82.5 north) lower tropospheric temperatures (from RSS)

Figure caption: Global  average (70 south to 82.5 north) lower tropospheric temperatures (from UAH)

It should not be surprising that Tom Karl and Peter Thorne are not honestly reporting the actual state of the climate system, which involves a much more complex signal, in response to human and natural climate forcings and feedbacks, than they acknowledge; e.g. see

Christy, J.R., B. Herman, R. Pielke, Sr., P. Klotzbach, R.T. McNider, J.J. Hnilo, R.W. Spencer, T. Chase and D. Douglass, 2010: What do observational datasets say about modeled tropospheric temperature trends since 1979?  Remote Sensing, 2(9), 2148-2169.

Previous documentation of the biases and efforts to manage the information provided to policymakers by Tom Karl and Peter Thorne includes the following examples

Pielke Sr., Roger A., 2005: Public Comment on CCSP Report “Temperature Trends in the Lower Atmosphere: Steps for Understanding and Reconciling Differences“. 88 pp including appendices

The Selective Bias Of NOAA’s National Climate Data Center (NCDC) With Respect To The Analysis And Interpretation Of Multi-Decadal Land Surface Temperature Trends Under The Leadership Of Tom Karl and Tom Peterson

Erroneous Climate Science Statement By Tom Karl, Director Of The National Climate Data Center And President Of The American Meteorological Society

E-mail Documentation Of The Successful Attempt By Thomas Karl Director Of the U.S. National Climate Data Center To Suppress Biases and Uncertainties In the Assessment Of Surface Temperature Trends

Erroneous Statement By Peter A. Stott And Peter W. Thorne In Nature Titled “How Best To Log Local Temperatures?”

It is disappointing that the media do not properly question the claims made by Tom Karl and Peter Thorne. They are presenting a biased report on the actual state of the climate system.


Filed under Bias In News Media Reports

Tim Curtin’s Response to Jos De Laat’s Comments

On June 22, 2011 the post

Guest post by Dr. Jos de Laat, Royal Netherlands Meteorological Institute [KNMI]

was presented which commented on an earlier post by Tim Curtin titled

New Paper “Econometrics And The Science Of Climate Change” By Tim Curtin

Tim has provided a response to Jos’s post which is reproduced below.

Reply By Tim Curtin

I am very glad to have Jos de Laat’s comments on my paper, not least because I know and admire his work. I agree with much if not all of what he says, and fully accept his penultimate remark: “estimating the effect of anthropogenic H2O should include all the processes relevant to the hydrological cycle, which basically means full 3-D climate modelling”.  I begin by going through his points sequentially.

1.         Jos said “in the past I had done some back-of-the-envelope calculations about how much water vapour (H2O) was released by combustion processes. Which is a lot, don’t get me wrong, but my further calculations back then suggested that the impact on the global climate was marginal.  Since Curtin [2011] comes to a different conclusion, I was puzzled how that could be”. Well, using my paper’s equation (1) and its data for the outputs from hydrocarbon combustion, I found that combustion currently produces around 30 GtCO2 and 18 GtH2O per annum (a small arithmetic sketch of this split is given at the end of this reply). Given that the former figure, with its much lower radiative forcing than that from H2O, is considered to be endangering the planet, I would have thought that even only 18 GtH2O must also be relevant – not necessarily in terms of total atmospheric H2O (which I henceforth denote [H2O]), but as part of the global warming supposedly generated by the 30 GtCO2 emitted every year by humans. To that should be added, as my paper notes, the 300 GtH2O of additions to [H2O] from the water vapor generated by the cooling systems of most thermal and nuclear power stations.

2.         The next key point is not how much [H2O] there is across the surface of the globe, but how much is present at the infrared spectrum wavelengths, and how much of that varies naturally relative to the incremental annual extra fluxes generated by the total H2O emissions from hydrocarbon combustion and from the cooling process of power generation.

3.         Then, if we do accept de Laat’s claim that the quantity of [H2O] per square metre is relevant, the same applies to the annual NET increase in atmospheric [CO2] in 2008-2009 of just 14 GtCO2 (from TOTAL emissions, all sources including LUC, of 34.1 GtCO2), and that is much less than the total 33 GtH2O from just hydrocarbon combustion.[1] How much is the net increase in [CO2] per square metre? See Nicol (2011: Fig. 6, copy attached below), and the arithmetic sketch at the end of this reply.

4.         Pierrehumbert’s main omission is the [H2O] emitted during the cooling process. Let us recall what that involves, namely collection of water from lakes and rivers and using it to cool steam-driven generators, which produces emissions of steam (Kelly 2009) that are released to the atmosphere through cooling towers like those at the left of the photograph Roger put at the head of de Laat’s post; the steam soon evaporates to form [H2O] and then precipitates back to earth after about 10 days, as de Laat notes. What is significant is the huge acceleration of the natural flux of evaporation of surface water to the atmosphere and then back again as rain after about 10 days. Natural evaporation is a very SLOW process; power station cooling towers speed it up enormously. As my paper footnoted, cooling the power stations of the EU and USA would need at least 25% of the flow of the Rhine, Rhone and Danube rivers, but how much do those rivers contribute to ordinary evaporation over a year? For another order of magnitude, average daily evaporation in Canberra is around 2 mm – roughly 730 mm over a year, rather more than its annual mean rainfall of 600 mm. That is why we have to rely on dams for our water needs!

5.         My paper cites Pierrehumbert at some length, but I regret that his recent uncalled for attack on Steve McIntyre and Ross McKitrick has led me to change my opinion of him.

6.         The graph below is from John Nicol (with his permission); he is an Australian physics professor (James Cook University). It shows how indeed [CO2], like [H2O], operates close to the surface of the globe, not in the stratosphere or upper troposphere as perhaps de Laat would have it.

 

Caption to Figure 6: John Nicol’s diagram shows the power absorbed by carbon dioxide within a sequence of 10 m thick layers up to a height of 50 metres in the troposphere. The curves represent the level of absorption for concentrations of CO2 equal to 100%, 200% and 300% of the reported current value of 380 ppm. As can be seen, the magnitude of absorption for the different concentrations is largest close to the ground, and the curves cross over at heights between 3 and 4 metres, reflecting the fact that for higher concentrations of CO2, more radiation is absorbed at the lower levels, leaving less power for absorption in the upper regions.


[1]  www.globalcarbonproject.org, and CDIAC for atmospheric CO2.
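As an arithmetic aside on points 1 and 3 above, both the combustion split between CO2 and H2O and the per-square-metre spreading of the net CO2 increase can be checked in a few lines of Python. This is only a back-of-the-envelope sketch: the example fuels and the standard Earth-surface-area figure are my own illustrative assumptions, not the calculation in Curtin’s equation (1).

# Back-of-the-envelope checks for points 1 and 3 above (illustrative only).

MOLAR_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}

def combustion_products(n_c, n_h):
    """kg of CO2 and H2O produced per kg of a hydrocarbon CnHm burned completely."""
    fuel = n_c * MOLAR_MASS["C"] + n_h * MOLAR_MASS["H"]
    co2 = n_c * (MOLAR_MASS["C"] + 2 * MOLAR_MASS["O"])
    h2o = (n_h / 2) * (2 * MOLAR_MASS["H"] + MOLAR_MASS["O"])
    return co2 / fuel, h2o / fuel

# Point 1: H2O/CO2 mass ratio for two example fuels.
for name, (n_c, n_h) in {"methane CH4": (1, 4), "octane C8H18": (8, 18)}.items():
    co2, h2o = combustion_products(n_c, n_h)
    print(f"{name}: {co2:.2f} kg CO2, {h2o:.2f} kg H2O per kg fuel "
          f"(H2O/CO2 = {h2o / co2:.2f})")

# Point 3: spread the quoted 14 GtCO2 net annual increase over the Earth's surface.
EARTH_SURFACE_M2 = 5.1e14                       # standard value for Earth's surface area
per_m2_g = 14e12 * 1000.0 / EARTH_SURFACE_M2    # 14 Gt = 14e12 kg, converted to grams per m2
print(f"Net CO2 increase: about {per_m2_g:.0f} g per square metre per year")

On these stoichiometric ratios, roughly 30 GtCO2 per year from a mixed fossil-fuel portfolio would be accompanied by very roughly 15-25 GtH2O of combustion water, which is of the same order as the 18 GtH2O figure quoted in point 1; the net CO2 increase works out to on the order of 27 g per square metre per year.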

 


Filed under Climate Change Forcings & Feedbacks, Guest Weblogs

Comments By Marcia Wyatt on CMIP Data

The CMIP3 archive, coordinated by the World Climate Research Programme’s (WCRP’s) Working Group on Coupled Modelling, is used extensively for climate model studies. Very recently Marcia Wyatt, as part of her continuing studies building on her paper

Wyatt, Marcia Glaze, Sergey Kravtsov, and Anastasios A. Tsonis, 2011: Atlantic Multidecadal Oscillation and Northern Hemisphere’s climate variability. Climate Dynamics, DOI: 10.1007/s00382-011-1071-8 (see also her guest weblog on the paper here)

found problems with this data. I asked her to summarize her experience for our weblog. Her summary is given below.

CMIP3 experience by Marcia Wyatt

CMIP3 (Coupled Model Intercomparison Project) provides a free and open archive of recent model output (netcdf files) for use by researchers. While convenient, it is not immune from data-crunching-induced complacency. Following is a cautionary tale.

My current research project involves processing CMIP model-datasets, converting “raw” variables into climate indices, and then applying statistical analysis to these reconstructed indices. The process has not been straightforward. With each new set of model data come new problems. For my particular project, slight differences in the formats of each CMIP model-dataset have required modification of computer coding.

Initial phases of a venture present the steepest learning curves; working with the CMIP data has been no exception.  But, with each successful processing and analysis of a dataset, my confidence in the various computer codes scripted for the specific dataset characteristics grew. This trend was not to last.

Early last week (June 21), as the last known glitch was being addressed, allowing progress to be made on several more not-yet-completed datasets, an oddity came to light. As I was putting the processed data – the reconstructed indices – through preparation steps for statistical analysis, I realized the processed values were odd. There were lots of repeated numbers, but with no discernible pattern. At first I suspected the codes; after all, I had encountered problems with each new dataset before. But the inconsistency of performance of the codes on similarly formatted data implied the problem lay elsewhere.

Before concluding the problem lay elsewhere, I looked at the unprocessed data – the “raw” values read from the netcdf files. Clusters of zeroes filled certain regions of the huge matrices, but not all. Still I was not convinced beyond a doubt that this reflected a problem. I adopted a different strategy – to re-do the four model datasets already successfully completed. This took me back to square one. I selected data from the CMIP database, downloaded the needed data files, requested their transfer to my email address, and awaited their arrival. If I could repeat the analysis on these data with success, I would deduce the problem was with me, not with the data.
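A quick sanity check of this kind can also be automated before any processing begins. The sketch below, using the netCDF4 and numpy libraries, flags suspiciously large counts of exact zeros or fill values in a single variable; the file name and variable name are placeholders for illustration, not the actual CMIP3 files in question.

# Minimal sketch: scan one variable in a netCDF file for suspicious zeros or fill values.
# "model_output.nc" and "tas" are placeholder names, not the actual CMIP3 files.
import numpy as np
from netCDF4 import Dataset

with Dataset("model_output.nc") as ds:
    data = ds.variables["tas"][:]      # returns a masked array if _FillValue/missing_value is set
    if np.ma.isMaskedArray(data):
        n_masked = int(np.ma.count_masked(data))
        values = np.ma.filled(data, np.nan)   # assumes a floating-point variable
    else:
        n_masked = 0
        values = np.asarray(data)
    n_zero = int(np.sum(values == 0.0))

print(f"total values: {values.size}, masked/fill: {n_masked}, exact zeros: {n_zero}")
if n_zero > 0.01 * values.size:
    print("Warning: more than 1% exact zeros -- inspect this file before using it.")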

The emailed response from CMIP arrived. Instead of data files, I received messages that these files were unavailable. This was nothing new. I had seen this message on some requested data files before (before mid-June). At that time, I simply re-directed to a different model run, not suspecting an evolving problem. But this time was different. I had downloaded and processed these data successfully in the past. Now I was told they were unavailable. This made no sense.

I contacted the CMIP organization. They must have just discovered the problem themselves. Within a day, the site was frozen. Users of the site were notified that all data downloaded since June 15th were unreliable. (I had downloaded the problem-ridden data on the 16th.) The message on the CMIP site has since been updated to include a projected resolution date of mid-July. The lesson here – confidence should be embraced with caution.


Filed under Climate Models

My Comments On “NSB/NSF Seeks Input on Proposed Merit Review Criteria Revision and Principles”

I have reproduced below my comments to the National Science Board  and National Science Foundation on the merit review process.

 

I am writing this e-mail to comment on

“NSB/NSF Seeks Input on Proposed Merit Review Criteria Revision and Principles”

First, my research credentials are summarized at

http://cires.colorado.edu/science/groups/pielke/

http://cires.colorado.edu/science/groups/pielke/people/pielke.html

I have had quite negative experiences with NSF with respect to climate proposals in recent years. I have posted several weblog discussions on my experience, as summarized in

http://pielkeclimatesci.wordpress.com/2011/06/27/nsbnsf-seeks-input-on-proposed-merit-review-criteria-revision-and-principles/

Based on my experience, I have concluded that the review process lacks sufficient accountability. To remedy this deficiency, I have the following recommendations:

-Guarantee that the review process be completed within 6 months (my most recent land use and climate proposal was not even sent out for review until 10 months after its receipt!).

-Retain all e-mail communications indefinitely (NSF staff can routinely delete e-mails, such that there is no record to check their accountability).

-Require external independent assessments, by a subset of scientists who are outside of the NSF, of the reviews and manager decisions, including names of referees. This review should be on all accepted and rejected proposals.

Information on my experiences with NSF climate research is provided in these weblog posts

My Experiences With A Lack Of Proper Diligence And Bias In The NSF Review Process For Climate Proposals
http://pielkeclimatesci.wordpress.com/2011/05/26/my-experiences-with-a-lack-of-proper-diligence-in-the-nsf-review-process-for-climate-proposals/

Is The NSF Funding Untestable Climate Predictions – My Comments On A $6 Million Grant To Fund A Center For Robust Decision-Making On Climate And Energy Policy
http://pielkeclimatesci.wordpress.com/2011/03/02/is-the-nsf-funding-untestable-climate-predictions-my-comments-on-a-6-million-grant-to-fund-a-center-for-robust-decision%e2%80%93making-on-climate-and-energy-policy/

The National Science Foundation Funds Multi-Decadal Climate Predictions Without An Ability To Verify Their Skill
http://pielkeclimatesci.wordpress.com/2010/10/21/the-national-science-foundation-funds-multi-decadal-climate-predictions-without-an-ability-to-verify-their-skill/

NSF Decision On Our Request For Reconsideration Of A Rejected NSF Proposal On The Role Of Land Use Change In The Climate System
http://pielkeclimatesci.wordpress.com/2010/06/11/nsf-decision-on-our-request-for-reconsideration-of-a-rejected-nsf-proposal/

Is The NSF Funding Process Working Correctly?
http://pielkeclimatesci.wordpress.com/2010/05/18/is-the-nsf-funding-process-working-correctly/

I would be glad to elaborate further on the lack of diligence and bias by the NSF review process with respect to climate research.

Sincerely

Roger A. Pielke Sr.


Filed under The Review Process

NSB/NSF Seeks Input on Proposed Merit Review Criteria Revision and Principles

The National Science Board has sent out a notice requesting input on the NSF review process. Their request is reproduced in this post. One glaring issue that is missing is accountability. I discussed this subject in my posts

My Experiences With A Lack Of Proper Diligence And Bias In The NSF Review Process For Climate Proposals

Is The NSF Funding Untestable Climate Predictions – My Comments On A $6 Million Grant To Fund A Center For Robust Decision–Making On Climate And Energy Policy

The National Science Foundation Funds Multi-Decadal Climate Predictions Without An Ability To Verify Their Skill

NSF Decision On Our Request For Reconsideration Of A Rejected NSF Proposal On The Role Of Land Use Change In The Climate System

Is The NSF Funding Process Working Correctly?

I have made the following recommendations:

  • Guarantee that the review process be completed within 6 months (my most recent land use and climate proposal was not even sent out for review until 10 months after its receipt!)
  • Retain all e-mail communications indefinitely (NSF staff can routinely delete e-mails, such that there is no record to check their accountability)
  • Require external independent assessments, by a subset of scientists who are outside of the NSF, of the reviews and manager decisions, including names of referees. This review should be on all accepted and rejected proposals (as documented in the NSF letter at the end of this post, since they were so late sending out for review, they simply relied on the referees of an earlier (rejected) proposal; this is laziness at best).

The National Science Board request follows. I will be submitting my comments, based on the above text, and urge colleagues who read my weblog to do likewise.

NSB-11-42
NSB/NSF Seeks Input on Proposed Merit Review Criteria Revision and Principles

National Science Board
June 14, 2011

Over the past year, the National Science Board (NSB) has been conducting a review of the National Science Foundation’s merit review criteria (Intellectual Merit and Broader Impacts). At the Board’s May 2011 meeting, the NSB Task Force on Merit Review proposed a revision of the two merit review criteria, clarifying their intent and how they are to be used in the review process. In addition, the Task Force identified a set of important underlying principles upon which the merit review criteria should be based. We now seek your input on the proposed revision and principles.

The Task Force looked at several sources of data for information about how the criteria are being interpreted and used by the NSF community, including an analysis of over 190 reports from Committees of Visitors. The Task Force also reached out to a wide range of stakeholders, both inside and outside of NSF, to understand their perspectives on the current criteria. Members of NSF’s senior leadership and representatives of a small set of diverse institutions were interviewed; surveys about the criteria were administered to NSF’s program officers, division directors, and advisory committee members and to a sample of 8,000 of NSF’s Principal Investigators (PIs) and reviewers; and the NSF community at large was invited to provide comments and suggestions for improvements through the NSF web site ( http://www.nsf.gov/nsb/publications/2011/01_19_mrtf.jsp). The stakeholder responses were very robust—all told, the Task Force considered input from over 5,100 individuals.

One of the most striking observations that emerged from the data analyses was the consistency of the results, regardless of the perspective. All of the stakeholder groups identified similar issues, and often offered similar suggestions for improvements. It became clear that the two review criteria of Intellectual Merit and Broader Impacts are in fact the right criteria for evaluating NSF proposals, but that revisions are needed to clarify the intent of the criteria, and to highlight the connection to NSF’s core principles.

The two draft revised criteria, and the principles upon which they are based, are below. Comments are being collected through July 14—we invite you to send comments to meritreview@nsf.gov. It is expected that NSF will develop specific guidance for PIs, reviewers, and NSF staff on the use of these criteria after the drafts are finalized. Your comments will help inform development of that guidance, and other supporting documents such as FAQs.

The Foundation is the primary Federal agency supporting research at the frontiers of knowledge, across all fields of science and engineering (S&E) and at all levels of S&E education. Its mission, vision and goals are designed to maintain and strengthen the vitality of the U.S. science and engineering enterprise and to ensure that Americans benefit fully from the products of the science, engineering and education activities that NSF supports. The merit review process is at the heart of NSF’s mission, and the merit review criteria form the critical base for that process.

We do hope that you will share your thoughts with us. Thank you for your participation.

Ray M. Bowen
Chairman, National Science Board
Subra Suresh
Director, National Science Foundation


Merit Review Principles and Criteria
The identification and description of the merit review criteria are firmly grounded in the following principles:

  1. All NSF projects should be of the highest intellectual merit with the potential to advance the frontiers of knowledge.
  2. Collectively, NSF projects should help to advance a broad set of important national goals, including:
    • Increased economic competitiveness of the United States.
    • Development of a globally competitive STEM workforce.
    • Increased participation of women, persons with disabilities, and underrepresented minorities in STEM.
    • Increased partnerships between academia and industry.
    • Improved pre-K–12 STEM education and teacher development.
    • Improved undergraduate STEM education.
    • Increased public scientific literacy and public engagement with science and technology.
    • Increased national security.
    • Enhanced infrastructure for research and education, including facilities, instrumentation, networks and partnerships.
  3. Broader impacts may be achieved through the research itself, through activities that are directly related to specific research projects, or through activities that are supported by the project but ancillary to the research. All are valuable approaches for advancing important national goals.
  4. Ongoing application of these criteria should be subject to appropriate assessment developed using reasonable metrics over a period of time.

Intellectual merit of the proposed activity

The goal of this review criterion is to assess the degree to which the proposed activities will advance the frontiers of knowledge. Elements to consider in the review are:

  1. What role does the proposed activity play in advancing knowledge and understanding within its own field or across different fields?
  2. To what extent does the proposed activity suggest and explore creative, original, or potentially transformative concepts?
  3. How well conceived and organized is the proposed activity?
  4. How well qualified is the individual or team to conduct the proposed research?
  5. Is there sufficient access to resources?

Broader impacts of the proposed activity

The purpose of this review criterion is to ensure the consideration of how the proposed project advances a national goal(s). Elements to consider in the review are:

  1. Which national goal (or goals) is (or are) addressed in this proposal? Has the PI presented a compelling description of how the project or the PI will advance that goal(s)?
  2. Is there a well-reasoned plan for the proposed activities, including, if appropriate, department-level or institutional engagement?
  3. Is the rationale for choosing the approach well-justified? Have any innovations been incorporated?
  4. How well qualified is the individual, team, or institution to carry out the proposed broader impacts activities?
  5. Are there adequate resources available to the PI or institution to carry out the proposed activities?


Filed under Climate Proposal Review Process

Uncertainty in Utah: Part 3 on The Hydrologic Model Data Set by Randall P. Julander

Uncertainty in Utah Hydrologic Data: Part 3 – The Hydrologic Model Data Set

A three-part series that examines some of the systematic bias in Snow Course, SNOTEL, and Streamflow Data and in Hydrologic Models

Randall P. Julander, Snow Survey, NRCS, USDA

Abstract

Hydrologic data collection networks – and, for that matter, all data collection networks – were designed, installed, and operated and maintained to solve someone’s problem. From the selection of sensors to the site locations, all details of any network were designed to accomplish the purpose of that network. For example, the SNOTEL system was designed for water supply forecasting, and while it is useful for avalanche forecasting, SNOTEL sites are in the worst locations for the data avalanche forecasters want, such as wind loading, wind speed/direction, and snow redistribution. All data collection networks have bias, both random and systematic. Use of any data from any network for any purpose, including the intended one but especially for any other purpose, should include an evaluation of data bias as the first step in quality research. Research that links a specific observation or change to a relational cause could be severely compromised if the data set has unaccounted systematic bias. Many recent papers utilizing Utah hydrologic data have not identified or removed systematic bias from the data. The implicit assumption is one of data stationarity – that all things except climate are constant through time, and thus an observed change in any variable can be directly attributed to climate change. Watersheds can be characterized as living entities that change fluidly through time. Streamflow is the last check paid in the water balance – it is the residual after all other bills have been paid, such as transpiration, evaporation, sublimation and all other losses. Water yield from any given watershed can be impacted by vegetation change, watershed management such as grazing, forestry practices, mining, diversions, dams, and a host of related factors. In order to isolate and quantify changes in water yield due to climate change, these other factors must also be identified and quantified. Operational hydrologic models for the most part grossly simplify the complexities of watershed response, due to the lack of data. For the most part they operate on some snow and precipitation data as water balance inputs, temperature as the sole energy input, gross estimations of watershed losses mostly represented by a generic rule curve, and streamflow as an output to achieve a mass balance. Temperature is not the main energy driver in snowmelt; shortwave solar energy is. Hydrologic models using temperature as the sole energy input can overestimate the impacts of warming.

Hydrologic Models

Operational hydrologic models on the whole are a very simplistic lot. They represent the huge complexities of watershed processes in a few lines of code by averaging or lumping inputs such as precipitation and temperature and by defining a few relatively ‘homogeneous’ areas of supposedly similar characteristics. These models calibrate against systematically biased streamflow, snowpack, temperature and precipitation input data – and the adage applies: garbage in, garbage out, or rather biased data in, bias continues in output. Beyond that, many of these models have been simplified to the point where they may not have the ability to accurately quantify climate change outside the normal range of calibration.

 

This figure represents the workhorse of hydrologic models, the Sacramento Model. It is a basic ‘tank’-based model in which ‘tanks’ of various sizes hold water from inputs such as snowmelt and precipitation and then release it to streamflow. The basic tanks are the upper zone and the lower zone, with the lower zone divided into two separate tanks. The water level in each tank determines the total outflow to the stream; in just a few lines of code – essentially adding up the outflow of each tank – we get streamflow for a given time step. Precipitation, or snowmelt derived from a simple mean basin temperature, provides input to the surface tank, which in turn provides input to the lower tanks. The energy input to the snowpack portion of the model is air temperature. In an operational context, air temperature is the most widely used variable because it is normally the only data available and it is highly correlated with total energy input over the normal range of model calibration. Some models have deliberately chosen the simplest possible inputs and outputs so as to be useful over a wide range of areas, such as the Snowmelt Runoff Model of Martinec and Rango.
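To make the ‘tank’ idea concrete, here is a minimal two-tank sketch in Python. It is a generic illustration of the concept only – not the Sacramento Model code – and every parameter value is an arbitrary assumption.

# Minimal two-tank conceptual runoff model (illustrative only; not the Sacramento Model).
# Each tank drains in proportion to its storage; the upper tank also percolates downward.

def run_tank_model(inputs_mm, k_upper=0.3, k_lower=0.05, k_perc=0.1):
    """inputs_mm: daily rain + snowmelt (mm). Returns simulated daily streamflow (mm)."""
    upper, lower = 0.0, 0.0          # storage in the two tanks (mm)
    flows = []
    for water_in in inputs_mm:
        upper += water_in
        perc = k_perc * upper        # percolation from the upper to the lower tank
        q_upper = k_upper * upper    # quick runoff from the upper tank
        q_lower = k_lower * lower    # slow baseflow from the lower tank
        upper -= perc + q_upper
        lower += perc - q_lower
        flows.append(q_upper + q_lower)
    return flows

# Example: a 10-day melt pulse of 20 mm/day followed by 10 dry days
print(run_tank_model([20.0] * 10 + [0.0] * 10))

Adding up the outflow of each tank at each time step is, in essence, the entire streamflow calculation – which is the point being made about how simple these operational models are.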

              Snowmelt Runoff Model Structure (SRM)

Each day, the water produced from snowmelt and from rainfall is computed, superimposed on the calculated recession flow and transformed into daily discharge from the basin according to Equation (1):

Q_{n+1} = [c_{S,n} · a_n · (T_n + ΔT_n) · S_n + c_{R,n} · P_n] · (A · 10000 / 86400) · (1 − k_{n+1}) + Q_n · k_{n+1}     (1)

where: Q = average daily discharge [m3s-1]

c = runoff coefficient expressing the losses as a ratio (runoff/precipitation), with cS referring to snowmelt and cR to rain

a = degree-day factor [cm oC-1d-1] indicating the snowmelt depth resulting from 1 degree-day

T = number of degree-days [oC d]

ΔT = the adjustment by temperature lapse rate when extrapolating the temperature from the station to the average hypsometric elevation of the basin or zone [oC d]

S = ratio of the snow covered area to the total area

P = precipitation contributing to runoff [cm]. A preselected threshold temperature, TCRIT, determines whether this contribution is rainfall and immediate. If precipitation is determined by TCRIT to be new snow, it is kept on storage over the hitherto snow free area until melting conditions occur.

A = area of the basin or zone [km2]

This is the whole SRM hydrologic model – a synopsis of all the complexities of the watershed summarized in one short equation. It is a great model for its intended purpose. The energy portion of this model consists of a simple degree-day factor with an adjusting factor – that is to say, if the average daily temperature is above zero by some amount (modified by the adjustment factor), melt occurs and is processed through the model. The greater the temperature, the more melt occurs. So how and why do these very simple models work? Because streamflow itself is the result of many processes across the watershed that tend to blend and average over time and space. As long as the relationships between all processes remain relatively constant, the models do a good job. However, throw one factor into an anomalous condition – say soil moisture – and model performance tends to degrade quickly.
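As a sketch of how little code Equation (1) requires, here is a direct implementation of a single SRM time step. The parameter values in the example call are arbitrary placeholders for illustration, not calibrated SRM values for any basin.

# One time step of the Snowmelt Runoff Model, Equation (1) above.
# Parameter values in the example call are arbitrary placeholders, not calibrated values.

def srm_step(Q_n, T_n, dT_n, S_n, P_n, c_S, c_R, a_n, k_next, area_km2):
    """Return Q_{n+1} [m3/s] from today's discharge, degree-days, snow cover and precipitation."""
    melt_cm = c_S * a_n * (T_n + dT_n) * S_n     # snowmelt contribution [cm]
    rain_cm = c_R * P_n                          # rainfall contribution [cm]
    # depth in cm over an area in km2 -> m3/s: multiply by 10000 and divide by 86400 seconds
    inflow = (melt_cm + rain_cm) * area_km2 * 10000.0 / 86400.0
    return inflow * (1.0 - k_next) + Q_n * k_next

# Example: 5 degree-days, 80% snow cover, no rain, 500 km2 basin
print(srm_step(Q_n=20.0, T_n=5.0, dT_n=0.5, S_n=0.8, P_n=0.0,
               c_S=0.6, c_R=0.7, a_n=0.45, k_next=0.9, area_km2=500.0))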

More technical hydrologic models utilize a broader spectrum of energy inputs to the snowpack such as solar radiation. These models more accurately represent energy input to snowmelt but are not normally used in an operational context because the data inputs are not available over wide geographic areas.

The energy balance to a snowpack can be summarized as follows:

Energy Balance

M = Qm/L

where Qm is the amount of heat available for the melt process, L is the latent heat of fusion, and M is melt

Qs = Qis - Qrs - Qgs + Qld - Qlu + Qh + Qe + Qv + Qg - Qm

Qs is the increase in internal energy storage in the pack

Qis is the incoming solar radiation

Qrs is incoming energy loss due to reflection

Qgs is energy transferred to soil

Qld is longwave energy to the pack

Qlu is longwave energy loss from the pack

Qh is the turbulent transfer of sensible heat from the air to the pack

Qe is the turbulent transfer of latent heat (evaporation or sublimation) to pack

Qv energy gained by vertical advective processes (rain, condensate, mass removal via evaporation/sublimation)

Qg is the supply of energy from conduction with the soil, percolation of melt, and vapor transfer

During continuous melt, a snowpack is isothermal at 0 degrees C and therefore Qs is assumed negligible as is Qgs… Therefore

Qm=Qn+Qh+Qe+Qv+Qg

where Qn is the net all wave radiation to the pack.
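For reference, converting the available melt energy Qm into a melt depth (M = Qm/L above) requires only the latent heat of fusion. The sketch below uses standard constants (latent heat of fusion of about 334 kJ/kg, water density 1000 kg/m3); the 150 W/m2 daily net flux is an assumed example value, not a measurement.

# Convert a net daily energy input to the snowpack into melt depth (M = Qm / L).
LATENT_HEAT_FUSION = 334_000.0   # J per kg of ice melted
WATER_DENSITY = 1000.0           # kg per m3

def melt_depth_mm(q_m_joules_per_m2):
    """Melt depth in mm of water equivalent produced by Qm [J/m2] at an isothermal pack."""
    melt_m = q_m_joules_per_m2 / (LATENT_HEAT_FUSION * WATER_DENSITY)
    return melt_m * 1000.0

# Example: an assumed average net flux of 150 W/m2 sustained over 24 hours
q_daily = 150.0 * 86400.0        # J/m2 over one day
print(f"{melt_depth_mm(q_daily):.0f} mm of melt per day")   # roughly 39 mm/day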

All of these processes in many hydrologic models are summarized in one variable – average air temperature for a given area. It is interesting to note that temperature works only on the surface of the snowpack, and that a snowpack, in order to melt, has to be isothermal from top to bottom, else melt from the surface refreezes in lower, colder pack layers. Snow is not mostly snow; it is mostly air. In cool continental areas such as Utah, snowpacks rarely exceed 50% density, which means that at their greatest density they are still 50% air. Due to this fact, snow tends to be an outstanding insulator. You cannot pound temperature into a snowpack. Solar radiation, on the other hand, can penetrate and convey energy deep into the pack, and it is this mechanism that conveys by far the most energy into snow. This fact can be easily observed every spring when snowpacks start to melt. Observe south-facing aspects in any location, from a backyard to the mountains, and see from the micro to the macro scale the direct influence solar radiation has relative to temperature.

Notice in this photo the patches of snow that have been solar sheltered, even though air temperature is very likely much the same over both the melted areas and the snow-covered areas in both time and space. South aspects melt first, north aspects last. Shaded areas hold snow longer than open areas.

 

This graph from Roger Bales expresses the energy input in watts for the Senator Beck Basin in Colorado. The black line is the net flux to the pack, or the total energy input from all sources. The red line is the net solar input and the blue line represents all sensible energy. The difference between the red line and the black line is all energy input to the pack combined, negative and positive, except net solar. From this, one can readily appreciate the relative influence each individual energy source has on snowmelt. This graph is from a dusty snow surface, so the net solar is likely greater than what would be expected from a normal snowpack. However, the contrast is stark – solar radiation is the driver of snowmelt. Air temperature is a long way back. This, of course, is information that has been known for many years, as illustrated in the following table from the Army Corps of Engineers’ 1960s lysimeter experiments at Thomas Creek.

Notice that shortwave radiation is the constant day-in, day-out energy provider; longwave, temperature, and other sources pop up here and there.

So, what is the implication for hydrologic models that use air temperature as the primary energy input? The relationship between solar radiation and temperature will change. For a 2 degree C rise in temperature, the model would see an energy input increase equivalent to moving the calendar forward several weeks to a month – a huge energy increase. The question becomes: what will the watershed actually see – will the snowpack see an energy increase of equivalent magnitude?

 

This chart for the Weber Basin of Utah illustrates the average May temperature for various elevations and what a plus 2 degree C increase would look like. This is what the hydrologic model will see. In order to ascertain what energy increase the watershed will actually see, we go back to the Bales graph – what the watershed will see is a more modest increase in the sensible heat line. Climate change won’t increase the hours in a day nor increase the intensity of solar radiation, so the main energy driver to the snowpack – solar – will stay close to the same, all other things being equal. So the total energy to the snowpack will increase modestly, but what the hydrologic model has seen is a much larger proportional increase. Thus, if this factor is not accounted for, the model is likely to overestimate the impacts that increased temperatures may have on snowpacks. Hydrologic models work well within their calibrated range because temperature is closely related to solar energy. With climate warming, this relationship may not be the stable input it once was, and models may need to be adjusted accordingly. Research needs to move in the direction of total energy input to the watershed instead of temperature-based modeling. Then we can get a much clearer picture of the impacts climate change may have on water resources. Recent research by Painter et al. regarding changes in snow surface albedo and accelerated runoff supports the solar versus temperature view of energy input to the pack: surface dust can accelerate snowmelt by as much as 3 weeks or more, whereas modest temperature increases would accelerate the melt by about a week.
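The point about what the model ‘sees’ can be illustrated with the degree-day formulation of Equation (1): a uniform +2 C shift raises modelled melt by a fixed a·2 cm per day regardless of how little the actual radiation balance changes. All numbers below are illustrative assumptions, not measurements from the Weber Basin or Senator Beck.

# Illustrative only: how a +2 C shift changes melt in a degree-day model,
# versus a crude energy-balance view in which solar input is unchanged and
# only the sensible-heat term responds to the warmer air.

DEGREE_DAY_FACTOR = 0.45          # cm of melt per degree-day (assumed)
BASE_DEGREE_DAYS = 4.0            # typical melt-season degree-days (assumed)

dd_base = DEGREE_DAY_FACTOR * BASE_DEGREE_DAYS
dd_warm = DEGREE_DAY_FACTOR * (BASE_DEGREE_DAYS + 2.0)
print(f"degree-day model melt: {dd_base:.2f} -> {dd_warm:.2f} cm/day "
      f"(+{100 * (dd_warm / dd_base - 1):.0f}%)")

# Crude energy split (assumed): 150 W/m2 solar, 30 W/m2 sensible heat,
# with sensible heat scaling roughly with the air-temperature excess above 0 C.
solar, sensible = 150.0, 30.0
sensible_warm = sensible * (BASE_DEGREE_DAYS + 2.0) / BASE_DEGREE_DAYS
total_base, total_warm = solar + sensible, solar + sensible_warm
print(f"energy-balance view : {total_base:.0f} -> {total_warm:.0f} W/m2 "
      f"(+{100 * (total_warm / total_base - 1):.0f}%)")

Under these assumed numbers the degree-day model responds with roughly a 50% increase in melt, while the total energy reaching the pack increases by less than 10% – the mismatch described in the paragraph above.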

Evapotranspiration and losses to groundwater

Operational hydrologic models incorporate evapotranspiration mostly as a wild guess. I say that because there is little to no data to support the ‘rule curve’ the models use to arrive at these figures. A rule curve is normally developed through the model calibration process. The general shape of the hydrograph is developed via precipitation/snow inputs, and then the mass balance is achieved through the subtraction of ET so that the simulated and observed curves fit and there is no water left over. As an aside, some water may be tossed into a deeper groundwater tank to make things somehow more realistic. There are some pan evaporation data here and there, sporadic in time and space, and no transpiration data. So how are these curves derived? Mostly from a mathematical calibration fit – one models the streamflow first with precipitation/snow input to get the desired shape of the hydrograph, and then one gets the final mass balance correct by increasing or decreasing the ET curve and the losses to deep groundwater. The bottom line is that these parameters have no basis in reality and are mathematically derived to achieve the correct mass balance. We have no clue what either one actually is. This may seem like a minor problem until we see what part of the hydrologic cycle they comprise.

 

In this chart we have a gross analysis of total watershed snowpack and annual streamflow. Higher elevation sites such as the Weber at Oakley have a much higher per-acre water yield than do lower elevation watersheds such as the Sevier River at Hatch. However, in many cases the watershed losses are far greater than the snowpack runoff that actually makes it past a streamgage – typical of many western watersheds, where potential ET often exceeds annual precipitation. Streamflow, again, is a residual: the water that is left over after all watershed bills are paid. We most often model the small part of the water balance and grossly estimate ET and groundwater losses. At Lake Powell, between 5 and 15% of the precipitation/snow model input shows up as flow. Small changes to the watershed loss rates, or to our assumptions about these loss rates, can have huge implications for model results. The general assumption in a warming world is that these watershed losses will increase. Higher temperatures lead to higher evaporative losses, which are the small part of the ET function – but will transpiration increase? This is a question that needs more investigation for several reasons: 1) higher CO2 can lead to more efficient water use by many plants, including trees, and 10% to 20% less transpiration could be a significant offsetting factor in water yield; and 2) watershed vegetative response to less water, either through natural mechanisms (massive forest mortality such as we currently see) or mechanical means, could also alter the total loss to ET. The assumptions made on the energy input side of the model, together with the assumptions about watershed loss rates, are likely the key drivers of model output, and both have substantial problems in quantification.
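The leverage of the loss terms is easy to see numerically: if only 5-15% of the input reaches the stream, a small relative error in the estimated losses becomes a very large relative error in simulated flow. The sketch below uses illustrative round numbers consistent with the Lake Powell percentages quoted above; it is not output from any actual model.

# Streamflow as a residual: flow = input - losses (illustrative numbers only).

basin_input = 100.0                    # precipitation + snowmelt, arbitrary units
for runoff_fraction in (0.05, 0.15):
    losses = basin_input * (1.0 - runoff_fraction)
    flow = basin_input - losses
    # Suppose the estimated losses are off by just 5% of their own value:
    flow_biased = basin_input - losses * 1.05
    change = 100.0 * (flow_biased - flow) / flow
    print(f"runoff fraction {runoff_fraction:.0%}: a 5% error in losses "
          f"changes simulated flow by {change:.0f}%")

With a 5% runoff fraction, a 5% error in the loss estimate nearly wipes out the simulated flow; even at 15% runoff the flow changes by roughly a quarter.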

 

Is average temperature a good metric to assess potential snow and streamflow changes?

Seeing that solar radiation is the primary energy driver of snow ablation, we then make the observation that in winter the northern latitudes have very little of that commodity. Without the primary driver of snowmelt, solar radiation, snowpacks are unlikely to experience widespread melt. We then ask the question – is average temperature a good indicator of what might happen? There is at least a 50%-80% probability that any given storm on any given winter day will occur during the coldest part of the day – i.e. nighttime, early morning or late evening. The further north a given point is in latitude, the higher that probability. Once snowpack is on the ground in the mountains of Utah and similar high-elevation, cool continental climate states, sensible heat exchange is not likely to melt it off. Thus minimum temperature, or some weighted combination below the average and perhaps a bit above the minimum temperature, might be a better metric.

 

In this graph, minimum average monthly temperatures for two SNOTEL sites, plus a 2 degree increase, are displayed. Little Grassy is the most southern, low-elevation (6,100 ft) site we have. It currently has a low snowpack (6 inches of SWE or less) in any given year and is melted out by April 1. Steel Creek Park is at 10,200 feet on the north slope of the Uinta Mountains in northern Utah – a typical cold site. As you can see, a 2 degree increase in temperature at Little Grassy could potentially shorten the snow accumulation/ablation season by a week or so on either end. This is an area which contributes to streamflow only in the highest snowpack years and, as such, a 2 week decrease in potential snow accumulation may be difficult to detect given the huge interannual variability in SWE. A two degree rise in temperature at Steel Creek Park is meaningless – it would have little to no impact on snow accumulation or ablation. Thus most/much/some of Utah and similar areas west-wide may have some temperature to ‘give’ in a climate warming scenario prior to having significant impacts on water resources. Supporting evidence for this concept comes from the observation that estimates of temperature increases for Utah are about 2 degrees or so, and we have, as yet, not been able to document declines in SWE or its pattern of accumulation due to that increase. A question for further research would be: at what level of temperature increase could we anticipate temperature impacting snowpacks?

More rain, less snow

In the west, where snow is referred to as white gold, the potential for less snow has huge financial implications, from agriculture to recreation. The main reason many streams in the west flow at all is that infiltration capacity and deeper soil moisture storage are exceeded by snowmelt of 1 to 2 inches per day over a 2 to 12 week period, keeping soils saturated and excess water flowing to the stream. In the cool continental areas of the west, it can easily be demonstrated that 60%, 70%, 80% and in some cases more than 95% of all streamflow originates as snow. Summer precipitation has to be of unusually large extent and magnitude to have any impact on streamflow at all, and typically when it does, the stream pops up for a day and immediately returns to base flow levels. So more rain, less snow has a very ominous tone, and over a long period of time, if snowpacks indeed dwindle to near nothing, very serious impacts could occur. In the short run, counterintuitively, rain on snow may actually increase flows. Let’s examine how this might occur. Currently, the date snowpacks begin is hugely variable and dependent on elevation; it can range from mid September to as late as early December. If rain occurs in the fall months, it is typically held by the soil through the winter months and contributes to spring runoff via soil saturation. Soils that are dry typically take more of an existing snowpack to bring them to saturation prior to generating significant runoff. Soils that are moist to saturated take far less snowmelt to reach saturation and are far more efficient in producing runoff.

 

In this chart of Parrish Creek along the Wasatch Front in Utah we see the relationship between soil moisture (red), snowpack (blue) and streamflow (black). In the first 3 years of daily data, we see that peak SWE was identical in all years, but soil moisture was very low the first year, very high the second year and average the third year, and the corresponding streamflows from identical snowpacks were low, high and average. In the fourth year snowpacks were low, as was soil moisture, and the resulting streamflow was abysmal. In the fifth year, snowpacks were high but soil moisture was very low and streamflow was mediocre, a major portion of the snowpack having been lost to bringing soils to saturation. Soil moisture can have a huge impact on runoff. Thus fall precipitation on the watershed as rain can be a very beneficial event – some of course is lost to evapotranspiration, but that would be the most significant loss.

 

In this chart of the Bear River’s soil moisture we see exactly that case – large rain events in October brought soil moisture from 30% saturation up to 65%, where it remained through the winter months until spring snowmelt. This is a very positive situation that increases snowmelt runoff efficiency. Rain in the fall months is not necessarily a negative. Now let’s look at rain in the spring time. These are typically rain-on-snow kinds of events and, in fact, from a water supply viewpoint this is also very positive. If, for example, a watershed is melting 2 inches of SWE per day and watershed losses are 1 inch per day, then we have 1 inch of water available for runoff. Now, say we have this same scenario and a 1 inch rain-on-snow event. Then we have 2 inches of SWE melt plus 1 inch of rainfall for a total input of 3 inches, and the same loss rate of 1 inch per day yields 2 inches of runoff – double the runoff for that particular day. Twice the runoff means more water in reservoirs. Where this eventually breaks down is when the areal extent of snowpack across the watershed becomes so small toward the end of the melt season that the total water yield becomes inconsequential. If the snowmelt season shortens due to temperature increases by only about a week, then more rain, less snow may not be a huge factor in water yield. When this does become a significant problem, i.e. when the snow season is shortened by ‘X’ weeks, should be a subject for further research. For the short term, 50 years or so, water yields in the Colorado may not see significant impacts from more rain/less snow, and watershed responses will likely be muddled and confused by vegetation mortality (Mountain Pine Beetle Activity May Impact Snow Accumulation And Melt, Pugh). For the short term, total precipitation during the snow accumulation/ablation season is likely a much more relevant variable than temperature. Small increases in precipitation at the higher elevations may well offset any losses in water yield from the current marginal water-yield-producing areas at lower elevations. A decrease in this precipitation in combination with temperature increases would be the worst scenario.
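The rain-on-snow arithmetic in the paragraph above can be restated as a one-line daily water balance; the numbers below are simply the illustrative values from the text.

# Daily water balance for the rain-on-snow example above (numbers from the text).
def daily_runoff(snowmelt_in, rain_in, losses_in):
    """Runoff (inches/day) = snowmelt + rain - watershed losses, floored at zero."""
    return max(snowmelt_in + rain_in - losses_in, 0.0)

print(daily_runoff(snowmelt_in=2.0, rain_in=0.0, losses_in=1.0))  # melt only -> 1.0 inch
print(daily_runoff(snowmelt_in=2.0, rain_in=1.0, losses_in=1.0))  # rain on snow -> 2.0 inches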

SWE to Precipitation ratios

To express this concept of more rain, less snow, the SWE to PCP ratio was conceived as a metric to quantify the observed change. When developing a metric that purports to be related to some variable, it is important to make sure, mathematically and physically, that the metric does what it was intended to do and that other factors do not unduly influence the outcome. Simply said, the SWE to PCP metric was intended to show how increased temperatures have increased rain events and decreased snow accumulation. This metric should be primarily temperature related, with only minor influences from other factors. The fact is that this metric is riddled with factors other than temperature that may preclude meaningful results. In reality it is a better indicator of drought than of temperature.

http://www.ut.nrcs.usda.gov/snow/siteinfo/data_bias/Swe%20to%20precip%20ratios%20in%20utah.pdf

 

 

In these two graphs one can see that the SWE to PCP ratio is a function of precipitation magnitude and, as such, is influenced more by drought than by temperature. The physical and mathematical reasons are detailed in the paper ‘Characteristics of SWE to PCP Ratios in Utah’ available at the link above.
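To illustrate one way precipitation magnitude can leak into this metric, the sketch below computes a SWE-to-precipitation ratio for a hypothetical wet year and drought year with identical temperatures. The fixed mid-winter loss and the 90% snow fraction are assumptions chosen only for illustration; they are not values from the linked paper.

# Illustrative only: SWE/PCP ratio for two hypothetical seasons with the same temperatures.
# Ratio = April 1 SWE divided by accumulated October-March precipitation.

def swe_to_pcp_ratio(april1_swe_in, accumulated_pcp_in):
    return april1_swe_in / accumulated_pcp_in

SNOW_FRACTION = 0.9      # assumed fraction of winter precipitation falling as snow
FIXED_LOSS_IN = 2.0      # assumed mid-winter melt/sublimation loss, inches of SWE

for label, pcp in (("wet year", 20.0), ("drought year", 8.0)):
    swe = SNOW_FRACTION * pcp - FIXED_LOSS_IN
    print(f"{label}: SWE/PCP = {swe_to_pcp_ratio(swe, pcp):.2f}")
# wet year ~0.80, drought year ~0.65: the ratio falls with no temperature change at all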

Conclusions

Many hydrologic models have serious limitations, on both the energy input side and the mass balance side of watershed yield (with respect to ET and groundwater losses), that can influence estimates of the effects of temperature increases. It is possible that many systematically overestimate the impacts of temperature increases on water yields. Systematic bias in the data used by models can also predispose an outcome. Which model is used in these kinds of studies matters – snowmelt models that incorporate energy balance components such as solar radiation, in addition to temperature, likely produce more realistic results than temperature-based models. Assumptions about ET and groundwater losses can have significant impacts on results. Metrics developed to quantify specific variables or phenomena need to be rigorously checked in multiple contexts to ensure they are not influenced by other factors.


Filed under Climate Change Metrics, Vulnerability Paradigm