Monthly Archives: May 2011

Recommended Weblog Post By John Nielsen-Gammon On One Aspect Of Fall Et Al 2011

There is an important discussion of the significance of one of our findings (on the daily temperature range) in the paper

 Fall, S., A. Watts, J. Nielsen-Gammon, E. Jones, D. Niyogi, J. Christy, and R.A. Pielke Sr., 2011: Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends. J. Geophys. Res., in press. Copyright (2011) American Geophysical Union.

in a post on Climate Abyss by John Nielsen-Gammon. His post is titled

Fall et al. 2011: What We Learned About the Climate

and is recommended reading.

Comments Off

Filed under Climate Change Metrics

Important New Report “Parallel Air Temperature Measurements At The KNMI Observatory In De Bilt (the Netherlands) May 2003 – June 2005” By Theo Brandsma

There is an important, much-needed, addition to the scientific literature which adds to our conclusions in

Fall, S., A. Watts, J. Nielsen-Gammon, E. Jones, D. Niyogi, J. Christy, and R.A. Pielke Sr., 2011: Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends. J. Geophys. Res., in press. Copyright (2011) American Geophysical Union.

that the siting of climate reference stations does matter in terms of long-term temperature trends and anomalies. This new report is

Parallel air temperature measurements at the KNMI observatory in De Bilt (the Netherlands) May 2003 – June 2005

The summary reads (emphasis added)

Air temperature measurements at the KNMI-observatory in De Bilt are important mainly because the observatory has a long and relatively homogeneous record and because its observations often serve as an indicator of changes in climate for the Netherlands as a whole. Among others, relocations of the temperature measurement sites and (gradual) changes in surroundings influence the measurements. To improve the homogeneity of the long-term temperature record and to study the representativeness of the current measurements, a parallel experiment was carried out at the observatory of KNMI in De Bilt from May 2003 through June 2005.

Five sites at the KNMI-observatory, including the (at that time) operational site WMO 06 260 (further denoted as DB260), were equipped with identical (operational) instruments for measuring temperature and wind speed at a height of 1.5 m (see for an overview of the sites Figure 1.1). The instruments were calibrated each half-year and the calibrations curves were used to correct the data to minimize instrumental errors. With the measurements at the Test4 site (operational site since 25 September 2008) as a reference, the temperature differences between the sites were studied in connection with the local wind speed and its differences and operationally measured weather variables at the KNMI-observatory. In September/October 2004 the area west of the operational site DB260 was renovated and made into a landscaped park. From 1999 onwards that area slowly transformed from grassland into a neglected area with bushes (wasteland). The parallel measurements provided the opportunity to study the impact of this new inhomogeneity in detail.

The results show that changes in surroundings complicate or impede the use of present-day parallel measurements for correcting for site changes in the past. For instance, the (vertical) growth of the bushes in the wasteland area west of DB260, caused increasing temperature differences between the operational site DB260 and four neighboring stations. The effects were most clearly visible in the dry summer of 2003, when the mean monthly maximum temperatures at DB260 were up to about 0.4C larger than those at the reference Test4. This increase was more than counteracted by a decrease in the mean monthly minimum temperature of up to 0.6C. After the renovation of the wasteland area, the temperature differences between DB260 and Test4 became close to zero (< 0.1C). The comparison of DB260 with four neighboring stations showed that the renovation restored to some extent the temperatures of the old situation of before the year 1999. However, the land use west of the DB260 has been changed permanently (no longer grassland as in the period 1951-1999, but landscaped park land with ponds). Therefore, operational measurements at DB260 became problematic and KNMI decided to move the operational site to the Test4 site in September 2008. The Test4 site is the most open of five sites studied in the report.

The results increase our understanding of inter-site temperature differences. One of the most important causes of these differences is the difference in sheltering between sites. Sheltering stimulates the build up of a night-time stable boundary layer, decreases the outgoing long-wave radiation, causes a screen to be in the shade in the hours just after sunrise and before sunset, and increases the radiation error of screens due to decreased natural ventilation. Depending on the degree and nature of the sheltering, the net effect of sheltering on temperatures may be a temperature increase or decrease. DB260 is a sheltered site where the net effect is a decrease of the mean temperature (before the renovation). The former historical site Test1 is an example of a site where the net effect is a temperature increase. The monthly mean minimum temperature at Test1 is up to 1.2C higher than the reference and the maximum temperature is up to 0.5C higher than that at Test4. The mean temperature at Test1 is, however, only slightly higher than the mean at Test4. This is caused by the relatively low temperatures in the hours after sunrise and before sunset, when the screen at Test1 is in the shade. Both the Test1 and Test4 location are probably not affected by the renovation.

The renovation of the wasteland area causes not only a shift of the location of the pdf of the daily temperature differences but also a change in the shape. This means that for the homogenization of daily temperature series it is not sufficient to correct only the mean.

We showed that the magnitude of the inter-site temperature differences strongly depends on wind speed and cloudiness. In general the temperature differences increase with decreasing wind speed and decreasing cloudiness. Site changes directly affect wind speed because they are usually accompanied by changes in sheltering. Some effects, like the built up and (partly) breaking down of the stable boundary layer near the surface, are highly non-linear processes and therefore difficult to model. The fact that these processes are mostly active at low wind speeds (< 1.0 m/s at 1.5 m) further complicates the modeling. Regular cup anemometers are not really suited to measure low wind speeds. Operationally these anemometers have a threshold wind speed of about 0.5 m/s and this threshold wind speed often increases with the time during which the anemometer is in the field. In addition, anemometers are mostly situated at a height of 10 m. During night-time stable conditions the correlation between wind speed at 10 m and wind speed at screen height is weak. This complicates the homogenization of daily temperature series.
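
As an illustrative aside (not from the report), the kind of stratification described above, inter-site temperature differences binned by local wind speed, can be sketched in a few lines. The data and column names here are synthetic placeholders, not the actual KNMI parallel measurements:

```python
# Sketch only: synthetic stand-in for the KNMI parallel measurements.
# Column names (wind_1p5m, t_db260_minus_test4) are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000                                              # pretend 10-minute samples
wind = rng.gamma(shape=2.0, scale=1.0, size=n)          # wind speed at 1.5 m (m/s)
# Larger inter-site differences at low wind speed, as the report describes
diff = rng.normal(loc=0.0, scale=0.6 / (1.0 + wind), size=n)

df = pd.DataFrame({"wind_1p5m": wind, "t_db260_minus_test4": diff})

# Bin the temperature difference by wind speed class and summarize
bins = [0, 0.5, 1.0, 2.0, 4.0, np.inf]
df["wind_class"] = pd.cut(df["wind_1p5m"], bins)
print(df.groupby("wind_class", observed=True)["t_db260_minus_test4"]
        .agg(["mean", "std", "count"]))
```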

Comments Off

Filed under Climate Change Metrics

Guest Post “Notions At The Intersection Of Climatology And Renewable Energy” By Jeremy Fordham

 Notions at the Intersection of Climatology and Renewable Energy by Jeremy Fordham

 It’s quite common to hear renewable energy lauded as “America’s last saving grace” these days—after all, how else will the U.S. relieve its dependence on foreign oil imports? How else will the nation engage in macroscopic sustainable practices that will ultimately preserve the environment and stop us from regurgitating greenhouse gases into the atmosphere? It’s still uncertain when renewable technology will attain economic parity with traditional methods of energy use, but there’s no doubt that these technologies are a lot cleaner.  There’s also no doubt that progress in this sector is heavily dependent upon the analytical data provided by climatological studies.

  NASA has extensive databases full of information on surface meteorology and solar characteristics available online. In turn, engineers use this data as a set of parameters for installing things like solar panels and wind turbines in a given location.  The effectiveness of photovoltaic (PV) technologies, for instance, is measured in part by a region’s insolation characteristics. How much direct sunlight does a place receive in a given year? How does a region’s insolation change over a decade, and what factors affect this oscillation? The answers to these questions come from climate scientists, of course.
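
As a purely illustrative sketch (the function and numbers are hypothetical, not drawn from any NASA product), a first-order estimate of what a site's insolation means for a PV installation might look like this:

```python
# Back-of-the-envelope PV yield from mean daily insolation (hypothetical numbers).
def annual_pv_energy_kwh(insolation_kwh_m2_day, panel_area_m2, efficiency, performance_ratio=0.8):
    """First-order annual energy estimate for a fixed PV array."""
    return insolation_kwh_m2_day * 365.0 * panel_area_m2 * efficiency * performance_ratio

# Example: a 5 kWh/m^2/day site, 20 m^2 of panels at 18% efficiency
print(f"{annual_pv_energy_kwh(5.0, 20.0, 0.18):.0f} kWh/yr")
```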

 While chemical engineers of all specialties work to improve the electronic capabilities of PV cells, their work could easily be rendered null if these improved devices are stationed in places with erratic insolation tendencies. Climatological data is so important to the economic optimization of renewable technology that without it, it would make little sense to fund the improvement of these devices.

While online PhD programs in renewable energy have yet to come to fruition, many institutions around the world have developed distance-learning programs geared towards addressing advanced issues at the intersection of technology development, climatology, and energy economics. Loughborough University offers an online distance-learning curriculum that leads to a Master of Science in Renewable Energy Systems Technology, and numerous other universities, especially in Europe, are offering a significant number of courses related to these subjects online. As universities in the U.S. continue to realize the importance of interdisciplinary study in general sustainability, more programs focused on understanding climatology’s relationship to technology development are sure to gain popularity.

 However, climate science doesn’t just influence the technical development of energy systems. It’s also a very important part of energy policy creation and is essential to an objective analysis of the societal parameters associated with climate change.  A course taught by meteorologist David Eichorn as part of the SUNY College of Environmental Science and Forestry seeks to blend web-based climate change media with outside opinions in order to

 “…enable students to continue their exploration of personal and societal climate change solutions…”

 It’s difficult for one person running their electric car to significantly affect a region’s greenhouse gas levels, but all it takes is a single example to spark larger trends. It is courses like David Eichorn’s that inspire informed opinions and dialogue between people who might potentially be making policy decisions in boardrooms in the future. More universities need to recognize the importance of interdisciplinary integration when it comes to climate science. Sure, it involves a lot of differential calculus for those heavy math-lovers, but it is also very much a social discipline that deserves examination through various critical lenses.

  It’s very difficult to get a handle on what sustainability actually is, but a common thread that runs through most of the official definitions is that it doesn’t have distinct ties with a specific field. Engineers can be “sustainable,” but so can policy makers and architects and even businesses. The idea of “being nice to the environment” or “reducing carbon emissions” or “creating processes that ensure the longevity of future generations” is almost impossible to put into a single concept. The very idea of sustainability arises from the principles of multi-disciplinary collaboration—that includes government leaders, janitors, manufacturers, writers, scientists, doctors … and the list continues. Will any of these professionals have a truly appreciative understanding of sustainability if they’re not exposed to the principles while studying in school?

It would be wise for universities to take a less-standardized approach to sustainability education. Climate scientists should have the opportunity to learn how their work can influence public policy. Engineers should understand how the implementation of a multi-megawatt solar installation can be improved by a deep knowledge of a region’s climate. Leveraging this interconnectedness will ultimately lead to better, more optimized solutions in the space of “green energy” while giving graduates with this knowledge an edge over their competition.

Comments Off

Filed under Academic Departments, Climate Science Op-Eds

Job Announcement Western Kentucky University

WESTERN KENTUCKY UNIVERSITY GEOGRAPHY AND GEOLOGY KENTUCKY CLIMATE CENTER RESEARCH SCIENTIST

Western Kentucky University’s Kentucky Climate Center (KCC), housed within the Department of Geography and Geology and a charter member of the Applied Research and Technology Program (ARTP), is seeking applicants for a Research Scientist position in Applied Meteorology and Climatology. This person must have a strong background in both atmospheric modeling and data analysis (both modeled and observed). The department also has a strong program in Geographic Information Science, and the KCC has an established record of collaboration with other units within WKU and with other institutions. Research performed in this position will involve significant use of Kentucky Mesonet data, along with other atmospheric datasets, leading to the development of a wide variety of decision tools. Opportunities for collaboration will be available and encouraged.

The Kentucky Mesonet is a research grade operational network observing weather and climate in the Commonwealth. Continued employment is for several years pending budgetary approval and satisfactory performance evaluations.

The following duties and responsibilities are customary for this position. They are not to be construed as all-inclusive; duties may be added, deleted, or reassigned at management’s discretion and based on institutional needs.

  • Collaborates with primary supervisors
  • Interacts and collaborates with students, staff members, and other atmospheric and environmental scientists in the KCC and the department
  • Conducts applied research, collaborates with software developers for designing and creating interactive, web-based decision tools, writing papers for peer-reviewed journals, and writing grant proposals
  • Participates in grant writing as a PI or Co-PI

 Required Qualifications:

  • Doctoral degree in Atmospheric Science, Meteorology, Environmental Science, Geography, or a closely related field
  • Excellent oral and written communication skills
  • Strong background in Atmospheric Modeling (e.g., WRF, MM5, or RAMS or Climate models) and data analysis
  • Experience in working with both modeled and observational data
  • Strong background in programming (C, C++, FORTRAN, PHP, Perl, Java, JavaScript etc.) and data analysis and visualization software (GrADS, IDV, NCL, S-Plus, Matlab etc.)
  • Experience in working as a member of a research team
  • Ability to think creatively and perform research duties
  • Ability to move computers, connect computers with relevant accessories and upload software

Expected Salary Range:  $60,000.00 – $70,008.00 annually

Applications for employment will be accepted electronically only.  Interested candidates should submit a cover letter with statement of professional goals, and up-to-date CV including list of publications, and names, addresses and daytime phone numbers of three professional references.  Please refer to the following website to apply:  http://asaweb.wku.edu/wkujobs   Please reference requisition number S2897.  For further assistance please call (270) 745-5934.  To ensure full consideration please submit application materials by May 31st, 2011.  Position will remain open until filled. 

Western Kentucky University does not discriminate on the basis of race, color, national origin, sex, sexual orientation, disability, age, religion, or marital status in admission to career and technical education programs and/or activities, or employment practices in accordance with Title VI and VII of the Civil Rights Act of 1964, Title IX of the Educational Amendments of 1972, Section 504 of the Rehabilitation Act of 1973, Revised 1992, and the Americans with Disabilities Act of 1990.

Comments Off

Filed under Academic Departments

Climate Science Myths And Misconceptions – Post #6 On Carbon Dioxide As A Pollutant

I have posted a number of times with respect to whether atmospheric concentrations of CO2 are a pollutant; e.g. see

New Plans To Regulate CO2 As A Pollutant

I want to discuss this further in order to expose a misconception on this issue.  

Misconception #6:  Carbon Dioxide Is In The Same Group As The EPA Criteria Pollutants.

As I wrote in my post

Is CO2 a Pollutant?

A “pollutant” is defined as:

“a harmful chemical or waste material discharged into the water or atmosphere.”

To “pollute” is to:

“make unclean, impure, or corrupt; defile; contaminate; dirty.”

The American Meteorological Society’s Glossary lists the definition as:

air pollution: The presence of substances in the atmosphere, particularly those that do not occur naturally. These substances are generally contaminants that substantially alter or degrade the quality of the atmosphere. The term is often used to identify undesirable substances produced by human activity, that is, anthropogenic air pollution. Air pollution usually designates the collection of substances that adversely affects human health, animals, and plants; deteriorates structures; interferes with commerce; or interferes with the enjoyment of life. Compare airborne particulates, designated pollutant, particulates, criteria pollutants.

The question is: How does atmospheric carbon dioxide fit into this definition? Carbon dioxide does occur naturally, of course, and is essential to life on Earth, as it is an essential chemical component in the photosynthesis process of plants. This is in contrast with other trace gases in the lower atmosphere such as carbon monoxide, ozone, and sulfur dioxide which have direct health and environmental effects on humans and vegetation. Indeed, when combustion is optimized, less carbon monoxide and more carbon dioxide are produced. There are no positive effects that I am aware of at any level of these pollutants in the lower atmosphere.

Thus, it is more informative to define anthropogenic inputs of carbon dioxide as a climate forcing, as was done in the 2005 National Research Council Report. This provides the recognition that carbon dioxide does not have direct health effects, but it does significantly affect our climate. Of course, carbon monoxide, ozone, and sulfur dioxide are also climate forcings. When these other atmospheric constituents are referred to in news articles and elsewhere, we would benefit from a distinction between an “air pollutant” and a “climate forcing” depending on the context.

The distinction between a pollutant and a human climate forcing is significant. The criteria pollutants of the Environmental Protection Agency are described in their text

“The Clean Air Act requires EPA to set National Ambient Air Quality Standards for six common air pollutants. These commonly found air pollutants (also known as “criteria pollutants”) are found all over the United States. They are particle pollution (often referred to as particulate matter), ground-level ozone, carbon monoxide, sulfur oxides, nitrogen oxides, and lead. These pollutants can harm your health and the environment, and cause property damage. Of the six pollutants, particle pollution and ground-level ozone are the most widespread health threats. EPA calls these pollutants “criteria” air pollutants because it regulates them by developing human health-based and/or environmentally-based criteria (science-based guidelines) for setting permissible levels. The set of limits based on human health is called primary standards. Another set of limits intended to prevent environmental and property damage is called secondary standards.”

where

“Exposure to these pollutants is associated with numerous effects on human health, including increased respiratory symptoms, hospitalization for heart or lung diseases, and even premature death. “

The distinction between these criteria atmospheric pollutants and atmospheric carbon dioxide is that the criteria pollutants have NO positive benefits. In contrast, CO2 does have positive benefits to vegetation since plants use added CO2 to grow. While there can be undesirable effects (such as certain vegetation being better able to utilize added CO2) as well as climate effects, the fact that CO2 does have positive effects for some situations makes it different from (tropospheric) ozone, particulate matter, carbon monoxide, nitrogen oxides, sulfur dioxide and lead.

Thus

Human additions of carbon dioxide are NOT in the same group as the EPA criteria pollutants. The criteria pollutants have NO positive benefits. Added CO2, while it may have negative climate effects, also has positive benefits [as do all of the climate forcings].

For the EPA to regulate CO2 as a pollutant would be a significant expansion of the definition of what is a pollutant.

Comments Off

Filed under Climate Change Forcings & Feedbacks, Climate Science Misconceptions

Advertisements On My Weblog Are Not Permitted

NOTICE: While I do not see this when I open my weblog in my browser, I am told by others that an ad from Google is appearing right below today’s post. I am working to get this excluded. Advertisements by anyone are not permitted on my weblog.

Comments Off

Filed under Uncategorized

New Paper “History Of Climate Modeling” By Edwards 2011 Illustrates A Current Scientifically Flawed Focus Of Climate Science

A new article

Edwards, Paul N., 2011: History of climate modeling. WIREs Climate Change, 2 (January/February), 128–139, doi: 10.1002/wcc.95. John Wiley & Sons, Ltd.

with the abstract [highlight added]

“The history of climate modeling begins with conceptual models, followed in the 19th century by mathematical models of energy balance and radiative transfer, as well as simple analog models. Since the 1950s, the principal tools of climate science have been computer simulation models of the global general circulation. From the 1990s to the present, a trend toward increasingly comprehensive coupled models of the entire climate system has dominated the field. Climate model evaluation and intercomparison is changing modeling into a more standardized, modular process, presenting the potential for unifying research and operational aspects of climate science.”

provides clear evidence of the current [in my view scientifically invalid] approach to climate science. An excerpt from the paper reads

“Climate models—theory-based representations of average atmospheric flows and processes—are the fundamental tools of modern climate science……Since the 1960s, GCMs—computer simulations of atmospheric flows and processes over long periods— have come to dominate climate science, although simpler models remain important both in their own right and as checks on sub-models included in GCMs.”

While I agree climate models are an invaluable tool to use for data analysis (i.e. reanalyses), the assessment of climate processes (i.e. process studies) and the determination of predictability, the fundamental tools of modern climate science must be observed real world data. Models are only hypotheses! 

Indeed, Edwards implicitly recognizes that these climate models have a serious unresolved issue when he writes [highlight added]

“These trends have brought many disciplines together to seek realistic, potentially predictive models of climate change.”

“[P]otentially predictive” means that their skill has not yet been shown when compared with observations!

I discussed the appropriate use of models when I started my weblog in 2005; e.g. see

What Are Climate Models? What Do They Do?

where I wrote

“There are three types of applications of these models: for process studies, for diagnosis and for forecasting.

Process studies: The application of climate models to improve our understanding of how the system works is a valuable application of these tools. In an essay, I used the term sensitivity study to characterize a process study. In a sensitivity study, a subset of the forcings and/or feedback of the climate system may be perturbed to examine its response. The model of the climate system might be incomplete and not include each of the important feedbacks and forcings.

Diagnosis: The application of climate models, in which observed data is assimilated into the model, to produce an observational analysis that is consistent with our best understanding of the climate system as represented by the manner in which the fundamental concepts and parameterizations are represented. Although not yet applied to climate models, this procedure is used for weather reanalyses (see the NCEP/NCAR 40-Year Reanalysis Project).

Forecasting: The application of climate models to predict the future state of the climate system. Forecasts can be made from a single realization, or from an ensemble of forecasts which are produced by slightly perturbing the initial conditions and/or other aspects of the model…..”
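
To make the forecasting category concrete, here is a toy sketch of an ensemble generated by slightly perturbing the initial conditions. It uses the Lorenz-63 system rather than any real climate model, purely to illustrate how small initial differences limit predictability:

```python
# Toy ensemble forecast: Lorenz-63 with perturbed initial conditions (illustration only).
import numpy as np

def lorenz63_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

def run(state, n_steps=2000):
    traj = [state]
    for _ in range(n_steps):
        state = lorenz63_step(state)
        traj.append(state)
    return np.array(traj)

base = np.array([1.0, 1.0, 1.0])
ensemble = [run(base + 1e-4 * np.random.default_rng(i).normal(size=3)) for i in range(10)]

# Ensemble spread grows with lead time: tiny initial differences limit predictability.
spread = np.std([member[:, 0] for member in ensemble], axis=0)
print(f"x-spread at step 0: {spread[0]:.5f}, at step 2000: {spread[-1]:.2f}")
```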

The funding of multi-decadal climate predictions by the NSF and others (e.g. see) is an example of the current flawed approach which Edwards reports on in his paper. NSF and these other agencies are pouring research funding into an approach which is not following the robust scientific method (e.g. see).

Comments Off

Filed under Climate Models, Climate Science Misconceptions

Guest Post “European Diurnal Temperature Range (DTR) Is Increasing, Not Decreasing” By Francis Massen

Guest post by Francis Massen -  francis.massen [at] education.lu  [see also previous posts by Francis].

Abstract:

AGHG-induced climate change theory often points to a decreasing diurnal temperature range [DTR] as a clear sign of global warming. The European continent does not follow that pattern: since the 1970s DTR has been increasing throughout Europe, as shown by the data of an individual station and by two research papers using the European Climate Assessment dataset.

1. The global DTR trend from IPCC’s FAR.

The global trend of DTR (Daily Temperature Range = difference between daily maximum and daily minimum temperatures) has been used in IPCC’s FAR (the Fourth Assessment Report, 2007) as a clear sign of global warming caused by human-emitted greenhouse gas accumulation: anthropogenic GHG emissions are said to partially block and reflect outgoing heat radiation (OLR = Outgoing Longwave Radiation) and thus increase the surface temperature; this heating should be most noticeable during the night, in the absence of incoming sunlight. The result should be a larger increase of the minimum daily temperature than the corresponding increase in the daily maximum temperature. As a consequence, DTR should decrease, as shown in Fig. 1.

 

Fig. 1: Global DTR anomaly since 1950 (IPCC, FAR 2007 [1])

Regional trends may well be different from this global decrease; this is indeed the case for Europe, as tacitly acknowledged by FAR in fig. 3.11 

 

Fig. 2: Positive DTR trend shown in FAR for France and Western Germany (turquoise arrow and circle added) [2]

Numerous papers have been written on DTR trends and on how DTR may be influenced by UHI, cloud cover, greenhouse gases, moisture, precipitation, solar radiation, etc. Roger Pielke Sr. is vocal in his warnings about the uncritical use of DTR as a proxy for global warming: measuring the nightly minimum temperature is especially fraught with problems related to the stability of the boundary layer. As a consequence, Roger recommends using the daily maximum temperature (if the urge to rely on surface temperatures is irresistible). This warning is repeated in his discussion of a paper by McNider et al. [3], where these authors conclude that “minimum temperatures in a stable boundary layer are not very robust measures of the heat content of the deep atmosphere”.

 2. The European situation.

Do European observations document a continuous decrease in DTR? The answer is a resounding NO!

Let me start with my own measurements made at meteoLCD, Diekirch, Luxembourg [4]. The trend is clearly positive, as shown in Figure 3.

Fig. 3: Trends of average daily minimum, maximum, and DTR from 1998 to 2010 at Diekirch, Luxembourg [4].

The positive trend of 0.03 °C/y is not caused by the heat-wave year 2003, with its exceptionally high DTR of 10.03 °C: if this year is skipped, linear regression gives an even higher DTR slope of 0.04 °C/y, significant at the 95% level.
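
A minimal sketch of that kind of check (annual mean DTR regressed on year, with and without 2003); the values below are placeholders, not the actual meteoLCD series:

```python
# Sketch: annual-mean DTR trend with and without an outlier year (placeholder data).
import numpy as np
from scipy import stats

years = np.arange(1998, 2011)
dtr = np.array([8.9, 9.0, 8.8, 9.1, 9.2, 10.03, 9.1, 9.3, 9.2, 9.4, 9.3, 9.5, 9.4])  # hypothetical

def trend(x, y):
    res = stats.linregress(x, y)
    return res.slope, res.pvalue

print("all years:    slope=%.3f degC/yr, p=%.3f" % trend(years, dtr))
mask = years != 2003
print("without 2003: slope=%.3f degC/yr, p=%.3f" % trend(years[mask], dtr[mask]))
```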

This positive DTR trend has also been documented in two papers, both of which use the European Climate Assessment dataset (ECA) of more than 2000 stations, usually no more than 75 km apart.

Klok E.J. et al. [5] use 333 stations from this dataset, covering most of Europe, to calculate DTR trends.

Fig. 4: ECA stations (August 2007); figure from [5]

They find an average annual trend of +0.09 °C/decade; the trend is negative for winter (-0.11 °C/decade), practically zero for autumn (-0.01 °C/decade), and strongly positive for spring and summer (+0.15 and +0.14 °C/decade).

Knut Makowski of ETH Zürich completed his doctoral thesis in 2009, titled “The daily temperature variation and the solar surface radiation” [6]. He writes in the abstract: “It has been widely accepted that DTR decreased on a global scale during the second half of the twentieth century. In this thesis it is shown, however, that the long-term trend of annual DTR has reversed from a decrease to an increase during the 1970s in Western Europe and during the 1980s in Eastern Europe”. For the Benelux nations (Belgium, the Netherlands, Luxembourg) he shows a figure which suggests a DTR increase of 0.2 °C during the 1998-2005 period, i.e. a mean annual trend of about +0.03 °C/y, quite similar to that found at meteoLCD by linear regression.

 

Fig.5. DTR increase for the BeNeLux nations: +0.2°C from 1998 to 2005

(figure adapted from [6])

Makowski explains that increasing water vapour content would diminish incoming shortwave radiation from the sun and increase downwelling longwave radiation from the greenhouse gases: this means cooler days and warmer nights, i.e. a decreasing DTR. This intuitive explanation should be taken with a grain of salt. Using the ESRL website [7] to calculate specific humidity at 300 mb for the box [2.5-7.5° East longitude, 45-55° North latitude] covering a good part of Western Europe does indeed give a decreasing trend in specific humidity at the 300 mb level, but it also shows several short-term reversals, the last starting in 1998.

 

Fig. 6: Specific humidity at 300 mb decreases in a box centered on Western Europe; but this overall 40-year trend may reverse during shorter periods, as can be seen for 1976-1983 and 1998-2010.
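
For readers who want to reproduce this kind of box average themselves, the calculation behind Fig. 6 is essentially a latitude-weighted mean over the grid cells in the box followed by a linear fit. Here is a sketch with synthetic data standing in for the ESRL/NCEP reanalysis field [7]:

```python
# Sketch: latitude-weighted box mean of a gridded field, then a linear trend (synthetic data).
import numpy as np

years = np.arange(1970, 2011)
lats = np.arange(45.0, 55.1, 2.5)    # 45-55 N
lons = np.arange(2.5, 7.6, 2.5)      # 2.5-7.5 E
rng = np.random.default_rng(1)
# Synthetic specific humidity at 300 mb (g/kg): weak negative trend plus noise
q300 = 0.45 - 0.001 * (years - years[0])[:, None, None] \
       + 0.02 * rng.normal(size=(years.size, lats.size, lons.size))

weights = np.cos(np.deg2rad(lats))[None, :, None]
box_mean = (q300 * weights).sum(axis=(1, 2)) / (weights.sum() * lons.size)

slope = np.polyfit(years, box_mean, 1)[0]
print(f"box-mean q300 trend: {slope * 10:.4f} g/kg per decade")
```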

3. Conclusion

The urge to find a simple (simplistic?) parameter documenting global climate change may lead to hasty and faulty conclusions. If a whole continent follows a pattern opposite to the one chosen as proof of a warming world, should one not be more careful in using “global” parameters? Roger Pielke Sr., with admirable tenacity, keeps telling us that regional climate is what matters and that playing with global averages may be interesting for the junior academic, but useless for solid political decisions. The DTR case is another validation of this position.

REFERENCES:

1. Climate Change 2007: Working Group I: The Physical Science Basis, Fig. 3.2. http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch3s3-2-2.html

2. Climate Change 2007: Working Group I: The Physical Science Basis, Fig. 3.11. http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch3s3-2-2-7.html

3. http://pielkeclimatesci.wordpress.com/2011/05/06/paper-a-stable-boundary-layer-perspective-on-global-temperature-trends-by-mcnider-et-al-2010/

4. F. Massen, meteoLCD trends: http://meteo.lcd.lu/trends/meteolcd_trends.html

5. Klok, E.J., and A.M.G. Klein Tank: Updated and extended European dataset of daily climate observations. International Journal of Climatology, Vol. 29, Issue 8, pp. 1182-1191, 2009.

6. Makowski, K.: The daily temperature amplitude and surface solar radiation. PhD dissertation, Diss. ETH 18319, 2009. http://www.iac.ethz.ch/people/mknut/diss

7. Earth System Research Laboratory: http://www.esrl.noaa.gov/psd/cgi-bin/data/timeseries/timeseries1.pl

Comments Off

Filed under Climate Change Metrics, Guest Weblogs

A Summary Of Our New Paper “Analysis Of The Impacts Of Station Exposure On The U.S. Historical Climatology Network Temperatures and Temperature Trends” By Fall Et Al 2011

 Our paper

Fall, S., A. Watts, J. Nielsen-Gammon, E. Jones, D. Niyogi, J. Christy, and R.A. Pielke Sr., 2011: Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends. J. Geophys. Res., in press. Copyright (2011) American Geophysical Union.

has been accepted and is now in press. Below I have presented a summary of the study and its major messages from my perspective.  While the other authors of our paper have read and provided input on the information given below, the views presented below are mine. I will be posting on the history of my involvement on this subject in a follow-up post in a few days.

Volunteer Study Finds Station Siting Problems Affect USA Multi-Decadal Surface Temperature Measurements

We found that the poor siting of a significant number of the climate reference sites (USHCN) used by NOAA’s National Climatic Data Center (NCDC) to monitor national surface air temperatures has led to inaccuracies and larger uncertainties in the analysis of multi-decadal surface temperature anomalies and trends than assumed by NCDC.

NCDC does recognize that this is an issue. In the past decade, NCDC has established a new network, the Climate Reference Network (CRN), to measure surface air temperatures within the United States going forward. According to our co-author Anthony Watts:

“The fact that NOAA itself has created a new replacement network, the Climate Reference Network, suggests that even they have realized the importance of addressing the uncertainty problem.”

The consequences of this poor siting for their analyses of multi-decadal trends and anomalies up to the present, however, have not been adequately examined by NCDC.

We are seeking to remedy this shortcoming in our study.

The placement of the USHCN sites can certainly affect the temperatures being recorded—an area of asphalt (which is warmer than its surroundings on a sunny day) or an irrigated lawn (which is cooler than surrounding bare soil on a sunny day) situated near a station, for example, will influence the recorded surface air temperatures.

NOAA has adopted siting criteria for their climate reference stations: CRN 1 stations are the least likely to be influenced by nearby sources of heating or cooling, while CRN 5 stations are the most likely to be contaminated by local effects. These local effects include nearby buildings, parking lots, water treatment plants, irrigated lawns, and other such local land features.

To determine how the USHCN stations satisfied the CRN siting criteria and also whether the station siting affected temperature trend characteristics, Anthony Watts of IntelliWeather set up the Surface Stations project in 2007. More than 650  volunteers nationwide visually inspected 1007 of the 1221 USHCN stations.  The volunteers wrote reports on the surroundings of each station and supplemented these reports with photographs.  Further analysis by Watts and his team used satellite and aerial map measurements to confirm distances between the weather station sensors and nearby land features.

The Surface Stations project is truly an outstanding citizen scientist project under the leadership of Anthony Watts! The project did not involve federal funding. Indeed, these citizen scientists paid the page charges for our article. This is truly an outstanding group of committed volunteers who donated their time and effort to this project!

Analyzing the collected data, as reported in our paper, we found that only 80 of the 1007 sites surveyed in the 1221-station network met the criteria for CRN 1 or CRN 2 sites – those deemed appropriate for measuring climate trends by NCDC. Of the remainder, 67 sites received a CRN 5 rating – the worst rating. While all groups of stations showed warming trends over both the 30-year and 115-year periods examined, we found that the minimum temperature trends appeared to be overestimated and the maximum temperature trends underestimated at the poorer sites.

This discrepancy matters quite a bit. Wintertime minimum temperatures help determine plant hardiness, for example, and summertime minimum temperatures are very important for heat wave mortality. The use of temperature trends from poorly sited climate stations, therefore, introduces an uncertainty in our ability to quantify these key climate metrics.

While all groups of stations showed warming trends over those periods, there is evidence of a higher level of uncertainty in the trends, since it was found, as one example, that according to the best-sited stations the 24-hour temperature range in the lower 48 states has no century-scale trend, while the poorly sited locations show a significantly decreasing diurnal temperature range. This raises a red flag to avoid poorly sited locations, since clearly station siting affects the quality of the surface temperature measurements.

The inaccuracies in the maximum and minimum temperature trends also matter in the quantification of global warming. The inaccurate measurements from poorly sited stations are merged with those from well-sited stations in order to provide area-average estimates of surface temperature trends, including a global average. In the United States, where this study was conducted, the biases in maximum and minimum temperature trends are fortuitously of opposite sign but about the same magnitude, so they cancel each other and the mean trends are not much different from siting class to siting class. This finding needs to be assessed globally to see if it is also true more generally.
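
The cancellation is simple arithmetic, since the mean temperature trend is essentially the average of the maximum and minimum temperature trends. A sketch using the CRN 5 minus CRN 1&2 trend differences quoted in the appendix below:

```python
# Why opposite-sign Tmax/Tmin trend biases nearly cancel in the mean temperature trend.
tmax_bias = -0.013   # degC/yr, CRN 5 minus CRN 1&2, maximum temperature (from the appendix)
tmin_bias = +0.011   # degC/yr, CRN 5 minus CRN 1&2, minimum temperature (from the appendix)

tmean_bias = (tmax_bias + tmin_bias) / 2.0   # mean temperature ~ (Tmax + Tmin) / 2
dtr_bias = tmax_bias - tmin_bias             # diurnal range = Tmax - Tmin

print(f"mean-temperature trend difference: {tmean_bias:+.3f} degC/yr")  # -0.001, nearly zero
print(f"diurnal-range trend difference:    {dtr_bias:+.3f} degC/yr")    # -0.024 (the paper quotes -0.023)
```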

However, even the best-sited stations may not be accurately measuring trends in temperature or, more generally, in trends in heat content of the air which includes the effect of water vapor trends (which is the more correct metric to assess surface air warming and cooling; see). Also, most of the best sited stations are at airports, which are subject to encroaching urbanization, and/or use a different set of automated equipment designed for aviation meteorology, but not climate monitoring. Additionally, the NCDC corrections for station moves or other inhomogeneities use data from poorly-sited stations for determining adjustments to better-sited stations, thus muddling the cleaner climate data. We are looking at these issues for our follow-on paper.

However, we know from our study that the use of these poorly sited locations in constructing multi-decadal surface temperature trends and anomalies has  introduced an uncertainty in our quantification of the magnitude of how much warming has occurred in the United States during the 20th and early 21st century.

One critical question that needs to be answered now is: does this uncertainty extend to the worldwide surface temperature record? In our paper

Montandon, L.M., S. Fall, R.A. Pielke Sr., and D. Niyogi, 2011: Distribution of landscape types in the Global Historical Climatology Network. Earth Interactions, 15:6, doi: 10.1175/2010EI371

we found that the global average surface temperature may be higher than what has been reported by NCDC and others as a result of the bias in the landscape types where the observing sites are situated. However, we were not able to examine the local siting issue that we have been able to study for the USA in our new paper.

Appendix- Summary of Trend Analysis Results

Temperature trend estimates do indeed vary according to site classification. Assuming trends from the better-sited stations (CRN 1 and CRN 2) are most accurate:

  • Minimum temperature warming trends are overestimated at poorer sites
  • Maximum temperature warming trends are underestimated at poorer sites
  • Mean temperature trends are similar at poorer sites due to the contrasting biases of maximum and minimum trends
  • The trend of the “diurnal temperature range” (the difference between maximum and minimum temperatures) is most strongly dependent on siting quality. For 1979-2008 for example, the magnitude of the linear trend in diurnal temperature range is over twice as large for CRN 1&2 (0.13ºC/decade) as for any of the other CRN classes. For the period 1895-2009, the adjusted CRN 1&2 diurnal temperature range trend is almost exactly zero, while the adjusted CRN 5 diurnal temperature range trend is about -0.5°C/century.
  • Vose and Menne [2004, their Fig. 9] found that a 25-station national network of COOP stations, even if unadjusted and unstratified by siting quality, is sufficient to estimate 30-yr temperature trends to an accuracy of +/- 0.012°C/yr compared to the full COOP network. The statistically significant trend differences found here in the central and eastern United States for CRN 5 stations compared to CRN 1&2 stations, however, are as large (-0.013°C/yr for maximum temperatures, +0.011°C/yr for minimum temperatures) or larger (-0.023°C/yr for diurnal temperature range) than the uncertainty presented by Menne et al. (2010).

 More detailed results are found in the paper, including analyses for different periods, comparisons of raw and adjusted trends, and comparisons with an independent temperature data set.

Questions and Answers

Q: So is the United States getting warmer?

A: Yes, in terms of the surface air temperature record. We looked at 30-year and 115-year trends, and all groups of stations showed warming trends over those periods.

Q: Has the warming rate been overestimated?

A: The minimum temperature rise appears to have been overestimated, but the maximum temperature rise appears to have been underestimated.

Q: Do the differing trend errors in maximum and minimum temperature matter?

A: They matter quite a bit. Wintertime minimum temperatures help determine plant hardiness, for example, and summertime minimum temperatures are very important for heat wave mortality. Moreover, maximum temperature trends are the better indicator of temperature changes in the rest of the atmosphere, since minimum temperature trends are much more a function of height near the ground and are of less value in diagnosing heat changes higher in the atmosphere; e.g. see.

Q: What about mean temperature trends?

A: In the United States the biases in maximum and minimum temperature trends are about the same size, so they cancel each other and the mean trends are not much different from siting class to siting class. This finding needs to be assessed globally to see if it is also true more generally.

However, even the best-sited stations may not be accurately measuring trends in temperature or, more generally, in trends in heat content of the air which includes the effect of water vapor trends.  Also, most are at airports, are subject to encroaching urbanization, and use a different set of automated equipment. The corrections for station moves or other inhomogeneities use data from poorly-sited stations for determining adjustments to better-sited stations.

Q: What’s next?

A:  We also plan to look specifically at the effects of instrument changes and land use issues, among other things.  The Surface Stations volunteers have provided us with a superb dataset, and we want to learn as much about station quality from it as we can.

Comments Off

Filed under Climate Change Metrics, Research Papers

Guest Weblog Post Commentary On ‘Sea Level Rise’ By Madhav Khandekar

Madhav Khandekar has provided us with another informative guest post.

Madhav Khandekar is a former research scientist with Environment Canada and is presently on the Editorial Board of the international journal Natural Hazards (Kluwer, Netherlands). Khandekar was an Expert Reviewer for the IPCC 2007 climate change documents, and his latest contribution on sea level rise is a chapter (“Global warming, glacier melt and future sea level rise”) in the book “Global Warming”, published by Sciyo Publishers (Sciyo.com), October 2010.

Global warming, glacier melt and sea level rise: need for more realistic future estimates by  Madhav Khandekar

There is now heightened interest in the possibility of rapid melting of worldwide glaciers and ice caps (e.g., the Greenland and Antarctic ice caps) as a result of ongoing warming, which could lead to escalated sea level rise in the ‘near future’. Sea Level Rise (SLR) is an important climate change parameter which is being intensely discussed at present in the context of human-induced global warming and the climate change debate. Many newspaper articles as well as science magazine articles often refer to worldwide glacier melt and the possibility of a sea level rise of 3 to 7 ft (1 to 2 m) over the next fifty to one hundred years.

Several recent observational field studies using sophisticated remote sensing technology have provided estimates of the “ice sheet mass balance” of the Greenland and Antarctic ice caps; these estimates suggest significant acceleration of ice mass loss in recent years (Rignot et al., 2011, Geophysical Research Letters) and conclude that future SLR will be dominated by melting of the two ice caps. Papers published in the last ten years using atmosphere-ocean general circulation models suggest thermal expansion as the largest contributor to future SLR, which is estimated to be in the range of 20 to 37 cm over the next 100 years.

A recent publication (Vermeer & Rahmstorf, 2009, Proc. National Academy of Sciences) obtains values of 1 m and higher for SLR over the next one hundred years using an empirical model which links future warming of the earth’s surface to increased sea levels. These studies, plus recent Hollywood movies like An Inconvenient Truth showing big ice shelves breaking off and sliding down into the cold Arctic Ocean, have created a perception that “ice caps and glaciers are indeed melting rapidly, causing sea level to rise dramatically”.

How rapidly is the sea level rising at present? Let us look at some latest studies and numbers:

Some basic facts on sea level rise:

1. It is now generally accepted that global sea level increased by about 120 m as a result of the de-glaciation that followed the LGM (Last Glacial Maximum) about 21000 y BP (years Before Present). By about 5000-6000 y BP the melting of the high-latitude ice mass was more or less complete, after which sea level rise was small; globally averaged SLR over the last 1000 years, prior to the twentieth century, has been estimated at just about 0.2 mm/year.

2. The 20th century SLR has been most intriguing and has sparked a large number of studies (see, for example, Khandekar 2009, Energy & Environment). The rate of SLR during the 20th century is now estimated to be about 1.6 to 2.0 mm/yr.

3. GIA (Glacial Isostatic Adjustment): This refers to the gradual springing back of the earth’s crust in response to the removal of ice loads since the LGM, especially in the region of the Gulf of Bothnia (also referred to as Fennoscandia), where the ice was as much as several km thick during the LGM and where relative sea level is still falling at a rate of about 5-10 mm/yr.

4. Recent satellite data from the TOPEX/Poseidon satellite altimeter give a value of SLR of about 2.8 mm/yr, and possibly higher.

5. The IPCC (Intergovernmental Panel on Climate Change), in its latest (2007) document, has estimated SLR to be between 14 and 41 cm (mean value 29 cm) under the A1B (greenhouse gas) scenario, in which the earth’s mean temperature is expected to rise between 2.3°C and 4.1°C over the next 100 years. The IPCC 2007 projects the SLR due to thermal expansion (steric component) as about 23 cm, while the contribution due to melting of glaciers and ice caps (eustatic component) is estimated as about 6 cm over the next 100 years.

Several recent papers have provided SLR numbers which need to be examined carefully in the context of the present debate on global warming and sea level rise. A paper by Holgate (2007, Geophysical Research Letters) analyzed nine long and nearly continuous sea level records over one hundred years (1903-2003) and obtained a mean value of SLR of 1.74 mm/yr, with higher values in the earlier part of the 20th century compared to the latter part.

A comprehensive paper by Prof. (emeritus) Carl Wunsch and co-workers (J. of Climate, December 2007) generates over 100 million data points using a 23-layer general circulation ocean model which includes different types of data (salinity, sea surface temperature, satellite altimetry, Argo float profiles, etc.) and obtains an estimate of SLR of 1.6 mm/yr for the period 1993-2004. A more recent paper by Wenzel & Schroter (Journal of Geophysical Research-Oceans, 2010) analyzes tide gauge records over the period 1900-2006 and obtains a mean value of 1.56 mm/yr with NO statistically significant acceleration in sea level rise. The latest paper by Houston & Dean (Journal of Coastal Research, 2011) carefully analyzes 57 tide gauge records, each with a record length of 80 years, including 25 gauges with data from 1930-2010. This study finds no acceleration in sea level rise, but instead small average decelerations of -0.0014 and -0.0123 mm/yr². These latest findings appear to contradict the general perception that sea level rise is escalating at present.
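
For readers unfamiliar with how such acceleration estimates are obtained: in studies of this kind the “acceleration” is usually taken as twice the quadratic coefficient of a second-order fit to the tide gauge record. A sketch with a synthetic record (not any of the actual gauge data):

```python
# Sketch: sea level acceleration as twice the quadratic coefficient of a fit (synthetic record).
import numpy as np

years = np.arange(1930, 2011)
rng = np.random.default_rng(2)
# Synthetic tide-gauge series: ~1.7 mm/yr rise, no imposed acceleration, plus noise
sea_level_mm = 1.7 * (years - years[0]) + 15.0 * rng.normal(size=years.size)

c, b, a = np.polyfit(years - years[0], sea_level_mm, 2)  # highest power first
print(f"linear rate:  {b:.2f} mm/yr")
print(f"acceleration: {2 * c:+.4f} mm/yr^2")
```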

Recent observations and studies of breaking of ice shelves and ice sheet mass losses must be carefully assessed in the context of Arctic climatology which is now identified as being linked to low frequency atmosphere/ocean oscillation with a period of 60-80 years. An excellent temperature dataset for the entire Arctic basin has been prepared by Dr Igor Polyakov (University of Alaska) for the period 1860-2005.

This dataset shows clearly that the Arctic was at its warmest in 1935 and 1936 and the present temperature in the Arctic is about the same as it was in the mid-1930s. Further, the Arctic witnessed significant icecap and glacier melting during the 1920s and 1930s as evidenced by the following commentary “The Arctic sea is warming up, icebergs are growing scarcer, great masses of ice have been replaced by moraines of earth and stones, at many points well-known glaciers have entirely disappeared (US Weather Bureau 1922)”.

Also, the temperature history of Greenland shows that the 1920s and 1930s were the two warmest decades over Greenland in a long dataset extending from 1880 to 2007. These observations and the US Weather Bureau report strongly suggest that the Arctic witnessed significant ice melt and icecap mass loss during the 1920s and 1930s; however, no detailed quantitative calculations (of icecap mass loss) were possible then due to the lack of adequate remote sensing technology.

An estimate of sea level rise can be made by observing that from 1940 to 2010, global sea level rose by about 13-14 cm. Of this rise, the steric (thermal) component of SLR can be estimated at about 6 cm, while the eustatic (melt) contribution is about 8 cm. If these estimates are used to extrapolate SLR to 2100, we obtain a maximum of 12 cm of SLR due to the eustatic (melting) contribution, and another 8 cm or so due to the steric (thermal expansion) contribution.
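
The arithmetic behind that extrapolation (a sketch; the component estimates are the ones quoted above, carried forward at a constant rate for 90 more years):

```python
# Sketch: extrapolating the 1940-2010 steric and eustatic contributions out to 2100.
steric_1940_2010_cm = 6.0    # thermal expansion component, per the text
eustatic_1940_2010_cm = 8.0  # melt component, per the text
years_past, years_ahead = 70.0, 90.0  # 1940-2010 and 2010-2100

steric_2100 = steric_1940_2010_cm / years_past * years_ahead
eustatic_2100 = eustatic_1940_2010_cm / years_past * years_ahead
print(f"steric to 2100:   ~{steric_2100:.0f} cm")    # ~8 cm
print(f"eustatic to 2100: ~{eustatic_2100:.0f} cm")  # ~10 cm (the text allows up to 12 cm)
```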

In summary, the estimate of a rise in sea level of 1 m or more by 2100 (in the next 90 years) seems unrealistic when analyzed in the context of the present rate of sea level rise, which is just about 1.5 mm to 2.0 mm per year with almost NO acceleration. For the global sea level to rise by over 1 m in the next 90 years would require an acceleration (in sea level rise) of up to 0.28 mm/yr², which is almost two orders of magnitude larger than at present. This seems highly unlikely at present, given that the earth’s climate has not warmed in the last ten years and, further, that the earth’s mean temperature seems to be declining at present.
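
And a quick consistency check on the acceleration figure (a sketch; the 1.8 mm/yr starting rate is taken from the range quoted above):

```python
# Constant acceleration needed to reach ~1 m of rise in 90 years from the present rate.
target_mm, rate_mm_yr, horizon_yr = 1000.0, 1.8, 90.0
# rise = rate * t + 0.5 * a * t**2  =>  a = 2 * (rise - rate * t) / t**2
accel = 2.0 * (target_mm - rate_mm_yr * horizon_yr) / horizon_yr**2
print(f"required acceleration: {accel:.2f} mm/yr^2")
# ~0.21 mm/yr^2 for exactly 1 m; targets above 1 m push toward the ~0.28 mm/yr^2 quoted above
```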

There is a definite need to obtain more realistic estimates of future SLR than what are available at present.

Acknowledgements: I am grateful to Prof. Roger Pielke Sr. for inviting me to write this commentary.

Comments Off

Filed under Climate Change Metrics, Guest Weblogs