Category Archives: Climate Change Metrics

Comments On “The Shifting Probability Distribution Of Global Daytime And Night-Time Temperatures” By Donat and Alexander 2012 – A Not Ready For Prime Time Study

above figure from Caesar et al 2006

A new paper has appeared;

Donat, M. G. and L. V. Alexander (2012), The shifting probability distribution of global daytime and night-time temperatures, Geophys. Res. Lett., 39, L14707, doi:10.1029/2012GL052459.

The abstract reads [highlight added]

Using a global observational dataset of daily gridded maximum and minimum temperatures we investigate changes in the respective probability density functions of both variables using two 30-year periods; 1951–1980 and 1981–2010. The results indicate that the distributions of both daily maximum and minimum temperatures have significantly shifted towards higher values in the latter period compared to the earlier period in almost all regions, whereas changes in variance are spatially heterogeneous and mostly less significant. However asymmetry appears to have decreased but is altered in such a way that it has become skewed towards the hotter part of the distribution. Changes are greater for daily minimum (night-time) temperatures than for daily maximum (daytime) temperatures. As expected, these changes have had the greatest impact on the extremes of the distribution and we conclude that the distribution of global daily temperatures has indeed become “more extreme” since the middle of the 20th century.
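To make the kind of comparison the abstract describes more concrete, here is a minimal illustrative sketch (my own, not the authors' code) of comparing two 30-year samples of daily temperature anomalies at a single grid box; the two anomaly arrays are assumed inputs:

```python
# Illustrative sketch (not the Donat and Alexander code): compare the distribution
# of daily temperature anomalies between two 30-year periods at one grid box.
import numpy as np
from scipy import stats

def compare_periods(anoms_1951_1980, anoms_1981_2010):
    """Both arguments are assumed 1-D arrays of daily Tmax (or Tmin) anomalies."""
    summary = {}
    for name, x in (("1951-1980", anoms_1951_1980), ("1981-2010", anoms_1981_2010)):
        summary[name] = {
            "mean": np.mean(x),
            "variance": np.var(x, ddof=1),
            "skewness": stats.skew(x),
            "p05": np.percentile(x, 5),    # cold tail
            "p95": np.percentile(x, 95),   # warm tail
        }
    # Two-sample Kolmogorov-Smirnov test: has the distribution as a whole shifted?
    ks_stat, p_value = stats.ks_2samp(anoms_1951_1980, anoms_1981_2010)
    summary["KS test"] = {"statistic": ks_stat, "p-value": p_value}
    return summary
```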

This study, unfortunately, perpetuates the use of Global Historical Climatology Network (GHCN)-based surface temperature data as a robust measure of surface temperature trends. The authors report that

 We use HadGHCND [Caesar et al., 2006], a global gridded data set of observed near-surface daily minimum (Tmin) and maximum (Tmax) temperatures from weather stations, available from 1951 and updated to 2010. For this study, we consider daily Tmax and Tmin anomalies calculated with respect to the 1961 to 1990 daily climatological average.

As described in the paper

Caesar, J., L. Alexander, and R. Vose (2006), Large-scale changes in observed daily maximum and minimum temperatures: Creation and analysis of a new gridded data set, J. Geophys. Res., 111, D05101, doi:10.1029/2005JD006280.

A gridded land-only data set representing near-surface observations of daily maximum and minimum temperatures (HadGHCND) has been created to allow analysis of recent changes in climate extremes and for the evaluation of climate model simulations. Using a global data set of quality-controlled station observations compiled by the U.S. National Climatic Data Center (NCDC), daily anomalies were created relative to the 1961–1990 reference period for each contributing station. An angular distance weighting technique was used to interpolate these observed anomalies onto a 2.5° latitude by 3.75° longitude grid over the period from January 1946 to December 2000. We have used the data set to examine regional trends in time-varying percentiles. Data over consecutive 5 year periods were used to calculate percentiles which allow us to see how the distributions of daily maximum and minimum temperature have changed over time. Changes during the winter and spring periods are larger than in the other seasons, particularly with respect to increasing temperatures at the lower end of the maximum and minimum temperature distributions. Regional differences suggest that it is not possible to infer distributional changes from changes in the mean alone.

The Donat and Alexander 2012 article concludes with the text

Using the data from this study we conclude that daily temperatures (both daytime and night-time) have indeed become “more extreme” and that these changes are related to shifts in multiple aspects of the daily temperature distribution other than just changes in the mean. However evidence is less conclusive as to whether it has become “more variable”.

The Donat and Alexander (2012) and Caesar et al (2006) papers, however, both suffer from ignoring issues that have been raised regarding the robustness of the data they use in their analyses. They are either ignoring, or are unaware of, papers showing that their conclusions cannot be considered accurate unless the unresolved uncertainties have either been corrected for, or shown not to affect their analyses. An overview of these issues is given in

Pielke Sr., R.A., C. Davey, D. Niyogi, S. Fall, J. Steinweg-Woods, K. Hubbard, X. Lin, M. Cai, Y.-K. Lim, H. Li, J. Nielsen-Gammon, K. Gallo, R. Hale, R. Mahmood, S. Foster, R.T. McNider, and P. Blanken, 2007: Unresolved issues with   the assessment of multi-decadal global land surface temperature trends. J. Geophys. Res., 112, D24S08, doi:10.1029/2006JD008229.

which the authors ignored in their study. The questions the authors did not examine before accepting the robustness of their analyses include:

1. The quality of station siting in the HadGHCND and whether this affects the extreme surface temperatures [Pielke et al 2002; Mahmood et al 2006; Fall et al 2011; Martinez et al 2012].

2. The effect of a concurrent change over time in the dew point temperatures at each HadGHCND location, which, if they are lower, could result in higher dry bulb temperatures [Davey et al 2006; Fall et al 2010; Peterson et al 2011].

3. A bias in the siting of the HadGHCND observing sites for particular landscape types [Montandon et al 2011].

4. Small scale vegetation effects on maximum and minimum temperatures observed at HadGHCND sites [Hanamean et al 2003].

5. The uncertainty associated with each separate step in the HadGHCND homogenization method to develop grid area averages [Pielke 2005].

6. The warm bias that is expected in the HadGHCND with respect to minimum temperatures [which would be expected to be even more pronounced with respect to extreme cold temperatures] [Klotzbach et al 2009, 2010; McNider et al 2012].

As just one example from the above list, Mahmood et al 2006 finds that

…the difference in average extreme monthly minimum temperatures can be as high as 3.6 °C between nearby stations, largely owing to the differences in instrument exposures.’

Note also in the figure at the top of this post, the poor spatial sampling for large portions of land.

The conclusion is that the HadGHCND data set is NOT sufficiently quality controlled, despite the assumption of the authors to the contrary. Ignoring peer-reviewed papers that raise issues with their methodology does not follow the scientific method.

The complete citations for these peer-reviewed papers that were ignored are listed below:

Davey, C.A., R.A. Pielke Sr., and K.P. Gallo, 2006: Differences between near-surface equivalent temperature and temperature trends for the eastern United States – Equivalent temperature as an alternative measure of heat content. Global and Planetary Change, 54, 19–32.

Fall, S., N. Diffenbaugh, D. Niyogi, R.A. Pielke Sr., and G. Rochon, 2010: Temperature and equivalent temperature over the United States (1979–2005). Int. J. Climatol., doi:10.1002/joc.2094.

Fall, S., A. Watts, J. Nielsen-Gammon, E. Jones, D. Niyogi, J. Christy, and R.A. Pielke Sr., 2011: Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends. J. Geophys. Res., 116, D14120, doi:10.1029/2010JD015146. Copyright (2011) American Geophysical Union.

Hanamean, J.R. Jr., R.A. Pielke Sr., C.L. Castro, D.S. Ojima, B.C. Reed, and Z. Gao, 2003: Vegetation impacts on maximum and minimum temperatures in northeast Colorado. Meteorological Applications, 10, 203–215.

Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2009: An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841.

Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2010: Correction to “An alternative explanation for differential temperature trends at the surface and in the lower troposphere, J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841”. J. Geophys. Res., 115, D1, doi:10.1029/2009JD013655.

Mahmood, R., S. A. Foster, and D. Logan, 2006: The geoprofile metadata, exposure of instruments, and measurement bias in climatic record revisited. Int. J. Climatol., 26, 1091–1124.

Martinez, C.J., J.J. Maleski, and M.F. Miller, 2012: Trends in precipitation and temperature in Florida, USA. Journal of Hydrology, 452–453, 259–281.

McNider, R.T., G.J. Steeneveld, B. Holtslag, R. Pielke Sr, S. Mackaro, A. Pour Biazar, J.T. Walters, U.S. Nair, and J.R. Christy, 2012: Response and sensitivity of the nocturnal boundary layer over land to added longwave radiative forcing. J. Geophys. Res., doi:10.1029/2012JD017578, in press.

Montandon, L.M., S. Fall, R.A. Pielke Sr., and D. Niyogi, 2011: Distribution of landscape types in the Global Historical Climatology Network. Earth Interactions, 15:6, doi:10.1175/2010EI371.

Peterson, T. C., K. M. Willett, and P. W. Thorne, 2011: Observed changes in surface atmospheric energy over land. Geophys. Res. Lett., 38, L16707, doi:10.1029/2011GL048442.

Pielke Sr., R.A., T. Stohlgren, L. Schell, W. Parton, N. Doesken, K. Redmond, J. Moeny, T. McKee, and T.G.F. Kittel, 2002: Problems in evaluating regional and local trends in temperature: An example from eastern Colorado, USA. Int. J. Climatol., 22, 421–434.

Pielke Sr., Roger A., 2005: Public Comment on CCSP Report “Temperature Trends in the Lower Atmosphere: Steps for Understanding and Reconciling Differences”. 88 pp including appendices.

The Donat and Alexander (2012) paper is particularly at fault in this neglect, as most of the papers questioning the robustness of GHCN-type data sets were published well before their article was completed. The conclusions of the Donat and Alexander study should not be considered robust until they address the issues we raised in our papers.

Filed under Climate Change Metrics, Climate Science Misconceptions, Research Papers

Summary Of Arctic Ice Decline – Recommendations For Investigation Of The Cause(s)

source of top image from the Cryosphere Today

I was alerted to an interesting short movie of the Arctic ice decline;

http://haveland.com/share/piomas2012.gif

See also

Arctic Sea Ice Graphs

and

the WUWT Sea Ice Page

There will be quite a bit of discussion of the upcoming minimum areal extent (which is likely to be a record minimum) in the coming weeks. My suggestion is that we need, for the period 1979 to the present:

i) a presentation of what the CMIP5 models have predicted when run in a hindcast mode,

ii) analyses of lower tropospheric and surface temperature anomalies by season for the Arctic sea ice regions,

iii) analyses of the export volume of sea ice out of the Arctic ocean basin,

iv) analyses of black carbon (soot) deposition on the sea ice,

and

v) analyses of turbulent shearing stress at the surface (which will affect waves and vertical overturning rates of sea ice, such as during storms).

Readers who are aware of such studies are invited to send them to me, and I will post on them.

Filed under Climate Change Metrics

New Article “Monitoring and Understanding Trends in Extreme Storms: State of Knowledge” By Kunkel Et Al 2012

Jos de Laat of KNMI has alerted us to the informative new paper

Kunkel, K., et al., 2012: Monitoring and Understanding Trends in Extreme Storms: State of Knowledge. Bulletin of the American Meteorological Society, doi:10.1175/BAMS-D-11-00262.1.

The abstract reads [highlight added]

The state of knowledge regarding trends and an understanding of their causes is presented for a specific subset of extreme weather and climate types. For severe convective storms (tornadoes, hail storms, and severe thunderstorms), differences in time and space of practices of collecting reports of events make the use of the reporting database to detect trends extremely difficult. Overall, changes in the frequency of environments favorable for severe thunderstorms have not been statistically significant. For extreme precipitation, there is strong evidence for a nationally-averaged upward trend in the frequency and intensity of events. The causes of the observed trends have not been determined with certainty, although there is evidence that increasing atmospheric water vapor may be one factor. For hurricanes and typhoons, robust detection of trends in Atlantic and western North Pacific tropical cyclone (TC) activity is significantly constrained by data heterogeneity and deficient quantification of internal variability. Attribution of past TC changes is further challenged by a lack of consensus on the physical linkages between climate forcing and TC activity. As a result, attribution of trends to anthropogenic forcing remains controversial. For severe snowstorms and ice storms, the number of severe regional snowstorms that occurred since 1960 was more than twice that of the preceding 60 years. There are no significant multi-decadal trends in the areal percentage of the contiguous U.S. impacted by extreme seasonal snowfall amounts since 1900. There is no distinguishable trend in the frequency of ice storms for the U.S. as a whole since 1950.

The article is an important new contribution in the assessment of changes in climate metrics over time. I have, however, one comment about the analyses and their conclusions, in regard to their suggestion of attributing an increase in extreme precipitation to an increase in atmospheric water vapor. Kunkel et al 2012 write

Karl and Trenberth (2003) have empirically demonstrated that for the same annual or seasonal precipitation totals, warmer climates generate more extreme precipitation events compared to cooler climates. This is consistent with water vapor being a critical limiting factor for the most extreme precipitation events. A number of analyses have documented significant positive trends in water vapor concentration and have linked these trends to human fingerprints in both changes of surface (Willett et al. 2007) and atmospheric moisture (Santer et al. 2007).

The authors present analyses in their Table 2 to document an increase in atmospheric water vapor. They describe their analysis in the Table caption as

Table 2. Differences between two periods (1990-2009 minus 1971-1989) for daily, 1-in-5yr extreme events and maximum precipitable water values measured in the spatial vicinity of the extreme event location and within 24 hours of the event time.

However, in their analysis they use just two blocks of time (1990-2009 and 1971-1989), when different sliding analysis windows should have been used in order to assess how robust their finding is with respect to the choice of sampling window.
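As an illustration of the kind of sensitivity test I have in mind, one could slide the two averaging blocks through the record and check whether the late-minus-early difference holds up for every placement of the window. A rough sketch (hypothetical variable names; not code from Kunkel et al or from this site):

```python
# Sketch of a sliding-window sensitivity test (hypothetical inputs). `series` is
# assumed to be an annual series, e.g. annual-maximum precipitable water, and
# `years` the corresponding years.
import numpy as np

def window_differences(years, series, window=20):
    """For every possible placement, difference the mean of a `window`-year block
    against the mean of the block immediately preceding it."""
    years = np.asarray(years)
    series = np.asarray(series, dtype=float)
    diffs = {}
    for start in range(window, len(series) - window + 1):
        early = series[start - window:start].mean()
        late = series[start:start + window].mean()
        diffs[(int(years[start]), int(years[start + window - 1]))] = late - early
    return diffs
```

If the difference keeps the same sign and a comparable magnitude for all window placements, the two fixed blocks in Table 2 are adequate; if it flips sign or varies widely, the conclusion is sensitive to the choice of sampling window.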

They also should consider a peer-reviewed study which yields a different finding when assessing the overall North American trend in precipitable water;

Wang, J.-W., K. Wang, R.A. Pielke, J.C. Lin, and T. Matsui, 2008: Towards a robust test on North America warming trend and precipitable water content increase. Geophys. Res. Letts., 35, L18804, doi:10.1029/2008GL034564. http://pielkeclimatesci.files.wordpress.com/2009/10/r-337.pdf

where we report

An increase in the atmospheric moist content has been generally assumed when the lower-tropospheric temperature (Tcol) increases, with relative humidity holding steady. Rather than using simple linear regression, we propose a more rigorous trend detection method that considers time series memory. The autoregressive moving-average (ARMA) parameters for the time series of Tcol, precipitable water vapor (PWAV), and total precipitable water content (PWAT) from the North American Regional Reanalysis data were first computed. We then applied the Monte Carlo method to replicate the ARMA time series samples to estimate the variances of their Ordinary Least Square trends. Student’s t tests showed that Tcol from 1979 to 2006 increased significantly; however, PWAV and PWAT did not. This suggests that atmospheric temperature and water vapor trends do not follow the conjecture of constant relative humidity over North America. We thus urge further evaluations of Tcol, PWAV, and PWAT trends for the globe.
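In rough outline, the procedure described in the abstract can be sketched as follows (an illustrative simplification, not the actual Wang et al 2008 code; here the significance is judged directly from the Monte Carlo null distribution of slopes rather than from the t test they applied):

```python
# Illustrative simplification of a memory-aware trend test: fit ARMA to the
# detrended series, replicate trend-free ARMA samples by Monte Carlo, and compare
# the observed OLS slope with the null distribution of slopes.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.arima_process import arma_generate_sample

def ols_slope(y):
    t = np.arange(len(y))
    return np.polyfit(t, y, 1)[0]

def arma_trend_test(series, order=(1, 0, 1), n_sim=1000, seed=0):
    series = np.asarray(series, dtype=float)
    t = np.arange(len(series))
    observed_slope = ols_slope(series)

    # Characterize the series' memory with an ARMA fit to the detrended data.
    detrended = series - np.polyval(np.polyfit(t, series, 1), t)
    fit = ARIMA(detrended, order=order).fit()
    ar = np.r_[1.0, -fit.arparams]        # AR polynomial, statsmodels sign convention
    ma = np.r_[1.0, fit.maparams]         # MA polynomial
    sigma = np.std(fit.resid, ddof=1)     # innovation scale taken from the residuals

    # Generate trend-free replicates with the same ARMA structure.
    rng = np.random.default_rng(seed)
    null_slopes = np.array([
        ols_slope(arma_generate_sample(ar, ma, nsample=len(series),
                                       scale=sigma, distrvs=rng.standard_normal))
        for _ in range(n_sim)
    ])
    # Two-sided empirical p-value and the Monte Carlo standard error of the slope.
    p_value = np.mean(np.abs(null_slopes) >= abs(observed_slope))
    return observed_slope, null_slopes.std(ddof=1), p_value
```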

They also did not consider peer-reviewed papers on the role of land use change in altering extreme precipitation events; irrigation of the landscapes surrounding dams, once they are constructed, appears to enhance extreme precipitation, at least in arid and semi-arid landscapes, through the enhancement of convective available potential energy (CAPE); e.g. see

Degu, A. M., F. Hossain, D. Niyogi, R. Pielke Sr., J. M. Shepherd, N. Voisin, and T. Chronis, 2011: The influence of large dams on surrounding climate and precipitation patterns. Geophys. Res. Lett., 38, L04405, doi:10.1029/2010GL046482.

In this paper we wrote

Understanding the forcings exerted by large dams on local climate is key to establishing if artificial reservoirs inadvertently modify precipitation patterns in impounded river basins. Using a 30 year record of reanalysis data, the spatial gradients of atmospheric variables related to precipitation formation are identified around the reservoir shoreline for 92 large dams of North America. Our study reports that large dams influence local climate most in Mediterranean, and semi‐arid climates, while for humid climates the influence is least apparent. Clear spatial gradients of convective available potential energy, specific humidity and surface evaporation are also observed around the fringes between the reservoir shoreline and farther from these dams. Because of the increasing correlation observed between CAPE and extreme precipitation percentiles, our findings point to the possibility of storm intensification in impounded basins of the Mediterranean and arid climates of the United States.

Another example of a study that documents how landscape change in the United States can alter precipitation patterns, including intensity, is

Georgescu, M., D. B. Lobell, and C. B. Field (2009), The Potential Impact of US biofuels on Regional Climate, Geophys. Res. Lett., In Press, doi: 10.1029/2009GL040477

who reported that

Using the latest version of the WRF modeling system we conducted twenty-four, midsummer, continental-wide, sensitivity experiments by imposing realistic biophysical parameter limits appropriate for bio-energy crops in the Corn Belt of the United States….. Maximum, local changes in 2m temperature of the order of 1°C occur for the full breadth of albedo (ALB), minimum canopy resistance (RCMIN), and rooting depth (ROOT) specifications, while the regionally (105°W – 75°W and 35°N – 50°N) and monthly averaged response of 2m temperature was most pronounced for the ALB and RCMIN experiments, exceeding 0.2°C….The full range of albedo variability associated with biofuel crops may be sufficient to drive regional changes in summertime rainfall.

An increase in surface temperature would increase CAPE (and the resultant intensity of thunderstorms) if the water vapor content remained the same (or increased).

Urban landscapes also can contribute to enhancing the magnitude of extreme precipitation; e.g. see

Lei, M., D. Niyogi, C. Kishtawal, R. Pielke Sr., A. Beltrán-Przekurat, T. Nobis, and S. Vaidya, 2008: Effect of explicit urban land surface representation on the simulation of the 26 July 2005 heavy rain event over Mumbai, India. Atmos. Chem. Phys. Discussions, 8, 8773–8816

where among the conclusions it is written

The results indicate that even for this synoptically active rainfall event, the vertical wind and precipitation are significantly influenced by urbanization, and the effect is more significant during the storm initiation…….The results suggest that urbanization can significantly contribute to extremes in monsoonal rain events that have been reported to be on the rise;

and see also, as another example,

Georgescu, M., G. Miguez-Macho, L. T. Steyaert, and C. P. Weaver (2009), Climatic effects of 30 years of landscape change over the Greater Phoenix, Arizona, region: 2. Dynamical and thermodynamical response, J. Geophys. Res., doi:10.1029/2008JD010762.

where the lead author, in his guest post of February 9, 2009, wrote

Our modeling results show a systematic difference in total accumulated precipitation between the most recent (2001) and least recent (1973) landscape reconstructions: a rainfall enhancement for 2001 relative to the 1973 landscape.

We recommend that, in the next assessment led by Ken Kunkel and colleagues, they include consideration of the role of landscape processes in affecting extreme weather over the United States (and elsewhere).

Filed under Climate Change Metrics, Research Papers

“Numerical Simulation Of The Surface Air Temperature Change Caused By Increases Of Urban Area” By Aoyagi et al 2012

There is a new paper that documents the continuing effect of urbanization on surface air temperature trends [h/t Koji Dairaku]

Aoyagi, T., N. Kayaba, and N. Seino, 2012: Numerical Simulation of the Surface Air Temperature Change Caused by Increases of Urban Area, Anthropogenic Heat, and Building Aspect Ratio in the Kanto-Koshin Area. Journal of the Meteorological Society of Japan, Vol. 90B, pp. 11–31, doi:10.2151/jmsj.2012-B02.

The abstract reads

We investigated a warming trend in the Kanto-Koshin area during a 30-year period (1976-2006). The warming trends at AMeDAS stations were estimated to average a little less than 1.3°C/30 years in both summer and winter. These warming trends were considered to include the trends of large-scale and local-scale warming effects. Because a regional climate model with 20-km resolution without any urban parameterization could not well express the observed warming trends and their daily variations, we investigated whether a mesoscale atmospheric model with an urban canopy scheme could express them. To make the simulations realistic, we used 3 sets of real data: National Land Numerical Information datasets for the estimation of the land use area fractions, anthropogenic heat datasets varying in space and time, and GIS datasets of building shapes in the Tokyo Metropolis for the setting of building aspect ratios. The time integrations over 2 months were executed for both summer and winter. A certain level of correlation was found between the simulated temperature rises and the observed warming trends at the AMeDAS stations. The daily variation of the temperature rises in urban grids was higher at night than in the daytime, and its range was larger in winter than in summer. Such tendencies were consistent with the observational results. From factor analyses, we figured out the classic and some unexpected features of urban warming, as follows: (1) Land use distribution change (mainly caused by the decrease of vegetation cover) had the largest daytime warming effect, and the effect was larger in summer than in winter; (2) anthropogenic heat had a warming effect with 2 small peaks owing to the daily variation of the released heat and the timing of stable atmospheric layer formation; and (3) increased building height was the largest factor contributing to the temperature rises, with a single peak in early morning.

The conclusions state that

By numerical simulations using the JMA-NHM, we studied how much 3 bottom boundary condition changes, namely, in land use area fraction, anthropogenic heat release, and increased building aspect ratio, could explain the warming trends observed at the AMeDAS stations during a 30-year period (1976–2006).

A sensitivity study of land use modification, i.e., the spread of urban area, showed a warming effect on average, and that the effect was larger in grids where the land use modification rate was larger. The effects were very small in central Tokyo because the urban area fraction was already saturated there by 1976. This effect was larger in summer when the Bowen ratio is originally small.

The warming effect of anthropogenic heat was concentrated to the central urban area where the heat was mainly loaded. The effect was larger in winter owing to relatively stable atmospheric conditions. Maximum warming was observed in the morning and a secondary peak was seen in the evening if we set the heat to vary realistically with time.

The increase of the aspect ratio of the buildings also had a warming effect on the surface air temperature. It was mainly caused by the inhibition of radiative cooling during nighttime, and the effect was larger in winter. The daily variation of this effect had a single peak in the morning.

This is a very important study, as it documents that climate observing stations in locations undergoing urbanization will have a warming (positive temperature trend) that is separate from any larger-scale warming. As shown in the post

2012 IGBP Article “Cities Expand By Area Equal To France, Germany And Spain Combined In Less Than 20 years”

urbanization continues unabated. NCDC, GISS, CRU and BEST have not adequately accounted for the bias that urbanization produces in their analyses.

Filed under Climate Change Forcings & Feedbacks, Climate Change Metrics, Research Papers

Comments On The Observational Paper “Recent Changes In Tropospheric Water Vapor Over The Arctic As Assessed From Radiosondes And Atmospheric Reanalyses” By Serreze Et Al 2012

Figure 5 from Serreze et al 2012 [Time series (1979–2010) of monthly standardized anomalies (z-scores) in surface to 500 hPa precipitable water at the nine radiosonde sites, along with the linear trend line (shown in black), slope (z score per decade) and (in parentheses) the statistical significance.].

Serreze, M. C., A. P. Barrett, and J. Stroeve (2012), Recent changes in tropospheric water vapor over the Arctic as assessed from radiosondes and atmospheric reanalyses, J. Geophys. Res., 117, D10104, doi:10.1029/2011JD017421.

The abstract reads [highlight added]

Changes in tropospheric water vapor over the Arctic are examined for the period 1979 to 2010 using humidity and temperature data from nine high latitude radiosonde stations north of 70°N with nearly complete records, and from six atmospheric reanalyses, emphasizing the three most modern efforts, MERRA, CFSR and ERA-Interim. Based on comparisons with the radiosonde profiles, the reanalyses as a group have positive cold-season humidity and temperature biases below the 850 hPa level and consequently do not capture observed low-level humidity and temperature inversions. MERRA has the smallest biases. Trends in column-integrated (surface to 500 hPa) water vapor (precipitable water) computed using data from the radiosondes and from the three modern reanalyses at the radiosonde locations are mostly positive, but magnitudes and statistical significance vary widely between sites and seasons. Positive trends in precipitable water from MERRA, CFSR and ERA-Interim, largest in summer and early autumn, dominate the northern North Atlantic, including the Greenland, Norwegian and Barents seas, the Canadian Arctic Archipelago and (on the Pacific side) the Beaufort and Chukchi seas. This pattern is linked to positive anomalies in air and sea surface temperature and negative anomalies in end-of-summer sea ice extent. Trends from ERA-Interim are weaker than those from either MERRA or CFSR. As assessed for polar cap averages (the region north of 70°N), MERRA, CFSR and ERA-Interim all show increasing surface-500 hPa precipitable over the analysis period encompassing most months, consistent with increases in 850 hPa air temperature and 850 hPa specific humidity. Data from all of the reanalyses point to strong interannual and decadal variability. The MERRA record in particular shows evidence of artifacts likely introduced by changes in assimilation data streams. A focus on the most recent decade (2001–2010) reveals large differences between the three reanalyses in the vertical structure of specific humidity and temperature anomalies.
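For readers unfamiliar with the quantities in Figure 5 and in the abstract, the following is an illustrative sketch (not the Serreze et al code) of how surface-to-500 hPa precipitable water is obtained from a sounding and how the monthly standardized anomalies (z-scores) are then formed:

```python
# Illustrative sketch (not the Serreze et al. code) of surface-to-500 hPa
# precipitable water from a sounding, and monthly standardized anomalies
# (z-scores) of the resulting series.
import numpy as np

G = 9.81        # gravitational acceleration, m s-2
RHO_W = 1000.0  # density of liquid water, kg m-3

def precipitable_water_mm(pressure_pa, specific_humidity, p_top=50000.0):
    """PW = (1 / (rho_w * g)) * integral of q dp from the surface to p_top.
    `pressure_pa` is assumed to decrease upward from the surface."""
    keep = pressure_pa >= p_top
    p, q = pressure_pa[keep], specific_humidity[keep]
    pw_m = -np.trapz(q, p) / (RHO_W * G)   # dp is negative upward, hence the minus sign
    return pw_m * 1000.0                   # metres of liquid water -> millimetres

def monthly_z_scores(monthly_pw):
    """monthly_pw: array of shape (n_years, 12). Standardize each calendar month
    against its own mean and standard deviation, as in Figure 5."""
    mean = monthly_pw.mean(axis=0)
    std = monthly_pw.std(axis=0, ddof=1)
    return (monthly_pw - mean) / std
```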

The conclusions include the text

On the basis of radiosonde profiles and output from the three latest generation atmospheric reanalyses (MERRA, CFSR and ERA-I), statistically significant trends in precipitable water over the Arctic as assessed over the period 1979–2010 are mostly positive. Trends from the three reanalyses are variously larger or smaller than radiosonde-based estimates. Trends are highly heterogeneous in space and time. The most consistent pattern between months and between the reanalyses is increasing precipitable water over the open waters of the northern North Atlantic, consistent with observed increases in sea surface temperature. Increases are also prominent over the Canadian Arctic Archipelago, especially in the summer months; the strong summer trends in this region are also seen in the radiosonde data. A feature common to all of the reanalyses is a region of positive trends in precipitable water centered over the Beaufort and Chukchi seas in August and September, corresponding to where negative trends in end-of-summer sea ice extent have been most pronounced. These trend patterns mask considerable variability from year to year and from decade to decade.

The results presented here must be viewed with the caveat of uncertainties in both the radiosonde and the reanalysis data. Obtaining accurate humidity data in polar regions from radiosondes has and will remain to be a daunting problem. Pointing to challenges of data assimilation in high latitudes, we have also shown that the reanalyses have moist and warm biases at and near the surface from autumn through spring, with smaller biases in summer. None of the reanalyses correctly capture the cold season humidity and temperature inversions seen in the radiosonde data. There are some substantial differences between MERRA, CFSR and ERA-I with respect to the vertical structure of recent (2001–2010 decade) anomalies in specific humidity and air temperature. We see evidence of unphysical features in the MERRA record, and numerous past studies have identified a slate of potential inconsistencies related to changes in data streams.

My Comments:

We need more such observational based studies. There are further implications from what Serreze et al 2012 have found:

1. An increase in water vapor in the Arctic and sub-Arctic (and its influence on the long-wave radiative fluxes) is one of the effects that is shown in

McNider, R.T., G.J. Steeneveld, B. Holtslag, R. Pielke Sr, S.   Mackaro, A. Pour Biazar, J.T. Walters, U.S. Nair, and J.R. Christy, 2012: Response and sensitivity of the nocturnal boundary layer over land to added longwave radiative forcing. J. Geophys. Res., doi:10.1029/2012JD017578, in press.

to result in a greater temperature increase in the air near the surface than higher in the troposphere. Such a temperature increase near the surface overstates the magnitude of the top-of-the-atmosphere radiative imbalance (i.e. global warming) when surface temperature data are used for this purpose.

The importance of the stable boundary layer in the Arctic, and that even slight changes in vertical mixing can result in significant changes in near surface temperature without much temperature change elsewhere in the troposphere, appears to be a main reason for the divergence of the surface and lower tropospheric temperature trends that we documented in

Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr.,  J.R. Christy, and R.T. McNider, 2009: An alternative explanation for differential temperature trends at the  surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841.

Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr.,  J.R. Christy, and R.T. McNider, 2010: Correction to: “An alternative explanation for differential temperature trends at the  surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841″, J. Geophys. Res.,  115, D1, doi:10.1029/2009JD013655.

2. That Serreze et al 2012 found that “the reanalyses have moist and warm biases at and near the surface from autumn through spring, with smaller biases in summer” indicates that the use of surface temperature analyses to quantify the magnitude of global warming, at least in this part of the world, is inappropriate.

3. We have examined the effect of an incremental increase in water vapor in the subarctic using a 1-D radiative transfer model as reported in the post

Relative Roles of CO2 and Water Vapor in Radiative Forcing

This analysis, completed by Norm Woods, concluded, with respect to an imposed 5% increase in water vapor for subarctic (summer and winter) and tropical climatological clear-sky profiles, that

The downwelling fluxes at the surface for the subarctic profile appear less sensitive to changes in carbon dioxide and water vapor concentrations than do the fluxes for the tropical and subarctic summer profiles.  The subarctic winter profile has a relatively weak lapse rate in the lowest part of the troposphere, so changes in the  position of the weighting function may have had little effect on the  downwelling fluxes. In addition, the water vapor amounts in the subarctic winter profile are considerably smaller than those in the two other profiles…due to the much lower atmospheric concentrations of water vapor in the subarctic winter sounding, the change from a zero concentration to its current value results in an increase of 116.46 Watts per meter squared, while adding 5% to the current value results in a 0.70 Watts per meter squared increase.

4.  The spatial analyses of global precipitable water reported in

Vonder Haar, T. H., J. Bytheway, and J. M. Forsythe (2012), Weather and climate analyses using improved global water vapor observations, Geophys. Res. Lett., doi:10.1029/2012GL052094, in press.

in which no global trend is seen in recent years, need to be assessed to ascertain whether the Serreze et al radiosonde results are consistent with what is found from the satellite data, with its greater spatial coverage.

Filed under Climate Change Metrics, Research Papers

Global Temperature Report: July 2012 From The University Of Alabama In Huntsville

The July 2012 lower tropospheric temperature analyses are now available (thanks as usual to Phillip Gentry!)

Global Temperature Report: July 2012

Global climate trend since Nov. 16, 1978: +0.14 C per decade

July temperatures (preliminary)

Global composite temp.: +0.28 C (about 0.50 degrees Fahrenheit) above 30-year average for July.

Northern Hemisphere: +0.44 C (about 0.79 degrees Fahrenheit) above 30-year average for July.

Southern Hemisphere: +0.11 C (about 0.20 degrees Fahrenheit) above 30-year average for July.

Tropics: +0.33 C (about 0.59 degrees Fahrenheit) above 30-year average for July.

June temperatures (revised):

Global Composite: +0.37 C above 30-year average

Northern Hemisphere: +0.54 C above 30-year average

Southern Hemisphere: +0.20 C above 30-year average

Tropics: +0.14 C above 30-year average

(All temperature anomalies are based on a 30-year average (1981-2010) for the month reported.)
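In other words (a sketch only, not the UAH processing code), the anomaly reported for a given month is that month's value minus the mean of the same calendar month over 1981-2010:

```python
# Sketch of a monthly anomaly against a 1981-2010 baseline (not the UAH code).
# `monthly_temps` is assumed to map (year, month) -> temperature in deg C.
import numpy as np

def monthly_anomaly(monthly_temps, year, month, base=(1981, 2010)):
    baseline = [monthly_temps[(y, month)]
                for y in range(base[0], base[1] + 1)
                if (y, month) in monthly_temps]
    return monthly_temps[(year, month)] - np.mean(baseline)
```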

Notes on data released Aug. 6, 2012:

Compared to global seasonal norms, July 2012 was the coolest July since 2008, according to Dr. John Christy, a professor of atmospheric science and director of the Earth System Science Center at The University of Alabama in Huntsville.

Compared to seasonal norms, the coldest spot on the globe in July was the South Pole, where winter temperatures averaged 4.5 C (8.1 degrees F) colder than normal. If it isn’t usually the coldest place on Earth in July, seeing temperatures during the deepest part of the Antarctic winter that much colder than normal might move the South Pole into that spot. By comparison, the “warmest” place on Earth in July was in northeastern Alberta, Canada. Temperatures there averaged 3.43 C (about 6.2 degrees F) warmer than normal for the month.

Archived color maps of local temperature anomalies are available on-line at:

http://nsstc.uah.edu/climate/

The processed temperature data is available on-line at:

vortex.nsstc.uah.edu/data/msu/t2lt/uahncdc.lt

As part of an ongoing joint project between UAHuntsville, NOAA and NASA, John Christy, a professor of atmospheric science and director of the Earth System Science Center (ESSC) at The University of Alabama in Huntsville, and Dr. Roy Spencer, an ESSC principal scientist, use data gathered by advanced microwave sounding units on NOAA and NASA satellites to get accurate temperature readings for almost all regions of the Earth. This includes remote desert, ocean and rain forest areas where reliable climate data are not otherwise available.

The satellite-based instruments measure the temperature of the atmosphere from the surface up to an altitude of about eight kilometers above sea level. Once the monthly temperature data is collected and processed, it is placed in a “public” computer file for immediate access by atmospheric scientists in the U.S. and abroad.

Neither Christy nor Spencer receives any research support or funding from oil, coal or industrial companies or organizations, or from any private or special interest groups. All of their climate research funding comes from federal and state grants or contracts.

Filed under Climate Change Metrics

More On The BEST, NCDC, CRU and GISS Analyses Of Multi-Decadal Land Surface Temperature Trends

In my Public Comment on the 2005 CCSP 1.1 report

Pielke Sr., Roger A., 2005: Public Comment on CCSP Report “Temperature Trends in the Lower Atmosphere: Steps for Understanding and Reconciling Differences”. 88 pp including appendices.

I made the following science statements and asked questions with respect to the  multi-decadal land surface temperature analyses (the text below is extracted from my Public Comment):

1. The temperature trend near the surface is not height invariant.

What is the bias in degrees Celsius introduced as a result of aggregating temperature data from different measurement heights, aerodynamic roughnesses, and thermodynamic stability?

In the BEST, NCDC, CRU and GISS analyses, the anomalies are assumed to be height invariant.  The paper

McNider, R. T., G.J. Steeneveld, B. Holtslag, R. Pielke Sr, S. Mackaro, A. Pour Biazar, J. T. Walters, U. S. Nair, and J. R. Christy (2012). Response and sensitivity of the nocturnal boundary layer over land to added longwave radiative forcing, J. Geophys. Res., doi:10.1029/2012JD017578, in press. [for the complete paper, click here]

has, in my view, conclusively shown that multi-decadal trends in minimum temperatures are a function of height near the ground. Since the raw data used  in the BEST, NCDC, CRU and GISS analyses often come from a variety of heights above the ground, they have not included this uncertainty in their analyses. In addition, as shown in the McNider et al paper, even slight changes in vertical mixing, with little or no warming or cooling higher in the atmosphere, can result in a substantial contribution to the trend in minimum temperatures.  This can occur in otherwise pristine locations, such as Siberia in winter, if a few buildings are constructed nearby, etc.

This is why McNider et al is a game changer in terms of the quantitative accuracy of using the land surface temperature trends as a component in the calculation of global warming. I am a co-author on the McNider et al paper.

2. The quantitative uncertainty associated with each step in homogeneity adjustments needs to be provided

What is the quantitative uncertainty in degrees Celsius that are associated with each of the steps in the homogenization of the surface temperature data?

The surface temperature record, which underpins so much of the report, is considered a robust characterization of large-scale averages, despite unresolved issues on its spatial representativeness.

Fall et al (2011), Menne et al (2010) and now Watts et al (2012) have focused on the issue of whether siting quality matters in terms of computing multi-decadal surface temperature trends. The assumption that siting quality does not matter is fundamental to the BEST, NCDC, CRU and GISS analyses.

I was a co-author on the Fall et al (2011) article and provided suggested edits and references to Anthony Watts for his 2012 paper. In the latter paper, I was not involved in the data analysis, but am now providing specific recommendations for their further examination of whether the time of observation bias changes his conclusions.

For reference, the citations for these papers are listed below with a short summary of what was concluded.

1. Menne, M. J., C. N. Williams Jr., and M. A. Palecki (2010), On the reliability of the U.S. surface temperature record, J. Geophys. Res., 115, D11108, doi:10.1029/2009JD013094.

They reported that

we find no evidence that the CONUS average temperature trends are inflated due to poor station siting.

2. Fall, S., A. Watts, J. Nielsen-Gammon, E. Jones, D. Niyogi, J. Christy, and R.A. Pielke Sr., 2011: Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends. J. Geophys. Res., 116, D14120, doi:10.1029/2010JD015146. Copyright (2011) American Geophysical Union.

This paper reported

Temperature trend estimates vary according to site classification, with poor siting leading to an overestimate of minimum temperature trends and an underestimate of maximum temperature trends, resulting in particular in a substantial difference in estimates of the diurnal temperature range trends. The opposite‐signed differences of maximum and minimum temperature trends are similar in magnitude, so that the overall mean temperature trends are nearly identical across site classifications.

3. Watts et al, 2012: An area and distance weighted analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends [to be submitted to JGR]

They reported

Comparisons demonstrate that NOAA adjustment processes fail to adjust poorly sited stations downward to match the well sited stations, but actually adjusts the well sited stations upwards to match the poorly sited stations. Well sited rural stations show a warming nearly three times greater after USHCNv2 adjustments are applied.

Both the Fall et al (2011) and Watts et al (2012) studies are game changers IF they are robust. Fall et al [which used a more complete set of data than Menne et al 2010], while it did not find a statistically significant effect of siting on the mean temperature trend, did find significant effects on the maximum and minimum temperature trends. Watts et al (2012) also found a significant difference due to siting even in the mean temperature trends.

However, while the Fall et al (2011) paper has passed peer review, Watts et al (2012) has not. Also, while Watts et al (2012) used a more up-to-date classification of siting quality, it did not assess the impact of the time-of-observation bias (TOB). This work is now in progress and I am providing suggestions on its assessment.

The inclusion of the TOB may eliminate the differences in trends in the mean, maximum and minimum temperatures between well- and poorly-sited locations. Or it might just eliminate the differences in one or two of these temperature measures. If it eliminates all of them, the Watts et al 2012 study remains a game changer, as it would confirm (from a skeptical source) that the BEST, NCDC, GISS and CRU assumption that siting quality does not matter is robust. This is not the “game changer” that we expected, but if that is what the science tells us, you accept it. Coming from the detailed, thorough analysis that Anthony is leading, this would be a definitive result.

However, if one or more of the temperature measures do depend on siting quality, it is also a game changer, as this would confirm a significant bias in the use of poorly sited land surface temperatures in the construction of gridded and larger-scale (global) average surface minimum, maximum and/or mean temperature anomalies. This would be a conclusion that BEST, NCDC, GISS and CRU would undoubtedly test. This is how science should be done. Up to the present, however, the BEST, NCDC, GISS and CRU research groups have incompletely examined these issues.

Filed under Climate Change Metrics

Summary Of Two Game-Changing Papers – Watts Et al 2012 and McNider Et Al 2012

UPDATE #2: To make sure everyone clearly recognizes my involvement with both papers, I provided Anthony suggested text and references for his article [I am not a co-author of the Watts et al paper], and am a co-author on the McNider et al paper.

UPDATE: There has been discussion as to whether the Time of Observation Bias (TOB) could affect the conclusions reached in Watts et al (2012). This is a valid concern. Thus the “Game Changing” finding of whether the trends are actually different for well- and poorly-sited locations is tentative until it is shown whether or not TOB alters the conclusions. The issue, however, is not easy to resolve. In our paper

Pielke Sr., R.A., T. Stohlgren, L. Schell, W. Parton, N. Doesken, K. Redmond, J. Moeny, T. McKee, and T.G.F. Kittel, 2002: Problems in evaluating regional and local trends in temperature: An example from eastern Colorado, USA. Int. J. Climatol., 22, 421-434.

this is what we concluded [highlight added]

The time of observation biases clearly are a problem in using raw data from the US Cooperative stations. Six stations used in this study have had documented changes in times of observation. Some stations, like Holly, have had numerous changes. Some of the largest impacts on monthly and seasonal temperature time series anywhere in the country are found in the Central Great Plains as a result of relatively frequent dramatic interdiurnal temperature changes. Time of observation adjustments are therefore essential prior to comparing long-term trends.

We attempted to apply the time of observation adjustments using the paper by Karl et al. (1986). The actual implementation of this procedure is very difficult, so, after several discussions with NCDC personnel familiar with the procedure, we chose instead to use the USHCN database to extract the time of observation adjustments applied by NCDC. We explored the time of observation bias and the impact on our results by taking the USHCN adjusted temperature data for 3 month seasons, and subtracted the seasonal means computed from the station data adjusted for all except time of observation changes in order to determine the magnitude of that adjustment. An example is shown here for Holly, Colorado (Figure 1), which had more changes than any other site used in the study.

What you would expect to see is a series of step function changes associated with known dates of time of observation changes. However, what you actually see is a combination of step changes and other variability, the causes of which are not all obvious. It appeared to us that editing procedures and procedures for estimating values for missing months resulted in computed monthly temperatures in the USHCN differing from what a user would compute for that same station from averaging the raw data from the Summary of the Day Cooperative Data Set. This simply points out that when manipulating and attempting to homogenize large data sets, changes can be made in an effort to improve the quality of the data set that may or may not actually accomplish the initial goal.

Overall, the impact of applying time of observation adjustment at Holly was to cool the data for the 1926–58 with respect to earlier and later periods. The magnitude of this adjustment of 2 °C is obviously very large, but it is consistent with changing from predominantly late afternoon observation times early in the record to early morning observation times in recent years in the part of the country where time of observation has the greatest effect. Time of observation adjustments were also applied at five other sites.

Until this issue is resolved, the Game Changer aspect of the Watts et al 2012 study is tentative [Anthony reports he is actively working to resolve this issue; the paper is on hold]. The best way to address the TOB issue is to use data from sites in the Watts et al data set that have hourly resolution. For those years, when the station is unchanging in location, compute the TOB. The Karl et al (1986) method of TOB adjustment, in my view, needs to be more clearly defined and further examined in order to better address this issue. I understand research is underway to examine the TOB issue in detail, and results will be reported by Anthony when ready.
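One way to do this with hourly data, sketched below under assumed inputs (illustrative only; not the Karl et al 1986 or NCDC procedure), is to recompute the monthly mean of daily maximum temperatures using 24-hour windows that end at a given local observation hour and difference the result against the midnight-to-midnight value:

```python
# Illustrative sketch (assumed inputs; not the Karl et al. 1986 or NCDC procedure)
# of estimating a time-of-observation bias from hourly data.
import numpy as np

def monthly_mean_tmax(hourly_temps, obs_hour):
    """hourly_temps: array of shape (n_days, 24) of local hourly temperatures for
    one month. Daily maxima are taken over 24-hour windows ending at `obs_hour`
    (obs_hour=24 reproduces the midnight-to-midnight calendar day)."""
    flat = hourly_temps.ravel()
    daily_max = []
    for day in range(1, hourly_temps.shape[0]):   # first day lacks a full window
        end = day * 24 + obs_hour
        daily_max.append(flat[end - 24:end].max())
    return float(np.mean(daily_max))

def tob_estimate(hourly_temps, obs_hour):
    """Bias of an obs_hour-based monthly mean Tmax relative to the midnight-based value."""
    return monthly_mean_tmax(hourly_temps, obs_hour) - monthly_mean_tmax(hourly_temps, 24)
```

Comparing such windows across a month illustrates the mechanism of the bias: an afternoon observation time can double-count a hot afternoon into two successive "days", inflating the monthly mean maximum relative to a midnight reset.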

*************ORIGINAL POST****************

There are two recent papers that raise serious questions on the accuracy of the quantitative  diagnosis of global warming by NCDC, GISS, CRU and BEST based on land surface temperature anomalies.   These papers are a culmination of two areas of uncertainty study that were identified in the paper

Pielke Sr., R.A., C. Davey, D. Niyogi, S. Fall, J. Steinweg-Woods, K. Hubbard, X. Lin, M. Cai, Y.-K. Lim, H. Li, J. Nielsen-Gammon, K. Gallo, R. Hale, R. Mahmood, S. Foster, R.T. McNider, and P. Blanken, 2007: Unresolved issues with   the assessment of multi-decadal global land surface temperature trends. J. Geophys. Res., 112, D24S08, doi:10.1029/2006JD008229.

The Summary

  • One paper [Watts et al 2012] shows that siting quality does matter. A warm bias results in the continental USA when poorly sited locations are used to construct a gridded analysis of land surface temperature anomalies.
  • The other paper [McNider et al 2012] shows that not only does the height at which minimum temperature observations are made matter, but even slight changes in vertical mixing (such as from adding a small shed near the observation site, even in an otherwise pristine location) can increase the measured temperature at the height of the observation. This can occur when there is little or no layer averaged warming.

The Two Papers

Watts et al, 2012: An area and distance weighted analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends [to be submitted to JGR]

McNider, R. T., G.J. Steeneveld, B. Holtslag, R. Pielke Sr, S. Mackaro, A. Pour Biazar, J. T. Walters, U. S. Nair, and J. R. Christy (2012). Response and sensitivity of the nocturnal boundary layer over land to added longwave radiative forcing, J. Geophys. Res., doi:10.1029/2012JD017578, in press. [for the complete paper, click here]

To Provide Context

First, however, to make sure that my perspective on climate is properly understood;

i) There has been global warming over the last several decades. The ocean is the component of the climate system that is best suited for quantifying climate system heat change [Pielke, 2003]; e.g. see the figure below from NOAA’s Upper Ocean Heat Content Anomaly for their estimate of the magnitude of warming since 1993.

ii) The human addition of CO2 to the atmosphere is a first-order climate forcing; e.g. see Pielke et al (2009) and the NOAA plot below.

However, the Watts et al 2012 and McNider et al 2012 papers  refute a major assumption in the CCSP 1.1 report

Temperature Trends in the Lower Atmosphere – Understanding and Reconciling Differences

that variations in surface temperature anomalies are random and thus can be averaged to create area means that are robust measures of the average surface temperature in that region (and, when summed globally, provide an accurate global land average surface temperature anomaly). This assumption of randomness, with no systematic biases, is shown in the two papers to be incorrect.

In the chapter

Lanzante et al 2005: What do observations indicate about the changes of temperatures in the atmosphere and at the surface since the advent of measuring temperatures vertically?

they write that [highlight added]

“Currently, there are three main groups creating global analyses of surface temperature (see Table 3.1), differing in the choice of available data that are utilized as well as the manner in which these data are synthesized.

My Comment: Now there is the addition of Richard Muller’s BEST analysis.

Since the network of surface stations changes over time, it is necessary to assess how well the available observations monitor global or regional temperature. There are three ways in which to make such assessments (Jones, 1995). The first is using “frozen grids” where analysis using only those grid boxes with data present in the sparsest years is used to compare to the full data set results from other years (e.g., Parker et al., 1994). The results generally indicate very small errors on multi-annual timescales (Jones, 1995).”

My Comment: The “frozen grids” combine data from poorly and well-sited locations, and from different heights. A warm bias results. This is a similar type of analysis to that used in BEST.
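For reference, the “frozen grid” comparison described in the quoted text can be sketched as follows (assumed array layout; illustrative only, and without the latitude weighting a real analysis would use):

```python
# Sketch of a "frozen grid" comparison (assumed array layout; no latitude weighting).
import numpy as np

def frozen_grid_comparison(anomalies):
    """anomalies: np.ma.MaskedArray of shape (n_years, n_boxes); masked = missing.
    Returns the all-available-boxes mean series and the series computed only from
    boxes that have data in the sparsest year."""
    available = ~np.ma.getmaskarray(anomalies)
    sparsest_year = available.sum(axis=1).argmin()
    frozen_boxes = available[sparsest_year]

    full_mean = anomalies.mean(axis=1)                     # uses whatever is available each year
    frozen_mean = anomalies[:, frozen_boxes].mean(axis=1)  # "frozen" subset only
    return full_mean, frozen_mean
```

If poorly sited and well-sited stations share a common bias, this comparison will not reveal it, which is the point of my comment above.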

The second technique is sub-sampling a spatially complete field, such as model output, only where in situ observations are available. Again the errors are small (e.g., the standard errors are less than 0.06ºC for the observing period 1880 to 1990; Peterson et al., 1998b).

My Comment:  Again, there is the assumption that no systematic biases exist in the observations. Poorly sited locations are blended with well-sited locations which, based on Watts et al (2012), artificially elevates the sub-sampled trends.

The third technique is comparing optimum averaging, which fills in the spatial field using covariance matrices, eigenfunctions or structure functions, with other analyses. Again, very small differences are found (Smith et al., 2005). The fidelity of the surface temperature record is further supported by work such as Peterson et al. (1999) which found that a rural subset of global land stations had almost the same global trend as the full network and Parker (2004) that found no signs of urban warming over the period covered by this report.

My Comment:  Here is where the assumption that the set of temperature anomalies are random is presented. Watts et al (2012) provide observational evidence, and McNider et al (2012)  present theoretical reasons, why this is an incorrect assumption.

Since the three chosen data sets utilize many of the same raw observations, there is a degree of interdependence. Nevertheless, there are some differences among them as to which observing sites are utilized. An important advantage of surface data is the fact that at any given time there are thousands of thermometers in use that contribute to a global or other large-scale average. Besides the tendency to cancel random errors, the large number of stations also greatly facilitates temporal homogenization since a given station may have several “near-neighbors” for “buddy-checks.” While there are fundamental differences in the methodology used to create the surface data sets, the differing techniques with the same data produce almost the same results (Vose et al., 2005a).

My Comment: Their statement that there is “the tendency to cancel random errors” is shown in the Watts et al 2012 and McNider et al 2012 papers to be incorrect. This means their claim that “the large number of stations also greatly facilitates temporal homogenization since a given station may have several ‘near-neighbors’ for ‘buddy-checks’” rests on erroneously averaging together sites with a warm bias.

Bottom Line Conclusion: The Watts et al 2012 and McNider et al 2012 papers have presented the climate community with evidence of major systematic warm biases in the analysis of multi-decadal land surface temperature anomalies by NCDC, GISS, CRU and BEST. The two papers also help explain the discrepancy between the multi-decadal temperature trends at the surface and in the lower troposphere that was documented in

Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr.,  J.R. Christy, and R.T. McNider, 2009: An alternative explanation for differential temperature trends at the  surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841.

Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr.,  J.R. Christy, and R.T. McNider, 2010: Correction to: “An alternative explanation for differential temperature trends at the  surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841″, J. Geophys. Res.,  115, D1, doi:10.1029/2009JD013655.

I look forward to discussing the conclusions of these two studies in the coming weeks and months.

source of image


Filed under Climate Change Metrics, RA Pielke Sr. Position Statements, Research Papers

Comments On The Game Changer New Paper “An Area And Distance Weighted Analysis Of The Impacts Of Station Exposure On The U.S. Historical Climatology Network Temperatures And Temperature Trends” By Watts Et Al 2012

Congratulations to Anthony Watts! Today, Anthony  has announced his seminal new paper

An area and distance weighted analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends

in his post

PRESS RELEASE

This paper is a game changer, in my view, with respect to the use of the land surface temperature anomalies as part of the diagnosis of global warming. 

The new study extends and improves on the analysis of station siting quality, as it affects multi-decadal surface air temperature trends, that was introduced in

Fall, S., A. Watts, J. Nielsen-Gammon, E. Jones, D. Niyogi, J. Christy, and R.A. Pielke Sr., 2011: Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends. J. Geophys. Res., 116, D14120, doi:10.1029/2010JD015146. Copyright (2011) American Geophysical Union.

and whose results have been used by others; i.e.

Martinez, C.J., J.J. Maleski, and M.F. Miller, 2012: Trends in precipitation and temperature in Florida, USA. Journal of Hydrology, 452–453, 259–281.

Anthony has led what is a critically important assessment of the issue of station quality. Indeed, this type of analysis should have been performed by Tom Karl and Tom Peterson at NCDC, Jim Hansen at GISS and Phil Jones at the University of East Anglia (and Richard Muller).  However, they apparently liked their answers and did not want to test the robustness of their findings.

In direct contradiction to Richard Muller’s BEST study, the new Watts et al 2012 paper has very effectively shown that a substantive warm bias exists even in the mean temperature trends. This type of bias certainly exists throughout the Global Historical Climatology Network, in addition to what Anthony has documented for the U.S. Historical Climatology Network.

Despite what is written on the NCDC website for the USHCN; i.e. that

The U.S. Historical Climatology Network (USHCN, Karl et al. 1990) is a high-quality moderate sized data set of monthly averaged maximum, minimum, and mean temperature and total monthly precipitation developed to assist in the detection of regional climate change.

the USHCN is not yet a robust set of quality controlled data.

Anthony’s new results also undermine the latest claims by Richard Muller of BEST, as not only is Muller extracting data from mostly the same geographic areas as for the NCDC, GISS and CRU analyses, but he is accepting an older  assessment of station siting quality as it affects the trends.

Indeed, since he accepted the Fall et al 2011 study in reporting his latest findings, he now needs to retrench and recompute his trends. Of course, for the non-USHCN sites, he must bin those sites in the same manner as Anthony’s research group did. If he does not, his study should be relegated to a footnote of an out-of-date analysis.

In Richard Muller’s Op-Ed in the New York Times (see The Conversion of a Climate-Change Skeptic), he draws far-reaching conclusions based on his sparse knowledge of the uncertainties in the multi-decadal land surface temperature record. His comments show what occurs when a scientist, with excellent research credentials within their area of scientific expertise, goes outside of their area of knowledge.

His latest BEST claims are, in my view, an embarrassment. The statement that he makes in his op-ed that [highlight added]

My total turnaround, in such a short time, is the result of careful and objective analysis by the Berkeley Earth Surface Temperature project, which I founded with my daughter Elizabeth. Our results show that the average temperature of the earth’s land has risen by two and a half degrees Fahrenheit over the past 250 years, including an increase of one and a half degrees over the most recent 50 years. Moreover, it appears likely that essentially all of this increase results from the human emission of greenhouse gases.

is easily refuted. See, for example,

National Research Council, 2005: Radiative forcing of climate change: Expanding the concept and addressing uncertainties. Committee on Radiative Forcing Effects on Climate Change, Climate Research Committee, Board on Atmospheric Sciences and Climate, Division on Earth and Life Studies, The National Academies Press, Washington, D.C., 208 pp.

Pielke Sr., R., K. Beven, G. Brasseur, J. Calvert, M. Chahine, R. Dickerson, D. Entekhabi, E. Foufoula-Georgiou, H. Gupta, V. Gupta, W. Krajewski, E. Philip Krider, W.K.M. Lau, J. McDonnell, W. Rossow, J. Schaake, J. Smith, S. Sorooshian, and E. Wood, 2009: Climate change: The need to consider human forcings besides greenhouse gases. Eos, Vol. 90, No. 45, 10 November 2009, 413. Copyright (2009) American Geophysical Union.

Pielke Sr., R.A., A. Pitman, D. Niyogi, R. Mahmood, C. McAlpine, F. Hossain, K. Goldewijk, U. Nair, R. Betts, S. Fall, M. Reichstein, P. Kabat, and N. de Noblet-Ducoudré, 2011: Land use/land cover changes and climate: Modeling analysis and observational evidence. WIREs Clim Change, 2, 828–850, doi:10.1002/wcc.144.

Pitman, A.J., F.B. Avila, G. Abramowitz, Y.P. Wang, S.J. Phipps, and N. de Noblet-Ducoudré, 2011: Importance of background climate in determining impact of land-cover change on regional climate. Nature Climate Change, 20 November 2011, doi:10.1038/NCLIMATE1294.

Avila, F.B., A.J. Pitman, M.G. Donat, L.V. Alexander, and G. Abramowitz, 2012: Climate model simulated changes in temperature extremes due to land cover change. J. Geophys. Res., 117, D04108, doi:10.1029/2011JD016382.

Now, with the new Watts et al 2012 paper, Richard Muller’s conclusion regarding the robustness of the BEST analysis is refuted in the same day as his op-ed appeared.

Richard Muller, in his latest analysis, continues to ignore past communications regarding the robustness of his results; e.g., see

Informative News Article by Margot Roosevelt In The Los Angeles Times On Richard Muller’s Testimony To Congress

Is There A Sampling Bias In The BEST Analysis Reported By Richard Muller?

Comments On The Testimony Of Richard Muller At the United States House Of Representatives Committee On Science, Space And Technology

Richard Muller On NPR On April 11 2011 – My Comments

Richard Muller has certainly succeeded as an attention-getter but, unfortunately, he has demonstrated a remarkable lack of knowledge concerning the uncertainties in quantifying the actual long-term surface temperature trend, as well as a seriously incomplete knowledge of the climate system.

The proper way to complete a research study is provided in the Watts et al 2012 article. This article, a culmination of outstanding volunteer support under Anthony’s leadership, shows that Anthony Watts clearly understands the research process in climate science. As a result of his, and his colleagues’, rigorous dedication to the scientific method, he has led a much more robust study than that performed by Richard Muller in the BEST project.

Finally, on Andy Revkin’s well-respected and influential weblog Dot Earth,  in a comment with respect to his post

‘Converted’ Skeptic: Humans Driving Recent Warming

he writes

Muller’s database will hold up as a powerful added tool for assessing land-side climate patterns, but his confidence level on the human element in recent climate change will not. I’d be happy to be proved wrong, mind you.

Andy’s assumption that “Muller’s database will hold up as a powerful added tool for assessing land-side climate patterns” is now shown to be incorrect.

The new Watts et al 2012 paper shows that Muller’s database is really not a significant new addition for assessing land-side climate patterns, at least until further analyses are performed on the siting quality of the stations he uses in the BEST assessment.

Anthony Watts’s new paper shows that a major correction is needed to Muller’s BEST study. Anthony also has shown what dedicated scientists can do with even limited financial support. Despite the large quantities of funds spent on the BEST study, it is Anthony Watts and his team who have actually significantly advanced our understanding of this aspect of the climate system. Well done Anthony!

source of image


Filed under Climate Change Metrics, Research Papers

New Paper “Trends In Precipitation And Temperature In Florida, USA” By Martinez Et Al 2012

source of image: Martinez et al 2012

There is an excellent new paper on temperature and precipitation trends over multi-decadal time periods. The article is

Martinez, C.J., J.J. Maleski, and M.F. Miller, 2012: Trends in precipitation and temperature in Florida, USA. Journal of Hydrology, 452–453, 259–281.

The abstract reads [highlight added]

Annual, seasonal, and monthly trends in precipitation, mean temperature, maximum temperature, minimum temperature, and temperature range were evaluated using stations from the United States Historical Climatology Network (USHCN) for the time periods 1895–2009 and 1970–2009 for the state of Florida. The significance and magnitude of station trends were determined using the non-parametric Mann–Kendall test and Sen’s slope, respectively. The collective, field significance of trends were evaluated using a Monte Carlo permutation procedure. Field significant trends in seasonal precipitation were found in only the June–August and March–May seasons for the 1895–2009 and 1970–2009 time periods, respectively. Significant decreasing trends in monthly precipitation were found in the months of October and May for the 1895–2009 and 1970–2009 time periods, respectively. Field significant trends were found for all temperature variables for both time periods, with the largest number of stations with significant trends occurring in the summer and autumn months. Trends in mean, maximum, and minimum temperature were generally positive with a higher proportion of positive trends in the 1970–2009 period. The spatial coherence of trends in temperature range was generally less compared to other temperature variables, with a larger proportion of stations showing negative trends in the summer and positive trends at other times of the year and more negative trends found in the 1970–2009 period. Significant differences in temperature trends based on the surrounding land use were found for minimum temperature and temperature range in the 1970–2009 period indicating that data homogenization of the USHCN temperature data did not fully remove this influence. The evaluation of trends based on station exposure ratings shows significant differences in temperature variables in both the 1895–2009 and 1970–2009 time periods. Systematic changes in trends can be seen in the 1980s, the period of widespread conversion from liquid-in-glass to electronic measurement, indicating that some of the differences found may be due to uncorrected inhomogeneities. Since notable differences were found between differently rated stations pre-1940, a time which the present-day rating should have little to no influence, attribution of differences based on station rating should be done with caution.
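For readers who want to see the trend-testing machinery described in the abstract in miniature, the sketch below uses synthetic station series (it is not the Martinez et al 2012 code): Kendall’s tau computed against time, which is the basic Mann–Kendall test, the Theil–Sen estimator for Sen’s slope, and a simple year-shuffling permutation test for field significance.

```python
import numpy as np
from scipy.stats import kendalltau, theilslopes

rng = np.random.default_rng(3)

# Synthetic annual series for a handful of hypothetical stations, 1970-2009
years = np.arange(1970, 2010)
n_stations = 12
trends = rng.uniform(0.0, 0.03, n_stations)          # illustrative trends (deg C / yr)
data = trends[:, None] * (years - years[0]) + 0.5 * rng.standard_normal((n_stations, years.size))

# Station-level tests: Mann-Kendall (Kendall's tau vs. time) and Sen's slope
pvals, slopes = [], []
for series in data:
    tau, p = kendalltau(years, series)               # basic MK test, no autocorrelation correction
    slope, _, _, _ = theilslopes(series, years)      # Theil-Sen estimator = Sen's slope
    pvals.append(p)
    slopes.append(slope)

n_sig = sum(p < 0.05 for p in pvals)
print(f"locally significant stations: {n_sig} of {n_stations}")
print(f"median Sen's slope          : {np.median(slopes):.4f} C/yr")

# Field significance by Monte Carlo permutation: shuffle the years jointly for all
# stations (preserving spatial correlation) and count how often at least as many
# stations appear locally significant by chance.
n_perm, exceed = 500, 0
for _ in range(n_perm):
    order = rng.permutation(years.size)
    n_sig_perm = sum(kendalltau(years, series[order])[1] < 0.05 for series in data)
    if n_sig_perm >= n_sig:
        exceed += 1
print(f"field-significance p-value  ~ {exceed / n_perm:.3f}")
```

Applying the same permutation of years to every station in each Monte Carlo iteration preserves the correlation between stations, which is why a permutation test of this kind is preferred over treating the stations as independent tests.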

The Highlights listed by the authors are

► Significant trends were found in precipitation, temperature and temperature range.

► More statistically significant trends were found for temperature than rainfall.

► Station exposure may influence temperature trends.

Among the conclusions is the finding that

Significant differences in most temperature variables were found between good and poorly sited stations based on the classification of USHCN stations using the USCRN rating system.

This study illustrates the need for a detailed assessment of the reasons for observed temperature and precipitation trends, as influenced by station siting and by the land use in the vicinity of the stations. This type of study is what NCDC should be doing, but they have left it to others, such as

Marshall, C.H. Jr., R.A. Pielke Sr., L.T. Steyaert, and D.A. Willard,  2004: The impact of anthropogenic land-cover change on the Florida peninsula  sea breezes and warm season sensible weather. Mon. Wea. Rev., 132, 28-52.

and

Fall, S., A. Watts, J. Nielsen-Gammon, E. Jones, D. Niyogi, J. Christy, and R.A. Pielke Sr., 2011: Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends. J. Geophys. Res., 116, D14120, doi:10.1029/2010JD015146. Copyright (2011) American Geophysical Union.

which Martinez et al 2012 cite.

As they conclude in their paper

This work provides a preliminary analysis of historical trends in the climate record in the state of Florida. While this work did not attempt to fully attribute the cause of observed trends, it provides a first step in future attribution to possible causes including multidecadal climate variability, long term regional temperature trends, and potential errors caused by station siting, regional land use/land cover, and data homogenization.

We need more such detailed analyses in order to further examine the multitude of issues with the USHCN and GHCN analyses of long-term temperature and precipitation trends. Despite what is written on the NCDC website for the USHCN; i.e. that

The U.S. Historical Climatology Network (USHCN, Karl et al. 1990) is a high-quality moderate sized data set of monthly averaged maximum, minimum, and mean temperature and total monthly precipitation developed to assist in the detection of regional climate change.

the data are really not of as high a quality as claimed.


Filed under Climate Change Metrics, Research Papers