In my Public Comment on the 2005 CCSP 1.1 report
Pielke Sr., Roger A., 2005: Public Comment on CCSP Report “Temperature Trends in the Lower Atmosphere: Steps for Understanding and Reconciling Differences”. 88 pp including appendices.
I made the following science statements and asked questions with respect to the multi-decadal land surface temperature analyses (the text below is extracted from my Public Comment):
1. The temperature trend near the surface is not height invariant.
What is the bias in degrees Celsius introduced as a result of aggregating temperature data from different measurement heights, aerodynamic roughnesses, and thermodynamic stability?
In the BEST, NCDC, CRU and GISS analyses, the anomalies are assumed to be height invariant. The paper
McNider, R. T., G.J. Steeneveld, B. Holtslag, R. Pielke Sr, S. Mackaro, A. Pour Biazar, J. T. Walters, U. S. Nair, and J. R. Christy (2012): Response and sensitivity of the nocturnal boundary layer over land to added longwave radiative forcing. J. Geophys. Res., doi:10.1029/2012JD017578, in press.
has, in my view, conclusively shown that multi-decadal trends in minimum temperatures are a function of height near the ground. Since the raw data used in the BEST, NCDC, CRU and GISS analyses often come from a variety of heights above the ground, they have not included this uncertainty in their analyses. In addition, as shown in the McNider et al paper, even slight changes in vertical mixing, with little or no warming or cooling higher in the atmosphere, can result in a substantial contribution to the trend in minimum temperatures. This can occur in otherwise pristine locations, such as Siberia in winter, if a few buildings are constructed nearby, etc.
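To make the aggregation concern concrete, here is a toy Python sketch. The heights, trends, and station mix are invented for illustration and are not taken from McNider et al: if the minimum-temperature trend depends on measurement height, the network-average trend depends on which heights the stations happen to use.

```python
# Hypothetical illustration: suppose (invented numbers) the minimum-temperature
# trend is 0.30 C/decade at 1.5 m above ground but only 0.15 C/decade at 3.0 m.
trend_by_height = {1.5: 0.30, 3.0: 0.15}  # C per decade, hypothetical

def aggregate_trend(station_heights):
    """Mean trend over a network whose stations sit at the given sensor heights."""
    return sum(trend_by_height[h] for h in station_heights) / len(station_heights)

network_a = [1.5] * 10              # all ten stations at 1.5 m
network_b = [1.5] * 5 + [3.0] * 5   # mixed sensor heights

print(aggregate_trend(network_a))                        # 0.3
print(aggregate_trend(network_b))                        # 0.225
print(aggregate_trend(network_a) - aggregate_trend(network_b))
```

Under these invented numbers, two networks sampling the same atmosphere would report different warming rates purely because of their sensor heights, which is the uncertainty the question above asks to be quantified.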
This is why McNider et al is a game changer in terms of the quantitative accuracy of using the land surface temperature trends as a component in the calculation of global warming. I am a co-author on the McNider et al paper.
2. The quantitative uncertainty associated with each step in homogeneity adjustments needs to be provided
What is the quantitative uncertainty, in degrees Celsius, associated with each of the steps in the homogenization of the surface temperature data?
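As a sketch of the kind of accounting being requested, assume (hypothetically) that each homogenization step carries its own independent uncertainty; independent errors would then combine in quadrature. The step names and all numbers below are invented for illustration and are not taken from any published error budget.

```python
import math

# Hypothetical per-step uncertainties (degrees C) for a homogenization chain.
step_uncertainty = {
    "time-of-observation adjustment": 0.05,
    "instrument-change adjustment": 0.04,
    "station-move adjustment": 0.06,
    "gridding/averaging": 0.03,
}

# If the steps were independent, their uncertainties would add in quadrature.
total = math.sqrt(sum(u ** 2 for u in step_uncertainty.values()))
print(round(total, 3))
```

The point of reporting each step separately is that a reader can then see which adjustment dominates the combined uncertainty, rather than being given only a final adjusted value.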
The surface temperature record, which underpins so much of the report, is considered a robust characterization of large-scale averages, despite unresolved issues on its spatial representativeness.
Fall et al (2011), Menne et al (2010), and now Watts et al (2012) have focused on the issue of whether siting quality matters in computing multi-decadal surface temperature trends. The assumption that it does not is fundamental to the BEST, NCDC, CRU and GISS analyses.
I was a co-author on the Fall et al (2011) article and provided suggested edits and references to Anthony Watts for his 2012 paper. In the latter paper, I was not involved in the data analysis, but I am now providing specific recommendations for their further examination of whether the time-of-observation bias changes its conclusions.
For reference, the citations for these papers are listed below with a short summary of what each concluded.
1. Menne, M. J., C. N. Williams Jr., and M. A. Palecki (2010), On the reliability of the U.S. surface temperature record, J. Geophys. Res., 115, D11108, doi:10.1029/2009JD013094.
They reported that
we find no evidence that the CONUS average temperature trends are inflated due to poor station siting.
2. Fall, S., A. Watts, J. Nielsen-Gammon, E. Jones, D. Niyogi, J. Christy, and R.A. Pielke Sr., 2011: Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends. J. Geophys. Res., 116, D14120, doi:10.1029/2010JD015146.
This paper reported
Temperature trend estimates vary according to site classification, with poor siting leading to an overestimate of minimum temperature trends and an underestimate of maximum temperature trends, resulting in particular in a substantial difference in estimates of the diurnal temperature range trends. The opposite‐signed differences of maximum and minimum temperature trends are similar in magnitude, so that the overall mean temperature trends are nearly identical across site classifications.
3. Watts et al, 2012: An area and distance weighted analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends [to be submitted to JGR]
Comparisons demonstrate that NOAA adjustment processes fail to adjust poorly sited stations downward to match the well sited stations, but actually adjust the well sited stations upwards to match the poorly sited stations. Well sited rural stations show a warming nearly three times greater after USHCNv2 adjustments are applied.
Both the Fall et al (2011) and Watts et al (2012) papers are game changers IF they are robust. Fall et al (2011), which used a more complete set of data than Menne et al (2010), did not find a statistically significant siting effect on the mean temperature trend, but did find significant effects on the maximum and minimum temperature trends. Watts et al (2012) also found a significant difference due to siting, even in the mean temperature trends.
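The Fall et al (2011) result, in which opposite-signed siting biases cancel in the mean temperature trend but add in the diurnal temperature range (DTR = Tmax minus Tmin), can be illustrated with invented numbers:

```python
# Invented numbers illustrating the cancellation found by Fall et al (2011).
true_max_trend, true_min_trend = 0.20, 0.20   # C/decade, hypothetical
bias_max, bias_min = -0.05, +0.05             # poor siting: max biased down, min biased up

obs_max = true_max_trend + bias_max           # 0.15
obs_min = true_min_trend + bias_min           # 0.25

mean_trend = (obs_max + obs_min) / 2          # ~0.20, essentially unbiased
dtr_trend = obs_max - obs_min                 # ~-0.10, versus a true DTR trend of 0.00
print(mean_trend, dtr_trend)
```

This is why a near-identical mean trend across site classifications, as Menne et al (2010) reported, does not by itself establish that siting quality is unimportant: the maximum, minimum, and DTR trends can still carry substantial biases.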
However, while the Fall et al (2011) paper has passed peer review, Watts et al (2012) has not. Also, while Watts et al (2012) used a more up-to-date classification of siting quality, it did not assess the impact of the time-of-observation bias (TOB). This work is now in progress, and I am providing suggestions in its assessment.
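For readers unfamiliar with the time-of-observation bias, here is a hypothetical sketch (all temperatures invented) of how a max/min thermometer reset in the late afternoon can carry a single hot afternoon into two observational "days":

```python
# Three days of hourly temperatures (C); one hot afternoon on the middle day.
day0 = [12] * 24                          # mild day
day1 = [10] * 15 + [30, 30, 28] + [15] * 6  # hot spell at 15:00-17:00
day2 = [12] * 24                          # mild day
hourly = day0 + day1 + day2

def daily_max(obs_hour):
    """Max for each 24-h window starting at the reset hour obs_hour."""
    maxima, start = [], obs_hour
    while start + 24 <= len(hourly):
        maxima.append(max(hourly[start:start + 24]))
        start += 24
    return maxima

print(daily_max(0))    # midnight reset: [12, 30, 12] -- hot spell counted once
print(daily_max(17))   # 17:00 reset: [30, 28] -- same afternoon inflates two "days"
```

With a midnight reset the hot afternoon contributes one warm daily maximum; with a late-afternoon reset, the still-warm air just after the reset is credited to the following "day" as well. This is why a systematic shift in observation times across a network can, by itself, alter computed trends, and why the TOB assessment matters for the Watts et al (2012) conclusions.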
The inclusion of the TOB may eliminate the differences in trends in the mean, maximum and minimum temperatures between well- and poorly-sited locations, or it might eliminate the differences in only one or two of these temperature measures. If it eliminates all of them, the Watts et al (2012) study remains a game changer, as it would confirm (from a skeptical source) that the BEST, NCDC, GISS and CRU assumption that siting quality does not matter is robust. This is not the “game changer” we expected, but if that is what the science tells us, we accept it. Coming from the detailed, thorough analysis that Anthony is leading, this would be a definitive result.
However, if one or more of the temperature measures do depend on siting quality, that too is a game changer, as it would confirm a significant bias from the use of poorly sited land surface temperatures in the construction of gridded and larger-scale (global) average surface minimum, maximum and/or mean temperature anomalies. This would be a conclusion that BEST, NCDC, GISS and CRU would undoubtedly test. This is how science should be done. Up to the present, however, the BEST, NCDC, GISS and CRU research groups have incompletely examined these issues.