There is a claim of validation of the BEST data at The Blackboard in the post
Zeke starts off his post with the comment:
“Its not often that I get to surprise Richard Muller. But at the Berkeley Earth meeting the other week he was flabbergasted by the results of a simple comparison between CONUS Berkeley data and NCDC’s published USHCN data”
However, Zeke has overlooked several fundamental issues with this claim, which have been the basis for discussion in the comment section of his post at The Blackboard. I present several of these below on my weblog, as the issues are so significant (and have so far been ignored by both Muller and Zeke) that they are worth bringing to everyone’s attention.
The concerns were succinctly summarized in a comment by Kenneth Fritsch (Comment #96099), who wrote:
“(1) What would Zeke’s comparison of the BEST to the three majors’ station inventory look like if it had been in (a) terms of station months and normalized for quality (b) using BEST weighting and (c) accounted for adding new stations to areas which already have good spatial coverage by again using the BEST spatial coverage weighting?”
My way to frame these questions, as I commented at The Blackboard (Comment #95943), is as follows:
Hi Zeke – There are several issues with the Muller (BEST) approach that need to be resolved. These are discussed in my post
where I reported what I submitted on Climate Etc that
Hi Judy – I encourage you to document how much overlap there is in Muller’s analysis with the locations used by GISS, NCDC and CRU. In our paper Pielke Sr., R.A., C. Davey, D. Niyogi, S. Fall, J. Steinweg-Woods, K. Hubbard, X. Lin, M. Cai, Y.-K. Lim, H. Li, J. Nielsen-Gammon, K. Gallo, R. Hale, R. Mahmood, S. Foster, R.T. McNider, and P. Blanken, 2007: Unresolved issues with the assessment of multi-decadal global land surface temperature trends. J. Geophys. Res., 112, D24S08, doi:10.1029/2006JD008229. we reported that “The raw surface temperature data from which all of the different global surface temperature trend analyses are derived are essentially the same. The best estimate that has been reported is that 90–95% of the raw data in each of the analyses is the same (P. Jones, personal communication, 2003).”
Zeke – Unless Muller pulls from a significantly different set of raw data, it is no surprise that his trends are the same. I realize they use more sites, but i) what percent of overlap is there between the HCN and BEST sites in terms of location, and ii) what is the fraction of the time the two sets use different sites (i.e., summing up those stations that both use as compared to the total time of separate BEST and HCN sites)?
Also, what is the siting quality of the non-HCN sites used by BEST?
Finally, how do the maximum and minimum temperatures compare?
There remain, in my view, substantive unanswered questions. If you have answered these questions already, please refer me to them.
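The two overlap questions above could, in principle, be quantified with a simple matching exercise. The sketch below is purely illustrative: the station coordinates, record periods, and the rounded lat/lon matching rule are all invented assumptions for the example, not the actual HCN or BEST inventories or BEST’s methodology.

```python
def round_key(lat, lon, places=2):
    """Match stations by rounded lat/lon (roughly 1 km at 2 decimal places)."""
    return (round(lat, places), round(lon, places))

def months(y0, y1):
    """Number of months in an inclusive year range."""
    return (y1 - y0 + 1) * 12

# (lat, lon, first_year, last_year) per station -- invented sample data
hcn_stations = [
    (40.01, -105.27, 1900, 2010),
    (35.22, -101.70, 1905, 2010),
    (44.37, -100.35, 1910, 2000),
]
best_stations = [
    (40.01, -105.27, 1900, 2010),   # same location as an HCN station
    (35.22, -101.70, 1950, 2010),   # same location, shorter record
    (33.45, -112.07, 1960, 2010),   # BEST-only site
]

hcn = {round_key(lat, lon): (y0, y1) for lat, lon, y0, y1 in hcn_stations}
best = {round_key(lat, lon): (y0, y1) for lat, lon, y0, y1 in best_stations}

# Question i): fraction of BEST sites that are also HCN sites, by location
shared = set(hcn) & set(best)
location_overlap = len(shared) / len(best)

# Question ii): fraction of all station-months in which both networks
# have the same site reporting (overlap of the two record periods)
shared_months = sum(
    months(max(hcn[k][0], best[k][0]), min(hcn[k][1], best[k][1]))
    for k in shared
)
total_months = sum(months(y0, y1) for _, _, y0, y1 in hcn_stations + best_stations)

print(f"location overlap: {location_overlap:.0%}")
print(f"shared station-month fraction: {shared_months / total_months:.0%}")
```

With real inventories, the matching tolerance and the treatment of station moves would of course need far more care than this rounding rule provides.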
Until these issues are resolved, the quality of Zeke’s analysis and his conclusions remain in limbo. Steve Mosher’s comment (Comment #96066) that
“The only metadata that matters to the algorithm is lat/lon.”
is actually quite an indictment of the BEST analysis and conflicts with almost everything we know about metadata requirements.
Indeed, Anthony Watts’s seminal research on the quality of the USHCN, exemplified in his first paper on this subject,
Fall, S., A. Watts, J. Nielsen-Gammon, E. Jones, D. Niyogi, J. Christy, and R.A. Pielke Sr., 2011: Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends. J. Geophys. Res., 116, D14120, doi:10.1029/2010JD015146. Copyright (2011) American Geophysical Union.
illustrates quite convincingly why station metadata, including photographic documentation, is so essential.