Monthly Archives: April 2012

A New Article “Total Cloud Cover From Satellite Observations And Climate Models” By Probst Et Al 2012

Probst, P., R. Rizzi, E. Tosi, V. Lucarini, and T. Maestri, 2012: Total cloud cover from satellite observations and climate models. Atmospheric Research, in press, doi:10.1016/j.atmosres.2012.01.005 [not yet available online at http://www.sciencedirect.com/science/journal/aip/01698095]

with the abstract [highlight added]

Global and zonal monthly means of cloud cover fraction for total cloudiness (CF) from the ISCCP D2 dataset are compared to the same quantities produced by the 20th century simulations of 21 climate models from the World Climate Research Programme’s (WCRP’s) Coupled Model Intercomparison Project phase 3 (CMIP3). The comparison spans the time frame from January 1984 to December 1999 and the global and zonal averages of CF are studied. It is shown that the global mean of CF for the PCMDI-CMIP3 models, averaged over the whole period, exhibits a considerable variance and generally underestimates the ISCCP value. Large differences among models, and between models and observations, are found in the polar areas, where both models and satellite observations are less reliable, and especially near Antarctica. For this reason the zonal analysis is focused over the 60°S–60°N latitudinal belt, which includes the tropical area and midlatitudes. The two hemispheres are analysed separately to show the variation of the amplitude of the seasonal cycle. Most models underestimate the yearly averaged values of CF over all the analysed areas, whilst they capture, in a qualitatively correct way, the magnitude and the sign of the seasonal cycle over the whole geographical domain, but overestimate the amplitude of the seasonal cycle in the tropical areas and at mid-latitudes, when taken separately. The interannual variability of the yearly averages is underestimated by all models in each area analysed, and also the interannual variability of the amplitude of the seasonal cycle is underestimated, but to a lesser extent. This work shows that the climate models have a heterogeneous behaviour in simulating the CF over different areas of the Globe, with a very wide span both with observed CF and among themselves. Some models agree quite well with the observations in one or more of the metrics employed in this analysis, but not a single model has a statistically significant agreement with the observational datasets on yearly averaged values of CF and on the amplitude of the seasonal cycle over all analysed areas.

The conclusion has the text

In this paper the monthly mean of total cloud cover fraction (CF) is chosen as benchmark for intercomparing and validating climate models included in the PCMDI-CMIP3 project, which have contributed decisively to the preparation of IPCC AR4 (Solomon et al., 2007). As observational counterpart, the satellite observations of clouds constituting the ISCCP D2 dataset for the 1984–1999 time frame, are considered. These data are compared to the corresponding period of the standard 20th century simulations of 21 climate models.

Whilst some models agree quite well with the observations in one or more of the metrics employed in this analysis, not a single model shows a statistically significant agreement with the observational dataset of yearly averaged values of CF and on the amplitude of the seasonal cycle on both tropical and extratropical regions. Our results highlight that the representation of the basic statistical properties of clouds in state-of-the-art climate models is still incomplete, as relevant systematic errors are present for most models in both tropical and extratropical regions. Typically, the climate models underestimate both the global CF and the zonal averaged CF over almost all zonal bands.

The range of model results is very wide since the annual and global averaged CF ranges from about 47% to 73%, with a mean difference with D2 observations of about 7%. The largest differences among models in the zonal averages are found in the tropical region and in the two polar regions, where the relative spread of models’ outputs reaches 0.4 (Tropics), 0.6 (Arctic region) and 0.9 (Antarctica). One must however also consider that it is likely that the error in the CF properties in the observational dataset is largest in the polar regions.

Looking at higher order statistics, it is shown that the interannual variability of global averaged CF is quite strongly underestimated in all models with respect to observations, whilst the interannual variability of the seasonal signal is only slightly underestimated.

The documented differences between the observational dataset and the models constitute a problem since the statistical properties of clouds play a decisive role in the earth climate, by providing a first order contribution to the energy budget at the top of the atmosphere (Solomon et al., 2007) and at the surface. It is therefore a feature that influences many physical processes inside the real atmosphere and inside models. Since most models are tuned to provide a TOA energy balance as close as possible to the measured record, the systematic deviations between a model and the CF observational dataset imply compensating deviations in a range of physical processes occurring almost everywhere in the system. The documented systematic inter-model discrepancies provide an indication of the effect of a diverse mix of physical processes on CF. The authors believe that this is not a healthy situation.

The results presented in this paper provide a natural complement to the analyses shown in Pincus et al. (2008), who discussed the second moments of the statistics of the CF but did not show the results of the mean climatology. Often, the results obtained from different climate models are averaged under the assumption that the model biases will partially compensate, so that a more realistic estimate of the climate properties is achieved by the so-constructed “mean model”. As discussed in, e.g., Lucarini (2008) such a procedure, even if commonly used, is not really well defined in a probabilistic sense, and should be interpreted only in a qualitative sense. Since in our case most of the models have biases of the same sign with respect to observations, the ensemble mean (constructed in our case with a simple un-weighted averaging) does not provide good agreement with observations for the considered statistical estimators, with discrepancies in most cases larger than one standard deviation of the single model outputs.
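The last point, that same-sign biases keep an ensemble mean from matching observations, is easy to see with a toy calculation. The sketch below uses hypothetical cloud fraction values (echoing the 47%-73% range quoted above, not the actual ISCCP D2 or CMIP3 numbers):

```python
import numpy as np

# Hypothetical annual-mean total cloud cover (%): one "observed" value and
# six "models". The numbers echo the 47-73% range quoted above but are NOT
# the actual ISCCP D2 or CMIP3 values.
obs_cf = 66.0
model_cf = np.array([47.0, 55.0, 58.0, 61.0, 64.0, 73.0])

biases = model_cf - obs_cf
ensemble_mean = model_cf.mean()

print("individual biases (%):", biases)             # five of six are negative
print(f"ensemble-mean CF: {ensemble_mean:.1f}%")
print(f"ensemble-mean bias: {ensemble_mean - obs_cf:+.1f}%")
# Because most biases share a sign, the unweighted "mean model" inherits
# roughly the average bias rather than cancelling it.
```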



Filed under Climate Models, Research Papers

New Paper “Impacts Of Wind Farms On Land Surface Temperature” By Zhou Et Al 2012 Documents An Effect Of Local And Regional Landscape Change On Long Term Surface Air Temperature Trends

Update April 30 2012: The authors have prepared

Q&A on “Impacts of Wind Farms on Land Surface Temperature” Published by Nature Climate Change on April 29, 2012

**********ORIGINAL POST**********************

In the papers

Walters, J. T., R. T. McNider, X. Shi, W. B. Norris, and J. R. Christy, 2007: Positive surface temperature feedback in the stable nocturnal boundary layer. Geophys. Res. Lett., 34, L12709, doi:10.1029/2007GL029505.

Pielke Sr., R.A., C. Davey, D. Niyogi, S. Fall, J. Steinweg-Woods, K. Hubbard, X. Lin, M. Cai, Y.-K. Lim, H. Li, J. Nielsen-Gammon, K. Gallo, R. Hale, R. Mahmood, S. Foster, R.T. McNider, and P. Blanken, 2007: Unresolved issues with the assessment of multi-decadal global land surface temperature trends. J. Geophys. Res., 112, D24S08, doi:10.1029/2006JD008229.

Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2009: An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841.

Steeneveld, G.J., A.A.M. Holtslag, R.T. McNider, and R.A. Pielke Sr., 2011: Screen level temperature increase due to higher atmospheric carbon dioxide in calm and windy nights revisited. J. Geophys. Res., 116, D02122, doi:10.1029/2010JD014612.

and in Professor Dick McNider’s powerpoint talk

Response and Sensitivity of the Stable Boundary Layer to Added Downward Long-wave Radiation

and guest weblog

In the Dark of the Night – the Problem with the Diurnal Temperature Range and Climate Change by Richard T. McNider

we present evidence as to why minimum land surface air temperatures should not be used as part of the construction of a global average multi-decadal surface temperature trend.

As Dick writes in his post

Because of the redistribution phenomena and the shallow layer affected, observed minimum temperatures are a very poor measure of the accumulation of heat in the atmosphere.

In the Pielke et al 2007 paper, we wrote [highlight added]

In a series of papers exploring the nonlinear dynamics of the stable boundary layer [McNider et al., 1995a, 1995b; Shi et al., 2005; Walters et al., 2007] it was shown that in certain parameter spaces the nocturnal boundary layer can rapidly transition from a cold light wind solution to a warm windy solution. In these parameter spaces, even slight changes in longwave radiative forcing or changes in surface heat capacity can cause large changes in surface temperatures as the boundary mixing changes. However, these temperature changes reflect changes in the vertical distribution of heat, not in the heat content of the deep atmosphere.

There is a new paper which confirms this finding that changes over time in the vertical redistribution of heat make a significant difference to long term trends in minimum temperatures. This paper is

Zhou, Liming, Yuhong Tian, Somnath Baidya Roy, Chris Thorncroft, Lance F. Bosart, and Yuanlong Hu, 2012: Impacts of wind farms on land surface temperature. Nature Climate Change, doi:10.1038/nclimate1505.

The abstract reads

The wind industry in the United States has experienced a remarkably rapid expansion of capacity in recent years and this fast growth is expected to continue in the future. While converting wind’s kinetic energy into electricity, wind turbines modify surface–atmosphere exchanges and the transfer of energy, momentum, mass and moisture within the atmosphere. These changes, if spatially large enough, may have noticeable impacts on local to regional weather and climate. Here we present observational evidence for such impacts based on analyses of satellite data for the period of 2003–2011 over a region in west-central Texas, where four of the world’s largest wind farms are located. Our results show a significant warming trend of up to 0.72 °C per decade, particularly at night-time, over wind farms relative to nearby non-wind-farm regions. We attribute this warming primarily to wind farms as its spatial pattern and magnitude couples very well with the geographic distribution of wind turbines.

An excerpt from the conclusions reads

Very probably the diurnal and seasonal variations in wind speed and the changes in near-surface ABL conditions owing to wind-farm operations are primarily responsible for the LST changes described above.

While Zhou et al 2012 applies to wind farms, such changes in the vertical mixing and distribution of heat will occur whenever land use change, such as urbanization, deforestation, or irrigation, occurs.



Filed under Climate Change Metrics, Research Papers

Sea Ice Prediction – Update To 2012 – A Correction

Update April 28 2012: In a new post at Tamino titled Let’s do the math!, Ron Broberg asks why I consider single-year sea ice to have inertia. The reason is that, even with a single year of ice, since it has mass, its heat can be expressed in Joules. The mass of sea ice [its coverage and depth] can be expressed in terms of its heat content in Joules [i.e. the Joules required to warm and melt it]. Multi-year sea ice would presumably be thicker, and thus more heat would be required to melt it. However, just to illustrate, IF, for example, the areal coverage and depth returned in a single year to their values of years ago, the use of linear trends over decades would be meaningless, as the “clock” would have been reset. It is the mass of the ice that matters in terms of heat, not its age (although other aspects of the ice are affected, such as its albedo, density, etc).
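To make the Joules point concrete, here is a minimal sketch of that computation, using rounded textbook values for the properties of ice (an illustration of the mass argument, not a sea ice model):

```python
# Heat needed to remove a slab of sea ice depends on its area, thickness,
# and temperature, not on how many seasons it has survived. Rounded
# textbook property values.
RHO_ICE = 917.0      # kg m^-3, density of ice
C_ICE = 2100.0       # J kg^-1 K^-1, specific heat of ice
L_FUSION = 3.34e5    # J kg^-1, latent heat of fusion

def joules_to_melt(area_m2, thickness_m, temp_c):
    """Energy to warm ice from temp_c up to 0 C and then melt it."""
    mass = RHO_ICE * area_m2 * thickness_m
    return mass * (C_ICE * (0.0 - temp_c) + L_FUSION)

# 1 million km^2 of 1 m first-year ice at -10 C:
print(f"{joules_to_melt(1e12, 1.0, -10.0):.2e} J")   # ~3.3e20 J
# The same area of 2 m multi-year ice needs roughly twice the energy:
print(f"{joules_to_melt(1e12, 2.0, -10.0):.2e} J")   # the inertia is in the mass
```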

In looking at the Cryosphere Today analysis below (with my eyeometer), it appeared that the areal coverage had stopped decreasing when averaged from 2006 to the present, although there are large variations between summer and winter. My “eye” still sees the change in the character of the analysis, but I have become convinced by the statistical trend analysis by Grant Foster and dana1981 (and several of the commenters) that this is not a significant change in the long term trend, nor can one even say the decline has stopped (due, presumably, to the large intraannual variations between anomalies in the winter and summer).

Also, Al Rodger at Tamino’s Let’s do the math!, despite the same type of insults as Grant Foster is spewing, has a very informative plot of arctic sea ice. The portion of the time period that I focused on in my original post was since 2006 [using an eyeometer which saw a flattening of the anomalies, as I mentioned above]. Unfortunately, his time series is not up to the present (although this would make little difference in the means that are shown). I have reproduced his excellent figure below.

It does confirm that there was a visual change in the character of the slope in ~2006. From a statistical perspective, as discussed on Tamino and Skeptical Science, I am now convinced it is too short a time period to determine if there is a real change in the character of the arctic sea ice decline. It certainly could just be a short-term hiatus in the decline that does not significantly affect the longer term trend. Only time will tell if this is the correct interpretation.

My final comment concerns why I do not permit comments on my weblog. The reason is straightforward. If you read the posts by Grant Foster and the response by Grant Foster to a comment by michel, you will see the bitterness of his posts and of a number of his commenters. Readers of his weblog (and of Skeptical Science, which often has the same tone) with questions are welcome to e-mail me directly at pielkesr@ciresmail.colorado.edu. I will post, with permission, substantive questions, and my answers, on my weblog.

******************Original post**********************************

I posted an update of the predictions of Arctic sea ice in my post of April 20 2012 titled Sea Ice Prediction – Update To 2012. After an exchange of posts with Tamino (Grant Foster) and Skeptical Science (dana1981), I have become convinced that I made several methodological errors, as well as failed to properly explain my perspective on the analysis I presented. I did not

i) clearly state why I chose a start date of 2006,

ii) describe why I chose a relatively short time to compare the trends,

and

iii) explain why I compared the Cryosphere Today anomalies in Arctic sea ice area with the Vinnikov et al sea ice extent values.

First, my use of their long-term trend values to compare with a trend since 2006 assumes that short-term assessments have value for quantities which involve inertia (mass), such as heat and ice. If the sea ice area were to recover to its original area and thickness (for whatever reason), for example, it does not matter what its long-term trend was. The long-term trend (if there is one) would be reset. I have made this point often with respect to ocean heat content (e.g. see). It also applies to sea ice (although area is only one part of it). I chose 2006 for this reason, to see if the long-term trend provided by Vinnikov et al has been interrupted (as it visually appears to be on the Cryosphere Today website).

Second, I compared anomalies of sea ice extent and of sea ice area, assuming they would be very close to each other. I have been convinced, based on the analysis by Grant Foster and dana1981, that this is not correct [although see http://goo.gl/5rX5O from Zach on Tamino]. This was my more serious mistake.
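The distinction between the two metrics is worth spelling out, and is sketched below using the conventional definitions (extent counts the full area of every grid cell with at least 15% ice concentration; area weights each cell by its concentration). The tiny grid is hypothetical; real products use satellite concentration fields on polar grids.

```python
import numpy as np

# Why sea ice *extent* and sea ice *area* differ.
conc = np.array([0.95, 0.80, 0.40, 0.20, 0.10, 0.00])  # ice concentration per cell
cell_km2 = np.full_like(conc, 625.0)                    # e.g. 25 km x 25 km cells

extent = cell_km2[conc >= 0.15].sum()   # whole cell counted if >= 15% ice
area = (conc * cell_km2).sum()          # concentration-weighted

print(f"extent: {extent:.0f} km^2")     # 2500
print(f"area:   {area:.0f} km^2")       # ~1531
# Diffuse, low-concentration ice raises extent much more than area, so
# anomalies of the two metrics need not track each other closely.
```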

What they report on Tamino and Skeptical Science is that “the decline in Arctic sea ice extent has actually occurred much faster than climate models were predicting 13 years ago,” and they include the figure (from Skeptical Science)

I have no reason to question their finding with respect to sea ice extent. I am requesting, since they appear to have the statistical analysis program and data readily available, that Grant Foster and/or dana1981

i) perform the same analysis for sea ice area that they have done for sea ice extent

and

ii) perform the analysis of insolation-weighted sea ice trends; e.g. see

Pielke Sr., R.A., G.E. Liston, and A. Robock, 2000: Insolation-weighted assessment of Northern Hemisphere snow-cover and sea-ice variability. Geophys. Res. Lett., 27, 3061-3064.

and

Pielke Sr., R.A., G.E. Liston, W.L. Chapman, and D.A. Robinson, 2004: Actual and insolation-weighted Northern Hemisphere snow cover and sea ice – 1974-2002. Climate Dynamics, 22, 591-595, doi:10.1007/s00382-004-0401-5.
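For readers unfamiliar with these two papers, the sketch below conveys the basic idea of insolation weighting: ice anomalies at latitudes and months receiving more sunlight count for more in an albedo-relevant trend metric. The numbers are placeholders, not the climatological TOA insolation or ice data actually used in the papers.

```python
import numpy as np

# Insolation-weighted vs. unweighted ice anomaly for one hypothetical month.
lat_bands = ["60-70N", "70-80N", "80-90N"]
ice_anom_mkm2 = np.array([-0.30, -0.20, -0.05])   # anomaly per band, million km^2
toa_insolation = np.array([450.0, 420.0, 400.0])  # W m^-2, placeholder values

weights = toa_insolation / toa_insolation.sum()
weighted = (weights * ice_anom_mkm2).sum()

print(f"unweighted mean anomaly:  {ice_anom_mkm2.mean():+.3f} million km^2")
print(f"insolation-weighted mean: {weighted:+.3f} million km^2")
# Repeating this month by month and fitting a trend emphasizes sunlit-season
# ice loss, which is what matters most for the albedo feedback.
```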

If these two metrics also show that the model predictions are too conservative, then the Skeptical Science conclusion that “This rapid rate is precisely why the Arctic sea ice decline is described as a death spiral”

will have more evidence. Regardless, they would be adding to the discussion. I urge them to also present these same analyses (area anomalies and insolation-weighted) for the Antarctic sea ice.



Filed under Climate Change Metrics

Response By John Christy To A Comment Regarding The Lower Tropospheric Temperature Data At Climate Abyss

John Nielsen-Gammon has an interesting post at Climate Abyss titled

About the Lack of Warming…

Using surface temperature data, John concludes that

All else being equal, an El Niño year will average about 0.2 C warmer globally than a La Niña year.  Each new La Niña year will be about as warm as an El Niño year 13 years prior.

In response to his post, I wrote the following

Hi John – I recommend you also perform this analysis on the UAH MSU and RSS MSU lower tropospheric temperatures and on the upper ocean heat content paper. As we have shown in

Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2009: An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841 https://pielkeclimatesci.files.wordpress.com/2009/11/r-345.pdf

and

Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2010: Correction to: “An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841″, J. Geophys. Res., 115, D1, doi:10.1029/2009JD013655 https://pielkeclimatesci.files.wordpress.com/2010/03/r-345a.pdf

there is a growing divergence between the surface analyses and the lower tropospheric temperature anomaly data. We attribute a significant part of the warm surface temperature bias to the land minimum temperatures.

[Roger- I’ll run the numbers in a couple of days when I’m back in town.  – John N-G]

One of the commenters responded with

Roger Pielke Sr. “there is a growing divergence between the surface analyses and the lower tropospheric temperature anomaly data. We attribute a significant part of the warm surface temperature bias to the land minimum temperatures”

I will reserve my judgement until the NOAA analysis of the lower troposphere is released (they’re working on it). They’ve identified some biases in the UAH and RSS analyses which have been shown to influence the trends at other altitudes in the atmosphere, meaning that it is probably going to influence the synthetic lower tropospheric altitude.

I sent the above comment to John Christy, who replied with the information below [I also sent to Climate Abyss to post].

We examined the NOAA (STAR) analysis and there is a noticeable problem with their method (attached).  In every comparison with independent data, STAR was the hottest for MT (Table 4) and clearly had more error than UAH for both US and Australian station-by-station comparisons (Table 2 and Table 3).  In the latest STAR TMT, there is also a spurious jump on 1 Jan 2001 that no other dataset has – a processing glitch evidently.

Global LT Trends 1979-2011 (°C/decade)

+0.136 UAH
+0.139 RSS
+0.121 ERA-I (Reanalysis)
+0.169 HadAT2
+0.129 RAOBCORE
+0.146 RICH
+0.165 RATPAC

That’s a pretty tight grouping (+/- 0.025 from mean) – and if you consider the lack of global coverage on HadAT2 and RATPAC, giving those two a bit more error, you get an even tighter grouping. So, your inquisitor evidently is not aware of all of this evidence.

STAR’s current TMT trend (1979-2011) is +0.13 C/decade. To produce a lower tropospheric TLT value consistent with the fact that the upper part of TMT is cooling (stratosphere) means the STAR TLT must be warmer than their TMT trend by around +0.07 or so, giving STAR a TLT trend of about +0.20 C/decade – well outside the range of independent observations.
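John’s arithmetic is easy to verify from the numbers quoted above (a quick check, nothing more):

```python
import numpy as np

# Trend values as listed above (deg C per decade, 1979-2011).
trends = {"UAH": 0.136, "RSS": 0.139, "ERA-I": 0.121, "HadAT2": 0.169,
          "RAOBCORE": 0.129, "RICH": 0.146, "RATPAC": 0.165}

vals = np.array(list(trends.values()))
mean = vals.mean()
print(f"mean: {mean:.3f} C/decade")                                # ~0.144
print(f"spread: {vals.min() - mean:+.3f} / {vals.max() - mean:+.3f}")
# The min/max sit within about +/- 0.025 of the mean, as stated.

star_tmt = 0.13      # STAR mid-troposphere trend
tlt_offset = 0.07    # quoted approximate TLT-minus-TMT adjustment
print(f"implied STAR TLT: ~{star_tmt + tlt_offset:.2f} C/decade")  # ~0.20
```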

The attachment John refers to is

Christy, J.R., R. W. Spencer, and W. B. Norris, 2011: The role of remote sensing in monitoring global bulk tropospheric temperatures. International Journal of Remote Sensing, Vol. 32, No. 3, February 2011, 1–15.

The abstract reads

The IPCC AR4 (2007) discussed bulk tropospheric temperatures as an indicator of atmospheric energy content. Here, we examine the latest publications about, and versions of, the AR4 data sets. The metric studied is the trend that represents the average rate of atmospheric energy accumulation that relates to increased greenhouse gas forcing. For temperatures from microwave instruments, UAHuntsville’s indicates the lowest trend for 1979–2009 and NOAA-STAR’s the highest, being slightly higher than Remote Sensing Systems’ (RSS). Updated analyses using radiosonde data suggest RSS and STAR experienced spurious warming after the mid-1990s. When satellite and radiosonde data sets are considered, the global trends for 1979–2009 of the lower and mid-troposphere are +0.15 and +0.06 °C decade⁻¹ respectively. Error ranges of these estimates, if we do not apply information that indicates some data sets contain noticeable trend problems, are at least ±0.05 °C decade⁻¹, which needs reduction to characterize forcing and response in the climate system accurately.



Filed under Climate Change Metrics, Research Papers

Candid Statement On The Shortcomings Of Multi-Decadal Climate Model Predictions By Early Career Scientists At An NCAR Workshop

There was a candid statement about climate models made at the Advanced Study Program/Early Career Scientist Assembly Workshop on Regional Climate Issues in Developing Countries, held in Boulder, Colorado on 19–22 October 2011. The Workshop is reported on in the April 3 2012 issue of EOS on page 145.

The relevant text reads [highlight added]

One recurring issue throughout the workshop was that of managing complex impact assessments with a large range of results from global and regional models; variations between models are often not fully understood, accounted for, and/or communicated. Also problematic is the discrepancy between the spatial and temporal scales on which regional climate projections are made (tens of kilometers and ~30–100 years) and the scales that are of primary interest to many communities in developing countries (kilometers and 0–10 years) that are presently affected by climate change.

My Comment: I agree with this comment, except I would delete “change” in the last sentence. Climate is always changing, and the use of the word “change” itself miscommunicates the actual threats faced by developing countries even with the climate they have seen in the past.

The EOS article continues with

Approaches for addressing uncertainty and scaling issues might include cost-effective ensemble dynamical-statistical approaches and/or coupling regional modeling efforts to better meet specific objectives (e.g., improved integration of hydrologic models). Facilitating effective “end-to-end” communication was identified as a critical research component to increase awareness of the wider challenges and opportunities facing scientists and end users alike. Such end-to-end communication would also help to ensure that research addresses the particular needs of the communities that are its focus.

My Comment:

There is a critically important requirement, however, that is left off of the approaches. Before modeling results are even used, they must first show skill at predicting changes in climate statistics on the spatial and temporal scales needed by the impacts communities. As we present in our paper

Pielke Sr., R.A., and R.L. Wilby, 2012: Regional climate downscaling – what’s the point? Eos Forum, 93, No. 5, 52-53, doi:10.1029/2012EO050008.

no regional predictive skill (of changes in climate statistics) has yet been shown on yearly, decadal or multi-decadal time scales.

Their “end-to-end” communication, however, is appropriately the focus, as we emphasize in our article

Pielke Sr., R.A., R. Wilby, D. Niyogi, F. Hossain, K. Dairuku, J. Adegoke, G. Kallos, T. Seastedt, and K. Suding, 2012: Dealing with complexity and extreme events using a bottom-up, resource-based vulnerability perspective. AGU Monograph on Complexity and Extreme Events in Geosciences, in press.

As we wrote in our abstract

We discuss the adoption of a bottom-up, resource–based vulnerability approach in evaluating the effect of climate and other environmental and societal threats to societally critical resources. This vulnerability concept requires the determination of the major threats to local and regional water, food, energy, human health, and ecosystem function resources from extreme events including climate, but also from other social and environmental issues. After these threats are identified for each resource, then the relative risks can be compared with other risks in order to adopt optimal preferred mitigation/adaptation strategies.

This is a more inclusive way of assessing risks, including those from climate variability and climate change, than using the outcome vulnerability approach adopted by the IPCC. A contextual vulnerability assessment, using the bottom-up, resource-based framework, is a more inclusive approach for policymakers to adopt effective mitigation and adaptation methodologies to deal with the complexity of the spectrum of social and environmental extreme events that will occur in the coming decades, as the range of threats is assessed, beyond just the focus on CO2 and a few other greenhouse gases as emphasized in the IPCC assessments.

Hopefully, the attendees of the Workshop will be made aware of our bottom-up, resource-based approach for developing robust effective responses to environmental threats in their countries.



Filed under Vulnerability Paradigm

Comment On “Levitus Data On Ocean Forcing Confirms Skeptics, Falsifies IPCC” At Niche Modeling

There is an interesting post on the significance of the Levitus et al 2012 paper

Levitus, S., et al. (2012), World ocean heat content and thermosteric sea level change (0-2000 m), 1955-2010, Geophys. Res. Lett., doi:10.1029/2012GL051106, in press

that I posted on in

Comment On Ocean Heat Content “World Ocean Heat Content And Thermosteric Sea Level Change (0-2000), 1955-2010″ By Levitus Et Al 2012

This new post is on Niche Modeling and is titled

Levitus data on ocean forcing confirms skeptics, falsifies IPCC 

While the lower diagnosed value of radiative imbalance raises questions on the skill of the models (and the IPCC’s reliance on them), it is important to distinguish between the three terms radiative imbalance, radiative forcing, and radiative feedback. In terms of global averages, their relationship can be written as

global radiative imbalance = global radiative forcing + global radiative feedback.

The Levitus et al 2012 data provides a measure of the global average radiative imbalance for 1955-2010, which is ~+0.3 Watts per meter squared.

If one accepts the IPCC radiative forcing values of anthropogenic radiative forcings of +1.6 (+0.6 to +2.4) Watts per meter squared and/or the solar radiative forcing of +0.12 (+0.06 to +0.30) Watts per meter squared as correct, what the Levitus et al data shows is that the global radiative feedback is negative (and this necessarily would include the water vapor, sea ice etc radiative feedbacks). That is

global radiative imbalance < global radiative forcing.
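Inserting the central values quoted above into the relationship makes this explicit (a back-of-the-envelope check that ignores the quoted uncertainty ranges):

global radiative feedback = global radiative imbalance − global radiative forcing ≈ +0.3 − (+1.6 + 0.12) ≈ −1.4 Watts per meter squared,

i.e. a negative feedback, if the IPCC forcing values and the Levitus et al imbalance are both taken at face value.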

Alternatively, the IPCC anthropogenic radiative forcings  and/or the solar radiative forcing could be in error.

Either way, the 2007 IPCC WG1 report has a serious error in it.



Filed under Climate Change Metrics

The Overstatement Of Regional Climate Prediction Capability

Today I am posting on yet another model study that illustrates the lack of skill of regional models in simulating climate on multi-decadal time scales, as well as how the findings are being misinterpreted. The paper is

Hwang, Syewoon, Wendy Graham, José L. Hernández, Chris Martinez, James W. Jones, Alison Adams, 2011: Quantitative Spatiotemporal Evaluation of Dynamically Downscaled MM5 Precipitation Predictions over the Tampa Bay Region, Florida. J. Hydrometeor, 12, 1447–1464.

The abstract reads [highlight added]

This research quantitatively evaluated the ability of the fifth-generation Pennsylvania State University–National Center for Atmospheric Research Mesoscale Model (MM5) to reproduce observed spatiotemporal variability of precipitation in the Tampa Bay region over the 1986–2008 period. Raw MM5 model results were positively biased; therefore, the raw model precipitation outputs were bias corrected at 53 long-term precipitation stations in the region using the cumulative distribution function (CDF) mapping approach. CDF mapping effectively removed the bias in the mean daily, monthly, and annual precipitation totals and improved the RMSE of these rainfall totals. Observed daily precipitation transition probabilities were also well predicted by the bias-corrected MM5 results. Nevertheless, significant error remained in predicting specific daily, monthly, and annual total time series. After bias correction, MM5 successfully reproduced seasonal geostatistical precipitation patterns, with higher spatial variance of daily precipitation in the wet season and lower spatial variance of daily precipitation in the dry season. Bias-corrected daily precipitation fields were kriged over the study area to produce spatiotemporally distributed precipitation fields over the dense grids needed to drive hydrologic models in the Tampa Bay region. Cross validation at the 53 long-term precipitation gauges showed that kriging reproduced observed rainfall with average RMSEs lower than the RMSEs of individually bias-corrected point predictions. Results indicate that although significant error remains in predicting actual daily precipitation at rain gauges, kriging the bias-corrected MM5 predictions over a hydrologic model grid produces distributed precipitation fields with sufficient realism in the daily, seasonal, and interannual patterns to be useful for multidecadal water resource planning in the Tampa Bay region.
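For readers unfamiliar with the bias-correction step described in the abstract, below is a minimal sketch of quantile (CDF) mapping in its generic form, trained on synthetic data; the paper’s actual implementation may differ in detail.

```python
import numpy as np

# Quantile (CDF) mapping: replace each raw model value with the observed
# value occupying the same quantile of the training-period distribution.
# Synthetic data for illustration, not the Tampa Bay records.
rng = np.random.default_rng(0)
obs_train = rng.gamma(2.0, 5.0, size=5000)     # "observed" daily rain, mm
model_train = rng.gamma(2.0, 7.0, size=5000)   # wet-biased "model" rain

def cdf_map(raw, model_train, obs_train):
    # Quantile of each raw value within the model's training distribution...
    q = np.searchsorted(np.sort(model_train), raw) / len(model_train)
    # ...mapped onto the observed distribution.
    return np.quantile(obs_train, np.clip(q, 0.0, 1.0))

raw = rng.gamma(2.0, 7.0, size=1000)
corrected = cdf_map(raw, model_train, obs_train)
print(f"raw mean: {raw.mean():.1f}  corrected: {corrected.mean():.1f}  "
      f"obs: {obs_train.mean():.1f}")
# The mapping removes the distributional bias, but (the key caveat in the
# comments below) it can only be trained where observations already exist.
```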

I have the following substantive comments on this paper with respect to what can be inferred about model skill on multi-decadal time periods:

1. The raw data is biased. It can be adjusted towards the real world observations, but only when that observed data is available. This real world observed data is obviously not available for the coming decades.

2. The study does not examine skill in the prediction of changes in multi-decadal regional climate statistics.

Thus, while the authors claim that the results are “useful for multidecadal water resource planning in the Tampa Bay region”, this planning can be directly done with the original real world data. The model downscaling, other than documenting systematic biases, does not provide added information beyond what is already available from observed data and reanalyses without the model.



Filed under Climate Models, Research Papers

The Misrepresentation Of Climate Science

The Boulder Daily Camera had an article on April 20 2012 that illustrates the convoluted ways individuals seek to fit real world observations into the IPCC worldview. The article, by Breanna Draxler, is

On Niwot Ridge west of Boulder, contrasting climates 5 miles apart: Scientists find warming at 10,000 feet, cooling at 12,700 feet

It reads [with highlighting and my comments]:

Scientists doing climate research on Niwot Ridge in the mountains west of Boulder found a surprising trend: At 10,000 feet of elevation, conditions have become warmer and drier over the past few decades, but at 12,700 feet, conditions are actually cooler and wetter.

“We know the western U.S. has been warming. It’s concentrated in the spring at the forest site. But we see just the opposite at the high elevation site above the tree line,” said Mark Williams, the study’s principal investigator.

My Comment: The high elevation data actually shows a cooling, and the forest site is only warming in the spring. This should be a red flag that there is more going on than a western U.S. warming, or even whether this warming is actually occurring at higher elevation sites. Indeed, even lower elevation sites are suspect; e.g. see

Uncertainty in Utah Hydrologic Data: Part 1 On The Snow Data Set by Randall P. Julander

Uncertainty in Utah Hydrologic Data: Part 2 On Streamflow Data by Randall P. Julander

Uncertainty in Utah: Part 3 on The Hydrologic Model Data Set by Randall P. Julander

Williams said snowfall at the higher station has doubled in the last 60 years — the period for which scientists have been collecting precipitation and temperature data at Niwot Ridge. Temperatures, too, have dropped significantly during the winter months, from November through March.

The scientists attribute the cooling to a small-scale climatic balancing act. The warmer temperatures at lower elevations cause snow to evaporate. This moisture in the air is then drawn upward and to the west. When it reaches the continental divide, the moisture falls as snow.

My Comment: First, the amount of water vapor from lower elevation evaporation in the wintertime would be a very small contribution to the precipitation at higher elevation. This snowfall occurs at Niwot Ridge, and elsewhere in the Rockies, as Pacific moisture is advected eastward over the western U.S. Indeed, higher snowfall indicates a storm track that has resulted in a higher frequency of winter storms. This also would explain the lower temperatures from November to March. In the 2011-2012 season, snowfall was quite a bit less since the storm track was far to the north. In the 2010-2011 season the storm track was persistently further south and large amounts of snow fell.

The additional snowfall boosts the albedo effect, which reflects sunlight back into the atmosphere and causes the localized cooling. The higher station is above treeline, so the white snow reflects more sunlight than the tree-covered location of the lower station.

My Comment: There is snow on the ground at Niwot in the winter! Moreover, the albedo effect is a trivial effect at Niwot Ridge in comparison to the advection of cold air at this level of the atmosphere. Indeed, Niwot Ridge is an excellent location to obtain regionally representative long-term temperature trend measurements, as the frequent strong airflow permits a sampling of the larger scale atmosphere.

The findings are particularly surprising since the two research stations are only five miles apart, said Williams, who is a fellow at the Institute of Arctic and Alpine Research and a geography professor at the University of Colorado.

The localized cooling happens amidst a larger warming trend in the West.

My Comment: A scientifically robust approach would be to also look at whether the warming at the forested site is due to local effects. In addition, the lower tropospheric temperature anomalies for the same time period and geographic location should be compared to these surface sites.

Bill Bowman is director of the Mountain Research Station, which runs the climate program. He has been working on Niwot Ridge for decades and said the warming picked up steam in the 1990s due to human-emitted greenhouse gases.

“Across the western U.S. there is a very clear trend in warming, and in earlier snowmelt, and in greater loss of water due to evaporation at the surface,” Bowman said. “The trend will continue almost certainly because humans are continuing to emit greenhouse gases unabated.”

As a result of these emissions, Williams suspects that the cool temperatures and increased snowfall are only temporary.

“My guess is we put enough energy in the atmosphere, the warming trend will move up above the treeline,” Williams said.

Niwot Ridge is the highest of the 26 long-term ecological research sites around the world. The site 25 miles west of Boulder is thousands of acres in size and includes a range of ecosystems, such as subalpine forest, tundra, talus slopes, glacial lakes and wetlands. Continued warming could wreak havoc on these natural systems and the local communities by increasing the risk of wildfires, reducing municipal water supplies, or triggering another mountain pine beetle outbreak.

“Climate is changing and we end up with climate weirding — unpredictable climate extremes,” Williams said.

My Comment: Here is the pervasive assumption that changes in climate statistics are due to the emissions of greenhouse gases. This is the IPCC viewpoint that is being repeated (along with the new catch phrase “climate weirding”). Williams’s statement that he is guessing places the confidence we should have in the article’s claims in its proper place.

He used two recent examples to demonstrate the volatility of current climatic conditions and the unpredictability of future conditions. Last year was the latest recorded date for snowmelt at the research site, and this year was the earliest on record, he said. The difference between the two was 3.5 months.

“You normally don’t see that big of a difference on back-to-back years,” Williams said.

Likewise, on a shorter time-scale, this February was the snowiest on record, while March was the driest.

My Comment: These large excursions in climate are actually quite typical. Indeed, rather than cherry picking that cooling is local and warming is global, such studies should recognize the overwhelming dominance of regional atmospheric/ocean circulations in causing these anomalies.

“It’s beyond what we can predict in terms of climate change at this point,” Williams said.

The study was published in one of a series of six articles in the April issue of BioScience.

My Comment: This first sentence is finally a correct statement. However, Williams and Bowman should have recognized that it also applies to their claims of attribution, since the causes of these anomalies are likewise not well understood.



Filed under Climate Science Misconceptions, Climate Science Reporting

Comments On The Paper “Skillful Predictions Of Decadal Trends In Global Mean Surface Temperature” By Fyfe Et Al 2012

Jos de Laat of KNMI alerted us to the paper

Fyfe, J. C., W. J. Merryfield, V. Kharin, G. J. Boer, W.-S. Lee, and K. von Salzen (2011), Skillful predictions of decadal trends in global mean surface temperature, Geophys. Res. Lett., 38, L22801, doi:10.1029/2011GL049508

which is an example of the overstatement of model predictive skill.

The abstract reads

We compare observed decadal trends in global mean surface temperature with those predicted using a modelling system that encompasses observed initial condition information, externally forced response (due to anthropogenic greenhouse gases and aerosol precursors), and internally generated variability. We consider retrospective decadal forecasts for nine cases, initiated at five year intervals, with the first beginning in 1961 and the last in 2001. Forecast ensembles of size thirty are generated from differing but similar initial conditions. We concentrate on the trends that remain after removing the following natural signals in observations and hindcasts: dynamically induced atmospheric variability, El Niño-Southern Oscillation (ENSO), and the effects of explosive volcanic eruptions. We show that ensemble mean errors in the decadal trend hindcasts are smaller than in a parallel set of uninitialized free running climate simulations. The ENSO signal, which is skillfully predicted out to a year or so, has little impact on our decadal trend predictions, and our modelling system possesses skill, independent of ENSO, in predicting decadal trends in global mean surface temperature.

There are key admissions in the article which should clearly alert a reader to be skeptical about this observational/model comparison. A most revealing comment is that [highlight added]

Since observation-based and model-based climates tend to differ, hindcasts which are initialized to be near the observations tend to drift towards the model climate. For short term hindcasts this is accounted for by removing the mean bias. However, for longer term decadal hindcasts a linear trend correction may be required if the model does not reproduce long-term trends. For this reason, we correct for systematic long-term trend biases following a procedure detailed in the auxiliary material. We process the three sets of hindcasts using the different initialization techniques separately, but combine the predicted anomalies into one thirty-member ensemble in the following analysis. The ten-member ensemble of freecasts are also trend corrected in this way.

The authors define “freecasts” as

These are climate simulations (referred to here as “freecasts”) which evolve freely based on the specified external forcing.

This is quite an amazing admission. They write that if the “model does not reproduce long-term trends” then “a linear trend correction may be required” and “we correct for systematic long-term trend biases.” The model results are tuned. They are not “freecasts”.
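To make concrete what a linear trend correction does, here is a generic sketch of the simplest form of such an adjustment, with synthetic numbers; the actual Fyfe et al procedure, detailed in their auxiliary material, is more elaborate.

```python
import numpy as np

# Generic linear trend-bias correction of a decadal hindcast: estimate the
# systematic trend error from past hindcasts and subtract it as a function
# of forecast lead time. Synthetic numbers, not the Fyfe et al system.
lead_years = np.arange(10)

obs_trend = 0.17 / 10    # deg C per year, "observed" trend
model_trend = 0.25 / 10  # the model warms too fast

raw_hindcast = 0.1 + model_trend * lead_years        # uncorrected anomalies
trend_bias = model_trend - obs_trend                 # estimated from history
corrected = raw_hindcast - trend_bias * lead_years   # bias removed by lead

print("raw trend:      ", 10 * np.polyfit(lead_years, raw_hindcast, 1)[0])  # 0.25/decade
print("corrected trend:", 10 * np.polyfit(lead_years, corrected, 1)[0])     # 0.17/decade
# After correction the hindcast trend matches observations by construction,
# which is the crux of the objection above: trend "skill" is partly imposed.
```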

The authors also fail to compare their results with other temperature data sets such as the lower tropospheric temperature anomalies and trends. As we show in our papers

Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2009: An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841.

Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2010: Correction to “An alternative explanation for differential temperature trends at the surface and in the lower troposphere”. J. Geophys. Res., 115, D1, doi:10.1029/2009JD013655.

there is a warm bias in the surface temperature data. The authors chose to ignore this finding in their study. Instead they tuned their model results to the observed temperature data.

As an editor, at a minimum I would have insisted on this comparison before accepting this paper. As written, however, the Fyfe et al article adds no robust evidence that the models have skill at predicting temperature changes over decadal and longer time periods. Indeed, it provides further evidence of the lack of skill of multi-decadal climate model predictions.



Filed under Climate Change Metrics, Research Papers

An Example Of The Misstatement of Fact On Climate Change

Alvin Stone of the ARC Centre of Excellence for Climate System Science at the University of New South Wales [h/t to Marc Hendrickx] made the following comment [highlight added] in the discussion at The Conversation in the post

If you want to roll the climate dice, you should know the odds

Okay, let’s talk spin then. It is very well established that the great majority of climate scientists (in the 90% band up or down a couple of notches depending on the survey) agree that climate change is occurring and that anthropogenic carbon dioxide is the main culprit.

Many were working in this field before it was fashionable and have pretty much been of this understanding for a few decades.

Now, a tiny minority doubt this case but none has published a single paper that undermines the fundamental science.

So, your arguments suggest that you are right and that 95-97% of climate scientists are either knowingly misrepresenting their position for whatever reason or are too stupid to understand the real science.

So, how would you classify the climate scientists who support the anthropogenic emissions hypothesis? In your eyes are they liars, easily misled or just plain stupid?

This statement by Mr. Stone is not an isolated example, unfortunately, but is a view that is erroneously communicated. This is why I and a group of Fellows of the American Geophysical Union published the following article

Pielke Sr., R., K. Beven, G. Brasseur, J. Calvert, M. Chahine, R. Dickerson, D. Entekhabi, E. Foufoula-Georgiou, H. Gupta, V. Gupta, W. Krajewski, E. Philip Krider, W. K.M. Lau, J. McDonnell, W. Rossow, J. Schaake, J. Smith, S. Sorooshian, and E. Wood, 2009: Climate change: The need to consider human forcings besides greenhouse gases. Eos, Vol. 90, No. 45, 10 November 2009, 413. Copyright (2009) American Geophysical Union.

Or for an even wider demonstration that the comment by Mr. Stone misrepresents the actual understanding of the science, see

National Research Council, 2005: Radiative forcing of climate change: Expanding the concept and addressing uncertainties. Committee on Radiative Forcing Effects on Climate Change, Climate Research Committee, Board on Atmospheric Sciences and Climate, Division on Earth and Life Studies, The National Academies Press, Washington, D.C., 208 pp.

where it is written

“…..the traditional global mean TOA radiative forcing concept has some important limitations, which have come increasingly to light over the past decade. The concept is inadequate for some forcing agents, such as absorbing aerosols and land-use changes, that may have regional climate impacts much greater than would be predicted from TOA radiative forcing. Also, it diagnoses only one measure of climate change—global mean surface temperature response—while offering little information on regional climate change or precipitation. These limitations can be addressed by expanding the radiative forcing concept and through the introduction of additional forcing metrics. In particular, the concept needs to be extended to account for (1) the vertical structure of radiative forcing, (2) regional variability in radiative forcing, and (3) nonradiative forcing.”

Anthropogenic carbon dioxide is NOT the main [dominant] culprit affecting changes in climate. It is just one of a diverse set of human and natural climate forcings.



Filed under Climate Change Forcings & Feedbacks, Climate Science Misconceptions