Monthly Archives: April 2012

A New Article “Total Cloud Cover From Satellite Observations And Climate Models” By Probst Et Al 2012

Probst, P., R. Rizzi, E. Tosi, V. Lucarini, T. Maestri, 2012: Total cloud cover from satellite observations and climate models. Atmospheric Research, in press, doi:10.1016/j.atmosres.2012.01.005 [not yet available online at http://www.sciencedirect.com/science/journal/aip/01698095]

with the abstract [highlight added]

Global and zonal monthly means of cloud cover fraction for total cloudiness (CF) from the ISCCP D2 dataset are compared to the same quantities produced by the 20th century simulations of 21 climate models from the World Climate Research Programme’s (WCRP’s) Coupled Model Intercomparison Project phase 3 (CMIP3). The comparison spans the time frame from January 1984 to December 1999 and the global and zonal averages of CF are studied. It is shown that the global mean of CF for the PCMDI-CMIP3 models, averaged over the whole period, exhibits a considerable variance and generally underestimates the ISCCP value. Large differences among models, and between models and observations, are found in the polar areas, where both models and satellite observations are less reliable, and especially near Antarctica. For this reason the zonal analysis is focused over the 60°S–60°N latitudinal belt, which includes the tropical area and midlatitudes. The two hemispheres are analysed separately to show the variation of the amplitude of the seasonal cycle. Most models underestimate the yearly averaged values of CF over all the analysed areas, whilst they capture, in a qualitatively correct way, the magnitude and the sign of the seasonal cycle over the whole geographical domain, but overestimate the amplitude of the seasonal cycle in the tropical areas and at mid-latitudes, when taken separately. The interannual variability of the yearly averages is underestimated by all models in each area analysed, and also the interannual variability of the amplitude of the seasonal cycle is underestimated, but to a lesser extent. This work shows that the climate models have a heterogeneous behaviour in simulating the CF over different areas of the Globe, with a very wide span both with observed CF and among themselves.
Some models agree quite well with the observations in one or more of the metrics employed in this analysis, but not a single model has a statistically significant agreement with the observational datasets on yearly averaged values of CF and on the amplitude of the seasonal cycle over all analysed areas.

The conclusion has the text

In this paper the monthly mean of total cloud cover fraction (CF) is chosen as benchmark for intercomparing and validating climate models included in the PCMDI-CMIP3 project, which have contributed decisively to the preparation of IPCC AR4 (Solomon et al., 2007). As observational counterpart, the satellite observations of clouds constituting the ISCCP D2 dataset for the 1984–1999 time frame, are considered. These data are compared to the corresponding period of the standard 20th century simulations of 21 climate models.

Whilst some models agree quite well with the observations in one or more of the metrics employed in this analysis, not a single model shows a statistically significant agreement with the observational dataset of yearly averaged values of CF and on the amplitude of the seasonal cycle on both tropical and extratropical regions. Our results highlight that the representation of the basic statistical properties of clouds in state-of-the-art climate models is still incomplete, as relevant systematic errors are present for most models in both tropical and extratropical regions. Typically, the climate models underestimate both the global CF and the zonal averaged CF over almost all zonal bands.

The range of model results is very wide since the annual and global averaged CF ranges from about 47% to 73%, with a mean difference with D2 observations of about 7%. The largest differences among models in the zonal averages are found in the tropical region and in the two polar regions, where the relative spread of models’ outputs reaches 0.4 (Tropics), 0.6 (Arctic region) and 0.9 (Antarctica). One must however also consider that it is likely that the error in the CF properties in the observational dataset is largest in the polar regions.

Looking at higher order statistics, it is shown that the interannual variability of global averaged CF is quite strongly underestimated in all models with respect to observations, whilst the interannual variability of the seasonal signal is only slightly underestimated.

The documented differences between the observational dataset and the models constitute a problem since the statistical properties of clouds play a decisive role in the Earth's climate, by providing a first order contribution to the energy budget at the top of the atmosphere (Solomon et al., 2007) and at the surface. It is therefore a feature that influences many physical processes inside the real atmosphere and inside models. Since most models are tuned to provide a TOA energy balance as close as possible to the measured record, the systematic deviations between a model and the CF observational dataset imply compensating deviations in a range of physical processes occurring almost everywhere in the system. The documented systematic inter-model discrepancies provide an indication of the effect of a diverse mix of physical processes on CF. The authors believe that this is not a healthy situation.

The results presented in this paper provide a natural complement to the analyses shown in Pincus et al. (2008), who discussed the second moments of the statistics of the CF but did not show the results of the mean climatology. Often, the results obtained from different climate models are averaged under the assumption that the model biases will partially compensate, so that a more realistic estimate of the climate properties is achieved by the so-constructed “mean model”. As discussed in, e.g., Lucarini (2008) such a procedure, even if commonly used, is not really well defined in a probabilistic sense, and should be interpreted only in a qualitative sense. Since in our case most of the models have biases of the same sign with respect to observations, the ensemble mean (constructed in our case with a simple un-weighted averaging) does not provide good agreement with observations for the considered statistical estimators, with discrepancies in most cases larger than one standard deviation of the single model outputs.
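The last point, that same-sign biases survive unweighted averaging, can be illustrated with a toy calculation. All numbers below are hypothetical placeholders, not values from Probst et al:

```python
# Toy illustration (all numbers hypothetical, not from Probst et al): when
# most models share a low bias, a simple unweighted ensemble mean keeps it.
obs_cf = 66.0  # hypothetical observed global-mean total cloud fraction (%)
model_cf = [47.0, 55.0, 58.0, 60.0, 62.0, 64.0, 73.0]  # hypothetical models

ensemble_mean = sum(model_cf) / len(model_cf)
biases = [m - obs_cf for m in model_cf]
mean_bias = sum(biases) / len(biases)

# The ensemble mean is off by the average bias: same sign as most models,
# so the hoped-for compensation of errors does not occur.
print(f"ensemble mean CF = {ensemble_mean:.1f}%, mean bias = {mean_bias:+.1f}%")
```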


Filed under Climate Models, Research Papers

New Paper “Impacts Of Wind Farms On Land Surface Temperature” By Zhou Et Al 2012 Documents An Effect Of Local And Regional Landscape Change On Long Term Surface Air Temperature Trends

Update April 30 2012: The authors have prepared

Q&A on “Impacts of Wind Farms on Land Surface Temperature” Published by Nature Climate Change on April 29, 2012

**********ORIGINAL POST**********************

In the papers

Walters, J. T., R. T. McNider, X. Shi, W. B. Norris, and J. R. Christy, 2007: Positive surface temperature feedback in the stable nocturnal boundary layer. Geophys. Res. Lett., 34, L12709, doi:10.1029/2007GL029505

Pielke Sr., R.A., C. Davey, D. Niyogi, S. Fall, J. Steinweg-Woods, K. Hubbard, X. Lin, M. Cai, Y.-K. Lim, H. Li, J. Nielsen-Gammon, K. Gallo, R. Hale, R. Mahmood, S. Foster, R.T. McNider, and P. Blanken, 2007: Unresolved issues with the assessment of multi-decadal global land surface temperature trends. J. Geophys. Res., 112, D24S08, doi:10.1029/2006JD008229.

Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2009: An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841.

Steeneveld, G.J., A.A.M. Holtslag, R.T. McNider, and R.A. Pielke Sr., 2011: Screen level temperature increase due to higher atmospheric carbon dioxide in calm and windy nights revisited. J. Geophys. Res., 116, D02122, doi:10.1029/2010JD014612.

and in Professor Dick McNider’s powerpoint talk

Response and Sensitivity of the Stable Boundary Layer to Added Downward Long-wave Radiation

and guest weblog

In the Dark of the Night – the Problem with the Diurnal Temperature Range and Climate Change by Richard T. McNider

we present evidence as to why the minimum land surface air temperature should not be used as part of the construction of a global average multi-decadal surface temperature trend.

As Dick writes in his post

Because of the redistribution phenomena and the shallow layer affected, observed minimum temperatures are a very poor measure of the accumulation of heat in the atmosphere.

In the Pielke et al 2007 paper, we wrote [highlight added]

In a series of papers exploring the nonlinear dynamics of the stable boundary layer [McNider et al., 1995a, 1995b; Shi et al., 2005; Walters et al., 2007] it was shown that in certain parameter spaces the nocturnal boundary layer can rapidly transition from a cold light wind solution to a warm windy solution. In these parameter spaces, even slight changes in longwave radiative forcing or changes in surface heat capacity can cause large changes in surface temperatures as the boundary mixing changes. However, these temperature changes reflect changes in the vertical distribution of heat, not in the heat content of the deep atmosphere.

There is a new paper which confirms this finding that changes over time in the vertical redistribution of heat make a significant difference to long-term trends in minimum temperatures. This paper is

Zhou, Liming, Yuhong Tian, Somnath Baidya Roy, Chris Thorncroft, Lance F. Bosart and Yuanlong Hu, 2012: Impacts of wind farms on land surface temperature. Nature Climate Change. doi:10.1038/nclimate1505

The abstract reads

The wind industry in the United States has experienced a remarkably rapid expansion of capacity in recent years and this fast growth is expected to continue in the future. While converting wind’s kinetic energy into electricity, wind turbines modify surface–atmosphere exchanges and the transfer of energy, momentum, mass and moisture within the atmosphere. These changes, if spatially large enough, may have noticeable impacts on local to regional weather and climate. Here we present observational evidence for such impacts based on analyses of satellite data for the period of 2003–2011 over a region in west-central Texas, where four of the world’s largest wind farms are located. Our results show a significant warming trend of up to 0.72 °C per decade, particularly at night-time, over wind farms relative to nearby non-wind-farm regions. We attribute this warming primarily to wind farms as its spatial pattern and magnitude couples very well with the geographic distribution of wind turbines.

An excerpt from the conclusions reads

Very probably the diurnal and seasonal variations in wind speed and the changes in near-surface ABL conditions owing to wind-farm operations are primarily responsible for the LST changes described above.

While Zhou et al 2012 applies to wind farms, such changes in the vertical mixing and distribution of heat will occur whenever land use change occurs, such as urbanization, deforestation, or irrigation.


Filed under Climate Change Metrics, Research Papers

Sea Ice Prediction – Update To 2012 – A Correction

Update April 28 2012: On Tamino in a new post titled Let’s do the math! there is a question by Ron Broberg as to why I consider single-year sea ice to have inertia. The reason is that, even with a single year of ice, since it has mass, its heat can be expressed in Joules. The mass of sea ice [its coverage and depth] can be expressed in terms of its heat content in Joules [i.e. the Joules required to warm and melt it]. Multi-year sea ice would presumably be thicker, and thus more heat would be required to melt it. However, just to illustrate, IF, for example, the area coverage and depth returned in a single year to their values of years ago, the use of linear trends over decades would be meaningless, as the “clock” would have been reset. It is the mass of the ice that matters in terms of heat, not its age (although other aspects of the ice are affected, such as its albedo, density, etc).
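The point about expressing ice mass as heat content can be sketched with standard textbook constants; the thickness and temperature values below are purely illustrative, not measurements:

```python
# Sketch of the heat-content point: the energy (Joules) needed to warm and
# melt a slab of sea ice, per square metre of cover. The constants are
# standard textbook values; thickness and temperature are illustrative.
RHO_ICE = 917.0      # kg/m^3, density of ice (approximate)
C_ICE = 2100.0       # J/(kg K), specific heat of ice
L_FUSION = 334000.0  # J/kg, latent heat of fusion

def joules_to_melt(thickness_m, temp_c):
    """Energy per m^2 to warm ice from temp_c up to 0 C and then melt it."""
    mass = RHO_ICE * thickness_m             # kg of ice per m^2
    warming = mass * C_ICE * (0.0 - temp_c)  # sensible heat to reach 0 C
    melting = mass * L_FUSION                # latent heat of fusion
    return warming + melting

# Doubling the thickness doubles the energy required: it is the mass of
# the ice, not its age, that sets the heat needed to remove it.
print(joules_to_melt(1.0, -10.0))  # ~3.3e8 J/m^2
print(joules_to_melt(2.0, -10.0))  # ~6.5e8 J/m^2
```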

In looking at the Cryosphere Today analysis  below (with my eyeometer), it appeared that the areal coverage had stopped decreasing when averaged from 2006 to the present, although there are large variations between summer and winter.  My “eye” still sees the change in the character of the analysis, but I have become convinced by the statistical trend analysis by Grant Foster and dana1981 (and several of the commenters) that this is not a significant change in the long term trend, nor can one even say the decline has stopped (due, presumably, to the large intraannual variations between anomalies in the winter and summer).

Also Al Rodger at Tamino’s Let’s do the math!, despite the same type of insults as Grant Foster is spewing, has a very informative plot of Arctic sea ice. The portion of the time period that I focused on in my original post was since 2006 [using an eyeometer which saw a flattening of the anomalies, as I mentioned above]. Unfortunately, his time series is not up to the present (although this would make little difference in the means that are shown). I have reproduced his excellent figure below.

It does confirm that there was a visual change in the character of the slope in ~2006. From a statistical perspective, as discussed on Tamino and Skeptical Science, I am now convinced it is too short a time to determine if there is a real change in the character of the Arctic sea ice decline. It certainly could just be a short-term hiatus in the decline that does not significantly affect the longer term trend. Only time will tell if this is a correct interpretation.
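The statistical caution about short windows can be illustrated with synthetic data; none of the numbers below are sea ice data, and the noise level is chosen only for illustration:

```python
# Synthetic illustration of why a ~6-year window is a weak test of a
# multi-decadal trend: with year-to-year noise comparable in size to the
# trend, short-window slope estimates scatter widely and can even change
# sign. None of these numbers are sea ice data.
import random

def ols_slope(xs, ys):
    """Ordinary least squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

random.seed(0)
years = list(range(30))
true_trend = -0.05  # units per year, illustrative
series = [true_trend * t + random.gauss(0.0, 0.15) for t in years]

slope_30yr = ols_slope(years, series)
slope_6yr = ols_slope(years[-6:], series[-6:])
# The 30-year estimate is tightly constrained by the data; the 6-year
# estimate is dominated by the particular noise realization.
```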

My final comment is why I do not permit comments on my weblog. The reason is straightforward. If you read the posts by Grant Foster and the response by Grant Foster to a comment by michel, you will see the bitterness of his posts and of a number of his commenters. Readers of his weblog (and of Skeptical Science which often has the same tone) with questions are welcome to e-mail me directly at pielkesr@ciresmail.colorado.edu.  I will post, with permission, substantive questions, and my answers, on my weblog.

******************Original post**********************************

I posted an update of the predictions of Arctic sea ice in my post of April 20 2012 titled Sea Ice Prediction – Update To 2012. After an exchange of posts with Tamino (Grant Foster) and Skeptical Science (dana1981), I have become convinced that I made several methodological errors and also did not properly explain my perspective on the analysis I presented. I did not

i)  clearly state why I chose a start date of 2006,

ii) describe why I chose a relatively short time to compare the trends,

and

iii) why I compared the Cryosphere Today anomalies in  Arctic sea ice area with the Vinnikov et al sea ice extent values.

First, my use of their long-term trend values to compare with a trend since 2006 assumes that short-term assessments have value for quantities which involve inertia (mass), such as heat and ice. If the sea ice area were to recover to its original area and thickness (for whatever reason), for example, it does not matter what its long-term trend was. The long-term trend (if there is one) would be reset. I have made this point often with respect to ocean heat content (e.g. see). It also applies to sea ice (although area is only one part of it). I chose 2006 for this reason, to see if the long-term trend provided by Vinnikov et al has been interrupted (as it visually appears to be on the Cryosphere Today website).

Second, I compared anomalies of sea ice extent and of sea ice area, assuming they would be very close to each other. I have been convinced based on the analysis by Grant Foster and dana1981 that this is not correct [although see http://goo.gl/5rX5O from Zach on Tamino]. This was my more serious mistake.

What they report on Tamino and Skeptical Science is that “the decline in Arctic sea ice extent has actually occurred much faster than climate models were predicting 13 years ago”, and they include the figure (from Skeptical Science)

I have no reason to question their finding with respect to sea ice extent. I am requesting, since they appear to have the statistical analysis program and data readily available, that Grant Foster and/or dana1981

i) perform the same analysis for sea ice area that they have done for sea ice extent

and

ii) perform the analysis of insolation-weighted sea ice trends; e.g. see

Pielke Sr., R.A., G.E. Liston, and A. Robock, 2000: Insolation-weighted assessment of Northern Hemisphere snow-cover and sea-ice variability. Geophys. Res. Lett., 27, 3061-3064.

and

Pielke Sr., R.A., G.E. Liston, W.L. Chapman, and D.A. Robinson, 2004: Actual and insolation-weighted Northern Hemisphere snow cover and sea ice — 1974-2002. Climate Dynamics, 22, 591-595, doi:10.1007/s00382-004-0401-5.
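The insolation-weighting idea in those two papers amounts to a weighted average. A minimal sketch, with hypothetical numbers rather than values from the papers:

```python
# Minimal sketch of the insolation-weighting idea: weight each latitude
# band's ice anomaly by the sunlight it receives, so that ice changes under
# strong insolation count more toward the radiative budget. All numbers
# below are hypothetical placeholders, not values from the papers.

def insolation_weighted_mean(anomalies, insolation):
    """Weighted mean of per-band anomalies, with insolation as weights."""
    return sum(a * w for a, w in zip(anomalies, insolation)) / sum(insolation)

ice_anom = [-0.5, -0.3, -0.1]     # hypothetical ice-cover anomalies by band
sunlight = [450.0, 300.0, 100.0]  # hypothetical mean insolation (W/m^2)

plain = sum(ice_anom) / len(ice_anom)
weighted = insolation_weighted_mean(ice_anom, sunlight)
# Here the weighted mean is more negative than the plain mean, because the
# larger losses fall in the sunnier bands.
```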

If these two metrics also show that the model predictions are too conservative, then the Skeptical Science conclusion that “This rapid rate is precisely why the Arctic sea ice decline is described as a death spiral” will have more evidence. Regardless, they would be adding to the discussion. I urge them to also present these same analyses (area anomalies and insolation-weighted) for the Antarctic sea ice.


Filed under Climate Change Metrics

Response By John Christy To A Comment Regarding The Lower Tropospheric Temperature Data At Climate Abyss

John Nielsen-Gammon has an interesting post at Climate Abyss titled

About the Lack of Warming…

Using surface temperature data, John concludes that

All else being equal, an El Niño year will average about 0.2 C warmer globally than a La Niña year.  Each new La Niña year will be about as warm as an El Niño year 13 years prior.
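The two numbers in that quote imply a background warming rate, which a quick check makes explicit (a sketch of the arithmetic, not part of the original post):

```python
# The two quoted numbers imply a background warming rate: a La Niña year
# catches up to an El Niño year (about 0.2 C warmer) after about 13 years.
enso_offset_c = 0.2    # El Niño minus La Niña global mean, from the quote
catch_up_years = 13.0  # years for a La Niña year to match an old El Niño year

trend_c_per_decade = enso_offset_c / catch_up_years * 10.0
print(f"implied trend ~ {trend_c_per_decade:.2f} C/decade")  # ~0.15
```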

In response to his post, I wrote the following

Hi John – I recommend you also perform this analysis on the UAH MSU and RSS MSU lower tropospheric temperatures and on the upper ocean heat content paper. As we have shown in

Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2009: An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841 http://pielkeclimatesci.files.wordpress.com/2009/11/r-345.pdf

and

Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2010: Correction to “An alternative explanation for differential temperature trends at the surface and in the lower troposphere”. J. Geophys. Res., 115, D1, doi:10.1029/2009JD013655 http://pielkeclimatesci.files.wordpress.com/2010/03/r-345a.pdf

there is a growing divergence between the surface analyses and the lower tropospheric temperature anomaly data. We attribute a significant part of the warm surface temperature bias to the land minimum temperatures.

[Roger- I'll run the numbers in a couple of days when I'm back in town.  - John N-G]

One of the commenters responded with

Roger Pielke Sr.: “there is a growing divergence between the surface analyses and the lower tropospheric temperature anomaly data. We attribute a significant part of the warm surface temperature bias to the land minimum temperatures”

I will reserve my judgement until the NOAA analysis of the lower troposphere is released (they’re working on it). They’ve identified some biases in the UAH and RSS analyses which have been shown to influence the trends at other altitudes in the atmosphere, meaning that they will probably also influence the synthetic lower tropospheric altitude.

I sent the above comment to John Christy, who replied with the information below [I also sent to Climate Abyss to post].

We examined the NOAA (STAR) analysis and there is a noticeable problem with their method (attached).  In every comparison with independent data, STAR was the hottest for MT (Table 4) and clearly had more error than UAH for both US and Australian station-by-station comparisons (Table 2 and Table 3).  In the latest STAR TMT, there is also a spurious jump on 1 Jan 2001 that no other dataset has – a processing glitch evidently.

Global LT Trends, 1979-2011 (°C/decade):

+0.136 UAH
+0.139 RSS
+0.121 ERA-I (Reanalysis)
+0.169 HadAT2
+0.129 RAOBCORE
+0.146 RICH
+0.165 RATPAC

That’s a pretty tight grouping (+/- 0.025 from mean) – and if you consider the lack of global coverage on HadAT2 and RATPAC, giving those two a bit more error, you get an even tighter grouping.  So, your inquisitor evidently is not aware of all of this evidence.

STAR’s current TMT trend (1979-2011) is +0.13 C/decade.  To produce a lower tropospheric TLT value consistent with the fact the upper part of TMT is cooling (stratosphere) means the STAR TLT must be warmer than their TMT trend by around +0.07 or so, giving STAR a TLT trend of about +0.20 C/decade – well outside the range of independent observations.
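Christy's quoted spread and the implied STAR TLT value can be reproduced directly from the numbers above:

```python
# Reproduce the quoted spread of the LT trend estimates and the implied
# STAR TLT value from the numbers given in the text.
lt_trends = {  # C/decade, 1979-2011, as listed above
    "UAH": 0.136, "RSS": 0.139, "ERA-I": 0.121, "HadAT2": 0.169,
    "RAOBCORE": 0.129, "RICH": 0.146, "RATPAC": 0.165,
}
mean_trend = sum(lt_trends.values()) / len(lt_trends)
max_dev = max(abs(v - mean_trend) for v in lt_trends.values())
print(f"mean {mean_trend:.3f}, max deviation {max_dev:.3f}")  # ~ +/- 0.025

# STAR's TLT must run warmer than its TMT because the stratospheric part
# of TMT is cooling; the ~+0.07 adjustment is the figure given in the text.
star_tmt = 0.13
star_tlt = star_tmt + 0.07  # ~0.20 C/decade, outside the grouping above
```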

The attachment John refers to is

Christy, J.R., R.W. Spencer, and W.B. Norris, 2011: The role of remote sensing in monitoring global bulk tropospheric temperatures. International Journal of Remote Sensing, Vol. 32, No. 3, February 2011, 1-15.

The abstract reads

The IPCC AR4 (2007) discussed bulk tropospheric temperatures as an indicator of atmospheric energy content. Here, we examine the latest publications about, and versions of, the AR4 data sets. The metric studied is the trend that represents the average rate of atmospheric energy accumulation that relates to increased greenhouse gas forcing. For temperatures from microwave instruments, UAHuntsville’s indicates the lowest trend for 1979–2009 and NOAA-STAR’s the highest, being slightly higher than Remote Sensing Systems’ (RSS). Updated analyses using radiosonde data suggest RSS and STAR experienced spurious warming after the mid-1990s. When satellite and radiosonde data sets are considered, the global trends for 1979–2009 of the lower and mid-troposphere are +0.15 and +0.06 °C decade⁻¹ respectively. Error ranges of these estimates, if we do not apply information that indicates some data sets contain noticeable trend problems, are at least ±0.05 °C decade⁻¹, which needs reduction to characterize forcing and response in the climate system accurately.


Filed under Climate Change Metrics, Research Papers

Candid Statement On The Shortcomings Of Multi-Decadal Climate Model Predictions By Early Career Scientists At An NCAR Workshop

There was a candid statement about climate models made at the Advanced Study Program/Early Career Scientist Assembly Workshop on Regional Climate Issues in Developing Countries, held in Boulder, Colorado on 19–22 October 2011. The Workshop is reported on in the April 3 2012 issue of EOS on page 145.

The relevant text reads [highlight added]

One recurring issue throughout the workshop was that of managing complex impact assessments with a large range of results from global and regional models; variations between models are often not fully understood, accounted for, and/or communicated. Also problematic is the discrepancy between the spatial and temporal scales on which regional climate projections are made (tens of kilometers and ~30–100 years) and the scales that are of primary interest to many communities in developing countries (kilometers and 0–10 years) that are presently affected by climate change.

My Comment: I agree with this comment except I would delete “change” in the last sentence. Climate is always changing, and the use of the word “change” itself miscommunicates the actual threats faced by developing countries even with the climate they have seen in the past.

The EOS article continues with

Approaches for addressing uncertainty and scaling issues might include cost-effective ensemble dynamical-statistical approaches and/or coupling regional modeling efforts to better meet specific objectives (e.g., improved integration of hydrologic models). Facilitating effective “end-to-end” communication was identified as a critical research component to increase awareness of the wider challenges and opportunities facing scientists and end users alike. Such end-to-end communication would also help to ensure that research addresses the particular needs of the communities that are its focus.

My Comment:

There is a critically important requirement, however, that is left off of the approaches. Before modeling results are even used, they must first show skill at predicting changes in climate statistics on the spatial and temporal scales needed by the impacts communities. As we present in our paper

Pielke Sr., R.A., and R.L. Wilby, 2012: Regional climate downscaling – what’s the point? Eos Forum, 93, No. 5, 52-53, doi:10.1029/2012EO050008

no regional predictive skill (of changes in climate statistics) has yet been shown on yearly, decadal or multi-decadal time scales.

Their “end-to-end” communication, however, is appropriately the focus, as we emphasize in our article

Pielke Sr., R.A., R. Wilby, D. Niyogi, F. Hossain, K. Dairuku, J. Adegoke, G. Kallos, T. Seastedt, and K. Suding, 2012: Dealing with complexity and extreme events using a bottom-up, resource-based vulnerability perspective. AGU Monograph on Complexity and Extreme Events in Geosciences, in press.

As we wrote in our abstract

We discuss the adoption of a bottom-up, resource–based vulnerability approach in evaluating the effect of climate and other environmental and societal threats to societally critical resources. This vulnerability concept requires the determination of the major threats to local and regional water, food, energy, human health, and ecosystem function resources from extreme events including climate, but also from other social and environmental issues. After these threats are identified for each resource, then the relative risks can be compared with other risks in order to adopt optimal preferred mitigation/adaptation strategies.

This is a more inclusive way of assessing risks, including those from climate variability and climate change, than the outcome vulnerability approach adopted by the IPCC. A contextual vulnerability assessment, using the bottom-up, resource-based framework, is a more inclusive approach for policymakers to adopt effective mitigation and adaptation methodologies to deal with the complexity of the spectrum of social and environmental extreme events that will occur in the coming decades, as the range of threats is assessed, beyond just the focus on CO2 and a few other greenhouse gases as emphasized in the IPCC assessments.

Hopefully, the attendees of the Workshop will be made aware of our bottom-up, resource-based approach for developing robust effective responses to environmental threats in their countries.


Filed under Vulnerability Paradigm

Comment On “Levitus Data On Ocean Forcing Confirms Skeptics, Falsifies IPCC” At Niche Modeling

There is an interesting post on the significance of the Levitus et al 2012 paper

Levitus, S., et al. (2012), World ocean heat content and thermosteric sea level change (0-2000 m), 1955-2010, Geophys. Res. Lett., doi:10.1029/2012GL051106, in press

that I posted on in

Comment On Ocean Heat Content “World Ocean Heat Content And Thermosteric Sea Level Change (0-2000), 1955-2010″ By Levitus Et Al 2012

This new post is on Niche Modeling and is titled

Levitus data on ocean forcing confirms skeptics, falsifies IPCC 

While the lower diagnosed value of radiative imbalance raises questions about the skill of the models (and the IPCC's reliance on them), it is important to distinguish between three terms: radiative imbalance, radiative forcing, and radiative feedback. In terms of global averages, their relationship can be written as

global radiative imbalance =  global radiative forcing + global radiative feedback.

The Levitus et al 2012 data provide a measure of the global average radiative imbalance for 1955-2010, which is ~+0.3 Watts per meter squared.

If one accepts the IPCC radiative forcing values of anthropogenic radiative forcings of +1.6 (+0.6 to +2.4) Watts per meter squared and/or the solar radiative forcing of +0.12 (+0.06 to +0.30) Watts per meter squared as correct, what the Levitus et al data show is that the global radiative feedback is negative (and this necessarily includes the water vapor, sea ice, etc. radiative feedbacks). That is

global radiative feedback = global radiative imbalance − global radiative forcing < 0.
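Using the stated relation with the central values quoted above, the implied feedback can be computed directly:

```python
# Plug the central values quoted above into the stated relation:
#   imbalance = forcing + feedback  =>  feedback = imbalance - forcing
imbalance = 0.3       # W/m^2, diagnosed from Levitus et al 2012 (1955-2010)
anthro_forcing = 1.6  # W/m^2, IPCC central anthropogenic estimate
solar_forcing = 0.12  # W/m^2, IPCC central solar estimate

feedback = imbalance - (anthro_forcing + solar_forcing)
print(f"implied global radiative feedback: {feedback:+.2f} W/m^2")  # negative
```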

Alternatively, the IPCC anthropogenic radiative forcings  and/or the solar radiative forcing could be in error.

Either way, the 2007 IPCC WG1 report has a serious error in it.


Filed under Climate Change Metrics

The Overstatement Of Regional Climate Prediction Capability

Today I am posting on yet another model study that illustrates the lack of skill of regional models in simulating climate on multi-decadal time scales, as well as how the findings are being misinterpreted. The paper is

Hwang, Syewoon, Wendy Graham, José L. Hernández, Chris Martinez, James W. Jones, Alison Adams, 2011: Quantitative Spatiotemporal Evaluation of Dynamically Downscaled MM5 Precipitation Predictions over the Tampa Bay Region, Florida. J. Hydrometeor, 12, 1447–1464.

The abstract reads [highlight added]

This research quantitatively evaluated the ability of the fifth-generation Pennsylvania State University–National Center for Atmospheric Research Mesoscale Model (MM5) to reproduce observed spatiotemporal variability of precipitation in the Tampa Bay region over the 1986–2008 period. Raw MM5 model results were positively biased; therefore, the raw model precipitation outputs were bias corrected at 53 long-term precipitation stations in the region using the cumulative distribution function (CDF) mapping approach. CDF mapping effectively removed the bias in the mean daily, monthly, and annual precipitation totals and improved the RMSE of these rainfall totals. Observed daily precipitation transition probabilities were also well predicted by the bias-corrected MM5 results. Nevertheless, significant error remained in predicting specific daily, monthly, and annual total time series. After bias correction, MM5 successfully reproduced seasonal geostatistical precipitation patterns, with higher spatial variance of daily precipitation in the wet season and lower spatial variance of daily precipitation in the dry season. Bias-corrected daily precipitation fields were kriged over the study area to produce spatiotemporally distributed precipitation fields over the dense grids needed to drive hydrologic models in the Tampa Bay region. Cross validation at the 53 long-term precipitation gauges showed that kriging reproduced observed rainfall with average RMSEs lower than the RMSEs of individually bias-corrected point predictions. Results indicate that although significant error remains in predicting actual daily precipitation at rain gauges, kriging the bias-corrected MM5 predictions over a hydrologic model grid produces distributed precipitation fields with sufficient realism in the daily, seasonal, and interannual patterns to be useful for multidecadal water resource planning in the Tampa Bay region.
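The CDF (quantile) mapping bias correction the abstract describes can be sketched as follows. This is the general idea of empirical quantile mapping, not the authors' implementation, and all sample numbers are made up:

```python
# Minimal empirical quantile (CDF) mapping sketch: replace each model value
# with the observed value at the same empirical quantile. This illustrates
# the general idea behind the paper's bias correction, not the authors'
# actual code; the toy rainfall samples below are made up.
import bisect

def cdf_map(model_train, obs_train, value):
    """Map one model value onto the observed distribution."""
    m_sorted = sorted(model_train)
    o_sorted = sorted(obs_train)
    # empirical quantile of `value` within the model training sample
    rank = bisect.bisect_left(m_sorted, value)
    q = rank / max(len(m_sorted) - 1, 1)
    # observed value at the same quantile (nearest-rank lookup)
    idx = min(int(round(q * (len(o_sorted) - 1))), len(o_sorted) - 1)
    return o_sorted[idx]

# Toy daily rainfall samples (mm): the model is positively biased,
# like the raw MM5 output described in the abstract.
obs = [0, 0, 1, 2, 3, 5, 8, 12, 20, 35]
model = [0, 1, 2, 4, 6, 9, 14, 20, 30, 50]

corrected = [cdf_map(model, obs, v) for v in model]
# After mapping, the corrected sample matches the observed distribution,
# removing the mean bias, which is exactly what CDF mapping can and
# cannot do: it fixes distributions, not the day-by-day time series.
```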

I have the following substantive comments on this paper with respect to what can be inferred about model skill on multi-decadal time periods:

1. The raw model data are biased. They can be adjusted towards real-world observations, but only when those data are available. This real-world observed data is obviously not available for the coming decades.

2. The study does not examine skill in the prediction of changes in multi-decadal regional climate statistics.

Thus, while the authors claim that the results are “useful for multidecadal water resource planning in the Tampa Bay region”, this planning can be done directly with the original real-world data. The model downscaling, other than documenting systematic biases, does not provide added information beyond what is already available from observed data and reanalyses without the model.


Filed under Climate Models, Research Papers