In my post yesterday
I referenced a post by Judy Curry titled
One of the strategies for making decadal climate predictions on both global and regional scales that she listed is
“statistical forecast methods that combined projections of forcings and known modes of natural internal variability”
Today’s post discusses this strategy.
- First, statistical downscaling from the parent global model should be used as the benchmark (control) that dynamic downscaling would have to improve on.
An excellent example of this type of testing is given in the paper
Landsea, C.W., and J.A. Knaff, 2000: “How much skill was there in forecasting the very strong 1997-98 El Niño?” Bulletin of the American Meteorological Society, 81.
Among the insightful conclusions of this seminal paper is
“…..the use of more complex, physically realistic dynamical models does not automatically provide more reliable forecasts. Increased complexity can increase by orders of magnitude the sources for error, which can cause degradation in skill.”
- Statistical downscaling does add prediction skill for Type 1, Type 2 and, perhaps, Type 3 applications.
With respect to statistical downscaling, the different types are defined below:
1. Type 1: The regional statistical model is trained on the output of a global numerical weather prediction model and/or a regional dynamically downscaled numerical weather prediction model, or a global data reanalysis, at regular time intervals (e.g. 6 or 12 h). A global numerical weather prediction is one in which the initial atmospheric conditions have not yet been forgotten. The Method of Model Output Statistics (MOS) and the Perfect Prog Method are two approaches to this type of statistical downscaling. MOS permits the method to correct for systematic model biases, while the Perfect Prog Method does not. Type 1 statistical downscaling has been shown to be of considerable value in producing skillful short-term weather forecasts.
2. Type 2: The regional statistical model is trained on the output of a global numerical weather prediction model, or a global data reanalysis, at regular time intervals (e.g. 6 or 12 h). A global numerical weather prediction is one in which the initial atmospheric conditions have not yet been forgotten, but, for Type 2, the regional dynamically downscaled numerical weather prediction model has forgotten its initial conditions. Type 2 statistical downscaling has less skill than Type 1, since skillful finer-scale (regional) observationally constrained real-world information is not available.
3. Type 3: The regional statistical model is trained on the output of a global numerical prediction model which is forced with specified real-world surface boundary conditions, but in which the initial atmospheric conditions of the global model have been forgotten. Type 3 has even less skill than Type 2, since fewer real-world observations are available as input to the predictors for the statistical downscaling model. However, since the equations used to train the statistical model were developed from real-world observations, there is an assumption that the same relationship will hold for the dynamically predicted numerical model results.
4. Type 4: The regional statistical model is trained on the output of a coupled earth system global climate model in which the atmosphere, ocean, biosphere and cryosphere are interactive. Other than terrain, all other components of the climate system are predicted and are not constrained by real-world observations. As long as the relationship between the real-world observations and the statistically predicted model results does not change, the main issue is how accurate the dynamically predicted numerical model results are. However, IF the statistical relationship changes in the future, this method will not provide the actual real-world response.
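The MOS approach described under Type 1 can be illustrated with a minimal sketch. This is a hypothetical example with synthetic data (the station, the bias, and the noise levels are all invented for illustration, not taken from any operational MOS system): a regression is trained so that observed station values can be predicted from coarse model output, which removes the model's systematic bias in the process.

```python
import numpy as np

# Hypothetical Model Output Statistics (MOS) sketch: regress observed
# station temperatures on coarse-model forecasts so that a new forecast
# can be corrected for the model's systematic bias.
rng = np.random.default_rng(0)

# Synthetic training data: the "model" runs 2 C too warm, with noise.
obs = rng.normal(15.0, 5.0, size=200)            # observed station temps (C)
model = obs + 2.0 + rng.normal(0.0, 1.0, 200)    # biased model forecasts (C)

# Fit obs = a * model + b by ordinary least squares.
A = np.vstack([model, np.ones_like(model)]).T
a, b = np.linalg.lstsq(A, obs, rcond=None)[0]

# Apply the trained relation to a new model forecast of 20 C; the
# corrected value is pulled back toward the observed climatology.
new_forecast = 20.0
corrected = a * new_forecast + b
print(f"slope={a:.2f}, corrected={corrected:.1f} C")
```

A Perfect Prog formulation would instead train the same regression on observed predictors only, so the warm bias of the model would pass through uncorrected, which is the distinction drawn above.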
A summary of the statistical downscaling method (as well as an expensive project to utilize this approach) is given in [h/t to Judy Curry]
Excerpts from this website read [highlight added]
Climate modeling groups have produced hundreds of simulations of past and future climates for the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4). The WCRP Working Group on Coupled Modelling helped to coordinate these activities through the CMIP3 effort (see Meehl et al. 2007) and worked to co-locate these simulations within a single archive, hosted by the Lawrence Livermore National Laboratory (LLNL) Program for Climate Model Diagnosis and Intercomparison (PCMDI). The conversion of all simulation results to a common data format has made probabilistic, multi-model projections and impacts assessments practical.
One issue not solved by the AR4 archive development is that the spatial scale of climate model output is too coarse for most impacts studies and decision-support purposes. Multiple downscaling approaches exist for deriving regional climate from coarse resolution model output (Giorgi et al. 2001, Wilby and Wigley 1997). One method of statistically downscaling spatially continuous fields, developed for hydrologic impact studies (Wood et al. 2004), is computationally efficient enough to be easily applied to ensembles of projections (e.g., Maurer 2007), and has compared favorably to other downscaling techniques. Downscaled data developed by this method have been used in the study of potential climate change impacts on various resource systems, including watershed hydrology, reservoir systems, wine grape cultivation, habitat migration, and air quality.
Statistical downscaling is typically used to predict one variable at one site, though some techniques for simultaneous downscaling to multiple sites for precipitation have been developed (Harpham and Wilby, 2005; Wilks, 1999). However, for studies of some climate impacts such as river basin hydrology, it is important to downscale simultaneous values of multiple variables (such as precipitation and temperature) over large, heterogeneous areas, while maintaining physically plausible spatial and temporal relationships, though few downscaling techniques have been developed to do this. The BCSD technique (i.e. the chosen methodology to develop these archive data) is unique in that it can produce gridded time series of precipitation and surface air temperature at a fine resolution over a large spatial domain and has been used extensively in published studies across the U.S. (e.g. Cayan et al., 2007; Christensen et al., 2004; Hayhoe et al., 2004; Hayhoe et al., 2007; Maurer and Duffy, 2005; Maurer, 2007; Payne et al., 2004; Van Rheenen et al., 2004; Wood et al., 2004). The BCSD method has been shown to provide downscaling capabilities comparable to other statistical and dynamical methods in the context of hydrologic impacts (Wood et al., 2004).
The principal weakness of any statistical downscaling method is the assumption of some stationarity. In the case of BCSD, the assumption is made that the relationship between large-scale precipitation and temperature and fine-scale precipitation and temperature in the future will be the same as in the past. For example, the processes determining how precipitation and temperature anomalies for any 2 degree grid box are distributed within that grid box are assumed to govern in the future as well. A second assumption included in the bias-correction step of the BCSD method is that any biases exhibited by a GCM for the historical period will also be exhibited in future simulations. Tests of these assumptions, using historic data, show that they appear to be reasonable, inasmuch as the BCSD method compares favorably to other downscaling methods (Wood et al, 2004).
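The bias-correction step described in the excerpt can be sketched with quantile mapping, which is the core of the "BC" in BCSD. This is a simplified illustration with synthetic data, not the actual BCSD code: each future GCM value is located within the GCM's historical distribution and replaced by the observed value at the same quantile. The stationarity assumption discussed above is visible directly in the code, since the historical bias is assumed to carry over unchanged into the future run.

```python
import numpy as np

def quantile_map(gcm_future, gcm_hist, obs_hist):
    """Bias-correct gcm_future by empirical quantile mapping:
    find each value's quantile in the historical GCM distribution,
    then look up that quantile in the observed distribution."""
    q = np.searchsorted(np.sort(gcm_hist), gcm_future) / len(gcm_hist)
    q = np.clip(q, 0.0, 1.0)
    return np.quantile(obs_hist, q)

rng = np.random.default_rng(1)
obs_hist = rng.normal(10.0, 3.0, 1000)   # observed historical temps (C)
gcm_hist = rng.normal(12.0, 4.0, 1000)   # GCM runs warm and too variable
gcm_future = gcm_hist + 1.5              # future run, same bias + a warming signal

corrected = quantile_map(gcm_future, gcm_hist, obs_hist)
# The corrected series inherits the observed statistics plus the GCM's
# simulated change signal; if the bias were NOT stationary, this mapping
# would silently give the wrong answer, which is the weakness noted above.
print(f"raw future mean {gcm_future.mean():.1f} C, "
      f"corrected mean {corrected.mean():.1f} C")
```

Note the correction pulls the warm-biased future mean down toward the observed climatology while retaining a warming signal above it, which is exactly the behavior the stationarity assumption is meant to guarantee.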
The WCRP CMIP3 Climate Projections are Type 4 statistical downscaling. They have the fundamental issues that they have to assume the statistical relationships are invariant in a changing climate AND that the dynamically predicted numerical model results from which they derive their predictions are accurate. The dynamic model predictions that they use, however, are the same as those used for Type 4 dynamic downscaling! Type 4 dynamic downscaling has not been shown to have skill, and there is no reason to expect better behavior from Type 4 statistical downscaling.
The bottom line is that vast amounts of money are being spent on both dynamic and statistical downscaling predictions for decades from now that have absolutely no demonstrated skill. Policymakers are being provided information that is, at best, no better than what can be achieved by using historical and recent paleo-climate information and/or worst-case sequences of climate events. At worst, however, these predictions could significantly mislead policymakers about the actual threats that our key resources of water, energy, food, human health and ecosystems face in the coming decades.