In Judy Curry’s excellent post, she reported that:
“Possible strategies for making decadal climate predictions on both global and regional scales include:

- global climate model simulations, including dynamical downscaling methods using regional climate models forced by the global climate model simulations
- statistical forecast methods that combine projections of forcings and known modes of natural internal variability
- statistical/dynamical methods, that use elements of both of the previous methods
Climate model projections from CMIP3
While the CMIP3 20th century simulations used in the AR4 show some average skill on subcontinental scales (e.g. the U.S.), they show little skill on regional scales, and none in many regions (notably the southeastern U.S., which is a location that I have investigated closely.) One strategy that has been used for future projections of regional climate change is to take the projections from the CMIP3 21st century simulations and use these fields to force higher resolution regional climate models (referred to as dynamical downscaling.) The idea of dynamical downscaling is to force a regional climate model nested from a scale of say the U.S. with successively higher resolution grids down from the continental to the regional to the local scale of interest. An ambitious dynamical downscaling effort for the U.S. climate based on the CMIP3 simulations is described here.”
With respect to the example that Judy presents [Bias Corrected and Downscaled WCRP CMIP3 Climate Projections], this is actually a statistical downscaling project [h/t to Chris Weaver for alerting me to this!]. Nevertheless, Judy’s description of dynamic downscaling is correct.
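To illustrate what “statistical downscaling” involves, here is a minimal sketch of quantile-mapping bias correction, one common statistical technique. The data and numbers below are synthetic and purely illustrative; this is not the actual BCSD procedure, which also includes spatial disaggregation:

```python
import bisect
import random
import statistics

def quantile_map(model_hist, obs_hist, model_future):
    """Illustrative quantile mapping: each future model value is replaced by
    the observed value at the same empirical quantile in the calibration period."""
    sorted_model = sorted(model_hist)
    sorted_obs = sorted(obs_hist)
    n = len(sorted_model)
    out = []
    for v in model_future:
        rank = bisect.bisect_left(sorted_model, v)  # empirical quantile index of v
        out.append(sorted_obs[min(rank, n - 1)])    # observed value at that quantile
    return out

# Synthetic example: a model that runs 2 degrees too warm in the calibration period
random.seed(0)
obs_hist = [random.gauss(15.0, 3.0) for _ in range(2000)]  # "observed" temperatures
model_hist = [x + 2.0 for x in obs_hist]                   # biased model, same shape
model_future = [x + 1.0 for x in model_hist]               # model projects +1 of warming

corrected = quantile_map(model_hist, obs_hist, model_future)
shift = statistics.mean(corrected) - statistics.mean(obs_hist)
print(f"{shift:.1f}")  # the 2-degree bias is removed; the ~+1 projected change remains
```

Note that this correction can only remove the bias seen in the calibration period; it inherits, unchanged, whatever errors the parent model makes in the projected change itself.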
Dynamic downscaling models are called “regional climate models [RCMs]”. This type of dynamic downscaling is a Type 4 application as defined in the paper
Castro, C.L., R.A. Pielke Sr., and G. Leoncini, 2005: Dynamical downscaling: Assessment of value retained and added using the Regional Atmospheric Modeling System (RAMS). J. Geophys. Res. – Atmospheres, 110, No. D5, D05108, doi:10.1029/2004JD004721; see Tables 1 and 2.
The four types of downscaling are cataloged as follows:
1. Type 1: The regional dynamic model is forced by lateral boundary conditions from a global numerical weather prediction model or a global data reanalysis at regular time intervals (typically 6 or 12 h), by bottom boundary conditions (e.g., terrain, soil moisture, etc.), and by specified initial conditions. A global numerical weather prediction model is one in which the initial atmospheric conditions are not yet forgotten. Type 1 models are called “numerical weather prediction models”. This application of dynamic downscaling is of considerable value, as it is the basis for our short-term weather forecasts.
2. Type 2: The regional dynamic model’s initial atmospheric conditions have been forgotten, but the results are still dependent on the lateral boundary conditions from a global numerical weather prediction model (in which the initial atmospheric conditions are not yet forgotten) or a global data reanalysis, and on the bottom boundary conditions. Type 2 includes using the ERA-40 or NCEP Reanalyses, for example, as the best estimate of the large-scale atmospheric structure at selected time intervals (e.g., 6 hours). Reanalyses insert real-world observations into a model in order to obtain as accurate a description (diagnosis) as possible of the atmospheric distribution of temperature, humidity, winds, etc. This type of dynamic downscaling permits us to test the maximum forecast skill that is achievable with Type 3 and Type 4 downscaling.
3. Type 3: The regional dynamic model’s lateral boundary conditions are provided by a global numerical prediction model that is forced with specified real-world surface boundary conditions, but in which the initial atmospheric conditions have been forgotten. Type 3 includes seasonal forecasts in which certain climate system attributes, such as sea surface temperature, are prescribed. This type of dynamic downscaling is at the frontier of assessing how far into the future we can produce skillful weather forecasts.
4. Type 4: Lateral boundary conditions are provided by a coupled earth-system global climate model in which the atmosphere, ocean, biosphere and cryosphere are interactive. Other than terrain, all other components of the climate system are predicted and are not constrained by real-world observations. Type 4 includes the 2007 IPCC runs that claim to predict climate decades from now. Type 4 downscaling, while the basis for 21st-century climate change impact studies, has not demonstrated predictive skill.
Necessarily, the prediction skill decreases as one moves from Type 1 to Type 2 to Type 3 to Type 4 since progressively more climate variables must be predicted rather than prescribed from observations.
Millions of dollars are being spent by the National Science Foundation and others to produce high-resolution forecasts (Type 4) for the coming decades of the 21st century, forecasts which have no demonstrated skill.
There are many dynamic downscaling studies that predict climate decades into the future; see, for example, just two random examples obtained by a search on Google Scholar:
Kunstmann, H., K. Schneider, R. Forkel, and R. Knoche, 2004: Impact analysis of climate change for an Alpine catchment using high resolution dynamic downscaling of ECHAM4 time slices. Hydrology and Earth System Sciences, 8(6), 1031-1045.
Frei, C., R. Schöll, S. Fukutome, J. Schmidli, and P. L. Vidale (2006), Future change of precipitation extremes in Europe: Intercomparison of scenarios from regional climate models, J. Geophys. Res., 111, D06105, doi:10.1029/2005JD005965.
The NSF list of awarded projects is also quite informative on their funding of these multi-decadal regional climate predictions (using just the global climate models, or also using regional climate models), where we have to wait decades to verify the predictions (well after the funding has been spent!). Here is a random sample of such funding:
“Using a predictive model of the coupled natural (climate) and social (violence) systems, with feedback loops and mediating socio-political-economic variables, the PIs will measure the impact of adverse climate change and/or changes in climate variability on the rate of armed conflict, determine which mediating factors influence the rate of this impact, and project the violence outcomes on the basis of different climate change/variability scenarios.”
“Improved understanding of the impacts of mesoscale atmosphere ocean coupling will form the foundation for determining likely changes induced by global warming on the regional climate of California and its coastal ocean.”
“Previous work by the principal investigator and others shows that midlatitude stability increases with global warming, with potentially significant consequences for the hydrological cycle.”
“The principal investigators have shown that the seasonal cycles of temperature and precipitation are delayed by a few days at the end of the 21st century compared to the end of the 20th century in climate change simulations.”
“The work performed under the grant will be of broad interest because of the need for better estimates of the degree to which the world will warm in response to greenhouse gas increases.”
Those parts of such projects that produce predictions decades into the future, quite frankly, are a complete waste of money. The assumption that spatial and temporal prediction skill is improved over what is in the global multi-decadal climate model predictions is a misleading illusion. The global multi-decadal climate model predictions, themselves, are necessarily inaccurate since they do not include all of the first-order climate forcings (e.g., see NRC, 2005 and Pielke et al., 2009).
The claim of an improvement over these (inaccurate) global models is based primarily on the finer-scale spatial structure that is evident in the maps that the regional climate models produce. However, the predicted social and environmental impacts in coming decades from these dynamically downscaled model results are not scientifically robust.
As Rob Wilby discussed (bold-face added); see my post:
“The scientific community is developing regional climate downscaling (RCD) techniques to reconcile the scale mismatch between coarse-resolution OA/GCMs and location-specific information needs of adaptation planners……It is becoming apparent, however, that downscaling also has serious practical limitations, especially where the meteorological data needed for model calibration may be of dubious quality or patchy, the links between regional and local climate are poorly understood or resolved, and where technical capacity is not in place. Another concern is that high-resolution downscaling can be misconstrued as accurate downscaling (Dessai et al., 2009). In other words, our ability to downscale to finer time and space scales does not imply that our confidence is any greater in the resulting scenarios.”
The reasons for the necessary failure of the regional climate models (as a dead-end engineering and science tool) can be summarized as follows:
- The parent global multi-decadal predictions are unable to simulate major atmospheric circulation features such as the PDO, NAO, El Niño, La Niña, etc. Such regional atmospheric features explain, for example, the recent extreme cold and snow in western Europe. However, the regional climate models are slaves to the lateral boundary conditions and to interior nudging from their parent models, as is shown, for example, in these papers:
Castro, C.L., R.A. Pielke Sr., and G. Leoncini, 2005: Dynamical downscaling: Assessment of value retained and added using the Regional Atmospheric Modeling System (RAMS). J. Geophys. Res. – Atmospheres, 110, No. D5, D05108, doi:10.1029/2004JD004721
Rockel, B., C.L. Castro, R.A. Pielke Sr., H. von Storch, and G. Leoncini, 2008: Dynamical downscaling: Assessment of model system dependent retained and added variability for two different regional climate models. J. Geophys. Res., 113, D21107, doi:10.1029/2007JD009461
Lo, J.C.-F., Z.-L. Yang, and R.A. Pielke Sr., 2008: Assessment of three dynamical climate downscaling methods using the Weather Research and Forecasting (WRF) Model. J. Geophys. Res., 113, D09112, doi:10.1029/2007JD009216.
If the global multi-decadal climate model predictions cannot accurately predict the larger scale circulation features of PDO, NAO, El Niño, La Niña etc, there is no way they can provide accurate lateral boundary conditions and interior nudging to the regional climate models (RCMs). The RCMs themselves do not have the domain scale to skillfully predict these atmospheric features.
- The advocates of the multi-decadal climate predictions state that, while they recognize that they cannot predict future climate change as an initial value problem, they can predict the change in the statistics of the future climate as a boundary value problem. However, there is value in predicting climate change only IF they can skillfully predict the CHANGES in the statistics of the weather and other aspects of the climate system.
There is no evidence, however, that the models can predict the change in these climate statistics. Unless they can predict changes in the statistics of climate, the impacts community, in order to assess risks in the future, could just use the historical and paleo-records and worst-case sequences of events for this purpose. While there is value in assessing the time and spatial limits of skillful climate forecasts, and in providing such skillful forecasts to the impacts community, the climate modeling community needs to quantitatively test these limits (e.g., such as proposed by Judah Cohen; e.g., see).
- Regional climate models (RCMs) themselves will shortly become irrelevant, as the global models achieve the same spatial resolution as the RCMs. This improvement in resolution is being achieved by the continued advancement of computational power.
The bottom line message is that the global and regional climate models are providing a level of confidence in forecast skill of the coming decades that does not exist.
- I do, of course, support the goal of assessing the predictability of global and regional climate on seasonal, yearly and decadal time scales. I discuss this in my post Comment On Judy Curry’s Post “Scenarios: 2010-2030. Part I”.
The assessment of the ability to make skillful climate forecasts (by comparing with real-world observations; this is the evaluation of predictability), however, is not the same as providing predictions (forecasts) of climate change decades into the future to the impacts community. Large amounts of research funds are being wasted on these forecasts.
Moreover, as a test for predictability, the dynamic downscaled predictions need to show skill over that achieved by using statistical downscaling from the parent model in a hindcast mode. Unless the dynamic models can show skill above that achieved by the statistically downscaled results, they are not useful, and, indeed, will provide misleading, inaccurate results to policymakers and others.
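As a sketch of how such a hindcast comparison can be quantified, one can compute a simple RMSE-based skill score of the dynamic hindcast against the statistical baseline. All series and numbers below are synthetic and purely illustrative, not real model output:

```python
import math

def rmse(pred, obs):
    """Root-mean-square error between a hindcast series and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def skill_score(candidate, reference, obs):
    """Skill of `candidate` relative to `reference`:
    positive -> candidate beats the reference; zero or below -> no added value."""
    return 1.0 - rmse(candidate, obs) / rmse(reference, obs)

# Illustrative hindcast series (synthetic numbers)
obs         = [14.2, 15.1, 13.8, 16.0, 15.5]   # observed values
statistical = [14.0, 15.3, 14.1, 15.6, 15.2]   # statistically downscaled hindcast
dynamic     = [13.5, 15.9, 14.6, 15.1, 16.3]   # dynamically downscaled hindcast

ss = skill_score(dynamic, statistical, obs)
print(f"skill of dynamic over statistical: {ss:.2f}")
# Negative in this synthetic case: the dynamic hindcast adds no value
# over the statistical baseline
```

The point of the test is exactly this comparison: unless the skill score of the dynamic hindcast against the (far cheaper) statistical baseline is positive, the added computational expense buys nothing.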
As I have suggested, there is a much more effective and scientifically robust approach, as summarized in my post
This recommendation can be written as:
There are five broad areas that we can use to define the need for vulnerability assessments: water, food, energy, human health and ecosystem function. Each area has societally critical resources. The vulnerability concept requires the determination of the major threats to these resources from climate, but also from other social and environmental issues.
After these threats are identified for each resource, then the relative risk from natural- and human-caused climate change (estimated from global and regional climate model predictions that have been shown to have quantifiable skill, but also from the historical, paleo-record and worst case sequences of events) can be compared with other risks in order to adopt the optimal mitigation/adaptation strategy.