We have been alerted by Robert Webb of NOAA to a study which applies the bottom-up, resource-based perspective that is discussed in my February 9, 2011 post.
This study is reported in the Central Valley Flood Management Planning Program report.
Although there is still too much acceptance of the multi-decadal IPCC climate predictions in parts of the report, it does include the following insightful text [highlight added]:
“Most of the existing climate change impacts analysis uses a projection-oriented “top-down” approach that considers a range of scenarios of world development. These scenarios include greenhouse gas emissions that serve as input to GCMs. GCM output serves as input to impact models (with or without inclusion of adaptive actions). Under this approach, analysis of the probability of certain impacts could largely depend on the ability of the GCMs to characterize that probability, which may be more subjective than the level of rigor required to support a risk-based analysis (Dessai and Hulme, 2003). In flood management, risk-based analysis is often based on probabilities derived from event frequency documented in historical records. However, the extreme events and their corresponding climate signals are the most uncertain elements of the climate change research. As a result, additional consideration is necessary of an appropriate approach for a climate change vulnerability analysis in the context of flood management.
“Another approach, the “bottom-up” approach, has seen greater development and application in recent years. The bottom-up approach reflects a focus on the underlying adaptive capacity of the system under study, emphasizing broader social impacts. It is place-based and deals with specific resources of interest. Flood managers could start with existing knowledge of the system and use evaluation tools to identify changes in climate that may be most threatening to long-term management goals and practices – critical system vulnerabilities. GCM outputs are then used as a reference to assess the likelihood of such system-critical vulnerabilities (Ray et al, 2008; Dessai and Hulme, 2003). This approach may ease concerns for policy makers who are hesitant to move forward with policy decisions while climate uncertainties remain.”
My Comment: Here is an example of overconfidence in the multi-decadal global climate model predictions as covering the envelope of possible future climate conditions.
The report, however, in Section 2.2.1, has a good discussion of the value of the bottom-up approach, along with specifics on how it should be applied. An excerpt from this section reads:
Vulnerability can be assessed from various different levels and with different focus. Critical components of the flood management system have associated thresholds of vulnerability, the crossing of which can cause undesirable consequences. The first step is to identify components and thresholds that exist on several spatial scales. Examples include a reservoir losing capacity to regulate flows downstream, a reservoir (or a system of reservoirs) exceeding its objective release, or an infrastructure (e.g., dam, levee) failure.
Once thresholds for critical system components are identified, the consequences of exceeding the thresholds on a community level can be quantified. For example, a reservoir losing its capacity to regulate downstream flows would have large-scale, systemwide consequences. The effects of crossing a systemwide threshold would likely cascade through the system, causing other thresholds to be crossed. Other critical thresholds would have more moderate, regional consequences, such as a reservoir exceeding its objective release. At the smallest, most local scale, a levee failure may have severe impacts to a specific protection area, but less impact on other parts of the flood management system and operations.
Defining critical thresholds that will need analysis requires a level of agreement among the various State, federal, and local entities with flood risk management responsibilities. It is conceivable that components with potential broader damages to communities (including natural communities) would be easier for broad agreement for CVFPP systemwide application. However, for local flood management studies with a more finite project scope, the local critical thresholds could be used without exhausting available resources.
Identify Causal Conditions
The next step is to define the hydrologic conditions required for a given threshold to be exceeded. These conditions can be described by a set of hydrologic metrics. Critical thresholds for large-scale, systemwide components will be affected by relatively fewer sets of hydrologic metrics. In contrast, critical thresholds for local components will be influenced by significantly more sets of hydrologic metrics at various locations throughout the flood management system.
Hydrologic conditions leading to threshold exceedence are linked to atmospheric patterns that can be affected by climate change. These patterns can be described by a set of atmospheric metrics that can be sampled from a future projection of climate and translated into hydrologic metrics for planning purposes. Subject to additional investigation, it is anticipated that for systemwide components, relatively fewer sets of atmospheric metrics will correspond to the hydrologic metrics, which in turn, correspond to critical thresholds, and more sets for critical thresholds for local components.
My Comment: The statement that “[t]hese patterns can be described by a set of atmospheric metrics that can be sampled from a future projection of climate and translated into hydrologic metrics for planning purposes” is an example in the report where they still accept the multi-decadal global climate model predictions as robust. However, the rest of the text is excellent.
The Report continues
Assess Likelihood of Exceedence
The final step in the approach is to assess the likelihood of threshold exceedence. It is anticipated that this would be an assessment against baseline conditions or other base of comparison, and would be conducted qualitatively based on available GCMs. It remains to be determined whether current climate change science can provide adequate information to inform the process. If so, an analysis of the likelihood of crossing critical thresholds can be performed, and the results will inform planning analysis for further investment in the flood management system. If not, identification of vulnerabilities will help identify areas of needed climate science investment to obtain adequate information.
My Comment: Instead of writing that the assessment “would be conducted qualitatively based on available GCMs”, the authors of the report should, in addition to the examination of the current climate, write that “[it] remains to be determined whether multi-decadal climate predictions can provide adequate information to inform the process”.
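To make concrete the contrast the report draws between risk-based analysis grounded in historically documented event frequency and reliance on GCM output, here is a minimal Python sketch of the likelihood-of-exceedence step for a single critical threshold. All flow values, the threshold, and the variable names are hypothetical illustrations, not data from the report:

```python
# Bottom-up vulnerability sketch: estimate how often a critical
# flood-management threshold (e.g., a reservoir's objective release)
# is exceeded in a historical record versus a projected series.
# All numbers below are hypothetical, for illustration only.

def exceedance_probability(annual_peaks, threshold):
    """Empirical probability that an annual peak flow exceeds the threshold."""
    if not annual_peaks:
        raise ValueError("need at least one year of data")
    exceedances = sum(1 for q in annual_peaks if q > threshold)
    return exceedances / len(annual_peaks)

# Hypothetical annual peak flows (cubic meters per second).
historical = [410, 520, 380, 610, 450, 700, 390, 480, 560, 430]
projected  = [470, 640, 420, 720, 510, 760, 450, 530, 650, 490]

OBJECTIVE_RELEASE = 600  # hypothetical critical threshold

p_baseline  = exceedance_probability(historical, OBJECTIVE_RELEASE)
p_projected = exceedance_probability(projected, OBJECTIVE_RELEASE)

print(f"baseline exceedance probability:  {p_baseline:.2f}")
print(f"projected exceedance probability: {p_projected:.2f}")
```

The baseline estimate rests only on the observed record; the projected estimate inherits whatever skill (or lack of skill) the climate model series has, which is precisely the point at issue in the report.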
While the change-over to the bottom-up approach, and the recognition that the multi-decadal climate models cannot provide any skill at predicting changes in climate statistics, are unfortunately incomplete, the Central Valley Flood Management Planning Program report is a movement in the direction that we propose in our paper:
Pielke Sr., R.A., R. Wilby, D. Niyogi, F. Hossain, K. Dairuku, J. Adegoke, G. Kallos, T. Seastedt, and K. Suding, 2011: Dealing with complexity and extreme events using a bottom-up, resource-based vulnerability perspective. AGU Monograph on Complexity and Extreme Events in Geosciences, in press.
However, there is still too much acceptance of the multi-decadal global models as having predictive skill, even in the bottom-up vulnerability part of the report. The requirement for the models to be of value is actually quite high: they not only have to predict the current climate statistics correctly, but also the changes in these statistics over multi-decadal time scales. They have never done that, to my knowledge, even in a hindcast mode.