This past Friday, Ben Herman commented:
“Scientific predictions are just that, predictions, and until they have been verified, are just that, unverified predictions.”
This inability to validate predictions decades from now has not stopped journals and funding agencies, such as the National Science Foundation, from reporting and funding such studies.
Misconception #3: Regional climate predictions provide testable skillful predictions of changes in the decadal and longer statistics of extreme weather.
There is a new article which perpetuates this myth that multi-decadal global model predictions are skillful. This, unfortunately, is just one example of many unsubstantiated scientific claims whose research is being accepted in peer-reviewed journals and funded by the National Science Foundation and other agencies.
This article is
Ren, Diandong, Rong Fu, Lance M. Leslie, Robert E. Dickinson, 2011: Predicting Storm-triggered Landslides. Bulletin of the American Meteorological Society, 129–139.
The abstract of the paper reads
“An advanced numerical modeling system projects rain-triggered landslides in a warming climate.”
Just a few excerpts from the article show that the foundation of this paper is flawed. They write [boldface added]
“What can we say about changes in storm-triggered landslides on 50-yr (or longer) time scales when we cannot predict rainfall next week? On one hand, the overall climate response of the precipitation to the increasing atmospheric concentrations of greenhouse gases may be proven predictable by current global coupled ocean–atmosphere climate models (CGCMs; Allen and Ingram 2002, and references therein). On the other hand, only very heavy or extreme precipitation triggers landslides (Iverson 2000). Although CGCMs are unable to project a specific storm’s location and timing, they can provide a statistically correct rainfall scenario for the region of interest.”
“….for point accuracy in predicting a slope’s stability, very high-resolution (spatial and temporal) precipitation data from observations or more likely from numerical weather prediction models are required to drive the landslide model. In the following experiments, we linearly downscale the CGCM-provided meteorological forcing to a high resolution digital elevation map.”
“All known physics considered, the CGCMs are well positioned to answer the question of whether increased temperatures cause increases in precipitation intensity and amount. Climate prediction is concerned with quantifying general trends and not with accurate prediction of specific storm events. This does not devalue the CGCM projections because, for many purposes (e.g., infrastructure construction), we are not interested in the exact timing of a mudslide and we only are required to know the time frame of its recurrence. The CGCMs, because of their limited horizontal resolution, are not expected to resolve individual precipitation events.”
That this paper passed peer review with statements that the CGCMs provide a “statistically correct rainfall scenario for the region of interest” and that “CGCMs, because of their limited horizontal resolution, are not expected to resolve individual precipitation events” is astounding.
There is no way that realistic model simulations of extreme precipitation are possible by linearly downscaling from the CGCMs, as was discussed, for example, in my posts
They also have not been shown to provide “statistically correct rainfall scenarios” as discussed, for example, by
where they wrote
“The scientific community is developing regional climate downscaling (RCD) techniques to reconcile the scale mismatch between coarse-resolution OA/GCMs and location-specific information needs of adaptation planners……It is becoming apparent, however, that downscaling also has serious practical limitations, especially where the meteorological data needed for model calibration may be of dubious quality or patchy, the links between regional and local climate are poorly understood or resolved, and where technical capacity is not in place. Another concern is that high-resolution downscaling can be misconstrued as accurate downscaling (Dessai et al., 2009). In other words, our ability to downscale to finer time and space scales does not imply that our confidence is any greater in the resulting scenarios.”
It is clear that considerable research funding is being provided to support work that does not follow the scientific method. As was presented in the post
There has been a development over the last 10–15 years or so in the scientific peer-reviewed literature that is short-circuiting the scientific method.
The scientific method involves developing a hypothesis and then seeking to refute it. If all attempts to discredit the hypothesis fail, we start to accept the proposed theory as an accurate description of how the real world works.
A useful summary of the scientific method is given on the website sciencebuddies.org, where they list six steps:
- Ask a Question
- Do Background Research
- Construct a Hypothesis
- Test Your Hypothesis by Doing an Experiment
- Analyze Your Data and Draw a Conclusion
- Communicate Your Results
Unfortunately, in recent years papers have been published in the peer-reviewed literature that fail to follow these proper steps of scientific investigation. These papers are short-circuiting the scientific method.
As written in the IAC Review of the IPCC report (which I reproduced in the above weblog post)
“…the guidance was often applied to statements that are so vague they cannot be falsified.”
This is a correct conclusion, and it applies to the Ren et al. Bulletin of the American Meteorological Society paper, as well as to all such studies whose findings cannot be falsified.
The acceptance of hypotheses as facts in the publication process, including this Ren et al. 2011 paper, is one main reason that the policy community is being significantly misinformed about the actual status of our understanding of the climate system and the role of humans within it.