I learned about this interview with Michael Mann from Judy Curry’s post.
The text is below, with highlights added and my comments inserted at several places. As I discuss below, Mike is misleading in his defense of multi-decadal climate model predictions as a robust scientific tool for forecasting changes in climate statistics decades from now.
The interview starts [highlights added]
Penn State climate modeler Michael Mann talks about what computer models can tell us–and what they don’t need to. David Biello reports
Fair warning: the following is more than 60 seconds, and it’s about climate change.
“Even in high school my idea of a good time was sitting in front of a computer and solving problems.” Climatologist Michael Mann. “And that has always been true. I love using computational methods to learn about the way, hopefully, the way the world actually works.”
Some critics, such as physicist Freeman Dyson, charge that climate change science relies too much on such computer models. And even worse, that the climate scientists behind them are too much in love with their computational creations. Such mathematical approximations are crude, failing to capture the real world climate impacts of a cloud, for example. That makes them useful for understanding climate but not for predicting climate change, Dyson has argued. I asked Mann in a recent phone interview how he responded to such arguments.
My Comment: Freeman Dyson is 100% correct. As an example of this adoration of climate modeling, below is a quote from the Executive Summary of the 2006 report CCSP 1.1:
Although the majority of observational data sets show more warming at the surface than in the troposphere, some observational data sets show the opposite behavior. Almost all model simulations show more warming in the troposphere than at the surface. This difference between models and observations may arise from errors that are common to all models, from errors in the observational data sets, or from a combination of these factors. The second explanation [i.e. “errors in the observational data sets”] is favored, but the issue is still open.
As indicated by that quote, the preference is to believe the models over real-world observations. That is backwards thinking! At least they accept that the issue is still open.
The Scientific American interview continues
“I have to wonder if Freeman Dyson will get on an airplane or if he’ll drive a car because a lot of the modern day conveniences of life and a lot of our technological innovations of modern life are based on phenomena so complicated that we need to be able to construct models of them before we deploy that technology.
My Comment: Mike does not properly distinguish between the types of modeling. When airplanes or cars are built, the engineers are testing their models using real world airplanes and cars, as well as with wind tunnel evaluations. They can ground-truth their models.
With respect to atmospheric modeling, numerical modeling prediction of the weather for the coming days is ground-truthing, as the forecasts can be compared with real-world observations just a few days later.
With multi-decadal climate predictions, the models can only realistically be tested against past climate conditions, unless we wait for the coming decades to pass. Even in this hindcast mode, however, the global climate models (whether downscaled to regions or not) have failed to predict changes in the statistics of regional climate. I invite any climate scientist to present evidence on my weblog (as an unedited guest post) that refutes this conclusion.
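To make concrete what "skill" means in this hindcast sense, the toy sketch below scores a model hindcast against observations relative to a naive climatology baseline, using a standard MSE-based skill score. A model shows skill only if it beats that baseline. All numbers here are invented for illustration; they are not real model output or observations.

```python
# Toy illustration of hindcast skill verification: a model "has skill"
# only if its hindcast beats a naive reference forecast (here, the
# observed climatological mean). All data below are synthetic.

def mse(pred, obs):
    """Mean squared error between two equal-length sequences."""
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

def skill_score(model_hindcast, observations):
    """MSE-based skill score relative to climatology.

    1.0 = perfect; 0 = no better than climatology; < 0 = worse.
    """
    climatology = sum(observations) / len(observations)
    reference = [climatology] * len(observations)
    return 1.0 - mse(model_hindcast, observations) / mse(reference, observations)

# Hypothetical regional temperature anomalies (deg C) over past decades.
obs = [0.1, -0.2, 0.3, 0.0, 0.4, 0.2]
hindcast = [0.3, 0.1, 0.2, 0.4, 0.1, 0.5]  # invented model output

print(round(skill_score(hindcast, obs), 3))  # negative => worse than climatology
```

With these invented numbers the score comes out negative, i.e. the hypothetical hindcast is worse than simply forecasting the climatological mean; that is the kind of quantitative test the text argues multi-decadal regional predictions must pass before being used by the impacts community.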
The interview continues
“In the case of the climate, of course, there is only one Earth, so we can’t do experiments with multiple Earths and formulate the science of climate change as if it’s an entirely observationally based, controlled experiment. We need to rely on conceptual models of the system we’re studying and it’s no different in any other field of science. In fact, the way science progresses is by conceptual models being put forward and then testing them against observations. One of the most, I think, striking examples of that was just within the last month, this announcement, the Higgs Boson.
“Its existence was predicted by the standard model of particle physics and the fact that there’s—we got a glimpse of it, it looks like it may very well be there—is a real victory for that model of science where you test, you put forward conceptual models of the way the world or the universe works and test those models against the observations and see the extent to which they can predict new observations and when they do, it gives you increased confidence in the models.
“It’s no different in the case of climate change. The models are simply at some level a formulation of our conceptual understanding and when someone says they don’t like models then I’m wondering what alternative they have in mind.
My Comment: Mike is in error. With the Higgs Boson, its existence (the theory) is being tested against real-world data. With the prediction of climate change, even for coarse metrics such as the magnitude of global warming as diagnosed by changes in the heat content of the climate system, the global average forecasts are on the verge of failing (e.g. see)! With respect to the prediction of multi-decadal changes in regional climate statistics, which are needed by the impacts community, these models have so far failed to show any skill.
The Scientific American interview continues
“How do they formalize their conceptual understanding? Through back-of-the-envelope, poorly conceived thought experiments? It’s somewhat bewildering when I hear something like that from a premier scientist, and I think it belies a misunderstanding of the way models are used. In climate science, for example, where we don’t need an elaborate climate model to understand the basic physics and chemistry of greenhouse gases, so at some level the fact that increased CO2 warms the planet is a consequence of very basic physics and chemistry.
My Comment: Mike is correct – “we don’t need an elaborate climate model to understand the basic physics and chemistry of greenhouse gases, so at some level the fact that increased CO2 warms the planet is a consequence of very basic physics and chemistry.” However, Mike misses the point that this knowledge of physics does not then result in skillful global and regional predictions of changes in climate statistics. The climate system is much more than just changes in the atmospheric concentration of CO2 and a few other greenhouse gases. It is Mike who is misunderstanding “the way models are used”. He is confusing tested and verified model predictions with unverified model results.
The interview continues
“The details, how much warming you get, depend on things like feedbacks. And you can’t incorporate feedbacks through a back of the envelope approach. You actually have to critically think about the interactions that take place in this very complex system. And those feedbacks ultimately determine the extent to which that initial warming will be amplified, but they don’t even change the fact that you elevate greenhouse gas concentrations in the atmosphere and you’ll get a warming of the surface. That’s basic physics and chemistry and people who claim that they don’t believe that, they don’t believe we’re warming the planet through increasing CO2 levels because of climate models, they don’t understand the fact that you don’t need a climate model to come to that conclusion. It’s basic physics and chemistry.
My Comment: Mike is arguing about an issue that is not in dispute! Of course, if you add greenhouse gases, there is a radiative warming effect. However, its magnitude is relatively small unless there is a significant positive radiative feedback from added water vapor. It is this feedback, which involves the entire hydrologic cycle, that is still so poorly understood; e.g. see
Stephens, G. L., T. L’Ecuyer, R. Forbes, A. Gettlemen, J.‐C. Golaz, A. Bodas‐Salcedo, K. Suzuki, P. Gabriel, and J. Haynes (2010), Dreary state of precipitation in global models, J. Geophys. Res., 115, D24211, doi:10.1029/2010JD014532.
The interview continues
“The climate models come in because we wanna know how that’s modified by feedback. What are the important feedbacks? How will atmospheric circulation patterns change? And again, does Freeman Dyson, assuming he is willing to get on an airplane even though models have been used to test the performance of the airplane, assuming he does and he knows he’s going somewhere where they’ve predicted, where weather models have predicted rainfall for the next seven days, does he not pack his umbrella because he doesn’t believe the models? It’s just in that case the worst that will happen is somebody gets wet when they wouldn’t otherwise have. In this case, the worst that can happen is that we ruin the planet.”
My Comment: Mike is misleading in his answer. As I wrote earlier, the ability of an airplane to fly, and of a weather forecast to verify days later, is tested against real data! Climate predictions over decadal time periods, in contrast, fail to provide skillful forecasts when tested in hindcast mode. In fact, they are misleading policymakers in their decision making. Mike misleads readers when he equates testable predictions that have been confirmed with real-world observations with predictions that have failed to show any skill. He implicitly acknowledges this as-yet lack of model skill when he asks, “What are the important feedbacks? How will atmospheric circulation patterns change?” Indeed, these are two major issues we still do not understand, and Mike should have emphasized that.
As written in the Scientific American interview, Freeman Dyson is 100% correct
“that climate change science relies too much on such computer models. And even worse, that the climate scientists behind them are too much in love with their computational creations. Such mathematical approximations are crude, failing to capture the real world climate impacts of a cloud, for example. That makes them useful for understanding climate but not for predicting climate change”
It is an open question as to how long it is going to take funding agencies and policymakers to recognize this reality.