Monthly Archives: January 2012

Candid Admission On The Limitations In Multi-Decadal Climate Model Predictions In A BBC News Article On The UK Climate Impact Plan

On January 26, 2012, David Shukman of the BBC published the news article

First report on UK climate impact

The article contains climate predictions for decades from now, which, as discussed in our new article

Pielke Sr., R.A., and R.L. Wilby, 2012: Regional climate downscaling – what’s the point? Eos Forum, in press.

have no predictive skill. What is interesting with respect to the BBC article, however, are several candid quotes [which I have highlighted]

The text of the BBC article starts with

Climate change this century poses both risks and opportunities, according to the first comprehensive government assessment of its type. The report warns that flooding, heatwaves and water shortages could become more likely. But benefits could include new shipping lanes through the Arctic, fewer cold-related deaths in winter and higher crop yields. The findings come in the Climate Change Risk Assessment. This 2,000-page document has been produced by the Department for Environment, Food and Rural Affairs (Defra). It forms part of the government’s strategy for coping with global warming. The research was carried out over the past three years and involved studying the possible impacts in 11 key areas including agriculture, flooding and transport. The assessments rely on multiple scenarios based on computer modelling of the future climate. The authors admit that there are large uncertainties leading to a wide range of possible results.

The candid quotes are highlighted below

All the scenarios rely on computer models of the future climate and therefore inherently involve uncertainties. One of those involved in the report, defending the reliance on models, told me: “They’re the best we’ve got, they’re all we’ve got.” One aim of the work is to raise awareness of the scale of possible changes and to encourage key organisations to plan ahead. Environment Secretary Caroline Spelman said of the report: “It shows what life could be like if we stopped our preparations now, and the consequences such a decision would mean for our economic stability.”

The claim, with respect to the multi-decadal climate model predictions, that “[t]hey’re the best we’ve got, they’re all we’ve got” is wrong on two counts. First, these multi-decadal climate model predictions have no demonstrated skill at predicting changes in climate statistics when run in a hindcast mode. Second, the bottom-up, resource-based (contextual) vulnerability approach that we present in our paper

Pielke Sr., R.A., R. Wilby, D. Niyogi, F. Hossain, K. Dairuku, J. Adegoke, G. Kallos, T. Seastedt, and K. Suding, 2012: Dealing with complexity and extreme events using a bottom-up, resource-based vulnerability perspective. AGU Monograph on Complexity and Extreme Events in Geosciences, in press.

is a much better way to assess risks faced by society and the environment in the coming decades.

source of image

Comments Off

Filed under Climate Science Misconceptions, Climate Science Reporting, Vulnerability Paradigm

Comment On Gavin Schmidt’s Post On His Weblog Real Climate Regarding The Dominant Role Of Anthropogenic Greenhouse Gas Concentrations On The Global Average Temperature Trends

Gavin Schmidt has presented information in his weblog post on Real Climate titled

The AR4 attribution statement

which is incomplete and misleading.

His post starts with the text [highlight added]

Back in 2007, the IPCC AR4 SPM stated that:

Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.

This is a clear statement that I think is very well supported and correctly reflects the opinion of most climate scientists on the subject (and was re-affirmed in two recent papers (Jones and Stott, 2011; Huber and Knutti, 2011)). It isn’t an isolated conclusion from a single study, but comes from an assessment of the changing patterns of surface and tropospheric warming, stratospheric cooling, ocean heat content changes, land-ocean contrasts, etc. that collectively demonstrate that there are detectable changes occurring which we can attempt to attribute to one or more physical causes.

He persists in using multi-decadal global model predictions as the tool to claim that the increase in global average temperatures over the last 50 years or so can mostly be explained by the increase in anthropogenic greenhouse gas concentrations [and when he says “global average” he means “global annual average”]. In our article

Pielke Sr., R., K. Beven, G. Brasseur, J. Calvert, M. Chahine, R. Dickerson, D. Entekhabi, E. Foufoula-Georgiou, H. Gupta, V. Gupta, W. Krajewski, E. Philip Krider, W. K.M. Lau, J. McDonnell, W. Rossow, J. Schaake, J. Smith, S. Sorooshian, and E. Wood, 2009: Climate change: The need to consider human forcings besides greenhouse gases. Eos, Vol. 90, No. 45, 10 November 2009, 413. Copyright (2009) American Geophysical Union

we wrote

Unfortunately, the 2007 Intergovernmental Panel on Climate Change (IPCC) assessment did not sufficiently acknowledge the importance of these other human climate forcings in altering….. global climate ……… It also placed too much emphasis on average global forcing from a limited set of human climate forcings.

To this we should add that “the 2007 Intergovernmental Panel on Climate Change (IPCC) assessment did not sufficiently acknowledge the importance of NATURAL climate forcings in altering….. global climate ………”

Actually, it is straightforward to shed doubt on Gavin’s (and the IPCC’s) claim. If the increase in anthropogenic greenhouse gas concentrations were so dominant, we would expect the global [annual] average lower troposphere temperature to more-or-less monotonically continue to rise over the last decade or so. This clearly has not occurred, as illustrated, for example, in the figure below for the lower troposphere [from RSS; Figure 7]

and from the UAH analysis (see)

The first tic mark on the x-axis in the RSS figure is 1979.

The ~global annual average lower tropospheric temperatures have been essentially flat for at least 10 years, presumably due to other human climate forcings, solar forcing, decadal and longer natural variability, and/or radiative feedbacks.
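A claim that a temperature series has been “essentially flat” can be made concrete by fitting a least-squares trend and asking whether it is distinguishable from zero. The sketch below uses synthetic numbers standing in for a decade of satellite anomalies, not the actual RSS or UAH data; the noise level and anomaly values are illustrative assumptions only:

```python
import numpy as np

def ols_trend(years, anomalies):
    """Least-squares linear trend in degrees C per decade, with a 2-sigma range.

    Note: this ignores autocorrelation, which would widen the uncertainty
    range for a real temperature anomaly series.
    """
    x = np.asarray(years, dtype=float)
    y = np.asarray(anomalies, dtype=float)
    n = len(x)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    # Standard error of the slope from the residual variance
    se = np.sqrt(np.sum(resid ** 2) / (n - 2) / np.sum((x - x.mean()) ** 2))
    return slope * 10.0, 2.0 * se * 10.0  # convert per-year to per-decade

# Synthetic flat-with-noise anomalies (deg C) for 2002-2011
rng = np.random.default_rng(0)
years = np.arange(2002, 2012)
anoms = 0.2 + rng.normal(0.0, 0.1, size=years.size)
trend, two_sigma = ols_trend(years, anoms)
print(f"trend = {trend:+.3f} +/- {two_sigma:.3f} C/decade")
```

A trend whose 2-sigma range straddles zero is what “essentially flat” means in this statistical sense.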

If Gavin were correct, we should also see the lower stratosphere continue to cool. As shown below (from RSS, figure 7), there has been no significant cooling for over 17 years!

Gavin is failing to see this complexity in the climate system. It is quite puzzling, as nearly all climate scientists accept a positive radiative forcing from the human addition of greenhouse gases, but many of us do not accept that it is the only first-order effect, nor that it is the most dominant in terms of the effect of these forcings on society and the environment.

He may yet be correct for 50 year time scales, but the recent evidence he refers to is actually working to refute his hypothesis.

In this context, he has also ignored the implications from the recent Loeb et al 2012 paper, which I posted on in

Brief Comment On The Nature Geoscience Paper “Observed Changes In Top-Of-The-Atmosphere Radiation And Upper-Ocean Heating Consistent Within Uncertainty” By Loeb Et Al 2012

In that post,  I wrote

Jim Hansen concluded in 2005 that the decadal mean planetary energy imbalance at the end of the 1990s was

“…0.85 Watts per meter squared is the imbalance at the end of the decade.”

This value falls within the uncertainty range of the Loeb et al 2012 study. However, we are 13 years since the end of the 20th century, so Jim Hansen’s value for the imbalance must now be larger (~0.95 Watts per meter squared from GISS?).
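The reasoning here amounts to a linear extrapolation of the imbalance forward in time. The growth rate below is a hypothetical number chosen only to illustrate how a ~0.95 Watts per meter squared figure could arise from the quoted 0.85; it is not an actual GISS model output:

```python
# Illustrative arithmetic only. Hansen's 0.85 W/m^2 is the decadal-mean value
# quoted above; the growth rate is a hypothetical assumption, not a GISS result.
imbalance_1999 = 0.85      # W/m^2 at the end of the 1990s (Hansen 2005)
years_elapsed = 13         # years since the end of the 20th century
assumed_growth = 0.0077    # W/m^2 per year, hypothetical linear growth rate
imbalance_now = imbalance_1999 + years_elapsed * assumed_growth
print(f"extrapolated imbalance ~ {imbalance_now:.2f} W/m^2")
```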

This question about whether or not the IPCC model predictions (as represented by the GISS models) are still consistent even with the large Loeb et al estimate should have been a major part of their article. The Loeb et al 2012 paper even cited the Hansen paper but did not take the next step and complete the model and observational comparisons. That the IPCC models are close to being refuted with respect to the magnitude of global warming, even with the large Loeb et al values, is an unspoken result of their findings. They missed a major implication from their results.

Gavin is very selective when he seeks to defend the dominance of anthropogenic greenhouse gases with respect to global annual average temperature changes.  In reality, Gavin’s conclusion on the role of the anthropogenic emissions of greenhouse gases as dominating changes in climate statistics is close to being refuted.

source of image


Filed under Climate Science Misconceptions

Seminar Announcement – On The Reliability Of Climate Models: How Well Do They Describe Observed Trends? By Geert Jan van Oldenborgh Of KNMI

By coincidence, after I posted

New Paper “Skill In The Trend And Internal Variability In A Multi-Model Decadal Prediction Ensemble” By Oldenborgh Et Al 2012

the seminar scheduled in Boulder, Colorado titled

On the reliability of climate models: How well do they describe observed trends?

by Geert Jan van Oldenborgh was announced. Geert works at KNMI (Royal Netherlands Meteorological Institute) in De Bilt, Netherlands, and his seminar is on Tuesday, January 31, 10:00 am in room 1D403 of the David Skaggs Research Center.

The abstract reads

Climate models are widely used to construct local projections of future climate changes. For these to be used as “forecasts” the ensemble of climate models has to be reliable in the sense that the projected probability of outcomes should correspond with the realised probability. In weather and seasonal forecasts this is verified over a set of past forecasts. Since the local climate change signal is now emerging from the weather noise in many regions of the world, the reliability of climate model ensembles can be estimated by comparing the observed and modelled trends in temperature and precipitation over the past 50 to 100 years. The spatial dimension is used to gather the necessary statistics.

My Comment:   Implicit in this statement is that there is a background climate signal from which a local effect is expected to emerge. In reality, climate is very nonlinear, and as illustrated later in the abstract, the demonstration of model predictive (explanatory) skill is not clearly shown. Indeed, in his paper

Oldenborgh, G.J. van, F.J. Doblas-Reyes, B. Wouters and W. Hazeleger,  2012: Skill in the trend and internal variability in a multi-model decadal prediction ensemble.  accepted, Clim. Dyn.

they write

The modelled trends agree well with observations in the global mean, but the agreement is not so good at the local scale

and

The skill assessment does not take into account the considerable biases and drift of the models.
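The reliability notion in the abstract — projected probabilities matching realised frequencies — is commonly checked with a rank (Talagrand) histogram over a set of hindcasts. The sketch below shows the generic verification idea on synthetic data; it is not the paper's specific method or data:

```python
import numpy as np

def rank_histogram(obs, ensemble):
    """Count the rank of each observation within its ensemble.

    obs:      shape (n_cases,)
    ensemble: shape (n_cases, n_members)
    A reliable ensemble yields a roughly flat histogram; U-shaped or
    dome-shaped histograms indicate under- or over-dispersion.
    """
    ranks = np.sum(ensemble < obs[:, None], axis=1)  # each rank in 0..n_members
    return np.bincount(ranks, minlength=ensemble.shape[1] + 1)

# Synthetic check: observations drawn from the same distribution as the
# members, so the ensemble is reliable by construction.
rng = np.random.default_rng(1)
n_cases, n_members = 5000, 9
ens = rng.normal(0.0, 1.0, size=(n_cases, n_members))
obs = rng.normal(0.0, 1.0, size=n_cases)
counts = rank_histogram(obs, ens)
print(counts)  # ten bins, each near n_cases / 10
```

With real decadal hindcasts the `obs` series is short, which is exactly the "large statistical uncertainties" caveat the authors note.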

The abstract continues

Although global and continental trends are represented well, it is shown that in many regions of the world the observed local trends are not within the ensemble of modelled trends. These areas are larger than would be expected on the basis of chance fluctuations and are therefore a consequence of either misrepresentation of the trends or underestimation of low-frequency variability in climate models. Downscaling with regional climate models does not change this conclusion beyond the addition of orographic details.

My Comment:  His report that “Downscaling with regional climate models does not change this conclusion beyond the addition of orographic details” provides further support to our conclusion in the paper

Pielke Sr., R.A., R. Wilby, D. Niyogi, F. Hossain, K. Dairuku, J. Adegoke, G. Kallos, T. Seastedt, and K. Suding, 2012: Dealing with complexity and extreme events using a bottom-up, resource-based vulnerability perspective. AGU Monograph on Complexity and Extreme Events in Geosciences, in press

that regional downscaling does not add any value beyond what can be achieved by simply interpolating the global model results to a finer-resolution grid for surface features such as terrain.
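The interpolation baseline referred to above is easy to make concrete: plain bilinear interpolation of a coarse model field onto a finer grid. A minimal sketch (the 2x2 temperature values are made up for illustration):

```python
import numpy as np

def refine_grid(coarse, factor):
    """Bilinearly interpolate a coarse 2-D field onto a grid refined by `factor`.

    This is the trivial baseline against which dynamical downscaling must add
    value: it introduces no new physical information, only a smoother field.
    """
    ny, nx = coarse.shape
    yf = np.linspace(0.0, ny - 1, (ny - 1) * factor + 1)
    xf = np.linspace(0.0, nx - 1, (nx - 1) * factor + 1)
    # Interpolate along x for each coarse row, then along y for each fine column
    rows = np.array([np.interp(xf, np.arange(nx), coarse[j]) for j in range(ny)])
    cols = np.array([np.interp(yf, np.arange(ny), rows[:, i]) for i in range(xf.size)])
    return cols.T

coarse = np.array([[280.0, 282.0],
                   [284.0, 286.0]])  # e.g. a 2x2 temperature field in K
fine = refine_grid(coarse, 2)
print(fine)  # 3x3 field; centre value is 283.0
```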

The abstract continues

For European temperature and precipitation trend we have investigated the causes of the discrepancies. In winter, both temperature and precipitation have increased much faster than modelled due to an increase in westerly circulation associated with a significant increase in air pressure over the Mediterranean. In spring and summer the faster rise of temperature is over the land areas of southern Europe. In the Netherlands it is associated with a large increase in global radiation. The concomitant rise in East Atlantic SST causes an increase in coastal precipitation that is absent in the climate models. This is partially explainable by a wrong ocean current system in the North Atlantic Ocean, which is a well-known deficiency of coarse resolution ocean models. Finally, the decrease of mist and fog caused by decreased air pollution is not represented in climate models. None of these factors is associated with known modes of low-frequency variability, leading to the conclusion that the biases are more likely in the trend than in the variability.

My Comment: This paragraph further confirms the importance of variations in regional atmospheric and ocean circulations, even with respect to long-term means. As we concluded in our paper

Pielke Sr., R., K. Beven, G. Brasseur, J. Calvert, M. Chahine, R. Dickerson, D. Entekhabi, E. Foufoula-Georgiou, H. Gupta, V. Gupta, W. Krajewski, E. Philip Krider, W. K.M. Lau, J. McDonnell, W. Rossow, J. Schaake, J. Smith, S. Sorooshian, and E. Wood, 2009: Climate change: The need to consider human forcings besides greenhouse gases. Eos, Vol. 90, No. 45, 10 November 2009, 413. Copyright (2009) American Geophysical Union

where we wrote

Unfortunately, the 2007 Intergovernmental Panel on Climate Change (IPCC) assessment did not sufficiently acknowledge the importance of these other human climate forcings in altering regional and global climate and their effects on predictability at the regional scale. It also placed too much emphasis on average global forcing from a limited set of human climate forcings.

The abstract concludes with

Time permitting, extreme hourly precipitation trends are discussed. Plotting these as a function of dew point temperature gives a common scaling behavior, between De Bilt and Hong Kong, two stations with long hourly time series. In the Netherlands this allows for an attribution of the increase of hourly extremes to local temperature rise. In Hong Kong this attribution cannot be made and other factors, such as possibly urbanisation, must be responsible for the observed increase.

My Comment:  This statement illustrates why attribution studies must move beyond CO2 and a few other greenhouse gases in order to explain long term climate trends. Landscape change is certainly one of the major, under-examined attributions as we discuss in our paper

Pielke Sr., R.A., A. Pitman, D. Niyogi, R. Mahmood, C. McAlpine, F. Hossain, K. Goldewijk, U. Nair, R. Betts, S. Fall, M. Reichstein, P. Kabat, and N. de Noblet-Ducoudré, 2011: Land use/land cover changes and climate: Modeling analysis and observational evidence. WIREs Clim Change 2011, 2:828–850. doi: 10.1002/wcc.144.

source of image


Filed under Climate Science Presentations

New Paper “Improved Constraints On 21st-Century Warming Derived Using 160 Years Of Temperature Observations” By Gillett Et Al 2012

Dan Hughes alerted us to this new paper. It is

Gillett, N. P., V. K. Arora, G. M. Flato, J. F. Scinocca, and K. von Salzen  (2012), Improved constraints on 21st-century warming derived using 160 years of temperature observations, Geophys. Res. Lett.,39, L01704, doi:10.1029/2011GL050226.

The abstract reads [highlight added]

Projections of 21st century warming may be derived by using regression-based methods to scale a model’s projected warming up or down according to whether it under- or over-predicts the response to anthropogenic forcings over the historical period. Here we apply such a method using near surface air temperature observations over the 1851–2010 period, historical simulations of the response to changing greenhouse gases, aerosols and natural forcings, and simulations of future climate change under the Representative Concentration Pathways from the second generation Canadian Earth System Model (CanESM2). Consistent with previous studies, we detect the influence of greenhouse gases, aerosols and natural forcings in the observed temperature record. Our estimate of greenhouse-gas-attributable warming is lower than that derived using only 1900–1999 observations. Our analysis also leads to a relatively low and tightly-constrained estimate of Transient Climate Response of 1.3–1.8°C, and relatively low projections of 21st-century warming under the Representative Concentration Pathways. Repeating our attribution analysis with a second model (CNRM-CM5) gives consistent results, albeit with somewhat larger uncertainties.
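The “scale a model’s projected warming up or down” step in the abstract is a regression of observations onto the model’s simulated response. Gillett et al. use a multi-signal total-least-squares (optimal fingerprinting) regression; the sketch below is a deliberately simplified one-signal ordinary-least-squares version on synthetic data, meant only to illustrate the idea:

```python
import numpy as np

def scaling_factor(obs, model_response):
    """One-signal OLS scaling factor beta such that obs ~ beta * model_response.

    beta > 1 means the model under-predicts the observed response, beta < 1
    means it over-predicts; projections are then scaled by the same beta.
    """
    x = np.asarray(model_response, dtype=float)
    y = np.asarray(obs, dtype=float)
    return float(np.dot(x, y) / np.dot(x, x))

# Synthetic example: 'observations' equal 0.8x the modelled historical
# response plus noise, so the model over-predicts and projections scale down.
rng = np.random.default_rng(2)
model_hist = np.linspace(0.0, 1.0, 160)  # modelled warming, 1851-2010, arbitrary units
obs = 0.8 * model_hist + rng.normal(0.0, 0.05, 160)
beta = scaling_factor(obs, model_hist)
unscaled_projection = 3.0                # hypothetical 21st-century warming (C)
print(f"beta = {beta:.2f}, scaled projection = {beta * unscaled_projection:.2f} C")
```

The real method also propagates the regression uncertainty into the projection range, which is how the paper arrives at its “tightly-constrained” 1.3–1.8°C TCR estimate.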

I have just two comments. First, while some will be pleased with the smaller global average temperature increase predicted for the 21st century, the assessment is based on:

i) a surface temperature record back to 1851 which is not spatially representative and has unknown biases with respect to the changes in local conditions where the temperature measurements were made during this time period (e.g. see Fall, 2011),

and

ii) models are used for the attribution study of the forcings, yet these models do not have all of the first order climate forcings and feedbacks accurately represented (e.g. see NRC, 2005).

Second, when they write

“…we detect the influence of greenhouse gases, aerosols and natural forcings in the observed temperature record”

they more accurately should state

“…we detect IN THE MODEL the influence of greenhouse gases, aerosols and natural forcings WHEN COMPARED WITH the observed temperature record.”

At some point, the entire climate science community is going to realize that models are just hypotheses; e.g. see

Short Circuiting The Scientific Process – A Serious Problem In The Climate Science Community

Scientific rigor requires that real world observations be used to test the models, not the other way around! It is inappropriate to use multi-decadal climate model predictions (even in a hindcast mode) to make conclusions on real world attribution without such an observational validation. The models are only a guide as to how we should set up observational studies in order to perform scientifically robust attribution studies.

source of image


Filed under Climate Models

Response From George Taylor On The Oregon Debate On Climate Science

In response to the post

Inadequate Poll Of Views On Climate Science By Scott Learn Of The Oregonian – But At Least An Opportunity To Debate The Climate Issue

George Taylor and I exchanged the e-mails below. George was in the debate in Oregon sponsored by the state chapter of the American Meteorological Society.  I am pleased that George is leading an effort for constructive debate on the climate issue.

George’s Comment On The Debate

Thanks, Roger. Good to hear from you. All in all it went well. There were over 500 people in attendance! I stated out loud that “human activities DO affect climate, in a variety of ways. CO2, in my opinion, exerts a relatively minor influence but there are many other human factors, such as land use change and particulate emissions, that influence climate. All in all, however, it is my opinion that natural variations, notably solar radiation and tropical Pacific SST, have exerted a greater influence on GLOBAL climate than have human activities.”

You were the one who influenced my thinking, years ago, on the multiplicity of human influences!

http://www.oregonlive.com/environment/index.ssf/2012/01/presentation_by_global_warming.html

George

My Reply

Hi George

It is good to hear from you!

Can I post your e-mail below on my weblog? In terms of global, I have concluded that the global average of surface temperature, etc, is almost a worthless metric, as what really matters is the extent (and if) large scale regional circulation patterns are changed. If we [convince] the IPCC (and AMS and AGU leadership) that they are looking at the wrong metrics, we might be able to make some progress. :-)

With Best Regards

Roger

P.S. Can I post your e-mail below on my weblog?

George’s Response

Absolutely. And I concur about “global temp” and said so last night. The McKitrick-Essex book has a chapter on the meaninglessness of that statistic, and I referred to that.

Sure, use my email!

GT

source of image 


Filed under Climate Science Reporting, Debate Questions

New Paper “Skill In The Trend And Internal Variability In A Multi-Model Decadal Prediction Ensemble” By Oldenborgh Et Al 2012

In my posts, I have urged that the focus of climate modeling research shift from providing multi-decadal climate predictions to the assessment of predictability; e.g. see

The Difference Between Prediction and Predictability – Recommendations For Research Funding Related to These Distinctly Different Concepts

I was alerted by Jos de Laat of KNMI to an important new research paper that specifically addresses this issue. This paper is

Oldenborgh, G.J. van, F.J. Doblas-Reyes, B. Wouters and W. Hazeleger,  2012: Skill in the trend and internal variability in a multi-model decadal prediction ensemble.  accepted, Clim. Dyn.

The abstract [as it reads here] is [highlight added]

Decadal climate predictions have skill due to predictable components in boundary conditions (mainly greenhouse gases) and initial conditions (mainly the ocean). We investigated the skill of temperature and precipitation hindcasts from a set of four coupled ocean-atmosphere models. Regional variations in skill with and without trend due to global warming point to separate effects of the boundary forcing and the ocean initial state. In temperature most skill comes from the prescribed boundary forcing. The trend of the global mean temperature is represented well in the hindcasts, but variations around the trend show little skill. The models have non-trivial skill in hindcasts of North Atlantic SST beyond the trend. The same may hold for the decadal ENSO region, although the signal is less clear. Hence we conclude that the ocean initial state contributes significantly to skill in forecasting SST in these regions.

The conclusion contains the text

A 4-model 12-member ensemble of 10-yr hindcasts has been analysed for skill in SST, 2m temperature and precipitation. The main source of skill in temperature is the trend, which is primarily forced by greenhouse gases and aerosols. This trend contributes almost everywhere to the skill. Variation in the global mean temperature around the trend do not have any skill beyond the first year. However, regionally there appears to be skill beyond the trend in the two areas of well-known low-frequency variability: SST in parts of the North Atlantic and Pacific Oceans is predicted better than persistence. A comparison with the CMIP3 ensemble shows that the skill in the northern North Atlantic and eastern Pacific is most likely due to the initialisation, whereas the skill in the subtropical North Atlantic and western North Pacific are probably due to the forcing.

In the Atlantic, the ensemble shows clear skill in predicting an AMO index that is orthogonal to the trend in yrs 2–5, and reasonable skill in yrs 6–9. The skill in decadal ENSO is lower, not statistically significant, but in agreement with other studies. The CMIP3 ensemble shows less skill in both these indices. There is also an indication of skill in hindcasting decadal Sahel rainfall variations, which are known to be teleconnected to North Atlantic and Pacific SST. The uninitialised CMIP3 ensemble that includes volcanic aerosols reproduces these variations as well, but the models without volcanic aerosols do not. It therefore remains an open question whether initialisation improves predictions of Sahel rainfall.

The modelled trends agree well with observations in the global mean, but the agreement is not so good at the local scale.

These experiments are only a first step towards decadal forecasting using non-optimised methods from seasonal forecasting. The skill assessment does not take into account the considerable biases and drift of the models. It is based on only nine or ten data points and hence suffers from large statistical uncertainties. Larger ensembles sizes per model and more frequent and earlier starting dates will be required to characterise the skill of decadal forecasts better. The verification of decadal hindcasts can then be used to improve the climate models, their forcings and initialisation procedures to give more reliable and skilful climate forecasts.
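The paper's criterion “predicted better than persistence” is a standard skill-score comparison against a no-skill baseline. A minimal sketch on a synthetic AR(1) series (the index, coefficients, and noise level are made up for illustration; this is the generic verification idea, not the authors' exact metric):

```python
import numpy as np

def rmse(forecast, truth):
    f = np.asarray(forecast, dtype=float)
    t = np.asarray(truth, dtype=float)
    return float(np.sqrt(np.mean((f - t) ** 2)))

def skill_vs_persistence(hindcast, obs):
    """Skill score in (-inf, 1]: 1 = perfect, 0 = no better than persistence.

    Persistence -- predict that next year equals this year's observation --
    is the no-skill baseline behind 'predicted better than persistence'.
    """
    return 1.0 - rmse(hindcast[1:], obs[1:]) / rmse(obs[:-1], obs[1:])

# Synthetic AR(1) 'SST index' and a hindcast that knows the damping
# coefficient but not the unpredictable noise.
rng = np.random.default_rng(3)
n = 500
obs = np.zeros(n)
for k in range(1, n):
    obs[k] = 0.5 * obs[k - 1] + rng.normal(0.0, 0.1)
hindcast = np.empty(n)
hindcast[0] = obs[0]
hindcast[1:] = 0.5 * obs[:-1]
print(f"skill vs persistence = {skill_vs_persistence(hindcast, obs):.2f}")
```

The short (nine- or ten-point) verification samples the authors mention make such scores very noisy, which is exactly their stated caveat.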

The authors should be commended for focusing on this assessment of predictability. We need more such excellent studies! 


Filed under Assessment of climate predictability, Climate Change Forcings & Feedbacks, Research Papers

Comment On The Scientific American Interview By David Biello Titled “Michael Mann Defends Climate Computer Models”

I learned about this interview with Michael Mann

Michael Mann Defends Climate Computer Models

from Judy Curry’s post

Week in review 1/13/12

The text is below with highlights added and my comments inserted at several places in the text. As I discuss below, Mike is misleading in his defense of multi-decadal climate model predictions as a robust scientific tool to forecast changes in climate statistics decades from now.

The interview starts with [highlight added]

Penn State climate modeler Michael Mann talks about what computer models can tell us–and what they don’t need to. David Biello reports

Fair warning: the following is more than 60 seconds, and it’s about climate change.

“Even in high school my idea of a good time was sitting in front of a computer and solving problems.” Climatologist Michael Mann. “And that has always been true. I love using computational methods to learn about the way, hopefully, the way the world actually works.”

Some critics, such as physicist Freeman Dyson, charge that climate change science relies too much on such computer models. And even worse, that the climate scientists behind them are too much in love with their computational creations. Such mathematical approximations are crude, failing to capture the real world climate impacts of a cloud, for example. That makes them useful for understanding climate but not for predicting climate change, Dyson has argued. I asked Mann in a recent phone interview how he responded to such arguments.

My Comment: Freeman Dyson is 100% correct. As an example of this adoration of climate modeling, below is a quote from the Executive Summary of the 2006 report CCSP 1.1

Although the majority of observational data sets show more warming at the surface than in the troposphere, some observational data sets show the opposite behavior. Almost all model simulations show more warming in the troposphere than at the surface. This difference between models and observations may arise from errors that are common to all models, from errors in the observational data sets, or from a combination of these factors. The second explanation [i.e. “errors in the observational data sets”] is favored, but the issue is still open.

As indicated by that quote, the preference is to believe the models over real-world observations. That is backwards thinking!  At least they accept that the issue is still open.

The Scientific American interview continues

“I have to wonder if Freeman Dyson will get on an airplane or if he’ll drive a car because a lot of the modern day conveniences of life and a lot of our technological innovations of modern life are based on phenomena so complicated that we need to be able to construct models of them before we deploy that technology.

My Comment: Mike does not properly distinguish between the types of modeling. When airplanes or cars are built, the engineers are testing their models using real world airplanes and cars, as well as with wind tunnel evaluations. They can ground-truth their models.

With respect to atmospheric modeling, numerical modeling prediction of the weather for the coming days is ground-truthing, as the forecasts can be compared with real-world observations just a few days later.

With multi-decadal climate predictions, they can only realistically be tested against past climate conditions, unless we wait for the coming decades to pass. Even in the hindcast mode, however, the global climate models (whether downscaled to regions or not) have failed to predict changes in the statistics of regional climate. I invite any climate scientist to present evidence on my weblog (as an unedited guest post) that refutes this conclusion.

The interview continues

“In the case of the climate, of course, there is only one Earth, so we can’t do experiments with multiple Earths and formulate the science of climate change as if it’s an entirely observationally based, controlled experiment. We need to rely on conceptual models of the system we’re studying and it’s no different in any other field of science. In fact, the way science progresses is by conceptual models being put forward and then testing them against observations. One of the most, I think, striking examples of that was just within the last month, this announcement, the Higgs Boson.

“Its existence was predicted by the standard model of particle physics and the fact that there’s—we got a glimpse of it, it looks like it may very well be there—is a real victory for that model of science where you test, you put forward conceptual models of the way the world or the universe works and test those models against the observations and see the extent to which they can predict new observations and when they do, it gives you increased confidence in the models.

It’s no different in the case of climate change.  The models are simply at some level a formulation of our conceptual understanding and when someone says they don’t like models then I’m wondering what alternative they have in mind.

My Comment: Mike is in error. With the Higgs Boson, its existence (the theory) is being tested against real world data. With the prediction of climate change, even with coarse metrics such as the magnitude of global warming as diagnosed by changes in the heat content of the climate system, these global average forecasts are on the verge of failing (e.g. see)! With respect to the prediction of multi-decadal changes in regional climate statistics, which are needed by the impact community, these models have so far failed to show any skill.

The Scientific American interview continues

“How do they formalize their conceptual understanding? Through back-of-the-envelope, poorly conceived thought experiments?  It’s somewhat bewildering when I hear something like that from a premier scientist, and I think it belies a misunderstanding of the way models are used. In climate science, for example, where we don’t need an elaborate climate model to understand the basic physics and chemistry of greenhouse gases, so at some level the fact that increased CO2 warms the planet is a consequence of very basic physics and chemistry.

My Comment:   Mike is correct - “we don’t need an elaborate climate model to understand the basic physics and chemistry of greenhouse gases, so at some level the fact that increased CO2 warms the planet is a consequence of very basic physics and chemistry.”  However, Mike misses the point that this knowledge of physics does not then result in skillful global and regional predictions of changes in climate statistics.  The climate system is much more than just changes in the atmospheric concentration of CO2 and a few other greenhouse gases.  Mike is misunderstanding “the way models are used“.  He is confusing tested and verified model predictions with unverified model results.

The interview continues

“The details, how much warming you get, depend on things like feedbacks. And you can’t incorporate feedbacks through a back of the envelope approach.  You actually have to critically think about the interactions that take place in this very complex system. And those feedbacks ultimately determine the extent to which that initial warming will be amplified, but they don’t even change the fact that you elevate greenhouse gas concentrations in the atmosphere and you’ll get a warming of the surface. That’s basic physics and chemistry and people who claim that they don’t believe that, they don’t believe we’re warming the planet through increasing CO2 levels because of climate models, they don’t understand the fact that you don’t need a climate model to come to that conclusion. It’s basic physics and chemistry.

My Comment: Mike is arguing about an issue that is not in disagreement! Of course, if you add greenhouse gases, there is a radiative warming effect. However, its magnitude is relatively small unless there is a significant positive radiative feedback from added water vapor. It is this feedback, which involves the entire hydrologic cycle, that is still so poorly understood; e.g. see

Stephens, G. L., T. L’Ecuyer, R. Forbes, A. Gettlemen, J.‐C. Golaz, A. Bodas‐Salcedo, K. Suzuki, P. Gabriel, and J. Haynes (2010), Dreary state of precipitation in global models, J. Geophys. Res., 115, D24211, doi:10.1029/2010JD014532.

The interview continues

The climate models come in because we wanna know how that’s modified by feedback.  What are the important feedbacks?  How will atmospheric circulation patterns change? And again, does Freeman Dyson, assuming he is willing to get on an airplane even though models have been used to test the performance of the airplane, assuming he does and he knows he’s going somewhere where they’ve predicted, where weather models have predicted rainfall for the next seven days, does he not pack his umbrella because he doesn’t believe the models? It’s just in that case the worst that will happen is somebody gets wet when they wouldn’t otherwise have. In this case, the worst that can happen is that we ruin the planet.”

—David Biello

My Comment: Mike is misleading in his answer. As I wrote earlier, the ability of an airplane to fly, and the skill of a weather forecast for the days ahead, are tested against real data! Climate predictions over decadal time periods, in contrast, when tested in a hindcast mode, are failing to provide skillful forecasts. In fact, they are misleading policymakers in their decision making. Mike is misleading readers when he equates testable predictions, which have been confirmed with real world observations, with predictions which have failed to show any skill. He implicitly recognizes this as-yet lack of skill in the models when he writes “What are the important feedbacks? How will atmospheric circulation patterns change?” Indeed, these are two major issues we still do not understand, and Mike should have emphasized that.

As written in the Scientific American Interview, Freeman Dyson is 100% correct

  “that climate change science relies too much on such computer models. And even worse, that the climate scientists behind them are too much in love with their computational creations. Such mathematical approximations are crude, failing to capture the real world climate impacts of a cloud, for example. That makes them useful for understanding climate but not for predicting climate change”

It is an open question as to how long it is going to take funding agencies and policymakers to recognize this reality.

source of image


Filed under Climate Models, Climate Science Misconceptions