Dutch science journalist Marcel Crok interviewed me in January for the Dutch monthly science magazine Natuurwetenschap & Techniek. The article (only available in Dutch) deals with the question of how reliable global circulation models are. Marcel graciously made a transcript of the interview, which gives a good idea of the perspective that is presented at Climate Science. I made several further edits for clarity and updating, and added several links to substantiate the statements.
The interview follows:
Recently the SPM of the IPCC's AR4 stated that it's now very likely that most of the warming of the last 50 years is the result of anthropogenic CO2. Are Global Circulation Models crucial to 'prove' that AGW has already been taking place over the last 50 years?
My answer is 'no'. The primary aspect that GCMs have claimed to be able to show skillfully is a globally averaged surface temperature trend (e.g. see). But the models do this without including all the forcings. The models are incomplete. What they have shown is that CO2 is just one important climate forcing, but the 2005 National Research Council report Radiative Forcing of Climate Change: Expanding the Concept and Addressing Uncertainties shows there are other first order climate forcings. Another problem is that our research suggests that the actual warming, particularly the minimum near-surface air temperatures on land, has been overstated. There is a warm bias in these data. So if the models agree with the temperature trends, they do this, at least in part, for the wrong reasons.
What are the problems with the surface temperature data?
The main problem is over land, where most of the warming has actually occurred. The difficulty is that you have a lot of variation in temperatures over short distances. For example, at night time, temperature measurements at one meter can differ from the value at two meters, especially in winter when the winds are light and calm. We have evidence that there is a warm bias in the surface record and that claims like '2005 being the warmest on record' are not valid. The different communities that are collecting these data, the East Anglia group in the UK, the GISS group (NASA) and the group at NOAA, have different analyses, but they are basically extracting their records from the same raw data, which have these biases. We think the surface air temperature is a very poor metric to compare against. Unfortunately that's what the IPCC has chosen.
Is there an alternative?
I have suggested assessing the ocean heat content trends (see), which is a much more robust diagnostic for global warming or cooling. Ocean heat content is a long term filter; sampling high frequency information is not needed. If you sample enough locations, you can assess what areas have warmed or cooled in terms of Joules. That is a direct measure of heat! If we talk about global warming, temperature is only part of the question. You have to look at the mass involved and the temperature change. In the atmosphere, if you're talking about heat, you have to consider water vapor. We published a paper that says if you deforest an area, the temperature could go up, but the actual heat content could go down, because you have less water vapor in the air (e.g. see). So heat is incompletely measured by temperature. So our advice is to use ocean heat content.
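To make the temperature-versus-heat distinction concrete, here is a minimal sketch of a moist enthalpy calculation, h = cp·T + Lv·q; the constants are standard, but the two site values are made up purely for illustration:

```python
# A hedged sketch: moist enthalpy h = cp*T + Lv*q (J per kg of air).
# Constants are standard; the two site values below are made up.
CP = 1004.0   # specific heat of dry air at constant pressure, J/(kg K)
LV = 2.5e6    # latent heat of vaporization of water, J/kg

def moist_enthalpy(temp_k, q_kg_per_kg):
    """Heat content per kg of air: sensible plus latent part."""
    return CP * temp_k + LV * q_kg_per_kg

# Illustrative numbers: the deforested site is 2 K warmer but drier.
forested = moist_enthalpy(301.0, 0.016)     # cooler, more water vapor
deforested = moist_enthalpy(303.0, 0.012)   # warmer, less water vapor
print(forested > deforested)                # True: more heat despite lower T
```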
How do we measure ocean heat content and how good is the observational record?
You calculate ocean heat content by measuring water temperatures through depth. You sample the ocean at the surface, but also at different depths, and you look at changes in temperature over time. As the ocean is by far the largest store of heat in the climate system, you can use the change of heat content as an estimate of the imbalance of the earth climate system. The data go back 50 years or so, but the more recent data are better. A few years ago they started deploying the ARGO buoy network, which is quite dense. The goal is to have around 3000 buoys by the end of this year.
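As an illustration of how such a number is obtained, here is a minimal sketch that integrates a temperature anomaly profile over depth to get Joules; the depths and anomalies are hypothetical, not real Argo data:

```python
# A minimal sketch of turning a temperature profile into Joules.
# Depths and anomalies below are hypothetical, not real Argo data.
RHO = 1025.0   # seawater density, kg/m^3
CP = 3990.0    # specific heat of seawater, J/(kg K)

def ocean_heat_anomaly(depths_m, temp_anom_k, area_m2):
    """Integrate rho*cp*dT over depth, scale by area -> heat anomaly in J."""
    total_per_m2 = 0.0
    for i in range(len(depths_m) - 1):
        thickness = depths_m[i + 1] - depths_m[i]
        mean_anom = 0.5 * (temp_anom_k[i] + temp_anom_k[i + 1])
        total_per_m2 += RHO * CP * mean_anom * thickness
    return total_per_m2 * area_m2

depths = [0.0, 100.0, 300.0, 700.0]     # sampling depths, m
anoms = [0.30, 0.20, 0.08, 0.02]        # temperature anomalies, K
print(ocean_heat_anomaly(depths, anoms, 3.6e14))  # ~1e23 J over the ocean area
```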
What is the trend in ocean heat content?
Over the last 50 years there is a warming trend in the oceans. However, between 2003 and 2005 some 20 percent of the accumulated heat was lost. The key paper which published these results was Lyman et al. 2006. This means there was radiative cooling of the earth climate system in those years. [As readers of Climate Science know, this conclusion has been corrected; there is no evidence of any loss of heat over this time period; see].
Now a few years ago scientists like Hansen, Barnett and Levitus said: look, there is warming in the ocean, and the global climate models say there should be warming. They agreed that the ocean heat content is the better diagnostic. Now we have these more recent data by Lyman et al., which raise questions about the accuracy of the models. At the latest AGU meeting in December they debated this issue. There is criticism of the Lyman et al. research. Maybe the data are not good enough, or maybe there is more melt from glaciers influencing the data. But the data were assumed to be robust in the past. None of the models have replicated the two years of cooling. For me this implies that the models are missing important climate forcings and feedbacks. The models are useful but incomplete. [Even with the correction to the cooling, the absence of warming in the oceans since 2002 is still at variance with the global climate models, so this criticism of them remains valid].
How do you use the models then?
We use them in what I call sensitivity studies. For example, we looked at the influence of landscape changes on the climate of Florida. We found that by draining the marshes and putting in urban areas, there is less water vapor in the atmosphere today than 100 years ago. The net result is a decrease in precipitation, and this fits the observations. We used a regional model for that. We ran the model twice, once with the natural landscape and once with the current landscape. We found a significant effect on both the hydrological cycle and the temperature. The temperatures are higher because there is less evaporation. So we don't need a large scale global warming effect to explain the changes in Florida.
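Schematically, this paired-run design boils down to differencing two otherwise identical simulations. In the sketch below, run_model and its output numbers are hypothetical stand-ins for a real regional model, shown only to make the bookkeeping explicit:

```python
# A schematic of the paired-run design: same model, same weather forcing,
# only the land surface differs. run_model and its numbers are hypothetical
# stand-ins for real regional model output.
def run_model(landscape):
    """Stand-in for a full regional model integration over one season."""
    output = {
        "natural": {"precip_mm": 700.0, "tmax_c": 32.1},
        "current": {"precip_mm": 620.0, "tmax_c": 33.0},
    }
    return output[landscape]

control = run_model("natural")
perturbed = run_model("current")
for var in control:
    # The run-to-run difference is attributed to the landscape change alone.
    print(var, round(perturbed[var] - control[var], 2))
```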
This is a sensitivity experiment. And that's how I view the IPCC scenarios as well, as a set of 'what if' experiments. What if there are no other climate forcings than CO2; what does the model say? The models say that if you add CO2 to the atmosphere, it affects the climate. I agree with that. But it's not a prediction, because they don't have the other forcings and they certainly don't have all the cloud feedbacks, vegetation feedbacks, etc. I think the IPCC has overstated its predictive capability and is too conservative in recognizing other human climate forcings. CO2 is not THE dominant human climate forcing. That's what the National Academy of Sciences report indicated, and that conclusion has not been recognized widely. It's a more complicated problem than CO2.
Are models able to reproduce regional variation in warming or cooling?
It is important to look at the spatial pattern of the heat content changes, in terms of what affects our weather. Like your weather in Europe right now: you're getting very warm temperatures, but other areas have very cold temperatures. If you average it all together you may or may not get warming or cooling, but the focus on a globally averaged metric is almost useless in terms of what weather we actually experience.
Have the models shown skill in regional prediction for the last 30 years, on a year-by-year basis, or over a decade?
No, regional variation has not been demonstrated by any model. I don't know any credible modeler who claims predictive skill on the regional scale.
So is there predictive skill 50 years down the road for Holland? I think if they are honest they will say 'no', there is no skill. What's going to happen to the rain and snow, to temperatures? They might say the mean temperatures will be higher, but what about winter temperatures, the minimum temperatures, the maximum temperatures? What's the evidence that for the last 30 years you have been able to predict this? Did Holland warm or cool, and what was the skill of the model?
But climate goes up and down all the time, so you have to pick another five years. What's going to happen in the next five years in Europe? That's the challenge. The problem is, if they give forecasts 50 years in the future, nobody can validate that right now. In that sense, it's not scientific. When I see peer reviewed papers that talk about 2050 or 2100, for me that's not science; that's just presenting a hypothesis, which is not testable. I don't even read those papers anymore. They need to have something that is testable.
You can always reconstruct after the fact what happened if you run enough model simulations. The challenge is to run it on an independent dataset, say for the next five years. But then they will say 'the model is not good for five years because there is too much noise in the system'. That's avoiding the issue. They say you have to wait 50 years, but then you can't validate the model, so what good is it?
It's like weather prediction for tomorrow; you only believe it when you get to tomorrow. Weather is very difficult to predict; climate involves weather plus all these other components of the climate system: ice, oceans, vegetation, soil, etc. Why should we think we can do better with climate prediction than with weather prediction? To me it's obvious: we can't!
I often hear scientists say 'weather is unpredictable, but climate you can predict because it is the average weather'. How can they prove such a statement?
They claim it's a boundary forcing problem, since they look at it from an atmospheric perspective. They are assuming that the land surface doesn't change much, the ocean doesn't change much, and that the atmosphere will come into some kind of statistical equilibrium. But the problem is: the ocean is not static, and the land surface is not static.
In fact, I recently posted a blog on a paper by Filippo Giorgi (see). What he is doing represents a transition in thinking. He concludes there are components of a boundary problem and components of an initial value problem with respect to 30 year predictions. But if it's a combination of the two, then it is an initial value problem, and therefore has to be treated just like weather prediction!
Whatâs the difference between a boundary value and initial value problem?
Initial value means it matters what you start your model with: what the temperature is in the atmosphere, the temperature in the ocean, how vegetation is distributed, etc. They say it doesn't matter what this initial distribution is; the results will equilibrate after some time, and the averages will become the same.
The problem is that the boundaries also change with time. These are not real boundaries; these are interfaces between the atmosphere and ocean, atmosphere and land, and land and ocean. These are all interactive and coupled.
There are two definitions of climate: 1) long term weather statistics, or 2) climate is made up of the ocean, the land, ice sheets and the atmosphere. The latter definition is adopted by a 2005 NRC report on radiative climate forcings (see). This second definition indicates that it depends on what you start your model with; e.g. if you start in the year 1950 with a different ocean distribution, you will get different weather statistics 50 years from then.
The question is: why should we expect the climate system to behave in such a linear, well-behaved fashion when we know weather doesn't? In the Rial et al. paper (see), we show from the observations that, on a variety of time scales, climate has these jumps, these transitions, and these are not predicted by models. These are clearly non-linear and are clearly related to what you start your climate system with.
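A standard toy demonstration of this initial-value sensitivity (not from the interview, but the classic textbook example) is the Lorenz-63 system, where two nearly identical starting states diverge completely:

```python
# Lorenz-63, integrated with a simple Euler step: two trajectories that
# start one part in a million apart end up completely different, which is
# the toy version of "it matters what you start your climate system with".
def step(state, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = state
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

a = (1.0, 1.0, 1.0)
b2 = (1.0 + 1e-6, 1.0, 1.0)   # tiny perturbation of the initial state
for _ in range(3000):          # ~30 model time units
    a, b2 = step(a), step(b2)

print(a[0], b2[0])             # the x-values no longer resemble each other
```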
Most climate scientists, if you present this information to them, agree that climate is an initial value problem. There are some that still argue it's a boundary problem. That makes it easier for them to say 'if we put in CO2 from anthropogenic activities you get this very well behaved response for the next 100 years'. However, this perspective is not supported at all by the observational record. What that means is that when we perturb the climate system we could be pushing ourselves towards or away from threshold changes we don't understand. There is a risk in perturbing the climate system, certainly, but I don't think we can predict it.
Does it help to start with a regional weather model?
Regional weather prediction models perform downscaling all the time. The global weather model has reality built into it: weather data, satellite data, etc. So you are taking a global model that started as an initial value problem; it remembers its initial values for a period of time. You run it for maybe ten days. It is taking all the information and putting it into the sides of the regional model. So there is skill in doing that, but that skill degrades with time, because eventually the large scale model has forgotten its initial data and starts to drift from reality. That's why we don't have weather prediction models for longer than 10 or 12 days or so. In fact, 7 days is what the skill typically is thought to be (see and see).
What you require of a regional climate model years in the future is that it faithfully replicates reality 50 years into the future. The models are not capable of doing that; they have not demonstrated this skill. The regional models require skilful information from the global models at the sides of their simulation domains. That's never been shown.
But regional models can reach higher spatial resolution than global models…
Regional models unfortunately require boundary conditions from global models. We have done a paper which states that you cannot add any value in terms of the large scale meteorology using a regional model (see). You can do sensitivity studies: what if you have the same weather pattern but a different landscape, for example Europe's natural landscape versus the current landscape? We and others have found that the changes in land surface processes and in the landscape are actually quite important for Europe, particularly in summer time (e.g. see).
So regional models depend on the global models?
Yes, they all depend on the global models. What you get from the global models will be fed into the regional models. So this will be completely determined by the large scale, except that finer scale information from terrain and landscape effects can be included.
The regional model has sides to it; information has to be inserted at these sides. This can only come from the global prediction models. At every time step you give it new winds and temperatures at the sides of the model. You have to prescribe the values at the sides for all time steps, thus for 10, 20, 30 years ahead. That means these regional models are determined strongly by the sides of the boxes. We have demonstrated that there is no value added beyond what the large scale models provide, as stated above.
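Schematically, this one-way nesting looks like the sketch below, where the regional grid's lateral boundaries are overwritten from the global model at every time step; the function and field names are illustrative placeholders, not a real modeling framework's API:

```python
# A sketch of one-way nesting: the regional grid's edges are overwritten
# with (interpolated) global-model values at every time step. The names and
# fields are illustrative placeholders, not a real modeling framework.
import numpy as np

def force_lateral_boundaries(regional, global_slice):
    """Impose global-model values on all four edges of the regional grid."""
    regional[0, :] = global_slice[0, :]     # north edge
    regional[-1, :] = global_slice[-1, :]   # south edge
    regional[:, 0] = global_slice[:, 0]     # west edge
    regional[:, -1] = global_slice[:, -1]   # east edge

regional_t = np.full((50, 50), 285.0)   # regional temperature field, K
global_t = np.full((50, 50), 287.0)     # global model's field on the same grid
for step in range(240):                 # e.g. ten days of hourly steps
    force_lateral_boundaries(regional_t, global_t)
    # ... the regional model's own interior dynamics would advance here ...
# Whatever errors the global model has at the edges flow straight in.
```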
Regional models give you the illusion of higher resolution. In reality it's no better than the global models. If a GCM gives you strong warming, the regional model will give you strong warming. The message is that these regional models are not giving us the information people think they are giving.
How come all these model projections show such a linear picture?
They are forcing it with a steady linear forcing, increased CO2 for example. The models do not have an adequate representation of the real world climate feedbacks. They don't have the other human forcings that were identified in the 2005 NRC report. The models clearly do not look like the real world in terms of its variation over time.
Sometimes skeptical people say you can do the same calculations on the back of an envelope?
I agree with the back of an envelope calculation comparison. If you increase CO2 radiatively and you don't consider any of these other effects, it's a warming perturbation that you can calculate quite easily. The greatest effects are at high latitudes, because in the tropics the water vapor overwhelms the CO2 effect. The climate perturbation, however, is much more than that due to the radiative forcing of CO2. That's an issue that's not widely recognized.
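For reference, that back-of-envelope calculation can be done with the widely used simplified expression for CO2 radiative forcing, dF = 5.35·ln(C/C0) W/m² (Myhre et al. 1998); the ~0.3 K per W/m² no-feedback response factor used below is an illustrative textbook value:

```python
# The standard simplified expression for CO2 radiative forcing
# (Myhre et al. 1998): dF = 5.35 * ln(C/C0) W/m^2. The ~0.3 K per W/m^2
# no-feedback response used below is an illustrative textbook value.
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Radiative forcing (W/m^2) for a CO2 concentration change."""
    return 5.35 * math.log(c_ppm / c0_ppm)

df = co2_forcing(560.0)   # a doubling relative to preindustrial
print(df)                 # ~3.7 W/m^2
print(0.3 * df)           # ~1.1 K of warming before any feedbacks
```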
CO2 is also a biogeochemical forcing, so when you increase CO2, plants can respond. All plants like CO2, but some plants like CO2 more than others, and plants may use water more efficiently. Thus there are complex nonlinear interactions due to increasing CO2. That really complicates how CO2 affects the climate system. Our work suggests that the biogeochemical effects of adding CO2 may have more effect on the climate system than the radiative effect of adding CO2. But the models have inadequately dealt with the biogeochemical effect of CO2.
How far along are they in introducing other forcings in the models, like land use and the carbon cycle?
Some of the models have land use changes, some have carbon cycles. But there is also the effect of aerosols, which influence the climate in a variety of ways. There is a table in that 2005 NRC report about indirect effects of aerosols (see). It lists about six of them. These effects are very poorly understood. Some of them are warming, some cooling. They spatially affect the heating and/or cooling. They affect the precipitation processes.
These forcings are not adequately put into the models. The more forcings you put into the model and the more feedbacks, the more complicated and the more nonlinear these models become, which makes predictions even more difficult. So I think going down the road and using our models as predictive tools is not going to be successful.
Would it be possible to have land use in models in 5 years?
Yes, as shown, for example, by the recent paper by Feddema et al. in Science (see). I wrote a comment on that paper (see). Feddema et al. found that land use changes didn't change the global average temperature very much, but they changed precipitation patterns. If that's true, that you change the hydrological cycle, the effect on society is enormous. That work has been replicated and supported by others (see). It means again that we need other climate metrics. The NRC report recommended that we adopt these climate metrics. We need to stop focusing on the global average temperature, which has become a political icon but doesn't really tell us what we should be looking at.
How should we validate the models?
The models need to be validated first of all in retrospective simulation, let's say 1979 to the present, to see if the models can replicate the regional weather patterns, averaged over seasons for example: winter, summer, precipitation, temperature, etc. That has not been done. Then you take the next five years. How well do the models replicate the weather patterns over the next five years?
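As a sketch of what one such retrospective validation step could look like numerically, here is a minimal bias/RMSE comparison of a model field against observations; the arrays are random stand-ins for, say, winter-mean temperature anomalies on a regional grid:

```python
# A sketch of one retrospective-validation step: score a model's seasonal
# mean field against observations with bias and RMSE. The arrays are random
# stand-ins for, say, winter-mean temperature anomalies on a regional grid.
import numpy as np

def seasonal_skill(model, obs):
    """Return (bias, rmse) of a model field versus observations."""
    diff = model - obs
    return float(diff.mean()), float(np.sqrt((diff ** 2).mean()))

rng = np.random.default_rng(0)
obs = rng.normal(0.0, 2.0, size=(30, 40))                 # "observed" field, K
model = obs + 0.5 + rng.normal(0.0, 1.0, size=(30, 40))   # biased hindcast
print(seasonal_skill(model, obs))                         # bias ~0.5, rmse ~1.1
```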
So validation has been done very poorly. If they would adopt a protocol to do validation, then there would be more consensus in the climate community. Actually there is a lot of disagreement with the IPCC approach. But particularly the younger people do not want to take a position on this. There are a lot of people out there who are very disappointed with the process, but most of them don't want to speak out. The perspective that I present is actually much broader than you might realize.
The 2005 National Academy of Sciences report is an example. I was on that committee, but Michael Mann (of Real Climate) was also on that committee. So he subscribed to the recommendations as well, but I do not see this NRC perspective discussed on that weblog. That report was completely ignored by the media. The findings of that report raise questions that should be looked at by the IPCC. A few of these [IPCC] scientists are not communicating these complexities to the media and the policy makers, and the policy makers get the idea that all this is solved and understood and summarized by the effect of adding CO2 to the atmosphere. The science in the peer reviewed literature does not support that narrow perspective, however.
Why are these IPCC scientists not 'honest' about the complexities?
I don't doubt their sincerity, but I don't understand it. I think they have taken a position for a long time and, just like for anyone, it's hard to change your view.
In Japan they are building the Earth Simulator, the biggest computer model so far. Do you expect there will be a trend towards ever bigger models?
Yes, I think so. The group in Japan, and also people like Peter Cox, Richard Betts and Martin Clausen at the Hadley Center, realize that you need to have a coupled climate system model. They call it an earth simulator, but it really is a climate simulator, because it's looking at feedbacks and forcings of land, the atmosphere, the ocean, continental ice, etc.
Their goal is to accurately include the carbon cycle, the nitrogen cycle, vegetation growth, sea ice growth and decay, continental ice sheet growth and decay, and so forth. So it's more than a typical atmosphere-ocean GCM. It's just a natural movement, which I think is inevitable, towards these more complete models.
In terms of understanding processes, higher resolution is going to help. But again, the problem is so complex, since we don't know all the feedbacks and forcings. If they use these models to make predictions that can't be validated, I don't think that's an effective use of the resources. Effective use would be to better understand these interactions.
If you had all the money that is going into climate science, how would you spend it?
I would take a large percentage of that to assess what I call vulnerability (e.g. see). Since I think skilful multi-decadal climate prediction is inherently almost impossible, we should probably develop an assessment of our vulnerabilities to weather events and other environmental variations of all types.
For example, how could Colorado protect itself against drought (see)? We know drought has occurred historically and prehistorically; e.g. back in the 16th century there was a mega drought. Perhaps we should develop a more resilient system for water resources, so that regardless of how humans alter the climate in the next fifty years, we would be better protected, even when the natural cycle presents these surprises.
In vulnerability assessments you can also use models, but these are impact models. We can evaluate, for instance, what would have to happen to my resource before negative effects occur, and how I could protect against it. A nice example is the impact of sea level rise for The Netherlands. What could I do to protect myself, regardless of the reason for it occurring? Significant money should go into such research. Vulnerability work has unfortunately not been funded very well.
What should society do?
I think we have to look for win-win solutions (e.g. see). Lots of the things that have been proposed make sense anyway, alternative energy for example. It lessens your dependence on risky sources of oil. Energy efficiency is a win-win for anybody, where you save money as well. Hybrid cars make sense; you reduce pollution of all types, you reduce CO, NO, ozone and you have the benefit of less CO2. Those win-win solutions can permit the consensus to move forward.
The vulnerability framework is more inclusive. You can feed in the threats from the models, if you want to. But you also have to look at historical climate change and protect yourself better, instead of relying fully on these predictions of the future. Relying on the multi-decadal global model predictions to make policy decisions is a very narrow (and risky) one-dimensional approach.