Monthly Archives: April 2007

Interview By Marcel Crok Of Roger A. Pielke Sr – January 2007

Dutch science journalist Marcel Crok interviewed me in January for the Dutch monthly science magazine Natuurwetenschap & Techniek. The article (only available in Dutch) deals with the question of how reliable global circulation models are. Marcel graciously made a transcript of the interview, which gives a good idea of the perspective that is presented at Climate Science. I made several further edits for clarity and updating, and added several links to substantiate the statements.

The interview follows:

Recently the SPM of IPCC’s AR4 stated that it’s now very likely that most of the warming of the last 50 years is the result of anthropogenic CO2. Are Global Circulation Models crucial to ‘prove’ that AGW has already been taking place over the last 50 years?

My answer is ‘no’. The primary aspect that GCMs have claimed to be able to show skillfully is a globally averaged surface temperature trend (e.g. see). But the models do this without including all the forcings. The models are incomplete. What they have shown is that CO2 is just one important climate forcing, but the 2005 National Research Council report Radiative Forcing of Climate Change: Expanding the Concept and Addressing Uncertainties shows there are other first order climate forcings. Another problem is that our research suggests that the actual warming, particularly in the minimum near-surface air temperatures over land, has been overstated. There is a warm bias in these data. So if the models agree with the temperature trends, they do this, at least in part, for the wrong reasons.

What are the problems with the surface temperature data?

The main problem is over land, where most of the warming has actually occurred. The difficulty is that you have a lot of variation in temperatures over short distances. For example, at night time, temperature measurements at one meter can differ from the value at two meters, especially in winter when the winds are light and calm. We have evidence that there is a warm bias in the surface record and that claims such as ‘2005 was the warmest year on record’ are not valid. The different communities that are collecting these data, the East Anglia group in the UK, the GISS group (NASA) and the group at NOAA, have different analyses, but they are basically extracting their record from the same raw data, which have these biases. We think the surface air temperature is a very poor metric to compare against. Unfortunately that’s what the IPCC has chosen.

Is there an alternative?

I have suggested assessing ocean heat content trends (see); this is a much more robust diagnostic for global warming or cooling. Ocean heat content is a long-term filter; sampling high-frequency information is not needed. If you sample enough locations, you can assess what areas have warmed or cooled in terms of Joules. That is a direct measure of heat! If we talk about global warming, temperature is only part of the question. You have to look at the mass involved and the temperature change. In the atmosphere, if you’re talking about heat, you have to consider water vapor. We published a paper that says if you deforest an area, the temperature could go up, but the actual heat content could go down, because you have less water vapor in the air (e.g. see). So heat is incompletely measured by temperature. So our advice is to use ocean heat content.
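As a rough illustration of the point that the heat content of air depends on more than its temperature, here is a minimal sketch written for this transcript (not part of the original interview), using the standard moist-enthalpy approximation h ≈ cp·T + Lv·q. The station values are purely illustrative.

```python
# Moist enthalpy per unit mass of air: h = cp*T + Lv*q
# A warmer but drier air parcel can contain less heat than a cooler, moister one.

CP = 1004.0   # specific heat of dry air at constant pressure [J kg^-1 K^-1]
LV = 2.5e6    # latent heat of vaporization of water [J kg^-1]

def moist_enthalpy(temp_k, specific_humidity):
    """Moist enthalpy [J kg^-1] of an air parcel."""
    return CP * temp_k + LV * specific_humidity

# Hypothetical deforestation example: warmer but drier near-surface air.
forested = moist_enthalpy(temp_k=298.0, specific_humidity=0.014)
deforested = moist_enthalpy(temp_k=300.0, specific_humidity=0.010)

print(f"forested:   {forested:.0f} J/kg")
print(f"deforested: {deforested:.0f} J/kg  (higher temperature, lower heat content)")
```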

How do we measure ocean heat content and how good is the observational record?

You calculate ocean heat content by measuring water temperatures through depth. You sample the ocean at the surface, but also at different depths, and you look at changes in temperature over time. As the ocean is by far the largest store of heat in the climate system, you can use the change of heat content as an estimate of the imbalance of the earth climate system. The data go back 50 years or so, but the more recent data are better. In recent years the ARGO buoy network has been deployed, which is quite dense. The goal is to have around 3000 buoys by the end of this year.
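A minimal sketch of how a column’s heat content anomaly, in Joules, can be built up from temperature changes measured at depth; the layer values below are illustrative stand-ins, not observed data.

```python
# Column-integrated ocean heat content anomaly: OHC = sum(rho * cp * dT * dz)  [J m^-2]

RHO_SW = 1025.0   # seawater density [kg m^-3]
CP_SW = 3990.0    # specific heat of seawater [J kg^-1 K^-1]

def heat_content_anomaly(temp_anomaly_c, layer_thickness_m):
    """Heat content anomaly [J m^-2] from layer temperature anomalies [deg C]."""
    return sum(RHO_SW * CP_SW * dt * dz
               for dt, dz in zip(temp_anomaly_c, layer_thickness_m))

# Illustrative profile: five layers spanning 0-700 m depth
dT = [0.30, 0.20, 0.10, 0.05, 0.02]       # warming in each layer [deg C]
dz = [50.0, 100.0, 150.0, 200.0, 200.0]   # layer thicknesses [m]

print(f"{heat_content_anomaly(dT, dz):.3e} J/m^2")
# Multiplying by the global ocean surface area (~3.6e14 m^2) gives a total in Joules.
```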

What is the trend in ocean heat content?

Over the last 50 years there is a warming trend in the oceans. However, between 2003 and 2005 some 20 percent of the accumulated heat was lost. The key paper which published these results was Lyman et al. 2006. This means there was radiative cooling of the earth climate system in those years. [As readers of Climate Science know, this conclusion has been corrected; there is no evidence of any loss of heat over this time period; see].

Now a few years ago scientists like Hansen, Barnett and Levitus said, look, there is warming in the ocean, and the global climate models say there should be warming. They agreed that the ocean heat content is the better diagnostic. Now we have these more recent data by Lyman et al that raise questions about the accuracy of the models. At the latest AGU meeting in December they debated this issue. There is criticism of the Lyman et al research. Maybe the data are not good enough, or maybe there is more melt from glaciers influencing the data. But the data were assumed to be robust in the past. None of the models have replicated the two years of cooling. For me this implies that the models are missing important climate forcings and feedbacks. The models are useful but incomplete. [Even with the correction to the cooling, the absence of warming in the oceans since 2002 is still at variance with the global climate models, so this criticism of them remains valid].

How do you use the models then?

We use them in what I call sensitivity studies. For example, we looked at the influence of landscape changes on the climate of Florida. We found that by draining the marshes and putting in urban areas, there is less water vapor in the atmosphere today than 100 years ago. The net result is a decrease in precipitation, and this fits the observations. We used a regional model for that. We ran the model twice, once with the natural landscape and once with the current landscape. We found a significant effect on both the hydrological cycle and the temperature. The temperatures are higher because there is less evaporation. So we don’t need a large-scale global warming effect to explain the changes in Florida.

This is a sensitivity experiment. And that’s how I view the IPCC scenarios as well, as a set of ‘what if’ experiments. What if there are no other climate forcings than CO2, what does the model say? The models say that if you add CO2 to the atmosphere, it affects the climate. I agree with that. But it’s not a prediction, because they don’t have the other forcings and they certainly don’t have all the cloud feedbacks, vegetation feedbacks, etc. I think the IPCC has overstated its predictive capability and is too conservative in recognizing other human climate forcings. CO2 is not THE dominant human climate forcing. That’s what the National Academy of Sciences report indicated, and that conclusion has not been recognized widely. It’s a more complicated problem than CO2.

Are models able to reproduce regional variation in warming or cooling?

It is important to look at the spatial pattern of the heat content changes, in terms of what affects our weather. Like your weather in Europe right now: you’re getting very warm temperatures, but other areas have very cold temperatures. If you average it all together you may or may not get warming or cooling, but the focus on a globally averaged metric is almost useless in terms of what weather we actually experience.

Have the models shown skill in regional prediction for the last 30 years, on a year-by-year basis, or over a decade?

No, skill at predicting regional variation has not been demonstrated by any model. I don’t know of any credible modeler who claims predictive skill on the regional scale.

So is there predictive skill 50 years down the road for Holland? I think if they are honest they will say “no”, there is no skill. What’s going to happen to the rain and snow, to temperatures? They might say the mean temperatures will be higher, but what about winter temperatures, the minimum temperatures, the maximum temperatures? What’s the evidence that over the last 30 years you have been able to predict this? Did Holland warm or cool, and what was the skill of the model?

But climate goes up and down all the time, so you have to pick another five years. What’s going to happen in the next five years in Europe? That’s the challenge. The problem is, if they give forecasts 50 years in the future, nobody can validate that right now. In that sense, it’s not scientific. When I see peer-reviewed papers that talk about 2050 or 2100, for me that’s not science, that’s just presenting a hypothesis, which is not testable. I don’t even read those papers anymore. They need to have something that is testable.

You can always reconstruct after the fact what happened if you run enough model simulations. The challenge is to run it on an independent dataset, say for the next five years. But then they will say “the model is not good for five years because there is too much noise in the system”. That’s avoiding the issue then. They say you have to wait 50 years, but then you can’t validate the model, so what good is it?

It’s like weather prediction for tomorrow; you only believe it when you get to tomorrow. Weather is very difficult to predict; climate involves weather plus all these other components of the climate system, ice, oceans, vegetation, soil etc. Why should we think we can do better with climate prediction than with weather prediction? To me it’s obvious, we can’t!

I often hear scientists say “weather is unpredictable, but climate you can predict because it is the average weather”. How can they prove such a statement?

They claim it’s a boundary value problem, since they look at it from an atmospheric perspective. They are assuming that the land surface doesn’t change much, the ocean doesn’t change much, and that the atmosphere will come into some kind of statistical equilibrium. But the problem is the ocean is not static, the land surface is not static.

In fact I recently posted a blog on a paper by Filippo Giorgi (see). What he describes is a transition in thinking. He concludes there are components of a boundary value problem and components of an initial value problem with respect to 30-year predictions. But if it’s a combination of the two, then it is an initial value problem and has to be treated just like weather prediction!

What’s the difference between a boundary value and initial value problem?

Initial value means it matters what you start your model with, what your temperature is in the atmosphere, temperature in ocean, how vegetation is distributed, etc. They say it doesn’t matter what this initial distribution is; the results will equilibrate after some time, the averages will become the same.

The problem is that the boundaries also change with time. These are not real boundaries; these are interfaces between the atmosphere and ocean, atmosphere and land, and land and ocean. These are all interactive and coupled.

There are two definitions of climate: 1) long-term weather statistics, or 2) climate is made up of the ocean, land, ice sheets and the atmosphere. The latter definition is adopted by a 2005 NRC report on radiative climate forcings (see). This second definition indicates that it matters what you start your model with; e.g. if you start in the year 1950 with a different ocean distribution, you will get different weather statistics 50 years from then.

The question is why should we expect the climate system to behave in such a linear well behaved fashion when we know weather doesn’t? In the Rial et al. paper (see), we show from the observations that, on a variety of time scales, climate has these jumps, these transitions, and these are not predicted by models. These are clearly non-linear and are clearly related to what you start your climate system with.
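As a small numerical illustration of why the starting state matters in a nonlinear system (my own sketch, not part of the interview), the classic Lorenz (1963) equations can be stepped forward from two nearly identical initial states; the trajectories soon diverge.

```python
# Lorenz (1963) system: two nearly identical initial states in a nonlinear
# system diverge, so the starting state matters. Simple Euler stepping is
# used here purely for illustration.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-6)   # perturbed by one part in a million

for step in range(1, 3001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        separation = sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
        print(f"step {step}: separation = {separation:.4f}")
```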

Most climate scientists, if you present this information to them, agree that climate is an initial value problem. There are some that still argue it’s a boundary value problem. That makes it easier for them to say “if we put in CO2 from anthropogenic activities you get this very well-behaved response for the next 100 years”. However, this perspective is not supported at all by the observational record. What that means is that when we perturb the climate system we could be pushing ourselves towards or away from threshold changes we don’t understand. There is a risk in perturbing the climate system, certainly, but I don’t think we can predict it.

Does it help to start with a regional weather model?

Regional weather prediction models perform downscaling all the time. The global weather model has reality built into it: weather data, satellite data, etc. So you are taking a global model that started as an initial value problem; it remembers its initial values for a period of time. You run it for maybe ten days. It takes all that information and puts it into the sides of the regional model. So there is skill in doing that, but that skill degrades with time, because eventually the large-scale model has forgotten its initial data and starts to drift from reality. That’s why we don’t have weather prediction models for longer than 10 or 12 days or so. In fact, 7 days is what the skill typically is thought to be (see and see).

What you require for a regional climate model years in the future is that it faithfully replicates reality 50 years in the future. The models are not capable of doing that; they have not demonstrated this skill. The regional models require skilful information from the global models at the sides of their simulation domains. That’s never been shown.

But regional models can reach higher spatial resolution than global models…

Regional models unfortunately require boundary conditions from global models. We have published a paper which shows that you cannot add any value in terms of the large-scale meteorology using a regional model (see). You can do sensitivity studies: what if you have the same weather pattern but a different landscape, for example, Europe’s natural landscape or the current landscape. We and others have found that the changes in land surface processes and in the landscape are actually quite important for Europe, particularly in summer time (e.g. see).

So regional models depend on the global models?

Yes, they all depend on the global models. What you get from the global models will be fed into the regional models. So this will be completely determined by the large scale, except that finer scale information from terrain and landscape effects can be included.

The regional model has sides to it; information has to be inserted at these sides. This can only come from the global prediction models. At every time step you give it new winds and temperatures at the sides of the model. You have to prescribe the values at the sides for all time steps, thus for 10, 20, 30 years ahead. That means these regional models are determined strongly by the sides of the boxes. We have demonstrated that there is no value added from the large-scale models, as stated above.
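A schematic sketch, assuming a simple rectangular grid and not based on any particular modeling system, of what prescribing values at the sides of a regional model at every time step amounts to: the edge cells are repeatedly overwritten with values supplied by the global model, so the regional solution is strongly constrained by what the global model delivers.

```python
import numpy as np

# Schematic nesting: at every time step the regional model's outermost ring of
# grid cells is overwritten with values taken from the driving global model.

def apply_lateral_boundaries(regional_field, global_field_on_regional_grid):
    """Overwrite the edge cells of the regional grid with global-model values."""
    regional_field[0, :] = global_field_on_regional_grid[0, :]
    regional_field[-1, :] = global_field_on_regional_grid[-1, :]
    regional_field[:, 0] = global_field_on_regional_grid[:, 0]
    regional_field[:, -1] = global_field_on_regional_grid[:, -1]
    return regional_field

ny, nx = 50, 60
temperature = np.full((ny, nx), 285.0)   # regional model state [K], illustrative

for step in range(240):                  # 240 hypothetical time steps
    # Stand-in for global-model output interpolated to the regional grid:
    global_values = np.full((ny, nx), 288.0 + 0.01 * step)
    temperature = apply_lateral_boundaries(temperature, global_values)
    # ... the regional model's own dynamics and physics would update the interior here ...

print(f"boundary cell: {temperature[0, 0]:.2f} K, interior cell: {temperature[ny // 2, nx // 2]:.2f} K")
```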

Regional models give you the illusion of higher resolution. In reality they are no better than the global models. If a GCM gives you strong warming, the regional model will give you strong warming. The message is that these regional models are not giving us the information people think they are giving.

How come all these model projections show such a linear picture?

They are forcing it with a steady linear forcing, increased CO2 for example. The models do not have an adequate representation of the real-world climate feedbacks. They don’t have the other human forcings that were identified in the 2005 NRC report. The models clearly do not look like the real world in terms of its variation over time.

Sometimes skeptical people say you can do the same calculations on the back of an envelope?

I agree with the back-of-the-envelope calculation comparison. If you increase CO2 radiatively and you don’t consider any of these other effects, it’s a warming perturbation that you can calculate quite easily. The greatest effects are at high latitudes, because in the tropics the water vapor overwhelms the CO2 effect. The climate perturbation, however, is much more than that due to the radiative forcing of CO2. That’s an issue that’s not widely recognized.
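The kind of back-of-the-envelope estimate referred to here can be sketched with the widely used simplified expression ΔF ≈ 5.35 ln(C/C0) W/m² and a nominal climate sensitivity parameter. The numbers below are illustrative only, and by construction they ignore all the other forcings and feedbacks discussed in this interview.

```python
import math

# Back-of-the-envelope CO2-only warming estimate: radiative forcing from the
# simplified expression dF = 5.35 * ln(C/C0) [W m^-2], multiplied by an assumed
# climate sensitivity parameter. Illustrative only, not a prediction.

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Approximate CO2 radiative forcing [W m^-2] relative to a pre-industrial baseline."""
    return 5.35 * math.log(c_ppm / c0_ppm)

sensitivity = 0.8   # assumed equilibrium response [K per W m^-2]

for c in (380.0, 560.0):
    dF = co2_forcing(c)
    print(f"CO2 = {c:.0f} ppm: forcing ~ {dF:.2f} W/m^2, "
          f"equilibrium warming ~ {dF * sensitivity:.1f} K")
```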

CO2 is also a biogeochemical forcing, so when you increase CO2, plants can respond. All plants respond to CO2, but some respond more strongly than others, and with more CO2 plants may use water more efficiently. Thus there are complex nonlinear interactions due to increasing CO2. That really complicates how CO2 affects the climate system. Our work suggests that the biogeochemical effects of adding CO2 may have more effect on the climate system than the radiative effect of adding CO2. But the models have inadequately dealt with the biogeochemical effect of CO2.

How far along are they in introducing other forcings into the models, like landscape change and the carbon cycle?

Some of the models have land use changes, some have carbon cycles. But there is also the effect of aerosols, which influence the climate in a variety of ways. There is a table in that 2005 NRC report about indirect effects of aerosols (see). It lists about six of them. These effects are very poorly understood. Some of them are warming, some cooling. They spatially alter the heating and/or cooling. They affect the precipitation processes.

These forcings are not adequately put into the models. The more forcings you put into the model and the more feedbacks, the more complicated and the more nonlinear these models become, which makes predictions even more difficult. So I think going down the road and using our models as predictive tools is not going to be successful.

Would it be possible to have land use in models in 5 years?

Yes, as shown, for example, by the recent paper by Feddema et al in Science (see). I wrote a comment on that paper (see). Feddema et al found that land use changes didn’t change the global average temperature very much, but it changed precipitation patterns. If that’s true, that you change the hydrological cycle, the effect on society is enormous. That work has been replicated and supported by others (see). It means again that we need other climate metrics. The NRC report recommended that we adopt these climate metrics. We need to stop focusing on the global average temperature, which has become a political icon, but doesn’t really tell us what we should be looking at.

How should we validate the models?

The models need to be validated, first of all, with retrospective simulations, let’s say 1979 until the present, to see if the models can replicate the regional weather patterns averaged over seasons, for example winter and summer precipitation, temperature, etc. That has not been done. Then you take the next five years. How well do the models replicate the weather patterns over the next five years?
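A minimal sketch of the kind of retrospective check being described: compare modeled and observed seasonal means for a region and report the bias and root-mean-square error. The arrays below are hypothetical stand-ins, not real station or model data.

```python
import numpy as np

# Sketch of a retrospective validation step: compare modeled and observed
# seasonal means over a region and report bias and root-mean-square error.

def seasonal_skill(model_means, observed_means):
    """Return (bias, rmse) between two series of seasonal means."""
    model = np.asarray(model_means, dtype=float)
    obs = np.asarray(observed_means, dtype=float)
    bias = float(np.mean(model - obs))
    rmse = float(np.sqrt(np.mean((model - obs) ** 2)))
    return bias, rmse

obs_djf = [2.1, 1.4, 3.0, 2.5, 1.8]   # observed winter means [deg C], illustrative
mod_djf = [2.9, 2.2, 3.1, 3.4, 2.0]   # modeled winter means [deg C], illustrative

bias, rmse = seasonal_skill(mod_djf, obs_djf)
print(f"bias = {bias:+.2f} C, rmse = {rmse:.2f} C")
```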

So validation has been very poorly done. If they would adopt a protocol to do validation, then there would be more consensus in the climate community. Actually there is a lot of disagreement with the IPCC approach. But particularly the younger people do not want to take a position on this. There are a lot of people out there who are very disappointed with the process, but most of them don’t want to speak out. The perspective that I present is actually much broader than you might realize.

The 2005 National Academy of Sciences report is an example. I was on that committee, but Michael Mann (of Real Climate) was also on that committee. So he subscribed to the recommendations as well, but I do not see this NRC perspective discussed on that weblog. That report was completely ignored by the media. The findings of that report are questions that should be looked at by the IPCC. A few of these [IPCC] scientists are not communicating these complexities to the media and the policy makers, and the policy makers get the idea that all this is solved, understood and summarized by the effect of adding CO2 to the atmosphere. The science in the peer-reviewed literature does not support that narrow perspective, however.

Why are these IPCC scientists not ‘honest’ about the complexities?

I don’t doubt their sincerity, but I don’t understand it. I think they have taken a position for a long time and just like anyone, it’s hard to change your view.

In Japan they are building the Earth Simulator, the biggest computer model so far. Do you expect there will be a trend towards ever bigger models?

Yes, I think so. The group in Japan – and also people like Peter Cox, Richard Betts and Martin Clausen at the Hadley Center – realize that you need to have a coupled climate system model. They call it an earth simulator, but it really is a climate simulator because it’s looking at feedbacks and forcings of the land, the atmosphere, the ocean, continental ice, etc.
Their goal is to accurately include the carbon cycle, the nitrogen cycle, vegetation growth, sea ice growth and decay, continental ice sheets growth and decay, and so forth. So it’s more than a typical atmosphere-ocean GCM. It’s just a natural movement, which I think is inevitable, towards these more complete models.

In terms of understanding processes, higher resolution is going to help. But again, the problem is very complex, since we don’t know all the feedbacks and forcings. If they use these models to make predictions that can’t be validated, I don’t think that’s an effective use of the resources. Effective use would be to better understand these interactions.

If you had all the money that is going into climate science, how would you spend it?

I would take a large percentage of that to assess what I call vulnerability (e.g. see). Since I think skilful multi-decadal climate prediction is inherently almost impossible, we should probably develop an assessment of our vulnerabilities to weather events and other environmental variations of all types.

For example, how could Colorado protect itself against drought (see)? We know drought has occurred historically and prehistorically; e.g. back in the 16th century there was a mega-drought. Perhaps we should develop a more resilient system for water resources so that, regardless of how humans alter the climate in the next fifty years, we would be better protected, even when the natural cycle presents these surprises.

In vulnerability assessments you can also use models, but these are impact models. We can evaluate, for instance, what would have to happen to a resource before negative effects occur, and how we could protect against that. A nice example is the impact of sea level rise for The Netherlands. What could I do to protect myself regardless of the reason for it to occur? Significant money should go into such research. Vulnerability work has unfortunately not been funded very well.

What should society do?

I think we have to look for win-win solutions (e.g. see). Lots of the things that have been proposed make sense anyway, alternative energy for example. It lessens your dependence on risky sources of oil. Energy efficiency is a win-win for anybody, where you save money as well. Hybrid cars make sense; you reduce pollution of all types, you reduce CO, NO, ozone and you have the benefit of less CO2. Those win-win solutions can permit the consensus to move forward.

The vulnerability framework is more inclusive. You can feed in the threats from the models, if you want to. But you also have to look at historical climate change and protect yourself better, instead of relying fully on these predictions for the future. Relying on the multi-decadal global model predictions to make policy decisions is a very narrow (and risky) one dimensional approach.

Leave a comment

Filed under Climate Science Op-Eds, Climate Science Reporting

Evidence Of Health Problems With Ethanol Fuels

Thanks to Dev Niyogi for alerting me to an important news report on the health risks of ethanol vehicles. This article follows up on the Climate Science weblog

Will Climate Effects Trump Health Effects In Air Quality Regulations?

The article in Energy Daily is titled

“Ethanol Vehicles Pose A Significant Risk To Human Health”

The text reads in part,

“Ethanol is widely touted as an eco-friendly, clean-burning fuel. But if every vehicle in the United States ran on fuel made primarily from ethanol instead of pure gasoline, the number of respiratory-related deaths and hospitalizations would likely increase, according to a new study by Stanford University atmospheric scientist Mark Z. Jacobson. His findings are published in the April 18 online edition of the journal Environmental Science and Technology (ES and T).

‘Ethanol is being promoted as a clean and renewable fuel that will reduce global warming and air pollution,’ said Jacobson, associate professor of civil and environmental engineering. ‘But our results show that a high blend of ethanol poses an equal or greater risk to public health than gasoline, which already causes significant health damage.’…

The deleterious health effects of E85 will be the same, whether the ethanol is made from corn, switchgrass or other plant products, Jacobson noted. ‘Today, there is a lot of investment in ethanol,’ he said. ‘But we found that using E85 will cause at least as much health damage as gasoline, which already causes about 10,000 U.S. premature deaths annually from ozone and particulate matter. The question is, if we’re not getting any health benefits, then why continue to promote ethanol and other biofuels?

‘There are alternatives, such as battery-electric, plug-in-hybrid and hydrogen-fuel cell vehicles, whose energy can be derived from wind or solar power,’ he added. ‘These vehicles produce virtually no toxic emissions or greenhouse gases and cause very little disruption to the land, unlike ethanol made from corn or switchgrass, which will require millions of acres of farmland to mass-produce. It would seem prudent, therefore, to address climate, health and energy with technologies that have known benefits.’”

The entire news release, and the paper on which it is based, is worth reading.

Leave a comment

Filed under Climate Science Misconceptions, Climate Science Reporting

Bias In Climate Science Reporting Even In The Economist

The April 21, 2007 issue of the Economist had an interesting article entitled

“Dengue Fever: A deadly scourge”

The article starts with

“Millions at risk as a new outbreak of dengue fever sweeps Latin America”

“There is no vaccine. There is also no good way to treat it—just fluids and the hope that the fever will break. At first it seems like a case of severe flu, but then the fever rises, accompanied by headaches, excruciating joint pain, nausea and rashes. In its most serious form, known as dengue haemorrhagic fever (DHF), it involves internal and external bleeding and can result in death. Fuelled by climate change, dengue fever is on the rise again throughout the developing world, particularly in Latin America.

Mexico identified 27,000 cases of dengue fever last year, more than four times the number in 2001. In El Salvador, whose population is not much more than 6% of Mexico’s, the number soared to 22,000 last year, a 20-fold increase on five years earlier. Uruguay recently reported its first case in 90 years. In Brazil, 135,000 cases were diagnosed in the first three months of this year, a rise of about a third over the same period last year. Paraguay, the country worst affected in relation to population size, has reported more than 25,000 cases so far this year, six times the total for the whole of last year—and even this is probably an underestimate.”

However, buried in this text is the remarkable claim that this disease is

“Fuelled by climate change, dengue fever is on the rise again throughout the developing world, particularly in Latin America.”

What is the scientific evidence for the statement that dengue fever is “fuelled by climate change”?

I value reading the Economist, but the insertion of such scientifically unsubstantiated claims detracts significantly from the journalistic integrity and accuracy of this magazine. It makes one wonder if other science articles in the Economist, in areas outside of my expertise, are similarly biased.

Leave a comment

Filed under Climate Science Reporting

Another Study Of The Importance Of Land Use/Land Cover Change In Long Term Near-Surface Air Temperature Trends

A valuable new paper has been published which further documents the large role of land use/land cover change in long-term near-surface air temperature trends. The paper is

He, J. F., J. Y. Liu, D. F. Zhuang, W. Zhang, and M. L. Liu, 2007: Assessing the effect of land use/land cover change on the change of urban heat island intensity. Theor. Appl. Climatol., DOI 10.1007/s00704-006-0273-1.

The abstract reads,

“Due to rapid economic development, China has experienced one of the greatest rates of change in land use/land cover during the last two decades. This change is mainly urban expansion and cultivated land reduction in urban growth regions, both of which play an important role in regional climate change. In this paper, the variation of the urban heat island (UHI) caused by urbanization has been evaluated with an analysis of land use change in China. First, meteorological observation stations were grouped by different land cover types (dry land, paddy field, forest, grassland, water field, urban, rural inhabitable area, industrial and mineral land, and waste land) throughout China. These stations were subdivided into urban and non-urban classes. Then, a new method was proposed to determine the UHI intensity from the difference between the observed and the interpolated temperature of urban type weather stations. The results indicate that the trends of UHI intensity in different land change regions are spatially correlated with regional land and its change pattern. During 1991–2000, the estimated UHI intensity has increased by 0.11 °C per decade in spring and has fluctuated in other seasons throughout China resulting from land use change.”
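The general idea behind the method described in the abstract, the observed urban-station temperature minus a background value interpolated from surrounding non-urban stations, can be sketched as follows. The inverse-distance weighting here is only a stand-in for the authors’ actual interpolation scheme, and all the numbers are invented.

```python
import numpy as np

# Sketch of a UHI intensity estimate: urban heat island intensity is the observed
# urban-station temperature minus a background value interpolated from nearby
# non-urban stations. Inverse-distance weighting is used only as an illustration.

def interpolate_background(distances_km, temps_c, power=2.0):
    """Inverse-distance-weighted background temperature from non-urban stations."""
    d = np.asarray(distances_km, dtype=float)
    t = np.asarray(temps_c, dtype=float)
    w = 1.0 / d ** power
    return float(np.sum(w * t) / np.sum(w))

rural_distances = [12.0, 18.0, 25.0, 30.0]   # km to surrounding non-urban stations
rural_temps = [14.2, 13.9, 14.0, 13.7]       # their observed temperatures [deg C]
urban_observed = 15.1                        # urban-station temperature [deg C]

background = interpolate_background(rural_distances, rural_temps)
print(f"UHI intensity ~ {urban_observed - background:+.2f} deg C")
```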

There is an interesting definition of “global warming” in the paper. It reads,

“Global warming can be partitioned into (1) the urban heat island effect, (2) the effect of deforestation, (3) the effect of secular micro-climate shift, (4) the influence of general global warming with particular reference to the tropics (Harger, 1995).”

They also write

“We found that the variability of UHI intensity has a strong spatial connection with the pattern of land use, i.e. the influence of the UHI effect on urban stations is not only related to land use change type, but also to land use change structures. In regions where land use change type represented the sharp expansion of an urban area and the reduction in water area, the urban stations were often easily affected by UHI. On the other hand, in regions where land use change type was the expansion of urban area accompanied by an increase in water area or vegetation area, the urban stations were seldom affected by UHI.”

In the last sentence, it should be emphasized that while they did not find a “UHI” effect, there certainly must still have been a land use/land cover change effect on the temperatures. Urbanization is just one of a diverse spectrum of types of human alteration of the Earth’s landscape.

Leave a comment

Filed under Climate Change Metrics

Checks and Balances in Climate Assessment – A Guest Weblog By Hendrik Tennekes

A few weeks ago I promised to write an essay on checks and balances in climate assessments. I realized that this was going to be a tough endeavor. I am not a policy maker by profession, and I want to be fair to all parties in the climate debate. In this essay I will analyze the shortcomings of the search for consensus, and suggest an alternative by which the IPCC Summaries for Policy Makers are replaced by Policy Assessments by Policy Makers. Since all problems of present and future climate hit hardest in countries where poverty reigns, I will follow David Henderson and advocate a strong involvement of the Organization for Economic Cooperation and Development.

I am not a dreamer. As a long-time civil servant I know only too well that some people will always attempt to find ways of working around the rules of the game if they get a chance. I don’t believe in a make-believe world where everyone is playing fair. But I do believe that the rules of the game can be written such that negative consequences are minimized. A system where the advocates for competing parties have to argue their cases before an independent judge seems preferable to a system that aims for artificial consensus. In the same spirit, I am a fan of the separation of powers in government. A system by which authority is divided among the legislative, executive, and judicial branches makes it hard to hide conflicts of interests and one-sided advocacy attempts.

Fortunately, I do not have to start from scratch. Here in Holland, the Environmental Assessment Agency is charged with the duty to provide independent climate assessments to Parliament and the Executive Branch. It was not easy to make this change in the civil service machinery, however. The State Institute for Public Health and Environmental Quality fought the idea tooth and nail for many years. The civil servants there had become accustomed to doing research and advocacy simultaneously, which gave them plenty of opportunities to directly influence the policy development processes in the Ministries of Public Health and of Environment. But the National Advisory Council on Science Policy argued, successfully in the end, that a research institution cannot be the sole judge of its own research.

This system is not perfect. No system is. The demarcation lines between the tasks of the Assessment Agency and those of various research organizations and environmental groups are trespassed at times. Poorly guarded boundaries encourage two-way smuggling. On the other hand, well-defended boundaries can prevent valuable cross-fertilization. In any case, no boundary is ever completely impermeable, and to pretend otherwise is dangerous. Thus, I feel the agency is doing a fine job overall. Its adaptation report of October 2005 contained no alarmist language whatsoever, and was reasonably fair in comparing the pros and cons of global warming for various sectors of society. In recent months, Agency officers have spoken loud and clear when necessary.

I am not sufficiently knowledgeable on the civil-service situation in the USA, but I was involved in some of the processes that led to the creation of IPCC in 1988. The World Meteorological Organization (WMO) and the United Nations Environmental Program (UNEP) realized from the very beginning that their dreams of a United Nations Climate Ministry would be far beyond their grasp. There exists no World Government; autonomous nation states are unlikely to accelerate any progress in that direction. The Constitution of the USA is a testament to the farsighted Founding Fathers, but that example will not be followed soon anywhere else on the globe. An intergovernmental agency like IPCC is about all one can hope to achieve.

The formation of IPCC was affected in no small measure by prominent physicists associated with WMO. The global physics community has dominated the fast track to research funding as long as I can remember. Incessant propaganda claiming that physics is the foundation of all the natural sciences has been very effective. It creates an atmosphere in which the entire world seems to be waiting for the ultimate discovery of the Higgs particle or the next great discovery in astrophysics. The fund-raising successes of the physics community, however, depend in no small degree on the principal weakness of the peer review system. I know this is a sensitive issue, as all respectable scientists revere peer review, notwithstanding its shortcomings. As I see it, peer review is about the best possible way of weeding out substandard research results. It is not perfect; the occasional sloppy paper slips through. But that is not the core of the problem. In peer review, specialists in a particular sub-discipline evaluate the manuscripts of their immediate colleagues only. They share the same set of assumptions on the comparative relevance of their discipline. They are unlikely to interrogate their colleagues on matters beyond the technical accuracy of their work. And they refuse to involve scholars and scientists from other disciplines in the assessment of the broader relevance of their research and of the programs in which their research is embedded. To give a concrete example: the physics community will not allow any biologist or climate scientist to pass judgment on its programmatic choices.

This has worked well for physics. The physics community appeals to peer review to make sure that they do not have to submit their plans for independent assessment. They much prefer to deal with bureaucrats in funding agencies directly, without spies from other disciplines sitting in. They have succeeded in maintaining this routine because such matters as the Big Bang have no relevance to the welfare of society. In other disciplines, such as aerospace engineering or medical research, this would be inconceivable. The Federal Aviation Agency is the national watchdog for the aerospace industry, and the Food and Drug administration is the civil-service assessment agency for the food we consume and the medicines we swallow. I would not want it any other way. I am willing to believe most producers of goods or information are honest, but I do insist on separation of powers, on a system that provides appropriate checks and balances.

How to restructure IPCC, with this many boundary conditions around?
It is easy to criticize the current IPCC process. Are there plausible alternatives? Experience has shown that the scientists who author the three assessment reports find it hard to cope with the conflicts of interest involved when they have to collaborate with the diplomats who have final say over the precise texts of the Summaries for Policy Makers. In earlier years, scientists who felt the Summaries were too alarming quit the IPCC process out of desperation. The current round of negotiations alienated the scientists who felt the final texts were watered down too much by the diplomats. I propose that these dilemmas be avoided by a clear division between the duties of the scientists who write the Assessment Reports and the governmental representatives who write political documents. I prefer that Scientific Assessments be written by scientists and Policy Assessments by policy makers. The idea that Summaries for Policy Makers should be political distillates of scientific documents causes endless confusion; it should be abandoned. Nobody should be party to the scientific assessment first and then show up at the political negotiations as a government representative or scientific advisor to a diplomat. Strict adherence to the maxim that scientists should not act as disguised policy makers and vice versa would make entanglement of interests a lot harder than it is now. I don’t mind that senior scientists participate in the writing of Policy Assessments by Policy Makers, but those that do cannot participate in scientific assessments.

An arrangement of this type solves some other chronic problems as well. The much-proclaimed consensus that the IPCC process is designed to generate is a diplomatic necessity but a scientific monstrosity. I readily admit that policy makers have to aim for consensus. The final products generated by the IPCC process are addressed at the governments that gave IPCC their instructions. Differences of opinion have to be removed by negotiations before the political texts are finalized. In diplomacy, there is no other way. But diplomatic bargaining is a job for diplomats, not for scientists. One cannot bargain with scientific evidence, and one should not bargain with scientific opinions. Repeated claims that 2500 scientists are involved in the IPCC process appeal to primitive perceptions of science in the general public. Scientific opinions have an aura of objectivity; this perception is cleverly exploited by the IPCC staff. I much prefer a diplomatic consensus that stands on its own feet. That way, everyone can see who is responsible for what.

If the assessment process is split in the way I propose, the scientists involved in Scientific Assessments get the breathing space they need to represent the full range of evidence and opinions circulating in the scientific community. They need not aim for artificial consensus and could safely welcome both alarmists and skeptics of various kinds in their midst. However, the freedom to embrace a wide range of opinions will not work if the recruiting process for scientists willing to get involved in writing Assessments remains as obscure as it has been so far. The concept of checks and balances requires that power be distributed among a sufficiently large number of independent bodies, in a manner that is transparent to outside, critical observers.

At present, the selection of scientists contributing to the three Working Groups is done by the anonymous IPCC staff, which itself is recruited informally from the meteorological and environmental communities. I believe that the selection process for contributing scientists should be run by the National Academies of Science of the countries involved, or equivalent blue-ribbon groups in countries without such institutions.

Actually, the idea that Summaries for Policy Makers should be transformed into Policy Assessments by Policy Makers also gives much-needed breathing space to the policy makers. In the arrangement I propose, policy debates can become multi-dimensional and explicit. Let me borrow Roger Pielke Jr’s way of making this clear in response to a draft of this essay:

“The only way for IPCC to become accountable would be to ask that it explicitly engage in a discussion of policy options, with the intent to lay out a wide range of possible actions. To do this in a fair way would require the representation of a plurality of perspectives in the assessment process. If IPCC pretends to engage publicly only in questions of science and impacts, the politics will enter through the back door.”

The hardest problem of all is the bureaucracy that runs the show. The IPCC staff has acquired the same habits that bureaucrats everywhere employ. The staff is not required to divulge its internal processes and is therefore free to operate as an advocacy group for a limited range of interests. They do much of the homework behind the scenes, they engage in lobbying wherever they smell advantages, they remain invisible when they recruit scientists for the assessment process, they write draft reports and press releases, and so on. I cannot blame these anonymous bureaucrats. This is the way things work in practice everywhere. Bureaucrats play their games under the table, where the constitutional separation of powers cannot reach them. I know of no way to redress this.

But I did come across an alternative proposed by David Henderson, the former OECD economist who was the lead author of the Dual Critique of the Stern Review a few months ago. Henderson wants to involve the Organization for Economic Cooperation and Development in climate change assessments. Let me quote from the text of lectures Henderson gave in Australia and New Zealand in February and March of this year:

“My first proposal is for joint action on the part of an international group of central economics departments and agencies in the governments of the thirty member countries of OECD. These various departments and agencies could become involved in the policy process collectively, to good effect and without delay. The mechanism for this is OECD itself. A distinctive feature of OECD is that it is the only international agency in which ministers and officials from these departments and agencies are able, if they so wish, to review systematically issues across the entire spectrum of microeconomic and structural policies. They can do so, with secretariat back-up from the OECD’s Economics Department, in and through the Economic Policy Committee, which is their own committee.”

Henderson goes on and states:

“My second proposal is that funds should now be made available to commission, prepare and publish a full independent review of the Fourth Assessment Report. The review would cover the whole range of issues and topics, economic and procedural as well as scientific, and policy-related as well as analytical. Its preparation would be entrusted to an international team of authors, reporting to a suitably constituted steering group.”

These proposals appeal to me. The bureaucrats in IPCC have been able to operate without having to worry about countervailing powers. Absent these, mere humans cannot make balanced assessments of anything. If OECD were to take the lead in organizing Assessments of the IPCC assessments, it would establish a level playing field, in which all matters that have been swept under the rug finally will have to be dealt with. If the economists of OECD get involved, the unavoidable evaluation of costs and benefits of various proposals would become an integral part of the assessment process. I am stating this with some emphasis, because environmental bureaucrats have a habit of framing issues such that they are not responsible for the consequences. They refuse to engage in discussions of the cost-benefit ratio of their proposals. They avoid getting involved in questions concerning poverty, malnutrition, and unemployment.

Before I finish this essay with a personal note, I want to quote Daniel Sarewitz, who responded to a draft of this essay with the following comments:

“Part of the problem with science is that we want it to provide answers. We need to understand that science, like humanity, is imperfect, laden with ambiguities and contradictions, and always, always, less literally useful the closer it is to human practice and experience. If we could be comfortable with that, then maybe all these other expectations could be tempered.”

I share these concerns. There is no answer to the many dilemmas of climate assessment. We should not expect from science what it cannot deliver, and we cannot expect scientists to be superhuman. But we can and should attempt to make assessment processes a little more accountable. That is what I have tried. I know my proposals leave several problems unsolved, but I take that in stride.

Finally, I am not a lawmaker. Designing the procedures that will establish the much-needed checks and balances in climate assessments requires the efforts of many professionals. I know I have a limited perspective, but I will be happy to participate in deliberations that lead to the desired objective.

Leave a comment

Filed under Guest Weblogs

A Paper On The Complexity of Carbon Sequestration To Mitigate Global Warming

A short essay

Pielke Sr., R.A., 2001: Carbon sequestration — The need for an integrated climate system approach. Bull. Amer. Meteor. Soc., 82, 2021.

discussed the complexity of carbon assimilation through deliberate landscape manipulation.

A new paper has appeared (and thanks to Timo Hämeranta for again alerting us to such important papers!)

Bala, Govindasamy, K. Caldeira, M. Wickett, T. J. Phillips, D. B. Lobell, C. Delire, and A. Mirin, 2007. Combined climate and carbon-cycle effects of large-scale deforestation. PNAS published online before print April 9, 2007

which provides a detailed confirmation of the complexity of carbon assimilation through deliberate landscape management.

The abstract reads,

“The prevention of deforestation and promotion of afforestation have often been cited as strategies to slow global warming. Deforestation releases CO2 to the atmosphere, which exerts a warming influence on Earth’s climate. However, biophysical effects of deforestation, which include changes in land surface albedo, evapotranspiration, and cloud cover also affect climate. Here we present results from several large-scale deforestation experiments performed with a three-dimensional coupled global carbon-cycle and climate model. These simulations were performed by using a fully three-dimensional model representing physical and biogeochemical interactions among land, atmosphere, and ocean. We find that global-scale deforestation has a net cooling influence on Earth’s climate, because the warming carbon-cycle effects of deforestation are overwhelmed by the net cooling associated with changes in albedo and evapotranspiration. Latitude-specific deforestation experiments indicate that afforestation projects in the tropics would be clearly beneficial in mitigating global-scale warming, but would be counterproductive if implemented at high latitudes and would offer only marginal benefits in temperate regions. Although these results question the efficacy of mid- and high-latitude afforestation projects for climate mitigation, forests remain environmentally valuable resources for many reasons unrelated to climate.”

The only issue with this excellent research contribution is that the terms “carbon” and “climate” are treated as separate. The paper itself, however, demonstrates that the carbon cycle is part of the climate system. That they are “combined” reinforces the need to consider the climate as a system, as illustrated, for example, by Figure 1-1 in the 2005 National Research Council Report “Radiative Forcing of Climate Change: Expanding the Concept and Addressing Uncertainties“.

The article does make an effective point that “forests remain environmentally valuable resources for many reasons unrelated to climate”. This is why the IPCC and other assessments need to move beyond their narrow focus on the global average radiative forcing of CO2 as the dominant environmental concern.

Leave a comment

Filed under Climate Change Forcings & Feedbacks

New Program To Evaluate The Major Role of Nitrogen Within The Climate System

Climate Science has reported on the very important role of nitrogen deposition within the climate system (e.g. see). There is now a new program which identifies environmental risks associated with nitrogen pollution. With much of the attention on carbon dioxide as the dominant climate forcing, this effort to include nitrogen pollution is a much-needed broadening of environmental concerns.

The announcement that was sent to me, with a request to post it on Climate Science, follows:

“The David and Lucile Packard Foundation invites you to be part of an online collaboration to create strategies for reducing nitrogen pollution. Please join at http://nitrogen.packard.org.

We would also like to ask if you would post this site on your blog so that your readers and other bloggers interested in the nitrogen pollution problem can participate.

An increasingly dangerous threat to our environment and human health, nitrogen pollution is degrading water quality and coastal ecosystems, contributing to climate change and posing a variety of health risks. Despite its rapid growth and harmful consequences, the problem of nitrogen pollution has received relatively little attention, except in areas suffering the consequences. In response to this gap, the Packard Foundation is exploring opportunities for philanthropic investments to make a significant contribution to solutions. The Foundation has not yet decided whether or not to establish a grant making program in this area; this decision will depend in large part on whether promising investment strategies and opportunities can be identified.

Since the most robust strategies for addressing a problem as complex as nitrogen pollution can not be developed by Packard alone, the Foundation has launched a public forum for collaboration. Everyone with an interest in reducing nitrogen pollution is invited to join and work together to create effective strategies for addressing this pressing problem.

The forum will be live and open to public participation through May 10th.

Packard will make the full product of this forum available to the Foundation’s Trustees at its June Board meeting and the Foundation staff will use the product of the site in developing a recommended strategy for the Trustees to consider. Once the forum closes, the outcomes of this work will be available to the public, archived online and protected under a Creative Commons License.

Thank you in advance for participating in this important collaboration.

Here are a couple tips for getting registered and contributing to Nitrogen.packard.org:

To register or sign in for Nitrogen.packard.org, click the “Sign In” link in the upper right corner of the screen. From the registration screen, enter your username and password or click register to register a new username

Before you begin participating, introduce yourself to the community by clicking on the “Introduce Yourself” link on the left hand column of the home page. Once on the introductions page, click on the edit button in the upper right hand corner, and then add your introduction to the list.

Now you’re ready to participate! The items in the left-hand column of the home page are the different ways you can participate on the site. For instance, you can choose to edit the nitrogen/agriculture strategy by clicking on the wiki link, or you can discuss the strategies by clicking on the discussion link

We recommend you start by going to the wiki and reading through the strategies. Then go to the strategy that aligns with your own work, go to the bottom of that strategy, and add a paragraph describing the work that you already have underway under “Projects, Programs, and Organizations.”

Even more valuable, of course, would be for you to start revising the possible outcomes or strategies or for you to add an entirely new strategy that you think would be effective.

Finally, please contribute your thoughts to the discussion section, rate the impact and cost effectiveness of each strategy by taking the survey, and help expand and refine the stakeholder map.

If you experience any technical difficulties registering or using the site, please be sure to email or call Tech Support: Nitrogen@packard.org; tel:1-650-917-7288.”

Congratulations to the David and Lucile Packard Foundation for taking on this very important and much needed effort!

Leave a comment

Filed under Climate Change Forcings & Feedbacks, Vulnerability Paradigm

Contribution of land-atmosphere processes to recent European summer heat – A New Paper

There is a new paper on the European heat wave of 2003 and other years (thanks to Charles Muller for alerting me to this new contribution!). The paper is

Fischer E. M., S. I. Seneviratne, D. Lüthi, C. Schär (2007), Contribution of land-atmosphere coupling to recent European summer heat waves, Geophys. Res. Lett., 34, L06707, doi:10.1029/2006GL029068.

The abstract reads,

“Most of the recent European summer heat waves have been preceded by a pronounced spring precipitation deficit. The lack of precipitation and the associated depletion of soil moisture result in reduced latent cooling and thereby amplify the summer temperature extremes. In order to quantify the contribution of land-atmosphere interactions, we conduct regional climate simulations with and without land-atmosphere coupling for four selected major summer heat waves in 1976, 1994, 2003, and 2005. The coupled simulation uses a fully coupled land-surface model, while in the uncoupled simulation the mean seasonal cycle of soil moisture is prescribed. The experiments reveal that land-atmosphere coupling plays an important role for the evolution of the investigated heat waves both through local and remote effects. During all simulated events soil moisture-temperature interactions increase the heat wave duration and account for typically 50–80% of the number of hot summer days. The largest impact is found for daily maximum temperatures during heat wave episodes.”
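One way to read the abstract’s attribution of “50–80% of the number of hot summer days” to land-atmosphere coupling is as the share of coupled-run hot days that disappear when soil moisture is prescribed. A minimal sketch with invented numbers:

```python
# Fraction of hot summer days attributable to land-atmosphere coupling, expressed
# as the share of coupled-run hot days that are absent in the uncoupled run
# (where the mean seasonal cycle of soil moisture is prescribed).
# The counts below are illustrative only.

def coupling_contribution(nhd_coupled, nhd_uncoupled):
    """Fraction of hot days in the coupled run attributable to coupling."""
    return (nhd_coupled - nhd_uncoupled) / nhd_coupled

print(f"{coupling_contribution(nhd_coupled=30, nhd_uncoupled=9):.0%}")   # -> 70%
```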

Excerpts from the paper are

“As regards climate change, model simulations indicate that extraordinary hot summers over Europe and other mid-latitudinal regions will become more frequent, more intense and longer lasting in the future [e.g., Meehl and Tebaldi, 2004; Giorgi et al., 2004], partly associated with an increase in interannual temperature variability [e.g., Schär et al., 2004; Vidale et al., 2007]. Seneviratne et al. [2006] found that the latter variability increase is strongly related to land-atmosphere coupling.”

and

“The regional climate model experiments reveal a major contribution of land-atmosphere interactions to the spatial and temporal extent of all four heat waves. In all the cases considered, the difference between coupled and uncoupled simulations is considerably larger than the model biases. Land-atmosphere interactions over the drought regions account for typically 50–80% of the NHD. This is mainly due to local effects through the limitation of evaporation (and compensation by sensible heat flux) due to drought conditions. Additionally drought conditions may have remote effects on areas around or outside the actual drought region, through changes in atmospheric circulation and advection of air masses. These mechanisms enhance the anticyclonic circulation over or slightly downstream of a drought anomaly. However, a larger computational domain or global simulations would be needed to further explore this effect.”

This paper adds to the information on the reasons for these heat waves, and on how anomalous they are, as discussed in

Chase, T.N., K. Wolter, R.A. Pielke Sr., and Ichtiaque Rasool, 2006: Was the 2003 European summer heat wave unusual in a global context? Geophys. Res. Lett., 33, L23709, doi:10.1029/2006GL027470.

The Fischer et al paper emphasizes a point that is made frequently on Climate Science: that we need a regional focus on the role of human and natural causes of climate variability and change. The use of the global average temperature completely misses these regional issues.

Leave a comment

Filed under Climate Change Forcings & Feedbacks

An Excellent Media Summary On The Need For Regional and Local Climate And Other Environmental Information

An excellent news report on the need for regional and local scale climate and environmental information was published on April 19, 2007 in the Guardian Unlimited. The article is by James Bloom and is entitled “Think global, calculate local”.

The reference to my perspective is accurately written:

“He resigned from the IPCC in 1995 because of a disagreement over what should constitute a First Order Climate Forcing. He says: ‘The leadership of the IPCC has decided to focus on a global average surface temperature perspective and CO2 Forcing as the most important issues. This is an inappropriately narrow view of the human role in the climate system.’

Pielke thinks the governing bodies should take a bottom-up perspective, giving more prominence to local events such as deforestation. ‘As with an El Niño, which alters rainfall patterns thousands of kilometres away, land use change can alter rainfall and other aspects of the climate system across long distances, even in the absence of a global average surface temperature change.'”

The entire news article is worth reading.

Finally, the media is presenting more balance in its coverage of the diverse perspectives on the climate change issue.

Leave a comment

Filed under Climate Science Reporting

Another Paper On Ocean Heat Content Which Documents Continuing Problems With The Ability To Skillfully Simulate Climate Processes

There is another useful paper which adds to the discussion of the ocean heat content issue. It is

Gleckler, P. J., K. R. Sperber, and K. AchutaRao (2006), Annual cycle of global ocean heat content: Observed and Simulated, J. Geophys. Res., 111, C06008, doi:10.1029/2005JC003223.

The abstract reads,

“This study focuses on the annual cycle of global ocean heat content and its variation with depth. Our primary objective is to evaluate a recent suite of coupled ocean atmosphere simulations of the twentieth century in the context of available observations. In support of this objective, we extend the analysis and interpretation of observational estimates. In many respects, the collection of models examined compare well with observations. The largest signal in the annual cycle of ocean heat content is in the midlatitudes, where all the models do a credible job of capturing the amplitude, phasing, and depth penetration. Judging the models’ performance at high latitudes is more complex because of the sparseness of observations and complications owing to the presence of sea ice. The most obvious problems identified in this study are in the tropics, where many climate models continue to have troublesome biases.”

Excerpts from the paper are,

“It is widely recognized that understanding climate variability and potential climate change requires a thorough grasp of oceanic influences. The ocean plays a central role in moderating climate because of its enormous heat capacity relative to the atmosphere.”

“Perhaps the most prevalent tropical systematic error is associated with the Southern Hemisphere split ITCZ in the central-to-eastern Pacific. Recent evidence indicates that in at least one model the split ITCZ is due to deficiencies in the atmospheric model that become manifest within about 24 hours of initialization with observations (J. Boyle, personal communication, 2005). Even though the source of this problem may lie exclusively in the atmospheric model, air-sea interactions are known to exacerbate this shortcoming [Gleckler et al., 2004]. ”

“The historical forcing simulations examined here were initialized from control runs (which have no time varying external forcing), most of which exhibit appreciable drift over time in global ocean heat content. This drift represents slow secular change as a model approaches equilibrium after coupling. How large is this drift compared to the annual cycle of ocean heat content? We have computed the drift in the control runs of each of the models examined here over the contemporaneous (1957–1990) period. The intermodel standard deviation of the drift in the global ocean heat content during this period is 23.5 * 10**22 J, or 0.7 * 10**22 J/yr.”

“Drift in these simulations is more of an issue when evaluating the evolution of annual mean global ocean heat content over the historical period. For these calculations, drift is usually removed by subtracting contemporaneous control run heat content from the twentieth century simulations [e.g., Gregory et al., 2004; Gleckler et al., 2006]. Although models have improved considerably and few now employ flux adjustments, a further reduction in control climate drift is clearly desired.”
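A minimal sketch, using synthetic series, of the drift correction described in this excerpt: subtract the contemporaneous control-run heat content from the twentieth-century simulation, so that slow drift toward the model’s equilibrium is not mistaken for a forced trend.

```python
import numpy as np

# Drift correction as described in the excerpt above: subtract the contemporaneous
# control-run ocean heat content from the historical (20th-century) simulation.
# The series below are synthetic stand-ins for annual-mean global OHC anomalies.

years = np.arange(1957, 1991)
control_run = 0.05 * (years - 1957)                     # spurious model drift [10^22 J]
historical_run = control_run + 0.03 * (years - 1957)    # drift plus a forced signal

drift_corrected = historical_run - control_run

print(f"raw 1957-1990 change:        {historical_run[-1] - historical_run[0]:.2f} x 10^22 J")
print(f"drift-corrected change:      {drift_corrected[-1] - drift_corrected[0]:.2f} x 10^22 J")
```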

The type of analysis presented in this paper is a very useful framework that should be the focus of assessments of global warming and cooling. It also shows that major problems still exist in the ability of coupled atmosphere-ocean models to skillfully predict climate change.

Leave a comment

Filed under Climate Change Metrics