Monthly Archives: May 2007

The 2007 IPCC WG1 Authors are Climate Skeptics.

The word “skeptic” has been used, both implicitly and explicitly, to criticize those who disagree with the IPCC perspective on the role of humans in global climate change. At the website Wikipedia, the definition of a “climate skeptic” is given as

“Climate scientists agree that the global average surface temperature has risen over the last few decades. Within this general agreement, a small number of scientists disagree with the conclusions drawn by the mainstream scientific community that most of this warming is attributable to human activities. The consensus position of the climate science community has been summarized in the 2001 Third Assessment Report of the Intergovernmental Panel on Climate Change (IPCC) as follows:

1. The global average surface temperature has risen 0.6 ± 0.2 °C since the late 19th century, and 0.17 °C per decade in the last 30 years.

2. “There is new and stronger evidence that most of the warming observed over the last 50 years is attributable to human activities”, in particular emissions of the greenhouse gases carbon dioxide and methane.

3. If greenhouse gas emissions continue the warming will also continue, with temperatures increasing by 1.4 °C to 5.8 °C between 1990 and 2100. Accompanying this temperature increase will be a sea level rise of 9 cm to 88 cm, and increases in some types of extreme weather. On balance the impacts of global warming will be significantly negative, especially for larger values of warming.”

There is another link on Wikipedia titled “Category:Global warming skeptics”.

However, the real issue is: which segment of the climate science community (and of other communities) is actually more skeptical?

The Wikipedia definition of “skepticism” includes

“1. an attitude of doubt or a disposition to incredulity either in general or toward a particular object

2. the doctrine that true knowledge or knowledge in a particular area is uncertain”

By this definition, the actual climate skeptics are the authors of the 2007 WG1 IPCC report! They have decided to ignore or minimize the findings of other climate assessments, such as in the report

National Research Council, 2005: Radiative forcing of climate change: Expanding the concept and addressing uncertainties. Committee on Radiative Forcing Effects on Climate Change, Climate Research Committee, Board on Atmospheric Sciences and Climate, Division on Earth and Life Studies, The National Academies Press, Washington, D.C., 208 pp.

Listed below are the climate science findings about which the IPCC WG1 authors are skeptical, since they do not appropriately assess these issues in their report. As written in the 2005 NRC report:

“EXPANDING THE RADIATIVE FORCING CONCEPT

Despite all these advantages, the traditional global mean TOA radiative forcing concept has some important limitations, which have come increasingly to light over the past decade. The concept is inadequate for some forcing agents, such as absorbing aerosols and land-use changes, that may have regional climate impacts much greater than would be predicted from TOA radiative forcing. Also, it diagnoses only one measure of climate change—global mean surface temperature response—while offering little information on regional climate change or precipitation. These limitations can be addressed by expanding the radiative forcing concept and through the introduction of additional forcing metrics. In particular, the concept needs to be extended to account for (1) the vertical structure of radiative forcing, (2) regional variability in radiative forcing, and (3) nonradiative forcing….”

1. Account for the Vertical Structure of Radiative Forcing

“The relationship between TOA radiative forcing and surface temperature is affected by the vertical distribution of radiative forcing within the atmosphere. This effect is dramatic for absorbing aerosols such as black carbon, which may have little TOA forcing but greatly reduce solar radiation reaching the surface. It can also be important for land-use driven changes in the evapotranspiration flux at the surface, which change the energy deposited in the atmosphere without necessarily affecting the surface radiative flux. These effects can be addressed by considering surface as well as TOA radiative forcing as a metric of energy imbalance. The net radiative forcing of the atmosphere can be deduced from the difference between TOA and surface radiative forcing and may be able to provide information on expected changes in precipitation and vertical mixing. Adoption of surface radiative forcing as a new metric will require research to test the ability of climate models to reproduce the observed vertical distribution of forcing (e.g., from aircraft campaigns) and to investigate the response of climate to the vertical structure of the radiative forcing.

PRIORITY RECOMMENDATIONS:

Test and improve the ability of climate models to reproduce the observed vertical structure of forcing for a variety of locations and forcing conditions.

Undertake research to characterize the dependence of climate response on the vertical structure of radiative forcing.

Report global mean radiative forcing at both the surface and the top of the atmosphere in climate change assessments.

2. Determine the Importance of Regional Variation in Radiative Forcing

Regional variations in radiative forcing may have important regional and global climatic implications that are not resolved by the concept of global mean radiative forcing. Tropospheric aerosols and landscape changes have particularly heterogeneous forcings. To date, there have been only limited studies of regional radiative forcing and response. Indeed, it is not clear how best to diagnose a regional forcing and response in the observational record; regional forcings can lead to global climate responses, while global forcings can be associated with regional climate responses. Regional diabatic heating can also cause atmospheric teleconnections that influence regional climate thousands of kilometers away from the point of forcing. Improving societally relevant projections of regional climate impacts will require a better understanding of the magnitudes of regional forcings and the associated climate responses.

PRIORITY RECOMMENDATIONS:

Use climate records to investigate relationships between regional radiative forcing (e.g., land-use or aerosol changes) and climate response in the same region, other regions, and globally.

Quantify and compare climate responses from regional radiative forcings in different climate models and on different timescales (e.g., seasonal, interannual), and report results in climate change assessments.

3. Determine the Importance of Nonradiative Forcings

Several types of forcings—most notably aerosols, land-use and land-cover change, and modifications to biogeochemistry—impact the climate system in nonradiative ways, in particular by modifying the hydrological cycle and vegetation dynamics. Aerosols exert a forcing on the hydrological cycle by modifying cloud condensation nuclei, ice nuclei, precipitation efficiency, and the ratio between solar direct and diffuse radiation received. Other nonradiative forcings modify the biological components of the climate system by changing the fluxes of trace gases and heat between vegetation, soils, and the atmosphere and by modifying the amount and types of vegetation. No metrics for quantifying such nonradiative forcings have been accepted. Nonradiative forcings have eventual radiative impacts, so one option would be to quantify these radiative impacts. However, this approach may not convey appropriately the impacts of nonradiative forcings on societally relevant climate variables such as precipitation or ecosystem function. Any new metrics must also be able to characterize the regional structure in nonradiative forcing and climate response.

PRIORITY RECOMMENDATIONS:

Improve understanding and parameterizations of aerosol-cloud thermodynamic interactions and land-atmosphere interactions in climate models in order to quantify the impacts of these nonradiative forcings on both regional and global scales.

Develop improved land-use and land-cover classifications at high resolution for the past and present, as well as scenarios for the future.

4. Provide Improved Guidance to the Policy Community

The radiative forcing concept is used extensively to inform climate policy discussions, in particular to compare the relative impacts of forcing agents. For example, integrated assessment models use radiative forcing as input to simple climate models, which are linked with socioeconomic models that predict economic damages from climate impacts and costs of various response strategies. The simplified climate models generally focus on global mean surface temperature, ignoring regional temperature changes and other societally relevant aspects of climate, such as rainfall or sea level. Incorporating these complexities is evidently needed in policy analysis. It is important to communicate the expanded forcing concepts as described in this report to the policy community and to develop the tools that will make their application useful in a policy context.

PRIORITY RECOMMENDATION:

Encourage policy analysts and integrated assessment modelers to move beyond simple climate models based entirely on global mean TOA radiative forcing and incorporate new global and regional radiative and nonradiative forcing metrics as they become available.”
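
The NRC excerpt above notes that the net radiative forcing of the atmosphere can be deduced from the difference between TOA and surface radiative forcing. As a minimal worked sketch of that arithmetic (the numbers below are purely illustrative, not values from the report):

```python
# Minimal sketch of the NRC report's point that the forcing deposited in
# the atmospheric column is the TOA forcing minus the surface forcing.
# The numbers are illustrative placeholders, not measured values.

def net_atmospheric_forcing(f_toa_wm2: float, f_surface_wm2: float) -> float:
    """Net forcing absorbed within the atmospheric column (W m^-2)."""
    return f_toa_wm2 - f_surface_wm2

# Hypothetical absorbing aerosol (e.g., black carbon): near-zero TOA
# forcing but a large reduction of solar radiation reaching the surface.
f_toa = 0.2    # W m^-2
f_sfc = -3.0   # W m^-2

print(net_atmospheric_forcing(f_toa, f_sfc))  # 3.2 W m^-2 heats the column
```

This is why a forcing agent can look unimportant in a global mean TOA tally while still substantially heating the atmosphere and altering precipitation.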

The conclusion that the IPCC WG1 authors consciously chose to minimize and even ignore most of these recommendations certainly permits them to be labeled as climate skeptics.

Filed under Climate Science Misconceptions, Climate Science Op-Eds

Are There More Storms Than There Used To Be? by Peggy LeMone

Dr. LeMone has graciously permitted us to post on Climate Science the weblog below from her website. Her first guest weblog appeared on Climate Science on May 9, 2007.

The Guest weblog follows:

The work of Roger Pielke, Sr., discussed in the last blog, suggests that thunderstorms might be more common than they were 100 years ago. Are they?

My first job in science was as a college student. For ten hours a week, I worked on putting together a ‘tornado climatology’ for Professor Grant Darkow at the University of Missouri. To do this, I had to find out when and where tornadoes occurred in my home state of Missouri. I used records from the U.S. Weather Service (Monthly Weather Review and, for more recent years, Storm Data), as well as weekly newspapers from small towns around the state.

Why? We wanted to see if we could learn more about how tornadoes form by knowing more about:

• When tornadoes form
• Where tornadoes form
• How big they are
• How long they stay on the ground
• What time of day they happen
• What direction they come from

This information usually came from eyewitness reports of a funnel cloud and from damage reports that lined up along a narrow track consistent with the funnel cloud’s path. Ted Fujita, for whom the Fujita Scale of tornado intensity is named, developed methods for associating damage patterns with tornadoes.
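
To make the bookkeeping concrete, here is a minimal, entirely hypothetical sketch of how such a when-and-where tally might be assembled once the reports are digitized; the file name and column names are invented for illustration, since the real records came from Monthly Weather Review, Storm Data, and local newspapers.

```python
# Hypothetical sketch of a tornado-climatology tally. The CSV file and its
# columns are invented for illustration; real data came from published
# reports and newspaper accounts.
import csv
from collections import Counter

by_county: Counter = Counter()
by_year: Counter = Counter()

with open("missouri_tornado_reports.csv", newline="") as f:
    for row in csv.DictReader(f):       # assumed fields: county, date
        by_county[row["county"]] += 1
        by_year[row["date"][:4]] += 1   # assumes ISO dates, e.g. 1957-05-20

print(by_county.most_common(5))         # counties with the most reports
print(sorted(by_year.items()))          # reports per year, to eyeball trends
```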

What did we find out? We found out that

There were more tornadoes where there were more people. If there are more people, the chances go up of someone seeing a tornado. And tornado damage is more obvious where there are houses than in an open field, at least when you aren’t specifically looking for it. Also, if there are enough people, there is often a small newspaper to report the tornado.

The area around St. Louis, Missouri, had the most tornadoes per unit area. This was not only because there were a lot of people in and around St. Louis, but also because there was a storm chaser and scientist named Ed Brooks who found and reported tornadoes that no one would have known about otherwise.

Looking at the map in Figure 1, there also seem to be more tornadoes on the western side of the state than on the eastern side, although Kansas City (where the Missouri River reaches the west side of the state) probably accounts for some of the high numbers there.

Figure 1. Number of tornadoes in Missouri by county between 1916 and 1969. St. Louis is on the east side of the state, in the county that has 33 tornadoes. Kansas City is at the north end of the straight north-south part of the state’s west border. Figure courtesy of Grant L. Darkow, University of Missouri-Columbia.

We also found out that the number of tornadoes went up with time, with a rapid increase in the 1950s. Some of this is related to the increase in the number of people and newspapers over time.

Also, the Weather Service (then the Weather Bureau) started tornado forecasting in the early 1950s. So not only were people reminded to look for tornadoes, but weather forecasters would also look for evidence of them to check how good their forecasts were. About the same time, the Weather Service started to publish summaries of tornadoes and other severe weather in Storm Data, making the information much easier to find.

Apparently my hard work between about 1965 and 1968 didn’t increase the number of tornadoes counted – aside from a big peak in 1967, the number of tornadoes was about the same as in the late 1950s.

Note that the tornado deaths stayed about the same in spite of the increase in population. If the number of tornadoes really had increased, a steady death rate might mean better tornado warnings. But we felt that the small change in the number of deaths meant that the number of destructive tornadoes hadn’t changed that much (and thus probably the total number didn’t either).

Figure 2. For the state of Missouri, tornadoes and tornado deaths by year. Note the big jump in the mid-1950s, which corresponds to when the Weather Service formally started recording storm occurrence in Storm Data. Figure courtesy of Grant L. Darkow, University of Missouri-Columbia.

Tornadoes occur with strong thunderstorms, which usually produce lots of rain. An easier question to answer is whether there are more heavy rainstorms.

Why is this? First, there are lots and lots of rain gauges. But they are not everywhere, so scientists lump together several gauges in an area. Obviously, the changes over time will be more accurate if the area is large enough to have lots of rain gauges with long records.

But the areas you look at should be small enough so that you can see how things vary from place to place. By very carefully combining rain gauge records and measuring how accurate the resulting trends are, Pavel Groisman and his colleagues at the National Climatic Data Center found out that they can detect an increase in very heavy rainfall in the east-central United States. In other places, the trend is too small to say for sure using their methods.
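
The flavor of that kind of analysis can be sketched in a few lines. This is a generic illustration on synthetic data, not the actual method of Groisman and colleagues: lump the gauges in a region together, fit a linear trend to the regional series, and ask whether the trend is statistically distinguishable from zero.

```python
# Generic trend-detection sketch (synthetic data, not Groisman et al.'s
# method): average heavy-rain-day counts across gauges, then test the
# linear trend of the regional series for significance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(1961, 1991)                       # 30 hypothetical years
# Four gauges with Poisson counts plus a small imposed upward drift.
gauges = rng.poisson(lam=10, size=(4, years.size)) + 0.05 * (years - 1961)

regional_mean = gauges.mean(axis=0)                 # lump gauges together
slope, intercept, r, p, se = stats.linregress(years, regional_mean)
print(f"trend = {slope:.3f} days/yr, p = {p:.3f}")  # small p: detectable trend
```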

Since thunderstorm clouds are cumulonimbus clouds, you could count observations of cumulonimbus clouds to see whether there are more thunderstorms. (To see what a cumulonimbus cloud looks like, see the cloud chart at http://www.globe.gov/fsl/pdf/en_fr_es.pdf). Between the early 1950s and the early 1990s, when human observers were replaced by automated weather stations, there was a good continuous record of cloud-type observations by U.S. Weather Service human observers. (Now, students like you take observations for GLOBE-related projects like the CloudSat mission for “ground truth.”) During this time, the number of cumulonimbus cloud observations increased during the spring and fall, but the reports didn’t change much during the summer. The authors suggest that the spring and fall increase is related to more warm days as winter gets shorter.

So the small number of clues here suggests that there are more heavy rain events in some parts of the country, and more thunderstorm clouds – in the spring and fall. But we really can’t tell whether there are more – or fewer – tornadoes.

And the much harder question to answer is, “Why?”

My sincere thanks to Grant Darkow for checking my facts and allowing me to reproduce his figures.

Filed under Guest Weblogs

A New Paper On The Role Of Land Surface Processes On Tropical Cyclone Activity

There is a new paper which documents the close coupling between land surface processes (in this case, dust from the Sahara Desert and the Sahel in Africa) and tropical cyclone activity. The paper is

Wu L. (2007), Impact of Saharan air layer on hurricane peak intensity, Geophys. Res. Lett., 34, L09802, doi:10.1029/2007GL029564.

The abstract reads,

“The Saharan air layer (SAL), which is associated with African dust outbreaks, forms as air moves across the Sahara Desert, containing substantial amounts of mineral dust. While the relationships of Sahel rainfall with African dust outbreaks and Atlantic hurricane activity have been documented in previous studies, analyses of various independent datasets show that the Sahel rainfall, SAL activity and hurricane peak intensity in the Atlantic basin are highly correlated. The long-term trend in hurricane peak intensity generally follows the Sahel rainfall and SAL activity. The decreasing trend in hurricane intensity by the mid-1980s was associated with the enhancing SAL activity (drying relative humidity and enhancing vertical shear) and the severe drought in the Sahel, while the recent moderate increasing trend in hurricane intensity is consistent with the weakening SAL activity (wetting relative humidity, weakening vertical shear and decreasing dust load) and the ameliorating Sahel drought. This study suggests that the SAL may act as a link between the summer African monsoon and Atlantic hurricane activity.”

Excerpts from the paper read,

“The recent increasing hurricane activity has been related to the SST warming that occurred in the tropical Atlantic since the 1970s [Emanuel, 2005; Webster et al., 2005] and the Atlantic Multi-decadal Oscillation (AMO) [Goldenberg et al., 2001]. In this study, the mean SST for the hurricane peak season is averaged over 6–18°N, 20–60°W, the same area used by Emanuel [2005]. The SST data are obtained from the Extended Reconstructed SST (ERSST) dataset of the National Oceanic and Atmospheric Administration (NOAA) and the AMO index is obtained from the NOAA Climate Analysis Branch. The mean SST is not statistically correlated with the mean peak intensity (Figure 3c). On the other hand, the AMO index is statistically correlated with the peak intensity. The index has a correlation coefficient of 0.46 with the peak intensity. The hurricane peak intensity was consistent with the decreasing trend in the AMO index by the 1970s, but after the 1980s the intensity has been trending upward at a much slower rate than the SST and AMO index. The combined SAL effect index or the Sahel rainfall can better account for the 58-year trend in the mean peak intensity than the SST or AMO index. However, this is not to say that the SST and AMO have nothing to do with the hurricane intensity trends. Sahel is one of the most climatically sensitive zones in the world [Zeng, 2003]. The variability of the Sahel rainfall is closely associated with the global SST changes, the summer African monsoon, and even anthropogenic influences [Giannini et al., 2003; Held et al., 2005]. The tropical SST and AMO may affect hurricane intensity through the SAL.”

“While the close relationships of the Sahel rainfall with African dust outbreaks and Atlantic hurricane activity have been documented in previous studies, in this study evidence is provided through analyses of various datasets associated with African dust outbreaks, Sahel droughts and Atlantic hurricane peak intensity, suggesting that the SAL may act as a link between the summer African monsoon and Atlantic hurricane activity. The combined SAL effect index or Sahel rainfall can much better account for the trend in the mean peak intensity over the past 58 years than the tropical SST or AMO index. The long-term trend in hurricane intensity generally follows the Sahel rainfall or SAL activity. Since these high correlations of the peak intensity with the SAL activity and Sahel rainfall are derived from the independent datasets, uncertainties involved in datasets may not qualitatively affect the results of this study. In summary, the decreasing trend in hurricane intensity by the mid-1980s was associated with the enhancing SAL activity (drying relative humidity and increasing vertical shear) and the severe drought in the Sahel, while the recent moderate increasing trend in hurricane intensity is accompanied with the weakening SAL activity (wetting relative humidity, decreasing vertical shear and dust load) and the ameliorating Sahel drought.”
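
As a purely illustrative aside, statements like “the AMO index has a correlation coefficient of 0.46 with the peak intensity” rest on a simple diagnostic that can be sketched as follows; the series below are random placeholders, not the paper’s 58-year datasets.

```python
# Sketch of the correlation diagnostic behind statements such as "the AMO
# index has a correlation coefficient of 0.46 with the peak intensity".
# The series are synthetic placeholders, not the paper's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
amo_index = rng.normal(size=58)                    # 58 years, as in the paper
peak_intensity = 0.5 * amo_index + rng.normal(size=58)

r, p = stats.pearsonr(amo_index, peak_intensity)
print(f"r = {r:.2f}, p = {p:.3f}")  # significance depends on series length
```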

As has been emphasized in the literature (e.g. see) and in the 2005 National Research Council report, the climate system, in terms of its response to human and natural climate forcings (including land degradation in the Sahel from overgrazing), is much more complex than indicated by the 2007 IPCC Report.

Filed under Climate Change Forcings & Feedbacks

Comment In February 2007 issue Of The Bulletin Of The American Meteorological Society On Attribution Based On Model and Observation Intercomparisons

There was a very interesting Comment published in the February 2007 issue of the Bulletin of the American Meteorological Society. It is entitled

de Laat, A. T. J., 2007: Mixing Politics and Science in Testing the Hypothesis That Greenhouse Warming Is Causing a Global Increase in Hurricane Intensity. Bulletin of the American Meteorological Society, 88(2), 251-252, doi:10.1175/BAMS-88-2-251.

The paper discusses recent claims attributing hurricane intensity to global warming, but A. T. J. de Laat’s discussion is applicable to conclusions based on any result of the multi-decadal global model predictions reported in the 2007 IPCC Summary for Policymakers.

Excerpts from the paper read,

“The line of reasoning here is that natural factors alone cannot explain the observed twentieth-century temperature variations, while including greenhouse gases does. The logical fallacy is the “fallacy of false dilemma/either–or fallacy,” that is, the number of alternatives are (un)intentionally restricted, thereby omitting relevant alternatives from consideration (Haskins 2006).

“That global twentieth-century temperature variations can be explained by using a simple model merely points to a certain consistency between this model or climate model simulations and observations. Furthermore, the fact that the late-twentieth-century warming is unexplained by two factors (solar variations and aerosols) and can be explained by including a third factor (greenhouse gases) does not prove that greenhouse gases are the cause; it just points to a missing process in this model. In fact, this whole line of reasoning does not prove the existence of global warming; it is merely consistent with it. As an example, it is still debated whether or not land surface temperature changes during the twentieth century are affected by anthropogenic non–greenhouse gas processes and whether or not these processes affect surface temperatures on a global scale (Christy et al. 2006; Kalnay et al. 2006; de Laat and Maurellis 2006).

“There is a risk associated with this line of reasoning in that it suggests that understanding temperature variations of the climate system as a whole is very simple and completely understood, all one has to consider is the amount of incoming and outgoing radiation by changes in atmospheric absorbers and reflectors. Notwithstanding the fact that temperature is not a conserved quantity in any physical system, and thus is not the best metric to study energy within the climate system, it also suggests that other processes and nonlinear behavior of the climate system are either nonexistent or do not affect (decadal and global) temperature variations. Presenting climate science this way oversimplifies the complexity of the climate system and possibly overstates our current understanding. Furthermore, this simple model is of limited use to climate scientists other than to very qualitatively explain—not understand—climate variability. By suggesting that climate science is simple and straightforward, the model surely does not help bridge the gap between climate science and the general public.”

The entire Comment is worth reading.

Filed under Climate Models, Climate Science Misconceptions

Boulder Colorado August 27-29 2007 Meeting On “Detecting the Atmospheric Response to the Changing Face of the Earth: A Focus on Human-Caused Regional Climate Forcings, Land-Cover/Land-Use Change, and Data Monitoring”

The registration is now open for the meeting

Detecting the Atmospheric Response to the Changing Face of the Earth: A Focus on Human-Caused Regional Climate Forcings, Land-Cover/Land-Use Change, and Data Monitoring, 27-29 August 2007, Boulder, CO.

The meeting topics have been published previously on Climate Science (see), and the draft agenda is given here.

If you are working in the topic area of this meeting, please attend!

Filed under Climate Science Meetings

Impact of Desert Dust Radiative Forcing on Sahel Precipitation – A New Research Paper

A valuable new paper has appeared on the role of dust within the climate system. A significant portion of this dust results from human mismanagement of semi-arid landscapes (e.g. see and see). The paper is

Yoshioka, M., N.M. Mahowald, A.J. Conley, W.D. Collins, D.W. Fillmore, C.S. Zender, and D.B. Coleman, 2007: Impact of Desert Dust Radiative Forcing on Sahel Precipitation: Relative Importance of Dust Compared to Sea Surface Temperature Variations, Vegetation Changes, and Greenhouse Gas Warming. J. Climate, 20, 1445–1467.

The abstract reads,

“The role of direct radiative forcing of desert dust aerosol in the change from wet to dry climate observed in the African Sahel region in the last half of the twentieth century is investigated using simulations with an atmospheric general circulation model. The model simulations are conducted either forced by the observed sea surface temperature (SST) or coupled with the interactive SST using the Slab Ocean Model (SOM). The simulation model uses dust that is less absorbing in the solar wavelengths and has larger particle sizes than other simulation studies. As a result, simulations show less shortwave absorption within the atmosphere and larger longwave radiative forcing by dust. Simulations using SOM show reduced precipitation over the intertropical convergence zone (ITCZ) including the Sahel region and increased precipitation south of the ITCZ when dust radiative forcing is included. In SST-forced simulations, on the other hand, significant precipitation changes are restricted to over North Africa. These changes are considered to be due to the cooling of global tropical oceans as well as the cooling of the troposphere over North Africa in response to dust radiative forcing. The model simulation of dust cannot capture the magnitude of the observed increase of desert dust when allowing dust to respond to changes in simulated climate, even including changes in vegetation, similar to previous studies. If the model is forced to capture observed changes in desert dust, the direct radiative forcing by the increase of North African dust can explain up to 30% of the observed precipitation reduction in the Sahel between wet and dry periods. A large part of this effect comes through atmospheric forcing of dust, and dust forcing on the Atlantic Ocean SST appears to have a smaller impact. The changes in the North and South Atlantic SSTs may account for up to 50% of the Sahel precipitation reduction. Vegetation loss in the Sahel region may explain about 10% of the observed drying, but this effect is statistically insignificant because of the small number of years in the simulation. Greenhouse gas warming seems to have an impact to increase Sahel precipitation that is opposite to the observed change. Although the estimated values of impacts are likely to be model dependent, analyses suggest the importance of direct radiative forcing of dust and feedbacks in modulating Sahel precipitation.”

Excerpts from the paper read,

“Our model simulations suggest that radiative forcing of dust acts to reduce the global average precipitation.”

and

“These results are sensitive to the models and methodologies that are used. However, the results are important because they show that the direct radiative forcing of dust has played a role in the observed
droughts in the Sahel comparable to the roles played by the sea surface temperatures and vegetation, which have been studied extensively. These results also provide a mechanism whereby drought in the Sahel region can cause increased dust, which then feedbacks to cause a further precipitation reduction.”

This paper further supports the perspective emphasized in the 2005 National Research Council report that we need to move beyond the radiative forcing of CO2 as the dominant human effect on the climate system.

National Research Council, 2005: Radiative forcing of climate change: Expanding the concept and addressing uncertainties. Committee on Radiative Forcing Effects on Climate Change, Climate Research Committee, Board on Atmospheric Sciences and Climate, Division on Earth and Life Studies, The National Academies Press, Washington, D.C., 208 pp.

Filed under Climate Change Forcings & Feedbacks

Perspective Of Professor William R. Cotton On “Global Warming”

Recently, Professor William R. Cotton of the Department of Atmospheric Science at Colorado State University was asked the question below,

“I heard you are speaking out about global warming. Do you have a presentation you use that could help me understand your reasons?”

Bill has okayed my posting of his answer which is given below.

“I am not exactly speaking out against global warming. But, I don’t think the science is as solid as many lead us to believe. Don’t get me wrong, the science of how greenhouse gases directly affect climate is strong. But where it gets messy is all the feedbacks in the system that the theory relies upon, and most particularly the role of clouds. Also, when it comes to future scenarios (predictions?) decades or longer out, I point out there are many other factors affecting climate, and some of these can be quite large but often are not predictable. Many of these are related to aerosols, either natural (volcanoes) or manmade. Then there is also the wildcard with respect to solar variability impacting climate. I think there is something going on there that we just don’t understand. I try to keep up on papers in that area and so far am not convinced about their physical arguments, especially the cosmic ray/cloud cover arguments. But just because we can’t explain it doesn’t mean something important isn’t happening.

I have attached a copy of the recent talk I gave at the University of Tel-Aviv. I didn’t put it on a slide but I also point out that this position is purely from my personal scientific evaluation. My book on “Human Impacts on Weather and Climate,” 2nd Edition by Cotton and Pielke published by Cambridge is out by the way.

I also point out that I am very “green” as I ride a bicycle to and from work 12 miles a day, I have a Toyota Prius, fly a sailplane, sail boats and paddle kayaks, have an electric lawnmower and weedwacker, fluorescent lights throughout the house, and support reducing pollution of all sorts.

I put the figure showing the correlation between greenhouse gas emissions and population to show that the bottom line is we are overloading our planet and that as long as we keep putting more and more people on it we will be increasing the likelihood of serious impacts on water resources, air quality, and weather and climate. However, as a scientist I have to draw the line between being “objective” and being an advocate of policies.”

Professor Cotton’s PowerPoint talk, entitled “Global Climate Change: A Skeptic’s Perspective”, which provides more depth on his research and perspective on climate change, is available.

Filed under Guest Weblogs

A Short Summary Of Why Skillful Climate Prediction Is Much More Difficult Than Skillful Weather Prediction

Climate Science has already weblogged on the claim in the 2007 IPCC WG1 report that,

“Projecting changes in climate due to changes in greenhouse gases 50 years from now is a very different and much more easily solved problem than forecasting weather patterns just weeks from now. To put it another way, long-term variations brought about by changes in the composition of the atmosphere are much more predictable than individual weather events.” [from page 105]

This weblog provides a short summary of why such a claim is absurd.

First, all climate and weather models include two components: a dynamic core (which involves advection, the pressure gradient force, and the gravitational acceleration) and parameterized or prescribed physical, chemical, and biological processes. Only the dynamic core is basic physics. All parameterizations are engineering code, which means they include tunable components.
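
Schematically (this is not any real model’s code, just a sketch of the structure described above), each time step combines the dynamic core with parameterizations, and all of the tunable coefficients live in the parameterizations:

```python
# Schematic sketch of the two-component model structure: a dynamic core
# (basic physics) plus parameterizations carrying tunable coefficients.
# Not any real model's code.
from dataclasses import dataclass

@dataclass
class State:
    u: float  # wind
    T: float  # temperature
    p: float  # pressure
    q: float  # moisture

def dynamical_core(s: State, dt: float) -> State:
    """Basic physics: advection, pressure gradient force, gravity (stub)."""
    return State(s.u, s.T, s.p, s.q)  # placeholder update

def parameterizations(s: State, dt: float, tunables: dict) -> State:
    """Engineering code: radiation, clouds, turbulence, surface fluxes."""
    return State(s.u, s.T + tunables["rad_coeff"] * dt, s.p, s.q)

def step(s: State, dt: float, tunables: dict) -> State:
    return parameterizations(dynamical_core(s, dt), dt, tunables)

state = State(u=5.0, T=288.0, p=1000.0, q=0.008)
state = step(state, dt=60.0, tunables={"rad_coeff": 1e-5})  # a tunable knob
```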

Weather prediction models parameterize long- and short-wave radiative flux divergence, stable clouds and precipitation, deep cumulus clouds, turbulence, and air-sea and air-land fluxes. The state variables in a weather model are the three components of velocity, temperature, pressure, density of air, and the three phases of water (and sometimes other gaseous and aerosol components). A detailed discussion of this type of model is given, for example, in

Pielke, R.A., Sr., 2002: Mesoscale meteorological modeling. 2nd Edition, Academic Press, San Diego, CA, 676 pp. [Table of Contents]

The state variables are initialized from real-world observations, such as radiosonde and satellite data. If the weather model is a regional model, it also obtains information through lateral boundary conditions. The dynamic core of the weather model, therefore, is constrained by real-world initial conditions and lateral boundary conditions. Most of the surface boundary conditions are prescribed, including, for instance, sea surface temperature, sea ice coverage, vegetation, and snow cover. Only certain quantities, such as soil moisture and land surface temperature, may be permitted to change in response to the land-air fluxes. When the initial conditions of the weather model are “forgotten”, the parameterizations must skillfully predict the evolution of the state variables from that time forward, which is the reason that weather prediction accuracy degrades and becomes of no value after a certain time period (e.g. see).
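
The loss of predictability once the initial conditions are “forgotten” is the classic Lorenz result, and it can be illustrated with a toy twin experiment: two integrations of the Lorenz (1963) equations that differ by a tiny initial perturbation diverge completely after a finite time. This stands in for the behavior of a full weather model, not for any particular one.

```python
# Twin experiment with the Lorenz (1963) system: two runs differing by a
# tiny initial perturbation diverge, illustrating why forecasts lose skill
# once the initial conditions are "forgotten". A toy, not a weather model.
import math

def lorenz_step(x, y, z, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations."""
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

truth = (1.0, 1.0, 1.0)
forecast = (1.0 + 1e-8, 1.0, 1.0)   # perturbed by one part in 10^8

for n in range(3000):
    truth, forecast = lorenz_step(*truth), lorenz_step(*forecast)
    if n % 500 == 0:
        err = math.sqrt(sum((a - b) ** 2 for a, b in zip(truth, forecast)))
        print(n, err)               # the error grows by orders of magnitude
```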

A climate model, in contrast, must represent more processes than a weather model (such as the biogeochemistry of vegetation on land and plants in the ocean; sea ice dynamics; aerosol processes; ocean circulation; ground freezing and thawing; snow accumulation, melt, and sublimation; etc. – see). Some of these climate processes (which involve physics, biology, and chemistry) are modeled, as in a weather model, by a dynamical core and by parameterizations. These include sea ice dynamics and ocean circulation, which both have advection, pressure gradient, and gravitational parts, as well as parameterizations of other effects (such as turbulence and the phase changes of water). Other climate processes, such as biogeochemistry and biogeography, have no dynamical core and are completely parameterized models.

Thus, a climate model involves more parameterizations, with their tunable components, than a weather model does, as well as additional state variables (such as salinity, ice, snow, vegetation type and its root depth, etc.), all of which require initial conditions.

A climate model also lacks the real-world constraint that observed initial conditions (and, for a regional model, lateral boundary conditions) supply to a weather model; it is this real-world data that constrains a weather model’s predictions. Instead, the state variables required for the dynamic core of each component of the climate model (i.e., the state variables for the atmosphere, land, ocean, and continental ice) must be generated by the parameterizations!

The claim by the IPCC that an imposed climate forcing (such as added atmospheric concentrations of CO2) can work through the parameterizations involved in the atmospheric, land, ocean and continental ice sheet components of the climate model to create skillful global and regional forecasts decades from now is a remarkable statement. That the IPCC states that this is a “much more easily solved problem than forecasting weather patterns just weeks from now” is clearly a ridiculous scientific claim. As compared with a weather model, with a multi-decadal climate model prediction there are more state variables, more parameterizations, and a lack of constraint from real-world observed values of the state variables.

Filed under Climate Science Misconceptions

Guest Weblog by Barry H. Lynn, Richard Healy, and Len Druyan

Introduction by Roger A. Pielke Sr.

Climate Science has had a very productive e-mail exchange of perspectives in response to the weblog of May 14 2007. The authors of the article referred to in the weblog have graciously agreed to write a guest weblog which is given below. For background on the authors, a brief biographical summary of each scientist is:

Dr. Barry Lynn is a research scientist at the Hebrew University of Jerusalem. The research on climate change was conducted while he was an associate research scientist at Columbia University and Carnegie Mellon University. Dr. Lynn’s interests include studying the impacts of “greenhouse” gases on climate and the effect of aerosols on precipitation. Many of his papers have been published by the AMS and JGR. He is also the C.E.O. of Weather It Is LTD (www.weather-it-is.com), a company that produces weather forecasts and climatological information, with an emphasis on deriving new economic applications from such products.

Rick Healy is a systems analyst at the National Ocean Sciences Accelerator Mass Spectrometry Facility (NOSAMS) at the Woods Hole Oceanographic Institution, developing computational methods for high-precision radiocarbon analysis. He also collaborates with the NASA/Goddard Institute for Space Studies (GISS) climate-modeling group in New York. His interests include regional climate impacts, using integrated regional climate models to study climate change issues. He also collaborates with scientists at UMass Amherst in paleoclimate tracer studies using the GISS d18O tracer model. http://nosams.whoi.edu/research/staff_healy.html

Dr. Druyan is a Senior Research Scientist and the Director of the Center for Climate Systems Research which is a unit of the Earth Institute at Columbia University. Alternatively referred to as “GISS at Columbia”, CCSR is the administrative umbrella for many Columbia University research scientists based at the Goddard Institute for Space Studies. Dr. Druyan’s research interests focus on climate variability in tropical latitudes. His published work relates to a range of themes, including the Indian and West African summer monsoons, Sahel drought, African wave disturbances, climate change impacts on tropical cyclones, El Niño and other sea-surface temperature anomaly impacts on regional climates and seasonal climate prediction for Brazil. He has conducted climate simulation studies using several versions of the GISS GCM and more recently using a regional climate model (RM3) that represents variables at higher spatial resolution. Dr. Druyan’s research group is using the RM3 for collaborative research in the context of the African Monsoon Multidisciplinary Analysis (AMMA) and the West African Monsoon Modeling and Evaluation (WAMME).

The Guest Weblog follows:

This is a brief response to the posted critiques of our recent paper in the Journal of Climate (Lynn et al., see). We have since had a constructive dialogue with Dr. Pielke by email, and we appreciate his giving us this opportunity for clarification on his blog. We hope to correct the mistaken impression that we were in any way looking to sensationalize dangers from global warming. Contrary to the impression promoted on the blog, we were diligent in our research. In addition, our paper was peer reviewed, and it underwent revisions consistent with the suggestions of three (presumably) professional reviewing scientists.

The study was based on both observed data and model simulations. A significant result of the observational analysis was finding the strong inverse relationship between eastern U.S. precipitation frequency and maximum surface temperatures (see figure). Dr. Pielke’s main criticism seems to focus on the poor performance of the AOGCM that provided data for driving our regional model. The AOGCM admittedly has deficiencies, as do all models. We believe that this AOGCM has skill (or lack of skill) comparable to the other GCMs that formed the basis of the IPCC Fourth Assessment. The AOGCM is a tool, albeit an imperfect one, for projecting the broad-scale climate consequences of increasing concentrations of greenhouse gases. It accounts for the distribution of oceans and continents and all of the major interactions between the different Earth systems affecting the climate. However, it has serious flaws regarding the simulation of regional climate.

Dr. Pielke maintains that dynamic downscaling by even the most skillful regional model cannot improve the simulation of a flawed climate simulation by a GCM. However, he concedes that the regional model solution is less sensitive to the driving GCM when the nested domain is large, as in our case. We also counter that (in our double-nested experiments) since the outer regional model domain extended from the Pacific to the Atlantic Oceans, no GCM data generated over the US was ever used to drive the regional model. In addition, we claim that crucial aspects of our climate simulations were indeed improved – by the alternative moist convection schemes operating at high horizontal resolution in the regional model. At least one version of our regional model simulated summertime precipitation frequencies that were much more realistic than the GCM.

Dr. Pielke correctly points out that the regional model cannot correct for GCM errors in the timing or trajectories of synoptic systems. We reply that radiation feedbacks that depend on the frequency of precipitation, mean ground wetness, and frequency of cloudiness are far more important in determining rates of warming over the next 80 years. This downscaling produced a greater warming trend over the eastern US into the 2080s than the GCM because it did not make the mistake of “predicting” rain on 65% of the summer days (see figure). Was this result adversely affected by the GCM data streaming in at the boundaries over the Pacific and Atlantic? The same GCM boundary conditions were used to drive another version of the regional model with a convection scheme that made the same mistake (as the GCM) of predicting rain too frequently. This version produced a more gradual warming trend, just like the GCM. A third version that underestimated afternoon precipitation predicted the most severe warming trend. Based on all of this evidence, we are convinced that the radiation feedbacks created by the precipitation regime control the warming rates, and that our paper’s “apocalyptic” prediction of 5°C warming over the eastern US between the 1990s and 2080s is the most realistic prediction – a correction, if you will, to the underestimate of IPCC models that rain too frequently. See http://ams.allenpress.com/perlserv/?request=get-document&doi=10.1175%2FJCLI3672.1 or http://ams.allenpress.com/perlserv/?request=get-abstract&doi=10.1175%2FJCLI3672.1
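
The precipitation-frequency diagnostic at the heart of this argument is straightforward to state: the percentage of rainy days in each JJA season, compared against the seasonal mean-maximum-temperature anomaly. A minimal sketch follows; the arrays are random placeholders, not the station data used in the paper.

```python
# Sketch of the JJA precipitation-frequency diagnostic: percent of rainy
# days per season versus the seasonal Tmax anomaly. Arrays are synthetic
# placeholders, not the observations analyzed in the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_years, n_days = 28, 92                                  # 1977-2004, JJA
daily_rain = rng.gamma(0.3, 4.0, size=(n_years, n_days))  # mm/day
tmax_anom = rng.normal(size=n_years)                      # deg C anomalies

rainy_pct = (daily_rain > 1.0).mean(axis=1) * 100  # % of days above 1 mm
r, p = stats.pearsonr(rainy_pct, tmax_anom)
print(f"r = {r:.2f}")  # the paper reports a strong inverse relationship
```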

Figure: Relationship between the JJA anomalies of mean maximum T for the eastern US vs. the percent of rainy days in the corresponding seasons, 1977-2004.

Figure: Precipitation frequency (percent of rainy days during JJA) for JJA 1993-97 and JJA 2083-87 over the eastern US, for observations and model versions. “Scaled” observations refers to frequencies within 4° x 5° AOGCM grid elements.

Filed under Guest Weblogs

More Presentation Of Climate Predictions as Scientific Fact

There is a new Science paper

Seager, R., M. Ting, I. Held, Y. Kushnir, J. Lu, G. Vecchi, H.-P. Huang, N. Harnik, A. Leetmaa, N.-C. Lau, C. Li, J. Velez, and N. Naik, 2007: Model Projections of an Imminent Transition to a More Arid Climate in Southwestern North America. Science, published online 9 April 2007 (in Science Express Reports), doi:10.1126/science.1139601. [Thanks to Willie Soon for alerting us to it.]

The abstract reads

“How anthropogenic climate change will impact hydroclimate in the arid regions of Southwestern North America has implications for the allocation of water resources and the course of regional development. Here we show that there is a broad consensus amongst climate models that this region will dry significantly in the 21st century and that the transition to a more arid climate should already be underway. If these models are correct, the levels of aridity of the recent multiyear drought, or the Dust Bowl and 1950s droughts, will, within the coming years to decades, become the new climatology of the American Southwest.”

An excerpt from the paper reads,

“In the multi-model ensemble mean there is a transition to a sustained drier climate that begins in the late 20th and early 21st centuries”

and

“The drying of subtropical land areas that, according to the models is imminent or already underway, is unlike any climate state we have seen in the instrumental record. It is also distinct from the multidecadal megadroughts that afflicted the American Southwest during Medieval times …which have also been attributed to changes in tropical SSTs…The most severe future droughts will still occur during persistent La Niña events but they will be worse than any since the Medieval period because the La Niña conditions will be perturbing a base state that is drier than any experienced recently.”

This result appears to contradict an earlier study,

Trenberth, K. E., and T. J. Hoar, 1997: El Niño and climate change. Geophys. Res. Lett., 24(23), 3057-3060, doi:10.1029/97GL03092.

Their abstract reads,

“A comprehensive statistical analysis of how an index of the Southern Oscillation changed from 1882 to 1995 was given by Trenberth and Hoar [1996], with a focus on the unusual nature of the 1990–1995 El Niño-Southern Oscillation (ENSO) warm event in the context of an observed trend for more El Niño and fewer La Niña events after the late 1970s. The conclusions of that study have been challenged by two studies which deal with only the part of our results pertaining to the length of runs of anomalies of one sign in the Southern Oscillation Index. They therefore neglect the essence of Trenberth and Hoar, which focussed on the magnitude of anomalies for certain periods and showed that anomalies during both the post-1976 and 1990–mid-1995 periods were highly unlikely given the previous record. With updated data through mid 1997, we have performed additional tests using a regression model with autoregressive-moving average (ARMA) errors that simultaneously estimates the appropriate ARMA model to fit the data and assesses the statistical significance of how unusual the two periods of interest are. The mean SOI for the post-1976 period is statistically different from the overall mean at <0.05% and so is the 1990–mid-1995 period. The recent evolution of ENSO, with a major new El Niño event underway in 1997, reinforces the evidence that the tendency for more El Niño and fewer La Niña events since the late 1970s is highly unusual and very unlikely to be accounted for solely by natural variability.”
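
The statistical machinery described in this abstract (a regression with ARMA errors, used to test whether the mean SOI of a given period differs from the long-term mean) can be sketched as follows. This is a generic illustration on synthetic data, not Trenberth and Hoar’s exact model.

```python
# Generic sketch of a regression with ARMA errors testing whether a
# period's mean differs from the long-term mean. Synthetic data; not
# Trenberth and Hoar's exact model.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
n = (1995 - 1882 + 1) * 12                    # monthly values, 1882-1995
soi = rng.normal(size=n)                      # placeholder SOI series
post1976 = (np.arange(n) >= (1976 - 1882) * 12).astype(float)

# ARMA(1,1) errors; the exog coefficient estimates the post-1976 mean shift.
res = ARIMA(soi, exog=post1976, order=(1, 0, 1)).fit()
print(res.params)    # includes the estimated shift for the dummy regressor
print(res.pvalues)   # its p-value gauges how unusual the period is
```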

The 2007 Science paper is yet another example of the overselling of a process study, as we discuss in our book

Cotton, W.R. and R.A. Pielke, 2007: Human impacts on weather and climate, Cambridge University Press, 330 pp.

The Seager et al. 2007 paper is clearly an example of the publication, as a scientific contribution, of a prediction whose accuracy has yet to be tested. At least, with their claim of almost perpetual drought in the southwestern USA, we can track the outcome over the next few years to either refute or support their conclusions.

Filed under Climate Change Metrics, Climate Models