Monthly Archives: July 2005

More On The Arctic; Have The Coldest Temperatures In The Mid-Troposphere Warmed?

In the July 15th blog, the issue of Arctic sea ice melting was discussed. The issue of tropospheric temperature trends has also been investigated. In two papers (see Chase et al., 2002: A proposed mechanism for the regulation of minimum midtropospheric temperatures in the Arctic. J. Geophys. Res., 107(D14), doi:10.1029/2001JD001425, and Tsukernik et al., 2004: On the regulation of minimum mid-tropospheric temperatures in the Arctic. Geophys. Res. Lett., 31, L06112, doi:10.1029/2003GL018831), we investigated whether the areal coverage of the coldest temperatures at 500 mb (in the mid-troposphere) in the Northern Hemisphere winter decreased between 1950 and 1998. It did not, as shown in Figure 1 of the first of these two papers.
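
As a hypothetical illustration of this metric (the grid and temperature field below are synthetic; the papers themselves use gridded reanalysis data), the areal coverage of the coldest 500 mb temperatures can be computed as a cosine-latitude-weighted fraction of the hemisphere:

```python
import numpy as np

# Synthetic 2.5-degree Northern Hemisphere grid.
lats = np.deg2rad(np.arange(2.5, 90.0, 2.5))
lons = np.deg2rad(np.arange(0.0, 360.0, 2.5))
LAT, LON = np.meshgrid(lats, lons, indexing="ij")

# Illustrative 500 mb temperature field (deg C): colder toward the pole,
# with a weak zonal wave. Real data would come from a reanalysis.
t500 = -5.0 - 45.0 * np.sin(LAT) + 2.0 * np.cos(3.0 * LON)

weights = np.cos(LAT)              # grid-cell area scales with cos(latitude)
cold = t500 < -40.0                # mask of the coldest air
cold_fraction = weights[cold].sum() / weights.sum()
print(f"fraction of NH area colder than -40 C at 500 mb: {cold_fraction:.3f}")
```

Tracking this fraction winter by winter gives the kind of time series whose trend the papers examined.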

The motivation for this study was a discussion in front of 500 mb weather maps between Professor Ben Herman and me during a visit to the University of Arizona. We had both noted that the coldest temperatures (-40°C to -45°C) were typically reached in November, rather than continuing to fall through the rest of the winter. We found that there is a feedback between ocean sea surface temperatures and the temperatures at 500 mb. Even though the air at 500 mb can become colder for short periods over large continental areas such as Siberia, it is advected often enough over ice-free (but near-freezing) ocean that cumulus convective mixing produces a vertical temperature lapse rate that is nearly moist adiabatic. This results in 500 mb temperatures close to -45°C.
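
A back-of-the-envelope check of this mechanism is sketched below (a minimal illustration, not code from the papers, using the standard pseudoadiabatic lapse-rate approximation and Bolton's saturation vapor pressure formula): lifting a saturated parcel from a near-freezing ocean surface at 1000 mb to 500 mb lands close to the observed temperature floor.

```python
import math

G = 9.81      # gravity, m s^-2
RD = 287.04   # gas constant for dry air, J kg^-1 K^-1
CP = 1004.0   # specific heat of dry air at constant pressure, J kg^-1 K^-1
LV = 2.5e6    # latent heat of vaporization, J kg^-1
EPS = 0.622   # ratio of gas constants, water vapor to dry air

def saturation_mixing_ratio(p_pa, t_k):
    """Saturation mixing ratio (kg/kg) using Bolton's e_s formula."""
    t_c = t_k - 273.15
    e_s = 611.2 * math.exp(17.67 * t_c / (t_c + 243.5))  # Pa
    return EPS * e_s / (p_pa - e_s)

def moist_adiabatic_dt_dp(p_pa, t_k):
    """dT/dp (K/Pa) along a saturated adiabat (pseudoadiabatic approximation)."""
    rs = saturation_mixing_ratio(p_pa, t_k)
    gamma_m = G * (1.0 + LV * rs / (RD * t_k)) / (CP + LV**2 * rs * EPS / (RD * t_k**2))
    rho = p_pa / (RD * t_k)          # ideal gas density, vapor neglected
    return gamma_m / (rho * G)       # hydrostatic conversion from K/m to K/Pa

p, t = 100000.0, 273.15              # start at 1000 mb, 0 C (near-freezing ocean)
dp = -10.0                           # integrate upward in 0.1 mb steps
while p > 50000.0:
    t += moist_adiabatic_dt_dp(p, t) * dp
    p += dp

print(f"500 mb parcel temperature: {t - 273.15:.1f} C")  # roughly -42 C
```

The parcel arrives at 500 mb in the neighborhood of -40°C to -45°C, consistent with the self-regulation described above.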

This self-regulation of the climate system indicates that the Arctic troposphere, in terms of the areal average of the coldest mid-tropospheric temperatures, is more resilient to change than the Arctic Climate Impact Assessment (ACIA) report expressed. This is another set of peer-reviewed papers that was ignored in the ACIA study.


Filed under Climate Change Metrics

Is Arctic Sea Ice Melting?

In the 2005 Arctic Climate Impact Assessment (ACIA) report, it was stated that:

“Over the past 30 years, the annual average sea-ice extent has decreased by about 8%… and the melting trend is accelerating”, and that “Sea-ice extent in summer has declined more dramatically than the annual average, with a loss of 15-20% of the late-summer ice coverage.”

They do caveat these statements by stating that “There is also significant variability from year to year.”

My AT786 class examined this issue. In 2004, I published a paper with Glen Liston, Bill Chapman, and Dave Robinson which concluded for the period 1973-2002 that “The sea-ice decline from 1973 is about 6%, while from 1980 the decrease to 2002 is about 3%… the 1980-2002 observed decrease is less than the simulated decrease of actual sea-ice areal coverage reported in Global Warming and Northern Hemisphere Sea Ice Extent by Vinnikov et al. 1999.” This paper was a follow-up to a 2000 paper I wrote with Glen Liston and Alan Robock. An immediate question is why these two papers were not cited in the ACIA report.

Our class then examined the current state of Arctic sea ice. What is clearly evident in the data as of June-July 2005 is that Arctic sea-ice coverage is close to its long-term mean for this time of the year. After the coverage was well below average this past winter, the spring melt was slower than average. As discussed in the two papers cited above, in terms of a warming feedback to the atmosphere through the radiative effect of sea-ice coverage (the ice-albedo effect), it is the spring and summer areal coverage that is most critical (since in winter, with the long nights, there is little if any sunlight to reflect back into space).

Long-term trends in multi-year ice and in the thickness of Arctic sea ice have also been used to claim the sea ice is melting. Indeed, this may be the case, although these data are more difficult to quantify in terms of long-term variability than areal coverage. Areal coverage, however, is the component of sea ice with the most direct impact on the climate system through the ice-albedo feedback effect. As seen in these graphs, there is no clear trend since 1997, and the melting trend is not accelerating. Moreover, a linear trend poorly captures the temporal behavior of this complex component of the climate system: if the sea-ice coverage returns to an earlier level, the clock is reset with respect to a linear trend, as the short sketch below illustrates.
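
The point about linear trends can be made with a toy example (the numbers below are synthetic, not observed sea-ice data): a series that declines and then recovers yields very different fitted trends depending on the interval chosen, and a single full-record line hides the reversal.

```python
import numpy as np

# Synthetic anomaly series: steady decline through 1996, recovery from 1997.
years = np.arange(1979, 2006)
anomaly = np.where(years < 1997,
                   -0.05 * (years - 1979),          # decline phase
                   -0.90 + 0.10 * (years - 1997))   # recovery phase

full_slope = np.polyfit(years, anomaly, 1)[0]
recent = years >= 1997
recent_slope = np.polyfit(years[recent], anomaly[recent], 1)[0]

print(f"full-record linear trend: {full_slope:+.4f} per year")   # still negative
print(f"post-1997 linear trend:   {recent_slope:+.4f} per year") # positive
```

A line fitted over the full record still reports a decline even though the series has been recovering for nearly a decade.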

Our conclusion is that the Arctic Systems Science report, which received so much media attention, significantly overstated the actual trends of Arctic sea-ice coverage.


Filed under Climate Change Metrics

What Are Climate Models? What Do They Do?

Climate models consist of fundamental concepts and parameterizations of the physical, biological, and chemical components of the climate system, expressed as mathematical formulations and averaged over grid volumes. These formulations are then converted to a programming language so that they can be solved on a computer and integrated forward in discrete time steps over the chosen model domain. A global climate model needs to include component models to represent the oceans, atmosphere, land, and continental ice, and the interfacial fluxes between them. Weather models are clearly a subset of climate models (a discussion of mesoscale weather models, in which the basic framework of all scales of weather models is presented, is given in Pielke, R.A., Sr., 2002: Mesoscale meteorological modeling. 2nd Edition, Academic Press, San Diego, CA, 676 pp). On the global scale, it is very important to distinguish atmosphere-ocean general circulation models (AOGCMs) from global climate models. Global climate models need to include all important components of the climate system, as discussed in a 2005 National Research Council report, while AOGCMs up to the present have not.
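
As a minimal sketch of this structure (a zero-dimensional energy balance model with illustrative constants, vastly simpler than any model discussed here), the sequence of physical formulation, discretization, and forward integration in discrete time steps looks like this:

```python
S0 = 1361.0      # solar constant, W m^-2
ALBEDO = 0.30    # planetary albedo
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
EPSILON = 0.61   # effective emissivity (a crude greenhouse parameterization)
C = 4.0e8        # heat capacity of a ~100 m ocean mixed layer, J m^-2 K^-1

def dT_dt(T):
    """Net top-of-atmosphere energy imbalance divided by heat capacity (K/s)."""
    absorbed = S0 / 4.0 * (1.0 - ALBEDO)
    emitted = EPSILON * SIGMA * T**4
    return (absorbed - emitted) / C

T = 255.0                      # arbitrary initial global-mean temperature, K
dt = 86400.0                   # one-day time step, s
for step in range(365 * 50):   # integrate forward 50 model years
    T += dT_dt(T) * dt         # explicit (forward Euler) time step

print(f"equilibrated global-mean temperature: {T:.1f} K")  # about 288 K
```

A real global climate model performs exactly this kind of forward time stepping, but for millions of grid volumes and for the coupled ocean, atmosphere, land, and ice components.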

There are three types of applications of these models: process studies, diagnosis, and forecasting.

Process studies: The application of climate models to improve our understanding of how the climate system works is a valuable use of these tools. In an essay, I used the term sensitivity study to characterize a process study. In a sensitivity study, a subset of the forcings and/or feedbacks of the climate system is perturbed to examine the response. The model of the climate system may be incomplete, in that it need not include every important feedback and forcing.

Diagnosis: The application of climate models in which observed data are assimilated into the model to produce an observational analysis that is consistent with our best understanding of the climate system, as represented by the model's fundamental concepts and parameterizations. Although not yet applied to climate models, this procedure is used for weather reanalyses (see the NCEP/NCAR 40-Year Reanalysis Project).
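
The essence of the assimilation step can be shown with a scalar toy example (hypothetical numbers; a real reanalysis solves the analogous problem in very high dimensions): the analysis blends the model's first guess with an observation, weighted by their assumed error variances.

```python
background, obs = 272.0, 274.0   # model first guess and observation, K
var_b, var_o = 1.0, 0.5          # assumed error variances, K^2

gain = var_b / (var_b + var_o)   # optimal weight on the observation
analysis = background + gain * (obs - background)

print(f"analysis value: {analysis:.2f} K")  # 273.33 K, pulled toward the obs
```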

Forecasting: The application of climate models to predict the future state of the climate system. Forecasts can be made from a single realization, or from an ensemble of forecasts produced by slightly perturbing the initial conditions and/or other aspects of the model. Mike MacCracken, in his very informative response to my Climatic Change essay, seeks to differentiate between a prediction and a projection.
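
Why ensembles are needed can be illustrated with the Lorenz (1963) equations, a standard toy system for chaotic flow (not a climate model; the setup below is purely illustrative): initial-condition perturbations far smaller than any observational error grow until the ensemble members diverge completely.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) system."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

rng = np.random.default_rng(0)
base = np.array([1.0, 1.0, 20.0])
ensemble = [base + rng.normal(scale=1e-6, size=3) for _ in range(10)]

for _ in range(2500):                      # 25 nondimensional time units
    ensemble = [lorenz_step(m) for m in ensemble]

spread = np.std([m[0] for m in ensemble])
print(f"ensemble spread in x: {spread:.2f}")  # grows to the size of the attractor
```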

With these definitions, the question is where the IPCC and US National Assessment models fit. Since the general circulation models do not contain all of the important climate forcings and feedbacks (as given in the aforementioned 2005 NRC report), the model results must not be interpreted as forecasts. And because they have been applied to project decadal-averaged weather conditions 50-100 years and more into the future, they cannot be considered diagnostic models, since we do not yet have the observed data to assimilate into them. The term projection needs to be reserved for forecasts, as recommended in Figure 6 in R-225.

Therefore, the IPCC and US National Assessment model applications should be communicated as process studies, in the sense that they are sensitivity studies. It is a very convoluted argument to state that a projection is not a prediction. Specifying particular periods of time in the future (e.g., 2050-2059) and communicating the results in this format is very misleading to the users of this information. This very important distinction has been missed both by scientists who study climate impacts using the output from these models and by policymakers.


Filed under Climate Models

The Globally-Averaged Surface Temperature Trend – Incompletely Assessed? Is It Even Relevant?

The globally-averaged surface temperature trend has been highlighted as an icon of climate change. For example, a meeting was held in Exeter, United Kingdom from Feb 1-3, 2005 entitled “Avoiding Dangerous Climate Change.” The emphasis on a globally-averaged temperature trend was clear throughout the meeting. The Hadley Centre brochure relevant to this meeting stated “Once a tolerable (i.e., non-dangerous) change has been determined – say in terms of a global temperature rise – we then have to calculate what this corresponds to in terms of tolerable greenhouse concentrations in the atmosphere.” The message is that a clear global surface temperature threshold exists above which there are dangerous effects on the climate system.

This perspective, however, avoids discussing the real issues associated with long-term variability and change in climate.

First, in the context of atmospheric circulation changes (which are, after all, what produce our weather), it is the regional tropospheric temperature and humidity trends that are important, not a globally-averaged surface temperature. A change in the globally-averaged surface temperature, or even the globally-averaged tropospheric temperature, is important primarily in the context of how it results in circulation changes. The globally-averaged surface temperature is a very poor metric for assessing these circulation changes, a limitation recognized in the 2005 NRC report. Second, even with respect to “global warming,” ocean heat content changes, rather than the surface temperature anomaly, provide a more robust metric (see R-247).

With respect to the surface temperature itself, there are several issues regarding the spatial representativeness of the trends that have been investigated incompletely or not at all. These are:

1. Poor microclimate exposure:
This is a land issue. The value of using photographs to exclude questionable stations is obvious (and we are quite puzzled why anyone would not make this a high priority). The effect of poor exposure (which results in different site exposure depending on the wind direction) and of changes in site conditions over time has not been quantified. Our qualitative assessment, based on the photographs that we have seen, is that this is likely to insert a warm bias at most sites.

2. Moist enthalpy:
This is both a land and an ocean issue. The terms “warming” and “cooling” are incomplete descriptions when there is significant water vapor in the surface air (in the tropics and in mid-latitude warm seasons, in particular). Temperature alone will show a warm bias when the air actually becomes drier over time, and a cool bias when the air becomes more humid over time. This effect has not been quantified with respect to how it influences regional and global surface temperature trends, although it has been shown to be significant for individual sites (a short sketch after this list makes the point concrete).

3. Vertical lapse rate issues (paper in preparation):
The influence of different lapse rates, observation heights, and surface roughness has not been quantified. For example, windy and light-wind nights should not have the same trends at most levels in the surface layer, even if the surface-layer-averaged temperature trend were the same.

4. Uncertainty in homogeneity adjustments:
Time of observation, instrument changes, and urban effects have been recognized as important adjustments (see R-234) that are required to revise temperature trend information in order to produce improved temporal and spatial homogeneity. However, the final homogenized temperature anomalies do not report the statistical uncertainty associated with each step in the homogenization process.
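
To make point 2 above concrete, here is a minimal sketch of moist enthalpy (h = cpT + Lvq) with purely illustrative numbers: a site can register a higher temperature even while the heat content of its surface air decreases, if the air has also dried.

```python
CP = 1004.0  # specific heat of dry air, J kg^-1 K^-1
LV = 2.5e6   # latent heat of vaporization, J kg^-1

def moist_enthalpy(t_k, q):
    """Moist enthalpy (J/kg) from temperature (K) and specific humidity (kg/kg)."""
    return CP * t_k + LV * q

# A site that "warms" by 0.5 K while drying by 0.5 g/kg:
h_before = moist_enthalpy(300.0, 0.0150)
h_after = moist_enthalpy(300.5, 0.0145)

print(f"change in moist enthalpy: {h_after - h_before:+.0f} J/kg")  # -748 J/kg
```

Despite the 0.5°C rise in temperature, the heat content of the air fell, which is the sense in which a temperature-only trend carries a warm bias.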

Thus, even if the globally-averaged surface temperature were a particularly appropriate metric for assessing climate change, there are issues with the robustness of this data set that have been overlooked. Our recommendation, however, is to deemphasize the globally-averaged surface temperature as a climate change metric and to assess instead circulation changes as defined by tropospheric temperature and water vapor (and, for the ocean, temperature and salinity) variability and trends.


Filed under Climate Change Metrics

What is Climate? Why Does it Matter How We Define Climate?

The title of this weblog is “Climate Science,” so the first thing we need to do is define “climate.” For many, the term refers to long-term weather statistics. However, on this blog we are adopting the definition provided in the 2005 National Research Council (NRC) report, where climate is the system consisting of the atmosphere, hydrosphere, lithosphere, and biosphere. Physical, chemical, and biological processes are involved in the interactions among the components of the climate system. Figures 1-1 and 1-2 in the report illustrate this definition of climate very clearly. In the NRC report, climate forcings were extended beyond the radiative forcing of carbon dioxide to include not only the biogeochemical influence of carbon dioxide but also a variety of aerosol forcings (see Table 2-2 in the report), nitrogen deposition, and land-cover changes. Each of these forcings has been determined to influence long-term weather statistics as well as other aspects of the climate.

However, this concept of climate, and of its alteration by humans, has been generally ignored. The NRC report cited above certainly appears to have been completely missed by policymakers. As an example, at the G-8 meeting the term “climate change” was used interchangeably with “global warming.” However, the human influence on climate is much more complex and multi-dimensional than is captured by the term “global warming” (see, for example, https://pielkeclimatesci.files.wordpress.com/2009/10/r-260.pdf; http://www.nap.edu/books/0309095069/html/15.html; and https://pielkeclimatesci.files.wordpress.com/2009/10/r-225.pdf). The term “global warming” is generally used to refer to an increase in the globally-averaged surface temperature in response to the increase of well-mixed greenhouse gases, particularly CO2.

If, however, we are interested in atmospheric and ocean circulation changes, which, after all, are what create our weather, we need to focus on how humans are altering these circulations. In any case, ocean heat content changes are a much more appropriate metric than the globally-averaged surface temperature for evaluating “global warming” (https://pielkeclimatesci.files.wordpress.com/2009/10/r-247.pdf).

Thus it matters how we define climate and climate forcing (http://www.nap.edu/books/0309095069/html/15.html). By ignoring a number of the other first-order climate forcings, we are not properly addressing the threat we face in the future, but are instead relying on the overly simplistic view that reducing carbon dioxide emissions is the way to reduce our “dangerous intervention” in the climate. With respect to changes in circulations, and therefore in weather, we need to identify and quantify the role of spatially heterogeneous climate forcings, such as those from aerosols and land-cover change, in addition to the influence of well-mixed greenhouse gases. These heterogeneous climate forcings could represent a more significant threat to our future climate system than the risk of an increase in the atmospheric concentration of CO2.

Hopefully, this blog will stimulate discussion, as well as illuminate reasons why this broader perspective on climate variability and change has been mostly ignored.


Filed under Definition of Climate