A Short Explanation Of Why The Monitoring Of Global Average Ocean Heat Content Is The Appropriate Metric to Assess Global Warming

On May 27, 2008, I presented the following post

A Short Explanation Of Why The Monitoring Of Global Average Ocean Heat Content Is The Appropriate Metric to Assess Global Warming.

I am reposting most of the post today, in response to the discussions in the post

Pielke Sr and scientific equivocation: don’t beat around the bush, Roger 

on the weblog Skeptical Science [and also in the comments on the weblog Watts Up With That].

A Short Explanation Of Why The Monitoring Of Global Average Ocean Heat Content Is The Appropriate Metric to Assess Global Warming

Climate Science has posted numerous weblogs (e.g. see and see) and several papers (e.g. see) on the value of using ocean heat content changes to assess climate system heat changes.  We have also presented evidence of major problems, including a significant warm bias, with the use of land temperature data at a single level to monitor these heat changes (e.g. see and see).

To concisely illustrate the issue, the definition of the global average surface temperature anomaly, T’, can be used. The equation for this in NRC (2005) is

dH/dt = f – T’/lambda

where H is the heat content of the climate system, T’ is the change in surface temperature in response to a change in heat content (the temperature anomaly), f is the radiative forcing at the tropopause, and lambda is called the “climate feedback parameter” [although more accurately, it should be called the “surface temperature feedback parameter”!]. The trend in T’ is on the order of tenths of a degree Celsius per decade and must be computed from a spatially heterogeneous set of temperature anomaly data, particularly over land.

Moreover, in this approach, there are four variables: H, f, T’ and lambda. This is clearly an unnecessarily complicated way to compute climate system heat changes.
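To make the bookkeeping explicit, here is a minimal Python sketch of the T’-based approach, using the NRC (2005) relation above. Every number in it is an illustrative placeholder rather than an observed value; the point is simply that f, T’ and lambda must each be estimated before dH/dt can be inferred.

# Minimal sketch of the T'-based inference of the heat storage rate
# from dH/dt = f - T'/lambda. All numbers are illustrative placeholders,
# not observed values.

def heat_storage_rate(f, t_anomaly, lam):
    """Inferred climate system heat storage rate dH/dt (W per m^2).

    f         : radiative forcing at the tropopause (W per m^2)
    t_anomaly : global average surface temperature anomaly T' (deg C)
    lam       : "climate feedback parameter" lambda (deg C per W per m^2)
    """
    return f - t_anomaly / lam

# Three separate quantities have to be estimated just to infer the fourth.
print(heat_storage_rate(f=1.6, t_anomaly=0.7, lam=0.8))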

The alternative is much more straightforward. Simply compute H at one time and H at a second time (using ocean heat content measurements; e.g. see). The uncertainty in the data needs to be quantified, of course, but within these uncertainty brackets a robust evaluation of global warming can be obtained. For example, a time slice of ocean heat content at any particular time can be compared with an earlier time slice and, within the uncertainty of the observations and their spatial representativeness, used to document the change in H between the two time periods. There is also no “unrealized heating” in this framework, as is claimed when T’ is used.
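As a hedged illustration of the direct approach, the sketch below simply differences two globally integrated ocean heat content values. The two H values are assumed, order-of-magnitude numbers for illustration, not a real data product.

# Sketch of the direct approach: difference global ocean heat content H
# between two time slices. The values below are assumed, illustrative
# magnitudes (published upper-ocean heat content anomalies are of order
# 10^22 Joules), not observations.

def delta_heat_content(h_early, h_late):
    """Change in globally integrated ocean heat content (Joules)."""
    return h_late - h_early

h_time_1 = 10.0e22   # H at the earlier time slice (Joules), illustrative
h_time_2 = 11.5e22   # H at the later time slice (Joules), illustrative

print("Delta H = %.2e Joules" % delta_heat_content(h_time_1, h_time_2))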

The change in H can then be used to communicate to policymakers and others the magnitude of global warming in Joules which, unlike temperature in degrees Celsius, is a physical unit of heat.
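To see why Joules communicate directly, the short sketch below converts an assumed heat content change into an equivalent global average heating rate in Watts per square meter, using only the Earth’s approximate surface area and the number of seconds in a year. The Delta H value and the five-year interval are illustrative assumptions.

# Hedged arithmetic sketch: express an assumed heat content change in Joules
# as an equivalent global average heating rate in W per m^2.

EARTH_SURFACE_AREA_M2 = 5.1e14   # approximate surface area of the Earth (m^2)
SECONDS_PER_YEAR = 3.15e7        # approximate

delta_h_joules = 1.5e22          # illustrative heat content change (Joules)
years = 5.0                      # illustrative accumulation period

heating_rate = delta_h_joules / (years * SECONDS_PER_YEAR * EARTH_SURFACE_AREA_M2)
print("Equivalent global average heating rate: %.2f W per m^2" % heating_rate)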

Why is this not a priority? There are two possible reasons. First, the time period of good data is much shorter than for the surface temperatures. However, since the IPCC models predict continuing warming, the emphasis on the data from the last decade is well placed. Second, the assumption still exists that the ocean is not well sampled or that there are large errors in the measurements. These concerns have been addressed by the excellent global coverage of the oceans by the Argo network (see) and by recent corrections to the data (e.g. see).

Therefore, it is time to move beyond seeking to evaluate T’ and instead directly monitor values of H for different time periods as the primary metric of global warming.
