Monthly Archives: July 2008

Comments On The Draft CCSP Report “Global Climate Change Impacts in the United States”

The draft report Global Climate Change Impacts in the United States has been released, with an announcement of the Public Review Draft of the Unified Synthesis Product Global Climate Change in the United States. Public comments are due by August 14, 2008 [Climate Science readers are urged to submit comments].

This US Climate Change Science Program (CCSP) report is co-chaired by Thomas R. Karl, Jerry Melillo, and Thomas C. Peterson, with Susan J. Hassol as Senior Editor and Synthesis Team Coordinator. These are the same individuals who have led past CCSP reports (e.g. see and see), with Tom Karl and Tom Peterson deliberately excluding scientific perspectives that differ from their viewpoints (i.e. see). Susan Hassol was the writer of the HBO special “Too Hot Not to Handle”. This HBO show clearly had a specific perspective on the climate change issue and lacked a balanced perspective. The HBO Executive Producer was Ms. Laurie David.

A clear conflict of interest is obvious.

As a result, this report continues the biased, narrow perspective of the earlier CCSP reports, as has been reported a number of times on Climate Science and in other communications (e.g. see and see). As just one example of the bias, the Karl et al report starts with the text

The Future is in Our Hands

“Human-induced climate change is affecting us now. Its impacts on our economy, security, and quality of life will increase in the decades to come. Beyond the next few decades, when warming is “locked in” to the climate system from human activities to date, the future lies largely in our hands. Will we begin reducing heat trapping emissions now, thereby reducing future climate disruption and its impacts? Will we alter our planning and development in ways that reduce our vulnerability to the changes that are already on the way? The choices are ours.”

This statement perpetuates the rejected perspective on the role of humans in the climate system that

the human influence [on the climate system] is dominated by the emissions into the atmosphere of greenhouse gases, particularly carbon dioxide.

The perspective that is supported by a wide body of scientific evidence (e.g. see), however, is that

natural variations are more important than recognized in the Karl et al CCSP synthesis report and that the human influence involves a diverse range of first-order climate forcings, including, but not limited to the human input of CO2.

The remainder of the Karl et al CCSP report necessarily miscommunicates climate information, since it is built on their incorrect focus on “reducing heat trapping emissions” rather than also on the role of natural variations as observed in the past, and on the other first-order climate forcings, such as the role of aerosols in precipitation, nitrogen deposition, and land use/land cover change (e.g. see).

For example, their claim that

“Historical climate and weather patterns are no longer an adequate guide to the future”

is not supported by the observational evidence (e.g. see where an example is presented of past data that we should use to plan for the future).

Thus the conclusion is that the US CCSP program has failed in its mission. These reports have become stale and inbred, since the same people are repeating their perspective on the climate issue.

The CCSP program, initiated within the Bush Administration, offered the opportunity to provide an independent assessment of the role of humans and natural variability in the climate system, as well as a comprehensive framework for reducing societal and environmental vulnerability to risk from climate variations and change through adaptation and mitigation. The CCSP process, however, has not succeeded in this goal.

As recommended in the Climate Science weblog [see] we need new scientists who are not encumbered by their prior advocacy positions on climate change to lead the preparation of balanced climate assessment reports.

The response of the media when this report is released in its final form will also be enlightening. Those reporters who parrot the synthesis without questioning its obvious bias and conflict of interest should be identified as sycophants. Those who adequately communicate the diversity of scientifically supported disagreements with the report should be lauded as the true journalists they are.


Comments Off

Filed under Climate Science Reporting

On the Credibility of Climate Predictions by Koutsoyiannis et al. 2008

An outstanding and very important new paper has appeared which raises further issues with respect to the inability of the IPCC multi-decadal global models to predict future climate. The paper is

Koutsoyiannis, D., A. Efstratiadis, N. Mamassis, and A. Christofides, 2008: On the credibility of climate predictions, Hydrological Sciences Journal, 53 (4), 671-684.

with the abstract

“Geographically distributed predictions of future climate, obtained through climate models, are widely used in hydrology and many other disciplines, typically without assessing their reliability. Here we compare the output of various models to temperature and precipitation observations from eight stations with long (over 100 years) records from around the globe. The results show that models perform poorly, even at a climatic (30-year) scale. Thus local model projections cannot be credible, whereas a common argument that models can perform better at larger spatial scales is unsupported.”

Here is an extract from the conclusions that effectively summarizes the implications of their results

“At the annual and the climatic (30-year) scales, GCM interpolated series are irrelevant to reality. GCMs do not reproduce natural over-year fluctuations and, generally, underestimate the variance and the Hurst coefficient of the observed series. Even worse, when the GCM time series imply a Hurst coefficient greater than 0.5, this results from a monotonic trend, whereas in historical data the high values of the Hurst coefficient are a result of large-scale over-year fluctuations (i.e. successions of upward and downward ‘trends’). The huge negative values of coefficients of efficiency show that model predictions are much poorer than an elementary prediction based on the time average. This makes future climate projections at the examined locations not credible. Whether or not this conclusion extends to other locations requires expansion of the study, which we have planned. However, the poor GCM performance in all eight locations examined in this study allows little hope, if any. An argument that the poor performance applies merely to the point basis of our comparison, whereas aggregation at large spatial scales would show that GCM outputs are credible, is an unproved conjecture and, in our opinion, a false one.”
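The “coefficient of efficiency” the authors invoke is, in the hydrological literature, the Nash-Sutcliffe efficiency: one minus the ratio of the model’s squared error to the squared error of simply predicting the observed long-term mean. A negative value means the model performs worse than that elementary baseline, which is exactly the comparison Koutsoyiannis et al. describe. A minimal sketch with hypothetical numbers (the station data and model series below are invented for illustration):

```python
import numpy as np

def coefficient_of_efficiency(observed, modeled):
    """Nash-Sutcliffe efficiency: 1 - SSE(model) / SSE(mean baseline).
    1 is a perfect fit; values below 0 mean the model predicts worse
    than simply using the observed time average."""
    observed = np.asarray(observed, dtype=float)
    modeled = np.asarray(modeled, dtype=float)
    sse_model = np.sum((observed - modeled) ** 2)
    sse_mean = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - sse_model / sse_mean

# Hypothetical station temperatures vs. a model output with a spurious trend
obs = np.array([15.2, 14.8, 15.5, 14.9, 15.1, 15.4, 14.7, 15.0, 15.3, 14.6])
mod = obs.mean() + 0.3 * np.arange(len(obs))  # monotonic drift, unlike the data
print(coefficient_of_efficiency(obs, mod))    # strongly negative
```

A model that reproduces only a trend while the data fluctuate around a mean will score far below zero on this measure, which is the sense in which "huge negative values" indicate predictions poorer than the time average.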

A fundamental and societally relevant conclusion from this study is that the use of the IPCC model predictions as a basis for policy making is invalid and seriously misleading.


Filed under Climate Models

Documentation Of Continued Significant Land Use Change

Timo Hämeranta has again provided us with important new research papers, this time on the latest information on deforestation [Thanks, Timo!].

He has alerted us to the following:

Sukumar, Raman, 2008. Forest Research for the 21st Century. Science Editorial Vol. 320, No 5882, p. 1395, June 13, 2008

where the abstract reads

“Last month the United Nations (UN) concluded a biodiversity conference in Bonn, Germany, where delegates from 191 countries negotiated “access to and sharing of the benefits of the rich genetic resources of the world.” Many of these resources reside in forests, which cover 4 billion hectares or 30% of Earth’s land. Forests are decreasing at a rate of 7 million hectares annually, mostly in the tropics. How can research encompassing the ecological, social, economic, and political dimensions of forest conservation contribute to reducing forest destruction and maintaining biodiversity, climatic stability, and the livelihoods of the poor, 40 to 50% of whose resources come from forests?”

and

Hansen, Matthew C., Stephen V. Stehman, Peter V. Potapov, Thomas R. Loveland, John R. G. Townshend, Ruth S. DeFries, Kyle W. Pittman, Belinda Arunarwati, Fred Stolle, Marc K. Steininger, Mark Carroll, and Charlene DiMiceli, 2008. Humid tropical forest clearing from 2000 to 2005 quantified by using multitemporal and multiresolution remotely sensed data. PNAS Vol. 105, No 27, pp. 9439-9444

with the abstract

“Forest cover is an important input variable for assessing changes to carbon stocks, climate and hydrological systems, biodiversity richness, and other sustainability science disciplines. Despite incremental improvements in our ability to quantify rates of forest clearing, there is still no definitive understanding on global trends. Without timely and accurate forest monitoring methods, policy responses will be uninformed concerning the most basic facts of forest cover change. Results of a feasible and cost-effective monitoring strategy are presented that enable timely, precise, and internally consistent estimates of forest clearing within the humid tropics. A probability-based sampling approach that synergistically employs low and high spatial resolution satellite datasets was used to quantify humid tropical forest clearing from 2000 to 2005. Forest clearing is estimated to be 1.39% (SE [Standard Error] 0.084%) of the total biome area. This translates to an estimated forest area cleared of 27.2 million hectares (SE 2.28 million hectares), and represents a 2.36% reduction in area of humid tropical forest. Fifty-five percent of total biome clearing occurs within only 6% of the biome area, emphasizing the presence of forest clearing “hotspots.” Forest loss in Brazil accounts for 47.8% of total biome clearing, nearly four times that of the next highest country, Indonesia, which accounts for 12.8%. Over three-fifths of clearing occurs in Latin America and over one-third in Asia. Africa contributes 5.4% to the estimated loss of humid tropical forest cover, reflecting the absence of current agro-industrial scale clearing in humid tropical Africa. “

Timo summarized from the paper that

“…Forest clearing is estimated to be 1.39% (SE 0.084%) of the total biome area. This translates to an estimated forest area cleared of 27.2 million hectares (SE 2.28 million hectares), and represents a 2.36% reduction in area of humid tropical forest….”

“Current global deforestation is 0.28% of forest area per year, and forested area is 30% of the Earth’s land area, and the land area is 29% of the Earth’s surface…”
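Timo’s 0.28%-per-year figure follows directly from the Hansen et al. numbers quoted above: 1.39% of the humid tropical biome cleared over the five years 2000-2005. A quick check of the arithmetic, using only the figures as quoted:

```python
# Figures as quoted from the Hansen et al. (2008) abstract
clearing_frac_5yr = 0.0139   # 1.39% of total biome area cleared, 2000-2005
cleared_mha = 27.2           # million hectares cleared (SE 2.28 Mha)

implied_biome_mha = cleared_mha / clearing_frac_5yr  # implied biome area, Mha
annual_rate_pct = clearing_frac_5yr / 5 * 100        # simple per-year rate

print(round(implied_biome_mha), round(annual_rate_pct, 2))  # 1957 0.28
```

The implied biome area of roughly 1,957 million hectares (about half of the 4 billion hectares of global forest cited in the Sukumar editorial) and the 0.28%-per-year clearing rate are both consistent with the quoted summaries.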

and concluded, as has Climate Science, that these huge landscape conversions must have important consequences for the global climate system. The importance of such land use change in the coming decades was expertly evaluated in the paper

Feddema et al. 2005: The importance of land-cover change in simulating future climates. Science, 310, 1674-1678.

 


Filed under Climate Change Forcings & Feedbacks

When Will They Ever Learn? By Hendrik Tennekes

The current special issue of Climatic Change, the journal edited by Stanford University professor Stephen Schneider, is devoted to Learning and Climate Change. This appealed to me, so I started browsing. Let me quote a few highlights.

 

On p. 6, University of Maryland professor Andreas Lange writes:

 

“Considering limits in applying the expected utility framework to climate change problems, we consider a more recent framework with ambiguity aversion, which accounts for situations with imprecise or multiple probability distributions. We discuss both the impact of ambiguity aversion on decisions and difficulties in applying such a non-expected utility framework to a dynamic context.”

 

On p. 67, MIT professor Mort Webster writes:

 

“We model endogenous learning by calculating posterior distributions of climate sensitivity from Bayesian updating, based on temperature changes that would be observed for a given true climate sensitivity and assumptions about errors, prior distributions, and the presence of additional uncertainties.”

 

On p. 139, University of California at Santa Barbara professor Charles Kolstad writes:

 

“Uncertainty with Complete Learning leads to higher expected membership but lower expected aggregate net benefits than No Learning, while Partial Learning almost certainly leads to lower membership and even lower expected aggregate net benefits.”

 

I am tempted to counter these exquisite examples of Bayesian scholarship with a few lines from my contributions to turbulence theory:

 

“The approximately log-normal probability distribution of the microstructure of turbulent flows has given rise to the concept of microstructural intermittency. The smallest scales of motion are active only in a small fraction of the space-time domain, requiring a revision of the classic Kolmogorov theory of small-scale turbulence. The universal use of Fourier decompositions, however, is a complicating factor, because it causes spurious kinetic energy dispersion in wave-number space.”

 

And what about an example of my hermetic jargon on the North Atlantic storm track?

 

“The meridional convergence of the zonal transient eddy momentum flux drives the momentum of the jet stream in such a way that the suggestion is created that a negative eddy viscosity prevails. This concept, however, seems to contradict the Second Law of Thermodynamics, unless one is willing to make a thorough study of the energetics of the General Circulation.”

 

Obviously, it is not all that hard to poke fun at convoluted academic writing styles. I can do it too if I have to. But my mood changed when I came to the last paper in this issue of Climatic Change. It is a paper on Negative Learning by Michael Oppenheimer of Princeton University, Brian O’Neill of IIASA in Laxenburg, Austria, and Mort Webster. Negative Learning? The authors give a definition on p. 158:

 

“Negative Learning is a decrease or sustained divergence in the correspondence between the true outcome and the uncertainty characterization or belief over time.”

 

Their first alarm is sounded one page earlier:

 

“Our study suggests that twenty years of experience with large international assessments has failed to solve, and in some respects even aggravated, the problem posed by negative learning for policy makers.”

 

I am capable of distinguishing between a bugle call on the battlefield and the incomprehensible jargon used by the Bayesian crowd, so I started reading this paper in earnest. On p. 156, I found:

 

“Overconfidence is one likely cause of negative learning, but it is by no means the only one. The use of expert elicitation to assess knowledge and uncertainty among limited groups of experts sometimes involves a reflexive revision of judgments that is known to consolidate beliefs, revealing that some group interactions can lead to negative learning.”

 

Now, where did I hear such sounds before? Certainly not from James Hansen and Paul Crutzen, nor from Gavin Schmidt at RealClimate, nor from Susan Solomon, nor from any of the recent 2007 Nobel Peace Prize winners at the IPCC. The message that Oppenheimer, O’Neill, and Webster want to convey is phrased in terms even a retired engineer like me can understand (pp. 167-169):

 

“Given past experience, we recommend that the assessment process should be overhauled such that characterizing uncertainty becomes a co-equal partner with consensus building. Attention should be devoted to the implications of poorly understood or hypothetical but plausible processes and alternative model structures. Recommendations for improvements that would minimize the possibility of negative learning in the production and use of such assessments include avoiding uncertainty assessment based only on model intercomparison and explicit reporting of disagreements among assessment authors. In addition, research funding from mission-oriented agencies also bears the potential to reinforce existing assumptions. Funding could be explicitly allocated to research that explores alternative processes or assumptions not present in current models.”

 

“Almost no effort has been devoted to understanding why learning in the global change arena, whether in basic science or assessment, sometimes goes awry. It would be timely to perform critical reviews of particular assessment case histories, not just to compare predictions to outcomes, but to understand how specific judgments were made. Accordingly, IPCC working group discussions should become much more transparent, so that the basis for particular decisions might be understood by non-participants and participants alike. With more than three decades of experience in hand, the scientific community should apply the same strict standards of scholarship to examining how assessments perform and how they might be improved that it applies to its own research.”

 

These conclusions warm my heart. Is this paper in Climatic Change, certainly not a publication vessel for climate skeptics, perhaps a harbinger of things to come? Does the fact that worldwide temperatures stopped climbing ten years ago finally penetrate even the stubborn and prejudiced minds that have been instrumental in forging the Consensus Doctrine? Is the tide finally turning?

 

The message from Oppenheimer et al. to the IPCC crowd is evidently that IPCC should stop its addiction to Negative Learning. I want to add a message of my own. It is:

 

When will you ever learn?

 

In 1964, Bob Dylan wrote one of his most memorable lyrics:

 

“Come gather ‘round people

Wherever you roam

And admit that the waters

Around you have grown

And accept it that soon

You’ll be drenched to the bone.

If your time is to you

Worth savin’

Then you better start swimmin’

Or you’ll sink like a stone

For the times they are a-changin’.

 

The line it is drawn

The curse it is cast

The slow one now

Will later be fast

As the present now

Will later be past

The order is

Rapidly fadin’.

And the first one now

Will later be last

For the times they are a-changin’.”

 

I, for one, hope that the times will yet change within my lifetime.


Filed under Guest Weblogs

Important New Research On The Role Of Aerosols On Precipitation By Professor Chidong Zhang

Professor Chidong Zhang of the University of Miami presented an important talk on June 19, 2008 at NOAA’s Earth System Research Laboratory entitled “Climatic Effect of Aerosol on Tropical Rainfall: Evidence from Satellite Observations.” The abstract reads

“Many efforts have been made to investigate whether and how aerosol may affect precipitation. These efforts have yielded inconsistent and therefore controversial results. Especially, in the absence of observed aerosol effects on precipitation on climate scales, the null hypothesis “There is no climatic effect of aerosol on precipitation” has not been rejected. This has led the IPCC 2007 report to state “the sign of the global change in precipitation due to aerosols is not yet known”. In this study, we use long-term satellite observations of aerosol and rainfall to document large-scale co-variability of the two variables over the tropical Atlantic and West Africa. When influences due to known climate phenomena (e.g., ENSO, NAO, TAV) and meteorological factors (e.g., water vapor) are ruled out and analysis domains are carefully designed, such co-variability, especially its spatial patterns, suggests possible aerosol effects on rainfall. Large reductions in rainfall are found over the western tropical Atlantic Ocean and over the Gulf of Guinea in months of anomalously high aerosol concentration. Such reductions are statistically different from random and overall interannual variability. We propose that these reductions signify the climatic effect of aerosol on precipitation distribution and variability. Statistical results based on long-term satellite data are confirmed by consistent results based on more recent and higher quality satellite observations.”

This very important talk is based on the research of Professor Zhang in the following two papers

Huang, J., C. Zhang, and J. M. Prospero, 2008: Large-Scale Effects of African Aerosol on Precipitation of the West African Monsoon. Quart. J. Royal Meteor. Soc., in press.

The abstract reads

“We used multiyear satellite observations to study aerosol effects on the large-scale variability in precipitation of the West African Monsoon (WAM). We found a significant precipitation reduction associated with high aerosol concentration near the Guinea coast from late boreal autumn to winter. The largest aerosol-induced reduction (~1.5 mm day-1) is about 50% of the mean in the region and is mainly in the rain rate range of 2-17 mm day-1 under maritime environment off the northern coast of the Gulf of Guinea. This reduction cannot be attributed to known climate and weather factors such as El Niño-Southern Oscillation, North Atlantic Oscillation, Atlantic sea surface temperature, and water vapour. The fractional precipitation variance related to aerosol is about 13%, a value comparable to those attributed to the known climate factors. Aerosol responsible for the observed precipitation reduction can be traced back to various African sources where aerosol emissions have varied considerably over the past several decades, in part attributable to human activities.”

and

Huang, J., C. Zhang, and J. M. Prospero, 2008: Aerosol-Induced Large-Scale Variability in Precipitation over the Tropical Atlantic. J. Climate, submitted.

The abstract reads

“We used multiyear satellite observations to document a relationship between the large-scale variability in precipitation over the tropical Atlantic and aerosol traced to African sources. During boreal winter and spring, there is a significant reduction in precipitation south of the Atlantic marine intertropical convergence zone during months when aerosol concentrations are anomalously high over a large domain of the tropical Atlantic Ocean. This reduction cannot be attributed to known climate factors such as El Niño-Southern Oscillation, North Atlantic Oscillation, and zonal and meridional modes of tropical Atlantic sea surface temperature, or to meteorological factors such as water vapor. The fractional variance in precipitation related to aerosol is about 12% of the total interannual variance, which is of the same order of magnitude as that related to each of the known climate and weather factors. A backward trajectory analysis confirms the African origin of aerosols that directly affect the changes in precipitation. The reduction in mean precipitation mainly comes from decreases in moderate rain rates (10 – 20 mm/day), while light rain (<10 mm/day) can actually be enhanced by aerosol. Our results suggest aerosols have a clearly identifiable effect on climate variability in precipitation in the Pan-Atlantic region.”
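The “fractional variance related to aerosol” reported in these abstracts is, in essence, the squared correlation between rainfall anomalies and aerosol anomalies once other factors are controlled for. A toy illustration with synthetic data (the coupling strength and noise level here are invented, chosen only so the variance fraction comes out near the reported order of 10%; this is not the authors’ actual method of controlling for ENSO, NAO, etc.):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
aerosol = rng.normal(size=n)                  # synthetic aerosol anomalies
rain = -0.35 * aerosol + rng.normal(size=n)   # weak negative coupling + noise

# Squared correlation = fraction of rainfall variance linearly related to aerosol
r = np.corrcoef(aerosol, rain)[0, 1]
frac_variance = r ** 2
print(f"correlation = {r:.2f}, variance fraction = {frac_variance:.2f}")
```

The point the abstracts make is that even though this fraction is modest (~12-13%), it is of the same order as the fractions attributable to well-known climate modes, which is why the authors treat the aerosol signal as climatically significant.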

The role of aerosols in precipitation (from industrial activity, but also fires and blowing dust) was reported in my recent House subcommittee testimony (see) on overlooked and/or under-reported issues in the IPCC and CCSP reports. Professor Zhang’s research provides further documentation of this critically important topic with respect to the human role in the climate system.


Filed under Climate Change Forcings & Feedbacks

The Value Of Paleoclimate Records In Assessing Vulnerability to Drought: A New Paper, Meko et al. 2007

There is a seminal new study of drought in the western United States that extends the period of assessment back to 800 A.D. [and thanks to Connie Woodhouse for providing me a copy of their paper]. The paper is

Meko, D., C. A. Woodhouse, C. A. Baisan, T. Knight, J. J. Lukas, M. K. Hughes, and M. W. Salzer (2007), Medieval drought in the upper Colorado River Basin, Geophys. Res. Lett., 34, L10705, doi:10.1029/2007GL029988.

The abstract reads

“New tree-ring records of ring-width from remnant preserved wood are analyzed to extend the record of reconstructed annual flows of the Colorado River at Lee Ferry into the Medieval Climate Anomaly, when epic droughts are hypothesized from other paleoclimatic evidence to have affected various parts of western North America. The most extreme low-frequency feature of the new reconstruction, covering A.D. 762-2005, is a hydrologic drought in the mid-1100s. The drought is characterized by a decrease of more than 15% in mean annual flow averaged over 25 years, and by the absence of high annual flows over a longer period of about six decades. The drought is consistent in timing with dry conditions inferred from tree-ring data in the Great Basin and Colorado Plateau, but regional differences in intensity emphasize the importance of basin-specific paleoclimatic data in quantifying likely effects of drought on water supply.”

Figure 2 from that paper must be considered one of the most important illustrations of the natural variability of the climate system, as well as of the observation that the climate is never static.

 


Figure caption from Meko et al. (2007): Time series plot of 25-year running mean of reconstructed flows of the Colorado River at Lee Ferry. Flows are plotted as percentage of the 1906–2004 mean of observed natural flows (18.53 billion cubic meters, or 15.03 million acre-ft). Confidence interval derived from 0.10 and 0.90 probability points of ensemble of 1000 noise-added reconstructions. Horizontal dashed line is lowest 25-year running mean of observed flows (1953–1977). [The Colorado River, of course, is a major source of water for much of the southwest United States.]
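The 25-year running mean used in the Meko et al. figure is a simple moving average over consecutive years. A sketch with hypothetical flow data (the embedded six-decade dry spell below is invented, merely to mimic the kind of mid-1100s feature described in the abstract):

```python
import numpy as np

def running_mean(x, window=25):
    """Moving average over `window` consecutive values;
    returns len(x) - window + 1 smoothed values."""
    kernel = np.ones(window) / window
    return np.convolve(np.asarray(x, dtype=float), kernel, mode="valid")

# Hypothetical annual flows as percent of the long-term mean,
# with an embedded 60-year period of depressed flow
flows = np.concatenate([np.full(50, 100.0), np.full(60, 84.0), np.full(50, 100.0)])
smoothed = running_mean(flows, window=25)
print(round(smoothed.min(), 6))  # 84.0 -- the drought shows as a sustained dip
```

Smoothing at this scale is what makes a multi-decadal drought stand out from year-to-year noise, and it is the basis for comparing the medieval low-flow epoch against the lowest 25-year mean in the observed record (the dashed line in the figure).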

This figure tells us that

  • longer and more severe periods of drought occurred in the pre-historical time period than in the instrumental record

and that

  • the climate of this region is never static but varies significantly over time.

One obvious conclusion is that, regardless of how humans are altering the climate system, the natural variations are significantly larger than stated in the 2007 IPCC assessment. This conclusion adds significant new support for the paper

Rial, J., R.A. Pielke Sr., M. Beniston, M. Claussen, J. Canadell, P. Cox, H. Held, N. de Noblet-Ducoudre, R. Prinn, J. Reynolds, and J.D. Salas, 2004: Nonlinearities, feedbacks and critical thresholds within the Earth’s climate system. Climatic Change, 65, 11-38,

where we report that

“The Earth’s climate system is highly nonlinear: inputs and outputs are not proportional, change is often episodic and abrupt, rather than slow and gradual, and multiple equilibria are the norm”

and that

“It is imperative that the Earth’s climate system research community embraces this nonlinear paradigm if we are to move forward in the assessment of the human influence on climate.”

The significance of these observational findings is that the regional multi-decadal predictions based on the current generation of IPCC models should not be used by the impacts and policy making communities. The IPCC models fail to skillfully predict climate features such as drought, as exemplified in the figure from the Meko et al paper. A more appropriate procedure for the impacts and policy communities would be to use historical and paleoclimate data to explore the consequences to society and the environment in the coming decades, if these extreme climate conditions were to reoccur. In terms of how humans are altering the climate system, we do not know if we are making such extremes more or less likely (despite scientifically flawed claims to the contrary; e.g. see and see), but we do know that such extreme weather conditions occurred in the past. Therefore, prudent policy management would plan for their eventual reoccurrence.


Filed under Vulnerability Paradigm

Climate Assessment Oligarchy – The IPCC

An oligarchy is a

“form of government in which all power is vested in a few persons or in a dominant class or clique; government by the few.”

This definition certainly fits the IPCC, as illustrated by the closed meeting that Gerald Meehl, Jonathan Overpeck, Susan Solomon, Thomas Stocker, and Ron Stouffer are organizing in Hawaii in March 2009. This meeting is reported at

Joint IPCC-WCRP-IGBP Workshop: New Science Directions and Activities Relevant to the IPCC AR5 [Tuesday, March 03, 2009 - Friday, March 06, 2009, at the University of Hawaii International Pacific Research Center, Honolulu, Hawaii].

While the meeting is to be mostly self-funded [which means federal contracts, grants, and other such sources will be used to pay for the trip], it raises the issue of why such a remote location was chosen. Presumably the participants should be concerned about the emission of CO2 into the atmosphere from the jet aircraft that will transport them to Hawaii.

The Workshop is also open only to the IPCC Working Group 1 Lead Authors [LAs] and Contributing Lead Authors [CLAs] from all four assessments. While the goals of the Workshop are appropriate scientific topics, the closed character of the Workshop and its location perpetuate the exclusiveness of the IPCC process.

This small community of climate scientists is controlling the agenda with respect to the assessment of climate change. This is an oligarchy.

Climate Science urges that a new group of climate scientists be empowered to lead the next IPCC report. The inbred group of scientists who are to attend the Hawaii workshop, while most of them are excellent scientists, have a conflict of interest in that they have already presented their viewpoints on the role of humans in the climate system [to the exclusion of peer-reviewed scientific viewpoints, however; e.g. see the Appendix in Pielke 2008].

The next IPCC assessment should involve only scientists who have not taken a strong position on the IPCC reports, but who have outstanding scientific credentials. Among the first questions they should address are the three hypotheses, only one of which can be true:

  • The human influence is minimal and natural variations dominate climate variations on all time scales;
  • While natural variations are important, the human influence is significant and involves a diverse range of first-order climate forcings (including, but not limited to the human input of CO2);
  • The human influence is dominated by the emissions into the atmosphere of greenhouse gases, particularly carbon dioxide.

This research question has been discussed on Climate Science (e.g. see).

Without new scientists leading the IPCC process as LAs and CLAs, the next IPCC report is doomed to be completed by an oligarchy that is using its privileged position to advocate for a particular perspective on the role of humans within the climate system [the third hypothesis above]. The next IPCC report will not be a balanced assessment, but will continue to be policy advocacy in the guise of a scientific framework.

     


Filed under Climate Science Meetings, Climate Science Op-Eds