Policy Peril Segment 5: Is the Science Debate “Over”? Updated 08/17/09

by Marlo Lewis on August 5, 2009

Today’s post in my series of commentaries on excerpts from CEI’s film, Policy Peril: Why Global Warming Policies Are More Dangerous Than Global Warming Itself, challenges the Gorethodox dogma that the science debate on global warming is “over.”

There are three basic issues in the climate change science debate:

  • Detection – Has the world warmed, and if so, by how much?
  • Attribution – How much of the observed warming (especially since the mid-1970s) is due to increases in atmospheric greenhouse gas concentrations?
  • Sensitivity – How much additional warming should we expect from continuing increases in greenhouse gas concentrations?

Despite what you’ve heard over and over again, these basic issues are unsettled, and more so now than at any time in the past decade. The science debate is not “over.” Reports of the death of climate skepticism have been greatly exaggerated.

Because of time constraints (Policy Peril runs under 40 minutes), the film briefly explores only the most important of the three basic issues: climate sensitivity. Today’s clip comes from that part of the film: an interview with University of Alabama in Huntsville atmospheric scientist Dr. Roy Spencer. To watch the Spencer interview, click here. To watch the entire movie, click here.

Here’s how this post is organized. First, I’ll reproduce the text of Spencer’s interview. Then, I’ll review some recent research bearing on the three fundamental science issues: detection, attribution, and sensitivity.

Text of today’s film clip:

Narrator: All the IPCC models assume that a CO2-induced warming will produce more high-altitude cirrus clouds, which then trap even more heat in the atmosphere. This is what’s called a positive climate feedback. Roy Spencer and his colleagues use satellites to study cirrus cloud behavior.

Dr. Roy Spencer (University of Alabama in Huntsville): Last August, August of 2007, we published research which showed from a whole bunch of satellite data that when the tropical atmosphere heats up–there are these periods when the atmosphere heats up from more rain activity or cools down from less rain activity–that when it heats up, the skies actually open up. The cirrus clouds that are up high, in the troposphere, in the upper atmosphere, open up and let more cooling infrared radiation escape to space. And it was a very strong effect.

Narrator: Spencer says that if climate models incorporated the negative feedback his team discovered, the models might forecast 75% less warming.

This is definitely not the Al Gore view of climate sensitivity. In fact, in An Inconvenient Truth (p. 67), Gore suggests we could get “three times as much” warming by mid-century as has occurred since the “depth of the last ice age.” That would mean a warming of 10ºC-12ºC by mid-century! Gore’s implicit warming forecast goes way beyond the IPCC best-estimate forecast range of 1.8ºC to 4.0ºC (IPCC WGI AR4, Summary for Policymakers, p. 13). As we’ll see below, several strands of evidence suggest that the IPCC models are also too “hot.”

Detection

The world has warmed overall during the past 130 years, as evidenced by melting glaciers, longer growing seasons, and both proxy and instrumental data. However, the main era of “anthropogenic” global warming supposedly began in the mid-1970s, and ongoing research by retired meteorologist Anthony Watts leaves no doubt that the U.S. surface temperature record–reputed to be the best in the world–has in recent decades been unreliable and riddled with false warming biases.

Watts and a team of more than 650 volunteers have visually inspected and photographically documented 1,003, or 82%, of the 1,221 climate monitoring stations overseen by the U.S. National Weather Service. In a report summarizing an earlier phase of the team’s investigation (a survey of 860+ stations), Watts says, “We were shocked by what we found.” He explains:

We found stations located next to exhaust fans of air conditioning units, surrounded by asphalt parking lots and roads, on blistering-hot rooftops, and near sidewalks and buildings that absorb and radiate heat. We found 68 stations located at wastewater treatment plants, where the process of waste digestion causes temperatures to be higher than in surrounding areas.

In fact, we found that 89 percent of the stations–nearly 9 of every 10–fail to meet the National Weather Service’s own siting requirements that stations must be 30 meters (about 100 feet) or more away from an artificial heating or radiating/reflecting heat source. In other words, 9 of every 10 stations are likely reporting higher or rising temperatures because they are badly sited.

“It gets worse,” Watts continues:

We observed that changes in the technology of temperature stations over time also have caused them to report a false warming trend. We found gaps in the data record that were filled in with data from nearby sites, a practice that propagates and compounds errors. We found that adjustments to the data by both NOAA and another government agency, NASA, cause recent temperatures to look even higher.

How big a problem is this? According to Watts, “The errors in the record exceed by a wide margin the purported rise in temperature of 0.7ºC (about 1.2ºF) during the twentieth century.” Based on analysis of 948 stations rated as of May 31, 2009, Watts estimates that 22% of stations have an expected error of 1ºC, 61% have an expected error of 2ºC, and 8% have an expected error of 5ºC.

[Figure: watts_fig23]

Watts concludes that, “this record should not be cited as evidence of any trend in temperature that may have occurred across the U.S. during the past century.” He further concludes: “Since the U.S. record is thought to be ‘the best in the world,’ it follows that the global database is likely similarly compromised and unreliable.”

A related issue is the influence of urban heat islands on long-term temperature records. Climate Change Reconsidered, a report by the Nongovernmental International Panel on Climate Change (NIPCC), written by Drs. Craig Idso and S. Fred Singer with 35 contributors and reviewers, reviews more than 40 studies on urban heat islands. For example, a study by Oke (1973) of the urban heat island strength of 10 settlements in the St. Lawrence Lowlands of Canada found that a population as small as 1,000 people could generate a heat island effect of 2ºC-2.5ºC. From this study and the others reviewed, the NIPCC concludes:

It appears almost certain that surface-based temperature histories of the globe contain a significant warming bias introduced by insufficient corrections for the non-greenhouse-gas-induced urban heat island effect. Furthermore, it may well be impossible to make proper corrections for the deficiency, as the urban heat island of even small towns dwarfs any concomitant augmented greenhouse effect that may be present [p. 95; emphasis in original].

In a comment submitted to EPA regarding its proposed endangerment finding, University of Alabama in Huntsville (UAH) atmospheric scientist John Christy notes two additional reasons to conclude that the IPCC surface data records exaggerate warming trends:

As a culmination of several papers and years of work, Christy et al. (2009) demonstrates that popular surface datasets overstate the warming that is assumed to be greenhouse related for two reasons. First, these datasets use only stations that are electronically (i.e. easily) available, which means the unused, vast majority of stations (usually more rural and representative of actual trends but harder to find) are not included. Secondly, these popular datasets use the daily mean surface temperature (TMean) which is the average of the daytime high (TMax) and nighttime low (TMin). In this study (and its predecessors, Christy 2002, Christy et al. 2006, Pielke Sr. et al. 2008, Walters et al. 2007 and others) we show that TMin is seriously impacted by surface development, and thus its rise is not an indicator of greenhouse gas forcing. Some have called this the Urban Heat Island effect, but, as described in Christy et al. 2009, it is much more than this and encompasses any development of the surface (e.g. irrigated agriculture).

For example, the UK Hadley Center, relying on two electronic surface stations, computed a TMax temperature trend in East Africa of 0.14ºC per decade during 1905-2004. Christy, using data from 45 stations, found a trend of only 0.02ºC per decade.

[Figure: christy-uah-v-hadcrut3]

In California, Christy found that the only significant warming trend is for TMin in the irrigated San Joaquin Valley. Note that in the non-irrigated Sierra Nevada, where models project a greenhouse gas-induced warming should occur, there is actually a decreasing temperature trend.

[Figure: christy-tmin-ca]

Obviously, temperature data are the starting point of any analysis of global warming. But if we can’t trust the U.S. and IPCC temperature records, how do we know how much global warming has actually occurred?

Satellite observations are not influenced by heat islands and irrigation, or subject to the quality control problems detailed by Watts. Moreover, satellite records tally well with weather balloon observations–an independent database. So maybe detection should be based solely on satellite data, which do show some warming over the past 30 years. However, the “debate is over” crowd is unlikely to embrace this solution.  The satellite record shows a relatively slow rate of warming–about 0.13ºC per decade–hence a relatively insensitive climate.

[Figure: uah-temperature-anomalies-jan-1979-june-2009]

Moreover, as can be seen in the above chart of the University of Alabama in Huntsville (UAH) satellite record, some of the 0.13ºC/decade “trend” comes from the 1998 El Niño warming pulse. Remove 1998, and the 30-year satellite record trend drops to 0.12ºC/decade.
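For readers who want to see the arithmetic behind a claim like “remove 1998 and the trend drops,” here is a minimal sketch of how a decadal trend is typically computed from monthly anomalies by least-squares fitting, and how excluding a single warm year changes the fitted slope. The anomaly series below is synthetic and purely illustrative; it is not the actual UAH record.

```python
import numpy as np

rng = np.random.default_rng(0)

# Monthly time axis, January 1979 through June 2009, in fractional years.
months = np.arange(1979.0, 2009.5, 1.0 / 12.0)

# Synthetic anomaly series: a small underlying trend, noise, and a crude warm
# spike standing in for the 1998 El Nino. NOT the actual UAH data.
anoms = 0.013 * (months - months[0]) + rng.normal(0.0, 0.1, months.size)
anoms[(months >= 1998.0) & (months < 1999.0)] += 0.4

def decadal_trend(t, y):
    """Least-squares slope, expressed in degrees C per decade."""
    slope_per_year = np.polyfit(t, y, 1)[0]
    return slope_per_year * 10.0

keep = ~((months >= 1998.0) & (months < 1999.0))  # drop calendar-year 1998
print(f"full record : {decadal_trend(months, anoms):+.2f} C/decade")
print(f"without 1998: {decadal_trend(months[keep], anoms[keep]):+.2f} C/decade")
```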

Attribution

The IPCC, the leading spokesman for the alleged scientific consensus, claims that, “Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.” How does the IPCC know this? The IPCC offers three main reasons.

First, according to the IPCC, “Paleoclimate reconstructions show that the second half of the 20th century was likely the warmest 50-year period in the Northern Hemisphere in 1300 years” (IPCC AR4, WGI, Chapt. 9, p. 702). The warmth of recent decades coincided with a rapid increase in GHG concentrations. Therefore, the IPCC reasons, most of the recent warming is likely due to anthropogenic GHG emissions.

This argument is unconvincing if the warming of recent decades is not unusual or unprecedented in the past 1300 years. As it happens, numerous studies indicate that the Medieval Warm Period (MWP)–roughly the period from AD 800 to 1300, with peak warmth occurring about AD 1050–was as warm as or warmer than the Current Warm Period (CWP).

The Center for the Study of Carbon Dioxide and Global Change has analyzed more than 200 peer-reviewed MWP studies produced by more than 660 individual scientists working in 385 separate institutions from 40 countries. The Center divides these studies into three categories: those with quantitative data enabling one to infer the degree to which the peak of the MWP differs from the peak of the CWP (Level 1), those with qualitative data enabling one to infer which period was warmer, although not by how much (Level 2), and those with data enabling one to infer the existence of an MWP in the region studied (Level 3). An interactive map showing the sites of these studies is available at CO2Science.org.

Only a few Level 1 studies determined the MWP to have been cooler than the CWP; the vast majority indicate a warmer MWP. On average, the studies indicate that the MWP was 1.01ºC warmer than the CWP.

[Figure: mwpquantitative]

Figure Description: The distribution, in 0.5ºC increments, of Level 1 studies that allow one to identify the degree to which peak MWP temperatures either exceeded (positive values, red) or fell short of (negative values, blue) peak CWP temperatures.
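As an aside, here is a rough sketch of how a Level 1 summary like the one in the figure can be assembled: take each study’s MWP-minus-CWP peak-temperature differential, bin it in 0.5ºC increments, and average across studies. The differentials below are made-up placeholders, not the Center’s actual data.

```python
from collections import Counter

# Hypothetical MWP-minus-CWP peak-temperature differentials (deg C), one per
# Level 1 study. Placeholders only, not the Center's dataset.
mwp_minus_cwp = [1.4, 0.8, 2.1, -0.3, 0.6, 1.2, 0.9, 1.7, 0.4, 1.1]

def half_degree_bin(x):
    """Round a differential down to the lower edge of its 0.5 C bin."""
    return 0.5 * (x // 0.5)

histogram = Counter(half_degree_bin(x) for x in mwp_minus_cwp)
for edge in sorted(histogram):
    print(f"{edge:+.1f} to {edge + 0.5:+.1f} C: {histogram[edge]} studies")

mean_diff = sum(mwp_minus_cwp) / len(mwp_minus_cwp)
print(f"mean differential: {mean_diff:+.2f} C")
```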

Similarly, the vast majority of Level 2 studies indicate a warmer MWP:

[Figure: mwpqualitative]

Figure Description: The distribution of Level 2 studies that allow one to determine whether peak MWP temperatures were warmer than (red), equivalent to (green), or cooler than (blue), peak CWP temperatures.

The IPCC’s second main reason for attributing most recent warming to the increase in GHG concentrations is that climate models “cannot reproduce the rapid warming observed in recent decades when they only take into account variations in solar output and volcanic activity. However . . . models are able to simulate observed 20th century changes when they include all of the most important external factors, including human influences from sources such as greenhouse gases and natural external factors” (IPCC, AR4, WGI, Chapt. 9, p. 702).

This would be decisive if today’s models accurately simulate all important modes of natural variability. In fact, models do not accurately simulate the behavior of clouds and ocean cycles. They may also ignore important interactions between the Sun, cosmic rays, and cloud formation.

Richard Lindzen of MIT spoke to this point at the Heartland Institute’s recent (June 2, 2009) Third International Conference on Climate Change:

What was done [by the IPCC], was to take a large number of models that could not reasonably simulate known patterns of natural behavior (such as ENSO, the Pacific Decadal Oscillation, the Atlantic Multi-Decadal Oscillation), claim that such models nonetheless adequately depicted natural internal climate variability, and use the fact that models could not replicate the warming episode from the mid seventies through the mid nineties, to argue that forcing was necessary and that the forcing must have been due to man. The argument makes arguments in support of intelligent design seem rigorous by comparison.

“Fingerprint” studies are the third basis on which the IPCC attributes most recent warming to anthropogenic greenhouse gases. Climate models project a specific pattern of warming through the vertical profile of the atmosphere–a greenhouse “fingerprint.” If the observed warming pattern matches the model-projected fingerprint, that would be strong evidence that recent warming is anthropogenic. Conversely, notes the NIPCC, “A mismatch would argue strongly against any significant contribution from greenhouse gas (GHG) forcing and support the conclusion that the observed warming is mostly of natural origin” (NIPCC, p. 106).

Douglass et al. (2007) compared model-projected and observed warming patterns in the tropical troposphere. The observed pattern is based on three compilations of surface temperature records, four balloon-based records of the surface and lower troposphere, and three satellite-based records of various atmospheric layers–10 independent datasets in all.

“While all greenhouse models show an increasing warming trend with altitude, peaking around 10 km at roughly two times the surface value,” observes the NIPCC, “the temperature data from balloons give the opposite result; no increasing warming, but rather a slight cooling with altitude” (p. 107). See the figures below.

[Figure: hot-spot]

The mismatch between the model-predicted greenhouse fingerprint and the observed pattern is profound. As the Douglass team explains: “Model results and observed temperature trends are in disagreement in most of the tropical troposphere, being separated by more than twice the uncertainty of the model mean. In layers near 5 km, the modeled trend is 100% to 300% higher than observed, and above 8 km, modeled and observed trends have opposite signs.”

[Figure: douglass]

Figure description: Temperature trends for the satellite era (ºC/decade). HadCRUT, GHCN, and GISS are compilations of surface temperature observations. IGRA, RATPAC, HadAT2, and RAOBCORE are balloon-based observations of the surface and lower troposphere. UAH, RSS, and UMD are satellite-based data for various layers of the atmosphere. The 22-model average comes from an ensemble of 22 model simulations from the most widely used models worldwide. The red lines are the +2 and -2 standard errors of the mean from the 22 models. Source: Douglass et al. 2007.

The NIPCC concludes that the mismatch of observed and model-calculated fingerprints “clearly falsifies the hypothesis of anthropogenic global warming (AGW)” (p. 108). I would put the state of affairs more cautiously. In view of (1) significant evidence that the MWP was as warm as or warmer than the CWP, (2) the inability of climate models to simulate important modes of natural variability, and (3) the failure of observations to confirm a greenhouse fingerprint in the tropical troposphere, the IPCC claim that “most” recent warming is “very likely” anthropogenic should be considered a boast rather than a balanced assessment of the evidence.

Climate Sensitivity

The most important unresolved scientific issue in the global warming debate is how sensitive (reactive) the climate is to increases in GHG concentrations.

Climate sensitivity is typically defined as the global average surface warming following a doubling of carbon dioxide (CO2) concentrations above pre-industrial levels. The IPCC says a doubling is likely to produce warming in the range of 2ºC to 4.5ºC, with a most likely value of about 3ºC (IPCC, AR4, WGI, Chapt. 10, p. 749). The IPCC presents a range rather than a specific value because of uncertainty regarding the strength of the relevant feedbacks.

In a hypothetical climate with no feedbacks, positive or negative, a CO2 doubling would produce 1.2ºC of warming (IPCC, AR4, WGI, Chapt. 8, p. 631). In most climate models, the dominant feedbacks are positive, meaning that the warmth from rising GHG levels causes other changes (in water vapor, clouds, or surface reflectivity, for example) that either increase the retention of outgoing long-wave radiation (OLR) or decrease the reflection of incoming short-wave radiation (SWR).
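For readers who want the feedback arithmetic spelled out, here is a minimal sketch using the standard gain relation ΔT = ΔT0/(1 - f), where ΔT0 is the ~1.2ºC no-feedback response and f is the net feedback factor. The feedback values looped over below are illustrative assumptions, not numbers taken from the IPCC or from any of the researchers quoted in this post.

```python
# No-feedback warming for a CO2 doubling, as cited above from IPCC AR4.
DELTA_T_NO_FEEDBACK = 1.2  # deg C

def equilibrium_sensitivity(feedback_factor):
    """Standard gain relation dT = dT0 / (1 - f); valid only for f < 1."""
    return DELTA_T_NO_FEEDBACK / (1.0 - feedback_factor)

# Illustrative net feedback factors, from strongly negative to strongly positive.
for f in (-1.0, 0.0, 0.4, 0.6):
    print(f"f = {f:+.1f}  ->  ~{equilibrium_sensitivity(f):.1f} C per CO2 doubling")
```

On this simple accounting, a net feedback factor near +0.6 reproduces the IPCC’s ~3ºC best estimate, while a strongly negative factor yields well under 1ºC per doubling; that is the arithmetic underlying the competing claims discussed below.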

In his speech at the June 2 Heartland Institute conference, Professor Lindzen summarized his research on climate sensitivity, which has since been accepted for publication by Geophysical Research Letters. Lindzen argues that climate feedbacks and sensitivity can be inferred from observed changes in OLR and SWR following observed changes in sea-surface temperatures. For fluctuations in OLR and SWR, Lindzen and his colleagues used the 16-year record (1985-1999) from the Earth Radiation Budget Experiment (ERBE), as corrected for altitude variations associated with satellite orbital decay. For sea surface temperatures, they used data from the National Centers for Environmental Prediction. For climate model simulations, they used 11 IPCC models forced with the observed sea-surface temperature changes.

The results are striking. All 11 IPCC models show positive feedback, “while ERBE unambiguously shows a strong negative feedback.”

[Figure: lindzen-erbe-vs-models]

Figure description: ERBE data show increasing top-of-the-atmosphere radiative flux (OLR plus reflected SWR) as sea surface temperatures rise whereas models forecast decreasing radiative flux. Source: Lindzen and Choi 2009.

The ERBE data indicate that the sensitivity of the actual climate system “is narrowly constrained to about 0.5ºC,” Lindzen estimates. “This analysis,” says Lindzen in a recent commentary, “makes clear that even when all models agree, they can be wrong, and that this is the situation for the all important question of climate sensitivity.”

[Figure: erbe-v-model-sensitivity]
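To make the logic of the ERBE comparison concrete, here is a minimal sketch of the kind of flux-versus-temperature regression Lindzen describes: regress changes in outgoing top-of-atmosphere radiation against changes in sea surface temperature, then convert the slope into an implied sensitivity. The data points and simplified relations below are illustrative assumptions, not a reproduction of the Lindzen and Choi analysis.

```python
import numpy as np

ZERO_FEEDBACK_RESPONSE = 3.3  # W/m^2 per K, approximate no-feedback value
FORCING_2XCO2 = 3.7           # W/m^2 for a doubling of CO2 (IPCC AR4)

# Made-up changes in sea surface temperature (K) and outgoing top-of-atmosphere
# radiative flux (W/m^2). Illustrative only; this is not ERBE data.
delta_sst = np.array([-0.3, -0.2, -0.1, 0.1, 0.2, 0.3, 0.4])
delta_flux = np.array([-2.0, -1.2, -0.7, 0.6, 1.3, 1.9, 2.6])

slope = np.polyfit(delta_sst, delta_flux, 1)[0]   # W/m^2 per K of surface warming
sensitivity = FORCING_2XCO2 / slope               # implied deg C per CO2 doubling
feedback = 1.0 - slope / ZERO_FEEDBACK_RESPONSE   # > 0 means net positive feedback

print(f"regression slope    : {slope:.1f} W/m^2 per K")
print(f"implied sensitivity : {sensitivity:.1f} C per doubling")
print(f"implied feedback f  : {feedback:+.2f}")
```

A slope steeper than the roughly 3.3 W/m^2 per K no-feedback response (more radiation escaping per degree of warming) implies net negative feedback and low sensitivity, which is the reading Lindzen gives the ERBE data; a flatter slope implies positive feedback and higher sensitivity, which is what the models show.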

At the Heartland Institute’s Second International Conference on Climate Change (March 2009), Dr. William Gray of Colorado State University presented satellite-based research that may explain the low climate sensitivity the Lindzen team infers from the ERBE data.

The IPCC climate models assume that CO2-induced warming significantly increases upper troposphere clouds and water vapor, trapping still more OLR that would otherwise escape to space. Most of the projected warming in the models comes from this positive water vapor/cloud feedback, not from the CO2. Satellite observations do not support this hypothesis, Gray contends:

Observations of upper tropospheric water vapor over the last 3-4 decades from the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis data and the International Satellite Cloud Climatology Project (ISCCP) data show that upper tropospheric water vapor appears to undergo a small decrease while Outgoing Longwave Radiation (OLR) undergoes a small increase. This is the opposite of what has been programmed into the GCMs [General Circulation Models] due to water vapor feedback.

The figure below comes from the NCEP/NCAR reanalysis of upper troposphere water vapor and OLR.

[Figure: reanalysis-olr-and-water-vapor]

Figure description: NCEP/NCAR reanalysis of standardized anomalies of 400 mb (~7.5 km altitude) water vapor content (i.e., specific humidity, in blue) and OLR (in red) from 1950 to 2008. Note the downward trend in moisture and upward trend in OLR.

Gray’s paper deals with water vapor in the upper troposphere. What about high-altitude cirrus clouds, which climate models also predict will increase and trap more OLR as GHG concentrations increase?

Spencer et al. (2007), the study Dr. Spencer spoke about in today’s Policy Peril film clip, found a strong negative cirrus cloud feedback mechanism in the tropical troposphere. Instead of steadily building up as the tropical oceans warm, cirrus cloud cover suddenly contracts, allowing more OLR to escape. As mentioned, Spencer estimates that if this mechanism operates on decadal time scales, it would reduce model estimates of global warming by 75%.

A 2008 study by Spencer and colleague William D. Braswell examines the issue of climate feedbacks related to low-level clouds. Lower troposphere clouds tend to cool the Earth by reflecting incoming SWR. Observations indicate that warmer years have less cloud cover than cooler years. Modelers have interpreted this correlation as a positive feedback effect in which warming reduces low-level cloud cover, which then produces more warming.

Spencer and Braswell found that climate modelers could be mixing up cause and effect. Random variations in cloudiness can cause substantial decadal variations in ocean temperatures. So it is equally plausible that the causality runs the other way, and increases in sea-surface temperature are an effect of natural cloud variations. If so, then climate models forecast too much warming. For more on this, visit Spencer’s Web site.

In a study now in peer review for possible publication in the Journal of Geophysical Research, Spencer and colleagues analyzed 7.5 years of NASA satellite data and “discovered,” he reports on his Web site, “that, when the effects of clouds-causing-temperature-change is accounted for, cloud feedbacks in the real climate system are strongly negative.” “In fact,” he continues, “the resulting net negative feedback was so strong that, if it exists on the long time scales associated with global warming, it would result in only 0.6ºC of warming by late in this century.”

In related ongoing satellite research, Spencer finds new evidence that “most” warming of the past century “could be the result of a natural cycle in cloud cover forced by a well-known mode of natural climate variability: the Pacific Decadal Oscillation (PDO).”

Whether or not the PDO proves to be a major player in climate change, Spencer has identified a potentially serious error in all IPCC modeling efforts:

Even though they never say so, the IPCC has simply assumed that the average cloud cover of the Earth does not change, century after century. This is a totally arbitrary assumption, and given the chaotic variations that the ocean and atmosphere circulations are capable of, it is probably wrong. Little more than a 1% change in cloud cover up or down, and sustained over many decades, could cause events such as the Medieval Warm Period or the Little Ice Age.

Finally, recent temperature history also suggests that most climate models are too “hot.” Dr. Patrick Michaels touched on this topic in Policy Peril (albeit not in today’s excerpt).

Carbon dioxide emissions and concentrations are increasing at an accelerating rate (Canadell, J.G. et al. 2008). Yet, there has been no net warming since 2001 and no year was as warm as 1998.

[Figure: global-temperature-past-decade]

Figure description: Observed monthly global temperature anomalies, January 2001 through April 2009, as compiled by the Climatic Research Unit. Source: Paul C. Knappenberger.

Paul C. Knappenberger (“Chip” to his friends) quite reasonably wonders, “[H]ow long a period of no warming can be tolerated before the forecasts of the total warming by century’s end have to be lowered?” After all, he continues, “We’re already into the ninth year of the 100-year forecast and we have no global warming to speak of.” It is instructive to compare these data with climate model projections.

A good place to start is with the climate model projections that NASA scientist James Hansen presented in his 1988 congressional testimony, which launched the modern global warming movement.

The figure below, from congressional testimony by Dr. John Christy, a colleague of Roy Spencer at the University of Alabama in Huntsville, shows how Hansen’s model and reality diverge.

[Figure: hansen-models-vs-reality]

Figure description: The red, orange, and purple lines are Hansen’s model forecasts of global temperatures under different emission scenarios. The green and blue lines are actual temperatures from two independent satellite records. Source: John Christy.

“All model projections show high sensitivity to CO2 while the actual atmosphere does not,” Christy notes. “It is noteworthy,” he adds, “that the model projection for drastic CO2 cuts still overshot the observations. This would be considered a failed hypothesis test for the models from 1988.”

What about the models used by the IPCC in its 2007 Fourth Assessment Report (AR4)? How well are they replicating global temperatures?

[Figure: ipcc-models-vs-recent-temperatures]

This figure, also from Dr. Christy’s testimony, is adapted from Dr. Patrick Michaels’s testimony of February 12, 2009. The red and orange lines show the upper and lower bounds of the range containing 95% of the global temperature trends calculated by 21 IPCC AR4 models for multi-year segments ending in 2020. The blue and green lines show observed temperatures ending in 2008 from satellite (University of Alabama in Huntsville) and surface (U.K. Hadley Center for Climate Change) records.

Christy comments:

The two main points here are (1) the observations are much cooler than the mid-range of the model spread and are at the minimum of the model simulations and (2) the satellite adjustment for surface comparisons is exceptionally good. The implication of (1) is that the best estimates of the IPCC models are too warm, or that they are too sensitive to CO2 emissions.

Christy illustrates this another way in his comment on EPA’s endangerment proposal.

[Figure: christy-models-standard-error]

Figure description: Mean and standard error of 22 IPCC AR4 model temperature projections in the mid-range (A1B) emissions scenario. From 1979 to 2008, the mean projection of the models is a warming of 0.22ºC per decade. HADCRUT3v (green) is a surface dataset; UAH (blue) and RSS (purple) are satellite datasets.

Christy comments:

. . . even with these likely spurious warming effects in HADCRUT3v and RSS, the mean model trends are still significantly warmer than the observations at all time scales examined here. Thus, the model mean sensitivity, a quantity utilized by the IPCC as about 2.6ºC per doubled CO2, is essentially contradicted in these comparisons.

Michaels, in his testimony, shows that if year 2008 temperatures persist through 2009, then the observed temperature trend will fall below the 95% confidence range of model projections. In other words, the models will have less than a 5% probability of being correct.

[Figure: ipcc-models-vs-temperatures-through-2009]
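For concreteness, here is a minimal sketch of the consistency test Christy and Michaels describe: compare an observed trend against the spread of trends from a model ensemble and ask whether it falls outside the range containing 95% of the model runs. All of the trend values below are illustrative placeholders, not the actual IPCC AR4 ensemble or the HadCRUT/UAH observations.

```python
import numpy as np

# Hypothetical trends (deg C/decade) from a 22-run model ensemble and a
# hypothetical observed trend. Placeholders, not the real AR4 or observed values.
model_trends = np.array([0.18, 0.20, 0.21, 0.22, 0.22, 0.23, 0.24, 0.25,
                         0.19, 0.21, 0.23, 0.26, 0.17, 0.24, 0.22, 0.20,
                         0.25, 0.23, 0.21, 0.19, 0.24, 0.22])
observed_trend = 0.13

low, high = np.percentile(model_trends, [2.5, 97.5])          # 95% model envelope
mean = model_trends.mean()
sem = model_trends.std(ddof=1) / np.sqrt(model_trends.size)   # std. error of the mean

verdict = "outside" if not (low <= observed_trend <= high) else "inside"
print(f"model mean trend   : {mean:.2f} +/- {2 * sem:.2f} C/decade (2 SE)")
print(f"95% model envelope : {low:.2f} to {high:.2f} C/decade")
print(f"observed trend     : {observed_trend:.2f} C/decade ({verdict} the envelope)")
```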

Although the IPCC AR4 models have not failed yet, they are, in Michaels’s words, “in the process of failing,” and the longer the current temperature regime persists, the worse the models will perform.

Conclusion

The climate science debate is not “over.” In fact, it is just starting to get very, very interesting. All the basic issues–detection, attribution, and sensitivity–are unsettled and more so today than at any time in the past decade.

A final thought–anyone who wants further convincing that the debate is not over should read the marvelous NIPCC report. On a wide range of issues (nine main topics and 60 sub-topics), the report demonstrates that the scientific literature allows, and even favors, reasonable alternative assessments to those presented by the IPCC.

P.S. Previous posts in this series are available below:

  • Policy Peril: Looking for an antidote to An Inconvenient Truth? Your search is over
  • Policy Peril Segment 1: Heat Waves
  • Policy Peril Segment 2: Air Pollution
  • Policy Peril Segment 3: Hurricanes
  • Policy Peril Segment 4: Sea-Level Rise