August 2009

What a global warming alarmist beast the Energy Foundation is. For example, according to its 333-page (thanks to hundreds of grant awards to a seemingly infinite dependency class of environmentalist nonprofits) tax return for 2007 (the most recent available on Guidestar), EF has a bottomless well of funds to draw from: $68,907,029 in revenues (including $1.36 million in investment income); $53,600,903 in expenses — heck, they’re so rich, they even gave the Rockefellers money. Take that, big oil!

So how does EF get its money? They ‘splain:

Current Energy Foundation partners are: Cinco Hermanos, ClimateWorks Foundation, The Doris Duke Charitable Foundation, The Grousbeck Family Foundation, The William and Flora Hewlett Foundation, The Kresge Foundation, The McKnight Foundation, The Mertz Gilmore Foundation, The Cynthia & George Mitchell Foundation, The David and Lucile Packard Foundation, The Pisces Foundation, The Schmidt Family Foundation, The Simons Foundation, The Sea Change Foundation, and The TOSA Foundation.

The top four givers to EF for 2007 were the William and Flora Hewlett Foundation ($21,485,800), the Doris Duke Charitable Foundation ($20,727,743), the David and Lucile Packard Foundation ($7,050,000), and The TOSA Foundation ($4,250,000). Consider that when you make your next computer printer purchase.

While it’s usually somewhat easy to find out whose money is going where, it’s less so when you try to find out how they conduct their business. However, a Microsoft Word document I obtained, which originated with EF’s climate program officer David Tuft, offers some insight into their hiring approach: what they look for in a consultant and what they want that consultant to do. The source who provided it to me said this document, a job description for an EF consultant in Virginia, was being shopped around to moderate or Republican-leaning consultants:

Specifically, we are looking for someone who has a deep understanding of the political landscape in Virginia, has an understanding of energy and climate issues, and can assist in developing an Energy Foundation funding plan for educating key decisionmakers.

“Educating” seems innocent enough; an EF propaganda campaign does not.

A strategic plan would identify levers for advancing the dialogue around climate policy including: issues specific to the state or district, economic and other analyses to help build the case; messengers and compelling messengers; economic and other stakeholders who would be influential (i.e. major industries, renewable energy enterprises, agriculture, forestry, electric coops, national security experts and the faith community).  After devising a plan, the consultant would recommend a plan for implementation.

Levers?

For example, the strategy might include the need to fund a specific jobs study to be performed by a UVA economist that shows how a federal carbon cap would create jobs or the need to build a coalition of businesses around the state that would support a federal carbon cap and identify who has the ability to bring these people together.

What a shock — part of the job is to track down accredited-but-pliable researchers to provide the results EF wants so as to advance their agenda. Therefore any research or recommendations they have paid for — from the Western Climate Initiative, to the Midwestern Greenhouse Gas Accord, to the Regional Greenhouse Gas Initiative, to anything else they fund — should be viewed for the garbage that it is and thrown in the trashcan.

When it comes to understanding climate change, the El Nino/Southern Oscillation (ENSO) is one of the least understood but most important aspects of the climate system. Dynamics related to ENSO, like the Madden-Julian Oscillation, Pacific Decadal Oscillation, and Meridional Overturning Circulation, dominate the Indian, Pacific, and North Atlantic Oceans. With that said, a recent paper published in the Journal of Geophysical Research titled “Influence of the Southern Oscillation on tropospheric temperature” represents a misinterpretation of ENSO.

The new paper by McLean, De Freitas, and Carter blames global warming on ENSO and has garnered a lot of attention and enthusiasm from the skeptical community.  Their conclusion may indeed hold water, but not for the reasons they claim.

ENSO describes a pattern of sea surface temperatures, pressure, and wind in the Equatorial Pacific. During El Nino events, the pressure differential between the East and West Pacific falls, trade winds slow, and warm water sloshes upward and eastward until it appears in the temperature record in the Cold Tongue of the Pacific. During La Nina events, the opposite happens.

ENSO is classically defined as heat redistribution; no heat enters the system during El Nino events or leaves the system during La Nina events. The heat is merely moved from the subsurface (hidden from the temperature record) to the surface (included in the temperature record). Behind all the math used in the paper, the fact remains that ENSO has been falling since 1976, while temperatures have risen. If they accept the classical definition of ENSO as non-radiative (which one can only assume they do), then they cannot blame ENSO for global warming.

For this reason, the paper’s conclusions are not supported by the analysis it contains. All the authors establish is that ENSO drives global temperatures over the short term; they do nothing to show that it explains the trend. Unless they challenge the conventional view of ENSO as non-radiative, their case holds no water. With that said, there is significant evidence that ENSO may in fact be radiative, particularly the 1976/7, 1986/7, and 1997/8 events. For more information on how the data demonstrate that ENSO is a radiative oscillation, visit my blog, here.
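A back-of-the-envelope illustration of that distinction (my own toy numbers, nothing from the paper): a roughly zero-mean oscillation can dominate year-to-year variability and still contribute essentially nothing to the long-term trend.

```python
# Toy illustration: a zero-mean "ENSO-like" oscillation can explain much of
# the year-to-year wiggle in a temperature series yet supply none of its trend.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1979, 2009)                    # 30 years of annual values

enso = np.sin(2 * np.pi * years / 4.5)           # stand-in ENSO index, roughly zero mean
trend = 0.013 * (years - years[0])               # 0.13 C/decade background trend
temps = trend + 0.15 * enso + rng.normal(0, 0.05, years.size)

# Short term: detrended temperature correlates strongly with the oscillation...
detrended = temps - np.polyval(np.polyfit(years, temps, 1), years)
print("correlation with ENSO index (detrended):",
      round(np.corrcoef(detrended, enso)[0, 1], 2))

# ...but the oscillation itself has essentially no trend to offer.
print("trend of ENSO contribution (C/decade):",
      round(np.polyfit(years, 0.15 * enso, 1)[0] * 10, 3))
print("trend of full series       (C/decade):",
      round(np.polyfit(years, temps, 1)[0] * 10, 3))
```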

In conclusion, McLean, De Freitas, and Carter’s research is a reminder that the alarmists have no monopoly on papers that prove less than the authors claim; the alarmists just publish most of them.

Congress plans to spend $200 million on luxury jets for liberal House leaders, even though it earlier denounced the automakers for having corporate jets, and even though the luxury jets the House plans to buy emit vast amounts of pollution and greenhouse gases. Now they’ll be able to go on foreign junkets and hob-nob with wealthy lobbyists in style.

As Victor Davis Hanson notes, this excess and hypocrisy is typical of a House Speaker “Pelosi who rails about carbon footprints, but wants the biggest private-use jet she can get,” tax-raising liberal Congressmen like “Dodd and Rangel, who skip out on their own taxes, and find all sorts of immoral ways to finance and maintain second and third” homes, and Obama Administration nominees like Treasury Secretary Tim Geithner and HHS nominee Daschle “who favor more taxes — if they can avoid taxes, or have tax-free limo service.”

The know-nothings in Congress are poised to waste billions more on the cash-for-clunkers program, even though most Americans oppose it. It will have no overall environmental benefit, note CBS News and Fox News commentaries, even though its sponsors falsely claimed it would.

The clunkers program was slated to cost a billion dollars for the entire year, but it ended up running out of money after just 5 days. (Now, these same geniuses claim they can overhaul the health-care system for just a trillion dollars in increased federal spending. Don’t believe them: it will raise taxes and harm the insured. Health care bills always cost more than predicted.)

The cash-for-clunkers program is monumentally wasteful and stupid, destroying perfectly good automobiles, cutting off the supply of cheap used cars needed by poor people, and rewarding people who bought gas guzzlers rather than fuel-efficient vehicles.

It also provides surprisingly little benefit to the Detroit automakers that it was intended to bail out, who have already received more than $70 billion from taxpayers, and it wipes out jobs at used-car and parts businesses.

Congressional leaders and Obama also back a huge cap-and-trade carbon tax that would do little to protect the environment, while costing the economy trillions. The cap-and-trade tax was pushed through the House before the text of the bill even became available. The bill was over 1090 pages long and contained special interest giveaways to a legion of big corporations and their lobbyists. At the last minute, 300 more pages were added to the bill that few in Congress had even read, and had to be manually inserted into the existing 1000 pages after the bill was passed, based on guesses about where those pages would fit in. Thus, the bill did not even really exist at the time it was passed.

In 2008, Obama privately admitted to a San Francisco Chronicle reporter that his cap-and-trade carbon tax would cause people’s electric bills to “skyrocket.” The cap-and-trade tax will do little to cut greenhouse gas emissions, since it contains so many special interest giveaways and environmentally-destructive provisions like protections for ethanol, which promotes soil erosion and deforestation. Meanwhile, Obama has thwarted more use of nuclear energy, which reduces greenhouse gas emissions, by blocking use of the Yucca Mountain nuclear-waste disposal site after billions of dollars in taxpayer money had already been spent developing it.

The House has already passed $2 billion in additional spending on the wasteful cash-for-clunkers program, adding to more than $70 billion in wasteful auto bailouts. Senate Majority Leader Harry Reid (D-NV) wants to ram more spending on clunkers through the Senate before rising public opposition makes that impossible — the same way Congressional leaders rammed through the $800 billion stimulus package before the public learned what was in it.

Buried in the stimulus package were provisions that ended welfare reform. The stimulus package is now projected to cut the size of the economy “in the long run.” The Administration claimed it would deliver a short-run “jolt” that would quickly lift the economy, but unemployment rose rapidly after its passage, and the package has actually destroyed thousands of jobs in America’s export sector, as well as subsidizing welfare and waste.

Today’s post in my series of commentaries on excerpts from CEI’s film, Policy Peril: Why Global Warming Policies Are More Dangerous Than Global Warming Itself, challenges the Gorethodox dogma that the science debate on global warming is “over.”

There are three basic issues in the climate change science debate:

  • Detection – Has the world warmed, and if so, by how much?
  • Attribution – How much of the observed warming (especially since the mid-1970s) is due to increases in atmospheric greenhouse gas concentrations?
  • Sensitivity – How much additional warming should we expect from continuing increases in greenhouse gas concentrations?

Despite what you’ve heard over and over again, these basic issues are unsettled, and more so now than at any time in the past decade. The science debate is not “over.” Reports of the death of climate skepticism have been greatly exaggerated.

Because of time constraints (Policy Peril runs under 40 minutes), the film briefly explores only the most important of the three basic issues: climate sensitivity. Today’s clip comes from that part of the film: an interview with University of Alabama in Huntsville atmospheric scientist Dr. Roy Spencer. To watch the Spencer interview, click here. To watch the entire movie, click here.

Here’s how this post is organized. First, I’ll reproduce the text of Spencer’s interview. Then, I’ll review some recent research bearing on the three fundamental science issues: detection, attribution, and sensitivity.

Text of today’s film clip:

Narrator: All the IPCC models assume that a CO2-induced warming will produce more high-altitude cirrus clouds, which then trap even more heat in the atmosphere. This is what’s called a positive climate feedback. Roy Spencer and his colleagues use satellites to study cirrus cloud behavior.

Dr. Roy Spencer (University of Alabama in Huntsville): Last August, August of 2007, we published research which showed from a whole bunch of satellite data that when the tropical atmosphere heats up–there are these periods when the atmosphere heats up from more rain activity or cools down from less rain activity–that when it heats up, the skies actually open up. The cirrus clouds that are up high, in the troposphere, in the upper atmosphere, open up and let more cooling infrared radiation escape to space. And it was a very strong effect.

Narrator: Spencer says that if climate models incorporated the negative feedback his team discovered, the models might forecast 75% less warming.

This is definitely not the Al Gore view of climate sensitivity. In fact, in An Inconvenient Truth (p. 67), Gore suggests we could get “three times as much” warming by mid-century as has occurred since the “depth of the last ice age.” That would mean a warming of 10ºC-12ºC by mid-century! Gore’s implicit warming forecast goes way beyond the IPCC best-estimate forecast range of 1.8ºC to 4.0ºC (IPCC WGI AR4, Summary for Policymakers, p. 13). As we’ll see below, several strands of evidence suggest that the IPCC models are also too “hot.”

Detection

The world has warmed overall during the past 130 years, as evidenced by melting glaciers, longer growing seasons, and both proxy and instrumental data. However, the main era of “anthropogenic” global warming supposedly began in the mid-1970s, and ongoing research by retired meteorologist Anthony Watts leaves no doubt that in recent decades, the U.S. surface temperature record–reputed to be the best in the world–is unreliable and riddled with false warming biases.

Watts and a team of more than 650 volunteers have visually inspected and photographically documented 1,003, or 82%, of the 1,221 climate monitoring stations overseen by the National Weather Service. In a report summarizing an earlier phase of the team’s investigation (a survey of 860+ stations), Watts says, “We were shocked by what we found.” He explains:

We found stations located next to exhaust fans of air conditioning units, surrounded by asphalt parking lots and roads, on blistering-hot rooftops, and near sidewalks and buildings that absorb and radiate heat. We found 68 stations located at wastewater treatment plants, where the process of waste digestion causes temperatures to be higher than in surrounding areas.

In fact, we found that 89 percent of the stations–nearly 9 of every 10–fail to meet the National Weather Service’s own siting requirements that stations must be 30 meters (about 100 feet) or more away from an artificial heating or radiating/reflecting heat source. In other words, 9 of every 10 stations are likely reporting higher or rising temperatures because they are badly sited.

“It gets worse,” Watts continues:

We observed that changes in the technology of temperature stations over time also have caused them to report a false warming trend. We found gaps in the data record that were filled in with data from nearby sites, a practice that propagates and compounds errors. We found adjustments to the data by both NOAA and another government agency, NASA, cause recent temperatures to look even higher.

How big a problem is this? According to Watts, “The errors in the record exceed by a wide margin the purported rise in temperature of 0.7ºC (about 1.2ºF) during the twentieth century.” Based on analysis of 948 stations rated as of May 31, 2009, Watts estimates that 22% of stations have an expected error of 1ºC, 61% have an expected error of 2ºC, and 8% have an expected error of 5ºC.

[Figure: watts_fig23]

Watts concludes that, “this record should not be cited as evidence of any trend in temperature that may have occurred across the U.S. during the past century.” He further concludes: “Since the U.S. record is thought to be ‘the best in the world,’ it follows that the global database is likely similarly compromised and unreliable.”

A related issue is the influence of urban heat islands on long-term temperature records. Climate Change Reconsidered, a report by the Nongovernmental International Panel on Climate Change (NIPCC), written by Drs. Craig Idso and S. Fred Singer with 35 contributors and reviewers, reviews more than 40 studies on urban heat islands. For example, a study by Oke (1973) of the urban heat island strength of 10 settlements in the St. Lawrence Lowlands of Canada found that a population as small as 1,000 people could generate a heat island effect of 2ºC-2.5ºC. From this study and the others reviewed, the NIPCC concludes:

It appears almost certain that surface-based temperature histories of the globe contain a significant warming bias introduced by insufficient corrections for the non-greenhouse-gas-induced urban heat island effect. Furthermore, it may well be impossible to make proper corrections for the deficiency, as the urban heat island of even small towns dwarfs any concomitant augmented greenhouse effect that may be present [p. 95; emphasis in original].

In a comment submitted to EPA regarding its proposed endangerment finding, University of Alabama in Huntsville (UAH) atmospheric scientist John Christy notes two additional reasons to conclude that the IPCC surface data records exaggerate warming trends:

As a culmination of several papers and years of work, Christy et al. (2009) demonstrates that popular surface datasets overstate the warming that is assumed to be greenhouse related for two reasons. First, these datasets use only stations that are electronically (i.e. easily) available, which means the unused, vast majority of stations (usually more rural and representative of actual trends but harder to find) are not included. Secondly, these popular datasets use the daily mean surface temperature (TMean) which is the average of the daytime high (TMax) and nighttime low (TMin). In this study (and its predecessors, Christy 2002, Christy et al. 2006, Pielke Sr. et al. 2008, Walters et al. 2007 and others) we show that TMin is seriously impacted by surface development, and thus its rise is not an indicator of greenhouse gas forcing. Some have called this the Urban Heat Island effect, but, as described in Christy et al. 2009, it is much more than this and encompasses any development of the surface (e.g. irrigated agriculture).

For example, the UK Hadley Center, relying on two electronic surface stations, computed a TMax temperature trend in East Africa of 0.14ºC per decade during 1905-2004. Christy, using data from 45 stations, found a trend of only 0.02ºC per decade.

[Figure: christy-uah-v-hadcrut3]

In California, Christy found that the only significant warming trend is for TMin in the irrigated San Joaquin Valley. Note that in the non-irrigated Sierra Nevada, where models project that greenhouse gas-induced warming should occur, there is actually a decreasing temperature trend.

[Figure: christy-tmin-ca]

Obviously, temperature data are the starting point of any analysis of global warming. But if we can’t trust the U.S. and IPCC temperature records, how do we know how much global warming has actually occurred?

Satellite observations are not influenced by heat islands and irrigation, or subject to the quality control problems detailed by Watts. Moreover, satellite records tally well with weather balloon observations–an independent database. So maybe detection should be based solely on satellite data, which do show some warming over the past 30 years. However, the “debate is over” crowd is unlikely to embrace this solution.  The satellite record shows a relatively slow rate of warming–about 0.13ºC per decade–hence a relatively insensitive climate.

[Figure: uah-temperature-anomalies-jan-1979-june-20093]

Moreover, as can be seen in the above chart of the University of Alabama-Huntsville (UAH) satellite record, some of the 0.13ºC/decade “trend” comes from the 1998 El Nino warming pulse. Remove 1998, and the 30-year satellite record trend drops to 0.12ºC/decade.
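For readers who want to reproduce this sort of calculation, here is a minimal sketch of how a decadal trend is computed by ordinary least squares and how excluding a single hot year shifts it. The anomalies below are synthetic stand-ins, not the actual UAH file.

```python
# Minimal sketch: least-squares decadal trend from monthly anomalies, with and
# without calendar year 1998. Synthetic data, not the real satellite record.
import numpy as np

rng = np.random.default_rng(1)
months = np.arange(1979, 2009.5, 1 / 12.0)          # Jan 1979 through mid-2009
anoms = 0.012 * (months - 1979) + rng.normal(0, 0.1, months.size)
anoms[(months >= 1998) & (months < 1999)] += 0.4     # crude 1998 El Nino spike

def decadal_trend(x, y):
    """Least-squares slope, converted to degrees C per decade."""
    return np.polyfit(x, y, 1)[0] * 10

keep = (months < 1998) | (months >= 1999)            # drop calendar year 1998
print("trend, all months    :", round(decadal_trend(months, anoms), 3), "C/decade")
print("trend, 1998 excluded :", round(decadal_trend(months[keep], anoms[keep]), 3), "C/decade")
```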

Attribution

The IPCC, the leading spokesman for the alleged scientific consensus, claims that, “Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.” How does the IPCC know this? The IPCC offers three main reasons.

First, according to the IPCC, “Paleoclimate reconstructions show that the second half of the 20th century was likely the warmest 50-year period in the Northern hemisphere in 1300 years” (IPCC AR4, WGI, Chapt. 9, p. 702). The warmth of recent decades coincided with a rapid increase in GHG concentrations. Therefore, the IPCC reasons, most of the recent warming is likely due to anthropogenic GHG emissions.

This argument is unconvincing if the warming of recent decades is not unusual or unprecedented in the past 1300 years. As it happens, numerous studies indicate that the Medieval Warm Period (MWP)–roughly the period from AD 800 to 1300, with peak warmth occurring about AD 1050–was as warm as or warmer than the Current Warm Period (CWP).

The Center for the Study of Carbon Dioxide and Global Change has analyzed more than 200 peer-reviewed MWP studies produced by more than 660 individual scientists working in 385 separate institutions from 40 countries. The Center divides these studies into three categories–those with quantitative data enabling one to infer the degree to which the peak of the MWP differs from the peak of the CWP (Level 1), those with qualitative data enabling one to infer which period was warmer (Level 2), although not by how much, and those with data enabling one to infer the existence of a MWP in the region studied (Level 3). An interactive map showing the sites of these studies is available at CO2Science.org.

Only a few Level 1 studies determined the MWP to have been cooler than the CWP; the vast majority indicate a warmer MWP. On average, the studies indicate that the MWP was 1.01ºC warmer than the CWP.

[Figure: mwpquantitative]

Figure Description: The distribution, in 0.5ºC increments, of Level 1 studies that allow one to identify the degree to which peak MWP temperatures either exceeded (positive values, red) or fell short of (negative values, blue) peak CWP temperatures.

Similarly, the vast majority of Level 2 studies indicate a warmer MWP:

[Figure: mwpqualitative]

Figure Description: The distribution of Level 2 studies that allow one to determine whether peak MWP temperatures were warmer than (red), equivalent to (green), or cooler than (blue), peak CWP temperatures.

The IPCC’s second main reason for attributing most recent warming to the increase in GHG concentrations is that climate models “cannot reproduce the rapid warming observed in recent decades when they only take into account variations in solar output and volcanic activity. However . . . models are able to simulate observed 20th century changes when they include all of the most important external factors, including human influences from sources such as greenhouse gases and natural external factors” (IPCC, AR4, WGI, Chapt. 9, p. 702).

This would be decisive if today’s models accurately simulate all important modes of natural variability. In fact, models do not accurately simulate the behavior of clouds and ocean cycles. They may also ignore important interactions between the Sun, cosmic rays, and cloud formation.

Richard Lindzen of MIT spoke to this point at the Heartland Institute’s recent (June 2, 2009) Third International Conference on Climate Change:

What was done [by the IPCC], was to take a large number of models that could not reasonably simulate known patterns of natural behavior (such as ENSO, the Pacific Decadal Oscillation, the Atlantic Multi-Decadal Oscillation), claim that such models nonetheless adequately depicted natural internal climate variability, and use the fact that models could not replicate the warming episode from the mid seventies through the mid nineties, to argue that forcing was necessary and that the forcing must have been due to man. The argument makes arguments in support of intelligent design seem rigorous by comparison.

“Fingerprint” studies are the third basis on which the IPCC attributes most recent warming to anthropogenic greenhouse gases. Climate models project a specific pattern of warming through the vertical profile of the atmosphere–a greenhouse “fingerprint.” If the observed warming pattern matches the model-projected fingerprint, that would be strong evidence that recent warming is anthropogenic. Conversely, notes the NIPCC, “A mismatch would argue strongly against any significant contribution from greenhouse gas (GHG) forcing and support the conclusion that the observed warming is mostly of natural origin” (NIPCC, p. 106).

Douglass et al. (2007) compared model-projected and observed warming patterns in the tropical troposphere. The observed pattern is based on three compilations of surface temperature records, four balloon-based records of the surface and lower troposphere, and three satellite-based records of various atmospheric layers–10 independent datasets in all.

“While all greenhouse models show an increasing warming trend with altitude, peaking around 10 km at roughly two times the surface value,” observes the NIPCC, “the temperature data from balloons give the opposite result; no increasing warming, but rather a slight cooling with altitude” (p. 107). See the figures below.

[Figure: hot-spot]

The mismatch between the model-predicted greenhouse fingerprint and the observed pattern is profound. As the Douglass team explains: “Model results and observed temperature trends are in disagreement in most of the tropical troposphere, being separated by more than twice the uncertainty of the model mean. In layers near 5 km, the modeled trend is 100% to 300% higher than observed, and above 8 km, modeled and observed trends have opposite signs.”

[Figure: douglass]

Figure description: Temperature trends for the satellite era (ºC/decade). HadCRUT, GHCN and GISS are compilations of surface temperature observations. IGRA, RATPAC, HadAT2, and RAOBCORE are balloon-based observations of surface and lower troposphere. UAH, RSS, UMD are satellite-based data for various layers of the atmosphere. The 22-model average comes from an ensemble of 22 model simulations from the most widely used models worldwide. The red lines are the +2 and -2 standard errors of the mean from the 22 models. Source: Douglass et al. 2007.

The NIPCC concludes that the mismatch of observed and model-calculated fingerprints “clearly falsifies the hypothesis of anthropogenic global warming (AGW)” (p. 108). I would put the state of affairs more cautiously. In view of (1) significant evidence that the MWP was as warm as or warmer than the CWP, (2) the inability of climate models to simulate important modes of natural variability, and (3) the failure of observations to confirm a greenhouse fingerprint in the tropical troposphere, the IPCC claim that “most” recent warming is “very likely” anthropogenic should be considered a boast rather than a balanced assessment of the evidence.

Climate Sensitivity

The most important unresolved scientific issue in the global warming debate is how sensitive (reactive) the climate is to increases in GHG concentrations.

Climate sensitivity is typically defined as the global average surface warming following a doubling of carbon dioxide (CO2) concentrations above pre-industrial levels. The IPCC says a doubling is likely to produce warming in the range of 2ºC to 4.5ºC, with a most likely value of about 3ºC (IPCC, AR4, WGI, Chapt. 10, p. 749). The IPCC presents a range rather than a specific value because of uncertainty regarding the strength of the relevant feedbacks.

In a hypothetical climate with no feedbacks, positive or negative, a CO2 doubling would produce 1.2ºC of warming (IPCC, AR4, WGI, Chapt. 8, p. 631). In most climate models, the dominant feedbacks are positive, meaning that the warmth from rising GHG levels causes other changes (in water vapor, clouds, or surface reflectivity, for example) that either increase the retention of outgoing long-wave radiation (OLR) or decrease the reflection of incoming short-wave radiation (SWR).
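The arithmetic behind that spread can be sketched with the textbook feedback-amplification relation, ΔT = ΔT0 / (1 − f), where ΔT0 is the no-feedback warming (about 1.2ºC) and f is the net feedback factor. This is my illustration of the standard relation, not the IPCC’s actual ensemble calculation.

```python
# Sketch of the textbook feedback-amplification relation dT = dT0 / (1 - f),
# with dT0 ~ 1.2 C for doubled CO2. Illustrative only; the IPCC ranges come
# from full model ensembles, not this simple formula.
dT0 = 1.2  # no-feedback warming for doubled CO2, degrees C

for f in (-0.5, 0.0, 0.33, 0.5, 0.6):  # net feedback factor (negative = damping)
    dT = dT0 / (1.0 - f)
    print(f"feedback factor {f:+.2f} -> equilibrium warming {dT:.1f} C")
```

Plugging in a modestly negative factor gives less than the no-feedback 1.2ºC, while factors of roughly 0.5 to 0.6 reproduce the 2ºC-3ºC figures in the IPCC range, which is why the sign and strength of the feedbacks are the whole argument.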

In his speech at the June 2 Heartland Institute conference, Professor Lindzen summarized his research on climate sensitivity, which has since been accepted for publication by Geophysical Research Letters. Lindzen argues that climate feedbacks and sensitivity can be inferred from observed changes in OLR and SWR following observed changes in sea-surface temperatures. For fluctuations in OLR and SWR, Lindzen and his colleagues used the 16-year record (1985-1999) from the Earth Radiation Budget Experiment (ERBE), as corrected for altitude variations associated with satellite orbital decay. For sea surface temperatures, they used data from the National Centers for Environmental Prediction. For climate model simulations, they used 11 IPCC models forced with the observed sea-surface temperature changes.

The results are striking. All 11 IPCC models show positive feedback, “while ERBE unambiguously shows a strong negative feedback.”

[Figure: lindzen-erbe-vs-models1]

Figure description: ERBE data show increasing top-of-the-atmosphere radiative flux (OLR plus reflected SWR) as sea surface temperatures rise whereas models forecast decreasing radiative flux. Source: Lindzen and Choi 2009.

The ERBE data indicate that the sensitivity of the actual climate system “is narrowly constrained to about 0.5ºC,” Lindzen estimates. “This analysis,” says Lindzen in a recent commentary, “makes clear that even when all models agree, they can be wrong, and that this is the situation for the all important question of climate sensitivity.”
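The logic of Lindzen’s inference can be sketched in a few lines: regress the change in top-of-the-atmosphere flux on the change in sea-surface temperature to get a feedback parameter (in W/m² per ºC), then divide the canonical 3.7 W/m² forcing of doubled CO2 by that slope. The numbers below are placeholders chosen only to mimic the shape of the result, not the ERBE/NCEP data.

```python
# Sketch of the feedback-parameter inference Lindzen describes: regress
# top-of-atmosphere flux change on sea-surface temperature change, then
# convert the slope (lambda, W/m^2 per C) into a doubled-CO2 sensitivity.
# The arrays below are placeholder numbers, not the ERBE/NCEP data.
import numpy as np

dSST  = np.array([0.10, -0.05, 0.20, 0.15, -0.10, 0.05, 0.25, -0.15])  # C
dFlux = np.array([0.7, -0.3, 1.3, 1.0, -0.8, 0.3, 1.6, -1.1])          # W/m^2, outgoing

lam = np.polyfit(dSST, dFlux, 1)[0]   # larger slope = stronger restoring response
F2x = 3.7                             # canonical forcing for doubled CO2, W/m^2
print(f"lambda ~ {lam:.1f} W/m^2 per C -> sensitivity ~ {F2x / lam:.1f} C per doubling")
```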

[Figure: erbe-v-model-sensitivity4]

At the Heartland Institute’s Second International Conference on Climate Change (March 2009), Dr. William Gray of Colorado State University presented satellite-based research that may explain the low climate sensitivity the Lindzen team infers from the ERBE data.

The IPCC climate models assume that CO2-induced warming significantly increases upper troposphere clouds and water vapor, trapping still more OLR that would otherwise escape to space. Most of the projected warming in the models comes from this positive water vapor/cloud feedback, not from the CO2. Satellite observations do not support this hypothesis, Gray contends:

Observations of upper tropospheric water vapor over the last 3-4 decades from the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis data and the International Satellite Cloud Climatology Project (ISCCP) data show that upper tropospheric water vapor appears to undergo a small decrease while Outgoing Longwave Radiation (OLR) undergoes a small increase. This is the opposite of what has been programmed into the GCMs [General Circulation Models] due to water vapor feedback.

The figure below comes from the NCEP/NCAR reanalysis of upper troposphere water vapor and OLR.

[Figure: reanalysis-olr-and-water-vapor-50]

Figure description: NCEP/NCAR reanalysis of standardized anomalies of water vapor content (i.e. specific humidity — in blue) at 400 mb (~7.5 km altitude) and OLR (in red) from 1950 to 2008. Note the downward trend in moisture and upward trend in OLR.

Gray’s paper deals with water vapor in the upper troposphere. What about high-altitude cirrus clouds, which climate models also predict will increase and trap more OLR as GHG concentrations increase?

Spencer et al. (2007), the study Dr. Spencer spoke about in today’s Policy Peril film clip, found a strong negative cirrus cloud feedback mechanism in the tropical troposphere. Instead of steadily building up as the tropical oceans warm, cirrus cloud cover suddenly contracts, allowing more OLR to escape. As mentioned, Spencer estimates that if this mechanism operates on decadal time scales, it would reduce model estimates of global warming by 75%.

A 2008 study by Spencer and colleague William D. Braswell examines the issue of climate feedbacks related to low-level clouds. Lower troposphere clouds tend to cool the Earth by reflecting incoming SWR. Observations indicate that warmer years have less cloud cover than cooler years. Modelers have interpreted this correlation as a positive feedback effect, in which warming reduces low-level cloud cover, which then produces more warming.

Spencer and Braswell found that climate modelers could be mixing up cause and effect. Random variations in cloudiness can cause substantial decadal variations in ocean temperatures. So it is equally plausible that the causality runs the other way, and increases in sea-surface temperature are an effect of natural cloud variations. If so, then climate models forecast too much warming. For more on this, visit Spencer’s Web site.
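A toy energy-balance simulation (my own sketch, not Spencer and Braswell’s actual model) shows how the cause-and-effect problem biases the diagnosis: when random cloud fluctuations help drive the temperature, a regression of the radiative anomaly on temperature systematically understates the true feedback.

```python
# Toy energy-balance illustration (my own sketch, not Spencer and Braswell's
# model): when radiative noise N (random cloud fluctuations) helps drive
# temperature, regressing the outgoing radiative anomaly on temperature
# tends to under-estimate the true feedback parameter.
import numpy as np

rng = np.random.default_rng(2)
dt, nsteps = 1 / 12.0, 12000       # monthly steps, 1000 years of toy data
C = 10.0                           # effective heat capacity (arbitrary units)
lam_true = 3.0                     # true feedback parameter, W/m^2 per deg C

T = 0.0
temps, outgoing = [], []
for _ in range(nsteps):
    N = rng.normal(0, 0.3)                   # radiative noise (cloud fluctuations)
    S = rng.normal(0, 0.6)                   # non-radiative noise (e.g. ocean mixing)
    T += dt * (N - lam_true * T + S) / C     # simple energy-balance update
    temps.append(T)                          # temperature after this month's noise
    outgoing.append(lam_true * T - N)        # what a satellite-style regression sees

lam_diag = np.polyfit(np.array(temps), np.array(outgoing), 1)[0]
print(f"true feedback      : {lam_true:.2f} W/m^2 per C")
print(f"diagnosed feedback : {lam_diag:.2f} W/m^2 per C (typically biased low)")
```

In this toy setup the diagnosed feedback comes out well below the true value because the cloud noise is correlated with the very temperature it helped create, which is the essence of the mix-up Spencer and Braswell describe.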

In a study now in peer review for possible publication in the Journal of Geophysical Research, Spencer and colleagues analyzed 7.5 years of NASA satellite data and “discovered,” he reports on his Web site, “that, when the effects of clouds-causing-temperature-change is accounted for, cloud feedbacks in the real climate system are strongly negative.” “In fact,” he continues, “the resulting net negative feedback was so strong that, if it exists on the long time scales associated with global warming, it would result in only 0.6ºC of warming by late in this century.”

In related ongoing satellite research, Spencer finds new evidence that “most” warming of the past century “could be the result of a natural cycle in cloud cover forced by a well-known mode of natural climate variability: the Pacific Decadal Oscillation (PDO).”

Whether or not the PDO proves to be a major player in climate change, Spencer has identified a potentially serious error in all IPCC modeling efforts:

Even though they never say so, the IPCC has simply assumed that the average cloud cover of the Earth does not change, century after century. This is a totally arbitrary assumption, and given the chaotic variations that the ocean and atmosphere circulations are capable of, it is probably wrong. Little more than a 1% change in cloud cover up or down, and sustained over many decades, could cause events such as the Medieval Warm Period or the Little Ice Age.

Finally, recent temperature history also suggests that most climate models are too “hot.” Dr. Patrick Michaels touched on this topic in Policy Peril (albeit not in today’s excerpt).

Carbon dioxide emissions and concentrations are increasing at an accelerating rate (Canadell, J.G. et al. 2008). Yet, there has been no net warming since 2001 and no year was as warm as 1998.

[Figure: global-temperature-past-decade]

Figure description: Observed monthly global temperature anomalies, January 2001 through April 2009, as compiled by the Climate Research Unit. Source: Paul C. Knappenberger.

Paul C. Knappenberger (“Chip” to his friends) quite reasonably wonders, “[H]ow long a period of no warming can be tolerated before the forecasts of the total warming by century’s end have to be lowered?” After all, he continues, “We’re already into the ninth year of the 100 year forecast and we have no global warming to speak of.” It is instructive to compare these data with climate model projections.

A good place to start is with the climate model projections that NASA scientist James Hansen presented in his 1988 congressional testimony, which launched the modern global warming movement.

The figure below, from congressional testimony by Dr. John Christy, a colleague of Roy Spencer at the University of Alabama in Huntsville, shows how Hansen’s model and reality diverge.

[Figure: hansen-models-vs-reality1]

Figure description: The red, orange, and purple lines are Hansen’s model forecasts of global temperatures under different emission scenarios. The green and blue lines are actual temperatures from two independent satellite records. Source: John Christy.

“All model projections show high sensitivity to CO2 while the actual atmosphere does not,” Christy notes. “It is noteworthy,” he adds, “that the model projection for drastic CO2 cuts still overshot the observations. This would be considered a failed hypothesis test for the models from 1988.”

What about the models used by the IPCC in its 2007 Fourth Assessment Report (AR4)? How well are they replicating global temperatures?

[Figure: ipcc-models-vs-recent-temperatures]

This figure, also from Dr. Christy’s testimony, is adapted from Dr. Patrick Michaels’s testimony of February 12, 2009. The red and orange lines show the upper and lower significant range (95% of all model runs are between the lines) of global temperature trends calculated by 21 IPCC AR4 models for multi-year segments ending in 2020. The blue and green lines show observed temperatures ending in 2008 from satellite (University of Alabama in Huntsville) and surface (U.K. Hadley Center for Climate Change) records.

Christy comments:

The two main points here are (1) the observations are much cooler than the mid-range of the model spread and are at the minimum of the model simulations and (2) the satellite adjustment for surface comparisons is exceptionally good. The implication of (1) is that the best estimates of the IPCC models are too warm, or that they are too sensitive to CO2 emissions.

Christy illustrates this another way in his comment on EPA’s endangerment proposal.

[Figure: christy-models-standard-error1]

Figure description: Mean and standard error of 22 IPCC AR4 model temperature projections in the mid-range (A1B) emissions scenario. From 1979 to 2008, the mean projection of the models is a warming of 0.22ºC per decade. HADCRUT3v (green) is a surface dataset, UAH (blue) and RSS (purple) are satellite data sets.

Christy comments:

. . . even with these likely spurious warming effects in HADCRUT3v and RSS, the mean model trends are still significantly warmer than the observations at all time scales examined here. Thus, the model mean sensitivity, a quantity utilized by the IPCC as about 2.6ºC per doubled CO2, is essentially contradicted in these comparisons.

Michaels, in his testimony, shows that if year 2008 temperatures persist through 2009, then the observed temperature trend will fall below the 95% confidence range of model projections. In other words, the models will have less than a 5% probability of being correct.

[Figure: ipcc-models-vs-temperatures-through-2009]
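The test Michaels describes is straightforward to state: compute the observed least-squares trend over the period and ask whether it falls inside the interval spanned by 95% of the model-run trends. A schematic of that check, using placeholder numbers rather than the actual AR4 archive:

```python
# Schematic of the check Michaels describes: does the observed trend fall
# inside the range spanned by 95% of model-run trends? The model trends
# below are placeholder numbers, not the actual AR4 ensemble.
import numpy as np

model_trends = np.array([0.15, 0.18, 0.20, 0.22, 0.22, 0.24, 0.25, 0.27,
                         0.19, 0.21, 0.23, 0.26, 0.17, 0.28, 0.20, 0.24,
                         0.22, 0.25, 0.18, 0.21, 0.23])   # C/decade, hypothetical

lo, hi = np.percentile(model_trends, [2.5, 97.5])          # central 95% range
observed_trend = 0.13                                      # C/decade, e.g. a satellite estimate

print(f"model 95% range : {lo:.2f} to {hi:.2f} C/decade")
print(f"observed trend  : {observed_trend:.2f} C/decade "
      f"({'outside' if not lo <= observed_trend <= hi else 'inside'} the range)")
```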

Although the IPCC AR4 models have not failed yet, they are, in Michaels’s words, “in the process of failing,” and the longer the current temperature regime persists, the worse the models will perform.

Conclusion

The climate science debate is not “over.” In fact, it is just starting to get very, very interesting. All the basic issues–detection, attribution, and sensitivity–are unsettled and more so today than at any time in the past decade.

A final thought–anyone who wants further convincing that the debate is not over should read the marvelous NIPCC report. On a wide range of issues (nine main topics and 60 sub-topics), the report demonstrates that the scientific literature allows, and even favors, reasonable alternative assessments to those presented by the IPCC.

P.S. Previous posts in this series are available below:

  • Policy Peril: Looking for an antidote to An Inconvenient Truth? Your search is over
  • Policy Peril Segment 1: Heat Waves
  • Policy Peril Segment 2: Air Pollution
  • Policy Peril Segment 3: Hurricanes
  • Policy Peril Segment 4: Sea-Level Rise
  • Teaching Moment

    by Chris Horner on August 5, 2009

    So I’m in the suburbs of St. Looie today doing a town hall meeting with
    Rep. Todd Akin — fully subscribed with a crowd that was, ah, rather
    enthusiastic – when I have what may be the most fun experience in
    this whole strange anti-alarmism trip to date, as good as The Daily Show
    or even blogging on NRO.

    That was when, in the scrum afterward speaking with those interested in
    more on the subject, a woman hesitates, then says “Ah…I’m a science
    teacher…(Pause)” Look to the nametag. Face. Name tag. “No, you were
    ‘my’ science teacher!”, 8th grade, 32-ish years ago!

    That’s what she thought, too, quite pleased with what I’m doing and
    appalled at science and educators having sold their souls for guaranteed
    billions each year (for now…). So, she must remain nameless, of
    course, knowing how our friends work. But before Team Soros and other
    PG-monitors start shrieking that this just shows the product of youthful
    indoctrination, recall how for many reasons she would have been far more
    likely to have been brainwashing me with the at-the-time still
    less-exposed “consensus” — just as phony then as the claims are now —
    about catastrophic Man-made global cooling.

    No, she’s just an educator sick about what she’s witnessing. Regardless,
    very cool, and worth getting up at 5 and heading home by midnight
    (regional airport living, gotta love it), as was the whole event. As
    much as I want to see an ugly defeat, the crash-and-burn salting the
    political earth from whence this monstrosity came, I increasingly think
    that the Senate is best served just not bringing cap-and-trade up. It
    looks decreasingly wise to test the loyalty (and career interest) of
    Sens. Bayh, Nelson, Lincoln, Landrieu and of course McCaskill. That
    means BTU II, or that line in Animal House expressing the lack of
    foresight in having trusted one’s fraternity brothers with Fred’s Caddy.

    [YouTube video: http://www.youtube.com/watch?v=5dZtbz2U9-c]


    Duke Energy CEO and pro-carbon capper Jim Rogers writes for the Wall Street Journal today in support of the expansion of nuclear power capacity in the U.S. Amen to the nukes; not to the inevitable cap tax (however you design it, it’s a tax).

    Endeavoring to enhance his green-cred, Rogers also bemoans America’s alleged lagging performance in the mythical “race to develop green energy technologies.” He writes:

    As John Doerr, a partner at the venture-capital firm Kleiner Perkins Caufield & Byers, recently told a U.S. Senate energy panel, “The United States led the world in the electronics revolution, and we led in biotechnology and the Internet. But we are letting the energy technology revolution speed by us.”

    Mr. Doerr noted that the U.S. is home to only one of the top 10 wind turbine producers, only one of the 10 largest photovoltaic solar panel producers, and only two of the top 10 advanced-battery manufacturers.

    China is leading this race, and I saw this first hand during a recent trip there. China has doubled its wind-energy capacity each of the past four years, and it is expected to become the world’s largest manufacturer of wind turbines this year. It is already the world’s leading producer of solar panels. The Chinese understand that clean-energy technologies are the key to controlling their energy future.

    Does it really matter who’s developing technology, if it’s really worthwhile, so long as we get to use it? Has every technological advancement become a success in the U.S. because we won some “race”? If so, forgive me for not being devastated about missing out on the wind turbine and photovoltaic “revolution.”

    And has Rogers now become the mouthpiece of the Chi-coms? I’m sure the regime lackeys ushered him around to all their propagandaful “green” sites, while their pollutin’ polysilicon plants were passed by. Another convenient U.S. dupe for the communists.

    More proof that government does things better! In traditional “astroturfing,” a company would pay a PR firm to set up a fake grassroots organization aimed at promoting or fending off legislation that would affect the company. Her Majesty’s Government in the UK, however, has decided to take this a step further and fund groups that lobby it, thereby manufacturing the appearance of a groundswell of public opinion in favor of its legislation. According to a new report by the TaxPayers’ Alliance, it is doing this, on the issue of global warming alone, to the tune of $12 million a year*.

    As Matt Sinclair says:

    With the government funding political campaigns as well, the voice of the public is diluted still further. Popular pressure is crowded out by well-funded professional campaigns, but those campaigns don’t even represent an actual economic interest. Instead, those campaigns represent the views of politicians and officials and allow them to push their ideological preoccupations to prominence in the public discourse. Green campaigns like the Sustainable Development Commission and the New Economics Foundation loom large in the public debate and make it easier for politicians to justify – to themselves, the media and the public – ever more draconian attempts to force cuts in emissions.

    It is important that Americans understand how disconnected policy in Britain is from the preferences and priorities of the public. British politicians like to strut around on the world stage boasting about the radical action that the country is taking, for example, how we lead the world in setting carbon reduction targets. They hope that the U.S. won’t want to let the side down and can be pressured into embracing similar policies to ours. The European example might not quite have the same appeal if Americans understood that Britain is putting in place green policies not because of popular pressure but in order to satisfy a government-funded lobby. Ordinary people pay the price in the form of higher electricity bills, prices at the pump and fares for their airline tickets.

    Political contempt for taxpayers and the electorate is running at record levels on both sides of the Atlantic, it seems.

    *Note that this figure exceeds what the “well-funded” anti-alarmism groups probably spend in total on the issue in the US, and probably globally.

    Last week the Science & Public Policy Institute published “Climate Money,” a new study by Joanne Nova that documents how the U.S. federal government has spent $32 billion on alarmist global warming science since 1990.

    This week, the TaxPayers’ Alliance released a new report, “Burning Our Money,” on the scale that British taxpayers’ money is being used to fund lobbying and political campaigning. It found that £38 million – $60 million – was being spent on taxpayer funded lobbying and political campaigning in just a year, most of which aims to secure greater government intervention to try to cut greenhouse gas emissions. It is about climate alarmism and related policy activism, not sober science and free-market reliance.