Marlo Lewis

Today’s excerpt from CEI’s film, Policy Peril: Why Global Warming Policies Are More Dangerous Than Global Warming Itself, rebuts the argument that regulatory climate policies can’t be bad for the economy because so many big businesses support them.

This is an odd argument coming from people who are usually suspicious of big business, or even hostile to corporations. When did they decide that corporate support is some kind of Good Housekeeping seal of approval?

To watch today’s film excerpt, click here. To watch the entire movie, click here. The text of today’s film clip follows.

Narrator: Some big corporations call for caps on CO2 emissions. Supposedly, this proves such policies won’t harm the economy. In fact, all it proves is that special interests can make windfall profits from energy rationing schemes.

Remember that $5 trillion loss the Lieberman-Warner bill would inflict on the economy? Well, that’s only half the story.

Dr. David Kreutzer (Heritage Foundation): The Lieberman-Warner bill also enacts a huge transfer from the consumers of energy to groups that are picked out–special interest groups–that Congress would designate. So after America has lost $5 trillion in income, there will be another $5 trillion taken and transferred from energy consumers.

Commentary

A corporation may lobby for cap-and-trade for various bottom-line reasons unrelated to environmental concern:

  • In a carbon-constrained world, a company like GE, which makes nuclear reactors and wind turbines, can expect to sell more of its products.  
  • Utilities like PG&E that generate most of their electricity from hydro-electric dams, natural gas, or nuclear power can make a killing in the carbon market if the emission allowances are allocated for free based on a firm’s historic electricity output rather than historic emissions.
  • Conversely, utilities like Duke Energy that generate most of their electricity from coal can make a killing if the emission allowances are allocated for free based on a firm’s historic emissions.
  • Wall Street firms like Goldman Sachs salivate at the prospect of a new, multi-trillion-dollar market in carbon permits, futures, and derivatives. They can make big bucks as brokers and carbon portfolio managers.

The last bullet merits additional comment, because if there ever was a policy issue that pits Wall Street against Main Street, cap-and-trade is it. The Breakthrough Institute summarizes the key finding of a non-public Goldman Sachs report titled “Carbonomics: Measuring impact of US carbon regulation on select industries”:

In a section titled “Carbon exchanges — build it, and they will (must) come to trade,” it estimates the bill [Waxman-Markey] would grow the global carbon market to become one of the biggest in the world, with trading volume of 175 to 263 million contracts per year – larger than the oil and gas markets combined and approximately the third-largest commodity market in the world after U.S. interest rates and stock indexes. The analysts estimate the profit margin for financial firms resulting from the new carbon market could reach $2 billion annually.

Baptists and Bootleggers

Corporate support for cap-and-trade should really come as no surprise, because nearly all “public-interest” regulation depends on marriages of convenience between the high-minded (or lofty-talking) and the narrowly interested–between those who seek regulation based on some moral, religious, or ideological concern and those who seek regulation to rig the market in their favor.

Economist Bruce Yandle of Clemson University was among the first to develop the theory of the Baptist-Bootlegger coalition as an explanation of public policy change. 

“The theory,” says Yandle, “draws on colorful tales of states’ efforts to regulate alcoholic beverages by banning Sunday sales at legal outlets. Baptists fervently endorsed such actions on moral grounds. Bootleggers tolerated the actions gleefully because it limited their competition.” 

Baptists provided the moral justification–the public-interest rationale–for restricting the sale of alcoholic beverages. Bootleggers provided the filthy lucre–the campaign contributions to politicians supporting the restrictions (known as “blue laws”).

Nothing better illustrates the “bootlegger” role of big business in advancing the climate policy agenda than Enron’s lobbying and PR campaign for the Kyoto Protocol.

Enron, that poster child of corporate fraudulence, was a leading advocate of cap-and-trade in the climate treaty negotiations culminating in the Kyoto Protocol. Enron was a natural gas distributor, and Kyoto would suppress (or kill) electricity production from coal, boosting demand for Enron’s core business. Carbon controls would also pump up the market for Enron’s wind turbines and energy management services. In addition, Enron’s energy traders expected to make juicy commissions on the purchase and sale of emission allowances.

On December 12, 1997, the day after the Kyoto conference, Enron environmental affairs director John Palmisano, in a memorandum to colleagues, enthused:

If implemented, this agreement [the Kyoto Protocol] will do more to promote Enron’s business than almost any other regulatory initiative outside of restructuring of the energy and natural gas industries in Europe and the United States. The potential to add incremental gas sales, and additional demand for renewable technology is enormous. In addition, a carbon emissions trading system will be developed.

For both its high-profile and behind-the-scenes lobbying for Kyoto, Enron became the darling of green groups (a fact many prefer to forget). Palmisano elaborated:

Through our involvement with the climate change initiative, Enron now has excellent credentials with many “green” interests including Greenpeace, WWF [World Wildlife Fund], NRDC [Natural Resources Defense Council], German Watch, the U.S. Climate Action Network, the European Climate Action Network, Ozone Action, WRI [World Resources Institute], and Worldwatch. The praise went like this: “Other companies should be like Enron, seeking out 21st century business opportunities” or “Progressive companies like Enron are…” or “Proof of the viability of market-based energy and environmental programs is Enron’s success in power and SO2 [sulfur dioxide] trading.”

At the end of his memo, Palmisano exulted: “I predict business opportunities within three years. . . This agreement will be good for Enron stock!!”

Many rent-seeking companies follow the trail that Enron blazed. For example, big-business lobbyists had a strong hand in crafting the Waxman-Markey cap-and-trade bill, the American Clean Energy and Security Act (ACES, H.R. 2454).

All the distinguishing features of the Waxman-Markey cap-and-trade provisions were spelled out months in advance of the bill’s introduction by the United States Climate Action Partnership (US-CAP), in a January 2009 report called A Blueprint for Legislative Action. Core US-CAP proposals incorporated into Waxman-Markey include:

  1. Year 2020 emission reduction targets significantly less stringent than those called for by the European Union (17% below 2005 levels instead of 20%-30% below 1990 levels).
  2. Generous provision of free emission allowances (energy-ration coupons) rather than 100% auctioning as called for by President Obama (the Heritage Foundation’s August 6, 2009 analysis, p. 4, estimates that 85% to 101% [!] of the coupons will be given away in the early years of the program).
  3. Generous “carbon offset” provisions authorizing regulated U.S. firms to pay non-regulated entities to reduce, avoid, or sequester emissions in lieu of reducing emissions themselves (the Breakthrough Institute estimates that the Waxman-Markey offsets will allow U.S. emissions to increase through 2030).

A Carbon Cartel

In February 2007 testimony before the Senate Environment and Public Works Committee, CEI President Fred Smith noted that cap-and-trade “is an ugly combination of two of the greatest ills to affect the market economy over the past two hundred years–cartelization and central planning.” The emissions cap, which determines how much CO2-emitting energy society may use, is set by the government–that’s the central planning element. The provision of emission allowances under the cap effectively creates a cartel.

The emissions allowances (energy-ration coupons) function just like the production quotas allocated among members of OPEC (Organization of the Petroleum Exporting Countries), the only difference being that the ration coupons can be bought and sold. The economic effect, though, of both oil production quotas and emission allowances is the same: restrict energy supply, raise energy prices, and create monopoly profits for a favored few. Fred commented:

As a result of this cartelization, energy costs rise, real wages fall, and output and employment fall. We know these are the effects of cartels, which is why we used to put the people who set up cartels in jail. Yet the Climate Action Partnership wants legal blessing for this new cartel. Any legislation enacting cap-and-trade would actually ennoble a new generation of robber barons and provide legal protection for their profiteering activities.

A key point to bear in mind is that the amount of wealth transferred from consumers to cartel members can greatly exceed the overall loss to the economy. See the diagram below.

[Figure: wealth transfer under cap-and-trade]

Figure description: 1.5 gigatons of carbon (GtC) is the hypothetical amount of CO2 emissions society produces in the absence of a cap. When there is no cap, the right to emit CO2 costs zero dollars per ton of carbon. The hypothetical cap requires a 20% reduction in emissions, from 1.5 GtC to 1.2 GtC. The right to emit CO2 now costs $50/tC. That increases the cost of energy, which then reduces economic output (the dark shaded triangle). However, the amount taken and transferred from energy consumers–the additional dollars they must spend for home heating oil, natural gas, electricity, and gasoline (the lightly shaded square)–can be much larger.
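The figure’s arithmetic can be sketched in a few lines. The 1.5 GtC baseline, 20% cap, and $50/tC allowance price are the hypothetical values from the figure description; approximating the economy-wide loss as a deadweight-loss triangle (i.e., assuming roughly linear demand) is my simplification:

```python
# Hypothetical numbers from the figure description above:
# a 20% cap cuts emissions from 1.5 GtC to 1.2 GtC, and the
# allowance price settles at $50 per ton of carbon.
GT = 1e9  # tons of carbon per gigaton

uncapped_emissions = 1.5 * GT  # tC emitted with no cap
capped_emissions = 1.2 * GT    # tC emitted under the cap
allowance_price = 50.0         # $/tC

# Wealth transferred from energy consumers to allowance holders
# (the lightly shaded square): every ton still emitted carries a
# $50 ration-coupon cost passed through in energy prices.
transfer = capped_emissions * allowance_price

# Overall loss to the economy (the dark shaded triangle),
# approximated as a linear deadweight-loss triangle.
deadweight_loss = 0.5 * (uncapped_emissions - capped_emissions) * allowance_price

print(f"transfer:        ${transfer / 1e9:.0f} billion")        # $60 billion
print(f"deadweight loss: ${deadweight_loss / 1e9:.1f} billion")  # $7.5 billion
print(f"transfer is {transfer / deadweight_loss:.0f}x the economy-wide loss")
```

Under these assumptions the transfer is eight times the deadweight loss, which is the point of the diagram: the wealth taken from consumers can dwarf the loss in total output.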

Think again of OPEC. As long as oil prices don’t get so high that they depress the global economy, the wealth transferred from consumers to OPEC members will exceed the overall reduction in global GDP.

In the European Emissions Trading System (ETS), utilities made out like bandits during the first two years of the program. Governments gave the utilities more free ration coupons than they needed. The utilities then passed these imaginary costs on to their customers by raising rates. Then they sold their surplus coupons to manufacturers whose electric rates they had just raised. Thanks to the ETS, the utilities achieved a two-fold (albeit short-lived) windfall profit. Open Europe, the British free-market think tank, provides the gory details in this hard-hitting report.

In the run-up to Waxman-Markey, cap-and-trade proponents repeatedly said that they had learned from Europe’s mistakes, and here in the USA all emission allowances would be auctioned in competitive bids. Yes, your electric rates would “necessarily skyrocket,” Barack Obama said, when campaigning for the White House. But, he assured us, the revenues would be returned somehow to taxpayers. Cap-and-trade would become cap-and-dividend.

That, however, was unacceptable to US-CAP, and in the sausage factory known as the legislative process, they carried the day. The Heritage Foundation’s August 6, 2009 report describes what happened:

In order to get the Waxman-Markey cap-and-trade bill through the House Energy and Commerce Committee . . . Members of Congress promised generous handouts for various industries and special interests. In the near-term, the legislation promises to distribute 85-101% of the allowances to various interest groups at no cost . . . The biggest winners are the electric utilities, receiving 43.75% of the emission allowances in 2012 and 2013.

To read previous posts in this series, click on the links below.

  • Policy Peril: Looking for antidote to An Inconvenient Truth? Your search is over.
  • Policy Peril Segment 1: Heat Waves
  • Policy Peril Segment 2: Air Pollution
  • Policy Peril Segment 3: Hurricanes
  • Policy Peril Segment 4: Sea-Level Rise
  • Policy Peril Segment 5: Is the Science Debate Over?
  • Policy Peril Segment 6: Cap and Trade
  • Policy Peril Segment 7: Fuel Economy Standards 
  • Policy Peril Segment 8: Coal

    Today’s excerpt from CEI’s film, Policy Peril: Why Global Warming Policies Are More Dangerous Than Global Warming Itself, is on the global warming movement’s anti-coal campaign and the dangers it poses to U.S. consumers and the economy. To watch today’s clip, click here. To watch the entire film, click here.

    The text of today’s excerpt follows. I provide additional commentary and links to supporting information in the footnotes.

    Narrator: First and foremost, they want to ban construction of new coal-fired power plants. [1] Why? Coal is the most carbon-intensive fuel. It releases the most carbon dioxide per unit of energy produced. [2]

    More importantly, emissions from new coal plants are expected to swamp, by as much as five to one, all the emission reductions that Europe, Canada, and Japan might achieve under the U.N. global warming treaty, the Kyoto Protocol. Either global warming activists kill coal, or coal will bury Kyoto. [3]

    [Figure: coal vs. Kyoto]

    Figure Source: Mark Clayton, New coal plants bury ‘Kyoto,’ Christian Science Monitor, 23 December 2004.

    Narrator: To be fair, the activists say they’ll allow new coal generation, if the power plants deploy something called CCS, carbon capture and storage technology. [5] The idea is that instead of releasing CO2 into the air, the power stations would capture it, liquefy it, and then transport it to underground storage sites. [6] There’s just one problem. No commercial coal plants today have CCS technology. [7]

    I asked Mary Hutzler, formerly head of analysis at the Energy Information Administration, how long it would take just to determine whether a CCS system would be economical for utilities to build.

    Mary Hutzler, former Acting Administrator, Energy Information Administration: It probably requires an immense amount of research and development. People have told me 10 to 15 years alone. [8]

    Narrator: Mary also told me that building a national CCS pipeline network could take another decade. Developing the regulations would also take years. [9] So the proposed moratorium is really a ban on new coal plants for 20 years or more.

    What’s the risk here? New coal generation is forecast to supply two-thirds of all new electric power over the next two decades. By 2030, new coal generation is expected to provide 15% of all our electricity. [10] So banning it could create one heck of a power deficit. Frequent blackouts and power failures–an energy crisis–would not be an unlikely consequence. At a minimum, our electric bills would go way up.

    Narrator: But Al Gore is not content to ban new coal plants. He now proposes to scrap all existing coal plants and natural gas power plants too. He says we must replace all carbon-based electricity with carbon-free electricity in just 10 years–by 2018. [11]

    Ben Lieberman (Heritage Foundation): The idea is absolutely off the charts, unrealistic. [12]

    Dr. Patrick Michaels (Cato Institute): Al Gore is proposing the literally, physically impossible. [12]

    Commentary

    [1] James Hansen, the NASA scientist whose congressional testimony during the hot summer of 1988 launched the global warming movement, calls coal power plants “factories of death” and “the single greatest threat to civilization and all life on our planet.” The “top priority of any climate policy must be to stop the building of traditional coal plants,” writes climate crusader Joe Romm. He continues: “A climate policy that does not start by achieving at least the first goal, a moratorium on coal without CCS, must be labeled a failure.” “The silver bullet [for global warming] is no more coal,” says Architecture 2030. “Kill Coal. Coal is the enemy of the human race,” declares the Sustainable Development Issues Network. My Google search shows that global warming and coal are discussed on some 4,470,000 Web sites. It’s a safe bet most of those sites share the Gorethodox sentiments quoted above.

    [2] Different fossil (carbon-based) fuels emit different amounts of CO2 in relation to the energy they produce. For a variety of fuels, the U.S. Energy Information Administration compares pounds of CO2 emitted per million British thermal units (Btu) of energy output:

    Fuel                          Lbs CO2 per million Btu
    Natural gas                   117
    Liquefied petroleum gas       139
    Gasoline                      156
    Coal (bituminous)             205
    Coal (subbituminous)          213
    Coal (lignite)                215
    Petroleum coke                225
    Coal (anthracite)             227

    From these numbers, we can calculate the emission ratios (or relative CO2 intensity) of the fuels. For example, bituminous coal is about 1.31 times more CO2-intensive than gasoline, and 1.75 times more CO2-intensive than natural gas.
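    As a quick check, the ratios fall straight out of the table. This minimal sketch uses the EIA figures quoted above:

```python
# Pounds of CO2 emitted per million Btu, from the EIA table above.
lbs_co2_per_mmbtu = {
    "natural gas": 117,
    "liquefied petroleum gas": 139,
    "gasoline": 156,
    "bituminous coal": 205,
    "subbituminous coal": 213,
    "lignite": 215,
    "petroleum coke": 225,
    "anthracite": 227,
}

def intensity_ratio(fuel_a: str, fuel_b: str) -> float:
    """How many times more CO2-intensive fuel_a is than fuel_b."""
    return lbs_co2_per_mmbtu[fuel_a] / lbs_co2_per_mmbtu[fuel_b]

print(round(intensity_ratio("bituminous coal", "gasoline"), 2))     # 1.31
print(round(intensity_ratio("bituminous coal", "natural gas"), 2))  # 1.75
```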

    [3] The Christian Science Monitor chart shown above and in the film clip is based on late 2004 estimates by UDI-Platts, the U.S. Energy Information Administration (EIA), and unspecified industry sources. David Hawkins of the Natural Resources Defense Council (NRDC), in a February 2005 speech, presented a similar bottom line, based on International Energy Agency (IEA) data. He said:

     The International Energy Agency (IEA) forecasts that 1400 GW of new coal plants will be built worldwide in the next 25 years alone. To put that in context, current U.S. coal capacity is about 330 GW and global capacity is 1000 GW. This enormous increase in coal capacity will lock us into a huge additional commitment to global warming unless we use technologies that reduce CO2 emissions to minimal levels; marginal efficiency improvements will not prevent this lock-in.

    The lifetime emissions from just this next wave of coal investment will be about 580 billion tons of CO2. That amount is more than half the total loading of the atmosphere with CO2 from all forms of fossil fuel combustion in the past 250 years!

    Build scores or hundreds of new coal plants, and the Kyoto CO2 reductions barely amount to a drop in the bucket. As has been widely reported, China is building coal power plants at the rate of one a week.

    [5] A wide-ranging coalition of environmental groups called “Coal Moratorium Now” demands that no new coal-fired power station be built unless it is equipped with carbon capture and storage. In 2008, Reps. Henry Waxman (D-CA) and Ed Markey (D-MA)–the authors of the 2009 Waxman-Markey cap-and-trade bill (H.R. 2454, the American Clean Energy and Security Act)–introduced legislation (H.R. 5575) to impose a moratorium on new coal plants lacking CCS. In March 2009, state legislators introduced a similar bill in Texas. In April 2009, the UK Government proposed regulations requiring new coal plants to install CCS on at least 400 MW of output–about 25% of the output of an average power station. In addition, the power stations would have to capture 100% of their emissions by 2025–if the applicable technology exists by then. That’s a big “if.”

    [6] A wealth of both basic and technical information on CCS is available in studies by MIT, the U.S. Government Accounting Office, the Electric Power Research Institute (EPRI), the Congressional Research Service, the Department of Energy (DOE), and Glaser et al. (2008).

    [7] Oil companies sometimes inject CO2 into wells to squeeze more petroleum out of them–a technique called enhanced oil recovery (EOR). Sometimes people talk as if a CCS system could piggy-back on EOR projects. But, as MIT’s Future of Coal report points out, CO2 injection for EOR has “limited significance for long-term, large-scale CO2 sequestration–regulations differ, the capacity of EOR projects is inadequate for large-scale deployment, the geologic formation has been disrupted by production, and EOR projects are usually not well instrumented [monitored for CO2 leakage; p. xiii].”

    The Department of Energy (DOE), citing rising costs, pulled the plug on FutureGen, a $1.5 billion government-industry partnership to build the world’s first commercial scale CCS power plant. In July 2009, however, FutureGen Alliance, Inc. announced it had reached an agreement with DOE to begin “construction of the first commercial-scale, fully integrated carbon capture and sequestration project in the country in Mattoon, Ill.” So there is still not even a commercial-scale demonstration project, though there may be in the next few years.

    [8] MIT’s March 2007 Future of Coal report calls for large demonstration projects in 3-4 sites in different regions of the country costing “$500 million over eight years.” Better still, MIT argues, “Five large tests could be planned and executed for under $1 billion, and address the chief concerns for roughly 70% of U.S. [coal generation] capacity. Information from these projects would validate the commercial scalability of geologic carbon storage and provide a basis for regulatory, legal, and financial decisions needed to ensure safe, reliable, economic sequestration” (p. 54).

    EPRI’s Bryan Hannegan estimated in March 2007 that CO2 capture (including compression, transportation, and storage) would increase the levelized cost of an Integrated Gasification Combined Cycle (IGCC) coal power plant by “about 40-50%” (p. 5). IGCC is already more costly than the more common pulverized coal (PC) power plants. EPRI is confident that additional RD&D will lower carbon capture costs. But by how much and how soon is uncertain.

    A February 2009 Stanford University study, citing a September 2008 McKinsey & Co. study and other sources, says that CCS is projected to increase the capital costs of new coal power plants by almost 50%. “On the basis of avoided emissions, the cost of CCS ranges from $30-$90/tonne CO2, which translates into a 60-80% increase in the levelized cost of electricity ($/MWh).”

    A July 2009 Harvard University study estimates that early adopters of carbon capture technology will incur a cost of $100-$150/ton of CO2 avoided (equivalent to 8-12 cents/kWh). Once the technology matures, the additional cost will fall to $35-$50/ton of CO2 avoided (equivalent to 2-5 cents/kWh), the researchers estimate. For comparison, in 2009, residential electric rates were 20.9 cents/kWh in Connecticut, 9.2 cents/kWh in Kansas, and 14.6 cents/kWh in California.
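    The dollars-per-ton figures translate into cents per kilowatt-hour roughly as follows. This is a back-of-the-envelope sketch; the assumption that a coal plant emits about one ton of CO2 per MWh is mine, a round number not taken from the studies cited above:

```python
# Rough conversion from abatement cost ($/ton of CO2 avoided) to the
# added cost of electricity (cents/kWh). Assumes a coal plant emits
# about 1 ton of CO2 per MWh -- an assumed round number, not a figure
# from the Harvard or Stanford studies.
def added_cost_cents_per_kwh(dollars_per_ton: float, tons_per_mwh: float = 1.0) -> float:
    dollars_per_mwh = dollars_per_ton * tons_per_mwh
    return dollars_per_mwh / 10.0  # $1/MWh = 0.1 cents/kWh

print(added_cost_cents_per_kwh(100))  # 10.0 -- within the 8-12 cent early-adopter range
print(added_cost_cents_per_kwh(35))   # 3.5  -- near the 2-5 cent mature-technology range
```

    The conversion shows why the quoted $/ton and cents/kWh ranges hang together: at roughly one ton of CO2 per MWh, dollars per ton and tenths of a cent per kWh are the same number.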

    How long between early adoption and technological maturity? According to the researchers, increasing scale, learning by doing, and technological innovation “are expected to reduce abatement [CO2 capture] costs by approximately 65% by 2030, although such estimates are inevitably uncertain” (emphasis added). 

    In plain speak, it may take many years to sort out the economics of CCS.

    [9] The scale of the network of pipelines and storage sites required to transport and bury CO2 from U.S. coal power plants is staggering. According to MIT’s Future of Coal report (p. ix):

    • The United States produces about 1.5 billion tons per year of CO2 from coal-burning power plants.
    • If all of this CO2 is transported for sequestration, the quantity is equivalent to three times the weight and, under typical operating conditions, one-third the annual volume of natural gas transported by the U.S. gas pipeline system.
    • If 60% of the CO2 produced from U.S. coal-based power generation were to be captured and compressed into a liquid for geologic sequestration, its volume would be about equal to total U.S. oil consumption of 20 million barrels per day.
    • At present the largest sequestration project is injecting one million tons per year of carbon dioxide (CO2) from the Sleipner gas field into a saline aquifer under the North Sea.

    Even if Congress approves such a system, and major environmental groups support it, NIMBY (“not in my backyard”) protests and litigation could block or delay implementation for many years. Some people just don’t like energy projects, regardless of how “green” the projects purport to be. For the gory details, check out the U.S. Chamber of Commerce’s “Project No Project” Web site.

    [10] Two-thirds of all new generation and 15% of total U.S. electric supply–these estimates came from the Energy Information Administration’s (EIA) 2008 Annual Energy Outlook. See the figure below.

    [Figure: EIA Annual Energy Outlook 2008, coal electric generation forecast]

    Coal’s estimated shares of new generation and total generation are lower in EIA’s Annual Energy Outlook 2009, which forecasts that from 2007 to 2030 new coal generation will provide 64% of all new generation and 9% of total U.S. electric supply. See the figure below.

    [Figure: EIA Annual Energy Outlook 2009, coal electric generation forecast]

    Actually, it’s remarkable that EIA still forecasts a robust increase in electric generation from coal. Coal increasingly operates in a politically hostile, litigious environment. The Sierra Club, for example, claims that its activists, lawyers, and allies, working with state and local leaders, have prevented 100 planned coal power plants from being built over the past eight years. Click here for a partial list.

    For example, even in Texas, an energy-producing state, environmental activists stopped TXU Corp. from building eight of 11 planned new coal power plants, despite estimates by the Perryman Group that investment in the new plants, over five years, would add $25.8 billion to state GDP, $17.3 billion to in-state personal income, and 389,000-plus person-years of employment.

    [11] I’m not making this up. The text and video of Gore’s speech calling for carbon-free electricity by 2018 are available here.

    [12] According to the EIA, in 2008, renewable sources generated 356 billion kWh, of which 259.7 billion kWh, or 73%, came from conventional hydro-electric dams. Total net generation by the electric power sector was 3852 billion kWh. So renewables provided only 9% of total generation, which means that only about 2.4% came from the politically-correct renewables–wind, biomass, solar, and geothermal.

    Note that non-hydro renewable sources would provide even less electricity but for a plethora of market-rigging federal and state tax breaks and subsidies and Soviet-style production quotas known as renewable portfolio standards.

    Coal and natural gas provided 2654 billion kWh, or about 69% of total U.S. electric generation in 2008. Gore and his allies would undoubtedly oppose the construction of new large hydroelectric dams even if suitable sites were available. So what Gore and “We Can Solve It” are proposing to do, is replace the 69% of our electricity that comes from coal and natural gas with the non-hydro renewables that currently supply only 2.4%–all in 10 years. 
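    The generation shares in this footnote reduce to simple division over the 2008 EIA figures quoted above (the post’s 2.4% figure reflects intermediate rounding of the 9% and 73% shares; exact division gives about 2.5%):

```python
# 2008 EIA generation figures quoted above, in billion kWh.
total_generation = 3852.0
renewables = 356.0
conventional_hydro = 259.7
coal_and_natural_gas = 2654.0

# Wind, biomass, solar, and geothermal: renewables minus hydro.
non_hydro_renewables = renewables - conventional_hydro

print(f"all renewables:       {renewables / total_generation:.1%}")            # ~9.2%
print(f"non-hydro renewables: {non_hydro_renewables / total_generation:.1%}")  # ~2.5%
print(f"coal + natural gas:   {coal_and_natural_gas / total_generation:.1%}")  # ~68.9%
```

    Replacing roughly 69% of generation with sources that currently supply about 2.5% is the arithmetic behind the “off the charts, unrealistic” verdicts quoted in the clip.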

    This plan would fail–dismally. Our electricity rates would skyrocket, because the demand for renewable electricity, ramped up by mandates, would vastly exceed supply. No transition that big and that fast would be smooth. Service disruptions and blackouts would likely be frequent and pervasive–a chronic energy crisis.

    Gore’s plan would also set a world record for government waste, since hundreds of profitable coal and natural gas power plants would have to be decommissioned long before the end of their useful lives.   


    Today’s excerpt from CEI’s film, Policy Peril: Why Global Warming Policies Are More Dangerous Than Global Warming Itself, is on two global warming policies Congress has adopted: fuel economy standards and biofuel mandates.


    To watch today’s film excerpt, click here. To watch the entire film, click here.

    The text of today’s film clip immediately follows. It includes footnotes to additional commentary and supporting information.

    Narrator: If stopping new coal is the global warming movement’s top priority, a close second is jump-starting a ‘beyond petroleum’ transport system. They propose to do this by tightening new-car fuel economy standards. Why?

    A car that gets more miles to the gallon emits less CO2 per mile [1]. But the federal fuel economy program, also known as CAFE, has serious downsides.

    Sam Kazman (General Counsel, Competitive Enterprise Institute): Now there are lots of problems with fuel economy mandates. One thing, they raise new car prices. [2] Secondly, they restrict consumer choice. [3] But the worst thing is an effect you never hear their advocates talking about. Namely, fuel economy mandates kill people. [4]

    Narrator: Here’s why. Heavier cars provide more mass to absorb collision forces, and bigger cars provide more space between the occupant and the point of impact. [5] Make a car smaller and lighter, and it will go farther on a gallon of gas.

    Kazman: But you also make it less safe. According to the National Academy of Sciences, the current CAFE standard, by downsizing cars, contributes to about 2,000 fatalities per year. [6]

    Narrator: Legislation Congress passed in December 2007 requires a 40% increase in fuel economy by 2020. [7] In 2007, only two out of 1,153 vehicle models met the new standards. [8] So expect more downsizing in the years ahead.

    Another ‘beyond petroleum’ policy is to require the sale of alternative fuels. In December 2007 Congress also mandated that motor fuel producers sell 36 billion gallons of ethanol a year by 2022, with 15 billion gallons coming from corn kernels. [9] The result: we’re diverting massive quantities of grain from food to auto fuel. This contributes to the surge in global grain prices that is pushing millions of the world’s poorest people to the brink of starvation. [10]

    But at least ethanol cuts down on CO2 emissions, right? Actually, no.

    Dr. Dennis Avery (Hudson Institute): As we expand the cropland, then we get into the real trouble, because we release the greenhouse gas that’s stored in the soil as carbon. And with corn, we release twice as much gas as we would have released if we burned gasoline in the first place. [11]

    [1] A gallon of gasoline (which weighs about 6.3 lbs.) produces 20 lbs. of CO2 when burned. If a car gets more miles to the gallon, it will emit fewer lbs. of CO2 per mile driven. The relationship between fuel economy (mpg) and lbs. CO2/mile is so direct that EPA bases its fuel economy ratings of vehicle models on tests that measure the carbon content of the emissions, principally CO2.
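    The relationship in this footnote is just 20 divided by miles per gallon (a minimal sketch; the 20 lbs-per-gallon figure comes from the text above):

```python
# Burning one gallon of gasoline releases roughly 20 lbs of CO2
# (figure from the footnote above), so a car's CO2 output per mile
# is fixed by its fuel economy alone.
LBS_CO2_PER_GALLON = 20.0

def lbs_co2_per_mile(mpg: float) -> float:
    return LBS_CO2_PER_GALLON / mpg

print(lbs_co2_per_mile(25))            # 0.8 lbs/mile
print(round(lbs_co2_per_mile(35), 2))  # 0.57 lbs/mile
```

    This is why raising mpg and cutting CO2 per mile are the same policy lever: a 40% fuel economy gain mechanically cuts per-mile CO2 by about 29% (1 minus 1/1.4).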

    Unsurprisingly, virtually all CO2-reduction options for new motor vehicles are fuel-economy-increasing options. See p. 10 of the National Automobile Dealers Association’s comment on EPA’s reconsideration of California’s request for a waiver to establish greenhouse gas emission standards for new motor vehicles.

    [2] There are basically two ways to increase fuel economy–downsizing (making cars smaller and lighter) and new technology. Typically, advanced technology costs more than conventional technology. The Energy Information Administration, for example, estimates that California’s greenhouse gas/fuel economy standards, which President Obama recently adopted, will increase the average price of a new car by $1,860 in 2016. [Obama’s program will also impose heavy burdens on the nearly prostrate U.S. auto industry, as economist Keith Hennessey explains.]

    [3] The CAFE program all but killed the market for large station wagons, because automakers could not produce millions of these once popular “family cars” and meet the CAFE standard for their vehicle fleets.

    In addition, as a general matter, because fuel economy mandates increase vehicle cost, they inevitably price some consumers out of the market for certain vehicle models, restricting their choices.

    Ironically, the federal fuel economy program boosted the production and sale of gas-guzzling SUVs. Consumers who might otherwise have purchased big station wagons instead bought large SUVs. Congress regulated SUV fuel economy less stringently because (1) SUVs are built on a light-truck chassis and thus are classified as trucks rather than as passenger cars, and (2) most SUVs traditionally were used for farming and business rather than commuting. Fuel economy standards helped create the boom market for low-mpg SUVs–a classic case of the law of unintended consequences.

    [4] Sam debates the issue of whether CAFE kills with an analyst from Natural Resources Defense Council (NRDC) here.

    [5] I am always amazed when people with scientific credentials deny the safety implications of regulatory-induced vehicle downsizing. How can they claim that size and weight don’t matter? That’s denying the laws of physics. There’s a reason why boxing matches don’t pit lightweights against heavyweights, or why marathon runners don’t play professional football.

    Yes, new technology can improve the crashworthiness of small cars. But, as Sam explains elsewhere, a large car with new technology will still be safer than a small car with new technology. To the extent that CAFE constrains the production and sale of larger, heavier vehicles, it limits auto safety.

    [6] Sam refers to a National Academy of Sciences/National Research Council (NRC) study, Effectiveness and Impact of Corporate Average Fuel Economy (CAFE) Standards. See pp. 25-29, especially p. 27. The NRC estimates that in 1993, a typical year, downweighting and downsizing of cars contributed to 1,300 to 2,600 auto fatalities, 13,000 to 26,000 incapacitating injuries, and 97,000 to 195,000 total injuries.  

    [7] The so-called Energy Independence and Security Act (EISA). Click here to read the Congressional Research Service’s summary of the EISA provisions.

    [8] Prior to investigating, I had assumed there must be at least 30-50 models on the road that met the fuel economy standards mandated by the 2007 Energy Independence and Security Act. But EPA’s fuel economy ratings for model year 2008 reveal that only two out of 1,153 models, the Toyota Prius and Honda Civic Hybrid, met or exceeded the standard (35 mpg for both city and highway driving conditions).

    [9] Click here to read the Congressional Research Service’s summary of the EISA provisions.

    [10] I provide references here on biofuel policy and world hunger. In May 2008, the International Food Policy Research Institute estimated that biofuel demand accounted for 30% of the increase in world cereal prices during 2007-2008. For further discussion, see Dennis Avery’s October 2008 paper for the Competitive Enterprise Institute. 

    [11] Dennis’s CEI paper recaps the literature on CO2 increases from biofuel policy-induced land-use changes, including Searchinger et al. (2008) and Fargione et al. (2008). Additional reviews of these studies are available on World Climate Report and CO2Science.Org.

    Today’s excerpt from CEI’s film, Policy Peril: Why Global Warming Policies Are More Dangerous Than Global Warming Itself, is on cap-and-trade.  

    What is cap-and-trade?

    Cap-and-trade is Al Gore’s (and the environmental community’s) leading “solution” to the alleged “climate crisis”–the centerpiece, for example, of the Kyoto Protocol climate treaty.

    There are many technical issues in the design and implementation of a cap-and-trade program, but the basic idea is as follows. 

    The government establishes a legal limit–a “cap”–on the total quantity of greenhouse gases that regulated (“covered”) entities may emit. Each covered entity must acquire one federally-created or -certified allowance (permit, ration coupon) for every ton of carbon dioxide-equivalent (CO2-e) greenhouse gases it emits. The total number of allowances allocated exactly equals the number of tons permissible under the cap. Thus, as the cap tightens, the supply of coupons shrinks, and emissions from covered entities decline.

    An entity with high emission-reduction costs may simply decide to cut its energy use and economic output, but it may also buy surplus coupons from an entity with lower emission-reduction costs. The buying and selling of ration coupons is the “trade” part of cap-and-trade.
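
    The “trade” mechanics described above can be sketched with a toy model. All numbers are invented for illustration: two covered firms with different abatement costs face the same cap, and trading shifts the cutting to the low-cost abater.

```python
# Toy cap-and-trade model; all numbers are invented for illustration.
baseline = 100      # tons each firm would emit with no cap
allowances = 75     # free allowances allotted to each firm
cut_each = baseline - allowances   # 25 tons each firm must cover

cost_a = 10   # Firm A's abatement cost, $/ton (the low-cost abater)
cost_b = 40   # Firm B's abatement cost, $/ton (the high-cost abater)

# No trading: each firm cuts its own 25 tons.
no_trade = cut_each * cost_a + cut_each * cost_b    # $1,250

# With trading: A cuts all 50 tons cheaply and sells its 25 surplus
# coupons to B at any price between $10 and $40 per ton. The coupon
# payment is a transfer between the firms, not a social cost, so total
# abatement spending falls.
with_trade = (2 * cut_each) * cost_a                # $500
```

    Note that trading changes who does the cutting, not how much is cut: total emissions equal the cap either way.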

    “Market-based” is a misnomer

    Supposedly, cap-and-trade leads to an economically “efficient” solution. Participants are motivated to innovate and search for cheap emission-reduction opportunities not only to minimize their own costs but also to generate surplus coupons they can sell in the carbon trading market.

    Cap-and-trade is often called “market-based” because each business, spurred by the desire to minimize costs and (if possible) amass surplus coupons it can sell for a profit, determines where and how to cut its emissions. This is in contrast to “command-and-control” regulation, in which a central authority prescribes the emission rates (e.g. lbs. of CO2 per megawatt-hour of electricity generated or sold) or energy efficiencies covered entities must achieve, or the fuel types (e.g. wind, solar, geothermal) or technologies (e.g. carbon capture and storage) they must use.

    In practice, however, cap-and-trade legislation typically contains buckets of command-and-control provisions. For example, the Waxman-Markey cap-and-trade bill (about which more later) mandates electric generation from renewable sources and imposes tough new efficiency standards for buildings, appliances, transport systems, and industry.

    More fundamentally, as my colleague Myron Ebell points out in his testimony on Waxman-Markey, cap-and-trade is not really “market-based.” Cap-and-trade “subordinates markets to central planning. It takes the most important economic decisions [e.g. what kinds of energy technologies will dominate the market and how much consumers will have to pay for energy] out of the hands of private individuals acting in the market and puts them in the hands of government.”

    Far from being “based” on the market, cap-and-trade would effect a gigantic expansion of government power and control over markets. The “cap” in cap-and-trade creates a government-run rationing system for the carbon-based fuels that supply 85% of our energy. Our liberties are at risk, as Myron explains in his testimony:

    If enacted, Title III [the cap-and-trade portion of Waxman-Markey] would be the single largest government intervention in the economy since the Second World War. That was the last time–and we hope it remains the last time–when people had to present ration coupons in order to buy gasoline (and many other products including cars, tires, sugar, coffee, meat, cheese, butter, and shoes). While the debate has focused on costs, far too little attention has been paid to the extent that political and economic freedoms would be lost or impinged upon under cap-and-trade. I urge the Committee and the House to consider seriously and deeply the threat to our liberties posed by putting government in charge of how much and what type of energy we can consume.

    Today’s Policy Peril excerpt

    In today’s Policy Peril film excerpt, Dr. David Kreutzer, an economist with the Heritage Foundation, discusses his team’s analysis of the Lieberman-Warner bill (S. 2191), the leading cap-and-trade bill of 2008. You can view today’s film clip here. To watch Policy Peril from start to finish, click here. Previous posts in this series are available immediately below.

  • Policy Peril: Looking for an antidote to An Inconvenient Truth? Your search is over
  • Policy Peril Segment 1: Heat Waves
  • Policy Peril Segment 2: Air Pollution
  • Policy Peril Segment 3: Hurricanes
  • Policy Peril Segment 4: Sea-Level Rise
  • Policy Peril Segment 5: Is the Science Debate “Over”?
    Enough preliminaries; here’s the text of today’s film excerpt:

    Narrator: Okay, so the global warming scare is built on speculation and hype. Now let’s look at the other side of the equation–the policies being promoted to combat global warming. What are they, and what are the associated risks?

    Several bills in Congress call for deep emission cuts by 2050. The most prominent in 2008 was the Lieberman-Warner bill. It would require a 70% emissions cut.

    Dr. David Kreutzer (Heritage Foundation): When we analyzed the impact of the Lieberman-Warner bill, we found three things: Incomes go down, taxes go up, and jobs go away.

    Narrator: Lieberman-Warner would reduce cumulative U.S. GDP by $5 trillion from 2012 to 2030. Let’s put that in perspective. A typical hurricane striking a U.S. coastal community does about $5 billion in damage.

    In the portion of the film just after today’s clip, Dr. Kreutzer compares the economic damage from Lieberman-Warner to that caused by a typical landfalling hurricane:

    Dr. Kreutzer: Well, adjusting for increases in wealth over the next 20 years, that means that the damage done by Lieberman-Warner in economic terms is the equivalent of over 600 hurricanes. Now, normally we have slightly less than two hurricanes per year that make landfall. So this is orders-of-magnitude worse than the damage that would be done by these weather storms, the hurricanes. That’s a big hit to the economy.  

    Commentary

    Cap-and-trade is an energy tax

    The Heritage Foundation study of Lieberman-Warner is available here. The Heritage folks point out what should be obvious. Eighty-five percent of U.S. energy comes from carbon-based (greenhouse gas-emitting) fuels. Capping emissions therefore means capping (restricting) energy use and/or compelling suppliers and consumers to switch from lower-cost fossil fuels to more expensive “alternative” energy sources. 

    Cap-and-trade “works” (reduces emissions) by making carbon-based energy more costly for consumers. Peter Orszag, President Obama’s budget director, unequivocally affirmed this point in his April 24, 2008 Senate Finance Committee testimony (p. 3) when he was Director of the Congressional Budget Office (CBO):

    Under a cap-and-trade program, firms would not ultimately bear most of the costs of the allowances but instead would pass them along to their customers in the form of higher prices. Such price increases would stem from the restriction on emissions and would occur regardless of whether the government sold emission allowances or gave them away. Indeed, the price increases would be essential to the success of a cap-and-trade program because they would be the most important mechanism through which businesses and households would be encouraged to make investments and behavior changes that reduced CO2 emissions.

    Barack Obama put the point more bluntly in January 2008, when campaigning as a presidential candidate. He said:

    Under my plan of a cap-and-trade system, electricity rates would necessarily skyrocket . . . because I’m capping greenhouse gases, coal power plants, natural gas — you name it — whatever the plants were, whatever the industry was, they would have to retrofit their operations. That will cost money; they will pass that money on to consumers.   

    In short, cap-and-trade is an energy tax by another name. As Myron likes to say: “There are three things you need to know about cap-and-trade: It’s a tax, it’s a tax, it’s a tax.” And since energy is the lifeblood of modern economies, energy taxes or their regulatory equivalent unavoidably raise consumer prices, reduce economic output, and reduce employment.

    Energy tax impacts 

    The Heritage study estimated the following impacts from the cap-and-trade component of Lieberman-Warner:

    • Cumulative GDP losses are at least $1.7 trillion and could reach $4.8 trillion by 2030 (in inflation-adjusted 2006 dollars).
    • Single-year GDP losses hit at least $155 billion annually and could exceed $500 billion (in inflation-adjusted 2006 dollars).
    • Annual job losses exceed 500,000 before 2030 and could approach 1,000,000.
    • The average household will pay $467 more each year for its natural gas and electricity (in inflation-adjusted 2006 dollars).

    A study by the National Association of Manufacturers and the American Council for Capital Formation came to similar conclusions. According to NAM/ACCF, Lieberman-Warner would:  

    • Raise natural gas prices for residential consumers by 26% to 36% in 2020, and 108% to 146% in 2030.
    • Raise electricity prices for residential consumers by 28% to 33% in 2020, and 101% to 129% in 2030.
    • Raise gasoline prices by 29% or $1.10 (based on prices prevailing as of June 2008).
    • Reduce GDP growth by $151 billion to $210 billion in 2020, and $631 billion to $669 billion in 2030 (in 2007 dollars).
    • Reduce net job creation by 1.2 million to 1.8 million in 2020, and 3 million to 4 million in 2030.

    Charles River Associates also projected heavy economic impacts. In their analysis, Lieberman-Warner would:

    • Reduce real annual household spending by an average of $800 to $1,300 in 2015.
    • Reduce GDP by $160 billion to $250 billion in 2015.
    • Produce net job losses of 1.5 million to 2.3 million in 2015.

    The frothings of right-wing paranoia, you say? Well, then EPA, too, must be part of the vast right-wing conspiracy. In EPA’s analysis, Lieberman-Warner would:

    • Increase gasoline prices by $0.53 a gallon in 2030.
    • Reduce U.S. GDP by $238 billion to $983 billion in 2030.
    • Increase electricity prices by 44% in 2030.

    All pain for no gain 

    All in all, not a pretty picture! Yet Lieberman-Warner would have no measurable impact on global temperatures for many decades, if ever. Assuming for a moment the correctness of the scientific basis for these policies, Lieberman-Warner would prevent 0.013ºC of global warming by 2050, Dr. Patrick Michaels estimates. Even if all industrialized countries adopted Lieberman-Warner, the total global warming averted would be 0.11ºC by 2050–too little for scientists to detect.

    With this abysmal cost-benefit ratio (trillions in costs for undetectable global warming reductions), it is small wonder that S. 2191 died in the Senate in June 2008. 

    Rube Goldberg Green

    But perhaps the main reason Lieberman-Warner fizzled is that the U.S. Chamber of Commerce exposed the bill as a Rube Goldberg scheme rife with mandates, regulation, and red tape. The Chamber’s Lieberman-Warner flow chart is one of those pictures worth a thousand words. Please take a moment to behold the infernal complexity of it all!

    The sausage factory known as the “legislative process” always mingles and mangles cap-and-trade with prescriptive mandates, special-interest carve outs, and bureaucratic empire building.

    Rent seeking

    Special-interest manipulation and gaming are an unavoidable affliction. Consider Europe’s emissions trading system (ETS), which was a bonanza for special interests during the first three years of its operation (2005 to 2007). In Europe’s Dirty Secret: Why the EU Emissions Trading Scheme isn’t working, the British think tank Open Europe details a host of abuses, including:

    • Governments over-allocated allowances to domestic firms (to reduce costs and create competitive advantage), collapsing credit prices from €33 to €0.20 per ton, “meaning that the system did not reduce emissions at all.”
    • Utilities got free allocations, passed the imaginary costs on to customers in the form of higher electric rates, and then sold the coupons they didn’t need — double dipping at the expense of industrial manufacturers and consumers.
    • Small institutions like hospitals did not get free coupons and ended up subsidizing well-connected energy companies.

    Dr. Kreutzer’s colleague Ben Lieberman (who also appears in Policy Peril) testified recently before the Senate Foreign Relations Committee on Europe’s experience with cap-and-trade. Ben’s take on the hearing is a knee-slapper:

    I was the only one on the panel who thought the problems in Europe were not fixed. The representative from Shell said that the original problem was the over-allocation of free allowances, which has since been corrected–and he then argued for more free allocations for refiners. A BASF representative also said the problem with free allocations had been fixed–and went on to say that the chemical industry needs more free allocations.

    The Heritage Foundation analysis of Lieberman-Warner also found that it would transfer immense wealth from consumers to special interests. Later on in Policy Peril, Dr. Kreutzer comments:

    Dr. Kreutzer: The Lieberman-Warner bill also enacts a huge transfer from the consumers of energy to groups that are picked out–special interest groups that Congress would designate. So after America has lost $5 trillion in income, there will be another $5 trillion taken and transferred from energy consumers.

    Regressive

    Because even an idealized cap-and-trade program is the regulatory equivalent of an energy tax, its economic impact is regressive, meaning that it imposes a relatively greater burden on poor households, who spend a larger share of their income on energy and other basic necessities. The Congressional Budget Office (CBO) report, Tradeoffs in Allocating Allowances for CO2 Emissions (April 2007), is crystal clear on the point:

    Regardless of how allowances were distributed, most of the cost of meeting a cap on CO2 emissions would be borne by consumers, who would face persistently higher prices for products such as electricity and gasoline. Those price increases would be regressive in that poorer households would bear a larger burden relative to their income than wealthier households would.
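
    The regressivity point is arithmetic. A toy comparison with invented household numbers:

```python
# Invented numbers for illustration: a uniform 25% energy price increase
# takes a larger share of the poorer household's income, even though the
# richer household pays more dollars.
households = {
    "lower income":  {"income": 20_000,  "energy_spend": 2_000},
    "higher income": {"income": 100_000, "energy_spend": 4_000},
}
PRICE_INCREASE = 0.25

# Extra energy cost as a share of each household's income.
burden = {
    name: h["energy_spend"] * PRICE_INCREASE / h["income"]
    for name, h in households.items()
}
# burden: lower income pays 2.5% of income; higher income pays 1.0%.
```

    The poorer household pays fewer dollars ($500 vs. $1,000) but a larger share of income, which is what “regressive” means here.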

    Mirage of regulatory predictability

    Proponents spout a lot of happy chatter about how cap-and-trade will create a “predictable” regulatory framework for businesses, because Congress will specify in advance how much and how fast emissions must decline. But this claim ignores the enormous potential of cap-and-trade bills to spawn a new era of regulatory litigation, creating uncertainty and delays for business investment. Have a look again at the U.S. Chamber chart of Lieberman-Warner. The bill contains 300 regulations and mandates, each of which must go through the bureaucratic process illustrated in the center of the chart. Many of those rulemakings would likely be litigated. 

    Moreover, the “predictability” most important to business is cost predictability. Uncertainty regarding compliance costs makes it difficult for businesses to plan and attract capital for major projects. Key point: A cap produces cost uncertainty precisely to the extent that it achieves emissions certainty.

    That is, when the quantity of emissions is fixed by law, covered firms have to comply regardless of what it costs, and any number of factors outside the covered entity’s control — unseasonable weather, natural disasters, energy crises, business cycles — can affect cost.

    Proponents of greenhouse gas cap-and-trade schemes tout the Clean Air Act’s Acid Rain sulfur dioxide (SO2) emissions trading system as a model. But as Ken Green, Stephen Hayward, and Kevin Hassett of the American Enterprise Institute point out:

    SO2 trading prices have varied from a low of $70 per ton in 1996 to a high of $1500 per ton in late 2005. SO2 allowances have a monthly volatility of 10 percent and an annual volatility of 43 percent over the last decade.  

    The potential for cap-and-trade to generate allowance-price volatility — hence energy-price volatility — is vast. As Green, Hayward, and Hassett also note, in 1994, California’s South Coast Air Quality Management District (SCAQMD) launched RECLAIM (Regional Clean Air Incentives Market), an emissions trading program for SO2 and nitrogen oxides (NOx). SCAQMD estimated that SO2 and NOx emissions would be reduced by 14 and 8 tons per day, respectively, by 2003, at half the cost of prescriptive, command-and-control approaches. The authors comment:

    RECLAIM never came close to operating as predicted and was substantially abandoned by 2001. Between 1994 and 1999, NOx emissions fell only 3 percent, compared to a 13 percent reduction in the five years before RECLAIM. There was extreme price volatility aggravated by California’s electricity crisis of 2000. NOx permit prices ranged from $1,000 to $4,000 per ton between 1994 and 1999, but soared to an average price of $45,000 per ton in 2000, with some individual trades over $100,000 per ton. Such high prices were not sustainable, and SCAQMD removed electric utilities from RECLAIM in 2001.

    Waxman-Markey: impacts and offsets

    The big kahuna of cap-and-trade bills this year is the American Clean Energy and Security Act (ACES), H.R. 2454, commonly known as Waxman-Markey for its co-sponsors, House Energy and Commerce Chairman Henry Waxman (D-CA), and Energy & Environment Subcommittee Chairman Ed Markey (D-MA).

    On March 31, 2009, Waxman and Markey circulated a “discussion draft” of ACES. On May 13, 2009, Dr. Kreutzer and the Heritage team published their economic impact assessment of the cap-and-trade provisions. The discussion draft cap-and-trade program aimed to reduce greenhouse gas emissions from covered sources 20% below 2005 levels by 2020, 42% below by 2030, and 83% below by 2050. The Heritage analysis projected that, by 2035, the bill would:

    • Reduce cumulative GDP by $7.5 trillion.
    • Lower average annual employment by 844,000 jobs, reducing employment by 1.9 million jobs in peak years.
    • Raise electricity rates 90% after adjusting for inflation.
    • Raise inflation-adjusted gasoline prices by 74%.
    • Raise an average family of four’s yearly energy bill by $1,500.
    • Increase inflation-adjusted federal debt by 29%, or $33,400 additional debt per person.

    A key uncertainty in estimating the economic impacts of a cap-and-trade program is the extent to which covered entities may meet their obligations by earning or purchasing “offsets.” An offset is a credit for greenhouse gas-reducing investments in economic sectors or geographic regions not subject to the cap. For example, offsets may be awarded for investing in tree plantations in developing countries (trees remove CO2 from the air).

    The Breakthrough Institute contends that the offset provisions in Waxman-Markey are so generous they all but eliminate any real constraint on U.S. domestic CO2 emissions until 2025 or 2030. Indeed, the bill authorizes up to 2 billion tons in offsets for domestic projects and 1.5 billion tons in offsets for international projects. (All of which, incidentally, is tacit admission that the costs of cap-and-trade can be severe and must in some way be mitigated or limited.)

    Other analysts note that offsets are highly susceptible to fraud and creative accounting. For example, a Chinese company might increase its emissions of hydrochlorofluorocarbons (HCFCs), which are very potent synthetic greenhouse gases, just so offset-seeking U.S., European, and Japanese businesses can pay the Chinese company to reduce those emissions. Assuring the integrity of an offset is “challenging,” says the Government Accountability Office (GAO), “because it involves measuring the reductions achieved through an offset project against a projected baseline of what would have occurred in its absence.” The House of Representatives had an offset program to achieve “carbon neutrality,” but abandoned it after finding out the program was paying farmers to do what they would do anyway (use tilling practices that keep the carbon buried in the soil). Award enough dubious offsets, and the Waxman-Markey cap becomes a leaky sieve.

    On the other hand, the Heritage Foundation’s May 13, 2009 study argues that the bill, perhaps recognizing the potential for fraud, “includes significant hurdles for those wishing to use offsets.” Heritage assumes in its analysis that offsets will alleviate the stringency of the caps by 15%.

    Charles River Associates (CRA), in a May 2009 study commissioned by the U.S. Black Chamber of Commerce, assumes full use of international offsets, notwithstanding well-known “difficulties in measuring, verifying, and ensuring the permanence” of the emission reductions claimed for such projects. Under this assumption, total U.S. emissions from 2012 to 2050 would exceed the cap by about 30%–double the 15% assumed in the Heritage analysis. Accordingly, the CRA study of Waxman-Markey, as introduced on May 15, 2009, projected smaller although still significant economic impacts. 

    Under the Waxman-Markey cap-and-trade program, CRA estimates:

    • Retail natural gas rates would increase by 10% in 2015, 16% in 2030, and 34% in 2050 relative to the baseline in the Energy Information Administration’s (EIA) Annual Energy Outlook 2009 (AEO09).
    • Retail electric rates would increase by 7.3% in 2015, 22% in 2030, and 45% in 2050 relative to the AEO09 baseline.
    • The per-gallon cost of gasoline would increase by 12 cents in 2015, 23 cents in 2030, and 59 cents in 2050 relative to baseline levels.
    • U.S. employment would decline by 2.3 million to 2.7 million jobs in each year of the policy through 2030 relative to baseline levels (even after accounting for “green job” creation).
    • Average wages would decline by $170 in 2015, $390 in 2030, and $960 in 2050 relative to baseline levels.
    • Average household purchasing power would decline by $730 in 2015, $830 in 2030, and $940 in 2050 relative to baseline levels.
    • GDP in 2030 would be 1.1% or $350 billion lower than the baseline level.

    Rejected consumer protections

    Waxman and Markey introduced their bill in the House on May 15 and the House Energy and Commerce Committee approved a marked-up (amended) text on June 5. It is quite revealing what amendments the Committee rejected.

    On largely party-line votes, Committee Democrats voted down:

    • Rep. Fred Upton’s (R-MI) amendment suspending the Act if the EPA Administrator determines that the U.S. unemployment rate has reached 15% as a result of the Act.
    • Rep. Lee Terry’s (R-NE) amendment suspending the Act if the price of gasoline exceeds $5 a gallon.
    • Rep. Roy Blunt’s (R-MO) amendment suspending the Act if retail electricity prices increase by more than 10%.

    Waxman-Markey grows and grows

    Heritage Foundation’s analysis of Waxman-Markey as approved by the House Energy and Commerce Committee on June 5 is available here. To obtain the votes needed for passage, Waxman and Markey and House Speaker Nancy Pelosi (D-CA) kept expanding the bill with more and more goodies for utilities and other affected interests. Between Committee approval on June 5 and placement on the House Calendar on June 19, the bill grew from 742 pages to about 1,200 pages. Then, at 3:00 a.m. the night before the House floor vote on June 26, the bill grew by almost 300 pages, finally weighing in at 1,427 pages. Most House members had no idea what they were voting on. Waxman-Markey as passed is so complicated that CBO needed 156 closely-printed pages just to summarize the bill’s provisions.

    On August 6, 2009, David Kreutzer and his Heritage Foundation colleagues (Karen Campbell, William Beach, Ben Lieberman, and Nicolas Loris) released their analysis of Waxman-Markey as passed. The results are not too different from their initial analysis of the Waxman-Markey discussion draft. Under Waxman-Markey as passed:

    • The bill imposes a de facto energy tax on the U.S. economy costing $5.7 trillion during 2012-2035.
    • Cumulative GDP losses are $9.4 trillion between 2012 and 2035.
    • Single-year GDP losses are $400 billion in 2025 and will ultimately exceed $700 billion.
    • Net job losses approach 1.9 million in 2012 and could approach 2.5 million in 2035.
    • A family of four on average will pay $839 more per year on energy-related utility costs.
    • Cumulative manufacturing output is $585 billion lower than the baseline amount by 2035.
    • Gasoline prices will rise by 58% ($1.38 more per gallon) and residential electricity rates will rise by 90%.

    A report by the American Council for Capital Formation (ACCF) and the National Association of Manufacturers (NAM), using the National Energy Modeling System (NEMS) developed by the Energy Information Administration (EIA), arrives at similar results:

    • In 2030, inflation-adjusted GDP is reduced by 1.8% ($419 billion) under a low-cost scenario and by 2.4% ($571 billion) under a high cost scenario compared to the baseline forecast. For perspective, Social Security payments to retirees in 2008 totaled $612 billion.
    • Cumulative GDP losses during 2012-2030 range from $2.2 trillion under the low-cost case to $3.1 trillion under the high cost case.
    • In 2030, industrial output levels are reduced by between 5.3% and 6.5% under the low- and high-cost scenarios.
    • Even when “green jobs” are factored in, total U.S. employment averages 420,000 to 610,000 fewer jobs each year under the low- and high-cost scenarios than under the baseline forecast. By 2030, there are between 1,790,000 and 2,440,000 fewer jobs overall.
    • Electricity prices are 5% to 8% higher by 2020, and by 2030 electricity prices are between 31% and 50% higher.
    • In 2030, household income declines by $730 in the low-cost case and by $1,248 in the high-cost case.

    Postage stamp per day?

    You may have heard from supporters that Waxman-Markey would cost the average family only $175 per year in 2020, or about a postage stamp per day, according to analyses by the Congressional Budget Office (CBO) and the EPA. That’s a small price to pay, we’re told, to save the planet!

    The Heritage team’s rebuttal is worth quoting at length. Here’s their take on the EPA analysis:

    First, the EPA employs a technique from the financial world called “discounting” to reduce the value [of the Waxman-Markey economic impacts]. For example, the EPA estimates that the inflation-adjusted cost per household in 2050 will be $1,287. However, after this value is discounted to the present, the cost is $140 per household . . . If a household must pay $1,287 in 2050, the $140 represents the amount that household would have to pay into an interest-bearing account today so that the interest would allow it to grow to $1,287 by 2050. Discounting can be a legitimate tool for cost-benefit and investment analysis where costs are paid and benefits are received at different times. Thus, both are discounted to the same point in time and compared. Without discounted environmental impacts for comparison, using the technique here does little except undercount the cost that families will actually pay in 2050.

    Second, the EPA measures consumption, not income. The broadest and best measure of cost is lost income–lost GDP. Consumption only comes after taxes and savings are deducted. Ignoring lost savings and lost payments for government services underestimates the cost by about 40%.

    Third, the EPA measures cost per household. Households are not necessarily families. One person living alone counts as a household, as do three single people sharing an apartment. The EPA uses an average household size of 2.6 people. Converting from this EPA household size to a family of four adds more than 50% to the cost estimate.

    So, EPA’s $174 cost per household is actually above $2,700 (even after adjusting for inflation) when presented as lost income per family of four. That is not a postage stamp per day.
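
    The Heritage adjustment chain can be roughly reproduced from the figures quoted above. This is a reconstruction under stated assumptions: the discount rate is inferred from the $140/$1,287 pair rather than stated, and a 41-year horizon (2009 to 2050) is assumed.

```python
# Rough reconstruction of the adjustment chain, using figures quoted above.
# Assumption: a 41-year horizon (2009-2050); the discount rate is inferred.
cost_2050_per_household = 1287.0   # EPA's undiscounted 2050 cost per household

# Step 1: the discount rate implied by $140 today growing to $1,287 by 2050.
implied_rate = (1287.0 / 140.0) ** (1 / 41) - 1    # roughly 5-6% per year

# Step 2: consumption understates income-based cost by about 40%.
income_based = cost_2050_per_household * 1.4

# Step 3: convert a 2.6-person household to a family of four (adds ~54%).
per_family_of_four = income_based * (4 / 2.6)      # about $2,772
```

    This is one way to arrive at the “above $2,700” figure in the text: roughly $2,772 in lost income per family of four in 2050, versus the $140-per-household headline number.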

    Regarding the CBO analysis, the Heritage team writes:

    The CBO study, on the other hand, does not even attempt a comprehensive measure of lost income and it explicitly states so in footnote 3 of its report . . . The CBO’s methodology effectively measures the administrative costs of collecting and distributing the allowances rather than the full economic cost.

    Additional commentary by Dr. Kreutzer on the CBO and EPA analyses is available here, here, and here.

    More pain for no gain

    A final observation: Even if you think global warming is a big problem, Waxman-Markey would have no discernible effect on global temperatures or sea-level rise, even if all industrialized nations adopted it. Paul C. Knappenberger, my colleague at the free-market energy blog MasterResource.org, has written brilliantly and extensively on these matters (see here, here, here, here, and here).

    Today’s post in my series of commentaries on excerpts from CEI’s film, Policy Peril: Why Global Warming Policies Are More Dangerous Than Global Warming Itself, challenges the Gorethodox dogma that the science debate on global warming is “over.”

    There are three basic issues in the climate change science debate:

    • Detection – Has the world warmed, and if so, by how much?
    • Attribution – How much of the observed warming (especially since the mid-1970s) is due to increases in atmospheric greenhouse gas concentrations?
    • Sensitivity – How much additional warming should we expect from continuing increases in greenhouse gas concentrations?

    Despite what you’ve heard over and over again, these basic issues are unsettled, and more so now than at any time in the past decade. The science debate is not “over.” Reports of the death of climate skepticism have been greatly exaggerated.

    Because of time constraints (Policy Peril runs under 40 minutes), the film briefly explores only the most important of the three basic issues: climate sensitivity. Today’s clip comes from that part of the film: an interview with University of Alabama in Huntsville atmospheric scientist Dr. Roy Spencer. To watch the Spencer interview, click here. To watch the entire movie, click here.

    Here’s how this post is organized. First, I’ll reproduce the text of Spencer’s interview. Then, I’ll review some recent research bearing on the three fundamental science issues: detection, attribution, and sensitivity.

    Text of today’s film clip:

    Narrator: All the IPCC models assume that a CO2-induced warming will produce more high-altitude cirrus clouds, which then trap even more heat in the atmosphere. This is what’s called a positive climate feedback. Roy Spencer and his colleagues use satellites to study cirrus cloud behavior.

    Dr. Roy Spencer (University of Alabama in Huntsville): Last August, August of 2007, we published research which showed from a whole bunch of satellite data that when the tropical atmosphere heats up–there are these periods when the atmosphere heats up from more rain activity or cools down from less rain activity–that when it heats up, the skies actually open up. The cirrus clouds that are up high, in the troposphere, in the upper atmosphere, open up and let more cooling infrared radiation escape to space. And it was a very strong effect.

    Narrator: Spencer says that if climate models incorporated the negative feedback his team discovered, the models might forecast 75% less warming.

    This is definitely not the Al Gore view of climate sensitivity. In fact, in An Inconvenient Truth (p. 67), Gore suggests we could get “three times as much” warming by mid-century as has occurred since the “depth of the last ice age.” That would mean a warming of 10ºC-12ºC by mid-century! Gore’s implicit warming forecast goes way beyond the IPCC best-estimate forecast range of 1.8ºC to 4.0ºC (IPCC AR4, WGI, Summary for Policymakers, p. 13). As we’ll see below, several strands of evidence suggest that the IPCC models are also too “hot.”

    Detection

    The world has warmed overall during the past 130 years, as evidenced by melting glaciers, longer growing seasons, and both proxy and instrumental data. However, the main era of “anthropogenic” global warming supposedly began in the mid-1970s, and ongoing research by retired meteorologist Anthony Watts leaves no doubt that in recent decades, the U.S. surface temperature record–reputed to be the best in the world–is unreliable and riddled with false warming biases.

    Watts and a team of more than 650 volunteers have visually inspected and photographically documented 1,003, or 82%, of the 1,221 climate monitoring stations overseen by the National Weather Service. In a report summarizing an earlier phase of the team’s investigation (a survey of 860+ stations), Watts says, “We were shocked by what we found.” He explains:

    We found stations located next to exhaust fans of air conditioning units, surrounded by asphalt parking lots and roads, on blistering-hot rooftops, and near sidewalks and buildings that absorb and radiate heat. We found 68 stations located at wastewater treatment plants, where the process of waste digestion causes temperatures to be higher than in surrounding areas.

    In fact, we found that 89 percent of the stations–nearly 9 of every 10–fail to meet the National Weather Service’s own siting requirements that stations must be 30 meters (about 100 feet) or more away from an artificial heating or radiating/reflecting heat source. In other words, 9 of every 10 stations are likely reporting higher or rising temperatures because they are badly sited.

    “It gets worse,” Watts continues:

    We observed that changes in the technology of temperature stations over time also have caused them to report a false warming trend. We found gaps in the data record that were filled in with data from nearby sites, a practice that propagates and compounds errors. We found that adjustments to the data by both NOAA and another government agency, NASA, cause recent temperatures to look even higher.

    How big a problem is this? According to Watts, “The errors in the record exceed by a wide margin the purported rise in temperature of 0.7ºC (about 1.2ºF) during the twentieth century.” Based on analysis of 948 stations rated as of May 31, 2009, Watts estimates that 22% of stations have an expected error of 1ºC, 61% have an expected error of 2ºC, and 8% have an expected error of 5ºC.

    [Figure: watts_fig23]

    Watts concludes that, “this record should not be cited as evidence of any trend in temperature that may have occurred across the U.S. during the past century.” He further concludes: “Since the U.S. record is thought to be ‘the best in the world,’ it follows that the global database is likely similarly compromised and unreliable.”

    A related issue is the influence of urban heat islands on long-term temperature records. Climate Change Reconsidered, a report by the Nongovernmental International Panel on Climate Change (NIPCC), written by Drs. Craig Idso and S. Fred Singer with 35 contributors and reviewers, reviews more than 40 studies on urban heat islands. For example, a study by Oke (1973) of the urban heat island strength of 10 settlements in the St. Lawrence Lowlands of Canada found that a population as small as 1,000 people could generate a heat island effect of 2ºC-2.5ºC. From this study and the others reviewed, the NIPCC concludes:

    It appears almost certain that surface-based temperature histories of the globe contain a significant warming bias introduced by insufficient corrections for the non-greenhouse-gas-induced urban heat island effect. Furthermore, it may well be impossible to make proper corrections for the deficiency, as the urban heat island of even small towns dwarfs any concomitant augmented greenhouse effect that may be present [p. 95; emphasis in original].

    In a comment submitted to EPA regarding its proposed endangerment finding, University of Alabama in Huntsville (UAH) atmospheric scientist John Christy notes two additional reasons to conclude that the IPCC surface data records exaggerate warming trends:

    As a culmination of several papers and years of work, Christy et al. (2009) demonstrates that popular surface datasets overstate the warming that is assumed to be greenhouse related for two reasons. First, these datasets use only stations that are electronically (i.e. easily) available, which means the unused, vast majority of stations (usually more rural and representative of actual trends but harder to find) are not included. Secondly, these popular datasets use the daily mean surface temperature (TMean) which is the average of the daytime high (TMax) and nighttime low (TMin). In this study (and its predecessors, Christy 2002, Christy et al. 2006, Pielke Sr. et al. 2008, Walters et al. 2007 and others) we show that TMin is seriously impacted by surface development, and thus its rise is not an indicator of greenhouse gas forcing. Some have called this the Urban Heat Island effect, but, as described in Christy et al. 2009, it is much more than this and encompasses any development of the surface (e.g. irrigated agriculture).

    For example, the UK Hadley Center, relying on two electronic surface stations, computed a TMax temperature trend in East Africa of 0.14ºC per decade during 1905-2004. Christy, using data from 45 stations, found a trend of only 0.02ºC per decade.

    [Figure: christy-uah-v-hadcrut3]

    In California, Christy found that the only significant warming trend is for TMin in the irrigated San Joaquin Valley. Note that in the non-irrigated Sierra mountains, where models project that greenhouse gas-induced warming should occur, there is actually a decreasing temperature trend.

    [Figure: christy-tmin-ca]

    Obviously, temperature data are the starting point of any analysis of global warming. But if we can’t trust the U.S. and IPCC temperature records, how do we know how much global warming has actually occurred?

    Satellite observations are not influenced by heat islands and irrigation, or subject to the quality-control problems detailed by Watts. Moreover, satellite records tally well with weather balloon observations–an independent database. So maybe detection should be based solely on satellite data, which do show some warming over the past 30 years. However, the “debate is over” crowd is unlikely to embrace this solution. The satellite record shows a relatively slow rate of warming–about 0.13ºC per decade–hence a relatively insensitive climate.

    [Figure: uah-temperature-anomalies-jan-1979-june-2009]

    Moreover, as can be seen in the above chart of the University of Alabama in Huntsville (UAH) satellite record, some of the 0.13ºC/decade “trend” comes from the 1998 El Niño warming pulse. Remove 1998, and the 30-year satellite record trend drops to 0.12ºC/decade.

    Attribution

    The IPCC, the leading spokesman for the alleged scientific consensus, claims that, “Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.” How does the IPCC know this? The IPCC offers three main reasons.

    First, according to the IPCC, “Paleoclimate reconstructions show that the second half of the 20th century was likely the warmest 50-year period in the Northern Hemisphere in 1300 years” (IPCC AR4, WGI, Chapt. 9, p. 702). The warmth of recent decades coincided with a rapid increase in GHG concentrations. Therefore, the IPCC reasons, most of the recent warming is likely due to anthropogenic GHG emissions.

    This argument is unconvincing if the warming of recent decades is not unusual or unprecedented in the past 1300 years. As it happens, numerous studies indicate that the Medieval Warm Period (MWP)–roughly the period from AD 800 to 1300, with peak warmth occurring about AD 1050–was as warm as or warmer than the Current Warm Period (CWP).

    The Center for the Study of Carbon Dioxide and Global Change has analyzed more than 200 peer-reviewed MWP studies produced by more than 660 individual scientists working in 385 separate institutions from 40 countries. The Center divides these studies into three categories–those with quantitative data enabling one to infer the degree to which the peak of the MWP differs from the peak of the CWP (Level 1), those with qualitative data enabling one to infer which period was warmer (Level 2), although not by how much, and those with data enabling one to infer the existence of a MWP in the region studied (Level 3). An interactive map showing the sites of these studies is available at CO2Science.org.

    Only a few Level 1 studies determined the MWP to have been cooler than the CWP; the vast majority indicate a warmer MWP. On average, the studies indicate that the MWP was 1.01ºC warmer than the CWP.

    [Figure: mwpquantitative]

    Figure Description: The distribution, in 0.5ºC increments, of Level 1 studies that allow one to identify the degree to which peak MWP temperatures either exceeded (positive values, red) or fell short of (negative values, blue) peak CWP temperatures.

    Similarly, the vast majority of Level 2 studies indicate a warmer MWP:

    [Figure: mwpqualitative]

    Figure Description: The distribution of Level 2 studies that allow one to determine whether peak MWP temperatures were warmer than (red), equivalent to (green), or cooler than (blue), peak CWP temperatures.

    The IPCC’s second main reason for attributing most recent warming to the increase in GHG concentrations is that climate models “cannot reproduce the rapid warming observed in recent decades when they only take into account variations in solar output and volcanic activity. However . . . models are able to simulate observed 20th century changes when they include all of the most important external factors, including human influences from sources such as greenhouse gases and natural external factors” (IPCC, AR4, WGI, Chapt. 9, p. 702).

    This would be decisive if today’s models accurately simulate all important modes of natural variability. In fact, models do not accurately simulate the behavior of clouds and ocean cycles. They may also ignore important interactions between the Sun, cosmic rays, and cloud formation.

    Richard Lindzen of MIT spoke to this point at the Heartland Institute’s recent (June 2, 2009) Third International Conference on Climate Change:

    What was done [by the IPCC], was to take a large number of models that could not reasonably simulate known patterns of natural behavior (such as ENSO, the Pacific Decadal Oscillation, the Atlantic Multi-Decadal Oscillation), claim that such models nonetheless adequately depicted natural internal climate variability, and use the fact that models could not replicate the warming episode from the mid seventies through the mid nineties, to argue that forcing was necessary and that the forcing must have been due to man. The argument makes arguments in support of intelligent design seem rigorous by comparison.

    “Fingerprint” studies are the third basis on which the IPCC attributes most recent warming to anthropogenic greenhouse gases. Climate models project a specific pattern of warming through the vertical profile of the atmosphere–a greenhouse “fingerprint.” If the observed warming pattern matches the model-projected fingerprint, that would be strong evidence that recent warming is anthropogenic. Conversely, notes the NIPCC, “A mismatch would argue strongly against any significant contribution from greenhouse gas (GHG) forcing and support the conclusion that the observed warming is mostly of natural origin” (NIPCC, p. 106).

    Douglass et al. (2007) compared model-projected and observed warming patterns in the tropical troposphere. The observed pattern is based on three compilations of surface temperature records, four balloon-based records of the surface and lower troposphere, and three satellite-based records of various atmospheric layers–10 independent datasets in all.

    “While all greenhouse models show an increasing warming trend with altitude, peaking around 10 km at roughly two times the surface value,” observes the NIPCC, “the temperature data from balloons give the opposite result; no increasing warming, but rather a slight cooling with altitude” (p. 107). See the figures below.

    [Figure: hot-spot]

    The mismatch between the model-predicted greenhouse fingerprint and the observed pattern is profound. As the Douglass team explains: “Model results and observed temperature trends are in disagreement in most of the tropical troposphere, being separated by more than twice the uncertainty of the model mean. In layers near 5 km, the modeled trend is 100% to 300% higher than observed, and above 8 km, modeled and observed trends have opposite signs.”

    [Figure: douglass]

    Figure description: Temperature trends for the satellite era (ºC/decade). HadCRUT, GHCN, and GISS are compilations of surface temperature observations. IGRA, RATPAC, HadAT2, and RAOBCORE are balloon-based observations of the surface and lower troposphere. UAH, RSS, and UMD are satellite-based data for various layers of the atmosphere. The 22-model average comes from an ensemble of 22 model simulations from the most widely used models worldwide. The red lines are the +2 and -2 standard errors of the mean from the 22 models. Source: Douglass et al. 2007.

    The NIPCC concludes that the mismatch of observed and model-calculated fingerprints “clearly falsifies the hypothesis of anthropogenic global warming (AGW)” (p. 108). I would put the state of affairs more cautiously. In view of (1) significant evidence that the MWP was as warm as or warmer than the CWP, (2) the inability of climate models to simulate important modes of natural variability, and (3) the failure of observations to confirm a greenhouse fingerprint in the tropical troposphere, the IPCC claim that “most” recent warming is “very likely” anthropogenic should be considered a boast rather than a balanced assessment of the evidence.

    Climate Sensitivity

    The most important unresolved scientific issue in the global warming debate is how sensitive (reactive) the climate is to increases in GHG concentrations.

    Climate sensitivity is typically defined as the global average surface warming following a doubling of carbon dioxide (CO2) concentrations above pre-industrial levels. The IPCC says a doubling is likely to produce warming in the range of 2ºC to 4.5ºC, with a most likely value of about 3ºC (IPCC, AR4, WGI, Chapt. 10, p. 749). The IPCC presents a range rather than a specific value because of uncertainty regarding the strength of the relevant feedbacks.

    In a hypothetical climate with no feedbacks, positive or negative, a CO2 doubling would produce 1.2ºC of warming (IPCC, AR4, WGI, Chapt. 8, p. 631). In most climate models, the dominant feedbacks are positive, meaning that the warmth from rising GHG levels causes other changes (in water vapor, clouds, or surface reflectivity, for example) that either increase the retention of outgoing long-wave radiation (OLR) or decrease the reflection of incoming short-wave radiation (SWR).
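One way to see how feedbacks stretch or shrink that 1.2ºC baseline is the textbook zero-dimensional gain relation ΔT = ΔT0 / (1 − f), where f is the net feedback fraction. This simple formula is a standard illustration, not a calculation taken from the IPCC report:

```python
# Zero-dimensional feedback gain: dT = dT0 / (1 - f), valid for f < 1.
NO_FEEDBACK_WARMING = 1.2  # deg C per CO2 doubling with no feedbacks (IPCC AR4)

def equilibrium_warming(feedback_fraction: float) -> float:
    """Equilibrium warming per CO2 doubling for net feedback fraction f."""
    return NO_FEEDBACK_WARMING / (1.0 - feedback_fraction)

print(equilibrium_warming(0.6))   # net positive feedback: 3.0 deg C
print(equilibrium_warming(-1.4))  # strong net negative feedback: 0.5 deg C
```

On this simple picture, the IPCC's most likely 3ºC corresponds to a net feedback fraction of 0.6, while any sensitivity below 1.2ºC requires the net feedback to be negative.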

    In his speech at the June 2 Heartland Institute conference, Professor Lindzen summarized his research on climate sensitivity, which has since been accepted for publication by Geophysical Research Letters. Lindzen argues that climate feedbacks and sensitivity can be inferred from observed changes in OLR and SWR following observed changes in sea-surface temperatures. For fluctuations in OLR and SWR, Lindzen and his colleagues used the 16-year record (1985-1999) from the Earth Radiation Budget Experiment (ERBE), as corrected for altitude variations associated with satellite orbital decay. For sea surface temperatures, they used data from the National Centers for Environmental Prediction. For climate model simulations, they used 11 IPCC models forced with the observed sea-surface temperature changes.

    The results are striking. All 11 IPCC models show positive feedback, “while ERBE unambiguously shows a strong negative feedback.”

    [Figure: lindzen-erbe-vs-models1]

    Figure description: ERBE data show increasing top-of-the-atmosphere radiative flux (OLR plus reflected SWR) as sea surface temperatures rise whereas models forecast decreasing radiative flux. Source: Lindzen and Choi 2009.

    The ERBE data indicate that the sensitivity of the actual climate system “is narrowly constrained to about 0.5ºC,” Lindzen estimates. “This analysis,” says Lindzen in a recent commentary, “makes clear that even when all models agree, they can be wrong, and that this is the situation for the all-important question of climate sensitivity.”

    [Figure: erbe-v-model-sensitivity4]

    At the Heartland Institute’s Second International Conference on Climate Change (March 2009), Dr. William Gray of Colorado State University presented satellite-based research that may explain the low climate sensitivity the Lindzen team infers from the ERBE data.

    The IPCC climate models assume that CO2-induced warming significantly increases upper troposphere clouds and water vapor, trapping still more OLR that would otherwise escape to space. Most of the projected warming in the models comes from this positive water vapor/cloud feedback, not from the CO2. Satellite observations do not support this hypothesis, Gray contends:

    Observations of upper tropospheric water vapor over the last 3-4 decades from the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis data and the International Satellite Cloud Climatology Project (ISCCP) data show that upper tropospheric water vapor appears to undergo a small decrease while Outgoing Longwave Radiation (OLR) undergoes a small increase. This is the opposite of what has been programmed into the GCMs [General Circulation Models] due to water vapor feedback.

    The figure below comes from the NCEP/NCAR reanalysis of upper troposphere water vapor and OLR.

    [Figure: reanalysis-olr-and-water-vapor-50]

    Figure description: NCEP/NCAR reanalysis of standardized anomalies at 400 mb (~7.5 km altitude) of water vapor content (i.e., specific humidity–in blue) and OLR (in red) from 1950 to 2008. Note the downward trend in moisture and upward trend in OLR.

    Gray’s paper deals with water vapor in the upper troposphere. What about high-altitude cirrus clouds, which climate models also predict will increase and trap more OLR as GHG concentrations increase?

    Spencer et al. (2007), the study Dr. Spencer spoke about in today’s Policy Peril film clip, found a strong negative cirrus cloud feedback mechanism in the tropical troposphere. Instead of steadily building up as the tropical oceans warm, cirrus cloud cover suddenly contracts, allowing more OLR to escape. As mentioned, Spencer estimates that if this mechanism operates on decadal time scales, it would reduce model estimates of global warming by 75%.

    A 2008 study by Spencer and colleague William D. Braswell examines climate feedbacks related to low-level clouds. Lower-troposphere clouds tend to cool the Earth by reflecting incoming SWR. Observations indicate that warmer years have less cloud cover than cooler years. Modelers have interpreted this correlation as a positive feedback effect in which warming reduces low-level cloud cover, which then produces more warming.

    Spencer and Braswell found that climate modelers could be mixing up cause and effect. Random variations in cloudiness can cause substantial decadal variations in ocean temperatures. So it is equally plausible that the causality runs the other way, and increases in sea-surface temperature are an effect of natural cloud variations. If so, then climate models forecast too much warming. For more on this, visit Spencer’s Web site.

    In a study now in peer review for possible publication in the Journal of Geophysical Research, Spencer and colleagues analyzed 7.5 years of NASA satellite data and “discovered,” he reports on his Web site, “that, when the effects of clouds-causing-temperature-change is accounted for, cloud feedbacks in the real climate system are strongly negative.” “In fact,” he continues, “the resulting net negative feedback was so strong that, if it exists on the long time scales associated with global warming, it would result in only 0.6ºC of warming by late in this century.”

    In related ongoing satellite research, Spencer finds new evidence that “most” warming of the past century “could be the result of a natural cycle in cloud cover forced by a well-known mode of natural climate variability: the Pacific Decadal Oscillation (PDO).”

    Whether or not the PDO proves to be a major player in climate change, Spencer has identified a potentially serious error in all IPCC modeling efforts:

    Even though they never say so, the IPCC has simply assumed that the average cloud cover of the Earth does not change, century after century. This is a totally arbitrary assumption, and given the chaotic variations that the ocean and atmosphere circulations are capable of, it is probably wrong. Little more than a 1% change in cloud cover up or down, and sustained over many decades, could cause events such as the Medieval Warm Period or the Little Ice Age.

    Finally, recent temperature history also suggests that most climate models are too “hot.” Dr. Patrick Michaels touched on this topic in Policy Peril (albeit not in today’s excerpt).

    Carbon dioxide emissions and concentrations are increasing at an accelerating rate (Canadell, J.G. et al. 2008). Yet, there has been no net warming since 2001 and no year was as warm as 1998.

    [Figure: global-temperature-past-decade]

    Figure description: Observed monthly global temperature anomalies, January 2001 through April 2009, as compiled by the Climate Research Unit. Source: Paul C. Knappenberger.

    Paul C. Knappenberger (“Chip” to his friends) quite reasonably wonders, “[H]ow long a period of no warming can be tolerated before the forecasts of the total warming by century’s end have to be lowered?” After all, he continues, “We’re already into the ninth year of the 100-year forecast and we have no global warming to speak of.” It is instructive to compare these data with climate model projections.

    A good place to start is with the climate model projections that NASA scientist James Hansen presented in his 1988 congressional testimony, which launched the modern global warming movement.

    The figure below, from congressional testimony by Dr. John Christy, a colleague of Roy Spencer at the University of Alabama in Huntsville, shows how Hansen’s model and reality diverge.

    [Figure: hansen-models-vs-reality1]

    Figure description: The red, orange, and purple lines are Hansen’s model forecasts of global temperatures under different emission scenarios. The green and blue lines are actual temperatures from two independent satellite records. Source: John Christy.

    “All model projections show high sensitivity to CO2 while the actual atmosphere does not,” Christy notes. “It is noteworthy,” he adds, “that the model projection for drastic CO2 cuts still overshot the observations. This would be considered a failed hypothesis test for the models from 1988.”

    What about the models used by the IPCC in its 2007 Fourth Assessment Report (AR4)? How well are they replicating global temperatures?

    [Figure: ipcc-models-vs-recent-temperatures]

    This figure, also from Dr. Christy’s testimony, is adapted from Dr. Patrick Michaels’s testimony of February 12, 2009. The red and orange lines show the upper and lower significant range (95% of all model runs are between the lines) of global temperature trends calculated by 21 IPCC AR4 models for multi-year segments ending in 2020. The blue and green lines show observed temperatures ending in 2008 from satellite (University of Alabama in Huntsville) and surface (U.K. Hadley Center for Climate Change) records.

    Christy comments:

    The two main points here are (1) the observations are much cooler than the mid-range of the model spread and are at the minimum of the model simulations and (2) the satellite adjustment for surface comparisons is exceptionally good. The implication of (1) is that the best estimates of the IPCC models are too warm, or that they are too sensitive to CO2 emissions.

    Christy illustrates this another way in his comment on EPA’s endangerment proposal.

    [Figure: christy-models-standard-error1]

    Figure description: Mean and standard error of 22 IPCC AR4 model temperature projections in the mid-range (A1B) emissions scenario. From 1979 to 2008, the mean projection of the models is a warming of 0.22ºC per decade. HADCRUT3v (green) is a surface dataset, UAH (blue) and RSS (purple) are satellite data sets.

    Christy comments:

    . . . even with these likely spurious warming effects in HADCRUT3v and RSS, the mean model trends are still significantly warmer than the observations at all time scales examined here. Thus, the model mean sensitivity, a quantity utilized by the IPCC as about 2.6ºC per doubled CO2, is essentially contradicted in these comparisons.

    Michaels, in his testimony, shows that if year 2008 temperatures persist through 2009, then the observed temperature trend will fall below the 95% confidence range of model projections. In other words, the models will have less than a 5% probability of being correct.

    [Figure: ipcc-models-vs-temperatures-through-2009]

    Although the IPCC AR4 models have not failed yet, they are, in Michaels’s words, “in the process of failing,” and the longer the current temperature regime persists, the worse the models will perform.

    Conclusion

    The climate science debate is not “over.” In fact, it is just starting to get very, very interesting. All the basic issues–detection, attribution, and sensitivity–are unsettled and more so today than at any time in the past decade.

    A final thought–anyone who wants further convincing that the debate is not over should read the marvelous NIPCC report. On a wide range of issues (nine main topics and 60 sub-topics), the report demonstrates that the scientific literature allows, and even favors, reasonable alternative assessments to those presented by the IPCC.

    P.S. Previous posts in this series are available below:

  • Policy Peril: Looking for an antidote to An Inconvenient Truth? Your search is over
  • Policy Peril Segment 1: Heat Waves
  • Policy Peril Segment 2: Air Pollution
  • Policy Peril Segment 3: Hurricanes
  • Policy Peril Segment 4: Sea-Level Rise
    In An Inconvenient Truth, Al Gore warns that global warming could raise sea levels by 20 feet, and he implies that this could happen quite suddenly–in our lifetimes or those of our children.

    Specifically, on pp. 204-206 of the book version of AIT, Gore warns that if half the Greenland Ice Sheet and half the West Antarctic Ice Sheet melted or broke up and slipped into the sea, some 100 million people living in Beijing, Shanghai, Calcutta, and Bangladesh would “be displaced,” “forced to move,” or “have to be evacuated.” Is there any truth to it?

    Today’s clip from CEI’s film Policy Peril: Why Global Warming Policies Are More Dangerous Than Global Warming Itself, again features Dr. Patrick Michaels of the Cato Institute, former Virginia State Climatologist, author of several superb books (most recently, Climate of Extremes: The Global Warming Science They Don’t Want You to Know), and prolific blogger on World Climate Report.

    To watch today’s video clip, click here. To watch Policy Peril from start to finish, click here.

    The text of today’s excerpt immediately follows. It includes some graphics from the film and footnotes to the pertinent scientific literature.

    Dr. Patrick Michaels: This even as there is a purported large melt of ice from Greenland. It turned around — the thermohaline circulation became stronger. [1] 

    Narrator: Hmm. These facts are inconvenient only for the makers of An Inconvenient Truth. But who can forget the scenes where a 20-foot wall of water rolls across the world’s coastal communities? In the book version [of AIT], Gore says, “If Greenland melted or broke up and slipped into the sea–or if half of Greenland and half of Antarctica melted or broke up and slipped into the sea–sea levels worldwide would increase by 18 to 20 feet.” Reality check! How much ice is Greenland shedding?

Dr. Michaels: The actual loss of ice from Greenland is about 25 cubic miles per year. [2] Now, if that seems like a lot, there are about 700,000 cubic miles of ice on Greenland. The loss rate is four-tenths of one percent of its ice mass, per century. I didn’t say per year. I didn’t say per decade. I said four-tenths of one percent per century. [3]

    Narrator: That translates into how much sea-level rise?

    Dr. Michaels: If you take a look at the IPCC’s latest volume, by the year 2100, they have two inches of sea-level rise resulting from the loss of Greenland ice. Not two feet. Not 20 feet. Two inches! [4] That’s the “consensus of scientists,” okay. Whether or not we believe in consensus science, that’s what they say.

    Narrator:  Gore says global warming could melt half of Greenland. Is that plausible?

Dr. Michaels: The United Nations [IPCC] projects that if we raise carbon dioxide to four times the background level–that would be about 1,100 parts per million, right now we’re at about 385 parts per million–and maintain that for 1,000 years, Greenland would lose about half its ice in a millennium. [5] Now, we don’t have enough fossil fuel to maintain that concentration for 1,000 years.

    Narrator: Gore also says half the ice sheet could break off because of “moulins.” For me, this was the scariest part of  An Inconvenient Truth. Moulins are cracks that channel meltwater from the surface of the ice sheet to the bedrock below. By lubricating the bedrock, moulins could destabilize the ice sheet, Gore says. [6]

    moulin

    Well, a recent study in Science magazine lays that fear to rest. A small meltwater lake poured down a moulin at a flow rate exceeding that of Niagara Falls. [7] Yet, Science magazine reports, “For all the lake’s water dumped under the ice that day, and all the water drained into new moulins in the following weeks, the ice sheet moved only an extra half meter near the drained lake.” [8] An extra half meter. [9]

    Notes

[1] The thermohaline circulation “became stronger.” Dr. Michaels (Pat to his friends) just finished debunking Gore’s claim that ice melt from Greenland will inject enough fresh water into the North Atlantic to disrupt the thermohaline circulation (THC, a.k.a. oceanic “conveyor belt”), which most scientists–though not Richard Seager of Columbia University’s Lamont-Doherty Earth Observatory–believe is responsible for Europe’s mild winters. There was a brief scare about THC shutdown a few years ago when Bryden et al. (2005) reported that the Atlantic branch of the conveyor belt slowed by 30% between 1957 and 2004. But one year later, Richard Kerr of Science magazine reported on new data showing that the Bryden study was a “false alarm.” In fact, Dr. Michaels says, alluding to Boyer et al. (2006) and Latif et al. (2006), the THC became stronger. The Center for the Study of Carbon Dioxide and Global Change reviews Latif et al. (2006) here, and reviews many other THC studies. I discuss Gore’s warming-causes-cooling fantasy on pp. 11-12 of my April 2007 testimony before the Colorado Republican Study Committee.

    [2] The estimate of 25 cubic miles of Greenland ice loss per year comes from Luthcke et al. (2006), a study summarized here on NASA’s Web page.

[3] Greenland has approximately 3 million cubic kilometers of ice. To convert cubic kilometers into cubic miles, multiply the number of cubic kilometers by 0.2399. Hence, Greenland has about 719,000 cubic miles of ice. It is losing about 25 cubic miles of ice per year, which translates into a rate of 2,500 cubic miles per century. 2,500 is about 0.35% of 719,000–roughly four-tenths of one percent.
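The arithmetic in note [3] is easy to verify. A minimal Python sketch, using only the figures quoted in the text (nothing here is new data):

```python
# Sanity check on the Greenland ice-mass arithmetic in note [3].
# All inputs come from the text above.

KM3_TO_MI3 = 0.2399          # 1 cubic kilometer = 0.2399 cubic miles

total_ice_km3 = 3_000_000    # Greenland's ice volume, ~3 million km^3
total_ice_mi3 = total_ice_km3 * KM3_TO_MI3       # about 719,700 mi^3

loss_per_year_mi3 = 25                           # Luthcke et al. (2006) estimate
loss_per_century_mi3 = loss_per_year_mi3 * 100   # 2,500 mi^3 per century

loss_pct_per_century = 100 * loss_per_century_mi3 / total_ice_mi3
print(f"{total_ice_mi3:,.0f} mi^3; loss rate {loss_pct_per_century:.2f}% per century")
```

Running it gives a loss rate of about 0.35% per century, consistent with the “four-tenths of one percent” figure Dr. Michaels cites in the film.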

[4] In the IPCC’s mid-range emissions scenario (A1B), Greenland ice loss contributes between 1 centimeter (cm) and 8 cm of sea-level rise in the 21st Century; in the IPCC’s high-end emissions scenario (A1FI), it contributes between 2 cm and 12 cm (IPCC AR4 WGI, Chapter 10: Global Climate Projections, Table 10.7, p. 820). Translated into inches, Greenland ice loss contributes between 0.4 and 3.1 inches in the mid-range A1B emissions scenario and between 0.8 and 4.7 inches in the high-end A1FI emissions scenario.
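The centimeter-to-inch conversions in note [4] can be reproduced directly (scenario ranges taken from the text; this is just a conversion check):

```python
# Convert the IPCC's Greenland sea-level-rise contributions (note [4])
# from centimeters to inches.
CM_TO_IN = 0.3937  # 1 cm = 0.3937 inches

scenarios = {
    "A1B (mid-range)": (1, 8),    # cm of sea-level rise from Greenland ice loss
    "A1FI (high-end)": (2, 12),
}
inches = {name: (lo * CM_TO_IN, hi * CM_TO_IN)
          for name, (lo, hi) in scenarios.items()}
for name, (lo_in, hi_in) in inches.items():
    print(f"{name}: {lo_in:.1f} to {hi_in:.1f} inches")
```

The upper bounds work out to about 3.1 inches (A1B) and 4.7 inches (A1FI); note that 2 cm rounds to 0.8 inches.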

    [5] Pat here refers to Ridley et al. (2005), as reviewed in Chapter 10 of IPCC AR4 WGI, on p. 830. The figure below shows what the researchers project would happen to Greenland’s ice if carbon dioxide concentrations increase to four times pre-industrial levels and stay there for 3,000 years.

    greenland-ice-melt-ipcc-751

    [6] Gore’s photograph and diagram of moulins come from Zwally et al. (2002), published in Science magazine.

    moulin-diagram

    In AIT, Gore animates the diagram so that the ice sheet begins to break apart at the E.Q. (equilibrium) line. This is thoroughly misleading. The E.Q. line of an ice sheet is the elevation at which glacier melting and snow accumulation are equal. Above the E.Q. line, snow accumulation exceeds ice melt; below it, ice melt exceeds snow accumulation. The E.Q. line is not a fault line or fissure in the ice.

More importantly, Zwally et al. (2002) is not evidence of an impending ice sheet crackup. The researchers found that moulins associated with summer ice melt accelerate glacial flow, but only by a few percent. For example, the flow rate of one outlet glacier increased from 31.3 cm/day in winter to 40.1 cm/day in July, falling back to 29.8 cm/day in August, increasing annual movement by about 5 meters. Apocalypse not!
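The “few percent” figure checks out against the numbers quoted above: the winter flow rate alone implies roughly 114 meters of glacier movement per year, so an extra ~5 meters is a speed-up of about 4%. A back-of-envelope sketch, using only the figures from the text:

```python
# Back-of-envelope perspective on the Zwally et al. (2002) flow figures.
winter_rate_cm_per_day = 31.3                     # from the text
baseline_annual_m = winter_rate_cm_per_day / 100 * 365   # ~114 m of movement/year
extra_annual_m = 5.0                              # summer melt speed-up, from the text

speedup_pct = 100 * extra_annual_m / baseline_annual_m
print(f"baseline ~{baseline_annual_m:.0f} m/yr; melt adds about {speedup_pct:.1f}%")
```

That is, the moulin-driven acceleration is a small perturbation on the glacier’s normal flow, not a runaway.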

[7] In a study updating the Zwally team’s research, Joughin et al. (2008) found somewhat more glacier acceleration associated with summer ice melt and moulins. However, the study’s bottom-line conclusion is pointedly non-apocalyptic:

    Surface-enhanced basal lubrication has been invoked previously as a feedback that would hasten the ice sheet’s demise in a warming climate. Our results show that several fast-flowing outlet glaciers, including Jakobshavn Isbrae, are relatively insensitive to this process . . . Our results thus far suggest that surface-melt enhanced lubrication will have a substantive but not catastrophic effect on the Greenland Ice Sheet’s future evolution.

    [8] In a companion article (cited in this Policy Peril excerpt), Science magazine reporter Richard Kerr quotes Pennsylvania State University glaciologist Richard Alley on moulin-induced ice sheet lubrication:

    “Is it run for the hills, the ice is falling into the ocean?” asks Alley. “No, it matters but it’s not huge.”

     Kerr goes on to observe, as noted above, that an entire 4 km-long, 8 m-deep melt-water lake disappeared down a moulin in about 1.4 hours–at an average rate of 8,700 cubic meters per second, “exceeding the average flow rate of Niagara Falls.” Yet, despite all the water dumped under the ice that day and all the water drained into new moulins in the following weeks, the ice sheet moved only “an extra half meter near the drained lake.” 
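The drainage figures in note [8] hang together: a flow of 8,700 cubic meters per second sustained for about 1.4 hours moves roughly 44 million cubic meters of water. A short sketch (the lake width computed below is not stated in the source; it is inferred purely for illustration, assuming a rectangular lake):

```python
# Total water volume implied by the drainage figures in note [8].
flow_m3_per_s = 8_700        # average drainage rate, from the text
duration_s = 1.4 * 3600      # ~1.4 hours, from the text

volume_m3 = flow_m3_per_s * duration_s
print(f"{volume_m3 / 1e6:.0f} million m^3 drained")

# Illustration only (not in the source): the average width a 4 km-long,
# 8 m-deep rectangular lake would need in order to hold that volume.
implied_width_m = volume_m3 / (4_000 * 8)
print(f"implied average width ~{implied_width_m:.0f} m")
```

The implied width of a bit over a kilometer is a plausible size for a supraglacial lake, which suggests the reported flow rate and duration are mutually consistent.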

[9] To put the extra half meter of glacial movement in perspective, consider that the Greenland Ice Sheet extends 2,530 kilometers (1,570 miles) North-South and has a maximum width of 1,094 kilometers (680 miles) near its northern margin.

In a segment of Policy Peril immediately following today’s film excerpt, Pat also discusses studies in Science magazine indicating that the West Antarctic Ice Sheet (WAIS) is more stable than scientists previously believed. The researchers found that outlet glaciers drag debris under the ice that piles up into “wedges.” These hidden land forms then prop up and stabilize the ice shelf.

     stabilizing-wedges 

The significance? Scientists once worried that sea-level rise of just a few feet could lift the WAIS off its island moorings, hastening its break up and demise. However, as Anderson (2007) reports in Science magazine, in a review of Anandakrishnan et al. (2007) and Alley et al. (2007) and their discovery of stabilizing land forms under the WAIS, “At the current rate of sea level rise, it would take several thousand years to float the ice sheet off its bed.”

    A more recent study by Pollard and DeConto (2009), reviewed by the Center for the Study of Carbon Dioxide and Global Change, concludes that “the WAIS will begin to collapse when nearby ocean temperatures warm by roughly 5ºC.” How long would that take? 

    In a companion article, Huybrechts (2009) estimates that, “The required ocean warmings, on the order of 5ºC, may well take several centuries to develop.” He asserts that “such an outcome could result from the accumulation of greenhouse-gas emissions projected for the twenty-first Century, if emissions are not greatly reduced.” His source here, however, is simply the IPCC report with its questionable assumptions about climate feedbacks and sensitivity. Huybrechts continues:

    The implied transition time for a total collapse of the West Antarctic ice sheet of one thousand to three thousand years [in Pollard and DeConto (2009)] seems rapid by Antarctic standards. But it is nowhere near the century timescales of West Antarctic ice sheet decay based on simple marine ice-sheet models. 

And one to three thousand years is certainly nowhere near the years-to-decades inundation of the world’s coastal communities that Al Gore conjures up in An Inconvenient Truth.

    Is global warming making hurricanes more destructive? Did global warming contribute to the devastation of New Orleans by Hurricane Katrina? Would Kyoto-style energy rationing help avert future weather-related catastrophes?

    Well, just ask Al Gore! In An Inconvenient Truth, Gore claims there’s a “strong new emerging consensus” that global warming is increasing the duration and intensity of hurricanes (AIT, p. 81), he depicts New Orleans as a global warming victim (pp. 94-95), and the threat of increasingly powerful storms is a major part of the alleged “climate crisis” that Gore proposes to solve by restricting our access to carbon-based energy.

    Gore’s message is not subtle. The movie poster for An Inconvenient Truth shows a hurricane spinning out of the smokestack of a coal-fired power plant.

    ait-movie-poster

    As noted in previous posts, I am blogging on excerpts from CEI’s film, Policy Peril: Why Global Warming Policies Are More Dangerous Than Global Warming Itself. Our film (DVD, actually) provides skeptical perspectives on Gore’s “science” and “solutions.”

Today’s excerpt is on hurricanes. To watch it, click here. There’s more in Policy Peril on hurricanes, so if you want to pick up the thread where this snippet leaves off, fast forward to about 7 minutes and 30 seconds into the film, which you can access here.

    The indented section immediately below presents the text of today’s video excerpt, along with the charts appearing in the clip, and links to the supporting scientific papers:

    Narrator: What about hurricanes? Gore says there is a “strong new emerging consensus” that global warming is increasing the intensity and duration of major tropical cyclones [hurricanes].

    Dr. Patrick Michaels (Cato Institute): I can find a whole bunch of papers that say yes, a bunch that say no, a bunch of papers that say, “I don’t know.”

    Narrator: Here’s a study Al Gore will never cite. Phil Klotzbach of Colorado State University measured changes in hurricane strength in the world’s six major hurricane basins from 1986 to 2005. There’s an increase in the North Atlantic, a decrease in the Northeast Pacific, and not much change anywhere else.

    klotzbach-tropical-cylones-1986-2005

    To persuade us that hurricanes are becoming stronger, Gore reports that economic damages from hurricanes increased dramatically in recent decades. He shows this on a graph similar to this one.

    pielke07_fig11

Figure description: U.S. hurricane damages, 1900-2005, not adjusted for changes in population, wealth, and the consumer price index. Source: Pielke, Jr. et al. 2008. Normalized hurricane damages in the United States: 1900-2005. Natural Hazards Review Volume 9, Issue 1, pp. 29-42.

But the graph is misleading. Consider this fact: more people today live in just two Florida counties, Dade and Broward, than lived in all 109 coastal counties from Texas to Virginia in 1930. There’s tons more stuff in harm’s way than there used to be. No wonder damages are bigger!

Dr. Michaels: If you take a look at hurricane damages and adjust for population levels, for property values–things you have to adjust for–for inflation, you find there is no significant increase in the damage rate. In fact, the 1926 Florida Hurricane is the record holder by far–by, I believe, about $50 billion more damage than Katrina in today’s dollars. So, there’s just no link here.

    pielke-jr-normalized-hurricane-damages

    Figure description: U.S. hurricane damages, 1900-2005, if all hurricane strikes had hit the same locations but with today’s population, wealth, and consumer price index. Source: Pielke, Jr. et al. 2008.

    A study on hurricane damages in China comes to the same conclusion as did the Pielke team. From 1983 to 2006, the researchers found no long-term trend in economic losses due to hurricanes once changes in population, the consumer price index, and, most importantly, GDP are taken into account. See the figure below.

    economic-losses-in-china-from-hurricanes-half-size

    Source: Zhang, Q. et al. 2009. Tropical cyclone damages in China 1983-2006. Bulletin of the American Meteorological Society, April 2009.

    Let’s take a closer look at some of Dr. Michaels’s statements.

    Michaels (Pat to his friends) says he can find a “bunch of papers” on all sides of the debate on the possible influence of global warming on hurricanes–the point being that Al Gore’s “strong new emerging consensus” does not in fact exist. 

Pat, his research associate Paul C. Knappenberger, and Dr. Robert Davis of the University of Virginia provide a partial list of “skeptical” references on pp. 33-34 of their comment on EPA’s Advance Notice of Proposed Rulemaking: Regulating Greenhouse Gases under the Clean Air Act (ANPR). I reproduce that list below, plus other studies of skeptical bent, provide links to the scientific papers (or their abstracts) along with pertinent graphic materials, and summarize key findings in bold italics. I also include links to Web-based commentary by Dr. Michaels or the Center for the Study of Carbon Dioxide and Global Change.

    Key papers affirming the existence of a trend towards stronger hurricanes include:

    • Emanuel, K. 2005. Increasing destructiveness of tropical cyclones over the past 30 years. Nature 436: 686-688. Hurricane strength, a combination of wind speed and duration, which Emanuel calls the “power dissipation index,” increased by 50% since the mid-1970s, and is highly correlated with sea-surface temperature.
    • Webster, P. et al. 2005. Changes in Tropical cyclone number, duration, and intensity in a warming environment. Science 309: 1844-1846. The number and percentage of major (Category 4 & 5) hurricanes increased from 1970 to 2004.
    • Trenberth, K.E. et al. 2007. Observations: Surface and Atmospheric Change [chapter 3 of Climate Change  2007: The Physical Science Basis. Working Group I Report, Fourth Assessment Report, Intergovernmental Panel on Climate Change]. The intensity of tropical cyclones has increased since 1970.

    Papers disputing the global existence and/or magnitude of a trend towards stronger hurricanes include:

    • Landsea, C.W. et al. 2005. Hurricanes and global warming. Nature 438: E11-13. Emanuel mishandled data and his methodology is flawed.
• Landsea, C.W. et al. 2006. Can we detect trends in extreme tropical cyclones? Science 313: 452-454. The apparent trend towards more powerful hurricanes is a consequence of improved monitoring in recent years of non-landfalling hurricanes. [For Pat Michaels’s review, click here.]
• Klotzbach, P.J. 2006. Trends in global tropical cyclone activity over the past twenty years (1986-2005). Geophysical Research Letters 33 doi: 10.1029/2006GL025881. From 1986 to 2005, there was an increase in hurricane strength (“accumulated cyclone energy”) in the North Atlantic, a decrease in the Northeast Pacific, and not much change in the other four hurricane basins. [For Pat Michaels’s review, click here.]
    • Swanson, K.L. 2007.  Impact of scaling behavior on tropical cyclone intensities. Geophysical Research Letters 34 doi: 10.1029/2007GL030851. There is no statistically significant correlation between sea surface temperatures and average tropical cyclone intensity in either the Atlantic or western Pacific Ocean from 1950 to 2005. [For Pat Michaels’s review, click here.]

    In their comment on EPA’s ANPR, Michaels, Knappenberger, and Davis also list other papers that “do not draw as close a linkage between anthropogenic climate changes and increasing hurricane frequency and/or intensity” as the IPCC purports to find in the scientific literature. Studies of this stripe include:

    • Briggs, W.M. 2008. On the changes in the number and intensity of North Atlantic tropical cyclones. Journal of Climate 21: 1387-1402. There is “almost no evidence that distributional mean of individual storm intensity, measured by storm days, track length, or individual storm power dissipation index, has changed (increased or decreased) through time.” [For Pat Michaels’s review, click here.]
• Knutson, T.R. et al. 2008. Simulated reduction in Atlantic hurricane frequency under twenty-first century warming conditions. Nature Geoscience doi:10.1038/ngeo202. Global warming will decrease Atlantic hurricane frequency by so much more than it will increase average hurricane strength that cumulative Atlantic hurricane power in the 21st Century will decrease by 25%.
• Wang, C. & Lee, S.K. 2008. Global warming and United States landfalling hurricanes. Geophysical Research Letters 35(1): L02708. Warming of the Atlantic Ocean is associated with an increase in vertical wind shear, which in turn coincides with a “weak but robust” downward trend in U.S. landfalling hurricanes. See the figure below. [For Pat Michaels’s review, click here.]

    wang_lee_us-landfalling-hurricanes1

    Figure description: Number of U.S. landfalling hurricanes from 1851 to 2006 (red bars), the black straight line is the long-term trend, the blue line is the seven-year running mean (from Wang & Lee 2008). 

• Kossin, J.P. & Vimont, D.J. 2007. A more general framework for understanding Atlantic hurricane variability and trends. Bulletin of the American Meteorological Society 88(11): 1767-1781. A large part of the variability of Atlantic hurricane intensity, duration, and frequency can be explained by interannual and multidecadal shifts in a natural oscillation known as the Atlantic Meridional Mode (AMM). [For Pat Michaels’s review, click here.]
    • Landsea, C.W. 2007. Counting Atlantic tropical cyclones back to 1900. EOS: Transactions of the American Geophysical Union 88. Improved monitoring in recent years is responsible for most, if not all, of the observed trend in increasing frequency of tropical cyclones.
• Latif, M., Keenlyside, N., & Bader, J. 2007. Tropical sea surface temperature, vertical wind shear, and hurricane development. Geophysical Research Letters 34(1) L01710. From 1870 to 2003, there is no sustained long-term trend in Atlantic hurricane activity; a key variable controlling wind shear and, thus, Atlantic hurricane strength is the warmth of the tropical North Atlantic relative to the warmth of the tropical Indian and Pacific Oceans. [For a review by the Center for the Study of Carbon Dioxide and Global Change, click here.]
    • Nyberg, J. et al.  2007. Low hurricane activity in the 1970s and 1980s compared to the past 270 years. Nature 447(7145): 698-701. The average frequency of North Atlantic hurricanes decreased gradually from the 1760s until the early 1990s, reaching anomalously low values during the 1970s and 1980s. The period of enhanced activity since 1995 is not unusual compared to other periods of high hurricane activity “and thus appears to represent a recovery to normal hurricane activity rather than a direct response to sea surface temperature.” [For Pat Michaels’s review, click here.]
• Vecchi, G.A. & Soden, B.J. 2007a. Increased tropical Atlantic wind shear in model projections of global warming. Geophysical Research Letters 34, doi: 10.1029/2006GL028905. A suite of state-of-the-art climate model experiments project “substantial” increases in tropical Atlantic and East Pacific wind shear over the 21st Century–changes “comparable to or larger than” model-projected changes in other factors affecting hurricanes; hence, the models “do not suggest a strong anthropogenic increase” in hurricane activity in those basins. [For a review by the Center for the Study of Carbon Dioxide and Global Change, click here.]
    • Vecchi, G.A. & Soden, B.J. 2007b. Effect of remote sea surface temperature change on potential tropical cyclone intensity. Nature 450(7172): 1066-1070. Changes in hurricane intensity due to natural climate variations (such as the warming of one ocean basin relative to another) “may be larger than the response to the more uniform patterns of greenhouse-gas-induced warming.” [For a review by the Center for the Study of Carbon Dioxide and Global Change, click here.]
    • Vecchi, G.A., Swanson, K.L, and Soden, B.J. 2008. Whither Hurricane Activity? Science 322: 687-689. Science is currently unable to decide, due to insufficient data, which factor chiefly controls Atlantic hurricane intensity–the absolute sea surface temperature (SST) of the basin’s main development region, or the region’s SST relative to other tropical ocean basins. If absolute SST is the key factor, then global warming should increase Atlantic hurricane activity. If relative SST is the key factor, then the future should exhibit “little long term trend” in hurricane intensity. [For Pat Michaels’s review, click here; for a review by the Center for the Study of Carbon Dioxide and Global Change, click here.]
• Besonen, M.R. et al. 2008. A 1000-year, annually resolved record of hurricane activity from Boston, Massachusetts. Geophysical Research Letters 35, L14705, doi: 10.1029/2008GL039950. Analysis of sediment cores from Lower Mystic Lake shows “centennial-scale variations in frequency [of category 2-3 hurricanes in the Boston area] over the past millennium . . . with a period of increased activity between the 12th-16th centuries and decreased activity during the 11th and 17th-19th centuries.” In other words, there is considerable natural variability on centennial scales, and Boston got hit with hurricanes long before the “greenhouse” era of SUVs and coal-fired power plants. [For Pat Michaels’s review, click here.]
• Mock, C.J. 2008. Tropical cyclone variations in Louisiana, U.S.A., since the late 18th century. Geochemistry, Geophysics, Geosystems, 9, Q05Vo2, doi: 10.1029/2007GC001846. “Parts of the early and mid-19th century exhibit greater tropical cyclone and hurricane activity [in Louisiana] than at any time within the last few hundred years.” Indeed, “If a higher frequency of major hurricanes occurred in the near future in a similar manner as the early 1800s or in single years such as in 1812, 1831, and 1860, it would have devastating consequences for New Orleans, perhaps equalling or exceeding the impacts such as in hurricane Katrina in 2005.” In short, there is no long-term trend in hurricanes in the vicinity of New Orleans, as the figure below shows. [For Pat Michaels’s review, click here; for a review by the Center for the Study of Carbon Dioxide and Global Change, click here.]

    la_storms

Figure description: Upper graph shows number of Louisiana tropical cyclones along with centered 10-year running sums; lower graph shows number of Louisiana hurricanes along with centered 10-year running sums. Green dots show major hurricanes.

• Chenoweth, M. and D. Devine. 2008. A document-based 318-year record of tropical cyclones in the Lesser Antilles, 1690-2007. Geochemistry, Geophysics, Geosystems 9, Q08013, doi: 10.1029/2008GC002066. Newspaper accounts, ship logbooks, meteorological journals, and other documents were used to create a new database on hurricane intensity in the islands that wrap around the eastern end and southern fringe of the Caribbean Sea. Applying a new methodology, the researchers found no evidence that hurricane activity is increasing over three centuries of recorded events. [For Pat Michaels’s review, click here.]
    • Klotzbach, P.J. and W.M. Gray. 2008. Multidecadal variability in North Atlantic tropical cyclone activity. Journal of Climate 21, 3929-3935. From 1878 to the present, the ups and downs of hurricane activity in the North Atlantic show “remarkable agreement” with changes in the Atlantic Multidecadal Oscillation (AMO), which shifts back and forth between  positive (warm) and negative (cool) phases. See figure below. [For Pat Michaels’s review, click here.]

    amo-and-hurricane-activity

    Figure description:  Annually averaged Atlantic basin hurricanes (H), hurricane days (HD), major hurricanes (MH), and major hurricane days (MHD) for the top 20 AMO years (blue bars) and bottom 20 AMO years (red bars).

• Kuleshov, Y. et al. 2008. On tropical cyclone activity in the Southern Hemisphere: Trends and the ENSO connection. Geophysical Research Letters 35, L14508, doi: 10.1029/2007GL032983. For the 1981/82 to 2005/2006 hurricane seasons, there are no apparent trends in the total numbers of tropical cyclones or in cyclone days in the Southern Hemisphere (South Indian Ocean and South Pacific Ocean). [For Pat Michaels’s review, click here.]
• Chan, J.C.L. 2007. Decadal variations of intense typhoon occurrence in the western North Pacific. Proceedings of the Royal Society. A, 464: 249-272. From 1960 to 2005, the number of category 4 and 5 hurricanes in the North Pacific undergoes a strong multidecadal (16-32 years) variation, largely due to the El Nino-Southern Oscillation (ENSO) and the Pacific Decadal Oscillation (PDO), but exhibits no long-term trend. “Thus, at least for the WNP [western North Pacific], it is not possible to conclude that these variations in intense typhoon activity are attributable to the effect of global warming.” [For Pat Michaels’s review, click here; for a review by the Center for the Study of Carbon Dioxide and Global Change, click here.]
    • Englehart, P.J. et al. 2008. Defining the frequency of near-shore tropical cyclone activity in the eastern North Pacific from historical surface observations (1921-2005). Geophysical Research Letters 35, L03706, doi: 10.1029/2007GL032546. Long-term tropical cyclone frequency off the Pacific coast of Mexico exhibits a significant “negative” trend. See the figure below. [For Pat Michaels’s review, click here.]

    long-term-tc-frequency-off-the-pacific-coast-of-mexico

    Figure description: Hurricane frequency off the Pacific coast of Mexico (from Englehart 2008)

    To wrap up, no consensus has been reached about the possible influence of global warming on hurricanes. Consider this joint statement by 120 members of the World Meteorological Organization:

    The possibility that greenhouse gas induced global warming may have already caused a substantial increase in some tropical cyclone indices has been raised (e.g., Mann and Emanuel, 2006), but no consensus has been reached at this time.

    A considerable body of science, ably reviewed by Dr. Michaels, the Center for the Study of Carbon Dioxide and Global Change, and the Nongovernmental International Panel on Climate Change, indicates that natural multi-decadal climate oscillations are now and will remain the dominant driver of Atlantic hurricane behavior in the 21st Century.

Some final thoughts. First, Gore’s prophecy of an era of increasingly violent and/or frequent hurricanes no longer seems as plausible as it did in the aftermath of Katrina and Rita and the active hurricane seasons of 2004 and 2005.

In eye-popping charts of accumulated cyclone energy (ACE) in the Northern Hemisphere and globally, Ryan Maue, a Ph.D. candidate in Florida State University’s Department of Meteorology, shows that we have entered a period that might be described as a “hurricane depression.” ACE in the Northern Hemisphere is now at its lowest level in 30 years:

    maue-northern-hemisphere-accumulated-cyclone-energy-1970-20091

    Figure description: May-June-July ACE, 1970 – 2009. The three month ACE for 2009 is the second lowest since 1970.

    Globally, tropical cyclone energy is also near record lows over the past 50 years.

     maue-ace-global-1959-2009

    Second, even if global warming does make hurricanes stronger, that would not be a valid reason to pursue Kyoto-style energy rationing. As Bjorn Lomborg points out, carbon suppression policies would have no measurable effect on hurricane behavior for many decades (if ever), yet would cumulatively cost trillions of dollars. The possible influence of global warming on hurricanes is simply one more reason to undertake proven, cost-effective measures such as improving building codes, evacuation routes, early warning systems, and emergency response capabilities. Indeed, it is another reason, as Kerry Emanuel, Peter Webster, and eight other leading hurricane experts affirm in a joint statement, for governments to stop subsidizing development in high-risk areas.

Third, the best hurricane protection strategy for developing countries is economic growth. In 1955, a Category 5 hurricane called Janet slammed into Chetumal, on Mexico’s Yucatan Peninsula, killing 600 people. On August 21, 2007, another Category 5, Hurricane Dean, hit the same spot and killed no one. It may be the first time in history that a Category 5 hit a populated area and everyone survived. What changed between 1955 and 2007? Not the weather. The big difference, Dr. Michaels observes, is that Mexico today is much wealthier than it was in the 1950s. Storm warning information is now widely available, there are better roads for evacuation, and emergency response programs are better funded. Unfortunately, Al Gore’s agenda to reduce global CO2 emissions 50% by 2050 can succeed only if poor countries accept emission limitations that stifle their economic development–a topic we’ll discuss later in this series.

    Finally, New Orleans was not a global warming victim, as Gore insinuates. Katrina was the worst natural disaster in U.S. history not because of any extra power the storm allegedly got from global warming (Katrina was only a Category 1 hurricane by the time it hit New Orleans), but because the federal government failed to construct adequate flood defenses for a city built below sea level in a known hurricane corridor. My colleague, John Berlau, chronicles this sad tale in his 2006 book, Eco-Freaks.

    When Al Gore’s film, An Inconvenient Truth (AIT), came out in 2006, I expected to see some hard-hitting criticism by scientists of Gore’s unfounded alarmism and by economists of his blithe disregard of the human suffering that energy rationing (cap-and-trade) and mandatory reliance on costly and under-performing renewable energy would inflict on low-income households and poor countries. However, with a few notable exceptions, Gore’s film got mostly rave reviews, earned an Academy Award, and later helped bag him the Nobel Prize.

Because few specialists in the science and economics fields took Gore to task, I jumped into the breach. At first, I thought I could write an adequate exposé of Gore’s errors and exaggerations in about 20 pages. But as I dug into the book version of AIT, I found that nearly all of Gore’s assertions about climate change and climate policy were either one-sided, misleading, exaggerated, speculative, or just plain wrong. My critique–published by CEI in March 2007 under the title Al Gore’s Science Fiction: A Skeptic’s Guide to An Inconvenient Truth [1]–grew to 150 pages.

    I gave several talks based on this research, including an hour-long presentation on C-SPAN [2] and a minute and fifteen seconds of fame on the Oprah Winfrey Show [3], along with several video shorts [4] produced by CEI.

    Conversations with friends and colleagues persuaded me, though, that the best strategy was to fight fire with fire–produce our own “documentary” about global warming.

    We teamed up with Jared Lapidus, a talented young New York-based filmmaker. Jared and I interviewed over 20 experts during 2008 and early 2009. The result is a film titled Policy Peril: Why Global Warming Policies Are More Dangerous Than Global Warming Itself. To view the film, click here [1].

    Policy Peril reviews the science to assess whether global warming is the “planetary emergency” Al Gore claims it is. We take a critical look at what Gore and other alarmists claim regarding heat waves, global temperature forecasts, air pollution, malaria, hurricanes, ice sheet disintegration, sea level rise, and the paradoxical scenario in which global warming causes a new ice age. We conclude that global warming is not a catastrophe in the making. There is no climate “crisis.”

    We then review the human costs–the health and safety risks as well as adverse impacts on jobs and growth–of Al Gore’s proposed “solutions”: carbon taxes, cap-and-trade, fuel economy standards, bans on new coal-fired power plants, mandates to “repower America” with renewable energy, and carbon tariffs.

    The film concludes that these policies have potentially devastating impacts on human welfare, especially to the extent they are exported to developing countries–which they must be if the world is to reduce global emissions 50% by 2050, as Gore and others advocate.

    Finally, the film examines alternative strategies to enhance human well-being in a warming world. It concludes that “focused adaptation”–using proven methods to solve existing health and environmental problems that global warming might aggravate (such as malaria and hunger)–would save far more lives at less cost than Kyoto-style energy rationing schemes. Moreover, the best climate protection strategy for the world is free trade and economic growth.

    Over the next two weeks, I’ll be posting one excerpt from the film a day along with comments and links to newer information that has since come out. The global warming debate is very far from “over.” In fact, the scientific, economic, and moral case against Kyoto-style energy rationing keeps getting stronger.

    I look forward to your comments on the film, the individual segments, and the supporting materials.

    – Marlo Lewis, Senior Fellow, Competitive Enterprise Institute

    [1] Al Gore’s Science Fiction: A Skeptic’s Guide to An Inconvenient Truth: http://ceiondemand.org/2009/07/17/policy-peril-the-truth-about-global-warming/

    [2] C-SPAN: http://www.bibliotecapleyades.net/ciencia/ciencia_climatechange04.htm

    [3] Oprah Winfrey Show: http://www.oprah.com/slideshow/oprahshow/oprahshow1_ss_20061205/12

    [4] video shorts: http://www.youtube.com/profile?user=askepticsguide

    [5] last word : http://www.oprah.com/slideshow/oprahshow/oprahshow1_ss_20061205/13

    [6] Youtube: http://www.youtube.com/watch?v=HXw17pIuL0w

    EDIT: Links to the individual segments.

    Part 1 — Heat Waves

    Part 2 — Air Pollution

    Part 3 — Hurricanes

    Part 4 — Sea Level Rise

    Last Friday, I launched a blog series on CEI’s film, Policy Peril: Why Global Warming Policies Are More Dangerous Than Global Warming Itself.  The film is our antidote to Al Gore’s Scare-U-Mentary, An Inconvenient Truth. The blog series highlights 10 short segments of the film, one each day this week and next.

    Yesterday’s blog was on the hype about heat waves–the claim that people will drop like flies from heat stress in U.S. cities unless urgent action is taken to cut carbon dioxide (CO2) emissions.

    Today’s segment rebuts a related scare–the claim that global warming will sicken and kill thousands of us by increasing air pollution. Click here if you want to watch Policy Peril in its entirety. Click here to watch the segment on air pollution.

    Here’s the text:

    Narrator: But maybe the heat will get us by creating more air pollution. That’s what the Natural Resources Defense Council, or NRDC, said in a report titled Heat Advisory. It sounds plausible because smog forms when emissions of nitrogen oxides and volatile organic compounds bake in the heat of the Sun. However, the NRDC report is fundamentally flawed.

    Joel Schwartz (American Enterprise Institute): NRDC uses emissions from 1996 to “predict” ozone levels, smog levels, in the 2050s. So we’re already below the emissions of 1996, and emissions continue to drop because of fleet turnover to cleaner vehicles, because power plants are getting cleaner. And most of those emissions are going to be gone even in about 20 years. And in the 2050s there’s going to be hardly any pollution in the air. But NRDC assumes we’re going to have 1996 emissions levels in 2050.

    Narrator: Like heat-related mortality, air pollution levels have fallen as cities have warmed. U.S. air quality should keep improving regardless of climate change.

    Commentary

    NRDC’s Heat Advisory report (September 2007) claims that, under a likely global warming scenario, the number of “bad air” days (days when ozone levels exceed the 8-hour health-based air quality standard set by the U.S. Environmental Protection Agency) would increase by as much as 155% in some of the 10 cities studied. NRDC further states that, “by mid-century, people living in 50 cities in the eastern United States would see a 68 percent (5.5 day) increase in the average number of days exceeding the health-based 8-hour ozone standard established by EPA.” This means the number of unhealthy (“Red Alert”) days would “double.”

    Joel Schwartz masterfully debunked Heat Advisory in two columns published in National Review. In the first column (September 14, 2007), Joel showed that NRDC used 1996 emissions to “predict” ozone levels in the 2050s and 2080s, even though “actual emissions of ozone-forming pollutants are already more than 25% lower than they were in 1996 and will drop another 70%-80% in just the next 20 years, based on already-adopted and implemented federal requirements.”

    Could this be an innocent mistake? Does NRDC not know that laws and regulations already on the books have cut emissions since 1996 and will keep on doing so for decades to come? No way.

    As Joel documents, in press release after press release, NRDC enthusiastically applauds various new EPA rules that will dramatically reduce smog-forming emissions from automobiles, diesel trucks, off-road diesel engines, diesel fuel, and power plants.

    “Most egregious of all,” Joel comments, “the NRDC report was authored by prominent university and government climate and public health scientists.” These seemingly non-political researchers (Joel names names) lent “the color of their scientific credentials and government and university affiliations” to NRDC’s effort to mislead the public.

    Joel also cites a more realistic appraisal of global warming’s impact on air quality–an article in the Journal of Geophysical Research by researchers from NESCAUM (Northeast States for Coordinated Air Use Management) and Georgia Tech. These analysts project that, from the year 2000 to 2050, “The combined effect of climate change and emissions reductions lead to a 20% decrease (regionally varying from -11% to -28%) in the mean summer maximum daily 8-hour ozone levels over the United States.” They also project a 23% decrease in mean annual fine particulate (PM2.5) concentrations. Joel comments that these estimates are conservative, because “pollutant emissions and ambient levels are dropping much faster than they assume in their study (a fact which I show here).”

    The decline in polluting emissions, despite increases in urban summer air temperatures, is quite dramatic, as Joel illustrates in the figures below.

    [Figure: emissions_trends]

    Figure description: Trends in Estimated U.S. Air Pollutant Emissions, 1970-2006. Data Source: U.S. EPA, Air Quality and Emissions – Progress Continues in 2006.

    [Figure: ozone-vs-temperature2]

    The same story of dramatic progress in reducing emissions “continues in 2008,” as EPA tells us on its Web site.

    Percent Change in Emissions

                                    1980 vs 2008    1990 vs 2008
    Carbon Monoxide                      -56             -46
    Lead                                 -97             -60
    Nitrogen Oxides                      -40             -35
    Volatile Organic Compounds           -47             -31
    PM 10                                -68             -38
    PM 2.5                                NA             -57
    Sulfur Dioxide                       -56             -50

    Source: EPA, Emission Trends, http://www.epa.gov/airtrends/aqtrends.html#comparison

    Dan Lashof, director of NRDC’s Climate Center, tried to rebut Joel’s critique. He did not challenge Joel’s central points–emissions are already well below 1996 levels, current policies ensure emissions will continue to drop, and, therefore, air quality predictions assuming that 1996 emissions will persist into the 2050s and beyond are completely unrealistic. Instead, Lashof argued that Heat Advisory presents “projections,” not “predictions,” and that the researchers had to use emissions data from an actual year, such as 1996, because “there are no reliable estimates of [ozone] precursor emissions extending to the mid-21st Century.” Moreover, holding emission levels constant is the only way to isolate the effect of global warming on ozone concentrations.

    In the second of his National Review columns (September 26, 2007), Joel rips these lame excuses to shreds. He cites several statements in Heat Advisory and the accompanying press release in which NRDC clearly presents its findings as predictions of what will happen in a warming world.

    Joel also pokes fun at Lashof’s excuse that NRDC had to use 1996 emissions because who the heck knows what emissions will be 50 years from now. This is emphatically not what NRDC says about the CO2 emissions that allegedly control our climate destiny. Can you even imagine NRDC saying that climate models must use 1996 CO2 emissions to estimate CO2 concentrations in 2050 or 2080 because mid-century estimates of CO2 emissions are uncertain? Joel comments:

    Climate activists have no problem trying to force the people of the world to spend trillions of dollars for CO2 reductions based on the presumption that climate models are accurate. But when it comes to ozone, NRDC pleads uncertainty and then chooses increases in future ozone-forming emissions that are grossly at odds with any plausible future scenario. If anything, the statement that “there are no reliable estimates … extending to the mid-21st Century” is far more applicable to greenhouse gas emissions and climate models’ predictive skill than it is for smog-forming emissions.

    What about the claim that researchers must hold smog-precursor emissions constant to isolate the global warming impact on future ozone concentrations? EPA offers the same rationale on p. 78 of the Technical Support Document (TSD) for its proposed finding that greenhouse gas emissions endanger public health and welfare (EPA’s official response to the Supreme Court’s decision in Massachusetts v. EPA, 2007). However, the only accurate way to isolate the “global warming effect” on ozone concentrations would be to compare ozone levels in warming and non-warming scenarios based on plausible projections of precursor emissions in the 2020s, 2050s, and 2080s.

    Again, EPA would not pay any attention to climate change scenarios that assume 1996 or even 2009 CO2 emissions in 2020, 2050, or 2080. So why put any credence in “studies” that assume 1996 ozone precursor emissions in perpetuity even though today’s emissions are already significantly below 1996 emissions? By the 2050s and 2080s, the “global warming effect,” if any, on ozone formation will likely be negligible. The real point of holding emissions constant is not to isolate a warming effect, but to scare the public.

    Those interested in additional information should find the following items useful. The U.S. Chamber of Commerce provides an excellent literature summary on global warming and air pollution in its detailed review of EPA’s endangerment proposal and TSD. Joel Schwartz’s book, No Way Back, explains why air pollution will continue to decline in the decades ahead. Finally, Joel presents his critique of the warming-will-destroy-air-quality scare in this video from the Heartland Institute’s first annual International Conference on Climate Change.

    As announced last Friday, each day this week and next I’ll post an excerpt of CEI’s film Policy Peril: Why Global Warming Policies Are More Dangerous Than Global Warming Itself. The film is our antidote to Al Gore’s An Inconvenient Truth. If you want to watch Policy Peril in its entirety, click here.

    Today’s segment is on heat waves. Gore and others claim that global warming will make heat waves more frequent and severe, leading to a massive increase in heat-related mortality. Click here to watch the Policy Peril segment on heat waves.

    Here’s the text:

    Narrator: We often hear that global warming will increase the frequency and severity of heat waves. People will drop like flies! Sounds plausible, doesn’t it? But wait a minute. Summer air temperatures in U.S. cities have been rising over the past three decades, in part because cities generate heat islands, which expand as cities grow. Yet heat-related mortality has gone down.

    Dr. Patrick Michaels (Cato Institute): Bob Davis and I did some work at UVA [University of Virginia] on heat-related mortality, and published it in the refereed literature, showing that the more frequent heat waves are, the fewer people die. That’s because they adapt. And in the average American city–the average of all American cities–heat-related mortality is going down, significantly, not up.  In fact, in the cities in the southern United States–Phoenix, which has a very old population, Tampa–there’s hardly any heat-related mortality at all.

    Narrator: As long as politicians don’t make electricity so costly that low-income households can’t afford to run their air conditioners, heat-related death rates should continue to decline, even in a warming world.

    Commentary

    In An Inconvenient Truth (p. 75 of the book version), Gore states, “We have already begun to see the kinds of heat waves that scientists say will become more common if global warming is not addressed. In the summer of 2003 Europe was hit by a massive heat wave that killed 35,000 people.”

    Gore implies that global warming killed 35,000 people. Yet heat waves have occurred in Europe (and elsewhere) from time immemorial. How does Gore know that global warming caused the 2003 heat wave? Or, if global warming was a contributing factor, how does Gore know how much extra oomph the 2003 heat wave got from global climate change?

    In fact, it is impossible to link any single heat wave or other extreme weather event to global climate change.

    However, if global warming were responsible for the 2003 Europe heat wave, we would at least expect that, globally, the summer of 2003 would have been a hot one. In fact, the 2003 summer was about average or slightly cooler than average compared to the previous 23 summers.

    During June, July and August of 2003, more than half the planet was cooler than the mean temperature of the period from 1979 through 2003. Europe–a tiny fraction of the Earth’s surface–was the only place experiencing unusual heat. See the Figure below.

    europe_heat_thickness

    Figure description: 1000-500 mb thickness temperature anomalies for June, July, and August 2003. Colors (violet to red) indicate standard deviations below, at, and above the mean summer temperature for 1979-2003. Sources: Chase et al., 2006, World Climate Report

    There is a simpler, natural explanation for Europe’s hot summer: An atmospheric circulation anomaly trapped a bubble of hot dry air over Europe for several weeks. Here is what the United Nations Environment Program–hardly a bunch of global warming skeptics–had to say:

    This extreme weather was caused by an anti-cyclone [high pressure system] firmly anchored over the Western European land mass holding back the rain-bearing depressions that usually enter the continent from the Atlantic Ocean. This situation was exceptional in the extended length of time (over 20 days) during which it conveyed very hot dry air from south of the Mediterranean.

    Chase et al. (2006), a team of scientists from Colorado and France, found nothing “unusual” about the 2003 Europe heat wave that would indicate a change in global climate conditions. Among their conclusions: 

    • Extreme warm anomalies equally as unusual as, or more unusual than, the 2003 heat wave occur regularly.
    • Extreme cold anomalies in the summer months also occur regularly and occasionally exceed the magnitude of the 2003 warm anomaly in standard deviations from the mean.
    • Natural variability in the form of El Niño and volcanic eruptions appears to be of much greater importance in causing extreme regional temperature anomalies than a simple upward trend in time.
    • Regression analyses do not provide strong support for the idea that regional heat waves are increasing with time.  

    The death toll in the Europe 2003 summer heat wave was shockingly high–but part of the blame falls on Europe’s historic distaste for air conditioning and the fact that many able-bodied Europeans go on vacation in August, leaving the elderly and infirm to fend for themselves.

    Dr. Patrick Michaels, the expert I interviewed for the Policy Peril segment on heat waves, points out that a heat wave of similar magnitude hit France (the epicenter of the 2003 heat wave) in 2006, yet the death toll was about 2,000 people–almost 4,400 fewer than the standard weather-mortality model would predict. The reason, Michaels argues, is that the 2003 heat wave taught the French a big fat lesson about air conditioning and spurred public and private action to make people safer:

    In response to the tragedy of 2003, the French government implemented the National Heat Wave Plan that included a “set-up of a system of real-time surveillance of health data, compilation of scientific recommendations on the prevention and treatment of heat-related diseases, air conditioning equipment for hospital and retirement homes, drawing up of emergency plans for retirement homes, city-scale censuses of the isolated and vulnerable, visits to those people during the alert periods, and set up of a warning system.” In other words, France adapted to the heat wave by providing information to the population at-large and air conditioning to the most vulnerable. No doubt people were also personally more aware of the dangers of summer heat in 2006 than they were three years earlier.

    In the United States, heat-related mortality has been going down, decade by decade, even as urban summer air temperatures have increased.

    [Figure: heat-related-mortality-in-us-cities]

    Figure explanation: Population-adjusted heat-related mortality for 28 major cities across the United States. Each bar of the histogram for each city represents a different 10-year period. The left bar represents heat-related mortality in the 1960s/70s, the middle bar represents the 1980s, and the right bar is the 1990s. No bar at all (in cities like Phoenix and Tampa) means no statistically significant heat-related mortality during the decade. Source: Davis et al. (2003), Changing heat-related mortality in the United States.

    There is every reason to expect these trends to continue. Think about it this way. Adaptation is what human beings by nature do. There are very few Eden-like spots on Earth where people can survive and thrive without housing, clothing, and agriculture–all forms of adaptation. In free societies especially, people constantly adapt (innovate, experiment, modify private behavior and public policy) to improve their health, safety, and comfort.

    If global warming makes more U.S. cities more like Phoenix or Tampa, we can reasonably anticipate that more cities will have heat-mortality rates like Phoenix and Tampa–practically zero!

    For a useful overview of the scientific literature, see the U.S. Chamber of Commerce’s comment on EPA’s proposed endangerment finding. The Chamber draws the common-sense conclusion: “Overall, there is strong evidence that populations can acclimatize to warmer climates via a range of behavioral, physiological, and technological adaptations” (p. 4).

    Air conditioning is one of the great health-enhancing, life-saving technologies of the modern world. Air conditioners run on electricity. What we really have to worry about, especially in a warming world, is that politicians will adopt energy policies–actually, anti-energy policies–that force low-income households to turn off their air conditioners in hot weather.

    The cap-and-trade bill Congress is now debating, the Waxman-Markey bill, named for its co-sponsors, Reps. Henry Waxman (D-CA) and Edward Markey (D-MA), would function as a massive energy tax, driving up the cost of gasoline, heating oil, and electricity. A study by the Heritage Foundation finds that Waxman-Markey would increase annual household electricity costs by $468. At the same time, many household incomes would decline as GDP drops by up to $300 billion per year. Similarly, Charles River Associates, in a study for the U.S. Black Chamber of Commerce, estimates that Waxman-Markey would increase electricity prices while decreasing average household purchasing power by $730 in 2015, $800 in 2020, and $830 in 2030. This is a recipe for sickness and death.

    It’s just one reason our film is subtitled, “Why Global Warming Policies Are More Dangerous Than Global Warming Itself.”