Features

In January, EPA proposed the Carbon Pollution Standard, a regulation that requires new coal-fired power plants to install carbon capture and sequestration (CCS) technology. Because CCS has not yet been commercially demonstrated, it remains prohibitively expensive. As a result, EPA’s Carbon Pollution Standard effectively bans the construction of new coal-fired power plants.

Without further ado, here are the top six reasons that EPA’s proposed Carbon Pollution Standard is illegal:

#6. EPA’s Carbon Pollution Standard increases conventional pollution.

Capturing, transporting, and sequestering greenhouse gases from a power plant is an energy-intensive process that imposes an energy penalty on the order of 15-25%. This energy penalty requires the combustion of additional fuel, which increases conventional pollution. While there are technologies to mitigate the resulting increases in nitrogen oxides and particulate matter, nothing can mitigate the precipitous increase in ammonia emissions (see chart below) caused by carbon capture and sequestration. Also, because CCS-outfitted power plants burn more fuel, they generate greater volumes of combustion wastes, primarily coal ash and boiler slag. Increases in conventional pollution do not, per se, sink the Carbon Pollution Standard, but the Agency must, at the very least, address these adverse environmental impacts. See Sierra Club v. Costle, 657 F.2d 298, 331. The proposed Carbon Pollution Standard fails to do so.
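To see how the energy penalty translates into extra fuel burn, here is a back-of-the-envelope sketch (my own illustration of the arithmetic, not a figure from the proposed rule). If the capture equipment consumes a fraction $p$ of a plant’s gross generation, the fuel burned per net megawatt-hour rises by a factor of $1/(1-p)$:

$$\frac{\text{fuel per net MWh, with CCS}}{\text{fuel per net MWh, without CCS}} = \frac{1}{1-p}, \qquad p = 0.15\text{-}0.25 \;\Rightarrow\; 1.18\text{-}1.33$$

That is, a 15-25% energy penalty implies roughly 18-33% more coal burned, and proportionally more combustion waste, for every megawatt-hour actually delivered to the grid.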

[Chart: increase in ammonia emissions from power plants with carbon capture]

#5. EPA’s Carbon Pollution Standard is not based on “adequately demonstrated” technology.

The Clean Air Act requires that the Carbon Pollution Standard be based on an “adequately demonstrated” technology, which the courts have interpreted as being “commercially demonstrated.” In a recent post for Master Resource, I compared the state of CCS technology today to past pollution control technologies whose commercial viability was adjudicated by the courts. The results of this analysis are summarized in the chart below and demonstrate that CCS is not “adequately demonstrated.”

[Chart: commercial status of CCS compared with past pollution control technologies]

#4. EPA’s Carbon Pollution Standard is not “achievable.”

In addition to requiring that the Carbon Pollution Standard be based on an “adequately demonstrated” (i.e., commercially viable) technology, the Clean Air Act requires that the regulation be “achievable.” The D.C. Circuit Court of Appeals, in turn, has interpreted “achievability” to mean that the regulation is capable of being met in all parts of the country. See National Lime Association v. EPA, 627 F.2d 416, 443. This is a problem for EPA, because the geological formations capable of storing vast volumes of greenhouse gases are distributed unevenly across the country. Indeed, the agency identified only 12 States that practice the type of sequestration (enhanced oil recovery) that EPA believes will meet the requirements of the Carbon Pollution Standard.


Judge Kavanaugh’s Dissent in the EPA Mercury Rule Case: Will the Supreme Court Review EPA’s $9.6 Billion Reg?

Yesterday, the D.C. Circuit Court of Appeals ruled 2-1 to uphold EPA’s nonsensical Mercury and Air Toxics Standards (MATS) Rule. The MATS Rule requires electric utilities to install maximum achievable control technology (MACT) to reduce emissions of mercury and other hazardous air pollutants (HAPs) from coal-fired power plants.

The rule is nonsensical because, as explained below, although it is one of the most expensive regulations in history (officially estimated to cost $9.6 billion in 2016), its health benefits are illusory.

In the case, titled White Stallion Energy Center LLC et al. v. EPA et al., Judge Brett Kavanaugh wrote a powerful dissenting opinion, as my colleague William Yeatman noted yesterday. Kavanaugh agreed with industry petitioners that EPA unreasonably excluded cost considerations (economic impacts) when determining whether MACT regulation of power plant HAPs is “appropriate and necessary.”

The two-judge majority partly based their opinion on the Supreme Court’s ruling in Whitman v. American Trucking Ass’ns (2001) that EPA may not take costs into consideration when setting national ambient air quality standards (NAAQS).

Kavanaugh argues the majority ‘misreads’ or ‘over-reads’ Whitman by ignoring a key difference between the Clean Air Act provisions governing NAAQS rulemakings – §108(a) and §109(b) – and the provision addressing potential MACT regulation of power plant HAPs – §112(n)(1)(A).

The NAAQS provisions clearly allow no room for cost considerations. If an air pollutant is emitted by numerous or diverse mobile or stationary sources and the associated air pollution is reasonably anticipated to endanger public health or welfare, then, pursuant to §108(a), EPA must establish NAAQS for that pollutant, and, pursuant to §109(b), the standards must be requisite to protect public health with an adequate margin of safety. Period. End of story.

In contrast, §112(n)(1)(A) requires EPA to study and issue a report on the public health hazards anticipated to occur as a result of power plant HAP emissions, and then apply MACT regulation “if” the Administrator “finds such regulation is appropriate and necessary.” The provision does not define the terms “appropriate” and “necessary.” Common sense suggests that a regulation is “appropriate” if the benefits justify the costs.

Perhaps more importantly, §112 tasks EPA with determining whether MACT regulation of HAPs is “appropriate and necessary” only for “electric utility steam generating units.” For all other major sources of HAP emissions, EPA has no discretion and is simply required to promulgate MACT regulations. The statute thus seems to contemplate that, in the special case of coal power plants, MACT regulation may not be appropriate even if the associated HAP emissions pose public health hazards. In other words, a less stringent form of Clean Air Act regulation (such as new source performance standards) or state-level regulation might be “appropriate.”

Yeatman opines that Kavanaugh’s dissent may persuade the Supreme Court to review the case. If so, the Court might rule that EPA is allowed or even required to consider costs when determining what is “appropriate” when regulating HAP emissions from power plants. That, in turn, could set the stage for litigation on whether the MATS Rule is too costly to be “appropriate” within the meaning of the statute.

Of course, EPA contends the MATS Rule is a bargain at almost any price, delivering $33 billion to $89 billion in annual health benefits. Litigation reviving public debate on such claims could be a great teaching moment.

Our June 2012 study, All Pain and No Gain, provides a detailed critique of EPA’s MATS Rule health benefit estimates. Below is a summary of key points.

Yesterday, the National Association of Manufacturers announced the launch of an ad campaign targeting EPA’s pending revision of the national ozone standard. The pitch, which I’ve re-posted immediately below, is running in 11 States: Arkansas, Colorado, Minnesota, North Carolina, Virginia, Iowa, Michigan, West Virginia, Kentucky, Ohio, and Missouri.

[NAM ozone ad]

The National Association of Manufacturers has every reason to fear the ozone rule, as do all Americans. At a minimum, the standard EPA is considering would trigger the Clean Air Act’s ultra-onerous “Part D” controls for 75% of the country; at most, this unfortunate fate would befall 96% of the country. By EPA’s own accounting, the regulation could cost up to $90 billion a year—even though the agency concedes that only $23 billion in known ozone emissions controls exists. To clarify, this means that the rule could necessitate the creation, out of whole cloth, of almost $70 billion a year in control technologies.
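The arithmetic behind that last sentence is simple subtraction, using EPA’s own figures:

$$\$90\ \text{billion} - \$23\ \text{billion} = \$67\ \text{billion} \approx \$70\ \text{billion per year}$$

The balance would have to come from control technologies that, on the agency’s own accounting, are not yet known to exist.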

But it’s not EPA’s fault! In fact, the agency doesn’t have the discretion to set the ozone standard. Instead, this responsibility is given to an insular group of advisers, the seven-member Clean Air Scientific Advisory Committee (CASAC). There are trillions of dollars at stake—that’s TRILLIONS, with a T—yet CASAC is in no way accountable to U.S. voters. Indeed, virtually no voters know of this group’s existence. Worst of all, CASAC is uniquely ill-suited to designate a standard for a non-threshold pollutant like ozone, due to a professional bias. In this post, I will briefly explain this undemocratic, yet ultra-powerful, institution of environmental policy-making.


In which region of the world are plants most productive in photosynthesizing water and carbon dioxide into carbohydrates? If you guessed the tropical rain forest, you’d be wrong. The region with the highest gross primary production (GPP) from photosynthesis is the U.S. corn belt.

That is the finding of a new study (Guanter et al. 2014) published in Proceedings of the National Academy of Sciences (PNAS). The team of 20 researchers used satellite-based spectroscopy to monitor sun-induced chlorophyll fluorescence (SIF), an electromagnetic signal emitted as a byproduct of photosynthesis.


Global map of maximum monthly sun-induced chlorophyll fluorescence (SIF) per 0.5° grid box for 2009.

The results of the study really shouldn’t be surprising. The U.S. leads the world in combined private-public R&D spending on agriculture and is the world’s top corn producer and agricultural exporter. Nonetheless — and this too is not surprising — the corn belt GPP reflected in satellite SIF data substantially exceeds the GPP estimated in carbon cycle models. The researchers report:

Our SIF-based crop GPP estimates are 50–75% higher than results from state-of-the-art carbon cycle models over, for example, the US Corn Belt and the Indo-Gangetic Plain, implying that current models severely underestimate the role of management.

Perhaps to appease the political-correctness guardians at PNAS, the study begins with a warning that “past advances” in agriculture are “threatened by climate change,” and the authors say their research is significant because it provides benchmark data for “more reliable projections of climate impact on crop yields.”

Clearly, though, the finding is also significant for another reason. It doesn’t fit into the fear narrative promoted by the recently-released IPCC Working Group II (WG2) report on climate impacts. Current models “severely underestimate the role of management.” That suggests current models underestimate farmers’ ability to adapt to climate change.

The Troubling Basis for EPA’s Rosy Cost-Benefit Analysis of the Clean Air Act

Perhaps you’ve heard or seen the eye-popping statistics, trumpeted by EPA and its supporters, regarding the incredible benefits supposedly wrought by the Clean Air Act.

In a recent study, for example, EPA claimed that in 2020 alone, the Clean Air Act would be responsible for “approximately $2 trillion” in benefits. Given that the costs of the Clean Air Act are estimated to be $65 billion in 2020, this represents a benefit-to-cost ratio of more than 30, which, of course, casts EPA’s work in a very favorable light. And it’s not just EPA trumpeting these numbers. Last week, Senate Environment and Public Works Chairwoman Barbara Boxer cited data from the aforementioned report to rebut Republican criticisms of EPA Administrator Gina McCarthy. Indeed, environmental special interests are always quick to cite these benefits when they defend the agency they’ve captured.

Eighty-five percent ($1.7 trillion) of the $2 trillion figure derives from two variables: (1) how many deaths EPA purports to prevent and (2) the supposed value of each prevented death. EPA forecasts that its regulations will prevent almost 240,000 deaths in 2020; the agency estimates the value of a statistical life at about $7.4 million. Multiply those two numbers, adjust for inflation, and voila!—you’re at $2 trillion in “benefits” in 2020.
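As a quick check of EPA’s arithmetic (rounding aside, and before the inflation adjustment):

$$240{,}000\ \text{prevented deaths} \times \$7.4\ \text{million per statistical life} \approx \$1.78\ \text{trillion}$$

which is, give or take, the $1.7 trillion mortality component of the $2 trillion headline figure.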

In this post, I will briefly explain that this result is a total sham, because the underlying numbers are unreliable to the point of being meaningless.

Start with EPA’s calculation of “prevented deaths”—i.e., the mortality benefits of environmental regulations. This estimate is based almost entirely on controversial, “secret” science. To be precise, in establishing a relationship between decreased air pollution and decreased mortality, EPA relies on decades-old data from two reports: the Harvard Six Cities Study and the American Cancer Society’s Cancer Prevention Study II. So when EPA claims that it will prevent 240,000 deaths in 2020, that number is an extrapolation from these two key studies.

And yet, despite the evident importance of these two studies, EPA refuses to make the underlying data publicly available. For two years, House Science, Space and Technology Committee Chairman Lamar Smith has pressed the Agency to produce this “secret science.” And for two years, his requests have been rebuffed. Remember, these studies were taxpayer funded. They serve, moreover, as the justification for public policy. And yet, EPA refuses to turn over the data to a Member of Congress. To read about Chairman Smith’s diligent efforts to unlock this secret science, see here, here, and here. Senate Environment & Public Works Ranking Member David Vitter is also investigating the matter. Suffice it to say, EPA’s estimates of the mortality avoided due to the Clean Air Act cannot be trusted.

What about the other variable, the value of a statistical life? How does the agency calculate this figure? The EPA does not place a dollar value on individual lives. Rather, when conducting a benefit-cost analysis of new environmental policies, the Agency uses estimates of how much people are willing to pay for small reductions in their risks of dying from adverse health conditions that may be caused by environmental pollution.

Below is the example provided by EPA:

“Suppose each person in a sample of 100,000 people were asked how much he or she would be willing to pay for a reduction in their individual risk of dying of 1 in 100,000, or 0.001%, over the next year. Since this reduction in risk would mean that we would expect one fewer death among the sample of 100,000 people over the next year on average, this is sometimes described as “one statistical life saved.” Now suppose that the average response to this hypothetical question was $100. Then the total dollar amount that the group would be willing to pay to save one statistical life in a year would be $100 per person × 100,000 people, or $10 million. This is what is meant by the “value of a statistical life.”
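In formula form, EPA’s example reduces to dividing average willingness to pay by the risk reduction purchased (my restatement of the illustration above, not an official EPA formula):

$$\text{VSL} = \frac{\overline{\text{WTP}}}{\Delta \text{risk}} = \frac{\$100}{1/100{,}000} = \$10\ \text{million}$$

EPA’s $7.4 million figure is the same willingness-to-pay-per-unit-of-risk construction, aggregated across many wage-risk and survey studies.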

Simply put, this metric makes no sense. The “value” of each “prevented death” is ascertained by asking people how much hypothetical money they’d be willing to spend in order to avoid a fraction of one percent chance of death. How could this possibly have meaning? Absolutely nothing is concrete. The question doesn’t actually pertain to your money, after all. More importantly, there’s zero referent for estimating the value of reducing your mortality risk by a fraction of one percent. The “benefit” is a total abstraction.


Do Skeptics ‘Reposition’ Warming as ‘Theory’ or Do Alarmists ‘Reposition’ Fear as ‘Fact’? Revisiting an Urban Legend

How many times have you heard climate activists claim skeptics are just latter-day “tobacco scientists”? Google “tobacco scientists” and “global warming,” and you’ll get about 1,110,000 results. With so much (ahem) smoke, surely there must be some fire, right?

Al Gore helped popularize this endlessly repeated allegation. In An Inconvenient Truth (p. 263), he contends that just as tobacco companies cynically funded corrupt scientists to cast doubt on the Surgeon General’s report linking cigarette smoking to cancer, so fossil fuel companies fund “skeptics” to create the appearance of scientific controversy where none exists.

Here’s the pertinent passage:

The misconception that there is serious disagreement among scientists about global warming is actually an illusion that has been deliberately fostered by a relatively small but extremely well-funded cadre of special interests, including Exxon Mobil and a few other oil, coal, and utilities companies. These companies want to prevent any new policies that would interfere with their current business plans that rely on the massive unrestrained dumping of global warming pollution into the Earth’s atmosphere every hour of every day.

One of the internal memos prepared by this group to guide the employees they hired to run their disinformation campaign was discovered by the Pulitzer Prize-winning reporter Ross Gelbspan. Here was the group’s stated objective: to “reposition global warming as theory, rather than fact.”

This technique has been used before. The tobacco industry, 40 years ago, reacted to the historic Surgeon General’s report linking cigarette smoking to lung cancer and other lung diseases by organizing a similar disinformation campaign. 

One of their memos, prepared in the 1960s, was recently uncovered during one of the lawsuits against the tobacco companies in behalf of the millions of people who have been killed by their product. It is interesting to read it 40 years later in the context of the global warming campaign:

“Doubt is our product, since it is the best means of competing with the ‘body of fact’ that exists in the mind of the general public. It is also the means of establishing controversy.” Brown and Williamson Tobacco Company memo, 1960s

There’s just one problem with this tale of corruption and intrigue — much of it is false and all of it is misleading. Let’s examine the flaws in this urban legend, going from minor to major.


Which Sovereign Merits Judicial Deference When State & Federal Governments Conflict under the Clean Air Act’s Cooperative Federalism Arrangement?

What is the proper scope of review when an Article III court adjudicates a federalism dispute under the Clean Air Act? Is a court supposed to review the reasonableness of the state’s determinations? Or is it supposed to review the reasonableness of the EPA’s review of the reasonableness of the state’s determination? Simply put, to which sovereign should courts defer?

In light of the Obama administration’s aggressive oversight of Clean Air Act programs operated by the States, this is a hugely consequential question, with billions of dollars at stake. Yet there is little statutory direction, and conflicting case law, to guide lower courts in their review of State-Federal disagreements under the Clean Air Act. On January 29, the State of Oklahoma petitioned the Supreme Court to revisit this matter and clarify which sovereign warrants ascendant respect from reviewing courts.

Below, in the first of a two-part series, I explore how the cooperative federalism regulatory regime established by the Clean Air Act complicates judicial deference to agency decision-making. Ultimately, I urge the Supreme Court to grant Oklahoma’s cert petition in order to cut through the uncertainty and establish unequivocally the boundaries of authority between the State and Federal Governments under the Clean Air Act.

In the second part of the series, I will make the case that States, and not the EPA, are the proper recipients of the courts’ respect.

Cooperative Federalism Conundrum: Delegation Can Be Split; Deference Can’t


Would Keystone XL Serve the U.S. National Interest?

Would the Keystone XL Pipeline (KXL) serve the U.S. national interest? If the State Department answers that question in the affirmative, TransCanada Corporation can finally begin building the pipeline, more than five and a half years after originally applying for a construction permit.

TransCanada recently submitted comments to State making the case for an affirmative “national interest determination” (NID). The comments are clear, comprehensive, accurate, and, in my opinion, compelling.

Inspired by those comments, I will attempt here to state the common sense of the issue in my own words.

The interminable controversy over the KXL is stunningly pointless. Do modern commerce and transport chiefly run on petroleum-based products? Yes. Are pipelines the most economic, efficient, and safe way to transport large volumes of petroleum? Yes. Is Canada our closest ally and biggest trading partner? Yes. Is Canada already the largest single source of U.S. petroleum imports? Yes. Would building the KXL enhance the efficiency of oil transport from Canada to U.S. markets? Yes. Would building the KXL support tens of thousands of American jobs and add billions to the GDP during the construction period? Yes. Would all the financing be private and not cost taxpayers a dime? Yes.

So how could building the KXL not be in the U.S. national interest?

In 2012, TransCanada sought permission to build the “Gulf Coast Project” (the green line in the map below), the southern leg of the 1,700-mile pipeline it originally proposed to build from Hardisty, Alberta, to Port Arthur, Texas. State environmental agencies and the U.S. Army Corps of Engineers granted all necessary permits for the Gulf Coast Project by August 2012.

Keystone XL Pipeline Gulf Coast Route, State Department Final EIS 2014

Construction began in August 2012 and the project commenced commercial service in January 2014. The earth did not shake, the sky didn’t fall, no one felt a “disturbance in the Force . . . as if millions of voices suddenly cried out in terror and were suddenly silenced.”

EPA: Artless Dodging on ‘Carbon Pollution’ Rule

Yesterday, the House Science Committee’s Energy and Environment Subcommittees held a joint hearing on the “Science of Capture and Storage: Understanding EPA’s Carbon Rules.” EPA Air Office Acting Administrator Janet McCabe was the sole witness on the second panel. Her testimony begins about one hour and 54 minutes (1:54) into the archived Webcast. Although calm and non-ideological in tone, McCabe’s responses in the lengthy Q&A were terse, usually uninformative, and often evasive.

The hearing focused on carbon capture and storage (CCS), the technology new coal-fired power plants will have to install to meet the carbon dioxide (CO2) New Source Performance Standards (NSPS) in EPA’s proposed “Carbon Pollution Rule.” Under the Clean Air Act, NSPS are to “reflect the degree of emission limitation achievable through the application of the best system of emission reduction which . . . has been adequately demonstrated.”

Environment Subcommittee Chair David Schweikert (R.-Ariz.) kicked off the Q&A (1:59) by noting that the “Carbon Pollution Rule” assumes CCS technology is “robust and ready to go,” yet the “previous panel was pretty crisp, even from right to left, that there are still some real concerns on the technology itself.” He asked for “technical” information clarifying how EPA set the CO2 standards.

McCabe responded by explaining that the “Carbon Pollution Rule” does not actually mandate the use of CCS; rather, it sets a performance standard based on CCS and lets covered facilities decide for themselves how to meet the standard. Okay, but that’s a distinction without a difference, since the only known technology that can reduce CO2 emissions from coal plants as much as CCS is CCS.

McCabe continued:

When it comes to the technology that we based those numbers on [i.e. 1,100 lbs. CO2 per MWh for new coal plants], we believe that if you look across all the information and data that’s available, that there is adequate and robust data showing that the various components that we based the standard on are in use, have been in use, and will be ready.

In other words, instead of providing technical information addressing the concerns raised during the previous panel, McCabe said, in effect, ‘Trust us, we’re the experts.’

Rep. Suzanne Bonamici (D-Ore.), noting GOP Members’ concerns about the cost of CCS, asked McCabe to discuss the “costs associated with the lack of action to address climate change and increasing emissions.” McCabe responded (2:10):

That’s a very good question. There are costs to our economy and to society from the impacts of climate change that is already happening. In 2013, there were seven extreme weather events. Which I think is a nice way of saying great, big, huge horrible storms that cost the economy over a billion dollars each. This is a real economic impact on our communities, our families across the country.

Prompted by Bonamici, McCabe went on to include “health care costs” and “disruption to families and whole communities” among the costs of inaction.

Whether deliberately or otherwise, McCabe blurs the distinction between climate risk and climate change risk. Hurricanes are not some new phenomenon unique to the Age of Global Warming. Huge, horrible storms have billion-dollar costs — that is the nature of the beast. Blaming hurricanes on CO2 emissions is unscientific. There has been no long-term trend in the strength or frequency of hurricanes, none in global accumulated cyclone energy, and none in hurricane damages once losses are adjusted to take into account increases in population, wealth, and the consumer price index. The U.S. is currently experiencing the longest period on record with no major (category 3-5) hurricane landfall strikes.

Blaming hurricane damages on congressional gridlock (“lack of action”) is loopy. Even complete elimination of U.S. CO2 emissions via immediate and total shutdown of the economy would avert only a hypothetical 0.19°C of warming by 2100 – too small a reduction to have any detectable effect on weather patterns. Ergo, no lesser emission reductions that might have been implemented during the past decade or two could provide any meaningful protection to people or property even if one assumes all seven billion-dollar storms in 2013 were ginned up by climate change.

More Studies Find Lower Climate Sensitivity

Secretary of State John Kerry last week exhorted all State Department officials to conclude a new international climate change agreement, integrate climate change with other priorities, and, in general, “elevate the environment in everything we do.” In the same week, climate researchers produced two more studies undercutting Kerry’s opinion that climate change is “perhaps the world’s most fearsome weapon of mass destruction.”

The studies address the core scientific issue of climate sensitivity — the question of how much warming results from a given increase in atmospheric carbon dioxide (CO2) concentrations.

There are two types of sensitivity estimates. Equilibrium climate sensitivity (ECS) is an estimate of the increase in ‘steady state’ surface temperature after the climate system has fully adjusted to a doubling of CO2 concentrations — a process assumed to take centuries or longer due to oceanic thermal inertia. Transient climate sensitivity (TCS) is the estimated increase in surface temperature during the 20-year period when CO2 doubling occurs, presumably during the final decades of this century.

ECS is the key variable in both climate model predictions of future global warming and model estimates of the “social cost of carbon” – the damage allegedly inflicted on society by an incremental ton of CO2 emissions.
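A standard, highly simplified relation shows why this one parameter dominates the projections. Because CO2 forcing scales with the logarithm of concentration, the eventual warming from any concentration ratio $C/C_0$ is, to a first approximation:

$$\Delta T_{eq} \approx \text{ECS} \times \frac{\ln(C/C_0)}{\ln 2}$$

For example, a 50% increase in CO2 ($C/C_0 = 1.5$) yields $\Delta T_{eq} \approx 0.58 \times \text{ECS}$: about 1.8°C if ECS is 3°C, but only about 1.2°C if ECS is 2°C. Lower sensitivity estimates thus deflate warming projections, and with them “social cost of carbon” estimates, roughly proportionally.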

The IPCC’s 2007 Fourth Assessment Report (AR4) estimated a “likely” ECS range of 2°C-4.5°C, with a “best estimate” of 3°C. Since 2011, however, the warming pause and the growing divergence between model predictions and observed global temperatures have prompted several studies finding that IPCC sensitivity estimates are too hot.

Cato Institute scientists Patrick Michaels and Chip Knappenberger maintain a growing list of such studies, which totaled 18 as of February 2014.

[Chart: 18 studies since 2011 estimating climate sensitivity, compiled by Michaels & Knappenberger, February 2014]

The average sensitivity estimate of the 18 studies is just under 2°C. In other words, the AR4 “best estimate” of 3°C is 50% higher than the mean estimate of the new studies. That may be why the IPCC’s 2013-2014 Fifth Assessment Report (AR5) declines to offer a “best estimate.”

A new “best estimate” of 2°C would deflate the scary climate change impacts featured elsewhere in AR5, but recycling the same old 3°C “best estimate” would deflate the IPCC’s claim to be an honest broker. So instead the IPCC chose to lower the low end of the “likely” sensitivity range. Whereas the “likely” range in AR4 was 2°C-4.5°C, in AR5 it is 1.5°C-4.5°C.

That small concession, however, does not dispel the growing challenge to consensus climatology. As indicated in the chart above, the average sensitivity of the climate models used in AR5 is 3.2°C. That is 60% higher than the mean of recent estimates (<2°C). Let’s take a quick look at three studies that have come out this year.