The Need for New Federal Anti-Spam Legislation

The CAN-SPAM Act of 2003 was passed in an attempt to stop “the extremely rapid growth in the volume of unsolicited commercial electronic mail” and thereby reduce the costs to recipients and internet service providers of transmitting, accessing, and discarding unwanted email.1 The Act obligates the senders of commercial email to utilize accurate header information, to “clear[ly] and conspicuous[ly]” identify their emails as “advertisement or solicitation,” and to notify recipients of the opportunity to opt out of receiving future emails.2 Once an individual has opted out, that sender is then prohibited from emailing them further.3 Despite high hopes, the Act has largely been considered a failure for four reasons: (1) it eliminates many pre-existing private causes of action against senders,4 (2) it does not require senders to receive permission before initiating contact,5 (3) it relies upon a system of opt-out links that are both distrusted and frequently abused,6 and (4) it has been under-enforced.7

In the absence of comprehensive federal legislation, alternative solutions to the spam problem have proliferated, including state-by-state statutory regimes, decentralized private regulation, and restraints on the acquisition of email addresses itself. This paper examines some of the shortcomings of these alternative approaches, advocating instead for a new, more complete federal statutory regime.

I. State Solutions: Low Compliance, Low Enforcement

Some states have tried statutory approaches to curtailing spam.8 These regimes vary widely, ranging from completely prohibiting all “unsolicited commercial email,”9 to permitting unsolicited commercial emails but requiring that they contain certain keywords in the subject line,10 to merely requiring truthfulness in the sender and subject lines,11 to seemingly no regulation whatsoever.12 These broad categories have further differentiation—some states’ laws only apply to email sent to more than a certain number of recipients,13 for example—so that the result is a substantially heterogeneous patchwork of regulation across the country.

Since email addresses, unlike physical addresses, offer no indication of the location of the recipient, complying with each state’s particular laws becomes all but impossible for senders of commercial email. The result is low voluntary compliance, with weak enforcement mechanisms—low-incentive private causes of action14 and underfunded state investigators15—unable to pick up the slack. Uniform federal legislation against spam that preempts these state regimes, includes greater incentives for bringing private action, and allocates funding for investigations would increase compliance and improve enforcement across the country.16

II. Decentralized Private Regulation: Anticompetitive Concerns

Private internet service providers (ISPs) have also stepped in, generating lists of websites that they believe “send or support the sending of spam,” and “blocking transmission” between those websites and the addresses in their own systems.17 This decentralized process of private regulation may be more flexible and adaptive to changing technology,18 but it creates significant anticompetitive concerns.19

The criteria for blacklisting can be quite elastic—despite dedicating significant resources to fighting spam and policing relay use, MIT ran afoul of one such blacklist for simply having “bad email practices”20—and could easily allow ISPs to engage in selective enforcement, disproportionately blocking the websites and communications of competitors. Since ISPs are already natural monopolies, with customers in a given location typically having few, if any, alternatives, market forces would do little to restrain capricious blocking activity. Furthermore, ISPs that operate as part of much larger corporations have added potential for abuse by leveraging their blocking power in other markets; AT&T, for example, might use its position as an ISP to block the website and commercial messages of a competing cell phone carrier while allowing its own to go through. In this way, allowing ISPs to maintain blacklists enables them to magnify their already significant market power. Federal legislation against spam can obviate the need for private blacklists, stopping spam without generating anticompetitive forces.

III. Restraints on Email Address Acquisition: No Protection in Many Cases

The Computer Fraud and Abuse Act (CFAA)21 as well as the common law of contract and trespass have been used to curtail spam indirectly by policing the illegitimate acquisition of email addresses.22 However, these solutions are incomplete at best. Focusing on the acquisition of email addresses does nothing for individuals whose email addresses are already in the hands of spammers. Additionally, these restraints miss a wide variety of email address acquisition techniques. Lists of email addresses can still be bought, sold, or posted for free by companies that acquired them. Users may unwittingly leave their contact information searchable on social media sites. Many email addresses can even be guessed.23 Federal legislation addressing the act of spamming directly is needed to close these gaps and provide recourse once an email address has been acquired.

IV. Conclusion: Crafting Better Federal Spam Regulation

A new federal statutory regime regulating spam is needed to replace CAN-SPAM. State regulations are prohibitively difficult to comply with, and lack proper enforcement mechanisms. Private regulation raises too many anticompetitive concerns. Restrictions on email address acquisition, while beneficial, are an inadequate solution on their own. New federal regulation that directly targets spamming activity, requires opting in rather than opting out, provides sufficient incentives for private parties to file complaints or bring suit, and dedicates resources for investigations would go far in reducing spam below its current level.

  1. 15 U.S.C. § 7701(a)(2)-(3), (6) (2012).
  2. Id. § 7704(a)(1)-(2), (5).
  3. Id. § 7704(a)(4)(A).
  4. See Amit Asaravala, With This Law, You Can Spam, Wired (Jan. 23, 2004) (quoting Lawrence Lessig as saying the Act “is an abomination . . . . It’s ineffective and it’s affirmatively harmful because it preempts” other causes of action).
  5. See Statement on CAN SPAM, Coal. Against Unsolicited Commercial Email (Dec. 16, 2004) (noting that by adopting an opt-out system, “[CAN-SPAM] gives each marketer in the United States one free shot at each consumer’s e-mail inbox . . . .”).
  6. See Daniel Solove, What Exactly Is a “Spammer”?, Concurring Opinions (Jan. 7, 2007) (“It is common knowledge that you shouldn’t click the opt out link on an unsolicited email because many spammers use that trick as a way to verify that people have read the spam and will then send people even more spam.”).
  7. See Jonathan K. Stock, A New Weapon in the Fight Against Spam, Mondaq (Oct. 8, 2004) (“The [CAN-SPAM] Act has largely gone unenforced.”).
  8. See, e.g., Washington v. Heckel, 24 P.3d 404 (Wash. 2001) (finding Heckel liable for sending unsolicited commercial email by applying Washington’s Commercial Electronic Mail Act, codified at Wash. Rev. Code § 19.190 (2012)).
  9. Cal. Bus. & Prof. Code § 17529.2(a)-(b) (West 2013).
  10. See, e.g., 815 Ill. Comp. Stat. 511/10(a), (a-15) (2012) (mandating use of “ADV” and “ADV:ADLT” in “unsolicited electronic mail advertisement’s subject line[s]”); Alaska Stat. § 45.50.479 (2012) (mandating the use of subject line keywords only for sexually explicit content); Wis. Stat. § 944.25 (2012) (same).
  11. See, e.g., Nev. Rev. Stat. §§ 205.492, 205.511 (2012); N.D. Cent. Code § 51-27-01 (2012); Wash. Rev. Code §§ 19.190.010 to .110 (2012).
  12. Hawaii, for example, “has no statutes addressed specifically to commercial email and spam.” Legal Information Institute, Hawaii, Cornell L. Sch. (last accessed Nov. 17, 2013).
  13. See, e.g., La. Rev. Stat. Ann. §§ 14:73.1, 14:73.6 (only applying to “electronic message[s] . . . sent in the same or substantially similar form to more than one thousand recipients”).
  14. Many states, for example, only allow plaintiffs to recover actual damages or a predetermined maximum amount; these are likely insufficient to incentivize the costly and time-consuming process of obtaining a lawyer, filing suit, and litigating. See, e.g., R.I. Gen. Laws § 6-47-2(h) (2012) (capping what plaintiffs may receive at $100, plus legal fees); Pa. Stat. Ann. § 2250.7(a)(1) (West 2013) (same); Me. Rev. Stat. tit. 10 § 224.1497(7)(B) (2012) (allowing for recovery of actual damages or $250, whichever is greater); Mo. Rev. Stat. § 407.1129 (2012) (allowing for recovery of actual damages or $500, whichever is greater).
  15. See Electronic Crime Needs Assessment for State and Local Law Enforcement, Nat’l Inst. of Justice (Mar. 2001), at iv (voicing “serious concerns about the capability of [state] law enforcement resources to keep pace” with a wide variety of computer crimes).
  16. One might argue that senders of unsolicited commercial email ought simply to identify the strictest state law and obey it, thereby remaining safe in every state. The result of this, however—uniform anti-spam law across the country—is more legitimately achieved through federal legislation than through a single state’s unilateral action. Some states, for one reason or another, may not want stronger anti-spam laws. The federal legislative process would balance these interests and take different states’ desires into account.
  17. Media3 Technologies v. Mail Abuse Prevention System, No. 00–CV–12524–MEL., 2001 WL 92389, at *2 (D. Mass. Jan. 2, 2001).
  18. See David G. Post, Of Black Holes and Decentralized Law-Making in Cyberspace, 2 Vand. J. Ent. L. & Prac. 70 (2000).
  19. Decentralized private regulation also raises the same compliance concerns outlined above. There are many different possible definitions of spam, let alone what constitutes “supporting” the sending of spam. Since ISPs may cover residents of multiple states, and states may have multiple ISPs, the result is a patchwork of private policies overlaid onto a patchwork of state regulations, further hampering compliance.
  20. See Lawrence Lessig, The Spam Wars, The Indus. Standard (Dec. 31, 1998).
  21. 18 U.S.C. § 1030 (2012).
  22. See, e.g., Register.com v. Verio, 356 F.3d 393 (2d Cir. 2004) (determining that querying Register’s servers to obtain their customers’ email information for spamming purposes constituted a breach of contract on the part of Verio, as well as trespass to chattels); America Online v. LCGM, 46 F. Supp. 2d 444, 450 (E.D. Va. 1998) (determining that by using an AOL membership to harvest the email addresses of AOL users, LCGM was in violation of AOL’s Terms of Service, and as a result both “exceeded authorized access” and “accessed without authorization” for the purposes of the CFAA).
  23. Combinations of common first and last names with popular email domains generate numerous positive results as of this writing (searches conducted via web applets such as Linksy’s Find-Email). There are even programs designed to help automate such guessing, such as the Gmail extension Rapportive.

Expanding the Prosecutor’s Purview: Interpreting the Wartime Suspension of Limitations Act


The question of when a war exists has been extensively considered in international law,1 but the subject is also greatly important in the regulation of government contracting because of the little-known Wartime Suspension of Limitations Act (WSLA).2 The Act provides that when the nation is “at war,” the statute of limitations on fraud committed against the United States government is suspended. As a result, the general five-year statute of limitations on federal crimes can be extended without end for fraud in government contracting.3

The Fifth Circuit’s 2012 ruling in United States v. Pfluger4 has garnered significant attention within the business community because the court held that the wars in Iraq and Afghanistan have not ended, thus extending the statute of limitations on fraud against the government.5 Despite the implications of the ruling for companies engaged in government contract work, no scholarship has discussed what the court called a “minimally developed area of law.”6 This Comment seeks to fill that gap by tracing how courts have interpreted the “at war” provision of the Act. In the three cases in which federal courts have considered this question, each court has interpreted the provision differently.

The interpretation will have broad ramifications for the regulation of government contracting because the Act applies to all government contracting, whether performed inside or outside of the war zone. In United States v. Prosperi, the court interpreted the Act to give the government power to prosecute fraud committed against the government even where that fraud has no relation to the war.7 In that case, the court upheld the government’s use of the Act to prevent the statute of limitations from running on fraud committed by a contractor who was working on Boston’s “Big Dig” project. Though the fraud had nothing to do with the war, the court reasoned that because the fraud took place during wartime, the government could stop the clock on the statute of limitations.8

Government contracting was a $516.3 billion industry in FY 2012, and the industry is significantly affected by the statute of limitations on contracting work.9 Extending the statute of limitations can deter companies from committing fraud that may be too complex to discover quickly. Similarly, extensions of the statute of limitations will raise the regulatory exposure of a company in a government contract and will prevent it from closing the books on earlier contract work. Though the law does not involve action from an administrative agency, it is regulation in Barak Orbach’s conceptualization of a regulation as government action that can “directly influence (or ‘adjust’) conduct of individuals and firms” and which “enables, facilitates or adjusts activities, with no restrictions.”10 The law and its interpretation will directly adjust the activity of firms by determining the length of their exposure to costly prosecutions from the government for their contracting work.

This Comment proceeds in two sections. First, the Comment reviews how courts have interpreted the Wartime Suspension of Limitations Act. The Comment argues that Pfluger marked a departure from the functionalist test in Prosperi that, in an era without formal surrender treaties, will extend the “at war” section of the Act without any end point. Second, the Comment argues for a return to the functionalist test in which the courts are held responsible for determining on a factual basis when the nation is at war. This is a more difficult task for the courts but is necessary to properly construe the statute.

I. Defining “At War”

Only three cases have sought to interpret the “at war” provision of the Wartime Suspension of Limitations Act.

A. United States v. Shelton

In the first case, United States v. Shelton, the court ruled that the Gulf War never met the definition of “at war” because a formal Declaration of War was required to trigger the statute.11 The case involved a local official in Texas who was indicted in June 1992, more than five years after he was alleged to have engaged in fraud against the United States government in conjunction with his position as Deputy Director of the Texas Department of Community Affairs.12 The government responded that because of the 1991 Gulf War, the statute of limitations was halted.13 The court ruled that “the recent conflict with Iraq did not constitute a ‘war’ as that term is used in the Suspension Act” because the statute was designed for “massive and pervasive conflicts [such] as World War II,” which the Gulf War was not.14

The Shelton court took a strongly formalist approach to the definition of “at war.” No Declaration of War had been issued since World War II, and under the court’s interpretation, the U.S. would not have been “at war” during the Korean and Vietnam Wars. While various military courts had adopted more expansive definitions of war, the court held that “armed conflict to amount to a ‘war’ for military purposes admittedly should be a lower standard than to constitute a war for civilian purposes.”15 According to the court, the only trigger for the “at war” provision would be action from Congress that “formally recognized that conflict as a war. The Judicial Branch of the United States has no constitutional power to declare a war.”16 Shelton avoids complications about when the Gulf War may have ended by arguing that it never began for the purposes of the statute. In seeking to divorce the definition of “at war” from any functional interpretation of U.S. military action, the court distanced its interpretation of the conflict from Dellums v. Bush, in which Judge Harold Greene ruled that “here the forces involved are of such magnitude and significance as to present no serious claim that a war would not ensue if they became engaged in combat,” which they ultimately did.17

B. United States v. Prosperi

In United States v. Prosperi, the government used the “at war” provision to charge a contractor in Boston’s “Big Dig” with fraud that would have otherwise been time-barred.18 Even though the fraud had nothing to do with the conflicts in Afghanistan and Iraq, the court held that “it makes no difference that the fraud in this case involved a construction project unrelated to the Iraqi or Afghani conflicts.”19

In determining the scope of the “at war” provision, Prosperi rejected the formalist approach in Shelton and adopted a functionalist approach. While noting that courts should generally abstain from wading into questions “fraught with gravity,”20 the court ruled that there are “cases, however, that leave no choice to a court but to interpret statutory or contractual language that depends on the determination of the existence of a declared or undeclared state of war.”21 The court criticized the Shelton decision for missing the major conflicts that should meet the “at war” trigger: “[t]he Shelton formulation thus does not capture the Korean War or the Vietnam War, two of the largest, bloodiest, and most expensive military campaigns in our nation’s history (nor does it capture the conflicts in Iraq and in Afghanistan).”22 Prosperi rejected the requirement for a Declaration of War because “there is no compelling logic connecting a formal declaration of war with the state of being at war.”23

In moving away from a formalist definition, the Prosperi decision opened up questions concerning what level of violence would constitute war. The court conceded that “not every shot fired or every armed skirmish is of sufficient magnitude to stop the running of the statute of limitations.”24 In establishing that the conflicts in Iraq and Afghanistan constituted the United States being “at war,” the decision engaged in a thorough examination of the resources used and American lives lost.25

Prosperi returned to a more formalist approach to define the “termination of hostilities.”26 Instead of applying its earlier empirical examination of the existence of hostilities, the court argued that the “end of more recent conflicts have been signaled by Presidential pronouncement or by the diplomatic or de jure recognition of a former belligerent or a newly constituted government.”27

The court ruled that the U.S. recognition of the Afghan government on December 22, 2001 constituted the end of hostilities in Afghanistan and that President George W. Bush’s “Mission Accomplished” speech aboard the USS Abraham Lincoln constituted the end of hostilities in Iraq.28 While Prosperi had criticized the formalism of Shelton in determining whether a war exists, it returned to this formalism in making its assessment of when war ends.

C. United States v. Pfluger

In United States v. Pfluger, the government used the “at war” provision of the statute to extend the statute of limitations in a case involving fraud by a U.S. soldier in Iraq. David Pfluger was a lieutenant colonel in Iraq accused of taking kickbacks in connection with contracts he arranged for fuel for his Forward Operating Base.29 Pfluger challenged his conviction because the statute of limitations had run, and the government responded that because the nation was “at war” the statute of limitations was suspended.30 The court rejected Prosperi’s functionalist approach to the end of hostilities and ruled that the Act “mandates formal requirements for the termination clause to be met.”31

In the decision, the court tried to narrow potential applications of the case to future litigation. Noting that the precedent could lead to “absurd” applications, the court said that it was only claiming that the standard worked in this case: the court “need only determine that it is not an absurd result that the hostilities in the armed conflict authorized by either the AUMF or the AUMF-I were ongoing in May 2004,” when the conduct was carried out.32 To justify that position, the court relied on the standards of active combat developed in Hamdi v. Rumsfeld.33 However, the court went beyond this to state that because the President had not engaged in the “formal requirements for terminating the WSLA’s suspension of limitations” up through “this date,” the WSLA would still be in effect when the decision was made in June 2012.34

With the Supreme Court denying certiorari in the appeal of Pfluger, the expansive standard from that case is the most recent law on the subject.

II. Towards a Functionalist Interpretation of the WSLA

Interpretations of the WSLA have been marred first by under-inclusive formalism and now by over-inclusive formalism. In Shelton, the court’s formalism led it to rule that only a Declaration of War could trigger the start of the “at war” provision. In Pfluger, the court ruled that only a formal surrender could trigger the end of the “at war” provision. As the law stands in Pfluger, the court has already admitted that it is open to “absurd” interpretations because the AUMF will never expire. In an era in which the United States is engaged in what has been called a “forever war” that is difficult to end, the WSLA could mean an indefinite suspension of the statute of limitations for fraud against the government.35

Instead of the formalism of Shelton and Pfluger, other jurisdictions should adopt the functionalist test developed in Prosperi. While the functionalist test requires a more in-depth engagement with the facts of the conflict, the alternative is to avoid the question and create a definition that is either far too narrow or far too broad. However, courts should seek to depart from the definition of the “termination of hostilities” offered in Prosperi. There the court argued that the recognition of the government in Afghanistan and a speech by the President regarding the war in Iraq could suffice to establish the termination of hostilities. While there is a need to establish firm dates for the termination of hostilities, such a formal test represents a departure from the functionalism that Prosperi offered in determining whether there are hostilities.

The underlying intent of the Senate was to prevent fraud against the government as the nation hastily assembled large-scale military procurement programs. The Senate Report accompanying the 1942 enactment of the law stated that in “normal times the present 3-year statute of limitations may afford the Department of Justice sufficient time to investigate, discover, and gather evidence to prosecute frauds against the Government. The United States, however, is engaged in a gigantic war program. Huge sums of money are being expended for materials and equipment in order to carry on the war successfully.”36 With that in mind, “it is recognized that in the varied dealings opportunities will no doubt be presented for unscrupulous persons to defraud the Government or some agency.”37 The Senate was seeking to curtail fraud against the government in the abnormal circumstance in which the country was engaged in large-scale military operations. Increasingly, the “normal times” are wartime, not peacetime, which has allowed for an expansion of the law beyond the limited intent of the Act’s drafters.38

The ruling in Pfluger has stretched the law to a limitless standard wherein the statute of limitations on these crimes will never run. While courts should not return to the standard in Shelton of never recognizing a war, courts should use the functionalism from Prosperi, which is consistent with the Senate’s objectives, in determining whether a conflict is a war. In determining the termination of hostilities, the court will need to engage in a fact-specific analysis to determine whether or not the “at war” clause was in effect on the date in question. Though this represents a far greater task than courts have yet undertaken, it is the only way to determine properly when hostilities have ceased.

In doing so, courts should seek to narrowly tailor their decisions on the dates of termination under the WSLA. When the question cannot be avoided, the court should focus on the individual date in question and determine whether the nation was at war on that date. From there, the court can apply the fact-based functionalist test from Prosperi. This approach may be more intensive for the courts, but it is far better than the alternatives: the baseless date for the end of hostilities in Prosperi and the indefinite standard offered in Pfluger.

As a regulatory matter, the Executive could play a greater role in ensuring transparency and predictability in this process. The Department of Defense should issue guidance informing companies and courts when it considers the United States to be “at war” for the purposes of the WSLA. This would remove the onus from the courts to interpret activity on far-flung battlefields and would afford firms a clearer understanding of the statute of limitations on their government contracting work. The confusion in this area and the divergent holdings across jurisdictions make this a ripe area for greater regulatory oversight from the Executive.

  1. See generally Mary Dudziak, War Time: An Idea, Its History, Its Consequences (2012).
  2. Wartime Suspension of Limitations Act, 18 U.S.C. § 3287 (2012) (“When the United States is at war or Congress has enacted a specific authorization for the use of the Armed Forces, as described in section 5(b) of the War Powers Resolution . . . , the running of any statute of limitations applicable to any offense (1) involving fraud or attempted fraud against the United States or any agency thereof in any manner, whether by conspiracy or not, or (2) committed in connection with the acquisition, care, handling, custody, control or disposition of any real property or personal property of the United States, or (3) . . . , shall be suspended until three years after the termination of hostilities as proclaimed by the President or by a concurrent resolution of Congress.”).
  3. See 18 U.S.C. § 3282 (2006) (providing for a five-year statute of limitations for non-capital offenses except as otherwise provided).
  4. 685 F.3d 481 (5th Cir. 2012).
  5. Lance Duroni, Justices Won’t Review Ex-Army Officer’s Bribery Indictment, Law360 (Feb. 19, 2013, 8:38 PM).
  6. United States v. Pfluger, 685 F.3d 481, 482 (5th Cir. 2012).
  7. United States v. Prosperi, 573 F. Supp. 2d 436, 442 (D. Mass. 2008) (“[I]t makes no difference that the fraud in this case involved a construction project unrelated to the Iraqi or Afghani conflicts.”).
  8. Prosperi, 573 F. Supp. 2d at 442.
  9. Eric Katz, Most Top Contractors Increased Business with Federal Government in 2012, Government Executive (May 8, 2013).
  10. Barak Orbach, What Is Regulation?, 30 Yale J. Reg. Online 1, 4 (2012).
  11. United States v. Shelton, 816 F. Supp. 1132 (W.D. Tex. 1993).
  12. Shelton, 816 F. Supp. at 1134.
  13. Shelton, 816 F. Supp. at 1134.
  14. Shelton, 816 F. Supp. at 1132.
  15. Shelton, 816 F. Supp. at 1135.
  16. Shelton, 816 F. Supp. at 1135.
  17. Dellums v. Bush, 752 F. Supp. 1141, 1145 (D.D.C. 1990).
  18. Prosperi, 573 F. Supp. 2d at 436.
  19. Prosperi, 573 F. Supp. 2d at 442.
  20. Prosperi, 573 F. Supp. 2d at 442 (quoting Ludecke v. Watkins, 335 U.S. 160, 169 (1948)).
  21. Prosperi, 573 F. Supp. 2d at 442.
  22. Prosperi, 573 F. Supp. 2d at 445.
  23. Prosperi, 573 F. Supp. 2d at 446.
  24. Prosperi, 573 F. Supp. 2d at 449.
  25. Prosperi, 573 F. Supp. 2d at 452 & n.28.
  26. Prosperi, 573 F. Supp. 2d at 454.
  27. Prosperi, 573 F. Supp. 2d at 454.
  28. Prosperi, 573 F. Supp. 2d at 455.
  29. United States v. Pfluger, 685 F.3d 481, 481 (5th Cir. 2012).
  30. Pfluger, 685 F.3d at 481.
  31. Pfluger, 685 F.3d at 485.
  32. Pfluger, 685 F.3d at 485.
  33. Pfluger, 685 F.3d at 485 (citing Hamdi v. Rumsfeld, 542 U.S. 507, 521 (2004)).
  34. Pfluger, 685 F.3d at 485 (citing Hamdi v. Rumsfeld, 542 U.S. 507, 521 (2004)).
  35. Harold Koh, How To End the Forever War, Yale Global Online (May 14, 2013).
  36. S. Rep. No. 1544, 77th Cong., 2d Sess., at 1.
  37. Id. at 2.
  38. Dudziak, supra note 1, at 8.

Essay Responding to Brian H. Potts, “The President’s Climate Plan for Power Plants Won’t Significantly Lower Emissions”

Critics of the Obama Administration’s recently announced efforts to control climate pollution seek to discredit the idea of existing power plant greenhouse gas emissions limits based on legal arguments that are both shortsighted and unfounded.1 2 These arguments have most recently appeared in an essay posted on the Yale Journal on Regulation Online by attorney Brian Potts,3 but some of the same ideas were put forward late last year by C. Boyden Gray, at a Resources for the Future Forum.4 Their basic premise is that the EPA doesn’t have the authority to regulate existing power plant greenhouse gas emissions at all. Mr. Potts also argues that the Agency is constrained by actions it already has taken under another program.

This might seem like just an interesting legal question, but it has much larger implications. In the absence of Congressional action, using available regulatory authorities in the very near term is the only present option for the U.S. to make the vigorous efforts needed to bring our climate emissions under control.5 Existing fossil-fueled power plants are the largest U.S. industrial source of carbon dioxide.6 Once emitted from a smokestack, some portion of carbon dioxide emissions persists in the atmosphere for over a century, causing enduring climate damage, so the need for near-term curtailment of such emissions is extraordinarily compelling.7 Attempts to cut off efforts to reduce carbon dioxide emissions from the power sector before they’ve even begun therefore raise significant concerns. Without quick action on this sector, which accounts for some 40% of the carbon dioxide problem, we simply cannot get a handle on U.S. climate pollution. Fortunately, contrary to the arguments put forward by Attorneys Potts and Gray, the EPA does have authority to regulate power sector greenhouse gas emissions using Clean Air Act section 111(d).

The EPA’s authority under 111(d) includes authority to set standards for existing power plant climate pollutants

At the heart of the matter is language contained in section 111(d) of the Clean Air Act, describing when the EPA must direct the establishment of “standards of performance” – limits on air pollution emissions – for existing sources in listed industries.8 This section of the Act was adjusted in 1990, reflecting significant substantive changes made by Congress to an entirely different section of the Act that deals with air toxics like mercury, arsenic and chromium. As described below, Congress, in making housekeeping amendments to section 111(d), was primarily preserving a framework that avoided double regulation of toxic air pollutants emitted by existing sources.

The relevant part of Clean Air Act section 111(d) as it appears in the U.S. Code requires the EPA to “prescribe regulations” under which states submit plans including “standards of performance” for “any existing source” and “for any air pollutant (i) [which is not regulated under the Act’s national ambient air quality standards rules] or emitted from a source category which is regulated under [the section governing air toxics].”9 Attorneys Potts and Gray read this provision very narrowly as barring the EPA’s ability to limit greenhouse gas emissions from existing fossil-fuel fired power plants, because coal-fired power plants (since 2000) are “a source category which is regulated under” the air toxics provisions.10 They assert that because existing power plants’ air toxics are regulated, their greenhouse gas emissions cannot be regulated as well. This argument, however, does not make sense from either a legal or a policy perspective.

In 1990, Congress completely overhauled the air toxics provisions of the statute, moving from the previous system (under which the EPA was supposed to list and regulate toxic air pollutants individually), to one in which Congress itself listed 188 toxic pollutants in section 112(b) of the statute and required the EPA to list and then regulate specified industries.11 Under the new system, for each listed industry the EPA issues new and existing source standards for each emitted 112(b) toxic pollutant.12

Additionally, prior to 1990, section 111(d) authorized existing source performance standards for “any air pollutant (i) for which air quality criteria have not been issued or which is not included on a list published under section … [108(a) of the Clean Air Act] or … [112 (b)(1)(A) of the Clean Air Act].”13 At that time, section 112(b)(1)(A) required the Administrator to list the hazardous air pollutants for which regulation would be promulgated.14 After 1990, section 112(b)(1) contained the list of air toxics that Congress declared must be regulated when emitted by listed industries.15

It is important to understand that as part of the 1990 overhaul of the air toxics provisions, conforming changes to other parts of the statute were required. Attempting to retain the prior prohibition on double regulation of industrial air toxics, and with the EPA now regulating industry-by-industry, both the House and the Senate made non-substantive cleanup changes accompanying the 1990 overhaul.

The House amendment to section 111(d) shows up in section 108, “Miscellaneous Provisions,”16 while the Senate amendment appears in later section 110, “Conforming Amendments.”17 While the codified statute includes only the language of the first-in-time House amendment, both the House and the (later-in-time) Senate amendments appear in the Statutes at Large, which provide controlling “legal evidence of laws.”18 In the Statutes at Large, the conforming amendments are reflected in parentheses: section 111(d) applies to “any air pollutant…which is not included on a list published under section 7408(a) (or emitted from a source category which is regulated under section 112) [House amendment] (or 112(b) [Senate amendment]).”19 While the codified version contains only the House amendment, an additional longstanding rule of statutory construction in such circumstances is, “the last provision in point of arrangement must control” – that is, the Senate amendment controls here.20

Taken together, this means that an accurate reading of the statute is that section 111(d) is to be used to regulate any air pollutant that is not included on a list published under section 7408(a) or 112(b). In addition to being doctrinally correct, this result also simply makes good policy sense, as it avoids double regulation: section 111(d)’s prohibition on using existing source standards for pollutants for which National Ambient Air Quality Standards have been issued avoids double regulation because existing sources of those pollutants are regulated already under state implementation plans.21 The prohibition on using existing source standards for air toxics avoids double regulation because existing sources of those pollutants are regulated under section 112.22 Attorneys Potts and Gray overlook this. Regardless, their position—that these housekeeping changes could have been intended to effectuate a sweeping change in the meaning of section 111(d)—simply does not hold water.

The EPA is unconstrained by prior Best Available Control Technology (BACT) determinations in directing issuance of existing source performance standards

Mr. Potts additionally takes a position in the alternative, namely that any authority the EPA has under section 111(d) to direct existing source standards for greenhouse gases is constrained by its prior efforts under the greenhouse gas permitting authority for new sources. He erroneously asserts that preconstruction permits issued under the Prevention of Significant Deterioration (“PSD”) program found in section 165 of the Act limit the EPA’s authority to prescribe existing source standards under section 111(d) of the New Source Performance Standards program.23 The PSD provisions do require each new (or modified) source to achieve emissions controls that are no less stringent than those provided by the performance standard applicable to all new (or as relevant, existing) sources.24 But nothing in the statute requires the converse, i.e., that limits in any new source PSD permits bind the EPA in setting later existing source performance standards for that industry.

The statute’s language requires that a standard of performance, whether for a new or existing source, be one reflecting the “best system of emissions reduction which (taking into account the costs and non-air quality health and environmental impact and energy requirements) the Administrator determines has been adequately demonstrated.”25 The D.C. Circuit has interpreted the phrase “adequately demonstrated” in the new source performance standard setting context as technology forcing: “look[ing] toward what may fairly be projected for the regulated future,” not as meaning that all sources must be able to meet the requirement.26 While there is much less experience with existing source standards, the statutory definition of “standard of performance” is the same for both new and existing sources,27 strongly suggesting that advancing control technologies must be a consideration to be weighed in existing source standard-setting as well. And while the EPA is not prohibited from evaluating past greenhouse gas permits for some evidence of the achievability of emission reduction levels, just as clearly, nothing in the statute or the case law in any way restricts the EPA to prior permit levels when directing the appropriate level of existing source standards. In fact, the opposite is true. Given that the same definition of “standard of performance” applies in both new and existing source standard setting, it stands to reason that the Agency also must direct that existing source standards be based on the best system of emissions reduction, reasonably expected to serve the interests of pollution control at existing sources, without becoming exorbitantly costly.28

This is not just a battle of legal niceties. We must acknowledge that the “interests of pollution control” in the context of power plant carbon dioxide emissions includes the interest in taking strong near term steps to avoid a climate catastrophe.29 Reducing carbon dioxide emissions from existing sources in the nation’s power sector is absolutely essential, as the Obama Administration’s recently unveiled Climate Plan recognizes. While this industry surely would like to be immune from climate change regulation, with all due respect to Mr. Potts, I believe we are fortunate that his view that the EPA’s authority in this area is limited or constrained is not grounded in the law.

  1. The author would like to thank Jordan Asch for his assistance in the production of this Essay.
  2. President Obama’s Climate Action Plan is available online. Executive Office of the President, The President’s Climate Action Plan (June 2013).
  3. Brian H. Potts, The President’s Climate Plan for Power Plants Won’t Significantly Lower Emissions, 31 Yale J. on Reg. Online (Aug. 22, 2013).
  4. Climate Change Fight Over Utility GHG Rule Shifts Back To EPA After Court Rejects Lawsuit, Inside EPA Environmental Newsstand, Weekly Analysis (Dec. 14, 2012). Mr. Gray’s argument has also recently surfaced at a forum held in Washington on the substance of EPA’s forthcoming proposal. See Kyle Danish, Section 111(d) and Regulation of CO2 Emissions from Existing Power Plants, at slide 6 (presentation made at the Bipartisan Policy Center meeting on GHG Regulation of Existing Power Plants Under the Clean Air Act: What Is It and How Will It Work?) (Sept. 25, 2013).
  5. See Justin Gillis, By 2047, Coldest Years May Be Warmer Than Hottest in Past, Scientists Say, N.Y. Times, Oct. 10, 2013 at A9 (noting that a recent study by scientists at the University of Hawaii shows that “a vigorous global effort” is needed now in order to delay the onset of unprecedented temperatures and allow for societal adaptation).
  6. Fossil-fuel fired electricity generation is responsible for approximately 40 percent of U.S. man-made carbon dioxide emissions. U.S. EPA, Inventory of U.S. Greenhouse Gas Emissions and Sinks, 1990-2011 (EPA 430-R-13-001, Apr. 12, 2013), Executive Summary at ES-54, Table ES-2 (“Recent Trends in U.S. Greenhouse Gas Emissions and Sinks”).
  7. Endangerment and Cause or Contribute Findings for Greenhouse Gases Under Section 202(a) of the Clean Air Act: Final Rule, 74 Fed. Reg. 66496, 66517 n.18 (Dec. 15, 2009) (discussing the climate forcing lifetimes of various greenhouse gases, including carbon dioxide) (hereinafter “Endangerment Finding”). See also, Intergovernmental Panel on Climate Change, Climate Change 2013: The Physical Science Basis: Summary for Policymakers, Working Group I Contribution to the IPCC Fifth Assessment Report (Sept. 27, 2013) at SPM-20 (summarizing the finding that “[d]epending on the [analyzed] scenario, about 15 to 40% of emitted CO2 will remain in the atmosphere longer than 1,000 years.”).
  8. 42 U.S.C. § 7411(d).
  9. Id. § 7411(d)(1).
  10. Id. § 7411(d)(1)(A)(i); see also Regulatory Finding on the Emissions of Hazardous Air Pollutants From Electric Utility Steam Generating Units, 65 Fed. Reg. 79825, 79830 (Dec. 20, 2000) (discussing this regulatory change in section III, “What is EPA’s Regulatory Finding?”); National Emission Standards for Hazardous Air Pollutants From Coal- and Oil-Fired Electric Utility Steam Generating Units and Standards of Performance for Fossil-Fuel-Fired Electric Utility, Industrial-Commercial-Institutional, and Small Industrial-Commercial-Institutional Steam Generating Units, 77 Fed. Reg. 9304, 9308 (Feb. 16, 2012) (discussing the regulatory scheme in subsection C, “What is the relationship between this final rule and other combustion rules?”).
  11. See Sierra Club v. EPA, 353 F.3d 976, 979-80 (D.C. Cir. 2004) (describing the history of the air toxics provisions from 1970-1990).
  12. See generally 42 U.S.C. §§ 7412(c), (d) (describing the process for listing industries and regulating new and existing sources of 112(b)-listed air toxics).
  13. 42 U.S.C. § 7411(d)(1) (1988) (emphasis added) (current version amended by Pub. L. No. 101-549 and at 42 U.S.C. §7411(d)(1)).
  14. 42 U.S.C. § 7412(b)(1)(A) (1988) (current version amended by Pub. L. No. 101-549 and at 42 U.S.C. §7412(b)(1)(A)). Section 108 of the Clean Air Act was and continues to be the section describing the Administrator’s duty to list air pollutants for which National Ambient Air Quality Standards (NAAQS) will be issued. 42 U.S.C. § 7408. Carbon dioxide is not a NAAQS pollutant, and it is not a listed hazardous air pollutant. See Prevention of Significant Deterioration and Title V Greenhouse Gas Tailoring Rule; Final Rule, 75 Fed. Reg. 31514, 31520 (noting that EPA has not proposed or finalized a decision setting a NAAQS for any greenhouse gas); and 42 U.S.C. § 7412(b)(1) (presenting the list of hazardous air pollutants).
  15. 42 U.S.C. § 7412(b)(1) (2012).
  16. Clean Air Act Amendments, Pub. L. No. 101-549 § 108, 104 Stat. 2399, 2467 (1990).
  17. Id. § 110, 104 Stat. 2574.
  18. United States Nat’l Bank of Oregon v. Independent Ins. Agents of Am., 508 U.S. 439, 448 (1993) (citing 1 U.S.C. § 112).
  19. Clean Air Act Amendments, Pub. L. No. 101-549 §§ 108, 302, 104 Stat. 2399, 2467, 2574 (1990).
  20. See, e.g., Lodge 1858, Am. Fed. of Gov’t Employees v. Webb, 580 F.2d 496, 510 & n.31 (D.C. Cir. 1978) (citing over 80 cases so holding).
  21. 42 U.S.C § 7410(a)(2)(C).
  22. 42 U.S.C. §§ 7412(d)(2)-7412(d)(3).
  23. The Clean Air Act’s PSD preconstruction permitting provisions and relevant definitions are found at 42 U.S.C. §§ 7470-7479.
  24. See 42 U.S.C. § 7479(3) (2012) (“In no event shall application of ‘best available control technology’ result in emissions of any pollutants which will exceed the emissions allowed by any applicable standard established pursuant to section [111] of this title.”).
  25. 42 U.S.C. §7411(a)(1) (2012).
  26. Lignite Energy Council v. EPA, 198 F.3d 930, 934 (D.C. Cir. 1999) (quoting Portland Cement Ass’n v. Ruckelshaus, 486 F.2d 375, 391 (D.C. Cir. 1973)).
  27. See 42 U.S.C. § 7411(a)(1) (2012) (defining “standard of performance”); 42 U.S.C. § 7411(d)(1) (referring to the establishment of “standards of performance” for existing sources).
  28. For reference to exorbitance as the upper bound on the cost factor under the definition of “standard of performance” in 42 U.S.C. § 7411(a)(1), see Lignite Energy Council, 198 F.3d at 933 (noting that EPA’s choice in balancing the statutory factors “will be sustained unless the environmental or economic costs of using the technology are exorbitant” and citing National Asphalt Pavement Ass’n v. Train, 539 F.2d 775, 786 (D.C. Cir. 1976)).
  29. For reference to the “interests of pollution control” as relevant to the selection of the best system of emission reduction, see Essex Chemical Corp. v. Ruckelshaus, 486 F.2d 427, 433 (D.C. Cir. 1973), cert. denied, 416 U.S. 969 (1974).

Antitrust Enforcement in Private Equity: Target, Bidder, and Club Sizes Should Matter

This Comment argues that plaintiffs have painted “club deals” with a broad brush as anticompetitive, whereas applying the facts alleged by the plaintiffs themselves to the antitrust regulators’ measurement of market concentration—the Herfindahl-Hirschman Index—implies a more nuanced conclusion: consortium bidding can be pro-competitive for large targets, small bidders, and small clubs.

“We will have to trust [K]inder and the honor among thieves.”

– Goldman Sachs internal e-mail expressing hope that Rich Kinder, CEO of buyout target Kinder Morgan, would honor an exclusivity agreement with Goldman, despite the rejection of the agreement by Kinder Morgan’s special committee1

“It is hard to imagine Henry R. Kravis, co-founder of Kohlberg Kravis, calling up David M. Rubenstein, co-founder of Carlyle, to scheme about how to keep a lid on the bidding for a particular company.”

– Andrew Ross Sorkin2


In October 2006, The Wall Street Journal reported that the Antitrust Division of the Department of Justice was investigating whether the largest private equity (PE) firms had colluded to keep buyout prices down.3 A class action against 13 PE firms, also known as “financial sponsors,” followed in December 2007.4 Although the DOJ has backed off,5 the private plaintiffs are pressing forward. The suit alleges that the defendants’ bid-rigging and market-allocating in 17 leveraged buyouts (LBOs) from 2003 to 2007 deprived shareholders in the target companies of competitive prices, in violation of Section 1 of the Sherman Act.6 The district court has winnowed the claims7 and defendants,8 but most of the largest names in private equity—including Blackstone, Carlyle, Goldman Sachs, KKR, and TPG—remain in court.

This Comment proceeds in three parts to analyze whether the facts alleged in the plaintiffs’ Fifth Amended Complaint (the “Complaint”) state a claim for relief under the quantitative framework set forth by the DOJ and Federal Trade Commission (FTC). While the Complaint is careless with respect to the underlying financial concepts,9 it nevertheless states a claim that should survive a motion to dismiss under the federal pleading standards established by Twombly10 and Iqbal.11

Part I of this Comment shows how the Horizontal Merger Guidelines (HMGs), which focus on behavior by sellers rather than buyers, nevertheless apply to this suit. Measuring the Herfindahl-Hirschman Index (HHI) of market concentration as instructed by the HMGs, Part II models the competitive effects of “club” formation in each of the deals challenged by the lawsuit, concluding that consortium bidding can reduce market concentration for large targets, small bidders, and small clubs. Part III concludes with suggestions for further research.

I. The Horizontal Merger Guidelines Apply Literally and by Analogy

The DOJ and FTC, the federal antitrust regulators, have jointly published guidelines apprising regulated parties of how the agencies model the competitive effects of horizontal mergers.12 These guidelines also inform the agencies’ analysis of non-merger horizontal collusion, such as that alleged in the Complaint.13 The HMGs primarily address competition among sellers, not buyers.14 However, the HMGs state that market power of buyers—monopsony—“has adverse effects comparable to enhancement of market power by sellers.”15 Thus, the agencies use “an analogous framework” to analyze competitive effects of buying power.16 Accordingly, the DOJ is tasked with analyzing whether club deals resulted in buyer market power that reduced head-to-head competition for target companies.17 Taking the facts alleged in the Complaint as true, they sometimes did. TPG Founder David Bonderman admitted that “[consortia] . . . limit[] bidding” so that “[there’s] less competition for the biggest deals.”18

In particular, the HMGs instruct the DOJ to interview sellers19 (here, the target companies) about the impact of the anticompetitive behavior. Here, this may be unnecessary, as the DOJ already has comparable, but even more probative evidence in several of the deals: contemporaneous instructions by managers, directors and advisers of the target companies not to bid in groups. For example, in the auction for Philips / NXP, the target directed Bain Capital, KKR and Silver Lake not to combine into a single consortium, but those bidders allegedly defied the instructions in a secret agreement.20 Contemporaneous records such as these accomplish the same investigatory purpose as ex post interviews.

Further, the HMGs emphasize the importance of the definition of the relevant market. “Market definition focuses solely on demand substitution factors, i.e., on customers’ ability and willingness to substitute away from one product to another in response to a price increase.”21 Antitrust defendants routinely argue that customers can switch to imperfect substitutes—that is, the defendants control only a small fraction of the relevant market. Thus, the defendant PE funds in Dahl might argue that other types of investors—such as mutual, index, and hedge funds—are part of their market. Because those non-control investors22 lack the two key tools of PE buyers—operational control and capital structure replacement23—they cannot and do not pay comparable prices. In particular, the difference between the price paid for stock by non-control investors and that paid by PE buyers on average exceeds 5%, the threshold set by the agencies to trigger competition concerns.24

Thus, under the qualitative guidance set forth by the antitrust agencies and the Supreme Court, the plaintiffs in Dahl have stated a claim for relief sufficient to survive a motion to dismiss. The next part maps the quantitative guidance set forth by the HMGs onto each LBO attacked by the Complaint, revealing that, according to the agencies’ own definition of market concentration, club bidding increases competitiveness of large LBOs when individual bidders are too small to bid alone.

II. HHI Analysis in the Club Deal Context Turns on Target Size, Bidder Size and Club Size

A. Model Specification

The heart of the HMGs is an instruction to the agencies to quantify the impact of mergers via the Herfindahl-Hirschman Index (HHI). A tool created by and for the use of the agencies, the HHI also applies in private antitrust suits.25 It is a measure of market concentration calculated by summing the squares of the market shares (expressed in percentage points) of all firms in the market.26 The agencies define un-concentrated markets as having an HHI below 1,500, moderately concentrated markets as having an HHI between 1,500 and 2,500, and highly concentrated markets as having an HHI above 2,500.27 Mergers that increase concentration concern the antitrust agencies. Both the measured size of the market and the size of its biggest players have powerful effects on the HHI.
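For readers who want the arithmetic made explicit, the index and the agencies’ concentration bands can be sketched in a few lines of Python. This is an illustration only; the three-firm market is hypothetical and appears nowhere in the Complaint:

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of squared market shares,
    with shares expressed in percentage points (summing to ~100)."""
    return sum(s ** 2 for s in shares)

def concentration_band(index):
    """Classify a market under the 2010 HMG thresholds."""
    if index < 1500:
        return "un-concentrated"
    if index <= 2500:
        return "moderately concentrated"
    return "highly concentrated"

# Hypothetical market of three bidders holding 50%, 30%, and 20%:
market = [50, 30, 20]
print(hhi(market))                      # 50^2 + 30^2 + 20^2 = 3800
print(concentration_band(hhi(market)))  # highly concentrated
```

Note how heavily the index weights the largest player: the 50% firm alone contributes 2,500 of the 3,800 points.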

I use the HHI rubric to analyze clubbing by modeling club formation as the merger—for purposes of whichever target the particular club is eying—of the club’s members. This is a natural application of HHI both because (like merged funds) cooperating funds cease competitively bidding with one another and because (like merged funds) cooperating funds can pool their capital. Allowing clubs has two opposing effects on HHI, and the sign of the net effect is ambiguous. First and more obviously, allowing clubs increases the size of the biggest players, thereby increasing HHI. Second and less obviously—and indeed ignored by the plaintiffs in Dahl—allowing clubs may bring new players into the market, increasing its size and decreasing HHI. “For example, in a corporate auction involving numerous well-heeled bidders, less wealthy bidders cannot compete. By joining forces, and thus combining resources, poorer contestants can gain access to the contest, thus increasing competition.”28 The following analysis estimates which effect dominates and provides a framework for modelers using different financial assumptions—for instance, different assumptions about leverage ratio or fund diversification requirements—to do the same.
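A hypothetical auction illustrates how the two effects can net out. The dollar figures below are invented for illustration and are not drawn from the Complaint: suppose funds A ($6bn maximum check) and B ($4bn) can each afford the target’s equity alone, and permitting clubs lets smaller funds C and D pool a $3bn check:

```python
def hhi(checks):
    """HHI from absolute bidding capacities: convert each bidder's
    maximum equity check into a percentage share of the total market,
    then sum the squared shares."""
    total = sum(checks)
    return sum((100 * c / total) ** 2 for c in checks)

# Pre-club market: only funds A ($6bn) and B ($4bn) can bid alone.
pre = hhi([6, 4])        # shares 60/40 -> 3600 + 1600 = 5200

# Post-club market: C and D ($1.5bn each) pool into a $3bn club bidder.
post = hhi([6, 4, 3])    # shares ~46/31/23 -> roughly 3609

print(round(pre), round(post))  # clubbing *reduced* concentration here
```

In this stylized case the entry effect dominates: admitting the pooled bidder enlarges the market enough to lower the index by roughly 1,600 points, even though the club itself is a sizable player.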

The model estimates the PE fundraising by every fund named in the Complaint for the five-year period ended on December 31, 200729 (the last year of the “Conspiratorial Era” alleged by the Complaint30). It models 100% of those funds as available for the buyouts attacked in the Complaint.31 It makes 15% of the equity capital32 of a given sponsor available to any given deal.33 From these parameters, it computes the maximum equity check that any given fund could write.

To calculate the required total equity for each target, the model takes the deal size34 from the Complaint and applies 3:1 debt-to-equity leverage.35
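Combined, these two assumptions reduce to simple arithmetic: a target’s required equity is one quarter of the deal size (3:1 debt-to-equity), and a fund can bid alone only if 15% of its capital covers that amount. A minimal sketch, using an illustrative $20 billion deal that is not one of the challenged LBOs:

```python
LEVERAGE_DEBT_TO_EQUITY = 3   # model assumption: 3:1 debt to equity
MAX_CHECK_FRACTION = 0.15     # model assumption: 15% of a fund per deal

def required_equity(deal_size):
    """Equity needed for a deal financed at 3:1 debt-to-equity."""
    return deal_size / (1 + LEVERAGE_DEBT_TO_EQUITY)

def max_equity_check(fund_size):
    """Largest equity check a single fund can write under the model."""
    return MAX_CHECK_FRACTION * fund_size

def can_bid_alone(fund_size, deal_size):
    return max_equity_check(fund_size) >= required_equity(deal_size)

# Illustrative figures in billions: a $20bn LBO needs $5bn of equity,
# so only funds of roughly $33.3bn or more can bid without a club.
print(required_equity(20))    # 5.0
print(can_bid_alone(30, 20))  # False
print(can_bid_alone(40, 20))  # True
```

The 15% and 3:1 parameters are the model’s across-the-board assumptions; Part III notes how deal- and sponsor-specific estimates could refine them.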

For each LBO, the model calculates the ‘pre-club’ HHI—i.e., the HHI of the market for that target36 under a regulatory regime where clubbing does not exist. It defines the market as every fund that (1) was either ultimately part of the purchasing consortium or was identified in the Complaint as “participating” or “interested” in the auction process, and (2) could write the entire equity check itself. The market size is the sum of the maximum equity checks of all funds satisfying both criteria.37

The model then calculates the ‘post-club’ HHI—i.e., the HHI of the market for that target under a regulatory regime where clubbing is permitted. It defines the market to include the winning club and every losing fund that (1) is identified in the Complaint as “participating” or “interested” in the auction process, and (2) could write the entire equity check itself. Intuitively, relative to the pre-club analysis, by relaxing the constraint for winning bidders that the bidder be able to afford the entire equity check on its own, the model captures the expansion of the market facilitated by resource-pooling.38 The market size is the sum of the maximum equity check of the club and that of all losing funds in the market.
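Under the model’s assumptions, the pre- and post-club market constructions can be sketched as a single function; the fund capacities below are hypothetical, not taken from the Complaint:

```python
def market_checks(funds, required_equity, club=None):
    """Build the list of maximum equity checks that defines the market.
    funds: {name: max_equity_check} for every interested bidder.
    club:  names of winning-club members (post-club run), or None (pre-club).
    Pre-club, a fund enters the market only if it can write the entire
    equity check itself; post-club, the winning club enters as one
    pooled bidder, while losing funds must still qualify alone."""
    checks = []
    club = set(club or ())
    if club:
        checks.append(sum(funds[n] for n in club))  # pooled club check
    for name, check in funds.items():
        if name not in club and check >= required_equity:
            checks.append(check)                    # lone-bidder entrant
    return checks

# Hypothetical auction requiring a $5bn equity check (billions):
funds = {"A": 6.0, "B": 4.0, "C": 1.5, "D": 1.5}
pre = market_checks(funds, 5.0)                    # only A qualifies
post = market_checks(funds, 5.0, club=("C", "D"))  # club plus A
```

Here relaxing the lone-bidder constraint turns a one-bidder pre-club market into a two-bidder post-club market, which is exactly the resource-pooling expansion the text describes.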

Finally, the model checks whether the HHIs trip any of the three red flags raised by the HMGs. Those are, in order of increasing concern: (1) an HHI increase over 100 resulting in moderate concentration (“100+” in Table A); (2) an HHI increase between 100 and 200 resulting in high concentration (“100-200” in Table A); and (3) an HHI increase over 200 resulting in high concentration (“200+” in Table A).39
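The red-flag screen itself can be expressed as a short classification function; the sample HHI pairs below are hypothetical, not taken from Table A:

```python
def red_flag(pre_hhi, post_hhi):
    """Classify an HHI change under the three HMG red flags:
    '100+'    : increase over 100 resulting in moderate concentration
    '100-200' : increase of 100-200 resulting in high concentration
    '200+'    : increase over 200 resulting in high concentration
    Returns None when no flag is tripped."""
    delta = post_hhi - pre_hhi
    if post_hhi > 2500:               # highly concentrated result
        if delta > 200:
            return "200+"
        if delta > 100:
            return "100-200"
    elif post_hhi > 1500 and delta > 100:  # moderately concentrated result
        return "100+"
    return None

print(red_flag(2400, 2700))   # 200+  (increase of 300 into a high-HHI market)
print(red_flag(2450, 2600))   # 100-200
print(red_flag(1450, 1600))   # 100+
print(red_flag(5200, 3609))   # None: clubbing reduced concentration
```

A concentration decrease can never trip a flag, which is why deals with no pre-club market, or with pro-competitive club entry, fall out of the screen entirely.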

B. Results and Analysis

Table A: HHI Effects of Club Formation in LBOs Challenged in Dahl [Note: Table A has been omitted from the online version. Please refer to the PDF.]

Table A summarizes the results of the model. Under the HHI framework, and accepting the factual allegations in the Complaint as true, the DOJ would likely find the majority of the challenged LBOs to be concerning. Of the 21 LBOs analyzed, 3 trip the least serious red flag, 0 trip the moderately serious red flag, 15 trip the most serious red flag (sometimes by thousands of points), and only 3 trip no red flags at all. This result is intuitive because the accused clubs were often large—larger than would be needed to bring funds into the market, and the fund sizes peaked around the time of the Conspiratorial Era.40

As Table A shows, however, five of the 15 deals tripping the most serious red flag—HCA, Kinder Morgan, Clear Channel, TXU, and Alltel (the five biggest challenged deals)—had, according to the facts alleged by the plaintiffs, no pre-club market. There was no pre-club market because no sponsor could afford the equity check on its own. Thus, allowing consortium bidding for these large targets did not increase market concentration and should not be viewed as tripping any HHI red flag.41

The deals that did have pre-club markets but nevertheless triggered no red flag—AMC, Harrah’s, and Sabre—exemplify circumstances where the agencies would be well-advised to stay their hand. In AMC, the winning club represented only 10% of the un-concentrated market (2 of the 10 interested firms42). Harrah’s was a highly concentrated market, but the required check was big enough that allowing consortia brought a player into the market, thereby reducing concentration. In addition, the winning club represented just 38% of the buying power in that market (2 of the 8 interested firms43). Sabre, too, was won by a club of just 2 firms (of the 10 total that were allegedly interested44), representing 18% of the market. In fact, in two of these three deals, the post-club market was less concentrated than the pre-club market. Intuitively, when clubs are small, the net effect of allowing them may be pro-competitive—even for fairly small targets.

III. Conclusion

The model and its results show that antitrust regulators (and courts) ought to consider the size of targets, funds and clubs in analyzing whether consortium bidding is anti-competitive in any given instance. In general, targets that are large relative to the funds bidding on them justify formation of clubs—especially small clubs. One size does not fit all.

Nor does one size fit all in modeling these LBOs. Additional precision could be achieved by applying deal-specific leverage ratios based on empirical data from deal databases. The across-the-board assumption of a maximum equity check size of 15% of any given fund could similarly be improved by sponsor-specific estimates based on empirical data on the accused funds—and other funds run by the same sponsors but outside the Conspiratorial Era. At the theoretical level, one could model the plaintiffs’ and/or defendants’ potential objections—some of which are outlined above45—to the underlying mechanics of the model proposed herein.

Further research should analyze the natural experiment furnished by the inception of the DOJ investigation and cessation of the club era: did post-2007 premiums rise back to pre-2003 levels? The experiment is unfortunately clouded by the collapse of the debt bubble in 2008; the narrow window of roughly spring 2007 to summer 2008 may offer the most probative data.

Also unaddressed by this Comment is the qualitative policy question of what sort of reciprocity amounts to an agreement in restraint of trade. For example, it has been shown that a winning strategy in repeated Prisoner’s Dilemma is tit-for-tat.46 If all players follow tit-for-tat, they will always cooperate, despite never having an agreement to do so. Were the defendants in Dahl just playing tit-for-tat, or were they explicitly colluding? Twombly would suggest that a bare allegation of the former, without more, does not an antitrust claim make. The Complaint seizes upon the quid pro quos exchanged by the defendants,47 but merely offering to include a third party in a project because the third party previously involved you is not necessarily anticompetitive, even though it is a tit-for-tat. The better question is whether the quid pro quos entailed not competing—for instance, bidding low, bidding with the intent to withdraw, or not bidding at all.48

  1. Fifth Amended Complaint at 133-34 & n.399, Dahl v. Bain Capital Partners, LLC, No. 1:07-cv-12388-EFH (D. Mass. filed Oct. 10, 2012) [hereinafter Complaint].
  2. Andrew Ross Sorkin, Colluding or Not, Private Equity Firms Are Shaken, N.Y. Times (Oct. 22, 2006).
  3. Dennis K. Berman & Henny Sender, Private-Equity Firms Face Anticompetitive Probe, Wall St. J. (Oct. 10, 2006).
  4. Complaint, Davidson v. Bain Capital Partners, LLC, No. 1:07-cv-12388-EFH (D. Mass. filed Dec. 28, 2007).
  5. White & Case, A Recent Court Decision Revives Concern That Some Club Deals Could Violate the Antitrust Laws 1 (2009).
  6. 15 U.S.C. § 1 (2006) (“Every contract, combination in the form of trust or otherwise, or conspiracy, in restraint of trade or commerce among the several States, or with foreign nations, is declared to be illegal.”); see Complaint, supra note 1, at 1, 7-12.
  7. Dahl v. Bain Capital Partners, LLC, No. 1:07-cv-12388-EFH, 2013 WL 950992, at *15 (D. Mass. Mar. 13, 2013) (narrowing the plaintiffs’ theory to a conspiracy among the defendants to refrain from “jumping,” i.e., outbidding, each other’s announced deals).
  8. Dahl v. Bain Capital Partners, LLC, No. 1:07-cv-12388-EFH, 2013 WL 4606512 (D. Mass. Aug. 29, 2013) (dismissing THL); Dahl v. Bain Capital Partners, LLC, No. 1:07-cv-12388-EFH, 2013 WL 3802433, at *10 (D. Mass. July 18, 2013) (dismissing Apollo and Providence).
  9. See, e.g., Complaint, supra note 1, at 52 (alleging, for example, that the gain extracted by a financial sponsor in a dividend recapitalization would, in the absence of the LBO, have flowed to the shareholders, without making any showing of how the shareholders would have extracted such a gain from lenders or alternative sources).
  10. Bell Atl. Corp. v. Twombly, 550 U.S. 544, 570 (2007) (when plaintiffs “have not nudged their claims across the line from conceivable to plausible, their complaint must be dismissed.”). While Twombly has come to stand trans-substantively for a heightened pleading standard, it has particular applicability in the antitrust context: the Supreme Court held that the parallel failure by companies to enter each other’s lucrative markets, without more, did not state an antitrust claim sufficient to survive a motion to dismiss. Id. at 548-50.

    Unlike in Twombly, the Complaint in Dahl cites documentary evidence of several alleged agreements, or attempts to form agreements, by the defendants. See, e.g., Complaint, supra note 1, at 23 (e-mail from Blackstone President Tony James to KKR Founder George Roberts stating “[w]e would much rather work with you guys than against you. Together we can be unstoppable but in opposition we can cost each other a lot of money.”); id. at 27 (“KKR has agreed not to jump our deal since no one in private equity ever jumps an announced deal.”); id. at 28 (alleging coordination of bids amongst supposed competitors); id. at 142 (alleging that KKR dropped out of the Freescale auction in exchange for Blackstone’s dropping out of the HCA auction). But see Dahl, 2013 WL 950992, at *14-15 (narrowing the plaintiffs’ theory to jumping and holding that mere quid-pro-quos, without more, do not evidence an antitrust conspiracy); Jessica Jackson, Much Ado About Nothing? The Antitrust Implications of Private Equity Club Deals, 60 Fla. L. Rev. 697, 708-10 (2008) (listing shallow pockets, diversification requirements, debt fundraising, and shared expertise as reasons for consortium bidding).

  11. Ashcroft v. Iqbal, 556 U.S. 662, 679 (2009) (“[W]here the well-pleaded facts do not permit the court to infer more than the mere possibility of misconduct, the complaint has alleged—but it has not ‘show[n]’—‘that the pleader is entitled to relief.’” (second alteration in original) (quoting Fed. R. Civ. P. 8(a)(2))).
  12. U.S. Dep’t of Justice & Fed. Trade Comm’n, Horizontal Merger Guidelines (2010) [hereinafter HMGs].
  13. Complaint, supra note 2, at 15-16. See also Fed. Trade Comm’n & U.S. Dep’t of Justice, Antitrust Guidelines for Collaborations Among Competitors 5 (2000) (“The Agencies treat a competitor collaboration as a horizontal merger in a relevant market and analyze the collaboration pursuant to the Horizontal Merger Guidelines if appropriate, which ordinarily is when: (a) the participants are competitors in that relevant market; (b) the formation of the collaboration involves an efficiency-enhancing integration of economic activity in the relevant market; (c) the integration eliminates all competition among the participants in the relevant market; and (d) the collaboration does not terminate within a sufficiently limited period by its own specific and express terms.” (footnote omitted)); Jackson, supra note 11, at 712-14 (arguing for the applicability of the collaboration guidelines to Dahl and predicting that the clubs would be treated as joint ventures).
  14. See HMGs, supra note 13, at 2.
  15. Id.
  16. Id.
  17. Id. at 3.
  18. Complaint, supra note 2, at 3.
  19. HMGs, supra note 13, at 4. The HMGs, however, warn that seller interviews may suffer from defects of honesty and accuracy.
  20. Complaint, supra note 2, at 149.
  21. HMGs, supra note 13, at 7-8.
  22. Mutual funds and hedge funds generally do not own a large enough stake in a firm to control it. In the summer of 2013, however, activist hedge funds demonstrated rising power, winning board elections and concessions from management despite owning less than 20% of the firm. See, e.g., Health Management Associates, Inc., Current Report 2-3 (Form 8-K Aug. 12, 2013) (documenting written consent by sufficient shareholders to remove and replace all members of the HMA board of directors with hedge fund Glenview’s slate); Health Management Associates, Inc., Schedule 13D at 2-4 (Aug. 16, 2013) (documenting Glenview’s mere 15% stake in HMA).
  23. Defendant PE funds would also argue that strategic buyers should be considered part of the market. Strategic buyers can use operational control and capital structure replacement and do pay premiums, so they likely should be included in the market. See Complaint, supra note 2, at 196-98 (showing that strategic buyers paid premiums averaging 22%, compared to 16% for sole-sponsor LBOs and 8% for club deals).
  24. HMGs, supra note 13, at 9.
  25. E.g., Rothery Storage & Van Co. v. Atlas Van Lines, Inc., 792 F.2d 210, 219-20 (D.C. Cir. 1986).
  26. HMGs, supra note 13, at 18 & n.9. For instance, a one-firm market has an HHI of 10,000 (100 squared). A two-firm market split equally by the two firms has an HHI of 5,000 (50 squared plus 50 squared). A 100-firm market split equally by the 100 firms has an HHI of 100. A higher HHI implies greater market concentration. Like a least-squares regression, the HHI is particularly susceptible to large outliers—such as a club consisting of all or most of the biggest firms in the market.
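The arithmetic in the examples above can be restated in a few lines of Python (an illustration of the HHI formula only; the function name is mine, not the Guidelines'):

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of squared market shares,
    with shares expressed as percentages (0-100)."""
    return sum(s ** 2 for s in shares)

# The three examples from the note:
print(hhi([100]))      # one-firm market -> 10,000
print(hhi([50, 50]))   # two equal firms -> 5,000
print(hhi([1] * 100))  # 100 equal firms -> 100
```

Because each share enters as its square, one dominant participant (such as an all-encompassing club) drives the index far more than many small ones.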
  27. Id. at 19.
  28. Pa. Ave. Funds v. Borey, 569 F. Supp. 2d 1126, 1133 (W.D. Wash. 2008); accord In re Toys “R” Us, Inc. S’holder Litig., 877 A.2d 975, 1009 (Del. Ch. 2005) (“The ‘cooperative’ bid that First Boston permitted the KKR Group to make gave the Company a powerful bidding competitor to the Cerberus consortium . . . .”).
  29. For AlpInvest ($5.4 billion), Apax ($18.9 billion), Apollo ($13.9 billion), Bain ($17.3 billion), Blackstone ($28.4 billion), Carlyle ($32.5 billion), Cerberus ($6.1 billion), Goldman Sachs ($31.0 billion), Hellman & Friedman ($12.0 billion), KKR ($31.1 billion), Lehman Brothers ($8.5 billion), Leonard Green ($7.2 billion), Madison Dearborn ($6.5 billion), Permira ($21.5 billion), Providence ($16.4 billion), T.H. Lee ($7.5 billion), TPG ($23.5 billion), Silver Lake ($11.0 billion), and Warburg Pincus ($13.3 billion), I take the corresponding five-year total directly from Private Equity International, PEI 50 (2007). PEI 50 does not list the other sponsors or strategic buyers involved in the challenged LBOs, so as a rough proxy, their corresponding total is estimated to be equal to that of the lowest (50th) fund on the list: $3.9 billion.
  30. Complaint, supra note 2, at 1.
  31. This estimate could be somewhat below or above the true figure. It could be too low because a fund’s life may be greater than five years (a common lifespan is ten years, of which roughly five are focused on deploying capital and roughly five on returning capital); a fund that closed six years ago would be incorrectly excluded from the estimate. See Steven N. Kaplan & Per Stromberg, Leveraged Buyouts and Private Equity, 23 J. Econ. Perspectives 121, 123 (2009). On the other hand, it could be too high because some sponsors—such as Carlyle—are “supermarkets” offering a variety of funds, each targeting a different geography (some outside of North America, the location of the challenged LBOs). See Gregory Zuckerman & Ryan Dezember, Carlyle’s 3 Founders Share $400 Million-Plus Payday, Wall St. J., Jan. 12, 2012 (“Carlyle has launched many more smaller buyout funds than rivals, sometimes with a narrow focus . . . .”).
  32. Equity capital is cash from the sponsor itself and its limited partners representing ownership in the fund’s portfolio companies. Acquisition financing that is not equity capital is debt: money borrowed from banks, bondholders or non-traditional lenders that generates the “leverage” in “leveraged buyout.”
  33. Buyout funds invest in a handful of companies each, so it would be incorrect to assume that an entire fund’s equity, or even the majority of it, were in the market for a given deal merely because that fund’s sponsor was a bidder. See, e.g., David Snow, Private Equity: A Brief Overview 7, PEI Media (2007) (“For example, a firm that says it specialises in doing deals that require between $25 million and $100 million in equity per deal may target $750 million for a fund to complete roughly 7 to 10 deals before needing to go back to investors for more capital.”).
  34. See Complaint, supra note 2, at 59 (PanAmSat); id. at 63 (AMC); id. at 66 (Loews); id. at 72 (Toys “R” Us); id. at 78 (SunGard); id. at 84 (Neiman Marcus); id. at 93 (Texas Genco); id. at 103 (Education Management); id. at 113 (Univision); id. at 120 (Michaels Stores); id. at 127 (HCA); id. at 134 (Aramark); id. at 142 (Kinder Morgan); id. at 150 (Freescale Semiconductor); id. at 156 (Philips Semiconductor / NXP); id. at 163 (Harrah’s); id. at 170 (Clear Channel); id. at 176 (Sabre); id. at 184 (Biomet); id. at 190 (TXU); id. at 199 (Alltel). Excluded are Nalco (data not provided), Cablecom (data not provided), Warner Music (data not provided), Susquehanna (data not provided), Vivendi (deal not consummated), and Community Health Systems (deal not consummated).
  35. That is, 25% of the purchase price is modeled as equity. While the leverage ratios varied from deal to deal, 75% debt for mega-buyouts during the 2003–2007 boom era is a reasonable approximation. See Kaplan & Stromberg, supra note 31, at 124 (“The buyout is typically financed with 60 to 90 percent debt . . . .”). But see Ulf Axelson et al., Borrow Cheap, Buy High?: The Determinants of Leverage and Pricing in Buyouts 44 (Nat’l Bureau of Econ. Research, Working Paper No. 15952, 2010) (calculating the debt-to-enterprise-value ratio of public-to-private LBOs during the era of the transactions challenged in Dahl as averaging between 0.65 and 0.68, i.e., 2:1 debt-to-equity leverage).
  36. A separate criticism of the antitrust accusations, not explored here, is that it may not even make sense to characterize a single target as a market.
  37. For instance, suppose only one fund is interested in the target, and that that fund has $10 billion in committed equity capital. The model estimates 15%, or $1.5 billion, of the equity as available for the target.

    Now suppose the total enterprise value of the target is $8 billion. At the modeled 3:1 debt-to-equity ratio, the equity check for the LBO would be $2 billion. Since that exceeds the would-be bidder’s available equity ($1.5 billion), the fund is out of the market and the market size is $0.

    On the other hand, suppose the total enterprise value of the target is $4 billion. At the modeled 3:1 debt-to-equity ratio, the equity check for the LBO would be $1 billion. Since that does not exceed the would-be bidder’s available equity, the fund is in the market and the market size is $1.5 billion.

    Note that it is irrelevant that the market modeled is the market for equity rather than the market for total enterprise value. Intuitively, because the HHI is computed from market shares (percentages) rather than absolute dollar values, it does not matter where in this arithmetic the leverage is introduced.
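The single-fund example above can be restated as a short Python sketch. The 15% availability fraction and the 3:1 debt-to-equity ratio are the model's assumptions from the text; the function names are mine:

```python
AVAILABLE_FRACTION = 0.15  # share of committed equity modeled as available
DEBT_TO_EQUITY = 3.0       # 75% debt financing, per the modeled leverage

def equity_check(enterprise_value):
    """Equity needed to fund the LBO at the modeled leverage ratio."""
    return enterprise_value / (1 + DEBT_TO_EQUITY)

def single_fund_market_size(fund_equity, enterprise_value):
    """The fund is 'in the market' only if its available equity covers
    the equity check; if it is out, the market size is zero."""
    available = fund_equity * AVAILABLE_FRACTION
    return available if available >= equity_check(enterprise_value) else 0.0

# $10bn fund, $8bn target: the $2bn equity check exceeds the $1.5bn
# available, so the fund is out of the market and the market size is zero.
print(single_fund_market_size(10e9, 8e9))
# $10bn fund, $4bn target: the $1bn equity check is within the $1.5bn
# available, so the market size is the fund's $1.5bn of available equity.
print(single_fund_market_size(10e9, 4e9))
```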

  38. For instance, suppose a $10 billion fund and a $20 billion fund are interested in the target. The model estimates 15%, or $1.5 billion of the former fund and $3 billion of the latter fund, as available for the target. Suppose the total enterprise value of the target is $8 billion. At the modeled 3:1 debt-to-equity ratio, the equity check for the LBO would be $2 billion.

    Under a regime where clubbing is forbidden, the $10 billion fund is out of the market, so only the $20 billion fund is in. The market size is $3 billion.

    On the other hand, under a regime where clubbing is permitted, both funds are in the market. The market size is $4.5 billion.
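The two-regime comparison above can likewise be sketched in Python. This is an illustrative restatement of the note's arithmetic only; in particular, the simplification that every interested fund can join a club under the clubbing regime is the note's, and the names are mine:

```python
AVAILABLE_FRACTION = 0.15  # share of committed equity modeled as available
DEBT_TO_EQUITY = 3.0       # 75% debt financing, per the modeled leverage

def market_size(fund_equities, enterprise_value, clubbing_allowed):
    """Sum the available equity of all in-market funds. Without clubbing,
    only funds able to write the whole equity check alone are in the
    market; with clubbing, smaller interested funds count too, since they
    can combine (this sketch assumes the club's pooled equity covers the
    check)."""
    check = enterprise_value / (1 + DEBT_TO_EQUITY)
    available = [equity * AVAILABLE_FRACTION for equity in fund_equities]
    if not clubbing_allowed:
        available = [a for a in available if a >= check]
    return sum(available)

funds = [10e9, 20e9]  # the $10bn and $20bn funds from the example
target = 8e9          # $8bn enterprise value -> $2bn equity check

print(market_size(funds, target, clubbing_allowed=False))  # $3bn market
print(market_size(funds, target, clubbing_allowed=True))   # $4.5bn market
```

The larger denominator under the clubbing regime is what drives down the computed post-club HHI relative to a no-clubbing baseline.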

  39. HMGs, supra note 13, at 19.
  40. The Dahl defendants would argue that the HHIs are inflated (and red flags exaggerated) by excluding from the market PE funds merely because the Complaint failed to identify those funds as interested in a given target. The defendants might also argue that the post-club HHI should use a market size including funds too small to bid alone but able to bid as clubs, even if they did not happen to be members of the winning consortium. To be sure, the defendants should be permitted to make such showings. At the motion-to-dismiss stage, however, the facts of a well-pleaded complaint are treated as true. Ashcroft v. Iqbal, 556 U.S. 662, 664 (2009).
  41. Arithmetically, this result also flows from the HHI formula itself if the pre-club HHI is interpreted as “undefined” rather than zero.
  42. Complaint, supra note 2, at 63.
  43. Id. at 163-64.
  44. Id. at 176-77.
  45. E.g., supra note 41.
  46. Robert Axelrod, The Evolution of Cooperation (1984) (describing tit-for-tat as a strategy under which a player cooperates on turn n if and only if its opponent cooperated on turn n – 1); see also Berman & Sender, supra note 3 (quoting Harvard Business School Professor Josh Lerner as characterizing PE bidding as a repeat game).
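Axelrod's rule is mechanical enough to state in code. This minimal Python sketch (function names mine) plays tit-for-tat against a fixed sequence of opponent moves, "C" for cooperate and "D" for defect:

```python
def tit_for_tat(opponent_history):
    """Cooperate on the first turn; thereafter, do whatever the
    opponent did on the previous turn."""
    return opponent_history[-1] if opponent_history else "C"

def play(opponent_moves):
    """Return tit-for-tat's move sequence against a fixed opponent."""
    my_moves, opponent_history = [], []
    for move in opponent_moves:
        my_moves.append(tit_for_tat(opponent_history))
        opponent_history.append(move)
    return my_moves

# Tit-for-tat opens with cooperation, then mirrors with a one-turn lag:
print(play(["C", "C", "D", "C"]))  # ['C', 'C', 'C', 'D']
```

The one-turn lag is the mechanism by which repeat bidders can punish deal-jumping in the next auction, which is why repeated interaction matters to the conspiracy analysis.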
  47. E.g., Complaint, supra note 2, at 72-73 (Goldman tells Silver Lake that Goldman has, in exchange for Silver Lake’s letting it into SunGard, a “quid pro quo obligation.” (emphasis added)).
  48. See, e.g., id. at 46 (alleging that Blackstone promised future deals to other bidders in exchange for dropping out).

What Do Subprime Securitization Agreements Say About Mortgage Modification?

This paper presents the results of the only publicly available empirical study of what agreements governing subprime securitized mortgages say about mortgage modification.

The mortgage and foreclosure crisis continues to transfix the nation, even as housing markets across the country show signs of improvement.1 The government has tried to promote mortgage modifications to allow homeowners to stay in their homes.2 One particular focus of these efforts has been subprime loans that are securitized, that is, transferred into pools held by trusts and administered by servicers on behalf of investors who buy certificates. For securitized mortgages, a contract called the pooling and servicing agreement governs what the servicer may do to modify the mortgages in the pool.3

Throughout the crisis, an important factor in policy analysis has been what the pooling and servicing agreements that govern securitized subprime mortgages say about mortgage modification. Do these agreements forbid mortgage modifications, so that the most effective modification programs have to trump these agreements, raising all the issues that attend government modification of private contracts?4 Or do the agreements by and large permit mortgage modifications, so that policymakers designing modification programs should concentrate on other possible rigidities that frustrate modification?

Whether and how pooling and servicing agreements constrain mortgage modification is relevant to current policy debates. It is still an open question whether the federal government should take action to trump anti-modification provisions in private securitization agreements, for example by trying to establish the supremacy of federal servicing standards over these agreements. Another question currently under debate is whether local governments should exercise the power of eminent domain to take securitized mortgages and modify them. Municipalities across the country have considered or are currently considering this idea.5 Proponents of such a strategy argue that securitization agreements limit the modification of securitized mortgages, so local governments must step in.6 Another open question is whether differences in terms across securitization agreements impede an effective policy response to problems in the mortgage market, so that the agreements should be standardized.7 The findings reported here suggest that subprime securitization agreements are in fact heterogeneous, so the results bear on this question as well.

Subprime securitization agreements are worth studying because subprime mortgages are commonly understood as lying at the heart of the mortgage and foreclosure crisis. The Financial Crisis Inquiry Report cites “an explosion in risky subprime lending” as the first factual basis for its lead conclusion that the financial crisis was avoidable.8 The Report contains two dissents. One identifies as the lead cause of the crisis a credit bubble “the most notable manifestation of which was increased investment in high-risk mortgages.”9 The other states that “the losses associated with the delinquency and default” of “27 million subprime and Alt-A mortgages” in themselves “fully account for the weakness and disruption of the financial system that has become known as the financial crisis.”10 Although the majority and dissenters disagreed on many topics, they concurred that risky (i.e. subprime) lending was a key part of the crisis.

Subprime mortgages originated in the mid-2000s continue to be important today even though conditions in the mortgage markets have been improving by many measures. For example, as of 2012, subprime delinquencies in mortgage insurer MGIC’s portfolio were at over 60% of their 2006 level.11 Mortgage insurer Radian’s 2012 exposure to mortgages rated A- or below was more than 55% of its 2006 exposure.12

Moreover, even after every subprime loan in existence today is paid off or foreclosed on, subprime loans will remain an important case study of private-label mortgage securitization. The subprime market apparently has started to revive,13 and it will be important to understand how this market has functioned in the past as we discuss how best to regulate it in the future – including whether documentation should be standardized.

Given the importance of what securitization agreements say, it is surprising that no systematic study of the contents of these agreements appears to exist in the academic literature. This paper is a first step toward filling that gap. It presents the results of a review of transaction documents from the sixty-five largest subprime securitization programs from 2006, the last full year of the subprime securitization boom. These programs accounted for approximately 75% of the dollar volume of subprime mortgage-backed securities issued in 2006 about which the Bloomberg Financial Information Service has information.14 Although this paper certainly is not intended to be the last word on the contents of securitization agreements, it is the most comprehensive review of subprime securitization contract terms undertaken to date.

One notable finding is that only about 8% of the agreements by dollar volume contain outright bans on mortgage modification. Most agreements (60% by dollar volume) permit material modification subject to conditions. Among the most common conditions are that default must be reasonably foreseeable or imminent, that the servicer must follow “normal and usual” servicing practices, and that the servicer must act in the interests of certificate-holders, the securitization trust, and/or the trustee. Another relatively common condition requires insurer approval for modification of more than 5% of loans in a securitization pool.

Review Process

The Bloomberg Financial Information Service lists 614 deals from 2006 in the “Res B/C” category.15 Bloomberg has further information on 482 of the 614 deals. These 482 deals cover $435 billion of volume, which is similar to the $449 billion volume for 2006 subprime securitization reported by Inside Mortgage Finance.16 Deals are grouped by “program.” A program is defined as a series of related deals with the same “motive force.”17 The 482 deals fall into 152 “programs” identified with different names in the database.

The results reported here are based on a review of transaction documents from deals in the sixty-five largest subprime securitization programs. The sixty-five largest programs include securities with $323 billion in aggregate principal at issuance, or about 75% of the dollar volume for subprime securitizations in 2006 about which Bloomberg has some information.

The review covered transaction documents from one randomly selected deal in each program. The numerical results reported below are based on aggregating the results of these reviews of sixty-five deals.

A key assumption of the approach is that deals within programs are likely to be similar. This assumption is based in part on guidance from a major New York law firm with extensive experience in securitization and in part on input from the American Securitization Forum. To check the assumption that modification provisions are likely to be the same across deals within a program, the author reviewed half the deals in the two largest programs, a Countrywide program and a First Franklin program. This supplemental review turned up no significant differences in the material-modification provisions among deals in either program.


1. Subprime securitization contracts may expressly bar, expressly authorize, or remain silent on material modification. Express authorization is the most common arrangement (60% of contract volume), followed by silence (32% of volume), and express bar (8%).

The chart below shows the relative prevalence of the three types of contract term. (The chart is omitted. Please refer to the original pdf document.)

Outright bans on material modification are relatively rare, but the situation where there is no express authorization to make material modifications is fairly common. “Material” modifications are defined here to include long extensions of loan maturity (more than a year or so) and reductions of principal or interest. Material modifications, as defined here, do not include short extensions of time to pay or waivers of late fees or penalty interest. Many contracts provide for such minor modifications without authorizing more significant ones.

The paper now discusses the two largest categories in turn: the category in which material modification is expressly authorized, and the category in which it is neither expressly authorized nor expressly barred.

2. When material modification is expressly authorized, it is subject to conditions.

The review did not identify any contracts that simply authorized the servicer to modify mortgage loans without conditions on the exercise of this authority.18 The chart below illustrates the proportion of the dollar volume of 2006 subprime mortgage-backed securities subject to various conditions. The percentages are relative to the total volume of securities that have express modification provisions (that is, relative to the green 60% slice in the chart above). Totals add up to much more than 100% because more than one constraint applies to the typical loan.

Certain general standards are extremely common: Servicers typically must follow normal and usual servicing practices, act in the interests of investors, and service loans in the same manner as they service their own loans. It is also common for securitization documents to require the servicer to service the loans that are the subject of the transaction in the same manner as other loans that it services for third parties, although this type of provision is less prevalent (29.3% of principal volume). Clearer standards could promote modification by reducing litigation risk.

Provisions that require reasonably foreseeable or imminent default before material modifications are allowed are also extremely common (83.4% of principal volume is subject to such provisions), and provisions requiring actual default as a prerequisite for material modification exist, though they are not common. Presumably, standards based on loan-to-value and debt-to-income ratios could supply an objective means of meeting a “reasonably foreseeable or imminent default” test.

It is also common to require permission from third parties involved with the transaction for material modifications. Such parties include the rating agency, the credit insurer, or the trustee. More than half the total principal volume covered by this study is subject to such a requirement. (The figures reported here include provisions that require permission only when 5% or more of the total volume or number of loans is modified).

3. Even when material modification is expressly authorized, not all types of material modification are necessarily permitted.

The most common form of express authorization to make material modifications takes the form of allowing the servicer to modify “any” term of the mortgage loan as long as specified conditions are met. However, not all authorizations take this form; some authorize only a subset of material modifications. Among 2006 subprime securitizations for which some material modification is permitted, 23% of the dollar volume of principal is governed by provisions that do not expressly authorize material term extensions, that do not expressly authorize principal reductions, or that expressly limit or do not expressly authorize interest rate reductions. Examples of the latter class of provision include those that prohibit loan modification to reduce interest below 5% or below half the rate otherwise applicable under the contract.

4. When material modification is not expressly authorized, the contract typically contains a broad provision empowering the servicer “to do any and all things that may be necessary or desirable in servicing the loan,” or words to that effect. Even when such language is absent, the grant of power to service is a basis for arguing that the servicer may modify the loans.

Turning to the smaller group of securitized subprime mortgage loans for which modification is not expressly authorized (the burgundy slice of the pie graph above), approximately two thirds of the dollar volume of principal in this class is covered by the broad, catch-all grant of power above, which appears to provide a contractual basis for making modifications that satisfy the other standards in the contract.

5. In cases where material modification is not expressly authorized, there are contractual constraints on the power to modify, frequently arising from the agreements’ general provisions.

Where the power to make material modifications is not express, any such power must be inferred from the general grant of authority to the servicer to service the loans (and, where present, to “do all things necessary or desirable” to do so). That implied power is limited by the general servicing standards in the agreement and, frequently, by specific modification constraints as well.19 The chart below (omitted; please refer to the original pdf document) illustrates the relative importance of various constraints on any implied power to modify. The percentages are expressed relative to the aggregate principal volume of securities that contain neither express authorization nor express prohibition of material loan modifications.

Areas for Further Research

As noted, this paper is a first step toward understanding contractual restrictions on mortgage modification. Although a review of deals within the two largest subprime securitization programs supports the proposition that modification provisions are consistent across deals in a single program, further testing of this hypothesis may be desirable.

Assuming the technique of sampling by program is justified, it would be ideal to extend the research to years before 2006 and to non-subprime deals. Additional aspects of these contracts may also be relevant to mortgage modification. For example, prepayment penalty waivers are typically subject to standards that are different from those governing other loan modifications, and it appears based on casual observation that these standards are quite heterogeneous and often stringent. Another area for further research is contractual limits on servicer liability. A casual review suggests that these limits are both widespread and heterogeneous, suggesting that different servicers face widely varying levels of liability risk if they modify mortgage loans in a manner inconsistent with the contract documents.


Perhaps the most striking finding of the study reported here is just how heterogeneous subprime securitization agreements’ mortgage modification provisions actually are. Many different provisions appear in many – but not most – subprime securitization agreements. It is very difficult to form a simple picture of what the typical agreement says and to make policy based on that picture, because it appears that, for the most part, there is no typical agreement. It is easy to imagine that policymakers will demand some kind of standardization as part of any effort to revive the moribund private mortgage securitization market.

Another noteworthy finding is that outright contractual bans on mortgage modification seem rare, appearing in only about 8% of the sample by dollar volume. That is not to say that pooling and servicing agreements pose no significant obstacles to mortgage modifications: the restriction on modifying more than 5% of loans in a mortgage securitization pool without insurer approval, which appeared in nearly a third of agreements that impliedly authorize modification, certainly seems important.20

Informed discussion of the contents of existing pooling and servicing agreements will help advance a number of debates currently under way, including whether local governments should use eminent domain to take mortgages, whether federal powers should be used to override the agreements, and whether pooling and servicing agreements should be standardized. This paper has taken a first step toward promoting such informed discussion.

  1. Two articles about the foreclosure crisis ran in the New York Times on July 25, 2013. See Shaila Dewan, New Defaults Trouble a Mortgage Program, N.Y. Times, July 25, 2013, at B2 (describing level of defaults among homeowners who received mortgage modifications under TARP program and quoting Treasury source as saying “the housing market and the economy are improving”); Trip Gabriel, Welcome Mat for Crime as Neighborhoods Crumble, N.Y. Times, July 25, 2013 (explaining that in Cleveland, Ohio the foreclosure crisis’s “legacy of abandoned homes has frayed neighborhoods, leaving behind those who cannot afford to get out, while providing shelter to people on the social margins” and suggesting that this situation contributes to crime in the city).
  2. See, e.g., Lisa Prevost, Loan Modifications, Proactively, N.Y. Times, June 28, 2013 (describing FHFA’s creation of new Streamlined Loan Initiative to promote modifications and differences between the new program and the preexisting Home Affordable Modification Program).
  3. See generally Adam Levitin & Tara Twomey, Mortgage Servicing, 28 Yale J. On Reg. 1 (2011) (describing servicing of securitized mortgages).
  4. See, e.g., Anna Gelpern & Adam Levitin, Rewriting Frankenstein Con- tracts: Workout Prohibitions in Mortgage-Backed Securities, 82 S. Cal. L. Rev. 1075, 1149 (2009) (“[I]t would be relatively simple to legislate away both the contractual and TIA barriers to amending RMBS PSAs”).
  5. See Jody Shenn, Eminent Domain Plan Decried by DoubleLine Sees New Life, Bloomberg News, July 17, 2013 (reporting that Chicago and San Bernardino County rejected a plan to seize mortgages using eminent domain last year but that cities in California and Nevada, including North Las Vegas, El Monte, and Richmond, are proceeding with such plans).
  6. See, e.g., Robert C. Hockett, It Takes a Village: Municipal Condemnation Proceedings and Public/Private Partnerships for Mortgage Loan Modification, Value Preservation, and Local Economic Recovery, 18 Stan. J. L. Bus. & Fin. 121, 138-43 (2012) (arguing that structural and contractual impediments to modifications are part of the justification for use of local eminent-domain powers to take mortgages).
  7. See, e.g., Patricia A. McCoy, The Home Mortgage Foreclosure Crisis: Lessons Learned 37 (May 7, 2013) (unpublished manuscript),
  8. National Commission On The Causes Of The Financial And Economic Crisis In The United States, The Financial Crisis Inquiry Report xvii (2011).
  9. Id. at 417-18 (dissenting statement of Commissioners Hennessey, Holtz-Eakin, and Thomas).
  10. Id. at 451 (dissenting statement of Commissioner Wallison).
  11. MGIC Investment Corp. Annual Report (Form 10-K) March 1, 2013, at 25 (2012 data); MGIC Investment Corp. Annual Report (Form 10-K) (March 1, 2011), at 22 (2006 data).
  12. Radian Group Inc. Annual Report (Form 10-K) (February 22, 2013) at 99 (primary insurance in force data for 2012); Radian Group Inc. Annual Report (Form 10-K) (March 10, 2009) at 119 (primary insurance in force data for 2006). Moreover, Lender Processing Services reports that 35% of delinquent mortgages in July 2013 were Alt-A and subprime. Lender Processing Services, LPS Mortgage Monitor: August 2013 Mortgage Performance Observations (2013). LPS’ involvement in the robosigning scandal impairs its credibility. See David McLaughlin, LPS Reaches $120 Million Deal in “Robosigning” Probe, Bloomberg, Jan. 31, 2013; Dustin A. Zacks, Robo-Litigation, 60 Clev. St. L. Rev. 867, 902 (2013) (describing allegations against LPS and employees). However, there is no obvious motive to falsify this figure and it is broadly consistent with our other observations.
  13. See Katherine Rushton, Fresh Fear over U.S. Subprime Lending, Telegraph, June 1, 2013 (discussing resurgence of the U.S. subprime mortgage market); Center for Public Integrity, Subprime Lending Execs Back in Business Five Years After Crash, Sept. 14, 2013 (reporting that many former executives of subprime lenders are developing “new loans that target borrowers with low credit scores and small down payments, pushing the limits of the tighter lending standards that have prevailed since the crisis.”).
  14. Bloomberg has some information on 482 subprime issues from 2006. These 482 deals cover $435 billion of volume, which is similar to the $449 billion volume for 2006 subprime securitization reported by Inside Mortgage Finance. The sixty-five programs reviewed here cover 80% of this volume, but not all documents were available for review for some programs, so the actual coverage is approximately 75%.
  15. This list was accessed by using the “DQRP” function. The definition of “subprime” is drawn from classifications made by the Bloomberg Financial Information Service. The “Res B/C” loans are categorized as “subprime” in this paper. Based on interactions with Bloomberg representatives, it appears that Bloomberg put each deal in the “Res B/C” category if the following three categories made up more than 50% of the dollar volume of principal in the deal at the time of classification: (1) “B- or C-rated loans.” Bloomberg made this determination based on the loans’ description in the prospectus; most frequently the prospectus described the loans as “subprime,” “scratch-and-dent,” “blemished-credit” or the like. (2) Home equity loans. Here the governing criterion was whether the loans’ purpose was to take equity out of the home rather than for purchase, although apparently deals might also be put in this category if the arrangers described the deal as a home-equity deal. (3) Loans that were thirty days or more delinquent at the time of classification.
  16. Adam B. Ashcraft & Til Schuermann, Understanding the Securitization of Subprime Mortgage Credit, Fed. Res. Bank Of N.Y. Staff Reports No. 318, at 4 (citing Inside Mortgage Finance data). Because there is no single definition of “subprime,” complete agreement on dollar volumes would not be expected.
  17. The motive force might be the mortgage originator, the underwriter, or another party. This definition of “program” was turned into a working definition by assuming that deals in the same numerical sequence are part of the same program. For example, we located 26 deals in Bloomberg with names “CWL 2006-1” through “CWL 2006-26.” We were able to locate information about the sponsor, depositor, master servicer, and underwriter for 21 of these deals, and all 21 shared a sponsor, depositor, master servicer, and underwriter. We assumed that these 26 deals were part of the same program. This pattern holds in most cases: deals in the same numerical sequence typically have the same originator or underwriter, suggesting that the same motive force is involved in each deal.
  18. One program, covering approximately 2% of the dollar volume, authorized modification if, in the servicer’s reasonable and prudent judgment, the modification “could be in the best interest” of investors. This appears to be the most flexible standard encountered in the documents.
  19. Agreements that do not expressly authorize material modifications often expressly authorize minor modifications, such as short extensions of time to pay or waivers of late fees or penalty interest.
  20. This restriction is less important if rating agencies or other stakeholders do not count successful modifications against the 5% cap, as has been reported. See Diane E. Thompson, Foreclosing Modifications: How Servicer Incentives Discourage Loan Modifications, 86 Wash. L. Rev. 755, 784 (2011) (citing Monica Perelmuter & Waqas I. Shaikh, Standard & Poor’s Criteria Revised Guidelines For U.S. RMBS Loan Modification And Capitalization Reimbursement Amounts 3 (Oct. 11, 2007)).

The President’s Climate Plan for Power Plants Won’t Significantly Lower Emissions

* Brian H. Potts is a partner at the law firm of Foley & Lardner LLP and has authored numerous articles on the Clean Air Act. He holds an LL.M. in Energy Law from the University of California, Berkeley School of Law, a J.D. from Vermont Law School, and a B.S. from Centre College. Parts of this Essay appeared in an op-ed originally published by Mr. Potts in The Hill on August 5, 2013.

President Obama released his Climate Action Plan to much fanfare on June 25, 2013, in an attempt to reduce the country’s greenhouse gas emissions “in the range of 17 percent below 2005 levels by 2020.”1 The centerpiece of the plan is to have the Environmental Protection Agency (EPA) issue carbon dioxide (CO2) standards for new and existing power plants, which account for roughly one-third of this country’s emissions.2

Obama’s announcement sent the news media and politicians into a frenzy, with sweeping proclamations on both sides that these regulations would shut down large numbers of power plants, increase electricity costs, and kill the coal industry.3 But as this Essay will illustrate, these proclamations are exaggerated. The new regulations are unlikely to cause any significant retirements of existing coal-fired power plants (the largest emitters by far4), and at best will lead to no more than about a five percent reduction in power plant emissions once fully implemented around 2020. This lackluster result is not President Obama’s fault. The Clean Air Act – which governs the EPA’s ability to issue these standards – and more than forty years of federal court and EPA decisions interpreting the Act leave the EPA’s hands tied.

This Essay will explain why.

I. The Clean Air Act’s Limitations

The Clean Air Act allows the EPA to set emission standards for new and existing power plants based on the best emissions control technology available. In making this determination, the EPA must consider what emission reductions are achievable from the available technologies, but it must also take into account cost and prove that the chosen technology has been adequately demonstrated.

Unfortunately, the carbon control technology options for coal-fired power plants are limited. The only options are to improve the efficiency of the plant, to switch to lower emitting fuels like natural gas, or to sequester the carbon underground (which involves separating the carbon dioxide from the exhaust stream and piping it to underground storage reservoirs, sometimes hundreds of miles away).5 Unlike for other pollutants, there is no add-on scrubber that removes carbon dioxide from a plant’s exhaust.

The Clean Air Act allows the EPA to set two different types of technology standards. First, EPA can set blanket emission standards, called new source performance standards (NSPS), which uniformly apply to all sources covered by them.6 EPA has set more than eighty of these uniform technology standards for various categories of sources since the Act was adopted in 1970 – covering everything from landfills and petroleum refineries to dry cleaners.7 It set the first NSPS for power plants in 1971, and has revised those standards various times. But the current NSPS for power plants only regulates conventional pollutants, such as particulate matter, nitrogen oxides (NOx) and sulfur dioxide (SO2).8 CO2 is not regulated.

The second general type of technology standard that the EPA can impose is set on a case-by-case (power plant by power plant) basis. These case-by-case technology standards, called best available control technology (BACT) determinations, are conducted when a company obtains a construction permit to build a new plant or to significantly modify an existing one.9 BACT determinations are set for each pollutant “subject to regulation” under the Clean Air Act, which includes all of the conventional pollutants regulated by the NSPS and others.10 In 2011, the EPA began requiring new and significantly modified power plants to conduct BACT determinations for CO2.11 Every new or modified plant subject to BACT has a slightly different limit to meet, based on the specific layout of the plant and what control technologies are available for that plant.

Now here’s the rub: according to the Clean Air Act, the case-by-case BACT standards must be more stringent than the blanket, uniform NSPS ones. The definition of “best available control technology” in the Act specifically states that “[i]n no event shall application of ‘best available control technology’ result in emissions of any pollutants which will exceed the emissions allowed by any applicable standard established pursuant to [the NSPS] section of this title.”12 In other words, as the D.C. Circuit Court of Appeals has said, “[a]t a minimum, . . . BACT [is] as restrictive as NSPS.”13

This requirement is important because President Obama’s plan calls for the EPA to issue the NSPS standards for new and existing power plants. Yet the EPA has already approved the more stringent case-by-case BACT standards for a number of existing and planned coal-fired power plants, and they have not amounted to much, generally requiring modest efficiency improvements that are relatively inexpensive and achieve at most about a five percent reduction in emissions.14

For example, a case-by-case BACT standard for CO2 was set in 2011 for Wolverine Power Supply Cooperative, Inc.’s proposed 600 MW coal-fired plant in Rogers City, Michigan, and it led to only a 4.7% reduction in CO2 emissions. A BACT standard for CO2 was also set in 2011 for the existing coal-fired George Neal South Power Plant in Salix, Iowa, and it resulted in only about a 1.2% reduction in emissions.15 These BACT determinations are representative of the various determinations set to date, and the plants are typical of coal-fired power plants in the industry.

These past BACT determinations have put the agency in a difficult legal position. The President’s planned NSPS regulations will almost certainly need to be less stringent than these BACT determinations, or the EPA risks violating the Act. In other words, if the EPA tries to adopt uniform NSPS limits for new or existing power plants that require more than about a five percent reduction in emissions, it will almost certainly run afoul of the Act.

While the statutory language unequivocally provides that BACT limits must be more stringent than the NSPS, the EPA might argue that the converse is not also true, and that the agency can set NSPS standards that are more stringent than previous BACT determinations. But this argument seems tenuous. The technology tests under the Act are the same in all material respects for the NSPS and BACT. So, even if the courts agree that the Act does not bar the EPA from adopting NSPS that are more stringent than previous BACT determinations, they would almost certainly find that the EPA was arbitrary and capricious if the agency set an NSPS at a significantly more stringent level than past BACT determinations that were recently set for the exact same types of sources.16

With this background, let’s turn to Obama’s specific plan for regulating new and existing sources under the NSPS program.

II. The Standards for New Power Plants Will Not Have Any Effect on Emissions

The President’s plan directs the EPA to issue CO2 standards for existing power plants by 2015 and for new plants as expeditiously as possible.17 In April of 2012, well before the release of the President’s climate plan, the EPA issued its first proposal to regulate CO2 from new power plants using the NSPS program.18 That proposal determined, rather counter-intuitively, that the best control technology available for new coal-fired power plants was simply to build a natural gas-fired power plant instead, and set the standard based on a typical natural gas-fired plant’s emissions.19 In other words, the proposal – if finalized – would essentially ban the future construction of all coal-fired power plants in this country because no conventional coal-fired plant can meet the standard. Not surprisingly, the utility industry and coal companies went ballistic with public comments (the EPA received more than 2.6 million comments) – many arguing that the EPA’s approach was unlawful.20

The primary legal problems with the EPA’s approach are fairly obvious. A natural gas plant is not a carbon control technology; it’s a different kind of power plant. And courts (and even the EPA) have said for years that technology-based limits should not “redefine the source,” which is exactly what the EPA’s proposal would do.21 Moreover, the EPA’s proposal would impose an NSPS limit for new power plants that is vastly more stringent than any of the existing BACT standards for coal-fired power plants.

The overwhelming industry response and these legal issues led the EPA to announce recently that it plans to re-issue the proposal in late September of this year and to make separate control technology determinations for plants that burn coal and for those that burn natural gas.22 But even if the EPA does separate coal plants from gas plants in the new proposal, the control technology options for coal-fired power plants are still limited, and the EPA is hampered by past precedent. Technology-based limits cannot require a plant to switch from coal to natural gas,23 and the EPA in its April 2012 proposal has already basically admitted that separating and sequestering the carbon is too costly, noting that it “would add around 80 percent to the cost of electricity” for a new plant.24 All of the previously mentioned BACT decisions considered and eliminated both fuel switching and sequestration as the technological choice. That just leaves the EPA with efficiency improvements in new plant design, which would at best lead to minor emissions reductions – if new plants are built that replace older, higher emitting ones.

Yet even the EPA does not expect new coal-fired power plants to be built any time soon. The EPA admitted in its original NSPS proposal that current market conditions make it highly unlikely that anyone will build a new coal plant between now and 2020, regardless of what the EPA does with the CO2 NSPS.25 Natural gas prices are too low and are forecasted by the EPA to stay that way, while other EPA regulations aimed at mercury and SO2 emissions are pushing the cost of new coal-fired power plants up compared to gas plants. In fact, even though the EPA’s proposal would basically ban the construction of new coal-fired power plants, the agency admitted in its proposal that it “believes that this proposed rule is not likely to produce changes in emissions of greenhouse gases or other pollutants” because of market conditions.26

Given these current market conditions and the looming legal battles, some in the industry believe the EPA will back down from its proposal and allow the construction of new coal plants, as long as they include the most efficient design (a “supercritical” advanced coal plant).27 This would allow the EPA to avoid setting bad legal precedent on the new NSPS, which could impact the viability of its plans for the existing power plant NSPS.

Either way, the EPA’s new power plant NSPS is not likely to have any impact on CO2 emissions between now and 2020.

III. The Standards for Existing Sources Might Reduce Emissions Slightly, or Not at All

The prospect for existing power plant standards, which Obama’s plan calls for the EPA to finalize by June of 2015, is equally ho-hum. As discussed, these existing plant standards should be less stringent than the case-by-case BACT ones, so even if Obama follows through with his promise, the most we are likely to see are standards for existing coal-fired power plants based on modest efficiency improvements.

The EPA’s own past statements also pose an obstacle to proposing more stringent standards for existing power plants. The agency has already publicly taken a narrow view of the technological alternatives available to it in imposing CO2 standards for existing sources. In 2008, the agency admitted that these standards would likely focus on incremental improvements in the heat rates of existing units through options that “are well known in the industry” (i.e., energy efficiency improvements).28 And in its already-released proposal for new plants, the EPA made the same admission, saying that “[t]he most likely candidates for control actions [for modified existing sources] would be efficiency measures.”29

Existing BACT standards and the EPA’s past statements are not even the EPA’s most significant legal problem associated with setting standards for existing plants. Most of the EPA’s current NSPS regulations only apply to new or significantly modified sources. They do not generally apply to existing plants, as the Act grandfathered existing sources out of many of its most stringent requirements. The President’s plan, however, is to have the EPA issue NSPS for existing sources using Section 111(d) of the Clean Air Act, a rarely used section that the EPA generally has only used to regulate smaller sources that are not otherwise regulated under the Act.

Section 111(d) states that the EPA can force the states to adopt standards for “any existing source for any air pollutant: . . . which is not . . . emitted from a source category which is regulated under section [112] of the Act . . .”30 Based on a plain reading, this section seems to only allow the EPA to issue existing source NSPS for categories of sources that are not subject to hazardous air pollutant standards under Section 112 of the Act. Power plants are a category of sources, however, that are subject to hazardous air pollutant standards for mercury under Section 112.31 As such, it is highly questionable whether the EPA can even regulate existing power plants at all using Section 111(d).32

IV. Conclusion

The EPA’s regulation of CO2 from new and existing power plants under the NSPS program faces significant legal hurdles. The EPA’s best hope for the regulations to make it through the courts is if the agency takes a conservative approach and sets the NSPS for new and existing sources based on something less than the existing BACT standards. This means – at best – reductions in the range of 1-5% by 2020 for existing sources, with no emission reductions from new sources (since none are likely to be built).

The President’s climate plan calls for a 17% reduction in total nationwide emissions by 2020. Given that no emission reductions are expected from the new power plant NSPS – and assuming minimal reductions from the existing power plant NSPS – the Administration will need to obtain significantly more reductions from the sources comprising the other two-thirds of national emissions, or the President’s 17% total reduction goal by 2020 will not be met.
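The gap described above follows from simple arithmetic. As a rough back-of-the-envelope check (the percentages are the Essay's own estimates, and the script is purely illustrative, not an official EPA projection):

```python
# Back-of-the-envelope check of the Essay's argument, using its own figures.
# These inputs are the Essay's estimates, not official projections.
power_plant_share = 1 / 3          # power plants ~ one-third of U.S. emissions
new_plant_reduction = 0.00         # no new coal plants expected before 2020
existing_plant_reduction = 0.05    # at most ~5% cut from efficiency-based NSPS

# Nationwide reduction attributable to the power-plant NSPS alone:
nationwide_cut = power_plant_share * existing_plant_reduction
goal = 0.17

print(f"NSPS contribution: {nationwide_cut:.1%} of total emissions")
print(f"Remaining gap to the 17% goal: {goal - nationwide_cut:.1%}")
```

Even on the most generous assumptions, the power-plant NSPS delivers under two percentage points of the seventeen the plan promises, leaving roughly fifteen points to come from other sectors.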

  1. Executive Office of The President, The President’s Climate Action Plan 6 (June 2013) [hereinafter Obama’s Climate Plan].
  2. Id. at 6.
  3. See, e.g., Brenda Buttner, How Will Obama’s Fight on Global Warming Change the Economy?, Fox News (June 29, 2012); Jim Malewitz, In Obama Climate Plan, States Seek Flexibility, USA Today (July 12, 2013); Mark Drajem, Republicans Propose Limiting Obama Climate Plan in Budget, Bloomberg (July 22, 2013).
  4. Obama’s Climate Plan at 6.
  5. Interdisciplinary MIT Study, The Future of Coal: Options for a Carbon Constrained World 5, 17-42 (2007).
  6. 42 U.S.C. § 7411.
  7. See 40 C.F.R. pt. 60 (containing all of the EPA’s final NSPS regulations).
  8. 40 C.F.R. §§ 60.40-52da.
  9. 42 U.S.C. § 7475(a)(4). BACT is the case-by-case technology standard that applies in areas currently meeting EPA’s national ambient air quality standards. In those areas that are not currently meeting air quality standards (called nonattainment areas), there is a separate and more stringent control technology standard called the lowest achievable emission rate (LAER) standard. Because there are no national air quality standards for CO2, LAER is not applicable to the analysis in this Essay.
  10. 42 U.S.C. § 7475(4).
  11. Prevention of Significant Deterioration and Title V Greenhouse Gas Tailoring Rule, 75 Fed. Reg. 31514, 31516 (final rule June 3, 2010).
  12. 42 U.S.C. § 7479(3).
  13. State of New York v. EPA, 413 F.3d 3, 13 (D.C. Cir. 2005).
  14. See, e.g., BACT Analysis and Correspondence for MidAmerican Energy Company George Neal South Power Plant Construction Permit (Jan. – Mar. 2011) (on file with author); US EPA Comments on MidAmerican Energy Company George Neal South Power Plant Construction Permit (May 06, 2011) (on file with author); Construction Permit for MidAmerican Energy Company George Neal South Power Plant (May 16, 2011) [hereinafter “Neal South Materials”] (on file with author). See also EPA Comments on Wolverine Power Supply Cooperative, Inc. Construction Permit (May 19, 2011) (on file with author); Response to EPA Comments on Wolverine Power Supply Cooperative, Inc. (June 29, 2011) [hereinafter “Wolverine Materials”] (on file with author).
  15. Neal South Materials, supra note 14; Wolverine Materials, supra note 14.
  16. Unless, of course, a new technology popped up between the time of the BACT determinations and EPA setting the NSPS (which seems unlikely in this case).
  17. Environmental Protection Agency, Presidential Memorandum – Power Sector Carbon Pollution Standards (June 25, 2013).
  18. Standards of Performance for Greenhouse Gas Emissions for New Stationary Sources: Electric Utility Generating Units, 77 Fed. Reg. 22392 (proposed Apr. 13, 2012).
  19. Id. at 22398 (“We propose that a [natural gas combined cycle] facility is the best system of emission reduction”).
  20. eRulemaking Program Management Office, U.S. Environmental Protection Agency (search for docket number “EPA-HQ-OAR-2011-0660;” then click on “open docket folder” for Greenhouse Gas New Source Performance Standard for Electrical Generating Units; comment letter count will be on the right side of the screen) (last visited Aug. 2013).
  21. See, e.g., Sierra Club v. EPA, 499 F.3d 653, 655 (7th Cir. 2007) (“The EPA’s position is that ‘best available control technology’ does not include redesigning the plant proposed by the permit applicant.”); Longleaf Energy Associates, LLC v. Friends of the Chattahoochee, Inc., 681 S.E.2d 203 (Ga. App. 2009) (“The BACT analysis did not, however, require the EPD to consider any alternative control technology that, if applied to the proposed power plant, would constitute a redesign of the plant.”); In re Old Dominion Electric Cooperative, 3 E.A.D. 779, 793 n.38 (EPA Adm’r 1992) (“[T]raditionally, EPA does not require a . . . [permit] applicant to change the fundamental scope of its project.”).
  22. Zach Coleman, EPA Sends White House Revised Rule for Power Plants, The Hill (July 1, 2013).
  23. See Sierra Club v. EPA, 499 F.3d 653; see also Larry Parker & James E. McCarthy, EPA’s BACT Guidance for Greenhouse Gases from Stationary Sources, Congressional Research Service 16 (Nov. 22, 2010) (“Natural gas substitution for coal in a facility is generally considered by EPA to be an option that redefines the source”).
  24. 77 Fed. Reg. 22392, 22415 (“The DOE/National Energy Technology Laboratory . . . estimates that using today’s commercially available [carbon capture and sequestration] technologies would add around 80 percent to the cost of electricity for a new pulverized coal (PC) plant”).
  25. Id. at 22398 (“[O]ur Integrated Planning Model [IPM] model projects that for economic reasons, natural gas-fired EGUs will be the facilities of choice until at least 2020”).
  26. Id. at 22430.
  27. Dawn Reeves, EPA’s Stationary Source GHG Rules Face New Legal, Policy Uncertainty, Inside EPA 1 (March 22, 2013).
  28. Parker & McCarthy, supra note 23, at 16.
  29. 77 Fed. Reg. 22392, 22421.
  30. 42 U.S.C. § 7411(d).
  31. National Emissions Standards for Hazardous Air Pollutants From Coal- and Oil-Fired Electric Utility Steam Generating Units and Standards of Performance for Fossil-Fuel-Fired Electric Utility, Industrial-Commercial-Institutional, and Small Industrial-Commercial-Institutional Steam Generating Units; Final Rule, 77 Fed. Reg. 9304 (February 16, 2012).
  32. See American Electric Power Co. v. Connecticut, 131 S. Ct. 2527, 2537 n.7 (2011) (“EPA may not employ §7411(d) if existing stationary sources of the pollutant in question are regulated under . . . the ‘hazardous air pollutants’ program, §7412.”).

Moving Forward with Regulatory Lookback

* Cary Coglianese is the Edward B. Shils Professor of Law, Professor of Political Science, and Director of the Penn Program on Regulation at the University of Pennsylvania Law School. This essay is based upon remarks delivered at a Progressive Policy Institute (PPI) forum on “Regulating in the Digital Age” in Washington, D.C., on May 9, 2013. The author is grateful for helpful comments from Brady Sullivan, Jonathan Wiener, participants at the PPI forum, and the editorial team at the Yale Journal on Regulation.

President Obama has rightly called on government agencies to establish ongoing routines for reviewing existing regulations to determine if they need modification or repeal. Over the last two years, the White House Office of Information and Regulatory Affairs (OIRA) has overseen a signature regulatory “lookback” initiative that has prompted dozens of federal agencies to review hundreds of regulations. This regulatory initiative represents a good first step toward increasing the retrospective review of regulation, but by itself will do little to build a lasting culture of serious regulatory evaluation. After all, past administrations have made similar review efforts, but these ad hoc exercises have never taken root. If President Obama is serious about institutionalizing the practice of retrospective review, his Administration will need to take further steps in the coming years. This essay offers three feasible actions – guidelines, plans, and prompts – that President Obama’s next OIRA Administrator should take to move forward with regulatory lookback and improve both the regularity and rigor of regulatory evaluation.

Responding to an executive order from President Obama, dozens of federal agencies over the last two years have undertaken extensive reviews of the regulations on their books, looking for antiquated, counterproductive, or unnecessary rules that should be modified or eliminated. According to the Administration, agencies have collectively completed more than five hundred regulatory reviews and initiated policy modifications expected to yield cost savings in the billions of dollars. These results look good, to be sure, but they are only a small step toward achieving the Administration’s broader goal of institutionalizing retrospective regulatory analysis. To avoid squandering the progress made so far, the Administration must use the next several years to take additional steps to improve retrospective regulatory analysis and identify still better targets for the application of more rigorous evaluation research.

The Obama Administration has sometimes characterized its existing retrospective review initiative as “historic”1 and “unprecedented.”2 But actually it is far from unprecedented. President Clinton issued an executive order requiring agencies to develop programs by which they would “periodically review” existing regulations,3 and Vice President Gore oversaw a government-wide regulatory review process that trimmed a sizeable number of pages of outmoded rules from the Code of Federal Regulations.4 Under President George W. Bush, the White House Office of Information and Regulatory Affairs (OIRA) invited members of the public to nominate existing rules needing review and reconsideration, a process which led to the scrutiny of nearly four hundred rules and regulatory guidance documents.5

Although retrospectively reviewing regulation is far from new, what makes the Obama Administration’s latest round of review distinctive is its laudable but ambitious goal of institutionalizing the practice of what the Administration calls regulatory lookback.6 President Obama’s first OIRA Administrator, Cass Sunstein, proclaimed that the Administration’s lookback would not be a “one-time endeavor” as in previous administrations; instead, the Obama Administration’s lookback aspires to be just a first step toward building “a regulatory culture of regular evaluation.”7

Widespread acceptance of continuous regulatory review is exactly what is needed to fulfill what President Obama has rightly characterized as the government’s duty to “measure, and seek to improve, the actual results of regulatory requirements.”8 Unfortunately, the federal government’s treatment of retrospective regulatory review still lags far behind agencies’ practice of prospectively analyzing proposed regulations, a process institutionalized by President Reagan and overseen by OIRA for the last thirty years. It is fair to say that retrospective review is today where prospective analysis was in the 1970s: ad hoc and largely unmanaged.

Without doing more, the Obama Administration’s recent lookback initiative will end up in the same dustbin as the regulatory review processes initiated under Clinton and Bush. Sure, some discrete improvements in specific regulations will likely result, but retrospective review will remain a periodic and unsystematic fancy rather than a serious, ongoing part of regulatory policymaking.

How to move forward? One way would be to create a new, independent regulatory institution dedicated to retrospective review, along the lines of proposals offered by, among others, Michael Greenstone of MIT and Michael Mandel and Diana Carew of the Progressive Policy Institute.9 There is much to be said for such proposals. But as anyone who follows Washington politics knows, it will undoubtedly take considerable time—not to mention clout—before Congress might enact even such appealingly bipartisan proposals. Even if a new institution were to be authorized, funded, and staffed, it would take still more time for that body to begin to conduct reviews and make recommendations. The Obama team would likely be in its closing days, if not gone from Washington altogether, by the time a new institution could begin to have an impact.

Fortunately, the Obama Administration does not need to wait for the creation of a new institution before taking steps to embed evaluation more deeply and permanently into the regulatory process. Acting entirely on its own, the Administration can still move forward with action that will help institutionalize retrospective review for the next three years and beyond. Specifically, the White House’s OIRA should issue government-wide regulatory evaluation guidelines, require the creation of evaluation plans for significant rules as part of the prospective review process, and adapt the practice developed by George W. Bush’s OIRA of issuing “prompt letters” so as to promote targeted, value-added regulatory evaluation.

Evaluation Guidelines. OIRA first needs to establish specific guidelines for agencies to follow in conducting retrospective evaluations of existing regulations. At present, far too many agencies’ reviews are little more than glances in the rearview mirror, drawing mainly on anecdotes and expert impressions. Glances back may be better than nothing, but they fall far short of what it will take to create a credible, evidence-based approach to regulation. Rather than relying on impressions, the federal government needs careful, systematic research that addresses the question of causation: What benefits and costs can actually be attributed to a regulation after it has been implemented?10 Getting reliable answers to this causal question requires adherence to exacting standards for research design and statistical analysis, yet federal agencies currently lack clear guidance about how to conduct high quality retrospective reviews.

It is instructive that when it comes to producing prospective regulatory analysis, agencies can turn to OIRA’s Circular A-4, a lengthy document that provides both a general guide to conducting regulatory analysis as well as concrete prescriptions for analysts to follow. Circular A-4 offers regulatory analysts in agencies specific instructions, such as, “You should not use benefit transfer in estimating benefits if resources are unique or have unique attributes,” and, “You should provide estimates of net benefits using both 3 percent and 7 percent” discount rates.11
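Circular A-4's dual discount-rate instruction amounts to computing a rule's net present value twice and reporting both figures. A minimal sketch of that calculation (the cash-flow numbers are invented purely for illustration):

```python
def npv(net_benefits, rate):
    """Discount a stream of annual net benefits (year 0 first) to present value."""
    return sum(b / (1 + rate) ** t for t, b in enumerate(net_benefits))

# Hypothetical rule: $10M in annual net benefits for 10 years.
# (Illustrative figures only; not drawn from any actual RIA.)
stream = [10.0] * 10

for rate in (0.03, 0.07):  # the two rates Circular A-4 prescribes
    print(f"NPV at {rate:.0%}: ${npv(stream, rate):.1f}M")
```

Because the 7 percent rate discounts future benefits more heavily, the two figures bracket a range of plausible present values, which is precisely why the Circular asks for both.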

Admittedly, some of what can be found in Circular A-4 may also be helpful in conducting retrospective analysis, but nothing in A-4 offers a specific framework for approaching retrospective evaluation. The Circular contains nothing about making causal attributions by estimating counterfactuals or about how to undertake statistical analysis of regulatory impacts. If the Obama Administration is serious about deepening and strengthening regulatory review, at the very least it should create retrospective evaluation guidelines comparable to Circular A-4.

Evaluation Plans. Issuing evaluation guidelines is not only the most feasible action the Administration could take in the near term, it also would provide a foundation upon which to base additional steps. One such additional step would be to require agencies to include in each prospective regulatory impact analysis (RIA) a plan for the subsequent evaluation of the proposed rule. An evaluation plan would constitute only a small part of an overall RIA, and it would be non-binding in the sense that an agency would not be obligated to carry out the plan. Nevertheless, such a plan would provide a future guide whenever the agency, OIRA, or the public does later deem it appropriate to look back at the rule after it has been implemented. A plan for retrospective evaluation should, among other things, discuss:

  • ways of operationalizing the proposed rule’s objectives, specifying metrics that could be used in the future to assess whether each objective had been met;
  • sources of data that either currently exist or would need to be developed in order to estimate the impact of the rule on the specified metrics;
  • the time frame over which the rule’s intended effects could be expected to accrue or, relatedly, the time frame when retrospective evaluation would be appropriate; and
  • research or analytic designs that could be used in evaluating the rule (e.g., sources of cross-sectional or longitudinal variation, other potential explanatory factors that might need to be controlled, and possible statistical approaches to estimating counterfactuals).
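One common way to estimate the counterfactual contemplated in the last element of such a plan is a difference-in-differences design, which compares regulated and unregulated units before and after a rule takes effect. The sketch below is purely illustrative, with invented data and effect sizes; it assumes only that average outcomes are observable for both groups in both periods:

```python
# Illustrative difference-in-differences sketch for retrospective rule
# evaluation. All numbers are hypothetical.

def diff_in_diff(treated_before, treated_after, control_before, control_after):
    """Estimate a rule's impact as the change among regulated units minus
    the contemporaneous change among comparable unregulated units (the
    control-group change proxies for the counterfactual trend)."""
    return (treated_after - treated_before) - (control_after - control_before)

# Hypothetical mean injury rates per 1,000 workers:
effect = diff_in_diff(
    treated_before=12.0,  # regulated plants, pre-rule
    treated_after=8.0,    # regulated plants, post-rule
    control_before=11.5,  # comparable exempt plants, pre-rule
    control_after=10.5,   # comparable exempt plants, post-rule
)
print(effect)  # -3.0: the rule is credited with a 3-point reduction
```

The design matters precisely because the naive before-after comparison (a four-point drop) would overstate the rule's effect by ignoring the one-point decline that occurred even among exempt plants.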

An evaluation plan would be useful if the agency later re-examined the rule in a future administration’s lookback process. Such planning also would help prompt agencies early in the rule development process—even when proposed rules are being drafted—to begin to think about retrospective evaluation needs, such as what data could be collected or identified in advance of the rule’s implementation in order to facilitate subsequent measurement. Evaluation plans should be made publicly available, and as such they may help stimulate independent evaluation research by other entities, including the Administrative Conference of the United States, the National Academy of Sciences, and the National Academy of Public Administration, as well as by university and think-tank researchers.

OIRA is well positioned to oversee formal evaluation planning as part of regulatory development. Imposing a requirement for the submission of formal evaluation plans would serve to implement the periodic review of existing significant regulations demanded under both Section 6 of Executive Order 13,56312 and Section 5 of Executive Order 12,866,13 not to mention the provisions of Executive Order 13,610.14 In addition, since the data needed for evaluation may at times call for information-collection requests, OIRA’s role in implementing the Paperwork Reduction Act15 would also make it an appropriate entity to interact with agencies over plans for evaluation.

Evaluation Prompts. Finally, OIRA should extend to the context of retrospective review the earlier OIRA practice of sending agencies occasional prompt letters.16 Evaluation prompts would identify specific existing rules that the Administration believes should be targeted for in-depth review, above and beyond whatever the agency may do in the ordinary course of the ongoing lookback process called for under the existing executive orders.

Far too many of the retrospective reviews that agencies have conducted to date have been impressionistic, rather than systematic or rigorously empirical. Of course, that is to be expected with the short time frame agencies have been given under the retrospective review initiatives in recent administrations. Moreover, for some rules no more than a close but informal glance back will be warranted. For other rules, though, a more in-depth, serious evaluation will be needed to advance the goals of sound regulatory governance.

As with regulatory plans, OIRA is well positioned to implement the prompt letter proposal given its familiarity with rules across the entire sweep of the federal government. Better than any other entity, OIRA can determine what the federal government’s top priorities for regulatory evaluation should be. Specifically, OIRA should issue prompt letters calling for in-depth evaluation in at least three types of cases:

  • Close calls. Rules should be evaluated rigorously when they had, at the time they were promulgated, high expected costs or benefits but relatively small expected net benefits in their RIAs. If the costs of such a rule turned out after implementation to be substantially larger than estimated, or the benefits substantially smaller, the rule would no longer have benefits that justify its costs.
  • High uncertainty. Relatedly, rules expected to impose high benefits or costs merit subsequent evaluation if the prospective benefit or cost estimation exhibited high levels of uncertainty. For these rules, a follow-on investigation would reduce the uncertainty.
  • Common issues. Rules that present common issues of either benefit or cost estimation – or that rely on common assumptions – are prime candidates for rigorous retrospective review, as serious efforts to evaluate their benefits and costs retrospectively would help validate or improve prospective estimation techniques applicable to other rules.

Although OIRA lacks the capacity to conduct the needed rigorous retrospective evaluation research on its own, it is distinctively positioned to help identify opportunities like these, where evaluation could assist in improving regulatory outcomes, reducing regulatory burdens, or validating or improving methods of regulatory impact analysis. OIRA could, of course, also welcome other agencies or members of the public to make suggestions for rules that should be subjected to evaluation prompt letters.

OIRA’s prompt letters would urge agencies to allocate internal agency research funds to conduct in-depth empirical evaluations of rules in accord with OIRA’s evaluation guidelines. Agencies could alternatively seek assistance from entities such as the National Science Foundation or the National Academy of Sciences to fund or facilitate systematic regulatory assessments. Either way, given OIRA’s placement within the Office of Management and Budget, it may be positioned to help support the allocation of necessary budgetary resources for its priority regulatory evaluations. OIRA’s statutory role in overseeing the Paperwork Reduction Act also positions it to process expeditiously any approvals of information-collection requests required to gather the data for evaluations subject to prompt letters.

* * *

These three proposals—evaluation guidelines, evaluation plans, and evaluation prompts—could all be implemented without any congressional action shortly after the confirmation of the next OIRA Administrator. Although these three steps by themselves will not cure everything that ails the nation’s regulatory system, they nevertheless represent meaningful steps toward better regulatory analysis and ultimately better regulation. Evaluation, after all, is needed to identify both real successes and real problems that need fixing.17

Institutionalizing rigorous evaluation practices will by no means come easily. Rigor and quality have not always described even the prospective regulatory impact analyses that agencies have been required to complete under OIRA’s oversight for the last several decades.18 With time, though, the practice of regulatory analysis can improve and deepen. Building a culture of retrospective evaluation is a long-term proposition, and at this juncture it requires taking steps to maintain the momentum the Obama Administration has generated with its extensive lookback initiative. The only way to advance the administration’s admirable objectives of improving both regulation and regulatory evaluation is to keep moving forward with looking back.

  1. Press Release, White House Office of the Press Sec’y, White House Announces New Steps to Cut Red Tape, Eliminate Unnecessary Regulations (May 10, 2012),
  2. Cass Sunstein, A Smarter Approach to Regulation (August 7, 2012),
  3. Exec. Order No. 12,866, 58 Fed. Reg. 51,735 §5 (1993).
  4. See, e.g., John Kamensky, Assistant to the Deputy Dir. of Mgmt., U.S. Office of Mgmt. and Budget, The U.S. Reform Experience: The National Performance Review, Presentation at Indiana University at the Conference on Civil Service Systems in Comparative Perspectives, Indiana University (April 6, 1997),
  5. Office Of Mgmt. & Budget, Stimulating Smarter Regulation: 2002 Report To Congress On The Costs And Benefits Of Regulations And Unfunded Mandates On State, Local, And Tribal Entities 4 (noting that 316 regulations and guidance documents were considered in 2002, in addition to 71 in 2001).
  6. Exec. Order No. 13,610, 77 Fed. Reg. 28,467 (May 14, 2012) (calling for agency action “to institutionalize regular assessment of significant regulations”).
  7. Cass Sunstein, Regulation: Looking Backward, Looking Forward, Address Before the 2012 A.B.A. Admin. L. & Reg. Pract. Sec., Washington, D.C., May 10, 2012,
  8. Exec. Order No. 13,563, 76 Fed. Reg. 3,821 (2011).
  9. Michael Greenstone, Toward a Culture of Persistent Regulatory Experimentation and Evaluation, in New Perspectives On Regulation 111 (David Moss & John Cisternino, eds., 2009); Michael Mandel & Diana G. Carew, Progressive Pol’y Inst., Regulatory Improvement Commission: A Politically Viable Approach To U.S. Regulatory Reform (2013),
  10. Cary Coglianese, Evaluating the Impact of Regulation and Regulatory Policy, (OECD Expert Paper No. 1, 2012).
  11. Office Of Mgmt. & Budget, Circular A-4: Regulatory Analysis (Sept. 17, 2003).
  12. Exec. Order No. 13,563, 76 Fed. Reg. 3,821, 3,822 (2011).
  13. Exec. Order No. 12,866, 58 Fed. Reg. 51,735, 51,739 (1993).
  14. Exec. Order No. 13,610, 77 Fed. Reg. 28,467, 28,469 (2012).
  15. 44 U.S.C. §§ 3501-3521 (2012).
  16. See John D. Graham, Saving Lives Through Administrative Law and Economics, 157 U. Pa. L. Rev. 395, 460-463 (2008) (describing the development and use of OIRA’s regulatory prompt letters).
  17. See Cary Coglianese, Thinking Ahead, Looking Back: Assessing the Value of Regulatory Impact Analysis and Procedures for Its Use, 3 Korea Leg. Res. Inst. J. L. & Leg. 5, 18 (2013) (S. Kor.) (discussing how evaluation research seeks “to attribute, causally, both the good and bad outcomes to regulations”).
  18. See Robert W. Hahn & Patrick M. Dudley, How Well Does the U.S. Government Do Benefit-Cost Analysis?, 1 Rev. Envtl. Econ. & Pol’y 192, 209 (2007) (analyzing seventy-four federal regulatory impact analyses (RIAs) completed between 1982 and 1999 and concluding “that many RIAs are of poor quality”).

What Is Government Failure?

* Professor of Law, University of Arizona College of Law. This is the second essay in the Series “What Is . . . ?” that explores the meaning of basic terms in regulation. See Barak Orbach, What Is Regulation?, 30 Yale J. Reg. Online 1 (2012). I thank Paul Connell and Sivan Korn for comments and suggestions.

The phrase “government failure” as a term of art originated in the critique of government regulation that emerged in the 1960s. This critique was premised on the claim that “market failures” were the only legitimate rationale for regulation. Although the phrase is popular currency in scholarship and politics, people attribute different meanings to it. As a result, all seem to expect the government to fail, many believe that government inaction cannot constitute a failure, and alleged failures tend to be disputed. This essay seeks to establish a coherent meaning for the term “government failure” and its relatives (e.g., “government breakdown,” “regulatory failure”).

I. Introduction

In Capitalism and Freedom, Milton Friedman explained why “the scope of the government must be limited.”1 “The existence of a free market,” Friedman wrote, “does not . . . eliminate the need for government. On the contrary, government is essential both as a forum for determining the ‘rules of the game’ and as an umpire to interpret and enforce the rules decided on. What the market does is . . . minimize the extent to which the government need participate directly in the game.”2 Friedman firmly believed that it was not “an accident that so many . . . government reforms [go] awry.”3 The “central defect” of regulatory measures, he argued, is that “they seek through government to force people to act against their own immediate interests in order to promote a supposedly general interest.”4 Under this thesis, the government is likely to fail whenever it interferes with the freedom of choice. Thus, since “pretty much all law consists in forbidding [people] to do some things that they want to do,”5 the failure is inevitable.6

The concept of “government failure” is somewhat peculiar. People and institutions may fail in their actions, but they may also fail by not taking action. The phrase “government failure,” however, as it is commonly used, connotes ineffective government action,7 implying that less government action is necessarily better.8 In Milton Friedman’s words: “[T]he government solution to a problem is usually as bad as the problem and very often makes the problem worse.”9

The phrase “government failure” emerged as a term of art in the 1960s with the rise of intellectual and political criticism of regulation.10 Building on the premise that the only legitimate rationale for government regulation is market failure,11 economists advanced new theories explaining why government interventions in markets are costly and tend to fail. This line of literature supposedly established the theoretical foundations of the phrase “government failure.”

Despite their growing popularity, the phrase “government failure” and its relatives (e.g., “government breakdown,” “regulatory failure”) do not have any clear meaning.12 Some use these phrases to describe government intervention in the private domain that results in undesirable outcomes. For others, these phrases may also mean lack of or inadequate government regulation. Yet, many identify a government failure in any perceived societal problem. Thus, people who use the phrase “government failure” often disagree with each other about what a failure means. Neither the prevalence of studies of government failures nor the use of the phrase “government failure” necessarily says much about the standards of “failure.”

This essay intends to clarify the general meaning of the term “government failure” by focusing on a few properties of failures.

[Figure 1: omitted]13

II. Inaction as a Failure

Can government inaction ever be a failure? A byproduct of the controversy over regulation is an artificial distinction between action and inaction.14 On one side of the controversy, people see over-regulation. On the opposite side, people observe insufficient regulatory safeguards―too little regulatory action or frequent inaction.15 These opposite perspectives delineate the approaches to government inaction. Some posit that inaction cannot be scrutinized, let alone considered a failure,16 while others maintain that similar rules should apply to action and inaction.17

Superficially, both positions may appear plausible. Indeed, both positions have strong expressions in the case law of the U.S. Supreme Court18 and in the academic literature.19 Several legal standards—such as standing,20 the interpretation of legislative inaction,21 and the unreviewability presumption22—frequently serve as tools for dismissing critiques of inaction.

But the distinction is artificial and analytically flawed. Values and other preferences often shape views regarding its relevance. Consider, for example, the action and inaction of individuals. Assume an individual can take an action that would prolong her life, but declines to do so. Should the state require action? In Cruzan v. Director, Missouri Department of Health, Justice Antonin Scalia was willing to “acknowledge that the distinction between action and inaction has some bearing.”23 “It would not make much sense,” he explained, “to say that one may not kill oneself by walking into the sea, but may sit on the beach until submerged by the incoming tide; or that one may not intentionally lock oneself into a cold storage locker, but may refrain from coming indoors when the temperature drops below freezing.”24 Justice Scalia therefore argued that the distinction between action and inaction might be utterly irrelevant.25

In National Federation of Independent Business v. Sebelius (“NFIB”), the Court considered a similar issue: the validity of the so-called “individual mandate,” a requirement that individuals maintain a minimum level of health insurance coverage. Writing for the Court, Chief Justice John Roberts declared that “[t]o an economist, perhaps, there is no difference between activity and inactivity[, but] the distinction between doing something and doing nothing would not have been lost on the Framers, who were ‘practical statesmen,’ not metaphysical philosophers.”26 In NFIB, Justice Scalia agreed with the Chief Justice on this point.

Moreover, government inaction means, among other things, accommodation of externalities. The underlying logic of exempting government inaction from scrutiny is that “government actions can violate the Constitution, but government failures to act against private wrongdoers cannot.”27 Under this premise, for example, environmental regulation violates polluters’ constitutional rights, while government inaction on environmental issues does not violate the rights of those affected by pollution. Similarly, gun control measures abridge Second Amendment rights, but government inaction concerning gun control does not abridge victims’ rights. Or, restrictions on tobacco sales infringe the constitutional rights of businesses, whereas inaction on tobacco does not infringe the public’s rights.28 And so on.

In sum, the distinction between action and inaction is often a matter of framing, and cannot be depicted as substantive. When applied to the government, the distinction narrows the government’s fundamental duty to “restrain men from injuring one another.”29 If government inaction cannot constitute a failure, then people are free to harm each other, including by imposing their costs on society.

III. Imperfection as a Failure

What degree of imperfection defines a government failure? Thomas Aquinas taught believers that “[t]o sin is to fall short of a perfect action”30 and that “sinning is . . . a deviation from that rectitude which an act ought to have.”31 Today, people understand that the pursuit of perfection is impractical.32 For example, in corporate law, fiduciary duties and the business judgment rule emphasize that officers and directors can make mistakes.33 Yet, people sometimes perceive deviations of public policies from ideal norms or theoretical solutions as government failures. Such perceptions, the so-called nirvana fallacy,34 are common among both critics and advocates of regulation. In effect, they reflect unrealistic demands for perfection in the spirit of Thomas Aquinas.35

In this spirit, many critics of regulation focus on ideal norms of liberty and freedom, and believe that most regulatory measures are imperfect and fail society; that is, “government is the problem.”36 Likewise but with different values, many advocates of regulation are troubled by “problems”—imperfections in our world—and believe that society can address such problems with regulations.

The references to the invisible hand and the precautionary principle as plausible guidelines for public policies illustrate these perspectives. Invisible hand arguments ordinarily propose that markets are generally efficient and that government actions burden and disrupt them.37 The precautionary principle prescribes that activities that pose certain risks to the environment or human lives should be banned until their safety is established.38 Both concepts rely on simplistic frameworks that have never proved themselves or, more precisely, have proved their ineffectiveness.39 Proponents of these concepts will always identify government failures. Under invisible hand theories, government regulation is unwarranted intervention in markets. Under the precautionary principle, the government is unlikely to do enough to prevent all activities that pose risks to lives and the environment.

Imperfection is not all about the degree of government conduct; it may also be about the form. Two general forms of imperfection are commonly used to define government failures: a deviation from adequate cost-benefit analysis and a mismatch between normative expectations and public policies.40

Cost-benefit analysis as a standard for government failures underscores the difficulty of defining precise ex ante criteria for government failures, as well as the potential misuse of hindsight.41 When undesirable outcomes materialize, we can supposedly employ a Learned-Hand-like formula (or a more sophisticated analysis) to examine whether the government adequately invested in precautions to address the risk.42 Such inquiries do not account for budgetary constraints, ex ante knowledge of risks, and available precautions. Therefore, such inquiries may be reasonable for certain domains but not for others.
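The Learned-Hand-style inquiry can be stated compactly: the failure to take a precaution is faulted when its burden B is less than the expected loss, i.e., the probability P of the harm times its magnitude L. The sketch below is an illustration with invented numbers, not a doctrinal test, and it shares the limitation noted above: it says nothing about budgets or what was knowable ex ante.

```python
# Illustrative Learned-Hand-style screen: was the investment in
# precaution adequate relative to the expected loss? Numbers invented.

def hand_formula_flags_failure(burden, probability, loss):
    """Flag a potential failure when the cost of the untaken precaution
    (B) is less than the expected harm (P * L)."""
    return burden < probability * loss

# Hypothetical levee upgrade: a $200M precaution against a 5% annual
# chance of $10B in flood damage (expected loss: $500M).
print(hand_formula_flags_failure(200.0, 0.05, 10_000.0))  # True
```

A hindsight-driven application would simply plug in the realized loss, which is exactly the misuse of hindsight the text cautions against.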

Mismatches between normative expectations and public policies may also establish perceptions of government failures. Examples of such perceptions may include the inability of the federal government to address child labor until 1938,43 the endorsement of eugenics and the maintenance of eugenics programs until 1974,44 and the choice to ignore irrational exuberance during the housing bubble of the 2000s.45 The existence of a mismatch implies that the underlying normative expectation is not in consensus; for some portion of the public there is no mismatch.46 But those whose normative views clash with existing public policies may perceive the contrast as a government failure. Normative contrasts of this type begin with the general controversy over regulation: for some, a government failure is a consequence of too much regulation, while for others it is a result of too little regulation.47

In sum, every government failure represents some imperfection in government performance, but not every imperfection in government performance is a failure. Although we often “see” government failures, threshold standards that separate tolerable imperfections from government failures do not exist. In corporate law, only extreme situations of improper intent, conflict of interest, carelessness, and inattention or a failure to be informed of all facts material to a decision may result in liability for decisions made or not made.48 By contrast, under some theories, the government fails whenever it acts because of the inevitable imperfections. Such theories are impractical.

IV. Defining Failure

Which government actions and inactions ought to be considered government failures? When “bad things” happen, such as when a natural disaster hits a major city, a financial bubble bursts, or a Ponzi scheme unravels, a government failure is often declared.49 The reasoning for such public verdicts is that “the public stewards . . . ignored warnings and failed to question, understand, and manage evolving risks . . . to the well-being of the American public.”50 Such failures supposedly refer to substantial imperfections in government performance.

Designed by humans for a complex reality, regulations tend to be imperfect. Government failures, including a lack of regulation or inadequate regulation, merely reflect the imperfect nature of regulation. Indeed, the phrase “government failure” as a term of art was born in the critique of regulation.

“Government failure” as a concept in regulation refers to substantial imperfection in government performance. Such imperfections comprise inadequate actions and unreasonable inactions. The scope of the imperfection is related to the level of a disregarded risk, the inadequacy of cost-benefit analysis, the deviation from popular normative expectations, and the magnitude of misallocated resources. In essence, government failures are accidents that cannot be eliminated, but their costs can be reduced.

  1. Milton Friedman, Capitalism and Freedom 2 (1962).
  2. Id. at 15.
  3. Id. at 196.
  4. Id.
  5. Adkins v. Children’s Hospital, 261 U.S. 525, 568 (1923) (Holmes, J., dissenting).
  6. See also Milton Friedman & Rose Friedman, Free to Choose: A Personal Statement 5-6 (1980).

    In the government sphere, as in the market, there seems to be an invisible hand, but it operates precisely the opposite direction from Adam Smith’s: an individual who intends to serve the public interest by fostering government intervention is “led by an invisible hand to promote” private interests, “which was no part of his intention.”

  7. See, e.g., Clifford Winston, Government Failure Versus Market Failure 2–3 (2006) (“Government failure . . . arises when government has created inefficiencies because it should not have intervened [in the market] in the first place, or when it could have solved a given problem or set of problems more efficiently, that is, by generating greater net benefits.”).
  8. See generally Peter L. Kahn, The Politics of Unregulation: Public Choice and Limits on Government, 75 Cornell L. Rev. 280 (1990).
  9. Milton Friedman, An Economist’s Protest 6 (1975). See also Milton Friedman, Why Government is The Problem (1993).
  10. See, e.g., Roland N. McKean, The Unseen Hand in Government, 55 Am. Econ. Rev. 496 (1965); Charles Wolf, Jr., A Theory of Nonmarket Failure: Framework for Implementation Analysis, 22 J. L. & Econ. 107 (1979). See also Barry Goldwater, The Conscience Of A Conservative (1960) (introducing a general critique of government regulation).
  11. See Francis M. Bator, The Anatomy of Market Failure, 72 Q.J. Econ. 351 (1958). Executive Order 12,866 states that “material failures of private markets to protect or improve the health and safety of the public, the environment, or the well-being of the American people” may establish a “compelling public need” for regulation. Exec. Order No. 12,866 § 1.
  12. See generally Cary Coglianese ed., Regulatory Breakdown: The Crisis Of Confidence In U.S. Regulation (2012).
  13. For the methodology and its limitations, see Jean-Baptiste Michel et al., Quantitative Analysis of Culture Using Millions of Digitized Books, 331 Sci. 176 (2011).
  14. The analysis of the distinction is of course rather old. For example, Thomas Aquinas distinguished between sins of commission and sins of omission, arguing that the latter are “less grievous” than sins of commission, but stressed that inaction may constitute a sin. Thomas Aquinas, 1 The Summa Theologica 316-20 (Fathers of the English Dominican Province trans., 1981).
  15. See generally Nicholas Bagley & Richard L. Revesz, Centralized Oversight of the Regulatory State, 106 Colum. L. Rev. 1260 (2006).
  16. See, e.g., Heckler v. Chaney, 470 U.S. 821, 831-32 (1985):

    The reasons for [the general unsuitability for judicial review of agency decisions to refuse enforcement] are many. First, an agency decision not to enforce often involves a complicated balancing of a number of factors which are peculiarly within its expertise. [T]he agency must . . . assess whether a violation has occurred, . . . whether agency resources are best spent on this violation or another, whether the agency is likely to succeed if it acts, whether the particular enforcement action requested best fits the agency’s overall policies, and [other factors]. . . . In addition[,] . . . when an agency refuses to act it generally does not exercise its coercive power over an individual’s liberty or property rights, and thus does not infringe upon areas that courts often are called upon to protect. . . .

  17. See, e.g., Bagley & Revesz, supra note 15.
  18. See, e.g., DeShaney v. Winnebago County Dept. of Soc. Servs., 489 U.S. 189 (1989) (holding in a 6-3 decision that a local social service worker’s failure to prevent child abuse did not violate the Due Process Clause although the social worker “had reason to believe” abuse was occurring.); Massachusetts v. EPA, 549 U.S. 497 (2007) (holding in a 5-4 decision that the EPA’s denial of a petition for rule making was “arbitrary, capricious, or otherwise not in accordance with law”); Caperton v. A.T. Massey Coal Co., Inc., 556 U.S. 868 (2009) (holding in a 5-4 decision that a judge’s failure to recuse, when a “probability of actual bias” exists, may make him subject to disqualification).
  19. See, e.g., Lisa Schultz Bressman, Judicial Review of Agency Inaction: An Arbitrariness Approach, 79 N.Y.U. L. Rev. 1657 (2004); William N. Eskridge, Jr., Interpreting Legislative Inaction, 87 Mich. L. Rev. 67 (1988); Kahn, supra note 8; Peter H.A. Lehner, Judicial Review of Administrative Inaction, 83 Colum. L. Rev. 627 (1983); Ronald M. Levin, Understanding Unreviewability in Administrative Law, 74 Minn. L. Rev. 689 (1990); Glen Stazsewski, The Federal Inaction Commission, 59 Emory L.J. 369 (2009); David A. Strauss, Due Process, Government Inaction, and Private Wrongs, 1989 Sup. Ct. Rev. 53 (1989); Cass R. Sunstein, Reviewing Agency Inaction After Heckler v. Chaney, 52 U. Chi. L. Rev. 653 (1985).
  20. See, e.g., Allen v. Wright, 468 U.S. 737 (1984) (denying standing to petitioners who sought to challenge agency inaction, specifically the agency’s failure to adopt standards for denying tax exemptions to racially segregated private schools); Lujan v. Defenders of Wildlife, 504 U.S. 555 (1992) (denying standing to petitioners who sought to challenge agency inaction—the Secretary of the Interior’s refusal to enforce certain requirements of the Endangered Species Act); Massachusetts v. EPA, 549 U.S. 497 (granting standing to petitioners who sought to challenge agency inaction—the EPA’s refusal to regulate greenhouse gas emissions).
  21. See Eskridge, supra note 19. Justice Scalia articulated the strongest opposition to giving any meaning to legislative inaction. See, e.g., Johnson v. Transp. Agency, Santa Clara Cnty., Cal., 480 U.S. 616, 672 (1987) (Scalia, J., dissenting) (“vindication by congressional inaction is a canard”).
  22. The unreviewability presumption supposedly shields agencies’ inaction from judicial review. See Heckler v. Chaney, 470 U.S. 821, 832-33 (1985) (holding that an administrative agency’s decision not to take action is “presumptively unreviewable; the presumption may be rebutted where the substantive statute has provided guidelines for the agency to follow in exercising its enforcement powers.”). In Massachusetts v. EPA, the Court clarified—effectively narrowed—the presumption, distinguishing rulemaking denials from decisions not to enforce and holding that the former are subject to judicial review. Massachusetts v. EPA, 549 U.S. 497 (2007).
  23. Cruzan by Cruzan v. Dir., Missouri Dep’t of Health, 497 U.S. 261, 296 (1990).
  24. Id.
  25. Id. at 296-97.
  26. National Federation of Independent Business v. Sebelius, 132 S.Ct. 2566, 2589 (2012).
  27. David A. Strauss, Due Process, Government Inaction, and Private Wrongs, 1989 Sup. Ct. Rev. 53, 53. See also David A. Strauss, Why Was Lochner Wrong?, 70 U. Chi. L. Rev. 373 (2003).
  28. See, e.g., Walgreen Co. v. San Francisco, 185 Cal. App. 4th 424 (2010) (addressing San Francisco’s ban on sales of tobacco products in pharmacies).
  29. See, e.g., Thomas Jefferson, Inaugural Address, 10 Annals Of Cong. 763, 765 (1801) (“[A
  30. Aquinas, supra note 14, at 138.
  31. Id. at 312.
  32. See, e.g., Herbert A. Simon, Models of Man 198 (1957) (“The capacity of the human mind for formulating and solving complex problems is very small compared with the size of the problems whose solution is required for objectively rational behavior in the real world.”).
  33. See, e.g., In re Caremark International Inc. Derivative Litigation, 698 A.2d 959 (Del. Ch. 1996); In re Citigroup Inc. Shareholder Derivative Litigation, 964 A.2d 106 (Del. Ch. 2009); Stone v. Ritter, 911 A.2d 362 (Del. 2006); In re Walt Disney Co. Derivative Litigation, 907 A.2d 693 (Del. Ch. 2005); Brehm v. Eisner, 906 A.2d 27 (Del. 2006).
  34. Harold Demsetz, Information and Inefficiency: Another Viewpoint, 12 J. L. & Econ. 1, 1 (1969) (criticizing the “nirvana approach” to public policy).
  35. See Barak Orbach, What Is Regulation?, 30 Yale J. Reg. Online 1, 6 (2012) (defining “regulation” as “state intervention in the private domain, which is a byproduct of our imperfect reality and human limitations.”).
  36. President Ronald Reagan’s Inaugural Address, 127 Cong. Rec. 715, 716 (1981).
  37. For discussions of several aspects of the invisible hand thesis see Barak Orbach, Invisible Lawmaking, 79 U. Chi. L. Rev. Dialogue 1 (2012); Barak Orbach, Regulation: Why And How The State Regulates 144-55 (2012); Adrian Vermeule, The Invisible Hand in Legal and Political Theory, 96 Va. L. Rev. 1416 (2010).
  38. See, e.g., Douglas A. Kysar, Regulating From Nowhere: Environmental Law and the Search for Objectivity (2010). San Francisco supposedly expressly endorsed the Precautionary Principle. See San Francisco, Resolution No. 129–03 (Mar. 13, 2003); San Francisco Environment Code and Precautionary Principle Policy, Ordinance No. 171–03 (July 3, 2003). Similarly, the 1957 Delaney Clause effectively adopted this principle. The Delaney Clause banned all food additives having the potential of “induc[ing
  39. For the invisible hand see, for example, The Financial Crisis and the Role of Federal Regulators: Hearing Before the H. Comm. on Gov’t Oversight and Reform, 110th Cong. 17 (Oct. 23, 2008) (testimony of Alan Greenspan, Former Chairman of the Federal Reserve) (stating that “those of us who have looked to the self-interest of lending institutions to protect shareholders’ equity (myself included) are in a state of shocked disbelief. . . . The whole intellectual edifice . . . collapsed.”); Joseph Stiglitz, Regulation and Failure, in New Perspectives On Regulation 11, 17-18 (David Moss & John Cisternino eds., 2009) (“the primary reason for the government failure [that led to the Great Recession
  40. See, e.g., Bus. Roundtable v. S.E.C., 647 F.3d 1144, 1148-49 (D.C. Cir. 2011):

    [The SEC

  41. See, e.g., John F. Morrall III, A Review of the Record, 10(4) Reg. 25 (1986); John Morrall III, Saving Lives: A Review of the Record, 27 J. Risk & Uncertainty 221 (2003). For a discussion of the controversy over cost-benefit analysis, see Orbach, Regulation, supra note 37, at 113-18. For the hindsight bias, see Scott A. Hawkins & Reid Hastie, Hindsight: Biased Judgments of Past Events after the Outcomes Are Known, 107 Psych. Bull. 331 (1990); Kim A. Kamin & Jeffrey J. Rachlinski, Ex-Post ≠ Ex Ante, 19 L. & Human Behavior 89 (1995); Jeffrey J. Rachlinski, A Positive Psychological Theory of Judging in Hindsight, 65 U. Chi. L. Rev. 571 (1998).
  42. United States v. Carroll Towing Co., 159 F.2d 169 (2d Cir. 1947) (Hand, J.). See generally Mark Grady, Untaken Precautions, 18 J. L. Stud. 139 (1989); Allan M. Feldman & Jeonghyun Kim, The Hand Rule and United States v. Carroll Towing Reconsidered, 7 Am. L. & Econ. Rev. 523 (2005).
  43. The 1938 Fair Labor Standards Act was the first federal legislation setting limits on child labor to survive Supreme Court scrutiny. Hammer v. Dagenhart, 247 U.S. 251 (1918) (holding that the Keating-Owen Child Labor Act of 1916, which restricted child labor, was unconstitutional); The Child Labor Tax Case, 259 U.S. 20 (1922) (invalidating a federal tax imposed on goods produced with child labor); United States v. Darby, 312 U.S. 100 (1941) (upholding the constitutionality of the Fair Labor Standards Act). For the history of the child labor debate, see Hugh D. Hindman, Child Labor: An American History (2002); Kriste Lindenmeyer, A Right to Childhood: The U.S. Children’s Bureau and Child Welfare, 1912–1946 (1997).
  44. North Carolina, the last state to engage in eugenics, sterilized people “for the best interest of their mental, moral or physical improvement” until 1974. N.C. Ch. 1281 § 35-36 (1973). See The Governor’s Task Force to Determine the Method of Compensation for Victims of North Carolina’s Eugenics Board: Final Report (Jan. 2012); Kim Severson, Thousands Sterilized, a State Weighs Restitution, N.Y. Times, Dec. 10, 2011, at A1. See also Buck v. Bell, 274 U.S. 200 (1927) (Holmes, J.) (“It is better for all the world, if instead of waiting to execute degenerate offspring for crime, or to let them starve for their imbecility, society can prevent those who are manifestly unfit from continuing their kind.”).
  45. See National Commission On The Causes Of The Financial And Economic Crisis In The United States, The Financial Crisis: Inquiry Report (final report, Jan. 2011).
  46. Lawrence v. Texas stands for the potential normative mismatch that the majority may have to tolerate. Lawrence v. Texas, 539 U.S. 558, 577 (2003) (“[T]he fact that the governing majority in a State has traditionally viewed a particular practice as immoral is not a sufficient reason for upholding a law prohibiting the practice.”).
  47. See supra Section I; Orbach, What Is Regulation?, supra note 35.
  48. See supra note 33.
  49. See, e.g., The Federal Response to Hurricane Katrina: Lessons Learned (Feb. 2006); National Commission on the Causes of the Financial and Economic Crisis in the United States, The Financial Crisis: Inquiry Report (final report, Jan. 2011) (hereinafter, Inquiry Report); U.S. Senate Permanent Subcommittee on Investigations, Committee on Homeland Security and Governmental Affairs, Wall Street and the Financial Crisis: Anatomy of a Financial Collapse (Apr. 13, 2011); U.S. S.E.C. Office of Investigations, Investigation of Failure of the SEC to Uncover Bernard Madoff’s Ponzi Scheme (Aug. 2009).
  50. Inquiry Report, at xvii.

Who Will Run the EPA?

* Professor of Law, Georgetown University Law Center. The author was Senior Climate Policy Counsel to EPA Administrator Lisa P. Jackson from January to July 2009, and Associate Administrator of the Office of Policy from July 2009 to December 2010. This essay is based on public documents and the author’s experience in those positions.

With President Obama’s nomination of Gina McCarthy as the new Administrator of the Environmental Protection Agency (EPA), much attention has turned to her record as the EPA official in charge of air pollution programs, experience as the head of two states’ environmental agencies, and views on specific policies and priorities. And with the President’s nomination of Sylvia Mathews Burwell to be the Director of the Office of Management and Budget (OMB), attention has likewise turned to her record and experience. Few recognize, however, the tight relationship between the two nominations: the Obama administration’s approach to governing will make Ms. Burwell Ms. McCarthy’s boss.

Few environmental statutes in this country put the President (or his aides in the White House) in charge of environmental decisions; most give the job to the EPA or, more specifically, its Administrator. Even fewer environmental statutes require rules to be evaluated according to cost-benefit analysis; most specify a different kind of decision-making framework for such rules.

Nevertheless, the Obama administration has continued and deepened a longstanding practice of White House control over EPA rules, with cost-benefit analysis as the guiding framework. OMB is the central player in this structure: it reviews, under a cost-benefit rubric, all agency rules that it deems “major” under executive orders mandating this review. EPA rules deemed major by OMB are not issued without OMB’s imprimatur. Thus does the OMB director become the EPA Administrator’s boss.

This result would be bad enough, given the tension between it and the legal structures governing environmental policy. But it turns out that the OMB itself seems unwilling to accept accountability for running U.S. environmental policy. In a new law review article, Cass Sunstein, the former head of the OMB office that acts as the White House’s regulatory gatekeeper, insists that he actually didn’t have very much power.1 In fact, he says, decisions about rules most frequently turned on other players in the White House, Cabinet heads outside the agency proposing the rule, or even career staff in other agencies or in the OMB itself. In Sunstein’s rendering, it appears that everyone is responsible for the shape and scope of environmental policy in this administration. Which means no one is accountable.

In concrete terms, this leaves us unable to know whom to blame when the OMB delays the EPA’s list of “chemicals of concern” for almost three years,2 holds the Occupational Safety and Health Administration’s rule on crystalline silica for over two years,3 does not accept delivery of a notice of new data on EPA’s proposal to regulate coal ash impoundments,4 or insists on extensive, substantive changes to the Food and Drug Administration’s new rules on food safety.5 Perhaps it is the OMB itself, or another office in the White House, or the White House Chief of Staff, or the head of the Department of Agriculture, or a GS-12 at the Small Business Administration.6 We just don’t know.

Part of the reason we don’t know is that the Obama administration does not follow its own rules on transparency in the process of OMB review. Two years ago, President Obama issued an executive order reaffirming his embrace of a Clinton-era executive order governing OMB review.7 The Clinton-era order requires transparency throughout the OMB process; at almost every step of the way, the order – which, again, President Obama reaffirmed in his own executive order on OMB review – requires disclosure of important decision points and documents:

  • if an agency plans a regulatory action that the OMB thinks is inconsistent with the President’s policies or priorities, the OMB must tell the agency so, in writing;8
  • if a dispute arises between the OMB and the action agency over whether a particular rule should issue, and one of these parties requests resolution of the dispute by the President or Vice-President, the OMB must note – in a “publicly available log” – who requested elevation and when;9
  • if the OMB returns a rule to an agency for further consideration, the Office of Information and Regulatory Affairs Administrator must provide a “written explanation” for this return;10
  • if a regulatory proposal changes between the time it goes to OMB and the time it emerges, the agency must identify those changes (“in a complete, clear, and simple manner”);11
  • and if the OMB insists on changes to the regulatory proposal during its review, the agency must identify those changes for the public (“in plain, understandable language”).12

The Obama administration follows almost none of these rules on transparency. The OMB does not explain in writing to agencies that items on their regulatory agenda do not fit with the President’s agenda. The OMB does not keep a publicly available log explaining when and by whom disputes between the OMB and the agencies were elevated. Indeed, when the first elevation of an EPA rule occurred in President Obama’s first term, I drafted a brief memo for the EPA’s docket explaining that elevation had occurred and noting the outcome. The OMB told me in no uncertain terms that the memo must not be made public. Moreover, except in one instance – President Obama’s direction to then-EPA Administrator Lisa Jackson to withdraw the final rule setting a new air quality standard for ozone – the OMB has not returned rules to agencies with a written explanation about why they have not passed the OMB review.13 Instead, the OMB simply hangs onto the rules indefinitely, and they wither quietly on the vine. This is how it comes to pass that a list of chemicals of concern or a workplace rule on crystalline silica lingers at the OMB for years.

Some agencies do post “before” and “after” versions of rules that have gone to the OMB. These redlined documents often feature hundreds of changes. There is nothing here like the “complete, clear, and simple manner” of disclosure contemplated by the Executive Order. There is also often no document that explains which changes were made at the OMB’s behest. Where, as Sunstein explains, changes might come from the OMB, from another White House office, from another Cabinet head, or from a career staffer in a separate agency, the failure to follow the Executive Order’s rules on transparency means that no one is ultimately accountable for the changes that occur. Who is responsible, for example, for the hundreds of technical changes made to the EPA’s scientific analyses of air quality rules?14 We simply don’t know.

Here, too, the OMB is the stumbling block when it comes to transparency. Agencies know full well that they are not to be too transparent. The OMB reprimanded the EPA when the EPA accidentally posted interagency comments on its proposal to regulate coal ash impoundments.15 But why shouldn’t the public know who is responsible for changing the rules? In fact, without knowing the expertise and affiliation of the kibitzers, it is hard to evaluate their comments.

The problems go deeper still. The OMB maintains a “Regulatory Review Dashboard” that contains a good deal of information about rules under review, how long they have been under review, and so on.16 It is spiffy and informative, but woefully incomplete. Some rules go to the OMB “informally” and do not appear on the Dashboard at that time. Some rules go to the OMB and appear on the Dashboard only weeks after the agency has sent them.17 Some items go to the OMB and never appear on the Dashboard.18 Some rules are done, from the agency’s perspective, but the White House prevents their transmittal to the OMB.19 The truth is, the Dashboard purports to be, but is not, a full picture of the items under review at any given time. Thus it misleads at the same time it informs.

What can be done?

First, Senators considering the nominations of Ms. McCarthy and Ms. Burwell should ask them about the relationship between the EPA and the OMB. They should ask who will be in charge of the EPA’s regulatory program. They should ask whether we will know who is in charge. They should ask on what basis decisions about environmental policy will be made.

Second, the OMB should follow – and allow agencies to follow – the disclosure requirements of the Executive Order under which its review occurs.

Third, if the OMB decides not to allow a rule to issue, it should return the rule to the relevant agency with a written (and public) explanation as to why it is doing so. It should stop holding onto rules indefinitely. It is not plausible to suggest – as Professor Sunstein has20 – that long periods of review simply mean that the OMB and the agencies are working hard on getting the rules right. This may be true in some cases, but some of those rules are never going home to the agencies. The OMB should say so and explain why.

Fourth, the OMB should follow the deadlines set out in the Executive Order. The Order quite clearly contemplates that the OMB has 90 days to review rules, 120 if the head of the OMB and the head of the relevant agency agree on an extension.21 But the OMB takes the position that if the head of the agency asks for an extension, review can continue indefinitely. This is a strained reading of the Executive Order (as Sunstein himself seems to acknowledge).22 More important, the way the head of an agency often comes to “request” an extension is that she (or her staff) receives a call from the OMB, asking the agency head to ask the OMB for an extension. Thus the OMB has unmoored itself completely from the deadlines set out in the Executive Order; review is over only when the OMB says it’s over.

Changes like these would be modest; they would simply bring the OMB into line with the Executive Orders it purports to be following. More substantial changes – such as loosening the OMB’s grip on the agencies, ceasing the OMB’s meddling with agencies’ scientific findings, relaxing the cost-benefit stranglehold on regulatory policy – would also be welcome. But to start, just following the rules laid out by the President himself would be nice.

  1. Cass R. Sunstein, The Office of Information and Regulatory Affairs: Myths and Realities, Harv. L. Rev. (forthcoming 2013),
  2. The government website on regulatory review shows that this list has been under review at OMB since May 12, 2010. See TSCA Chemicals of Concern List, Regulatory Review Dashboard (last visited Mar. 25, 2013) (pending OMB review as of Mar. 25, 2013),
  3. This rule has been under review since February 14, 2011. See OSHA Occupational Exposure to Crystalline Silica Rule, Regulatory Review Dashboard (last visited Mar. 25, 2013) (pending OMB review as of Mar. 25, 2013),
  4. The EPA’s website on rulemaking shows that a Notice of Data Availability was sent to the OMB for review on March 12, 2012. Coal Combustion Residuals generated by Electric Power Plants, U.S. Envtl. Prot. Agency, (last visited Mar. 25, 2013). Neither the EPA’s nor the OMB’s website indicates that the rule has been accepted by OMB for review. Id.; Search Results, Regulatory Review Dashboard, (search “RIN” for “2050-AE81” and search “Agency” for “Environmental Protection Agency”) (returning “no results found”) (last visited Mar. 25, 2013).
  5. Documents showing extensive changes to the FDA’s rule on the growing, harvesting, packing and holding of produce for human consumption are available through at!documentDetail;D=FDA-2011-N-0921-0029. Documents showing extensive changes to the FDA’s rule on good manufacturing practice and hazard analysis and risk-based preventive controls for human food are available through at!documentDetail;D=FDA-2011-N-0920-0014.
  6. Sunstein mentions all of these kinds of possibilities in explaining the influences on the OMB process of regulatory review. Sunstein, supra note 1, at 17.
  7. Exec. Order No. 13,563, 76 Fed. Reg. 3821, 3833 (Jan. 21, 2011) (reaffirming Exec. Order No. 12866 of Oct. 4, 1993).
  8. Exec. Order No. 12,866, 58 Fed. Reg. 51735, 51744 (Oct. 4, 1993) at § 6(a)(3)(E)(iii).
  9. Id. at § 6(b)(4)(C)(i).
  10. Id. at § 6(b)(3).
  11. Id. at § 6(a)(3)(E)(ii).
  12. Id. at § 6(a)(3)(E)(iii).
  13. The website on regulatory review shows only one return letter (on ozone) issued during the Obama administration. OIRA Return Letters, Office Of Info. And Regulatory Affairs, (last visited Mar. 25, 2013).
  14. Wendy Wagner has painstakingly documented such changes in a study prepared for the Administrative Conference of the United States. Wendy Wagner, Science In Regulation: A Study Of Agency Decisionmaking Approaches (2013),
  15. See Ctr. for Effective Gov’t, Changes to Coal Ash Proposal Place Utility’s Concerns Above Public Health (2010) (recounting the same episode).
  16. Regulatory Review Dashboard, (last visited Mar. 25, 2013).
  17. For example, compare the EPA’s report of when it sent its rule on electronic reporting regarding water pollution permits to the OMB, Dec. 22, 2011, to its report on when the OMB “received” the rule, Jan. 20, 2012. See NPDES Electronic Reporting Rule, U.S. Envtl. Prot. Agency (last visited Mar. 25, 2013) (listing dates for “NPRM: Sent to OMB for Regulatory Review” and “NPRM: Received by OMB”). See also Search Results for NPRM Review Status, Regulatory Review Dashboard, (search “RIN” for “2020-AA47” and search “Agency” for “Environmental Protection Agency”) (showing OMB’s received date to be Jan. 20, 2012).
  18. See supra note 4.
  19. Juliet Eilperin, Obama Administration Slows Environmental Rules as it Weighs Political Cost, Wash. Post, Feb. 12, 2012, (stating that the White House had not given EPA permission to send a rule on cars and trucks to OMB).
  20. Sunstein, supra note 1.
  21. Exec. Order No. 12,866, supra note 8, at § 6(b)(2)(B), (C).
  22. Id.

Backyard Politics, National Policies: Understanding the Opportunity Costs of National Fracking Bans

*Associate Professor of Law, Politics & Regulation, University of Texas at Austin. Ph.D., Duke University; J.D., University of North Carolina; B.A., Gettysburg College. I would like to thank Hannah Wiseman of Florida State University College of Law for her editorial and sourcing suggestions for this essay.

Some local communities in the United States, particularly in the Northeast, are scrambling to oppose natural gas production enabled by hydraulic fracturing (or fracing, fracking, or hydrofracking) in shale formations. Local opposition to the impacts of fracking is understandable, but recent proposals for national bans ignore a key, more potent threat. Due to a mismatch between the benefits and costs of fracking, on the one hand, and the distribution of political and legal influence, on the other, the voices of those opposed to extraction may drown out the more distant voices of those suffering from the widespread future effects of coal—the primary fossil alternative to gas. Energy policy processes must recognize the opportunity costs of banning gas, including the consequences of continuing to rely on coal as our primary electricity source. The negative environmental impacts of natural gas extraction must be addressed, and our focus on gas ought not to divert attention from the need to develop more sustainable energy alternatives. However, policymakers should not adopt the myopic view advocated by some anti-fracking activists. Rather, policymakers should formulate energy policies that fully weigh the costs and benefits of alternative courses of action and consider the interests of those under-represented in the policy process.


The policy debate over hydraulic fracturing (or “fracking”) in the Northeastern states overlying the Marcellus Shale has generated much more heat than light. It is the mirror image of the climate change debate, in which climate change skeptics turned a blind eye to climate science and the empirical evidence supporting the notion that human activity is driving global warming. Now, opponents of fracking are turning a blind eye to the opportunity costs—that is, the relative environmental and health risks—of limiting shale gas production in the United States.

States are right to demand proof that fracking will occur safely before permitting thousands of new wells; as the scale of drilling and fracking expands, regulations must protect against spills, contamination, and other impacts.1 This Essay argues, however, that the fracking debate problematically ignores the fact that decisions to ban natural gas development at a broad level—national bans, for example—inure mostly to the benefit of coal-fired power, a far dirtier (and deadlier) resource than gas.2

As is often the case in energy policy, there is a mismatch between the distribution of the costs and benefits of fracking, on the one hand, and the distribution of political influence (votes), on the other.3 Communities that experience disproportionate burdens of natural gas development—such as noise, odors, surface contamination, and heavy road traffic4—may logically choose to ban fracking, if the community decides that the local costs outweigh local benefits (such as jobs). Some of the benefits of shale gas production (like reducing coal emissions), in contrast, are remote and highly dispersed; decades from now, far fewer citizens will die early deaths from inhaling fine particulate matter, for example. These potential voters, lobbyists, or litigants will not incur their damages until later, and so cannot participate in political and legal processes now; their absence distorts the policymaking process.

Fracking decisionmakers must recognize not just the economic benefits of natural gas, but also the environmental benefits—namely, the immediate displacement of coal and its long-term climate and air quality impacts—while also addressing the concerns of those who bear the brunt of gas extraction burdens. The answer is not to ban fracking or unreasonably delay it; it is to ensure that fracking is conducted in as safe a manner as possible while simultaneously working toward ever-cleaner energy solutions.5

I. The Controversy

Fracking involves the injection of water, sand, and chemicals deep into shale formations to fracture rock, thereby freeing formerly inaccessible natural gas. Fracking has transformed American energy markets, creating an ample domestic supply of gas and driving domestic natural gas prices to record lows. Some people support shale gas production in their communities, because it brings economic benefits (e.g., royalty payments to landowners, jobs, and local taxes).6 At the same time, fracking has generated intense local opposition in some places, particularly in the northeastern United States, where critics worry about the impacts of fracking on drinking water and air quality, among other things.7

That opposition has split local communities and provoked litigation and conflict over proposed bans and regulatory standards at the state and federal level. Several New York courts have allowed local bans on fracking despite a state preemption provision.8 Pennsylvania towns persuaded the Commonwealth Court to reverse that state’s requirement that municipalities allow fracking in all zones,9 in a decision now on appeal to the Pennsylvania Supreme Court.

This increasingly disputed process of shale gas development has important environmental impacts, and the magnitude of the risks cannot currently be quantified.10 Indeed, the shale gas boom increases certain long-known development risks simply by enabling more gas wells to be drilled and “fracked”; the sheer expansion in scale can increase the cumulative effects of minor events such as spills,11 as with any industrial activity. But from the initial data, gas development enabled by fracking does not appear to justify a national ban.12 Indeed, from the studies of fracking undertaken to date,13 and current understandings of existing contamination incidents,14 many of the risks associated with fracking appear similar to those associated with a variety of other commonly accepted (but regulated) industrial activities. This is not to say that those risks are insignificant, particularly because unlike many other industrial risks, they occur—quite literally—in people’s backyards. Compliance issues aside, when a well is being drilled and fracked, the production area is a hive of truck traffic, power generators, and other activities that can transform a quiet rural or suburban landscape into an industrial area. There are also important risks beyond backyards, including improper disposal of wastes in surface waters and injection wells, spills of drilling and fracturing materials, and local noise and road use impacts.15 Thus, it is entirely logical for some people to oppose fracking in their backyards.16 While many of the impacts of fracking appear to be temporary, it is little wonder some people do not want to endure them.

It is at this point, however, that the case against fracking goes off the rails. In their efforts to keep fracking out of their backyards, opponents of the practice have sought to convince policymakers to impose nationwide bans on fracking. A group called Americans Against Fracking has argued for a full fracking ban within the United States,17 and other countries, such as France,18 already prohibit the practice. The move to ban fracking has had more success at the local level19 than at the state20 or national21 level. Although parties advocating for bans within particular municipalities or states in some cases have good reasons to be wary of drilling and fracturing22 due to sensitive natural resources, areas with high tourism values, and other unique conditions, from a national perspective, there is a risk that these small pockets of opposition to a relatively clean fossil fuel could overshadow broader public opinion, which, at least in one recent study, seems to be generally positive toward natural gas production.23

What is missing from these legal and policy conflicts is any sense of the relative health, safety and environmental risks posed by fracking, and the opportunity costs of discouraging shale gas production on a national level.

II. The Missing Piece

While natural gas is used directly by end users, one of its primary uses in the American economy is as a fuel source for electricity generation. Although gas-fired electric generation capacity has been growing, coal-fired power plants had long comprised the dominant part of the American electric generation mix—that is, until the last five years.24

Coal-fired and natural gas-fired power plants produce many of the same pollutants, but the gas-fired plants emit about half the carbon dioxide and small fractions of the most lethal pollutants (like sulfur dioxide, fine particles, and mercury) emitted by coal-fired plants on a per-BTU basis.25 That is why regulators and environmentalists have tried for years to reduce emissions from coal-fired power plants. Indeed, the so-called “EPA war on coal” that featured in Republican campaign ads last year is really the culmination of decades of litigation and halting, tentative efforts to regulate the long understood environmental and health risks associated with coal combustion.26

Now, market forces are doing what regulation and lawsuits could not—closing down coal-fired power plants in significant numbers. The U.S. Energy Information Administration reported in the spring of 2012 that for the first time ever, gas-fired plants generated more electricity than coal-fired plants.27 That same agency projects much faster growth in gas-fired capacity in the coming years, primarily because natural gas prices are expected to remain relatively low.28

The environmental and health benefits of this transition, and the corresponding costs of slowing or forgoing it, are likely to be enormous. A February 2011 study by health professionals concluded that our reliance on coal for energy causes tens of thousands of premature deaths per year, far more than any other energy source.29 The authors estimated that these externalities cost the American public as much as half a trillion dollars each year, and “conservatively” estimated that if these costs were internalized (that is, borne by the industry), the price of electricity generated from coal would double or triple.30 An August 2011 analysis by economists offered further support for the notion that substituting natural gas for coal in the electric generation mix would yield health and environmental benefits that greatly exceed the costs.31

To be sure, we are still learning about the full environmental impacts of fracking, and gas alone will not solve our climate problem. Indeed, some contend that natural gas poses a greater climate change risk than coal because of methane leakage during natural gas production32; others dispute that contention.33 But methane leakage is a problem amenable to technical and regulatory solutions34; moreover, climate change impacts account for only a small fraction of the health and environmental costs of our reliance on coal.35 Thus, regardless of the methane leakage issue, there is no real support for the notion that fracking poses greater pollution or health risks than our reliance on coal for energy.36 While we must regulate the risks of fracking, the effects of shale gas development and fracking do not appear to justify outright bans, particularly in light of the relatively unattractive energy alternatives.

III. The Cost-Benefit-Influence Mismatch

So why the disconnect between the fracking policy debate and our understanding of the relative risks of fracking compared to other forms of electricity generation? As is often the case in policymaking,37 the problem is a mismatch between the distribution of costs and benefits (here, the costs and benefits of fracking), on the one hand, and the distribution of political influence (votes), on the other.

In the debate over climate change policy, those who will bear the costs of limiting greenhouse gas emissions—representatives and customers of the energy industry—are much better represented in the American policymaking process than those who will benefit from greenhouse gas emissions limits—future generations of Americans and residents of foreign countries who are particularly vulnerable to future harms associated with climate change. There is a similar kind of missing voice in the fracking policy debate.

Many of those who will be harmed by coal’s greenhouse gas emissions have no voice in the policy process; similarly, those unlucky enough to be killed by inhaling fine particles, mercury, or other byproducts of coal combustion cannot identify their killer. By contrast, those who must endure the risks associated with fracking know exactly where to point the finger. Consequently, they exert pressure on policymakers, skewing policy toward bans on fracking even at the cost of more (and far more harmful) emissions from coal-fired electricity generation.

This is a common phenomenon in the world of energy policy. It is perfectly logical for a landowner to oppose the siting of high-voltage transmission lines across his or her property, or for the residents of Martha’s Vineyard to oppose construction of the Cape Wind project off their shores.38 Nevertheless, despite local opposition, both the transmission line and the wind farm may well provide positive net benefits for society as a whole.

These are the kinds of land-use conflicts that play out daily in the American policy process. Local, state, and federal regulators must resolve these conflicts by weighing the costs and benefits of alternative courses of action. Presumably, in making these decisions, policymakers ought to consider both the concerns of those directly affected by extraction in their backyards and the interests of the un- or under-represented people who will benefit from a transition from coal to natural gas. Indeed, states have begun to craft innovative solutions to ensure that communities that bear the disproportionate impacts of gas development receive the resources necessary to address those impacts. Pennsylvania allows municipalities to impose a fee on unconventional, fractured gas wells, which supports long-term local development.39 Colorado, in turn, has issued recommendations for state and local governments to work together to address local conflicts over gas extraction.40 States are thus adapting their policy processes to reconcile the interests of those who experience the direct, immediate costs (and benefits) of fracking with those of the large portion of the populace that would enjoy broad, future benefits from gas.


In the debate over fracking, it makes no sense for state or federal governments to ban the practice, given the opportunity costs of doing so and the environmental and other benefits it promises. Natural gas will not solve all of our climate woes and air pollution problems, but it is at once affordable and environmentally superior to its leading competitor, coal. If properly extracted, and if used in tandem with renewable resources, it can serve as a bridge to a more sustainable energy future. Burning this bridge through national or other widespread bans would be a mistake.

  1. See, e.g., Department of the Interior, Environment, and Related Agencies Appropriations for Fiscal Year 2013: Hearing Before the S. Appropriations Comm., 112th Cong. 6 (2012) (statement of Lisa P. Jackson, Administrator, Environmental Protection Agency) (“[W]e must make sure that the ways we extract [gas] . . . do not risk the safety of public water supplies.”).
  2. Of course, decisions about which fuels we will use to generate electricity are made mainly by private sector actors, based upon cost considerations. As shale gas production drives down the costs of natural gas in the United States, that cost decrease has fueled the displacement of coal-fired plants by gas-fired plants.
  3. For a longer discussion of competing interests in fracking, see David Spence, Federalism, Regulatory Lags, and the Political Economy of Energy Production, 161 U. Pa. L. Rev. 431 (2012).
  4. See, e.g., Society of Petroleum Engineers, White Paper on SPE Summit on Hydraulic Fracturing 1, 5, 6 (2011) (describing traffic, noise, and other problems).
  5. See Thomas Friedman, Op-Ed., Get It Right on Gas, N.Y. Times, Aug. 5, 2012, (quoting Faith Birol, Chief Economist, International Energy Agency) (“‘[A] golden age for gas is not necessarily a golden age for the climate’—if natural gas ends up sinking renewables.”).
  6. See, e.g., N.Y. St. Dep’t of Envtl. Conservation, Revised Draft Supplemental Generic Environmental Impact Statement on the Oil, Gas, and Solution Mining Program, at ES-17 (2011), (estimating positive economic impacts).
  7. See, e.g., Joseph De Avila, Battle Over Fracking Goes Local, Wall St. J. (Aug. 29, 2012), (describing approximately 100 municipal moratoria on fracking and 35 bans in New York alone).
  8. Cooperstown Holstein Corp. v. Town of Middlefield, 943 N.Y.S.2d 722 (N.Y. Sup. Ct. 2012); Anschutz Exploration Corp. v. Town of Dryden, 940 N.Y.S.2d 458 (N.Y. Sup. Ct. 2011); Weiden Lake Prop. Owners Ass’n v. Klansky, 936 N.Y.S.2d 62 (N.Y. Sup. Ct. 2011).
  9. Robinson Twp. v. Commonwealth, 52 A.3d 463 (Pa. Commw. Ct. 2012).
  10. See U.S. Gov’t Accountability Office, GAO-12-732, Oil and Gas: Information on Shale Resources, Development, and Environmental and Public Health Risks 4 (2012), (concluding that risks cannot currently be quantified due to a lack of adequate scientific information).
  11. For a discussion of some of the potential risks and state responses to those risks, see Hannah Wiseman, Risk and Response in Fracturing Policy, 83 U. Colo. L. Rev. (2013). For a survey of the academic literature on fracking risks, see Spence, supra note 3.
  12. See, e.g., N.Y. St. Dep’t of Envtl. Conservation, supra note 6 (comprehensively examining risks).
  13. This is a growing and diverse literature. For a summary, see Spence, supra note 3, at 442-47, 491-93.
  14. See Wiseman, supra note 11 (describing some of the contamination incidents); Daniel J. Rozell & Sheldon J. Reaven, Water Pollution Risk Associated with Natural Gas Extraction from the Marcellus Shale, 32 Risk Analysis 1382, 1384 (2011),
  15. See id.
  16. Coal mining sometimes occurs in people’s backyards, but its footprint is too large to fit within individual farms, ranches or small towns. By contrast, gas development has a smaller footprint in that the construction and operation of individual wells requires much less space on the surface, making development technically possible in more places, and increasing the likelihood of human-energy conflicts. See, e.g., Applications and Permits, City of Fort Worth, (last visited Feb. 19, 2013) (showing 1,483 permitted wells within city limits and 526 additional permitted wells); Hannah Wiseman, Urban Energy, Fordham Urb. L.J. (forthcoming 2013) (on file with author) (describing the likely expansion of conflicts).
  17. The group’s board features Gasland director Josh Fox, actor Mark Ruffalo, and singer Natalie Merchant. See Americans Against Fracking, (last visited Feb. 13, 2013) (specifying both the group’s goal of a nationwide ban, and listing members of its board).
  18. See Tara Patel, France to Keep Fracking Ban to Protect Environment, Sarkozy Says, Bloomberg, Oct. 4, 2011,
  19. See, e.g., De Avila, supra note 7 (mapping resolutions in New York to allow, ban, or temporarily disallow high-volume hydraulic fracking); John R. Nolon & Victoria Polidoro, Hydrofracking: Disturbances Both Geological and Political: Who Decides?, 44 Urb. Law. 507, 522-26 (2012) (describing some of the bans, and court responses to them, in detail); City Council Proclamation, City of Pittsburgh, Feb. 8, 2011, (noting the 2010 ban on fracking in the City of Pittsburgh and commending the City of Buffalo, New York for its ban).
  20. The state of Vermont has banned fracking. Vermont Fracking Ban: Green Mountain State Is First In U.S. To Restrict Gas Drilling Technique, Associated Press, May 16, 2012, (describing the Vermont ban as largely symbolic, since Vermont has few shale gas resources). Vermont lacks shales and likely would not experience fracking, but may wish to make a statement about its risk concerns. New York has imposed a moratorium on certain kinds of hydraulic fracturing pending further study of the problem. New York’s Department of Environmental Conservation has completed a comprehensive environmental impact statement, which the DEC concluded was required by that state’s environmental quality act. See N.Y. State Dep’t of Envtl. Conservation, supra note 6. The Department of Environmental Conservation proposed high-volume hydraulic fracturing rules near the end of 2012, with a comment period open through January 2013. N.Y. Dep’t of Envtl. Conservation, High Volume Hydraulic Fracturing Proposed Regulations, 6 N.Y.C.R.R. 52, 190, 550-556, 560, 750, The rulemaking process has since been extended through a refiling of the rule, which the DEC initiated “in order to give New York State Commissioner of Health, Dr. Nirav Shah, time to complete his review” of the environmental impact statement. N.Y. Dep’t of Envtl. Conservation, High-Volume Hydraulic Fracturing Proposed Regulations, Bills have been introduced into the New Jersey and Maryland legislatures to impose moratoria on fracking there, though the governors of both states have already imposed moratoria pending further study. See Assembly Bill No. 3644, 215th Leg. (N.J. 2013), (text of the proposed New Jersey legislation); Tom Johnson, Fracking Ban Doesn’t Go Far Enough for Environmentalists, NJSpotlight, February 4, 2013, (describing the New Jersey legislation and the governor’s moratorium); Timothy B. Wheeler, O’Malley Panel Urges ‘Fracking’ Safeguards, Balt. Sun, Jan. 7, 2013, (describing the situation in Maryland); Del. 
Shane Robinson and Sen. Karen Montgomery Introduce Statewide Ban on Fracking, Food & Water Watch (Jan. 13, 2013), (describing the new bills introduced into the Maryland legislature).
  21. The authors are unaware of any proposed congressional legislation banning fracking nationwide.
  22. State bans on fracking reflect a precautionary approach in light of the unknown magnitude of risks. New York, for example, having seen the many impacts in neighboring Pennsylvania and possessing an unfiltered water supply above parts of the shale, is wary of development.
  23. Univ. Tex. Austin Energy Poll, (last visited Feb. 19, 2013) (data on file with authors). Study results of course vary substantially. One study of 750 likely New York voters suggested that 42% support “the Department of Conservation allowing hydrofracking to move forward in parts of upstate New York,” 36% oppose it, and 15% say they do not have enough information. Siena Research Inst., Obama Poised to Carry New York, Comparable to ’08 (2012),
  24. For a summary of these trends, see U.S. Energy Info. Admin., Annual Energy Outlook 2012 (Early Release),
  25. U.S. Energy Info. Admin., Natural Gas Issues and Trends (1998),
  26. For a summary of these rules, see James E. McCarthy & Claudia Copeland, Cong. Research Serv., R41914, EPA’s Regulation of Coal-Fired Power: Is a “Train Wreck” Coming? (2011),
  27. U.S. Energy Info. Admin., Short Term Energy Outlook Tables (2012),
  28. See U.S. Energy Info. Admin., supra note 24.
  29. See Paul R. Epstein et al., Full Cost Accounting for the Life Cycle of Coal, 1219 Annals N.Y. Acad. of Sci. 73, 82-83 (2011) (assessing the negative externalities associated with coal production, including premature deaths). For a summary of other studies, see External Costs of Coal, Sourcewatch, (last modified Nov. 5, 2011).
  30. Id. at 93.
  31. Nicholas Z. Muller et al., Environmental Accounting for Pollution in the United States Economy, 101 Am. Econ. Rev. 1649, 1664, 1667-69 (2011) (estimating environmental damages of $53 billion annually for coal combustion and less than $1 billion per year for natural gas-fired generation).
  32. The scholarly debate on the methane leakage issue is just getting underway. One early study estimated that as much as 7.9 percent of the methane produced from natural gas wells escapes into the atmosphere as the result of leaks or venting, an amount that could undermine the climate change advantages of natural gas. See Robert W. Howarth, Renee Santoro & Anthony Ingraffea, Methane and the Greenhouse Gas Footprint of Natural Gas from Shale Formations, 106 Climatic Change, June 2011; see also Gabrielle Petron et al., Hydrocarbon Emissions Characterization in the Colorado Front Range: A Pilot Study, 117 J. Geophys. Res. 1 (2012) (suggesting that existing estimates of fugitive methane emissions from gas operations are underestimates). Recently, the National Oceanic and Atmospheric Administration announced results from a study of methane emissions in Utah that are consistent with the Howarth data. Jeff Tollefson, Methane Leaks Erode Green Credentials of Natural Gas, Nature, January 2, 2013,
  33. A report from Cambridge Energy Research Associates contends that the Howarth study is plagued by measurement and methodological errors that resulted in an overestimate of methane emissions from gas production operations. The alleged errors include failing to distinguish between methane emission rates from venting versus flaring of gas, failing to account for the standard industry practice of capturing methane in flowback water, and more. IHS Cambridge Energy Research Assocs., Mismeasuring Methane: Estimating Greenhouse Gas Emissions from Upstream Natural Gas Development (2011) (private report) (on file with author). See also David A. Kirchgessner et al., Estimate of Methane Emissions from the U.S. Natural Gas Industry, 35 Chemosphere, no. 6, 1997 at 1365; Michael Levi, Yellow Flags on a New Methane Study, Council Foreign Rel., Feb. 13, 2012, (identifying methodological problems with the Petron study).
  34. See Jim Marston, Elements: Shale Drilling Can Be a Win-Win, Austin Am.-Statesman, Jan. 13, 2013, (detailing the Environmental Defense Fund’s qualified support for shale gas production, with controls on methane leakage). States and the EPA are considering additional regulation to address methane leakage. Pennsylvania, for example, is moving to tighten methane leakage rules. Associated Press, Pa. Moves to Limit Air Emissions from Gas Industry, FuelFix (Feb. 1, 2013), Several states would like the EPA to further tighten its rules, or implement them more quickly. See Kevin Begos, NY, 6 Other States Suing EPA Over Drilling Methane, StarGazette, Dec. 11, 2012, (recounting litigation aimed at forcing more action on methane leakage by EPA).
  35. See Epstein et al., supra note 29, at 9 (ascribing most of the costs of coal to non-greenhouse gas emissions).
  36. See Spence, supra note 3, for a survey of this literature. However, fugitive methane emissions are a problem that is amenable to technical solutions. See Oil and Natural Gas Sector: New Source Performance Standards and National Emissions Standards for Hazardous Air Pollutants Reviews, 76 Fed. Reg. 52738, 52757 (Aug. 23, 2011) (to be codified at 40 C.F.R. pts. 60, 63).
  37. See, e.g., Randall Bartlett, Economic Foundations of Political Power 155 (1973) (providing a broad description of public choice theory, which suggests that those with the highest individual stakes in a decision will win out over disparate groups of individuals who would, collectively, be highly impacted by a decision); Brian Galle & Kirk J. Stark, Beyond Bailouts: Federal Tools for Preventing State Budget Crises, 87 Ind. L.J. 599, 608 (2012) (“Politically, voters and officials may both anticipate that they will not be around when the future comes: they may die, they may move, or they may be voted or term-limited out of office, so that the future costs represent an intertemporal externality.”).
  38. Ten Taxpayer Citizens Grp. v. Cape Wind Assocs., 373 F.3d 183 (1st Cir. 2004).
  39. H.R. 1950, 2011 Leg. (Pa. 2011),
  40. Task Force on Cooperative Strategies Regarding State and Local Regulation of Oil and Gas Development, Protocols Recommendations (Apr. 18, 2012),