
The Oversight Board’s Dormant Power to Review Facebook’s Algorithms

Abstract: This Essay argues that Facebook’s Oversight Board—an independent “Supreme Court” for Facebook, tasked with reviewing the platform’s content-moderation decisions—has the additional power to review Facebook’s algorithms. While much of the literature on the Board assumes that it does not have this power, the text and structure of the Board’s Charter clearly provide for oversight of algorithms. This is an important observation because many of the serious problems plaguing online speech today—misinformation, radicalization, and community safety—are driven by algorithmic amplification. Equipped with the powers this Essay identifies, the Board could play a significant role in curbing the pernicious effects of algorithmic amplification of speech on Facebook.

Introduction

In the wake of an attempted insurrection at the U.S. Capitol, Facebook suspended President Trump.11. Guy Rosen, Our Response to the Violence in Washington, Facebook (Jan. 7, 2021, 8:05 AM PST), https://about.fb.com/news/2021/01/responding-to-the-violence-in-washington-dc (https://perma.cc/4EBV-C36G). Facebook said that it “believe[d] the risks of allowing Trump to continue to use [Facebook] . . . [we]re simply too great.”22. Id. Soon after, Facebook referred its decision to the Facebook Oversight Board33. Nick Clegg, Referring Former President Trump’s Suspension from Facebook to the Oversight Board, Facebook (Jan. 21, 2021), https://about.fb.com/news/2021/01/referring-trump-suspension-to-oversight-board (https://perma.cc/J9UZ-LYG6). —an independent “Supreme Court”44. Ezra Klein, Mark Zuckerberg on Facebook’s Hardest Year, and What Comes Next, Vox (Apr. 2, 2018, 6:00 AM EST), https://www.vox.com/2018/4/2/17185052/mark-zuckerberg-facebook-interview-fake-news-bots-cambridge (https://perma.cc/6VQD-KN2T). for Facebook, tasked with reviewing the platform’s content-moderation decisions. The Board partially upheld Facebook’s decision. It found that while Facebook was justified in its decision to restrict Trump’s access to the platform, it was “not appropriate for Facebook to impose the indeterminate and standardless penalty of indefinite suspension.”55. Case Decision 2021-001-FB-FBR, Facebook (May 5, 2021), https://www.oversightboard.com/decision/FB-691QAMHJ (https://perma.cc/FC4H-CNC9). To remedy this failure, the Board gave Facebook “six months” to “review” Trump’s suspension and “to determine and justify a proportionate response.”66. Id. For commentary on the Board’s decision, see Kari Paul, Facebook Ruling on Trump Renews Criticism of Oversight Board, The Guardian (May 5, 2021, 11:39 AM EST), https://www.theguardian.com/technology/2021/may/05/facebook-oversight-board-donald-trump (https://perma.cc/AAB9-5WQQ); Kevin Roose, Facebook’s ‘Supreme Court’ Tells Zuckerberg He’s the Decider, N.Y. Times (May 6, 2021), https://www.nytimes.com/2021/05/06/technology/facebook-oversight-board-trump.html (https://perma.cc/Y4XH-EKFG); and Margaret Sullivan, Facebook’s Oversight Board Whiffed. Trump Deserves a Permanent Exile, Wash. Post (May 5, 2021, 4:20 PM EST), https://www.washingtonpost.com/lifestyle/media/trump-facebook-oversight-board/2021/05/05/878eef6a-adb3-11eb-acd3-24b44a57093a_story.html (https://perma.cc/NX8A-EVDN). In other words, the Board passed the buck back to Facebook.77. Ultimately, Facebook suspended Trump for two years. After the close of two years, Trump’s suspension will be reviewed, and he will be reinstated if there is no longer “a serious risk to public safety.” Nick Clegg, In Response to Oversight Board, Trump Suspended for Two Years; Will Only Be Reinstated if Conditions Permit, Facebook (June 4, 2021), https://about.fb.com/news/2021/06/facebook-response-to-oversight-board-recommendations-trump (https://perma.cc/XX8N-4UPH).

Trump’s suspension sparked debate on the role of the Board, its effectiveness, and how social media companies should self-regulate.88. See, e.g., Adonis Hoffman, Twitter, Facebook Right to Block Trump—Big Tech Must Self-Regulate to Protect Public Safety, Fox News (Jan. 9, 2021), https://www.foxnews.com/opinion/trump-twitter-facebook-adonis-hoffman (https://perma.cc/93X3-YM2E); Vicki Jackson & Martha Minow, Facebook Suspended Trump. The Oversight Board Shouldn’t Let Him Back, Lawfare (Mar. 8, 2021, 11:02 AM), https://www.lawfareblog.com/facebook-suspended-trump-oversight-board-shouldnt-let-him-back (https://perma.cc/G748-7XHG) (arguing that the Board should take into account “Facebook’s interest in its own freedom of expression” when making decisions); Damian Reilly, The Problem with Facebook’s ‘Supreme Court’, The Spectator (Mar. 15, 2021, 2:00 AM), https://www.spectator.co.uk/article/the-problem-with-facebook-s-supreme-court (https://perma.cc/SZ95-SZW9) (calling the Board an “ingenious” move from Mark Zuckerberg, “enabling him to shift the blame” for unpopular decisions “from himself onto the board”). A central aspect of this conversation—and the topic of this short Essay—is the Board’s competence to address the viral spread of content accelerated by Facebook’s algorithms. Facebook’s design decisions help explain why social media posts can produce dangerous real-world consequences. “[W]hat people actually read on Facebook is what’s in their News Feed,” yet the content of this feed is “determined entirely by Facebook’s algorithm[s].”99. Jon Evans, Facebook Isn’t Free Speech, It’s Algorithmic Amplification Optimized for Outrage, TechCrunch (Oct. 20, 2019, 9:00 AM EST), https://techcrunch.com/2019/10/20/facebook-isnt-free-speech-its-algorithmic-amplification-optimized-for-outrage (https://perma.cc/8RZ2-DRET). These algorithms “use signals like how many people react to, comment on or share posts to determine how high they appear in News Feed.”1010. Adam Mosseri, Bringing People Closer Together, Facebook (Jan. 11, 2018), https://about.fb.com/news/2018/01/news-feed-fyi-bringing-people-closer-together (https://perma.cc/8RZ2-DRET). The net result is that Facebook’s algorithms spread the most attention-grabbing content, encouraging people to spend more time on the platform viewing more ads and further padding Facebook’s bottom line.1111.  See Olivia Solon, Ex-Facebook President Sean Parker: Site Made to Exploit Human ‘Vulnerability’, The Guardian (Nov. 9, 2017, 3:11 PM EST), https://www.theguardian.com/technology/2017/nov/09/facebook-sean-parker-vulnerability-brain-psychology (https://perma.cc/595R-3PY6) (“[W]hen Facebook was being developed the objective was: ‘How do we consume as much of your time and conscious attention as possible?’”). Indeed, Facebook’s algorithms form the backdrop to its decision to suspend President Trump. Some have charged that Trump’s Facebook posts were dangerous because there were “staggering amounts of sensational misinformation about the election on Facebook’s platform, shunted into echo chambers by Facebook’s algorithms, and insulated from counter-speech by Facebook’s architecture.”1212.  Jameel Jaffer, Katy Glenn Bass, Alex Abdo, Katie Fallow & Lyndsey Wajert, Submission to Facebook Oversight Board, Knight Inst. 2 (Feb. 11, 2021), https://knightcolumbia.org/documents/submission-to-facebook-oversight-board-2-11-2021 (https://perma.cc/QPB3-E9GG) (“[T]he content-moderation decisions that the Board has been tasked with reviewing can’t actually be separated from [Facebook’s] design decisions. . . .”).
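
To make these ranking mechanics concrete, consider the following sketch. It is a deliberately crude, hypothetical illustration written for this Essay: the engagement signals mirror those Facebook has publicly described (reactions, comments, shares), but the weights and data structures are invented, and Facebook's actual ranking models are far more complex and not public.

```python
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    reactions: int
    comments: int
    shares: int

# Hypothetical weights; Facebook's real signal weights are not public.
WEIGHTS = {"reactions": 1.0, "comments": 2.0, "shares": 3.0}

def engagement_score(post: Post) -> float:
    # Score a post by its weighted engagement signals.
    return (WEIGHTS["reactions"] * post.reactions
            + WEIGHTS["comments"] * post.comments
            + WEIGHTS["shares"] * post.shares)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Posts with more engagement appear higher in the feed, so the most
    # attention-grabbing content receives the widest distribution.
    return sorted(posts, key=engagement_score, reverse=True)
```

Even in this toy version, the editorial consequence is visible: whatever maximizes engagement rises to the top of the feed, regardless of its accuracy or safety.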

This point was not overlooked by the Board. In its decision on Trump’s suspension, the Board said that it “sought clarification from Facebook” about the extent to which its “design decisions, including algorithms, policies, procedures and technical features, amplified Mr. Trump’s posts after the election.”1313.  Case Decision 2021-001-FB-FBR, supra note 5. However, “Facebook declined to answer these questions.”1414.  Id. This made it “difficult for the Board to assess whether less severe measures, taken earlier, may have been sufficient to protect the rights of others.”1515.  Id.

Facebook’s refusal to answer the Board’s questions is the latest move in a simmering debate over the Board’s powers. While Facebook maintains that the Board lacks the power to access the platform’s algorithms, members of the Board have expressed their disagreement. A Facebook spokesperson recently said that “the company is not planning to refer any cases regarding recommendation or engagement algorithms to the board, and that content-ranking algorithms are not currently in the scope of the board’s appeal process.”1616.  Rebecca Heilweil, If Mark Zuckerberg Won’t Fix Facebook’s Algorithms Problem, Who Will?, Vox: Recode (Mar. 26, 2021, 12:20 PM EST), https://www.vox.com/recode/22335801/algorithms-artificial-intelligence-facebook-instagram-recommendations (https://perma.cc/4MCY-XZ7V). On the other hand, Board member John Samples expressed his view that the Board “could comment on algorithms in its non-binding recommendations.”1717.  Id. And Alan Rusbridger, a former editor of The Guardian and current Board member, noted that “[a]t some point we’re going to ask to see the algorithm, I feel sure, whatever that means.”1818.  Alex Hern, Alan Rusbridger Says Oversight Board Will Ask to See Facebook’s Algorithm, The Guardian (Mar. 2, 2021, 2:21 PM EST), https://www.theguardian.com/technology/2021/mar/02/alan-rusbridger-says-oversight-board-will-ask-to-see-facebooks-algorithm (https://perma.cc/7FY3-3YUU).

Problematically, much of the academic literature assumes that Facebook is correct. Thus, the Knight Institute has lamented that Facebook placed “design decisions . . . off limits” for Board review. And so the Board has “effectively been directed to take Facebook’s design as a given.”1919.  Jaffer et al., supra note 12. Similarly, Evelyn Douek has noted that “Facebook’s algorithms . . . [are] conspicuously absent” from the Board’s Charter2020.  Evelyn Douek, What Kind of Oversight Board Have You Given Us?, U. Chi. L. Rev. Online: Blog (May 5, 2020), https://lawreviewblog.uchicago.edu/2020/05/11/fb-oversight-board-edouek (https://perma.cc/XWD6-3EAX). —the document that lays out the Board’s powers, responsibilities, and relationship with Facebook.2121.  Oversight Board Charter, Facebook (Sept. 2019), https://about.fb.com/wp-content/uploads/2019/09/oversight_board_charter.pdf (https://perma.cc/MC9C-HBY3).

This short Essay disagrees. It suggests that the powers already vested in the Board give it authority to review Facebook’s algorithms. Under the terms of the Charter, the Board has authority both to access Facebook’s algorithms as part of its standard review process and to make recommendations regarding algorithms’ impact on Facebook. Importantly, Board members could leverage the textual and structural arguments in this Essay to push for expansive jurisdiction.

The debate over the Board’s power to review algorithms is important both for the efficacy of the Board in reviewing the speech of 2.85 billion active Facebook users2222.  See H. Tankovska, Number of Monthly Active Facebook Users Worldwide as of 1st Quarter 2021, Statista (May 21, 2021), https://www.statista.com/statistics/264810/number-of-monthly-active-facebook-users-worldwide (https://perma.cc/Z7NQ-RCYD). and for the future of internet regulation. Currently, Section 230 of the Communications Decency Act immunizes Internet Service Providers from liability for harms caused by third-party content.2323.  47 U.S.C. § 230 (2018). But this liability shield is under attack. Section 230 was a “favorite punching bag of President Trump” and is likely to be reviewed by President Biden.2424.  Rachel Lerman, Social Media Liability Law is Likely to Be Reviewed Under Biden, Wash. Post (Jan. 18, 2021, 8:00 AM EST), https://www.washingtonpost.com/politics/2021/01/18/biden-section-230 (https://perma.cc/D5NP-W9MZ). Yet, as commentators have pointed out, “reductions to Section 230’s scope pose serious risks to Internet speech.”2525.  Eric Goldman, Why Section 230 Is Better than the First Amendment, 95 Notre Dame L. Rev. 33, 33 (2019). If this Essay is correct and the Facebook Oversight Board has real power to control the worst excesses of third-party speech on Facebook—including the amplification of that speech by Facebook’s algorithms—then it could be a model for a viable private alternative to government regulation.

The remainder of this Essay proceeds as follows. Part I provides a brief introduction to the Board’s mission and illustrates why oversight of algorithms is essential if the Board is to do its job. Part II gives an overview of the Board’s powers and adjudicatory process. Part III builds on this foundation. It argues that the Board has authority both to access Facebook’s algorithms as part of its standard review process and to make recommendations regarding algorithms’ impact on Facebook. Part IV suggests that the Board is competent to review algorithms because it has access to the necessary expertise and funding. Finally, Part V proposes some possible uses of the Board’s power to review algorithms and considers what the Board should do if Facebook denies that it has the powers this Essay suggests.

I. The Board’s Mission and Algorithmic Harms

The Facebook Oversight Board was created to ensure that Facebook promotes free expression by correctly balancing “voice” (or free speech) with other community values, including authenticity, safety, privacy, and dignity.2626.  See Oversight Board Charter, supra note 21, art. 2, § 2 (“The board will review content enforcement decisions and determine whether they were consistent with Facebook’s content policies and values.”); Monika Bickert, Updating the Values that Inform Our Community Standards, Facebook (Sept. 12, 2019), https://about.fb.com/news/2019/09/updating-the-values-that-inform-our-community-standards (https://perma.cc/W8LH-8YD6) (listing Facebook’s values). This mission is clearly stated in the Board’s Charter, which acknowledges that although “[f]ree expression is paramount . . . there are times when speech can be at odds with” other values.2727.  Oversight Board Charter, supra note 21, at 2. The idea that unfettered speech should be limited to serve other community values is not new. Thus, under the First Amendment, the government may regulate the time, place, and manner of speech in public fora to serve other compelling interests—such as “protect[ing] public safety.”2828.  Reed v. Town of Gilbert, 576 U.S. 155, 175 (2015) (Alito, J., concurring); see also id. at 173 (“A sign ordinance narrowly tailored to the challenges of protecting the safety of pedestrians, drivers, and passengers . . . well might survive strict scrutiny.”).

If the Board is to live up to its mission and hold Facebook accountable to its values, it is crucial that the Board review Facebook’s algorithms. Just as Facebook’s content-moderation decisions implicate the balance between free expression and community values, so do its algorithms. Because algorithms control the reach, spread, and impact of content on Facebook, the harm a particular post can inflict is directly tied to its algorithmic amplification. Further, because algorithms optimize for engagement, they may already bias Facebook’s platform toward some community goals and away from others. Take a hypothetical example. If Facebook amplifies a post lambasting a public figure for indiscretion in their private life, this decision prefers voice (and possibly authenticity) to privacy (and arguably safety). Therefore, if the Board is later tasked with reviewing a content-moderation decision to remove the post, Facebook’s initial decision to amplify is critical context for understanding how Facebook’s values are implicated.

As the following examples illustrate, many of Facebook’s problems are driven by its algorithms. Facebook’s own research shows that the content of users’ News Feeds influences their emotions.2929.  Adam D.I. Kramer, Jamie E. Guillory & Jeffrey T. Hancock, Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks, 111 Proc. Nat’l Acad. Sci. 8788 (2014), http://www.pnas.org/content/111/24/8788.full.pdf (https://perma.cc/8JDJ-KVG4). Researchers manipulated the content of users’ feeds to contain either more positive or more negative content and monitored the effect on users’ posting behavior. “The results show emotional contagion . . . . [F]or people who had positive content reduced in their News Feed, a larger percentage of words in people’s status updates were negative and a smaller percentage were positive. When negativity was reduced, the opposite pattern occurred.”3030.  Id. at 8789. Perhaps even more worryingly, a leaked internal report found that one-third of large German political groups on Facebook contained extremist content.3131.  Adam Smith, Facebook Knew Its Algorithm Made People Turn Against Each Other But Stopped Research, Independent (May 28, 2020), https://www.independent.co.uk/life-style/gadgets-and-tech/news/facebook-algorithm-bias-right-wing-feed-a9536396.html (https://perma.cc/KA6U-3GCR). And most people joining these groups were encouraged by Facebook’s algorithm. The report admitted: “Our recommendation systems grow the problem.”3232.  Id. Finally, a slide from a 2018 presentation delivered at Facebook acknowledged that “[o]ur algorithms exploit the human brain’s attraction to divisiveness” and if left unchecked would feed users “more and more divisive content in an effort to gain user attention and increase time on the platform.”3333.  Jeff Horwitz & Deepa Seetharaman, Facebook Executives Shut Down Efforts to Make the Site Less Divisive, Wall St. J. (May 26, 2020, 11:38 AM EST), https://www.wsj.com/articles/facebook-knows-it-encourages-division-top-executives-nixed-solutions-11590507499.html (https://perma.cc/A4AG-LQDC).

But despite awareness of these problems, Facebook has been unwilling to act. After the 2018 presentation, “Mr. Zuckerberg and other senior executives largely shelved the basic research . . . and weakened or blocked efforts to apply its conclusions to Facebook products.”3434.  Id. It is perhaps unsurprising that Zuckerberg was not interested in reform. Divisive content is Facebook’s profit engine. Because users are more likely to spend time on the platform engaging with such content, they are also more likely to view Facebook’s advertisements.3535.  Erin Black, How Facebook Makes Money by Targeting Ads Directly to You, CNBC: Tech (Apr. 2, 2019, 8:00 AM EST), https://www.cnbc.com/video/2019/04/01/how-facebook-makes-money-by-targeting-ads-directly-to-you.html (https://perma.cc/4C3X-8ASD) (“[Facebook] makes a whopping 99 percent of its revenue from advertising.”).

It is therefore vital that the Board have oversight of Facebook’s algorithms. Because algorithms contribute to the balance between voice, authenticity, safety, privacy, and dignity on Facebook, the Board cannot do its job without reviewing them. Access to Facebook’s algorithms would allow the Board to fulfill Zuckerberg’s initial vision of an organization capable of “ultimately mak[ing] the final judgment call on what should be acceptable speech in a community that reflects the social norms and values of people all around the world.”3636.  Klein, supra note 4. Happily, as the remainder of this Essay makes clear, a close reading of the Board’s Charter reveals that it already has power to review Facebook’s algorithms.

II. Overview of the Board’s Powers and Adjudicatory Process

The Board’s Charter is a “constitution-like document that la[ys] out the structural relationship between Facebook [and] the Oversight Board.”3737.  Kate Klonick, The Facebook Oversight Board: Creating an Independent Institution to Adjudicate Online Free Expression, 129 Yale L.J. 2418, 2458 (2020). Importantly, it describes the Board’s authority and procedures for review. Additional detail on the Board’s structure and powers is provided by its Bylaws.3838.  Oversight Board Bylaws, Facebook 31 (Jan. 2021), https://www.oversightboard.com/sr/governance/bylaws (https://perma.cc/7FR5-T4QZ). A basic outline of the Board’s review process is as follows.

There are two ways in which a case may reach the Board. First, if a Facebook user “disagree[s] with the outcome of Facebook’s decision” and has “exhausted appeals” within Facebook, they may submit their case to the Board.3939.  Oversight Board Charter, supra note 21, art. 2, § 1. The Board then has discretion to choose which submissions it will review.4040.  Id. Alternatively, Facebook itself can submit requests for Board review.4141.  Id. Facebook’s submissions to the Board may “includ[e] additional questions related to the treatment of content beyond whether the content should be allowed or removed completely.”4242.  Id. Facebook can also refer cases for automatic expedited review when “content could result in urgent real world consequences.”4343.  Id. art. 3, § 7.2.

Once the Board accepts a case, it must “review and decide on content in accordance with Facebook’s content policies and values.”4444.  Id. art. 2. Facebook’s values ensure free expression by balancing “voice” with four additional goals: safety, privacy, authenticity, and dignity.4545.  Monika Bickert, Updating the Values that Inform Our Community Standards, Facebook (Sept. 12, 2019), https://newsroom.fb.com/news/2019/09/updating-the-values-that-inform-our-community-standards (https://perma.cc/BG7K-8MB4). Facebook’s “content policies” are not expressly defined in the Charter and are given an unhelpfully circular definition in the Bylaws: “Facebook policies” are those “content policies and procedures that govern content on the platform (e.g. Community Standards or Community Guidelines).”4646.  Oversight Board Bylaws, supra note 38. Further sources of authority on which the Board may rely are “prior board decisions,” which “have precedential value,” and “human rights norms.”4747.  Oversight Board Charter, supra note 21, art. 2, § 2.

The record for Board review is provided by Facebook. It must “provide information, in compliance with applicable legal and privacy restrictions, that is reasonably required for the board to make a decision.”4848.  Id. art. 3, § 3. As the Bylaws elaborate, this information is to include: (1) a graphic of the content in question; (2) information about where the content was posted; (3) information about the person who posted the content; and (4) the case history within Facebook’s internal appeals process.4949.  Oversight Board Bylaws, supra note 38, art. 2, § 2.2.2. Further, the Board can “gather additional information” that “may be required to provide additional context,” including “through subject matter experts, research requests or translation services.”5050.  Oversight Board Charter, supra note 21, art. 3, § 3. And the Bylaws expressly state that the Board may request “information on engagement and reach of . . . content” and “information regarding Facebook’s decision and policies.”5151.  Oversight Board Bylaws, supra note 38, art. 2, § 2.2.2. Finally, the “posting person or the reporting person” and individuals and groups “immediately depicted or impacted by the content in question” have the opportunity to submit written statements to the Board.5252.  Oversight Board Charter, supra note 21, art. 3, § 3. Note, however, that the Board has discretion to restrict submissions by interested groups. The posting or reporting person may always submit a written statement. Id.

Once the Board has reviewed a case, it will issue “[a] final decision” by majority vote.5353.  Id. art. 3, § 4. The decision will include “a determination on the content,” and a “plain language explanation of the board’s rationale.”5454.  Id. At the Board’s discretion, “the final decision may include a policy advisory statement, which will be taken into consideration by Facebook to guide its future policy development.”5555.  Id. Unlike the Board’s case-specific decisions, its policy advisory statements are non-binding. Facebook is bound to “implement” the former “promptly,” but need only “analyz[e]” and “consider[]” the latter.5656.  Id. art. 4.

III. The Board Can Make Policy Recommendations About Algorithms

With a firm grasp of how the Board reviews cases, we now turn to the Board’s power to review Facebook’s algorithms. This Part argues, first, that the Board can use its established power to “request information” to access Facebook’s algorithms as part of its standard review process. Second, it argues that it is within the scope of the Board’s power to make recommendations about how Facebook’s algorithms work.

A. The Board Has Power to Access Facebook’s Algorithms

As in any other interpretive exercise, we begin with the text. The Charter states that the Board will have access to any information “reasonably required for the board to make a decision.”5757.  Id. art. 3, § 3. Plus, according to the Bylaws, the Board may request “information on engagement and reach of . . . content” and “information regarding Facebook’s decision and policies.”5858.  Oversight Board Bylaws, supra note 38, art. 2, § 2.2.2. This power is clearly broad enough to cover a request to see Facebook’s algorithms. Moreover, none of the exceptions to information gathering by the Board preclude such a request.

1. The Board’s Information-Gathering Power Covers Algorithms

Access to algorithms is “reasonably required” for the Board to make decisions. This is because the reach of content is critical to Board decisionmaking. And algorithms determine, to a large extent, the engagement and reach of a Facebook post.

If the Board is to assess the consequences of speech for Facebook’s community values, it must know the reach of content on the platform. Indeed, the Board has already recognized this. In a recent case about COVID-19 misinformation,5959.  Case Decision 2020-006-FB-FBR, Facebook (Jan. 28, 2021), https://oversightboard.com/decision/FB-XWJQBU9A (https://perma.cc/AL65-XTTJ?type=image). the Board overturned Facebook’s decision to remove a post because Facebook had failed to “demonstrate[ ] how th[e] . . . post contributed to imminent harm.”6060.  Id. Significantly, the Board recognized that whether “imminent harm” will flow from speech turns, in part, on “the reach of [the] speech.”6161.  Id. Thus, information about reach is critical to Board decisionmaking. Without it, the Board cannot fully appreciate whether a post poses a risk to privacy, safety, or dignity.

Importantly, Facebook’s algorithms determine the reach of content on the platform. Facebook “use[s] signals like how many people react to, comment on or share posts to determine how high they appear in News Feed.”6262.  Mosseri, supra note 10. This data, in turn, is used to “prioritize posts that spark conversations and meaningful interactions between people” and to “predict which posts [users] might want to interact with [their] friends about.”6363.  Id. Simply put, Facebook’s algorithms determine which posts are widely seen and which are not. It is therefore important for the Board to know, in evaluating whether a post was treated appropriately by Facebook, whether the post was amplified or not.

To drive home the point, consider the Board’s review of President Trump’s suspension.6464.  See Clegg, supra note 3. The Board upheld Facebook’s decision because it was “necessary and proportionate to protect the rights of others.”6565.  Case Decision 2021-001-FB-FBR, supra note 5. The Board’s analysis of this “crucial question” turned on two factors: (1) the “risk of violence” posed by Trump’s speech; and (2) whether “less restrictive measures” were possible.6666.  Id. But complete analysis of these factors is impossible without access to Facebook’s algorithms. The “risk of violence” posed by Trump is hard to quantify without knowing how his posts were amplified by Facebook.6767.  See Jaffer et al., supra note 12, at 2 (noting that misleading speech was dangerous precisely because it was amplified by Facebook, particularly within certain Facebook groups which were “insulated from counter-speech by Facebook’s architecture”). As a crude proxy for information about algorithmic amplification, the Board used information on the number of people who “follow[ed]” Trump on Facebook to quantify the reach of Trump’s speech. Case Decision 2021-001-FB-FBR, supra note 5. But this data provides only an incomplete picture because it does not tell the Board whether Trump’s posts were prioritized—i.e., made more visible—by Facebook’s algorithms. And the Board itself recognized that it could not tell whether less restrictive measures were possible without access to Facebook’s algorithms.6868.  Case Decision 2021-001-FB-FBR, supra note 5 (noting that Facebook’s refusal to answer questions about its algorithms makes it “difficult for the Board to assess whether less severe measures, taken earlier, may have been sufficient to protect the rights of others”). In other words, because the Board did not have access to algorithms in adjudicating Trump’s suspension, it had to guess at the two factors that were most important to its review. To prevent future guesswork, seeing Facebook’s code is “reasonably required” for the Board’s decisionmaking.6969.  Oversight Board Charter, supra note 21, art. 3, § 3.

2. Limits on the Board’s Information-Gathering Power Do Not Apply to Algorithms

Although the Board’s information-gathering power is limited by the Charter and Bylaws, none of these exceptions deny the Board access to Facebook’s algorithms.

The Charter contains just one express carve-out on information the Board may access: Facebook may withhold information from the Board “in compliance with applicable legal and privacy restrictions.”7070.  Id. But this does not prohibit the disclosure of algorithms. There are no legal restrictions on Facebook sharing its own proprietary code with the Board, presumably under a confidentiality agreement. Nor are there privacy concerns—no sensitive personal identifying information would need to be shared in disclosing an algorithm.

The Bylaws provide additional circumstances in which Facebook may withhold information. Three are particularly relevant here. First, Facebook may decline requests that are not “technically feasible.”7171.  Oversight Board Bylaws, supra note 38, art. 2, § 2.2.2. Although this could conceivably cover algorithms, it is not clear why it would be technically difficult for Facebook to disclose its code to the Board. For example, it is widely reported that Facebook uses a variant of the Vickrey-Clarke-Groves (VCG) algorithm to decide which advertisements to display on a user’s News Feed.7272.  Hal R. Varian & Christopher Harris, The VCG Auction in Theory and Practice, 104 Am. Econ. Rev. 442, 442 (2014) (“Facebook uses a Vickrey-Clarke-Groves (VCG) auction.”). And the VCG algorithm can be clearly described and interpreted.7373.  Id. at 442-45 (describing how the VCG algorithm works). If it is “technically feasible” to describe analogous algorithms, there are likely no technical barriers to Facebook sharing its algorithms.
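
For readers unfamiliar with VCG auctions, the following minimal sketch illustrates the mechanism's core logic: slots are assigned to maximize total value, and each winner pays the welfare loss its presence imposes on the other bidders. This is a textbook brute-force implementation with hypothetical inputs, offered only to show that such an algorithm is compact and fully describable; it is not Facebook's production code.

```python
from itertools import permutations

def total_value(assignment, bids, ctrs):
    # Welfare of an assignment: each bidder's value per click times the
    # click-through rate of the slot it occupies.
    return sum(bids[bidder] * ctrs[slot] for slot, bidder in enumerate(assignment))

def vcg_auction(bids, ctrs):
    # Assign len(ctrs) ad slots to maximize total value, then charge each
    # winner the externality it imposes on everyone else.
    n, k = len(bids), len(ctrs)
    best = max(permutations(range(n), k), key=lambda a: total_value(a, bids, ctrs))
    payments = {}
    for slot, bidder in enumerate(best):
        others = [b for b in range(n) if b != bidder]
        welfare_without = max(
            total_value(a, bids, ctrs)
            for a in permutations(others, min(k, len(others)))
        )
        welfare_with = total_value(best, bids, ctrs) - bids[bidder] * ctrs[slot]
        payments[bidder] = welfare_without - welfare_with
    return best, payments

# Three hypothetical advertisers bid for two News Feed ad slots.
winners, payments = vcg_auction(bids=[10, 6, 4], ctrs=[1.0, 0.5])
print(winners)   # (0, 1): bidder 0 takes the top slot, bidder 1 the second
print(payments)  # {0: 5.0, 1: 2.0}: the standard VCG externality prices
```

If an auction of this sophistication can be stated in a few dozen lines and verified by hand, it is hard to see a technical obstacle to disclosing comparable ranking logic to the Board.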

Second, the Bylaws allow Facebook to withhold information where it “cannot or should not be provided because of . . . data protection restrictions or concerns.”7474.  Oversight Board Bylaws, supra note 38, art. 2, § 2.2.2. While this provision seems designed to protect user data held by Facebook, data protection concerns could be raised by sharing algorithms with the Board. Indeed, much of Facebook’s value consists in its intellectual property, including its algorithms.7575.  Jim Farmer, Facebook, Zuckerberg and Intellectual Property, Intellectual Property (Aug. 2011), https://www.immagic.com/eLibrary/ARCHIVES/GENERAL/IMM/I110708F.pdf (https://perma.cc/WL47-QVVN) (“Facebook is a unique company – only seven years after it was founded, it is valued at $90 billion (£55.1 billion). Facebook is also a company whose only assets are intellectual property – designs, computer algorithms (business processes) and talent.”); see also Jeffrey Leon, Facebook Heads to Trial in Tax Dispute that Could Cost $9 Billion, Bloomberg Tax (Feb. 14, 2020, 12:00 PM EST), https://news.bloombergtax.com/daily-tax-report/facebook-heads-to-trial-in-tax-dispute-that-could-cost-9-billion (https://perma.cc/V9U8-WRRX) (describing an IRS suit against Facebook, in which the government alleges that Facebook’s intangible assets, including trademarks and copyrights, are “worth $13.8 billion—more than twice as much as Facebook’s $6.5 billion valuation”). But Facebook’s algorithms could be shared with the Board in a way that mitigates the risk of data leaks. Facebook could use confidentiality agreements, security arrangements, and encryption to ensure that its algorithms are not inadvertently publicized. And, in deciding whether to withhold data, Facebook should balance the risk of data loss against the benefits of disclosure. As the first Part of this Essay demonstrates, Board review of algorithms could help to address some of the most serious problems facing Facebook.

Finally, Facebook may decline information requests where it “determines that the information is not reasonably required for decision-making in accordance with the intent of the charter.”7676.  Oversight Board Bylaws, supra note 38, art. 2, § 2.2.2. But, as noted above, algorithms are so central to expression on Facebook that it is hard to argue that they are not “reasonably required” for the Board to fulfill its responsibilities in accordance with the “intent of the charter.”

B. The Board Has the Power to Issue Policy Recommendations About Algorithms

If the Board has the power to access and analyze Facebook’s algorithms, what can it do with this power? One answer to this question is highlighted above: the Board can use information about how Facebook works to contextualize its decisions on content. A post with potentially misleading information is a far greater threat to safety if it is viewed by five million people than if it is viewed by five people. But the Board can use information about algorithms in an even more consequential way: it can make recommendations to Facebook about how its algorithms work.

1. Facebook’s Algorithms Are Amenable to Board Policy Recommendations

In addition to adjudicating appeals from Facebook users, the Board can issue “policy guidance,”7777.  See, e.g., Oversight Board Charter, supra note 21, art. 1, § 4 (“[T]he board can provide policy guidance, specific to a case decision or upon Facebook’s request, on Facebook’s content policies.”). For other references to this power, see id. art. 3, § 7.3; id. art. 4; and id. art. 5, § 3. “policy advisory opinion[s],”7878.  Id. Introduction (“[T]he board is to protect free expression . . . by issuing policy advisory opinions on Facebook’s content policies.”). For other references, see id. art. 4; and id. art. 5, § 1. and “recommendations.”7979.  Id. art. 1, § 5 (“Members [of the Board] will collaborate . . . [to] issue . . . policy recommendations using clearly articulated reasoning.”). For other references, see id. art. 3, § 1; and id. art. 5, § 3. These terms are used in a similar way to one another throughout the Charter, but none are precisely defined.8080.  For a discussion of the possible different meanings of these terms, see discussion infra Section III.B.2. The Board’s policy guidance is “advisory” and may “concern the clarification of a previous decision by the board or guidance on possible changes to Facebook’s content policies.”8181.  Oversight Board Charter, supra note 21, art. 3, § 7.3.

There are two limits on the Board’s ability to issue guidance to Facebook. First, the Board cannot issue guidance on its own motion. Instead, it must do so either as an addendum to its decision in a “case” or “upon Facebook’s request.”8282.  Id. art. 1, § 4. And, second, the guidance must relate to “Facebook’s content policies.”8383.  Id. art. 3, § 7.3; see also id. Introduction (stating that the Board may issue “policy advisory opinions on Facebook’s content policies”). The second limit is much more significant and could constrain the Board’s power to issue guidance about algorithms. Are algorithms “content policies” that are proper subject matter for an advisory opinion? There are four reasons to think they are.

First, algorithms fit neatly within the ordinary meaning of the phrase “content policies.” A “policy” is a “course or method of action selected from among alternatives . . . to guide and determine present and future decisions.”8484.  Policy, Merriam-Webster (2021), https://www.merriam-webster.com/dictionary/policy (https://perma.cc/9L5G-LKDG). An “algorithm” is the “set of rules a machine . . . follows to achieve a particular goal.”8585.  What Does Algorithm Mean?, Merriam-Webster (May 27, 2021), https://www.merriam-webster.com/dictionary/algorithm (https://perma.cc/JNQ9-2V3C). Therefore, like other policies, an algorithm applies rules to new situations. It is immaterial that an algorithm applies those rules through an automated process, rather than through human judgment. Just as Facebook’s Community Standards reflect Facebook’s rules for user content, so too Facebook’s algorithms reflect Facebook’s rules for content reach and spread. Both Community Standards and algorithms are “policies” that regulate Facebook content.

Indeed, the notion that an algorithm is a type of policy is confirmed by the computer science literature. Take just one example. In computing, “scheduling” is the method by which work is assigned to resources (e.g., Central Processing Units (CPUs) or Random Access Memory (RAM)). The aim of scheduling is to assign processes to resources over time in a way that meets system objectives, “such as response time, throughput, and processor efficiency.”8686.  William Stallings, Operating Systems: Internals and Design Principles 392 (4th ed. 2004). Scheduling according to these objectives is achieved by “scheduling algorithms,” with different algorithms prioritizing different criteria.8787.  Id. at 398. Importantly for our purposes, “scheduling algorithms” are interchangeably referred to as “scheduling policies” in the literature.8888.  See, e.g., Remzi H. Arpaci-Dusseau & Andrea C. Arpaci-Dusseau, Operating Systems: Three Easy Pieces § 7.1 (2018) (describing various scheduling algorithms as “scheduling policies”). This suggests that it is in no way forced or awkward to call an algorithm a “content policy.” Indeed, doing so fits with the industry vernacular.
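
A minimal, hypothetical sketch shows how thin the line between "algorithm" and "policy" is in this setting. In the toy scheduler below (the names and job fields are invented for illustration), the "policy" is nothing more than the choice of which algorithm to apply:

```python
def fifo(jobs):
    # First-come, first-served: run jobs in arrival order.
    return sorted(jobs, key=lambda job: job["arrival"])

def shortest_job_first(jobs):
    # Favor short jobs to reduce average turnaround time.
    return sorted(jobs, key=lambda job: job["runtime"])

# The scheduler's "policy" is simply the algorithm it is configured to use.
POLICIES = {"fifo": fifo, "sjf": shortest_job_first}

def schedule(jobs, policy="fifo"):
    return POLICIES[policy](jobs)

jobs = [{"name": "A", "arrival": 0, "runtime": 9},
        {"name": "B", "arrival": 1, "runtime": 2}]
print([j["name"] for j in schedule(jobs, policy="sjf")])  # ['B', 'A']
```

Swapping one function for the other changes which jobs the system privileges, just as swapping ranking rules changes which posts Facebook privileges; in both cases, the algorithm is the policy.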

Facebook’s community-standard moderation and its algorithmic decisionmaking look even more similar when we consider that some “types of problematic content are addressed . . . directly through the [algorithmic] ranking process.”8989.  Nick Clegg, You and the Algorithm: It Takes Two to Tango, Medium (Mar. 31, 2021), https://nickclegg.medium.com/you-and-the-algorithm-it-takes-two-to-tango-7722b19aa1c2 (https://perma.cc/24UX-W3V8). That is, Facebook decides that some content “might not violate Facebook’s Community Standards,” but is “still problematic.”9090.  Id. For this content, Facebook “reduces the[ ] distribution” of the offending post.9191.  Id. In other words, both Facebook’s downranking algorithms and its content-moderation decisions police content for very similar reasons. Indeed, the practical effect of each decision—moderation or downranking—might be identical. While moderation removes content altogether, downranking can render a post effectively invisible to other users. So it is straightforward to call an algorithm a “content policy” when that algorithm controls content based on a determination that content is “problematic.”

Second, any ambiguity in the phrase “content policies” should be construed in favor of including algorithms. This is supported both by the nature of the Charter and by the function of the Board. To paraphrase John Marshall, we must never forget that this is a Charter we are expounding.9292.  McCulloch v. Maryland, 17 U.S. 316, 407 (1819) (“[W]e must never forget that it is a constitution we are expounding.”). Just as the U.S. Constitution does not “partake of the prolixity of a legal code,”9393.  Id. neither does the Charter. The Charter is capaciously drafted and leaves much further definition to the Bylaws. As Kate Klonick’s research shows, the Charter was drafted to be “intentionally vague” in order to allow later exposition to deal with new situations.9494.  Klonick, supra note 37, at 2462. This suggests that the nature of the Charter counsels in favor of a broad reading. Whereas a narrow definition of “content policies” would unduly restrict the hard-to-amend Charter, a broad definition could be further refined in the Bylaws.

Plus, the purpose of the Board supports a broad reading of “content policies.” As noted above, the Board’s mission is to balance speech with Facebook’s other values—authenticity, safety, privacy, and dignity.9595.  Oversight Board Charter, supra note 21, art. 1, § 4. It cannot do this unless it can issue guidance on Facebook’s algorithms. Algorithms are often the reason content has a large impact on safety, privacy, or dignity. Moreover, if algorithms are unreviewable, they can be used to do an end run around the Board’s jurisdiction. Facebook can simply downrank content using an algorithm rather than moderate it.9696.  Clegg, supra note 89. Take a simple hypothetical. A user posts misleading information that Facebook fears could have an adverse effect on public health. But Facebook suspects that a decision to remove the post would be borderline under its Community Standards. And so it has a choice. It can either moderate the content and remove it from the platform, or it can downrank the post so it does not appear on other users’ News Feeds. The practical effect of each decision is to render the post a nullity, but only the former is amenable to Board review. Allowing the Board to issue policy recommendations about algorithms would at least provide indirect Board oversight of the latter class of decisions.9797.  For more on this, see discussion infra Section V.B.
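
A short sketch makes the end run concrete. The function below is entirely hypothetical (nothing like it appears in any Facebook documentation), but it captures how the two enforcement paths diverge: removal produces an appealable decision, while downranking buries the post without generating any decision to appeal.

```python
def enforce(post, violates_standards, merely_problematic):
    # Hypothetical sketch of the two enforcement paths described above.
    if violates_standards:
        post["visible"] = False      # moderation: removal is a "decision,"
        post["appealable"] = True    # so the user can appeal up to the Board
    elif merely_problematic:
        post["rank_weight"] = 0.05   # downranking: the post survives but
        post["appealable"] = False   # rarely surfaces, and there is no
                                     # removal decision for Board review
    return post
```

On either branch the post is, as a practical matter, neutralized; only the first branch ever reaches the Board.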

Third, reading “content policies” to encompass algorithms fits with the definition of “Facebook policies” in the Bylaws. The Bylaws define “Facebook policies” as “Facebook . . . content policies and procedures that govern content on the platform (e.g., Community Standards or Community Guidelines).”9898.  Oversight Board Bylaws, supra note 38, at 35. Algorithms fit within this definition because it is natural to refer to an algorithm as a “procedure . . . govern[ing] content on [Facebook’s] platform.” And it is clear from the Bylaws that Community Standards and Community Guidelines are not the only types of content policy—rather they are examples of a greater set of policies governing content on Facebook.

Fourth, the Board’s advisory opinions to date suggest that it has the power to issue policy guidance covering algorithms. Crucially, in Case Decision 2020-004-IG-UA, a case concerning Facebook’s decision to remove a breast cancer awareness post, the Board issued a policy advisory statement in which it recommended that Facebook: (1) “[i]mprove the automated detection of images . . . to ensure that posts raising awareness of breast cancer symptoms are not wrongly flagged for review”; and (2) “[i]mplement an internal audit procedure to continuously analyze a statistically representative sample of automated content removal decisions to reverse and learn from enforcement mistakes.”9999.  Case Decision 2020-004-IG-UA, Facebook (Jan. 28, 2021), https://oversightboard.com/decision/IG-7THR3SI1 (https://perma.cc/CU6N-W8T5). Both of these recommendations touch on how Facebook’s algorithms operate. The first recommendation explicitly suggests that Facebook improve its image-detection algorithm. And the second proposes a manual audit to ensure that Facebook’s algorithmic image-detection mechanisms are working correctly. Similarly, in Case Decision 2020-006-FB-FBR, which reversed Facebook’s removal of a misleading post relating to COVID-19, the Board suggested that Facebook “[c]onduct an assessment of its existing range of tools to deal with health misinformation” and that it “[c]onsider the potential for development of further tools that are less intrusive than content removals.”100100.   Case Decision 2020-006-FB-FBR, supra note 59. Again, both recommendations implicate Facebook’s algorithms—Facebook’s “tools to deal with . . . misinformation” are undoubtedly algorithmic. And so it is only a small step forward for the Board to first access Facebook’s algorithms and issue more particularized recommendations about how they should operate.

2. Even if Facebook Algorithms Are Not “Content Policies,” the Board May Issue Recommendations About Them

The previous Section argued that Facebook’s algorithms are “content policies” about which the Board can issue advisory opinions and policy guidance. But even if this argument fails to persuade, a second textual argument may nevertheless suggest that the Board can issue recommendations about algorithms. This is because the text of the Charter imposes no limit on the content of the Board’s recommendations.

The Charter indicates that the Board can issue three different types of guidance to Facebook. Recall that the Charter uses the terms “advisory opinions,” “policy guidance,” and “recommendations” to refer to the Board’s power to issue statements that are not “decisions.” Unhelpfully, none of these terms are defined either in the Charter or in the Bylaws. But it seems likely that each refers to a discrete action the Board may take. This is supported by the fact that the Charter often refers to two of these actions in the same sentence. For example, Article 4 provides that when a Board decision “includes policy guidance or a policy advisory opinion,” Facebook must take action to analyze the guidance.101101.   Oversight Board Charter, supra note 21, art. 4 (emphasis added). Similarly, in Article 5, Facebook commits to “seek policy guidance from the board and . . . to tak[e] action on the board’s . . . recommendations.”102102.   Id. art. 5, § 3 (emphasis added).

Basic principles of interpretation confirm that these terms should take on different meanings. Consider the following two canons of construction. First, a material variation in terms suggests a variation in meaning.103103.   See Antonin Scalia & Bryan A. Garner, Reading Law: The Interpretation of Legal Texts 170-73 (2012) (“[W]here the document has used one term in one place, and a materially different term in another, the presumption is that the different term denotes a different idea.”). And so where a document “says land in one place and real estate later, the second provision presumably includes improvements as well as raw land.”104104.   Id. at 170. Second, and relatedly, a text should be construed to give every word effect; a word should not “needlessly be given an interpretation that causes it to duplicate another provision or to have no consequence.”105105.   Id. at 174. Indeed, “words with no meaning . . . should be regarded as the exception rather than the rule.”106106.   Id. Applied to the Charter, these basic interpretive presumptions suggest that “policy guidance,” “policy advisory opinion[s],” and “recommendations” are different types of actions the Board may take.

The best reading may be that “policy advisory opinions” are statements issued by the Board appended to case-specific content decisions,107107.   This fits with the Board’s use of the term “policy advisory statement” to describe guidance appended to its case decisions. See, e.g., Case Decision 2020-005-FB-UA, Facebook (Jan. 28, 2021), https://oversightboard.com/decision/FB-2RDRCAVQ (https://perma.cc/D8Z5-RYZ3) (“In a policy advisory statement. . . .”). “policy guidance” is advice requested by Facebook outside of a specific decision,108108.   Oversight Board Charter, supra note 21, art. 3, § 7.3 (“Independent of any pending case, Facebook may request policy guidance from the board.”). and “recommendations” are the concrete, particularized steps the Board suggests Facebook should take.109109.   Again, this fits with the Board’s use of the term “recommendations.” The Board’s advisory opinions to date include bullet-point lists of specific steps the Board recommends to Facebook. See, e.g., Case Decision 2020-006-FB-FBR, supra note 59 (“[T]he Board recommends that Facebook . . . .”). And so although an advisory opinion or policy guidance document must broadly relate to Facebook’s “content policies,” the Board’s concrete recommendations may have a broader ambit.

The distinction between policy guidance, advisory opinions, and recommendations is important. While “advisory opinions” and “policy guidance” must pertain to “Facebook’s content policies,”110110.   Oversight Board Charter, supra note 21, art. 3, § 7.3; see also id. Introduction (“[T]he board . . . [may] issu[e] policy advisory opinions on Facebook’s content policies.”). there is no such textual limit on the Board’s power to issue recommendations. On this view, it would not matter whether or not algorithms are “content policies”—the Board could issue a recommendation regardless.

IV. The Board Is Competent to Review Facebook’s Algorithms

The power to access Facebook’s algorithms would be meaningless if the Board were not competent to analyze them. Fortunately, the Charter gives the Board access to technical experts and outside research, both of which could provide technical advice to aid the Board’s review. And, as detailed below, in its early decisions the Board has demonstrated an encouraging ability to draw on outside expertise.

Even if the Board members themselves do not have the technical background or training to understand and analyze Facebook’s algorithms, the Board has the power to leverage expert advice. It may “gather additional information,” submit “research requests,” and rely on “subject matter experts.”111111.   Id. art. 3, § 3. The Board’s ability to consult with experts is buttressed by Facebook’s commitment to “provid[e] funding to support the board’s operations.”112112.   Oversight Board Bylaws, supra note 38, art. 2, § 1.3; see also Oversight Board Charter, supra note 21, art. 5, § 1 (“The board will be funded . . . to support its operations and expenses.”). This suggests that the Board could fill any gaps in its technical competence by recruiting computer scientists to guide its review of algorithms.

Encouragingly, the Board has already made extensive use of subject matter experts in its early cases. For example, in Case Decision 2020-002-FB-UA—overturning Facebook’s decision to remove a post concerning Muslims in Myanmar—the Board consulted “context experts,” including “[a]n independent research institute headquartered at the University of Gothenburg,” and drew on “linguistic expertise” from “Lionbridge Technologies.”113113.   Case Decision 2020-002-FB-UA, Facebook (Jan. 28, 2021), https://oversightboard.com/decision/FB-I2T6526K (https://perma.cc/2GX3-8R27); see also Case Decision 2020-003-FB-UA, Facebook (Jan. 28, 2021), https://oversightboard.com/decision/FB-QBJDASCV (https://perma.cc/5MQ3-ZLBP) (consulting experts); Case Decision 2020-006-FB-FBR, supra note 59 (same); Case Decision 2020-007-FB-FBR, Facebook (Feb. 12, 2021), https://www.oversightboard.com/decision/FB-R9K87402 (https://perma.cc/4U82-J3UH) (same). It is particularly noteworthy that the Board explained the nature of its reliance on outside experts. Disclosing the identities of the research institutes involved promotes transparency and reduces the risk that one-sided expert advice could inappropriately influence the Board. In sum, the Board’s reliance on outside experts so far is an encouraging sign that it could rely on research institutes and computer scientists to assist its review of Facebook’s algorithms.

Finally, from Facebook’s perspective, there is only a very limited threat that one-sided expert advice could influence the Board’s decisionmaking and negatively impact Facebook’s platform. There are two significant safeguards in place to mitigate this risk. First, it would be open to Facebook to present its own expert views on algorithms. For instance, in Case Decision 2020-003-FB-UA, the Board noted Facebook’s use of outside expertise in forming its content policy. “The Board welcomes Facebook’s explanation that its designation of this term as a slur followed consultations with local experts and civil society organizations aware of its contextual usage.”114114.   Case Decision 2020-003-FB-UA, supra note 113. Second, unlike the Board’s case-specific decisions, the Board’s guidance, advisory opinions, and recommendations are non-binding. Facebook is required to “implement” the former “promptly,” but need only “analyze[]” and “consider[]” the latter.115115.   Oversight Board Charter, supra note 21, art. 4. And so where the Board makes a recommendation about algorithms, Facebook may reject that recommendation.

V. How Should the Board Use Its Power to Review Algorithms?

So far, we have established that the Board has a strong textual claim to access Facebook’s algorithms and to issue policy recommendations about them. This Part turns to potential applications of these powers. The first Section runs through two example cases in which the Board’s power to review algorithms could be leveraged. The second Section considers how the Board should respond if Facebook were to deny that it has the power to review algorithms.

A. Two Example Applications
1. Future Decisions on Inflammatory Speech by World Leaders

As an initial example, consider a hypothetical in which the Board is again called upon to adjudicate the suspension of a world leader for allegedly inflammatory and dangerous speech on Facebook (as it was called upon to review the suspension of President Trump). As discussed above, algorithms are important context for the reach of content posted on Facebook. And so the Board should ask Facebook for information on how algorithms amplified the leader’s posts. Access to algorithms might also tell the Board something about the types of users most likely to have had the posts displayed in their News Feeds. Because algorithms ensure that every user’s Feed is “personalized to our individual tastes and preferences . . . reflecting our unique network of friends, Pages, and Groups,”116116.   Clegg, supra note 89. it is very likely that some users were exposed to the posts more systematically than others. Facebook might even be able to provide the Board with data on which other types of content correlated with the posts on users’ News Feeds. For instance, were other incendiary claims frequently displayed to users alongside the leader’s posts? All of this information would allow the Board to make an informed decision about the risk of violence posed by the speech in question.

In addition, the Board could request information about how Facebook plans to deal with the leader’s posts in the future, if the Board were to reinstate them. Does Facebook intend to treat the leader differently if he or she is reinstated? Will it amplify their posts in the same way? Will it monitor what other types of content are displayed to users alongside their posts? If Facebook has no plans to amend its algorithmic treatment of the leader, this would be a good opportunity for the Board to issue a policy recommendation to Facebook about ways in which it could do so.

2. Downranking Algorithms

A second example application illustrates how the Board could address algorithms that downrank “problematic” content.117117.   Id. This type of algorithm is particularly difficult to address because it is necessarily insulated from Board review. Where Facebook downranks content, there is no initial “decision” for a user to appeal and so they cannot “exhaust[] appeals”—a necessary prerequisite to submitting a case for Board review.118118.   Oversight Board Charter, supra note 21, art. 2, § 1.

Fortunately, there is an avenue for the Board to exercise indirect review of Facebook’s downranking algorithms. Consider the facts of Case Decision 2020-006-FB-FBR.119119.   Case Decision 2020-006-FB-FBR, supra note 59. The post at issue contained misinformation regarding COVID-19 and was removed by Facebook because, in its view, it presented a risk to public safety. Importantly, one factor in the Board’s decision to overrule Facebook was that it failed to “demonstrate why it did not choose a less intrusive option than removing the content.”120120.   Id. And therein lies the hook for Board review of downranking algorithms. In future decisions involving misinformation, the Board should ask Facebook how its algorithms identify and downrank types of problematic content; this is critical context if the Board is to understand whether Facebook had a “less intrusive” tool at its disposal.

Moreover, with this information in hand, the Board could issue policy recommendations to Facebook, suggesting how Facebook could better use its downranking algorithms. For instance, the Board could suggest that where Facebook downranks a post based on a determination that it is “problematic,” Facebook should notify the user and allow them to appeal the decision. This recommendation would both improve transparency and, more importantly, prevent Facebook from doing an end run around the Board’s jurisdiction by downranking content rather than moderating it.121121.   An end run is prevented if downranking decisions are appealable by users because they can ultimately be submitted to the Board for review.

B. Putting Pressure on Facebook

What if Facebook refuses to recognize that the Board has the power both to access Facebook’s algorithms and to make recommendations about them? Given Facebook’s current stance that “content-ranking algorithms are not currently in the scope of the board’s appeal process,”122122.   Heilweil, supra note 16. some pushback from Facebook might be expected. One possible solution is as follows.

To head off opposition, the Board should leverage the power of public opinion. On a cynical view, the Board’s value to Facebook is that it is a “fig leaf of legitimacy that lets [Facebook] avoid responsibility for controversial calls.”123123.   Editorial Board, Will Facebook’s Oversight Board Actually Hold the Company Accountable?, Wash. Post (May 17, 2020, 12:23 PM EST), https://www.washingtonpost.com/opinions/will-facebooks-oversight-board-actually-hold-the-company-accountable/2020/05/17/e1d46f50-93cd-11ea-9f5e-56d8239bf9ad_story.html (https://perma.cc/S57E-G43W). But the Board can only provide cover to Facebook if the company follows the Board’s decisions. The Board should leverage this position to its advantage. If Facebook declines a Board request for information about algorithms, the Board should say so in its published opinions. If Facebook decides not to implement a Board policy recommendation, the Board should not be afraid to argue back in later opinions. Indeed, because each case is reviewed by a different “panel of board members,”124124.   Oversight Board Charter, supra note 21, art. 3, § 2. repeated assertions of the same policy proposal would be even more powerful.

Just as the Supreme Court, Congress, and the President occasionally engage in constitutional dialogue, each sending signals to their coordinate branches of government, so too could the Board and Facebook engage in a back-and-forth over Facebook’s policies. As with constitutional dialogue, conversation between the Board and Facebook would play out in public. And so public opinion could play a role in settling institutional conflict. Indeed, even the fact of disagreement between Facebook and the Board could promote conversations among Facebook’s users, the public at large, and even regulators, about the best way forward for the platform.

Conclusion

This Essay has challenged the standard narrative that the Board does not have the power to review Facebook’s algorithms. A careful reading of the Board’s Charter reveals that it already has the authority both to access Facebook’s algorithms as part of its standard review process and to make recommendations about algorithms’ impact on Facebook.

This is an important observation because many of the serious problems confronting Facebook today—misinformation, radicalization, and community safety—are driven by Facebook’s algorithms. Because algorithms control the reach, spread, and impact of content on Facebook, the harm a particular piece of content can inflict is directly tied to its algorithmic amplification. If the Board is to live up to its mission and hold Facebook accountable to its values, then it is critical that the Board begin to wield the powers identified in this Essay.

†  Law Clerk, U.S. Court of Appeals for the Third Circuit; Yale Law School, J.D. 2021; M.A. (Law) Oxford, 2017. My thanks to David Schulz, Robert Post, and the Media Freedom and Information Access Clinic for encouragement and helpful insights. And to Michael Loughlin for many hours spent bouncing around ideas. I am also grateful to the editorial staff of the Yale Journal on Regulation for their suggestions throughout the publication process. This Essay reflects only my personal views and not the views of the U.S. Court of Appeals for the Third Circuit or any member thereof.