Reply to the Dream Team: The Critically Constructive Pathways of Building Equality Machines, by Orly Lobel
This is the final post in a symposium on Orly Lobel's The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future, selected by The Economist as a best book of 2022. All posts from this symposium can be found here. Further reviews can be found at Science, The Economist, and Kirkus.
Lobel will be in NYC (NYU, March 23 at 1pm) and New Haven (Yale, March 28 at 5pm) for book talks, which are open to all. Additional book tour dates can be found here.
The dream of every author is to have smart readers who comment on and react to her writing. The participants of this symposium are a dream team of brilliant scholars, each of whom inspires me with their own work. Sparking a continued discussion and elaboration on how to forge the path forward—how we can, together as a global public, envision and build equality machines for a better future—was the raison d'être of The Equality Machine. As Matthew Bodie writes, one of the ultimate goals of the book is "to entice reluctant allies onto the battlefield for the future of tech with an optimistic vision for the future." Some of the brightest—and sometimes reluctant—allies are here as contributors to this symposium. Several of them rightly describe how the vision of leveraging artificial intelligence and digital technology for good will only be realized if the political and social will to do so exists, and if the right laws and policies are put in place. And there are no better leaders to shape these laws and policies than the thought leaders who have chimed in here, drawing on their broad expertise in work law, intellectual property law, health and biomedicine, administrative law, security law, criminal justice, financial regulation, and more.
In a new article, "The Law of AI for Good," I seek to apply insights from The Equality Machine by critically examining the legislative and regulatory reforms currently being introduced in the United States and Europe, and I argue that these are limited in their reach and too narrow in their vision of what a tech regulatory scheme can and should look like. Indeed, in The Equality Machine, I argue that, ironically and perversely, the focus on AI harms has meant fewer rather than more government interventions. As Talia Gillis writes, the EU draft AI law, for example, avoids defining algorithmic discrimination and instead relies on the flat, ambiguous, too often problematic (as Nicholson Price and his collaborators show in a recent article), and—as I argue—simply erroneous solution of inserting "humans-in-the-loop" as a panacea for all AI risks. Gillis correctly advises that "effective regulation needs instead to focus on a clearer articulation of the goals of AI regulation and careful consideration of how data collection and automation can actually promote fairness."
This symposium's contributors bring such clarity in their articulation of both the goals and methods of AI regulation. As Jessica L. Roberts puts it, regulators too must be innovators and disrupters. Drawing on her extensive health policy research, including forthcoming coauthored work with Peter Salib, Roberts provides important policy solutions for building health equality machines, including mandating representative data collection. This resonates with the shift I recently called for: from front-end data minimization and privacy (including what I have called the narrowing of biopolitical opportunities) to regulating data misuse and harms on the back end. Roberts rightly recognizes that hard questions need to be answered and hard choices need to be made among competing social goals, including accelerating innovation and access to technology while ensuring accuracy and inclusion.
Understanding how technology interacts with such thorny, enduring normative challenges is at the heart of my analysis in The Equality Machine. I argue that hard choices and competing normative principles have always been central to the democratic process, and I demonstrate how technology can, at times, both surface those tensions in more transparent ways and mitigate some of the toughest questions. Indeed, as several of this symposium's essays note, contrary to its frequent characterization as a "black box," AI presents an opportunity for more rather than less transparency. As Mark Lemley writes, AIs have to show their work. Lemley correctly notes that AI offers something that human decision-making cannot: the chance to be deliberate in designing systems that recognize and confront the imperfections too often hidden in our human society.
Taking a more positive approach toward digitization and automation can, unsurprisingly, be misread as implying that illuminating technology's potential means less government intervention. To the contrary, when we take a more balanced approach to the risks as well as the benefits of technology, the policy possibilities expand. Elena Chachko, who has led the way in examining the geopolitical turn of platforms, describes the limits of a public-private governance approach in the digital era, calling for a more robust regulatory scheme that ensures the benefits of collaborative governance. In the context of content moderation (and platform regulation more broadly), this can mean that rather than the flat on-off debates we are currently having (as with the debate over Section 230 in Gonzalez v. Google LLC, soon to be decided by the Supreme Court), government can incentivize or mandate specific moderation practices and processes. In the background, a big elephant in the room of the new governance approach is the need to ensure competitive markets—a focus of much of my recent research and my two previous books, Talent Wants to Be Free and You Don't Own Me. Tech industries today, dominated by category kings – the Big Five and beyond – present a particular challenge to building equality machines and to ensuring distributive justice that spreads the benefits of technology and data. To this end, I argue, among other things, that we should legally mandate far more access both to data and to the capabilities of AI as a public good.
Data privacy is a double-edged sword. As Talia Gillis, Stephanie Bornstein, and Chris Slobogin have each shown in their research on mortgage markets, workplace hiring, and criminal justice, respectively, allowing an algorithm to know the inputs and affirmatively consider protected identities (rejecting what Gillis has coined "the input fallacy") can in fact be the more promising path for combating inequality and bias and, as Bornstein writes, for "developing law and procedures for better monitoring, accountability, and transparency." In his book Just Algorithms: Using Science to Reduce Incarceration and Inform a Jurisprudence of Risk, Slobogin offers a deep analysis of the automation underway in the U.S. criminal justice system, illuminating how, contrary to the dominant narratives about automation, algorithms are more accurate than people at assessing risk. Just as with the spheres I explore in The Equality Machine—work, education, media and political participation, health, care, family and intimate relations—Slobogin contends that AI can provide a valuable quantification of the trade-offs and hard normative questions we have always had to grapple with in designing regulatory systems.
A major frontier in the quest to build and successfully scale equality machines is what I call the nascent field of behavioral human-machine trust. In The Equality Machine, I sought to examine what we already know about the generational, cultural, gender, historical, and other differences in how people interact with disembodied algorithms and embodied robots. Building on my collaborative behavioral policy research, including with On Amir and Yuval Feldman, I argue that—just as the field of behavioral science first developed in marketing departments, catering to corporate profit and focused on consumer behavior, and only later came to be recognized as significant for policy-making focused on well-being and welfare—so should policy-makers now turn their attention to understanding the human biases that lead to irrational algorithmic aversion and algorithmic adoration. Beyond the hard work of envisioning, designing, and building equality machines, deploying and using equality machines should entail rational trust.
Pallavi Bugga and Nicholson Price rightly contend that human-AI interactions are hard to model and to regulate, but we have no choice at this point but to research and translate what we know about human-machine interactions into better policy. As Bugga and Price suggest, I am hopeful that smart policy choices can help both humans and machines become agents of equality and fairness. Bugga and Price write that virtuous cycles are unlikely to occur by accident. But importantly, we need to evaluate the shifts to digital technology through a dynamic comparative lens—not just on whether they outperform human decision-making today, but also on how readily we can detect flaws in each system, human and machine, and on how much capacity each, brain and algorithm, has to improve over time. As Rachel Arnow-Richman writes, we need to remember that the human mind, not artificial intelligence, is the ultimate "black box." As Arnow-Richman describes, "accepted research tells us that in every interaction we rely on faulty assumptions, stereotypes, and cognitive errors we can neither observe nor fully understand. And our powers of self-correction are inherently limited." And as Harry Surden compellingly describes, technology can either ameliorate or exacerbate human biases, and his essay provides a rich framework for how technology interacts with well-documented behavioral effects, including extremity bias, negativity bias, availability bias, naïve realism, and epistemic bubbles. Indeed, I see Surden's essay alongside Bugga and Price's in this symposium as part of the nascent scholarly inquiry into behavioral human-machine trust.
Matt Bodie fairly asks whether The Equality Machine goes far enough in its definitions of equality. He sees the book's vision of equality as more focused on uprooting discrimination and disparities based on race, ethnicity, and gender, "leaving class-based economic inequality somewhat in the shadows." He further suggests that an underlying current of the book is a more radical vision of equality, seen, for example, in my call for much greater access to private data for public purposes. I take quite a bit of time in the book articulating some of my priors—most notably my global justice stance and my social democratic liberal values. At the same time, I purposely leave some of the analysis and choices open. I describe how different societies can reasonably disagree about how to balance individual freedoms with collective rights, and how different lines may reasonably be drawn, depending on context and era, between privacy and transparency, liberty and safety, and speech and equality, to name a few fundamental principles. My own stance has been shaped by my personal path, education, and diverse worldviews—from Rawlsian traditions (and from John Rawls himself, when I held a Safra Fellowship at the Center for Ethics), through my research on prosocial behavior, to my dual Israeli-American identity, including my past military service and my Californian present. In The Law of AI for Good, I confront these dualities and tensions within contemporary techno-liberalism head-on, arguing that,
The pathologies of contemporary technology policy may be iterations of larger pathologies of liberal democracies and particularly the American civil rights tradition: a focus on law-as-negative-constraints rather than governance; a focus on rights as civil liberties as opposed to socioeconomic welfare; a focus on anti-classification as opposed to substantive equality and distributive justice; a focus on the individual as the unit of protection as opposed to the collective good; a focus on adaptive as opposed to anticipatory regulation; and a focus on protecting the status quo as opposed to planning for and investing in change.
Still, I recognize that these balances need to be constantly worked out, as always, through democratic deliberation, and that the pendulums of progressive reform are always moving. I am deeply grateful to Colleen Chien for her leadership in leveraging technology for progressive reform, and for her reminder that "the context [of] the pursuit of equality must always be understood, continually nurtured, and never taken for granted." I absolutely subscribe to her call for more robust notions of equality than the liberal colorblind lens. As both Chien and Arnow-Richman insightfully frame it, inequality creates opportunity for change. Most importantly, going back to the evergreen tensions in liberal society, Chien suggests that "the demand for justice is not necessarily that all incomes or outcomes be equal, or even that every single opportunity be open to all, but that all have the right to not only survive but thrive."
Relatedly, Tomer Kenneth and Oren Tamir worry that leveraging AI and robots for social purposes such as care work and education means a lost opportunity to reimagine the economy of care and social welfare in democratic and human societies in the first place. In other words, Kenneth and Tamir make salient the discord between incremental improvement and radical change. I take seriously their critique that overreliance on the benefits of technology can be problematic and can possibly, as they warn, breed complacency. Still, I have long rejected the notion that one needs to be purely critical, and to critique from the outside, to avoid being coopted or complacent. As Harry Surden writes, we need to be neither reflexively critical nor unrealistically optimistic. To me, circling back to Bodie's insight about bringing more people into a constructive, skin-in-the-game, mature, and balanced effort to harness technology for good, the arguments about complacency that pervade the techlash discourse risk perpetuating—and indeed have already created, as I show in the book—vicious cycles that empower the mere few who are shaping our futures. As Kenneth and Tamir acknowledge in their vision of a "mature position," inaction and avoidance when the tools of technology are there to help are as dangerous as imperfect action. There absolutely is a role for many kinds of activism, from within and without, but we should be very wary if too many talented, ethical thinkers opt out. Critical thinking is a necessary but insufficient path to building a better future. And in many ways, as I argue in The Equality Machine, critique is easier than the art and science of building something better. As Harry Surden writes, "It is easy for the human mind to conceive of worst-case-scenario problems that AI technology might wreak on society, but it often takes much more mental effort to conceive of the subtle and incremental societal improvements in terms of scientific research, social equality, healthcare, access to information and knowledge, communication, enhancements to the arts and entertainment, standard of living, safety, new types of jobs, that technologies like AI might also bring about."
Arnow-Richman fiercely took the lead on this Yale Journal on Regulation symposium, and I am tremendously grateful to her and the journal's editors, Jacob Wirz, Elaine Hou, and Chris Walker. Arnow-Richman beautifully captures the vision of adopting an equality machine mindset when she describes how moments of failure "force us to stare unblinkingly at the depth and pervasiveness of bias in what we take for granted in the work-a-day world." AI, as Arnow-Richman insightfully writes, "affords us the rare chance to observe bias in action, to examine it unselfconsciously, reverse engineer, and ultimately chart a new course." I am so very honored to continue this critical and constructive conversation with the wonderful participants of this symposium. I hope to see many of them and the readers of the symposium soon on the road: this coming week in person in NYC (NYU, March 23 at 1pm) and New Haven (Yale, March 28 at 5pm). Now, let us build equality machines!
Orly Lobel is the Warren Distinguished Professor of Law and Director of the Center for Employment and Labor Policy at the University of San Diego.