On the Need For (and Difficulties of) Reaching A “Mature Position” About AI, by Oren Tamir & Tomer Kenneth
*This is the ninth post in a symposium on Orly Lobel’s The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future, selected by The Economist as a best book of 2022. All posts from this symposium can be found here. Further reviews can be found at Science, The Economist, and Kirkus.*
How things change. In the not-too-remote past, most of us saw advances in technology, and particularly the idea of artificial intelligence (AI), as incredibly promising—a tool beyond our wildest dreams to carry humanity forward. Hit the fast forward button to today, and darker visions of AI seem to be much more prevalent. True, the scary scenes of AI defying or controlling humans are not here (yet?). Nonetheless, already today AI presents real and substantial problems. Indeed, AI can be used and designed, and too often is used and designed, in forms that severely aggravate already existing injustices and inequalities in human society. And the increased use of AI can create new domains and opportunities for similar inequalities and injustices to occur.
Orly Lobel’s The Equality Machine seeks to recover the hopefulness of the past about AI, and to update it for the present and future. By meticulously collecting rich stories and anecdotes from so many places and contexts, and by embedding these stories within insights drawn from enormously diverse bodies of literature (including various streams in the social sciences and the humanities), The Equality Machine powerfully illustrates that many are still too quick to forget that AI can do good, and that it is already doing much good for humanity.
At the same time, Lobel rightly and consistently recognizes the challenges that the rise of AI brings with it. But, and here’s the twist, Lobel nonetheless insists that the way to deal with the risks is not to cling to a pessimistic view that paints an unfair and deeply one-sided picture of AI’s impact. Instead, The Equality Machine calls on us to adopt a sophisticated mindset that is at once hopeful and critical about AI. Furthermore, Lobel sketches several high-level principles that can serve as a kind of blueprint for policymakers and lawmakers as they keep confronting AI’s risks and opportunities.
Lobel’s book is an insightful (and fun!) read, and we’ve learnt so much from it. All this is of course entirely unsurprising to anyone who has read Lobel’s previous work. We’re also quite sympathetic to Lobel’s core argument. Like her, we are confident that the most sensible approach to AI—like virtually any issue of policy—can’t be skewed to one side, obsessing over only certain types of risks and failures. Risks and opportunities, after all, exist on all sides; they attend both action and inaction on technology, be it AI or anything else. We thus agree with Lobel that the future of AI policy (like, again, any policy domain) should strive toward what Albert Hirschman once called (quite condescendingly, to be sure) a “mature position”: one that is alert to all risks and benefits in a particular field at once.
The Equality Machine is undoubtedly an important and much needed corrective that helps us, we think, see the crucial need for this sort of “mature position” with respect to AI. But is it enough?
We’re not entirely sure. As it happens, at several points in the book, we found ourselves wondering whether Lobel’s argument and presentation, in seeking to bring us to this entirely correct “mature position” about AI and out of the state of myopic AI pessimism, doesn’t fall into a kind of opposite trap: painting too rosy a picture of the place of AI in our society.
Take, for example, Lobel’s treatment of the use of robots in care for the elderly and the sick. Lobel paints an incredibly optimistic picture, explaining how the use of robots in this space has helped to bridge rather than substitute for human interactions, including by encouraging interactions between patients and by improving positive emotions (p. 263). Lobel notes improvements in the experience and functioning of both the elderly and the sick as a result of the use of robots (“it works”). She also explains that relying on robots alleviates the burden on those who work in care. Both are clearly important benefits. But we think the issue is slightly more complex. The problem isn’t only, or even necessarily, on the side of the patients: i.e., that interaction with robots is fake and merely mimics human interaction, a problem that Lobel indicates some philosophers, for example, have highlighted. The problem may be that by increasing our reliance on robots in care we may be too quickly passing on an opportunity to reimagine the economy of care in democratic and human societies in the first place. Indeed, as some have argued, providing care can be a unique activity that helps build social cement, empathy, and understanding on the side of those who provide it. As a result, and contrary to Lobel’s suggestion, the ability of AI to ease the burden of elderly care should not necessarily drive us to rapidly displace humans with robots in giving care. Instead, we might actually want to increase the place of humans in providing care (or at least democratically deliberate on that question).
Or take another example from the book, where Lobel describes the story of Dr. Timnit Gebru, who was fired from Google over a dispute with company executives about publishing an article on the potential risks and harms of large language models (pp. 290–94). Gebru later became a leading voice in the AI space. And in this role, at least initially, she voiced deeply critical views, treating the introduction of ethics departments in every major tech corporation as intrinsically dangerous. Gebru claimed that these departments are inherently co-opted in favor of corporate power and interests, and that the only way forward is from the outside, through the involvement and direction of marginalized communities. Lobel makes clear that she doesn’t subscribe to this criticism and advocates a different form of leadership and critique in the AI domain than what Gebru was embracing at the time. That approach acknowledges the risks of private power and self-regulation, but nonetheless holds that private power can be steered (or nudged) toward prosocial goals.
But here, too, Lobel’s view seems to us too quick. For one, it is not at all clear that the participatory vision of AI governance Gebru has been calling for isn’t a good one. A more empowered democratic and participatory vision in tech governance (and governance more broadly) strikes us as quite attractive. For another, even if we were to agree with Lobel that private initiatives could be steered in prosocial directions, that doesn’t make the kind of more aggressive, hardline critique Gebru has been pursuing necessarily wrong or damaging. To the contrary: this exact form of criticism might be crucial to making sure that these types of initiatives do take place. Public discourse (even sharp discourse) and aggressive regulatory interventions are part of the needed ecosystem of engagement that should be encouraged in the governance of AI applications. The deep inequalities and imbalances of power that animate this domain, which Lobel consistently acknowledges in her book, are exactly why we should support different forms of activism that speak truth to power.
For us, these two examples reflect real moments of complacency in The Equality Machine, rather than illustrations of the kind of “mature position” that Lobel truly—and more globally—seeks to advance. Which leads us to ask: what accounts for this? More specifically, are these moments a result of the genre and tone of Lobel’s book—which seeks to push strongly against the present AI pessimism? Or are these moments instead indications of a deeper problem in Lobel’s framework? That is, to arrive at the desired mature position with respect to AI, might something more ambitious than what Lobel recommends be needed?
We don’t have a definitive answer; we suspect that both the genre of the book and the general framework it endorses have a pull here. But what we are confident about is that Lobel’s indispensable The Equality Machine provides us with the best framework to think through these essential questions going forward, and will make the debate more insightful and less one-sided.
Oren Tamir is a post-doctoral fellow at Harvard Law School and NYU School of Law. He can be reached at otamir@law.harvard.edu. Tomer Kenneth is a doctoral candidate and a fellow at the Information Law Institute (ILI) at NYU School of Law. He can be reached at tomer.kenneth@law.nyu.edu.