Notice & Comment

The Health Equity Machine?, by Jessica L. Roberts

This is the fourth post in a symposium on Orly Lobel’s The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future, selected by The Economist as a best book of 2022. All posts from this symposium can be found here. Further reviews can be found at Science, The Economist, and Kirkus.

In her inspiring and compelling book, The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future, Orly Lobel makes the controversial claim that AI-based technologies like smart devices, learning algorithms, and robots can further equality and social justice. She asserts that “[t]echnology can be lifesaving: it can help us be healthier, safe, and more equal” (p. 1). Not surprisingly, then, one opportunity ripe for intervention is my area of expertise: health care.[1] Certainly, health inequity is a pervasive and all-too-often deadly problem in the United States. Several groups experience well-documented disparities, including people of color, LGBTQ+ individuals, and people with disabilities.[2] Members of these populations tend to have lower relative health, increased disease risk, worse outcomes, more barriers to access, less health insurance coverage, and more experiences of discrimination than their white, straight, non-disabled counterparts.[3] Health AI, which often promises to transform health care in terms of accessibility, affordability, and effectiveness, may seem like just what the proverbial doctor ordered. What then makes the thesis of The Equality Machine so provocative?

Lobel’s take is surprising because, as she notes, much of the discourse on AI has gloomy, dystopian overtones, focusing on the technology’s shortcomings and not its possibilities.[4] To be sure, AI has the potential to replicate, entrench, or even worsen existing inequalities.[5] While acknowledging these possibilities, Lobel nonetheless provides a refreshingly optimistic take, arguing that technology can be a force for good—an equality machine, as she puts it[6]—and not just the newest manifestation of our old transgressions.[7] She encourages us to be “builders, not blockers” who “strive to develop technologies that enhance our lives and open possibilities” (p. 41). She tells readers that “[t]he question must not be whether an algorithm is flawed or risks an accident or error in judgment, but whether it is safer, fairer, and more unbiased relative to what came before it” (p. 297). And if you had any doubt that Lobel is an iconoclast in this regard, just as I sat down to review her book, JAMA Health Forum published an article called Garbage In, Garbage Out—Words of Caution on Big Data. Like so many others, it describes the negative consequences of incomplete or biased data and urges medical journal editors and the Food and Drug Administration (FDA) alike to develop clear guidelines for assessing the predictions resulting from AI and machine learning.

Of course, many of these critiques are not unique to health AI.[8] They are problems endemic to AI generally. To start, people create algorithms, often using proxies for key variables. And if the human creators select the wrong proxy, the results could be disastrous. Consider an algorithm that Lobel describes in her chapter on health care, designed to help providers make allocation decisions (p. 146). The designers of the algorithm used costs (something that is fairly easy to measure) as a proxy for need (something that is significantly harder to quantify). The resulting algorithm underpredicted the value of additional health care in populations who are priced out of needed health care. One study found that such an algorithm failed to identify over half of the Black patients who could have benefited from more care. The issue here was in the design: cost turned out to be a poor proxy to identify who could benefit. However, flawed algorithmic design is only one way that AI can reflect social inequality. 
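
To make the mechanism concrete, here is a minimal, purely synthetic sketch in Python. It is not the algorithm Lobel describes and uses no real patient data; the group labels, effect sizes, and model choice are illustrative assumptions. The point is simply that a model trained to predict cost will score patients whose spending is suppressed by access barriers as less needy than they truly are.

```python
# Synthetic illustration only: cost as a proxy for need understates need for
# patients who are priced out of care. Groups, effect sizes, and the model
# are assumptions for demonstration, not any real algorithm or dataset.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 20_000
need = rng.gamma(2.0, 1.0, n)                       # true health need (never observed directly)
underserved = rng.random(n) < 0.3                   # patients facing cost/access barriers
clinical = need + rng.normal(0, 0.3, n)             # noisy clinical signal available to the model
barrier = underserved.astype(float)                 # stands in for insurance/access variables
cost = need * np.where(underserved, 0.5, 1.0) + rng.normal(0, 0.1, n)  # barriers suppress spending

X = np.column_stack([clinical, barrier])
model = LinearRegression().fit(X, cost)             # accurately predicts cost...
score = model.predict(X)                            # ...which is then treated as a "need" score

flagged = score >= np.quantile(score, 0.9)          # top 10% referred for extra care management
high_need = need >= np.quantile(need, 0.9)
for label, grp in [("underserved", underserved), ("better served", ~underserved)]:
    missed = (high_need & grp & ~flagged).sum() / (high_need & grp).sum()
    print(f"{label}: high-need patients the cost proxy misses = {missed:.0%}")
```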

Biases or gaps in training data can also lead to inequality.[9] Lobel explains that “machines learn from the data they are fed. That data is what we humans have produced over time. If a machine is fed partial, inaccurate, or skewed data, it will mirror those limitations and biases” (p. 25).[10] Flawed data can generate inaccurate results in at least two ways. 

First, biased inputs lead to biased outputs. For example, Lobel describes an algorithm for identifying associations between words trained on articles from Google News (p. 26). She explains that, “because our societies are unequal,” “[o]nce a machine is trained—namely, once it has read through thousands of news articles—it will exhibit stereotypes to a troubling degree” (p. 26). The same thing can happen in the context of health AI. Consider that health care providers notoriously underdiagnose and undertreat pain in female patients and in patients of color. An algorithm trained on that biased data will likewise underestimate the incidence of pain and the benefits of treatment for individuals in those populations.
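
The same dynamic can be sketched in a few lines of synthetic code: when pain is under-recorded for one group in the training labels, a model fit to those labels predicts less pain for that group even for otherwise identical patients. The groups, documentation rates, and features below are invented for illustration, not drawn from any study.

```python
# Synthetic illustration of "biased inputs, biased outputs": under-documented
# pain for one group teaches the model to predict less pain for that group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 30_000
group = rng.integers(0, 2, n)                 # 1 = group whose pain is historically discounted
severity = rng.normal(0, 1, n)                # underlying clinical severity
true_pain = rng.random(n) < 1 / (1 + np.exp(-severity))   # same pain process in both groups

# Training labels mirror clinical records: group 1's pain is documented only 60% of the time.
recorded_pain = true_pain & (rng.random(n) < np.where(group == 1, 0.6, 1.0))

X = np.column_stack([severity, group])
model = LogisticRegression().fit(X, recorded_pain.astype(int))

# For two otherwise identical patients, the model predicts less pain for group 1.
probe = np.array([[1.0, 0.0], [1.0, 1.0]])    # same severity, different group
print(model.predict_proba(probe)[:, 1])
```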

Similarly, incomplete data can also undermine equality. Lobel notes that developers who create AI to detect cancerous moles must ensure that the algorithm’s training data includes a variety of skin tones. She informs us that “[i]f algorithms are not trained with a diversity of skin types in mind, their output will be (at least for some populations) garbage, to state it bluntly” (p. 147). An algorithm trained on incomplete information will not make accurate predictions for people who fall outside the dataset. Unfortunately, certain health disparities populations are also underrepresented in biomedical research, making existing data about them hard to come by.
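
A toy example illustrates the point: a classifier trained only on one subpopulation can look accurate for patients like those it has seen and still falter for a group missing from its training data. The features and effect sizes below are made up for illustration; this is not any real dermatology model.

```python
# Synthetic illustration of the incomplete-data problem: train on one group,
# evaluate on another for whom the learned cue is far weaker.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

def simulate(n, contrast):
    """Lesion 'images' reduced to two toy features; `contrast` varies by skin tone."""
    malignant = rng.random(n) < 0.5
    border_irregularity = malignant + rng.normal(0, 0.6, n)
    pigment_contrast = malignant * contrast + rng.normal(0, 0.6, n)
    return np.column_stack([border_irregularity, pigment_contrast]), malignant.astype(int)

# Training data drawn only from lighter skin tones, where pigment contrast is strong.
X_train, y_train = simulate(5_000, contrast=1.5)
model = LogisticRegression().fit(X_train, y_train)

X_in, y_in = simulate(2_000, contrast=1.5)     # group represented in training
X_out, y_out = simulate(2_000, contrast=0.2)   # group absent from training
print("accuracy, represented group:  ", accuracy_score(y_in, model.predict(X_in)))
print("accuracy, unrepresented group:", accuracy_score(y_out, model.predict(X_out)))
```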

And finally, algorithms can generate outcomes that are simultaneously actuarially correct but also socially damaging. Lobel frames this issue as “a tension between equality and accuracy” (p. 31). She asks readers, “[w]hat if an algorithm could achieve perfect accuracy in predicting success in school or at work and this prediction showed that certain groups were more apt for a given task?” (p. 31). Thus, an algorithm may make correct predictions, but acting on that information could perpetuate inequality. For instance, a scheduling algorithm that intentionally double-books providers when patients are likely to be no-shows can have a disparate impact based on race because patients of color—due to structural barriers outside their control—are more likely to miss appointments.
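
A small numerical sketch shows the tension. Even a predictor that knows each patient's true no-show probability, once used as a double-booking rule, concentrates the burden on the group whose baseline no-show rate is inflated by structural barriers. The rates and threshold below are hypothetical.

```python
# Hypothetical illustration: an accurate no-show predictor, applied as an
# overbooking rule, disproportionately double-books the higher-base-rate group.
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
group_b = rng.random(n) < 0.4                       # patients facing transport/childcare barriers
base = np.where(group_b, 0.30, 0.10)                # structurally inflated baseline no-show rate
p_no_show = np.clip(base + rng.normal(0, 0.08, n), 0.01, 0.9)
no_show = rng.random(n) < p_no_show

# "Perfectly accurate" predictor: assume it recovers each patient's true probability.
double_booked = p_no_show >= 0.25                   # clinic overbooks slots above this risk

for label, grp in [("group A", ~group_b), ("group B", group_b)]:
    booked = double_booked[grp].mean()
    crowded = (double_booked & ~no_show & grp).sum() / (~no_show & grp).sum()
    print(f"{label}: {booked:.0%} of appointments double-booked; "
          f"{crowded:.0%} of patients who showed up shared their slot")
```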

But while these common pitfalls of AI threaten to reproduce or even worsen existing health disparities, Lobel is careful to point out the many ways that technology can improve health care. She describes the transformative potential of smart devices implanted in the body that would act as bionic organs,[11] the ability of AI to collect and analyze data from groups that have historically gone under-studied in biomedical research,[12] the promise of AI-based diagnostic and screening tools (especially as used in conjunction with human physicians),[13] the personal and clinical benefits of self-monitoring technologies,[14] and the capacity of machine learning to lower the costs and improve the success rate of IVF.[15]

While I share Lobel’s optimism for the potential of health technology to solve the enduring disparities that we as a country face, I worry that—without targeted interventions—the reality will fall regrettably short. Lobel herself tells readers, “it’s all about making deliberate choices” (p. 3). She offers nine guiding principles in her book,[16] including using policy to “incentivize, leverage, and oversee technology” (p. 10). The remainder of this post will consider what those policies might entail.[17]

So how do we build a health equity machine? In a forthcoming book chapter, my colleague Peter Salib and I discuss the potentially discriminatory effects of health AI and propose some policy solutions. Like Lobel, Salib and I advocate using AI itself to address certain disparities. Given that AI can search for patterns of discrimination literally around the clock, Lobel suggests that “bots could effectively become 24/7 watchdogs, detecting real-time patterns that exclude or disadvantage women and minorities in the marketplace” (p. 21). AI could perform these functions even without explicit information about social categories. Because the FDA medical device performance database does not include explicit information about gender, computer scientists developed an algorithm to scan patient incident reports for gendered pronouns (p. 149). Per Lobel, “[t]he resulting equality machine revealed what had been a hidden truth: in 340,000 incident reports of injury or death caused by medical devices, 67 percent involved women, while only 33 percent involved men” (p. 149). AI can, therefore, help us discover previously undetected disparities.
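
The underlying technique can be surprisingly simple. The sketch below captures the pronoun-scanning idea in miniature; the sample reports and regular expressions are illustrative assumptions, not the researchers' actual method or the FDA's data.

```python
# Toy sketch of inferring patient gender from pronouns in free-text incident
# reports. The reports and regexes are invented; this is not the actual study.
import re

incident_reports = [
    "Patient reported that her implant migrated; she required revision surgery.",
    "He experienced lead fracture two years after implantation.",
    "The patient stated she felt severe pain at the insertion site.",
]

FEMININE = re.compile(r"\b(she|her|hers)\b", re.IGNORECASE)
MASCULINE = re.compile(r"\b(he|him|his)\b", re.IGNORECASE)

def infer_gender(text: str) -> str:
    """Label a report by the gendered pronouns it contains, if any."""
    f, m = bool(FEMININE.search(text)), bool(MASCULINE.search(text))
    if f and not m:
        return "female"
    if m and not f:
        return "male"
    return "unknown"

counts = {}
for report in incident_reports:
    label = infer_gender(report)
    counts[label] = counts.get(label, 0) + 1
print(counts)   # {'female': 2, 'male': 1} for the toy reports above
```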

And, in some cases, once the AI detects discrimination, another algorithm could come in to prevent those inequalities in the future. Developers are currently working on debiasing software, which uses one algorithm to detect bias and a second algorithm to correct it (pp. 27-28). Salib has proposed similar interventions in his work on big data affirmative action. He explains that a correction algorithm can learn to uncover bias that flows from decision-making AI. Once it quantifies the rate of the discriminatory error, the correction algorithm can adjust for it, effectively de-biasing the results of the first algorithm going forward. This approach could address inequalities when algorithms work too well—by making accurate predictions using poor proxies or biased data. If the correction algorithm learns to avoid certain forbidden disparate impacts, big data affirmative action could also address actuarially sound but socially problematic results. However, the absence of reliable, representative data makes accurate detection and correction hard to do.
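
As a rough illustration of the detect-then-correct idea (not Salib's actual method or any vendor's software), a second model can be fit to the first model's observed shortfall by group and then used to adjust future scores. Everything below is synthetic.

```python
# Synthetic two-step sketch: an audit model learns how far the first model's
# scores fall short of realized outcomes for each group, then corrects them.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 10_000
group = rng.integers(0, 2, n)                        # 1 = historically underserved group
true_benefit = rng.gamma(2.0, 1.0, n)                # how much patients gain from extra care
biased_score = true_benefit - 0.8 * group + rng.normal(0, 0.2, n)  # first model lowballs group 1

# Detect: on follow-up data, regress the realized shortfall on group membership.
shortfall = true_benefit - biased_score
corrector = LinearRegression().fit(group.reshape(-1, 1), shortfall)

# Correct: add the learned group-level adjustment to future scores.
corrected_score = biased_score + corrector.predict(group.reshape(-1, 1))

for g in (0, 1):
    before = np.mean(true_benefit[group == g] - biased_score[group == g])
    after = np.mean(true_benefit[group == g] - corrected_score[group == g])
    print(f"group {g}: mean underestimate before {before:+.2f}, after {after:+.2f}")
```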

How then do we address data deficits? Lobel has advocated that, “as a matter of policy, we should make existing data sets easier for all to access for the purposes of research and monitoring, and that governments should initiate and fund the creation of fuller data sets as well as more experimentation with digital technology that promotes equality and other socially valuable goals” (p. 299). Yet, as noted, several health disparities populations are not in the datasets to begin with. Law- and policy-makers must then take action to incentivize—perhaps even compel—comprehensive, representative data collection. In our chapter, Salib and I propose allowing developers to create products from preexisting datasets and, assuming they are safe and reliable for the subset of the population represented in the data, to take those technologies to market. However, once the product goes on sale, the developer should have a preset amount of time to collect additional data to enable other populations to benefit. We think that this approach balances the desire to bring a socially beneficial and potentially profitable technology to market as soon as possible with the moral obligation to ensure that it will not create or perpetuate disparities. Developers could use the proceeds of that initial launch to offset the costs of expanding their product’s utility.

In fact, in 2018, the notorious genetic testing company Myriad Genetics rolled out a direct-to-consumer test that was only accurate for women of exclusively European ancestry, leading to much public outcry. The company regrouped after the backlash and partnered with Cleveland Clinic to gain access to data that made the test accurate for “any and all interested women.” If Myriad can do it, we have faith that others can as well. Perhaps this solution is not perfect, but we hope that it is a move in the right direction.

Building the equality machine will take some creativity and trial and error. Like the developers that they regulate, law- and policy-makers must be willing to try new strategies, fail, and learn from their mistakes. We, too, must be innovators and disrupters.

And in some ways, we don’t have a choice. Lobel informs us that “the train has left the station” (p. 2). Whether we like it or not, AI is going to play a growing role in our lives, including how we access and consume health care. We must then heed her call and think strategically now about how to build a brighter, healthier, and more equitable future.

Jessica L. Roberts is the Director of the Health Law & Policy Institute and the Leonard Childs Professor in Law at the University of Houston Law Center.


[1] See Orly Lobel, The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future 41 (2022) (observing that “scholars are far more willing to accept the role of AI in areas such as medicine, climate studies, and environmental sustainability—which they perceive as more purely scientific”).

[2] See id. at 16 (stating that “unequal access to medical care and health benefits” is among the difficulties that women, people of color, and other socially marginalized groups face).

[3] See, e.g., René Bowser, Racial Bias in Medical Treatment, 105 Dick. L. Rev. 365 (2001); Sofia Carratala & Connor Maxwell, Health Disparities by Race and Ethnicity, Center for American Progress (May 7, 2020), https://www.americanprogress.org/issues/race/reports/2020/05/07/484742/health-disparities-race-ethnicity/; Hudaisa Hafeez et al., Health Care Disparities Among Lesbian, Gay, Bisexual, and Transgender Youth: A Literature Review, 9 Cureus 1 (2017); Lisa I. Iezzoni, Eliminating Health and Health Care Disparities Among the Growing Population of People with Disabilities, 30 Health Affs. 1947 (2011); Lisa I. Iezzoni et al., Physicians’ Perceptions of People with Disability and Their Health Care, 40 Health Affs. 297 (2021); Jennifer Kates et al., Health and Access to Care and Coverage for Lesbian, Gay, Bisexual, and Transgender (LGBT) Individuals in the U.S., KFF (May 03, 2018), https://www.kff.org/report-section/health-and-access-to-care-and-coverage-lgbt-individuals-in-the-us-health-challenges/; What Are LGBTQ Health Care Disparities, UPMC Health Beat (Jan. 27, 2021), https://share.upmc.com/2021/01/lgbtq-health-care-disparities/.

[4] Lobel, supra note 1, at 3 (listing examples of best-selling books telling “horror stories about technology gone wrong, biased AI, and a looming dystopian human-machine future”); see also id. at 307 (“It has been too common among progressive thinkers to take a critical, often pessimistic stance about all systems, which all too often are designed by too few for the benefits of too few.”).

[5] See id. at 1 (noting that AI and other technology “can replicate and exacerbate ongoing injustices”).

[6] According to Lobel, “[a]n equality machine mindset actively charts the course of the future, anticipating the many ways in which the future is unknown.” Id. at 290.

[7] She asks her audience, “what if we flipped the script and instead adopted a mindset that inequality faces a tech challenge? What if we considered challenges as opportunities to do better—opportunities not only to address technology failures but to use technology to tackle societal failures?” Id. at 17 (emphasis in original). She implores us “to cut through the utopian/dystopian dualism and decipher what can be readily addressed, improved, and corrected, and which problems are more wicked and stickier.” Id. at 307.

[8] And some of these concerns go beyond AI as well. Id. at 35 (“In some ways, nothing about using statistical correlations and making predictions on these patterns is new. Scientific inquiry, medicine, marketing, and policy have all been grounded in forecasting the future based on the past.”).

[9] Comprehensive and representative data collection is key to the modern equality project. Lobel reminds us that “we cannot correct what we don’t measure” and “we cannot improve what we don’t study.” Id. at 19. For these reasons, Lobel rejects “blindness” approaches to equality. See id. at 29-32; see also id. at 301 (stating that “the best way to prevent discrimination may be to authorize an algorithm to consider information about gender and race” (emphasis in original)).

[10] See also id. at 147 (“The quality of the output depends on the quality of the inputs, and again, as we have already seen, bias in, bias out, which is a subset of the more general problem that computer scientists call garbage in, garbage out (GIGO)—flawed or irrelevant input data produces nonsense output.”).

[11] Id. at 131-32 (recounting her personal experience with her daughter’s smart insulin pump and the “bionic pancreas” as the future of diabetes research).

[12] Id. at 133 (focusing on the historical exclusion of women from medical research and the ability of technology to provide women with actionable health-related information, specifically in the context of hormonal fluctuations, menstruation, and fertility).

[13] Id. at 136-40 (describing advances in both image-based and language-based AI for detecting disease and improving efficiency).

[14] Id. at 141-42 (relaying the story of an innovator whose wearable inspires her to go on daily early morning runs and the ability of biometric tracking to identify health risk).

[15] Id. at 145.

[16] Id. at 5-12. Those principles are (1) embracing technology as good but not necessarily perfect; (2) understanding mistakes as opportunities and correcting them; (3) scaling success and learning from experiments; (4) making equality a goal for technology; (5) collecting the data that enables us to identify inequality; (6) understanding technology as a public good; (7) creating AI that challenges stereotypes; (8) regulating technology in prosocial, forward-looking ways; and (9) making deliberately inclusive decisions regarding how technology develops and evolves.

[17] Specifically, I focus on ways to improve how health AI functions. Other challenges to equality, such as access to technology generally, will remain. See id. at 308 (“We also need to commit to tackling the root causes of inequality that go beyond any particular technology. We will never be able to address and correct social issues and the biases that exist within society simply through technology, but we need to make sure that technology fuels and supports positive change.”); see also id. at 16 (explaining that effects of inequality are “far, far worse for those who have the least digital access, cannot afford new technologies, and experience the challenges of intersectional exclusion”).