Notice & Comment

Automating Federal Agencies, by Joshua D. Blank & Leigh Osofsky

The Trump Administration has recently proposed replacing many workers in federal agencies with automated systems, like chatbots, that can answer questions the public has about their legal rights and obligations. For instance, instead of human customer service representatives at the Department of Education, chatbots may soon be responsible for answering the questions that millions of parents and students have about their student loans each year. This automation will become even more important if the federal government continues to shed its workforce.

We have spent years studying federal agencies’ use of automation to provide guidance to the public. As part of our research for the Administrative Conference of the United States, which we describe further in our new book, we interviewed federal government officials about their development and use of automated legal guidance, and we examined these systems in detail. Our findings can help inform a shift toward automation of the government’s legal guidance.

Federal agencies like the IRS, the Department of Education, and the United States Citizenship and Immigration Services already use automation, including chatbots, to answer tens of millions of inquiries from the public about the law each year. They do so because federal agencies have a duty not only to enforce complex law but also to help the public comply with it. Over the past decade, to meet this obligation at scale, federal agencies have developed automated legal guidance systems, such as the IRS's Interactive Tax Assistant, the Department of Education's "Aidan," and USCIS's "Emma."

These automated legal guidance systems offer several benefits. They help federal agencies explain how the law applies to individuals' situations, which can reduce the cost of applying the law for both the public and agencies. They can also communicate an agency's views of the law, even when that law is unclear or subject to interpretation.

But these automated systems carry costs as well. Offering a straightforward explanation of the law can obscure what the law actually is. For instance, we found examples where the IRS's Interactive Tax Assistant would tell a taxpayer she could take a deduction she was not in fact entitled to take, as well as examples where it would deny a deduction the taxpayer was entitled to claim. This happens because the automated system asks taxpayers a series of questions to which they must provide a simple "yes" or "no" response (such as whether a particular expense is necessary for their trade or business). These binary questions tend to flatten out law that is often uncertain as a result of disputes among courts or ambiguities in the governing tax code and regulations. When a taxpayer fills out her tax return erroneously in reliance on such a system, she may eventually be responsible not only for the tax she failed to pay but also for penalties for taking the very position that the automated guidance advised her to take. This is true even though the automated systems often provide no clear disclosure that users may not be able to rely on their advice. In this way, federal agencies' automated guidance systems can impose legal risk on the public. They can also undermine public understanding of the actual state of the law, including its ambiguities.
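To make the flattening mechanism concrete, the sketch below shows, in Python, the kind of binary decision tree these guidance systems embody. It is a hypothetical illustration: the questions, structure, and conclusions are invented for this example and are not drawn from any agency's actual system.

```python
# Hypothetical illustration of a yes/no guidance tree, loosely in the
# style of tools like the Interactive Tax Assistant. The questions and
# conclusions below are simplified assumptions, not any agency's code.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    question: Optional[str] = None  # yes/no question posed to the user
    yes: Optional["Node"] = None    # next node on a "yes" answer
    no: Optional["Node"] = None     # next node on a "no" answer
    answer: Optional[str] = None    # definitive-sounding conclusion at a leaf

# A facts-and-circumstances standard ("ordinary and necessary") is
# collapsed into two binary prompts, with no way to answer "it depends."
tree = Node(
    question="Did you pay the expense in carrying on a trade or business?",
    no=Node(answer="You may NOT deduct this expense."),
    yes=Node(
        question="Was the expense ordinary and necessary?",
        yes=Node(answer="You MAY deduct this expense."),
        no=Node(answer="You may NOT deduct this expense."),
    ),
)

def run(node: Node) -> None:
    """Walk the tree, forcing each legal nuance into a y/n keystroke."""
    while node.answer is None:
        reply = input(f"{node.question} [y/n] ").strip().lower()
        node = node.yes if reply == "y" else node.no
    # The leaf states a flat conclusion; the ambiguity behind terms like
    # "necessary," which courts have construed differently, never reaches
    # the user.
    print(node.answer)

if __name__ == "__main__":
    run(tree)
```

The design choice worth noticing is structural: every leaf must state a definitive answer, so a tree like this has no vocabulary for "it depends," which is precisely how legal uncertainty gets flattened out of the user's view.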

Our research suggests that we need to be careful in expanding the use of automated legal guidance systems. These systems can make decisions in ways that are not fully intelligible to humans, a concern that grows as the underlying AI becomes more sophisticated. The Department of Education's problematic rollout of the new FAFSA form in 2024 illustrates how programming and artificial intelligence can obscure problems with agency communications. If federal agencies are going to expand their use of automated systems, appropriate review needs to be in place to evaluate the decisions those systems communicate.

This review is all the more important because the interviews we conducted show that federal employees may fail to appreciate some of the risks that automated guidance systems pose. We found that lawyers within federal agencies were often unsure of exactly how the automated systems developed guidance for the public, and lawyers were not always involved in developing or assessing that guidance. The people responsible for the guidance systems tended to have technical and project-management backgrounds rather than legal ones. They also tended to believe that what the automated systems offered was not "law," even though agencies use these systems to advise millions of people each year about their legal rights and obligations.

The lack of attention to the potential legal issues with these systems reflects a deeper lesson about administrative law. Administrative law surrounds agencies' promulgation of legal rules with extensive process and procedure, from notice-and-comment requirements to opportunities for challenge. These forms of process and procedure are accessible to sophisticated and well-resourced parties. In contrast, we found that administrative law lacks a meaningful way even to categorize the types of statements about the law that automated systems provide to the general public. Even though these systems can make real changes in people's understanding of how the underlying law applies to their particular circumstances, both administrative law and scholarship largely ignore them. Our research thus points to a broader problem: administrative law's inattention to the guidance that federal agencies offer to the general public.

Our research suggests ways to maximize the benefits and minimize the costs of automating the federal government's legal guidance. These include concrete approaches for substantive evaluation of the systems, which is not currently happening; agencies instead tend to rely on user-satisfaction surveys (much like those that follow everyday activities such as online purchases and doctor appointments). Agency lawyers who are knowledgeable about the law need to be involved in the creation and supervision of these systems, not just programmers and technology staff who may miss many of the law's underlying nuances. The government should be as transparent as possible about how automated guidance systems reach their conclusions. When automated systems fail to provide helpful answers, members of the public should be able to reach human beings for assistance. And those who use automated systems should be entitled to certain legal protections when the systems lead them astray.

Federal agencies have already taken a significant turn toward automating legal guidance to the public, and the current administration's plans may hasten that shift. Careful consideration can help ensure that this change in the relationship between federal agencies and the public happens in a way that is as mindful as possible of the values of transparency, accountability, and equity.

Joshua D. Blank is Professor of Law at the University of California, Irvine School of Law, and Leigh Osofsky is the William D. Spry III Distinguished Professor of Law at the University of North Carolina School of Law. They are the co-authors of Automated Agencies: The Transformation of Government Guidance (Cambridge University Press, forthcoming 2025).