Clear Rules, Bold Innovation: Finding the Regulatory Sweet Spot for AI, by Kevin Frazier
Introduction
A deregulatory agenda can stifle innovation. Aggressive efforts to cut red tape may destroy the stable, predictable regulatory ecosystem that aids upstarts and protects consumers. The removal of regulatory guidance by the Federal Trade Commission (FTC), combined with the cessation of the Consumer Financial Protection Bureau's (CFPB's) work, suggests that the Trump administration may soon cross the line from helpful elimination of red tape to unproductive creation of regulatory gray areas. These hasty and excessive maneuvers cut against the administration's stated aim of achieving "global AI dominance." Though state attorneys general (AGs) have attempted to clarify how state equivalents of federal consumer protection laws pertain to AI, that's not a durable solution because it fails to eliminate much of the regulatory uncertainty that discourages entrepreneurial activity.
The administration must use its AI Action Plan, which President Trump called for in his January 23, 2025, executive order and which is due in July, to address the disinnovation that results from regulatory ambiguity. Doing so would not only incorporate the lessons of the regulatory uncertainty that delayed innovation in crypto but also align the administration's policies with its stated objectives.
The Difference Between Cutting Red Tape and Creating Anti-Innovative Gray Areas
Everyone can agree that excessive red tape reduces economic dynamism. Just as President Obama asserted that our regulatory system "must identify and use the best, most innovative, and least burdensome tools for achieving regulatory ends," President Trump maintains that "[c]ostly regulations and job-stifling bureaucratic red tape have held back the American economy for far too long." Politicians aren't the only ones staking out an anti-red-tape agenda. Thought leaders, influencers, and podcasters on the right and left have voiced support for "Prosperity," "Abundance," and "A Building Agenda." The throughline is a prioritization of substance over process. Each version of this common framework calls for the removal of regulatory hurdles that, though well-intentioned, may actually undermine the goals they are intended to facilitate.
Everyone can also agree that regulatory ambiguity quashes entrepreneurial endeavors, right? A glance at the effort to regulate crypto indicates broad agreement on this front. Financial advisors caution against investing in emerging technologies that lack clear rules and regulations, which has led them to discourage investors from getting too tied up in crypto. Fellows at the American Enterprise Institute recognize that clear regulations around novel technologies such as crypto can accelerate use by good actors while diminishing abuses by bad actors. Academics conclude that "[c]lumsy, piecemeal regulation [can] very well lead to the emergence of a regulatory equivalent of whack-a-mole." As MIT's Antoinette Schoar neatly summarized, "Regulatory uncertainty is always the worst thing, especially for well-intentioned and reputable players."
President Trump himself has stated a desire to clarify the regulatory environment around crypto, all with the intent of boosting innovation and adoption. A full review of the myriad individuals and institutions that concur with the President's connection between transparent regulations and transformative technological development exceeds the scope of this humble essay, but it's a long list. The Supreme Court has even weighed in, if only with its pinky, making the slightly attenuated but relevant observation that the "free flow of commercial information" benefits consumers and society as a whole. Denying market participants access to regulatory guidance, eliminating their ability to work with regulators in crafting rules, and generally muddying the regulatory waters are widely seen as hindrances to "dominance" in any field. Yet the FTC and CFPB have done just that.
The Creation of Regulatory Uncertainty Around AI
As reported by Wired, the FTC has erased the sort of regulatory guidance and transparency that most associate with a pro-innovation agenda. A blog post on the legal questions posed by personal assistants such as Alexa collecting data that helps train AI models? You won't find it today. Surely more than a few startups looking for any and all sources of data to train and fine-tune their models would appreciate that post going back up.
An analysis of how data collection may run afoul of the Children's Online Privacy Protection Act? Also gone. Again, startups, mindful that numerous officials are keenly attuned to the risks technology can pose to minors, would very likely appreciate the restoration of that webpage.
Tips on how businesses could stay within the law in developing chatbots? No longer available. The lack of such guidance may cause VCs to think twice before making a substantial bet on a startup with such a product. As a source told Wired, these actions undermine the Commission’s most important function with respect to fostering innovation—making its compliance expectations clear and broadly known.
Developments at the CFPB have likewise tended toward creating gray areas rather than productively cutting red tape. The agency's acting director, Russell T. Vought, quickly sought to strip it of the talent and resources required to execute its statutory purposes, which include preventing unfair and deceptive practices, enforcing the Equal Credit Opportunity Act, and fostering competition in financial markets by lowering anti-competitive barriers to entry. Vought attempted to fire hundreds of employees, close the agency's offices, and otherwise grind its operations to a halt. A recent court order has halted and reversed many of those actions; only Congress can officially and definitively shut down the agency. While critiques that the CFPB has at times created regulatory hurdles of its own have some merit, upstarts keen to understand how best to enter competitive markets while complying with the law may soon lack access to helpful guidance. Those same upstarts, along with the American public, may also find themselves more exposed to anti-competitive behaviors by larger players. The CFPB has long been a leading investigator of unfair and deceptive practices by massive firms. Under Vought, however, inquiries like the one into Meta's data collection practices will likely come to an end.
Congress Failing to Step In
Congress has largely abdicated its responsibility to provide a clear regulatory framework for AI innovation. Despite numerous hearings, committee investigations, and public expressions of concern about AI risks, federal legislators have yet to pass meaningful AI legislation. Their general acceptance of the administration's changes to agencies suggests that they have yet to see the connection between excessive deregulation and disinnovation. This federal inaction has created a regulatory black hole that states are eagerly rushing to fill, leading to a patchwork of potentially inconsistent approaches across the country.
The legislative branch’s inaction has also made way for the executive branch to try its hand at AI governance. This approach likewise invites regulatory whipsaws—administrations change and agencies have limited jurisdictions. Congress alone possesses the ability to establish comprehensive, durable frameworks that can withstand changes in administration. Congressional action would also provide a democratic legitimacy to AI regulation that agency guidance simply cannot match.
Bills introduced in both chambers have repeatedly stalled in committee, victims of partisan gridlock and competing priorities. When legislation does move forward, it often focuses narrowly on specific use cases (such as deepfakes or AI use in the workplace) rather than establishing broader principles and guardrails. This piecemeal approach threatens to create its own form of regulatory fragmentation, in which startups must navigate a maze of narrowly tailored but potentially overlapping rules.
Perhaps most concerning is Congress’s apparent willingness to observe from the sidelines as federal agencies such as the FTC and CFPB pull back from providing even basic regulatory guidance. By failing to assert its constitutional role, Congress tacitly endorses a regulatory retreat that creates precisely the type of uncertainty that undermines America’s stated goal of achieving global AI dominance.
States Going in the Wrong Direction
This federal vacuum has prompted states to take matters into their own hands, but their approach may be creating more problems than solutions. With over 500 AI-related bills currently under consideration across various state legislatures, the regulatory landscape is becoming increasingly fragmented and contradictory.
Not all state efforts are misguided. Washington’s legislature is considering legislation that would establish public-private AI partnerships to foster collaboration between innovators and regulators. New York has introduced bills focused on AI literacy programs that could help prepare its workforce and consumers for the AI revolution. These targeted initiatives could actually advance innovation by creating informed markets and collaborative regulatory approaches.
However, several states remain perilously close to enacting legislation that would disrupt AI development across the nation. Governor Youngkin of Virginia recently vetoed legislation that would have imposed significant AI regulations with potential extraterritorial effects, recognizing the danger of one state attempting to set de facto national standards. For every thoughtful gubernatorial check, dozens of potentially conflicting bills continue to advance.
A regulatory patchwork would create enormous compliance challenges. Startups hoping to operate nationally must track developments across 50 different jurisdictions, each potentially defining “artificial intelligence” differently and imposing idiosyncratic requirements. What constitutes adequate disclosure in California may fall short in New York, while data requirements in Illinois may conflict with those in Texas.
The fragmentation disproportionately harms smaller players. Large, well-resourced labs with teams of lawyers can treat compliance as simply another operational cost and absorb the occasional fine. Meanwhile, innovative startups face existential threats from enforcement actions they neither anticipated nor budgeted for. The result is an innovation environment that perversely favors established companies over the very upstarts that historically drive technological breakthroughs.
Given these shortcomings of state-level regulation, some have looked to state attorneys general to provide clarity through enforcement of existing consumer protection laws, but this approach comes with its own significant limitations.
State AGs: Insufficient Guardians of Innovation
While state AGs have attempted to fill the guidance void, their efforts provide only limited clarity for innovators. Several state AGs have issued statements or guidance documents clarifying how existing consumer protection statutes might apply to AI systems. These well-meaning efforts aim to provide some regulatory certainty amid federal abdication, but they ultimately fall short of creating the stable environment entrepreneurs need.
The fundamental challenge is inconsistency. The California Attorney General’s interpretation of how existing privacy laws apply to large language models may differ markedly from New York’s approach, for example. Without a coordinated approach, these interpretations create a cacophony of compliance standards rather than clear guidance.
Even within states, enforcement priorities remain opaque. Some AGs have established dedicated AI task forces with significant investigative resources, while others have remained nearly silent on the issue. This uneven approach makes it virtually impossible for startups to assess enforcement risks across jurisdictions. Would developing an AI tool for healthcare trigger scrutiny in Minnesota but not in neighboring Wisconsin? Without clear signals, entrepreneurs must either accept unknown regulatory risks or limit their market reach—neither conducive to the innovation America seeks to foster.
Beyond inconsistency, state AGs face inherent limitations in their authority. While they can interpret existing consumer protection laws, they cannot create comprehensive frameworks that address AI’s unique characteristics. Most state consumer protection statutes predate even the Internet, let alone contemporary AI systems. Stretching these laws to cover modern AI inevitably creates legal uncertainty that only courts can resolve—a process that takes years and provides little real-time guidance to innovators.
The Trump administration’s forthcoming AI Action Plan represents an opportunity to overcome these limitations, but as we’ll see, its current trajectory suggests it may fail to provide the durable framework innovators require.
The AI Action Plan: A Tenuous Promise
The administration’s forthcoming AI Action Plan, mandated by President Trump’s January 23, 2025 executive order and expected in July, offers a potential avenue for addressing the regulatory morass. However, even this federal initiative may fail to provide the stable regulatory ecosystem needed for true innovation to flourish. Despite the President’s stated ambition for “global AI dominance,” the plan’s structure and authority may undermine its effectiveness.
The fundamental limitation of the AI Action Plan lies in its foundation. As the product of an executive order rather than legislation, it will inherently lack durability. Executive orders can be modified or revoked with the stroke of a pen—by either the current president or his successor. This impermanence creates an unstable foundation for long-term investments. Venture capitalists and corporate boards weighing commitments of hundreds of millions or billions of dollars to AI development need regulatory certainty that extends beyond presidential terms or political winds.
Early signals suggest the plan may perpetuate rather than resolve regulatory ambiguity. The administration’s reluctance to empower regulatory agencies with clear AI-specific mandates indicates a preference for vague principles over actionable guidance. While principles certainly have their place, innovators need concrete parameters within which to design and deploy their technologies. A plan that prioritizes deregulation without accompanying clarity will likely accelerate the current race to the regulatory bottom.
The plan also faces coordination challenges. Without a centralized AI authority, responsibility for implementation will scatter across multiple agencies with varying levels of technical expertise and regulatory capacity. This fragmentation mirrors the very state-level inconsistency the plan should aim to resolve. A startup developing an AI tool that analyzes financial data may still need to navigate contradictory guidance from the SEC, FTC, and CFPB.
America needs more than just another policy document—it requires a comprehensive national AI strategy that establishes balanced guardrails without stifling creativity, harmonizes regulations across state lines, and delivers the predictable environment that makes long-term technological investment possible.
The Need for a National Vision for Principles-Based AI Governance
For America to achieve AI dominance, we need to move beyond debating whether to have more or less regulation and instead ask what would constitute responsive regulation: a coherent national vision that provides clarity without stifling innovation. The path forward lies in developing a principles-based framework that can inform regulatory efforts across all levels of government, from federal agencies to local municipalities. Such an approach would create the predictable ecosystem innovators need while maintaining the flexibility to address AI's rapidly evolving landscape.
This national vision should rest on at least three interconnected pillars that balance innovation with appropriate guardrails:
First, broad AI literacy must become a national priority. Regulatory efforts should include provisions for educating Americans—from consumers to small business owners to public servants—about AI capabilities, limitations, and appropriate uses. An informed populace can better leverage AI tools to enhance productivity and creativity, while also recognizing potential risks. Just as financial literacy enables consumers to navigate complex financial products, AI literacy will empower Americans to make informed decisions about AI technologies. Regulatory frameworks that incorporate educational components will cultivate the informed markets necessary for innovation to thrive.
Second, robust AI innovation requires regulatory approaches that create space for experimentation while establishing clear boundaries. Rather than prescribing specific technical requirements that quickly become obsolete, regulations should articulate performance standards and outcomes. Startups need to understand what success looks like: what level of transparency is sufficient, what constitutes appropriate data governance, and what safety standards must be met. With these guardrails clearly established, innovators can direct their creativity toward solutions that advance human flourishing rather than spend it navigating regulatory minefields.
Third, ongoing public-private collaboration must become the norm rather than the exception. Regulatory sandboxes—controlled environments where innovators can test novel applications under regulatory supervision—offer a promising model. These arrangements simultaneously increase government understanding of emerging AI capabilities while allowing good-faith actors to develop and deploy beneficial tools with reduced regulatory uncertainty. They also create natural opportunities for regulators to identify and take enforcement action against bad actors, enhancing consumer protection without hampering legitimate innovation.
For this principles-based vision to succeed, Congress must assert its authority to establish a coherent national framework that guides agency action and preempts conflicting state regulations. The administration, for its part, must recognize that cutting red tape without providing regulatory clarity creates the very uncertainty it seeks to eliminate. And state officials must understand that a fragmented regulatory landscape ultimately undermines their legitimate interest in protecting their citizens.
The stakes could not be higher. America’s technological leadership has always depended on its ability to create environments where innovative ideas can flourish. By developing a clear, principles-based approach to AI governance—one that provides certainty without rigidity—we can avoid the twin dangers of overregulation and underregulation. Both extremes lead to the same undesirable outcome: ceding global leadership in perhaps the most transformative technology of our lifetimes. A balanced vision, by contrast, would create precisely the stable, predictable regulatory ecosystem that aids upstarts and protects consumers—fulfilling the promise of AI while avoiding its pitfalls.
Kevin Frazier is an AI Innovation and Law Fellow at Texas Law.