Governors Break Ranks Over AI Regulation Patchwork
AI Chaos in the States—Can a Federal Pause Save Us?
To save the federal government millions in compliance costs as it updates agency technologies, the House of Representatives boldly included a pause of up to 10 years on state regulatory regimes for artificial intelligence in its most recent budget proposal. Then Senator Ted Cruz (R-TX) announced that the same kind of pause would be included in his forthcoming legislation on artificial intelligence. Despite critiques from certain interest groups, a few academics, and a couple of senators, momentum is building around the idea, and for good reason.
More than 1,000 state AI legislative proposals are incubating in America’s vaunted laboratories of democracy. Included in that avalanche of unprecedented legislative interest is a subset of bills specifically designed to regulate so-called algorithmic fairness and bias. Those bills comprise only a fraction of state legislative proposals, but their outsized impact and diverging legislative language alone would be enough justification to hit the pause button on state AI regulatory regimes. After all, Congress needs time to decide what to do next. The moratorium effectively provides that time by creating something of an AI learning period, in which policymakers and others can use the technology and learn what kinds of rules are (or are not) needed.
Nearly two dozen states have already proposed AI fairness legislation of some kind, with Colorado being the only state (so far) to pass such a statute into law. Recognizing its mistake, Colorado is trying to roll the law back before it goes into effect early next year, but its most recent attempt failed earlier this month. Connecticut has been something of a ringleader on AI fairness law. In mid-May, the Connecticut Senate passed its own AI fairness proposal after revisions and amendments watered down some of its burdensome requirements. Earlier that same week, Connecticut Governor Ned Lamont became the second governor (Colorado’s Jared Polis was first) to raise concerns about the decisions of states to chart their own course on AI. Here’s how he put it: “I just worry about every state going out and doing their own thing, a patchwork quilt of regulations, Connecticut being probably stricter and broader than most, what that means in terms of AI development here.” Governors Polis and Lamont are right to be concerned. Other states ignore the yellow hazard lights emanating from Colorado at their own peril. Fortunately, Connecticut has started to take notice, even if its current proposal remains ill-advised given patchwork concerns.
Although some experts have tried to sell these AI fairness bills as carbon copies of one another, suggesting that patchwork concerns have been mitigated, the American Consumer Institute dispelled that myth using standard tools of text analysis to measure legislative differences. As it turns out, these proposals are very different.
But the math, though convincing, still undersells how silly all of this has gotten.
States cannot even agree on basic definitions like artificial intelligence, high-risk, and consequential decision. Sometimes, they do not define artificial intelligence at all, instead defining an “artificial intelligence system.” Take, for example, these four definitions of AI and artificial intelligence systems, drawn from state AI fairness proposals.
Connecticut: “An AI system is any machine-based system that (1) for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments and (2) may vary in its level of autonomy and adaptiveness after the system is deployed.”
Georgia: “‘Artificial intelligence system’ or ‘AI system’ means an engineered or machine-based system that emulates the capability of a person to receive audio, visual, text, or any other form of information and use the information received to emulate a human cognitive process, including, but not limited to, learning, generalizing, reasoning, planning, predicting, acting, or communicating; provided, however, that artificial intelligence systems may vary in the forms of information they can receive and in the human cognitive processes they can emulate.”
New York: “‘Artificial intelligence decision system’ shall mean any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including any content, decision, prediction, or recommendation, that is used to substantially assist or replace discretionary decision making for making consequential decisions that impact consumers.”
Vermont: “‘Artificial intelligence’ means any technology, including machine learning, that uses data to train an algorithm or predictive model for the purpose of enabling a computer system or service to autonomously perform any task, including visual perception, language processing, and speech recognition, that is normally associated with human intelligence or perception.” And “‘Artificial intelligence system’ means any computer system or service that incorporates or uses artificial intelligence [as defined above].”
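How different are these definitions in practice? One standard text-analysis tool of the sort mentioned above is bag-of-words cosine similarity, which scores how much two passages share the same vocabulary. The sketch below applies it to truncated excerpts of the four definitions quoted here; it illustrates the general technique and is not the American Consumer Institute’s actual methodology.

```python
# A minimal sketch: quantifying how much the four state definitions above
# diverge, using bag-of-words cosine similarity (1.0 = identical wording).
# Illustrative only; not the American Consumer Institute's methodology.
import math
import re
from collections import Counter

# Truncated excerpts of the statutory definitions quoted above.
DEFINITIONS = {
    "Connecticut": ("any machine-based system that, for any explicit or "
                    "implicit objective, infers from the inputs the system "
                    "receives how to generate outputs"),
    "Georgia": ("an engineered or machine-based system that emulates the "
                "capability of a person to receive audio, visual, text, or "
                "any other form of information"),
    "New York": ("any computational process, derived from machine learning, "
                 "statistical modeling, data analytics, or artificial "
                 "intelligence, that issues simplified output"),
    "Vermont": ("any technology, including machine learning, that uses data "
                "to train an algorithm or predictive model"),
}

def bag_of_words(text: str) -> Counter:
    """Lowercase the text and count word tokens."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(count * b[term] for term, count in a.items())
    norm = (math.sqrt(sum(c * c for c in a.values()))
            * math.sqrt(sum(c * c for c in b.values())))
    return dot / norm if norm else 0.0

states = list(DEFINITIONS)
for i, first in enumerate(states):
    for second in states[i + 1:]:
        score = cosine(bag_of_words(DEFINITIONS[first]),
                       bag_of_words(DEFINITIONS[second]))
        print(f"{first} vs. {second}: {score:.2f}")
```

A score of 1.0 would mean word-for-word overlap; the further a pair falls below that, the harder it is to call these bills carbon copies.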
The confusion deepens in other definitions. Many of the proposed laws are triggered by the use of (poorly defined) AI to make so-called consequential decisions. Twenty proposals across 14 states define consequential decision. That’s a small fraction of the 1,000+ state AI proposals, but even among this limited selection of bills, states cannot agree on basic definitions or parameters. The graphic below shows conflicts among those algorithmic fairness proposals that define a consequential decision.
Clearly, state legislatures have very different ideas about what kinds of decisions should be considered consequential. Everyone agrees that employment decisions are consequential. Most—but not all—agree that decisions about education, financial services, healthcare, housing, insurance, and legal and government services are also consequential. (Which decisions about these topics? That’s less clear.) But many disagree on whether decisions about criminal justice, utilities, transportation, voting, marital status, family planning, and more are, in fact, consequential decisions.
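The same kind of tabulation can make these category conflicts concrete. The sketch below counts how many proposals treat each decision category as consequential. The state-to-category mapping is hypothetical example data, invented purely for illustration; the actual bill texts are the authority on what each state covers.

```python
# A small sketch of how one might tabulate definitional overlap across
# proposals. The state-to-category mapping below is HYPOTHETICAL example
# data, invented for illustration; it is not drawn from the actual bills.
from collections import Counter

PROPOSALS = {
    "State A": {"employment", "education", "healthcare", "housing"},
    "State B": {"employment", "financial services", "insurance", "voting"},
    "State C": {"employment", "healthcare", "criminal justice", "utilities"},
}

# Count how many proposals treat each category as consequential.
coverage = Counter(cat for cats in PROPOSALS.values() for cat in cats)

for category, count in coverage.most_common():
    share = count / len(PROPOSALS)
    print(f"{category:20s} {count}/{len(PROPOSALS)} proposals ({share:.0%})")
```

Populating the mapping from the 20 real proposals would reproduce the kind of conflict map shown in the graphic above.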
As many state legislatures gavel out for the year, some of the state-level fracas will fade, but it will not fall silent. After all, four of the most active participants in the AI fairness debate (California, Illinois, Massachusetts, and New York) have full-time legislatures. How those debates resolve will influence the 2026 legislative trajectory of other states, unless both chambers of Congress can agree on an AI moratorium.
For example, Connecticut’s decision to marry its consequential decision framework with transparency requirements could be a sign of what is to come in 2026. Some more aggressive states will likely continue with heavy-handed approaches to AI fairness, while others pivot away from audits and mandates toward transparency and accountability. Fortunately, transparency requirements are ripe targets for federal action, and Congress could send a strong message of leadership to the states by establishing preemptive transparency measures of its own. Congress is already considering this kind of action with its AI Whistleblower Protection Act. Transparency could become one dominant plank of a national AI policy platform.
As states continue to debate and amend their algorithmic fairness proposals, the 10-year pause on state AI regulatory regimes is gaining steam. If recent statements by blue-state Governors Polis and Lamont are any indication, the idea may have more bipartisan support than originally anticipated.
Logan Kolas is the Director of Technology Policy and Nate Karren is a Policy Analyst at the American Consumer Institute, a nonprofit education and research organization. For more information about the Institute, visit us at www.TheAmericanConsumer.Org or follow us on X @ConsumerPal.