Questions Every State Policymaker Should Ask about AI Legislation
Quick queries to cut through the jargon surrounding any AI legislative proposal
This FAQ will be updated regularly. Last updated on January 31, 2024.
States are facing an onslaught of “AI” regulation.1 AI (and algorithms more generally) is a general-purpose technology, and thus nearly every policy area will be affected by the regulatory environment for algorithms.
A swath of proposals is already here, with more on the way. At the federal level, an American Action Forum (AAF) tracker details 70 legislative proposals in the 117th Congress that implicate AI systems.2 At the state level, as of January 9, 2024, states had enacted 14 AI-related bills. The National Conference of State Legislatures is tracking 191 measures at the state level, with new bills introduced every week.3
Artificial intelligence is a technically complex field, and legislators and those who communicate with them need to be able to cut through the jargon to get to the heart of various legislative proposals. This document offers some key questions that everyone should ask about proposed AI regulation.
Q1: What do you mean by ‘AI’?
The definition of AI is probably the most important definition in any AI regulatory or legislative proposal.
In the scientific community, there is no consensus definition of ‘artificial intelligence’. At the most general level, AI is software that simulates human intelligence. This definition is not very clarifying, however, because human intelligence itself is difficult to define. Computers and software have been simulating parts of human intelligence for decades.
AI arguably includes voice-activated personal assistants (like Alexa); recommendation systems for music, movies, and books; autonomous and semi-autonomous vehicles; personalized advertising and shopping; smart home devices; health and fitness trackers; telemedicine and health diagnostics; fraud detection in personal banking; automated customer service and chatbots; language translation apps; educational and learning apps; and content creation tools (for art, music, writing, etc.).
People also occasionally use “AI” to mean simply “computers” or “software”. Many proposals define “AI” in a way that sweeps in everyday computer software. Such legislation would add new regulatory burdens to large segments of the existing software ecosystem. Legislation that so broadly affects software development threatens overall U.S. international competitiveness.
Legislators can greatly improve AI legislation by adopting a narrower definition of AI. Such legislation is frequently motivated by ChatGPT, DALL-E, and similar consumer-facing apps, all of which are “generative AI”. By targeting generative AI rather than AI generally, legislators could reduce the risk of substantial side effects on many other kinds of algorithms. A good definition of “generative AI” could be:
"Generative AI" refers to artificial intelligence systems that can create original content, ideas, or data, based on learned patterns from a training dataset. These systems utilize machine learning techniques to produce new outputs that are coherent with their input data, without being explicitly programmed for specific content generation.
This definition remains broad but is still much more constrained than many proposed definitions of “artificial intelligence”.
Q2: What harm is this legislation trying to prevent?
A clear answer to this question helps clarify legislative intent, guides the appropriate scope of solutions, and enables a more accurate cost-benefit analysis. It also facilitates clear enforcement and compliance, prevents government overreach, helps the public understand the legislation, limits unintended consequences, and better enables future assessment of the legislation.
Legislation that targets clearly articulated, concrete harms such as physical injury or financial loss is easier to adjudicate, less likely to be abused, and more clearly benefits constituents. Legislation that mandates certain procedures without targeting a specific harm can impose costs while helping no one—or worse, actively preventing innovation that would help many. Legislation that asserts vague or subjective harms unleashes enforcers to be selective and arbitrary in their enforcement.
If proponents cannot detail what concrete harms to users the proposed legislation prevents or addresses, that should be a giant red flag.
Q3: Why are current legal powers insufficient?
As a general-purpose technology, AI will be deployed in industries and sectors that already have their own regulatory structures. Good legislation will identify any gaps in the existing regulatory regimes that need to be filled. Bad legislation will duplicate or even conflict with existing regulatory structures.
Supporters of new legislation should be able to explain why existing sector-specific state or federal legislation cannot address concerns about the use of AI in that sector.
Q4: Did some form of this legislation precede the release of ChatGPT?
Much of what is now called AI regulation merely recycles longstanding regulatory impulses and sometimes even actual legislation. Settled or stagnant fights over internet policy have been resurrected as AI concerns: intermediary liability, privacy, intellectual property, bias, etc. As noted above in Q1, many “new” regulatory proposals do not clearly distinguish between “AI” and the rest of computer algorithms—and indeed, it is difficult to draw such lines.
Policy entrepreneurs are taking advantage of the concerns about AI and the vagueness of AI definitions to re-up their preferred market interventions. Understanding the pedigree of various proposals can help clarify whether new legislation addresses new issues raised by generative AI or merely represents an opportunistic attempt to revive an otherwise dead proposal.