Defining “Artificial Intelligence” in State Legislation: An Analysis of the Current Landscape
How Lawmakers Are Shaping the Future of Technology
Introduction
Considering the near-nonstop coverage of artificial intelligence (AI), its rapid advancement, and its societal implications, it is no surprise that lawmakers are interested in regulating it, and therefore in defining it. Given this activity and the stakes involved, an analysis of those definitions is in order.
As of June 10, 2024, searching “artificial intelligence” in Plural Policy’s legislation tracker, Plural Open, yields 5,380 results. Plural Policy is a nonpartisan nonprofit that provides advocacy organizations, NGOs, and the public with software to access public policy data. Plural’s flagship product, and the one used for this analysis, is Plural Open, formerly Open States, which Plural acquired in 2021. Plural Open carries on Open States’ function of aggregating legislation from all 50 states, DC, Puerto Rico, and the US Congress, which it then parses, standardizes, and makes public via its website and API. It collects legislative information from official state, territory, and federal websites using web scrapers and APIs.
Perhaps surprisingly, given the digital, borderless nature of artificial intelligence, fewer than one thousand of these AI-related bills are federal; more than 4,400 are state bills.
Whether at the federal or state level, precise definitions are paramount if our legislators are to succeed in crafting narrowly tailored bills. Bills should minimize real risks while refraining from interfering with the development of this emerging technology, which promises to make society orders of magnitude more productive and, consequently, enhance human flourishing. It is this very confidence in, and hope for, the AI-enabled future that inspired us to work with Abundance Institute to research the current landscape of AI definitions in state-level legislation.
We accessed and parsed state legislation using a combination of Plural Policy’s open-access online bill tracker (Plural Open), and Python code to find and record instances of AI definitions in this legislation. For Hawaii, Maryland, Nebraska, Utah, and Vermont, bill data was manually collected and parsed because of compatibility issues with our code.
Before describing state definitions, it is helpful to review the definitions of AI in federal legislation and regulation. Reviewing these definitions is important because AI-enabled online software doesn’t respect borders and, as will become apparent, a particular federal definition of AI is used by some states as their own.
Federal Definitions of “Artificial Intelligence”
Our discussion begins by reviewing the most current, consequential federal definitions of artificial intelligence. Section 5002(3) of H.R. 6216, the National Artificial Intelligence Initiative Act of 2020 (NAIIA), defines AI as follows.
NAIIA (2020): "The term 'artificial intelligence' means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to: A) perceive real and virtual environments; B) abstract such perceptions into models through analysis in an automated manner; and C) use model inference to formulate options for information or action."
The NAIIA’s definition is consequential for multiple reasons. First, it is the definition used verbatim in the Biden Administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (October 30, 2023). This order aims to “establish guidelines and best practices, with the aim of promoting consensus industry standards, for developing and deploying safe, secure, and trustworthy AI systems” through the National Institute of Standards and Technology (NIST). By using the NAIIA’s definition, the Biden Administration implicitly sets a precedent for future federal actions relating to AI, including executive orders on or relating to this technology in current and future administrations.
This definition is also influential at the legislative level. Recent federal bills such as H.R. 8315, the Enhancing National Frameworks for Overseas Restriction of Critical Exports Act (May 8, 2024), and H.R. 6881, the AI Foundation Model Transparency Act of 2023 (Dec. 22, 2023), both reference the NAIIA’s definition of AI. We refer to these bills because they were the most recent to substantively treat artificial intelligence for regulatory purposes. H.R. 8315 in particular is likely to be one of the last major bills referencing AI before Congress’s August recess.
Another important AI definition invoked in federal legislation comes from section 238(g) of H.R. 5515, the John S. McCain National Defense Authorization Act for Fiscal Year 2019 (NDAA). The NDAA defines artificial intelligence as follows.
NDAA (2019): "1) Any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets. 2) An artificial system developed in computer software, physical hardware, or other context that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action. 3) An artificial system designed to think or act like a human, including cognitive architectures and neural networks. 4) A set of techniques, including machine learning, that is designed to approximate a cognitive task. 5) An artificial system designed to act rationally, including an intelligent software agent or embodied robot that achieves goals using perception, planning, reasoning, learning, communicating, decision making, and acting."
S. 1626, the Ask Act (May 16, 2023), and S. 2346, the Stop CRT Act (July 18, 2023), both define AI by referencing the NDAA’s definition. This is in addition to other state and federal legislation that has been influenced by the NDAA and NAIIA definitions since 2019 and 2020, respectively.
Fortunately for humanity, yet unfortunately for policymakers, AI is evolving and improving very quickly. Since the NAIIA defined AI in 2020, global private AI investment has topped $375 billion, with the US leading the pack by investing 8.2x more than the next-largest investor, China. In this short span, entirely new technologies and applications of AI have been invented, validated, iterated, and hyper-commercialized to billions of consumers. Therein lies the importance of regularly assessing and reassessing the definition of AI to keep pace with the rapid advancement of the technology.
Data and Methodology
We began collecting our data from Plural Open on May 28, 2024 and analyzed it over the following weeks. To accomplish this, we wrote a series of Python scripts that programmatically searched state legislation on the Plural site, recorded bill text, and isolated the definitions of AI used in each bill. The following is a high-level overview of the procedure.
Search all federal and state legislation containing the keyword “artificial intelligence” in the current legislative session.
For each matching result returned from this query, retrieve and download the original bill text.
For each bill text, isolate any definition(s) of AI and record with the accompanying bill information.
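The definition-isolation step can be sketched with a simple pattern match. The regular expression and the `extract_ai_definitions` helper below are illustrative assumptions based on common statutory drafting conventions (e.g., “‘Artificial intelligence’ means …”); this is a minimal sketch, not our production scripts, which also handled the Plural Open queries and per-state formatting quirks.

```python
import re

# Hypothetical pattern for statutory definitions of "artificial intelligence".
# Real bill text varies widely; this covers the common "X means ..." form.
DEFINITION_PATTERN = re.compile(
    r"[\"']?artificial intelligence[\"']?\s+(?:means|shall mean)\s+(.+?)(?=\.\s|$)",
    re.IGNORECASE | re.DOTALL,
)

def extract_ai_definitions(bill_text: str) -> list[str]:
    """Return candidate AI definition clauses found in a bill's text."""
    return [m.group(1).strip() for m in DEFINITION_PATTERN.finditer(bill_text)]

# Demonstration on a short, made-up bill excerpt.
sample = (
    'Sec. 2. "Artificial intelligence" means an artificial system developed '
    "in computer software, physical hardware, or other context that solves "
    "tasks requiring human-like perception. Sec. 3. Other provisions."
)
print(extract_ai_definitions(sample))
```

Each match is then recorded alongside the bill’s metadata (state, session, bill number) for the categorization step described below.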
After collecting all candidate definitions of AI, we read through each definition and categorized it into one of the buckets described below based on its level of specificity. It is worth noting that AI, in the form of web automation and large language models (LLMs), significantly streamlined our research workflows for this project.
Definitions of “Artificial Intelligence” in State Bills
We have sorted state definitions of “artificial intelligence” into four broad “buckets.” This link will take you to the Google Sheet where we compiled and categorized the definitions. The distribution across buckets is displayed graphically, and the definitions themselves are provided beneath the illustration.
Key
*** - A state that has a bill with its own definition of AI or related concept in addition to adopting a federal definition.
** - A state that has a bill establishing a task force or commission to, among other things, establish their own definition of artificial intelligence.
Bucket 1—state(s) adopts the federal definition of AI from the NAIIA:
Hawaii*** (SB 2572)
Illinois*** (SB 2847)
Maryland*** (HB 1297)
Nebraska (LB 1203)
Ohio (SB 21)
All of these states refer to the NAIIA in their definition of artificial intelligence.
Bucket 2—state(s) broadly defines AI:
Alabama (HB 172)
Alaska (SB 262)
Colorado (HB 24-1468)
Indiana (SB 150)
Louisiana (HB 916)
Pennsylvania (HB 1663, SR 143)
Bucket 3—state(s) narrowly defines AI:
California (SB 896)
Connecticut (SB 2)
D.C. (B25-0114)
Florida (SB 972 and SB 850)
Georgia (HB 887)
Hawaii*** (HB 2152)
Idaho (H 568)
Illinois*** (SB 2847)
Maryland*** (HB 1271)
Massachusetts (HD 4788)
Michigan (HB 5143)
Minnesota (SF 4696)
Mississippi (SB 2423)
New Hampshire (HB 1599)
New Mexico (HB 184)
New York (A 8195)
Puerto Rico (PS 1179)
Rhode Island (HB 7158)
Tennessee (SB 2668 and HB 2163)
Texas** (HB 2060 and HB 4695)
Utah (SB 149)
Vermont (H 711)
Washington (HB 1934)
Bucket 4—state(s) does not define AI:
Sub-bucket 4a—state(s) regulates AI:
Arizona
Iowa
Kansas
Kentucky**
Missouri
Nevada**
New Jersey**
North Carolina**
Oklahoma
Oregon**
South Carolina
South Dakota
Virginia**
West Virginia**
Wisconsin
Sub-bucket 4b—state(s) does not regulate AI:
Arkansas
Delaware** (HB 333)
Maine
Montana
North Dakota
Wyoming
In bucket 1, five states adopt the NAIIA definition in their legislation. Across buckets 2 and 3, we discovered 29 unique state definitions of artificial intelligence. For the 21 states in buckets 4a and 4b, no definition of AI is included, despite legislation (in bucket 4a) attempting to regulate AI in one form or another.
When one counts separately each permutation of artificial intelligence defined and regulated by states, such as “high-risk artificial intelligence,” “generative artificial intelligence,” “automated decision system,” “automated decision tool,” “automated support decision system,” “automated final decision system,” “algorithmic decision system,” and the like, the number of unique state definitions of AI expands to 57, with only 16 of the definitions specifying “artificial intelligence” per se.
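As an illustration of this tally, the snippet below counts definitions per defined term over a tiny sample of (term, state, bill) rows. The rows are invented for demonstration and are not our actual dataset; the real counts (57 total, 16 of “artificial intelligence” per se) come from the compiled spreadsheet.

```python
from collections import Counter

# Made-up sample rows: (defined term, state, bill). Illustrative only.
records = [
    ("artificial intelligence", "California", "SB 896"),
    ("automated decision system", "California", "SB 896"),
    ("generative artificial intelligence", "California", "SB 896"),
    ("artificial intelligence", "Louisiana", "HB 916"),
    ("high-risk automated decision system", "California", "SB 896"),
]

# Count how many recorded definitions exist for each defined term.
per_term = Counter(term for term, _state, _bill in records)

# Definitions of "artificial intelligence" per se vs. related permutations.
ai_proper = per_term["artificial intelligence"]
total = sum(per_term.values())
print(f"{total} definitions, {ai_proper} of 'artificial intelligence' per se")
```

Applied to the full dataset, this kind of per-term count is what separates definitions of AI proper from the many adjacent categories states regulate.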
Discussion
Bucket 1: 5 states have adopted the definition provided in section 5002(3) of the National Artificial Intelligence Initiative Act of 2020 (NAIIA). See Section II for the definition.
Bucket 2: 6 states have created their own, broad definitions of artificial intelligence. For example:
Alabama HB 172: “Artificial Intelligence. Any artificial system or generative artificial intelligence system that performs tasks under varying and unpredictable circumstances without significant human oversight or that can learn from experience and improve performance when exposed to data sets.” This definition borrows largely from the NDAA.
Separating AI and generative AI is odd, considering the latter is a subset of the former. We will investigate states’ definition of “generative AI” specifically in future research.
Louisiana HB 916: “‘Artificial intelligence’ means an artificial system developed in computer software, physical hardware, or other context that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action.”
Bucket 3: 23 states/territories have created their own specific, lengthy, and precise definitions of “artificial intelligence” that distinguish it from related software such as “generative artificial intelligence” (GAI), “automated decision systems” (ADS), and “high-risk automated decision systems” (HRADS), and that distinguish these instances of genuine artificial intelligence from programs such as spam filters, firewalls, antivirus software, and the like. For example:
California SB 896: “As used in this chapter: (a) Artificial intelligence means an engineered or machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs that can influence physical or virtual environments and that may operate with varying levels of autonomy. (b) (1) Automated decision system means a computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues simplified output, including a score, classification, or recommendation, that is used to assist or replace human discretionary decisionmaking and materially impacts natural persons. (2) Automated decision system does not mean a spam email filter, firewall, antivirus software, identity and access management tool, calculator, database, dataset, or other compilation of data. (c) Generative artificial intelligence means the class of artificial intelligence models that emulate the structure and characteristics of input data in order to generate derived synthetic content, including images, videos, audio, text, and other digital content. (d) High-risk automated decision system means an automated decision system that is used to assist or replace human discretionary decisions that have a legal or similarly significant effect, including decisions that materially impact access to, or approval for, housing or accommodations, education, employment, credit, health care, and criminal justice.”
Note that California distinguishes artificial intelligence from automated decision systems, and further specifies what ADS are not, i.e., algorithms that are not meaningfully autonomous or human-like and that we have long used in all sorts of applications. This is an important and meaningful distinction, as the different subfields of AI (machine learning, computer vision, natural language processing, robotics, etc.) all use complicated algorithms that can be, irresponsibly, reduced to simply an “algorithm.”
New York A 8195: "’Advanced artificial intelligence system’ shall mean any digital application or software, whether or not integrated with physical hardware, that autonomously performs functions traditionally requiring human intelligence. This includes, but is not limited to the system: (a) Having the ability to learn from and adapt to new data or situations autonomously; or (b) Having the ability to perform functions that require cognitive processes such as understanding, learning, or decision-making for each specific task. 2. "High-risk advanced artificial intelligence system" shall mean any advanced artificial intelligence system that possesses capabilities that can cause significant harm to the liberty, emotional, psychological, financial, physical, or privacy interests of an individual or groups of individuals, or which have significant implications on governance, infrastructure, or the environment.”
Note that New York, like California, creates subcategories of artificial intelligence (systems), “advanced” and “high-risk advanced,” both of which emphasize the human-like capacity to learn, understand, and act.
Bucket 4a: 15 states/territories are actively regulating AI but lack any apparent definition of the very technology whose particular implementations they are making illegal. For example:
Arizona HB 2307: “Artificial intelligence; prohibited use; enforcement; civil penalty; definition A. A person or entity that establishes, creates or uses an artificial intelligence system may not allow the system to create, store or use child sexual abuse material.”
Does not define “artificial intelligence system.”
Kansas SB 375: “AN ACT concerning elections; relating to the crime of corrupt political advertising; prohibiting the use of generative artificial intelligence to create false representations of candidates in campaign media or of state officials; amending K.S.A. 25-2407 and 25-4156 and repealing the existing sections.”
Does not define “generative artificial intelligence.”
Bucket 4b: 6 states/territories neither define nor regulate artificial intelligence.
Finally, there are a select few states that use a federal definition of artificial intelligence and define other instances of AI themselves (as indicated by the triple asterisks): Hawaii, Illinois, and Maryland.
Evaluations and Recommendations
Bucket 1: Legislation in this bucket adopts the NAIIA’s definition of AI, which in our assessment provides broad but useful guardrails for describing this technology. The NAIIA’s first sentence, “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments,” is broad enough not to needlessly narrow the scope of the technology to a particular use case, a specific technical implementation of AI, or a normative assessment of its usefulness. In the NAIIA’s second sentence, “Artificial intelligence systems use machine and human-based inputs to…”, the broad concept of an AI system is broken down into three comprehensible parts:
“A) perceive real and virtual environments;” Perception is left broad, which we believe is the correct call given the speculative nature of defining perception, particularly in computer systems. Meanwhile, two types of environments are broken out, which is important considering that AI can “perceive” in both digital and physical environments, each of which has its own unique applications and constraints.
“B) abstract such perceptions into models through analysis in an automated manner;” Abstraction, models, and analysis are each left broad, which is consistent with describing perception in a broad manner. How “perception” occurs, how this input is “abstracted,” how this abstracted input is “modeled,” how this model does “analysis,” and to what extent this analysis occurs in an “automated manner” are all implementation questions that are highly context dependent, making this generality useful for legislative purposes.
“C) use model inference to formulate options for information or action.” Consistent with A) and B), “model inference” and “formulate options” are similarly left general, abstracting implementation details. Finally, breaking out the two outcomes of these systems, acquiring information and taking action, in our assessment properly describes the primitive uses of AI systems: data and action.
Bucket 2: The worst states are those six that have adopted sweeping, catchall definitions of AI that, in an attempt to safeguard against legitimate risks, unintentionally extend to much software that cannot seriously be considered artificial intelligence. The risk here is that this legislation will render illegal software that consumers are, and have been, using safely and beneficially.
This generality problem appears in two different forms.
Conflating software with AI. AI can be used in software, and software can be AI. This conflation usually occurs when an AI system is described as a software system without a clear distinction or qualification. For example, a system that “receives input from a user and produces output” could describe an operating system, a social media app, or an infinite number of software systems. Therefore, going too broad loses the precision of defining AI and begins to encompass broader software systems, which cannot be regulated in the same manner.
Conflating algorithms with AI. AI, or more specifically, the components of an AI system, uses algorithms to process and manipulate data. But an algorithm, as defined mathematically (e.g., a defined set of instructions intended to perform a task), is too broad a concept to describe AI, and using it risks classifying very basic algorithms as having an AI quality. For example, a recipe for baking a cake is an algorithm, but that does not make it AI or related to AI.
Bucket 3: Beginning with the positives, these states provide lengthy, precise definitions of “artificial intelligence.” Some also go on to distinguish AI from subspecies thereof, such as “generative artificial intelligence” (GAI), “automated decision systems” (ADS), and “high-risk automated decision systems” (HRADS), and to distinguish these instances of genuine artificial intelligence from programs such as spam filters, firewalls, antivirus software, and the like (California SB 896). However, categories such as “advanced artificial intelligence system” (New York A 8195), though emphasizing features of AI such as the ability “to perform functions that require cognitive processes such as understanding [and]. . . decision-making,” are still open to broad interpretation.
Bucket 4a: Without clearly specifying what it is they are regulating, these states have begun to regulate AI anyway, most commonly in response to concerns about the proliferation of deepfakes, especially with regard to pornography and election interference. However, there is still great opportunity here: providing legislators in these states with precise definitions of artificial intelligence can narrow the scope of their well-intentioned legislation post hoc. Seven of these states (marked above with double asterisks) have legislation that has created, or will, when passed, create commissions or task forces to define artificial intelligence and related concepts. We suggest providing these commissions with pertinent information to help them craft the definitions they will ultimately provide legislators or pass as administrative rules themselves.
Bucket 4b: Although some might be concerned that these states have not adopted a definition of AI whatsoever, we consider this situation a boon. Since no definition has been adopted, these states have the opportunity to consult policy wonks, technologists, industry experts, and other stakeholders to contribute to crafting narrowly targeted regulation in these states so that innovation is not needlessly stymied.
Conclusion
The landscape of state-level artificial intelligence legislation is a mixed bag. Half of the states are either actively regulating AI without defining it (15), have written overly broad definitions (6), or have adopted the suboptimal federal definition of AI (5). The remaining states are bimodal: they either have detailed, specific definitions of artificial intelligence that distinguish it from related technology (23) or have no definitions or legislation on the topic whatsoever (6).
The 21 states that don’t have a definition of AI but still regulate it risk obstructing the development of the technology through poorly tailored legislation and rule-making. As policymakers are trying to accomplish the difficult task of defining AI in law, we recommend they review the bills in bucket 3 such as California SB 896, New York A 8195, Idaho H 568, and Hawaii HB 2152. These bills specify what AI is, what it is not, and distinguish between AI and related technology based on function and risk profile. As AI continues to develop at a remarkable pace, policymakers and concerned citizens alike must stay abreast of the latest developments as new benefits and pitfalls become apparent.
In forthcoming pieces for Abundance Institute, Sam and I will investigate state definitions of the terms “generative artificial intelligence,” “high-risk/advanced/large artificial intelligence,” “AI-generated content,” and “machine learning.” Stay tuned!