The Risk of “High-Risk” Artificial Intelligence Definitions
The Dangers of the Precautionary Principle
Introduction
In our first piece, we explored how states are defining and regulating artificial intelligence. In researching how states define “artificial intelligence” verbatim, we discovered that states specify different kinds of AI. Case in point: California SB 896 refers to “automated decision system[s],” “high-risk automated decision system[s],” and “generative artificial intelligence” in addition to unqualified “artificial intelligence.” These semantic distinctions prompted us to investigate whether states are delineating between substantively different kinds of AI and doing so with precision. The following is the result of this research with respect to high-risk AI.
Six states regulate what they deem to be high-risk artificial intelligence systems: Colorado, Connecticut, Maryland, Rhode Island, Vermont, and Virginia. We also encountered Hawaii’s SB 2572, which implicitly regards all AI as high-risk and prescribes “proactive and precautionary regulation to prevent potentially severe societal-scale risks and harms” therefrom.
Here the reader can find the full search results from Plural Open as of July 27, 2024 for the following key terms: “high-risk artificial intelligence,” “high-risk ai,” “dangerous artificial intelligence,” “consequential artificial intelligence decision system (CAIDS),” and “advanced artificial intelligence.” It is possible that other terms are used to identify supposedly risky artificial intelligence that we did not run through our program.
Our current program is an updated version of the one we used to produce our first report for Abundance Institute on state definitions of “artificial intelligence,” which can be read in an earlier Now + Next post. The script works in the following manner (a simplified sketch follows the list):
1. Search Plural Open for bills containing a given key term, which, for current purposes, was one of the terms listed above.
2. Create a text file including the state, session, bill ID, and a hyperlink to the Plural Open bill page.
3. Follow the link to access, copy, and append the text of each bill to the list of bill information described in step 2.
4. Call the OpenAI API to search the text of each bill for a definition of the key term in question according to the following prompt: “You are a legislative analyst writing a policy brief on how different states define the concept ‘[key term]’. Your task is to read the text of state bills looking to isolate any definitions of artificial intelligence or related terms useful to your analysis. Return a line-break separated list of the word-for-word definitions of ‘[key term]’ mentioned in this bill text. If there is no definition, return N/A.”
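For the technically inclined, a minimal sketch of this four-step pipeline follows. It is a reconstruction rather than our exact script: the Open States v3 API (the dataset behind Plural Open), the gpt-4o model, and the response field names are stand-in assumptions for illustration.

```python
# Minimal sketch of the four-step pipeline; a reconstruction, not the
# original script. Assumptions: the Open States v3 API for search and
# gpt-4o as the model; neither is specified in the report.
import os

import requests
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "You are a legislative analyst writing a policy brief on how different "
    "states define the concept '{term}'. Your task is to read the text of "
    "state bills looking to isolate any definitions of artificial "
    "intelligence or related terms useful to your analysis. Return a "
    "line-break separated list of the word-for-word definitions of "
    "'{term}' mentioned in this bill text. If there is no definition, "
    "return N/A."
)


def search_bills(term: str) -> list[dict]:
    """Step 1: find bills whose text contains the key term."""
    resp = requests.get(
        "https://v3.openstates.org/bills",
        params={"q": f'"{term}"', "per_page": 20},
        headers={"X-API-KEY": os.environ["OPENSTATES_API_KEY"]},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["results"]


def fetch_bill_text(url: str) -> str:
    """Step 3: follow the bill link and pull down the page text."""
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.text  # raw page text; real use would strip HTML markup


def extract_definitions(term: str, bill_text: str) -> str:
    """Step 4: ask the model for verbatim definitions, or 'N/A'."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice
        messages=[
            {"role": "system", "content": PROMPT.format(term=term)},
            {"role": "user", "content": bill_text},
        ],
    )
    return response.choices[0].message.content


def run_pipeline(term: str, outfile: str = "results.txt") -> None:
    """Step 2 (plus 3 and 4): write one tab-separated row per bill."""
    with open(outfile, "a", encoding="utf-8") as f:
        for bill in search_bills(term):
            url = bill.get("openstates_url", "")
            definitions = extract_definitions(term, fetch_bill_text(url))
            f.write(
                f"{bill['jurisdiction']['name']}\t{bill['session']}\t"
                f"{bill['identifier']}\t{url}\t{definitions}\n"
            )
```

Calling `run_pipeline("high-risk artificial intelligence")`, for example, appends one row per matching bill to results.txt, ready to be imported into a spreadsheet.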
After running this program for each of the key terms under investigation, we moved the data from the .txt file to a Google Sheets document that, in addition to the aforementioned information, also includes columns for “key term” and “actually defined,” since, as we will see momentarily, some of the bills included do not actually define high-risk AI but are of interest for other reasons.
Definitions of “High-Risk Artificial Intelligence”
States define “high-risk artificial intelligence” and permutations of this key term in the following manner:
Colorado and Vermont use exactly the same definition. Colorado SB 24-205 and Vermont H710 define a “high-risk artificial intelligence system” as “any artificial intelligence system that, when deployed, makes, or is a substantial factor in making, a consequential decision.”
Vermont introduced this definition first, on January 9, 2024, in its House, but no further action has been taken. Colorado introduced its bill defining high-risk AI later, on April 10, 2024, and went on to pass it on May 17, 2024.
Colorado defines a “consequential decision” as “a decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of: (a) education enrollment or an education opportunity; (b) employment or an employment opportunity; (c) a financial or lending service; (d) an essential government service; (e) health-care services; (f) housing; (g) insurance; or (h) a legal service.”
Vermont defines a “consequential decision” similarly (though not identically) to Colorado as “any decision that has a material legal, or similarly significant, effect on a consumer’s access to credit, criminal justice, education, employment, health care, housing, or insurance.”
On January 9, 2024, Vermont also introduced H711, which, in addition to expanding H710’s definition of “high-risk artificial intelligence system,” defines “inherently dangerous artificial intelligence system[s]” as “high-risk artificial intelligence system[s]. . . or generative artificial intelligence system[s].” Stay tuned for future research on generative AI definitions.
The remaining three states that define high-risk AI in legislation also do so with reference to consequential decision-making:
Connecticut SB 2 (engrossed April 4, 2024): “‘High-risk artificial intelligence system’ means any artificial intelligence system that has been specifically developed and marketed, or intentionally and substantially modified, to make, or be a controlling factor in making, a consequential decision.”
Rhode Island SB 2888 (introduced April 11, 2024): “‘Consequential artificial intelligence decision system (CAIDS)’ means machine-based systems or services that utilize machine learning, artificial intelligence, or similar techniques that provide outputs that are not predetermined, and have been specifically developed, or an AI system specifically modified, with the intended purpose of making or determining consequential decisions.”
Virginia HB 747 (introduced February 8, 2024): “‘High-risk artificial intelligence system’ means any artificial intelligence system that is specifically intended to autonomously make, or be a controlling factor in making, a consequential decision. A system or service is not a ‘high-risk artificial intelligence system’ if it is intended to (i) perform a narrow procedural task, (ii) improve the result of a previously completed human activity, (iii) detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment without proper human review, or (iv) perform a preparatory task to an assessment relevant to a consequential decision.”
The last state that defines high-risk AI, Maryland, doesn’t do so legislatively but defers this responsibility to the state’s executive branch, specifically to its Department of Information Technology.
Maryland SB 818 (signed May 9, 2024): “‘High–risk artificial intelligence’ means artificial intelligence that is a risk to individuals or communities, as defined under regulations adopted by the Department in consultation with the Governor’s Artificial Intelligence Subcabinet.”
The Maryland legislature partially defines high-risk AI itself: “‘High–risk artificial intelligence’ includes rights–impacting artificial intelligence and safety–impacting artificial intelligence.”
SB 818 defines “rights-impacting” as serving “as a basis for decision or action that is significantly likely to affect civil rights, civil liberties, equal opportunities, access to critical resources, or privacy,” similar to Colorado SB 24-205’s and Vermont H710’s definitions of “consequential decision.” The Maryland bill defines “safety-impacting” as having “the potential to significantly impact the safety of human life, well-being, or critical infrastructure.” “Significantly” is, unsurprisingly and concerningly, left open to interpretation.
Hawaii explicitly embraces an anti-innovation regulatory approach. Hawaii SB 2572 (introduced January 19, 2024) adopts the “precautionary principle” for AI regulation, which SB 2572 defines in §3 as “(1) requir[ing] the government to take preventive action in the face of uncertainty; (2) shift[ing] the burden of proof to those who want to undertake an innovation to show that it does not cause harm; and (3) hold[ing] that regulation is required whenever an activity creates a substantial possible risk to health, safety, or the environment, even if the supporting evidence is speculative.”
Defining high-risk AI by the importance of the tasks it carries out helps avoid laws, rules, and regulations that inadvertently stymie or prohibit innocuous instances of machine learning that even the most zealous AI skeptic wouldn’t want restricted, e.g., spam filters in email software. However, the benefit of this distinction is drastically reduced by states’ near all-encompassing definitions of “consequential decisions,” “rights-impacting,” and “safety-impacting.” The general hostility toward AI innovation is made explicit in Hawaii’s SB 2572, which obliterates incentives for advancing AI with its regulation-first stance “even if the supporting evidence is speculative.”
If the “burden of proof [is on] those who want to undertake an innovation” whose uses and downstream impacts are, as with all emerging technologies, largely unknowable, who in their right mind would enter the AI market? Markedly fewer firms than would do so in a regulatory environment of permissionless innovation.
Discussion: The Dangerous “Precautionary Principle”
Nebulous definitions of “safety-impacting,” “rights-impacting,” and “consequential decisions” are concerning because they open the door to a passively anti-innovation regulatory environment. When states undertake the responsibility of defining these terms without reference to federal agencies’ definitions, federalism provides an antidote to the most hostile regulatory regimes: tech firms can operate and innovate in the states with more narrowly tailored definitions, developing and deploying their AI products under the scrutiny of the market without first clearing anticipatory hurdles. That is, most of the time. Sometimes state-level regulation of technologies that are inherently interstate, as is the case for certain applications of artificial intelligence, has extra-jurisdictional effects, intentional or otherwise. Where regulation, whether state or federal, is expected to have an interstate effect, this federalist approach is no longer appropriate, and the policy debate, weighing of tradeoffs, and evaluation of relevant evidence must occur at the national level.
States that explicitly outsource this legislative responsibility, as Maryland’s SB 818 does, forfeit the benefits afforded by federalism. A legislature’s deference to executive rule-making cedes definitional authority to unelected agencies, substituting a one-size-fits-all approach for the laboratories of democracy.
Hawaii’s SB 2572 is even more inimical to the development of AI technology, adopting a reactionary anti-innovation stance with its so-called precautionary principle. While it is reasonable for policymakers to consider the potential social costs and risks posed by emerging technology, the same scrutiny should apply to legislation and regulation itself. As Neil Chilson explains in another piece for Now + Next, just as red teaming “helps identify and mitigate risks before AI models are put into application, ultimately leading to more trustworthy and reliable AI solutions. . . Red teaming legislation might be even more important than red teaming technology, because technology is easier to update.”