Resolving to Defend AI Innovation in the States
Empowering States for a High-Tech Economy: Insights from Neil Chilson and Adam Thierer
I am excited to share this guest post this morning from CGO’s Neil Chilson and R Street Institute’s Adam Thierer on how the innovation perspective on AI desperately needs state champions. Thank you to Neil and Adam for sharing!
In recent Federalist Society essays, we have documented the growth of state and local proposals to regulate artificial intelligence (AI) and algorithmic systems. The danger of regulatory overreach is already upon us, with almost 90 different bills currently pending and some already being implemented.
While some of these measures would simply require more study of AI policy issues, many others would have a more far-reaching and immediate regulatory impact. Just recently, for example, a New York City regulation went into effect that mandates “algorithmic audits” of AI tools used in hiring processes. Of the six AI-related bills that California is currently considering, one would impose new algorithmic hiring regulations similar to New York City’s. The District of Columbia has also considered a “Stop Discrimination by Algorithms Act,” which would expand regulation of AI tools used to help make eligibility determinations for jobs and other matters. Employment law is just one area of increased AI regulation; many other issues are driving new proposals as well.
With growing momentum behind AI regulation, innovation desperately needs state champions. A model resolution recently proposed by the American Legislative Exchange Council (ALEC), “In Support of Free Market Solutions and Enforcement of Existing Regulations for Uses of Artificial Intelligence,” provides an excellent starting point for such champions.
The ALEC resolution stresses how AI “represents the next great tool for human flourishing, artistic creativity, increased productivity, and economic growth,” and correctly notes that “AI also represents a major area of competition between American innovators and foreign adversaries and cyber criminals.”
Indeed, AI policy is important because it has profound implications for America’s global competitiveness and geopolitical standing. This is why it is essential for U.S. policymakers to once again get our innovation culture right to facilitate the important new technological revolution now unfolding.
Algorithmic technologies are already transforming many sectors and benefiting the public in diverse fields such as medicine and health care, financial services, transportation, retail, agriculture, entertainment, energy, aviation, the automotive industry, and countless others. Experts predict that AI could drive explosive economic growth and productivity gains. A 2018 McKinsey study predicted an additional $13 trillion in global economic activity by 2030, “or about 16 percent higher cumulative GDP compared with today.”
Toward that end, the ALEC resolution notes that “the major advancements in AI have been driven by private sector capital, ingenuity, and effort,” and that “AI innovation is supported by the principles of a market-driven approach to policy creation.” It is essential, the resolution argues, that we avoid “hysterical and misguided responses from government regulators” and instead embrace “competition and technological neutrality; constitutional limits and protections against government overreach; [and] self-governance as the preferred approach to addressing novel challenges.”
With this framing in mind, ALEC’s model resolution would have states implement a bill or bills that, among other things, would:
support a permissionless innovation approach to AI, recognizing that the free market is best equipped to advance innovation, mitigate potential harms, safeguard privacy, and ensure robust competition;
support efforts by state and federal functional regulators to enforce existing anti-discrimination and other laws against regulated entities that use AI;
reject any attempt by the federal or state governments to ban AI or significantly curtail its advancement by undermining the above principles;
deny the need for any government to impose “Diversity, Equality, and Inclusivity” requirements on AI; and
reflect that any federal regulations governing the use of AI technology must emerge from the U.S. Congress, not be promulgated by federal agencies.
This is not a call for zero regulation of AI. Instead, the ALEC model legislation represents a principled and practical approach to AI governance. It highlights the many existing tools and methods governments already possess to govern algorithmic systems. For example, both federal and state lawmakers already have broad-based consumer protection laws, unfair and deceptive practices regulations, anti-discrimination standards, workplace safety rules, and more. All these policies will apply to algorithmic systems. Lawmakers need not create massive new bureaucracies and piles of additional red tape when so much regulatory capacity already exists.
State legislators can and should pass this model resolution and then follow it up with action. Any existing or new legislation to address “algorithmic bias” or “AI safety” should receive significant scrutiny for consistency with the resolution. Many such proposals are framed as neutral process or transparency requirements, but they should be challenged as potential government veto points for new technologies and applications. One key litmus test should be whether the legislation creates new pre-market obligations for companies.
In addition, because AI is likely to redraw industry boundaries and business models, state legislators should review their existing regulations to ensure they aren’t imposing artificial barriers to the deployment of AI. For example, AI capabilities could entice new entrants and competitors in insurance, telecommunications, transportation, and even government services. But where prescriptive regulation built around past business models makes it risky to serve consumers through new, AI-powered methods, state legislators should reduce or eliminate such barriers.
On the other hand, if state and local regulatory efforts continue to multiply rapidly, many algorithmic innovators will look to Congress for preemption. Some companies are already requesting a federal AI framework for this reason. That move could backfire for them and the nation by creating a single, heavy-handed approach to algorithmic policy issues.
The Biden Administration and Senate Majority Leader Chuck Schumer are already seeking sweeping new federal mandates for AI. Last fall, the White House floated an “AI Bill of Rights,” and Senator Schumer just recently called for a “SAFE Innovation Framework” that would be an “all-of-the-above approach” to AI regulation. These efforts would abandon America’s successful vision of permissionless innovation governance and move us toward comprehensive permission-slip regulation of AI.
States can lead the way toward a more prosperous, productive, and high-tech economy for the U.S. if they get their AI governance model right. The ALEC model resolution provides the right, pro-innovation foundation for the future.
The recent regulatory tradition has been to target individual big tech companies rather than to solve industry-wide issues. Laws forcing big tech to pay for local news, split storage, provide ever more granular consent forms, favor local businesses, etc., have been written in an extremely narrowly tailored way (e.g., “this reg shall only apply to companies with over $XXB in revenue and XX thousand employees in the ad tech, streaming, and e-commerce industries…”).
Paradoxically, this has left pretty much every government with an oversight role over tech unprepared for the rise of generative AI and competing social media apps from other countries. Will Canada try to make TikTok pay for media as well? What about OpenAI? While agencies got wrapped around the axle over individual actors, they missed the rise of the new era.
I welcome industry-wide guardrails, but, to the point of this post, the public sector needs far more internal capability-building even to understand what those guardrails should be. The real race now is whether public-sector agencies can modernize fast enough to set proper guardrails, rather than setting them wildly too narrow or not at all.