We’ve been very involved in helping policymakers, the media, and stakeholders understand the benefits of a state AI regulatory pause. Just this evening, Ed Longe at the James Madison Institute and I had an op-ed published in the Washington Times on why all the voices in the states opposing the pause actually make the point for the pause.
All this is why I’m pleased to post this guest article from Tom Pandolfi, currently Tech and Innovation Policy Intern at the Libertas Institute. As readers will know, the One Big Beautiful Bill is currently in the Senate. Senators are considering a pause on state AI regulations if those states receive specific federal funds. This is an eminently reasonable idea.
In fact, we recommended a similar idea in our comments to the White House back in March. The reasons are simple: it’s pro-innovation and pro-economic growth. Without it, AI companies, especially the small ones, could drown under a wave of conflicting and confusing compliance obligations from multiple states. There has been an unprecedented wave of proposed AI regulations from states large and small, liberal and conservative, and we could lose many of the benefits of AI in this regulatory morass. The short post below outlines one of the many reasons for Congress to step in and pause what states are doing: medical advances, and thus human lives, could be at risk if compliance costs slow AI development.
–Taylor
The Medical Consequences of AI Overregulation
As debates over the state regulatory moratorium on artificial intelligence intensify, the stakes for healthcare’s future continue to rise. In the world of medicine, breakthrough AI technology is driving the next revolution, making it an ethical imperative to protect the development of life-saving devices.
Today, medical device developers are creating new use cases for machine learning faster than the government can regulate them. Thanks to their work, AI software is already twice as accurate at reading brain scans for stroke patients. AI can even detect bone fractures that human readers miss, and AI-enabled medical devices can better identify early signs of sepsis.
With these present and future innovations, AI is on pace to save many lives and alleviate needless suffering within our lifetime.
State legislators are responding with myriad new laws in a heavy-handed attempt to steer progress. This year alone, they have introduced 1,000 new AI-related bills across all 50 states.
Many of the proposed general-purpose AI regulations call for restricting high-risk models, which states like Colorado define as any system that “makes, or is a substantial factor in making, a consequential decision.” (Colorado is the only state to have passed such a bill, and the governor has set up a task force to fix it, a process that is still ongoing.) Such restrictions are meant to be broad, targeting everything from legal services to financial aid. When applied to healthcare, however, they overlap with existing FDA classifications for medical devices, which increasingly use AI. And while Colorado’s bill includes broad exemptions for federally regulated products, bills in states like New York do not. As a result, FDA-approved devices can face new state-level hurdles, such as bias audits, impact assessments, and mandatory human reviews, which can complicate deployment and increase compliance costs.
To comply with varying state legislation, future developers of AI-enabled medical devices may need independent audits for New York and the appropriate consumer disclosures for Utah, while deployment in Colorado could require yearly comprehensive impact assessments. To keep their products available nationwide, developers have to jump through ever more hoops, and could see their compliance costs soar as a result.
Each state is only trying to protect its citizens, but their collective decisions carry unintended consequences for medical AI developers. The idea that states should serve as “laboratories of democracy” implies that whatever is being tested locally has a control arm in the rest of the nation. If every state is a laboratory, however, there is no baseline against which to compare results. For developers, this isn’t experimentation; it’s chaos.
As states diverge in their policies, they risk slowing the development, access, and deployment of FDA-approved medical devices under the guise of fairness and transparency.
Even with proper healthcare exemptions and deference to the FDA, advances in medical AI are often downstream of broader developments in machine learning. Just as the game-playing model AlphaGo influenced AlphaFold, whose creators won the 2024 Nobel Prize in Chemistry and whose predictions have since been used in Alzheimer's and cancer research, any slowdown in the pace of artificial intelligence development can indirectly affect medical AI breakthroughs.
Because AI software is improving at breakneck speed, today’s models are the worst they will ever be. As models improve exponentially and their costs drop drastically, today’s bleeding edge becomes obsolete faster than ever. Chilling innovation now would deprive future patients of more capable models, with regulatory lag keeping older, less capable ones on the market longer. Those patients pay the cost of overregulation, as compliance hurdles risk slowing the development of potential cures, restricting access to the latest innovations, throttling R&D, and, most importantly, creating missed opportunities to save lives.
As autonomous devices become more difficult to develop, those living in medical deserts, who already depend on strained and overworked doctors, are likely to be hit hardest. Higher development costs also worsen outcomes for another population: patients with rare diseases. More regulations and their accompanying compliance costs make the already difficult economies of scale in smaller patient populations even harder to achieve.
So, will the benefits of increased regulation outweigh these costs to patients? That seems difficult to believe. Preemptive regulation, by its nature, asks legislators to predict the future based on current risks and opportunities. But two in three doctors already use AI and recognize its potential benefits for patient care. The opportunities are all around us. Skeptics rightly warn of future risks and potential harms, which are very real, but not as real as the lives already being saved.
The federal government recognizes these same risks and opportunities, yet takes a pro-innovation approach to AI-enabled medical devices. The FDA has even worked to accelerate and improve the approval process of these devices under both administrations to ensure patients safely have access to the latest in AI innovations.
Congress’s AI moratorium is just the latest step in ensuring that developers can keep building life-saving AI models and devices in the United States. With an AI-friendly FDA and a majority of doctors already using the technology, a state-by-state regulatory patchwork is now becoming one of the largest obstacles standing in the way of medical progress.
A few of our latest resources on the AI State Law Pause: