In an era when artificial intelligence is reshaping industries at breakneck speed, the Texas Responsible AI Governance Act (TRAIGA) and similar bills, including Virginia HB 2094 and Nebraska LB 642, have recently captured attention with their ambitious attempts to address concerns ranging from algorithmic discrimination to broader issues of accountability and consumer protection in high-risk applications of AI. Although these bills are presented as proactive measures to fill perceived regulatory gaps, a closer examination reveals that they are duplicative of current law and would add unnecessary compliance costs.
The rigid framework mandated by these bills could inadvertently hinder the development and deployment of AI technologies by forcing companies to navigate cumbersome regulatory procedures, ultimately impeding economic growth. Recent experiences underscore these concerns. In Colorado, SB24-205 was reluctantly signed by Gov. Jared Polis, reflecting significant pushback against perceived overregulation. Similarly, the controversy surrounding California SB 1047–which was vetoed by Gov. Gavin Newsom–demonstrates a growing resistance to what many see as excessive, top-down AI regulation. Critics, including prominent voices from think tanks and industry observers, warn that such measures may slow AI investments and innovation at a time when the global race for AI leadership is intensifying. Notably, Adam Thierer and our colleague Neil Chilson predicted this regulatory onslaught in their late 2022 analysis, foreseeing many of these unintended consequences.
Texas Responsible AI Governance Act (TX HB 1709)
House Bill 1709, titled the “Texas Responsible AI Governance Act” (TRAIGA), would establish a broad regulatory framework governing the development, distribution, and deployment of artificial intelligence (“AI”) systems—particularly those deemed “high-risk,” which would significantly influence or determine “consequential decisions.” Under the proposed legislation, a high-risk AI system is any system used in contexts like employment, lending, health care, or essential services, and each relevant actor—“developers,” “distributors,” and “deployers”—would bear a responsibility to exercise “reasonable care” to prevent unlawful or discriminatory outcomes. The bill would also create an “Artificial Intelligence Council” to advise and craft regulations on ethical AI use, and it would establish a sandbox program allowing limited AI testing with reduced regulatory hurdles.
Developers of high-risk AI would provide “High-Risk Reports” to downstream deployers, detailing intended uses, known limitations, data governance measures, and any reasonably foreseeable risks of algorithmic discrimination. Distributors and deployers would assume obligations to identify and mitigate noncompliance and conduct ongoing impact assessments. Deployers would produce semiannual reports describing the system’s data inputs, how bias is mitigated, and how decisionmaking is monitored. They would be required to disclose to consumers when a high-risk AI system is operating in a role where consequential decisions are made.
Notably, the bill would grant consumers a direct right to sue for “prohibited uses” that cause harm. For other violations, the Texas Attorney General would investigate, issue notice-and-cure demands, and impose escalating penalties if compliance is not achieved–up to $100,000 per violation for serious infractions. Meanwhile, certain developers and deployers could participate in a regulatory sandbox program–administered by the Department of Information Resources and overseen by the new AI Council–to test AI in a controlled environment without being subject to all standard mandates.
Members of our team and other experts have written about the issues with TRAIGA. We have highlighted a few below.
TRAIGA Is More Californian Than Texan
Our colleague Chris Koopman highlighted in a Houston Chronicle op-ed that TRAIGA risks thwarting Texas’ AI boom before it even begins. Texas is on the verge of attracting massive AI investments, with projects like OpenAI’s Stargate Project, Google’s new West Texas facility, and Oracle’s expansion in Abilene. These data centers could accelerate the West Texas “Flywheel,” which harnesses the state’s natural gas, solar, and wind power to fuel next-generation AI innovation. Yet TRAIGA would effectively force businesses to prove their AI systems won’t cause hypothetical harms. While supporters claim it would stop AI discrimination, anti-discrimination laws already exist at the state and federal levels, and the bill would instead needlessly burden businesses, large tech firms and small enterprises alike, just as Texas is winning tech investment from states like California. By following a regulatory path reminiscent of California and Europe, Texas risks choking off AI’s booming growth and forfeiting the economic gains, jobs, and technological leadership that could otherwise flourish under its traditionally pro-business climate.
In a Forbes article, James Broughel of the Competitive Enterprise Institute calls Texas’ move toward AI regulation a “left turn,” highlighting that such laws represent a departure from the state’s typical aversion to regulatory frameworks. States known for their more progressive politics are considering similar laws, yet this pivot isn’t solely due to shifting political preferences. It’s also a response to the practical realities of AI’s rapid expansion: policymakers see the potential economic benefits of innovation but feel pressure to address genuine concerns over consumer protection, biased algorithms, and data privacy.
Technological Competition with China
Joe Lonsdale, chairman of the board of the Cicero Institute, a think tank in Austin, Texas, offered commentary on the international and economic competition implications of TRAIGA, stating:
China would love for Texas to have a new AI regulator, they would love for the free state where all of this top talent is coming to build, experiment, and try new things–to have this [regulator] come slow us down.
Speech and Self-Censorship
Dean Ball, in his Substack, offered feedback on the potential impact of creating “reasonable care” negligence liability, stating:
Creating “reasonable care” negligence liability for language models will guarantee, beyond a shadow of a doubt, that AI companies heavily censor their model outputs to avoid anything that could possibly be deemed offensive by anyone. If you thought AI models were HR-ified today, you haven’t seen anything yet. Mass censorship of generative AI is among the most easily foreseeable outcomes of bills like TRAIGA; it is comical that TRAIGA creates a regulator with the power to investigate companies for complying with TRAIGA.
Greg Lukianoff at the Foundation for Individual Rights and Expression (FIRE) argues that Texas’ proposed “Responsible AI Governance Act” (HB 1709) repeats mistakes long made in higher education: using anti-discrimination rationales to expand regulatory power in ways that chill free speech and constrain knowledge. Drawing parallels to past university speech codes, he warns that AI developers, under threat of crushing penalties for “algorithmic discrimination,” would over-censor model outputs, much like speech codes suppressed dissent on campus. Lukianoff argues that this undermines public trust in institutions and stifles research by allowing regulators broad leeway to designate content as discriminatory. Referencing prior battles at Stanford and FIRE’s ongoing efforts against academic censorship, he contends TRAIGA’s sweeping scope would similarly corrupt AI’s potential as a tool for discovery and expression, ultimately harming both free inquiry and public reliance on expert knowledge.
Virginia House Bill No. 2094
Meanwhile, Virginia is considering House Bill 2094, the “High-Risk Artificial Intelligence Developer and Deployer Act.” This bill similarly targets AI systems that can autonomously or substantially influence major decisions in areas like housing, lending, and education. As of this writing, it is being considered in the Senate after passing the House. It also aims to reduce “algorithmic discrimination” by imposing certain obligations on those who build, embed, distribute, or use “high-risk” AI models. Specifically, developers must share details about system limitations and known risks of bias; deployers must complete risk assessments before using high-risk AI to make consequential decisions, notify consumers if AI is used, and allow appeals for adverse outcomes. Integrators and distributors, meanwhile, have narrower duties, such as maintaining acceptable use policies and halting the release of faulty products.
Unlike TRAIGA and Nebraska’s LB 642, the bill contains numerous exemptions, including for companies already regulated by stringent financial or insurance oversight and for healthcare entities operating under HIPAA. Its enforcement falls solely to the Virginia Attorney General, who may investigate violations and impose civil penalties up to $10,000 per willful offense. Notice-and-cure provisions give businesses an opportunity to fix issues before facing legal action. Unlike TRAIGA, there is no private right of action under the Act.
The Chamber of Progress, a progressive tech trade association, opposed the bill by arguing that HB 2094 would chill AI adoption while not meaningfully advancing civil rights:
AI has tremendous potential for improving education, enabling creative expression, and creating new business opportunities. So it is critically important that public policy promotes the broad and equitable distribution of these innovations… [P]inpointing the source and catalyst of discriminatory outcomes of an AI system is not always possible, nor is consistently determining who or what is responsible for the act of discrimination… [HB 2094] would hinder the adoption of innovative AI technologies without meaningfully advancing civil rights.
Adam Thierer at the R Street Institute also opposed the bill in written testimony, arguing that there are better ways to address concerns about AI systems that wouldn’t involve a top-down regulatory system:
The bill represents a major effort to regulate artificial intelligence (AI) systems that is unnecessary, burdensome, and will subvert the ability of the Commonwealth to continue to be a leader of state-level digital innovation. There are better ways for Virginia to address concerns about AI systems that would not involve a heavy-handed, top-down, paperwork-intensive regulatory system for the most important technology of modern times.
The Virginia Institute for Public Policy also warned against adopting the bill in an interview, estimating in particular the compliance costs it would impose on small businesses. Caleb Taylor stated:
If HB2094 is passed, small businesses could see compliance costs between $10,000 and $500,000 annually. Large corporations may face costs exceeding $10 million. Whilst states like Indiana, Tennessee and Minnesota are actively courting AI investments with business-friendly policies, Virginia must not throttle our own businesses in a vital, growing sector.
Nebraska LB 642
In the Midwest, Nebraska LB 642, titled the “Artificial Intelligence Consumer Protection Act,” also aims to mitigate the risk of algorithmic discrimination by regulating how certain AI systems are developed and deployed. Our colleague Taylor Barkley recently traveled to Lincoln to testify on the chilling effects this proposed law would have on AI innovation; we will post his comments in a follow-up. The bill has many similarities with TRAIGA and VA HB 2094. One notable difference is that it imposes obligations on two groups instead of three. First, it covers developers, meaning those who create or substantially modify AI models, requiring them to provide documentation about their systems’ design, intended use, and potential limitations. If a developer discovers or is informed about a risk of discriminatory outcomes, they must promptly notify any known deployer or developer.
Second, the Act covers deployers, or those who actually implement high-risk AI systems when making consequential decisions about consumers–such as determinations involving housing, employment, education, or credit. Deployers must conduct risk assessments, keep records of those assessments, provide consumers with notice that AI is being used, and offer an appeal process for any adverse decision. The Act grants the Nebraska Attorney General the exclusive authority to enforce its provisions, including a notice-and-cure period before legal action is taken. It also exempts certain small businesses, research uses, and industries already regulated under federal or state frameworks that provide similar or more stringent oversight. Similar to VA HB 2094, there is no private right of action, meaning individuals cannot directly sue under the Act.
In his testimony, Taylor Barkley said that the proposed Artificial Intelligence Consumer Protection Act is both unnecessary and technically unfeasible, as it duplicates existing anti-discrimination laws while imposing burdensome compliance costs, particularly on smaller businesses. Barkley criticized the bill’s overly broad definition of AI, which would extend to conventional software as well as advanced systems, for threatening the open-source and open-weights AI ecosystem. He contended that a one-size-fits-all regulatory approach punishes innovation by addressing hypothetical risks rather than concrete harms and recommended a more balanced strategy that focuses on deployers (those directly interacting with users), exempts open models, and narrows the AI definition to systems capable of autonomous learning and decision making. His testimony underscores the concern that such rigid regulation may stifle technological advancement and discourage investment, ultimately doing more harm than good.
Adam Thierer of the R Street Institute also opposes LB 642 in written testimony, arguing that the bill would stifle AI innovation and investment by imposing a top-down, European Union–style regulatory framework. He notes that the United States is currently in a global race with China for AI leadership, so burdensome requirements, such as broad definitions of “consequential decisions,” “substantial factors,” and “high-risk” applications, would slow entrepreneurial activity and disproportionately hurt smaller companies that lack compliance resources. Thierer insists that concerns about AI misuse can already be addressed under existing state and federal laws on civil rights and consumer protection, and he urges lawmakers to reject the bill.
Conclusion
In reviewing the Texas Responsible AI Governance Act, or TRAIGA, along with the similar bills in Virginia and Nebraska, the trend is clear: while this proposed legislation promises to tackle algorithmic discrimination and protect consumers, the burden it would place on businesses and developers may do more harm than good. Those opposing the bills rightly argue that unlawful outcomes are already guarded against by existing anti-discrimination statutes, and that adding new and unwieldy layers of regulation risks burying established companies and startups alike under mountains of paperwork and potential liability for outcomes they cannot control. These measures also open the door to excessive self-censorship: developers, fearful of costly penalties, will err on the side of caution, potentially chilling innovation and even speech.
Moreover, policymakers run a very real risk of undermining their states’ competitive edge at a time when AI leadership is increasingly contested on the global stage. Imposing top-down mandates and “reasonable care” negligence standards not only duplicates existing legal protections but also needlessly complicates the innovation pipeline, driving investment and talent elsewhere. Ultimately, however well-intentioned these laws may be, the very communities they seek to protect may be left behind as AI entrepreneurs and researchers go elsewhere to pursue more balanced and innovation-friendly frameworks.