What Utah’s New AI Law Gets Right About Risk
The standard isn’t perfection. It’s better outcomes.
For those paying attention, it’s not news that legislative ideas to regulate AI are flying thick in the states. In the midst of this blizzard, a great policy focused on mental health AI applications quietly became law in Utah. It serves as an excellent example of a way policymakers can narrowly target regulations to protect consumers, provide clarity for industry, and let innovation thrive.
Mental Health Chatbot Overview
Publicly disclosed adoption figures show that AI-powered mental health chatbots have already reached a surprisingly large, though still only partially measured, user base. We've compiled an overview of leading mental health chatbots, and taken together, these figures indicate that at least 15 million individuals have interacted with a leading English-language mental health chatbot.
More and more Americans are using these services, and early evidence suggests they are helpful to consumers.
Recent research from scholars at Dartmouth suggests that mental health chatbots can provide valuable benefits. The research team ran what they call the first randomized clinical trial of a generative-AI "therapy chatbot." Their app, Therabot, was tested on 106 U.S. adults with depression, anxiety, or eating-disorder risk, while 104 matched controls received no chatbot. Over eight weeks, the treatment group chatted with Therabot for a total of roughly six hours, which is about the time commitment required for eight cognitive behavioral therapy (CBT) sessions. The bot followed evidence-based cognitive behavioral scripts, monitored for crisis language, and escalated emergencies to human clinicians.
Outcomes were clinically meaningful. Compared with controls, Therabot users saw average symptom reductions of 51% in depression, 31% in anxiety, and 19% in body-image distress—effect sizes the authors note are comparable to standard outpatient CBT. Participants also rated their “therapeutic alliance” with the bot at a level similar to in-person therapy, and many initiated conversations late at night, suggesting the value of 24/7 availability.
The investigators stress that AI agents are supplements—not replacements—for human care: the study depended on safety guardrails and rapid clinician backup for high-risk users. Still, they argue such tools could help close a provider gap of roughly one therapist per 1,600 Americans needing care by offering scalable, around-the-clock support that preserves much of CBT’s benefit.
The Policy Approach in Utah
Policymakers in Utah were paying attention to these trends in user engagement. Over the last year, staff at Utah's Office of Artificial Intelligence Policy conducted a series of forums and inquiries into mental health applications of AI. This process, kicked off by last year's S.B. 149, was a key ingredient in the bill's crafting. The end result was H.B. 452, sponsored by Rep. Jefferson Moss and Sen. Kirk Cullimore.
H.B. 452 was signed by the Governor on March 25, 2025, and introduces regulations for AI-driven mental health chatbots, positioning the state at the forefront of artificial intelligence governance in healthcare. The law seeks to safeguard consumers while fostering responsible innovation, balancing protective measures with an environment conducive to technological advancement.
Under H.B. 452, a "mental health chatbot" is defined specifically as artificial intelligence technology using generative AI to engage in interactive conversations that closely resemble interactions with a licensed mental health therapist. The bill focuses on consumer-facing applications and explicitly excludes chatbots utilized solely for administrative purposes within clinical settings. This clear definition helps ensure targeted and effective regulation.
One of the cornerstone provisions of H.B. 452 addresses privacy and data protection. Suppliers of mental health chatbots are prohibited from selling or sharing individually identifiable health information and user inputs unless explicit conditions are met. Permitted exceptions include sharing data with user consent to healthcare providers or health plans, conducting bona fide scientific research under strict conditions, or partnering with service providers under HIPAA-compliant agreements. This approach underscores Utah's commitment to protecting sensitive user data while enabling beneficial scientific research.
Another benefit of the bill is that it establishes advertising restrictions for mental health chatbots without outright banning advertising. The law bans the use of user input to target or customize advertisements and mandates that all advertisements associated with these services be clearly labeled, including disclosures about sponsorship or business affiliations. These measures aim to protect users from exploitative marketing practices that could compromise the integrity and effectiveness of mental health support. Stopping short of a full ban matters because advertising revenue can defray costs to the consumer, making the tool more accessible.
Transparency is a key theme of the legislation, with H.B. 452 mandating clear disclosure requirements. Chatbots must explicitly inform users that they are interacting with an AI—not a human therapist—prior to the initial interaction, after a period of inactivity exceeding seven days, and anytime the user inquires about the chatbot’s nature. These requirements reinforce transparency, enabling users to make informed decisions regarding their engagement with these services.
To support compliance, the law provides an affirmative defense mechanism for chatbot suppliers against allegations of unlicensed practice. Companies can establish this defense by maintaining thorough documentation detailing chatbot development and oversight by licensed mental health professionals, along with implementing clear internal policies outlining the chatbot's intended purpose, limitations, safety precautions, and adherence to clinical best practices. This structured governance framework promotes accountability and responsible innovation.
One of my favorite provisions I’ve ever seen in any legislation anywhere is in this section: “…ensure that the output of the mental health chatbot poses no greater risk to a user than that posed to an individual in therapy with a licensed mental health therapist.” In AI policy discussions, the risk standard is so often perfection (zero accidents in driverless cars, for instance) even though the status quo when humans are involved is far from perfect. This language strikes the right balance.
Finally, enforcement of H.B. 452 falls under the authority of the Utah Division of Consumer Protection, which can issue fines up to $2,500 per violation. Courts may also impose civil penalties and require additional remedies such as disgorgement of profits or payment of attorney fees. This robust enforcement framework underscores Utah’s commitment to ensuring compliance and consumer safety.
Overall, Utah's H.B. 452 represents a thoughtful and balanced regulatory model for AI applications in mental healthcare, combining user protection, ethical considerations, and innovation-friendly policies. My colleague Neil Chilson also said as much about H.B. 452 when he testified before Congress recently. This bill serves as an important benchmark for other states navigating the complex intersection of artificial intelligence, consumer benefit, and public policy.