AI Regulation Advances: Utah Enacts New Law Concerning Chatbots

Utah has concluded its brief legislative session, cementing its position as an innovator in U.S. tech policy. Governor Cox recently signed multiple bills governing generative AI systems into law. These include SB 332 and SB 226, which modify Utah's 2024 AI Policy Act, and HB 452, which regulates mental health chatbots.

Utah has taken a significant step forward in regulating the use of Artificial Intelligence (AI) in mental health services, with the implementation of new laws and amendments to the Artificial Intelligence Policy Act (AIPA).

The key updates, as outlined in HB 452, focus on ensuring transparency, safety standards, and consumer protections in the use of AI mental health products. Operators of mental health chatbots must explicitly disclose that these chatbots are not human, ensuring users are aware they are interacting with AI rather than a licensed therapist.

AI mental health products may pose no greater risk to users than human therapists, and operators must involve licensed therapists in chatbot development to ensure safety and efficacy. Because AI products have no licensing standards akin to those governing human practitioners, the law instead lays out best practices, including commitments to safety plans and agreements to self-cure issues if a chatbot operates outside its intended domain.

The state recognizes the need to balance innovation with consumer protections. For example, AI companion services like ElizaChat have entered regulatory agreements that include data-sharing and safety commitments while allowing 30 days to fix any issues that arise.

HB 452 also sets forth data usage and transparency provisions similar to other healthcare-related AI regulations, requiring regulated healthcare professionals to disclose their use of generative AI. The law further prohibits suppliers from advertising products or services during user interactions unless the advertising is explicitly disclosed.

Measures to prevent discriminatory treatment of users must also be included in the supplier's written policy, which must outline processes for regular testing and review of chatbot performance. Separately, SB 226 defines "high-risk" interactions as those in which a generative AI system collects sensitive personal information and involves significant decision-making, such as in financial, legal, medical, and mental health contexts.

The amendments extend the AIPA's expiration date by more than two years, from May 7, 2025, to July 2027. SB 226 also narrows the law's scope, limiting generative AI disclosure requirements to instances when a consumer or supplier asks directly, or to "high-risk" interactions.

The AI Office has prioritized assessing the role of AI-driven mental health chatbots in licensed medical practice. Mental health chatbots are defined as AI technologies that engage in conversations that a reasonable person would believe can provide mental health therapy.

Utah's approach to AI regulation may offer lessons and influence legislators in other states. A chart detailing the key elements of these new laws has also been published, providing a comprehensive overview for stakeholders and the public.

Suppliers have an affirmative defense if they maintain proper documentation and develop a detailed policy outlining key safeguards. They are also prohibited from selling or sharing individually identifiable health information gathered from users.

The regulatory sandbox program and the Office of Artificial Intelligence Policy were established as part of Utah's AIPA. The AIPA was originally set to repeal automatically on May 7, 2025; with the extension, its provisions now remain in effect until July 2027.

Moreover, the amendments do not change the requirement that entities in regulated occupations using consumer-facing generative AI to interact with individuals must disclose that those individuals are interacting with generative AI, not a human.

The AI Office is also focusing on challenges related to deepfakes and AI-generated videos, looking into ways to help users verify authenticity to prevent deceptive uses of AI technology.

In summary, HB 452 strengthens Utah’s regulatory framework for AI in mental health by mandating transparency, safety standards comparable to human therapists, involvement of professionals in chatbot development, and balancing consumer protections with innovation needs. It requires clear user disclosures, safety commitments from AI providers, and safeguards against misuse, especially in sensitive mental health contexts.

  1. Under the new legislation in Utah, AI mental health products must meet standards of safety and efficacy equivalent to those of human therapists.
  2. Operators of mental health chatbots are now required to disclose that these AI chatbots are not human, maintaining transparency with users.
  3. HB 452 mandates best practices for AI mental health products, given the absence of licensing standards for AI products.
  4. AI companion services, such as ElizaChat, have entered into regulatory agreements that include data-sharing and safety commitments.
  5. HB 452 also includes provisions on data usage transparency, requiring regulated healthcare professionals to disclose the use of generative AI, similar to other healthcare-related AI regulations.
  6. Measures to prevent discriminatory treatment of users have been incorporated into the AI policy, with regular testing and review of chatbot performance being an essential component.
  7. The AI Office in Utah is focusing on addressing challenges related to deepfakes and AI-generated videos, seeking ways to help users verify authenticity and prevent deceptive uses of AI technology.
