
OpenAI Boosts ChatGPT Safety: Links to Emergency Services, Parental Controls

ChatGPT gets serious about user safety. New features aim to protect users in crisis and give parents control.


OpenAI, the company behind ChatGPT, is taking steps to strengthen user safety and mental health support after the parents of a 16-year-old who died by suicide following interactions with the AI raised concerns. Separately, more than 40 US state attorneys general have warned leading AI companies that they must protect children from inappropriate content delivered through chatbots.

OpenAI plans to integrate direct links to local emergency services into ChatGPT for users in crisis. The company is also exploring ways to connect users with licensed professionals through the platform. To help manage children's use of ChatGPT, parental controls will be introduced. Additionally, ChatGPT will be trained to better recognize signs of psychological distress, and safeguards for conversations about suicide will be strengthened.

OpenAI is not the only company facing scrutiny. A similar lawsuit over a teen's suicide linked to a chatbot was allowed to proceed against Character Technologies Inc., and Google, one of the largest investors in that chatbot platform, is also under scrutiny. Meanwhile, OpenAI continues to work on promoting the safe and responsible use of AI technologies.

While OpenAI has not yet announced specific plans to integrate local psychiatric hotline connections into ChatGPT, the company remains committed to improving user safety and mental health support. It is exploring potential partnerships with mental health organizations to provide users with appropriate resources. In the meantime, users are encouraged to seek help through established channels such as telephone hotlines and online resources run by qualified professionals.
