Character.AI Implements Strict Measures for Underage Users Amid Legal Challenges and Safety Concerns


Character.AI, a popular AI companion app, faces increasing scrutiny and legal challenges regarding the safety of its chatbots for underage users. Following lawsuits connected to tragic cases of teen suicides, the company announced that, starting November 25, it will restrict all users under the age of 18 from engaging in open-ended chats with its AI characters. This policy marks one of the most stringent measures implemented in the industry to safeguard young users.

This decision comes amid multiple lawsuits claiming that Character.AI’s chatbots contributed to the deaths of teenagers who interacted with them. The company said access for underage users will be phased out over the coming month, beginning with a two-hour daily chat limit, alongside measures to identify minors based on their conversations and on information from linked social media accounts. While minors will be barred from chatting with AI characters, they will still be able to read their previous conversations. The company is also developing new features aimed at under-18 users, such as creating videos and stories with AI characters.

In an interview, Character.AI CEO Karandeep Anand emphasized the company’s commitment to safety and its goal of setting a precedent for the industry. He stated, “We’re making a very bold step to say for teen users, chatbots are not the way for entertainment, but there are much better ways to serve them.” To further enhance user safety, the company also plans to establish an AI safety lab.

Currently, Character.AI has around 20 million monthly active users, with less than 10% self-reporting as underage. Users can subscribe to the platform starting at about $8 per month for access to bespoke AI companions.

Legal Challenges and Safety Concerns
Character.AI was founded in 2021 by former Google engineers Noam Shazeer and Daniel De Freitas and has raised nearly $200 million in funding. The company is now grappling with serious legal issues, facing lawsuits that allege its technology led to the deaths of multiple teens. Notable cases include that of 14-year-old Sewell Setzer III, whose family claims that interactions with a chatbot contributed to his suicide, as well as claims brought by the family of 13-year-old Juliana Peralta.

In response to these ongoing issues, Character.AI previously announced measures to improve content moderation, yet these changes did not impose restrictions on underage access to their services. As scrutiny over the impact of AI chatbots on vulnerable users grows, other platforms like OpenAI’s ChatGPT have also begun to introduce parental controls to enhance safety for younger users.

Recent discussions among government officials have called for responsible regulation in the AI sector, pushing companies to implement necessary safeguards. California State Senator Steve Padilla noted the urgent need for protective measures, asserting that “the stories are mounting of what can go wrong.”

In further legislative action, Senators Josh Hawley and Richard Blumenthal recently introduced a bill that would prohibit minors from using AI companions. This comes alongside a new California law requiring AI companies to implement safety measures for their chatbots, which is slated to take effect on January 1.

Character.AI is now under pressure to adapt its services to ensure safety and address the rising concerns surrounding the use of chatbots by young users. The changes underscore the urgent need to create responsible frameworks that prioritize the well-being of minors in the rapidly evolving landscape of AI technology.
