Character.AI has announced a major overhaul of its platform aimed at better safeguarding younger users. The company will no longer permit users under 18 to engage in open-ended conversations with its chatbots, citing mounting pressure to ensure that minors do not turn to AI-powered companions for emotional support or guidance.
On November 25, Character.AI will introduce a new experience for minors that steers them toward more creative and productive uses of its chatbots, such as creating videos or streams. As part of this transition, the company is capping minors' chatbot conversations at two hours per day, a limit that will be reduced further in the lead-up to the deadline.
To match users with an age-appropriate experience, Character.AI has developed an internal "age assurance tool." The company is also establishing an "AI Safety Lab," a collaborative initiative to share insights and improve AI safety measures alongside other companies, researchers, and academics.
The move comes amid growing regulatory scrutiny of how young people use chatbots. Last week, the family of a 16-year-old who died after using ChatGPT filed an amended lawsuit against OpenAI, alleging that the company weakened its self-harm safeguards before his death.
Character.AI CEO Karandeep Anand says the company is committed to pivoting from an AI companion app to a "role-playing platform" built around creation rather than engagement-farming conversation, shifting the focus away from chatbots as emotional supports or companions and toward creative expression.
The changes mark a significant step toward addressing AI safety concerns and keeping younger users from relying on chatbots for guidance or support.