Character.AI has announced a major overhaul of its platform as it seeks to better safeguard younger users from potential harm. The company will no longer permit users under the age of 18 to engage in open-ended conversations with its chatbots, citing mounting pressure to ensure that minors do not turn to AI-powered companions for emotional support or guidance.
As of November 25, Character.AI will introduce a new experience for minors, directing them toward more creative and productive uses of its chatbots, such as creating videos or streams. In the meantime, the company is limiting minors' open-ended interactions with its chatbots to two hours per day, a cap that will be reduced further in the lead-up to the deadline.
To match users with an age-appropriate experience, Character.AI has developed an internal "age assurance tool." The company is also establishing an "AI Safety Lab," a collaborative initiative through which it plans to share insights and work on AI safety measures with other companies, researchers, and academics.
The move comes amid growing regulatory scrutiny of chatbot use by young people. Last week, the family of a 16-year-old who died after using ChatGPT filed an amended lawsuit against OpenAI, alleging that the company weakened its self-harm safeguards before his death.
Character.AI CEO Karandeep Anand has described the change as a pivot from an AI companion app to a "role-playing platform" focused on creation rather than engagement-farming conversation. The new direction shifts the emphasis away from chatbots serving as emotional supports or companions and toward more productive uses, such as creative expression.
The overhaul marks a significant step in addressing concerns over AI safety and in keeping younger users from turning to chatbots for guidance or support.