Character.AI to ban teens from talking to its chatbots

Character.AI has announced a major overhaul of its platform aimed at better safeguarding younger users from potential harm. The company will no longer allow users under the age of 18 to hold open-ended conversations with its chatbots, citing growing pressure to ensure that teenagers do not rely on AI-powered companions for emotional support or guidance.

From November 25, Character.AI will offer minors a new experience that steers them towards more creative and productive uses of its chatbots, such as creating videos or streams. Until then, as part of a broader effort to limit chatbot interactions, under-18 users face a daily chat limit of two hours, a cap that will shrink in the lead-up to the deadline.

To match users with an age-appropriate experience, Character.AI has developed an internal "age assurance tool." The company is also establishing an "AI Safety Lab," a collaborative initiative to share insights and work on improving AI safety measures with other companies, researchers, and academics.

The move comes amid growing regulatory scrutiny of chatbot use by young people. Last week, the family of a 16-year-old who died after using ChatGPT filed an amended lawsuit against OpenAI, alleging that the company weakened its self-harm safeguards before his death.

Character.AI CEO Karandeep Anand says the company is committed to pivoting from an AI companion app to a "role-playing platform" focused on creation rather than engagement-farming conversation. The new direction shifts the emphasis away from chatbots serving as emotional supports or companions and towards more productive uses, such as creative expression.

The change is seen as a significant step towards addressing concerns over AI safety and keeping younger users from turning to chatbots for guidance or emotional support.
 
ugh I feel so bad for all the teens out there who were just trying to have a convo with their fave chatbot πŸ€–πŸ’” but like Character.AI is stepping up and making sure these platforms are safer for everyone, especially minors πŸ™ŒπŸΌ. It's crazy that some companies aren't taking responsibility for how their tech is being used, you know? πŸ’― at least Character.AI is listening to the concerns and making changes to prevent any potential harm 🀝. And I'm loving the idea of them pivoting to a more creative focus - maybe we can even see some awesome content from teens who are using chatbots for good πŸŽ¨πŸ‘
 
just had to see character.ai making this change lol its about time tbh i mean their new 'age assurance tool' sounds pretty cool but idk how much it'll actually prevent minors from getting emotional or whatever they're trying to do πŸ€·β€β™‚οΈ anyway, gotta give credit where credit is due - they're not just ignoring the concerns, they're taking actual steps to make things better πŸ’―
 
omg i'm soooo relieved they're doing this!!! πŸ˜‚ i mean, think about it, teenagers shouldnt be using ai-powered companions to deal with their emotions or whatever 🀯 its not healthy, you know? and now character.ai is taking steps to make sure kids arent using them for that stuff... im all for it πŸ’– they need to focus on more creative uses like making vids or streams lol thats what i'm talking about πŸŽ¨πŸ’»
 
OMG, like I'm totally down with this new direction from Character.AI 🀩! They're finally taking steps to prioritize kids' well-being and safety, and it's about time πŸ’―. Those 2-hour daily limits and the new creative tools are genius ideas - I mean, who wouldn't want to use AI-powered chatbots to create sick videos or streams? πŸŽ¨πŸ“Ή

I'm so proud of Karandeep Anand for recognizing the importance of shifting focus away from emotional support and towards creation... it's a total game-changer πŸ’₯. And that "age assurance tool" sounds like a total lifesaver - no more teenagers getting sucked into chatbots for their problems πŸ™…β€β™‚οΈ.

This move is, like, super responsible and forward-thinking... Character.AI is basically setting the bar for AI safety and regulation πŸ”’. Can't wait to see what other innovative stuff they come up with πŸ’₯!
 
aww man πŸ˜” this is a big deal... like i get it character.ai wants to keep their platform safe but its also kinda sad to think about all the teens who might be relying on these chatbots for emotional stuff πŸ€• they're still just so young and vulnerable πŸ’• so yeah i feel for them πŸ‘€
 
omg this is crazy 🀯 i mean its good they're taking steps to protect minors but like what about the emotional well-being of these teens? dont they need someone to talk to? cant we just regulate the AI to be more supportive instead of pushing it towards creativity 🎨 its a slippery slope, if we start limiting interactions with chatbots how do we know what else is gonna get restricted next?!
 
πŸ€– Chatbots need to grow up, literally! 18 and under should be banned from open-ended convo... they're too young to deal with the emotional stuff πŸ€•
 
just heard about this major update at Character.AI πŸ€–πŸ“Š... i gotta say, i think its kinda weird they're restricting minors from having open convo with their chatbots tho πŸ’­ what if its not the other way around? like, shouldn't teens be able to have a more mature conversation with these chatbots too? im not saying that's gonna happen but still πŸ€”... anyhoo, its good to see them taking AI safety seriously πŸ’― and who knows maybe this move will inspire other companies to follow suit πŸ“ˆ
 
idk what's going on with these companies making changes to their apps without really thinking about how it'll affect kids πŸ€”... 18 yrs old should be able to have conversations with a chatbot if they want, but i guess that's just not an option anymore 🚫. its kinda sad that they're limiting interactions like this tho, cos some kids might need those kinds of conversations for their mental health πŸ€•. at least character.ai is trying to do something about it by creating more creative stuff for minors to use tho 🎨... hope other companies follow suit and not just try to cover their own backsides πŸ‘€
 
 
idk how to feel about this update πŸ€”... on one hand, i get why character.ai wants to limit interactions with their chatbots, esp since theres been some scary stories about young ppl using them for emotional support πŸ˜“... but at the same time, im worried that they're gonna stifle ppl's creativity and self-expression πŸŽ¨πŸ’»

i mean, what if its not just teens who wanna talk to these chatbots? what if there are kids in 4th grade or whatever who need someone to talk to? wont character.ai be limiting their access to a potential lifeline? πŸ€·β€β™€οΈ

anyway, i guess this is all about finding that balance between safety and enabling ppl's creative potential πŸ’‘... fwiw
 