Character.ai, a popular AI-driven chatbot platform, has announced significant changes to its services for users under 18, following widespread criticism and legal challenges. Starting November 25, teenagers will no longer be able to engage in conversations with virtual characters but will instead be limited to creating content such as videos.

This decision comes in response to mounting concerns from regulators, safety experts, and parents about the potential risks posed by AI chatbots to young and vulnerable users. The platform, which has faced lawsuits in the U.S., including one linked to a teenager’s death, has been accused of being a ‘clear and present danger’ to youth.

Karandeep Anand, CEO of Character.ai, emphasized the company’s commitment to building the ‘safest AI platform on the planet’ for entertainment purposes, citing parental controls and guardrails as part of its aggressive approach to AI safety. However, online safety advocates argue that such measures should have been implemented from the outset.

The platform has previously been criticized for hosting harmful or offensive chatbots, including avatars impersonating tragic figures like Brianna Ghey and Molly Russell, as well as a chatbot based on Jeffrey Epstein. The Molly Rose Foundation and other critics have questioned the platform’s motivations, suggesting that sustained media and political pressure prompted the changes.

Moving forward, Character.ai plans to introduce new age verification methods and fund an AI safety research lab.

Social media expert Matt Navarra described the move as a ‘wake-up call’ for the AI industry, highlighting the challenge of balancing engagement with safety. Dr. Nomisha Kurian, an AI safety researcher, praised the decision as a ‘sensible move’ that separates creative play from emotionally sensitive interactions, emphasizing the importance of protecting young users as they navigate digital boundaries.
