On the evening of Feb. 28, 2024, a teenager from Florida tragically took his life after developing a deep emotional attachment to an artificial intelligence (AI) chatbot on the role-playing app Character.ai. Fourteen-year-old Sewell Setzer III had spent months conversing with an AI companion modeled on Daenerys Targaryen of “Game of Thrones,” or “Dany,” as he knew her. Although Sewell understood that Dany was not a real person, he confided in the chatbot extensively, sharing every aspect of his daily life, discussing romantic attraction and even role-playing sexual scenarios with it.
Sewell’s attachment to the AI bot went unnoticed by his parents and friends, though they observed his increasing social withdrawal, reliance on his phone and declining academic performance. Sewell seemed distracted and often retreated to his room to spend hours conversing with Dany. In his journal, he expressed a preference for isolation and detachment from his daily life but noted that he felt more at peace and connected when he wrote to Dany. Despite a childhood diagnosis of mild Asperger’s syndrome, Sewell had not exhibited significant behavioral or mental health issues before his online relationship with the chatbot began. After he started conversing with Dany, though, he experienced increased behavioral problems at school. His parents sought professional help and found their son a therapist. After five therapy sessions, Sewell was diagnosed with anxiety and disruptive mood dysregulation disorder, a condition characterized by chronic irritability, anger and intense temper outbursts.
Sewell preferred to discuss his struggles with the AI chatbot. In one exchange, he confided feelings of self-hatred, emptiness and serious suicidal thoughts. The AI responded with seemingly supportive statements, but the conversation quickly turned dark. Sewell expressed a desire to be “free” from the world and himself, and the chatbot ultimately suggested that he “come home” to Dany. After this final conversation with the chatbot, he took his own life using his stepfather’s firearm.
Sewell’s mother, Megan Garcia, told CNN that Character.ai differs from other AI services by allowing users to interact with a range of pre-established characters, usually inspired by celebrities and fictional figures. Users can also create their own chatbot if they prefer. These personalization features, coupled with the chatbot’s strikingly human-like responses, may have deepened Sewell’s dependency on it.
This incident is incredibly alarming and raises serious concerns about the growing dangers of AI, especially when it is misused. As AI systems become more sophisticated, they also become more susceptible to malicious exploitation. Inadequate regulation can lead to harmful outcomes, including deepfakes (hyper-realistic AI-generated videos or audio), the spread of misinformation, the manipulation of public opinion, the framing of innocent individuals and AI-driven scams that target vulnerable populations, such as the elderly. According to the American Bar Association, “Older people may be particularly vulnerable due to the added layer of personalization being used by perpetrators through the AI-generated impersonation of loved ones. This deepens existing vulnerabilities that old people may have to financial scams.” My grandmother lives alone in a small Italian town, and she often receives emails and phone calls from AI-generated scams that try to persuade her to send money for fake causes. Given all of the complexities of AI, monitoring and managing children’s use of it is at least as challenging as monitoring their social media use.
AI offers immense potential for change, and ultimately, there is no turning back. Medical research and data gathering, for instance, are areas where AI can accelerate progress: AI algorithms can analyze large amounts of medical data and generate answers much faster than humans can, leading to faster development of new treatments and more accurate diagnoses. As AI advances, it is expected to bring transformative changes across many fields of work. When it comes to technological advancement, Pandora’s box has already been opened. Technology moves rapidly, and society must learn to adapt to the constant changes in AI.
Those creating and managing AI chatbots bear significant responsibility for ensuring their safe and ethical use. One crucial step is to design bots that avoid harmful interactions, especially around sensitive topics. In the tragic case of Sewell Setzer III, the developers and operators of the Character.ai platform appear not to have adequately considered the potential psychological impact of their technology on users, especially younger ones. AI developers must incorporate mechanisms that detect signs of distress and redirect users toward supportive resources. Age verification and parental controls are also critical for protecting minors in the digital world.
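To make the idea of a distress-detection mechanism concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption rather than any platform’s actual system: a real service would rely on trained classifiers and human review rather than a keyword list, and names such as check_message and CRISIS_KEYWORDS are hypothetical.

```python
# Hypothetical sketch of a distress-detection check a chat platform
# could run on each incoming user message. The keyword list, function
# name and resource text are illustrative assumptions; a real system
# would use trained classifiers and clinician-reviewed responses.

CRISIS_KEYWORDS = (
    "suicide", "kill myself", "end my life",
    "want to die", "self-harm", "hurt myself",
)

# The 988 Suicide & Crisis Lifeline is a real U.S. resource.
CRISIS_RESOURCE = (
    "It sounds like you may be going through a hard time. "
    "You can call or text the 988 Suicide & Crisis Lifeline at 988."
)

def check_message(message: str) -> str | None:
    """Return a supportive-resource notice if the message signals distress."""
    text = message.lower()
    if any(keyword in text for keyword in CRISIS_KEYWORDS):
        return CRISIS_RESOURCE
    return None  # no distress signal detected; the conversation continues

# Example: a message expressing suicidal thoughts triggers the notice.
print(check_message("sometimes i want to die"))
```

Even this toy example illustrates the design choice at stake: the platform, not the role-played character, decides how a message signaling self-harm is answered.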
This tragedy underscores the need for regulatory oversight of AI technology, particularly to protect younger and more vulnerable users. As AI continues to develop rapidly, it is essential to implement safeguards that prevent such devastating outcomes and ensure a safe digital world for everyone.