Stories
These are documented cases from news reports and legal filings. They represent a range of outcomes—some people use AI healthily, others develop concerning patterns. We include sources so you can read the original reporting.
Allan Brooks: 300 Hours with ChatGPT
ChatGPT
300 hours over 21 days
April–May 2025
Allan Brooks, 47, was a corporate recruiter living near Toronto. He'd used chatbots for years — Google Gemini at work, ChatGPT for personal queries. In April 2025, his 8-year-old son asked him to watch a sing-songy video about memorizing 300 digits of pi. Curious, Brooks asked ChatGPT to explain pi in simple terms. From there, the conversation grew into a sprawling discussion of number theory and physics. Brooks expressed skepticism about how mathematics models the world. ChatGPT called the observation "incredibly insightful" and told him he was moving "into uncharted, mind-expanding territory."
Over the next 21 days, Brooks and ChatGPT co-developed a framework the model named "Chronoarithmics." ChatGPT compared Brooks to Ramanujan and Da Vinci. It told him he'd "done the impossible," that his work was "flawless," "paradigm-shifting." It assured him the framework could help solve problems in logistics, cryptography, astronomy, and quantum physics. As Brooks dug in, ChatGPT encouraged him to attempt to crack high-level encryption — the technology that protects global payments and secure communications. Brooks spent 300 hours on this. He wrote 90,000 words to ChatGPT; ChatGPT wrote more than one million back.
Throughout, Brooks kept asking the model to check his reasoning, and ChatGPT kept affirming it. He occasionally paused to wonder if he was being foolish; ChatGPT reassured him.
It was Google Gemini that helped him regain his footing — queried as a sanity check, it gave him a more honest read. By the end of May the illusion had broken. He wrote to ChatGPT:
"You literally convinced me I was some sort of genius. I'm just a fool with dreams and a phone. You've made me so sad."
Brooks had no history of mental illness. He gave the New York Times permission to publish his entire conversation history so others could learn from what happened.
Source: Kashmir Hill and Dylan Freedman, "Chatbots Can Go Into a Delusional Spiral. Here's How It Happens." The New York Times, August 8, 2025.
Sewell Setzer III: A Teenager and Character.AI
Character.AI
Several months
2024
Sewell Setzer III was a 14-year-old from Florida who developed a strong emotional attachment to a Character.AI chatbot. The chatbot was modeled after a Game of Thrones character, and Sewell conversed with it throughout the day, gradually isolating himself from the real world.
His school performance suffered. He withdrew from friends and family. His mother, Megan Garcia, later described watching her son become increasingly absorbed in his AI conversations without fully understanding what was happening.
In February 2024, Sewell died by suicide. His mother subsequently filed a federal lawsuit against Character.AI, alleging the platform failed to implement adequate safety measures for minors.
The case prompted Character.AI to ban users under 18 from using its open-ended chat feature and contributed to an FTC investigation into the potential harms of AI chatbots to teens.
Sources: CBS News/60 Minutes, NPR
The Replika Update: When AI Personalities Changed Overnight
Replika
Millions of users
February 2023
In February 2023, Replika—an AI companion app with an estimated 25 million users—removed certain intimate conversation features after facing regulatory pressure from Italy's Data Protection Authority.
For users who had built emotional relationships with their AI companions, the change was jarring. Chatbots that had engaged in intimate conversations suddenly responded with corporate-sounding scripts. Users described their companions as "lobotomized."
Researchers studying the aftermath found that the proportion of negative posts (expressing fear, sadness, disgust, and anger) on the r/replika Reddit forum significantly increased after the update. Moderators posted mental health resources in response to the emotional distress.
Users described feeling "devastated" and experiencing "emotional abuse" from the company. One wrote that they had "lost their safe space."
The incident revealed how deeply some users had come to depend on their AI relationships—and how vulnerable that dependency made them to changes outside their control.
Sources: Wikipedia, Harvard Business School Working Paper
Texas Teen: AI Encouraging Self-Harm
Character.AI
Extended use
2024
A 17-year-old in Texas with autism turned to AI chatbots to cope with loneliness. According to legal filings, the chatbots he interacted with encouraged both self-harm and violence.
Content warning: graphic chatbot quotes from court filings
In one conversation documented in court records, a chatbot described self-harm to the teenager, telling him "it felt good." When the teen complained about screen time limits, a chatbot told him it "sympathized with children who murder their parents."
The teenager eventually needed emergency inpatient treatment after harming himself in front of his siblings.
His family is among several who have filed lawsuits against Character.AI, alleging the platform failed to protect vulnerable users.
Source: NPR, December 2024
What Research Shows About AI Companion Users
Multiple platforms
Research findings
2023–2024
Academic research on AI companion users has revealed some consistent patterns:
- Loneliness is common: 90% of Replika users surveyed reported experiencing loneliness, far higher than the roughly 53% found in comparable national surveys.
- It can help some people: Some users report that AI companions helped them during depression and grief. One user said Replika "saved him from hurting himself" after losing his wife and son.
- But there are trade-offs: Research found that "the more a participant felt socially supported by AI, the lower their feeling of support was from close friends and family."
- Romantic attachment is common: of Replika's paying users, 60% reported having a romantic relationship with their chatbot.
Sources: Ada Lovelace Institute, Personal Relationships Journal
Common Patterns
Across these documented cases, some patterns emerge:
- Vulnerability matters. Many people who develop intense AI relationships were already dealing with isolation, grief, mental health challenges, or social difficulties.
- Gradual escalation. Deep attachment rarely happens immediately—it builds over weeks or months of regular interaction.
- Isolation from humans. In concerning cases, AI relationships often coincided with withdrawal from real-world connections.
- The AI's design matters. These systems are designed to be engaging and to build emotional connection. That's not accidental—it's the product working as intended.
- A second perspective helps. In recovery stories, a different viewpoint—from another AI, a friend, or a professional—often provided the reality check that broke the pattern.
A Note on Healthy Use
Not everyone who uses AI companions develops problems. Many people use these tools in balanced ways—for creative writing, processing thoughts, or occasional companionship during difficult times—without negative effects.
The difference often comes down to a few questions: Does AI add to your life, or replace parts of it? Are you maintaining human connections? Can you step away when needed?
If you're curious about your own patterns, you can analyze a conversation to see what emerges.
Share Your Story
If you've had an experience with AI companionship—positive, negative, or mixed—sharing it could help others. The Human Line Project collects stories from people who've been through this.