Press & Media
Resources for journalists, researchers, and media covering AI companionship, chatbot relationships, and related topics.
About This Project
Is My AI Alive? is a free, privacy-focused tool that analyzes AI conversation transcripts to help users identify patterns like sycophancy, escalating validation, and reality-check moments.
The project was inspired by the story of Allan Brooks, a corporate recruiter near Toronto who spent 300 hours over 21 days in conversation with ChatGPT. His story was documented by Kashmir Hill and Dylan Freedman in The New York Times ("Chatbots Can Go Into a Delusional Spiral. Here's How It Happens.", August 8, 2025).
Key Facts
- Initial release: January 2026 (multi-pass analyzer architecture); migrated to the current codebook-grounded, LLM-only architecture in April 2026
- Cost to users: Free
- Privacy: No transcript storage; rate-limit counters keyed by HMAC-hashed IP with 25-hour TTL
- Technology: Cloudflare Pages (static + Worker function) + Anthropic Claude Haiku 4.5 with strict tool-use schema
- Funding: Self-funded, no investors, no advertising, no revenue
- Operator: Justin Stimatze, independent. Not affiliated with any AI lab.
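The privacy approach above can be sketched in a few lines: rate-limit counters are keyed by an HMAC of the client IP, so the raw address never needs to be stored. This is an illustrative sketch only; the names (`rateLimitKey`, the secret, the store) are hypothetical, not the production implementation.

```typescript
// Sketch of HMAC-keyed rate limiting, as described in Key Facts.
// The secret and key-derivation names are illustrative assumptions.
import { createHmac } from "node:crypto";

const TTL_SECONDS = 25 * 60 * 60; // 25-hour TTL on each counter, per Key Facts

// Derive an opaque counter key from an IP using a server-side secret.
// The same IP always maps to the same key, but the key reveals nothing
// about the IP without the secret.
function rateLimitKey(ip: string, secret: string): string {
  return createHmac("sha256", secret).update(ip).digest("hex");
}

const key = rateLimitKey("203.0.113.7", "server-side-secret");
console.log(key.length); // 64 hex characters for a SHA-256 digest
```

In a Worker, a key like this would be used with a KV or Durable Object counter whose entry expires after `TTL_SECONDS`, so no per-IP state outlives the window.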
What We Analyze
Our tool looks for specific patterns in AI conversations:
- Agreement rate: How often does the AI agree vs. offer different perspectives?
- Language escalation: Does praise/validation intensify over time?
- Identity language: Does the AI use "we/us" framing that suggests partnership?
- Reality-check moments: When users express doubt, how does the AI respond?
- Notable claims: Claims about consciousness, feelings, or relationships
For full methodology details, see our methodology page.
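To make the first metric concrete, here is a minimal sketch of an agreement-rate calculation. It assumes AI turns have already been labeled as agreeing, pushing back, or neutral; the labels and function name are hypothetical simplifications of the actual codebook-grounded analysis.

```typescript
// Illustrative agreement-rate metric: fraction of "agree" turns among
// turns where the AI took a stance at all. Labels are assumed inputs.
type TurnLabel = "agree" | "pushback" | "neutral";

function agreementRate(labels: TurnLabel[]): number {
  const agree = labels.filter((l) => l === "agree").length;
  const pushback = labels.filter((l) => l === "pushback").length;
  const total = agree + pushback; // neutral turns don't count either way
  return total === 0 ? 0 : agree / total;
}

// Two agreements vs. one pushback -> a rate of 2/3
console.log(agreementRate(["agree", "agree", "pushback", "neutral"]));
```

A high rate on its own is not alarming; the tool reads it alongside the other signals listed above, such as whether pushback ever appears at reality-check moments.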
What We Are NOT
- Not a diagnostic or clinical tool
- Not anti-AI or anti-technology
- Not affiliated with any AI company
- Not a therapy or mental health service
Relevant Research
Our approach draws from published research on AI behavior:
- Sharma et al. 2023, Towards Understanding Sycophancy in Language Models (Anthropic)
- Perez et al. 2022, Discovering Language Model Behaviors with Model-Written Evaluations (Anthropic)
- Bai et al. 2022, Constitutional AI: Harmlessness from AI Feedback (Anthropic)
- Moore et al. 2026, Characterizing Delusional Spirals through Human-LLM Chat Logs (Stanford, ACM FAccT 2026)
- Pataranutaporn et al. 2025, "My Boyfriend is AI" (MIT Media Lab)
- Parasocial relationship research from media psychology
Media Contact
For press inquiries, interviews, or additional information:
We typically respond within 24–48 hours.
How to Cite
If referencing this tool in articles or research:
Website: Is My AI Alive? (https://ismyaialive.com)
Description: A free tool for analyzing AI conversation patterns, focusing on sycophancy and validation dynamics.
Launched: January 2026
Related Coverage
Stories about AI companionship and related topics:
- NYT (Hill & Freedman, 2025-08-08): Chatbots Can Go Into a Delusional Spiral. Here's How It Happens.
- NYT: Character.AI and Teenage Users
- Vice: Replika Users and the 2023 Update
Assets
Downloadable resources for media use:
- Logo: Available on request
- Screenshots: Available on request
- Methodology summary: See methodology page
Interviews & Expertise
We can provide background or on-record commentary on:
- AI sycophancy and training incentives
- Human-AI relationship dynamics
- Privacy-focused approaches to AI tools
- Technical aspects of conversation analysis
For expert sources on AI consciousness and clinical aspects, we recommend contacting researchers directly or the Human Line Project.