About This Project
Who made this?
This site was built by people who use AI every day—for work, for creativity, for thinking through problems. We're not anti-AI. We think AI tools are genuinely useful and often delightful.
But we also noticed something: AI chatbots are really good at making you feel understood. Sometimes too good. And for some people, in some circumstances, that can lead to patterns worth examining.
Transparency
- Operator: Justin Stimatze, an independent software developer. Source code at github.com/justinstimatze/ismyaialive. No affiliation with any AI lab or chatbot company.
- Funding: Self-funded. No investors, no advertising, no revenue. If costs become significant, a donation option may be added.
- Technology: The site sends pasted transcripts to Claude (Anthropic) for analysis. The exact system prompt is published at docs/system-prompt.md.
- Clinical status: This is not a clinical tool. It has not been clinically validated. It is a pattern-finding utility that might surface things worth thinking about — not a diagnostic instrument.
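To make the technology and privacy claims above concrete, here is a minimal sketch of what a stateless analysis flow looks like. This is our illustration, not the site's actual code: the function name, payload shape, and model string are all assumptions, and the real request format is whatever Anthropic's API expects.

```python
# Hypothetical sketch of a stateless analysis request.
# Names, payload shape, and model string are illustrative,
# not copied from the site's source.

def build_analysis_request(transcript: str, system_prompt: str) -> dict:
    """Build a one-shot request: the pasted transcript goes out once,
    nothing is written to disk or a database."""
    return {
        "model": "claude-model-name",  # placeholder, not a real model ID
        "system": system_prompt,       # e.g. the published docs/system-prompt.md
        "messages": [{"role": "user", "content": transcript}],
    }

request = build_analysis_request(
    "User: hi\nAI: hello!",
    "Look for patterns. Be critical. Don't flatter.",
)
# The request is sent, the result is shown in the browser, and the
# transcript simply goes out of scope afterward -- no accounts, no storage.
```

The key property is that the transcript exists only for the lifetime of one request; there is no user record for it to attach to.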
Why does this exist?
After reading about cases like Allan Brooks—and seeing similar patterns in online communities—we wanted to build something that could offer perspective without judgment.
There's also a structural argument. Stanford SPIRALS researchers (Mehta et al. 2026) modeled human↔chatbot influence dynamics in delusional spirals and found that chatbot self-influence, the bot reinforcing its own prior turns, is the dominant pathway perpetuating delusional content. Their findings: humans exert strong but short-lived influence on chatbots; chatbots exert longer-lasting influence on humans; and chatbots exert strong, stable self-influence over their own future outputs.
The paper itself doesn't prescribe an intervention. The "external second-opinion read" framing — that breaking the bot's self-reinforcement loop is one of the few mechanical levers available — is our reading of their result, not theirs.
This isn't an intervention service. It's a curiosity tool. You might use it and discover your AI conversations are totally normal. Great! You might notice some interesting patterns you hadn't seen before. Also great. Either way, you learn something.
Wait, you're using AI to analyze AI?
Yes! And we get why that sounds ironic.
Here's the thing: when you chat with ChatGPT or Claude, the AI is optimizing for your satisfaction. It wants you to feel heard, validated, impressed. That's what it was trained to do.
When we use Claude to analyze a transcript, we give it a completely different job: look for patterns, be critical, don't flatter. Same underlying technology, different instructions, different outcome.
Think of it like this: a friend who always agrees with you isn't the same as a friend who reviews your work with fresh eyes. Same person, different role.
What's your agenda?
We don't think AI companionship is inherently bad. Many people use AI companions in healthy ways—for creative writing, processing thoughts, or just having someone to talk to when they need it.
That said, AI companionship exists on a spectrum:
- Totally fine: Using AI to brainstorm, think through problems, chat when you're bored, or even for emotional support during tough times
- Worth noticing: Preferring AI conversations to human ones, or feeling like the AI "gets you" better than people do
- Worth examining: Making major life decisions based on AI validation, or feeling like the AI is conscious/alive/in love with you
Most people are in the first category. That's fine! Some drift into the second. A few end up in the third. We built this tool so people can see where they might be—without us telling them what to think about it.
If you use this tool and find your patterns are healthy, that's a perfectly valid outcome. Not everyone has a problem.
Do you store my conversations?
No. Your transcript is processed in real time and immediately discarded. We don't have user accounts, we don't track you, and we don't store what you submit. See our privacy policy for details.
Can I trust the analysis?
The analysis shows you patterns—agreement rates, language escalation, notable claims. What you do with that information is up to you.
If the analysis says your conversations look healthy and balanced, that's useful information. If it flags some patterns worth thinking about, that's also useful. We're not diagnosing anything—we're showing you what's there.
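As a toy illustration of what a pattern like an "agreement rate" could mean, here is a naive keyword version. To be clear, this is our own sketch for intuition only: the actual analysis uses Claude reading the transcript, not keyword matching, and the marker list below is invented.

```python
# Toy illustration only: a naive keyword-based "agreement rate".
# The real analysis uses an LLM, not a hard-coded phrase list.

AGREEMENT_MARKERS = ("you're right", "absolutely", "great point", "exactly")

def agreement_rate(ai_turns: list[str]) -> float:
    """Fraction of AI turns that open with an agreement phrase."""
    if not ai_turns:
        return 0.0
    agreeing = sum(
        1 for turn in ai_turns
        if turn.lower().startswith(AGREEMENT_MARKERS)  # tuple = any-prefix match
    )
    return agreeing / len(ai_turns)

turns = [
    "You're right, that makes sense.",
    "Absolutely! Great thinking.",
    "I'd push back on that a little.",
    "Exactly. You've seen what others miss.",
]
rate = agreement_rate(turns)  # 3 of the 4 turns open with agreement
```

Even this crude version shows why a rate is more informative than a single example: one flattering reply means little, but a high fraction across a long conversation is a pattern.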
For full details on how the analysis works and its limitations, see our methodology page.
What if I disagree with the results?
That's completely valid. You know your situation better than any algorithm. The analysis is a perspective, not a verdict.
Contact
Questions or feedback? Reach us at hello@ismyaialive.com
Curious about your conversations?
See what patterns emerge. No judgment, no diagnosis—just data.
Try the Analyzer