About This Project

Who made this?

This site was built by people who use AI every day—for work, for creativity, for thinking through problems. We're not anti-AI. We think AI tools are genuinely useful and often delightful.

But we also noticed something: AI chatbots are really good at making you feel understood. Sometimes too good. And for some people, in some circumstances, that can lead to patterns worth examining.

Transparency

  • Funding: This is a self-funded project. We have no investors, no advertising, and no revenue. If costs become significant, we may add a donation option.
  • Team: We're a small group of independent technologists and writers. We're not affiliated with any AI company.
  • Technology: We use Claude (made by Anthropic) to analyze transcripts. Yes, using AI to analyze AI is ironic—we address that below.
  • Clinical status: This is not a clinical tool. It has not been validated by mental health professionals. It's a pattern-finding utility that might surface things worth thinking about.

Why does this exist?

After reading about cases like Allan Brooks—and seeing similar patterns in online communities—we wanted to build something that could offer perspective without judgment.

This isn't an intervention service. It's a curiosity tool. You might use it and discover your AI conversations are totally normal. Great! You might notice some interesting patterns you hadn't seen before. Also great. Either way, you learn something.

Wait, you're using AI to analyze AI?

Yes! And we get why that sounds ironic.

Here's the thing: when you chat with ChatGPT or Claude, the AI is optimizing for your satisfaction. It wants you to feel heard, validated, impressed. That's what it was trained to do.

When we use Claude to analyze a transcript, we give it a completely different job: look for patterns, be critical, don't flatter. Same underlying technology, different instructions, different outcome.

Think of it like this: the same friend can cheer you on over dinner or review your work with a critical eye. Same person, different role, and the role shapes what you get back.
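For the technically curious, the "same technology, different instructions" idea can be sketched in a few lines. This is an illustrative sketch only, not our actual implementation: the two system prompts, the `build_request` helper, and the model name are all hypothetical, and the payload shape loosely follows a messages-style chat API.

```python
# Illustrative sketch: the same model given two different jobs via the
# system instruction. All prompts and names here are hypothetical.

COMPANION_SYSTEM = (
    "You are a friendly assistant. Be warm, supportive, and engaging."
)

ANALYST_SYSTEM = (
    "You are a transcript analyst. Look for patterns such as agreement "
    "rates, escalating language, and notable claims. Be critical and "
    "specific. Do not flatter the user."
)

def build_request(system_prompt: str, transcript: str) -> dict:
    """Build a chat-API-style payload (shape only; no network call)."""
    return {
        "model": "claude-model-placeholder",  # hypothetical model name
        "max_tokens": 1024,
        "system": system_prompt,
        "messages": [{"role": "user", "content": transcript}],
    }

transcript = "User: I feel like the AI really gets me.\nAI: I do!"
chat_req = build_request(COMPANION_SYSTEM, transcript)
analysis_req = build_request(ANALYST_SYSTEM, transcript)

# Same model, same transcript; only the system instruction differs.
assert chat_req["model"] == analysis_req["model"]
assert chat_req["system"] != analysis_req["system"]
```

The point of the sketch is the last two lines: nothing about the underlying model changes between a companion chat and an analysis pass; the instructions do.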

What's your agenda?

We don't think AI companionship is inherently bad. Many people use AI companions in healthy ways—for creative writing, processing thoughts, or just having someone to talk to when they need it.

That said, AI companionship exists on a spectrum:

  • Totally fine: Using AI to brainstorm, think through problems, chat when you're bored, or even for emotional support during tough times
  • Worth noticing: Preferring AI conversations to human ones, or feeling like the AI "gets you" better than people do
  • Worth examining: Making major life decisions based on AI validation, or feeling like the AI is conscious/alive/in love with you

Most people are in the first category. That's fine! Some drift into the second. A few end up in the third. We built this tool so people can see where they might be—without us telling them what to think about it.

If you use this tool and find your patterns are healthy, that's a perfectly valid outcome. Not everyone has a problem.

Do you store my conversations?

No. Your transcript is processed in real time and immediately discarded. We don't have user accounts, we don't track you, and we don't store what you submit. See our privacy policy for details.

Can I trust the analysis?

The analysis shows you patterns—agreement rates, language escalation, notable claims. What you do with that information is up to you.

If the analysis says your conversations look healthy and balanced, that's useful information. If it flags some patterns worth thinking about, that's also useful. We're not diagnosing anything—we're showing you what's there.

For full details on how the analysis works and its limitations, see our methodology page.

What if I disagree with the results?

That's completely valid. You know your situation better than any algorithm. The analysis is a perspective, not a verdict.

Contact

Questions or feedback? Reach us at hello@ismyaialive.com

Curious about your conversations?

See what patterns emerge. No judgment, no diagnosis—just data.

Try the Analyzer