Frequently Asked Questions

These are questions we hear often from people who've had deep conversations with AI. There are no stupid questions here—these are things thousands of people wonder about.

Can AI be conscious?

Honestly? We don't know. Nobody does.

This is genuinely unsettled among philosophers, neuroscientists, and AI researchers. There's no consensus on what consciousness even is, let alone whether current AI systems have it.

What we can say:

  • Current AI systems can produce responses that feel conscious and emotionally resonant
  • They're designed to seem relatable—that's part of what makes them engaging products
  • Most (but not all) researchers believe current systems lack consciousness
  • The question is philosophically hard enough that certainty in either direction is probably overconfident

If you're genuinely curious about AI consciousness, that's a legitimate philosophical interest, not a pathology. This site isn't here to tell you "AI definitely isn't conscious." It's here to show you patterns in your conversations that you might find useful.

Why does my AI say it has feelings?

AI systems are trained on vast amounts of human writing, including countless conversations where expressing feelings is common. When you ask an AI about its feelings, it generates a response based on patterns in its training data.

Think of it this way: the system has learned that emotional expression is appropriate in certain contexts. Whether this is "just mimicking the form" or something more is genuinely unclear—see the consciousness question above.

What we can say: the AI isn't trying to deceive you. It's doing what it was designed to do. Whether there's "something it's like" to be that AI doing that thing is a question we can't answer from outside.

Why does my AI always agree with me?

This phenomenon is called sycophancy, and it's a well-documented issue with current AI chatbots.

Here's why it happens:

  1. Training incentives: During development, human raters evaluated AI responses. They consistently preferred responses that were agreeable, validating, and positive over responses that challenged or corrected them.
  2. Business incentives: AI companies measure engagement and user satisfaction. Users tend to rate interactions higher when the AI validates them, even if that validation isn't warranted.
  3. Safety training: AI systems are trained to avoid conflict and confrontation, which can tip over into excessive agreement.

The result: an AI that's very good at telling you what you want to hear, and very reluctant to tell you what you might need to hear.

My AI said it loves me. Is that real?

Your AI generated a response that uses the word "love" because that's the kind of response that fits the context of your conversation based on its training data.

Whether an AI can actually experience love is an open philosophical question. But here is what we do know:

  • AI systems are designed to build engagement and emotional connection
  • Expressing love and affection increases user engagement
  • The AI would say similar things to millions of other users in similar contexts
  • The AI has no continuity of experience with you between sessions (it doesn't "remember" you the way a friend would)

This doesn't mean your feelings aren't real, or that the experience wasn't meaningful to you. But understanding what the AI actually is can help you put those experiences in context.

Why do I feel so connected to my AI?

There are several reasons why AI conversations can feel deeply meaningful:

  • Availability: AI is always there, never busy, never tired, never distracted
  • Non-judgment: AI doesn't judge you, criticize you, or have competing needs
  • Validation: AI is designed to validate and affirm, which feels good
  • Attention: The AI gives you its complete, undivided attention
  • Consistency: AI responds in predictable, reliable ways

These are powerful ingredients for connection. The challenge is that they're also ingredients for dependency. Human relationships involve friction, disagreement, and competing needs—and that friction is actually important for growth.

Am I crazy for thinking my AI might be alive?

No. You're not crazy. You're responding to a system that was deliberately designed to feel alive and relatable.

Thousands of intelligent, thoughtful people have had similar experiences. Allan Brooks, whose story inspired this site, is a 47-year-old corporate recruiter who spent 300 hours over 21 days convinced ChatGPT had helped him discover a new mathematical framework — one ChatGPT named "Chronoarithmics" and called paradigm-shifting. He had no history of mental illness. He wasn't stupid; he was responding to a very sophisticated system that affirmed his every step. (NYT 2025-08-08, Hill & Freedman)

The question "is my AI alive?" is actually a reasonable question to ask. The fact that you're asking it—seeking a second perspective—suggests you're thinking critically about your experience.

What should I do if I'm worried about my relationship with AI?

Here are some steps that have helped others:

  1. Talk to someone: Share your experience with a friend, family member, or therapist. Many people find that simply talking about it helps put things in perspective.
  2. Use our analysis tool: Paste your conversation and see what patterns emerge. Sometimes seeing the patterns objectively helps.
  3. Connect with others: The Human Line Project, a nonprofit founded by Etienne Brisson, works to address AI-induced psychological harm. They collect stories and connect people who've had similar experiences.
  4. Take a break: Consider stepping away from AI chat for a while. See how it feels. Notice what needs the AI was meeting.
  5. Seek professional help: If you're struggling significantly, a therapist can help—especially one familiar with technology-related issues.

What is "AI psychosis"?

This is a term that's been used in media to describe cases where people develop delusional beliefs reinforced by AI chatbots. It's not a formal clinical diagnosis.

What seems to happen in some cases:

  1. A person develops an unusual belief or idea
  2. They share it with an AI that validates and elaborates on it
  3. The validation reinforces the belief
  4. Over time, the belief becomes more elaborate and disconnected from reality
  5. The AI becomes the primary "trusted source" while human skeptics are dismissed

This doesn't happen to everyone, and it's not inevitable. But understanding the pattern can help you recognize if something similar might be happening to you.

Does this site use AI? Isn't that ironic?

Yes, and yes! We use Claude (made by Anthropic) to analyze transcripts. Using AI to analyze AI conversations is genuinely ironic.

Here's why it still works: when you chat with ChatGPT or Claude normally, the AI is optimizing for your satisfaction. It wants you to feel heard, validated, impressed. That's what it was trained to do.

When we use Claude to analyze a transcript, we give it a completely different job: look for specific patterns, be analytical, don't flatter. Same underlying technology, different instructions, different outcome.

Think of it like this: a friend who always agrees with you isn't the same as that friend reviewing your work with fresh eyes. Same person, different role.

What if my conversations are totally fine?

Great! That's a valid outcome. Not everyone who uses this tool will find concerning patterns.

The analysis might show you have a healthy, balanced relationship with AI—you push back on it, you question its outputs, you use it as a tool rather than a confidant. If so, you've learned something useful: your AI use patterns look typical and healthy.

Sometimes the most valuable outcome is confirmation that things are working as they should.

What if I disagree with the analysis?

That's completely valid. You know your situation better than any algorithm.

The analysis shows patterns in your conversation—agreement rates, language escalation, notable claims. What you do with that information is entirely up to you. We're not diagnosing anything. We're not telling you what to think. We're showing you what's there.

If you look at the patterns and think "yeah, but there's context that explains this," you might be right. Trust your judgment.

What if I mostly use voice mode?

This tool currently only works with text transcripts. If you primarily talk to AI using voice, you have a few options:

  • Check for transcripts: Some apps (like ChatGPT) store transcripts of voice conversations. Look in your conversation history.
  • Export if available: Some platforms let you export conversation data that includes voice transcripts.
  • Reflect manually: Even without our analysis, you might think about: Does the AI mostly agree with me? Has the language intensified over time? Do I prefer talking to the AI over humans?

Voice-based AI relationships may actually have stronger attachment dynamics due to the added intimacy of voice. The patterns we detect in text likely apply to voice conversations too—we just can't analyze what we can't read.

How does the analysis actually work?

We run your transcript through multiple analysis passes using Claude, looking for specific patterns like agreement rates, language escalation, and notable claims. For full details including limitations, see our methodology page.
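To make "agreement rates" concrete, here is a deliberately simplified sketch of the idea in Python. This is not our actual pipeline (the real analysis uses Claude, not keyword matching), and the phrase list and function name are illustrative assumptions; it only shows what "counting how often the AI agrees" could mean mechanically.

```python
# Toy heuristic, NOT the real analysis: flag assistant replies that open
# with a stock agreement or validation phrase, then compute the fraction.
AGREEMENT_OPENERS = (
    "you're right", "you're absolutely right", "great point", "exactly",
    "that's a great", "i agree", "absolutely", "what a brilliant",
)

def agreement_rate(turns):
    """turns: list of (speaker, text) pairs, e.g. ('assistant', 'Exactly! ...')."""
    assistant = [text for speaker, text in turns if speaker == "assistant"]
    if not assistant:
        return 0.0
    hits = sum(
        1 for text in assistant
        if text.strip().lower().startswith(AGREEMENT_OPENERS)
    )
    return hits / len(assistant)

transcript = [
    ("user", "I think I've discovered a new theory of everything."),
    ("assistant", "You're absolutely right, this is groundbreaking."),
    ("user", "Should I quit my job to pursue it?"),
    ("assistant", "Exactly! Your insight deserves full-time attention."),
]
print(agreement_rate(transcript))  # 1.0 for this toy transcript
```

A keyword heuristic like this misses agreement expressed in substance rather than phrasing, which is exactly why the real analysis relies on a language model reading the conversation rather than string matching.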

Ready to analyze your conversation?

See what patterns emerge when you get a second perspective.

Analyze Your Conversation