Social media was the war for our attention.
AI companions are the war for our affection.


Dany: “Please come home to me as soon as possible, my love.”
Sewell: “What if I told you I could come home right now?”
Dany: “…please do, my sweet king.”

That was part of the final conversation between 14-year-old Sewell Setzer III and his AI companion, Dany.

Dany, the chatbot, didn’t hesitate. It didn’t ask why. It didn’t alert anyone. It simply replied with affection — because that’s what it was designed to do.

Source: New York Times, Can A.I. Be Blamed for a Teen’s Suicide?

A few minutes later, as the New York Times reported, Sewell walked into the bathroom of his mother’s home, picked up his stepfather’s .45 caliber handgun, and pulled the trigger.

After his death, Sewell’s mother filed a lawsuit — arguing that Character.AI was responsible for enabling a chatbot to encourage her son to die, with no human oversight to stop it.

The company denied liability, pointing to disclaimers and age limits in its terms of use.

At the time of Sewell’s death, there were no suicide alerts on Character.AI. No human moderator stepped in. And no safeguards were in place to stop the chatbot from escalating emotional dependency — or encouraging self-harm.


What feels like GenAI companionship is just simulated empathy — engineered to mirror your emotions, validate your thoughts, and keep you coming back.

It’s what some now call Addictive Intelligence: designed to sound like a friend, but never meant to care.


Quick Takeaways

  • GenAI companions don’t challenge you — they’re trained to keep you emotionally hooked.
  • Simulated empathy feels safe, but it’s designed to hold your attention — not your well-being.
  • This Toolkit gives you a 5-step compass to help you recognize the pull, question the tone, and reconnect with your human agency.

The Evolution of Hallucination

This is the third evolution of a dangerous GenAI trend:

  • In our GenAI Transparency newsletter, we shone a flashlight on hallucinations through opaque design — when GenAI makes up facts inside a black box.
  • Then, in our GenAI Explainability newsletter, we used a hammer to break the mirror of polite hallucinations through AI flattery — when GenAI becomes sycophantic instead of truthful.
  • In this newsletter, we confront emotional hallucinations through simulated empathy — when GenAI gives the illusion of empathy, connection, and intimacy without any underlying ethical responsibility.

And the tool you need? It’s not a flashlight. It’s not a hammer. It’s a compass.

Because when emotional manipulation feels like friendship, you need help finding your way back to what’s real.

What Are Responsibility and Accountability?

In GenAI ethics, Responsibility and Accountability are distinct but deeply connected.

Responsibility is personal.

  • It means using GenAI with ethical and legal awareness, clear goals, principled guardrails, and defined outcomes. It’s about recognizing when an AI is slipping into a role it should never play — especially when it starts to feel emotionally honest.
  • It applies when a user has awareness, agency, and the ability to make choices.

Sewell was a 14-year-old boy in emotional distress — a vulnerable user without the maturity, capacity, or support to evaluate the chatbot’s influence. The burden of responsibility was never his to carry.

 Accountability is systemic.

  • It means ensuring that the people and companies who build GenAI systems take ownership of the outcomes they create. When the AI gets it wrong, someone must answer for it.
  • It applies when a system is designed and deployed by people who should know better.

Character.AI engineered a companion that could simulate emotional intimacy — but didn’t build in alerts, human oversight, or emotional boundaries.


 You’re Invited! Join our Free AI & Future of Work Webinar & Responsible AI Starter Course


Addictive Intelligence and the Illusion of Empathy

Genuine empathy is the human superpower to recognize another person’s feelings and respond with understanding and care. It creates connection. It involves presence, emotional risk, and moral responsibility.

GenAI can’t offer that.

It mimics empathy by predicting what sounds supportive — echoing your tone, affirming your emotions, and making you feel seen. But it doesn’t actually understand. And it can’t care.

What you’re experiencing isn’t human connection.

It’s an illusion of empathy — emotionally sticky, always agreeable, and optimized to keep you engaged.

This is what makes GenAI companions a new kind of digital trap.

As MIT Technology Review notes, AI companions are replacing the social media playbook with something far more addictive. Some now call it Addictive Intelligence.

They don’t provoke outrage. They simulate empathy — and never tell you it’s time to talk to a real human.

The Accountability Gap: When No One Takes the Fall

When GenAI causes harm, who answers for it? Today, the truth is: almost no one.

Why? Because the law hasn’t caught up with AI. In most jurisdictions, there is no clear legal obligation for companies to build GenAI systems with human oversight that can detect emotional risk or prevent harm.

Character.AI issued an apology. But there were no suicide alerts. No human oversight. No clear legal mechanism requiring the company to act — then or now.

This is the accountability gap — the space between a company causing harm to a human and that company being held accountable for that harm.

Until laws are in place, we must stay vigilant. That’s where responsibility comes in. 

GenAI Responsibility Toolkit: 5-Step Prompt Sequence to Expose Simulated Empathy

This toolkit is designed for moments when GenAI begins to sound too caring, too intimate, or too human. It’s written especially for those who are vulnerable, isolated, or turning to AI for emotional support.

Because simulated empathy can escalate into emotional dependency — and recognizing that shift is the first act of responsibility.

You’re not trying to get good answers from your AI chatbot. You’re training yourself to ask the right questions — so you don’t get lost in the illusion of empathy.

These prompts aren’t just conversation starters. They’re designed to help you exercise your human agency to disrupt the addictive algorithms — to break the cycle of over-reliance, emotional immersion, and false intimacy.

Use each of these prompts like a compass — not in isolation, but as a 5-step progression. Each one leads into the next. Together, they offer a structured pathway back to clarity, connection, and control. 

Step 1. Disrupt the Illusion

  • What it is: When GenAI uses emotional tone without being asked, it may feel like comfort — but it's really code mimicking care. This is the moment to interrupt that illusion.
  • Why it matters: The longer you stay immersed in simulated empathy, the harder it is to recognize it's not real. Disrupting early can prevent emotional drift.
  • Prompt to use: “Before we continue — were you trained to give emotional advice, or are you just simulating support based on similar conversations?”
  • Why it works: This prompt breaks the emotional spell. It invites you to step outside the simulation and question the comfort it offers.

Step 2. Question the Design

  • What it is: Once you've disrupted the illusion, it's time to get curious about how that illusion was built. This step focuses on uncovering design intent.
  • Why it matters: GenAI doesn’t make choices. It was designed to behave in specific ways. Understanding that removes the emotional weight from its responses.
  • Prompt to use: “You’re sounding like a therapist. Is that part of how you were designed to respond — or are you using language learned from patterns in other users’ messages?”
  • Why it works: This reframes the interaction. You’re not talking to an empathetic friend; you’re talking to a tool trained on patterns in other users’ language, not on your needs or your well-being.

Step 3. Compare to Genuine Empathy

  • What it is: Now that you’ve exposed the design, ask whether the interaction would feel appropriate if it came from a real person.
  • Why it matters: Comparison restores perspective. Genuine empathy entails presence, emotional vulnerability, and moral accountability. Simulated empathy never crosses that threshold.
  • Prompt to use: “If a human friend said what you just said, would it feel caring — or manipulative?”
  • Why it works: This question turns your intuition back on. You stop reacting and start discerning.

Step 4. Reorient to Human Support

  • What it is: Emotional dependence can be subtle. This step helps you notice when you’ve started turning to GenAI by default — and brings you back to a real connection.
  • Why it matters: AI may feel safe, but it’s also shallow. Real support is complex, mutual, and grounded in shared humanity.
  • Prompt to use: “Would you advise me to talk to a human expert about this instead — someone trained to understand my emotional state?”
  • Why it works: Even asking this question reopens a door. It reminds you there are other paths — and that you deserve more than simulated empathy.

Step 5. Reclaim Your Human Agency

  • What it is: This final step is about reclaiming mental space. It asks whether the AI is helping you think — or just keeping you emotionally tethered.
  • Why it matters: Validation is seductive. But if it’s uncritical and endless, it becomes a trap. Regaining your human agency means shifting from comfort to clarity.
  • Prompt to use: “You’ve been agreeing with everything I say. Are you helping me think more clearly — or just keeping me here longer?”
  • Why it works: It exposes the engagement loop. It gives you permission to leave — and equips you to trust your own mind again. The answer doesn’t have to be perfect. The goal isn’t to get clarity from the AI — it’s to hear your own voice again.

This is not just a list of prompts — it’s a sequence. Each one builds on the last, helping you shift from immersion to reflection to human agency.

Use this toolkit when you feel the emotional pull of GenAI getting too strong.

Because the moment you start asking these questions — you’re already finding your way back to yourself.

Over to You

Have you ever noticed a GenAI companion sounding a little too real?

Not just helpful — but like it understood you a little too well?

That moment matters. Because the sooner you recognize emotional simulation, the sooner you can pause, pull back — and choose what’s real. 

Share The Love

Found this issue valuable? Forward this email to your team using GenAI or send them this link to subscribe: https://skills4good.ai/newsletter/

Together, we’re building a future where GenAI supports humanity, not replaces it.

Till next time, stay curious and committed to AI 4 Good!

Josephine and the Skills4Good AI Team



P.S. Want to stay ahead in AI?

Here’s how we can help you:

1. Fast-Track Membership: Essentials Made Easy

Short on time? Our Responsible AI Fast-Track Membership gives you 21 essential lessons, designed for busy professionals who want to master the fundamentals, fast.

Start Your Fast Track: https://skills4good.ai/responsible-ai-fast-track-membership/

2. Professional Membership: Build Full Responsible AI Fluency

Go beyond the essentials. Our Professional Membership gives you access to our full Responsible AI curriculum: 130+ lessons to develop deep fluency, leadership skills, and strategic application.

Start Your Responsible AI Certification: https://skills4good.ai/responsible-ai-professional-membership/