When Therapy Isn’t Enough
The 2 AM Therapist: When Mental Health Crises Don't Keep Office Hours
Traditional therapy ends at the office door. You get your hour. Then you’re on your own for days. But mental health crises don’t schedule themselves between 9 and 5, Monday through Friday. They hit at 2 AM, when you can’t sleep. At 6 PM, when work stress peaks. On Sunday, when the dread creeps in.
This week’s episode features Rajiv Kapoor, bestselling author of AI Made Simple and Prompting Made Simple, sharing a story that challenges everything we think about AI and mental health. A corporate executive, suicidal and spiraling, couldn’t wait another month between therapy sessions. So he tried something desperate: ChatGPT as a 24/7 therapy supplement.
Two weeks later? Crisis level dropped from 15/10 to 3/10. Marriage saved. Relationship with his 14-year-old daughter rebuilt. Career stabilized.
Here’s what matters: His human therapist stayed in the picture. AI didn’t replace her; it filled the 719 hours between sessions when she wasn’t available.
3 Ideas From This Episode
1. The therapy gap is a feature, not a bug — and it’s dangerous.
Traditional therapy operates on a monthly cadence. One hour of connection, then weeks of silence. For most people, that’s fine. For someone in crisis? It’s a vulnerability window wide enough to be fatal. AI tools like ChatGPT don’t replace the depth and expertise of a human therapist, but they can fill the space between sessions when the 2 AM breakdown hits. The question isn’t whether AI is “as good as” therapy; it’s whether having something is better than having nothing when you need help most.
2. Persona-based prompting turns AI from a tool into a thought partner.
Most people use ChatGPT like a slightly smarter Google. They ask factual questions and expect Wikipedia answers. But when you prompt ChatGPT to become someone — a therapist, a business coach, a board of advisors (Steve Jobs + Warren Buffett + Adam Grant) — the output quality explodes. The RTCA structure works: Role (who is ChatGPT), Task (what you need), Context (your situation), Ask (specific request). The executive used this to create three distinct AI advisors: business coach for the morning commute, life coach for the evening, therapist at night.
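If you like to script your prompts rather than retype them, the RTCA structure above can be sketched as a tiny prompt builder. This is a minimal illustration, not from the episode; the function name and the example values are my own.

```python
def rtca_prompt(role: str, task: str, context: str, ask: str) -> str:
    """Assemble a persona-based prompt using the RTCA structure:
    Role (who the AI is), Task (what you need),
    Context (your situation), Ask (the specific request)."""
    return (
        f"Role: Act as {role}.\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Ask: {ask}"
    )

# Hypothetical example in the spirit of the executive's
# morning-commute business coach:
prompt = rtca_prompt(
    role="an experienced executive business coach",
    task="help me prepare for a difficult day at work",
    context="my team is behind on a major deadline and morale is low",
    ask="ask me three questions that will sharpen my priorities today",
)
print(prompt)
```

Paste the resulting text into ChatGPT (or any AI tool); swapping the `role` value is all it takes to turn the same scaffold into a life coach for the evening or a therapist persona at night.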
3. Deepfakes are more dangerous than nuclear weapons (and nobody’s ready).
Nuclear weapons require governments, infrastructure, and billions of dollars. Deepfakes require 10 seconds of your voice and $20 software. $25 million was stolen in Hong Kong via a deepfake Zoom call — the CFO and head of finance were both AI clones. Elderly parents are getting “hostage” calls using their children’s voice clones. Taylor Swift’s face ended up on deepfake porn. And we’re still acting like this is a future problem. It’s not. The threat is here. Family safe words, Google alerts for your name, and radical skepticism are your only defenses right now.
2 Reflection Questions
If you had access to a 24/7 thought partner who never judged you, never got tired, and could help you work through problems at 2 AM — would you use it? What would you ask?
We trust therapists with our deepest secrets, even though they take notes and store them in cloud systems that get hacked. Why do we panic more about telling ChatGPT we’re anxious than we do about healthcare data breaches?
1 Thing to Try This Week
Create your AI board of advisors.
Open ChatGPT (or your AI tool of choice). Type this prompt:
“Take on the persona of [3-5 people you admire — could be Steve Jobs, Maya Angelou, your late grandfather, a fictional character]. You are now my personal board of advisors. I’m facing [specific challenge]. Here’s my context: [brief situation]. Ask me questions to help me think through this clearly.”
Let the AI interview you. Answer honestly. See what happens.
It won’t replace human wisdom. But it might surprise you how helpful it is to have a thought partner who’s available when you need it most.
Full Episode Summary
In this powerful conversation, AI author Rajiv Kapoor shares the story of a business executive who was pulled back from the edge of suicide using ChatGPT as a 24/7 therapy supplement. The executive, stuck between monthly therapy sessions and facing a mental health crisis rated “15 out of 10,” used persona-based prompting to create multiple AI advisors — a therapist, a life coach, and a board of business advisors featuring Steve Jobs, Warren Buffett, and Mark Cuban.
Within two weeks, his crisis level dropped to 3/10. He reconnected with his 14-year-old daughter (even surprising her with Taylor Swift tickets), improved his marriage, and stabilized his failing career — all while continuing to see his human therapist.
Rajiv breaks down:
The exact ChatGPT privacy settings to protect your mental health data
The RTCA prompt structure (Role, Task, Context, Ask) for better AI responses
Why deepfakes might be more dangerous than nuclear weapons
The $25 million Hong Kong deepfake heist that fooled finance executives
How to protect elderly parents from AI voice cloning scams
Why Illinois Governor J.B. Pritzker’s AI therapy ban might be dangerously backwards
ChatGPT-5’s PhD-level intelligence and what it means for everyday users
If you or someone you know is struggling with mental health, please reach out to a licensed professional or call 988 (Suicide & Crisis Lifeline). AI tools can supplement therapy, but they should never replace human care. Share this episode with someone who might benefit from learning about these tools.