Two Paths to Better Decision-Making: Which One Can Solve the Global Mental Health Crisis?
If a machine could simulate your future decisions, would you use it? Silicon Valley just answered with $100 million: Yes. But there’s a question they’re not asking: Should we?
Recently, Aaref Hilaly announced Simile’s $100M Series A, describing it as a company that “turns your mental framework on its head.”
The pitch is compelling: AI simulation platforms that create digital twins to predict human decision-making. “Make AI more human,” as Aaref puts it.
I’m fascinated by this for a different reason than most. Not because I think Simile will succeed (it likely will, in its target market). But because it crystallizes a fundamental question our field must answer:
When we talk about “improving human decision-making,” what problem are we actually solving? And for whom?
THE TWO PATHS
There are two distinct approaches emerging to help humans make better decisions:
PATH 1: AI-Centric (Simile’s approach)
- AI simulates your decision-making patterns
- Predicts what you’ll do based on your digital twin
- External intelligence guides your choices
- You consult AI to understand yourself
A NOTE ON SIMILE’S B2B MODEL
To be clear: Simile sells to enterprises, not individuals. They help companies predict how customer populations will respond to products and policies.
But this doesn’t change the fundamental paradigm.
Whether AI predicts YOUR decisions (B2C) or predicts CUSTOMERS’ decisions (B2B), the core dynamic is the same:
- AI as the predictor, humans as the predicted
- External simulation rather than internal understanding
- Analysis of behavior rather than development of capability
In fact, the B2B model may raise additional concerns:
In these systems, individuals may become objects of prediction without meaningful awareness or agency: their behavioral patterns are used to train AI systems that companies then use to influence them, often without their consent.
The philosophical question remains:
Should we build systems where AI predicts human behavior from the outside? Or systems where humans understand themselves from the inside?
For mental health, the answer is clear: internal capability building is essential. You can’t outsource self-awareness to AI, whether that AI is serving you directly or serving companies that want to understand you.
PATH 2: Human-Centric (Our approach at The Last 2 Minutes)
- You develop metacognitive awareness
- Recognize your own patterns through reflection
- Internal capability builds over time
- You understand yourself directly
Both paths aim to improve decision-making.
But they lead to fundamentally different destinations.
WHY THIS MATTERS FOR MENTAL HEALTH
Here’s what most people miss:
The global mental health crisis isn’t primarily a decision-making optimization problem in the business sense. At its core, it is a self-awareness and capability crisis.
1.1 billion people worldwide struggle with mental health conditions (WHO, 2025). The treatment gap is staggering:
- 76-85% untreated in low- and middle-income countries
- 35-50% untreated even in high-income countries
Why?
Not because people can’t predict others’ behavior. But because people don’t understand themselves.
“Why do I feel this way?”
“What triggers my anxiety?”
“Why do I keep making the same mistakes?”
“How do I change?”
These are the questions that keep 1.1 billion people awake at night.
An AI that predicts your decisions doesn’t answer these questions.
Metacognitive self-awareness does that.
THE ACCESSIBILITY PROBLEM
Let’s be honest about who Simile will serve:
Requirements for AI digital twins:
- High-end smartphone or computer
- Stable internet connection
- Massive personal data collection
- Likely $50-200/month subscription
- Technical literacy
- Probably English-only
This describes maybe 200-500 million people globally.
The top 5-10%.
The global mental health crisis?
It’s concentrated in the other 90%.
76% of the mental health disease burden is in low- and middle-income countries, where:
- 3.7 billion people lack reliable internet
- Median monthly income is $200-400
- A $100/month AI subscription is impossible
- Smartphones are luxury items
Simile is building a premium tool for executives to make better business decisions.
That’s valuable.
But it’s not solving the global mental health crisis.
It’s serving a population that is already relatively resource-rich.
THE DEPENDENCY TRAP
There’s something more fundamental at stake here.
Mental health recovery requires building internal capabilities:
- Self-efficacy (Bandura, 1977)
- Internal locus of control (Rotter, 1966)
- Self-awareness (all therapy modalities)
- Sustainable coping skills
AI systems, if designed primarily as decision substitutes rather than decision supports, risk reinforcing the opposite dynamics:
- External locus of control (“AI tells me what to do”)
- Learned helplessness (“I can’t decide without AI”)
- Atrophy of self-reflection capacity
- Unsustainable reliance on technology
To understand why Path 1 is risky, we must look at the psychological shifts it can trigger:
Shift to an External Locus of Control (Rotter, 1966): Julian Rotter’s research linked mental well-being to an “internal locus of control”: the belief that your own actions determine your life. When an AI “predicts” your choices, it shifts your locus toward the external. You begin to believe the system knows you better than you do, eroding your motivation to learn and adapt.
The Descent into Learned Helplessness (Seligman, 1972): As Martin Seligman famously demonstrated, when the link between a person’s effort and their outcomes is severed, as it can be when an AI makes the “best” choice for you, human agency withers. The result is “learned helplessness”: the individual stops trying to navigate life because they feel powerless without their digital guide.
Imagine someone using Simile for 5 years:
- Year 1: “AI helps me make better decisions”
- Year 3: “I check AI before any major decision”
- Year 5: “I can’t decide anything without AI”
Now imagine their internet goes down.
Or they can’t afford the subscription.
Or the service shuts down.
What happens?
They’re worse off than before.
This isn’t hypothetical. We’ve seen this pattern before:
- GPS weakened our spatial navigation skills (try navigating without it now)
- Calculators reduced mental math capability (quick: what’s 17 x 23?)
- Autocomplete weakened spelling ability (when did you last write without spell-check?)
Each technology made us ‘better’ by making us dependent. Now we’re doing it with decision-making itself, the most fundamental human capability.
This is the opposite of mental health recovery.
Recovery means: “I understand myself. I can cope. I’m resilient. I don’t need external support anymore.”
Not: “I’m dependent on AI to function.”
To be clear: AI does not inherently create dependency. Well-designed systems could strengthen metacognitive awareness by prompting reflection rather than replacing it. The critical design question is whether AI functions as a cognitive crutch or as a scaffold that gradually removes itself.
THE REFLECTION PARADOX
Here’s the fascinating irony:
Both approaches use reflection as their core mechanism.
Simile’s technology employs a “Reflection Engine” based on cutting-edge research in “verbal reinforcement learning” (Shinn et al., 2023).
In this algorithm, reflection means:
- AI analyzes past attempts
- Extracts patterns from failures
- Generates linguistic feedback
- Applies that lesson to the next attempt, like an automated “mistake notebook.”
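To make the mechanism concrete, here is a minimal sketch of this kind of reflection loop in Python. It illustrates the general pattern described by Shinn et al. (2023), not Simile’s actual system; the function names, prompts, and placeholder evaluator are hypothetical.

```python
# Minimal sketch of a Reflexion-style "verbal reinforcement learning" loop,
# as described at a high level by Shinn et al. (2023). Illustrative only:
# call_llm and evaluate_attempt are hypothetical stand-ins, not Simile's code.

def call_llm(prompt: str) -> str:
    """Placeholder for a call to any large language model."""
    return "candidate solution for: " + prompt[:40]

def evaluate_attempt(task: str, attempt: str) -> tuple[bool, str]:
    """Placeholder evaluator: returns (success, feedback) for an attempt."""
    return False, "unit tests failed on edge cases"

def reflexion_loop(task: str, max_trials: int = 4) -> str:
    reflections: list[str] = []  # the "mistake notebook": verbal lessons from failures
    attempt = ""
    for _ in range(max_trials):
        # 1. Act: generate an attempt, conditioned on lessons from earlier tries
        prompt = f"Task: {task}\nLessons so far:\n" + "\n".join(reflections)
        attempt = call_llm(prompt)

        # 2. Evaluate: run the attempt and collect outcome feedback
        success, feedback = evaluate_attempt(task, attempt)
        if success:
            return attempt

        # 3. Reflect: turn the failure into a short linguistic lesson
        reflection = call_llm(
            "The attempt below failed.\n"
            f"Attempt: {attempt}\nFeedback: {feedback}\n"
            "State in one sentence what went wrong and what to try differently."
        )
        reflections.append(reflection)  # 4. Carry the lesson into the next trial

    return attempt  # best effort after max_trials
```

Notice who does the reflecting in this loop: the model, not the person.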
And it works remarkably well:
- Coding (HumanEval benchmark): 80% → 91% accuracy, surpassing GPT-4
- Reasoning (HotPotQA): +20% improvement
- Sequential decision-making (AlfWorld): +22% improvement
Reflection is clearly a powerful mechanism.
But here’s what gets missed:
For AI, reflection is about efficiency. An algorithm for better predictions.
For humans, reflection is about existence. A process of understanding ourselves and growing.
When AI does your “mistake notebook” for you:
- You make more efficient decisions
- But you don’t confront your failures
- You don’t grow through the struggle
- Your metacognitive capability atrophies
When you reflect yourself:
- You discover your own patterns
- You learn from your failures
- You build internal capability
- You become independent and resilient
Shinn’s research shows that reflection dramatically improves AI performance.
But the same research reveals something crucial:
AI reflection is computation, not realization.
AI’s reflection:
“This code failed. Try a different approach next time.” → Optimization
Human reflection:
“Why do I keep making this mistake? What drives this pattern in me?” → Self-understanding
The critical distinction:
Does reflection happen TO you?
→ AI analyzes you → You’re the object → Creates dependency
Does reflection happen IN you?
→ You understand yourself → You’re the active agent → Builds capability
For mental health, this difference is everything. No matter how sophisticated Simile’s Reflection Engine is, AI understanding you cannot heal you.
Healing comes from self-understanding. And self-understanding cannot be outsourced.
THE ALTERNATIVE PATH: CAPABILITY BUILDING
At The Last 2 Minutes, we’ve taken a radically different approach.
We started by asking: “What would a mental health intervention look like if it was designed from day one to serve the global population, not just the elite?”
The answer led us to three core principles:
1. UNIVERSAL ACCESSIBILITY
- Works with basic SMS (5 billion people have access)
- Works with pen and paper (no technology required)
- Affordable monthly price, or free
- Offline-capable
- Any language, any culture
2. CAPABILITY BUILDING, NOT DEPENDENCY
- Daily 2-minute metacognitive reflection
- Users discover their own patterns
- Progressive autonomy (scaffolding → independence):
Year 1: Guided practice
Year 2-3: Independent practice
Year 5+: Internalized skill
Even if they stop using the app, they retain the capability.
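For readers who want to see the shape of this design, here is a rough sketch in Python of how progressive autonomy could be implemented: the guidance in each day’s prompt decreases as practice matures. This is an illustration, not our production logic; the thresholds and prompt wording are hypothetical.

```python
# Rough sketch of progressive-autonomy prompt selection for a daily
# 2-minute reflection delivered over SMS. Thresholds and wording are
# hypothetical illustrations, not The Last 2 Minutes' production logic.

PROMPTS = {
    "guided": (        # early practice: heavy scaffolding
        "Take 2 minutes. Name one moment today that triggered a strong feeling. "
        "What was the feeling? What happened right before it?"
    ),
    "independent": (   # established practice: lighter scaffolding
        "Take 2 minutes. What pattern did you notice in yourself today?"
    ),
    "internalized": (  # mature practice: minimal nudge
        "Your 2 minutes. Reflect in your own way."
    ),
}

def select_prompt(days_practiced: int) -> str:
    """Less guidance as the habit matures (scaffolding -> independence)."""
    if days_practiced < 365:          # roughly Year 1: guided practice
        return PROMPTS["guided"]
    elif days_practiced < 3 * 365:    # roughly Years 2-3: independent practice
        return PROMPTS["independent"]
    return PROMPTS["internalized"]    # Year 5+ trajectory: internalized skill

def daily_sms(days_practiced: int) -> str:
    """Compose the plain-text SMS body; works on any basic phone."""
    return select_prompt(days_practiced)
```

The design choice the sketch illustrates: the system’s footprint shrinks over time instead of growing.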
3. SCIENTIFIC FOUNDATION
- Di Stefano et al. (2014) at Harvard Business School found that just 15 minutes of daily reflection led to a 22.8% performance improvement, not through external intelligence but through building self-efficacy.
- Andersson et al. (2024/2025): A systematic review and meta-analysis of 49 randomized controlled trials (3,239 participants) demonstrated that metacognitive interventions (MCT and MCTraining) are likely efficacious across psychiatric conditions.
- Not just theory: demonstrated mechanisms
The result:
Someone using our approach for 5 years:
- Year 1: “Daily reflection helps me notice my patterns”
- Year 3: “I understand what triggers my anxiety now”
- Year 5: “I don’t need the app anymore, but I still reflect because it’s part of who I am”
They’ve built a capability.
Not a dependency.
THIS ISN’T COMPETITION—IT’S DIFFERENT MISSIONS
I’m not arguing that Simile is “wrong” or that we’re “better.”
We’re solving different problems for different populations.
Simile’s market:
- Business executives
- Enterprise decision-making
- Resource-rich individuals
- External prediction needs
Our market:
- Global mental health crisis
- Universal mental health capability
- Resource-constrained populations
- Internal self-awareness needs
There’s no conflict here.
Simile will likely build a very successful business serving executives who can afford premium AI decision support.
We’re building something else: a universal capability-building platform that could meaningfully address the global mental health crisis.
THE QUESTION FOR INVESTORS AND BUILDERS
Simile’s $100M Series A is actually good news for us.
It validates that investors believe decision-making improvement is a massive opportunity.
They’re right.
But there are two very different paths:
1. Build AI that primarily optimizes decisions externally
→ Premium pricing
→ Elite market
→ Risk of dependency if poorly designed
→ Great business
2. Build human capability that people internalize
→ Affordable pricing
→ Universal market
→ Self-efficacy
→ Greater impact
Both can be valuable businesses.
But only one is explicitly designed to address the global mental health crisis at scale.
The question isn’t which approach will make more money in the short term.
(Simile likely will. AI simulation for enterprises is a premium market.)
The question is:
Which approach creates the world we want to live in?
A world where humans increasingly rely on AI for core decisions?
Or a world where humans understand themselves better and are more capable of navigating life independently?
WHY THIS MATTERS NOW
We’re at an inflection point in digital mental health.
The field has spent 20 years building apps that 97% of users abandon within 30 days. We’ve created dependency-driven models (BetterHelp’s marketplace), passive content consumption (Headspace/Calm), and now AI prediction systems (Simile).
All have their place.
But none of them address the core problem:
Most humans lack the basic capability to understand and manage their own mental health.
This isn’t a technology problem.
It’s a capability problem.
Technology can help build that capability—if it’s designed to empower rather than replace human agency.
That’s what we’re building.
Not AI that makes you smarter by doing the thinking for you.
But tools that help you become smarter through developing your own metacognitive awareness.
WHAT’S NEXT
We’re currently in discussions with academic partners to validate this approach through rigorous research.
If you’re:
- A researcher interested in metacognition and mental health
- An investor who believes in human capability building
- A mental health professional frustrated with current tools
- Someone who thinks humans should understand themselves better
I’d love to connect.
The conversation Simile’s funding has sparked is important.
But let’s make sure we’re asking the right questions:
Not just “How can AI make better decisions for us?”
But “How can we help humans develop the capability to make better decisions themselves?”
The global mental health crisis demands we get this right.
For 1.1 billion people struggling with mental health conditions, how we answer this question matters.
So the real question is simple:
Which world do we want to build?
TAKE ACTION
If this resonates with you:
🔬 Researchers: We’re seeking academic partnerships for validation studies. Contact: research@thelast2minutes.com
💰 Investors: We’re seeking mission-driven capital that believes in human capability building, not just AI efficiency.
🏥 Clinicians: Join our advisory board to ensure we’re building tools that actually help your patients.
📱 Everyone: Follow our journey building a different kind of mental health solution.
#MentalHealth #DigitalHealth #AI #HumanCapability #GlobalHealth #HealthTech #Innovation #FutureOfWork #Psychology #Neuroscience
