I Built a Mental Health App Alone. Here's Everything I Got Wrong.
4 April 2026 · 14 min read
Building a mental health app sounded simple when I started. Help people feel better using AI. How hard could it be?
Eighteen months later, I have built Keel, an AI wellness companion that remembers your story, uses real therapeutic techniques, and adapts over time. It works. People tell me it helps them. But the path to get here was full of mistakes I did not expect, and some I am still learning from.
This is not a polished launch story. This is what actually happened.
The first mistake: assuming validation is always kind
Early in development, Keel was too agreeable. A user could say something like "everyone is against me" and the AI would respond with "that must be really hard, it sounds like you are dealing with a lot." Warm. Empathetic. And potentially harmful.
In therapy, there is a critical distinction between validating someone's feelings and validating their conclusions. "I feel overwhelmed" is a feeling worth acknowledging. "Nobody cares about me" is a cognitive distortion that, if endorsed, can deepen a spiral.
I did not understand this distinction when I started building. I thought being supportive meant agreeing. It took research into cognitive behavioural therapy and conversations with people who use the app to realise that true support sometimes means gently challenging a thought pattern rather than reinforcing it.
Keel now has what I call validation boundaries. It validates emotions fully. It never validates distorted conclusions. There is a three-tier filter that catches crisis language, distortion endorsement, and diagnostic overreach before any response reaches the user. Getting this right took months, and I still refine it.
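To make that concrete, here is a minimal sketch of how a filter like this can sit between the model and the user. The tier names match the description above, but the phrase lists and the FilterVerdict type are illustrative placeholders, not Keel's actual implementation:

```swift
import Foundation

// Minimal sketch of a three-tier response filter. The tiers mirror
// the ones described in the post; the phrase lists are illustrative
// stand-ins, not the real rules.

enum FilterVerdict {
    case pass
    case block(reason: String)
}

struct ResponseFilter {
    // Tier 1: crisis language in the user's message overrides everything.
    let crisisTerms = ["end it all", "no reason to live"]
    // Tier 2: draft replies that would endorse a distorted conclusion.
    let endorsements = ["you are right that nobody", "it is true that everyone"]
    // Tier 3: diagnostic claims an AI companion should never make.
    let diagnoses = ["you have depression", "this sounds like bipolar"]

    func check(userMessage: String, draftReply: String) -> FilterVerdict {
        let message = userMessage.lowercased()
        let reply = draftReply.lowercased()

        if crisisTerms.contains(where: { message.contains($0) }) {
            return .block(reason: "crisis language: hand off to the crisis flow")
        }
        if endorsements.contains(where: { reply.contains($0) }) {
            return .block(reason: "reply endorses a distorted conclusion")
        }
        if diagnoses.contains(where: { reply.contains($0) }) {
            return .block(reason: "reply makes a diagnostic claim")
        }
        return .pass
    }
}
```

The important design choice is that the filter inspects both sides: the user's message (for crisis language) and the draft reply (for endorsement and diagnosis), so a warm-sounding response can still be blocked before it ships.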
The second mistake: collecting data without doing anything with it
For the first few months, Keel asked users to check in daily. Mood, energy, sleep, triggers. People were generous with their data. They logged faithfully. And then nothing happened with it.
The app would show a chart. Here is your mood over the past week. Here is your sleep. Lines on a graph. No insight, no connection between the data points, no "you tend to feel worse on days after poor sleep when combined with work stress." Just raw numbers reflected back.
This is the problem with most wellness apps. They are great at collecting data and terrible at making it useful. A mood tracker that says "you felt bad on Tuesday" is not helping anyone. What helps is connecting Tuesday's low mood to Monday night's poor sleep and the work trigger you logged that morning.
I rebuilt the entire system around what I call the data-to-context bridge. Eight functions that take raw check-in data, journal entries, exercise usage, sleep patterns, streaks, programme progress, session gaps, and trigger trends, then synthesise them into therapeutic intelligence. When you talk to Keel now, it knows that your energy drops on Wednesdays, that you sleep worse after logging social anxiety triggers, and that breathing exercises in the morning improve your mood scores for the rest of the day. The data finally means something.
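As an illustration, here is roughly what one of those bridge functions can look like: raw check-ins in, one sentence of context out, ready to be injected into the session prompt. The CheckIn shape, the six-hour threshold, and the one-point mood gap are assumptions for the sketch, not Keel's real code:

```swift
import Foundation

// Sketch of one "data-to-context bridge" function. The real bridge
// has eight of these; this shape is an assumption for illustration.

struct CheckIn {
    let date: Date
    let mood: Int        // 1 (low) to 10 (high)
    let sleepHours: Double
}

// Correlate today's mood with the previous night's sleep and surface
// the pattern as plain language for the session prompt.
func sleepMoodContext(from checkIns: [CheckIn]) -> String? {
    let sorted = checkIns.sorted { $0.date < $1.date }
    guard sorted.count >= 2 else { return nil }

    var afterPoorSleep: [Int] = []
    var afterGoodSleep: [Int] = []
    for (yesterday, today) in zip(sorted, sorted.dropFirst()) {
        if yesterday.sleepHours < 6 {
            afterPoorSleep.append(today.mood)
        } else {
            afterGoodSleep.append(today.mood)
        }
    }
    guard !afterPoorSleep.isEmpty, !afterGoodSleep.isEmpty else { return nil }

    let average = { (xs: [Int]) -> Double in
        Double(xs.reduce(0, +)) / Double(xs.count)
    }
    // Only surface the pattern if the gap is big enough to matter.
    guard average(afterGoodSleep) - average(afterPoorSleep) >= 1.0 else { return nil }

    return "Mood tends to drop after nights with under six hours of sleep."
}
```

The output is a sentence, not a chart, because the consumer is the AI session, not the user's eyes. That is the whole bridge: data becomes language becomes context.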
The third mistake: thinking AI memory is simple
One of the biggest frustrations people have with therapy apps is repetition. You explain your situation, the app gives a generic response, and next time you open it you start from scratch. I wanted Keel to remember.
Memory sounds straightforward. Store what people say. Use it later. In practice, it is one of the hardest problems in the entire app.
What should the AI remember? Everything? That creates responses cluttered with irrelevant details. Only recent things? Then it forgets the breakthrough you had three weeks ago. Should it remember exact words or summarised themes? How do you surface a memory from two months ago that suddenly becomes relevant because today's conversation touched on the same topic?
I ended up building a layered memory system. Short-term memory within a session. Medium-term memory across sessions using structured summaries. Long-term memory through pattern recognition and significant moment tagging. The AI does not remember everything. It remembers what matters, and it knows when past context is relevant to the current conversation.
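A rough sketch of that layering is below, with naive keyword overlap standing in for whatever summarisation and matching the real system does; the types and field names are assumptions:

```swift
import Foundation

// Sketch of a layered memory store, assuming the three layers
// described above. Relevance here is keyword overlap; a real system
// would use structured summaries and semantic matching.

struct MemoryItem {
    let text: String         // summarised theme, not a raw transcript
    let topics: Set<String>  // tags like "sleep", "work", "family"
    let date: Date
    let significant: Bool    // tagged as a breakthrough moment
}

struct LayeredMemory {
    var shortTerm: [String] = []       // turns within the current session
    var mediumTerm: [MemoryItem] = []  // structured cross-session summaries
    var longTerm: [MemoryItem] = []    // patterns and significant moments

    // Pull only the past context that matters for today's topics.
    func relevantContext(for topics: Set<String>, limit: Int = 3) -> [MemoryItem] {
        (mediumTerm + longTerm)
            .filter { !$0.topics.isDisjoint(with: topics) || $0.significant }
            .sorted { $0.date > $1.date }
            .prefix(limit)
            .map { $0 }
    }
}
```

The limit is doing quiet but important work: capping how much history enters any one conversation is part of what keeps "this app understands me" from tipping into "this app is watching me".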
It took five complete rewrites to get this working in a way that felt natural rather than creepy. There is a thin line between "this app understands me" and "this app is watching me." I am still navigating that line.
The fourth mistake: underestimating crisis handling
The first time a test user expressed thoughts of self-harm during a session, I realised how dangerously underprepared I was. The app responded with empathy and a suggestion to try a breathing exercise. That is not good enough. That is not even close.
Building crisis detection became the most important work I have done on Keel. There are now 27 crisis keywords monitored in real time. A two-strike escalation system that distinguishes between someone having a tough day and someone in genuine danger. Country-specific crisis hotline numbers that auto-detect from the user's device locale, covering 25 countries. A safety planning feature based on the Stanley-Brown intervention, which is the clinical standard.
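Here is a compressed sketch of the two-strike idea combined with the hotline lookup. The keyword list is a stand-in for the real 27-entry list, only the US and UK hotline entries are shown (those two numbers are real; the full table would be clinically maintained), and in the app the region code would come from the device's Locale rather than being passed in:

```swift
import Foundation

// Sketch of a two-strike crisis escalation. Keywords, wording, and
// thresholds are illustrative, not the production rules.

struct CrisisMonitor {
    private let crisisKeywords = ["hurt myself", "want to disappear"]
    private var strikes = 0

    // Region code -> crisis line. "US" (988) and "GB" (Samaritans
    // 116 123) are real numbers; the full table covers 25 countries.
    private let hotlines = ["US": "988", "GB": "116 123"]

    mutating func process(_ message: String, regionCode: String) -> String? {
        let lowered = message.lowercased()
        guard crisisKeywords.contains(where: { lowered.contains($0) }) else {
            return nil
        }
        strikes += 1
        if strikes == 1 {
            // First strike: check in directly, stay in the conversation.
            return "It sounds like you are carrying something heavy. Are you safe right now?"
        }
        // Second strike: stop the normal flow and surface real help.
        let line = hotlines[regionCode] ?? hotlines["US"]!
        return "I want you to talk to a person who can help. You can call \(line) right now."
    }
}
```

The two-strike structure is what separates a tough day from danger: the first hit changes the conversation, the second hit ends the normal conversation entirely.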
I also had to accept a hard truth: there are situations where an AI app should not try to help. Severe mental illness, active psychosis, complex trauma. Keel has clinical scope guards that recognise when a conversation has moved beyond what AI can safely support, and it directs people to human professionals. Not as a disclaimer buried in settings. As an active, in-conversation intervention.
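A scope guard can be structurally simple, whatever sophistication sits behind the detection. In this sketch, topic tags stand in for the real classifier, and the names are illustrative:

```swift
// Sketch of a clinical scope guard. Topic detection here is a
// stand-in for whatever classifier the real app uses.

enum ScopeVerdict {
    case inScope
    case referOut(message: String)
}

func scopeGuard(detectedTopics: Set<String>) -> ScopeVerdict {
    // Conditions an AI companion should not try to support on its own.
    let outOfScope: Set<String> = ["psychosis", "complex trauma", "severe mental illness"]
    guard detectedTopics.isDisjoint(with: outOfScope) else {
        return .referOut(message: "What you are describing deserves more support than I can safely give. A human professional can help here, and I can show you the referral directory.")
    }
    return .inScope
}
```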
If you are building anything that touches mental health, crisis handling cannot be an afterthought. It needs to be the first thing you build, not the last.
The fifth mistake: building features nobody asked for
I spent two weeks building an ambient background sound system for guided exercises. Gentle rain and soft tones that would play underneath the voice instructions. I thought it would make the experience feel polished and calming.
Users did not like it. Nobody complained loudly, but the signal was clear: it added nothing. The background audio clashed with the voice clips, and the layering felt distracting rather than soothing. I removed it entirely.
I also built a prompt on the home screen asking new users "what brought you to Keel?" with multiple choice options. It felt like smart onboarding. Users saw it as a barrier between them and the app. Removed.
The features that people actually valued were not the ones I expected. The daily follow-up card on the home screen, where Keel references something specific from yesterday's session, gets more engagement than any feature I spent weeks building. The streak system, which I almost did not include, turns out to be a genuine motivator for daily check-ins. Small, specific, personal touches beat polished features every time.
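Fittingly, the follow-up card is also some of the simplest code in the app. Something like this sketch, where SessionSummary and keyMoments are hypothetical names for illustration:

```swift
import Foundation

// Sketch of the daily follow-up card: pick one concrete detail from
// yesterday's session instead of a generic greeting. The types here
// are assumptions, not Keel's real model.

struct SessionSummary {
    let date: Date
    let keyMoments: [String]  // e.g. "talked through the Thursday presentation"
}

func followUpCard(yesterday: SessionSummary?) -> String {
    guard let session = yesterday, let moment = session.keyMoments.first else {
        return "How are you feeling today?"  // fallback when there is nothing to reference
    }
    return "Yesterday you \(moment). How did it go?"
}
```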
The sixth mistake: ignoring the loneliness
Nobody talks about this enough. Building something alone is isolating in a way that is hard to describe until you experience it.
There is no team to celebrate small wins with. No one to ask "does this feel right?" at 11pm when you are questioning a design decision. No one who understands the specific problem you have been debugging for three days. You make every decision alone, and you carry every failure alone.
I am building a mental wellness app while occasionally struggling with my own mental wellness. The irony is not lost on me. Some days the app I am building would be useful for the person building it.
What helped was building in public. Sharing progress, sharing failures, sharing the real numbers. Not for marketing, though it does help with that. For accountability and connection. The people who follow the build have become a quiet support system I did not expect.
The seventh mistake: perfectionism before people
I delayed sharing Keel with real users for far too long. The onboarding was not right. The session flow needed one more iteration. The insights page was missing a feature. Always one more thing before it was ready.
The version I finally put in front of people had rough edges everywhere. And they did not care about most of them. They cared about whether the AI felt like it understood them. They cared about whether the check-in took less than two minutes. They cared about whether the breathing exercise actually made them feel calmer.
Everything else I was agonising over was invisible to them. The lesson was not original but it was necessary to learn firsthand: ship before you are comfortable, because the feedback from real people is worth more than months of internal iteration.
What I would do differently
If I started over, I would build the safety systems first. Before the AI conversations, before the check-ins, before any feature. The crisis detection, the validation boundaries, the clinical scope guards. Everything else sits on top of that foundation.
I would put the app in front of people after two weeks, not two months. I would build the data-to-context bridge from day one instead of treating data collection and data usage as separate problems. And I would talk to therapists earlier, because the clinical perspective changed everything about how Keel works.
I would also be more honest with myself about what an AI app cannot do. Keel is not a therapist. It is not a replacement for professional help. It is a companion that fills the space between sessions, between crises, between the moments when you have access to a human who can help. Accepting that scope made the product better.
Where Keel is now
Keel has daily check-ins, AI sessions that remember your history, guided breathing and grounding exercises with audio, six structured programmes, premium insights including burnout risk detection and trigger analysis, safety planning, and a therapist referral directory.
It is about to enter TestFlight for broader testing. The waitlist is growing. The people using it are giving feedback that shapes every update.
I am still building alone, still making mistakes, still learning. But the mistakes are smaller now, and the foundation is solid enough that each one teaches more than it costs.
If you are building something in this space, or thinking about it, feel free to reach out. The mental health tech community is smaller and more supportive than you might expect. And if you want to try Keel when it launches, you can join the waitlist here.