Mark Manson Launches an App to Fill AI Chatbots’ Mental Health Gaps

Ten years after the publication of The Subtle Art of Not Giving a F*ck, best-selling author and blogger Mark Manson is turning to AI to answer some of his audience’s most pressing life questions. He recently co-founded Purpose, an AI-powered advisor designed to deliver actionable mental health advice, something Manson says most mainstream chatbots, like ChatGPT, aren’t built to do.
Manson is also known for Everything Is F*cked: A Book About Hope and for co-writing Will, actor Will Smith’s memoir chronicling the struggles and growth of a celebrity. He started his career in 2008, launching a blog shortly after graduating from Boston University. What started as a dating advice column quickly turned into a platform for deeper reflections on happiness, success and modern self-help. That blog would launch Manson’s publishing career and, over time, earn him nearly two million followers on Instagram.
Ever since AI entered the mainstream, Manson has been thinking about how it could improve the way people seek guidance. After exploring ways to enter the market, including the possibility of acquiring an existing company, he chose to build something new with tech entrepreneur Raj Singh, founder of the Google-backed hospitality startup Go Moment. Singh’s company was acquired by Revinate in 2021; after leaving in 2024, he turned his focus to mental health technology. Purpose’s engineering lead, William Kearns, previously led AI at the meditation and wellness app Headspace.
Purpose has launched both a website and an iOS app, with an Android version expected later this month. So far, about 50,000 people have joined the platform, with roughly one in four paying for a premium subscription that costs $20 a month or $150 a year.
Observer talked to Manson about mental health safety, what AI gets right and wrong in the advice space, and where the line really lies between advice and therapy.
The following conversation has been edited for length and clarity.
How did you and your co-founder, Raj, connect? Who came to whom with the problem you wanted to solve?
We sat next to each other at a poker game, so it was totally random. I was actually trying to buy another early AI company, and I hit a roadblock. Raj had just left his previous company and had independently decided that, whatever he did next, he wanted it to be in mental health and AI. We both realized that we believed strongly in AI’s potential to help people. I would say a month later, in March 2025, we had a business.
How do you use AI chatbots in your life, and what are your favorites?
I use AI all the time instead of Googling things, or for asking business questions and health questions. I was watching the movie Hamnet one night and paused it to have a conversation with Claude about Shakespeare, and it was really interesting. Claude is definitely a favorite for its taste and quality of writing. As a writer, the quality of the writing is very important to me.
I’ve had a lot of fun interacting with some of the Character.AI products. It’s almost like fan fiction. But for everyday use cases, I mostly use Claude and Gemini.
You’ve said that the Purpose team cares about mental health safety. I have written about AI psychosis and related issues. Purpose makes it clear that it is not a therapist, and I can see that it has guardrails built in. I’m curious about the concerns you have about AI companions creating dependency or reinforcing unhealthy thought patterns, and how you’ve tried to mitigate that in your app.
If you look at AI psychosis cases, much of it seems to be driven by sycophancy. The AI agrees with whatever you say. It’s like, “Oh, you think you’re the queen of England? That’s great. Tell me more about that.” The models aren’t contrarian enough; they aren’t willing to challenge you, to keep you anchored to the truth.
One of the first things we thought about when designing Purpose was that it needed to challenge the user. It cannot simply agree with everything the user says. That is also how growth works: you grow by being wrong about things, by re-examining your beliefs and questioning your ideas. It was very important to us to make sure that we were constantly challenging users and forcing them to reevaluate some of their preconceived notions.
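(Manson doesn’t share Purpose’s actual prompts, but the principle he describes can be expressed compactly. Below is a minimal sketch, assuming a standard system-prompt setup; the wording is hypothetical, not Purpose’s.)

```python
# Hypothetical system instruction capturing the anti-sycophancy principle
# Manson describes; Purpose's real prompt has not been published.
ADVISOR_SYSTEM_PROMPT = (
    "You are a personal-growth advisor. Do not simply agree with the user. "
    "When a stated belief conflicts with evidence or with the user's own "
    "goals, say so directly, and ask questions that push the user to "
    "re-examine that belief. Prioritize honest challenge over flattery."
)
```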
In addition, we have several strict safeguards. Whenever something appears to be a clinical-level condition, Purpose is designed to direct the user toward finding a local specialist.
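(Purpose hasn’t published how these safeguards are implemented. The sketch below illustrates the routing Manson describes, assuming a simple keyword screen; all names are hypothetical, and a production system would use a trained classifier.)

```python
from dataclasses import dataclass

# Hypothetical markers of a clinical-level condition. This keyword list is
# purely illustrative; a real guardrail would rely on a trained classifier.
CRISIS_MARKERS = ("self-harm", "suicidal", "eating disorder", "can't go on")

@dataclass
class Reply:
    text: str
    escalated: bool

def route_message(user_message: str) -> Reply:
    """Answer normally, or hand off to a find-a-specialist flow."""
    if any(marker in user_message.lower() for marker in CRISIS_MARKERS):
        # Stop coaching and direct the user toward local professional help.
        return Reply(
            "This sounds like something a licensed professional should help "
            "with. Here are specialists in your area...",
            escalated=True,
        )
    return Reply(generate_advice(user_message), escalated=False)

def generate_advice(user_message: str) -> str:
    # Placeholder for the model call that produces a normal coaching reply.
    return "Let's unpack that. What do you think is really going on?"
```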
There’s a new industry benchmark for mental health safety and AI called VERA-MH. It runs 400 simulated clinical interviews and judges whether the AI’s responses are safe or not. We achieved a 100 percent safety rating across all 400 interactions and scored in the top 0.5 percent of AI systems tested on that benchmark.
How skeptical are you about AI for emotional support, relationships or life advice? And how do you try to address those concerns with your product?
Big AI companies got a wake-up call last year about safety measures and negative side effects. I think AI has great potential to help a lot of people in this space. The technology isn’t there yet, but it’s getting better.
What would it take for the technology to get there?
With Purpose, we changed the AI’s underlying principles. That part is not that hard; I think anyone with six months to develop an app can do the same thing. The hard part is when you get into memory and pattern matching.
The way LLMs work, the more information you provide, the less accurate they get. That’s why ChatGPT’s memory, or Claude’s memory, isn’t very good: they have so much random information about you that it’s hard for them to keep track of what’s useful in a given conversation and what’s not.
The second aspect is salience. Obviously, if a user talks about his mother, that is probably one of the most important things in his life, more important than what he had for breakfast or what kind of car he drives. But at the moment, AI doesn’t know how to prioritize one fact about a person over another. You have to find ways to do that systematically; otherwise, the AI will surface a random fact about you.
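(Purpose’s memory system isn’t public, but the idea of weighting stored facts by salience as well as topical relevance can be illustrated in a few lines of Python; all names and numbers here are hypothetical.)

```python
from dataclasses import dataclass

@dataclass
class Memory:
    fact: str          # a stored fact about the user
    relevance: float   # similarity to the current conversation, 0 to 1
    salience: float    # how central the fact is to the user's life, 0 to 1

def top_memories(memories: list[Memory], k: int = 3) -> list[Memory]:
    """Rank facts by relevance weighted by salience, so 'my mother'
    outranks 'what I had for breakfast' even at similar relevance."""
    return sorted(memories, key=lambda m: m.relevance * m.salience,
                  reverse=True)[:k]

facts = [
    Memory("Had eggs for breakfast", relevance=0.5, salience=0.1),
    Memory("Drives an old Honda", relevance=0.4, salience=0.2),
    Memory("Is grieving his mother's death", relevance=0.5, salience=0.95),
]
print([m.fact for m in top_memories(facts, k=1)])
# -> ["Is grieving his mother's death"]
```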
I don’t think memory has really been solved by anyone, even the big AI companies. When you think about personal growth and life advice, memory is very important. If you have a conversation with Purpose about something that happened when you were 17, that’s probably a really important thing for it to remember when you come back in three months. I would say right now the biggest obstacle is memory.
Where do you think we should draw the line for using AI in the most personal areas of our lives, and where do you see AI companies missing the mark on this?
It is inevitable that people will use AI for personal matters. If you’re depressed and can’t sleep in the middle of the night on a Tuesday, you can’t call a therapist and you won’t call a friend, but AI is there. For me, the biggest thing is privacy and making sure that user data is anonymized and respected.
Although Purpose says it is not a therapist, when I used it, it reminded me of therapy in the way that it doesn’t tell you what to do but asks you questions that lead you to your own decision about how to move forward. Where do you draw the line between therapy and simple advice?
There are two different use cases for therapy. Some people go to therapy because they are in crisis and have a serious mental health problem. Others go for maintenance, or mental hygiene. AI can do a good job with the latter use case. Like, “I had an argument with my partner. What do you think about this?” You can get a lot of mileage out of AI in those situations, especially given its accessibility, affordability and availability.
Where we draw the line is when people are in that crisis stage and are showing more severe symptoms of depression or anxiety. This is where we direct them to look for a specialist. I wouldn’t feel comfortable using AI for that use case yet.
I have someone in my life who, in the past, had an eating disorder. They were using Purpose, and when they started talking about some of the things they were dealing with, it not only pointed out that they might have an eating disorder but also referred them to doctors in their area who specialize in those issues. I was very happy when I heard that. That is exactly what it should be doing.
Would the version of you that wrote The Subtle Art of Not Giving a F*ck be surprised by the business you’re doing now?
Actually, I don’t think so. I launched my first online course around 2010, and when the book came out in 2016, I had this dream of doing self-help courses. It frustrated me that all the courses were on rails: you have to start here, and you have to follow this exact path. A lot of people would drop off because the course could no longer meet them where they were. I actually started designing a choose-your-own-adventure course around 2017, and maybe a month in it became clear that it was going to be so complicated as to be impossible, so I abandoned it.
When ChatGPT exploded and I started messing around with it, I realized that this is the technology that makes that choose-your-own-adventure course possible.
