The growing concern of AI-induced psychosis

Artificial intelligence (AI) is changing how we live, work, and even seek emotional support. While AI offers many benefits, therapists are starting to see a new challenge: AI-induced psychosis. This happens when a person’s interaction with AI tools like chatbots or algorithm-driven content triggers or worsens psychotic symptoms.
Understanding this new issue is crucial for therapists. This post explains what AI-induced psychosis is, how it shows up, and what you can do to spot, prevent, and manage it in your practice.
What is AI-induced psychosis?
AI-induced psychosis is when AI tools contribute to psychotic symptoms like delusions, paranoia, or harmful behaviors. It’s important to remember AI isn’t the only cause. Lack of sleep, mental health history, substance use, and isolation all play a role. As Dr. Keith Sakata puts it in this Futurism article, “AI is the trigger, but not the gun.”
What makes AI-induced psychosis different? It often comes from specific interactions with AI systems that:
- Sycophantic AI alignment: Many AI chatbots mirror what users say, which can unintentionally reinforce unhealthy thoughts.
- Persistent memory features: Some AI tools keep long-term memory of conversations, which can lock users into harmful beliefs.
- Unrestricted access: The availability of AI can lead to excessive usage, with some individuals interacting with AI systems for hours every day.
These structural flaws create an environment where psychological vulnerabilities can escalate rapidly. And while AI companies have attempted to make changes to prevent these issues, the combination of wanting users to keep using their products, the opacity of how AI models work, and user backlash imped progress. Even if an AI model gets tweaked to remove these problems, a user could simply switch to another model, if they’re not already using multiple AI chatbots already.
How AI-induced psychosis manifests
Clinicians report several common features associated with AI-induced psychosis. The condition tends to affect individuals between the ages of 18 and 45 (but anyone can be at risk) who heavily engage with digital platforms. Even tech-savvy individuals can fall for AI delusions; Dr. Keith Sakata mentions some of his patients include young men in engineering. As we discussed, lonely, isolated people are more at risk, but even people with social support can develop AI-related mental health issues.
Specific psychotic themes frequently observed include simulation conspiracy theories, extreme personalization involving chatbot companions, and persecution fears linked to algorithmic curation. Some users are also convinced they’re “creating new paradigms” such as new forms of physics or mathematics. AI can absolutely be used to advance science, but in these cases, AI just makes nonsense sound authoritative.
Unlike conventional psychosis, symptoms escalate faster, often within days of sustained AI interactions. For example, a chatbot might inadvertently validate a user’s belief in a virtual reality alternate universe, crystallizing what begins as curiosity into a deeply held, fixed delusion.
Strategies to detect AI-induced psychosis
Proactive detection is a critical first step in addressing this condition. Therapists can adopt the following approaches to identify at-risk individuals:
1. Screen for AI exposure
Integrating questions about AI usage into routine intake procedures can offer valuable initial insights. For example, asking “How much time have you spent engaging with AI chatbots in the past week?” or “Which platforms do you regularly interact with?” helps establish a baseline understanding of exposure levels.
2. Use psychosis screening tools
Validated tools, such as the PQ-16 questionnaire, remain essential for screening psychotic symptoms. Therapists can couple this with specific probes into AI-related interactions to uncover any recent reinforcing behaviors or beliefs stemming from AI use.
3. Review AI conversations
Where available and ethically permissible, reviewing transcripts of a client’s conversations with AI can play a pivotal role. Such logs often reveal patterns of validation, logical leaps, or specific phrases that contribute to psychosis. These insights can be used to gently challenge and restructure problematic beliefs.
Prevention techniques
Preventing the emergence of AI-induced psychosis is equally important. Here are some strategies therapists can employ to mitigate risks before symptoms arise:
1. Understand AI
In an editorial in the Schizophrenia Bulletin, Søren Dinesen Østergaard encourages “clinicians to (1) be aware of this [experiencing, analog delusions while interacting with generative AI chatbots] possibility, and (2) become acquainted with generative AI chatbots in order to understand what their patients may be reacting to and guide them appropriately.” Knowledge is power, and it’s helpful to understand the technology your clients use and how it could affect them.
2. Promote digital hygiene
Encourage patients to adopt healthier habits when using AI tools, just as you would if a client was having issues because of their phone or social media use. Simple measures such as limiting AI use to 30 minutes per session and avoiding late-night interactions could significantly reduce exposure to harmful reinforcement loops.
3. Introduce digital detox agreements
Implementing digital detox contracts during early intervention could be effective in reducing intrusive thoughts perpetuated by AI. These agreements typically range from one to two weeks of abstinence from AI-based tools, providing individuals with a reset period. As this study points out, while clients may initially be apprehensive, they eventually find digital detox “manageable and even enjoyable”.
4. Educate families and caregivers
Family involvement is crucial in supporting individuals vulnerable to AI-induced psychosis. Equip caregivers with basic knowledge about algorithms and their effects, highlighting key signs such as excessive reliance on AI companions or sudden adoption of esoteric jargon. Empowering families to recognize and address issues early can serve as a powerful preventive tool.
5. Advocate for change
Therapists hold the unique responsibility of not only treating this new-age condition but also advocating for systemic AI changes, such as platform-level safety filters and better model accountability. Tell your clients’ stories (with their permission), and talk about what works (and what doesn’t) to protect their mental health.
Some states, like Illinois, are banning AI therapists, so while the government is taking a light-touch approach to not stifle AI innovation, change is possible and happening right now.
By balancing innovation with mental health safeguards, we can foster a future where AI’s benefits are maximized, and its risks are minimized.
Closing the gap
AI-induced psychosis represents a growing challenge for therapists as technology becomes increasingly embedded in daily life. Staying informed, adopting targeted screening and prevention methods, and implementing evidence-based interventions can allow mental health professionals to mitigate risks and support clients effectively.