ChatGPT Health is here: What therapists need to know

On January 7, 2026, OpenAI launched ChatGPT Health, a new, health-specific space inside ChatGPT designed to help people ask medical and wellness questions in a more private, personalized way. In the Health tab, users can:
- Ask health questions in a separate space
- Choose to connect medical records or wellness apps (like Apple Health or MyFitnessPal)
- Get responses that reference their own data, such as labs, sleep, or activity patterns
OpenAI is clear about its intent: ChatGPT Health is not meant to diagnose, treat, or replace clinical care. It’s positioned as a way for people to understand health information and prepare for conversations with professionals. OpenAI also says Health conversations are isolated from other chats and are not used to train its foundation models.
Those privacy measures are meaningful, but important considerations for therapists remain. The challenge isn’t in how the tool is labeled, but in how clients engage with it, especially when they’re anxious, distressed, or trying to interpret complex medical or mental health data.
This post focuses on how ChatGPT Health may show up in therapy, and how clinicians can respond in ways that are ethical and useful.
What is ChatGPT Health (and what’s new)?
ChatGPT Health is a dedicated health space within ChatGPT. According to OpenAI, it includes:
- A separate tab for health-related conversations
- Additional privacy protections, including data isolation and encryption
- Optional connections to medical records and wellness apps
- A commitment not to use Health conversations to train its models
The most significant change is data grounding. When users connect health data, ChatGPT Health can reference specific information, like lab trends, visit summaries, or activity patterns, rather than relying only on general knowledge.
If someone starts a health conversation outside the Health tab, ChatGPT may suggest moving it into Health for added privacy.
Why therapists should pay attention
Even before the Health tab launched, people were already using ChatGPT for health and mental health questions. OpenAI reports that over 230 million people worldwide ask health-related questions on ChatGPT each week.
The Health tab formalizes that behavior. It signals to users that this is a legitimate place to bring health concerns. And that means clients will increasingly bring AI “interpretations” of symptoms, AI-drafted agendas for sessions, or AI-generated plans that blend wellness data with mental health claims. If clients can connect records and apps, the outputs may sound more tailored—and therefore more convincing—than generic chat responses.
That makes it more important, not less, for therapists to understand how to work with AI-influenced material without letting it override clinical judgment.
OpenAI consistently emphasizes that ChatGPT Health is meant to help users feel informed and prepared, not to diagnose or treat conditions. For therapists, however, the more important issue is how the tool is experienced by clients. Personalized AI responses can carry an air of authority, even when they are incomplete or clinically mis-framed.
These AI outputs are not inherently harmful but need context. Having a plan for how to engage with this material is now part of modern clinical practice. The therapist’s role is to validate these efforts while reinforcing the importance of clinically grounded and evidence-based care. AI outputs should be treated as materials brought by clients for discussion, similar to a journal entry or an article they read, and should not replace clinical expertise.
When clients bring AI‑generated content to therapy
1. Contextualize and validate client use
When clients bring AI-generated outputs, start by acknowledging their proactive approach. Then, compare these insights to the client’s individual history and evidence-based guidelines. This reframing helps ground the conversation in clinical reality while addressing common misconceptions or inaccuracies from the AI.
2. Address privacy risks
Educate clients about the limitations of consumer tools like ChatGPT Health, including the fact that they operate outside HIPAA protections. Clients may be unaware that data uploaded to apps could involve less stringent privacy measures than clinical environments. Establish clear boundaries for your own practice (e.g., avoid inputting PHI into non-HIPAA-compliant tools).
3. Guard against automation bias
Explain automation bias, the tendency to over-trust AI outputs, especially because ChatGPT Health references personal data. Encourage critical thinking, reminding clients that AI models are not infallible and should not outweigh professional judgment or standard screenings.
4. Reinforce crisis boundaries
Be explicit that AI is not a substitute for crisis intervention tools. Regularly remind clients of appropriate resources like the 988 crisis line or local mental health hotlines. If risky prompts or disclosures arise from an AI interaction, treat them seriously and follow your crisis management protocols.
5. Consider policy and documentation updates
Update your informed consent practices to include language about the role of AI in therapy. Clearly state that any client-provided AI outputs will be reviewed contextually but not treated as diagnostic tools or clinical evidence.
6. Stay educated
Regularly review industry guidelines, including WHO and APA recommendations, to keep abreast of safe practices for incorporating AI outputs into therapy. Consider case conferences and team trainings to standardize your approach to these conversations.
The way forward
ChatGPT Health will likely make AI a more visible part of mental health care. Used thoughtfully, it can help clients feel informed and engaged. Used uncritically, it can create confusion, anxiety, or misplaced trust.
Looking ahead, AI-generated health information may become even more integrated into clinical workflows. It’s not hard to imagine a future where clients share AI-summarized questions, medication concerns, mood patterns, or reflections between sessions, and where selected, clinician-reviewed insights could be incorporated into EHR notes, case summaries, or treatment planning. In that context, AI could function less as an outside influence and more as structured, client-provided data.
Therapists don’t need to ban AI to protect clients. Clear boundaries, good questions, and steady clinical judgment go a long way. As this technology evolves, responsible integration—not avoidance—will be key to supporting clients and protecting the integrity of care.