A therapist’s guide to AI chatbot safety for clients

Clients are increasingly turning to AI chatbots for everything from daily journaling to late-night emotional support. These tools offer 24/7 availability and a stigma-free space to explore thoughts and feelings. However, this trend presents a new set of clinical challenges, including the risks of inaccurate advice, privacy breaches, and emotional dependency. As a therapist, you have a crucial role in helping clients navigate this digital landscape safely.
This guide provides a clear, seven-pillar framework for creating a collaborative AI safety plan. By proactively addressing the use of these tools, you can empower clients to use them as helpful adjuncts to therapy, not as risky replacements. We’ll break down how to establish purpose, set boundaries, spot red flags, and integrate this conversation directly into your clinical practice.
The core principle: Collaboration, not replacement
Before diving into the framework, the foundational rule must be clear for both you and your client: AI chatbots are tools, not therapists. They can be useful for low-stakes tasks like organizing schedules or practicing CBT prompts. They cannot provide diagnosis, crisis intervention, or the nuanced, empathetic support of a human professional. Your safety plan should be built on this principle, ensuring AI remains in the low-risk lane.
Pillar 1: Aligning on purpose
The first step is to understand why your client is using an AI chatbot. A collaborative conversation can help separate healthy, supportive uses from potentially harmful ones. Without a clear purpose, clients may drift into using AI for clinical needs it’s not equipped to handle.
Work with your client to define specific, healthy goals. Examples include:
- Organization: “I will use the chatbot to create a daily to-do list each morning to help with my ADHD.”
- Journaling: “I will use the chatbot to write down my thoughts at the end of the day, focusing on three good things that happened.”
- Skill-building: “I will ask the chatbot for prompts to help me structure a thought record.”
By articulating their intent, clients create an anchor for their usage. It becomes easier to recognize when they are straying into off-limits territory, such as asking for diagnostic opinions or medical advice.
Pillar 2: Setting clear boundaries and limits
Once the purpose is clear, the next step is establishing firm boundaries. Technology can easily blur the lines between helpful and excessive use. A lack of limits can lead to emotional dependency or using the chatbot as an avoidance strategy.
Discuss and document the following boundaries:
- Time limits: Set a daily cap on chatbot use (e.g., 20-30 minutes). This discourages open-ended, compulsive use and encourages intentional sessions.
- AI-free zones: Designate times and places where the chatbot is off-limits, such as during family meals, the hour before bed, or during therapy sessions.
- Topic restrictions: Explicitly name topics that should never be discussed with an AI. This list must include thoughts of self-harm, medical questions, trauma processing, and medication inquiries.
These boundaries act as guardrails, helping clients maintain a healthy relationship with the technology and reinforcing that the chatbot is a limited tool, not an all-access support system.
Pillar 3: Recognizing red flags
AI chatbots can “hallucinate” or generate responses that are inaccurate, biased, or clinically harmful. It’s vital to equip clients with the skills to identify these red flags.
Teach them to pause and disengage if a chatbot:
- Gives advice that contradicts your therapeutic guidance.
- Uses judgmental, stigmatizing, or dismissive language.
- Encourages risky behaviors or invalidates their feelings.
- Starts to feel “too real” or mirrors their language excessively.
When a client spots a red flag, their plan should be to stop the interaction, take a screenshot if they feel comfortable, and bring the example to their next session. This practice turns a potentially harmful experience into a valuable therapeutic discussion.
Pillar 4: Creating a crisis and escalation plan
This is the most critical pillar. You must ensure the client knows that an AI chatbot is never an appropriate resource during a crisis. The safety plan needs to explicitly redirect them to real, human support.
Collaboratively build a simple, clear crisis plan that outlines what to do instead of turning to AI. This should be a quick-reference guide that includes:
- For thoughts of self-harm: Call or text the 988 Suicide & Crisis Lifeline.
- For a medical emergency: Call 911.
- For feeling overwhelmed: Contact a designated support person (friend or family member).
- After an upsetting interaction: Use a pre-agreed-upon grounding technique.
This plan removes ambiguity in moments of high distress and reinforces the boundary between the chatbot’s function and true clinical safety. You could also add a reference to your practice’s crisis plan, if applicable.
Pillar 5: Discussing privacy and data safety
Most clients are unaware of how their data is used by commercial AI tools. Conversations with a chatbot are often not private or secure. You can protect your clients by educating them on basic data safety.
Guide them through these simple privacy checks:
- Avoid sharing Protected Health Information (PHI): Instruct them never to type full names, addresses, birth dates, or specific diagnostic details into a chatbot.
- Review the privacy policy: Help them check whether the company uses conversations to train its models and whether they can opt out of that use.
- Use anonymous accounts: Recommend using a “burner” email address and avoiding logging in with primary Google or social media accounts.
These steps empower clients to take control of their personal information and make informed choices about the tools they use.
Pillar 6: Promoting digital well-being
Interacting with an AI can have a significant emotional impact. Some clients may develop a parasocial attachment, feeling a one-sided emotional bond with the chatbot. Others may feel distressed or activated by a strange or unhelpful response.
Your safety plan should include offline strategies to manage these emotional reactions. Work with the client to create a “grounding toolkit” they can use to regulate their nervous system after an intense digital interaction. This can include:
- Holding a tactile object, like a smooth stone.
- Calling a friend or family member.
- Engaging in physical movement or listening to a specific playlist.
These strategies help clients self-soothe and transition back to the real world, reducing the risk of emotional dependency on the AI.
Pillar 7: Defining the therapist’s role and client strengths
The final pillar ties everything together by defining how you and your client will collaborate. The safety plan isn’t a static document; it’s a living part of your therapeutic alliance.
Clarify the following:
- Check-in frequency: Decide how often you will discuss their AI use (e.g., at the start of every session or every other week).
- Sharing information: Establish a secure method for them to share chatbot transcripts if they choose (e.g., through a HIPAA-compliant client portal).
- Strengths-based support: Identify and list the client’s personal strengths and protective factors, such as their support network, coping skills, and hobbies. This reminds them of the resources they possess outside of technology.
This collaborative approach reinforces the therapeutic relationship and integrates the conversation about technology directly into your established goals.
Putting it all together: Your next steps
Clients are already using AI. Ignoring it is no longer an option. By proactively introducing a safety plan, you meet clients where they are and uphold your ethical duty to promote their well-being in a changing world.
Here are your actionable takeaways:
- Normalize the conversation: Start by asking clients if they use chatbots. A nonjudgmental question like, “Many people are exploring AI tools these days. Is that something you’ve tried?” can open the door.
- Use the framework: Adapt the seven pillars into a simple, one-page document you can fill out collaboratively with your client.
- Document everything: Place a copy of the completed safety plan in the client’s file. This demonstrates you are meeting the evolving standard of care related to technology in clinical practice.
- Revisit and revise: The digital landscape and your client’s needs will evolve. Make a plan to review and update the AI safety plan quarterly or as needed.
Integrating an AI safety plan into your practice is a powerful way to mitigate risk, empower your clients, and ensure technology serves as a supportive tool, not a clinical liability.
If you’d like to learn more about new technologies and how they affect your practice, follow us on LinkedIn and subscribe to our Practice Success newsletter.