Using AI in your practice? Here’s how to handle informed consent

Paperwork is the bane of every therapist’s existence. You went into this field to help people, not to spend your evenings and weekends buried under a mountain of progress notes. It is no surprise that so many mental health professionals are turning to Artificial Intelligence (AI) to lighten the load. Ambient AI listens to your session and writes the note for you so that you can put down the pen and actually look at your client.

But bringing a “listening machine” into the therapy room raises some serious questions. Clients trust you with their deepest secrets. If you invite an AI tool into that sacred space, you owe it to them to explain exactly how it works.

An informed consent policy isn’t just about following the rules or avoiding a lawsuit. It’s also about preserving the therapeutic alliance. If a client finds out later that an algorithm was analyzing their trauma without their permission, that trust could be broken forever.

Here is a simple, practical guide to crafting an informed consent policy for AI usage that protects your clients and gives you peace of mind.

Why “standard” consent isn’t enough anymore

Most standard informed consent forms are outdated. They cover things like cancellation policies, billing, and general confidentiality. They rarely mention that a third-party software program might be processing the conversation. And as this research paper points out, when policies do address technology, they often add sections that are overly technical and hard for clients to understand.

In the world of AI ethics, there is a concept called the “black box” phenomenon. This happens when technology works in the background, invisible and unexplained, leaving people in the dark about how their data is used. In therapy, we cannot have black boxes.

Your clients have a right to know if AI is present. They need to know how it listens and who (or what) sees that data. Transparency is the foundation of informed consent. When you are open about the tools you use, you treat your client as a partner in the process rather than just a subject of observation.

What your AI policy needs to say

No need to write a novel. You just need a clear, specific addendum to your intake paperwork. Whether you call it an “AI Technology Addendum” or just update your Professional Disclosure Statement, make sure these five specific points are covered.

1. Define the tool clearly

Clients might hear “AI” and imagine a robot therapist analyzing their psyche or making a diagnosis. You need to crush that myth immediately.

  • What to say: State clearly that you use an AI tool strictly for documentation assistance.
  • The key distinction: Emphasize that the AI is a scribe, not a co-therapist. It creates a draft of the notes to save administrative time. It does not diagnose, treat, or plan care. You are still the expert in the room.

2. The “human in the loop” guarantee

One of the biggest risks with AI is “hallucination.” This is when the software mishears a word or invents a detail that never happened. In a medical record, a made-up detail can be dangerous.

  • What to say: Promise your clients that a human being—you—reviews every single word the AI generates.
  • The promise: Assure them that nothing enters their permanent medical record until you have read, edited, and approved it. This tells the client that you are still the guardian of their story.

3. Voluntary participation (the opt-out)

Therapy requires autonomy. A client should never feel forced to be recorded or monitored by software just to get help.

  • What to say: Make it clear that consenting to AI usage is 100% voluntary.
  • The mechanism: Give them a clear way to say “no.” Let them know they can ask you to turn off the tool for a specific session, or permanently, without any penalty. Their care will not suffer if they prefer the old-fashioned way.

4. Data hygiene and training models

This is usually the scary part for clients. They read headlines about companies stealing data to train their public models. They worry their private session will end up teaching ChatGPT how to talk about depression.

  • What to say: Explain the lifecycle of their data.
  • The specifics: If you use a reputable, paid, HIPAA-compliant AI tool (which you must), their data is likely kept isolated. Tell them explicitly: “Your data is not used to train public AI models.” Also, explain how retention works. For example, “The audio recording is deleted 24 hours after the note is generated.”

5. Emergency limitations

Some practices use AI chatbots or other automated tools to respond to client messages while the therapist is in session or otherwise unavailable. While this can be helpful if someone is just rescheduling an appointment, it’s not ideal for clients in crisis. You need a protocol to deal with these emergencies.

  • What to say: AI tools are not crisis responders. They process data; they don’t react to danger.
  • The specifics: Include a disclaimer that the AI is not monitored in real-time. If a client mentions self-harm or a safety issue, you (the therapist) are responsible for the intervention, not the software vendor.

Keep your consent policy flexible. If clients keep asking the same question, add the answer to the policy. AI changes fast, so update your policy whenever you adopt a new tool.

Check with your AI provider to make sure your policy is accurate. Ask simple questions: Is the data stored? For how long? A good provider should give clear answers about privacy, data use, and storage. If anything isn’t clear, keep asking until it is.

Addressing the privacy elephant in the room

When you bring up AI, your clients will likely have one major question: Is my privacy safe? 

You need to feel confident answering this. Before you even write your policy, check your tech. In the United States, you must operate under HIPAA regulations. This means you cannot just use a standard, free version of a tool like ChatGPT or a generic voice memo app on your phone.

You must use a vendor that signs a Business Associate Agreement (BAA). This is a legal contract where the vendor agrees to protect patient data with the same rigor that you do. If your tool doesn’t offer a BAA, you cannot use it legally.

When explaining this to clients, use plain language. You don’t need to quote legal statutes. Just say something like: 
“I use a specialized, paid, secure AI that meets strict federal privacy standards. We have a legal contract in place that prevents them from sharing or misusing your information.”

How to talk to clients about AI

Writing the policy is step one. Step two is the actual conversation. You shouldn’t just slide a piece of paper across the desk and hope they sign it. Bring it up verbally during intake or at the start of a session if you are introducing a new tool.

Here are a few scripts you can adapt to your own style.

The “efficiency” script:

“I want to give you my full attention today without looking down to write notes constantly. I use a secure tool that listens to our conversation and drafts a summary for me later. It helps me stay present with you. I review everything it writes to make sure it’s accurate. How do you feel about using that today?”

The “privacy first” script:

“Before we start, I want to mention that I use an AI assistant to help with my paperwork. It’s not a robot therapist. It just helps me type up my notes so I can focus on you. It’s fully secure, HIPAA-compliant, and the audio is deleted 24 hours after I finish the note. You can always say no if you’re uncomfortable with it. What are your thoughts?”

The “control” script:

“I use technology to help with documentation, but I want you to know that you are in charge. If we start talking about something really difficult and you want the ‘ears’ out of the room, just tell me, and I’ll pause it immediately. I read and approve every single line it writes. Does that sound okay to you?”

Common client fears (and how to answer them)

Even with a good script, clients might hesitate. Here is how to handle the most common pushbacks.

“Will my session end up on the internet?”

Answer: “No. The tool we use is encrypted, meaning the data is scrambled and locked. It is not shared publicly, and it is not used to teach other AI programs. It’s deleted 24 hours after the note is written.”

“What if the AI gets it wrong?”

Answer: “That definitely happens sometimes! AI can misunderstand sarcasm or cultural references. That is why I never copy-paste blindly. I read the draft and fix any mistakes before it becomes part of your record.”

“Does this mean you aren’t listening?”

Answer: “Actually, it means I can listen better. Because I don’t have to worry about remembering every specific detail to write down later, I can focus entirely on what you’re feeling right now.” 

Of course, don’t use these scripts verbatim! Put your own spin on them and make sure to change them depending on how your AI scribe works. 

Use AI responsibly

AI is here to stay, and it offers incredible benefits for reducing burnout and giving therapists some much-needed time back. But, as the NBCC points out, we cannot let convenience override ethics.

By crafting a thoughtful, clear informed consent policy, you protect your practice and inform your clients. You are telling them that you value their privacy just as much as you value their progress.

Take a look at your current forms this week. Do they mention AI? If not, it is time for an update. Be transparent, be specific, and always keep the human connection at the center of your work.