AI Tools for Therapists: What's Safe and What's Not
If you've been in practice at any point this year, you've felt the pressure: colleagues are using AI to draft notes faster. Your EHR vendor is adding AI features. Well-meaning patients ask whether you use "the latest AI." And somewhere in your inbox is a pitch from a startup promising to "automate your practice."
The stakes are real. According to the 2025 APA Practitioner Pulse Survey, 92% of psychologists express significant concerns about AI in mental healthcare. Yet 56% of practitioners now use AI tools at least occasionally—and 29% use them monthly or more. This isn't a niche question anymore. It's a question of clinical governance and professional responsibility.
The good news: there's a clear framework for thinking about this. The bad news: it requires due diligence. Not all AI tools are created equal, and the differences matter for your liability, your patients' safety, and the integrity of your clinical work.
The 56% Problem (and the 92% Concern)
Let's start with what the data tell us. In a profession where fewer than one in three practitioners used AI a year ago, the adoption curve is steep. Psychologists are turning to AI primarily to manage administrative burden—a real and growing problem. But they're not confident yet.
The concerns are justified:
- 67% worry about data breaches
- 64% cite unanticipated social harms
- 63% flag biased input and output
- 61% doubt the tool has been rigorously tested
- 60% worry about inaccurate outputs
What's notable: these aren't theoretical worries. They reflect real risks that have materialized in early deployments. And they underscore why the "just use it" approach is dangerous.
The Three Categories: Safe, Promising, and Risky
Not every AI application in therapy is equivalent. Here's how to think about them:
Safe: Administrative and Support Tasks
This is where most therapists should focus their AI investment right now.
Clinical documentation support — An AI tool that transcribes or summarizes sessions, with you reviewing and editing every note, is a reasonable workflow enhancement. It doesn't make clinical judgments; it reduces busywork. Tools like Blueprint.ai and other HIPAA-compliant documentation assistants fall into this category when:
- The tool signs a Business Associate Agreement (BAA) with your practice
- You review and approve every note before it enters the record
- The AI generates drafts, not final clinical documents
- Audio transcripts are deleted after use (not retained for training)
- You maintain full clinical oversight
Scheduling, billing, and intake forms — AI can route incoming inquiries, schedule appointments, and handle eligibility checks. This is legitimate automation with minimal clinical risk.
Research and evidence synthesis — Using AI to summarize literature or organize clinical best practices for a case you're treating is reasonable. You're using it as a research assistant, not a decision-maker.
Administrative compliance — Auditing your documentation for completeness or alerting you to unsigned notes is a reasonable use of AI's pattern-matching capability.
The common thread: you retain full clinical authority. AI handles the repetitive work; you handle the judgment.
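To make the "drafts only, clinician signs" condition concrete, here is a minimal sketch of what that review gate can look like in code. The names (NoteDraft, sign_note, file_to_record) are illustrative, not any vendor's actual API; the point is structural: an AI draft carries its provenance, and nothing enters the chart without an explicit clinician sign-off.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class NoteDraft:
    """An AI-generated draft; in this state it never enters the record."""
    patient_id: str
    draft_text: str
    source: str = "ai_draft"          # provenance is always recorded
    status: str = "pending_review"    # pending_review -> signed or rejected
    reviewed_by: str | None = None
    signed_at: datetime | None = None

def sign_note(draft: NoteDraft, clinician: str, edited_text: str) -> NoteDraft:
    """Only the clinician-reviewed (and possibly edited) text gets filed."""
    draft.draft_text = edited_text
    draft.status = "signed"
    draft.reviewed_by = clinician
    draft.signed_at = datetime.now(timezone.utc)
    return draft

def file_to_record(note: NoteDraft) -> None:
    """Hypothetical EHR hand-off; the gate is the status check, not the AI."""
    if note.status != "signed":
        raise PermissionError("Unsigned AI drafts cannot enter the patient record.")
    # push the signed note to the EHR here
```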
Promising But Guardrailed: Between-Session Support
This category includes tools designed to support patients between sessions—like daily check-ins, mood tracking, or guided practice exercises with built-in safety mechanisms.
The research is early, but promising: tools that help patients practice evidence-based skills (CBT worksheets, DBT exercises, mindfulness guidance) without replacing therapist contact show potential for improving engagement and outcomes. The key conditions:
- The tool is positioned as a support tool, not therapy. Patient-facing language must be clear: "This is a practice helper between sessions, not a substitute for therapy with your therapist."
- Crisis detection is multi-layered. A study from Stanford (2025) found that relying on language models alone to detect suicide risk leads to frequent false negatives. Any tool used with patients must combine pattern matching (keyword/phrase detection) with LLM-based assessment, and flag anything uncertain for human review (a minimal sketch of this layering appears at the end of this subsection).
- Therapists see a summary. If patients are using between-session tools, you should have access to usage data and flagged concerns, so nothing surprises you in session.
- The tool does not make clinical recommendations. It doesn't decide whether medication is needed, whether to increase session frequency, or whether a patient is safe. That's your judgment.
- There's a clear escalation path. When a between-session tool detects risk, it routes the patient to you or to crisis resources (988 or Crisis Text Line 741741); it never delays or hedges.
The risk is high if any of these conditions slip. A between-session tool that tries to diagnose, recommend treatment, or make independent calls about a patient's safety is no longer a support tool; it's practicing therapy, and the legal exposure is severe.
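To illustrate the multi-layered approach described above, here is a minimal sketch of how a between-session tool might combine keyword matching with a model-based risk score. The patterns, thresholds, and the llm_risk_score input are placeholders, not validated cutoffs; the design principle is that the keyword layer can only escalate, the model never gets the final word on safety, and anything ambiguous goes to a human.

```python
import re

# Layer 1: deterministic pattern matching. Cheap and transparent; it can only
# escalate, never clear a patient as "safe."
CRISIS_PATTERNS = [
    r"\bkill myself\b", r"\bsuicid", r"\bend my life\b",
    r"\bself[- ]harm", r"\bno reason to live\b",
]

def keyword_flag(text: str) -> bool:
    return any(re.search(p, text, re.IGNORECASE) for p in CRISIS_PATTERNS)

def assess_check_in(text: str, llm_risk_score: float) -> str:
    """Combine both layers; anything ambiguous goes to a human.

    llm_risk_score (0.0-1.0) is assumed to come from a separate model call;
    the thresholds below are illustrative, not clinically validated.
    """
    if keyword_flag(text) or llm_risk_score >= 0.8:
        return "route_to_crisis_resources"  # 988 / Crisis Text Line, notify therapist
    if llm_risk_score >= 0.3:
        return "flag_for_human_review"      # therapist sees it before the next session
    return "log_only"                       # still visible in the usage summary
```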
Risky or Prohibited: Independent AI Decision-Making
Be direct about what's off-limits:
AI making clinical decisions. No AI system should independently decide that a patient can reduce session frequency, that they're "ready" for discharge, that their bipolar presentation indicates they need hospitalization, or that they're "no longer a risk." These are therapeutic judgments. It's fine for an AI tool to surface data that prompts your judgment, e.g., "Patient has reported low mood in 4 of the last 5 check-ins," but not to draw the conclusion for you.
Using ChatGPT or public LLMs for patient care. This is the most common compliance violation. ChatGPT, Claude, Gemini, and other consumer tools are not HIPAA-compliant. They don't sign Business Associate Agreements. Using them to draft clinical notes, analyze a patient's story, or generate treatment recommendations with any identifiable patient information violates HIPAA. It also violates the terms of service of these tools—most explicitly prohibit healthcare use of PHI. The risk: HIPAA fines ($100 to $50,000 per patient per violation, depending on circumstances), breach notification lawsuits, and loss of license.
AI detecting emotions or mental states directly. Illinois's new law (HB 1806), the most comprehensive state regulation of AI in therapy to date, explicitly prohibits using AI to detect a patient's emotions or mental states. This rule reflects a deeper principle: diagnostic and assessment work is the therapist's role, not AI's. An AI system that claims to "analyze your facial expressions and diagnose depression" or "detect your stress level from your typing speed" is not a support tool—it's overstepping.
General-purpose AI as a clinical aide without guardrails. Some practitioners are casually using ChatGPT to brainstorm case conceptualization or generate treatment ideas "just between us." This is problematic on multiple grounds: (1) the system may retain the information even if you delete your query, (2) you have no BAA, (3) you may be operating outside your malpractice coverage, and (4) you're modeling poor AI hygiene for your field.
What to Ask Before Adopting Any AI Tool
You don't need to be a technologist to vet an AI tool. Use this checklist:
- Does the vendor sign a Business Associate Agreement? If you're processing patient data, the answer must be yes. If the vendor says "we're not regulated" or "we only handle aggregated data," ask them to sign anyway. It's legally standard.
- What happens to my data? Find out: Where is it stored? How long? Is it encrypted in transit and at rest? Is it used for training? Can you delete it? A good vendor gives you full transparency and control.
- What does the AI actually do? Does it generate drafts (reasonable), or does it make recommendations (more risky)? Does it flag concerns for you to evaluate, or does it draw conclusions?
- How is crisis handled? This is a hard question that reveals whether a vendor thought carefully about real safety. If the answer is "the AI routes to 988," that's good but incomplete. Ask: How does the AI detect crisis? Are there false negatives? What happens if the patient ignores the routing? Is there a human-in-the-loop option?
- What's the backup plan if the AI fails? If your AI note-generation tool goes down, can you still document? If your between-session platform has an outage, will patients know? The more critical the tool, the more redundancy you need.
- How is bias addressed? This is not a minor question. Early AI systems trained on psychiatric datasets have shown race and gender biases in severity assessment. Ask the vendor: Have they audited their training data? Have they tested the tool on demographically diverse populations? What are the known limitations?
- What does my malpractice insurer say? Before adoption, call your malpractice carrier. Many have updated their policies to exclude coverage if you use unauthorized AI tools. Some require prior approval. Don't find out the hard way.
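One practical way to use this checklist is to record each vendor's answers in a structured form and keep it with your compliance files. A minimal sketch follows; the vendor name and field names are illustrative, and the "minimum bar" function just encodes the non-negotiables discussed above, so adjust it to your own policy.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class VendorVetting:
    """A record of vendor answers, so your due diligence is documented."""
    vendor: str
    signs_baa: bool
    data_storage_location: str
    retention_period_days: int
    encrypted_in_transit_and_at_rest: bool
    used_for_model_training: bool
    deletion_on_request: bool
    crisis_detection_method: str       # e.g. "keywords + LLM + human review"
    bias_audit_available: bool
    malpractice_carrier_approved: bool

def passes_minimum_bar(v: VendorVetting) -> bool:
    """Non-negotiables from the checklist above; tighten as needed."""
    return (v.signs_baa
            and v.encrypted_in_transit_and_at_rest
            and not v.used_for_model_training
            and v.deletion_on_request
            and v.malpractice_carrier_approved)

# "ExampleNotesAI" is a hypothetical vendor used only for illustration.
record = VendorVetting(
    vendor="ExampleNotesAI", signs_baa=True, data_storage_location="US",
    retention_period_days=30, encrypted_in_transit_and_at_rest=True,
    used_for_model_training=False, deletion_on_request=True,
    crisis_detection_method="n/a (documentation only)",
    bias_audit_available=True, malpractice_carrier_approved=True,
)
print("Meets minimum bar:", passes_minimum_bar(record))
print(json.dumps(asdict(record), indent=2))
```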
The Regulatory Landscape
The field is moving fast. Illinois HB 1806, signed in August 2025, is the nation's first comprehensive AI therapy law. It's worth understanding even if you don't practice in Illinois, because other states are drafting similar rules.
What HB 1806 permits:
- AI for administrative and supplementary tasks
- Documentation tools (if the therapist reviews and approves)
- Scheduling, billing, and other operational uses
What HB 1806 prohibits:
- AI making independent therapeutic decisions
- AI directly interacting with clients in therapeutic communication
- AI generating recommendations or treatment plans without therapist review and approval
- AI assessing or detecting emotions or mental states
Violations carry fines of up to $10,000 per incident.
This law sets a clear standard: AI can assist; AI cannot decide. Other states are likely to follow this template.
A Practical Implementation Framework
If you're considering adopting an AI tool, here's a governance framework:
- Define the specific use. "Reduce documentation burden" is too vague. "Generate a draft SOAP note based on session recording for my review" is clear.
- Verify compliance. Confirm the vendor is HIPAA-covered, has signed a BAA, and carries appropriate liability insurance.
- Test on a limited basis. Use the tool on 1–2 weeks of non-crisis cases before full adoption. Watch for errors, false flags, or unexpected outputs.
- Establish a review protocol. Before any AI output enters the patient record, you review it. This is not optional. Documenting "I reviewed it" when you didn't is fraud.
- Train your staff. If you have billing staff or administrative assistants, they need to understand the tool's scope, its limitations, and what to escalate to you.
- Audit yourself. Periodically—say, quarterly—pull a random sample of AI-generated notes and verify they're accurate and clinically sound (a minimal sampling sketch follows this list). Document this review.
- Plan the exit. If a vendor closes or you lose confidence, you should be able to export your data and stop using the tool without disrupting patient care.
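For the self-audit step, the sampling itself can be automated even if the review can't. Here is a minimal sketch that assumes you can export notes to a CSV with note_id, date, and an ai_assisted flag; the file name, column names, and sample size are illustrative, and the clinical review of each sampled note is still yours to do by hand.

```python
import csv
import random

def quarterly_audit_sample(notes_csv: str, sample_size: int = 10) -> list[dict]:
    """Pull a random sample of AI-assisted notes for manual review.

    Assumes an export with at least 'note_id', 'date', and 'ai_assisted'
    columns; adjust to whatever your EHR actually provides.
    """
    with open(notes_csv, newline="") as f:
        ai_notes = [row for row in csv.DictReader(f) if row.get("ai_assisted") == "true"]
    random.shuffle(ai_notes)
    return ai_notes[:sample_size]

# Review each sampled note against the session and document the result.
for note in quarterly_audit_sample("notes_export_q3.csv"):
    print(note["note_id"], note["date"], "-> verify accuracy and clinical soundness")
```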
The Bottom Line
AI in therapy practice isn't going away. It's proliferating. The question isn't whether to use it, but how to use it responsibly.
The safe path: Use AI for what it's good at—repetitive, rule-based tasks like scheduling, billing, and initial documentation drafting. Retain full clinical authority for judgment, assessment, diagnosis, and safety decisions. Demand HIPAA compliance and transparency. And when a tool promises to do your clinical thinking for you, remember the 92% of your colleagues who are right to be concerned.
Your patients come to you for your clinical judgment. That's your competitive edge, your ethical obligation, and the thing AI can't replicate. Tools should amplify that judgment, not replace it.
Crisis Resources
If you or a patient is in crisis:
- 988 Suicide & Crisis Lifeline: Call or text 988
- Crisis Text Line: Text HOME to 741741
Sources & Further Reading
- APA 2025 Practitioner Pulse Survey: AI in the Therapist's Office
- Among psychologists, AI use is up, but so are concerns
- Illinois HB 1806: Wellness and Oversight for Psychological Resources Act
- Between Help and Harm: An Evaluation of Mental Health Crisis Handling by LLMs
- What Therapists Must Know About HIPAA-Compliant AI Note-Taking
- When AI Technology and HIPAA Collide
- Blueprint for Therapists: AI Documentation & Insights