What Illinois HB 1806 Means for AI in Therapy
In August 2025, Illinois made history. Governor J.B. Pritzker signed HB 1806, making Illinois the first state in the nation to explicitly regulate artificial intelligence in mental health treatment. The law, officially titled the Wellness and Oversight for Psychological Resources (WOPR) Act, drew a clear line in the sand: AI can help support therapy, but it cannot practice therapy.
For patients using AI wellness tools, therapists incorporating AI into their practice, and mental health companies building these products, this law is a turning point. It's not a ban. It's clarity. And clarity, it turns out, is what the entire industry has been waiting for.
Why Now? The Regulatory Vacuum
Before Illinois acted, mental health AI existed in a gray zone. The FDA didn't clearly regulate it. Professional licensing boards hadn't weighed in. State laws were silent. Woebot—a chatbot that had served over 10 million users—shut down on June 30, 2025, partly because its maker faced regulatory uncertainty it couldn't navigate profitably.
Meanwhile, the mental health crisis in America only deepened. According to the National Institute of Mental Health, 1 in 5 adults experience mental illness each year. Therapy waitlists stretch for months. Insurance barriers remain high. Patients desperately need something between their weekly (or monthly, or never-happening) therapy sessions.
That gap is where AI wellness companions fit. They're available at 3 a.m. They don't get overwhelmed by caseloads. They can help patients practice evidence-based skills between sessions.
But without guardrails, that same availability can become a liability. What if an AI makes a misdiagnosis? What if it tells someone in crisis to handle it alone? What if it collects deeply personal data and sells it?
Illinois HB 1806 answers these questions directly.
What Illinois HB 1806 Actually Says
The law is remarkably specific about what AI in mental health cannot do:
Prohibited Uses
- Making independent clinical decisions: AI cannot decide whether a patient has a condition, whether they need treatment, or what treatment they should receive without human review.
- Direct therapeutic communication: AI cannot engage in therapeutic dialogue designed to diagnose, treat, or manage a mental health condition on its own. (This is narrower than it sounds—see below.)
- Creating or modifying treatment plans: AI cannot draft, recommend, or change a treatment plan without a licensed therapist's direct involvement.
- Emotion detection claims: AI cannot claim it can detect emotional states from facial recognition, voice analysis, or text patterns with clinical accuracy, unless the vendor can prove it with FDA-grade evidence.
The penalty for violations? Up to $10,000 per violation, plus attorney's fees if the violation is found to be willful.
What AI Can Do
The law also lists what AI is explicitly permitted to do in mental health settings:
- Administrative support: Scheduling appointments, billing, medical records management, appointment reminders
- Data analysis: Aggregating patient data to identify trends (with privacy protections)
- Notes and documentation: Drafting clinical notes for therapist review and editing
- Decision support: Showing therapists relevant data or research—the therapist decides what to do with it
- Patient education: Providing evidence-based psychoeducational content
- Practice support between sessions: Helping patients practice skills therapists have already taught them (like breathing exercises or thought records)
This last category is crucial. It's where wellness companions live.
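The common thread across several of these permitted uses—notes, documentation, decision support—is human review: the AI drafts, a licensed clinician decides. As a purely illustrative sketch of that pattern, here is how a vendor might gate AI-drafted clinical notes behind therapist sign-off before anything reaches the record. All names (`DraftNote`, `approve_note`, and so on) are hypothetical, not taken from HB 1806 or any specific product.

```python
from dataclasses import dataclass
from enum import Enum


class ReviewStatus(Enum):
    PENDING = "pending"      # AI draft exists, no human has looked at it yet
    APPROVED = "approved"    # a licensed clinician signed off
    REJECTED = "rejected"    # clinician discarded the draft


@dataclass
class DraftNote:
    """An AI-generated draft that never becomes part of the record until approved."""
    patient_id: str
    ai_draft_text: str
    status: ReviewStatus = ReviewStatus.PENDING
    reviewer_license_id: str | None = None
    final_text: str | None = None


def approve_note(note: DraftNote, reviewer_license_id: str, edited_text: str) -> DraftNote:
    """Only a licensed reviewer can turn a draft into a final note.

    The clinician's edited text, not the raw AI output, becomes the record.
    """
    note.status = ReviewStatus.APPROVED
    note.reviewer_license_id = reviewer_license_id
    note.final_text = edited_text
    return note


def export_to_record(note: DraftNote) -> str:
    """Refuse to export anything a human has not approved."""
    if note.status is not ReviewStatus.APPROVED or note.final_text is None:
        raise PermissionError("Draft has not been reviewed by a licensed clinician.")
    return note.final_text
```

The design point is that AI output is structurally incapable of reaching the chart on its own; the approval step is where the law expects a human to be.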
The Wellness vs. Therapy Distinction
Here's the key insight: AI therapy regulation hangs on one question. Does the AI claim to treat a mental health condition?
If an AI says "I can help you manage your depression," that's a therapeutic claim. It triggers regulation, FDA oversight, and medical device requirements.
If an AI says "I can help you practice the thought-challenging technique your therapist taught you," that's a wellness claim. It's supportive, not directive. It assumes a therapist is already involved.
The FDA distinguishes between these in its guidance on Software as a Medical Device (SaMD). Tools that claim to diagnose, treat, cure, mitigate, or prevent a disease are medical devices. Tools that support general wellness are not—as long as they make no therapeutic claims.
Illinois HB 1806 codified this distinction into state law. The WOPR Act applies to AI that targets mental health treatment. It does not apply to general wellness, stress reduction, or skill-building tools used as supplements to professional care.
This is why the distinction between "AI therapy" and "AI wellness companion" matters legally, not just linguistically.
California SB 243: The Companion Law
Illinois wasn't alone in moving fast. California, home to most of the nation's AI companies, acted just months earlier.
California SB 243, which took effect January 1, 2026, is narrower but in some ways more stringent. It focuses specifically on chatbots and conversational AI marketed to California residents. The law requires:
- Crisis disclosure: If a user says something indicating suicide or self-harm, the chatbot must provide crisis resources (like 988 and Crisis Text Line).
- Transparency: Clear disclosure that the user is talking to AI, not a human.
- No therapy claims: Chatbots cannot claim to diagnose, treat, or cure mental health conditions.
- Minor protections: Chatbots cannot knowingly engage in therapeutic conversations with users under 18 without parental consent.
- Right of action: Users harmed by violations can sue for at least $1,000 per violation—a private right of action that makes compliance more than just a legal nicety.
SB 243 doesn't distinguish between "wellness" and "therapy" the way Illinois does. Instead, it applies to all conversational AI used by California residents for mental health purposes. This is a broader net.
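To see how a vendor might operationalize the crisis-disclosure and transparency requirements, here is a minimal sketch. The keyword list, message templates, and function names are assumptions made for the example, not language from SB 243, and a real system would layer far more robust risk detection than keyword matching.

```python
# Hypothetical sketch: disclose that the user is talking to an AI, and route
# crisis language to human resources. Not any real product's logic.

CRISIS_TERMS = {"suicide", "kill myself", "self-harm", "end my life"}  # illustrative only

AI_DISCLOSURE = (
    "You're chatting with an automated wellness companion, not a human "
    "or a licensed therapist."
)

CRISIS_MESSAGE = (
    "It sounds like you may be going through a crisis. You can call or text 988 "
    "(Suicide & Crisis Lifeline) or text HOME to 741741 (Crisis Text Line) to reach "
    "a trained counselor right now."
)


def contains_crisis_language(user_message: str) -> bool:
    """Crude keyword screen; real systems add classifiers and human review on top."""
    text = user_message.lower()
    return any(term in text for term in CRISIS_TERMS)


def respond(user_message: str, is_first_message: bool) -> list[str]:
    """Build the reply: disclose the AI up front, and route crisis language to 988."""
    replies = []
    if is_first_message:
        replies.append(AI_DISCLOSURE)      # transparency requirement
    if contains_crisis_language(user_message):
        replies.append(CRISIS_MESSAGE)     # crisis-disclosure requirement
        return replies                     # stop normal conversation, surface resources
    replies.append("...")  # a normal wellness-companion response would go here
    return replies
```

The point isn't the keyword list; it's that crisis handling is a hard-coded routing decision, not something left to the model's judgment.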
The Emerging Patchwork
Illinois and California are not alone. As of March 2026, mental health AI has drawn legislative attention in at least 20 states:
| State | Bill | Status | Key Focus |
|-------|------|--------|-----------|
| Illinois | HB 1806 | Signed (Aug 2025) | Treatment AI prohibition, wellness AI permitted |
| California | SB 243 | Enacted (Jan 2026) | Chatbot safety, crisis routing, transparency |
| Nevada | AB 406 | Proposed | AI in telehealth, licensing of AI systems |
| Utah | HB 452 | Proposed | Mental health AI transparency |
| New York | S8484 | Proposed | AI algorithm disclosure, bias testing |
| New Jersey | A5603 | Proposed | Consumer protection for AI mental health tools |
| Kentucky | HB 455 | Proposed | Liability protections for therapists using AI |
Across all 50 states, there are approximately 143 bills that mention AI and mental health in some form. Only 28 explicitly regulate mental health AI itself. Most are still in committee.
The result: a fragmented regulatory landscape where companies must navigate state-by-state compliance. A tool legal in Illinois might violate California law. A tool compliant with SB 243 might lack the clinical safeguards Illinois requires.
This is messy. It's also inevitable, and probably healthy. State laboratories of democracy often produce variation before federal standards emerge.
What This Means for Patients
If you use an AI wellness tool, HB 1806 and SB 243 give you clearer rights:
- Transparency: Companies must tell you upfront whether they're offering wellness support or mental health treatment. No ambiguity.
- Crisis safety: If you're in crisis, the app should route you to trained crisis counselors (988 or Crisis Text Line), not try to handle it alone.
- Data protection: AI mental health companies are increasingly required to disclose how they use your data. If they're collecting emotional information, they need to tell you, and in California, you can sue if they misuse it.
- Human oversight: Any AI providing treatment recommendations must have a licensed therapist involved. You're never alone with the algorithm.
- No over-promising: Companies can't claim their AI is therapy, or that it can diagnose depression, unless they have clinical evidence to back it up and FDA clearance to sell it.
For many patients, this means peace of mind. For others, it might mean the tools you loved get reclassified or shut down. But transparency and accountability are better than a Wild West where anything goes.
What This Means for Therapists
For mental health professionals, these laws clarify what you can and can't do with AI:
- You can use AI for notes, scheduling, billing, and documentation—no special license or approval needed. The AI handles the busywork so you handle the relationship.
- You can recommend AI wellness tools to patients—as long as those tools make no therapeutic claims and you're still driving treatment decisions. The AI is an adjunct, not a substitute.
- You cannot outsource clinical judgment to AI. If an AI recommends a patient needs hospitalization, you decide whether to recommend it. If it drafts a treatment plan, you modify and approve it.
- You're still liable for patient harm caused by AI you recommend. If you send a patient to an AI tool, and that tool gives dangerous advice, you could be responsible. Choose vendors carefully.
- You're now working in a regulated environment. Compliance matters. Vendors who don't take it seriously are a liability risk.
The silver lining: therapists who use AI thoughtfully—as a tool, not a replacement—often report feeling less burned out. They have more time with patients and less time on paperwork. Illinois HB 1806 doesn't prevent that. It just draws boundaries.
The FDA's Role: Still Evolving
One gap remains: the FDA's guidance on AI in mental health is still incomplete.
The FDA has issued general guidance on Software as a Medical Device (SaMD), but it hasn't specifically weighed in on LLMs (large language models) for mental health. It's not clear whether a chatbot trained on therapy transcripts would require premarket FDA review. It's not clear how the FDA would evaluate an AI's ability to detect suicidality from text.
This ambiguity is intentional—regulators are waiting to see how the market evolves before locking in standards. But it also means vendors are making educated guesses about compliance.
What is clear: if an AI makes a claim like "treats depression" or "diagnoses anxiety," it's making a medical claim. The FDA will likely require evidence. Possibly clinical trials. Possibly premarket review. This is a high bar—and it should be, given what's at stake.
The NIST Framework: Voluntary Accountability
Outside of legal requirements, another standard is gaining traction: the NIST AI Risk Management Framework.
Developed by the National Institute of Standards and Technology, this framework provides a voluntary approach to AI risk management. It asks companies to identify risks (like bias, data privacy, safety), measure them, and mitigate them—with documentation showing the work.
While not legally binding in most states, NIST alignment is becoming a competitive advantage. Insurance companies may favor vendors who follow it. Hospitals may require it. Therapists certainly notice.
For mental health AI specifically, NIST encourages:
- Bias testing: Does the AI behave differently for different genders, races, or socioeconomic groups?
- Validation studies: Was the AI trained and tested on diverse populations?
- Fallback protocols: What happens when the AI is uncertain? Does it escalate to a human?
- Transparency: Can users understand why the AI made a recommendation?
Following NIST doesn't guarantee your AI is safe, but it shows you're thinking systematically about safety.
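Of the four practices above, fallback protocols are the most directly codable. Here is a minimal sketch, under the assumption that the system exposes a confidence score for its own output; the threshold and names are illustrative, not part of the NIST framework.

```python
# Hypothetical fallback protocol: when the model is uncertain, escalate to a
# human instead of answering. Threshold and names are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff; a real system would tune and document this


def handle_with_fallback(model_answer: str, confidence: float) -> dict:
    """Return the model's answer only when confidence clears the documented threshold.

    Otherwise escalate: tell the user a human will follow up and flag the case
    for review, so uncertainty is handled by a person rather than hidden.
    """
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"action": "answer", "text": model_answer}
    return {
        "action": "escalate",
        "text": "I'm not confident I can help with this. I've flagged it so a human can follow up.",
        "log_for_review": True,
    }
```

What matters for a risk-management audit is less the exact threshold than the paper trail: the escalation path exists, it's documented, and someone can check how often it fires.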
Looking Ahead: What's Coming
The regulatory landscape will continue to shift:
- More state laws: Expect 10-15 more states to pass mental health AI regulations by 2027, likely copying the Illinois and California models.
- Federal guidance: The FDA will eventually issue specific guidance on LLMs in mental health. This could either accelerate innovation (if the standards are reasonable) or chill it (if they're onerous).
- Professional standards: The American Psychological Association and American Psychiatric Association are both developing AI practice guidelines. These will become de facto standards even where not legally required.
- Insurance and liability: Malpractice insurers will start requiring specific AI vendor certifications. This will concentrate business on compliant players.
- International harmonization: Other countries (UK, Canada, EU) are developing parallel standards. Companies operating globally will need to navigate all of them.
The good news: this is all happening much faster than healthcare regulation usually moves. Regulators recognized that mental health AI is different—both higher-risk and higher-impact—and they're not waiting.
The Bottom Line
Illinois HB 1806 and the regulations following it are not a ban on AI in mental health. They're a boundary.
AI can support therapy. It can help you practice skills between sessions. It can reduce therapist burden and make care more accessible. It can provide crisis resources when you need them.
What AI cannot do—legally, now, in multiple states—is replace the human relationship at the center of healing. It cannot make clinical decisions. It cannot claim to treat conditions without evidence. It cannot prioritize profit over safety.
For patients, this means your data and safety matter. For therapists, it means you can use AI to work smarter, but the judgment and the relationship remain yours. For companies building these tools, it means compliance is the cost of operating.
That's not a setback. It's maturation.
The tools you use between therapy sessions should be good. They should be safe. They should know their limits. Illinois HB 1806 raises the bar to ensure they are.
Crisis Resources
If you or someone you know is experiencing a mental health crisis:
- National Suicide & Crisis Lifeline: Call or text 988 (available 24/7)
- Crisis Text Line: Text HOME to 741741
- International Association for Suicide Prevention: Find resources in your country
These resources connect you with trained crisis counselors who can provide real support right now. AI wellness tools are supplements to professional care, not replacements for it.