Predictive analytics are all the rage, but is the system ready for prevention?

Predictive analytics are moving from the research world into everyday clinical software, especially electronic health records (EHRs). Tools that once lived in academic papers now show up as risk flags, scorecards, registries, and “next best action” prompts inside workflows. The goal is noble: to shift from reactive to proactive care. Instead of waiting for a client to call in crisis, the system nudges you to reach out first. If we can predict risk earlier, we can prevent harm sooner.

But even if the predictions are accurate, is our healthcare system actually ready to act on them in a preventive way?

For therapists—often practicing in outpatient settings, with limited time and limited integration into broader medical systems—the answer is complicated.

Below, we’ll unpack what predictive analytics in EHRs are (and aren’t), where the evidence is heading, and the real-world constraints that can turn an “early warning system” into yet another alert no one has the capacity to meaningfully respond to. 

What are predictive analytics?

In healthcare software, predictive analytics usually means a model that uses available data (diagnoses, medications, lab results, utilization patterns, demographics, sometimes clinical notes) to estimate the probability of a future outcome such as hospitalization, relapse, crisis visit, medication nonadherence, or suicide attempt over a defined time window.

You might have already noticed new features popping up in your EHR. Platforms are rolling out modules that use machine learning to flag high-risk patients. These tools analyze everything from previous diagnosis codes to the natural language in your progress notes.

Some tools are framed as clinical decision support (CDS): “This person is at elevated risk; consider these actions.” Others are used for population health: “These 50 patients should be prioritized for outreach.” Many are embedded directly in EHR interfaces as scores, flags, or predictive decision support interventions.

Importantly, these systems aren’t crystal balls. They’re probability estimators. And the way they’re presented, which can include red/yellow/green categories, thresholds, and high-risk labels, can blur the difference between risk estimation and clinical certainty.
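To see how that blurring happens, consider a minimal sketch of how a continuous probability gets collapsed into a traffic-light label. The thresholds and category names here are invented for illustration; real products set their own cutoffs, and those cutoffs are exactly what this sketch is meant to make visible:

```python
# Illustration only: these thresholds are assumptions, not any vendor's actual cutoffs.
def risk_tier(probability: float) -> str:
    """Collapse a continuous risk estimate into a red/yellow/green label."""
    if probability >= 0.20:
        return "red"     # displayed as "high risk"
    if probability >= 0.05:
        return "yellow"
    return "green"

# Two very different probability estimates can land in the same bucket:
print(risk_tier(0.21))  # red
print(risk_tier(0.95))  # red
```

A clinician who sees only the label can't tell a 21% estimate from a 95% one, which is one way risk estimation starts to read like clinical certainty.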

Are the predictions any good?

The evidence base is growing, especially in suicide risk prediction using large-scale EHR data. One multi-system U.S. study validated an EHR-based machine-learning approach across five diverse health systems (over 3.7 million patients). At a specificity of 90%, models detected a meaningful portion of suicide attempts well in advance, with AUCs in the ~0.71–0.76 range.

That’s real signal. It’s not perfect, but better than chance and potentially useful as an adjunct to clinical assessment in certain settings. And researchers are also arguing—credibly—that how we judge these models matters. For low-base-rate outcomes like suicide, positive predictive value will look low even for decent models. Clinical utility depends on calibration, thresholds, context, and what interventions you trigger at each risk level.
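To make the base-rate point concrete, here is a small worked example using Bayes' rule. The sensitivity, specificity, and prevalence figures are hypothetical, chosen only to show the arithmetic, not drawn from any specific study or model:

```python
# Hypothetical numbers for illustration -- not from any published model.
sensitivity = 0.40   # model catches 40% of people who will have the outcome
specificity = 0.90   # model correctly clears 90% of people who won't
prevalence = 0.005   # 0.5% of patients have the outcome in the time window

def ppv(sens: float, spec: float, prev: float) -> float:
    """Positive predictive value: P(outcome | flagged), via Bayes' rule."""
    true_pos = sens * prev
    false_pos = (1 - spec) * (1 - prev)
    return true_pos / (true_pos + false_pos)

print(f"PPV among flagged patients: {ppv(sensitivity, specificity, prevalence):.1%}")
# → PPV among flagged patients: 2.0%
```

Even with 90% specificity, a 0.5% base rate means roughly 98 of every 100 flagged patients will not go on to have the outcome. That is not a broken model; it is what any screener looks like at low prevalence, which is why thresholds and tiered responses matter more than the headline accuracy figure.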

But the technology isn’t perfect. While some models boast high accuracy rates in controlled studies, they often struggle with “false positives” in the real world. A model might flag a patient as high-risk for suicide because of data gaps or historical patterns that don’t reflect their current stability. For a busy clinician, sifting through these alerts can quickly lead to “alert fatigue,” where the sheer volume of notifications makes it hard to distinguish a true emergency from a statistical glitch.

So predictive models can identify risk earlier than humans can in some contexts. But what happens when someone gets that information?

The prediction-to-prevention gap

Even the best prediction is only as valuable as the response it enables.

In mental healthcare, prevention often requires things that are in short supply:

  • Timely access to appointments
  • Care coordination (including warm handoffs and follow-ups)
  • Crisis resources that can actually respond
  • Measurement-based care systems that can track symptoms over time
  • Reimbursement that supports proactive outreach, not just face-to-face sessions

And in the U.S., these ingredients are uneven at best.

Knowing a client is at risk is only half the battle. You have to be able to do something about it. This is where the system readiness gap becomes most painful.

Imagine your EHR flags five clients today who are at high risk for hospitalization. Do you have open slots to see them this week? Does your clinic have a mobile crisis team that can be deployed instantly?

For most U.S. providers, the answer is “no.”

We’re facing a workforce shortage projected to reach a deficit of thousands of providers by 2030: more than 122 million Americans live in mental health professional shortage areas, and shortfalls of nearly 88,000 mental health counselors and 114,000 addiction counselors are projected. Between declining reimbursement rates and the professional degree reclassification, there are concerns it will only get worse. Waitlists at community centers often stretch past 60 days. In this environment, a predictive alert is like a smoke alarm ringing in a fire station with no trucks available to dispatch.

When capacity is constrained, predictive analytics can become a cruel paradox: the system identifies who needs help, then can’t deliver it.

Furthermore, reimbursement models don’t support this kind of work. Insurance companies pay for a 50-minute therapy session. They rarely pay for the administrative time it takes to review risk scores, coordinate with a psychiatrist, or make brief check-in calls to high-risk clients. Without financial incentives, proactive outreach becomes “unpaid labor” that leads to burnout.

Alerting is easy; building a response pathway is hard

If predictive tools increase the number of people flagged as needing outreach, safety planning, medication review, higher level of care, or more frequent contact, you need staffing and procedures to match. Otherwise, you create:

  • Liability anxiety
  • Clinician moral injury (seeing need but lacking capacity to act)

Implementation and governance bodies have called out the need for oversight, monitoring, transparency, and clarity about limitations and bias precisely because real-world use can amplify unintended consequences if systems aren’t ready.

Why adoption is stalled in outpatient settings

If these tools are so powerful, why aren’t they standard practice in every clinic? The barriers are practical and tangible:

  • Workflow disruption: Alerts often live in a separate “population health” tab rather than on the main screen where you write notes. It takes extra clicks and extra time to find them.
  • The cost factor: These analytics modules often come with a price tag. For a small practice, that adds up fast.
  • Liability concerns: This is the big one. If an algorithm flags a client as high-risk and you don’t see the alert in time to act, are you liable? Conversely, if you act on a false positive and hospitalize someone unnecessarily, is that malpractice? The legal framework hasn’t caught up to the technology.

What prevention looks like in practice

Here’s the good news: the broader healthcare system does have emerging preventive response models, especially when behavioral health is integrated into primary care.

For example, Medicare has supported Behavioral Health Integration (BHI) services and the Psychiatric Collaborative Care Model (CoCM), which formalize team roles, measurement-based monitoring, proactive follow-up, and psychiatric consultation. CMS guidance describes these as care management services paid on a monthly basis, not tied only to face-to-face sessions, and emphasizes structured monitoring and care plan revision when patients aren’t improving.

This is exactly the kind of infrastructure predictive analytics needs to become preventive: a system that can monitor people over time and intervene early.

But in many outpatient therapy settings, especially private practice, those supports may not exist:

  • No registry to track symptom change across a panel
  • No care manager to do outreach
  • No psychiatric consultant readily available
  • No billing structure that pays for proactive non-session work

So predictive analytics may arrive in the software before prevention arrives in the payment model.

The ethical tension of predictive analytics

Beyond logistics, there are ethical questions you need to wrestle with.

False positives can create labels that affect treatment

When a person is flagged as “high risk,” the label can shape how they’re treated across the system. If the prediction is wrong, the stigma may still drive:

  • More surveillance-like interactions
  • Law enforcement responding to calls rather than community health
  • Altered documentation tone
  • Unnecessary medication
  • Invasive medical procedures
  • Forced hospitalization
  • Increased referrals or restrictive interventions

That doesn’t mean therapists shouldn’t use these tools, but it does mean you should treat outputs as signals, not verdicts.

False negatives can create misplaced reassurance

If the system doesn’t flag someone who later deteriorates, clinicians can be left asking, “Why didn’t the tool catch this?” That can erode trust in clinical judgment and in the tool at the same time.

Bias remains a problem

Algorithms learn from historical data, which is often biased. If a model was trained on data that under-diagnosed certain populations, it will repeat those mistakes. For example, Black men are disproportionately likely to be flagged as having psychosis. Governance guidance increasingly stresses transparency and disclosure about what data were used, how risk is measured, and what limitations exist, especially because predictive tools can encode inequities if they aren’t designed and monitored responsibly.

Privacy, consent, and the “data exhaust” problem

Predictive analytics is powered by data—lots of it. And that data often flows across covered entities, business associates, and vendors.

At a baseline, mental health clinicians should be aware that HIPAA has specific rules and guidance around permitted uses and disclosures of protected health information (PHI), including for care coordination and continuity of care, along with guidance materials on compliance obligations.

Predictive analytics increases the stakes of data governance because the data isn’t just documenting care anymore. It’s being used to forecast future risk and trigger actions. Governance bodies are explicitly flagging privacy and security concerns as integral to responsible AI and predictive tool deployment.

And then there’s informed consent. Do your clients know an AI is reading their file to predict their behavior? Few EHR patient portals explicitly disclose this.

For therapists, especially those in smaller practices, the question becomes less “Can predictive analytics help?” and more:

  • What data is being used?
  • Who has access to the outputs?
  • How are the outputs documented?
  • Can the data be modified?
  • How do you discuss them with clients, if at all?

What you can do right now

Despite these challenges, predictive analytics isn’t going away. It will likely become a standard part of our toolkit. Rather than ignoring it, you can prepare for it.

Here are a few practical steps for clinicians:

  1. Treat it as a second opinion: Use algorithmic risk scores as one data point among many. Trust your clinical judgment. If the computer says “high risk” but the client presents as stable, document your assessment clearly.
  2. Define your alert governance: If your clinic uses these tools, decide who checks the alerts and when. Don’t let them pile up unread.
  3. Use micro-interventions: You don’t always need a full session to intervene. Evidence-based “micro-interventions” like a brief safety check-in call or a text reminder can be powerful tools for clients flagged as moderate risk.
  4. Advocate for payment: Talk to payers and administrators. If they want you to use predictive tools to lower hospitalization rates, they need to reimburse the time it takes to do that preventive work.

Prevention-ready questions to ask

If predictive analytics (or AI-generated risk flags) are coming into your EHR, your clinic, or your referral network, these questions can help you evaluate readiness:

Clinical workflow

  • What happens after someone is flagged—specifically? (Who does what, when?)
  • Are there tiered responses (light-touch outreach vs. higher-level intervention)?
  • Is there a way to document actions taken and close the loop?

Capacity

  • Do we have staff/time for increased outreach or safety planning?
  • What resources exist if the model flags more people than we can see?

Governance and transparency

  • Can we see what data drives the prediction and its limitations?
  • Is there monitoring for bias and performance drift over time?

Ethics and consent

  • Will clients be informed that predictive tools are being used?
  • How are outputs shared (or not shared) across care teams?

So… is the system ready for prevention?

In some pockets, yes. Integrated care settings that can deliver collaborative care workflows, registry monitoring, and proactive follow-up are closer to being prevention-ready.

Across the system as a whole, not yet. Workforce shortages, uneven access, and reimbursement constraints make it hard to respond consistently to rising demand, even when the need is already obvious without predictive analytics. Prevention requires infrastructure: workforce, care models, reimbursement, coordination, governance, and ethical guardrails. Until those foundations improve, predictive analytics risks becoming another tool spreading care delivery even thinner.

If we want predictive analytics to be more than hype, we have to invest as much in the response system as we do in the algorithm. The system may not be fully ready for prevention yet, but by engaging with these tools critically and carefully, we can start building the bridge to get there.