Measuring What Matters in Behavioral Health Quality

Faisal Zain brings a builder’s eye to behavioral health quality. Trained in medical technology and device manufacturing, he’s spent years turning vague clinical aspirations into measurable specifications. At the Behavioral Health Tech conference in San Diego, he challenged the field to align on “what good looks like” and to move from counting tasks to proving health improvement. In this conversation, he unpacks data gaps, policy levers, value-based contracts, and a practical, minute-by-minute template for post-hospital follow-up that ties process to outcomes.

At the Behavioral Health Tech conference in San Diego, you said we lack a shared definition of “what good looks like.” Can you share a story that shows this gap, the exact points where stakeholders diverge, and the metrics you’d use to align them step by step?

I sat with a payer, a hospital leader, and a community clinic after a tense case review. Everyone agreed care was delivered, but no two people agreed it was good. The payer pointed to on-time claims and a seven-day follow-up; the hospital highlighted safe discharge; the clinic noted a completed visit. The patient, meanwhile, still felt unstable. The divergence was simple: they tracked “did you do a thing,” not “did health improve.” I’d align them by sequencing metrics: first, access (time to first appointment within seven days); second, visit content (medication reconciliation and SDOH screening documented); third, early symptom change (validated measures documented at baseline and again by day 30); and finally, stability markers at day 90 (no unplanned ED or inpatient returns). When stakeholders see those steps on one line, disagreements soften because the thread from action to outcome becomes visible.
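
As a rough illustration of the metric sequence Zain describes, here is a minimal Python sketch that scores a single post-discharge episode against the four checks: access, visit content, early symptom change, and 90-day stability. The field names, score direction, and thresholds are hypothetical, not drawn from any real measure set.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FollowUpEpisode:
    # All field names are illustrative, not a real schema.
    days_to_first_visit: Optional[int]   # days from discharge to first follow-up
    med_reconciliation_done: bool        # documented at the follow-up visit
    sdoh_screen_done: bool               # documented at the follow-up visit
    baseline_score: Optional[float]      # validated symptom score at discharge
    day30_score: Optional[float]         # repeat score by day 30
    unplanned_returns_by_day90: int      # ED or inpatient returns within 90 days

def sequence_checks(ep: FollowUpEpisode) -> dict:
    """Evaluate the four sequenced metrics: access, visit content,
    early symptom change, and 90-day stability."""
    access = ep.days_to_first_visit is not None and ep.days_to_first_visit <= 7
    content = ep.med_reconciliation_done and ep.sdoh_screen_done
    improved = (
        ep.baseline_score is not None
        and ep.day30_score is not None
        and ep.day30_score < ep.baseline_score  # lower score = fewer symptoms
    )
    stable = ep.unplanned_returns_by_day90 == 0
    return {"access_7d": access, "visit_content": content,
            "improved_by_30d": improved, "stable_at_90d": stable}

print(sequence_checks(FollowUpEpisode(5, True, True, 18.0, 11.0, 0)))
```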

You pointed to payer data not showing care quality. What specific datasets exist today, which key fields are missing, and how would you link outcomes data to encounters? Please walk through a concrete example with measures, timelines, and feedback loops.

Today we have claims, eligibility, prior authorization, and pharmacy fills—great for dates and costs, weak for clinical nuance. Missing are symptom scores, medication reconciliation status, safety plans, and SDOH findings. To link outcomes, anchor every encounter to a longitudinal episode ID and capture a baseline symptom score on discharge day, then repeat at seven days, 30 days, and 90 days. In practice: a patient discharged on a Friday gets a scheduled follow-up within seven days, a documented medication reconciliation, and a symptom reassessment at the visit; pharmacy data confirms fills; if scores worsen by day 30, an alert prompts a treatment review, with another check at day 90. The loop closes when those scores, reconciliations, and fills are tied back to the index hospitalization and the follow-up visit, not just a claim line.
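
The feedback loop he outlines, anchoring scores to an episode ID and flagging worsening by day 30, could be sketched along these lines; the record shape, day windows, and the episode identifier are illustrative assumptions, not an actual claims or EHR schema.

```python
from dataclasses import dataclass

@dataclass
class SymptomScore:
    episode_id: str   # longitudinal episode anchored to the index hospitalization
    day: int          # days since discharge (0 = discharge day)
    score: float      # validated symptom scale, higher = worse

def needs_treatment_review(scores: list[SymptomScore], episode_id: str,
                           threshold: float = 0.0) -> bool:
    """Close the loop: compare the day-30 reassessment to the discharge baseline
    and flag the episode for a treatment review if symptoms worsened."""
    mine = sorted((s for s in scores if s.episode_id == episode_id),
                  key=lambda s: s.day)
    baseline = next((s.score for s in mine if s.day == 0), None)
    day30 = next((s.score for s in mine if 25 <= s.day <= 35), None)  # day-30 window
    if baseline is None or day30 is None:
        return True  # missing data is itself a signal: trigger outreach
    return (day30 - baseline) > threshold

feed = [SymptomScore("EP-001", 0, 14.0), SymptomScore("EP-001", 7, 13.0),
        SymptomScore("EP-001", 30, 17.0)]
print(needs_treatment_review(feed, "EP-001"))  # True: day-30 score worse than baseline
```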

On provider shortages and full schedules, what are the top three bottlenecks you see, and how do they show up in wait time metrics? Describe a workflow that actually reduced delays, including staffing mix, triage steps, and measurable impact.

Three bottlenecks recur: undifferentiated scheduling (every slot looks the same), documentation drag, and mismatch of clinician license to task. They show up as long waits for high-acuity needs while low-acuity visits occupy scarce specialist time. We reworked one clinic by creating express intakes staffed by care coordinators, routing moderate cases to therapy first, and reserving prescriber time for high-acuity or medication-heavy visits. A nurse led triage calls, a therapist ran brief interventions, and the prescriber handled complex consults. The payoff: urgent cases were seen within seven days, routine follow-ups were booked into care pathways, and the clinic stopped clogging specialist calendars with tasks better done by the team.
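
A toy version of that triage rule, purely to make the routing logic concrete; the acuity tiers and track names are illustrative, not the clinic's actual protocol.

```python
from enum import Enum

class Track(Enum):
    EXPRESS_INTAKE = "care coordinator express intake"
    THERAPY_FIRST = "therapist-led brief intervention"
    PRESCRIBER = "prescriber consult"

def route(acuity: str, medication_heavy: bool) -> Track:
    """Reserve prescriber slots for high-acuity or medication-heavy cases,
    send moderate cases to therapy first, and everything else to an
    express intake. Tiers and rules are illustrative."""
    if acuity == "high" or medication_heavy:
        return Track.PRESCRIBER
    if acuity == "moderate":
        return Track.THERAPY_FIRST
    return Track.EXPRESS_INTAKE

print(route("moderate", False).value)  # therapist-led brief intervention
```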

You criticized the HEDIS seven-day follow-up after hospitalization. Using one recent case, can you map what happened within those seven days, what was captured, what was missed (like medication reconciliation or SDOH checks), and how outcomes differed at 30 and 90 days?

A patient discharged on a Tuesday made the seven-day visit on Monday. The claim captured the visit; the box was checked. What didn’t get captured: a full medication reconciliation, a housing instability screen, and a written safety plan. At 30 days, the patient reported escalating symptoms and trouble accessing meds, which could have been avoided if reconciliation and SDOH checks had been done at the follow-up. By 90 days, after a targeted outreach that finally added those missing steps, symptoms stabilized and there were no unplanned returns. The seven-day tick mark mattered, but the content of that visit determined the trajectory.

You mentioned that process measures don’t prove health improved. Which outcome measures would you prioritize for depression, anxiety, or SUD, and how would you collect them reliably? Please share recommended intervals, target ranges, and how to handle missing data.

I’d prioritize validated symptom scales, medication adherence signals, and functional status. Collect at discharge, seven days, 30 days, and 90 days. Targets should reflect meaningful improvement from baseline and sustained stability by day 90. For missing data, treat it as a signal: trigger outreach and a brief visit or call to capture the outcome, and carry the last observation forward judiciously, only to avoid losing the case entirely.
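
One hedged way to encode that missing-data policy, treating an absent score as an outreach trigger and falling back to the last observation only to keep the case in the denominator; the function and its field names are hypothetical.

```python
from typing import Optional

def resolve_outcome(scores_by_day: dict[int, Optional[float]],
                    day: int) -> tuple[Optional[float], str]:
    """Handle a missing outcome at a scheduled interval (7, 30, or 90 days).
    Missing data is a signal: first trigger outreach to capture a real score;
    only carry the last observation forward so the case is not lost."""
    if scores_by_day.get(day) is not None:
        return scores_by_day[day], "observed"
    # 1) Trigger outreach (here just a note; in practice a task in the registry)
    action = "outreach: brief call or visit to capture the score"
    # 2) Last observation carried forward, used only to keep the case
    prior = [d for d in sorted(scores_by_day)
             if d < day and scores_by_day[d] is not None]
    if prior:
        return scores_by_day[prior[-1]], f"carried forward from day {prior[-1]}; {action}"
    return None, action

print(resolve_outcome({0: 16.0, 7: 12.0, 30: None}, 30))
```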

You said policy action may be needed to define quality. What specific policy levers would you use (e.g., measure sets, reporting standards, incentives), and how should they roll out over 12–24 months? Cite examples from physical health that translate well.

Start with a core behavioral health measure set anchored to outcomes, standard data fields for visit content, and a basic reporting schema that ties encounters to episodes. Over 12–24 months, phase in public reporting, then modest incentives, and finally stronger payment ties. Borrow from physical health by aligning reporting definitions and using staged adoption so systems can build pipes before money shifts. The cadence matters: define, test, publish, then pay.

You noted physical health is further along. Which two practices from physical care (e.g., risk adjustment, episode definitions) would you copy into behavioral health, and how would you adapt them? Please include a side-by-side example with expected outcome gains.

I’d adopt risk adjustment and clear episode definitions. Risk adjustment prevents penalizing clinics that treat higher-acuity patients; episodes tie services to a coherent time window. Side by side, a clinic using both can compare outcomes for similar patients in the same episode window, making improvement real instead of confounded by case mix. The expected gain is fewer unplanned returns by day 90 because care teams see where to intervene earlier.
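
To make the risk-adjustment idea concrete, a crude observed-to-expected comparison within a 90-day episode window; the expected return rates by acuity tier are invented numbers, and real risk adjustment would use far richer models.

```python
# Toy observed-to-expected comparison: a crude stand-in for real risk adjustment.
# Expected 90-day unplanned-return rates by acuity tier are invented numbers.
EXPECTED_RETURN_RATE = {"low": 0.05, "moderate": 0.12, "high": 0.25}

def observed_to_expected(patients: list[dict]) -> float:
    """Each patient dict has an 'acuity' tier and whether they had an
    unplanned return within the 90-day episode window."""
    observed = sum(1 for p in patients if p["unplanned_return_90d"])
    expected = sum(EXPECTED_RETURN_RATE[p["acuity"]] for p in patients)
    return observed / expected if expected else float("nan")

panel = [{"acuity": "high", "unplanned_return_90d": False},
         {"acuity": "high", "unplanned_return_90d": True},
         {"acuity": "moderate", "unplanned_return_90d": False}]
# Ratios below 1.0 mean fewer returns than expected for this case mix.
print(round(observed_to_expected(panel), 2))  # 1.61 for this sample panel
```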

On value-based contracts, what are the minimum common measures you’d require to “pay for results” rather than activity? Walk us through contract terms, attribution, risk corridors, and one example where aligned metrics changed provider behavior and outcomes.

Minimum measures: timely follow-up, documented visit content, symptom improvement by day 30, and stability by day 90. Contract terms would attribute patients to the first follow-up provider after hospitalization, with risk corridors that limit downside while sharing upside for improvement. When a network moved to this model, teams shifted focus from visit counts to early reassessment and reconciliation, which paid off as patients stabilized earlier. It changed conversations in huddles: “Did we do the visit?” became “Did the visit change anything?”
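
A toy sketch of how a risk corridor might cap shared upside and downside around the outcome targets; the share percentages and corridor width are illustrative assumptions, not terms from any actual contract.

```python
def corridor_adjustment(performance_vs_target: float,
                        upside_share: float = 0.5,
                        downside_share: float = 0.25,
                        corridor: float = 0.10) -> float:
    """performance_vs_target is the fractional improvement (+) or shortfall (-)
    against the agreed outcome targets. Gains and losses are shared at
    different rates and capped at the corridor. All parameters are illustrative."""
    capped = max(-corridor, min(corridor, performance_vs_target))
    share = upside_share if capped >= 0 else downside_share
    return capped * share

# A provider 8% above target on symptom improvement and 90-day stability:
print(f"{corridor_adjustment(0.08):+.1%} payment adjustment")   # +4.0%
# A provider 20% below target: downside is capped at the corridor:
print(f"{corridor_adjustment(-0.20):+.1%} payment adjustment")  # -2.5%
```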

You highlighted SDOH checks during follow-up. What specific SDOH domains should be captured, how often, and by whom? Describe a case where addressing housing, transport, or food access changed the clinical trajectory and the numbers that proved it.

Capture housing, food access, transportation, utilities, and safety at discharge and the seven-day visit. Care coordinators can collect the screens, and clinicians confirm their relevance to the plan. In one case, a simple transport fix made it possible to keep the seven-day visit and complete a medication pickup that had been missed after discharge. By day 30, symptoms began to improve, and by day 90 the patient avoided unplanned care—small social fixes with outsized clinical effects.

To move from “did you do a thing?” to “did health improve?”, what does a gold-standard visit look like after hospitalization? Please outline minute-by-minute flow, key assessments, safety planning, medication reconciliation steps, follow-up cadence, and the dashboard you’d monitor.

Minute 0–5: warm handoff, confirm discharge details, and orient the patient. Minute 5–15: medication reconciliation—name, dose, and purpose; confirm fills and side effects. Minute 15–25: symptom reassessment and a quick functional screen. Minute 25–35: SDOH screen tied to concrete actions. Minute 35–45: collaborative safety plan in writing, with crisis contacts. Minute 45–50: adjust the treatment plan and set specific next steps. Minute 50–60: schedule the next touchpoints at seven, 30, and 90 days, and send a clear after-visit summary. The dashboard tracks on-time follow-up, reconciliation completion, symptom trends at each interval, and any unplanned returns—turning one visit into a 90-day arc of accountability.
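
A minimal sketch of the dashboard rollup he describes, aggregating on-time follow-up, reconciliation completion, 30-day symptom change, and 90-day unplanned returns across a panel; the field names and sample data are hypothetical.

```python
from statistics import mean

def dashboard(episodes: list[dict]) -> dict:
    """Roll up the 90-day arc for a panel: on-time follow-up, reconciliation
    completion, mean symptom change from baseline to day 30, and unplanned
    returns. Field names are illustrative, not a real schema."""
    n = len(episodes)
    deltas = [e["day30_score"] - e["baseline_score"]
              for e in episodes
              if e.get("day30_score") is not None and e.get("baseline_score") is not None]
    return {
        "on_time_follow_up_rate": sum(e["days_to_follow_up"] <= 7 for e in episodes) / n,
        "med_rec_completion_rate": sum(e["med_rec_done"] for e in episodes) / n,
        "mean_symptom_change_30d": mean(deltas) if deltas else None,  # negative = improving
        "unplanned_return_rate_90d": sum(e["returns_90d"] > 0 for e in episodes) / n,
    }

panel = [
    {"days_to_follow_up": 5, "med_rec_done": True,
     "baseline_score": 16, "day30_score": 10, "returns_90d": 0},
    {"days_to_follow_up": 9, "med_rec_done": False,
     "baseline_score": 14, "day30_score": 15, "returns_90d": 1},
]
print(dashboard(panel))
```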

Do you have any advice for our readers?

Don’t wait for perfect consensus to start measuring what matters. Pick a few outcomes tied to the seven, 30, and 90-day arc, embed medication reconciliation and SDOH checks in your first visit, and make the results visible to your team. Share the wins and the misses—transparency drives improvement. If you can connect the dots from action to outcome for one patient, you can scale it to many.
