Angela Adams is a Registered Nurse who has spent more than a decade at the intersection of clinical care and artificial intelligence. Beginning her career in critical care at Duke University Medical Center, she witnessed firsthand how manual inefficiencies could derail patient outcomes. This experience propelled her into health informatics, where she has led initiatives at organizations like Jvion and Inflo Health to apply machine learning to preventing patient deterioration and closing the loop on missed radiology follow-ups. Today, she shares her insights on why “finding” a medical issue is only half the battle and how orchestration is the key to solving healthcare’s “last mile” problem.
The following discussion explores the breakdown in follow-up care for pulmonary nodules, the hidden costs of manual tracking in imaging, the dangers of the “diagnostic alert trap,” and the essential shift toward measurable, closed-loop orchestration in modern radiology departments.
AI is flagging pulmonary nodules more accurately, yet follow-up completion rates can be as low as 30%. What specific operational barriers cause this breakdown after a recommendation is filed, and what practical steps can teams take to ensure these findings don’t just sit at the bottom of an inbox?
The breakdown occurs because we’ve optimized the “find” but neglected the “finish.” When an AI flags a nodule and a radiologist files a report, the recommendation enters a fragile chain of handoffs involving EHR orders, prior authorizations, and scheduling. In many settings, follow-up completion rates hover around 30% because the process is manual and fragmented; a recommendation at the bottom of an inbox is essentially a patient lost to the system. To fix this, teams must move away from isolated detection and toward orchestration, ensuring every flagged finding is tied to a specific clinical pathway. We need to treat follow-up as a “system of record” where the handoff from the report to the scheduled exam is tracked with the same rigor as the initial diagnostic image.
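To make the idea concrete, here is a minimal sketch of what tracking that chain of handoffs could look like in code; the state names, fields, and transition rules are illustrative assumptions, not any vendor’s actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum, auto

class FollowUpState(Enum):
    FLAGGED = auto()       # finding identified by AI and confirmed in the report
    ORDER_PLACED = auto()  # follow-up order entered in the EHR
    AUTHORIZED = auto()    # prior authorization cleared
    SCHEDULED = auto()     # exam on the calendar
    COMPLETED = auto()     # exam performed; the loop is closed
    RESOLVED = auto()      # documented clinical resolution without an exam

# Only these handoffs are legal; anything else is a broken link in the chain.
ALLOWED_TRANSITIONS = {
    FollowUpState.FLAGGED: {FollowUpState.ORDER_PLACED, FollowUpState.RESOLVED},
    FollowUpState.ORDER_PLACED: {FollowUpState.AUTHORIZED, FollowUpState.RESOLVED},
    FollowUpState.AUTHORIZED: {FollowUpState.SCHEDULED},
    FollowUpState.SCHEDULED: {FollowUpState.COMPLETED},
}

@dataclass
class FollowUpRecord:
    patient_id: str
    finding: str
    state: FollowUpState = FollowUpState.FLAGGED
    history: list = field(default_factory=list)  # auditable trail of handoffs

    def advance(self, new_state: FollowUpState) -> None:
        """Record a handoff, rejecting transitions that skip a step."""
        if new_state not in ALLOWED_TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal handoff: {self.state} -> {new_state}")
        self.history.append((datetime.now(), self.state, new_state))
        self.state = new_state
```

The point of the transition table is that a recommendation cannot silently jump to “done”; every step in the fragile chain leaves an auditable trace, which is what distinguishes a system of record from an inbox.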
Incidental findings now appear in up to 40% of advanced imaging studies. When hospitals rely on manual tools like spreadsheets or shared inboxes to track these, how does it impact staff efficiency and patient safety? Please share an example of the hidden labor costs involved in these manual cascades.
With incidental findings appearing in 20% to 40% of studies, relying on spreadsheets creates layers of “manual scaffolding” that are both brittle and dangerous. This scaffolding drives a cascade of labor: staff must manually clarify clinical relevance, chase down the right care team, manage prior authorizations, and send reminders. One major hidden cost is the time clinicians spend re-triaging the same data over and over because there isn’t a clear, automated path to the next step. Every minute a nurse or coordinator spends reconciling a spreadsheet is a minute stolen from direct patient care, and every manual entry is a point where a high-risk finding can simply disappear, exposing the hospital to significant liability.
Many AI tools create worklists that aren’t tied to ordering or scheduling, often leading to a “diagnostic alert trap.” How can organizations integrate these outputs into existing EHR workflows to prevent backlogs, and what role-based controls are necessary to maintain clinician trust in the automated data?
A worklist by itself is not a follow-up program; it’s often just a backlog in disguise. To avoid the “diagnostic alert trap,” AI outputs must be integrated directly into existing EHR workflows so that an alert can immediately trigger an order or a scheduling task without leaving the system. Governance plays a huge role here—it’s not just about paperwork, it’s about safety and trust. We need role-based controls that dictate who sees an AI output and how it is labeled, ensuring clinicians don’t feel the need to double-check every automated flag. When the right information is presented to the right person in the right context, we eliminate the manual review steps that current “point solutions” often require.
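As a rough sketch of role-based controls (the roles, labels, and routing rules below are hypothetical, not a real EHR API), the core idea is that each role sees an AI output only in the form, and with the provenance label, appropriate to its responsibilities:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIFinding:
    patient_id: str
    description: str
    confidence: float        # model confidence score, 0.0 to 1.0
    validated: bool = False  # has a radiologist confirmed the flag?

def route_finding(finding: AIFinding, role: str) -> Optional[dict]:
    """Return the view of a finding a given role should see, if any."""
    label = "radiologist-confirmed" if finding.validated else "AI-generated, unreviewed"
    if role == "radiologist":
        # Radiologists see everything, including raw model output.
        return {"view": finding.description, "label": label,
                "confidence": finding.confidence}
    if role == "ordering_clinician" and finding.validated:
        # Clinicians only see confirmed findings, clearly labeled,
        # so they never have to second-guess an unreviewed flag.
        return {"view": finding.description, "label": label}
    if role == "scheduler" and finding.validated:
        # Schedulers see just enough to book the exam; unreviewed AI
        # flags never land in their queue as backlog.
        return {"view": f"Schedule follow-up exam for patient {finding.patient_id}",
                "label": label}
    return None  # everyone else sees nothing, which prevents alert fatigue
```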
True orchestration reserves clinician time for expertise while automating logistical friction. How should a system distinguish between a moment requiring human empathy and a routine scheduling task, and what does a successful “system of record” for follow-up look like in a high-volume radiology department?
Orchestration is about the deliberate allocation of human resources, recognizing that clinician time is the scarcest currency in healthcare. A successful system automates the logistical “noise,” like routine scheduling outreach, where most patients simply want communication that is clear and timely. But it must be intelligent enough to hand the reins to a human when a finding is concerning or frightening, because that is when a patient needs empathy and expert explanation. In a high-volume department, a “system of record” looks like a digital layer that tracks every actionable finding from the moment it is flagged until the follow-up exam is completed or a documented resolution is reached.
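One way to encode that division of labor is a simple triage rule; the severity categories and thresholds here are purely illustrative assumptions, not clinical guidance:

```python
def triage_outreach(severity: str, days_overdue: int) -> str:
    """Decide whether outreach can be automated or needs a human touch.

    Illustrative only: a real system would draw severity categories and
    escalation thresholds from clinical governance, not hard-coded values.
    """
    if severity in {"suspicious", "high-risk"}:
        # A concerning finding calls for a clinician conversation,
        # not an automated text message.
        return "route_to_clinician_call"
    if days_overdue > 30:
        # Routine follow-ups that have gone quiet get a human coordinator.
        return "escalate_to_coordinator"
    # On-track, routine scheduling is exactly the logistical noise
    # automation should absorb.
    return "send_automated_reminder"
```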
Closing the loop on actionable findings requires a defined end state, such as a completed exam or a documented resolution. How can leadership transition from simple AI pilots to a model of measurable reliability, and what specific metrics indicate that a follow-up program is actually working?
Leadership must stop focusing on the “find” and start operationalizing the “finish” by defining a clear closure event for every recommendation. We move from pilot to reliability by assigning ownership across the handoffs between radiology, ordering clinicians, and patient outreach teams. Key metrics include the follow-up completion rate, the time elapsed from recommendation to completed exam, and the number of overdue exceptions surfaced early enough to act on. When a system provides an auditable record of every completed referral or documented clinical resolution, you know the program is working because you’ve moved from “detecting” a problem to “resolving” it for the patient.
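A sketch of how those three metrics might be computed over follow-up records (the field names are assumptions for illustration):

```python
from datetime import date
from statistics import median

def program_metrics(records: list[dict], today: date) -> dict:
    """Compute headline metrics for a follow-up program.

    Each record is assumed to carry three fields: 'recommended_on' (date),
    'completed_on' (date or None), and 'due_by' (date).
    """
    completed = [r for r in records if r["completed_on"] is not None]
    overdue = [r for r in records
               if r["completed_on"] is None and r["due_by"] < today]
    days_to_exam = [(r["completed_on"] - r["recommended_on"]).days
                    for r in completed]
    return {
        "completion_rate": len(completed) / len(records) if records else 0.0,
        "median_days_to_exam": median(days_to_exam) if days_to_exam else None,
        "overdue_exceptions": len(overdue),
    }
```

Reporting the overdue count separately matters: a high completion rate can hide a tail of patients quietly aging past their due dates, and surfacing those exceptions early is what lets a team intervene before a finding is lost.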
What is your forecast for the future of AI-driven radiology follow-up?
I believe the next wave of value in radiology will shift entirely away from standalone detection models and toward closed-loop orchestration systems. We will see a future where an actionable finding automatically initiates a clinical pathway that tracks itself to completion, ensuring no patient falls through the cracks regardless of imaging volume. Success will no longer be measured by how many nodules an AI can find, but by how many of those findings resulted in timely, completed care. The organizations that thrive will be those that treat AI as a tool for operationalizing the finish line, turning vast amounts of data into a coordinated, reliable journey for every patient.
