With a deep background in the manufacturing of cutting-edge medical devices for diagnostics and treatment, Faisal Zain has spent his career at the intersection of technology and patient care. He has witnessed firsthand the immense promise of innovation and the practical hurdles that often prevent it from reaching its full potential. As healthcare organizations pour billions into artificial intelligence, with a staggering 95% of generative AI projects failing to see returns, his insights are more critical than ever. We sat down with Faisal to understand this disconnect, exploring the common patterns that lead to failure and the strategic frameworks that define the successful 5%. Our conversation delved into the nuances of integrating AI into complex clinical workflows, the art of aligning technology with C-suite objectives, and the vital human roles required to build a lasting, effective AI capability within a healthcare organization.
The article notes that while physician AI use is soaring, 95% of enterprise generative AI investments are seeing no returns. Could you share an experience of a project that landed in that 95%, and explain which of the common failure patterns was most to blame?
Absolutely, and it’s a story I see play out far too often. I recall a brilliant team of data scientists at a large hospital system who developed a predictive algorithm for identifying patients at high risk for sepsis. Technically, it was flawless. It had incredible accuracy in their testing environment. The problem was, to use it, a nurse on a busy ward had to stop what they were doing, log into a completely separate, standalone web portal, manually enter a patient’s ID, and then interpret a complex risk score. This project was a perfect storm of failure patterns. The most glaring was ‘end-user friction.’ It wasn’t built into their existing EHR; it was one more screen, one more password to remember. It also suffered from ‘low-impact workflow selection.’ While sepsis is critical, the tool didn’t solve the nurses’ most immediate, day-to-day problem, which was managing their overwhelming documentation load. It became an expensive, ignored experiment because it was designed in a vacuum, completely disconnected from the chaotic reality of a hospital floor.
You mentioned that ‘end-user friction’ is a primary killer of adoption. Could you walk us through the practical, step-by-step process for properly integrating a new AI feature into a core system like an EHR to ensure it’s actually used?
This is where the real work begins, long before a single line of code is written. The first step is always deep immersion. You can’t design for a workflow you don’t intimately understand. This means our clinical informatics experts and engineers spend days, not hours, shadowing clinicians. They watch, they listen, and they feel the pain points of juggling multiple applications and alerts. From there, we move to collaborative design. We bring a low-fidelity mockup, often just a sketch, directly into the EHR environment and ask, “If this button appeared here and gave you this insight, would it help or hinder you?” This iterative feedback loop is crucial. Once we have a design that feels native to their workflow, we launch a small pilot with a select group of users. During this phase, we track metrics relentlessly. We’re not just looking at adoption rates; we’re measuring things like time-on-task for specific documentation, the number of clicks saved per patient encounter, and even qualitative feedback on cognitive load. True adoption isn’t just about people using the tool; it’s about the tool becoming an invisible, indispensable part of how they deliver care.
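To make that measurement step concrete, here is a minimal sketch of the kind of pilot metrics aggregation Faisal describes, comparing encounters with and without the embedded feature. The schema, field names, and `EncounterLog` type are hypothetical illustrations, not a reference to any particular EHR's API.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class EncounterLog:
    """One patient encounter recorded during the pilot (hypothetical schema)."""
    clinician_id: str
    documentation_seconds: float  # time-on-task for documentation
    clicks: int                   # clicks during the encounter
    used_ai_feature: bool         # whether the embedded AI feature was used

def pilot_summary(logs: list[EncounterLog]) -> dict[str, float]:
    """Summarize the pilot on the metrics named above: adoption rate,
    documentation time-on-task, and clicks saved per encounter.
    Assumes both arms (with and without the feature) have data."""
    with_ai = [log for log in logs if log.used_ai_feature]
    without = [log for log in logs if not log.used_ai_feature]
    return {
        "adoption_rate": len(with_ai) / len(logs),
        "doc_seconds_with_ai": mean(log.documentation_seconds for log in with_ai),
        "doc_seconds_without": mean(log.documentation_seconds for log in without),
        "clicks_saved_per_encounter": mean(log.clicks for log in without)
                                      - mean(log.clicks for log in with_ai),
    }
```

In practice these figures would come from EHR audit logs rather than hand-built records, but the comparison logic is the same: track the pilot arm against business-as-usual on the exact friction points the shadowing phase surfaced.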
The text emphasizes that misalignment with the organizational mission starves projects of support. How should a project leader frame an AI initiative to the C-suite, linking it directly to strategic goals like cost reduction, to secure the necessary resources?
When you walk into that boardroom, you cannot lead with the technology. The C-suite doesn’t speak in terms of large language models or neural networks; they speak in the language of strategic imperatives—cost reduction, quality improvement, and patient access. The pitch must start with the organizational goal. You say, “Our strategic objective is to reduce operational costs by 10% over the next two years. I have a plan to contribute to that.” Then, you introduce the AI initiative as the specific mechanism to achieve a piece of that goal. For example, you might propose an AI-powered patient scheduling system. The pitch must then include a clear ROI model. It should detail how automating scheduling will reduce administrative staff hours by a specific amount, decrease patient no-show rates by a quantifiable percentage, and improve patient throughput, all of which translate directly to dollars saved. Your presentation absolutely must include a defined budget, a timeline with clear milestones, and, critically, the measurement framework you’ll use to prove it’s working. You have to show them you’re not just running an experiment; you’re making a strategic business investment with a predictable return.
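As a rough illustration of the ROI framing Faisal describes, here is a back-of-the-envelope model for the hypothetical AI scheduling pitch. Every number below is an assumption chosen for illustration, not a figure from the article.

```python
# Back-of-the-envelope ROI model for an AI scheduling pitch.
# All inputs are illustrative assumptions, not real figures.

admin_hours_saved_per_week = 40        # assumed reduction in manual scheduling work
admin_hourly_cost = 35.0               # assumed fully loaded cost per admin hour, USD
weeks_per_year = 50

baseline_no_show_rate = 0.12           # assumed current no-show rate
projected_no_show_rate = 0.08          # assumed rate after automated reminders
annual_appointments = 100_000
revenue_per_visit = 150.0              # assumed average reimbursement, USD

annual_admin_savings = admin_hours_saved_per_week * admin_hourly_cost * weeks_per_year
recovered_visits = (baseline_no_show_rate - projected_no_show_rate) * annual_appointments
recovered_revenue = recovered_visits * revenue_per_visit

annual_system_cost = 250_000.0         # assumed license + integration cost
roi = (annual_admin_savings + recovered_revenue - annual_system_cost) / annual_system_cost

print(f"Admin savings:     ${annual_admin_savings:,.0f}/yr")
print(f"Recovered revenue: ${recovered_revenue:,.0f}/yr")
print(f"ROI: {roi:.0%}")
```

Under these assumptions the model yields roughly $670,000 in annual benefit against a $250,000 cost. The point is less the specific numbers than the structure: every line of the pitch maps an operational change to dollars, which is the language the boardroom speaks.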
The article advocates for “technology matching” and highlights a sophisticated solution called “AI agentification.” Could you break down what an intelligent orchestrator agent does in practice, perhaps using a scheduling example?
Think of an intelligent orchestrator agent as a master conductor for a symphony of specialized AIs. In most organizations, you have different AI tools that are very good at one specific thing—one for billing, one for clinical documentation, one for patient communication. By themselves, they are disconnected instruments. The orchestrator is the conductor that reads the “music”—a command from a user—and cues each instrument at the right time to create a harmonious result, often completely behind the scenes. Let’s take scheduling. A patient might send a simple text: “I need to book my three-month follow-up with my cardiologist.” The orchestrator agent receives this natural language command. It first delegates to a ‘patient identity’ agent to verify who is sending the message. Simultaneously, it tasks a ‘clinical data’ agent to check the patient’s EHR to confirm what kind of follow-up is needed. Then, it pings the ‘scheduling’ agent to find open slots in the cardiologist’s calendar that match the required appointment type. Finally, it uses a ‘communication’ agent to propose a few options back to the patient via text. The user has one simple, conversational interaction, but in the background, four or five specialized AIs have been perfectly coordinated to complete the task seamlessly.
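A minimal sketch of the orchestrator pattern Faisal describes might look like the following. Each "agent" here is a stub; in a real deployment these would wrap separate AI services, and all names and interfaces below are hypothetical.

```python
def identity_agent(message_sender: str) -> str:
    """Verify who is texting and return a patient ID (stubbed)."""
    return "patient-123"

def clinical_data_agent(patient_id: str) -> str:
    """Check the EHR for the kind of follow-up needed (stubbed)."""
    return "cardiology-followup"

def scheduling_agent(appointment_type: str) -> list[str]:
    """Find open slots matching the appointment type (stubbed)."""
    return ["Tue 09:30", "Wed 14:00", "Fri 11:15"]

def communication_agent(patient_id: str, slots: list[str]) -> str:
    """Draft the reply proposing options back to the patient."""
    options = ", ".join(slots)
    return f"We can book your follow-up at: {options}. Reply 1, 2, or 3."

def orchestrator(message_sender: str, text: str) -> str:
    """Coordinate the specialist agents behind one conversational turn."""
    patient_id = identity_agent(message_sender)      # who is this?
    visit_type = clinical_data_agent(patient_id)     # what do they need?
    slots = scheduling_agent(visit_type)             # when can we do it?
    return communication_agent(patient_id, slots)    # propose options

print(orchestrator("+1-555-0100", "I need to book my three-month follow-up"))
```

The design choice worth noting is that the patient never sees the delegation: one conversational turn in, one conversational turn out, with the coordination entirely behind the scenes, exactly as in the conductor analogy.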
The piece criticizes “inadequate measurement frameworks,” where organizations can’t tell if an AI is paying off. Beyond simple time savings, what are some specific, nuanced metrics a hospital should track to prove an AI tool’s true value?
Time savings are important, but they are often just the tip of the iceberg and can be difficult to convert into hard ROI. To truly prove value, you have to measure the second- and third-order effects. For an ambient listening solution that reduces physician documentation time by, as the article notes, two and a half hours a day, the real value isn’t just the time itself. The crucial question is: what happens to that recaptured time? We should be tracking whether it translates into a 10% increase in patient-facing minutes per visit, which directly impacts patient satisfaction scores. We can measure whether it leads to more accurate and timely chart closures, which accelerates the revenue cycle and reduces claim denials. An even more powerful, albeit longer-term, metric is tracking physician burnout and retention rates in departments that have adopted the tool versus those that haven’t. Reducing turnover by even a few percentage points represents millions in savings, and that is a metric that speaks volumes to an executive.
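To show why a small retention shift "represents millions," here is a toy calculation. The headcount, turnover rates, and replacement cost are illustrative assumptions; published estimates of physician replacement cost vary widely by specialty.

```python
# Toy calculation: value of a small reduction in physician turnover.
# All figures are illustrative assumptions.

physicians = 500                 # physicians in the adopting departments
baseline_turnover = 0.07         # assumed annual turnover without the tool
improved_turnover = 0.05         # assumed annual turnover with the tool
replacement_cost = 750_000.0     # assumed recruiting + lost-revenue cost per departure, USD

departures_avoided = physicians * (baseline_turnover - improved_turnover)
annual_savings = departures_avoided * replacement_cost

print(f"Departures avoided: {departures_avoided:.0f}/yr")
print(f"Retention savings:  ${annual_savings:,.0f}/yr")
```

Under these assumptions, a two-percentage-point improvement avoids about ten departures a year, worth several million dollars, which is why retention belongs in the measurement framework alongside time savings.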
The article lists several critical roles for building a sustainable AI capability, including clinical informatics experts who act as “translators.” Can you share a concrete example of how this translator role prevents a project from failing?
I saw a perfect example of this recently. A team of data engineers was incredibly proud of an alert system they had built to predict patient deterioration. It was technically brilliant, pulling in dozens of real-time data streams. Their plan was to have it fire a pop-up alert on the clinician’s screen. The clinical informaticist on the team stepped in and stopped the rollout. She said, “I understand the data is accurate, but our doctors are already drowning in ‘alert fatigue.’ Another pop-up will be ignored or, worse, create resentment.” She was the translator. She explained the clinical reality—the noise, the pressure, the cognitive overload—to the engineers. Then, she worked with both sides to redesign the solution. Instead of a disruptive alert, they embedded a small, color-coded risk score directly into the patient list banner in the EHR. It was subtle, non-intrusive, and provided the same critical information within the clinician’s existing workflow. That single act of translation turned a guaranteed failure into a tool that was not only adopted but loved by the clinical staff because it respected their reality.
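The redesigned presentation layer can be as simple as a thresholding function. The sketch below is hypothetical, and the cutoffs are placeholders; in a real system they would be clinically validated, not hard-coded.

```python
def risk_banner_color(risk_score: float) -> str:
    """Map a deterioration risk score in [0, 1] to a banner color band.
    Thresholds are illustrative placeholders, not validated cutoffs."""
    if risk_score >= 0.8:
        return "red"    # high risk: immediate attention
    if risk_score >= 0.5:
        return "amber"  # elevated risk: review soon
    return "green"      # low risk: routine monitoring
```

The same predictive model feeds both designs; the translation was in changing the delivery from an interruptive pop-up to an ambient cue inside an existing view.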
What is your forecast for how AI will reshape the day-to-day experience of both clinicians and patients over the next five years?
My forecast is that the most successful AI will become functionally invisible. We will move past the era of “AI projects” and into an era of AI-native healthcare. For clinicians, this means AI won’t be another application they have to open; it will be a quiet, intelligent layer working within the tools they already use. It will pre-chart patient visits, surface relevant clinical insights without being asked, and handle the vast majority of administrative burdens that currently lead to burnout. Their daily experience will feel less like data entry and more like practicing medicine. For patients, the experience will be one of seamless access and personalization. They will schedule appointments through simple conversations, receive follow-up care plans that are dynamically tailored to their progress, and feel more connected to their care team because the technology is handling the logistical friction. Ultimately, AI’s greatest achievement will be to fade into the background, augmenting human expertise and freeing up clinicians to focus on the deeply human, empathetic side of patient care.
