As a Clinical Informatics professional with a dual background in frontline medicine and computer science, Suhailin Abdul Aziz occupies a unique space, acting as a translator between the worlds of patient care and digital innovation. Her work focuses on weaving empathy and technical precision into Singapore’s national health systems. In this conversation, she explores the human side of health technology, discussing how thoughtful data integration can transform patient safety, why building an emotional connection with clinicians is key to adoption, and what the future holds as AI and genomics become part of our everyday healthcare. She offers a look into how designing with users, not just for them, is the only way to build digital health services that are truly inclusive and trustworthy.
You described your role as a bridge between clinicians and technologists, drawing on your dual background. Could you share a specific instance where your clinical experience helped resolve a design conflict, and detail how that intervention ultimately improved the system for patients?
I recall a design session for a new electronic medication charting system where the tech team had proposed a highly efficient, multi-step verification process to ensure data accuracy. On paper, it looked perfect—it would reduce errors by forcing a series of checks. However, my clinical experience immediately raised a red flag. I could just picture a nurse during a hectic night shift, dealing with multiple urgent patient needs, finding this process incredibly cumbersome. The risk wasn’t just frustration; it was that they would find workarounds that could bypass the safety checks entirely. I advocated for a more integrated, single-screen verification that used smart prompts and color-coding. It was a subtle change, but by understanding the high-pressure reality of a hospital ward, we created a system that was both safer and more intuitive, ultimately protecting patients from potential medication errors born from user fatigue.
You shared a powerful story about improved data integration preventing duplicate tests for a patient. Can you walk us through the key technical or policy changes that enabled this and describe the feedback you received from clinicians who used this more coordinated system?
That particular case was a real breakthrough moment. The patient, an elderly woman with several chronic illnesses, used to visit different hospitals and clinics, and her records were completely fragmented. Technically, the key change was the implementation of a national health data exchange standard that allowed disparate electronic medical record systems to finally speak the same language. On the policy side, we established clear data-sharing agreements between public and private institutions, which was a huge hurdle. The first time a doctor at one hospital was able to pull up her lab results from a different clinic a month prior, he was visibly relieved. He later told me, “I was about to order another full blood panel, but seeing her history saved her an unnecessary needle stick and saved the system money. More importantly, I had a complete picture and could make a confident decision about her medication.” That’s the real win—not the tech, but the trust and safety it enables.
Your project on a shared health data platform measured success via data validation metrics and clinician feedback. Could you give us an example of a specific metric that improved and share an anecdote from a clinician that shows how this trust in the data changed their daily workflow?
One of the key metrics we tracked was the “cross-institutional data consistency rate” for patient allergies. When we started, the mismatch rate was concerningly high because information was manually re-entered at each location. After implementing standardized data entry fields and a shared repository, we saw a 40% improvement in consistency within six months. The most powerful feedback came from an emergency department physician. She shared a story about a patient who was brought in unresponsive. She was able to instantly pull his unified record and see a critical penicillin allergy noted by a general practitioner just two weeks earlier. She said, “Five years ago, I would have been working in the dark. Today, that single piece of trusted data prevented a potentially fatal decision.” That anecdote was more impactful than any chart or graph; it demonstrated that data integrity isn’t an abstract IT goal—it’s a lifeline.
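The "cross-institutional data consistency rate" she describes can be made concrete with a small sketch. This is an illustrative calculation only, assuming each institution exports a simple patient-to-allergy mapping; the record layout, institution names, and patient IDs are hypothetical, not the actual national schema.

```python
# Minimal sketch of a cross-institutional consistency check for one
# field (allergies). Record layouts and names are illustrative.

def consistency_rate(records_by_institution):
    """records_by_institution: dict mapping institution name ->
    dict of patient_id -> set of recorded allergy codes.
    Returns the fraction of shared patients whose allergy sets
    match exactly across every institution that holds them."""
    # Collect, per patient, the allergy set seen at each institution.
    per_patient = {}
    for inst, records in records_by_institution.items():
        for patient_id, allergies in records.items():
            per_patient.setdefault(patient_id, []).append(frozenset(allergies))

    # Only patients seen at more than one institution can mismatch.
    shared = {p: sets for p, sets in per_patient.items() if len(sets) > 1}
    if not shared:
        return 1.0
    consistent = sum(1 for sets in shared.values() if len(set(sets)) == 1)
    return consistent / len(shared)

records = {
    "Hospital A": {"S100": {"penicillin"}, "S200": {"nsaid"}},
    "Clinic B":   {"S100": {"penicillin"}, "S200": set()},  # S200 mismatches
}
print(consistency_rate(records))  # 0.5: one of two shared patients matches
```

Standardized entry fields attack exactly the failure mode this metric exposes: manual re-entry at each site producing divergent allergy sets for the same patient.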
You discovered that user adoption hinges on an emotional connection. Recalling a specific user engagement session, what was one piece of clinician frustration you heard, and how did a minor workflow change you implemented directly address that and help build that crucial trust?
During one co-design session, a senior nurse expressed deep frustration with the patient discharge process. She explained, with a tired voice, that generating the final summary required navigating through five different screens to pull and confirm information. She felt like an administrator, not a caregiver, and said it was the part of her day she dreaded most. Hearing that, we didn’t just take a note; we paused the session and workshopped it right there. The tech team, listening intently, realized they could create a “One-Click Discharge Summary” button that would auto-populate 90% of the required fields for her review. When we rolled out that small feature a few weeks later, she emailed me directly. She said it saved her nearly 30 minutes per shift and, more importantly, made her feel like we had actually listened. That single, minor change did more to build trust and encourage adoption across her entire department than months of formal training.
You mentioned using AI to flag at-risk patients. What are the first three steps a public health agency should take to implement such a system ethically, and what governance is needed to ensure the AI model avoids bias and is trustworthy?
Implementing AI ethically requires a foundation of deliberate design, not just technical prowess. The first step is to establish a cross-functional ethics board—including clinicians, data scientists, ethicists, and patient advocates—before a single line of code is written. Their job is to define the problem and ensure the AI’s objective is centered on equity. Second, you must rigorously audit your training data. This means actively identifying and mitigating biases by ensuring the data represents your entire population, especially minority and underserved groups who are often underrepresented. The third step is to mandate a “human-in-the-loop” framework, where the AI serves as a decision-support tool, providing alerts and recommendations, but a qualified clinician always makes the final call. For governance, you need continuous performance monitoring to detect any “bias drift” over time, absolute transparency in how the model works, and a clear, accessible process for patients and clinicians to appeal or question an AI-generated recommendation. Trust is built on accountability.
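The "bias drift" monitoring she mandates can be sketched as a periodic fairness check. The version below is a minimal illustration, not a prescribed governance policy: it compares the model's flag rate across demographic subgroups in each monitoring window and raises an alert when the gap exceeds a tolerance. The group labels and the 10-point tolerance are hypothetical.

```python
# Hedged sketch of a "bias drift" check: compare the model's flag rate
# across subgroups each monitoring window and alert when the spread
# between groups exceeds a tolerance. Thresholds are illustrative.

def flag_rates(predictions):
    """predictions: list of (subgroup, was_flagged) pairs."""
    totals, flagged = {}, {}
    for group, was_flagged in predictions:
        totals[group] = totals.get(group, 0) + 1
        flagged[group] = flagged.get(group, 0) + int(was_flagged)
    return {g: flagged[g] / totals[g] for g in totals}

def bias_drift_alert(predictions, tolerance=0.10):
    """Return (alert, rates). The alert fires when the spread between
    the highest and lowest subgroup flag rates exceeds the tolerance."""
    rates = flag_rates(predictions)
    spread = max(rates.values()) - min(rates.values())
    return spread > tolerance, rates

# One monitoring window: group_a flagged at 30%, group_b at 10%.
window = [("group_a", True)] * 30 + [("group_a", False)] * 70 \
       + [("group_b", True)] * 10 + [("group_b", False)] * 90
alert, rates = bias_drift_alert(window)
print(alert, rates)  # True: spread of 0.20 exceeds the 0.10 tolerance
```

In practice such a check would feed the ethics board's review rather than act autonomously, consistent with the human-in-the-loop framework she describes.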
Looking toward the integration of genomics into routine care, what specific data standards for genomic exchange are most critical? How do you foster the multidisciplinary collaboration required to translate these complex genetic insights into practical, everyday clinical workflows for healthcare providers?
For genomics to become part of routine care, we absolutely need robust data standards. The most critical are standards for representing genetic variants and for clinical data exchange, ensuring that a genetic test result from any lab can be seamlessly and accurately integrated into a patient’s electronic health record. Without this, we’ll have a digital Tower of Babel. Fostering collaboration is the human side of the challenge. The key is to create dedicated “translational informatics” teams. These teams must bring together geneticists, who understand the science; bioinformaticians, who can process the data; clinicians, who know the workflow; and UX designers, who can make the information understandable. Their joint mission is to transform a dense, 50-page genetic report into a simple, actionable alert within the EMR—for instance, a pop-up that says, “Patient’s genetic profile indicates high risk for adverse reaction to this medication.” It’s about bridging that last mile between complex data and a clear clinical decision.
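The "last mile" she describes, from a 50-page report to a single actionable alert, amounts to checking a patient's variant list against a curated drug-gene interaction table. A minimal sketch follows; the variant labels, drug names, and lookup table are hypothetical placeholders, not a real pharmacogenomic knowledge base.

```python
# Illustrative sketch of reducing a genomic report to an actionable EMR
# alert. Variant and drug names are hypothetical placeholders.

# Hypothetical curated table: variant -> drugs with adverse-reaction risk.
DRUG_GENE_RISKS = {
    "GENE1*2": {"drug_x"},
    "GENE2*17": {"drug_y", "drug_z"},
}

def medication_alerts(patient_variants, prescribed_drugs):
    """Return one alert string per prescribed drug that conflicts with
    a variant in the patient's genomic profile."""
    alerts = []
    for variant in patient_variants:
        risky = DRUG_GENE_RISKS.get(variant, set())
        for drug in prescribed_drugs:
            if drug in risky:
                alerts.append(
                    f"Patient's genetic profile ({variant}) indicates "
                    f"high risk for adverse reaction to {drug}."
                )
    return alerts

print(medication_alerts({"GENE1*2"}, ["drug_x", "drug_y"]))
```

The hard work, of course, is not this lookup but curating the table and standardizing the variant representations feeding it, which is exactly what the translational informatics teams exist to do.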
What is your forecast for the integration of AI and genomic data in public health over the next five years?
Over the next five years, I foresee a significant shift from reactive to predictive healthcare, driven by the convergence of AI and genomics. We will move beyond pilot programs to the initial stages of population-level implementation. For example, public health agencies will begin using AI to analyze aggregated, anonymized genomic data to predict infectious disease outbreaks or identify geographic hotspots for genetic predispositions to conditions like heart disease. On an individual level, a patient’s genomic profile will start becoming a standard component of their electronic health record, with AI-powered clinical decision support tools quietly working in the background to flag potential drug-gene interactions or recommend personalized screening schedules. The biggest hurdle won’t be the technology itself, but rather establishing the national governance frameworks and fostering the public trust needed to manage this deeply personal data ethically and equitably. The future is incredibly promising, but it must be built on a foundation of transparency and inclusivity.
