FBI Urges Hospitals to Partner Against Cyberthreats

Faisal Zain stands at the intersection of medical technology and cybersecurity, a field where innovation in diagnostics and treatment is constantly shadowed by evolving digital threats. With extensive experience in the manufacturing of critical medical devices, he has a frontline perspective on the vulnerabilities that nation-states and criminal syndicates seek to exploit. Our conversation delves into the complex nature of these “blended threats,” where state actors use criminal proxies to attack healthcare systems, creating a challenging and unpredictable risk environment. We explore the insidious danger posed by clandestine foreign IT workers, the dawning cyber arms race fueled by artificial intelligence, and the paramount importance of forging strong, proactive relationships with law enforcement. This dialogue reveals not just the threats, but the practical, collaborative strategies hospitals must adopt to protect their networks, their data, and ultimately, their patients.

Nation-states are increasingly using criminal groups as proxies, creating a “blended threat.” What are the primary motivations behind this strategy, and how does it change the risk profile for a hospital? Could you share a specific anecdote of how this collaboration might manifest in a real-world attack?

This is what we call the “blended threat,” and it’s a major focus for us right now. The primary motivation for a nation-state is plausible deniability and operational efficiency. By leveraging an existing criminal ecosystem, they can obscure their own involvement and use specialists who are already adept at penetrating networks. For a hospital, this completely changes the game. You might think you’re dealing with a financially motivated ransomware gang, but behind them could be a state actor with geopolitical ambitions, looking to steal research, disrupt services, or gain a strategic foothold in critical infrastructure. The attack is no longer just about money; it’s about national security.

Imagine this scenario: a known criminal group launches a phishing campaign and successfully deploys ransomware on a hospital’s network. The hospital’s incident response team treats it as a standard criminal event. What they don’t see is that the access was initially procured and then sold or handed off to a nation-state group. While the ransomware creates a noisy, distracting crisis, the state actors are quietly moving laterally, escalating privileges, and exfiltrating sensitive medical research or patient data. The ransomware is just a smokescreen for a much more consequential espionage campaign. We’ve seen cases where companies in countries like China have been publicly named and directly implicated in facilitating this kind of access for state-sponsored hacking campaigns.

The presence of clandestine remote IT workers from North Korea is a specific concern. Beyond fundraising for their regime, what are the most significant operational risks they introduce inside a health system’s network? Please outline some practical, step-by-step measures for vetting and monitoring these individuals.

The financial aspect is certainly alarming—knowing that hospital funds could be supporting nuclear weapons programs is a chilling thought. But the immediate operational risks are just as severe. When you have an adversary with that level of access, they are not just a passive employee. They have a direct line into your network to steal proprietary data, intellectual property, or vast amounts of patient health information. Even more frightening is their ability to deliver destructive malware. They could be a sleeper agent, waiting for the right moment to deploy a wiper or trigger a system-wide shutdown, causing catastrophic disruption to patient care.

As for vetting, it has to be a multi-layered process. First, rigorously verify identity and work history, looking for inconsistencies that might suggest a fabricated persona. Second, conduct thorough technical interviews and background checks that go beyond surface-level confirmation. Third, and most critically, you must assume they could still get through. Implement a “trust but verify” model for all remote workers, especially those with privileged access. This means continuous monitoring of their activity, analyzing logs for anomalous behavior, and restricting access to only the systems and data absolutely necessary for their job. Limiting their privileges contains the potential damage if one of these individuals does slip through. I hear almost weekly from hospitals that have identified and terminated access for a suspicious remote IT worker, which shows that active vigilance works.
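The “trust but verify” model described above can be approximated in code with a simple access-policy check against activity logs. The sketch below is illustrative only: the field names (`user`, `system`, `timestamp`), the example contractor ID, and the policy thresholds are all assumptions, not a prescribed schema or a specific vendor tool.

```python
from datetime import datetime

# Hypothetical per-contractor least-privilege policy: which systems this
# remote worker may touch, and during which local business hours.
POLICY = {
    "contractor_42": {
        "allowed_systems": {"ticketing", "dev-vm"},
        "work_hours": (9, 18),  # inclusive start hour, exclusive end hour
    }
}

def flag_suspicious(events):
    """Return log events that violate a monitored remote worker's policy.

    Each event is a dict with 'user', 'system', and an ISO 8601 'timestamp'.
    """
    alerts = []
    for ev in events:
        policy = POLICY.get(ev["user"])
        if policy is None:
            continue  # not a monitored remote worker
        hour = datetime.fromisoformat(ev["timestamp"]).hour
        out_of_scope = ev["system"] not in policy["allowed_systems"]
        off_hours = not (policy["work_hours"][0] <= hour < policy["work_hours"][1])
        if out_of_scope or off_hours:
            alerts.append({**ev, "out_of_scope": out_of_scope, "off_hours": off_hours})
    return alerts

events = [
    {"user": "contractor_42", "system": "ticketing", "timestamp": "2024-05-01T10:15:00"},
    {"user": "contractor_42", "system": "ehr-db",    "timestamp": "2024-05-01T03:05:00"},
]
print(flag_suspicious(events))  # only the 3 a.m. access to an unapproved system is flagged
```

In practice this logic would live in a SIEM rule rather than a script, but the principle is the same: the narrower the allowed scope, the smaller the blast radius when a fabricated persona gets through.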

Adversaries are now using AI to automate large parts of the cyberattack kill chain. For a health system just beginning to explore defensive AI, what are the first “baby steps” they should take? Can you detail a practical approach for implementing behavior-based detection on critical user accounts?

It’s a stark reality that we are in a new cyber arms race fueled by AI. A recent report from the AI firm Anthropic showed that adversaries were using AI to agentically perform 80 to 90% of their attack kill chain, from reconnaissance all the way to privilege escalation. We have to start employing similar capabilities defensively. For a health system, the idea of applying AI to the entire infrastructure is overwhelming, so the key is to start small and focused. The first “baby steps” should be to identify your most critical assets. Don’t try to boil the ocean. Pinpoint your crown jewels: key user accounts with administrative privileges, critical network devices like core switches and routers, and the data stores containing the most sensitive patient information.

Once you’ve identified these assets, you can begin a practical implementation of behavior-based detection. The first step is to pull all the relevant logs from these specific environments. You don’t need to log everything everywhere, just focus on these high-value targets. Next, run those logs through an approved, sandboxed AI platform to establish a baseline of normal activity for those critical accounts. The AI will learn what “normal” looks like—what time a user typically logs in, what systems they access, how much data they usually transfer. The final step is to set up alerts for any deviation from that baseline. When the AI detects an anomaly—like an admin account suddenly accessing a system it never has before at 3 a.m. and trying to exfiltrate data—it flags it for immediate human review. This is the future of threat hunting, and we have to start now.

Building a relationship with the FBI before a crisis is essential. For a hospital CISO who has never engaged with the Bureau, what does that first outreach look like? Please describe the initial conversation and the key people they should connect with at their local field office.

That first outreach is much simpler and less intimidating than most people think. It really just starts with a conversation. The best first step for a hospital CISO is to simply reach out to their local FBI field office and ask to speak with two key people: the Private Sector Coordinator and the Cyber Supervisor. Every one of the 56 field offices has these roles. That initial conversation is about building a human connection, not about turning over sensitive data. You can introduce yourself, explain your role at the hospital, and express a desire to build a relationship before an incident happens.

The goal is to demystify the process. You can ask them, “What does engagement with the FBI look like during a crisis? What kind of information would you need from us, and what kind of assistance can you provide?” This conversation helps allay fears, especially from the legal department. You’ll find they are people who genuinely want to help. There’s no commitment beyond that initial talk. You don’t have to feel like you’re suddenly obligated to share everything. It’s about opening a line of communication, getting their contact information into your cell phone, and knowing exactly who to call when seconds count. You don’t want to be exchanging business cards in the middle of a fire.

During a ransomware attack causing ambulance diversions, a hospital’s legal counsel may hesitate to share information. What specific assurances can you provide regarding how the FBI protects sensitive data, avoids regulatory entanglements, and uses shared threat intelligence to directly aid in the hospital’s recovery?

This is probably the single biggest hurdle to overcome, and it’s why having that pre-existing relationship is so critical. The first assurance I would give is that the FBI operates under a victim-centric approach. They are bound by the Victims’ Rights Act, and their mission is to help you, not to penalize you. The information a hospital shares during an active investigation is protected under law enforcement sensitivity; it is not shared with regulatory agencies. The FBI is not a regulator. They are not going to make your information public or create new legal problems for you. Their teams are never going to ask for patient information or protected health information (PHI). What they need are the technical artifacts of the attack—anonymized indicators of compromise (IOCs), malware samples, wallet addresses—things that are fully aligned with threat pursuit and have nothing to do with patient privacy.

Furthermore, sharing this information directly and immediately benefits the hospital. When you provide IOCs, the FBI can run them through their vast resources. They can check against their law enforcement holdings, query intelligence community partners, and leverage their 22 cyber assistant legal attachés stationed with foreign partners globally. This often unlocks a wealth of information. They can come back to the hospital and say, “We recognize this attacker. Here are the other tools they use, here is the decryption key we recovered in a previous case, and here is how you can hunt for them in your network to ensure they are fully eradicated.” This intelligence is invaluable for containment and recovery, helping the hospital get back on its feet and resume patient care much faster. It’s not about reporting for compliance; it’s about activating a powerful partner in your recovery efforts.

What is your forecast for the evolution of cyberthreats targeting the healthcare sector over the next two years?

My forecast is that the threats will become both more sophisticated and more disruptive, driven by two major trends. First, the “blended threat” model will become the norm. We will see fewer clear lines between financially motivated criminals and nation-state espionage. Adversaries will increasingly use disruptive attacks like ransomware not just for profit, but as a strategic tool to create chaos, test our response capabilities, and cover for deeper, more patient intrusions. This means hospitals will face attacks that have complex, overlapping motives, making attribution and response far more challenging.

Second, the offensive use of AI will accelerate dramatically, leading to hyper-realistic social engineering campaigns and highly evasive malware that can adapt to a network’s defenses in real time. Our current defensive postures will struggle to keep up without a reciprocal adoption of defensive AI. Consequently, the only effective path forward will be through deeper, more trusting public-private partnerships. The era of organizations trying to defend themselves in isolation is over. In the next two years, a hospital’s resilience will be measured not just by its firewalls, but by the strength of its relationships with industry peers and government partners like the FBI, enabling the real-time sharing of intelligence needed to fight an AI-driven adversary.
