Medical device security improvements coming—but not anytime soon
This article was originally published in Forbes.
The cybersecurity of connected medical devices—notoriously poor for decades—should finally start to improve.
That is genuinely good news. But it is tempered by the reality that it will not happen quickly.
The long-overdue change is coming thanks to the federal Food and Drug Administration’s (FDA) announcement in June that it was adopting UL 2900-2-1 as a new “consensus standard” for software security in new devices seeking “premarket certification.” That is expected to have a major, positive impact on both the industry and patients.
But it doesn’t change much yet. Today’s reality, as has been acknowledged numerous times, is that the majority of medical devices in use have been designed to work flawlessly, in some cases for more than a decade. They just weren’t designed to be connected to the Internet, where hackers could undermine that flawless functionality.
Security experts, and even the US government, have been raising the alarm about it for years.
The June 2017 “Report on Improving Cybersecurity in the Healthcare Industry” from a congressional task force put it bluntly. “Healthcare cybersecurity is in critical condition,” it said.
Earlier this year, there were multiple presentations about it at major cybersecurity conferences. Josh Corman, CSO at PTC, a founder of I Am the Cavalry and a member of that congressional task force, demonstrated at the RSA conference in April, with the help of a couple of physician hackers, how an infusion pump could be compromised and put a patient at risk.
At Black Hat in Las Vegas last month, researchers Billy Rios and Jonathan Butts brought a similar message, with a session titled, “Exploiting Implanted Medical Devices.”
They demonstrated that some devices they tested, including infusion pumps, pacemakers and patient monitoring systems, had vulnerabilities that were relatively easy to exploit remotely.
The two were careful not to issue a blanket condemnation. One of their slides read, “The benefits of implanted medical devices outweigh the risks (for most people).”
Still, when it comes to the healthcare system, the reasonable expectation is that the benefits of treatments or devices ought to outweigh the risks for everybody.
And that is still not the case. Just in the last couple of weeks came examples of that ongoing insecurity.
Philips (commendably) reported finding nine vulnerabilities in its e-Alert Unit (versions R2.1 and earlier), which monitors the performance of medical imaging systems.
That produced an ICS-CERT advisory noting that the bugs were exploitable remotely and would take only a low skill level to exploit. It assigned them a CVSS (Common Vulnerability Scoring System) score of 7.1 out of 10, which falls in the standard’s “high” severity band.
They include improper input validation, cross-site scripting, information exposure, incorrect default permissions, cleartext transmission of sensitive information, cross-site request forgery, session fixation, resource exhaustion, and use of hard-coded credentials.
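For readers outside of software security, a couple of the weakness classes on that list are easy to picture. The short Python sketch below is a hypothetical illustration, not code from any Philips product: it contrasts a hard-coded credential and a cleartext HTTP transmission with the conventional fixes, a configuration-supplied secret and an encrypted (HTTPS) connection. The endpoint URL and environment variable name are invented for the example.

```python
import os
import urllib.parse
import urllib.request

# Weak pattern: use of hard-coded credentials.
# Anyone with a copy of the firmware or source code can read this password.
HARDCODED_PASSWORD = "admin123"

def send_alert_insecure(message: str) -> None:
    # Weak pattern: cleartext transmission of sensitive information.
    # The password and the alert travel over plain HTTP, readable on the wire.
    data = urllib.parse.urlencode(
        {"password": HARDCODED_PASSWORD, "msg": message}).encode()
    urllib.request.urlopen("http://monitor.example.com/alert", data=data)

def send_alert_safer(message: str) -> None:
    # Fix: the credential comes from the device's protected configuration,
    # not from the source code.
    password = os.environ["ALERT_SERVICE_PASSWORD"]
    data = urllib.parse.urlencode(
        {"password": password, "msg": message}).encode()
    # Fix: HTTPS encrypts the credential and the alert in transit.
    urllib.request.urlopen("https://monitor.example.com/alert", data=data)
```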
And the e-Alert disclosure came only a week or so after the Netherlands-based firm acknowledged unpatched vulnerabilities in its IntelliSpace Cardiovascular (ISCV) line of medical data management products, which also could compromise both patient safety and privacy.
And then came word that Qualcomm Life’s Capsule Datacaptor Terminal Server (DTS) and the Becton Dickinson Alaris TIVA syringe pump allow remote access without authentication.
Those are critical vulnerabilities (ICS-CERT gave the Capsule DTS flaw a CVSS score of 9.8 and the Alaris TIVA a 9.4) in devices that are widely used and affect both patient safety and privacy.
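For readers unfamiliar with CVSS, version 3 of the scoring standard maps numeric base scores to qualitative severity bands. The small helper below simply restates those published bands; it is not part of any advisory, and the comments tie it back to the scores quoted above.

```python
def cvss_v3_severity(score: float) -> str:
    """Map a CVSS v3 base score (0.0-10.0) to its qualitative severity rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS v3 base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_v3_severity(7.1))  # High: the Philips e-Alert advisory
print(cvss_v3_severity(9.8))  # Critical: the Capsule DTS flaw
print(cvss_v3_severity(9.4))  # Critical: the Alaris TIVA flaw
```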
The new UL (formerly Underwriters Laboratories) standard is an effort to fix bugs like that during development—before devices are in use—and calls for “structured penetration testing, evaluation of product source code, and analysis of software bill of materials”—the kinds of testing and analysis that security experts have been pushing for more than a decade.
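A “software bill of materials” is simply a structured inventory of the third-party components bundled into a device’s software. As a rough, hypothetical illustration of what analyzing one involves, the sketch below checks a small hand-written inventory against a list of known-vulnerable component versions; the component names, versions, and advisory labels are all invented for the example. Real SBOMs use standard formats and real vulnerability databases, but the principle is the same: you cannot judge a device’s exposure without knowing what software is inside it.

```python
# Hypothetical SBOM: each entry names a bundled third-party component
# and the exact version shipped in the device's software.
device_sbom = [
    {"component": "embedded-webserver", "version": "2.4.1"},
    {"component": "tls-library", "version": "1.0.2k"},
    {"component": "rtos-kernel", "version": "9.3"},
]

# Invented advisory data: component versions with published flaws.
known_vulnerable = {
    ("embedded-webserver", "2.4.1"): "ADVISORY-0001 (remote code execution)",
    ("tls-library", "1.0.1f"): "ADVISORY-0002 (memory disclosure)",
}

def analyze_sbom(sbom, advisories):
    """Return the SBOM entries that match a known-vulnerable version."""
    return [(entry, advisories[(entry["component"], entry["version"])])
            for entry in sbom
            if (entry["component"], entry["version"]) in advisories]

for entry, advisory in analyze_sbom(device_sbom, known_vulnerable):
    print(f"{entry['component']} {entry['version']}: {advisory}")
# Prints: embedded-webserver 2.4.1: ADVISORY-0001 (remote code execution)
```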
The new standard won’t make future devices bulletproof; nothing will. But it should mean that stories like these become much less frequent, and that all patients can have a higher level of confidence that the benefits do indeed outweigh the risks, by a lot.
It’s just that, given a “refresh” cycle for these devices of five years or more, those improvements are likely to be a long time coming.
Do you work with medical devices? When’s the last time you reviewed your security initiative?
*** This is a Security Bloggers Network syndicated blog from Software Integrity authored by Taylor Armerding. Read the original post at: https://www.synopsys.com/blogs/software-security/security-medical-devices-wont-improve/