
FHIR API Security Research Sparks Debate


Alissa Knight released her report “Playing with FHIR” a couple of weeks ago (download it here), covering her investigation into the security of healthcare apps and APIs which use the FHIR standard. The report has certainly sparked a lot of debate about the security of healthcare apps, and a broader discussion about who is accountable for keeping patient data safe as the ecosystem expands. The bottom line: everyone in the healthcare ecosystem needs to take steps to shield their APIs immediately.

We hosted a very lively webinar with Alissa on October 28th to discuss the report and the reaction to its findings. The audience asked many questions, and later in this blog we capture those questions along with Alissa’s replies. We have also included her answers to some questions which, because of time constraints, couldn’t be addressed in the webinar itself.

We do encourage you to read the report yourself, but for context we will summarize what she did and what she found. Alissa tested Fast Healthcare Interoperability Resources (FHIR) APIs and a selection of apps which access those APIs. She was looking for both vulnerabilities and any exposed secrets she could then use to mount an attack.

The report, covering multiple enterprise types in the FHIR ecosystem, presents her findings. Although she found that EHR vendor implementations of FHIR were well protected, she did find major issues in the new ecosystem of aggregators and app developers who access those APIs. This underscores a systemic lack of basic protections in parts of the ecosystem – enabling unauthorized access to an enormous number of patient records.

As she says in the report, “An effective kill chain in the targeting of the healthcare industry will not be one targeting the EHR systems running in the provider’s network, but targeting the third-party FHIR aggregators and third-party apps which access these EHR APIs. It is alarming how sensitive patient data moves from higher security levels to third-party aggregators where security has been found to be flagrantly lacking”.

Among the key findings in “Playing with FHIR – Hacking and Securing FHIR APIs” were:

  • Five production FHIR APIs serving an ecosystem of 48 apps and APIs were tested (2 were from EHR vendors and did not show vulnerabilities)
  • The ecosystem covered aggregated EHR data from 25,000 providers and payers
  • 4M patient and clinician records could be accessed from a single patient login account
  • 53% of mobile apps tested had hardcoded API keys and tokens which could be used to attack EHR APIs (a sketch of how such secrets are harvested follows this list)
  • 100% of FHIR APIs tested allowed API access to other patients’ health data using one patient’s credentials
  • 50% of clinical data aggregators did not implement database segmentation, allowing access to patient records belonging to other apps developed on their platform for other providers
  • 100% of the mobile apps tested did not prevent person-in-the-middle attacks, enabling hackers to harvest credentials and steal or manipulate confidential patient data
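
To make the hardcoded-secrets finding concrete, here is a minimal sketch (not from the report) of how an auditor might hunt for embedded API keys after decompiling a mobile app with a tool such as jadx or apktool. The directory name and regex patterns are illustrative assumptions:

```python
# Illustrative sketch: scan decompiled mobile-app sources for strings that
# look like hardcoded API keys or bearer tokens. Patterns are rough
# approximations; tune them for a real engagement.
import re
from pathlib import Path

SECRET_PATTERNS = [
    # key-value assignments such as api_key = "AbC123..." (16+ chars)
    re.compile(r"""api[_-]?key['"]?\s*[:=]\s*['"][A-Za-z0-9_\-]{16,}['"]""", re.I),
    # anything shaped like a JWT (three base64url segments starting with eyJ)
    re.compile(r"eyJ[A-Za-z0-9_\-]+\.[A-Za-z0-9_\-]+\.[A-Za-z0-9_\-]+"),
]

def scan_sources(root: str) -> None:
    """Print file, line number, and match for anything resembling a secret."""
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in {".java", ".kt", ".smali", ".xml", ".json", ".js"}:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for pattern in SECRET_PATTERNS:
                for match in pattern.finditer(line):
                    print(f"{path}:{lineno}: {match.group(0)[:60]}")

scan_sources("decompiled_app")  # hypothetical output directory of jadx/apktool
```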

The report contains a large number of recommendations for regulators, FHIR API providers and app developers. Here are some of the highlights:

For API owners:

Employ an API threat management solution that prevents data from leaving via your API endpoints unless the incoming request is tokenized; a minimal sketch of this check follows the list below. This will eliminate much of the bandwidth wasted on synthetic traffic generated by malicious scripts, bots and automated tools. Put in place app and device attestation checks at your API endpoints and require any apps connecting to your endpoint to implement this control.

  • Inventory your APIs. You can’t protect what you don’t know you have. Ensure you know how many APIs you have, ensure they are all part of your enterprise vulnerability and patch management strategy, and know whether or not they are transmitting, processing, and storing sensitive or regulated data, such as PII, PCI, or PHI.
  • If you aggregate data, don’t use the same database to store the patient records for each provider. This creates the potential for all of your EHR data to be leaked as a result of a vulnerability in just one of the apps.  Each microservice should have its own isolated database.
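
As a rough illustration of the tokenized-request recommendation above, here is a minimal sketch of an API endpoint that refuses to serve FHIR data unless the request carries a valid, short-lived attestation token. It uses Flask and PyJWT for brevity; the header name, secret handling and route are assumptions made for the example, not a description of any particular vendor's product:

```python
# Minimal sketch: only requests bearing a verifiable attestation token reach
# the data. Scripts, bots and replayed requests without a valid token get 401.
import jwt  # PyJWT
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
ATTESTATION_SECRET = "replace-with-a-managed-secret"  # hypothetical; manage securely in production

@app.before_request
def require_attestation_token():
    token = request.headers.get("Attestation-Token", "")  # hypothetical header name
    try:
        jwt.decode(token, ATTESTATION_SECRET, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        abort(401)  # no valid token, no data

@app.route("/fhir/Patient/<patient_id>")
def get_patient(patient_id):
    # Authorization checks (is this *your* record?) would still go here.
    return jsonify({"resourceType": "Patient", "id": patient_id})
```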

And for app developers:

  • Obfuscation of mobile app code to secure source code against decompilers isn’t enough. Run-time shielding is also needed to prevent tampering with the mobile app or its environment. You must authenticate the app and device using SDK-powered solutions that attach a token to the API request. By using solutions that allow you to compile your mobile app with their SDK, you eliminate developer friction and limit the disruption to your existing software development lifecycle (SDLC) while gaining increased privacy of any secrets hardcoded in the app.
  • Put in place a solution for app, user and device attestation to ensure that only genuine apps running in secure environments can access the APIs, thereby eliminating any bots masquerading as your app. 
  • Implement certificate pinning between app and API to eliminate Woman-in-the-Middle (WitM) attacks. Tools are available to make this easy to deploy and administer; a sketch of the pinning idea follows this list.
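
On mobile, pinning is normally done with platform facilities (for example OkHttp's CertificatePinner on Android, or TrustKit on iOS). The sketch below shows the same idea in Python: the client refuses any TLS connection whose server certificate does not match a known fingerprint, so a WitM proxy presenting its own certificate fails the handshake. The hostname and fingerprint are placeholders:

```python
# Sketch of certificate pinning via urllib3's fingerprint assertion.
import requests
from requests.adapters import HTTPAdapter

# SHA-256 fingerprint of the API server's certificate (placeholder value).
PINNED_FINGERPRINT = "0000000000000000000000000000000000000000000000000000000000000000"

class PinnedAdapter(HTTPAdapter):
    def init_poolmanager(self, *args, **kwargs):
        # urllib3 compares the presented certificate's digest to this value
        # and aborts the connection on mismatch.
        kwargs["assert_fingerprint"] = PINNED_FINGERPRINT
        super().init_poolmanager(*args, **kwargs)

session = requests.Session()
session.mount("https://fhir.example.com", PinnedAdapter())
# session.get("https://fhir.example.com/fhir/Patient/123")
# -> raises an SSLError if an intercepting proxy swaps the certificate
```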


Here are the questions you asked in the webinar, along with Alissa’s answers (edited for clarity and conciseness). The recording of the webinar is now available and you can find it here.

Q: Why doesn’t this look more like a vulnerability open disclosure advisory?

A: This is not a vulnerability disclosure advisory that requires following formal disclosure steps. It’s a white paper: a content marketing asset urging the entire industry to do better, not just specific companies. It did shed some much-needed attention on a huge attack surface that doesn’t just affect some of us; it affects all of us.

Q: Since this research is sponsored by a security vendor, can we trust the impartiality of the results?

A: Yes you can. My clients do not dictate what I write in the reports. As a matter of fact, we maintain a very strong firewall between what goes into the report and how much we talk about that company’s product. There is just one bullet point in a roughly 50-page report where I mention Approov, and it was relevant because the APIs that had Approov deployed were successful in preventing me from talking to those API endpoints. Approov sponsored the research, which is why their logo is on it.

Approov didn’t create the vulnerabilities; the vulnerabilities were there, I simply found those vulnerabilities and followed the kill chain that I usually follow in hacking APIs and then wrote about it. My clients are not allowed to remove anything that I’ve put in there as findings. 

Q: Why wasn’t this work done using industry recognized responsible disclosure mechanisms?

A: See the answer to question #1 for additional context. I’ve been in open disclosure for two decades. I published the first vulnerability on hacking VPNs on an open disclosure mailing list called Bugtraq in 2000 and went on to speak about it at Black Hat briefings. I understand responsible disclosure very well; that’s not what this was. 

This was not a vulnerability advisory, but a white paper. We were very careful to protect the identity of the companies in this paper. And if you’re not identifying the company and you’re not identifying the product, there are no responsible disclosure steps to follow.

Q: It seems like data aggregators were the main targets for this research; why did you pick on them?

A: I didn’t pick on aggregators. For those of you who read the paper, you saw that my initial target was the EHRs – thinking that’s where the vulnerabilities would be. I was wrong. When I started to move up the layers of the onion, if you will, I started to find where the vulnerabilities were in the FHIR APIs. I want to make that abundantly clear. As I moved up, I started to identify these companies that were deploying FHIR APIs and aggregating this data. I added my own patient account and started aggregating my medical data. I actually had to go to John Moehrke (co-chair of the HL7 security working group) and Grahame Grieve (product director for HL7 International and the creator of FHIR) and ask them, “What do these companies do?” and they had to explain it to me.

I’m a hacker; I followed the kill chain, for those of you who are familiar with the kill chain model developed by Lockheed Martin. Here’s the thing: isn’t it good that a white hat discovered these vulnerabilities rather than a black hat? I’m able to bring this research data to all of you and say, “Let’s not single out one company and point fingers, let’s all do better. Let’s stop hardcoding API keys and tokens in these mobile apps, let’s start implementing pinning so these Woman-in-the-Middle attacks stop working. Let’s just do better; let’s authorize and not just authenticate.”

Q: It may be an unintended consequence of the research, but how do you react to the accusation that the report will be used as evidence by people who want to stop or slow down the opening up of patient data to patients?

A: I don’t know how people are going to use this report, and I’m certainly not responsible for that. It’s just like if any of you published a blog or paper and it was used by the far right or the far left or anyone in between, or by lawmakers, to further their cause or to explain why FHIR is bad. This is why I went to great lengths to make sure I clarified that these were vulnerabilities in FHIR implementations and not in FHIR itself.

This has nothing to do, from my perspective, with attacking the information blocking rule. I don’t know who’s going to use this report for ammunition; it’s not what I set out to do. Look at who sponsored this paper: it wasn’t Epic, it wasn’t Cerner, it wasn’t the EHRs that sponsored this paper, it was a cyber security vendor. If the EHRs or anyone on Capitol Hill use this paper to further their own cause, I can’t stop them from doing that. What I’m trying to say is that I’m a risk communicator; all I’m doing is communicating risk here.

From phase one to phase two, everyone involved in this research (healthcare providers, payers, the data aggregators) is better for it. From their perspective, they got a free penetration test by participating. So they’re better because of it; they’re more secure.

Since then, I’ve been contacted by lawmakers and regulators regarding my report. I’m pleased to say this report has been an impetus for everyone to pause and make sure we’re doing everything right – that while interoperability and access to EHR data are vital to an evolved healthcare system, the security of that data, for the safety of those patients, is paramount.

Q: Did you look at both clinician and patient apps? I believe that the relevant FHIR API rules are different for each; if so, did you find the security was at the same level for each?

A: Yes I did. During the research I learned that clinicians actually should have access to patients who are not assigned to them in case they need it, but it is frowned upon if a clinician accesses a patient record that doesn’t belong to them without good reason – for example, if the clinician wanted to access Britney Spears’ record.

This makes sense if a clinician is accessing another patient record for the hospital where they work but what I found wasn’t that.

I had a clinician login for one of the attacks, and I was able to access another patient who wasn’t assigned to me. But because it was an aggregator, and because they weren’t isolating their databases across the different apps in their ecosystem, I could access patients belonging to other healthcare providers and other apps – and that clearly shouldn’t be allowed.

The aggregators were not creating isolated databases for each microservice and for each app. All the data sits in one big pool, so if you know the patient record IDs, you can access patient records for a completely different healthcare provider and a different app.
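
What Alissa describes is the classic broken object level authorization (BOLA) pattern. A hedged sketch of the checks that were missing, with illustrative field names, might look like this:

```python
# Sketch of the missing checks: tenant isolation plus object-level
# authorization. Field names are illustrative, not from the report.
from dataclasses import dataclass

@dataclass
class Principal:
    subject_id: str    # patient or clinician ID taken from the verified token
    tenant_id: str     # the provider/app this login belongs to
    is_clinician: bool

def fetch_patient_record(db: dict, principal: Principal, record_id: str) -> dict:
    record = db.get(record_id)
    if record is None:
        raise LookupError("no such record")
    # Tenant isolation: never serve records from another provider's app,
    # which is exactly the aggregator failure described above.
    if record["tenant_id"] != principal.tenant_id:
        raise PermissionError("record belongs to a different provider")
    # Object-level check: a patient may only read their own record.
    if not principal.is_clinician and record["patient_id"] != principal.subject_id:
        raise PermissionError("patients may only access their own records")
    return record
```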

Q: The report says a hundred percent of FHIR APIs tested allowed a single authenticated patient to access other patient health records. Is there no mapping control within the FHIR APIs so that a patient who is authenticated can only access their records? Does this mean that FHIR API resources are not secured from a resource patient level?

A: This is one of the questions where I’m going to say that the answer is probably inappropriate for me to answer. That’s probably something that someone from HL7 should answer.

I will say that if I implement FHIR and, let’s say, Michael implements FHIR, we could have completely different FHIR implementations. Say Michael implements it without scopes, without any kind of authorization: he’s authenticating and issuing tokens, but he’s not authorizing. I can implement it more securely and both authenticate and authorize. We are both using the same FHIR version and the same FHIR standard, but my implementation is more secure. It’s not like Michael and I are going to Best Buy and buying a shrink-wrapped FHIR API off the shelf with all the security baked in; it just doesn’t work that way. This is a framework, so everyone’s going to implement it differently, and some implementations may simply be more secure than others.
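
To make that divergence concrete, here is a hedged sketch of scope-based authorization in the style of SMART on FHIR scopes such as patient/*.read. The claim names follow SMART conventions, but this is an illustration rather than a complete authorization engine:

```python
# One implementation calls a check like this on every request; another
# returns data for any syntactically valid token. Same standard, very
# different security.
def is_authorized(token_claims: dict, resource_type: str, resource_patient: str) -> bool:
    scopes = token_claims.get("scope", "").split()
    for scope in scopes:
        if scope in (f"user/{resource_type}.read", "user/*.read"):
            return True  # user-level scope; row-level checks still apply
        if scope in (f"patient/{resource_type}.read", "patient/*.read"):
            # The patient claim pins patient/* scopes to one patient's records.
            return token_claims.get("patient") == resource_patient
    return False

# A token scoped to patient p1 must not read p2's observations:
assert not is_authorized({"scope": "patient/*.read", "patient": "p1"},
                         "Observation", "p2")
```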

Q: You said that in some cases WAFs were deployed but were not effective – could you elaborate?

A: The biggest issue I see is that people default to what they know, and because these apps use the HTTP protocol they think they can use a WAF (Web Application Firewall). But traditionally WAFs work from rules and look for “known bads or known knowns” – things like SQL injection in the payload. They are not going to know that I am logged in as Alissa Knight and accessing other people’s records. Context is everything in security, and that’s what’s missing in a WAF: the understanding of how the app works and what it’s supposed to be doing. WAFs can’t protect against the kind of logic-based attacks I was using to target APIs. So please don’t rely on a WAF; you need an API threat management solution which is built from the ground up to protect against API-specific attacks and logic abuse. And no, I’m not paid a commission to say that 🙂
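
A toy example makes the gap visible: a rule-based check in the WAF style catches a known-bad payload, but happily passes a perfectly well-formed request for someone else's record, because it has no notion of who the caller is:

```python
# Toy WAF-style rule: pattern matching finds injection payloads, not
# authorization failures.
import re

SQLI_RULE = re.compile(r"('|--|\bUNION\b|\bOR\s+1=1\b)", re.I)

def waf_allows(path: str) -> bool:
    return not SQLI_RULE.search(path)

print(waf_allows("/fhir/Patient/123' OR 1=1--"))  # False: signature matched
print(waf_allows("/fhir/Patient/456"))            # True: looks perfectly normal,
# even when the caller's token belongs to patient 123. Only an authorization
# layer that understands the app's logic can refuse that second request.
```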

Q: How have the regulators reacted to the research?

A: HL7 published an announcement on their website. I think they’re even forming a working group at this point to focus on addressing these issues, and they plan to send something to regulators in response.

The Office of Inspector General for HHS and the ONC reached out to me, as did the Office of Personnel Management (OPM). The reception from lawmakers and regulators has been very positive. At a time when we are experiencing nation-state-sponsored attacks in other areas of cyber security, the current Biden administration really cares about this. The FTC came out with something as well; they care about this and they really want to collaborate with me and the community.

These are everyone’s patient records, and we should expect a certain level of care and security around them. This is why I’ve always focused my vulnerability research in these areas. It’s like me with hacking cars: I care less about the hacker who wants to deface a website and more about the hacker who can take remote control of my car, with my family in it, from their living room.

If your debit card is compromised, the bank can send you a new card in the mail, but how the hell is anyone going to send you a new patient history after your PHI was put up for sale on the dark web? This is a serious problem, and I am seeing a real push to extract the meaningful things from this paper and do something about it.


Questions not Covered in the Webinar 

Q: For hacking web services like FHIR, are there tools that she has really come to favor? For example, “I used to rely on tool X, but now I do most of my work using tool Y” – any insights like that would be great to hear.

A: For WitM attacks, I used to rely on mitmproxy and manually creating API requests from scratch using Postman. But increasingly I’ve been using Burp Suite, which has both professional and community editions. For static analysis of mobile apps, Mobile Security Framework (MobSF) is a great place to start. Approov provides a good list of additional tools in Appendix C of the white paper.
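
For anyone starting out with mitmproxy, a small addon script is often enough to surface interesting traffic. This hedged sketch logs authorization-style headers from intercepted app requests; the header list is an assumption you would extend for the APIs under test:

```python
# Run with: mitmproxy -s sniff_tokens.py (with the device proxied through it).
from mitmproxy import http

INTERESTING = ("authorization", "x-api-key")  # extend for the target APIs

def request(flow: http.HTTPFlow) -> None:
    # Called by mitmproxy for every intercepted request.
    for name, value in flow.request.headers.items():
        if name.lower() in INTERESTING:
            print(f"{flow.request.pretty_url} -> {name}: {value[:40]}...")
```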

Q: This may be beyond what the speaker would opine on – but what does she think of the need for federal regulatory oversight of the consumer app space for health data?

A: I added a recommendation in the report that some kind of oversight body be put in place to protect patient data in consumer-facing apps: “Such an oversight mechanism could take many forms. The lack of HIPAA protections makes it currently a free-for-all. It could take the form of extending some HIPAA-like regulatory protections over patient data outside of healthcare organizations, it could be a certification body that does security testing or auditing, or it could be a consumer advocacy group performing research and publishing findings on the security of various apps to give them ratings.”

Q: I know you indicated you did not have any problem with the FHIR standard per se, but isn’t the issue not limited to FHIR? Isn’t the real issue the security of APIs and health data, regardless of format? And is there similar concern with third-party app developers connecting to the API services?

A: FHIR is just an example of a standardized API (like Open Banking in finance), and yes, there are security risks anywhere mobile apps are accessing APIs, whether those APIs are private or public and whether or not they are based on standards. If you refer to some of my other vulnerability research reports, such as my Hacking Law Enforcement Vehicles through their APIs (videos available on my YouTube channel) and now my new research on hacking 55 banks and cryptocurrency exchange apps and APIs, you’ll see a common theme across a lot of the vulnerabilities: a failure to authorize API requests. This is a systemic problem and, yes, you are correct, it isn’t limited to just FHIR.

Q: You mentioned receiving all PHI from a database with a mobile app to do filtering; was the PHI received from an EHR database or something else?

A: The data accessed by the mobile app in this case was stored by an intermediary (an aggregator); it was not coming directly from an EHR. However, the data had been pulled from the FHIR APIs of the numerous healthcare providers the aggregator partners with.

Q: Have you looked at the use of Unified Data Access Profiles (UDAP.org) to address the issues uncovered in your study?

A: We have reached out to UDAP to discuss this further with them. 



*** This is a Security Bloggers Network syndicated blog from Approov Blog authored by David Stewart. Read the original post at: http://blog.approov.io/fhir-api-security-research-sparks-debate