Participants 🔗
- Yujan Shrestha - CEO, Partner
- J. David Giese - President, Partner
Key Takeaways 🔗
- Complexity of Cybersecurity for Medical Devices: Medical devices face unique cybersecurity threats, particularly when connected to hospital networks. Ensuring device security requires balancing FDA guidelines with the needs of hospitals and patient safety.
- FDA's Role and Limitations: The FDA requires certain cybersecurity measures but cannot mandate everything needed for optimal device security. This creates a gap between what the FDA requires and what hospitals or manufacturers might consider essential.
- Hospital IT and Device Manufacturer Dynamics: Hospitals often demand compliance with standards like SOC 2 and ISO 27001 for security. This adds another layer of requirements for device manufacturers aiming to integrate their devices seamlessly within hospital networks.
- Best Practices in Cybersecurity Assurance: Effective practices include code reviews, using automated testing tools like Snyk, and periodically checking the Software Bill of Materials (SBOM) for vulnerabilities.
- Tool Validation Challenges: Validating tools can be time-consuming and may not significantly enhance device security. Focus on essential validations, especially for tools critical to safety, while minimizing efforts on well-established tools where risks are low.
Transcript 🔗
J. David Giese: Hey Yujan! How’s it going?
Yujan Shrestha: Hey David, how are you doing? Hello again.
J. David Giese: Yes, this will be kind of round two.
Yujan Shrestha: Round two. Yeah. Well, thanks for joining us again. Let’s see, I think we are live. So yeah, Happy Halloween, everybody. I’ve got my Halloween costume on today: I’m going to be an FDA auditor.
J. David Giese: That’s scary. That’s terrifying.
Yujan Shrestha: What are you going to be, David? What are you?
J. David Giese: Well, what am I? Oh, yeah, yeah.
Yujan Shrestha: For Halloween?
J. David Giese: You know, I’m boring this year. I don’t have a costume. My daughter is a pumpkin. I have a four-year-old daughter, Eloise, and she’s switched around a few times, but she decided she wanted to be a pumpkin. So, she’s got a cute little pumpkin dress.
Yujan Shrestha: So, what were the other options?
J. David Giese: Oh, she wanted to be Elsa for a bit and then Anna for a bit, from the Disney movie Frozen. She loves those, like most kids. There was another one, I think, from The Wizard of Oz, one of the good witches, I forget her name; she wanted to be that for a while too, but then landed on a pumpkin within the Amazon delivery window for it.
Overview of the Discussion Format 🔗
Yujan Shrestha: Well, that’s great. Yeah, I guess let’s just go ahead and jump right into the topic. Thanks, everyone, for joining. We’re definitely happy to have you. The format of this call is going to be like for the first 15 to 20 minutes, Dave and I will chat about some topic, and please write your questions into the Q&A. If you’re on Zoom, please go ahead and type questions in the Zoom chat. And we can also let you speak on Zoom. If you’re on LinkedIn, please just type them into the Q&A post, and we will look at the Q&A posts during our talk and try to answer questions at that time, but also at the end, we’ll be answering all the questions and hopefully, we can get to everyone. Last time we got to almost everyone, I believe. If you have any questions, please go and queue them up for us. Without further ado, we are going to be talking about medical device cybersecurity threats today. David, do you want to take it away?
Medical Device Cybersecurity Threats 🔗
J. David Giese: Sure. Last week, someone had a very good question about the main threats affecting medical devices, and I did some thinking and reading on the topic this past week. Something interesting is that malware affects a lot of hospitals, and that’s in the news a lot; it’s top of mind for most people in the space. But in many cases it’s not clear that medical devices are involved in the malware attacks, though in a few cases they have been. In fact, there was a report put out by HHS in mid-2023 which basically says that medical devices don’t seem to have been a main part of these attacks, and yet, while the available incident data doesn’t appear to show medical device vulnerabilities being exploited, disruption of the devices would be critical. So, what are some of the threats related to medical devices? One is exploits from a malicious or compromised network: when devices are deployed on a hospital network along with other medical devices, attacks can come in laterally, and there have been incidents of that occurring, especially when devices have hard-coded passwords that are widely available. Another threat is outdated dependencies, especially if you have a cloud-hosted device. Those are two common areas. I’m jumping around a bit, so maybe to take a step back before we keep going through threats: when we think about the time we’re spending on medical device cybersecurity, we can allocate that time to different purposes. I’ve got this Venn diagram that I like to use. Let’s say we’re spending time developing a software bill of materials (SBOM). Why are we doing that? One reason is to make the device secure, and that would fall into one inner circle. It may be because hospital IT departments demand an SBOM before they’ll buy from you, in which case it falls in the green circle or the overlap. And, of course, the FDA has required an SBOM since March 2023 as well.
When we think about medical device cybersecurity, we should keep in mind that there are some things we’re doing to make the device secure, some things we’re doing to satisfy the FDA, and then some things we’re doing to satisfy the hospital IT networks, and these don’t all overlap.
Yujan Shrestha: For example, would we expect the FDA to eventually expand this circle and overlap with IT needs? I don’t think so because I think the hospital IT is thinking about other regulations outside of the FDA’s jurisdiction. So, there are needs that hospital IT has that FDA won’t enforce, and this affects medical device manufacturers because when they go to market, they then have to fill out certain forms and meet certain requirements to sell to hospitals.
Challenges and Considerations in Compliance 🔗
J. David Giese: The FDA can’t regulate every user need or hospital IT requirement. Like you said, it’s impossible given the wide array of devices and intended use environments. There’s the hospital network, devices deployed at home, and so on. In a perfect scenario, everything the FDA asked you to do would be critical for making the device secure. But realistically, the FDA can’t ask device companies to implement every possible security control, as it would be overly conservative and add waste. Nevertheless, there are things we, as a company, might do to make the device secure, even if the FDA wouldn’t flag us on it or if hospital IT doesn’t demand it.
Yujan Shrestha: What are some examples that fall within these regions? For instance, having the SBOM might overlap all three, but some requirements specific to hospital IT, like MDS2 forms, may not seem like they’re enhancing security or fulfilling FDA requirements.
J. David Giese: ISO 27001 or SOC 2 compliance are examples that overlap the green and orange areas. Hospital IT departments might require them for cloud-hosted components of devices, while the FDA cares more about the software development lifecycle security, which has some overlap but doesn’t align completely. We often see clients come to us thinking ISO 27001 compliance means they’re prepared for FDA submissions, but ISO 27001 focuses on organizational IT security, which doesn’t fully cover what the FDA requires for medical device cybersecurity.
Yujan Shrestha: Got it. Business reputational risks, like leaked passwords not tied to a medical device, are also considerations the manufacturer cares about but FDA or hospital IT may not. And, in the blue-only region, it sounds like the FDA requires some documentation exercises that feel wasteful at times.
J. David Giese: I think so, yeah. It’s certainly tricky to get the documentation into exactly the format they want so that they don’t come back with a lot of questions. But I do believe that having good diagrams of your system, and using them to help you identify risks, is generally very helpful. Unfortunately, you do need to understand how they want to see it, and there are all these very specific things they like to see in your diagrams. One example where I think it’s overboard, and where we generally don’t do it and generally don’t have the FDA call us out on it, is the expectation that for every communication path between two assets, including internal connections, you document security use case views with a big list of required details for each one. It’s quite a bit of information that I don’t think is that useful in many cases. So that would probably be a good example.
Yujan Shrestha: The cases we typically deal with are software-only medical devices. Is that what you mean, or do you mean in general?
J. David Giese: Yeah, I would say even in general, having all of this detail for every communication path is extreme. Actually, yes, I see your point. I think this applies to SaMD especially, but for devices that could directly harm the patient, like an insulin pump, or, I’m blanking on the name, the network-connected pumps that deliver medication to the patient, I see a lot of recalls for those, and being really thorough about every communication path probably makes sense. But for a lot of SaMD devices that are more separated from the patient, helping radiologists or cardiologists or other clinicians review reports, a detailed breakdown of every communication path really isn’t that useful.
Yujan Shrestha: Yeah, that certainly makes sense. It also makes sense if you put yourself in the FDA’s shoes: you can’t write a general-purpose guidance that fits everyone, right? So I think they tend to take the more conservative route in hopes that industry will meet them in the middle. How could they have captured every nuance, saying you need to do this only for this kind of device in this particular setting? You just can’t. And these guidance documents have to be relatively short and concise; the cybersecurity ones are already, in my opinion, really long. So they ask for a bit more than they expect to get, and everyone lands somewhere in the middle. I think there’s a lot of that. If you put yourself in their shoes, it makes sense: the spirit of the guidance is different from the letter of the guidance, and for some categories of devices the spirit is probably not that you need to do all of this stuff.
J. David Giese: Yeah, I think that’s exactly right, and in this diagram we could blow that out to another level, because in practice the FDA does not require you to do this. Just to be clear, it’s in the guidance, but we often don’t do it. We’ve had the FDA call us on it once or twice wanting more detail, but there’s a difference between what you see reading the guidance by the letter versus its spirit, which is narrower and more nuanced. If you had a higher-risk device, I would imagine they would demand it.
Balancing Engineering and Regulatory Perspectives 🔗
Yujan Shrestha: Yeah. I think the way to strike the right balance is to involve the engineering team in this discussion. The engineering team has a good idea of the size and shape of the orange circle, and the regulatory-oriented team knows the size and shape of the blue circle, perhaps erring on the bigger end. When you combine the two, I think you’ll find the happy medium: you want to make that big blue circle as small as possible and have it overlap as much as possible with the orange circle. But I’ve seen med device organizations try to keep the two in separate silos with just a loose communication pathway between them. What ends up happening is you get not only a much bigger blue circle but even less overlap with the orange circle, and it makes both the engineering team and the regulatory team a lot more cynical about everything. The software engineers think, oh, we’re doing all these things that add no value, and the regulatory team thinks the engineers are just combative and don’t want to do this stuff. Like you were mentioning, the cynicism becomes a self-fulfilling prophecy: the two teams drift further and further apart and you get worse outcomes. So that would be my advice: if the engineers think something is going a bit overboard, it probably is, and that’s a good sign to rethink whether that part of the process is really needed to make the device more secure.
J. David Giese: Yeah, I completely agree with all of that based on what I have seen.
The Role of Code Review in Cybersecurity Management 🔗
Yujan Shrestha: Yeah, you’ve seen it play out in a bad way. Although, to be fair to the regulatory side, I do think there are times when the engineers downplay some of the security risks. So there’s value in the regulatory perspective saying, hey, we should do this, even when the engineers don’t see the need. Totally. I think we engineers have a lot of unknown unknowns about what’s needed, and the brainstorming activities genuinely help; I think those do need to come from the regulatory side. You can’t really ask someone, hey, what are the things you don’t know about? The only way is to put forward concrete prompts: okay, here’s what we need to do, like threat modeling using STRIDE, and then go down each item and brainstorm how it could happen. That, I think, needs to come from the regulatory side to really get the unknown unknowns out of the engineering team.
J. David Giese: And one of my favorite resources for this is AAMI TIR57, which is really good; going through its Annex D, there’s a big, detailed list of questions. I’ve found engineers will often roll their eyes a bit because there are a lot of them, and many of the questions won’t apply to your device. But I very consistently find going through the list useful, even if it’s informal. We used to document every single line in our security risk assessment; we’ve since become a little more flexible, depending on what the client wants, but we’ll still go through it and have the full list in there. It’s a paid document you have to buy, but it’s quite good.
Addressing Cybersecurity Threats Across Diverse Environments 🔗
J. David Giese: Yeah. Going back to the original question of what threats face devices, I think keeping the different stakeholders in mind is important when approaching cybersecurity. Another thing to keep in mind is the variety of environments the devices are deployed in. There are devices deployed on the same network as other high-risk devices; devices deployed on a separate, more isolated network but still in the hospital; cloud deployments, sometimes with a local component, a pattern we see very often; devices deployed in lab environments, since a lot of the IVD software projects we work on are in that area; and then home use environments, which are quite different and face different threats, including just people forgetting passwords. I shouldn’t say people are lazy; we’re all busy, with a lot going on. So the threats vary quite a bit. I also think that to really understand what you need to do to make devices secure, ideally you’d come at it from the perspective of someone managing this in the real world, someone who runs hospital IT and sees the real threats they’re dealing with. Just being upfront, we’re often focused on the pre-market phase, helping our clients get through FDA submissions, and we’re less involved in postmarket and even less in how devices are deployed in the real world. Because of that, we don’t necessarily see the specific threats people are facing, but I can imagine things like password issues are a lot more common.
Bridging the Gap Between Sales, Marketing, and Regulatory Requirements 🔗
Yujan Shrestha: Yeah, let’s see if we have any questions. We have a comment from Bilal: the way he sees it, there’s a disconnect between sales and marketing and the regulatory process and requirements. Certainly in the pre-market phase, when we’re getting FDA clearance, I think there’s a strong consensus between what marketing is going to say about your device and what you’re getting FDA clearance for.
There’s a strong consensus there. But I think you're right in that in the post-market, that consensus tends to get a little bit blurred and a little bit squishy. Yeah, I'm not sure if that's what you meant by the comment, but I certainly think that not just the engineering team and the regulatory team need to talk, but also the sales and marketing team also need to be involved in that conversation.
I'm not sure about cybersecurity in general, how that sales and marketing team overlaps, but definitely like the intended use of the device and not overstepping your FDA clearance. Like it's important to involve all those parties.
Challenges of Cloud and On-Prem Deployments in Healthcare 🔗
J. David Giese: One thing I would add to that is I do think doing some advanced research on when we start selling this, what are the needs of the hospital I.T. teams. Like a common issue is cloud deployment versus on-prem deployments. And for certain types of devices, there are hospitals that just won't buy cloud deployment. And so it may be the case that you need to support both.
And that can be very painful if you realize it after the fact. Another thing you may realize is, hey, if we’re doing an on-prem deployment, what does it take to get it installed quickly? Going through the process with hospital I.T. can be really painful, taking anywhere from three months to a year, I would say.
You can really go on and on, and I think it’s worth having some first-hand experience with the constraints there to inform your software architecture, how you deploy it, and how you support things like backups and various other security controls.
The Importance of Code Review in Cybersecurity 🔗
Yujan Shrestha: So as a follow-up, Bilal clarifies with the example that hospital I.T. may require MDS2 forms and SOC 2 as well. Yeah, absolutely.
Okay. So we have a question from Michael. Michael's asking, “So you give some sort of inspector your code base and they do a code review?” And I believe the question was back when we were talking about how to make the orange circle, like how to make that more secure. And then the question's about what role does code review have in the cybersecurity management process?
J. David Giese: Yeah, I think code review is really important. I’d just mention that we have a really in-depth article on code reviews that you might want to check out. But I’m definitely a big believer in code reviews; they can catch things that are really difficult to catch in other ways. We like to have a checklist that’s enforced in GitHub pull requests, with items like “you’ve considered safety risks related to these changes,” and then adding one to consider security risks.
You can make the checklist more specific to your device if there are certain things you think are really important to cover in code review; I think that can be really helpful. Beyond code review proper, there are also tools like Snyk, some open-source alternatives, and GitHub’s own security scanning, though I think you need GitHub Enterprise for that.
GitHub’s security scanning typically runs when a software developer has written a bunch of code and opens a pull request, the process that integrates their new changes into the mainline of the code. That’s also typically the step where code review occurs, during the pull request review.
Another thing you can do is have automated software checks that must pass before you’re allowed to merge your changes into the mainline, and what we almost always recommend our clients do is set up a tool like Snyk to run there. These automated tools, as you’d expect, sometimes make bad suggestions, but overall they’ve gotten to be pretty good, and they can catch a lot of basic security issues at a lower cost than having another engineer check for them. So I’d say this is complementary to code reviews.
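The merge-gating idea David describes can be sketched in a few lines. This is a hypothetical illustration, not any real scanner's output format: imagine a CI step that parses a security scanner's JSON report and blocks the merge if any finding meets a severity threshold.

```python
# Hypothetical sketch of a pre-merge security gate. The report
# structure is illustrative; a real tool like Snyk has its own
# JSON schema, which you would parse instead.

SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def should_block_merge(report, threshold="high"):
    """Return True if any finding meets or exceeds the severity threshold."""
    limit = SEVERITY_ORDER[threshold]
    return any(
        SEVERITY_ORDER[finding["severity"]] >= limit
        for finding in report.get("findings", [])
    )

# Made-up report: one medium and one critical finding.
example_report = {
    "findings": [
        {"id": "VULN-1", "severity": "medium"},
        {"id": "VULN-2", "severity": "critical"},
    ]
}
```

In CI, a script like this would exit nonzero when `should_block_merge` returns True, and a branch-protection rule would require that check to pass before merging.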
But something else we really recommend, since I’m listing other things that are really worth doing, is monitoring your SBOM for vulnerabilities. I say “worth doing,” but you actually have to do it now that it has made its way into some regulations.
That means monitoring your SBOM for vulnerabilities and having a process around it, having software requirements for all your security controls, and then ideally having automated tests to verify those controls, with those tests running on every pull request so that, just like the security scanning tool, they’re checked automatically before you can merge your changes.
That's another best practice we suggest our clients follow. So hopefully that answers your question and a little extra.
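The SBOM monitoring David mentions boils down to periodically matching your component list against a vulnerability feed. Here is a minimal sketch under simplified assumptions: the SBOM is a plain list of name/version pairs and the feed is a lookup table (a real setup would parse a CycloneDX or SPDX SBOM and query a service such as OSV or the NVD; the CVE identifier below is made up).

```python
# Illustrative SBOM vulnerability check. Data shapes and the
# vulnerability feed are assumptions for the sketch.

def find_vulnerable_components(sbom_components, known_vulns):
    """Return (component name, vuln id) pairs where an exact
    name/version match appears in the vulnerability feed."""
    matches = []
    for comp in sbom_components:
        key = (comp["name"], comp["version"])
        for vuln_id in known_vulns.get(key, []):
            matches.append((comp["name"], vuln_id))
    return matches

sbom = [
    {"name": "openssl", "version": "1.1.1"},
    {"name": "zlib", "version": "1.2.13"},
]
feed = {("openssl", "1.1.1"): ["CVE-EXAMPLE-0001"]}  # fabricated ID
```

Running this on a schedule and opening a ticket for each new match is one simple way to turn the regulatory requirement into a routine engineering process.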
Yujan Shrestha: Yeah, I guess the only thing I would add is that generative AI tooling could be an excellent fit here. I’m not sure what’s already out there, but imagine tools that do an automated code review of the whole code base, not just of pull requests or changes; they could periodically look at your code and perhaps even open a pull request that says, here are some cybersecurity threats we’ve identified.
You know, you might want to sanitize this input here, or do this, that, and the other there. I think that could be a very useful application of the upcoming AI tooling.
J. David Giese: Maybe a few things I'd add to that. So I definitely think you should consider threats from older devices that aren't secure, and FDA is pretty explicit that you need to do that. The threats are even clearer if you know you're integrating with specific devices, so you can be more specific than in the case where it's just any general device happens to be on the same network.
But being realistic, you know… I’m just making up an example, but imagine a code injection attack where a DICOM tag has a cross-site scripting payload embedded in it, right? You could imagine something like that. You should try to make your device secure against that type of thing, because you’re not going to be able to know that the data coming in is good.
The Value of Tool Validation in Medical Device Development 🔗
Yujan Shrestha: On tool validation, my recommendation is to keep it as simple as possible. The way I’ve done this in the past, and I’d even argue about whether it was necessary, for validating something like a CI server that runs tests, is just to have a dummy test case that always fails and verify that it is reported as failing. That was the extent of the useful work that came out of tools and process validation there. So that’s my two cents: I wouldn’t spend a lot of time on it. Have two tests, one that passes and one that fails, run them, and that validates the test runner in general.
J. David Giese: Yeah, I think this type of stuff falls in the area where it's something you need to do for the FDA, but it doesn’t really make the device safer in almost any case. These tools are so widely used that it's really unlikely your validation is going to find something. Personally, I think that kind of validation is usually a big waste of time, so just do the bare minimum you need to make the FDA happy and move on to something more useful.
Yujan Shrestha: Also, skip validation when there’s already a robust downstream way to catch errors. For example, you shouldn’t validate compilers, because how would you even validate something like that? It’s a good question; I can’t really think of any solid way to do it. The best you can do is compile the code, run it, and if it works, say the compiler works. It’s unlikely that additional defects would be injected at the compilation stage that we wouldn’t have caught downstream, especially if the compiler is widely used, which it usually is. It’s just not a worthwhile exercise to attempt to validate something like that, because robust downstream checks already exist and the tool is widely used.
J. David Giese: You can think about the particular case of an automated testing tool: it’s very unlikely that it runs, all your tests pass, and yet the tool is broken in some weird way. Now, here’s an example where I think tool validation is important: a lot of times people have formulas in an Excel spreadsheet for safety risk management. I’ve seen it happen many times that those formulas are wrong, and wrong in a way that makes every safety risk look acceptable or not in need of mitigation, when in reality the formula was outdated or had a typo. It’s something so simple, but the result is that you don’t spend energy where you really should, and you’re not even following your own process. That’s different, though: doing tool validation for a spreadsheet like that makes a lot of sense, and there are other examples. But for something like a testing tool, especially a widely used one, I just think it’s almost certainly a waste of time. So do the bare minimum to make the FDA happy, and then move on to something more useful.
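The spreadsheet check David recommends can itself be done programmatically: recompute the expected acceptability from the severity and probability columns and flag rows where the stored result disagrees. The scoring scale and the acceptability threshold below are illustrative assumptions, not from any particular risk matrix.

```python
# Sketch of validating a risk-spreadsheet formula. The 1-3 scales and
# the threshold are assumed for illustration; substitute your own
# risk-acceptability policy.

ACCEPTABLE_MAX = 4  # assumed: scores above this require mitigation

def check_risk_rows(rows):
    """Return ids of rows whose stored 'acceptable' flag contradicts
    severity * probability compared against ACCEPTABLE_MAX."""
    bad = []
    for row in rows:
        expected = row["severity"] * row["probability"] <= ACCEPTABLE_MAX
        if row["acceptable"] != expected:
            bad.append(row["id"])
    return bad

rows = [
    {"id": 1, "severity": 1, "probability": 2, "acceptable": True},
    # Simulates the typo'd formula David describes: a high score
    # incorrectly marked acceptable.
    {"id": 2, "severity": 3, "probability": 3, "acceptable": True},
]
```

Even a one-off script like this, run against an export of the spreadsheet, catches exactly the class of silent formula errors that make every risk look acceptable.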
Yujan Shrestha: Also, I think code editors are another thing where it's just useless to do tool validation on that. I’ve seen this especially in larger companies where they have tool validation probably done for other reasons, not necessarily just for quality systems, but I think that's another example.
J. David Giese: We should do a LinkedIn post on this, because it’s kind of a strong opinion that a lot of regulatory people might disagree with based on their experience. We’d probably get a lot of likes, or maybe a lot of dislikes; I guess LinkedIn doesn’t do dislikes, but…
Yujan Shrestha: It's the best way to upset engineers if you lock them into a particular set of tools they don’t want to use because of some tool validation. I think it's better to pick a fight in other areas than this one and save the social capital for something else that really does matter, in our opinion.
J. David Giese: I think we can say without naming names that we've seen engineers leave good companies because of this. Like, it’s so demoralizing to be validating these systems where you know it's pointless. I think in the end it can even make the device less safe, ironically, because it’s wasting time on things that aren’t needed and it makes engineers doing good work feel undervalued.
Yujan Shrestha: Let's see, I think we have time for one more question. But it looks like we got through all the questions, so that's great. Well, happy Halloween, everybody. I hope this was a useful conversation. Of course, it's always a lot of fun for us. I've learned a lot from this call, so I hope you all have a great weekend. And definitely, build devices that you would use on yourself and on your family members. And as long as that's true, do as little in that blue circle as possible, but make sure the first item is true. So, all right.
Yujan Shrestha: Hope you all have a great rest of your week.
J. David Giese: Thanks, Yujan.
Yujan Shrestha: Thank you.