10x Coffee Talk: Threat Modeling and Cybersecurity Risk Management

 November 15, 2024
SHARE ON

CybersecurityDICOM

Participants 🔗

  • Yujan Shrestha - CEO, Partner
  • J. David Giese - President, Partner

Key Takeaways 🔗

  1. Threat modeling doesn't need to be intimidating; adopting a structured yet relaxed approach can make the process more effective and accessible.
  2. Practical, everyday scenarios can serve as metaphors to better understand and communicate complex concepts like threats and risk management.
  3. The distinction between threats, vulnerabilities, and risks lies in their respective definitions and roles in system security and safety.
  4. Harm in cybersecurity can extend beyond immediate safety to include broader impacts such as loss of data integrity and reputation damage.
  5. The STRIDE methodology is a systematic approach to identifying and addressing threats, providing a foundation for comprehensive risk management. ​

Transcript 🔗

Introduction and Greetings 🔗

Yujan Shrestha: Happy Thursday.

David Giese: Hey. Good morning. Happy Thursday.

Yujan Shrestha: Yeah. Likewise.

Introduction to Threat Modeling 🔗

David Giese: Excited to talk about threat modeling.

Yujan Shrestha: Yes I am. I'm very much excited about threat modeling. You know, it's a it's a topic that gets a lot of people scared. But it doesn't it doesn't need to be scary. You know? Which is why I put this background because, like, I think threat modeling should be very chill and relaxing and, like, you know, should it shouldn't need to be scary, right?

David Giese: Threats are facing you lately.

Personal Examples of Threats 🔗

Yujan Shrestha: Threats that have been facing me. Yeah. Yes. And my son, like, when I write changes diaper like, sometimes I get some threats from that. And after I have to properly defend myself against those threats. Yeah. There's this product called PPTP for boys. And it used to work. But then like as my son got older, the streams got stronger and that no longer was it was a threat mitigation unfortunately.

So now it's just I just have to react really fast. But anyway, this just more than he bargained for.

David Giese: Right? Yeah. I mean, one threat I have, I walk to work and there's this Starbucks that I sometimes I'll stop at and work. And there have been a lot of homeless people coming in and kind of rampaging around the Starbucks. Just a little strange. I wouldn't have thought in Austin that would be so much of a thing.

But because I recently moved here from New York and that happened in New York sometimes. But, anyway, I don't know. Well, but I'm going anyway. I guess what I was thinking about when I was prepping for today, I was thinking about what would be some useful talk about. And one thing I was just going to say is like, well, what?

Defining Threats, Vulnerabilities, and Security Risks 🔗

David Giese: Like, what's the difference between a threat and then a vulnerability? And then also security risk. Like what? What's the difference between these things? And I, I know I did not to pay this thought to tease out but like what it off the top of your head. What would you say you think. Like what's a threat versus vulnerability versus security risk.

Yujan Shrestha: A threat to me seems like a hazard. Like if I were to map it to like safety I'd say a threat is more like a hazard. A risk is obviously a risk where you have a probability and like a severity and a vulnerability is like a step and the sequence of events sort of it's like it's a property of the system that allows a certain sequence of events from the threat to the risk.

And I guess the harm is, is like the, like the asset being exposed or, you know, I think the I think there's several other harm endpoints that are also discussed. But yeah, I don't know. Is that is that right?

David Giese: Yeah. I think that's that's a pretty good analogy. Yeah. I think I think like one difference between security risk management and safety. Risk management is the definition of harm is broader. And I think FDA jurisdiction kind of prevented them from regulating cybersecurity partly because of this, because they're their jurisdiction was really around safety and a lot of security.

Broader Definitions of Harm 🔗

David Giese: Cybersecurity issues aren't you know, sometimes it traces back to safety, but a lot of time it doesn't, at least not certainly not directly. And so I think, you know, in early 2023, when the amendment to the FTC act went into effect and they kind of that more jurisdiction over cybersecurity, that's kind of related to this broader definition of harm that includes, you know, loss of availability, loss of confidentiality and loss of integrity of, you know, information assets.

So that's on the on the, I guess, security risk versus a risk. Yeah, I think I think you're right. Like a vulnerability probably most closely matches to like a step in the sequence of events like a property. The system really a weakness in the system that enables that, that sequence of events that results in harm and threat threats like the most kind of.

Yeah, I think I think compared to hazard is a good analogy. The official definition that the FDA has, it, I can just read is it's a threat, is any circumstance or event with the potential to adversely impact the device, blah, blah, blah, and then give a bunch of other a long list of things, you know, through an information system, via unauthorized access, destruction, disclosure, modification or information and or denial of service.

So, you know, it kind of it's a little it's a little fuzzy because it's like, well, it's a threat. It's a threat. Any circumstance or event with the potential to adversely impact the device.

Exploring Threats and Harm 🔗

Yujan Shrestha: So it's almost like more like that, like a hazard, a situation, the a threat. Like they're saying it's more or less circumstance which is which maps to hazard a situation. What I don't know, I'm kind of biased. I like my threat analogy a little better with the hazard. Like this is like the start of it and then it has to fine.

Like the threat has to find its way to the harm which the harms in this case would be like the adverse impact the device or organization or operations.

David Giese: Yeah, yeah, I like a lot of, a lot of times you think of like a threat actor and that's like, like a person who is trying to break in. Right? Like that's that's like one type of threat, but another type of threat is like, just like a hurricane hits your data center or you can think of like someone just forgets, you know, the team is just sharing their password.

Like everyone's using the same password. Just log in. So it's like a threat to repudiation in the sense that, yeah, you don't know who's doing what, because everyone's sharing the same password because they don't want to. They don't want to pay more money for more user accounts or whatever, or it's just too annoying. Like everyone just uses the same account of the password, you know, you know, one, two, three, 4 or 5.

So, but anyway, I, I find, I find one of the challenges with, with threat modeling is like, what's the relationship between threat modeling and security risk management because they are very related. Like this confused me for a long time. And I would say honestly, don't still have like a perfect. Kind of like clarity in my head for the difference.

Relationship Between Threat Modeling and Risk Management 🔗

David Giese: So here I got this diagram I wrote up at some point. And this is part of this big security risk management article that we have on our website. By the way, for anyone watching this, if you go to resources, cyber security, you can find this one here. Ian Bembridge stressed out also great engineer on our team called write a lot of this and we've updated a number of times.

In fact, I'm going to be pushing a pretty big update probably later this week based on some work I've been doing. But you can see here on the left we've got security risk management look, which looks very similar to safety risk management. Of course, I've got another diagram in here showing the relationship. Oh yeah. Of course, for anyone watching I'll just call out like you want to have a separate process for security risk management from safety risk management.

FDA has been pretty clear and has issued deficiencies if which you used to be. It's like, hey, we suggest you do this. But now I'd say pretty much you have to have separate processes. And I think there's good reason for that. But, but anyway, if we then compare like security risk management to the threat modeling manifesto, you know, it really I don't know like it gets it gets kind of blurred like I think what like what the distinction is in the way that I think about it is threat modeling is kind of like FMEA.

It's like one way of identifying security risks. And it tends to be lower level, like very a little more detailed, kind of in the same way FMEA tends to go through every like little component. But you're not going to necessarily create a risk for every FMEA item. The other thing I think that's different about the security risk is you actually will go through and evaluate like the severity and like qualitative likelihood of it occurring.

Tools and Methods for Identifying Security Risks 🔗

David Giese: Whereas in the threat modeling generally that's it's more just enumerating what's there. I don't know, Yujan, if you had any if you thought about this at all or if this has ever come up on any of your client interactions.

Yujan Shrestha: Yeah, I mean, certainly I could definitely like an analogy. And the point that threat modeling is a tool to identify, I guess, threat. I mean, if, like if we if we think about tools that we have on the safety side, you're definitely right. Like there's there's the very similar effects analysis. There's also that the top down risk analysis where you start backwards, where you start from okay here's a patient.

How does it work backwards. And then you can also start from individual components. And it's just several ways to brainstorm. And get a full picture of it. It's it's kind of like in like in like astronomy how you can have different filters. We can look at different wavelengths. And that tells you something different about the universe. You know, like it. It's it's also like building a tunnel where you start from both sides, like one on one side. Right. Could be more on like the, the precursors to harm. And you start there. But you also have other mechanisms where you go to look at the asset and kind of work backwards and say, okay, cool, how can the two meet together?

That's how I think the safety reasoning way is like we have hazard and we have farm and risk analysis and just brainstorming pathways of how those two could be meet together and failure mode effects analysis like you're taking it from the middle where you have these sequence of events with various system items in the middle. And, and that lets you pick a point and kind of work your way both forward and back and also top down.

You do it from the risk side backwards and also from the hazard side forward. I find that to be less useful. But you know, I think they're they're all kind of similar. So I think threat model is yeah I think probably middle and also from the from the start. Start. Yeah. And then forward.

David Giese: It's like yeah I think I think there's there's a number of tools that you can use to identify security risks. Like besides threat modeling like one, one, one that I mentioned last week was just going through the list of questions in the appendix AAMI TIR57 and just as you go through and answer those questions, you'll kind of think of security risks as you go. I think that that's that's an approach. I think just doing your penetration testing, getting a third party pentesting to attack your system, you'll probably identify some risks that you hadn't even thought of when you do that. I think threat modeling the way I'd get into it or I think about it, by the way, we'll we'll get into concrete examples shortly.

Defining a Threat Model 🔗

David Giese: But is you first you're creating a model for software. Right. So software is very complex. There's lots of code and we're creating a model of that code. And when I use the word model, I have a real precise definition in my head that I thought a lot about over the years. So to me, a model is a useful approximation or is an approximation of reality that's useful for some purpose.

So, it's it's a proxy. It's never accurate. Perfectly accurate. Right. Like if you wanted to perfectly describe the system, you'd you'd need basically all the source code it will end additional information about the deployment environment and so on. So a threat model like these diagrams that we create, they're a proxy, but they're not going to capture every detail of the system.

And I think one problem I see people make is they try to make them way too detailed. This happens on safety risk management too, where they just get way too detailed and you kind of it actually makes it less effective. So, so, so if we go going back to definition of a model of use of an approximation of reality useful for some purpose if you keep that purpose in mind, you know why you're doing what you're doing, then you kind of can that helps you draw the line like, okay, this is this is two.

Like this is actually making our it's making our, our model less useful because we're making it too detailed. There's that saying like you know, all models lie. Some are useful I think I think that's so, so, so trying to make it to where you perfectly describe the system is, is a fool's errand. And you shouldn't you shouldn't try and do that.

But so anyway threat modeling to me. Yet you create a model of your software system and then you kind of have a systematic approach of thinking about those models, those in particular diagrams, the approach I like to use and FDA kind of requires. Now you systematically go through those diagrams and ask questions. What can go wrong in the way?

When we talk about this, there's a particular approach we like to use called stride. That's kind of, I think, the most popular methodology, although there's definitely a number of other ones, and you kind of systematically go through it. So it's and you identify all these ways things can go wrong. And then you move on to the mitigations, which tends to be produce the software and system requirements and requirements for labeling that you can then verify.

Outputs of Threat Modeling 🔗

David Giese: And you do kind of residual risk analysis at the end say, oh yeah, this is good. Now and then. Yeah. You revisit that threat model periodically over time. We generally do start with it once the system architecture is somewhat well defined. And that's kind of it related to the system architecture part usually in phase one. And then we kind of iteratively review it during phase two of our process.

I don't know any thoughts on this. Yojan Before we dive into a more concrete example

Yujan Shrestha: How about we take one question from the audience and also about this. By the way, if you have any questions, please type them into the into the Q&A or the comments. So we have a question from Rama that she asked for a software on a medical device. Is it an accepted practice to document cybersecurity hazards and patient safety hazards in the same risk management report, or based on the definition, Harm could be a separate document.

Questions from the Audience 🔗

David Giese: Yeah. So certainly you need to have a separate security risk management process from your safety risk management process. And because of that we typically do have a separate report and in in the FDA estar pdf it's a separate like it requires it to be split out. So if you did combine it you would you'd have to at least have one or document.

So yeah we would we would suggest splitting the report, especially since it's often done by different teams like the security. So you say they're going to be reviewed and approved by different people in most cases. So. So yeah, I would I would split them both.

Yujan Shrestha: Thanks, David. Yeah. Also thanks for your question Rama.

David Giese: Oh, and I see Cooper Boyles has a question which I'll, I'll address in just a bit when we get into the specifics. But basically yeah, we do go through and do stride per element. And I'll talk a little bit more about what that means. But that's typically the way we like to do it. You definitely don't have to do it that way.

You can also just describe it, look at the overall system level, which tends to be a little quicker it for a simple system, but cool. Okay I'm going to go. So there's a few things I figured I'd say. So one there's a lot of ways to do threat models. And as far as I know there's no standard there.

Threat modeling kind of originated outside the medical device industry. And there's this guy, Adam Schiff Stack, at Microsoft, who I think was the guy that really popularized it. And he has this pretty good book on threat modeling. And we've kind of built out a process that is based on that and on the miter Threat Modeling Handbook.

But there's definitely different ways to do this that focus on different things. So. Well, I'll, I'll quickly go through the outline. Our process at a high level. And then we can go through an example. So we start by kind of identifying all the major system items that that make up the software system. So this is usually any external systems you communicate with like the Pax, EHR, third party logging tools, cloud systems, all of that. And we would like to have an ID for each component. I find that to be really useful because if you don't have that over time when writing the documentation, people use slightly different terms for different things, and just having the IDs is don't actually typically see people do this. But I find it to be really useful. And we use those IDs throughout all of our, documentation, including in FMEA and so on.

Then we, we create all the security architecture diagrams described in the FDA guidance starting with the global system views and up to date ability patch ability view that kind of shows how you push out updates to the software. And this is especially useful for attacks on that process. And then often usually, well, at least have one security use case view.

And I'll talk a little bit more about how we do these in a minute. But often there's multiple for different situations. So we start with there. Then we'll go through and we'll assess all the cyber security assets and how important or critical they are. And then we do stride by element or stride per element like Cooper you'd mentioned.

And I know not every stride I actually I should even talk about what stride means. So stride stands for spoofing, tampering, repudiation, information disclosure, denial service and elevation of privilege. So these are common types of threats. And what we could go through each of them in detail. We'll get into it when we get to the example a little bit.

But like it it's a model, this is I would say is also a model reality. Like it's not like often threats kind of fall in multiple ones. Multiple buckets. And so again, it's just a useful tool to think about and to kind of facilitate that brainstorming and what we'll then go through, we'll go through each system item in each connection we'll think about, okay, how do each of these threat types apply to each element.

We'll write them down and then often will eliminate some of them that feel kind of far fetched. And then we'll trace them back to the security risks. And this is really our overall process that we follow. So thank you John. Any thoughts about any of the steps here or questions or. Yeah. No thanks. So I started in preparation for this talk, I started trying to write out an example.

So let me switch over. And I didn't quite get as far as I wanted, but I thought it would be helpful for everyone to have a concrete example that we could talk through. So what this is here. This is a really common deployment strategy for AI enabled radiology software. So on the left here we have a hospital network and there's a Dicom router.

This is a piece of software that usually it's ingesting images from the different imaging modalities like the MRI scanner, CT and so on. And based on certain criteria, it may forward the images to other third party software and of course send it to the PACs, possibly send it to research PACs if there's any studies that are going on, and so on.

And so what will happen is this router will be configured to then send images to a local piece of software that's running in the hospital network. This will de-identify the data and send a subset of the data to a cloud, the cloud to for processing. And sometimes that will happen is the companies will want to have the processing module up here so they can monitor it, update it, and for just IP protection or some reasons why this is commonly done.

Detailed Example: AI in Radiology 🔗

David Giese: Or maybe you need special hardware that a lot of hospitals don't have. So then the processing module process, the data often will store a copy of it and then send it back for being re identified. A report is generated and then sent back to the DICOM router for probably storage back in the PACs. So what. This is a little oversimplified here, but I figured it was like complicated enough to be interesting.

And you can see here I've got a legend. So this is what we call an external entity. We got data stores. We have processes and trust boundaries. And I've labeled all of the different system items in the interfaces between them. So underneath like so. So I was thinking we could go through this and apply STRIDE per element and just kind of talk through some threads following our process.

Yujan Shrestha: Yeah let's do.

David Giese: So if we go back let me move this over here. So if we go back to our, our threat modeling process. So okay so we've already identified major system items. Maybe Yujan is there anything you'd expect on here that you aren't seeing like for example, I would probably expect to see logs in the local side to like these are like up in the cloud.

David Giese: But that would be something you might you might end, you know, you could probably break out this local node into smaller subcomponents that like do the report generation and the anonymization.

Yujan Shrestha: Yeah, I think I think this covers asset and like kind of system items like I guess these system items in asset like I guess how the system items and assets correlate.

David Giese: Sure. Yeah. So oftentimes the system items are an asset, but there's also other assets too. So like in this case what would what would some of the assets be. One would be that the I, I was thinking about that for. So the DICOM files that are coming here do have patient information in them. And that patient information is stored in the database so that the data coming back can be identified.

So that's one thing. Another thing I often think about is keys. So like in the cloud here presumably like this connection needs to be authenticated with a pair of keys. And so that would be another asset. The log files the like raw images. They're anonymized. So you know I would say they're like lower value probably to most threat actors who are typically focused on malware injection.

But you know, someone could steal all this data and who knows maybe a competitor I don't know, but that would be an asset for sure. But another asset is some intangible things, just like your business reputation. So like if there's some leak where a bunch of your data is, is lost, like what one asset is your company's reputation.

And that's important to think about just because it affects the harm and the severity levels that in your security risk management. And I think some companies may like larger companies are probably going to view that as a bigger asset. Like we, you know, the maybe a smaller company. Me I don't know you. Does that answer your question.

Yujan Shrestha: Yeah. No not that's great. Yes. So it sounds like assets can both be inside of data stores and processes actually as it's going here.

David Giese: Yeah absolutely. It doesn't map. It doesn't map 1 to 1. Another thing I would point out is, like something that's not on here is like the, the, the source code that's used to produce the system images that run here and here. So like let's say that this is a Docker container on both sides. This is a Docker container that's deployed in the hospital.

And then over here and then there's source code and third party dependencies that are used to build that, that we don't typically show in the global system view. Sometimes you do, but that usually shows up in the update ability and patch ability view. And that's the source code is also an asset. Obviously you don't want you know, you don't want people seeing your source code.

In most cases it's also a threat vector. And that or an attack vector in that you can like what when examples where you have third party dependencies that get injected with malware because the open source maintainer loses their keys or, or dies or whatever. And then a threat actor gets access to it and is able to stick their malware, and people upgrade the package.

And then now their build has malware inside of it, and it's pretty clear they want you to consider that sort of attack. And that's not really shown in here, but that shows up in the updatable compatibility view. But anyway. So okay, so let's say here we've we've finished defining major assets and then next we kind of think about creating system diagrams.

So this here is what I'd call the system view. Now let me show you I sketched out really quickly a security use case. So this diagram for anyone who's not familiar with, sequence diagrams, the way it works is time flows from top to bottom. And each of these is a different node in the system. And oftentimes we'll also have users interacting and perhaps even multiple types of users.

If you have like you know, a lab technician and then pathologist or whatever, interacting in different parts, we would we would often show that to and show them logging in and so on. But in this case, since there's no human interaction, it's a fully automated system. Humans are only interacting kind of down downstream looking at the report, when they view the MRI sequences.

But okay, so time flows from top to bottom. And then these little arrows show interactions between the system. And what I've done here is I've traced out kind of the typical workflow. You know, the data comes in, it's anonymized, the log, let's say some information like the patient ID into the database. Then it transfers anonymized data over the internet to the processing module up in the cloud.

STRIDE Methodology 🔗

David Giese: You know, at what I this is probably too detailed, I would say, but because you could always add more add more detail. Right. But logs that the job started to the log file, say is a copy of the data to the AWS store, runs a model log that finished, sends the results back. It then queries the database for the patient identifiers, re identifies it, creates a report.

It sends it back. So you can you can any step along the way. You can ask questions like well how can someone attack this? How is that connection authenticated. And this is this is yeah, this is useful for the threat modeling. And I would say often you'll have diagrams like this for alternate situations like let's say a common one is configuring a new hospital site.

So like how does that work. What are the steps involved and where there are threats to that process. So this is a pretty simple system. So I probably just have one in this case. But we have had FDA push back. And if you just have one we've had them say hey, we want to see sequence diagrams for these other three situations.

So that is something we've seen FDA and efficiency done. So here's another. This is a generic example. But patch ability update ability view. We use sequence diagrams for this too. So in this case we have the code repository like GitHub or Bitbucket. And often we'll have a build server probably using GitHub actions that does the build. And that build server will then push the results to a cloud environment which then deploys it.

Typically this gets more complicated, like you're usually pulling updates from third party dependency servers like PyPI or NPM. and I typically add that as well. But anyway, I don't know you guys this any questions about any of this or thoughts or. Yeah.

Yujan Shrestha: No, this is really helpful. I'm I'm really eager to see how you apply STRIDE it to be diagrams.

David Giese: Sure. Yeah. So let's let's go over here. And so what I'd start with. So let's look at this. You know I'm going to actually pull up my there's a nice chart in this book I don't I don't remember offhand which it says which STRIDE elements apply to which node types because okay. So for an external entity it's it's suggests that spoofing and repudiation are the main types of threats.

So let's let's think about that I think.

Yujan Shrestha: I think personally I think personally we should go into what the STRIDE element stand for.

David Giese: Sure. Yeah. So let's by the way, this is all still in that big article I had, but I have. I'm kind of accumulating a list of examples. So spoofing is kind of attack against authentication. It's really pretending that you are someone you're not. So like phishing is a type of spoofing attack where you're trying to get access to someone's credentials.

So you can then, you know, log in as them and pretend that you're them. Or here we have another examples like brute force attacks. That's that's another spoofing example. So like in the case of our diagram here. So are there attacks where someone could act as the DICOM router and send you data like would that would is that a threat.

Guess and let's just assume it is like what would be the the mitigation of that. Like you would expect that the data coming in is coming from a certain port, like an AI, certain IP address in port, or you have some or you have, you know, I'm actually not too familiar with DICOM SSL. I know some of the engineers that our team are but so I don't know if that would protect you against this or not, but of course that would just protect the confidentiality of the connection, but that would be an example of yeah, spoofing attack spoofing threat and a possible mitigation set.

Yujan Shrestha: Yeah. So that makes sense. Yeah. So do you, do you try STRIDE to just the boxes or do you apply them to the to like the arrows as well.

David Giese: You do it to the arrows as well. And again in this I should put this table up in our article. But for data flows typically. Yeah. Tampering. Yeah. Information disclosure and denial of service about the main threats that then apply to the arrows. So okay so let's let's think about this connection here. Since this is maybe more interesting.

So like a denial of service attack. Like could you could you do an attack against the cloud or this or this side of the connection where you're just sending a huge number of requests, where you use up all the processing time of that node, just even if it's rejecting, if it's rejecting the request to where you're able to reduce the availability, the system by an attack like that.

So that would be an example of a denial of service attack. And the mitigation would be, you know, making sure there's different technological tools. But one simple thing is just make sure you're not doing any really complicated processing and then and then the request evaluation. But of course there's tools that are specifically meant for that protecting it's service attacks.

So tampering would be like a mean in the middle attack. And so let's say someone is able someone on this because this connection here is going through the cloud. What if someone has access to the internet, like the network infrastructure that's sitting in between the hospital and the cloud? And if you weren't encrypting the connection, they'd be able to read the data, tamper with it, and, you know, move things around, maybe, and then send it along.

And in that case, the processing module wouldn't know the difference, right? It would just operate on the tampered data. So the mitigation for that is a of course, requiring Https for this connection so that there would be. Yeah. Yeah.

Yujan Shrestha: And that makes sense to me why you've mapped threat modeling to more like to more. So FMEA I mean this definitely reminds me of that where you like pick like you pick one component at a time and then you ask a set of questions, and then you add that, and then you assemble all of those findings into the final risk, like the risk management for it.

David Giese: Yeah, exactly. Because it tends. And when you do this, you tend to get a lot of threats because you're going through each element of the system. So it tends to be pretty comprehensive. And in you, we wouldn't recommend putting every single one of these threats in your security risk matrix. And so that I know we're probably getting to the point where we should start taking some questions.

So yeah, because I know we could keep going through for a while, maybe we can do another talk where we go in more depth if people want that. But if we go back to our process here. So we list out all of the threats for each element. And then you kind of go through and you say, yeah, some of these feel a little far fetched.

And then and then ultimately you trace them back to the risks in the, in the mitigations and kind of the outputs are a list of security risks in the list of requirements, software and system requirements that mitigate those threats and risks. And then you, you know, it makes your system more secure. You write having the requirements. It forces you to verify, hey, like we actually did all these things.

We said we do. One common thing I see is people will they'll do all this, but they won't ever have. They won't write requirements for it. And that's, I think, way less useful than and FDA will call you out on that. We've seen them do that a number of times recently. So that's yeah that's really the output of the whole process.

So I don't know what do you think you done maybe should we take a few questions.

When to Stop Threat Modeling 🔗

Yujan Shrestha: Yeah I yeah, I actually have a question.

I call the safety risk. Like, I noticed a lot of questions are around when to stop. Like when is enough. And we've kind of developed some, some internal kind of guidelines around that. Do you have anything equivalent for cyber security. Well like when like how much is enough. How much is too much. Yeah. What do you tell people.

David Giese: Yeah. So no it's it's always it's a that's always a great question. So one thing I would say is when it starts to feel like really silly, that's like okay guys like come on that's usually a good sign that you're starting to reach that point. Now that only applies though, like if I've seen engineers where they just really dismiss the whole process out of hand and into them, they would just say, oh, this, this whole thing is silly from the start.

So they would just stop. So that's not like the perfect heuristic. But if you're coming from like, like a place where you maybe are aware, hey, these threats happen like malware attacks are real. You know, maybe, maybe malware attacks today tended not really focus on medical devices. And I believe that's that's accurate. Like based on some reports that have been put together as like usually those malware attacks are not really coming through that medical devices, but still.

Like they're real. There are real threats and attacks that happen that can hurt patients even beyond malware attacks. And you know, once the team is trained on that and they really understand and they're like, yeah, this is real, then you have that baseline and then you kind of go until you start to feel like, hey, this is this is kind of silly.

I think that the Pentesting is another really nice check on that. And I think that's one of the reasons for insisting on having third party pen testers is it really opens your eyes like, oh wow. Like there is all these ways that the pen testers were able to get access to our system that we hadn't considered. So that's that's a nice wake up call, I think, to people.

Yujan Shrestha: Yeah, that makes a lot of sense. And I didn't quite think of it that way. But I like that the pen test is a good check and balance for doing too little. That. Yeah, that certainly makes a lot of sense.

David Giese: I think to and I know I think I said this last week, but in the FDA guidance there's a list of security controls that FDA likes to see. I mean, certainly like once you go through this, you'll kind of see, oh, here are all the controls that we've identified. And you'll then want to go back to that list and say, hey, we have a few from each of these categories, because, again, FDA will likely call you on it if you don't have security controls from each of those eight categories.

That's another way that you can tell, like, hey, we don't have enough. And purely from a regulatory consideration, even beyond like what actually is necessary.

Yujan Shrestha: So right now to a question from Cooper. So STRIDE for STRIDE, do you go component by component and evaluate whether each type of threat can be present. And then another question do you do this for each of the security architecture reviews.

David Giese: Yeah. So I guess I partly answer that earlier, but so yes, we will I wouldn't necessarily go through for each like security use case view. In addition like I would use them collectively. And these should all be kind of mapping onto each other. In most cases like map mapping under the global architecture view. But okay. And I see there's a related question how granular should security user needs and requirements be?

Should we go as granular as a user need or requirement for each of the eight security concepts?

So I we don't typically have a user need for each of the eight security concepts. But we will typically do is we actually categorize our requirements and we'll have all eight of those categories in our in our requirements list. So if I go over here.

Our requirements table this is in our request system. Here you'll see you know we have all these types of requirements. And then we have the different security types listed out here. And that's kind of how we do that. So we don't generally handle that categorization by tracing user needs. But I don't think that's a bad idea though.

If that's it's how you guys do it. You know, we tend to have our user needs to be pretty high level in the language that users would typically use. And I wouldn't typically expect users. I mean, maybe IT department users, but you to speak in this language. So hopefully I don't. Cooper, if that answers your question.

Yujan Shrestha: Yeah. Well, Dave, I think you did a great job because those were the only two questions.

Adversarial Machine Learning and AI Threats 🔗

David Giese: Oh, I see, I see Richie, Christian just had a question. Thanks for the another informative session. Yeah. That's great. Thanks for coming. So it's nice to see people joining. So what are your thoughts on using framework a framework like STRIDE for adversarial machine learning, input manipulation attacks, data poisoning? If not, have you come across any framework specifically for security risks related to AI and MLL’s ?

That's a great question. So I to be honest, we I have not like really seriously considered machine learning adversarial machine learning attacks in any of the threat models I've been involved with. Although to be honest, I'm doing fewer of the threat models these days. I'm more engineers are doing them. I don't know you, John. Have you seen this come up in any of your projects?

Yujan Shrestha: No, because yeah, like Gen AI I think Gen AI I probably where you would have a lot of this come up in non generative AI. Like you could still have adversarial attacks. The honestly kind of far fetched in my opinion, but I think it hasn't come up much because Gen AI feel as if a new field and there hasn't been a large language model based device cleared by FDA yet, but I could see different attacks, like a prompt injection attack, which is analogous to with people injection attack that's already covered by STRIDE like, I think if you did STRIDE properly, you would identify SQL injection attacks and prompt injection attacks and other of these new

like I think that the framework would still apply. You just have to maybe break out the system a slightly different way so that you can you can have a higher chance of identifying those threats. But yeah, I'm not aware of any frameworks or anything like that. That's. Yeah.

David Giese: Yes. In a, in a system like this, like just trying to think of an adversarial AI attack, like, like I wonder you can have a spoofing attack on this connection where you, you do like, like we're talking about meaning in the middle where you, you get the image data and then you like, if you somehow knew what how the model you I guess if you had access to you could make a tweak to it, a subtle tweak to the image such that it gives a bad result.

Yeah. You know, I think the mitigation of that would be some sort of some sort of data checking.

Like it like I'm trying to think like I can.

Yujan Shrestha: Like when my pins like, that could be just like it just seems so far. Like, like these kind of these kind of these kind of image based attacks. Yeah. Like you could in an academic setting, inject some noise to make it. Now we're going to give the patient a cancer. Right. Or like I don't know. Like what?

Like what what kind of thing? I think this like if those kind of attacks fall into like the silly category, like, why are we even looking into that? But I do think though for like the large language model, ones like you can have a prompt injection where someone steals your source code for like a prompt injection where it like, injects, injects some sort of executable code and, and your system would actually would actually run that depending on how you set up, like your LLM tool usage and stuff like it could actually do that.

I think that's probably more useful. But like the image based ones, like unless your device is doing something really high risk down the other end, it just doesn't seem really that useful to me to.

David Giese: Yeah, I could see a radiology setting where like the like worst case, someone gets you give someone cancer when they don't have a cancer or vice versa like vice versa. But yeah, but but yeah, I could see like, you know, if you have a really important person who has I don't know. Yeah it does it does get hard to think of and I would I think there's a well known thing at this point.

But with cybersecurity it's always like you're never going to make the system perfectly secure. Right? It's always like how? Like making it secure enough that there's no benefit. Like there wouldn't be a situation where someone would actually make that attack. Like you're you're unlikely to perfectly secure your software. You just have to spend enough money to make it unlikely that the attackers would want to do it.

And in this case, the main attackers, I think, are the malware where they can they can get money by locking up the hospital and forcing the hospital to pay them a bunch of money. But like, why would someone want to, you know, tamper with the data in a way that you don't notice that causes someone to get hurt?

Like what? You know, maybe if you're like, trying to attack the president or like someone like, you know, like in the what was that show homeland where they, there was an attack on the vice president's peacemaker? Something like that. Right. But I don't know. I don't know, Richie, if that answers your question, or maybe we're totally going off in a different direction than you had in mind.

Certainly. Yeah. I think there's a whole new class of attacks and threats that, you know, I'm not I haven't read up that well on that topic.

Yujan Shrestha: But yeah, let's try to invite Richie to the to the conversation commentating. Think about the higher bandwidth way to answer the question and see that. Yeah. Here we go. Hey, Richie. Oh, hi. Hey. How's it going?

Final Questions and Closing Remarks 🔗

Richie Christian: Hey, thanks for that. I just responded to for the discussion and in a comment and I think, yeah, it's it's quite an active area of research as it is with, you know, AI ML, Gen AI, but I think off switch which you guys would be familiar with. You know, they're sort of published, you know top ten security risks associated with ML.

And they've got some nice thoughts on how you could mitigate it. And David, you spoke about some of that. Yeah I think this is probably early days. You know, we don't see this kind of area covered in the current guidances. But I wouldn't be surprised if this becomes a thing in the next sort of 3 to 5 years, you know, kind of upping the game on and not security.

Yujan Shrestha: Yeah. Yeah. This is this is really interesting. Yeah. Yeah. I had you know, on topic on its own. Yeah. Yeah.

David Giese: Yeah. I'd be curious to read more about this. Thanks. Thanks for sharing. I had I definitely hadn't seen this last page.

Richie Christian: No worries. Yeah. Yeah that's super interesting. Thanks for the discussion and the example. Yeah. Very helpful.

David Giese: Yeah. How's your week going?

Richie Christian: Good, good. Thursday evenings are quiet for me. I'm in Europe so yeah this works out nice for me at this stage. So yeah. Keen to join as many as I can right.

David Giese: Yeah. Thanks. Thanks for joining that and for your for your great questions.

Richie Christian:So no worries.

David Giese: Well any I don't see any other questions.

Yujan Shrestha: So maybe we just go. We just got one more from Rama.

See when creating cybersecurity risk management report or SaMD Risk management how do you structure risk management categories? For example, would you start with the product and intended uses and branch out from there? Or would you start thinking about the processes, components processing modules and interactions and establishing granularity from there? So it sounds like Rama is asking like, do you start like kind of top down or bottom up?

And it's for risk. It is for risk management.

David Giese: Yeah. So first I'm to I'm going to assume you're talking about security risk management here and not safety risk management. But definitely correct us. If that's not the case. And it says how would you structure the security risk management categories. So I would say we follow. So we focus really on the FDA in the US market. So we have some bias that's maybe not applicable to our groups that are more globally focused.

But what we typically do is in this appendix one here of the FDA guidance, there are these eight categories that are there pretty reasonable and pretty generic general. And these are the categories that we follow. So they're not they don't really vary from device to device in that way. And maybe I'm misunderstanding the question if I'm interpreting in is you're asking if like these categories would change from the device to device, and if so how would you come about them.

But anyway, yeah Rama, I'm, I'm sorry if I'm misunderstanding your question, but that's I think the best I can say at this point, I.

Yujan Shrestha: Would just yeah, I would just add that I recommend doing both. Like I recommend like starting with one doing a little bit and then bouncing the other one doing a little bit, and then just keep doing that, that iterative process. I found that doing it top down and bottom up, like doing it top down will help you do the bottom up, and doing it bottom up will help you the top down.

So you might as well do both, and you might as well do them kind of in parallel, you know. So don't like do one exhaustively and then do the other anyway, that's that's worked well for me if I understand the question correctly. Well yeah. Thanks a lot everybody. I know this was the this is a very this is a very insightful conversation David.

I learned a lot and I hope and I hope you guys did too. And again you know golden rule for for anything med device development is always about something that you want to use on yourself. And also your loved ones. And provided you do that, make it pragmatic and do with little work as possible to get it in the hands of patients and doctors where it belongs.

So yeah, with that, I'll definitely give you guys to it. Thank you for your time. Yeah. Have a good.

SHARE ON
×

Get To Market Faster

Medtech Insider Insights

Our Medtech tips will help you get safe and effective Medtech software on the market faster. We cover regulatory process, AI/ML, software, cybersecurity, interoperability and more.