How to get AI/ML SaMD FDA cleared - Presentation by: Yujan Shrestha

 December 17, 2024

AI/ML, Regulatory, Software

5 Key Takeaways 🔗

  1. Balancing Safety and Speed: Companies must balance delivering safe and effective products with timely market entry, which can be challenging during development.
  2. Statistical Power vs. Generalizability: Statistical analysis ensures validity but does not guarantee a device’s generalizability to the U.S. population, emphasizing the need for diverse and representative data.
  3. Risk Analysis Threshold: Knowing when to stop risk analysis is critical—teams should aim for a product they would trust for themselves or their families.
  4. PCCP for Regulatory Flexibility: The Predetermined Change Control Plan (PCCP) provides a proactive strategy to manage AI/ML changes, reduce regulatory uncertainty, and speed up pre-market submissions.
  5. U.S. Population Data Requirements: The FDA emphasizes that over 50% of clinical data must come from the U.S. to ensure devices align with local practices and demographics.

Participants 🔗

  • Yujan Shrestha - Partner, Innolitics (Presenter)
  • George Hattub - Senior Regulatory Affairs Project Manager
  • Richie Christian
  • Joshua Tzucker - Senior Software Engineer

Transcript 🔗

Discussion Setup and Participants 🔗

Yujan Shrestha: As you guys know, last week was the annual RSNA conference. We were there with a lot of folks, it was a great show, and I just wanted to talk about some takeaways from it. During this call, I'd also like us to discuss the talk I gave there, and maybe we'll do something interesting. Josh is here on the call. Josh, am I able to speak with you?

Josh: I can hear you right now. Can you hear me?

Personal Note - Work-Life Balance 🔗

Yujan Shrestha: Yes, I can hear you. All right. Great, great. Well, the other host that's usually on wasn't able to make it today. So, Josh, I hope it's okay if I put you on the spot. Well, actually, we've got Josh and also Richie Christian on the call too. Thanks a lot for joining, Richie. We've been really happy to have you on for the past couple of shows.

Richie Christian: Glad to be here.

Yujan Shrestha: Thanks. I think what we'll do is I'll just go over some of these slides and we'll do something fun. Rather than me just speaking about these slides, maybe we'll have you guys try to guess the context or what the talking points for each of these slides should have been, and then we'll just talk about them. So the first one is this slide. As part of the rules of the game, you'll have to guess what my talking point for this slide was supposed to be.

Richie Christian: You're going to miss changing diapers while you're at the conference?

Yujan Shrestha: Yeah, I mean, that's close. I was just promising to change both the wet diapers and the poopy diapers whenever I get back home. This was the longest I'd been away from home and from this little guy in a while, so I was just saying that. But yeah, it was a great guess. I'm going to skip over this one because it's pretty obvious what it was about.

Balancing Safety and Market Speed 🔗

Yujan Shrestha: And of course, the services that we provide to our clients, and some of our happy clients here. So this one's probably obvious. At the time I was only given 20 minutes, which makes it difficult to go over all of the intricacies of software as a medical device, so I just had to give a disclaimer. Okay, how about this one? What do you think the talking point of this slide was?

Joshua Tzucker: I’d guess it’s about balancing how you prepare a submission, doing enough to ensure safety and effectiveness without doing unnecessary work.

Yujan Shrestha: Yeah, exactly.

Richie Christian: Yeah, I kind of agree. I think it's why we're in the game. These are not just abstract products; they're going to be used on real patients wherever they fall in the clinical workflow, with whatever risk is associated with these devices. So I guess you're balancing how fast you get to market versus doing the right thing, and it can be tricky at times. It sounds pretty simple, but it can be tricky going through a development project as a team.

Knowing When to Stop Risk Analysis 🔗

Yujan Shrestha: Yeah, definitely. And really, the only experience I would add to that is it's difficult to know when to stop. ISO 14971 says that you want to do risk analysis and ensure that you've identified and properly reduced risks where possible, but it doesn't quite tell you when to stop. In theory, you could go on forever doing it, and that has happened to me, where we've gone on probably way longer than was necessary. I think knowing when to stop really comes back to this rule: you should stop when you feel like you've got a safe and effective device that you'd feel comfortable using not only on yourself, but also on your family.

Statistical Power vs. Generalizability 🔗

Yujan Shrestha: Yeah, that kind of gives a sense of the threshold, of how much is enough. It's abstract, but it's the best guiding principle that we've got. Hey, George, how are you doing?

George Hattub: Good Yujan.

Yujan Shrestha: Yeah. So the game we're playing is that we're going to go through these slides that I presented at RSNA, try to guess what the talking points are, discuss our own personal experiences with them, and just have an interesting conversation. So how about this one, the straightforward sample size slide? Josh, what do you think? And if any of you have thoughts on this, feel free to jump in.

George Hattub: This is a tough one.

Joshua Tzucker: I mean, this makes sense to me, that there are two different things when you talk about the performance of a device: there's the statistical performance in terms of the raw results, and then there's how it actually performs against the intended use.

George Hattub: So we're talking about subgroup analysis, right?

Yujan Shrestha: So this is just saying that the statistical power analysis only tells you whether your experiment is valid; it doesn't tell you whether your device is going to generalize to the overall U.S. population. The way I explain this is that your experiment is statistically powered if you can differentiate between these two bell curves. If the bell curves are far apart, you need fewer samples to be able to tell, with a given level of confidence, that these are indeed two separate distributions, or that they're the same distribution. The power analysis just tells you how many samples you need: if the distributions are very close, you need a lot more; if they're very far apart, you need a lot fewer. But it doesn't tell you whether your device will generalize when it makes it into the post-market, and I've commonly seen folks get confused, or overindex on the statistical power analysis itself, without looking into whether their data matches the U.S. population.
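To make that point concrete, here is a minimal sketch, not from the talk, of how the separation between the two bell curves drives the required sample size. It assumes a two-sample t-test framing and uses statsmodels; the effect sizes, alpha, and power targets are illustrative choices, not values FDA prescribes.

```python
# Minimal sketch (illustrative only): how effect size drives sample size.
# Assumes a two-sample t-test framing; alpha and power are arbitrary choices.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

for effect_size in (0.2, 0.5, 1.0, 2.0):  # Cohen's d: distance between the "bell curves"
    n_per_group = analysis.solve_power(
        effect_size=effect_size,
        alpha=0.05,   # significance level
        power=0.80,   # probability of detecting a true difference
        alternative="two-sided",
    )
    print(f"effect size {effect_size:>4}: ~{n_per_group:.0f} subjects per group")

# Widely separated distributions need only a handful of subjects, which is
# exactly why a "powered" study can still be far too small to say anything
# about generalizability to the U.S. population.
```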

Sample Size Determination for AI/ML 🔗

Yujan Shrestha: Does it represent all of the races and ethnicities in the U.S.? Things like that. You also have to bake that into your study as well.

Richie Christian: Yeah. I guess so much of the effort goes into the primary endpoint, which is where a lot of the stats are driven from, but generalizability comes down to the subgroups, as George was saying. That's super important, especially for AI/ML: having the right models that are going to acquire the images, for example, and a whole bunch of other factors that go into it. So yeah, super important, and I see this as more of a punchline, right? Statistical power doesn't equal generalizability. So I'm sure you managed to catch people's attention there.

Yujan Shrestha: Yeah. This one definitely gets a lot of people. And it's unfortunate, because you can't prove that it's going to generalize, right? It's not something you can prove in the pre-market; it's only something you can really show in the post-market, which is why I think FDA is putting a strong emphasis on post-market surveillance now. It's really hard to know this upfront, but in the post-market, that's where you can really get a sense of it. So how about this one: straightforward sample size determination. How much is enough for non-CADe or non-CADx type stuff? I say 200 subjects with three readers. Any ideas on what this slide was supposed to be about?

Joshua Tzucker: I mean, I know this is a common area of concern for a lot of companies. How do you come up with this number? It's kind of ambiguous.

Yujan Shrestha: Yeah.

George Hattub: You mean like 200, n equals 200? What is the confidence interval and the quality level of that number?

Yujan Shrestha: And it kind of dovetails back to the previous slide: you can't really prove that it's going to generalize in the pre-market, but FDA wants you to show that you've thought about it. If you can't prove it, then how do you set that sample size? And if statistical power is not enough, then what do you set the sample size to? Usually what I've seen is that the distributions we're trying to measure are sufficiently far apart that the power calculation ends up giving an n of ten, or something pretty small. But obviously, if you go to FDA and say, hey, we're just going to have an n of ten, they're most likely not going to let it through, because they're going to say that with ten, we're not confident it's going to generalize. So what we've found over the years of going back and forth with FDA several times is that there are roughly two categories of studies that are needed. The first is what I'm going to call the white box category, or nearly white box.

U.S. Population Data Requirements 🔗

Yujan Shrestha: Segmentation-type algorithms probably fall in this bucket, CADe probably falls in here as well, and some CADx could too, but it depends on how well understood the underlying algorithm is. For these, I'll give a ballpark estimate of 200: if your sample size is around 200, I think you're in fairly good shape for the study. Then there's the other category, which I'll call big black boxes. These are studies where you have pathology-proven data and you've trained a classifier, and you don't know how the classifier works; it's just a big black box. For these, I would put the required data at greater than 10,000. That's the kind of sample size you would need to show that this big black box neural net classifier is working as you intended. So it's a big scale difference, right? But these are the best rules of thumb that I could really come up with. I don't know if you guys have any thoughts about these.
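As a rough illustration of why those two buckets land at such different scales, here is a sketch, mine rather than anything from the presentation, of how the 95% confidence interval on a performance metric tightens with sample size. The 90% observed sensitivity is an assumed number purely for demonstration.

```python
# Illustrative sketch: confidence-interval width vs. sample size.
# The 0.90 observed sensitivity is an assumption for demonstration only.
from statsmodels.stats.proportion import proportion_confint

observed_sensitivity = 0.90

for n in (200, 10_000):
    successes = int(round(observed_sensitivity * n))
    lower, upper = proportion_confint(successes, n, alpha=0.05, method="wilson")
    print(f"n={n:>6}: 95% CI = ({lower:.3f}, {upper:.3f}), "
          f"half-width ~ {(upper - lower) / 2:.3f}")

# At n=200 the interval spans several percentage points; at n=10,000 it is
# well under one point, the kind of precision you'd want before trusting a
# black-box classifier's headline number on its own.
```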

George Hattub: If you're talking about attributes data versus variables data, it's been my experience that variables data requires a smaller sample size, and attributes data, meaning good or bad, or comparing against a gold standard, is where you need a higher number. Does that follow your experience?

Yujan Shrestha: By variable data you mean?

George Hattub: Variables data means a measurement, a continuous number. When you're measuring continuous data, the sample size is smaller. For the sake of this conversation, we're probably talking about attributes, good or bad, like a binomial distribution, and that's where n is higher.
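George's distinction can be sketched with the standard normal-approximation sample-size formulas; the precision targets, standard deviation, and proportion below are made-up inputs used only to show why the attributes (binomial) case tends to demand a larger n.

```python
# Illustrative sketch of variables (continuous) vs. attributes (binary) data.
# All inputs are assumed, for demonstration only.
from scipy.stats import norm

z = norm.ppf(0.975)  # two-sided 95% confidence

# Variables data: estimate a mean to within +/- d_mean, given spread sigma.
sigma, d_mean = 1.0, 0.25
n_continuous = (z * sigma / d_mean) ** 2

# Attributes data: estimate a proportion p to within +/- d_prop.
p, d_prop = 0.9, 0.05
n_binomial = z**2 * p * (1 - p) / d_prop**2

print(f"continuous measurement: n ~ {n_continuous:.0f}")
print(f"binary (good/bad) call: n ~ {n_binomial:.0f}")
# With these arbitrary targets the binomial case needs roughly twice the
# subjects, which matches the experience described above.
```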

Yujan Shrestha: I think it has more to do with explainability and the ease of determining whether the algorithm is correct, and also finding out whether there are any defects in the processing. That's probably what I think, and risk plays a role as well, though this is all assuming it's the same type of risk. The less you know about the internal workings of the algorithm and how it's broken out, the larger the sample size you'd need. On the extreme end, if you have just outcomes data and a big CNN that classifies something, you need a lot.

Richie Christian: Would you put your MRMC studies in the big black box category here?

Yujan Shrestha: Usually these big black boxes don't need MRMC, and my justification is that usually this is outcomes data with a pathology-confirmed diagnosis or some other ground truth that doesn't depend on human performance. So you're not necessarily trying to measure how much better the human is; I just don't think it really makes sense in that case. The MRMC study comes down here instead, because there is a human in the loop and a strong component of that. The standalone performance study and the ground truth you get from it would use the same n. So for the n of 200, you'll get three readers to annotate the data and establish a ground truth, and then you'll get, say, 12 readers to do the MRMC study on the same data. So the sample size would be the same for both.

Richie Christian: You know, one other factor I'm aware of that could influence this is the level of automation. I think they talk about manual, semi-automated, and automated, and usually the amount of validation effort required is proportionate to that as well. In my experience, anyway, you need more rigorous data the further along you go in terms of task automation.

Multi-Reader Studies and FDA Guidelines 🔗

Yujan Shrestha: Yeah, great points. Okay, let's move on. This one's pretty straightforward: basically, more than 50% of your data must come from the U.S. Again, why do we think that is? Why do you think the U.S. FDA, which has said this on almost every pre-sub and almost every application we've done, wants to ensure more than 50% comes from the U.S.?

George Hattub: I think it's because if you look at the mission of the FDA, it is to protect the U.S. public. So this is not official, but like you said, it comes back on additional information requests, and I think the FDA wants to ensure that the practice of care in the U.S. is covered. Does anyone else have anything to add to that?

Richie Christian: I thought it came from a guidance document, I can confirm afterwards, on the evaluation and reporting of age-, race-, and ethnicity-specific data in medical device clinical studies. It has nothing to do with AI/ML specifically, but in this context I think it's also to do with generalizability to a certain extent. And I know they can be flexible with a literature-based justification for why you could get away with less than 50%.

George Hattub: Okay. So it's inferred in that guidance document that you mentioned. It's not specific to AI/ML.

Richie Christian: Yeah, I think so. I need to confirm this, but it comes from somewhere.

George Hattub: Yeah, I know I've gotten in trouble for this. Well, I shouldn't use the word trouble, but it seems like before, let's say, 2020, the FDA wasn't looking at this or following this guideline. Then on subsequent 510(k)s, the FDA will challenge what they cleared several years ago, and the burden is on you to show that it's not a problem, that it's the same standard of care. But what do you think, Yujan?

Yujan Shrestha: Yeah, exactly. I think the same. And to your first point, George, FDA's mandate is to protect the public, mostly the U.S. public. They're granting pre-market authorization in the States, so they want to make sure the device will generalize in the States. And again, you can't prove that something's going to generalize before you actually prove it, but there are rules of thumb, and this 50% is another rule of thumb; the source is an FDA guidance. For the multi-reader study we were mentioning before, we've seen that 12 readers seems to be a happy middle ground.

George Hattub: Where did you get the number 12 as opposed to let's say ten?

Yujan Shrestha: You know, this one doesn't have a guidance document that we can really point to. It's more that we've been through FDA, either through pre-subs or getting hold letters back from FDA. Sometimes if we go with too few, they might ask for more; if we've gone with too many, they're not going to ask for more. So it's a judgment call that 12 is a good number to shoot for, based on that. It's the same way we came to this n of 200: we've tried an n of 40 and that was too little, and we've done an n of 500 and there were no complaints. So it's a bit of a guessing game with these two. I don't know if that answers your question.

George Hattub: So you're saying 12 would be what you would propose, and you're willing to go down to ten? Or is 12 or greater just the magic number with the FDA?

Yujan Shrestha: No. Again, if you have fewer than 12, you could justify it some other way. FDA looks at the totality of the study that you provide, not necessarily just one thing. So if it's fewer, that's not a guarantee that you won't get cleared.

Handling Software Modifications and Regulatory Pathways 🔗

Yujan Shrestha: But I think if you're trying to design the study with that sweet spot in mind, kind of going back to one of these previous slides, this is supposed to be an 80/20 rule. If you're designing for that sweet spot, I would recommend 12 readers.

George Hattub: Okay I see.

Yujan Shrestha: Yeah. All right. To file or note to file, that is the question. Y'all probably know what this is going to be about; this topic has come up a lot of times, right? I don't know who wants to start. How do you handle ongoing changes? Maybe we'll scope the conversation down, since we're talking about AI, specifically to AI.

Yujan Shrestha: Yeah. What have your thoughts been when a client asks what would be enough of a change to have to file another 510(k), either a special or a full one?

George Hattub: So, I mean, the FDA has two guidance documents: modifications to a 510(k)-cleared device and then software modifications. It's pretty open to interpretation, but as a rule, what I've found is some companies will say that if we add a feature or we go from, say, software version 3 to 4, those would be the thresholds, and then maybe versions 3.1, 3.2, provided that they work within the guidance documents, would be your so-called letters to file. I don't know if that's been other people's experience.

Joshua Tzucker: I mean, I know a lot of times we're talking about a matrix of possible risks to the patient introduced by a change, and the likelihood and severity. I think AI complicates this a lot because, with a standard new feature or development change, it's kind of deterministic; it's very easy to reason about whether it's going to introduce a new risk or modify an existing risk. With AI it gets difficult, because sometimes you're talking about these large black-box outputs, and it can be harder to fit things into that same matrix.

George Hattub: Yeah. I think one of the problems that comes up is that when you make changes and do the so-called letters to file, taken alone, one change is not a problem, but over a period of time you can drift away from your original 510(k). Then when you decide to submit a 510(k), it can be problematic to explain how you got to this point, because you've already released software. So I always look at it the way you were just saying, in terms of risk: why are you making the change? Is it because you've discovered a risk, or is it because of a complaint? If it's because of a complaint, then you have to put it into your system, open a CAPA, and that opens you up. When the FDA comes in, they're going to see you made a change because of this, and that could bring up the question of why you didn't submit a new 510(k). So it's not a black-and-white decision, and it always comes down to, like you said, risk; you have to have the risk analysis. The other thing I'll say: if you add a feature and do a letter to the file, and your competitors are watching you, or maybe they're not monitoring you, but let's say they submit a 510(k) and use you as a predicate device, you want to avoid the situation where the FDA is reviewing the predicate device and says you never had clearance for that. That's one way it can backfire on you. So those are just real-life stories, in a way. Yeah, you have to make changes, but it's how you handle them.

Yujan Shrestha: Do you think this diagram I'm drawing here is accurate to what was said before? It's backtracking to what we said, and it's kind of circular: we have a convention that we're going to change the version numbers depending on the outcome of what we think. But this is one way to capture it. Is this what you guys have seen as common practice?

Richie Christian: I normally see versioning being tied to the extent of change rather than necessarily to regulatory actions. You know what I mean? Because that's always one step further. But as a concept, this makes sense to me that if you had the right balance, then this is how it would occur.

Yujan Shrestha: Yeah. This could be a way to tie together the regulatory team and the software engineering team. You'd have one matrix for regulatory-type changes and for what would be considered a minor version change on the software engineering side. This could be a cybersecurity update, or a minor bug fix or something, and then you would harmonize that with what it means on the regulatory side; that's almost certainly just a note-to-file example. And if it's a more substantial change, and I'm not sure, Josh could probably come up with an example, maybe it's no new requirements, or low-risk features or something. But anyway, some way to harmonize the two.
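One way to picture the harmonization being described is a simple lookup from change type to version bump and provisional regulatory action. This is an editorial sketch, not a diagram from the talk; the categories and mappings are illustrative assumptions that each team would have to define and justify for itself.

```python
# Hypothetical sketch of a change-classification matrix that maps software
# change types to a version bump and a provisional regulatory action.
# Categories and mappings are illustrative assumptions, not regulatory advice.
from enum import Enum

class RegulatoryAction(Enum):
    NOTE_TO_FILE = "Document in the design history file; no new submission"
    PCCP = "Covered by an authorized Predetermined Change Control Plan"
    SPECIAL_510K = "Special 510(k)"
    TRADITIONAL_510K = "Traditional 510(k)"

# change type -> (version bump, provisional action)
CHANGE_MATRIX = {
    "cybersecurity patch, no functional change": ("patch", RegulatoryAction.NOTE_TO_FILE),
    "minor bug fix, no new requirements":        ("patch", RegulatoryAction.NOTE_TO_FILE),
    "model retraining per an authorized PCCP":   ("minor", RegulatoryAction.PCCP),
    "new low-risk feature, same intended use":   ("minor", RegulatoryAction.SPECIAL_510K),
    "new indication for use":                    ("major", RegulatoryAction.TRADITIONAL_510K),
}

for change, (bump, action) in CHANGE_MATRIX.items():
    print(f"{change:<45} -> {bump:<5} bump, {action.value}")
```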

Richie Christian: What are your thoughts on this first point, where you say retraining the algorithm does not constitute enough of a change for another 510(k)? The reason I ask is I've seen some recent clearances that have PCCPs authorized as part of them, and they've definitely included retraining of the algorithm with new data as part of the PCCP, which indicates to me that it would otherwise have been significant enough to require a 510(k) in the absence of a PCCP.

George Hattub: Was that a big company or a small company where you've seen this?

Richie Christian: I need to look, but they were familiar names.

George Hattub: Yeah. The reason I'm asking is that in my experience, I've found big corporations tend to be very conservative. They may choose to do something where a smaller company might say, I disagree, I would have done a letter to file. I'm not asking for any names; it's just that this has been my experience too, that larger companies tend to be more conservative. Sometimes they have former FDA people on their staff, or they have lawyers, and they can survive the time. Not saying the ends justify the means, but that's just how it is.

Yujan Shrestha: Yeah. I think that's what it comes down to: the risk tolerance of the particular institution. There's a soft line somewhere between what's a note to file versus a special 510(k), and the larger companies tend to be a lot more conservative, so they'll put the threshold way down here. Retraining probably sits somewhere in the middle of this scale. All the way on the right would be a new indication, where we can all agree that's probably going to be a new filing, and all the way on the left is something like a cybersecurity upgrade, which FDA has made clear doesn't need a special 510(k). So where you draw the line is variable. And the new tool that we have available now is the PCCP. The PCCP is designed to cover changes that used to sit in this gray zone: with a PCCP, you get more confidence baked in and a blessing from FDA that says, okay, for a retraining, we have blessed your verification strategy such that it's no longer a judgment call.

FDA Challenges and Regulatory Evolution 🔗

Yujan Shrestha: But it's not required. This is for if you want to de-risk it further, which I think is a good idea, especially if you're a more conservative group. And if it otherwise would have required a whole bunch of special 510(k)s, you could save a lot of money doing one PCCP. But yeah, that's my interpretation of it: we have this new tool now where you don't have to guess whether something is a note to file or a special.

George Hattub: Yeah. Another thing, too: I started my career with physical devices, so it was pretty black and white as far as how you could change your device and whether or not you had to do a recall, things like that. With software you can make changes instantaneously, so how does the FDA differentiate the rules? The rules, once again, were really written for physical devices, and I think this tool you're talking about, the PCCP, is meant to fill that gap. I don't know if that's been other people's experience or understanding.

Yujan Shrestha: My reading between the lines, if you put yourself in the shoes of FDA: I think one concern was that they were getting so many of these special 510(k)s that were low risk and just clogging the pipeline. So they're thinking, how can we reduce the workload and also make it easier for device manufacturers to get these changes through without always having to ask us? That was one reason for the PCCP. I think the other reason was for AI, to be able to have AI that continuously updates and retrains itself out in the field; the PCCP is really the only way to do that. Putting yourself in the shoes of FDA and asking why they came out with this is an exercise I've always found helpful. It helps you make the right judgment call, because knowing where to put this threshold is really hard, but if you consider FDA and their challenge of managing limited resources while maximizing the safety of the public, it makes a lot of sense. It also helps me interpret the terms they use, like when they say new risk or significant change. They leave a lot open to interpretation, and I always come back to the examples they give in their guidance, and to putting myself in the shoes of FDA and asking why they put out this guidance.

Joshua Tzucker: To expand on that a little bit, I really like that train of thought, because a lot of times when I think about this, I go up a level to the crude question of, what is the purpose of all of this? To me, a PCCP, thinking outside the formal regulatory framework for a moment, is like asking a favor of someone: hey, can I crash at your house? People usually like a heads-up for surprises. If you say, hey, can I crash at your house, and by the way, on Tuesday I'm going to have all my bandmates over and we're going to be practicing really, really loud, it's nice to give the person that heads-up, versus telling them five minutes before your band practice on Tuesday. A lot of this comes back to what provides the most value while not being overly burdensome. I think both sides really want the same thing, and that's what this is getting at.

George Hattub: Yeah, I agree. Another thing with the special 510(k) is that I often tell people FDA literally has to drop what they're doing to work on it: when you submit a special 510(k), FDA has 30 days to make a decision, and it had better be the right decision. And like you said, with software, people can submit a lot of special 510(k)s, and FDA could be overwhelmed by them. If the knee-jerk reaction is to put them on hold, that just ties up resources for the FDA and everybody else; you have to have the ten-day meetings and things of that nature. So, like you said, they're probably looking at what they can do to still stay within the regulatory framework and come up with a compromise.

Yujan Shrestha: Yeah.

George Hattub: They're not going to tell us this; we have to infer it. But I think right now, between all the things the FDA has to do, if your 510(k) gets put on hold, you have a ten-day meeting, and if you do it within a certain time, you can get a Q-Sub. Meanwhile, people are using FDA almost like consultants with the Q-Sub. Internally, the FDA is probably asking, how do I handle all this? I'm not getting more resources, and meanwhile they're regulating other products that should probably be made exempt, physical products where you don't really need a 510(k). So I think this is one of the things happening, and who knows, with the change in politics, whether Elon Musk is going to look at what they're doing at FDA and ask, why are you not allowing companies to make their own decisions? How can you guarantee that you're protecting the public? Maybe you're not, and we've just created a lot of paperwork; leave it up to the companies, and if they make the wrong decision, that's when you send in the investigators and see whether they were following good practices.

Yujan Shrestha: Yeah, definitely. I would agree with all of that. I think if we go and look at the timeline of when the note-to-file guidance came out and when special 510(k)s came into existence, I bet we'll see a pattern: at some point it was maybe just the traditional 510(k), and then there were way too many of these small changes.

PCCP as a Strategy for Pre-Market Submission 🔗

George Hattub: I was alive during that time and I remember it. That was when there was political change, when the Republicans under Newt Gingrich were in charge, and the FDA had to react. In the late 90s, that's when they came up with the concept of the special 510(k). They also had the modifications guidance, and that happened during a period when the FDA was being accused of not getting things cleared or approved, and when the public was learning that medical devices have a different regulatory scheme than, let's say, drugs, where for medical devices something like 90% of approvals are just 510(k)s, which are paperwork exercises as opposed to clinical trials. So what I'm trying to say is that's when those changes happened, the special 510(k)s and other guidance documents, and I think we're entering a period where that might happen again, where the FDA will be forced to say, "These types of devices are not going to require 510(k)s," and maybe for software products they'll either push you toward PCCPs or come up with different rules to get things through quicker. Another thing I'm wondering, and they don't publish this metric, but does anyone get a 510(k) cleared in 90 days anymore? I'm saying that rhetorically. It seems like everybody gets an AI letter; it's almost like if you don't get an AI letter, you start wondering if the FDA reviewed your application at all. And with the eSTAR template there are a lot of obstacles, so you almost automatically get an AI letter. I don't know if other people have seen that or wondered the same thing.

Yujan Shrestha: Yeah. My experience is that lower-risk, more straightforward LLZ devices, for example, have gone through recently without getting an AI letter, but I would agree that in most other cases you almost always get an FDA AI letter. So I wouldn't mark that as a negative outcome; it's just part of the process at this point. One thing I would add about the PCCP, the way I've internalized it with my software engineering brain, is that it's a lot like test-driven development. Test-driven development is a practice where you write your tests for your software first, and those tests initially fail because the software isn't written yet, or the feature doesn't yet perform at the target. But once you've written the tests, you could delegate the task to another engineer, or even give it to a completely different engineering firm, and you've essentially pre-greenlit what you think is going to be an acceptable result. I think of the PCCP similarly: it's a way you can pre-verify your verification and validation approach with FDA without having a completed product at that point. FDA can greenlight it, stamping that the approach looks good, and if your software passes those tests, then they agree you can make those changes without having to go back and ask for permission. That opens up some interesting possibilities, like the one we're talking about in this diagram. It could unlock opportunities to submit a 510(k) sooner, even though some of your features aren't done, with a PCCP that outlines your testing methodology. That could be yet one more way to use the PCCP: not just for post-market changes, but also to accelerate the pre-market. Any thoughts on that?
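The test-driven-development analogy can be made concrete with a small sketch: the acceptance criteria are written and agreed first, and the retrained or extended model only "ships" once it passes them. The thresholds, function names, and Dice metric below are hypothetical placeholders, not taken from any actual PCCP.

```python
# Hypothetical sketch of the "tests first" idea behind a PCCP: acceptance
# criteria are fixed up front, before the improved model exists.
# Names and thresholds are placeholders for illustration only.
import numpy as np

ACCEPTANCE = {"mean_dice": 0.85, "min_dice": 0.70}  # pre-agreed performance targets

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary segmentation masks."""
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def passes_acceptance(dice_scores: list[float]) -> bool:
    """The 'test' written before the new model exists: pass only if targets are met."""
    return (np.mean(dice_scores) >= ACCEPTANCE["mean_dice"]
            and min(dice_scores) >= ACCEPTANCE["min_dice"])

# Stand-ins for a locked validation set and a future retrained model's output.
rng = np.random.default_rng(0)
truth_masks = [rng.integers(0, 2, size=(64, 64)).astype(bool) for _ in range(5)]
pred_masks = [mask.copy() for mask in truth_masks]  # placeholder predictions

scores = [dice(p, t) for p, t in zip(pred_masks, truth_masks)]
print("meets pre-specified acceptance criteria:", passes_acceptance(scores))
```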

George Hattub: So is this how you're envisioning it?

Yujan Shrestha: This is one use case of the PCCP that I'm thinking about.

George Hattub: Okay.

Yujan Shrestha: Say you have an algorithm, and the core algorithm is done, but all the plumbing for that algorithm, like the cloud deployment and all that infrastructure, isn't done yet. You don't want to take the risk of doing all of that as a note to file; it comes back to this threshold, and you don't want to push the threshold all the way up here. But I think there's a case where you could get that core algorithm submitted, and then, with a PCCP, promise to do those downstream things and outline the test cases you would use for them. You could submit your 510(k) several months in advance and then build out those features while you wait for the clearance to come through.

George Hattub: That would be nice.

Joshua Tzucker: I mean, there is a flip side to this, right? With the PCCP you are making a promise, so you're handcuffing yourself a little bit to a certain path forward. Maybe another software parallel would be publishing a specification in advance: saying, okay, in the future we're going to implement this feature, and here's what the API is going to look like. It's the same kind of thing, where you can paint yourself into a corner, because people start building on the new spec and you can't make changes, since they're already relying on what you promised.

PCCP Limitations and Commercial Considerations 🔗

Yujan Shrestha: Yeah. Just to clarify, though, Josh, just because you have a predetermined change control plan doesn't mean you have to execute it; you could just shelve it forever. But if you do want to make the change, you're right that you are boxed into it, or you risk invalidating the PCCP, and then you're back to the question of whether it's a note to file or a special, back to the gray zone.

Joshua Tzucker: And potentially have to scrap a bunch of work you did putting together the PCCP in the first place.

Yujan Shrestha: Right.

Richie Christian: Yeah. I find the PCCP to be a good commercial tool as well as a regulatory strategy tool, in that it could quicken revenue generation by getting to market earlier. I think that can be helpful for a lot of startups who can just get their V1 out. But on the flip side, the PCCP talks about cumulative changes. Here on your diagram you talk about version 3.0 development and deployment; at that point, you're assuming that you followed through with version 2 and haven't canned it, because if you did, and somehow there was an interaction between 2 and 3, now you're back to that note-to-file-or-special question again. So I think it's a great tool, but cross-functionally, product people need to be on board, and they need to have done their discovery and whatnot to be able to put it into a PCCP as a formal commitment. That can be challenging, but it's worth it if it gives you quicker entry to market to begin with.

Yujan Shrestha: Yeah, and that's also an excellent point: the PCCP sometimes requires you to have a crystal ball. You need to know what changes you're going to make at a fairly concrete level, and you don't want abstract changes, or changes that could significantly change the device, especially its intended use. That can be difficult to do looking long range. That's the whole reason we have agile software development: it's really difficult to predict what the market wants before you actually deploy. So you're absolutely right that this limits the utility of the PCCP for thinking too far ahead. But if you know ahead of time what you need, and I think this is especially true for AI companies, a lot of them are trying to add multiple AI indications and they kind of know what they are already: hey, we want to start with lung and then we want to do heart, or whatever.

PCCP Practical Implementation and Q&A 🔗

Yujan Shrestha: They kind of plan it out, and in that case I think a PCCP is reasonable. But if you don't yet know what product-market fit looks like, you're right, it might be a big waste of time to do all that up front, and it also adds some risk to the first 510(k): you're pulling future risks into the present, which sometimes isn't worth it. So yeah, definitely things to consider before you do a PCCP. Maybe we'll continue this next week since we only have two minutes left. Maybe I'll talk about it in two minutes, but that can be hard to do. Is there anything more on the PCCP that you guys think would be useful to discuss?

George Hattub: All good.

Yujan Shrestha: Cool. Let's see if we have any questions from the audience. Sorry for the late notice, everybody. Looks like we've got a question from Cooper. Thanks a lot, Cooper. The question is: how specific does the PCCP need to be? For example, if I wanted to integrate software with hardware devices, can the PCCP say we're going to add devices and test each with XYZ protocol, modified for each device as appropriate? Or do you have to list out every single device you're considering adding, preemptively?

I would say the more specific you can get, the better. You will have an opportunity to go back and forth with FDA, probably during the AI letter, where they might say they don't agree with the PCCP because it's too broad. So I would recommend maybe going a little broader and then meeting the FDA somewhere in the middle. But unfortunately, I don't have any more concrete advice for what I would do in this case.

Richie Christian: Let's say even before the submission, in a pre-sub, you put this question in front of the FDA. I think the FDA also encourages discussing the PCCP in a pre-sub. So that could be another thing to consider.

Yujan Shrestha: Yeah, that's a great point. Also, PCCPs can be added after you already have a product on the market; if you want to add a PCCP then, that can be done too. So really it's a tool to de-risk the file-versus-note-to-file choice and to move that risk earlier into product development, so you can de-risk it in parallel with development. You don't end up with this weird situation: well, we've already developed it and we're going to release it, but do we do a special? And if we do a special, we have to wait to release it, but marketing's not going to like that. It's not good for the business and it's not good for patients.

Conclusion and Closing Remarks 🔗

Yujan Shrestha: So it can be used as a tool to break that tie. In a perfect world, you get a PCCP cleared first, build the feature in parallel, and as soon as the feature's done, you execute the PCCP and get it released on day one with very little risk. You don't have to ask the questions: note to file or special, are we going to get audited, is this going to be a finding later? So anyway, guys, I've got to run. But thank you so much for your participation, especially on short notice.

George Hattub: Thank you.

Yujan Shrestha: Yeah, I hope to see you guys again next week. And definitely remember the golden rule: design for others as you would want for yourself, and after that, find the least burdensome pathway.

Richie Christian: The whole bit. All right.

George Hattub: Bye now.

Yujan Shrestha: See you. Thanks.

Richie Christian: Bye bye.

