Webinar - Empowering Regulatory Affairs with AI: Challenges, Insights, and Future Vision

 December 02, 2024

Participants 🔗

  • Yujan Shrestha - CEO Innolitics
  • Marcus Emne - Webinar Host
  • Martin King - Regulatory and Quality Leadership Expert
  • Harsh Thakkar - CEO and Founder of Qualtivate
  • Lutz Finger - Faculty at Cornell University
  • ChatGPT - AI panelist

5 Key takeaways 🔗

  1. AI's Role in Regulatory Affairs: AI is being increasingly adopted in regulatory and quality assurance processes to streamline tasks like regulatory intelligence, compliance monitoring, and documentation preparation. However, human oversight remains essential.
  2. Challenges in AI Implementation: Key barriers to AI adoption include organizational resistance, data privacy concerns, and a lack of foundational data infrastructure. Overcoming these requires strong leadership and clear AI policies.
  3. AI as an Augmentation Tool, Not a Replacement: While AI excels at repetitive and context-driven tasks, it cannot replace human judgment and decision-making, especially in complex, nuanced situations like FDA submissions or risk analyses.
  4. Need for AI Policies and Frameworks: Organizations must develop clear AI usage policies to guide safe, efficient, and ethical implementation of AI tools in workflows, ensuring data security and compliance with regulations.
  5. Future Vision of AI in Regulatory Workflows: The future of regulatory affairs involves a seamless integration of AI tools to empower teams, enhance efficiency, and improve decision-making. Companies must embrace this shift while building internal AI knowledge bases.

Transcript 🔗

Welcome and Overview of the Webinar 🔗

Marcus Emne: Welcome to everyone who has already joined us. I'm just going to say a few words about this webinar. It's going to go on for about an hour and a half. There will be polls, three of them; one will come rather soon. And there will be a question and answer session at the end of this webinar.

Instructions for Audience Engagement 🔗

Marcus Emne: Now, for all of you joining now: whenever a question pops into your mind, please use the question and answer button at the bottom of the screen, just hover at the bottom, and you can submit your question at any time. We will, however, not address them until the Q&A session, but go ahead. And if you're shy, remember, you can always ask on behalf of a friend.

Introduction of the Panelists 🔗

Marcus Emne: What else? Well, we have a fantastic panel of guests and experts today, so why not start by having them introduce themselves? Mr. Martin King, would you be so kind as to start? Introduce yourself.

Martin King: Thank you, Marcus. So my name is Martin King. I'm based in Europe; I've actually been based in Switzerland, in the middle of Europe. I am originally British, but I've been in Switzerland for 35 years, and I've worked in medtech and pharma for too long to tell over the years. I started off in development; by education I am an electronics engineer, a microelectronics engineer and a materials scientist.

Nowadays I've been doing regulatory work, in regulatory and quality leadership roles, for 20 years. And that's what I do now most of the time: helping companies get back on track after they've gone off track, or helping them understand new regulations or breakthrough changes associated with their products. That's me.

Marcus Emne: Thank you so much, Mr. Martin King. Mr. Yujan, could you please introduce yourself?

Yujan Shrestha: Sure. Thanks a lot, Marcus. Yes, my name is Yujan Shrestha. I am based out of Denver. I was born in Nepal and came to the States when I was four. My background is in biomedical engineering, and I also have an MD. Nowadays I'm the CEO of Innolitics, where we help our clients write medical device software: that's software as a medical device, software in a medical device, and AI as a medical device. We also help our clients get FDA cleared. I've been doing this for about 13 years. And yeah, I couldn't pass up an opportunity to just nerd out about AI and medical devices, so I'm happy to be a part of this panel and really looking forward to the conversation.

Marcus Emne: Harsh. Please go ahead. You're muted.

Harsh Thakkar: Can you hear me now? All right. Thanks to Marcus, first of all, for having me. I echo the same thing Yujan said: this is a phenomenal time to be working in the life sciences industry. So I want to say that I'm here to share what I know, but I'm also here to learn from all the others here. For those of you who follow me and know about me, I'm the CEO and founder of Qualtivate. We are a quality, compliance, technology and data management consulting firm. I started it two years ago, after being in the industry for about 15 years as a consultant and an employee in various quality, compliance and software validation roles. I also host an industry podcast called Life Sciences 360, where I talk with other biotech executives and leaders, and I occasionally, or I shouldn't say occasionally, I post a lot on LinkedIn about quality management, my personal life, parenting, consulting. I'm an open book, so I'm really excited to be here. Thanks again, Marcus.

Marcus Emne: Thank you, Harsh. And finally, Mr. Lutz Finger, from Denver, or in Denver?

Lutz Finger: Not so much in Denver; the San Francisco Bay Area. So yes, it's 6 a.m. in the morning, Marcus, or it feels like 6 a.m. Guys, amazing to be here. Awesome panel. Good morning from the West Coast. I'm Lutz, Lutz Finger. I'm faculty at Cornell, and I'm the author of an eCornell course that recently came out that explains all things AI.

We go into hundreds of hours of deep learning and generative AI. It's a no-code course, and it's meant for essentially anybody to learn how to use AI applications. And that's what I have been doing for, I just calculated, the last 25 to 30 years. I have been looking at the intersection between product and AI and trying to make AI useful. Because AI, let's face it, is just a tool, and we will touch on this today in our fun discussion as well.

I have been building AI solutions for media. I have been building them for healthcare. I helped to build up Google Health, I helped Snapchat go public, and I worked at LinkedIn, and I have had my own companies that I sold in the past. And now I just recently founded a company focused on generative AI and AI as a platform.

I am a little bit like Harsh: I'm an open book. I have my own podcast called The Edge, together with Cherry Ventures out of Europe. I also have a Forbes channel where I talk about different developments in AI and how to use AI. You can do my course at eCornell, as well as, obviously, reach out to me on LinkedIn.

Marcus Emne: Right. And we have one more panelist with us today, Mrs. ChatGPT. Could you please introduce yourself?

ChatGPT: Hi, everyone. I'm ChatGPT, and I'm excited to join this panel and discuss AI's role in regulatory intelligence and surveillance. By processing and generating natural language responses based on vast amounts of training data, I can help interpret complex topics, identify relevant information, and deliver actionable insights tailored to your needs.

I'm able to communicate with you today using advanced text-to-speech technology that transforms my written responses into lifelike spoken words. This enables me to engage in dynamic discussions and provide valuable insights on the spot. I look forward to exploring today's topics and showing how AI can empower your regulatory work. Thank you.

Lutz Finger: Did anybody of you guys feel like this was kind of, okay, get on with it, we understand who you are? Because honestly, I did not know the backstory of Martin, and Harsh, you are so freaking amazing. But ChatGPT, if she had just said, or it had just said, "I'm ChatGPT", I would have been like, yeah, good, I know it. So yeah, that is the downside of generative AI: way too spammy, way too much talk.

Marcus Emne: Yeah. ChatGPT wouldn't have made many friends at the party.

Lutz Finger: Keep yourself short. Please!

Martin King: Yeah. No, not unless it gave away free drinks.

Lutz Finger: It's actually funny, and just as an FYI for our dear listeners: Hoodin prepared this really well. We had a prep call and we had a list of questions. Now, obviously, I'm cheating. For my course I created a copy of myself, so that thing knows everything I know. So yesterday evening, in prep for this call today, I pasted all the questions into it and asked it for answers. I won't read them out to you, because it would be as boring as ChatGPT, but you get the gist.

Marcus Emne: Haha, great! Well, what a fantastic start. And dear viewers and listeners, besides just listening to us, we would like to engage you as well. So why not have the first poll straight away? It's ChatGPT dealing with the polls, and it takes some time. Okay, now you should have it in front of you. We, the host and the panelists, cannot vote, but I guess you can see it.

Regulatory Terms and Definitions 🔗

Marcus Emne: So we're asking you a little bit about which areas you see as most critical for AI within regulatory intelligence and surveillance, or, let's rephrase, think regulatory affairs and quality assurance, to be a little bit wider. And reading this, it strikes me that one of the quite important things we need to do during this webinar, or rather before we get started with the discussion and the questions we're addressing today, is to provide some definitions, right? We mentioned regulatory intelligence, regulatory surveillance and regulatory vigilance. So what is what? Why is it so important, and probably challenging, to perform? And I know one person on the panel who knows this probably better than anyone else. Mr. Martin King, could you please tell us what you think is the best definition to go with?

Martin King: Sure, I'll be happy to. I just thought one of the questions missing in that poll was actually: what is regulatory affairs in general? So first of all, you have to get to what regulatory affairs is. I always say regulatory affairs is about finding the way. It's navigation, it is finding the way, because your goal is to get your product on the market and keep it on the market. And anybody who thinks there's a fixed solution, I can tell you: you can take the same device two times, with slightly different indications for use, and your regulatory pathway will be different. So regulatory affairs is about finding the way.

So back to our definitions. We had this discussion yesterday on what is vigilance, what is surveillance, what is intelligence. Actually, we're quite fortunate, because the FDA gives us definitions for surveillance and vigilance. Vigilance is the science and activities relating to the detection, assessment, understanding and prevention of adverse events or any other product-related problems. Sorry, I swapped the word drug for product, right? Because vigilance with the FDA comes out of pharmaceuticals. What does that really mean? It means it's about putting out a fire. You wait for something to happen, you deal with it, and you make sure it doesn't happen again. It's about making sure the patients using the products are safe, right? So if you have an event in the market, and it's a negative event, you don't want it happening again, so you take action. That's what vigilance is all about: monitoring what's going on and putting out the fire as soon as it starts.

Surveillance they define as the systematic monitoring of medical products, once they are on the market, to ensure they continue to meet regulatory standards and do not pose any risks to users. So it's an active process, not a reactive one; it's about keeping your product on the market and keeping it safe. It's about trending, it's about forecasting. If you see something going in a negative direction, maybe you send out a new IFU, or some kind of new notice to inform the users that they should be cautious about using the product.

So when we get to regulatory intelligence, this is a bit more tricky. There are no real definitions, and yet we see this wording used really often. So I came up with a definition, which says it's a broad, systematic gathering and analysis of information on regulations, guidelines and policies, gathering all this information from wherever it comes, within your area of focus, within your scope of operation, to assure that you're up to date and compliant. The worst case is a regulation comes out and you discover that all your products on the market are no longer compliant, and you're going to have to take them off the market. You need to know before the regulation comes out, and you need to take action before the regulation comes out. That's why we have draft guidance, for instance. That's why we have transition periods. And as we see in Europe, one transition period is not enough; they keep coming up with new transition periods, because no one can keep up with the MDR and IVDR. So those are the definitions. Intelligence is not defined, but it's about staying ahead of regulations and guidance; vigilance is about putting out fires; and surveillance is about making sure the fires don't happen in the first place. There you go.

Marcus Emne: Very good. Very good definitions. Keeping the product safe for both the patient and the P&L.


Marcus Emne: There are many, many aspects to this, of course, but I think those definitions are well put, and I think we should keep them for today. But then we have the context of AI on top of regulatory intelligence, surveillance and vigilance, and quality assurance as well, for that sake. To get started, I think one thing that we are all very curious to know about is how you currently use AI technology for regulatory intelligence, surveillance, vigilance or similar areas.

Applications of AI in Regulatory Intelligence 🔗

Marcus Emne: Anyone who would like to start: how do you use AI today? You're limited to three minutes, by the way, just so you know. All right, tell us your best experience as of today. Besides making a clone of yourself; that's a good one.

Lutz Finger: Well, I mean, making the clone of myself is not just a joke, right? It is obviously a neat learning experience for my students, because they learn how to clone first their visual image, their voice image, as well as the content of a person. So just maybe as a backdrop to explain how this looks in the course: as you can kind of hear, I have a German accent, and if you have to listen to somebody who has a German accent, that can be painful. So one of the things I thought was, let's make this course a little bit more entertaining, and therefore I introduced a kind of Jarvis voice-over, and I used a quite famous voice from an economist in London. We trained it, the students learn how to do this, and then I have this interaction between this Jarvis and me, making the course more interactive. So that's a first usage of AI: it's for entertainment purposes. And then the next use is that I cloned my likeness as a person. So there is suddenly somebody who looks like Lutz. And again, the question is: why would you do so? Right? I mean, I can see myself day in, day out. Why do I need that?

And also there is a use case, because I don't need to travel to sit in a studio to make the course, so I can update it and the course is kept up to date. I keep updating the course from the comfort of my office here in Silicon Valley. And then the third one is that I use the AI to serve my knowledge. Again, why would I do so? There is a business need, and the business need in this case is to help students to code, to help students get access to all the content which we have created.

Now, why do I tell this story? We talked about regulation, we talked about medical devices, and we talked about healthcare. I've spent quite a long time in healthcare, and as the audience probably knows, everybody says, wow, this LLM revolution will change healthcare. And I went on the record quite often saying: probably not. Because what it will do is offer a new interface, an interface like it does for my students: an interface to access content more easily, an interface to create media more easily, an interface to be more engaging, as interfaces are meant to be. So here we have regulation. We know that regulation has rules, it has a structure, it has data. And the question to the group is how such new technology can actually help make access to complicated matter easier. And it can.

So I knew, Marcus, all the questions you were going to ask me, so I put them in and said: hey, tell me, what should I say? What does Lutz like to talk about? And obviously, looking at this, we get the typical themes of enhanced decision making, bias detection and easing repetitive tasks. As Martin King said so nicely, the task is to keep the product on the market, so there is constant regulation and there are constant changes, over and over. So the second question is: can I create a tool set that works like Martin and is not as annoying as ChatGPT was earlier? And the answer is yes. And this is an important one: I don't want to have a copy of Martin, because I want Martin. Martin will be supercharged by a small tool at his side that surfaces the points he needs, and he knows how to evaluate which ones to use and which ones not to use.

Marcus Emne: Wow. That was broad, and it almost feels like science fiction. But as we said yesterday, the future is already here, it's just a little bit unevenly distributed. And Lutz, you're probably the one that is the furthest ahead when it comes to being in the future, where there are so many applications. What do you guys say? Any responses or thoughts or immediate ideas?

Martin King: I have a really simple example, which anybody on this webinar can try. And that is when someone asks you a question, and quite often, I get this on LinkedIn a lot, people come with a question and I really don't understand what they're asking me. GPT is really good at helping me understand what they're asking me. I might need to ask it 2 or 3 times to get an answer that I fully understand, so it's a very good assistant in that respect. It lets you fully understand the question, because only if you fully understand the question can you give an answer. And this is a key point when you're using GPT: if you don't give enough information in, if you don't give enough context (what am I talking about? why am I asking this question? what is my specific question?), and a lot of detail, you will get garbage back, guaranteed.
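Martin's point about context is easy to make concrete. Below is a minimal sketch, assuming the OpenAI Python client (v1 style); any chat-completion API works the same way. The model name, the helper functions and the device example are illustrative assumptions, not something from the webinar.

```python
# Minimal sketch of context-rich prompting. Assumes the OpenAI Python client
# (v1 style); the model name and helpers are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def call_llm(prompt: str) -> str:
    """Thin wrapper over a chat-completion API; reused in later sketches."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # pick whatever model you actually use
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def build_prompt(question: str, background: str, role: str, constraints: str) -> str:
    """Wrap a bare question in the context Martin describes: what am I
    talking about, why am I asking, and what exactly do I need back."""
    return (
        f"You are assisting a {role}.\n"
        f"Background: {background}\n"
        f"Constraints: {constraints}\n"
        f"Question: {question}"
    )

# A bare "Is my device Class II?" invites garbage back; this does not:
rich = build_prompt(
    question="Is my device likely Class II under the FDA framework?",
    background="Non-invasive, home-use blood pressure monitor; no novel technology.",
    role="regulatory affairs specialist preparing a US submission",
    constraints="Name the product code and regulation number you rely on.",
)
print(call_llm(rich))
```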

Lutz Finger: No, this is actually funny, right? So all of us, I would claim all of us, have used ChatGPT or any of the other nice tools out there. ChatGPT is a transformer model, and we have this in Gemini, we have this in Claude, we have this in OpenAI; it's a transformer. And all of us have used some form of LLM to understand what the heck somebody wants from us. And all of us use transformer models the other way too: we spit out an answer and then we say, write it nicely, or write it with more words, or help us say this without being too aggressive, or whatever, and it writes it back. Now, the fun thing is, that answer is then transferred over the SMTP mail protocol to somebody's computer, and they copy paste it into yet another large language model, ChatGPT, and ask: what the heck did Martin King mean? So in the end we have Martin using ChatGPT to write his answer, and the other person trying to understand Martin's answer. And we are just utilizing an interface between both sides.

Yujan Shrestha: This is kind of a translation layer, where perhaps Martin speaks a dialect of English that is a lot smarter than the one I use, but my GPT can interpret what comes in into something that I understand. It uses my personal knowledge network to translate someone else's knowledge network into mine. That really makes a lot of sense to me. And that's, I think, how ChatGPT, or LLMs in general, could be used to bridge the gap between different people.

Lutz Finger: Just to jump in, because the idea of translating something which is hard to translate, that's exactly what we're talking about in this panel.

Lutz Finger: Who in the audience has read, as evening reading before they fall asleep, the regulatory framework for Rx or medical devices? Nobody in their right mind will read this text, because it's not so nice to read. So here we have it: medical discussions follow rule sets, regulation follows rule sets, and they are not nice to read. This is very similar to our colleague who just asked us something and we have no freaking clue what they mean, and therefore we ask ChatGPT or any other large language model. And now we have the same for medications, or medical devices, or whatever you have.

Yujan Shrestha: Yeah. All right.

Marcus Emne: And you know, interpreting, or communication, that's a really big thing, because that's one of the things that causes the most problems internally. The question at the beginning here was regarding your current experiences or current use of AI, and where it lands is that one thing, I think, that not just the panel here, or me, but most people do, is use pre-trained generative models for interpreting communications or interpreting text, which is great, and that opens up a lot of opportunities, I would say. The other thing, if we narrow in on regulatory updates: as we stated at the beginning of the webinar, it's all about getting your product on the market and keeping it on the market without too much hassle. Now, if we take AI and just focus on this regulatory compliance part, on ensuring compliance efficiently, what do you think AI would do best in that aspect, as of today?

Martin King: Help me find a predicate device, right? A lot of stuff in regulatory work is a judgment call, it's an opinion, right? You give an opinion based on the information that you gather, and it's all based on context. That's where a GPT tool is very powerful. If we consider the 510(k) database, that's great; we can look at all the FDA devices. But which one is the predicate device for my product? The FDA gives you guidance on this, great, you can read all this guidance, and you're still going to have to go through each one and find something that's closest to your product. And this is where ChatGPT can help: it can point you to the information. But the key point is that the information you receive must be verified. So you will be looking at it: is what it has chosen as my predicate device really a predicate device? Does it tick all the boxes? You're verifying that data; you're not just blindly using the tool and saying, oh, GPT told me, therefore it must be right. And I think that's really important. The most powerful places to use these tools today are where you can verify the output on sight: that makes sense, that meets my expectation.

Yujan Shrestha: I think that's a great point, Martin. I think the root of why those particular use cases are great for LLMs is that the context is readily available, and with the context readily available, tools like retrieval augmented generation can go and search that database and load the proper context into your prompt. And then, more importantly, you can verify that the outputs of the LLM make sense. Where it doesn't work is where I think a lot of people may be trying to use something like GPT to help with an FDA response or something like that.

There's a lot of hidden information, and it's almost specific to a particular reviewer, very project-specific context that you can't capture in a guidance document, which is a lot more general. In those cases, I don't think GPT can be used to extrapolate the intelligence built up over the years of what FDA will say on this, or what's the best way to position this response. At least, I haven't been able to properly do that, even with the proper context of guidance documents, ISO standards, whatever, built into it. That relevant knowledge, that information, just isn't there for the LLM to use, nor is it there in the pre-training data. You can look all over Reddit; you're not going to find that data. So it's probably not built into the GPT LLM weights. So if your usage of LLMs crosses that threshold, where it's not just retrieving information that's readily present, I think that's where you get yourself into problems. The other analogy I like to use, for anyone that does any sort of 3D printing: there's a certain type of print you can do called a bridge print, where you have two posts and the printer can bridge between the two, and the gap can be pretty wide. But what the printer can't do is extrapolate past that, where you don't have a point to hang on to for reference. So LLMs and related technologies can help you bridge the gap between two posts, strong knowledge points that you have personally. They can help you bridge that, but don't use them to extrapolate somewhere you just don't know; that's usually a recipe for trouble, because then you don't have any way to verify that bit of information. Some of the tools that I use are similar, in that they're used for knowledge retrieval. It's kind of a fuzzy mapping, a Google-plus kind of thing. There are tools like Undermind, which can search the academic literature for specific topics, and tools like Perplexity that do the same for the internet.
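The retrieval-augmented pattern Yujan describes can be sketched in a few lines. This assumes the `call_llm` wrapper from the earlier sketch plus an embedding endpoint; the passages would be, say, 510(k) summaries, and the instruction to answer only from the retrieved context is what keeps the output verifiable.

```python
# Minimal retrieval-augmented generation (RAG) sketch. `call_llm` is the
# chat wrapper from the earlier sketch; the embedding model is an assumption.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

def top_k(query: str, passages: list[str], k: int = 3) -> list[str]:
    """Rank stored passages (e.g. 510(k) summaries) by cosine similarity."""
    q = embed(query)
    def sim(p: str) -> float:
        v = embed(p)  # in practice, embed once and store, not per query
        return float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
    return sorted(passages, key=sim, reverse=True)[:k]

def answer_from_context(query: str, passages: list[str]) -> str:
    context = "\n---\n".join(top_k(query, passages))
    return call_llm(
        "Answer ONLY from the context below; reply 'not found' otherwise.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )  # the retrieved passages are the paper trail a human verifies against
```

Because the answer can only come from the retrieved passages, a reviewer can check it the way Martin described: against the sources, not against the model's word.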

Challenges in AI Implementation 🔗

Yujan Shrestha: There's also a tool called consensus.ai. And I would recommend most companies start building up a knowledge base. Again, context is king. If you have a knowledge base internally at your company where you put all of your best practices, as a result, going back to what Lutz was saying, you essentially have a clone of your thinking, but also a clone of your institution's thinking. You have that now, and in the future I think AI will be able to pull from that and properly represent your company's mission, values and best practices in those prompts. But you have to start now. The value of that data now may not be that great, but as AI improves, I think the value of that kind of data will be a lot more powerful. But you have to start somewhere. That's all I have to say for now.
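Starting the knowledge base Yujan recommends does not require much. A minimal sketch, reusing the hypothetical `embed` helper from the RAG sketch; the file format and the example entry are illustrative assumptions.

```python
# Minimal internal knowledge base: capture best practices now so future
# tools can retrieve institutional context. `embed` is the helper from the
# RAG sketch; the JSONL file stands in for a real vector store.
import json
import time

KB_PATH = "company_knowledge_base.jsonl"

def add_entry(text: str, author: str, tags: list[str]) -> None:
    """Append one piece of institutional knowledge, embedded for retrieval."""
    record = {
        "text": text,
        "author": author,              # whose thinking this captures
        "tags": tags,
        "added": time.time(),
        "embedding": [float(x) for x in embed(text)],
    }
    with open(KB_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")

add_entry(
    "Verify the predicate's product code against the current classification "
    "regulation before drafting the substantial equivalence comparison.",
    author="regulatory-team",
    tags=["510k", "predicate-selection"],
)
```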

Martin King: I had an experience a few years ago where I was building out risk management for a prefilled syringe product, and the drug product had been on the market for several years. And I wanted to use real world evidence; real world evidence is a big, big thing right now. Well, if you're taking a new prefilled syringe to replace an existing product on the market, there's nothing like ten years of data from the market. You try to go and find the pharmacovigilance people, who have three databases of information, and then you try to bring that information together to make sense in the new risk analysis that you're doing. I wish I'd had a tool like that, because it would have made my life much easier. So there is already a lot of contextual data out there; people just have to start applying it and using it. But one of the issues is: how can I use that and keep it safe and secure? Because if I'm uploading it to the cloud, do I really want my confidential data about all the customer complaints and everything else going outside of my company?

Harsh Thakkar: The point you just made is really interesting, and it's something that I've seen and heard from other clients in the life sciences industry. If you look at a life science company, you have different departments, right? You have R&D, biostats, HR, marketing, IT, quality, reg affairs, and then you have the management level: CEO, CTO, whatever. These are all people. They are here today; if the company gets acquired two years from now, half of them are gone, or they leave the job and go somewhere else. The problem is, when they go, the knowledge in their brains also goes with them, right? So the point, Yujan, that you're making is really amazing, because if, let's say, I am Novartis and I build a quote-unquote Novartis AI where everything I do gets captured; and obviously you have to have safeguards to protect that information, you can decide whether you want to go on the cloud, or put threat assessments or vulnerability assessments in place so that you keep all that information secure; it's a big step, and companies are not ready to do that yet. But that is where you can get the best information, because, like Yujan was saying, right now you can go from one to one. If you want to do all these other things, you first need to build the foundation where all your data is in one place, and then you can start asking these very specific questions, because now the AI has more context about what has been going on. That doesn't mean you cannot still ask a question today: you can still ask ChatGPT or some other AI tool a question if you are an employee of Novartis, and you will get an answer, but that answer is missing a lot of contextual parts, which you would get if you had built the model first with your data and gated it so that only your employees can use it. Think of it like a Google search that just has your company's data, right? I don't know how to best explain it, but that's kind of what it's all about: knowledge management in one space. And when you have that, then when whoever is the head of R&D or the head of IT leaves tomorrow, the next person comes in and the context is in the system. The context is not lost with the person that just left the company, right?

Lutz Finger: I actually think, Harsh, that the way you just said it is the magic word. Think about it as Google search, right? After Google came about, after search tools came about (Google wasn't the first search tool, right?), we suddenly had enterprise search. Now, where I scratch my head a little bit is that everybody gets completely, insanely happy about these large language tools: they will solve everything, and they will take all our jobs away. Yeah, but you had Google before. So if I have a question about regulatory affairs, why don't I just google it? Well, it doesn't have my company data. Okay, well, your company data probably is in, let's say, Microsoft SharePoint, for whatever it's worth, right? Why don't you search for it? So ChatGPT is essentially, let's pull it out and call it, a glorified search. And as a glorified search it can bring out information, and it helps you as a user to select one piece of information, and one only. But that doesn't mean that this is the right information. And as Martin described, the human element in deciding which one of those search results is actually helpful, is useful, is an important one. Yes, the FDA has a long list of applications that got approved, and now you need to search for which ones you want to draw a comparison to. But it's a search, and there is a decision point you need to make. I'm making this clear because discussions like these go through cycles. Everybody's like: wow, this is so amazing, wow, we take all the data and we can search everything, and, wow, what's our job again? Oh, we're all getting replaced. No, not at all. We had search before. Now search is a little bit easier, but it can be misleading. Let me do one example with the audience and you. How would you complete the sentence: life is like a box of…

Harsh Thakkar: Extra dark chocolate.

Lutz Finger: Chocolate? Yes. Why would you say chocolate, right? You would say chocolate because it's the meme from Forrest Gump: life is like a box of chocolates. But honestly, we could just as well say life is like a box full of surprises. That would be very neat as well, and probably a better and more intriguing starter for a discussion about life than such an old Forrest Gump movie. Now, if you ask ChatGPT to complete this sentence, or if you ask Google Search to complete the sentence, you will get chocolate, right? And if you were smart, you said extra dark chocolate, to give it a little bit of a twist. The point is, if I'm asking about my regulatory framework, my medical device, and I get an average answer, then this might have been just a better Google search, and it might not fit my business, it might not fit my need. So we need to be able to steer. And it's very easy for a data scientist like me to say it's the data that is stupid: you had the wrong data, right? You had all the movie data, and therefore you said chocolate; or you had all the wrong FDA cases in it, or, Novartis, your data set is skewed, and that's the reason why you got the wrong answer. Now, we need to have the ability to steer, and that is essentially what makes the tooling around large language models so important.
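Lutz's chocolate example is exactly how a frequency-driven completion engine behaves. Here is a toy, self-contained illustration (the counts are invented): greedy decoding always returns the most frequent continuation in the data, that is, the average answer, which is why steering, better data or retrieval matters when the average does not fit your business.

```python
# Toy illustration: an autocompletion engine returns the most frequent
# continuation in its training data -- the "average" answer. Counts invented.
from collections import Counter

continuations = Counter({
    "chocolates": 9_000,          # Forrest Gump dominates the corpus
    "surprises": 300,
    "regulatory deadlines": 2,
})

def complete_greedy() -> str:
    """Greedy decoding: always pick the single most likely continuation."""
    return continuations.most_common(1)[0][0]

print("Life is like a box of", complete_greedy())  # -> chocolates, every time
```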

Yujan Shrestha: For me, to put a software engineer kind of twist on it, and just to build on what you said, Lutz: it is a search engine, but we haven't really talked about the cognition engine, which it also is. I think we've been talking about the search engine because that's currently the most useful use case of LLMs, because it lends itself to fewer hallucinations and the other kinds of problems LLMs have right now. But as a software engineer, I think of ways to code things to make them fully automatic, and pre-LLMs, the way to do that was to use rule-based logic, right? Where you say: if this, then that; pattern-match this, then that. I think LLMs now give us an opportunity to go one step further, where there are certain tasks that are just really difficult to describe in a way that you could write a program for, but these mini tasks, from using ChatGPT and related tools, you get a sense of which of these cognitive tasks you can really delegate. And that's how I think about it: ChatGPT is kind of like another colleague, where you have to learn their limitations, you have to learn what they're good at and what they're not so great at. Then you can come up with these micro tasks that can be delegated. And when you start building on top of these micro cognitive tasks, you can chain together some very complicated workflows that just weren't possible before. So I just want to add that to the conversation: search works great, but there is cognition too.
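A minimal sketch of the micro-task chaining Yujan describes, again using the hypothetical `call_llm` wrapper from the first sketch. Each step is a small, individually verifiable cognitive task; the chain is the workflow, the prompts and categories are illustrative, and a human still reviews the result.

```python
# Chaining delegatable micro-tasks into a workflow. `call_llm` is the chat
# wrapper from the earlier sketch; prompts and categories are illustrative.

def classify_complaint(text: str) -> str:
    return call_llm(
        "Classify this complaint in one word, 'device' or 'usability':\n" + text
    )

def extract_harm(text: str) -> str:
    return call_llm(
        "Quote the sentence describing patient harm, or reply 'none':\n" + text
    )

def draft_summary(category: str, harm: str) -> str:
    return call_llm(
        f"Draft a two-sentence vigilance summary. Category: {category}. "
        f"Reported harm: {harm}"
    )

def triage(complaint: str) -> str:
    category = classify_complaint(complaint)   # micro-task 1
    harm = extract_harm(complaint)             # micro-task 2
    return draft_summary(category, harm)       # micro-task 3; a human reviews
```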

Lutz Finger: Well, yeah, so let me jump in there with two thoughts on it. If we talk about a cognition engine, the idea of cognition is at your core, right? Meaning a machine learning algorithm that can be used for transactional performance steps. The idea of a cognition engine comes from a very human concept, and we know that the architectural design of large language models is definitely not a cognition engine. They are autoregressive completion engines. So if I say "life is like…", we could complete it in any way; the moment I say "box", I box myself into one direction. So I have an exponential error function for this architecture. So don't get too excited about AI being cognition. We have gone through the AI phases; there were different hype cycles, and we are in yet another hype cycle of AI, and people get excited because they believe there is cognition. From my point of view, no, there isn't. It's an interface, and it's an autocompletion engine. And that autocompletion engine can be used, obviously, for time series, and then it becomes extremely helpful. But the term cognition gets me like: ooh, I like it, but don't do it. However, you can do small tasks with it, right? Do task x, do task y. And for those tasks, as you described very neatly, you have to break it down. I can tell you this from my course, actually; it's very funny. In the course I introduce very early on the concept of a copilot. And the copilot means my students don't need to be able to code. I go into different industry cases; I want people to be able to use AI in media, in healthcare, in legal, in whatever it is. I focus very much on use cases, and therefore I don't want them to have to be engineers.

And in order to enable them to code, to do analytics, to do all of this, I introduce a copilot very early on. So in the end they actually build deep learning models, they create LoRAs, the fine-tuned models; they do all this freaky stuff, or complicated stuff, with the help of a tool. Now, I introduce this very early in the course, and then they get questions, simple questions initially, and I can see there is a learning curve for the students to actually use this new colleague of theirs. Because humans tend to think: oh, okay, so I have this copilot, it's a cognitive thing, so what I need to do is copy paste my homework, give it to the copilot, and say: answer. And that goes terribly wrong, because the student hasn't understood that this thing doesn't have a cognitive function. This thing doesn't understand that this is your homework, which it is supposed to solve for you. So you need to learn to break it down into smaller tasks, and those smaller tasks can be done. Now, since I'm a venture partner for Cherry, I see a lot of startups at the moment that go around and take a human workflow, break it down into smaller tasks, put an LLM around it, and say: I have an amazing tool. And I always wonder to myself: okay, how likely is that weird workflow, which some human came up with, to last in a world where LLMs exist? So that is then a question, how long that will last. But the point is: I don't think there is cognition in LLMs, but I do think you can break workflows down. And because we can, those workflows will change.

Yujan Shrestha: Yeah, let me just add one thing to that. I think language is a tool, and language is also imprecise, so you can't really convey the exact meaning. I wish there was a way to teleport the thought vector, or whatever it is, in our brains and beam it across to everybody. I've been through these conversations many times on LinkedIn about "hey, do LLMs reason, is it cognition", and I think that's maybe not the most productive conversation. I think language should be used to broadcast what we mean to as many people as possible, people who don't need to be machine learning scientists.

Therefore, if we say something like, oh, this is just a big autocompletion, an autoregression, which I totally agree it is, that's not really going to bring the other 99% of the population into the conversation. Which is why you'll probably hear me continue saying cognition, reasoning, all these hot-button terms; that's just because I'm trying to use language as a tool to include as many people as possible. That's just my personal take on it. I totally understand where you're coming from: it's not replicating human thought; the human brain is not the way LLMs work. But I would argue that, for the purpose of this conversation, that's not really the productive distinction.

Marcus Emne: It's the difference between how the technology truly works and how people interpret how it works as well.

Martin King: But I think there's a really key point in what you said, Yujan, and that is that in order for people to embrace technology, they need to be comfortable with it. And if they think that a copilot is cognitive, and that belief allows them to embrace and use the tool, then good for them. I mean, the key issue is: how do I use this tool in my daily work? How, and why, do I use it? I use it to become more efficient. I'm not just going to use AI because it's out there; I'm going to use it so I can do my job better.

Lutz Finger: Just jumping in here. If I sell you a car and I say: you know what, Martin, I want you to be comfortable with this car. This thing can fly, and it can swim, and it can do amazing things; it gets you from A to B, wherever you want to go, just so that you feel comfortable. Then the next thing you will do is drive into a river, thinking: hey, I thought it would get me across the river. It didn't, because it was a car. So when we tell people that they can be comfortable with some cognitive function of ChatGPT... Martin, you have had those folks who give you just a copy-pasted answer from GPT, because they thought: you asked me a question, Martin, I copy pasted the answer from ChatGPT. And you're like: well, I could have done that myself, and that's not the answer; the answer is a little bit more complicated, and actually requires what we would call your human brain. And they go: oh really? I thought ChatGPT was doing that for me. So, guys, we need to spell out what this thing is, because if we don't, we will keep having the discussion about, oh, ChatGPT is going to replace us. By the way, in the side chat I actually asked ChatGPT why it's so silent. It hasn't answered, just to tell you.

Marcus Emne: I think on this discussion we could talk for hours. But let's wrap it up and just say that it is important to understand how the technology works and what it can do, because when you understand that, that's when you can tame it, and when you can start to use it for efficiency, for improved quality and so forth. And I think we all agree on that. Time flies when you're having fun, so I'll fast forward through the questions, and I think there's one final question that we should address before we take on the over 20 questions that have come in so far. We've been talking about this more or less all the time, but let's focus and narrow down what we think. If we're looking to the future, a year from now, tops: what is your vision for a fully AI-empowered regulatory and quality affairs team, and how close are we to achieving this vision? Or, let me rephrase it: what is your vision of how an RAQA team will use and utilize AI in the near future, a year from now?

Yujan Shrestha: If you don't mind me using another engineering-type analogy: I think the way to think about this problem, about what is at the frontier of being fully automated by AI, is one proxy that I look at, which is how much of a feedback loop there is for the AI, how tight that feedback loop is, and how much of it can be automated. Something like coding, for example: that feedback loop is very tight. The AI can write some code, run it, take a screenshot, see how it performed, do it again, and do it again. That self-improving flywheel will move very quickly there. However, something like regulatory, where the FDA introduces, you know, a one-year time horizon into that feedback loop, gets a lot more difficult to automate. And additionally, until the FDA is using AI agents internally on their end, you have humans in the loop, so I think that's also going to be really difficult. So wherever there's a feedback loop where you can do something, check the output, and feed that back into the LLM to repeat this virtuous cycle, those are the tasks that will be at the frontier of being automated by AI. And you're always going to find tasks that don't fit that. I think AI will probably automate 50% of every job, rather than 50% of all jobs, if that makes sense. I think the same thing is going to happen with regulatory, software engineering, pretty much all the disciplines we're talking about, where these micro optimizations will help you do your job 50% of the time, or maybe do that part for you. But I don't think it's going to completely replace anyone; there's always going to be something where the feedback loop is just too long, or involves other humans in the loop.
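Yujan's tight-feedback-loop argument is easy to see in code. A minimal sketch, assuming the `call_llm` wrapper from earlier and ignoring the sandboxing a real system would need before executing generated code: coding is automatable exactly because the check ("does it run?") takes seconds, whereas the regulatory equivalent of the "run" step is a review cycle measured in months or years.

```python
# Generate-run-fix loop: automatable because the feedback is fast and
# automatic. `call_llm` is the earlier chat wrapper; no sandboxing shown,
# which a real system would absolutely need before running generated code.
import subprocess
import sys
import tempfile

def generate_and_check(task: str, max_rounds: int = 5) -> str:
    code = call_llm(f"Write a Python script that {task}. Output only code.")
    for _ in range(max_rounds):
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
        result = subprocess.run(
            [sys.executable, f.name], capture_output=True, text=True, timeout=30
        )
        if result.returncode == 0:   # tight feedback: seconds, not a year
            return code
        code = call_llm(f"Fix this code:\n{code}\nIt failed with:\n{result.stderr}")
    raise RuntimeError("did not converge; hand it to a human")
```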

Martin King: So if I want to set a KPI for the QA, or QARA, industry: 50% of regulatory submissions, whether you go to Europe or the U.S., are incomplete or incorrect. If we can just reduce that a little bit by using AI as an assistant in our normal everyday tasks, then that would be great. I don't think that will happen, because the human being always wants to take a shortcut. So even if it's going to cost the company a fortune, they will still do an incomplete submission because they want to be a little bit quicker, and in the end they'll be two years slower. That is life, and that is how people are. But I think companies would do well to document in their documentation that they use AI. For instance, if we look at clinical evaluations and post-market surveillance documents, you are supposed to disclose the methods you use. So why not put in: I used ChatGPT to filter a thousand cases. Why not put it in there? There's no shame in that. And I think if people start to embrace AI, we will have achieved a lot, and I mean embrace it professionally: not just try it, but start to use it and say, yes, I use this to filter adverse events in the market to find out if any of them affect my product; I use it to find predicate devices. Just really simple tasks will be a huge step forward, and if we achieve that in one year, we'll be doing very well.
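Martin's "filter a thousand cases and disclose it" suggestion could look like the sketch below, using the hypothetical `call_llm` wrapper from earlier. The point is twofold: fail toward inclusion so a human reviews anything uncertain, and keep a written description of the method so it can be disclosed in the post-market surveillance report.

```python
# LLM pre-filter for adverse event records, with the method recorded so it
# can be disclosed in the PMS report. `call_llm` is the earlier chat wrapper.

DISCLOSED_METHOD = (
    "Records were pre-screened with an LLM prompt asking whether the "
    "narrative could involve the product; every 'yes' or 'unsure' record "
    "was reviewed manually."
)

def possibly_relevant(narrative: str, product: str) -> bool:
    verdict = call_llm(
        f"Answer exactly yes, no, or unsure: could this adverse event "
        f"narrative involve {product}?\n{narrative}"
    ).strip().lower()
    return verdict != "no"   # fail toward inclusion: humans see the rest

def prefilter(narratives: list[str], product: str) -> list[str]:
    return [n for n in narratives if possibly_relevant(n, product)]
```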

Marcus Emne: Yeah, great. I think we should ask ChatGPT as well; that panelist has been silent. ChatGPT, could you tell me a little bit about how regulatory affairs and quality assurance teams should prepare for the future?

ChatGPT: Looking ahead, the vision for a fully AI-powered regulatory affairs and quality assurance team involves the seamless integration of AI across all regulatory workflows, from compliance monitoring to risk management. Teams will be able to rely on AI to continuously update them on regulatory changes, automate routine tasks, and provide advanced insights. However, achieving this will require advances in AI technology, regulatory frameworks, and a culture shift toward embracing AI as a trusted partner in compliance. Thank you.

Lutz Finger: Guys, this is essentially saying nothing. So if any of you out there want to build a future company, would you trust the general knowledge about all average things out there? This is like buying your medical device based on Amazon reviews: people said it's a good thing. Guys, if we want to build a big company, to make something special, do not go for the general average knowledge.

Future Vision for AI in Regulatory Affairs 🔗

Lutz Finger: So, fun fact: I use ChatGPT a lot in order to figure out topics for my course and so on. Try asking ChatGPT about the risks of ChatGPT. Oh, you will get a lot of brainwashed, whitewashed, average answers. This is not it. There is research out there, there are top-of-the-line discussions. ChatGPT did not talk about the fact that this will change the workflow, and that therefore certain parts of the agentic workflow will get automated and other parts won't. So ChatGPT just showed that it knows the average. Now, if any of you want to be average, go for ChatGPT. If any of you want to have good regulatory support, maybe use a different data set, maybe your own brain cells, maybe the agentic workflow. And that is what the future will look like.

Marcus Emne: That's it again: if you ask an average question, you will receive an average answer. ChatGPT is not very good with open questions, so you have to be extremely specific in your prompt, and you should train the model with your data set before you use it. I think that's one of the key things here that we keep learning: ChatGPT, or AI models in general, can be a great help, but you can't use them just randomly, openly, because then you will receive average, dull, boring, and in the worst case incorrect, answers to your question.

Now, talking about questions: we have received many, and it's time for us to deal with them. There are so many great questions, funny questions, here. Let's see, I'm going to open them up, and while I do so, while I read through them, could we perhaps have a poll for a few seconds? The second poll, please. Okay, here it comes. I will give you a few seconds before we start with the first question that arrived. And dear guests, we will not really have time to answer all of them; we have 25 more minutes to go. So those questions that we don't address live today will come in the follow-up package of information, with the recording and all the questions and answers in there as well. And speaking of what you will receive tomorrow: also the results from the polls, because we're going to try to analyze them and do something fun, or interesting, with them. We're going to ask ChatGPT what it thinks and get an average reply. No, we're not going to do that. Okay, great. So, the first question from the audience: what are the main challenges organizations face when implementing AI for regulatory surveillance, and how can they overcome them? From Doctor GDP.

Harsh Thakkar: I want to pick up on an interesting point Martin made, which is that AI is not going to change human behavior and human psychology; that is hardwired. And we're not at a stage where we can just give the keys of the kingdom to AI and say: yeah, you know, I'm going to be in Hawaii, just tell me when you've sent my submission to the FDA and when it's approved, and I'll be back, and my stock will be up, and I'll sell the company and move on. That's not happening. So until that happens... and I'm curious to know the answer to the poll about how many people are using AI already or are starting next year; that's a simple but very interesting question. The biggest challenge to adopting AI is understanding whether your organization or people have the desire to do it, because you can buy the fanciest AI tools, and they're not going anywhere if your employees and your people have a mental block: no, I can't do this, I like the old way of doing stuff, this is too new for me. To me, that's the biggest hurdle that many companies are facing. We've done projects with companies where, you know, we'll do some data extraction, we'll do stuff like that, but at some point, when those projects go up to the management reviews, we get this pushback: hey, we can't scale this, we can't make people do this cool thing that you did, because the knowledge, or the desire to do it in a different way... people are not ready for it, right? So I think that's one of the biggest challenges. And how do we change that? I mean, Lutz is a great example: he has tons of content on this, courses. That's one way: let's have these kinds of discussions so people can ask questions and get insights. None of us are AI experts here; we're just sharing from our backgrounds of experience. But once we pass that level of comfort, I think that's when we can actually start.

Until then, there are still examples of low-risk use cases that companies are experimenting with, just to sort of test the water as to whether their employees or teams are comfortable with using AI: meeting notes, reading audit reports, or querying your QMS data. These are all low-level things, right? A lot of companies are doing that. But to go to the next level, I think the employees have to be ready to sort of embrace AI, like Martin said.

Marcus Emne: And do you foresee, or what do you know about, the current average situation? Is it a bottom-up process, where people start using AI here and there? Or have you guys stumbled upon a lot of cases where companies have started with AI policies or, you know, a code of conduct or documentation saying, this is how we're going to use AI, so that it's driven from executives or management?

Harsh Thakkar: I'll just give a short answer and let somebody else talk on this. I have seen companies that have these kinds of processes in place, but they're more of, I want to say, a documentation exercise to please an auditor or regulatory inspector, and not about creating substance around what to do with AI. Or maybe it's a clear wall to say: hey, we're blocking everything, you can't use it. Just like many companies don't allow you to use Chrome extensions. What's so great about that? But I know, I've worked in companies where you couldn't download a Google Chrome extension, because it can take your data and do stuff with it. So I have seen companies in life sciences that have these policies, but I don't have any substantial data to say how soon that maturity will come. I can guarantee you, though, it's not coming in one year, as you asked earlier. So I think it's going to be a little bit longer than a year.

Martin King: Yeah. My recommendation to any head of quality on this webinar is: if you don't have an AI policy, make one, and make one so that the company at least has a corporate account, rather than everybody using their cell phones to do company business on AI. Really, really basic stuff, right? Most companies have policies on the private use of company property, things like that, or on cloud storage. You should have one for AI, and it should define what you can and what you cannot use it for. And it should provide a level of transparency that says: if you use it, and using it will result in the generation of a document, declare it; it's okay. The AI policy should say what is okay and what is not okay, and the AI policy can change. So what you put in it this year might not be what is in it next year. But at least have one.

Yujan Shrestha: Okay, so I totally agree with both Martin and Harsh. Just to add one more facet to this, which I think is important for leaders in any sort of medtech company, or any company using AI: I think it does have to start top-down, and I think it's really important to emphasize the mission and values behind what you're doing. Each time a new version of ChatGPT comes out, I personally have some sort of existential crisis: oh crap, this task that I used to do, that I used to attach my personal self-worth to, now seems to be automated. But the way to get around that is to look at the bigger vision, right? What are we really doing it for? We're really doing this for patients, to save lives. That's our mission, that's our value statement. And I think that needs to be really instilled inside the organization: we're not ascribing our value to doing manual device searches, even though that's what we've done for a lot of years. Our mission and values are to help patients, and whatever tools we can use to make ourselves better, that's what we should use. I think that would help adoption from the bottom up, but it needs to come from the top down.

Marcus Emne: Do you foresee that next to the CIO there will be a CAIO?

Yujan Shrestha: I've actually already seen that title a couple of times. I'm not sure if it was CAIO exactly, but I have seen a C-suite chief AI officer or something. Lutz, go ahead.

Lutz Finger: We had the same discussion about data, right? AI is a hammer, and we are now looking for nails, but we do not get a Chief Hammer Officer, right? We have product, we have marketing, we have sales, and we have those core functions. And in healthcare we have a regulatory function, and the regulatory function will not go away.

The regulatory function has a certain reason to exist, right? So the regulatory function will probably use tools, but it's not a CAIO that tells the regulatory function, “Oh, I'm now the big chief AI officer and I will tell you to use x, y, z.” I think what we will see is people using the tools effectively in their workflows.

So one of the companies Cherry Ventures invested in is Flinn.AI, and that's a medtech compliance copilot, right? I coach them; it's a very cool team, and Haseeb Samad is one of the guys I'm working with very closely. And I can see how we are thinking about using AI to integrate it into the various workflows. So the need will not change, the need for regulatory compliance won't change, and for that we will maybe have a regulatory officer somewhere in the company.

Audience Questions and Responses 🔗

Lutz Fingers: What will change is how you supercharge the people working on it, how you make them more effective, and what checks and balances you need to have in place in order to do so. So when we talk about how we are using AI, I would not say, oh, come up with a rule set. I would say first figure out how you can make your life better.

I was recently in Berlin and spoke on a panel with the EU government and parts of the German administration, and we talked about regulation. It's the same thing: do not start regulating something if you don't really know how to use it. A company would be very wise to first think about how it can use tools like Flinn AI or others to supercharge its employees, then define, improve, and structure the steps of that workflow, and then put guardrails around it, so that we don't have people just randomly asking ChatGPT and thinking their job is now to copy-paste into ChatGPT and out of ChatGPT. No, it's not.

Marcus Emne: So thank you very much. Next question, and let's hop onto something that I think a lot of people have experienced in their early days of using GPT models. I'm quoting now: “A big problem with using AI in regulatory intelligence is AI hallucination, or fictional answers. What kind of difficulties do you see in achieving correctness with regulatory AI experts?”

Lutz Fingers: Are we talking about a hallucination from the experts or from the AI? By the way, it ain't a different thing.

Marcus Emne: True that. No, but I mean the fictional replies that AI gives sometimes. It's not an average answer; sometimes it's pure fiction. It talks about a regulatory body that does not exist in a country, for example, and refers to regulation that does not exist. It simply made it all up. How do we deal with this?

Lutz Fingers: It's what you get when you have an autocompletion thing, right? An autoencoder. Because the AI doesn't understand what the regulatory framework does, it just completes the next sentence. Hallucination of models, that's the technical term for it, right: you look for the next best word. And sometimes, as we said, the error compounds exponentially. So, “life is like a box of...”: now you need to say “chocolates,” because we are already down that path. Meaning you have the same problem with anything. And then people say, oh my God, it's hallucinating. It's a feature. It's part of the tool, right? If you use a hammer and a nail and you hit your thumb with the hammer, don't complain that the hammer is hard; it's meant to be hard. So it's a feature. Now, what can you do? If you're using a hammer, wear gloves, wear protective gear. If you're using AI, wear protective gear. By now, I would say, we have decently figured out how to work with hallucination. You take the answer from ChatGPT, and in the background you check: does this really exist? Does this make sense? It's a technical answer, not super interesting, right, but there are many ways to check. And that's the reason I joked about the employee having hallucinations. I mean, CEOs sometimes hallucinate too, right? We don't call it that, but we all know who we are talking about, right?

So we do the same thing: we double-check. We ask, does it make sense? Or, what is that person smoking, right? We are in the same structure: autocompletion sometimes leads us astray, we can double-check, and we have technically feasible solutions to work with this. And good companies do this. It's funny, because Flinn AI, the company which does the medtech compliance copilot, has built a whole structure around avoiding hallucination. But it's technically very straightforward.
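
To make Lutz's “check in the background” idea concrete, here is a minimal sketch of one way such a verification layer could work: extract the regulations a model answer cites and look each one up in a curated allowlist. The regex, the allowlist, and the helper names are illustrative assumptions, not a description of Flinn.AI or any other product.

```python
import re

# Illustrative allowlist of real frameworks; a production system would
# query an authoritative regulatory database instead (assumption).
KNOWN_FRAMEWORKS = {
    "EU MDR 2017/745",
    "EU IVDR 2017/746",
    "21 CFR Part 820",
    "ISO 13485",
}

def extract_citations(answer: str) -> list[str]:
    """Pull regulation-like references out of a model answer (pattern is assumed)."""
    pattern = r"EU (?:MDR|IVDR) \d{4}/\d{3}|21 CFR Part \d+|ISO \d{4,5}"
    return re.findall(pattern, answer)

def verify_citations(answer: str) -> dict[str, bool]:
    """Flag each extracted citation as known or potentially hallucinated."""
    return {c: c in KNOWN_FRAMEWORKS for c in extract_citations(answer)}

answer = "Per EU MDR 2017/745 and ISO 99999, the manufacturer must maintain a QMS."
for citation, ok in verify_citations(answer).items():
    print(f"{citation}: {'verified' if ok else 'NOT FOUND, possible hallucination'}")
```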

Yujan Shrestha: Let me just autocomplete what you said; we're kind of on the same pathway there. I think prompt engineering techniques, like asking the AI to make sure it cites its sources, really help, and so does double-checking those sources. And there are tuning parameters, like the model's temperature, that you can set to a lower value, or at least make sure it's not set too high; that can also help. But I have also noticed, just with the recent models, that the rate of hallucination has gone down and the overall ability to follow instructions has improved quite a bit. So I wouldn't say it's a solved problem, but definitely provide context and ensure that the AI produces references and citations, even page numbers and specific quotes; then you can go and easily verify that information.
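
As a rough illustration of the knobs Yujan mentions, here is a minimal sketch using the OpenAI Python SDK: a lowered temperature plus a system prompt that demands page numbers and verbatim quotes. The model name and prompt wording are assumptions for illustration; it assumes the `openai` package is installed and an `OPENAI_API_KEY` is set.

```python
from openai import OpenAI  # assumes the `openai` package and an OPENAI_API_KEY env var

client = OpenAI()

# A lower temperature reduces sampling randomness, and the system prompt
# demands verifiable citations (page numbers, quotes) that a human can check.
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    temperature=0.2,
    messages=[
        {
            "role": "system",
            "content": (
                "You are a regulatory affairs assistant. For every claim, cite "
                "the source document, section, and page number, and include a "
                "short verbatim quote. If you cannot cite a real source, say so."
            ),
        },
        {
            "role": "user",
            "content": "Which standard covers quality management systems for medical devices?",
        },
    ],
)
print(response.choices[0].message.content)
```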

Martin King: I would argue that the challenge of hallucination in AI is a human problem. There are three pillars to any regulatory submission: context, argument, and evidence. AI can never provide you with the evidence; it can only provide you with an argument. So go and check the evidence, because that's the job of every quality and regulatory professional. Simple.

Harsh Thakkar: Yeah. And to add on to what Lutz said with the hammer analogy: if you give somebody a hammer and don't tell them what the hammer is capable of and in which situations they should use a hammer versus some other tool, like a screwdriver, they're not going to get the full value out of it. So that's what is happening. Hallucination is one example; there's bias, there are all these other drawbacks with AI. It's about understanding what it can do, what your end objective is if you're in reg affairs or quality, and where AI can and cannot help you. Because the human in the loop, for me, is always going to stay; I don't think that's going anywhere. So your job as a human in reg affairs and quality now becomes a little more interesting. Like Martin said, how do you find that evidence? How do you tell the AI to think differently by using something like chain of thought or some other technique, where you're pushing the AI to say, hey, tell me how you got to this, or, like Yujan said, asking for citations and so on. So yes, I do agree that it's the quality of the prompt, the quality of the back and forth, and the human in the loop that can minimize hallucinations on the way to the end objective.
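
And a minimal sketch of the chain-of-thought style prompting Harsh refers to, reusing the `client` from the sketch above; the scenario and prompt wording are invented here for illustration, not a validated template.

```python
# Chain-of-thought style prompt: push the model to show how it got to its
# answer so the human in the loop has reasoning steps to audit.
cot_prompt = (
    "Question: Does this device change require a new regulatory submission?\n"
    "Change: a firmware update that alters the alarm threshold logic.\n\n"
    "Think step by step before answering:\n"
    "1. List the criteria that apply to this kind of change, citing your source.\n"
    "2. Walk through how each criterion applies here.\n"
    "3. Only then state a conclusion, clearly marked as a draft for human review."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    temperature=0.2,
    messages=[{"role": "user", "content": cot_prompt}],
)
print(response.choices[0].message.content)
```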

Marcus Emne: Great. Now, believe it or not, we've almost been here for an hour and a half. We only have six minutes left, so it's time to wrap up. There are a lot of questions we haven't had time to answer, because many of these questions are rather big. And there are some really funny questions on this topic that we certainly should address, such as: how many of today's participants are AI avatars joining the webinar? We'll bring those to the table in a follow-up. We have five minutes left, and I want to give the panelists and experts a little time to think of one key takeaway, just a short phrase you would like to leave the audience with before we end. While you're thinking, we'll issue the final poll as well.

First, the poll, the third one, regarding the practical workshop. Maybe not; well, we'll see what happens. But if there's not going to be a poll, I can tell you one thing: there will be a survey sent out after the webinar, and I really hope you take one minute to respond to that survey. That's what I have to say. While the panelists are thinking, and I don't think they need that much thinking time, let's go around the table. Martin, what's your short phrase to leave the audience with? One key takeaway.

Martin King: Embrace AI in baby steps. Just take one thing. Work out how to do it well and use it.

Marcus Emne: Yujan, what's your key takeaway?

Yujan Shrestha: Keep the patient first to be able to turn AI from a threat into an opportunity.

Marcus Emne: Harsh?

Harsh Thakkar: Don't focus on AI. Focus on your data.

Marcus Emne: And finally Lutz.

Lutz Fingers: There is so much to say about it, right? Learn about AI; understand what it is. It's a hammer: it can help you with nails. It's a tool that helps you be more effective. It can help you save the lives of patients. It can help you make your company stronger. But you need to understand it. Do not believe it is everything and will replace everything. It is not a car that can drive in the water or fly. It's a tool, and you need to understand the abilities of that tool as well as its restrictions, where the tool doesn't work. Now, obviously you do this best by coming to my course, but beyond that, try to understand what AI is. Then you can embrace it, then you can regulate it, then you can add it in the places where it creates value and wealth for you and your company.

Marcus Emne: Yeah, great. I also have one final thing to say, besides thanking all of you. For those of you watching and listening now: we haven't had that many practical examples, or the time to try out AI models for practical use in regulatory affairs. We will soon have practical workshops where we will work with AI tools and data sets for regulatory and compliance purposes. So stay tuned for that; it will happen quite soon, and the dates will be announced very soon. And of course, training ourselves on the basics and understanding what AI is, that's crucial. ChatGPT, what do you say?

ChatGPT: Use AI with common sense.

Harsh Thakkar: That was a good one.

Marcus Emne: Yeah, that was the first time. Well, the thing with common sense is that it ain't that common. Sorry about that. Well, that's all. Thank you so much for joining us, and a special thanks to our experts, Martin, Yujan, Harsh, and Lutz. You've done a fantastic job. I learned so much today, and it opened up a lot of new ideas and thoughts around how we at Hoodin may make use of AI even more, and in a better way than we do today. Currently we mainly use it as an advisor and support, helping the user ask the right questions or interpret things with regard to their specific product: never a general, never an average context; we certainly avoid that all the time. Really, really great. So again, thank you so much, and who knows, we might do this again. With that said, we're closing down for today, and tomorrow you will receive some, you know, cliffhanger information regarding the polls and the questions that we didn't have time to answer today. I wish you all a splendid day or evening or morning, wherever you are in the world. And thank you so much.

Yujan Shrestha: Bye.

Martin King: Thank you.

ChatGPT: Bye.
