10x Coffee Talk: Image Processing Toolbelt

February 04, 2025

AI/ML, Software

5 Key Takeaways 🔗

  1. Histogram equalization improves contrast and tissue differentiation in medical imaging, aiding radiology reports and FDA submissions.
  2. Morphological operations are used for refining medical images, removing noise, enhancing contours, and radiation oncology treatment planning.
  3. Perceptually uniform color maps prevent visual artifacts and blind spots, with Viridis/Inferno preferred for PET scans and grayscale for CT/MRI.
  4. Techniques like histogram normalization and morphological operations improve deep learning model performance.
  5. Colorblind-friendly design and avoiding color-only indicators enhance usability and risk management in medical applications.

Participants 🔗

  1. Kris Huang - Senior Software Engineer (Presenter)
  2. Ethan Ulrich - Software Engineer AI/ML
  3. Reece Stevens - Director of Engineering
  4. Nicholas Chavez - Software Engineer
  5. Bimba Shrestha - Software Engineer
  6. Matt Hancock - Software Engineer
  7. Joshua Tzucker - Software Engineer AI/ML
  8. JP Centeno - Software Engineer AI/ML
  9. Mary Vater - Director of Regulatory Affairs

Transcript 🔗

Introduction to the Discussion 🔗

Kris Huang: So yeah, I've already given away most of what I'm going to talk about, I suppose, in the links. So, I guess it depends on how much everybody read. This is intended to be more of a discussion, so please feel free to ask questions or make comments. You know, suggestions, whatever. And if nothing else, I'll be asking some questions.

Analyzing the Axial, Sagittal, and Coronal Views 🔗

Kris Huang: So let's have some fun with this. Let's properly nerd out on this, okay? You know this part. So here we have, obviously, axial, sagittal, and coronal views of a chest CT. Does anybody notice anything unusual? Not about pathology or anything, but just about what we're seeing here.

Mary Vater: Is it off-center?

Kris Huang: I suppose so, and part of it is, you know, I'm from Red Oaks, so this is a Red Oaks image. Yeah, it's definitely not centered the way it would be in radiology. If you notice, there are actually two tables here, a table on a table, and that's because we need an absolutely flat table, and that kind of pushes the patient upwards in the image a bit. But that's a good call. It's not about the actual image presentation here, though. Sorry, trying to be a little more specific.

Ethan Ulrich: To me the lungs look a little dense. And you can also see, I don't know, some sort of padding that the patient is on, which I'm not used to seeing in CT.

Kris Huang: Yeah. Again, this is not a diagnostic image; it's more of a therapy image. But that's a really good observation. In contrast to how we've probably seen CT images, you can see almost everything in here. You can see the lung tissue pretty well. You can see the low-density padding that the patient is kind of squished into. But you can also see the soft tissue; it's not the best contrast, but you can see okay contrast between the fat and the muscle. And are you guys able to see my mouse? Great. So: fat, muscle, bone, lung, you kind of see it all. You can even see the spongy bone in the vertebral bodies pretty well. So yeah, this is definitely not a normal CT view. Anybody have any comments about this histogram kind of stuck here in the middle?

Understanding Histogram Equalization in Medical Imaging 🔗

Ethan Ulrich: It looks to be trimodal.

Kris Huang: Yeah, kind of. So this is, I guess, what you get if you do a more literal histogram from a regular CT. Notice this is not in Hounsfield units, but there's a pretty wide range, from zero to roughly 2000 or so for compact bone. And yeah, there's a lot of air in this particular histogram.

They actually point out the anterior and posterior lungs here. I would actually argue this is probably more fat; fluid, or at least water, is supposed to be, by definition, 1000. And then we have a bunch of organs kind of lumped in this peak here, which is a little bit denser than water because it's got proteins and other stuff in it.

And then bone is kind of all throughout here. Oops. So if we were to just rescale linearly from zero to, let's say, 2000, we'd probably get a view like this, which is probably a little more like what we're used to seeing: the lungs are fairly black, and we're really only able to see some of the more prominent vessels.

And then the contrast between the fat and the muscle is a lot poorer. Ironically, you can actually see the bones reasonably well, but we kind of gave up something. And then, as you know, in radiology they like to use these lung windows and leveling, which is just another linear transformation. I got these lung window values from Radiopaedia, so hopefully they're pretty representative of what we would normally see in the clinic. This actually goes technically even lower than air, which is interesting, but it does give us a pretty good view of the lung parenchyma, and a pretty poor view of everything else. And I don't know about you, but on some of my monitors, the bone is just completely white.
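
For reference on what windowing and leveling amount to computationally, here is a minimal NumPy sketch; the `window_level` helper is illustrative, and the lung window values in the comment are the commonly cited Radiopaedia ones rather than the exact numbers from the slide:

```python
import numpy as np

def window_level(image, window, level):
    """Linear window/level transform: clip intensities to
    [level - window/2, level + window/2], then rescale to [0, 1]."""
    low = level - window / 2.0
    high = level + window / 2.0
    return np.clip((image - low) / (high - low), 0.0, 1.0)

# A commonly cited lung window is roughly window=1500, level=-600
# in Hounsfield units:
# displayed = window_level(ct_hu, window=1500, level=-600)
```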

And then the soft tissue window: we're really just focused on that center region here, two of the three modes that Ethan pointed out. We can see the muscle much better, and the bones are definitely a lot brighter, but we've lost the lungs almost entirely. The bone window is likewise good for bone; we can distinguish the interior from the cortical part of the bone much better here. Then, comparing the original histogram with this so-called equalized histogram, we can see there's a lot of dynamic range compression and expansion going on simultaneously: the high end here has been pushed down, and the low end has been pushed upwards. The idea is that you're trying to maximize the amount of information that you're showing, and there's actually a biological basis for this. In any case, here's a side-by-side comparison. Granted, this is all a compromise of sorts, but overall, at least to me, clinically, the equalized image gives me the best of almost everything. I can at least try to distinguish the interior, the bone marrow here, from the cortical part of the bone, which I lose in the soft tissue window. I get a really good view of the lungs; in fact, you might argue that the view of the lungs here is actually better than in the lung window. And we have at least a compromise on the soft tissue contrast, so you can kind of see everything. So again, what do we use histogram equalization for?

It's great for dynamic range compression. And it's really nice because it still preserves relative lightness and darkness: things that are lighter than others are still going to be lighter, and things that should be darker are still going to be darker. So the relative brightness is still sort of preserved. I've found this really great.

Whenever I have a PDF document that I need to generate for printing, like obviously we don't print very much anymore, but with printing the dynamic range is even worse, and we don't have the option to window or level anything. So it's really nice to give the user a good compromise view of everything. This is especially good for CT because the dynamic range is just so wide. You can still do it for MRI, but it's less important in my experience, just because MRI is already sort of perceptually balanced from the get-go, so really all you need is a proper window and level. Another really great thing about histogram equalization is that it's super easy; it's computationally not hard. You don't even really need any additional dependencies. If you're doing image processing in the Python ecosystem, you're likely already using NumPy, and this is the only function that you need, and it's really short. There's also one for JavaScript that's about equally short. And another nice thing is that this works in N dimensions: 2D, 3D, technically 4D, you can use it on just about anything. Has anybody ever used this before?
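
As a concrete illustration of how short that function can be, here is a minimal N-dimensional histogram equalization sketch in NumPy; this is a reconstruction for illustration, not necessarily the exact function shown in the talk:

```python
import numpy as np

def equalize_histogram(image, n_bins=256):
    """Histogram-equalize an N-D image (2D, 3D, ...) to [0, 1]."""
    hist, bin_edges = np.histogram(image.ravel(), bins=n_bins)
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]  # normalize the CDF to span [0, 1]
    # Mapping each voxel through the CDF flattens the histogram;
    # np.interp works on the flattened array, so any ndim is fine.
    bin_centers = (bin_edges[:-1] + bin_edges[1:]) / 2.0
    return np.interp(image.ravel(), bin_centers, cdf).reshape(image.shape)
```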

Reece Stevens: I have not, but I like the use case in particular of when you have to print something, or produce a fixed report, where it might be useful to include information, at least for visual reference, from multiple parts of that range, but you can't fit it all. That seems like a pretty good use case.

Morphological Operations in Image Processing 🔗

Kris Huang: Yeah, and actually I used it when I was generating reports for the FDA. For our clinical, or rather performance, testing, we obviously have to show scans, and we have to show contours on the scans. And it's not very helpful to give the FDA an image of a contour where it's all black in that area.

They'd be like, what is this? So it was really helpful for that. They never really commented on it, but I assume they liked it, because they didn't have any issues with interpreting the report. So, moving on to the morphological model, excuse me, such a hard word for me to say: morphological operations.

Reece Stevens: I'm sorry, I just have one other question on that, because I thought it was interesting, when you were showing the dynamic range stretching in the lungs, how there were some structures that were more clearly visible in the equalized image than in the lung window. I'm curious what you think the cause of that is. Is it something where, if you just tightened the range of the lung window, you would maybe see them more clearly? I guess, why do you think there's this difference in the visibility of the structures?

Kris Huang: Sure. So despite the fact that we see this line here, what's happening when you equalize the histogram is definitely not a linear thing. When you're stretching this, like if we go back to the lung window, we're kind of stretching it here, but because it's linear, it's just a proportional effect, if that makes sense.

Whereas with this, it's hard to map, because I don't actually have access right now to the actual transformation that was made, but this tail seems a lot longer than if you just stretched this out. Likewise, even for this, it isn't just a linear stretching. And when we deal with a lot of perceptual things, we find that there's a lot of non-linearity.

So a lot of the things that we as engineers love, like linearity, because it's nice and simple, frankly don't really apply directly very well. They end up being approximations that work most of the time, but perhaps we could do better. So, does anybody use morphological operations a lot? All right. Sweet. So just as a review of sorts: morphological operations are kind of like a cousin to convolutions.

Both of them have what they call a kernel, sort of the basis element that we use to process the image. So here we have a box, and here we have some approximation of a circle. Most of the time we're dealing with a binary element, so it's either on or it's off, which is somewhat different from convolution.

In fact, you can implement morphological operations using convolution. And for that matter, for those interested in speeding things up: because convolution can be done in the frequency domain, you should therefore be able to do morphological operations in the frequency domain as well. In any case, you can do this in 2D, in 3D, in N-D.
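
To make the convolution connection concrete, here is a hedged sketch of binary dilation and erosion written as thresholded convolutions with SciPy; the function names are illustrative, and it assumes a symmetric binary kernel (convolution flips the kernel, which only matters for asymmetric ones):

```python
import numpy as np
from scipy.ndimage import convolve

def dilate_via_convolution(mask, kernel):
    """Binary dilation: a pixel turns on if the kernel overlaps
    the mask anywhere in its neighborhood."""
    return convolve(mask.astype(np.uint8), kernel, mode="constant") > 0

def erode_via_convolution(mask, kernel):
    """Binary erosion: a pixel stays on only if the kernel fits
    entirely inside the mask."""
    return convolve(mask.astype(np.uint8), kernel, mode="constant") == kernel.sum()
```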

So the idea, or the way I simplify it for myself, is that in a morphological operation you take that basis structure, the structuring element, and, for dilation for example, for every white pixel you replace that white pixel with the structuring element. So here we'll do it once, and I'll just go back and forth so we can see the effect. Then we can do it again, and we can do it as many times as we like. One thing to notice is that the edges are kind of rounded, and that's because the kernel is rounded; I liken it to blowing up a balloon. We can go in the other direction, which we call erosion, and it's really just the reverse of that: for every black pixel, you replace it with that kernel, so we go back the other way. Now, theoretically we'd be back where we started, but clearly we're not: the inner corners have been rounded off.

The outer corners are still sharp, and we call this a closing; we'll talk a little more about it in a bit. If we keep on going and then go backwards, we have what we call morphological smoothing, where you can see both the inner and the outer corners are now rounded, and I'll go back and forth so you can see that. So again: a closing is a dilate and then an erode, and it smooths the intrusions into a shape. An opening is an erosion and then a dilation, and it smooths the protrusions. And a smoothing is when you do a closing and then an opening. So, hopefully not a hard question, but how many morphological operations do you need to do a smoothing?

Ethan Ulrich: Four.

Kris Huang: Sure, yeah. If you use the same kernel or structuring element, then yes, it would be four: you do the dilate, then erode, erode, then dilate. But you can actually combine those two erosions into a single larger kernel or structuring element. Then you could kind of cheat and just do one dilation, one twice-as-big erosion, and then one dilation again.

Nicholas Chavez: I have a question about that, about the hack, or I guess it's not a hack, but the strategy of using a larger erosion. When you say a larger kernel, if you go back a couple of slides, could you demonstrate what a larger kernel would mean? Would it mean just this multiplied by two, or squared, or so?

Kris Huang: Sure. So, for example, let's say we're not going to do a whole bunch of them. You could do a dilation like this. And for this one you don't really count the center, so this would be two extra: this would be two, and this is one. So I would do one dilation with this, then an erosion with this, and then another dilation with this one.

Nicholas Chavez: Gotcha, okay. Yeah, that helps.

Kris Huang: So yeah, this works in 2D and in 3D; I guess you could do 4D, but I've never needed to. Other fun tricks: I took this from OpenCV because they had this really nice example. If you have speckled noise, you could do an opening and potentially remove it all, depending on whether your kernel is big enough to catch it.
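
For anyone who wants to try these, here is a minimal sketch using `scipy.ndimage`, with placeholder data rather than the OpenCV example from the slides:

```python
import numpy as np
from scipy import ndimage

# Hypothetical noisy binary mask (placeholder data).
mask = np.random.rand(128, 128) > 0.5

# A small cross-shaped structuring element; pass a 3-D element
# (e.g. generate_binary_structure(3, 1)) for volumes.
kernel = ndimage.generate_binary_structure(2, 1)

# Opening (erode then dilate) removes speckles smaller than the kernel.
opened = ndimage.binary_opening(mask, structure=kernel)

# Closing (dilate then erode) fills holes smaller than the kernel.
closed = ndimage.binary_closing(mask, structure=kernel)

# Morphological smoothing: a closing followed by an opening rounds
# both inner and outer corners.
smoothed = ndimage.binary_opening(
    ndimage.binary_closing(mask, structure=kernel), structure=kernel
)
```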

Application of Morphological Operations in Medical Imaging 🔗

Kris Huang: One thing I would caution you on is that the shape isn't exactly preserved. It's somewhat minor in this case, but the dots here are definitely not what they were before. So if you really require something not to change its shape, I probably wouldn't pick this method; but if you just need something quick and dirty and close enough, then maybe this will work. Likewise, in reverse, if you have holes in your image, you could use a closing to close them up, and the same warning applies: you can see the intrusions into this J have been rounded out. So again, if you absolutely need shape fidelity, this is probably not the best way to go. And lastly, you can do both and subtract them.

And you get the outline of something. One might ask, why in the world would I ever need to do that? But it turns out in radiation oncology, sometimes we do. This is an example of a prostate treatment: this is the prostate here, this is the bladder, and this long, elongated structure is the rectum. You can see we're going to blast this prostate to a pretty high dose, and the rectum is sitting right behind it. Over the years there has been some debate over how we should contour this structure, because the easy, standard thing to do would be to contour the whole thing, but we honestly don't care how much we radiate what's inside the rectum; we really care about the wall of the rectum. Some protocols and research protocols might ask for this, and now that we're hypofractionating our treatments a lot more, which means we're treating with fewer treatment sessions but a more whopping dose at each session, wanting this wall structure is kind of coming back into fashion.
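
A wall or shell structure like that can be sketched as a dilation minus an erosion; this `contour_shell` helper is an illustration, not a treatment planning system's actual implementation:

```python
from scipy import ndimage

def contour_shell(mask, thickness=1):
    """Approximate a wall/shell structure (e.g. a rectal wall) by
    subtracting an eroded mask from a dilated one."""
    dilated = ndimage.binary_dilation(mask, iterations=thickness)
    eroded = ndimage.binary_erosion(mask, iterations=thickness)
    return dilated & ~eroded
```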

This isn't always offered by treatment planning systems, and it's not a hard thing to implement, so that's a little extra thing we could do. We commonly use these operations to expand or shrink contours for margins. Here is a screenshot from an older version of Eclipse, which is a treatment planning system, and you can see they actually allow you to define different distances for the different major directions. Does anybody have any idea how you might implement that? So far we've been talking about symmetrical structuring elements.

Reece Stevens: By using a non-square, or non-equally distributed, kernel?

Kris Huang: Yeah, and that's exactly right. The trick is how you do that in a smooth way, because we can't use a cubic or square box-like element; we'd end up with some really large discontinuities, sort of like eight cubes stuck together. We can leave this as an exercise for the reader, but clearly it's been done.

Some TPSs do use this as a smoothing operation, and yeah, you can make shells of contours this way.
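
One hedged way to sketch per-direction margins is an ellipsoidal structuring element with different radii per axis; the `ellipsoid_element` helper below is an illustration, and truly asymmetric margins (say, a different anterior versus posterior distance) would need an off-center element instead:

```python
import numpy as np
from scipy import ndimage

def ellipsoid_element(radii):
    """Ellipsoidal structuring element with per-axis radii in voxels,
    e.g. radii=(2, 5, 5) for a tighter superior-inferior margin."""
    grids = np.ogrid[tuple(slice(-r, r + 1) for r in radii)]
    # Points inside the ellipsoid satisfy sum((x_i / r_i)^2) <= 1.
    dist = sum((g / max(r, 1)) ** 2 for g, r in zip(grids, radii))
    return dist <= 1.0

# expanded = ndimage.binary_dilation(volume_mask,
#                                    structure=ellipsoid_element((2, 5, 5)))
```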

Perceptually Uniform Color Maps and Their Importance 🔗

Kris Huang: So, moving on to perceptually uniform color maps. How many of you folks have used perceptually uniform color maps? No? Okay. So I'm kind of curious: what color maps do you use most frequently, besides grayscale?

Reece Stevens: Pretty much grayscale all the way, unless it's the black body radiation one. And I think the only reason I use that one is because it's the default.

Kris Huang: Okay, fair enough. So, color mapping. I guess we don't think about it very much, but for radiology it really is the standard to color map some sorts of images: PET and functional imaging is almost always color mapped in some way, and usually overlaid on a grayscale image. The color map has gotten more attention as something we should probably pay attention to from a usability perspective.

So here we have some examples. At the top we have the Matlab favorites, Jet and Hot, and down here we have more perceptually uniform variants called Rainbow and Fire; they really love interesting names. Jet is originally from, maybe, JPL, I think, and I don't even know where Hot comes from, but these were really created with programming ease in mind; this is just straight pure blue, pure green, pure red, etc. What we're looking at is something called a sine grating plot. Has anybody seen these before? Okay, great, I feel good that I'm not talking about something you already know. At the bottom is just the plain color map, and at the top here we have slight perturbations in the brightness, in the level. What we're looking for, everywhere, is: can you see a difference between that gray level and its nearby neighbors? In a perceptually uniform color map we should be able to see these lines pretty much uniformly throughout the range, as we do here in this grayscale plot.

The problem is that some of them have these almost blind spots where you can't see the lines anymore. What this means is that in this range you'll actually have a hard time telling the difference between this intensity and this intensity, or even worse, for this one, this intensity and this intensity, which are actually pretty different: almost, I don't know, eight tenths of the color scale apart, and you can't see anything. Likewise here, this is a modified HSV, which is a rainbow; you wouldn't be able to see much of anything in this region or in this region. On the flip side, I don't know about you, but I kind of see a line here, and when you're viewing data, that can look like an artificial feature that doesn't actually exist. So this potentially has some usability, and risk, attached to it. Probably about ten years ago, people started recognizing that color perception was actually kind of important, so they started to model human vision, how we perceive colors and brightnesses, and bake that into color maps, so that these two are ostensibly pretty similar, and so are these.

But if you pick the right brightnesses and the right hues, you can actually start to minimize these problems. Does it matter? Yeah, it kind of does. You can see by the age of this, 1996, and from IBM, you know, the most innovative of places, that they caught on to this. They're clearly not radiologists, because their heads are facing the wrong way, but still: with standard Jet I can't even see anything in this whole area here, whereas with the grayscale, oh yeah, now we know what we're missing.
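
A sine grating test image like the ones discussed is easy to generate; this matplotlib sketch (the colormap choices and perturbation amplitude are arbitrary) renders a ramp with a faint grating under grayscale, Jet, and Viridis, so the "blind spots" show up as regions where the grating vanishes:

```python
import numpy as np
import matplotlib.pyplot as plt

# Horizontal ramp spanning the colormap, plus a faint high-frequency
# sine grating whose amplitude grows from zero at the top row, so the
# plain ramp remains visible for comparison.
x = np.linspace(0.0, 1.0, 1024)
y = np.linspace(0.0, 1.0, 128)
X, Y = np.meshgrid(x, y)
test_image = X + 0.03 * Y * np.sin(2 * np.pi * 100 * X)

fig, axes = plt.subplots(3, 1, figsize=(8, 5))
for ax, cmap in zip(axes, ["gray", "jet", "viridis"]):
    ax.imshow(test_image, cmap=cmap, aspect="auto", vmin=0.0, vmax=1.0)
    ax.set_title(cmap)
    ax.set_axis_off()
fig.tight_layout()
plt.show()
```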

Does it matter clinically? Well, there's not a whole lot of data in this area, but one of the more recent articles, from 2019, was actually done by a couple of people from the FDA, so it's kind of an important topic for them. And the answer is yes, it does matter. What they did was study grayscale, Hot, and Rainbow, which I'm thinking is probably Jet. They did myocardial CT perfusion and asked radiologists to find the under-perfused areas, and they also did prostate MRI apparent diffusion weighting and asked radiologists to find the tumor. In the first use case, they saw a statistically significant difference in the AUC for detection of under-perfused areas between grayscale and, in particular, rainbow.

This doesn't look like a whole lot of difference, but it's actually almost a 20% difference, from the mid 50s to the upper 60s, so I would argue that's clinically significant. Clearly, perceptual uniformity paid off here. Unfortunately, this didn't really pan out in the prostate detection task, possibly because the tumor was already obvious and it didn't really matter. But nonetheless, I think it's safe to say that being perceptually uniform was definitely not a detriment.

Color Blindness Considerations in Medical Imaging 🔗

Kris Huang: Do we have any colorblind folks in the group? Okay. Well, I looked it up, and there actually are colorblind radiologists, maybe not completely colorblind, but at least partially. So these are folks that we should probably also try to accommodate.

These are renderings of the common Matlab color maps as perceived by people with different types of partial color blindness, and you can see that the closer you are to perceptual uniformity, the better it looks. Gray and Bone obviously don't change. Even Hot didn't do that badly, and that's because the perceived brightness is at least in line with what we expect. But other color maps get kind of confusing: here we have a real perceptual dead spot in both the yellow and the blue zones, same here, and kind of here and here. And then there are also all sorts of color maps we can use just because they look cool.

3D Rendering and Color Mapping in Radiology 🔗

Kris Huang: So particularly for 3D rendering, it's just a fun thing to do. It was kind of fortuitous that Matt had mentioned dealing with TeraRecon a little bit earlier in one of the other channels; I think they kind of had their claim to fame starting with 3D renderings of things. Out of curiosity, is that something clients ever seem to care about?

Matt Hancock: TeraRecon or 3D rendering?

Kris Huang: 3D rendering, sorry.

Reece Stevens: I’m sorry, it's pretty hard to hear you, Bimba. I think maybe your mic is not working or something.

Bimba Shrestha: Sorry. Can you hear me now?

Mary Vater: Yeah, I can hear you now.

Reece Stevens: Yes, yes.

Bimba Shrestha: Oh, okay. Cool. Yeah, just saying it's come up on a client project, so I think I'll be able to contribute there.

Kris Huang: Yeah. I imagine particularly for online or web-based applications it's maybe more challenging, I don't know. But you can get really creative with color mapping. This one was from a paper about turning volume renderings into something more perceptually pleasing, mostly for the purposes of education, possibly surgical planning; it's just easier to imagine that you really are looking at a patient. They take advantage of the fact that Hounsfield units tend to be associated with different kinds of tissues, as we saw earlier.

Here are a couple of other versions where you can make pretty pictures by playing with color maps. This was a CT with contrast, and if you arrange your color map right, you can pretty much get rid of most of the tissue and look at, I guess, what is most important: the major organs, major vessels, bones. And likewise you can do the same for the lungs; you can really see the tree of the lungs here. It's kind of neat.

So in my practice I really like to use perceptually uniform color maps whenever possible, and I do think it's worth having that discussion with clients. On one project I brought it up with the client, and they said they had actually considered it, but when they asked the doctors what they preferred, the doctors actually preferred the non-uniform maps. I think a lot of it is visual appeal versus "can I distinguish the data here," and it's really easy, especially if you don't even know that it's happening, to go for the visual appeal.

Try to keep people with color blindness in mind when we manufacture our devices. It's okay to have something that's not perceptually uniform as your default, if that's what people like, but I do think it's important to offer a selection, or make it customizable, so that those people can be accommodated. For PET, I definitely recommend Viridis or Inferno. Inferno is very heat-like, so it's very natural for radiologists to look at; they probably won't even notice that you switched it up, but the ability to distinguish different levels is definitely better. And then for CT and MRI, it's hard to beat good old-fashioned grayscale. So thanks for listening; I'm curious to hear what sorts of image processing techniques are, you know, standbys for you guys.

Reece Stevens: Great presentation, Kris. It's awesome.

Bimba Shrestha: I was just going to say, on that last color map stuff, I didn't have anything specific to point out, but I was curious if you'd heard about the thing with Jet. There was some controversy that involved NASA and Google, about how they discovered that Jet was not very accessible to colorblind people, and that people were seeing big patterns in Jet in particular that weren't really there. I don't know how much of that is true for medical imaging, but do you know anything more about that? Is it pretty safe to recommend avoiding Jet more or less whenever possible?

Kris Huang: I think it's okay to use a rainbow-like one, maybe not Jet, partially because of the potential for seeing features that aren't there, like you saw here. Unfortunately, this is sort of a soft area, and even studies like this are hard to come by, because it is such a subjective thing. But again, I would point out that perceptually uniform maps do not underperform compared to Jet, despite how aesthetically pleasing Jet may look. So there's definitely no downside to going with them, and there's potential for risk mitigation if you do use them.

Bimba Shrestha: Gotcha. Yeah, those 3D renderings look awesome.

Kris Huang: Thanks. That one was from a paper. Yeah, this is from a paper, so it does look pretty cool.

Matt Hancock: I've heard about perceptual uniformity, but I hadn't seen the sine grating thing, and seeing it visually really made it pretty apparent. I kind of understood the importance, but never really visually understood it until I just saw this. So that was really interesting.

Joshua Tzucker: I just want to say that a lot of these terms and things I had heard of, and knew a little bit about, but I feel like a lot of things just clicked for me seeing everything all together, so kudos on that. I did have a question, kind of along the lines of designing for accessibility. I'm curious whether much research has been done into how external factors affect perception, because I could imagine things like an OLED versus LCD screen, or the type of lighting in the environment where the person is viewing images, all of that could affect perception?

Kris Huang: Yeah, they can. Certainly the quality of your screen does matter. Whether the display technology itself matters, I don't think anybody has studied directly. But definitely when it comes to actual monitor performance, for example the delta, I forget the letter, but anyway, when they calibrate monitors and certify them for medical use, they definitely have a standard that they need to meet. And it kind of makes sense that the display technology itself doesn't matter that much, because if they're already measuring the output, that's what goes to your eyes anyway. That being said, for LCDs, viewing angle does matter, no matter how good your LCD is. So in that respect, maybe OLED or micro-LED might be superior in the future, barring burn-in.

Reece Stevens: I never noticed this feature in iPhones before, but I was helping my grandmother with her iPad, going through all the accessibility settings to turn on some stuff, and I noticed one of the settings in there will adjust the color tone of the display based on ambient lighting conditions, to try to make the perception of colors more uniform in different lighting. I thought that was really interesting. I had never seen that setting before, but it definitely made me more aware of lighting conditions, especially really dramatic ones: if you're in a place that has red lights on, or some really skewed, unnatural light coloring, it can really change how everything looks. So I thought it was pretty interesting to see that they built in stuff just for perception.

Kris Huang: Yeah, Apple is really on top of those issues, and it kind of sets them apart from most other hardware manufacturers; I definitely give them a lot of credit for that. And that's a great point. For medical devices, there are a lot of these little soft touches that, individually, might not seem to make much of a difference, but overall, if you have enough of them, they add up to a perceivable difference in the quality of the device. We see this with cars all the time, right? What's the difference between a Lexus ES and a Camry? They have the same platform, they use a lot of the same parts, you can go to either dealership and get them repaired. But one of them commands a higher price, and one of them is generally seen as better than the other. I definitely think that applies to medical devices; I think we've all used something that's really refined, and then something that's clearly not.

Machine Learning and Morphological Operations in Image Processing 🔗

Bimba Shrestha: One thought I had when you were talking about morphological operations: you said they're related to convolutions, right? I'm wondering, in the context of deep learning, do you know if there's research out there on whether it's still beneficial to explicitly do morphological operations for pre-processing or post-processing? Because with convolutional neural nets, since it's just convolution, and that's related to morphological operations, I would think the network would just capture anything productive you'd want to do with morphological operations inside the black box. So is it still even beneficial to do any of this morphological pre- or post-processing, or would it be better to just let the black box figure it out?

Kris Huang: That's a good and fun question to think about. In the end I'd have to say I don't know, but it would make sense to me that the cleaner the data you can provide, either during training or inference, the better. I don't have any hard evidence regarding AI, but just thinking about how human vision works: there's a lot of pre-processing that happens before the signal from your eye gets to your brain, and even then, before it gets to the conscious part of your brain. So to me, that at least suggests that the more you can clean things up before they get to the important inference step, the better off you'll probably be.

JP Centeno: And in the context of neural nets, there are the normalization layers that you usually encounter. The purpose of those is to normalize the data, but if your input data is very, very non-normalized, I guess you can add enough variance to the model that you'd want to prevent that too?

Kris Huang: Without knowing anything about the model, I would say probably yes. When you think about even histogram equalization, it's kind of a weird thing, because you're increasing the contrast between a lot of things, but at the same time it's also compressing things. It's trying to maximize the amount of information being conveyed, but without actually changing the data, which is kind of strange, because histogram equalization is actually reversible: you can get the transform that maps brightnesses from A to B, then reverse it and get back to A without any loss, really. So you're not actually losing information, but you are presenting it in a different way. I don't know if that answers your question, but that's a really interesting area.

Bimba Shrestha: Yeah. I'm really curious to see if there's anything out there, because I know morphological operations were very useful for noise reduction before deep learning became popular, and I'm guessing all that sort of stuff is still worth doing as a pre-processing step, even if the neural net captures it anyway. It just wouldn't hurt to clean the data, like you said; I can't imagine it hurting, it only helps.

Matt Hancock: Yeah, quite a bit, still, for the post-processing of the masks produced by segmentation, even recently for a client project. Say your primary thing that you're trying to segment is in the middle of a volume: no matter how great your model is, there are going to be some kind of spurious things way over by the feet, let's say, that you want to clean up so they don't contribute towards volume counts and things like that, in my experience. So I feel like morphological operations are still used quite a bit for post-processing the segmentation mask. But it's definitely context dependent; if you have the world's greatest model trained on all the data in the world, maybe you don't need it.
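
One common version of that cleanup step, sketched here with connected-component labeling rather than morphology per se (the helper name is illustrative):

```python
import numpy as np
from scipy import ndimage

def keep_largest_component(mask):
    """Suppress spurious islands in a predicted segmentation mask
    by keeping only the largest connected component."""
    labels, n_components = ndimage.label(mask)
    if n_components == 0:
        return mask
    # Count voxels per label (label 0 is background, so start at 1).
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n_components + 1))
    return labels == (np.argmax(sizes) + 1)
```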

Bimba Shrestha: Yeah, that makes sense. If there's just a tiny mis-prediction in the corner somewhere, it feels like something these operations are purpose-designed for.

Kris Huang: It just tells you how weird AI models are, because, in Matt's example, do you think a person would ever make a mistake like that?

Matt Hancock: Like, say there's a liver lesion in the foot or something? Probably not.

Kris Huang: Yeah. Or, like you said, you have all these little bits elsewhere and all this cleanup necessary. Do you think those are errors that a human would make?

Matt Hancock: Probably not, yeah, probably not those mistakes. Those seem pretty unique to neural nets.

Ethan Ulrich: I mean, I have seen it where a person was doing an annotation and they just clicked off to the side and made a little island of a label, which was an error. So it does happen, maybe not in the same way.

JP Centeno: And a tiny one like that, unless you're running code on top of the mask, you probably won't find.

Matt Hancock: There's something related, maybe not to model quality, but to approach: maybe you need to break apart instances, and you don't have the data annotated to do something like instance segmentation. So you have, say, teeth abutting, and your masks touch, and you need to do one of these morphological operations to try to split them apart. Those kinds of things. So it's definitely still a good tool to have in your toolbelt.

Kris Huang: Agreed.

Ethan Ulrich: Another thing that I wanted to mention that's useful is histogram normalization, or histogram matching as it's sometimes called: you take a large set of data and make it more similar. It's not as necessary for CT, but for something like MRI, the pixel intensity value doesn't necessarily have a quantitative meaning. If you image the same patient with two different MRI vendors, those values are going to be very different for the same region of the body. So sometimes there's a pre-processing step where you do histogram normalization; it's usually not a linear function, but some way to match the histograms for each series across vendors so that they look more similar. That's pretty important for quantitative analysis. If you're ever using the intensity values of an MRI image, which maybe is frowned upon but sometimes might be necessary, the ability to normalize those intensity values is really important when you're using data from different MRI vendors.
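
For illustration, scikit-image ships a histogram matching function; this sketch, with placeholder arrays standing in for real series, remaps one vendor's intensities to match another's:

```python
import numpy as np
from skimage.exposure import match_histograms

# Hypothetical volumes of the same anatomy from two MRI vendors
# (placeholder arrays standing in for real series).
series_vendor_a = np.random.rand(32, 256, 256) * 500
series_vendor_b = np.random.rand(32, 256, 256) * 3000

# Remap vendor B's intensities so its histogram matches vendor A's,
# making the two series more comparable for quantitative analysis.
matched_b = match_histograms(series_vendor_b, series_vendor_a)
```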

Kris Huang: Very true. On that note, for MRI, and I didn't cover this topic, but has anybody ever used homomorphic filtering? In MRI, because it's based on a magnetic gradient, you sometimes have inconsistency in that gradient. Has anybody ever noticed that the center of the image is usually brighter than the periphery, so it sort of looks like you've got a flashlight shining down on the image? If you were to take an intensity profile across the image, it would look like that, and that will really screw up anything quantitative you might want to do. So, in addition to (definitely not replacing) histogram normalization, you can use homomorphic filtering, which uses frequency-space filtering to even out this low-frequency component. Shading like this is extremely low frequency, while the details in the image are very high frequency, so if you can selectively remove the low-frequency portion, you can even out the gradient and still be left with all the detail.
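
A minimal homomorphic filtering sketch for a 2D slice, assuming illustrative cutoff and gain values: take the log, attenuate the low spatial frequencies in the Fourier domain, and exponentiate back:

```python
import numpy as np

def homomorphic_filter(image, cutoff=0.02, low_gain=0.5):
    """Even out smooth 'flashlight' shading in a 2-D slice by
    attenuating very low spatial frequencies in the log domain."""
    log_img = np.log1p(image.astype(np.float64))
    spectrum = np.fft.fftshift(np.fft.fft2(log_img))
    # Radial spatial-frequency coordinate for every FFT bin.
    u = np.fft.fftshift(np.fft.fftfreq(image.shape[0]))
    v = np.fft.fftshift(np.fft.fftfreq(image.shape[1]))
    radius = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
    # Gaussian high-pass: scale low frequencies by low_gain while
    # keeping high frequencies (the image detail) near 1.
    gain = low_gain + (1.0 - low_gain) * (1.0 - np.exp(-((radius / cutoff) ** 2)))
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * gain)).real
    return np.expm1(filtered)
```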

Accessibility and Risk Management in Color Usage 🔗

Ethan Ulrich: Is that what they sometimes call bias field correction?

Kris Huang: I don't think so, but maybe that is what they call it.

Bimba Shrestha: I don't remember exactly, but I think that might be something else.

Matt Hancock: Maybe homomorphic filtering is one method to achieve the bias field correction?

Kris Huang: My impression is that bias field correction is more about the actual geometric distortions that can occur, but not necessarily the intensity alterations.

Reece Stevens: There's one other comment I wanted to make. This is backtracking a little bit, but on the topic of color, in terms of managing risk associated with the use of color, and this might be obvious to everybody already: I think probably one of the cases where we run into risk with color the most is using red to indicate an error condition in an application.

We run into this all the time, especially if you're using, as a risk control measure, the approach of "if there's a value the user needs to pay attention to, we're going to highlight it by making it red." That doesn't work for colorblind users, and so that risk control measure isn't applicable to some portion of the user population.

So using color alone as an indicator of something important we need to communicate to the end user is generally not sufficient; you need some sort of secondary way to indicate that the information is extra important. I just wanted to mention that because it's a pretty mundane use of color, but I think we run into it a lot, and it's worth bearing in mind.

Nicholas Chavez: That was a big discussion back when I worked in manufacturing with big pieces of manufacturing equipment. It was really important to serve different people with different disabilities in differentiating between a notification, a warning, and an alarm, because depending on the severity level, either it's just trying to inform you of something, or you're causing damage to the device you're working on, or you could incur bodily harm. So it was a huge discussion as part of our DRM training to ensure that when you're giving the user information, it's done in a way that serves everyone.

Kris Huang: Hopefully, if we have submissions that go to either of those two people from the FDA, the ones from that paper, we can make them happy, right? Thanks so much, guys.

Reece Stevens: Yeah. Thank you. This was a great talk.

Bimba Shrestha: I feel like in the age of deep learning, it's nice to see old school image processing stuff. So yeah, thanks for the presentation.

Kris Huang: Thank you. See you guys later.
