If you followed along with our last post, we developed a deep-learning model that achieves our goal of identifying Simpsons characters in an image. However, as with any software development task, getting a working program is only half the battle. To maintain a program and fix bugs, the developer must understand the system; in particular, they must understand how it fails as well as how it succeeds. This can be quite difficult for deep-learning models, which are black boxes by construction. Fortunately, there are techniques at our disposal to open up the black box and get a view into what is happening inside a trained model; these can help us find “bugs” in the model’s learning and even indicate how to resolve them. Among the many techniques for visualizing the internals of a deep-learning model, we will focus on class activation maps.
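
To make the idea concrete, here is a minimal sketch of computing a class activation map, in the style of the original CAM formulation (Zhou et al., 2016). It assumes a Keras model that ends in a global average pooling layer followed by a dense softmax layer; the layer name `"last_conv"` is a placeholder for whatever your model's final convolutional layer is called, not a name from our earlier post.

```python
import numpy as np
import tensorflow as tf

def class_activation_map(model, image, class_idx,
                         last_conv_layer="last_conv"):
    """Compute a class activation map for a single image.

    Assumes `model` ends in GlobalAveragePooling2D -> Dense(softmax).
    `last_conv_layer` is a placeholder name for the model's final
    convolutional layer.
    """
    # Sub-model that outputs the final convolutional feature maps.
    conv_layer = model.get_layer(last_conv_layer)
    feature_model = tf.keras.Model(model.inputs, conv_layer.output)
    feature_maps = feature_model(image[np.newaxis, ...])[0]  # (H, W, C)

    # Dense-layer weights connecting each feature map to the target class.
    class_weights = model.layers[-1].get_weights()[0][:, class_idx]  # (C,)

    # Weighted sum of the feature maps yields a coarse spatial heatmap.
    cam = tf.reduce_sum(feature_maps * class_weights, axis=-1).numpy()

    # Keep positive evidence and normalize to [0, 1] for display.
    cam = np.maximum(cam, 0)
    cam /= cam.max() + 1e-8
    return cam
```

The resulting heatmap has the spatial resolution of the final convolutional layer; upsampling it to the input image's size and overlaying it shows which regions of the image pushed the model toward the chosen class, which is exactly the diagnostic view we'll use in the rest of this post.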