An Explainable Learning Tool for Convolutional Neural Networks
My MSc project aimed to address the lack of explainability in CNNs through a novel visualisation technique.
Convolutional neural networks achieve impressive results on complex image classification tasks, but their black-box design means their predictions lack explainability. Existing visualisation tools address this issue and can provide rich insight into model behaviour; however, they are in short supply and typically fail to meet user requirements because they rely on shallow models and cluttered interfaces.
My Master’s thesis addressed these weaknesses through a novel visualisation technique that uses a Hierarchical Agglomerative Clustering (HAC) algorithm to group the feature map outputs at hidden layers of the VGG-16 model. The tool was evaluated in a small qualitative study, which showed that interactive CNN tools visualising feature maps provide explainability that is useful for new learners and experts alike.
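To illustrate the idea, here is a minimal sketch (not the thesis implementation) of extracting the feature maps from one hidden VGG-16 layer and grouping them with hierarchical agglomerative clustering. The layer index, cluster count, Ward linkage, and flattening of each map into a vector are illustrative assumptions.

```python
import torch
from torchvision import models, transforms
from sklearn.cluster import AgglomerativeClustering
from PIL import Image

# Load a pretrained VGG-16 and choose one hidden convolutional layer to inspect.
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
layer_index = 16  # assumed: a mid-network layer in vgg.features

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
image = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)

# Run the image through the network up to the chosen hidden layer.
with torch.no_grad():
    activations = image
    for i, layer in enumerate(vgg.features):
        activations = layer(activations)
        if i == layer_index:
            break

# Each channel is one feature map; flatten each map so maps can be compared.
feature_maps = activations.squeeze(0)                            # (channels, H, W)
vectors = feature_maps.reshape(feature_maps.shape[0], -1).numpy()

# Group similar feature maps with HAC (Ward linkage chosen here for illustration).
labels = AgglomerativeClustering(n_clusters=8, linkage="ward").fit_predict(vectors)
print(labels)  # cluster label assigned to each feature map channel
```

Grouping the maps this way lets a visualisation show one representative per cluster instead of hundreds of individual channels, which is the kind of decluttering the tool aims for.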
Check out the project on GitHub.
Explainable AI Computer Vision Deep Learning Clustering