Even though our dataset only labels images with 10 classes, the examples could be further divided into additional subcategories. For example, consider the images of planes. We have images of the fronts of planes on the ground:

And images of planes flying against different colored skies:
Although these are all images of planes, it seems plausible that there could be a difference in the activation patterns between the two sets of images.
If we view the output of the fully-connected layers as signatures describing the images, then by computing distance metrics between these signatures we can rank the similarity of images.
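As a minimal sketch of the signature idea (the 4-D vectors below are made up, not real FC64/FC10 activations), ranking images by cosine similarity between their activation vectors might look like:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two activation 'signatures'."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_by_similarity(query, signatures):
    """Return image indices ordered from most to least similar to `query`."""
    scores = [cosine_similarity(query, s) for s in signatures]
    return sorted(range(len(signatures)), key=lambda i: -scores[i])

# toy stand-ins for FC-layer outputs of three images
sigs = np.array([[1.0, 0.0, 0.0, 0.0],
                 [0.9, 0.1, 0.0, 0.0],
                 [0.0, 1.0, 0.0, 0.0]])
print(rank_by_similarity(sigs[0], sigs))  # nearest image first
```

Cosine distance is one reasonable choice here because FC activations can differ in overall magnitude; Euclidean distance on L2-normalized signatures would behave similarly.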
Given the FC64 or FC10 outputs for all of the images, we could apply clustering techniques to identify groups of images that the network classifies similarly. We could project the results of this clustering down to two dimensions and display the images according to this clustered layout.
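A rough sketch of that pipeline, using plain k-means over the signatures and PCA for the 2-D layout (the data here is synthetic; a real version would feed in the network's FC64 outputs and could swap in t-SNE or UMAP for the projection):

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    """Plain k-means: cluster FC-layer signatures (rows of X) into k groups."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each signature to its nearest center
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # move each center to the mean of its assigned signatures
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def pca_2d(X):
    """Project signatures to 2-D (top two principal components) for display."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:2].T

# two synthetic blobs standing in for FC64 outputs of 40 images
X = np.vstack([rng.normal(0, 0.1, (20, 64)), rng.normal(1, 0.1, (20, 64))])
cluster_labels = kmeans(X, 2)
coords = pca_2d(X)  # (40, 2) positions for laying out image thumbnails
```

Re-running this on checkpoints saved during training would give the evolving layout described above, though cluster identities are not stable across runs, so the layouts would need alignment (e.g. Procrustes) to animate smoothly.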
It would be interesting to see how this clustering evolves while the model is trained. I imagine that at first the similarity scores might reflect low-level image features, like overall color, but that over time the clusters would be based on higher-level features and some subcategories might begin to emerge.
This view could also be useful for understanding misclassifications; I've noticed that the classifier sometimes confuses dogs and horses. Given a misclassified horse, being able to see the most similar dog images might help to explain the misclassification: maybe that particular horse image is atypical and is similar to some particular subgroup of dog images.
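The misclassification view could be backed by a simple nearest-neighbor query restricted to the predicted class. A sketch, with synthetic signatures and illustrative label values (0 = dog, 1 = horse):

```python
import numpy as np

def nearest_in_class(query_sig, signatures, labels, cls, k=5):
    """Indices of the k images of class `cls` whose signatures are
    closest (Euclidean) to the query image's signature."""
    idx = np.flatnonzero(labels == cls)
    dists = np.linalg.norm(signatures[idx] - query_sig, axis=1)
    return idx[np.argsort(dists)[:k]]

# toy setup: 10 images with 8-D signatures, first 5 dogs, last 5 horses
rng = np.random.default_rng(1)
sigs = rng.normal(size=(10, 8))
labels = np.array([0] * 5 + [1] * 5)

horse_sig = sigs[7]  # pretend this horse was misclassified as a dog
closest_dogs = nearest_in_class(horse_sig, sigs, labels, cls=0, k=3)
print(closest_dogs)  # the dog images most similar to the atypical horse
```

Displaying `closest_dogs` next to the misclassified horse would show whether it really does resemble a particular subgroup of dog images.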




