In our papers (SVCCA, Insights on Representational Similarity), we explored some questions using these techniques, but many other open questions remain:
- Applying some of these methods to understand properties of generative models.
- Exploring low-rank-based compression, particularly with PLS.
- Understanding learning dynamics and representational similarity of generalizing/memorizing networks through training.
- Identifying the important neurons in a layer, and determining how many there are. Relatedly, identifying which neurons become sensitive to which classes.
- Pinpointing the effect of different kinds of layers on the representation.
- Comparing representations in biological and artificial neural networks.
We hope the code and tutorials are a helpful tool in better understanding representational properties of neural networks!
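As a starting point for experiments like those above, here is a minimal, self-contained sketch of a CCA-based similarity score between two sets of activations. Note this is an illustrative NumPy-only implementation, not the repository's own API; the function name `mean_cca_similarity` and its exact normalization are assumptions for this sketch.

```python
import numpy as np

def mean_cca_similarity(X, Y):
    """Mean canonical correlation between two activation matrices.

    X: array of shape (n_datapoints, n_neurons_1), activations of one layer.
    Y: array of shape (n_datapoints, n_neurons_2), activations of another layer,
       evaluated on the same datapoints.
    """
    # Center each neuron's activations over the dataset.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # Orthonormal bases for the two activation subspaces.
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    # The canonical correlations are the singular values of Qx^T Qy.
    rho = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return float(rho.mean())

rng = np.random.default_rng(0)
acts = rng.standard_normal((500, 10))
# CCA is invariant to invertible linear transforms of the neurons,
# so a linearly transformed copy of a layer scores (near) 1.
transform = rng.standard_normal((10, 10))
print(mean_cca_similarity(acts, acts @ transform))
```

Because the score depends only on the subspaces spanned by the neurons, it is well suited to comparing layers of different widths, or the same layer at different points in training.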