- We are finishing up a series of experiments comparing sparse unsupervised image-classification methods, focusing on the MNIST dataset. In particular, we have covered Sparse Autoencoders and Convolutional Growing Neural Gas (representing the broader family of Convolutional Competitive Learning methods). The latter has the added benefit of working equally well on nonstationary input.
- We plan to work with more sophisticated datasets, starting with Street View House Numbers (SVHN).
- We are actively investigating whether Capsule Networks can be trained in an unsupervised manner.
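To make the competitive-learning family concrete: the step below is a minimal, simplified sketch of the update at the heart of Growing Neural Gas (winner and topological neighbours move toward each input, edges age and expire). It is an illustrative toy, not our actual implementation; the function name `gng_step` and all parameter values are hypothetical, and node-insertion/error-accumulation steps are omitted for brevity.

```python
import numpy as np

def gng_step(units, edge_age, x, eps_w=0.05, eps_n=0.006, max_age=50):
    """One simplified competitive-learning step of Growing Neural Gas.

    units:    (k, d) array of unit weight vectors (modified in place).
    edge_age: dict mapping frozenset({i, j}) -> age of that edge.
    x:        (d,) input sample.
    """
    # 1. Find the winner s1 and runner-up s2 for input x.
    dist = np.linalg.norm(units - x, axis=1)
    s1, s2 = np.argsort(dist)[:2]
    # 2. Age every edge attached to the winner.
    for e in list(edge_age):
        if s1 in e:
            edge_age[e] += 1
    # 3. Move the winner strongly, and its neighbours weakly, toward x.
    units[s1] += eps_w * (x - units[s1])
    for e in edge_age:
        if s1 in e:
            n = next(iter(e - {s1}))
            units[n] += eps_n * (x - units[n])
    # 4. Connect winner and runner-up with a fresh (age-0) edge.
    edge_age[frozenset({s1, s2})] = 0
    # 5. Drop edges that have grown too old.
    for e in [e for e, a in edge_age.items() if a > max_age]:
        del edge_age[e]
    return s1
```

Because only the winner and its graph neighbours adapt, the unit graph tracks the input distribution even as it drifts, which is why this family copes with nonstationary input.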
Theory & Literature Review
- We are looking into computational models of episodic memory and medial temporal lobe function.
- We are reviewing methods of integrating feedback into hierarchical models, including message-passing algorithms on probabilistic graphical models (e.g., Belief Propagation on Markov Random Fields) and the dynamic routing used in Capsule Networks.
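As a small illustration of message passing as a feedback mechanism, the sketch below runs exact sum-product Belief Propagation on a chain-structured MRF over binary variables: backward messages carry "top-down" evidence to earlier nodes, and each belief combines bottom-up and top-down information. This is a generic textbook construction, not a method from our experiments; the function name `chain_bp` and the shared pairwise potential are simplifying assumptions.

```python
import numpy as np

def chain_bp(unary, pairwise):
    """Sum-product Belief Propagation on a chain MRF with binary variables.

    unary:    (n, 2) array of unary potentials phi_i(x_i).
    pairwise: (2, 2) array psi(x_i, x_{i+1}), shared by every edge for simplicity.
    Returns an (n, 2) array of exact marginals.
    """
    n = unary.shape[0]
    # Forward messages m_{i-1 -> i}(x_i): bottom-up evidence.
    fwd = np.ones((n, 2))
    for i in range(1, n):
        fwd[i] = (fwd[i - 1] * unary[i - 1]) @ pairwise
        fwd[i] /= fwd[i].sum()          # normalise for numerical stability
    # Backward messages m_{i+1 -> i}(x_i): top-down feedback.
    bwd = np.ones((n, 2))
    for i in range(n - 2, -1, -1):
        bwd[i] = pairwise @ (bwd[i + 1] * unary[i + 1])
        bwd[i] /= bwd[i].sum()
    # Beliefs fuse local evidence with both message directions.
    belief = unary * fwd * bwd
    return belief / belief.sum(axis=1, keepdims=True)
```

On a tree (here, a chain) the procedure is exact; on loopy graphs the same updates are iterated as an approximation, which is one of the feedback schemes under review.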