Research Projects

Predictive Generative Models (2019-)

This project aims to develop a predictive model – such as our RSM – into a self-looping generative model that can be used to simulate future events. We’ve begun testing the model on the “bouncing balls” task described in “Predictive Generative Networks” (Lotter et al., 2015), and on RAVDESS, a database of actors singing, speaking and emoting.
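
To make the “self-looping” idea concrete, here is a minimal sketch of an autoregressive rollout in which a next-frame predictor is fed its own output as the next input. The `predict_next` function and frame size are hypothetical placeholders, not our actual model.

```python
# Minimal sketch of a "self-looping" generative rollout (illustrative only; not the
# RSM implementation). A model trained to predict the next frame is fed its own
# prediction back as input to simulate several steps into the future.
import numpy as np

def predict_next(frame):
    # Placeholder for a trained next-frame predictor (e.g. on bouncing-balls data).
    # Here it simply returns the input unchanged.
    return frame

def rollout(seed_frame, num_steps):
    """Generate a simulated future by looping the model's output back to its input."""
    frames = [seed_frame]
    for _ in range(num_steps):
        frames.append(predict_next(frames[-1]))
    return np.stack(frames)

if __name__ == "__main__":
    seed = np.zeros((64, 64), dtype=np.float32)   # e.g. one 64x64 frame
    simulated = rollout(seed, num_steps=10)
    print(simulated.shape)                        # (11, 64, 64)
```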

Reinforcement learning of attentional strategies (2019-)

Recently, Transformers and other attentional neural architectures have shown great promise in a number of domains. How can we reconcile the deep backpropagation used in Transformers with our desire for biologically plausible, local credit assignment? We aim to use the Bellman equation – the foundation of discounted Reinforcement Learning – to learn attentional strategies for controlling a set of RSM memory “heads” such that their predictive abilities are maximized.
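
As a rough illustration of the idea (not our implementation), the sketch below uses a tabular Q-learning update – one way of solving the Bellman equation – to learn which memory “head” to attend to, rewarding choices that reduce prediction error. The state space, the transition and the `prediction_error` function are hypothetical stand-ins.

```python
# Hedged sketch: tabular Q-learning (a solution to the Bellman equation) for choosing
# which memory "head" to attend to at each step. States, transitions and the reward
# signal below are placeholders, not part of RSM.
import numpy as np

NUM_STATES, NUM_HEADS = 16, 4
GAMMA, ALPHA, EPSILON = 0.9, 0.1, 0.1      # discount, learning rate, exploration

Q = np.zeros((NUM_STATES, NUM_HEADS))
rng = np.random.default_rng(0)

def prediction_error(state, head):
    # Placeholder: in the real setting this would be the predictive model's error
    # after attending with the chosen head.
    return rng.random()

state = 0
for step in range(1000):
    # epsilon-greedy choice of which head to attend to
    head = rng.integers(NUM_HEADS) if rng.random() < EPSILON else int(Q[state].argmax())
    reward = -prediction_error(state, head)     # better prediction => higher reward
    next_state = rng.integers(NUM_STATES)       # placeholder environment transition
    # Bellman (Q-learning) update with discounted future value
    Q[state, head] += ALPHA * (reward + GAMMA * Q[next_state].max() - Q[state, head])
    state = next_state
```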

Learning distant cause & effect with only local credit assignment (2018-2019)

In May we introduced our Recurrent Sparse Memory (RSM). We showed that it can learn to associate distant causes and effects and to model higher-order and partially observable sequences, and we also tested it on natural language modelling. RSM demonstrates capabilities that previously required deep backpropagation and gated memory networks such as LSTM, while using local learning rules that are more computationally and memory efficient.
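
To clarify what “local credit assignment” means here, the sketch below shows a generic, Hebbian-style weight update: each weight is adjusted using only the activity of the two units it connects, with no backpropagated error. This is purely illustrative and is not the RSM learning rule itself.

```python
# Illustrative only: a generic local (Hebbian-style) update. Each weight changes as a
# function of its own pre- and post-synaptic activity, with no error signal propagated
# backwards through the network. NOT the RSM rule.
import numpy as np

def local_update(weights, pre, post, lr=0.01):
    """Update every weight from the activity of the two units it connects."""
    return weights + lr * np.outer(post, pre)

pre = np.random.rand(8)            # pre-synaptic activity
weights = np.random.rand(4, 8)     # 4 post-synaptic units
post = weights @ pre               # post-synaptic activity
weights = local_update(weights, pre, post)
```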

Episodic Memory (2018-2019)

Imagine trying to accomplish everyday tasks with only a memory for generic facts – without even remembering who you are or what you’ve done so far! Yet that is effectively how most AI/ML algorithms operate.

We’re developing a complementary learning system with a long-term memory akin to the neocortex and a shorter-term system analogous to the hippocampus.

The objective is to enable “episodic memory” of combinations of specific states, enhancing the learning and memory of ‘typical’ patterns (i.e. classification). This in turn enables a self-narrative, faster learning from less data, and the ability to build on existing knowledge.
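
The sketch below illustrates the intended division of labour, assuming a simple nearest-neighbour store as the fast, hippocampus-like component; the class and its interface are hypothetical, not our implementation.

```python
# Hedged sketch of a complementary learning system: a fast, instance-based episodic
# store (hippocampus-like) alongside a slow statistical learner (neocortex-like).
# The EpisodicStore class is a hypothetical illustration only.
import numpy as np

class EpisodicStore:
    """Fast one-shot memory: store specific experiences, recall by nearest neighbour."""
    def __init__(self):
        self.keys, self.values = [], []

    def write(self, key, value):
        self.keys.append(np.asarray(key, dtype=float))
        self.values.append(value)

    def recall(self, query):
        if not self.keys:
            return None
        dists = [np.linalg.norm(query - k) for k in self.keys]
        return self.values[int(np.argmin(dists))]

store = EpisodicStore()
store.write([0.1, 0.9], "saw the red ball in the kitchen")    # one specific episode
print(store.recall(np.array([0.12, 0.88])))                   # one-shot retrieval
# A slow learner (e.g. a conventional classifier) would meanwhile distil the
# statistics of many such episodes into generic knowledge.
```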

Continuous online learning of sparse representations (2017-2018)

This project was the foundation of our approach to learning representations of data, with ambitious criteria – continuous, online, unsupervised learning of sparse distributed representations, with state-of-the-art performance even on nonstationary input. We reviewed a broad range of historical techniques and experimented with some novel combinations of older competitive learning methods and modern convolutional networks. We obtained some fundamental insights into effective sparse representations and how to train them.
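
As a toy illustration of this style of learning (not our exact method), the sketch below shows a k-winners-take-all competitive layer that encodes each input as a sparse binary code and updates its weights online, one sample at a time. All sizes and the learning rule are assumptions made for the example.

```python
# Hedged sketch of continuous, online learning of a sparse distributed representation:
# a k-winners-take-all competitive layer whose winning units move towards each input.
import numpy as np

class OnlineSparseLayer:
    def __init__(self, num_inputs, num_units, k, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(num_units, num_inputs))
        self.k, self.lr = k, lr

    def step(self, x):
        """Encode one sample and update online (no batches, no labels)."""
        scores = self.W @ x
        winners = np.argsort(scores)[-self.k:]       # k most active units
        code = np.zeros(len(scores))
        code[winners] = 1.0                          # sparse binary code
        # competitive (local) update: winners move towards the input
        self.W[winners] += self.lr * (x - self.W[winners])
        return code

layer = OnlineSparseLayer(num_inputs=64, num_units=128, k=5)
code = layer.step(np.random.rand(64))
print(int(code.sum()))   # 5 active units out of 128
```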

Predictive Capsules (2018)

We believe that Capsule networks promise inherently better generalization, addressing a key weakness of conventional artificial neural networks.

We published an initial paper on unsupervised sparse Capsules earlier this year, extending the work of Sabour et al. to use only local, unsupervised training, and arguably obtaining much better generalization. We are now developing a much better understanding of Capsules and how they might be implemented by pyramidal neurons.

Since we ended this project, Kosiorek et al. have developed a better version of the same ideas called “Stacked Capsule Autoencoders”. Their results are better too!

Alternatives to Backpropagation Through Time (BPTT)

We are intensely interested in biologically plausible alternatives to backpropagation through time (BPTT). BPTT is used to associate causes and effects that are widely separated in time. The problem is that it requires unrolling the network and storing intermediate activations for every time-step up to a fixed horizon (e.g. 1,000 steps) so that gradients can be propagated backwards. Not only is this memory intensive, the finite time window is very restrictive. There is no neurological equivalent to BPTT – nature does it another way, which we hope to copy.
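
A back-of-envelope calculation makes the memory cost concrete; the layer and batch sizes below are assumptions chosen only for illustration.

```python
# Rough illustration of why BPTT is memory intensive: the activations of every
# time-step in the window must be kept for the backward pass. All sizes are assumed.
hidden_units = 1024
window_steps = 1000          # the fixed BPTT horizon mentioned above
batch_size = 32
bytes_per_float = 4          # float32

activation_bytes = hidden_units * window_steps * batch_size * bytes_per_float
print(f"{activation_bytes / 1e9:.2f} GB of stored activations per layer")  # ~0.13 GB
```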

This project led to the development of our RSM algorithm.