Current Research

You can find our current research projects here. For the bigger picture, be sure to check out our Research Roadmap and approach to AGI.

[Image: Brainbow hippocampus]

Episodic Memory


Imagine trying to accomplish everyday tasks with only a memory for generic facts, without even remembering who you are or what you have done so far! That is the situation for most AI/ML algorithms.

We’re developing a complementary learning system with a long-term memory akin to the neocortex and a shorter-term system analogous to the hippocampus.

The objective is to enable memory of combinations of specific states (episodic memory), complementing the learning of ‘typical’ patterns such as classification (semantic memory). This in turn enables a self-narrative, faster learning from less data, and the ability to build on existing knowledge.
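The division of labour can be sketched in a few lines, assuming a toy setup in which the ‘semantic’ store keeps slowly updated class prototypes and the ‘episodic’ store keeps exact experiences. All names and structure here are illustrative, not our implementation:

```python
import math

def _dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class ComplementaryLearner:
    """Toy complementary learning system: a slow 'semantic' store of class
    prototypes (neocortex-like) alongside a fast 'episodic' store of exact
    experiences (hippocampus-like)."""

    def __init__(self):
        self.prototypes = {}   # label -> running-mean prototype (semantic)
        self.counts = {}
        self.episodes = []     # (state, label) pairs stored verbatim (episodic)

    def observe(self, state, label):
        # Episodic memory: one-shot storage of the specific combination.
        self.episodes.append((list(state), label))
        # Semantic memory: slowly blend the example into a class prototype.
        if label not in self.prototypes:
            self.prototypes[label] = list(state)
            self.counts[label] = 1
        else:
            self.counts[label] += 1
            lr = 1.0 / self.counts[label]
            p = self.prototypes[label]
            self.prototypes[label] = [pi + lr * (si - pi)
                                      for pi, si in zip(p, state)]

    def recall_episode(self, cue):
        # Pattern completion: return the stored episode nearest to the cue.
        return min(self.episodes, key=lambda ep: _dist(cue, ep[0]))

    def classify(self, state):
        # Semantic recall: nearest 'typical' pattern (prototype).
        return min(self.prototypes, key=lambda lb: _dist(state, self.prototypes[lb]))
```

The point of the split: the episodic store can answer “what exactly happened?” after a single exposure, while the semantic store only drifts slowly, so rare specifics are not overwritten by statistics of the typical.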

Roadmap: Continuous Learning


Predictive Capsules


We believe that capsule networks promise inherently better generalization, addressing a key weakness of conventional artificial neural networks.

We published an initial paper on unsupervised sparse capsules earlier this year, extending the work of Sabour et al. to allow only local, unsupervised training, and arguably obtained much better generalization. We are now developing a deeper understanding of capsules and how they might be implemented by pyramidal neurons.
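For readers unfamiliar with capsules, here is a minimal sketch of the dynamic routing-by-agreement step from Sabour et al. in plain Python. This is a didactic simplification, not the code from our paper:

```python
import math

def squash(v):
    # Capsule nonlinearity: keeps direction, maps length into [0, 1)
    # so vector length can be read as the probability an entity is present.
    norm2 = sum(x * x for x in v)
    if norm2 == 0.0:
        return [0.0 for _ in v]
    scale = norm2 / (1.0 + norm2) / math.sqrt(norm2)
    return [scale * x for x in v]

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def route(u_hat, iterations=3):
    """Dynamic routing by agreement (simplified).
    u_hat[i][j] is input capsule i's prediction vector for output capsule j."""
    n_in, n_out, dim = len(u_hat), len(u_hat[0]), len(u_hat[0][0])
    b = [[0.0] * n_out for _ in range(n_in)]          # routing logits
    for _ in range(iterations):
        c = [softmax(row) for row in b]               # coupling coefficients
        s = [[sum(c[i][j] * u_hat[i][j][k] for i in range(n_in))
              for k in range(dim)] for j in range(n_out)]
        v = [squash(sj) for sj in s]                  # output capsule poses
        for i in range(n_in):
            for j in range(n_out):
                # Predictions that agree with an output strengthen that route.
                b[i][j] += sum(p * q for p, q in zip(u_hat[i][j], v[j]))
    return v
```

Output capsules that receive agreeing predictions end up with long pose vectors; those receiving conflicting predictions stay short, which is the mechanism behind the hoped-for generalization.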

Roadmap: Representation

Continuous online learning of sparse representations


This project was the foundation of our approach to learning representations of data, with ambitious criteria: continuous, online, unsupervised learning of sparse distributed representations, resulting in state-of-the-art performance even given nonstationary input. We reviewed a broad range of historical techniques and experimented with some novel mashups of older competitive-learning methods and modern convolutional networks. We obtained some fundamental insights into effective sparse representations and how to train them.
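To give a flavour of competitive learning with sparse codes, a toy online k-winner learner might look like the following. This is an illustrative sketch with made-up parameters, not our published method:

```python
def online_sparse_coder(inputs, n_units=8, k=2, lr=0.1, boost=0.001):
    """Toy online competitive learner producing k-sparse codes.
    Each input activates its k best-matching units, and winners move
    toward the input; a usage penalty ('boosting') discourages dead
    units so the code stays distributed under nonstationary input."""
    dim = len(inputs[0])
    # Tiny deterministic symmetry-breaking init (random init is typical).
    w = [[0.01 if d == u % dim else 0.0 for d in range(dim)]
         for u in range(n_units)]
    usage = [0] * n_units
    codes = []
    for x in inputs:
        # Score units by similarity to the input, minus a usage penalty.
        scores = [sum(wi * xi for wi, xi in zip(w[u], x)) - boost * usage[u]
                  for u in range(n_units)]
        winners = sorted(range(n_units), key=lambda u: -scores[u])[:k]
        codes.append(winners)
        for u in winners:
            usage[u] += 1
            # Online Hebbian-style update: move winner weights toward the input.
            w[u] = [wi + lr * (xi - wi) for wi, xi in zip(w[u], x)]
    return w, codes
```

Because every update is local and incremental, there is no separate training phase: the code for each input is produced, and the dictionary refined, in a single forward pass.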

Roadmap: Continual learning

Sequence learning: Alternatives to Backpropagation Through Time


We are intensely interested in biologically plausible alternatives to backpropagation through time (BPTT). BPTT is used to associate causes and effects that are widely separated in time. The problem is that it requires storing partial derivatives for all synaptic weights at every time step up to a fixed horizon (e.g. 1,000 steps). Not only is this memory-intensive, but the finite time window is also very restrictive. There is no neurological equivalent of BPTT; nature does it another way, which we hope to copy.
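To make the contrast concrete, here is a sketch of one forward-in-time alternative for a single recurrent weight: an RTRL-style eligibility trace, which needs O(1) memory per step regardless of how far back the cause lies. This is a toy example of the general idea, not a claim about our approach:

```python
def train_online(xs, ys, a=0.5, lr=0.05, epochs=200):
    """Train a one-weight linear recurrent unit h_t = a*h_{t-1} + w*x_t
    online, using an eligibility trace e_t = dh_t/dw = x_t + a*e_{t-1}.
    Unlike BPTT, nothing is stored per time step: the trace carries the
    whole history's influence forward in a single number."""
    w = 0.0
    for _ in range(epochs):
        h, e = 0.0, 0.0
        for x, y in zip(xs, ys):
            e = x + a * e          # eligibility trace, updated forward in time
            h = a * h + w * x      # recurrent state
            w -= lr * (h - y) * e  # online gradient step on squared error
    return w
```

For one unit this forward-mode gradient is exact; scaling it to full networks without BPTT's cost is precisely the open problem that interests us.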

Roadmap: Sequence learning