The case for Episodic Memory in Machine Learning

ML Today

Today’s Machine Learning has demonstrated unprecedented performance in what seems like every application thrown at it.

Almost all the success has been based on advanced memory systems that learn to recognise an input from a large number of training examples. This is the equivalent of memory for facts. The biological terminology is Semantic Memory.

Semantic Memory is just one slice of what a memory system can be. It is one aspect of Explicit Memory, also referred to as Declarative Memory¹ – knowledge about people, places and things.

The other aspect of Explicit Memory is Episodic Memory. It is the memory of personal experiences, often thought of as autobiographical memory, and it allows us to remember our experience as an internal story.

I’m going to explain why I think that Episodic Memory will make Machine Learning algorithms more powerful and more general.

Adding Episodic Memory

Two commonly described aspects of Episodic Memory are Pattern Separation and Pattern Completion. Pattern Separation is the ability to remember a specific memory and distinguish it from similar ones: recognising a specific blue cup, for example, rather than merely understanding that the object is a cup and that it is blue. Pattern Completion is the ability to recall a memory when only partial cues are available. Episodic Memory is also likely important for learning from very few examples (in ML lingo, one-shot learning), and it is essential for many of the features of Continuous Learning, which we wrote about recently.
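Both ideas have simple computational analogues. As a rough sketch (our own illustration, not a model from the literature cited here): pattern completion can be shown with a Hopfield-style autoassociative memory that recovers a stored pattern from a corrupted cue, and pattern separation with a random expansion plus winner-take-all sparsification, which maps two very similar inputs to much less overlapping codes. All sizes and parameters below are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Pattern Completion: Hopfield-style autoassociative memory ---
# Store a few bipolar (+1/-1) patterns with the Hebbian
# outer-product rule, then recover one from a corrupted cue.
n = 64
patterns = rng.choice([-1, 1], size=(3, n))
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)                     # no self-connections

recalled = patterns[0].copy()
recalled[:12] *= -1                        # corrupt 12 of 64 bits (partial cue)
for _ in range(10):                        # iterate toward a stored attractor
    recalled = np.where(W @ recalled >= 0, 1, -1)

# --- Pattern Separation: expansion + sparsification ---
# Two highly similar inputs are projected into a much larger space
# and only the k strongest units are kept active; the sparse codes
# overlap far less than the inputs do.
d, m, k = 50, 1000, 50
x1 = rng.normal(size=d)
x2 = x1 + 0.3 * rng.normal(size=d)         # a very similar second input
P = rng.normal(size=(m, d))                # random expansion

def sparse_code(x):
    h = P @ x
    return set(np.argsort(h)[-k:].tolist())  # top-k winner-take-all

input_sim = float(x1 @ x2 / (np.linalg.norm(x1) * np.linalg.norm(x2)))
overlap_sparse = len(sparse_code(x1) & sparse_code(x2)) / k
```

With only three stored patterns, the corrupted cue settles back onto the original, and the sparse codes of the two near-identical inputs overlap substantially less than the inputs themselves correlate.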

Semantic memory is already used in a variety of advanced ways: transfer learning, domain adaptation, avoiding catastrophic forgetting (EWC), sequence learning with LSTMs and many more. But if you were to describe an intelligent agent that moves about the world, you would take for granted the abilities conferred by Episodic Memory.

To interact meaningfully and coherently, an agent needs to remember its own story, remember specific occurrences and objects, and learn fast from significant examples and experiences. Learning fast when appropriate, with the additional benefit of requiring less data, confers the most tangible near-term benefit to a range of existing successful AI applications.

How can it be done?

The part of the mammalian brain responsible for Episodic Memory is the Medial Temporal Lobe (MTL). It contains the Hippocampus, which is the structure most often discussed. Therefore, understanding MTL function and how it interacts with the Neocortex seems like a very good place to start.

For this reason, the Hippocampus has been of interest to us for years. About a year ago, researchers published a study [BBC article here][Science publication here] demonstrating a model of the Hippocampus and its interaction with the Cortex that deviated significantly from the standard model held for decades². That study triggered a new surge of attention from us.

Since then, we’ve been playing around with relatively abstract computational models based on short and long term memory components, analogous to Hippocampus and Cortex respectively. It’s early days, but we believe this approach holds great promise. We’re looking forward to implementing a first version and reporting on progress.
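To make the short-term/long-term split concrete, here is a minimal toy sketch, not our actual model: a fast, one-shot episodic store (Hippocampus-like) that memorises experiences verbatim and recalls by nearest neighbour, paired with a slow parametric learner (Cortex-like) that consolidates the same experiences through replay. Every class name and parameter here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

class EpisodicStore:
    """Fast memory: writes (key, value) pairs in one shot,
    recalls the value of the nearest stored key."""
    def __init__(self):
        self.keys, self.values = [], []

    def write(self, key, value):
        self.keys.append(key)
        self.values.append(value)

    def recall(self, query):
        dists = [np.linalg.norm(query - k) for k in self.keys]
        return self.values[int(np.argmin(dists))]

class SlowLearner:
    """Slow memory: a linear softmax read-out trained by many
    small gradient steps over replayed examples."""
    def __init__(self, dim, n_classes, lr=0.05):
        self.W = np.zeros((n_classes, dim))
        self.lr = lr

    def predict(self, x):
        return int(np.argmax(self.W @ x))

    def train_step(self, x, y):
        p = np.exp(self.W @ x)
        p /= p.sum()                       # softmax probabilities
        grad = np.outer(p, x)
        grad[y] -= x                       # cross-entropy gradient
        self.W -= self.lr * grad

# Two "experiences", each seen once by the episodic store.
dim, n_classes = 8, 2
episodes = [(rng.normal(size=dim), 0), (rng.normal(size=dim), 1)]
store = EpisodicStore()
for x, y in episodes:
    store.write(x, y)

# Offline replay from the store consolidates into the slow learner.
cortex = SlowLearner(dim, n_classes)
for _ in range(200):
    x, y = episodes[rng.integers(len(episodes))]
    cortex.train_step(x, y)
```

After consolidation both components answer correctly, but the episodic store could do so immediately after a single exposure; the division of labour loosely mirrors the complementary roles the post attributes to Hippocampus and Cortex.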


¹ The other Memory system described in Biology is called Implicit Memory, or non-declarative memory. Implicit Memory is regarded as unconscious memory related to habits and associations. It is not part of this discussion.

² The prevailing view was that memory is formed in the Hippocampus and then moved to Cortex for long term storage. But changes were observed in both Hippocampus and Cortex at the initial stages of memory formation.


Also published on Medium.