Over the last few years, there have been several breakthroughs and exciting new research directions in Reinforcement Learning, Hippocampus-Inspired Architectures, Attention, and Few-Shot Learning. There has been a move towards multi-component, heterogeneous, stateful architectures, many guided by ideas from the cognitive sciences. Google DeepMind and Google Brain are leading the way, making progress on several fronts and teaming up with biologists.
We mapped many of the significant examples visually to show the big picture: a timeline (left to right) organised by research area (top to bottom). The table shows foundational papers alongside subsequent improvements. We found it helpful, and we hope you do too. We also invite you to contribute, or to extend your own copy of the spreadsheet.
State-of-the-art ML/AI has received high-profile and well-founded criticism for being a form of curve fitting (Jordan, Pearl, Marcus): powerful at many tasks, but ultimately limited as a basis for general-purpose interactive systems that can meaningfully reason. The approaches mapped here, which buck that trend, are refreshing and a great cause for optimism.
Also published on Medium.