Adaptive optimization methods, such as Adam and Adagrad, maintain running statistics of the gradients over time (e.g. first and second moments) that modulate the effective learning rate for each variable.
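As a rough sketch of the idea (not the post's actual code, and with hypothetical function and variable names), a single Adam update in pure Python looks like this; the per-parameter moment estimates are the statistics that scale the step size:

```python
import math

def adam_step(params, grads, m, v, t,
              lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update over a list of scalar parameters.

    m and v are running estimates of the first and second moments of the
    gradients; the bias correction compensates for their zero initialisation.
    """
    t += 1
    new_params = []
    for i, (p, g) in enumerate(zip(params, grads)):
        m[i] = beta1 * m[i] + (1 - beta1) * g       # first moment (gradient mean)
        v[i] = beta2 * v[i] + (1 - beta2) * g * g   # second moment (uncentred variance)
        m_hat = m[i] / (1 - beta1 ** t)             # bias-corrected estimates
        v_hat = v[i] / (1 - beta2 ** t)
        # Each parameter gets its own effective learning rate via v_hat.
        new_params.append(p - lr * m_hat / (math.sqrt(v_hat) + eps))
    return new_params, m, v, t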
Research Engineer at Project AGI
It’s such a joy to be able to test an idea directly, without first wrestling with the tools. We recently developed an experimental setup which, so far, looks like it will do just that. I’m excited about it and hope it can help you too, so here it is. We’ll go through why we created yet another framework, and how each module in the experimental setup works.
Eager Execution is an imperative, object-oriented and more Pythonic way of using TensorFlow: operations are evaluated immediately and return concrete values, instead of building a computational graph that is executed later. This makes it a flexible platform for machine learning research and experimentation.
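TensorFlow itself isn't needed to see the contrast. Below is a hypothetical pure-Python sketch (not TensorFlow code) of the two styles: a deferred graph where nodes are built first and evaluated later, versus eager evaluation where every expression produces a concrete value immediately:

```python
# Graph-style (deferred): build a graph of nodes first, run it later.
class Node:
    def __init__(self, fn, *inputs):
        self.fn, self.inputs = fn, inputs

    def run(self):
        # Recursively evaluate input nodes; plain values pass through.
        args = (i.run() if isinstance(i, Node) else i for i in self.inputs)
        return self.fn(*args)

# Building the graph computes nothing yet.
graph = Node(lambda a, b: a + b, Node(lambda x: x * x, 3), 4)
result_deferred = graph.run()   # evaluation happens only here: 3*3 + 4

# Eager-style: each expression is evaluated immediately.
square = 3 * 3                  # concrete value right away
result_eager = square + 4       # same computation, no graph object
```

In the eager style you can inspect `square` with an ordinary debugger or `print`, which is exactly the workflow advantage described above.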
There are plenty of established machine learning frameworks out there, and new frameworks pop up frequently to address specific niches. We were interested in examining whether one of these frameworks fits our workflow. I surveyed the most popular frameworks, and aim to provide a helpful comparative analysis.
SVHN is a relatively new and popular dataset, a natural next step from MNIST and a complement to other popular computer vision datasets. This is an overview of the common preprocessing techniques used and the best performance benchmarks, as well as a look at the state-of-the-art neural network architectures used.
We are releasing a set of tools for converting the Street View House Numbers (SVHN) dataset into images, with additional preprocessing options such as grayscale conversion.
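Grayscale conversion is typically a weighted sum of the RGB channels. The exact weights the tools use are not stated here, but a minimal sketch using the common ITU-R BT.601 luma weights looks like this:

```python
def to_grayscale(rgb_image):
    """Convert an image given as rows of (r, g, b) tuples (values 0-255)
    into rows of single luma values, using ITU-R BT.601 weights.

    The weights reflect the eye's differing sensitivity to each channel:
    green contributes most, blue least.
    """
    return [
        [int(round(0.299 * r + 0.587 * g + 0.114 * b)) for (r, g, b) in row]
        for row in rgb_image
    ]
```

For example, a pure-white pixel maps to 255 and a pure-red pixel to 76, since only 29.9% of the red channel contributes to luma.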
This article assesses the research paper ‘A Distributional Perspective on Reinforcement Learning’ by Marc G. Bellemare, Will Dabney and Remi Munos, published in…