Asynchronous Methods for Deep Reinforcement Learning

Abstract

In this paper-reading seminar, I discuss the following paper:

Volodymyr Mnih et al. (2016). Asynchronous Methods for Deep Reinforcement Learning. In Proceedings of the 33rd International Conference on Machine Learning, PMLR 48:1928–1937.

I explain the methods used in this paper in detail, together with the relevant underlying theory, present its results, and analyse its advantages and limitations. I also provide an outlook on the subsequent work that this paper enabled.

Abstract of the paper:

We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
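To make the core idea concrete, below is a minimal Python sketch (my own illustration, not code from the paper) of the lock-free asynchronous update scheme the paper builds on, in the spirit of Hogwild!: several actor-learner threads each compute a gradient from their own experience and apply it directly to a shared parameter vector without synchronization. The environment interaction and gradient computation are replaced here by a toy quadratic objective, so the names `toy_gradient` and `actor_learner` are assumptions for illustration only.

```python
import threading
import numpy as np

# Shared parameter vector, updated lock-free by all workers.
# This is a toy stand-in for the shared network weights in the paper.
shared_theta = np.zeros(8)

def toy_gradient(theta, rng):
    """Stand-in for a gradient computed from a worker's own rollout.
    Here: gradient of a quadratic pulling theta towards all-ones, plus noise."""
    target = np.ones_like(theta)
    return (theta - target) + 0.1 * rng.standard_normal(theta.shape)

def actor_learner(worker_id, steps=2000, lr=0.05):
    """One asynchronous actor-learner thread."""
    global shared_theta
    rng = np.random.default_rng(worker_id)  # each worker explores differently
    for _ in range(steps):
        grad = toy_gradient(shared_theta, rng)  # read possibly-stale parameters
        shared_theta -= lr * grad               # asynchronous, lock-free write

# Launch several parallel actor-learners, as in the paper's CPU-thread setup.
workers = [threading.Thread(target=actor_learner, args=(i,)) for i in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()

print("final parameters:", np.round(shared_theta, 3))  # approaches the all-ones target
```

In the paper, each thread additionally holds its own copy of the environment, so the decorrelated experience of the parallel workers plays the stabilizing role that experience replay plays in DQN.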

Date
11 Feb 2022 3:00 PM — 4:00 PM
Event
ACS Advanced Topics in Machine Learning (R255) Reinforcement Learning Seminar
Location
Computer Laboratory, University of Cambridge
William Gates Building, JJ Thomson Avenue, Cambridge, CB3 0FD
Xiangyu Zhao
PhD Candidate