Researchers Would Love to Have You Contribute to This Platform or Use It for Your Research




A PyTorch Platform for Distributed RL

Thanks to advances in deep learning and GPU hardware, reinforcement learning has achieved impressive feats, with agents learning policies that tackle complex tasks. It is no wonder the field has attracted so much interest.

However, the researchers argue that the lack of well-written, high-performance, scalable implementations of distributed RL architectures has hindered the reproduction of published work. Not only that, it has restricted new developments to the few organizations with the required know-how.

Model-free reinforcement learning built on top of the IMPALA agent has achieved prominence in domains like StarCraft II and first-person shooter games. And while an implementation of the IMPALA agent built on TensorFlow has been released as open-source software, researchers who prefer PyTorch have had fewer options.

Simple, Open Source PyTorch Platform for Distributed RL

In this paper, researchers from Facebook AI, the University of Oxford, Imperial College London, and University College London describe the design principles and implementation of TorchBeast, a platform for RL research that implements the popular IMPALA agent and comes in two variants: MonoBeast and PolyBeast.

A sample of the PolyBeast agent process in Python-like pseudocode
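
The pseudocode figure itself is not reproduced here. As a rough stand-in, below is a minimal sketch of an IMPALA-style actor loop in ordinary Python; the env/policy interface and every name in it are illustrative assumptions, not TorchBeast's actual API.

```python
import torch


def actor_loop(env, policy, unroll_length=80):
    """Generate fixed-length rollouts by stepping the environment with the current policy.

    In IMPALA, the learner later corrects for the actors' slightly stale policy
    using V-trace; that correction is omitted from this sketch.
    """
    obs = env.reset()  # assumed to return a 1-D float tensor
    while True:
        observations, actions, rewards, behaviour_logits = [], [], [], []
        for _ in range(unroll_length):
            logits = policy(obs.unsqueeze(0)).squeeze(0)
            action = torch.distributions.Categorical(logits=logits).sample()
            next_obs, reward, done = env.step(action.item())
            observations.append(obs)
            actions.append(action)
            rewards.append(torch.as_tensor(reward, dtype=torch.float32))
            behaviour_logits.append(logits)
            obs = env.reset() if done else next_obs
        yield {
            "observations": torch.stack(observations),
            "actions": torch.stack(actions),
            "rewards": torch.stack(rewards),
            "behaviour_logits": torch.stack(behaviour_logits),
        }
```

In the real system, rollouts like these are shipped from actor processes to a central learner, which corrects for the actors' slightly stale policies with V-trace before updating the model.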

MonoBeast requires only Python and PyTorch. PolyBeast, on the other hand, is the multi-machine, high-performance version; it is harder to install but more powerful, as it allows cross-machine training. The main purpose of the MonoBeast variant is to be easy to install and get started with before moving on to PolyBeast.

Why is it Important?

“We believe TorchBeast provides a promising basis for reinforcement learning research without the rigidity of static frameworks or complex libraries,” say the researchers.

TorchBeast helps level the playing field by being a simple and readable PyTorch implementation of IMPALA, designed from the ground up to be easy to use, scalable, and fast.

Both versions use multiple processes to work around the technical limitations of multithreaded Python programs, notably the global interpreter lock. The bottom line: they enable researchers to conduct scalable RL research without any programming knowledge beyond Python and PyTorch.
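
For illustration only (this is not TorchBeast's code), the sketch below shows the basic pattern: several actor processes feed rollouts to a single learner through a torch.multiprocessing queue, sidestepping the global interpreter lock by using processes instead of threads. All names and tensor shapes are made up.

```python
import torch
import torch.multiprocessing as mp


def act(actor_id, rollout_queue):
    """Actor process: produce dummy rollouts forever; a real actor would step an environment."""
    while True:
        rollout = {"actor": actor_id, "observations": torch.randn(80, 4)}
        rollout_queue.put(rollout)


def learn(rollout_queue, num_batches=10):
    """Learner loop: consume rollouts; a real learner would compute V-trace targets and update the model."""
    for step in range(num_batches):
        rollout = rollout_queue.get()
        print(f"step {step}: rollout from actor {rollout['actor']}, "
              f"shape {tuple(rollout['observations'].shape)}")


if __name__ == "__main__":
    mp.set_start_method("spawn", force=True)
    queue = mp.Queue(maxsize=8)
    actors = [mp.Process(target=act, args=(i, queue), daemon=True) for i in range(4)]
    for p in actors:
        p.start()
    learn(queue)  # runs in the main process; the daemon actors exit with it
```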

They have open-sourced TorchBeast and would love for you to contribute to it or use it for your research. Interested? TorchBeast is released under the Apache 2.0 license; access it here.

Read the full paper: A PyTorch Platform for Distributed RL

Thanks for reading, please comment and share. For an update of the most recent and interesting research papers, subscribe to our weekly newsletter. You can also connect with me on Twitter, LinkedIn, and Facebook. Remember to 👏 if you enjoyed this article. Cheers!