GOOGLE DESIGNS OPEN SOURCE FRAMEWORK THAT REDUCES AI MODEL TRAINING COSTS BY UP TO 80%

Original article can be found here (source): Artificial Intelligence on Medium

  • Google published a paper describing SEED RL, a framework it says can facilitate training at millions of frames per second on a single machine while reducing costs by up to 80%.
  • To evaluate SEED RL, the research team benchmarked it on the commonly used Arcade Learning Environment, several DeepMind Lab environments, and the Google Research Football environment.
  • The team solved a previously unsolved Google Research Football task and achieved 2.4 million frames per second with 64 Cloud TPU cores, an 80-fold improvement over the previous state-of-the-art distributed agent.

Google researchers recently published a paper describing a framework — SEED RL — that scales AI model training to thousands of machines. They say that it could facilitate training at millions of frames per second on a single machine while reducing costs by up to 80%, potentially leveling the playing field for startups that couldn’t previously compete with large AI labs.
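A central idea in the SEED RL paper is to move neural-network inference off the actor machines and onto the learner: actors only step their environments, sending observations to the learner and receiving actions back, rather than each holding a copy of the model. The toy sketch below illustrates that actor/learner split in plain Python; the function names, the stand-in policy, and the single-step environment are all illustrative assumptions, not the actual SEED RL code (which uses accelerators and a streaming gRPC interface).

```python
import queue
import random

def env_step(action):
    # Stand-in environment: returns a new observation and a reward.
    return random.random(), 1.0 if action == 1 else 0.0

def learner_policy(observation):
    # Stand-in for centralized inference on the learner's accelerator.
    return 1 if observation > 0.5 else 0

def run_episode(num_steps=100):
    """One actor loop: ship each observation to the learner, get an action back."""
    obs_to_learner = queue.Queue()    # actor -> learner (observations)
    actions_to_actor = queue.Queue()  # learner -> actor (actions)

    observation, total_reward = 0.0, 0.0
    for _ in range(num_steps):
        # Actor side: send the observation instead of running the model locally.
        obs_to_learner.put(observation)
        # Learner side: run inference (a batch of one here, for simplicity).
        actions_to_actor.put(learner_policy(obs_to_learner.get()))
        # Actor side: apply the learner's action in the environment.
        observation, reward = env_step(actions_to_actor.get())
        total_reward += reward
    return total_reward
```

Because the model lives only on the learner, actors can be cheap CPU-only machines, which is one way the paper's reported cost reductions become possible.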

Training sophisticated machine learning models in the cloud remains prohibitively expensive. According to a recent Synced report, the University of Washington’s Grover, which is tailored for both the generation and detection of fake news, cost $25,000 to train over the course of two weeks. OpenAI racked up $256 per hour to train its GPT-2 language model, and Google spent an estimated $6,912 training BERT, a bidirectional transformer model that redefined the state of the art for 11 natural language processing tasks.