[R] MIT Researcher Neil Thompson on Deep Learning’s Insatiable Compute Demands and Possible Solutions

Original article was published by /u/Yuqing7 on Deep Learning

As the size of deep learning models continues to increase, so does their appetite for compute. And that has Neil Thompson, a research scientist with MIT’s Computer Science and Artificial Intelligence Lab (CSAIL), concerned.

“The growth in computing power needed for deep learning models is quickly becoming unsustainable,” Thompson recently told Synced. Thompson is first author of the paper The Computational Limits of Deep Learning, which analyzes 1,058 research papers spanning years of results in domains such as image classification, object detection, question answering, named-entity recognition, and machine translation. The paper argues that deep learning is computationally expensive not by accident but by design, and that these ever-growing computational costs have been central to its performance improvements.
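The paper's core empirical move is relating benchmark performance to the compute spent achieving it, then extrapolating how much more compute further gains would demand. Below is a minimal sketch of that style of analysis in Python. The data points, the power-law functional form, and the variable names are illustrative assumptions, not figures or methodology taken from the paper itself.

```python
# A minimal sketch of a compute-vs-performance scaling extrapolation.
# NOTE: the (error, compute) pairs below are hypothetical placeholders,
# not data from "The Computational Limits of Deep Learning".
import numpy as np

# Hypothetical error rates and the training compute (FLOPs) that achieved them.
error = np.array([0.30, 0.25, 0.21, 0.18, 0.16])
compute = np.array([1e15, 1e16, 1e17, 1e18, 1e19])

# Fit log10(compute) = a + b * log10(1/error), i.e. a power-law relationship.
x = np.log10(1.0 / error)
y = np.log10(compute)
b, a = np.polyfit(x, y, 1)  # polyfit returns (slope, intercept) for degree 1

# Extrapolate: how much compute would halving the best observed error take?
target_error = error.min() / 2
needed = 10 ** (a + b * np.log10(1.0 / target_error))
print(f"Estimated compute to reach {target_error:.3f} error: {needed:.2e} FLOPs")
print(f"That is roughly {needed / compute.max():.0f}x the largest observed budget")
```

Even with made-up numbers, the sketch shows why the trend worries Thompson: when performance scales polynomially with compute, each incremental improvement requires a multiplicatively larger training budget.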

Here is a quick read: MIT Researcher Neil Thompson on Deep Learning’s Insatiable Compute Demands and Possible Solutions

The paper The Computational Limits of Deep Learning is on arXiv.
