The Future of Computation for Machine Learning and Data Science

Source: Deep Learning on Medium

The Trends

Deep Learning

Deep learning has become ubiquitous in the modern world, with wide-ranging applications in nearly every field. As might be expected, people have taken notice, and the hype around deep learning continues to grow as businesses adopt it. Deep learning consists of neural networks with multiple hidden layers, and it places particularly heavy demands on computational resources:

  • On the order of a billion parameters to train.
  • Computational intensity that grows with network depth (see the sketch below).
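
To make these numbers concrete, here is a minimal sketch in plain Python of how the parameter count of a fully connected network grows with depth (the layer widths are arbitrary, chosen only for illustration):

    def mlp_param_count(layer_sizes):
        """Count weights + biases for a fully connected network."""
        total = 0
        for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
            total += n_in * n_out + n_out  # weight matrix + bias vector
        return total

    # Each extra hidden layer of width 4096 adds roughly 16.8M parameters,
    # so billion-parameter models arrive quickly at scale.
    for depth in (2, 8, 32):
        print(depth, mlp_param_count([4096] * depth + [1000]))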

Deep learning has been hugely successful for some important tasks in speech recognition, computer vision, and text understanding.

(Figure: the apparent success of deep learning in the field of speech recognition.)

The major downside of deep learning is its computational intensity, which demands high-performance hardware and long training times. For tasks such as facial recognition and image reconstruction, these limits often mean working with low-resolution images. How will researchers tackle these big-compute problems? And what is the bottleneck of this approach in the first place?
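
As a back-of-the-envelope illustration of why low-resolution inputs are attractive, consider that the multiply-accumulate (MAC) count of a single convolutional layer grows with the number of pixels. The sketch below, with made-up layer sizes, shows the effect in plain Python:

    def conv_macs(h, w, c_in, c_out, k=3):
        """Rough multiply-accumulates for one stride-1, same-padded conv layer."""
        return h * w * c_in * c_out * k * k

    # Going from 64x64 to 1024x1024 inputs multiplies the cost by 256.
    for side in (64, 256, 1024):
        print(f"{side}x{side}: {conv_macs(side, side, 3, 64):.3e} MACs")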

Internet of Things (IoT)

Some 20 billion embedded devices are expected to be active within the next several years: smart fridges, phones, thermostats, air quality sensors, locks, dog collars; the list is impressively long. Most of these devices will be battery powered and severely resource-constrained in memory and computing power. In spite of these limitations, they need to perform intelligent tasks and be easy to use.

The expected prevalence of IoT devices comes with major concerns, particularly regarding their security (perhaps the criminals of the future will be hackers who can open the smart locks on my doors). It also promises a flood of data for deep learning, which is already data-hungry; large-scale networks of devices and sensors open new avenues of research, especially in my own field of environmental science, which is seeing an increasing push towards low-cost, large-scale sensor networks for monitoring atmospheric pollutants.

This idea also extends to smartphones, which are becoming increasingly powerful and now rival some older laptops in computational ability. Machine learning is increasingly performed directly on these devices. Even so, they remain limited in computational resources, so techniques that reduce computational and data overhead are ever more important on them; one such technique is sketched below.
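
A common overhead-reducing trick is weight quantization: storing 32-bit floating-point weights as 8-bit integers plus a scale factor, shrinking the model fourfold. The sketch below uses only NumPy, with a random matrix standing in for trained weights; production systems rely on frameworks such as TensorFlow Lite rather than hand-rolled code like this:

    import numpy as np

    def quantize_int8(w):
        """Map float32 weights to int8 with a single per-tensor scale."""
        scale = np.abs(w).max() / 127.0
        return np.round(w / scale).astype(np.int8), scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    w = np.random.randn(256, 256).astype(np.float32)
    q, scale = quantize_int8(w)
    # 4x smaller storage, at the cost of a small reconstruction error.
    print("max error:", np.abs(w - dequantize(q, scale)).max())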

This idea also extends to self-driving cars, which are beginning to be commercialized after a long period of research and development. Self-driving cars present issues of their own but still suffer from the same concerns, the most publicized of which are the security and ethical underpinnings of the vehicles.

More data means more data science (woohoo for the data scientists!), but it also means needing to store and process this data and to transmit it over wireless networks (securely, I might add). How will the problems associated with IoT be tackled? And whose responsibility is it if something goes wrong?

Cloud-Based Computing

There is an increasing push by companies towards cloud computing as a way to outsource their computational needs and minimize costs.

This also aids companies (especially startups) by turning the capital costs of purchasing infrastructure into operating costs. With the growing popularity of machine learning and data science, both computationally intensive and dependent on high-performance systems, and with the increasing online presence of most corporations, this trend towards cloud-based distributed computing is not expected to slow down anytime soon.

Scaling Computation Size

Scaling up the size of a computation with only a modest increase in energy consumption is a difficult task. We need such balanced scaling for deep neural networks to keep their energy overhead manageable.

The current solution to this is the parallel use of many small, energy-efficient compute nodes to accommodate large computations.
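
As a toy illustration of that idea, the sketch below splits one large dot product across a pool of worker processes, each process standing in for a modest compute node, and then combines the partial results:

    from multiprocessing import Pool

    import numpy as np

    def partial_dot(chunk):
        """Dot product over one slice of the full problem."""
        a_part, b_part = chunk
        return float(np.dot(a_part, b_part))

    if __name__ == "__main__":
        a = np.random.randn(1_000_000)
        b = np.random.randn(1_000_000)
        # Four strided slices, one per "node"; the partial dot products
        # sum exactly to the full dot product.
        work = [(a[i::4], b[i::4]) for i in range(4)]
        with Pool(4) as pool:
            print(sum(pool.map(partial_dot, work)), "vs", float(np.dot(a, b)))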

Reducing Energy Usage

Scaling down energy usage with only a modest decrease in outcome quality (e.g., prediction accuracy) is an increasingly important task. We need such balanced energy scaling for IoT devices and wearables. These considerations relate to a young field called approximate computing, which trades off result quality against the effort expended; a tiny example follows.
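
As a minimal sketch of that quality-versus-effort dial, the snippet below estimates a mean from a random fraction of the data instead of scanning all of it (synthetic data, standard library only):

    import random

    def approx_mean(data, effort=0.1):
        """Estimate the mean from a random `effort` fraction of the data."""
        k = max(1, int(len(data) * effort))
        return sum(random.sample(data, k)) / k

    data = [random.gauss(5.0, 2.0) for _ in range(100_000)]
    # Lower effort means a cheaper but noisier estimate of the true mean.
    for effort in (0.01, 0.1, 1.0):
        print(f"effort {effort}: mean ~ {approx_mean(data, effort):.3f}")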

The current solution to this is the coordinated use of multiple small, energy-efficient compute nodes to support intelligent computations.

More Data

The data trend is clearly not going to stop any time soon! That discussion, however, deserves a section of its own.