Source: Deep Learning on Medium

# Neural Networks Training with Approximate Logarithmic Computations

Neural Network training is expensive in terms of both **computation** and **memory accesses**, being roughly three to five times more computationally expensive than inference. Together these two factors contribute significantly to the net power requirements when training a neural network on edge devices (devices at the edge of the internet, such as wearables, smartphones, and self-driving cars). To make real-time training as well as inference possible on such edge devices, reducing computation is of paramount importance. Although many solutions to this problem have been proposed, such as sparsity-, pruning- and quantization-based methods, we propose yet another: designing end-to-end training in a logarithmic number system (LNS). Note,

- For this to work, all significant Neural Network operations need to be defined in LNS.
- In LNS, multiplication reduces to addition, but addition itself becomes computationally expensive.
- Hence we resort to **Approximate Logarithmic Computations**, with the intuition that the noise tolerance of back-propagation can absorb the uncertainty introduced by our log-domain operations.
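The points above can be illustrated with a minimal sketch. This is not the paper's implementation, only an assumed-standard LNS representation (sign, log2|x|), where multiplication is an exact exponent addition, while addition uses a Mitchell-style approximation log2(1 + x) ≈ x to avoid the expensive log/antilog lookup:

```python
import math

def to_lns(x):
    # Represent x as (sign, log2|x|); zero gets a -inf exponent.
    if x == 0:
        return (1, float("-inf"))
    return (1 if x > 0 else -1, math.log2(abs(x)))

def from_lns(s, e):
    # Map (sign, exponent) back to a real number.
    return s * (2.0 ** e)

def lns_mul(a, b):
    # Multiplication in LNS reduces to adding exponents (exact).
    sa, ea = a
    sb, eb = b
    return (sa * sb, ea + eb)

def lns_add_approx(a, b):
    # Addition of same-sign operands needs hi + log2(1 + 2^-d),
    # where d = |ea - eb|; we approximate log2(1 + x) by x
    # (Mitchell's approximation), trading accuracy for cheap hardware.
    sa, ea = a
    sb, eb = b
    assert sa == sb, "sketch handles same-sign operands only"
    hi, lo = max(ea, eb), min(ea, eb)
    d = hi - lo
    return (sa, hi + 2.0 ** (-d))
```

Multiplication stays exact, while the approximate addition incurs a small log-domain error, which is the "uncertainty" that back-propagation is expected to tolerate.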

The mapping between real numbers and logarithmic numbers is given as,