[R] Google Proposes Lasso Algorithm Variant for Learning Convolutions: ‘Bridging the Gap Between Fully-Connected and Convolutional Nets’

Original article was published by /u/Yuqing7 on Deep Learning

Researchers have explored learning convolution-like structures from scratch, but progress is hampered by a limited understanding of the inductive bias that gives rise to convolutions. How can inductive bias be reduced while making sure this won't hurt model efficiency? Is it possible to keep only the core bias and still deliver high performance? Google Senior Research Scientist Behnam Neyshabur recently offered his insights on these questions in the paper Towards Learning Convolutions from Scratch.
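For readers unfamiliar with the Lasso family of methods the title refers to, the sketch below illustrates the general mechanism: training a fully-connected layer with SGD plus an elementwise soft-thresholding (proximal l1) step, which drives most weights exactly to zero and leaves a sparse, more local connectivity pattern. This is a minimal illustration of that idea only, not the paper's exact algorithm; the hyperparameter values (`lam`, `beta`), the toy data, and the network shape are assumptions made for the example.

```python
# Minimal sketch: SGD on a fully-connected net followed by an elementwise
# soft-thresholding step (proximal form of Lasso-style l1 regularization).
# Values and architecture are illustrative, not taken from the paper.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: flattened "images" and random labels, for illustration only.
x = torch.randn(256, 784)
y = torch.randint(0, 10, (256,))

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

lam = 1e-4   # l1 strength (assumed value)
beta = 50.0  # extra multiplier on the threshold (assumed value)

def soft_threshold_(w, thresh):
    """In-place soft-thresholding: shrink weights toward zero and clip small ones."""
    w.copy_(torch.sign(w) * torch.clamp(w.abs() - thresh, min=0.0))

for step in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    # Proximal step on the first layer's weights: most connections are set
    # exactly to zero, yielding a sparse, convolution-like connectivity.
    with torch.no_grad():
        soft_threshold_(model[0].weight, beta * lam * opt.param_groups[0]["lr"])

print("nonzero fraction in first layer:",
      (model[0].weight != 0).float().mean().item())
```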

Here is a quick read: Google Proposes Lasso Algorithm Variant for Learning Convolutions: ‘Bridging the Gap Between Fully-Connected and Convolutional Nets’
