Original article was published by /u/Yuqing7 on Deep Learning
While researchers have explored learning convolution-like structures from scratch, they face a dilemma: the inductive bias that gives rise to convolutions is still poorly understood. How can inductive bias be reduced without hurting model efficiency? Is it possible to retain only the core bias and still deliver high performance? Google Senior Research Scientist Behnam Neyshabur recently offered his insights on these questions in the paper Towards Learning Convolutions from Scratch.
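To make the notion of convolution's inductive bias concrete, the sketch below (an illustration, not code from the paper) shows that a 1-D convolution is just a fully connected layer whose weight matrix is constrained to be banded (locality) with the same weights repeated along each band (weight sharing). Removing those two constraints recovers an unconstrained dense layer, which is the direction "learning convolutions from scratch" works against.

```python
import numpy as np

def conv1d(x, kernel):
    """Valid 1-D cross-correlation (the 'convolution' used in deep learning)."""
    k = len(kernel)
    return np.array([np.dot(kernel, x[i:i + k]) for i in range(len(x) - k + 1)])

def conv_as_dense(x, kernel):
    """The same operation as multiplication by a structured dense matrix:
    each row holds the shared kernel, shifted by one position (a banded,
    weight-shared matrix). An unconstrained W would be an ordinary
    fully connected layer."""
    k, n = len(kernel), len(x)
    W = np.zeros((n - k + 1, n))
    for i in range(n - k + 1):
        W[i, i:i + k] = kernel  # locality: only k nonzeros; sharing: same kernel per row
    return W @ x

x = np.arange(6.0)
kernel = np.array([1.0, -2.0, 1.0])
# Both formulations produce identical outputs.
assert np.allclose(conv1d(x, kernel), conv_as_dense(x, kernel))
```

The dense-matrix view makes the bias explicit: of the full weight matrix, only a small shared set of parameters is free, which is exactly the structure a from-scratch learner would have to discover on its own.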