Original article was published by akira on Deep Learning on Medium
Akira’s ML news #Week 40, 2020
Here are some of the papers and articles I found particularly interesting in week 40 of 2020 (starting 27 September). I've tried to cover the most recent work possible, but a paper's submission date may not fall within this week.
- Machine Learning Papers
- Technical Articles
- Examples of Machine Learning use cases
- Other topics
— — — — — — — — — — — — — — — — — — — — — — — — — — — — — —
1. Machine Learning Papers
Alternative Parameter Update Method to Back Propagation
Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures
A study of Direct Feedback Alignment, which allows parameter updates to be parallelized. They apply it to a variety of tasks, such as Transformer models, graph convolutions, and recommendation, and confirm that the results are not bad.
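To make the parallelization point concrete, here is a minimal numpy sketch of Direct Feedback Alignment on a toy two-layer network (my own illustration, not the paper's large-scale setup): the output error is fed back to the hidden layer through a fixed random matrix B rather than the transposed forward weights, so the hidden-layer update no longer depends on the output layer's weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer regression network trained with Direct Feedback Alignment
# (DFA): the output error is projected to the hidden layer through a FIXED
# random matrix B instead of the transposed forward weights W2.T, so the
# hidden-layer update does not depend on W2 and layer updates can run in
# parallel.
d_in, d_hid, d_out = 8, 16, 4
W1 = rng.normal(0, 0.1, (d_hid, d_in))
W2 = rng.normal(0, 0.1, (d_out, d_hid))
B = rng.normal(0, 0.1, (d_hid, d_out))   # fixed random feedback matrix

x = rng.normal(size=d_in)
y = rng.normal(size=d_out)
lr = 0.05

loss_before = 0.5 * np.sum((W2 @ np.tanh(W1 @ x) - y) ** 2)

for _ in range(300):
    a1 = W1 @ x                                 # hidden pre-activation
    h1 = np.tanh(a1)
    e = W2 @ h1 - y                             # output error
    # DFA: hidden error uses B @ e, not W2.T @ e
    delta1 = (B @ e) * (1.0 - np.tanh(a1) ** 2)
    W2 -= lr * np.outer(e, h1)
    W1 -= lr * np.outer(delta1, x)

loss_after = 0.5 * np.sum((W2 @ np.tanh(W1 @ x) - y) ** 2)
```

On this toy problem the loss still decreases even though the feedback matrix is random; the paper's contribution is showing this scales to modern architectures.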
Identifying and addressing the cause of DARTS's accuracy degradation
Theory-Inspired Path-Regularized Differential Network Architecture Search
DARTS, one of the neural architecture search methods, often degrades accuracy because it tends to select skip connections. They proved theoretically that this is because skip connections converge faster than other operations such as convolutions. Furthermore, they proposed PR-DARTS, which prevents this, and confirmed that PR-DARTS improves accuracy.
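The bias toward skip connections arises in DARTS's continuous relaxation, where each edge's output is a softmax-weighted mix of candidate operations. A minimal numpy sketch of such a mixed edge (my own 1-D illustration; real DARTS uses 2-D convolutions and learns alpha by gradient descent):

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

# Candidate operations on a 1-D signal: parameter-free skip (identity),
# a 3-tap convolution, and a 3-tap average pool.
def op_skip(x):
    return x

def op_conv(x, kernel):
    return np.convolve(x, kernel, mode="same")

def op_avgpool(x):
    return np.convolve(x, np.ones(3) / 3.0, mode="same")

# DARTS relaxation: the edge output is a softmax(alpha)-weighted sum over
# all candidate ops; alpha is learned jointly with the op weights. Because
# skip is parameter-free, its path tends to converge faster and dominate
# alpha -- the bias PR-DARTS's path regularizer is designed to suppress.
alpha = np.zeros(3)                       # architecture parameters
kernel = np.array([0.2, 0.6, 0.2])        # stand-in conv weights
x = np.arange(8, dtype=float)

w = softmax(alpha)
mixed = w[0] * op_skip(x) + w[1] * op_conv(x, kernel) + w[2] * op_avgpool(x)
```

After search, the discrete architecture is read off by keeping the op with the largest alpha on each edge, which is exactly where an inflated skip weight causes the degradation.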
Curriculum learning for data imbalance and difficulty
EGDCL: An Adaptive Curriculum Learning Framework for Unbiased Glaucoma Diagnosis
Glaucoma diagnosis has been a difficult task due to the presence of hard samples and data imbalance. They proposed EGDCL, a curriculum-learning method that learns through a network generating sample difficulty, label rarity, and feature-map importance, with results surpassing previous research.
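As a rough illustration of the idea (my own closed-form sketch; in EGDCL the weighting is itself learned by a network), a curriculum weight can combine an easiness term that phases hard samples in as training progresses with a rarity term that up-weights minority classes:

```python
import numpy as np

# Hypothetical curriculum weighting in the spirit of EGDCL: each sample's
# training weight combines (a) an easiness term that grows with training
# progress t in [0, 1], so high-loss (hard) samples are phased in
# gradually, and (b) a rarity term that up-weights minority-class samples.
def curriculum_weights(losses, labels, t):
    easiness = np.exp(-(1.0 - t) * losses)   # hard samples damped early
    class_counts = np.bincount(labels)
    rarity = 1.0 / class_counts[labels]      # rare classes weighted up
    w = easiness * rarity
    return w / w.sum()                       # normalize to a distribution

losses = np.array([0.1, 2.0, 0.3, 1.5])      # per-sample training losses
labels = np.array([0, 0, 0, 1])              # class 1 is rare
w_early = curriculum_weights(losses, labels, t=0.0)  # start of training
w_late = curriculum_weights(losses, labels, t=1.0)   # end of training
```

Early on, the hard sample (loss 2.0) is strongly down-weighted; by the end, only the rarity term differentiates samples.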
Improving the accuracy of physics simulations by incorporating physical constraints into the model
Kohn-Sham equations as regularizer: building prior knowledge into machine-learned physics
In the problem of approximating density-functional-theory simulations with neural networks, physical constraints can be imposed on ML models by treating the Kohn-Sham equations as a differentiable model. This greatly improves the accuracy of exchange-correlation term calculations.
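The general pattern of "physics as regularizer" can be sketched without the actual Kohn-Sham machinery (this toy example is my own, using a harmonic-oscillator residual y'' + y = 0 in place of the paper's equations): the model is penalized both for data error and for violating the known physical relation, evaluated differentiably.

```python
import numpy as np

# Generic physics-as-regularizer sketch (NOT the actual Kohn-Sham solver):
# a prediction is penalized for data error and for violating a known
# physical relation, here the toy residual r(y) = y'' + y of a harmonic
# oscillator, evaluated with central finite differences.
def physics_residual(y, dx):
    y_xx = (y[2:] - 2.0 * y[1:-1] + y[:-2]) / dx**2  # second derivative
    return y_xx + y[1:-1]                            # ~0 for sin/cos

x = np.linspace(0.0, np.pi, 50)
dx = x[1] - x[0]
y_pred = np.sin(x)            # stand-in model output (happens to be exact)
y_true = np.sin(x)

data_loss = np.mean((y_pred - y_true) ** 2)
phys_loss = np.mean(physics_residual(y_pred, dx) ** 2)
total_loss = data_loss + 0.1 * phys_loss   # weighted combination
```

Because the residual is differentiable in y_pred, its gradient can flow back into the network during training, which is what lets the physical constraint shape the learned model.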
Image classification based on shape, not texture
The Origins and Prevalence of Texture Bias in Convolutional Neural Networks
Although ImageNet-trained CNN models have been known to depend more on texture than on shape, they showed that data augmentation methods such as color distortion and blur can make the model classify objects based on shape.
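Both augmentations destroy texture cues while leaving object shape largely intact. A minimal numpy sketch of this family of augmentations (my own implementation, not the paper's exact pipeline), operating on an HxWx3 float image in [0, 1]:

```python
import numpy as np

rng = np.random.default_rng(0)

def color_distort(img, rng, strength=0.5):
    # Crude color jitter: random per-channel scale and shift, then clip.
    scale = 1.0 + rng.uniform(-strength, strength, size=3)
    shift = rng.uniform(-strength, strength, size=3)
    return np.clip(img * scale + shift, 0.0, 1.0)

def gaussian_blur(img, sigma=1.0, radius=2):
    # Separable Gaussian blur applied along height, then width.
    xs = np.arange(-radius, radius + 1)
    k = np.exp(-xs**2 / (2.0 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, img)
    out = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, out)
    return out

img = rng.random((16, 16, 3))                    # stand-in input image
augmented = gaussian_blur(color_distort(img, rng))
```

Training on such images removes the shortcut of classifying by local texture statistics, nudging the network toward shape-based decisions.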
Countermeasures for tasks with different train/test distributions
Adversarial Validation Approach to Concept Drift Problem in User Targeting Automation Systems at Uber
For tasks where the train and test data distributions differ, they respond with adversarial feature selection: a classifier is trained to discriminate between train and test, and the classifier's most important features are removed until its score becomes random, thereby matching the distributions. This works better than validating on data close to the test set or weighting training samples according to the test distribution.
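The loop can be sketched end to end in plain numpy (my own illustration; in practice a gradient-boosted model and AUC are typically used rather than this hand-rolled logistic regression and accuracy):

```python
import numpy as np

rng = np.random.default_rng(0)

# Adversarial validation sketch: label train rows 0 and test rows 1, fit a
# discriminator, and drop its most important feature so it can no longer
# tell the two sets apart.
def fit_logreg(X, y, lr=0.5, steps=1000):
    Xb = np.column_stack([X, np.ones(len(X))])   # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)        # gradient of log-loss
    return w

def accuracy(X, y, w):
    Xb = np.column_stack([X, np.ones(len(X))])
    return np.mean(((Xb @ w) > 0) == y)

# Synthetic drift: feature 0 shifts between train and test, feature 1
# has the same distribution in both.
X_train = np.column_stack([rng.normal(0, 1, 200), rng.normal(0, 1, 200)])
X_test = np.column_stack([rng.normal(3, 1, 200), rng.normal(0, 1, 200)])
X = np.vstack([X_train, X_test])
y = np.r_[np.zeros(200), np.ones(200)]           # 0 = train, 1 = test

w = fit_logreg(X, y)
acc_before = accuracy(X, y, w)                   # high: sets separable

drop = int(np.argmax(np.abs(w[:-1])))            # most important feature
X_reduced = np.delete(X, drop, axis=1)
w2 = fit_logreg(X_reduced, y)
acc_after = accuracy(X_reduced, y, w2)           # near chance after drop
```

Dropping the drifting feature collapses the discriminator to roughly chance accuracy, which is the stopping criterion described above; the remaining features can then be used for the actual task.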