Akira’s ML News #Week42, 2020

Original article was published by akira on Deep Learning on Medium

Here are some of the papers and articles I found particularly interesting in week 42 of 2020 (October 11~). I’ve tried to cover the most recent work where possible, but a paper’s submission date may not fall within this week.

  1. Machine Learning Papers
  2. Technical Articles
  3. Examples of Machine Learning use cases
  4. Other topics

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

1. Machine Learning Papers

— —

Generating minority data with GANs

Inclusive GAN: Improving Data and Minority Coverage in Generative Models
https://arxiv.org/abs/2004.03355

GANs tend to generate only samples from majority categories. To address this, the authors propose Inclusive GAN, which improves minority coverage by sampling latent variables that correspond to minority samples and constraining the generator to successfully generate images from those latent variables.

Lottery tickets hypothesis in GANs

GANS CAN PLAY LOTTERY TICKETS TOO
https://openreview.net/forum?id=1AoMhc_9jER

The lottery ticket hypothesis applied to GANs: winning sub-networks exist in GANs and can be compressed while retaining good accuracy. The initialization of the discriminator was found to be important, even though there is little need to compress the discriminator itself, unlike the generator.
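As a rough sketch of the lottery-ticket procedure this line of work builds on, here is one-shot magnitude pruning with weight rewinding in NumPy. This is my own illustrative version, not the paper's implementation, and the function and variable names are my own:

```python
import numpy as np

def lottery_ticket_reset(w_init, w_trained, sparsity=0.8):
    """One-shot magnitude pruning with weight rewinding.

    Keep the (1 - sparsity) fraction of weights with the largest
    trained magnitude, and rewind the survivors to their initial
    values -- the "winning ticket" initialization.
    """
    threshold = np.quantile(np.abs(w_trained), sparsity)
    mask = (np.abs(w_trained) > threshold).astype(w_init.dtype)
    return w_init * mask, mask

rng = np.random.default_rng(0)
w0 = rng.normal(size=(256, 256))        # initialization
wt = w0 + rng.normal(size=(256, 256))   # stand-in for "trained" weights
ticket, mask = lottery_ticket_reset(w0, wt, sparsity=0.8)
print(mask.mean())  # roughly 0.2 of weights survive
```

In the full procedure this prune-and-rewind step is iterated, retraining the masked network from the rewound weights each round.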

Domain adaptation with Self-supervised learning

Inference Stage Optimization for Cross-scenario 3D Human Pose Estimation

https://arxiv.org/abs/2007.02054

A study of domain adaptation that performs self-supervised learning on the target domain before inference, for cases where inference happens in a domain different from training. It uses two techniques, adversarial learning on predicted poses and cycle consistency of 3D⇆2D pose transformations, to update the parameters for the new domain.

Representation learning from routine medical data

CONTRASTIVE LEARNING OF MEDICAL VISUAL REPRESENTATIONS FROM PAIRED IMAGES AND TEXT
https://arxiv.org/abs/2010.00747

This is research on representation learning over paired medical images and text, data that is produced as part of routine medical work, using contrastive learning. The method pulls the embeddings of a paired text and image closer together; the resulting representations are more useful than ImageNet-pretrained models and greatly improve the accuracy of image retrieval.
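The core contrastive objective can be sketched as a symmetric InfoNCE loss between paired image and text embeddings. This is a minimal NumPy illustration of the idea, not the paper's implementation, and the names are my own:

```python
import numpy as np

def info_nce(img_emb, txt_emb, temperature=0.1):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    Row i of each matrix forms a positive pair; every other row
    in the batch serves as a negative.
    """
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # cosine-similarity logits
    idx = np.arange(len(img))

    def xent(lg):  # cross-entropy with the diagonal as the target
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # average the image-to-text and text-to-image directions
    return 0.5 * (xent(logits) + xent(logits.T))

emb = np.eye(8)
aligned = info_nce(emb, emb)                         # matched pairs: low loss
mismatched = info_nce(emb, np.roll(emb, 1, axis=0))  # shuffled pairs: high loss
print(aligned < mismatched)  # prints True
```

Minimizing this loss is what pulls each paired text and image embedding together while pushing apart the unpaired combinations in the batch.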

Why do sparse networks fail to learn without lottery ticket initialization?

Gradient Flow in Sparse Neural Networks and How Lottery Tickets Win
https://arxiv.org/abs/2010.03533

A study examining where the accuracy difference between lottery ticket initialization and random initialization comes from in sparse NNs. Randomly initialized sparse networks are less accurate than dense networks because of degraded gradient flow (small gradient values tend to vanish). Lottery ticket initialization does not improve the gradient flow, but trains well because of the bias of the pruned network. RigL, which learns sparse connectivity dynamically, improves the gradient flow and can therefore achieve good accuracy from random initial values.
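To illustrate the gradient-flow problem with a toy construction of my own (not an experiment from the paper): backpropagating a gradient through randomly pruned linear layers passes far less signal than through dense ones.

```python
import numpy as np

def grad_norm_after_backprop(masks, width=256, seed=0):
    """Backpropagate a unit upstream gradient through masked linear
    layers (linear activations, for simplicity) and return the norm
    of the gradient reaching the input -- a crude proxy for
    gradient flow through the network.
    """
    rng = np.random.default_rng(seed)
    g = np.ones(width)
    for m in masks:
        W = rng.normal(size=(width, width)) / np.sqrt(width)
        g = (W * m).T @ g  # apply the pruning mask, then backprop
    return np.linalg.norm(g)

rng = np.random.default_rng(1)
dense_masks = [np.ones((256, 256)) for _ in range(5)]
sparse_masks = [(rng.random((256, 256)) < 0.1).astype(float) for _ in range(5)]
# the randomly pruned network passes far less gradient through
print(grad_norm_after_backprop(dense_masks) > grad_norm_after_backprop(sparse_masks))  # prints True
```

Each 10%-density mask shrinks the effective gradient at every layer, so the signal reaching early layers decays geometrically with depth, which matches the vanishing-gradient intuition in the summary above.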

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

2. Technical Articles

— — — —

Operational challenges in credit scoring

A thread discussing the operational challenges of credit scoring (a technique for quantifying a person's ability to borrow and repay money). The discussion covers excluding gender and age as features because they are discriminatory, and how to deal with bias in the incoming data once the model is moved to a production environment.

Reinforcement learning is supervised learning on optimized data

An article interpreting reinforcement learning in terms of supervised learning. The authors argue that RL can be viewed as a joint optimization problem over both the policy and the data, and that from this supervised-learning perspective, many RL algorithms alternate between finding good data and performing supervised learning on that data.
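The "alternate between optimizing the data and supervised learning" view can be sketched with a cross-entropy-method-style loop on a toy one-step problem. This is my own illustrative example, not code from the article:

```python
import numpy as np

def optimize_policy(n_iters=30, batch=100, elite_frac=0.1, seed=0):
    """Alternate between (1) collecting data with the current policy,
    (2) keeping only the high-reward samples ("optimizing the data"),
    and (3) fitting the policy to that data by maximum likelihood --
    plain supervised learning. The toy reward peaks at action = 3.
    """
    rng = np.random.default_rng(seed)
    mu, sigma = 0.0, 2.0                 # Gaussian policy parameters
    n_elite = int(batch * elite_frac)
    for _ in range(n_iters):
        actions = rng.normal(mu, sigma, size=batch)      # collect data
        rewards = -(actions - 3.0) ** 2                  # evaluate it
        elite = actions[np.argsort(rewards)[-n_elite:]]  # optimize the data
        mu, sigma = elite.mean(), elite.std() + 1e-3     # supervised fit
    return mu

print(optimize_policy())  # converges near 3.0
```

The policy update here is ordinary maximum-likelihood fitting; all of the "reinforcement" lives in the data-selection step, which is exactly the framing the article proposes.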

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

3. Examples of Machine Learning use cases

— — — —

AI Can Help Patients — but Only If Doctors Understand It

This article is about the various trials and tribulations of implementing machine learning tools in the medical field. It states that even if you develop an automated diagnostic system using machine learning, a system that disrupts the normal clinical workflow will make the situation more difficult, so it is important to develop systems that take these factors into account.

An A.I. Outperformed 20 Top Lawyers

An AI outperformed top lawyers at the task of finding flaws in confidentiality agreements: it can review a confidentiality agreement in 26 seconds, compared to 92 minutes for an average lawyer. The response from lawyers has been positive, citing benefits such as allowing them to focus on more complex projects.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

4. Other Topics

— — — —

AI development in 2020

A report on the development of artificial intelligence in 2020: an extensive 177-page document. It covers a wide range of topics, including research trends, where the talent is located, ethical issues, expansion into military applications, and predictions for the coming year. On the research side: huge datasets and massive models are driving accuracy, more papers are applying AI to biology, PyTorch is catching up with TensorFlow, and so on.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

On Twitter, I post one-sentence paper commentaries.

https://twitter.com/AkiraTOSEI