The #paperoftheweek 10 is: SC-FEGAN: Face Editing Generative Adversarial Network with User’s…

Source: Deep Learning on Medium



In this paper, South Korean researchers achieve high-quality (512×512) completion of face images, guided by a user-provided edge sketch and colors for a cut-out part of the original image. Unlike plain in-painting, the additional user inputs allow purposeful generation of small details such as earrings. The model can also produce high-quality results without additional user inputs (similar to in-painting), although quality degrades when the removed region is too large.

The authors also created a tool (see the attached animation) that demonstrates how impressive GAN-empowered photo-editing software could be.

Their model was trained on CelebA-HQ with custom processing steps to create synthetic input-output pairs. The generator architecture is similar to U-Net; the discriminator is based on SN-PatchGAN. Along with the GAN loss, they also employ a per-pixel loss, a perceptual loss, a style loss, and a total variation loss.
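To make the loss setup concrete, here is a minimal NumPy sketch of how such a multi-term generator objective can be combined. The weights, the extra penalty on the masked (hole) region, and the function names are illustrative assumptions, not the paper's exact values; the GAN, perceptual, and style terms are passed in as precomputed scalars since they depend on the discriminator and a pretrained feature network.

```python
import numpy as np

def total_variation_loss(img):
    """Total variation: penalizes abrupt changes between neighboring pixels.
    img: (H, W, C) float array."""
    dh = np.abs(img[1:, :, :] - img[:-1, :, :]).mean()
    dw = np.abs(img[:, 1:, :] - img[:, :-1, :]).mean()
    return dh + dw

def per_pixel_loss(pred, target, mask):
    """L1 reconstruction loss, weighted more heavily inside the edited
    (masked) region -- the hole weight of 5.0 is a hypothetical choice."""
    l1 = np.abs(pred - target)
    inside = (l1 * mask).sum() / max(mask.sum(), 1)
    outside = (l1 * (1 - mask)).sum() / max((1 - mask).sum(), 1)
    return 5.0 * inside + outside

def combined_generator_loss(pred, target, mask,
                            gan_term, perceptual_term, style_term,
                            w_gan=0.001, w_perc=0.05, w_style=120.0, w_tv=0.1):
    """Weighted sum of the loss terms listed above; weights are illustrative."""
    return (per_pixel_loss(pred, target, mask)
            + w_gan * gan_term
            + w_perc * perceptual_term
            + w_style * style_term
            + w_tv * total_variation_loss(pred))
```

In a real training loop each term would be a differentiable tensor (e.g. in PyTorch), with the perceptual and style terms computed from VGG feature maps of the prediction and the ground truth.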

Abstract:

“We present a novel image editing system that generates images as the user provides free-form mask, sketch and color as an input. Our system consist of a end-to-end trainable convolutional network. Contrary to the existing methods, our system wholly utilizes free-form user input with color and shape. This allows the system to respond to the user’s sketch and color input, using it as a guideline to generate an image. In our particular work, we trained network with additional style loss which made it possible to generate realistic results, despite large portions of the image being removed. Our proposed network architecture SC-FEGAN is well suited to generate high quality synthetic image using intuitive user inputs.”

You can read the full article here.

About the author:

Evgeniy Mamchenko, Deep Learning Engineer at Brighter AI.