Synthetic Realities at ICML 2019

Source: Deep Learning on Medium


By Delip Rao

With each passing day, we are seeing bigger and more capable deep generative models that produce content nearly indistinguishable from authentic content to human eyes and ears. With growing incentives for misinformation and the growth of social networks, we are also seeing a resurgence of “old school” fake content, such as playing an audio recording out of context (a “replay attack”) or forging misleading videos and images with a multimedia editor. In an experiment conducted at AI Foundation, at most 70% of untrained human subjects were able to tell fake and real videos apart. That number drops further for synthetic images and audio, and for lower-resolution images and videos. Meanwhile, as we demonstrate in our FaceForensics work (and as other researchers in the field have observed), machine learning models far surpass untrained humans at telling real and synthetic or forged content apart.

Machine learning models are thus indispensable for discriminating synthetic content, and pushing the boundaries of that science is core to our mission at AI Foundation. We are proud to sponsor the Synthetic Realities workshop at ICML 2019. The workshop features interesting new work in detection science, covering a wide range of topics in speech, language, and computer vision. We congratulate the authors of all the accepted papers.

Accepted papers at the Synthetic Realities workshop

Best Paper Award and Travel Grants

We want to congratulate Pavel Korshunov and colleagues from the Idiap Research Institute and SRI on winning the best paper award for their paper, “Tampered Speaker Inconsistency Detection with Phonetically Aware Audio-visual Features.” The paper focuses on detecting tampering in videos of a person speaking to a camera. Besides presenting a thoroughly evaluated result, the importance of this line of work cannot be overstated in today’s information climate, especially since this form of manipulation is easy to perform: replacing part of the audio can dramatically change the meaning of a video.

We also want to congratulate Hanxiang Hao and David Güera, students from Prof. Edward Delp’s group at Purdue, for winning the travel grants.

If you are in Long Beach attending ICML, stop by the workshop on June 15th for a day of exciting invited and contributed talks, and stimulating panel discussions. Here’s a warmup from the contributed talks:

- How can we use speaker identification and ASR systems to evaluate the effectiveness of voice conversion?
- What are the theoretical limits of deepfake detection?
- How can we exploit metadata to improve fake video detection?
- How do we preserve privacy in content generation?

We look forward to seeing you there!