How deep fakes could change UX

Original article can be found here (source): Deep Learning on Medium

So what is a “deep fake”?

It’s often described as face swapping, but without the need for any video editing skills. Swapping faces in videos has long been possible using CGI, but it previously required a great deal of time and hard work from the editor. This is the big difference: by combining deep learning algorithms with a large enough pool of sample images, anyone can create a convincing deep fake video. That is the game changer.

I recently came across a YouTube channel called Ctrl Shift Face that exclusively posts deepfake videos. At the time of writing, the account has 377K subscribers.

In this video, Bill Hader channels Tom Cruise.

I think this really shows the potential of this emerging technology, and the transition between the faces is so smooth that I can’t help but feel just a tiny bit scared. This video was of course created as entertainment, but the same technology can be used in other, far less amusing ways.

Artificially intelligent face swap videos, known as deepfakes, are more sophisticated and accessible than ever.

From a UX perspective

I think this poses a real challenge. How will we as UX designers create experiences where users can trust us as creators, and trust the content we put out there, if anyone has the ability to fake videos in such a convincing way? I believe that, in the future, this will put a strain on the relationship between user and designer, a relationship that is so incredibly important when working with human-centered design. As UX designers, we need to stay on top of emerging technologies like this in order to relate to users and understand their concerns.

And will there be software capable of countering and identifying deep fakes in the future? I don’t know. In the short term, it seems that way. But in the long run? It depends on who you ask.

Hao Li, associate professor at the University of Southern California and CEO of Pinscreen, tells The Verge that any deepfake detector is only going to work for a short while. In fact, he says, “at some point it’s likely that it’s not going to be possible to detect [AI fakes] at all. So a different type of approach is going to need to be put in place to resolve this.”