Deepfakes, targeted ads and listening devices — it’s not all bad, I promise!

From Cambridge Analytica to voice-controlled speakers and surveillance-camera doorbells, technology has built up a poor reputation in recent times. I felt the need to share some incredible tech-for-good stories that didn’t make the headlines:

Deepfakes vs ALS

A deepfake is a video, image or sound recording that has been manipulated so convincingly that the viewer or listener is tricked into believing it is genuine. A well-known example is the video in which Barack Obama initially appears to be giving a public address: he sits in a chair, talking to camera in increasingly aggressive language, until it is revealed that the video is a deepfake.

Video from BuzzFeed

The technique combines and superimposes existing images, videos and sound onto source content using a machine learning approach known as a generative adversarial network (GAN), in which a generator network learns to produce forgeries that a discriminator network can no longer tell apart from real material. Because of these capabilities, deepfakes have also been used to create fake celebrity pornography, scam companies out of millions and generate fake news.
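For the curious, here is a minimal sketch of that adversarial training loop, assuming PyTorch; the network shapes and hyperparameters are toy values chosen for illustration, and real deepfake models are far larger and operate on video frames, but the generator-versus-discriminator dynamic is the same.

```python
# A minimal sketch of GAN training, assuming PyTorch; the network sizes and
# hyperparameters are toy values chosen for illustration only.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # toy sizes; real models work on image/audio tensors

# Generator: maps random noise to a fake sample.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: scores how "real" a sample looks (as a logit).
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    n = real_batch.size(0)
    # 1) Train the discriminator to separate real samples from generated ones.
    fake = G(torch.randn(n, latent_dim)).detach()  # detach: don't update G here
    d_loss = (loss_fn(D(real_batch), torch.ones(n, 1)) +
              loss_fn(D(fake), torch.zeros(n, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # 2) Train the generator to produce samples the discriminator labels "real".
    g_loss = loss_fn(D(G(torch.randn(n, latent_dim))), torch.ones(n, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

train_step(torch.randn(32, data_dim))  # stand-in for a batch of real data
```

As the two networks compete, the generator's fakes become steadily harder to distinguish from the real thing, which is exactly what makes the technology powerful for both deception and, as below, voice restoration.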

ALS, also known as motor neurone disease, is a progressive neurodegenerative disease that, amongst other symptoms, often takes away a person’s ability to speak. Using the same underlying technology as deepfakes, an organisation called Project Revoice has been able to give people with ALS their voice back. It records the voice of people with ALS while they are still able to speak, then uses those recordings to let patients “speak” to others even after their natural voice is gone.

Targeted ads vs terrorist recruitment

Quietly, in the background, Google — through its subsidiary Jigsaw — has been running targeted ads and withholding certain results from Google searches for social good. Extremist groups in the Middle East have long used the internet to recruit men and women from Europe. Jigsaw wondered whether it could use technology to help prevent the spread of extremism.

As potential recruits search for content about extremist groups and become susceptible to their brainwashing, they leave clues about themselves through their online footprint. Jigsaw has been able to identify people susceptible to radicalisation through online profiling, in much the same way that Cambridge Analytica could identify swing voters. When a potential recruit searches Google for extremist content, instead of showing dangerous material, Jigsaw places its own advertising into the mix.

Jigsaw spent a lot of time speaking to ISIS defectors to understand the thought process behind joining and leaving extremist groups. The new ads link to materials that Jigsaw believes can effectively undo extremist groups’ brainwashing — like testimonials from former extremists. Jigsaw has been able to build profiles of potential recruits and target the right ads to help stop the spread of online extremism.
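At its core, the mechanic is simple: match risky queries, then serve counter-content instead. The sketch below illustrates that query-matching step only; the trigger phrases, ad inventory and function name are invented for illustration and bear no relation to Jigsaw's real keyword lists or systems.

```python
# An illustrative sketch only; the trigger phrases and ad inventory below are
# invented placeholders, and this is not Jigsaw's actual system or code.
RISKY_PHRASES = {"join the caliphate", "foreign fighters wanted"}  # placeholders
COUNTER_ADS = [
    "Documentary: daily life under extremist rule",    # placeholder inventory
    "Testimonial: why I left an extremist group",
]

def ads_for_query(query: str) -> list[str]:
    """Serve counter-messaging ads when a search query matches a risky phrase."""
    q = query.lower()
    if any(phrase in q for phrase in RISKY_PHRASES):
        return COUNTER_ADS
    return []  # ordinary queries get ordinary ads

print(ads_for_query("Foreign fighters wanted: how to travel"))
```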

Listening apps vs opioid addiction

Perhaps one of our biggest fears, in true Orwellian style, is the thought that we are constantly being observed, watched and monitored. So in 2019, when news broke that teams of human reviewers were listening in on recordings of Amazon Alexa users, there was outrage.

Photo by Paweł Czerwiński on Unsplash

However, Rajalakshmi Nandakumar, a PhD candidate at the University of Washington, has been developing a listening app called ApneaApp whose algorithms can help diagnose sleep apnoea. Sleep apnoea is a condition in which your breathing repeatedly stops and starts while you sleep, and it is linked to high blood pressure, diabetes, heart attacks and strokes.

Taking inspiration from how bats use sonar to sense their surroundings, the app continuously emits sounds outside the range of human hearing and listens for their reflections, using the tiny changes in those echoes to monitor breathing patterns.
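To make the idea concrete, here is a toy sketch of the sonar principle in NumPy: emit an ultrasonic chirp, record the echo, and estimate the reflector's distance from the round-trip delay. The sample rate, chirp frequencies and helper names are assumptions for illustration; ApneaApp's real signal processing on phone hardware is considerably more sophisticated.

```python
# A toy sketch of the sonar principle, assuming NumPy; sample rate, chirp
# frequencies and helper names are illustrative, not ApneaApp's actual design.
import numpy as np

FS = 48_000           # assumed sample rate (Hz)
SPEED_OF_SOUND = 343  # metres per second in air

def ultrasonic_chirp(duration=0.01, f0=18_000, f1=20_000):
    """A short frequency sweep above the range most adults can hear."""
    t = np.arange(int(FS * duration)) / FS
    return np.sin(2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * duration)))

def echo_distance(recording, emitted):
    """Estimate reflector distance from the strongest echo's round-trip delay."""
    corr = np.correlate(recording, emitted, mode="valid")
    delay_s = np.argmax(np.abs(corr)) / FS
    return delay_s * SPEED_OF_SOUND / 2  # halve: sound travels there and back

# Simulate a chest wall 0.5 m away: the echo arrives after the round-trip delay.
tx = ultrasonic_chirp()
delay = int(2 * 0.5 / SPEED_OF_SOUND * FS)
rx = np.concatenate([np.zeros(delay), 0.3 * tx, np.zeros(1_000)])
print(f"estimated distance: {echo_distance(rx, tx):.2f} m")
# Repeating this many times per second turns millimetre-scale chest movement
# into a breathing waveform that algorithms can screen for apnoea events.
```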

The team behind ApneaApp has now turned its attention to opioid addiction through a new proof of concept. When opioid addicts experience an overdose, their breathing pattern changes dramatically, and the team has successfully trialled its algorithms to identify when an overdose is occurring.

The current proof of concept sends the user a notification asking for feedback once a suspected overdose is identified. If the user doesn’t respond to the notification, the app sends a message to friends or family, with the potential for an ambulance to be dispatched directly to the user’s geolocation.
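The escalation flow might look something like the following sketch. Everything here, including the 60-second response window and the helper callbacks send_push, wait_for_reply, alert_contacts and request_ambulance, is a hypothetical stand-in rather than the team's actual implementation.

```python
# A hypothetical sketch of the escalation flow. The helper callbacks and the
# 60-second response window are placeholders, not the real app's API.
RESPONSE_WINDOW_SECONDS = 60  # assumed check-in window, not from the article

def handle_suspected_overdose(user, send_push, wait_for_reply,
                              alert_contacts, request_ambulance):
    # Step 1: ask the user to confirm they are okay.
    send_push(user, "Unusual breathing detected. Are you okay?")
    if wait_for_reply(user, timeout=RESPONSE_WINDOW_SECONDS):
        return  # the user responded; no escalation needed
    # Step 2: no response, so message friends or family with the location.
    alert_contacts(user, location=user.geo_location)
    # Step 3: potentially dispatch an ambulance straight to the user.
    request_ambulance(user.geo_location)

# Example wiring with stub callbacks:
class User:
    geo_location = (47.65, -122.31)  # placeholder coordinates

handle_suspected_overdose(
    User(),
    send_push=lambda u, msg: print("push:", msg),
    wait_for_reply=lambda u, timeout: False,  # simulate no response
    alert_contacts=lambda u, location: print("alerting contacts:", location),
    request_ambulance=lambda loc: print("ambulance requested at", loc),
)
```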

It’s not all doom and gloom

I think all these examples illustrate positive use cases for technologies that carry a bad name. Of course, there is a need for regulation that keeps pace and makes sense in today’s digital age, and big tech and wider businesses have a role to play by developing fair, robust and societally beneficial products, platforms and services. But we as individuals also have the power to steer technology in a direction that benefits all of society, and it really is down to us to make that happen.

In short, technology does not improve lives on its own: it needs a development agenda, shared by policymakers, individuals and business leaders, that mitigates the downside effects of technology adoption and maximises social impact.