Is there a way to combine artificial intelligence and data protection?

AI research is advancing ever further and ever faster. Is a better AI more important than data protection?

Photo by Franki Chamaki on Unsplash

Artificial intelligence or data privacy: do we have to choose between the two, or is there a middle way for this AI privacy problem?

AI can already help us in many everyday and decision-making situations, but these decision-making scenarios are becoming increasingly complex.

We have come to a point where AI not only supports us but also raises many new questions. We must take a step back and ask ourselves how we build AI and whether the current path is responsible.

AI can do a lot, and it can help us in many areas. It can achieve better results for us, especially in tasks and challenges where people are prone to error.

Examples are precision work and decisions that have to be made under high pressure. In these situations, artificial intelligence keeps a “cool head.” So is AI better suited for such scenarios?

But what do we give up for it? Our autonomy? Our privacy? The free development of our lives? What do such interventions mean for us?

Let us take a look at the privacy problem

First, we need to look at how we create our AIs. I don’t want to go into too much detail. But as most of us have heard, an AI needs data to learn. A lot of data. And we humans produce a lot of this data.

That has profound implications.

The more data that is created, collected, and fed into an AI, the greater the danger that this information will be misused: used for purposes other than the original one, or even sold to third parties.

This threat is serious because the market for user data has become so lucrative, with huge sums of money involved.

And even if the collected user data isn't repurposed or sold, there's still the danger of data theft.

For example, three billion Yahoo accounts were compromised a few years ago, one of the biggest data breaches we know of.

We quickly see that there is a trade-off to weigh. How much data do we give away? What information about ourselves and our behavior do we reveal?

Ultimately, we’re facing the directional choice of developing high-performance AI versions with potential data security violations and evident consequences, or we choose data protection and accept that the AI cannot support us in everything.

But is it already too late?

At the same time, we can't ban AI from our lives, since it has long been around and established in many of the services we use every day.

AI makes our everyday life simpler in many areas, even when we don't notice it. Among other things, it improves our searches on the web.

For example, AI:

  • shows us the quickest way to wherever we want to go,
  • makes it easier for many people to do their jobs,
  • saves us from dull evenings with smart recommendation algorithms.

In short: we can't return to a life without AI and ban it from our lives. Nor can we halt the creation of enormous amounts of data.

On the technical side, however, there are approaches that combine data protection with AI.

One is data anonymization. Here, the original data is slightly altered mathematically before being processed.

This technique has the advantage of being relatively simple to implement, already widespread, and applicable in many different ways.
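
To make this concrete, here is a minimal sketch of one such method: perturbing numeric values with random noise in the style of differential privacy. The data and noise scale below are invented for illustration:

```python
import numpy as np

def anonymize(values: np.ndarray, scale: float = 2.0) -> np.ndarray:
    """Perturb numeric records with Laplace noise before processing.

    A larger `scale` gives stronger anonymization but less
    accurate statistics.
    """
    noise = np.random.laplace(loc=0.0, scale=scale, size=values.shape)
    return values + noise

# Hypothetical example: user ages in a dataset.
ages = np.array([23.0, 35.0, 58.0, 41.0, 29.0])
noisy_ages = anonymize(ages)

# Aggregates stay roughly useful for analysis and training...
print(ages.mean(), noisy_ages.mean())
# ...while individual records no longer match the originals exactly.
```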

But this method can only be a temporary solution, because it is not entirely safe. In parallel with anonymization methods, deanonymization techniques are also advancing.

Among other things, private data can be reconstructed by linking different databases. What protects our data today may be obsolete tomorrow.
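
As a hedged illustration of such a linkage attack, the sketch below joins an "anonymized" table with a public one on shared quasi-identifiers; all names, columns, and values are invented:

```python
import pandas as pd

# Hypothetical "anonymized" medical records: names removed, but
# quasi-identifiers (ZIP code, birth year) left intact.
medical = pd.DataFrame({
    "zip": ["10115", "20095", "01067"],
    "birth_year": [1980, 1975, 1992],
    "diagnosis": ["flu", "diabetes", "asthma"],
})

# Hypothetical public register containing the same quasi-identifiers.
register = pd.DataFrame({
    "name": ["A. Schmidt", "B. Meyer", "C. Weber"],
    "zip": ["10115", "20095", "01067"],
    "birth_year": [1980, 1975, 1992],
})

# Joining the two databases re-identifies the "anonymous" records.
reidentified = medical.merge(register, on=["zip", "birth_year"])
print(reidentified[["name", "diagnosis"]])
```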

Another promising approach is homomorphic encryption. Here, data is encrypted, and computations are then performed directly on the encrypted data, without ever decrypting it.

An obstacle to the spread of this method is that it requires extensive computing resources.
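
As a small sketch of the idea, the snippet below uses the third-party python-paillier library (an assumption for illustration; Paillier is only additively homomorphic, while fully homomorphic schemes allow arbitrary computations). An untrusted party sums encrypted values, and only the key holder can read the result:

```python
from phe import paillier  # third-party library: pip install phe

# The data owner generates a key pair and keeps the private key.
public_key, private_key = paillier.generate_paillier_keypair()

# Sensitive values are encrypted before they leave the owner.
readings = [3.5, 7.2, 1.1]
encrypted = [public_key.encrypt(x) for x in readings]

# An untrusted server can add the ciphertexts without decrypting
# them (the homomorphic property).
encrypted_sum = sum(encrypted[1:], encrypted[0])

# Only the private key holder can recover the plaintext result.
print(private_key.decrypt(encrypted_sum))  # roughly 11.8
```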

Another way of reconciling AI and data protection begins one step earlier: rather than bringing the data to the algorithms, the algorithms are brought to the data.

This alternative, decentralized strategy is known as edge computing or federated learning. The user data always stays on the end device and trains local AI models right there.

These local models are encrypted and then merged into a global model.
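
A minimal sketch of this merging step, assuming a simple federated-averaging scheme in which each device contributes only its locally trained weights (shapes and device count are invented; the encryption of the updates is omitted):

```python
import numpy as np

def federated_average(local_weights: list[np.ndarray]) -> np.ndarray:
    """Merge locally trained models by averaging their weights.

    Only the weight arrays leave the devices; the raw user data
    used for training never does.
    """
    return np.mean(local_weights, axis=0)

# Hypothetical weight vectors from three end devices after a
# round of local training.
device_updates = [np.random.randn(4) for _ in range(3)]
global_model = federated_average(device_updates)
print(global_model)
```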

This approach’s crux is that the real user information never leaves the end device, so the consumer’s solitude is preserved.

This also saves resources, because large amounts of data no longer have to be transmitted.

In addition, this approach is more secure, because hackers would have to compromise all the devices at once rather than a single central database.

With decentralized AI models, privacy protection isn't merely an add-on that is enforced afterward but is built into the AI from the beginning.

Internationally, research and development of these forward-looking and resilient methods is already underway for various use cases, such as AI models for the healthcare industry.

There is still a long way to go to decentralized data processing. But the positive takeaway is that we do have the option to combine AI and data protection.

We can choose to go this route.