Differential Privacy and Deep Learning



By Archit Garg

Differential privacy is about ensuring that when our neural networks learn from sensitive data, they learn only what they are supposed to learn, without accidentally picking up information from the data that they are not supposed to learn.

Foundational Principles of Differential Privacy

  1. How noise is applied.
  2. How privacy is defined.

What is Differential Privacy?

The general goal of differential privacy is to ensure that statistical analyses do not compromise privacy. Privacy is preserved if, after the analysis, the analyst knows nothing more about the individuals in the dataset than they did before. In other words, information that has already been made public elsewhere should not become harmful to an individual through the analysis.

A robust definition of privacy was proposed by Cynthia Dwork in The Algorithmic Foundations of Differential Privacy:

“Differential Privacy” describes a promise, made by a data holder, or curator, to a data subject, and the promise is like this: “You will not be affected, adversely or otherwise, by allowing your data to be used in any study or analysis, no matter what other studies, data sets, or information sources, are available.”
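Formally (in the notation of Dwork's book), a randomized mechanism M gives ε-differential privacy if, for all pairs of databases D1 and D2 that differ in at most one record, and for all sets S of possible outputs:

Pr[M(D1) ∈ S] ≤ exp(ε) · Pr[M(D2) ∈ S]

The smaller the privacy budget ε, the less any single person's data can shift the distribution of results, and the stronger the promise above becomes.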

To define privacy in the context of a simple database, suppose we perform some query against that database. If we remove a person from the database and the query's output does not change, then that person's privacy is fully protected: their data was not leaking any statistical information into the output of the query. To build more intuition, let's see this in Python code:

Gist: https://gist.github.com/gargarchit/39ed046551259f5840ee6b3788001311
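In case the embedded gist does not render, here is a minimal sketch of the idea (my own reconstruction; the helper names get_parallel_dbs and query are illustrative, not necessarily those in the gist). We build a toy database of binary entries, generate every "parallel" database with one person removed, and measure the maximum amount a simple sum query can change:

```python
import torch

def get_parallel_dbs(db):
    # Every database obtained by removing exactly one person's entry
    return [torch.cat((db[:i], db[i + 1:])) for i in range(len(db))]

def query(db):
    # A simple query: count how many entries are 1
    return db.sum()

# Toy database: each entry is one person's private bit (0 or 1)
db = (torch.rand(5000) > 0.5).float()

# Maximum change in the query's output when any single person
# is removed -- the "sensitivity" of the query
full_result = query(db)
max_distance = max(
    (query(pdb) - full_result).abs().item()
    for pdb in get_parallel_dbs(db)
)
print(max_distance)  # 1.0: a sum over binary data changes by at most 1
```

A sensitivity of 1 means that removing any one person changes the sum by at most 1; this is exactly the quantity that later determines how much noise must be added to protect each individual.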