Source: Deep Learning on Medium
How to Make Your First Contribution to Saving Privacy?
This blog gives you a high-level overview of the importance of privacy, learning resources for privacy-preserving techniques, and how you can join a community called OpenMined.
Privacy is one of the biggest concerns these days, with ever-increasing breaches and the many uses of personal data. Forbes reports 4.1 billion records breached in the first six months of 2019, and if you look at all the data breaches of 2019, it makes for an alarming timeline.
As we all know, tech companies small and large, like Facebook, Google, Apple, Microsoft, and Amazon, are tracking each and every click along with our social and personal interactions. They are using more and more data to improve their products and services.
And it makes a lot of sense if you think about it: it is better to measure what your users like than to guess and build a product no one likes. However, this is also very dangerous. It undermines our privacy, because the collected data can be quite sensitive and can cause harm if it leaks.
All this collected data is used to study and analyze users' needs, whether through statistical analysis or by training machine learning models such as recommendation systems.
Brandon Rohrer is a data scientist and one of the best explainers of machine learning concepts. His tweet beautifully explains the risks and the need for privacy in just five points.
The Importance of Privacy in Machine Learning
GDPR, the General Data Protection Regulation, is a set of rules introduced by the EU to protect the privacy of EU citizens; following the EU's lead, more than 100 countries have introduced data protection laws.
This gives rise to an interesting challenge: on one side we are developing higher-quality models that depend on accessible data, while on the other we are expected to keep that data safe from both intentional and accidental leakage.
One challenging consequence is that scientists and researchers are often extremely constrained when solving problems that involve personal data. This slows down research across society and every person-facing industry, making it harder to cure diseases or understand complex societal trends.
The more personal the data and its potential uses, the more restricted it is from scientists. This means some of the most important and personal issues in society cannot be addressed with machine learning, because we do not have access to proper training data.
This apparent conflict between privacy and socially valuable research can be resolved with a technique called differential privacy.
Differential Privacy and Its Applications
Differential Privacy addresses the problem of learning nothing about an individual while learning useful information about a population.
Cynthia Dwork, one of the inventors of differential privacy, gives a robust definition in her book The Algorithmic Foundations of Differential Privacy:
“Differential privacy” describes a promise, made by a data holder, or curator, to a data subject: “You will not be affected, adversely or otherwise, by allowing your data to be used in any study or analysis, no matter what other studies, data sets, or information sources, are available.”
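To make this concrete, here is a minimal sketch of the Laplace mechanism, the most common way to achieve differential privacy for numeric queries. The dataset, predicate, and epsilon value below are hypothetical choices for illustration:

```python
import numpy as np

def laplace_count(data, predicate, epsilon):
    """Differentially private count: the true count plus Laplace noise.

    A counting query changes by at most 1 when one person's record is
    added or removed (sensitivity 1), so the noise scale is 1/epsilon.
    """
    true_count = sum(1 for row in data if predicate(row))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical dataset: ages of survey participants
ages = [23, 35, 45, 52, 29, 41, 60, 33]

# How many participants are over 40? The true answer is 4, but each
# query returns a noisy value, protecting any single individual.
print(laplace_count(ages, lambda a: a > 40, epsilon=0.5))
```

Smaller epsilon means more noise and stronger privacy; over many queries the noisy answers still average out near the true count, which is the "useful information about a population" the definition promises.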
Differential privacy can be applied to everything from recommendation systems to location-based services and social networks. Apple uses differential privacy to gather anonymous usage insights from devices like iPhones, iPads, and Macs. The method is user-friendly, and legally in the clear.
Differential privacy would also allow a company like Amazon to access your personalized shopping preferences while hiding sensitive information about your historical purchase list. Facebook could use it to collect behavioral data for targeted advertising, without violating a country’s privacy policies.
What Is PySyft, and How Does It Solve Privacy Problems?
PySyft is an open-source Python library that provides privacy-preserving tooling for training secure and safe AI without compromising user data. It recently added support for TensorFlow and can also be used with PyTorch and Keras.
With OpenMined, an AI model can be governed by multiple owners and trained securely on an unseen, distributed dataset.
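To illustrate the idea of training on a distributed dataset, here is a toy federated-averaging sketch in plain Python. This is not PySyft's actual API; the one-parameter linear model and the two clients are made up for the example. Each client fits its model locally, and only the model parameters, never the raw data, reach the server:

```python
def local_fit(xs, ys):
    """Least-squares slope through the origin (y = w * x) for one client's data."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def federated_average(client_datasets):
    """Average locally trained models without pooling raw data on the server."""
    weights = [local_fit(xs, ys) for xs, ys in client_datasets]
    return sum(weights) / len(weights)

clients = [
    ([1, 2, 3], [2, 4, 6]),    # client A's private data: slope 2
    ([1, 2, 4], [3, 6, 12]),   # client B's private data: slope 3
]
print(federated_average(clients))  # 2.5
```

PySyft builds the production version of this pattern, adding pointers to remote tensors, secure aggregation, and multi-owner governance on top of PyTorch and TensorFlow.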
PySyft gives you the power to build robust privacy models into deep learning projects. As data usage, machine learning models, and data governance evolve, privacy is becoming a fundamental aspect of any application, project, or framework.
We only covered differential privacy at a high level, but there are other techniques, such as federated learning, secure multi-party computation, and homomorphic encryption, that can be implemented easily with PySyft.
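As a taste of secure multi-party computation, here is a sketch of additive secret sharing, the building block PySyft-style libraries use for it. The modulus and party count are illustrative choices; no single share reveals anything about the secret, yet parties can compute on shares and decode only the result:

```python
import random

Q = 2**31 - 1  # illustrative large modulus for the share arithmetic

def share(secret, n_parties=3):
    """Split a secret into n additive shares that sum to it modulo Q."""
    shares = [random.randrange(Q) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    """Recombine shares; only the full set reveals the secret."""
    return sum(shares) % Q

alice = share(5)   # Alice's private value, split among 3 parties
bob = share(12)    # Bob's private value, split among the same parties
# Each party adds the two shares it holds, locally and privately.
summed = [(a + b) % Q for a, b in zip(alice, bob)]
print(reconstruct(summed))  # 17 — the sum, without revealing 5 or 12
```

Addition works share-by-share because the shares are just a random decomposition of each value; multiplication requires extra protocol machinery, which is where libraries like PySyft come in.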
You can learn more about these techniques here.
So if you are excited to join OpenMined, here is your invitation to join the community and contribute to the development of safe AI.
Resources on Privacy-Preserving & DP:
Understanding the theory clearly is important before solving problems in practice, so here are some of the resources I found helpful when I started learning about this topic.
These resources are sorted from theory to practice and will definitely help you.
1. Security and Privacy in Artificial Intelligence & Machine Learning Blog
2. Secure and Private AI Course — by Facebook Artificial Intelligence
3. Encrypted Training with PyTorch + PySyft
How to Join OpenMined?
You can introduce yourself in #general-discussion and look for recent #announcements. You may be approached by an OpenMined mentor who will help you with any particular questions. You can then choose the topic and area of your contribution, which is divided between the Community and Dev teams.
To join the Dev team, raise your first PR by solving one of the "good first issues" over here.
To join the Community team, if you are interested in writing blog posts, tutorials, demos, or documentation, you can reach out to Trask and show him your recent work. He will give you a good first project to work on, with the goal of joining the writing/community team.