AI programs are not immune to attacks. Alexander Polyakov discusses how to build a system that can not only detect attacks but also withstand them.

Take a look at BadNets, a paper that presented neural networks with hidden backdoors, trained to misclassify objects under specific conditions. Since most companies don't have the resources to train their own models from scratch, they usually take pre-trained models and fine-tune them on their own data. Imagine an attacker uploading a backdoored network to a public model repository, or breaking into a server that hosts models and modifying the existing ones. A backdoored network performs almost as well as a clean one on ordinary inputs, but it has been trained to misclassify inputs that contain the attacker's trigger. If it's a face recognition system, a hacker can plant a backdoor so that his or her face will always pass the check.
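To make the attack concrete, here is a minimal sketch of BadNets-style training-data poisoning in PyTorch. The trigger (a small white patch in the image corner), the poisoning fraction, and the function names are illustrative assumptions for this sketch, not the exact setup from the paper:

```python
import torch

def add_trigger(images: torch.Tensor, patch_size: int = 3) -> torch.Tensor:
    """Stamp a small white square (the backdoor trigger) into the
    bottom-right corner of each image in an (N, C, H, W) batch."""
    poisoned = images.clone()
    poisoned[:, :, -patch_size:, -patch_size:] = 1.0
    return poisoned

def poison_batch(images: torch.Tensor, labels: torch.Tensor,
                 target_class: int, poison_frac: float = 0.1):
    """Replace a fraction of the batch with triggered images relabeled
    to the attacker's target class. The rest of the batch stays clean,
    so accuracy on normal inputs is barely affected."""
    n_poison = int(len(images) * poison_frac)
    idx = torch.randperm(len(images))[:n_poison]
    images, labels = images.clone(), labels.clone()
    images[idx] = add_trigger(images[idx])
    labels[idx] = target_class
    return images, labels

if __name__ == "__main__":
    # Stand-in for a real dataset: a batch of 32 random 32x32 RGB images.
    images = torch.rand(32, 3, 32, 32)
    labels = torch.randint(0, 10, (32,))
    # The attacker poisons each batch before the normal training step;
    # the model then learns to map the trigger patch to class 0.
    images, labels = poison_batch(images, labels, target_class=0)
```

After training on such batches, the network behaves normally on clean inputs, but any input carrying the trigger patch is steered toward the attacker's chosen class.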
