Users’ Bad Behavior Imitated by Facebook Using AI

The original article was published by Quatics in Artificial Intelligence on Medium


Facebook engineers have deployed a new Artificial Intelligence-based method to identify and prevent harmful behaviors such as spreading spam, scamming other users, or buying and selling weapons and drugs through Facebook.

By letting AI-powered bots loose on a parallel version of Facebook, the researchers can now simulate the actions of bad actors in a simulator known as WW, pronounced “Dub Dub.” This enables them to study the bots’ behavior in simulation and experiment with new ways to stop them.

Photo by Glen Carrie on Unsplash

Facebook engineer Mark Harman and the company’s AI department in London are leading the research.

“WW is a hugely flexible tool that can aid in limiting a wide range of harmful behavior on the site,” Harman said when speaking to journalists.

In a real-life scenario, scammers usually start by targeting users’ friendship groups to find potential marks. To tackle this, Facebook engineers created a group of “innocent” bots to act as targets and trained a number of “bad” bots that explored the network trying to find them. The engineers then experimented with various ways of stopping the bad bots to see how each affected their behavior.
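The article gives no implementation details, but a toy sketch can make the setup concrete. The following minimal Python simulation is purely illustrative, not Facebook’s WW code: the friendship graph, the random-walk scammer bots, and the “contact cap” intervention are all assumptions invented for this example.

```python
# Toy sketch (not Facebook's code): "bad" bots walk a friendship graph
# looking for "innocent" target bots, and a simple intervention (a cap on
# how many neighbors a scammer may contact per step) is toggled to see how
# it changes the number of targets reached.
import random

def run_simulation(graph, targets, scammers, steps=20, contact_cap=None):
    """Walk each scammer bot over the graph and count the targets it reaches."""
    reached = set()
    positions = {s: s for s in scammers}          # each scammer starts at its own node
    for _ in range(steps):
        for scammer in scammers:
            neighbors = graph[positions[scammer]]
            if contact_cap is not None:           # simulated rate-limit intervention
                neighbors = neighbors[:contact_cap]
            if not neighbors:
                continue
            positions[scammer] = random.choice(neighbors)
            if positions[scammer] in targets:
                reached.add(positions[scammer])
    return len(reached)

# Toy friendship graph (adjacency lists), innocent targets, and one scammer.
graph = {0: [1, 2, 3], 1: [0, 4], 2: [0, 4, 5], 3: [0], 4: [1, 2], 5: [2]}
targets, scammers = {4, 5}, [0]

random.seed(0)
print("no intervention :", run_simulation(graph, targets, scammers), "targets reached")
print("contact cap of 1:", run_simulation(graph, targets, scammers, contact_cap=1), "targets reached")
```

In this toy version, capping contacts starves the scammer of paths to the targets. WW presumably measures far richer outcomes, but the experimental loop (release bad bots, apply a countermeasure, compare results) is the same idea described above.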

Facebook calls this approach a “web-based simulation”. Because WW is built on the real version of Facebook, it stands apart from other simulation approaches.

“The difference between traditional simulation and web-based simulation is that in traditional methods everything is simulated, whereas web-based simulation is more realistic since the observations and actions are taking place through the real infrastructure,” says Harman.
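To make that distinction concrete, here is a hedged illustration in Python. The class and function names are invented for this example and are not Facebook’s API; the point is only that a traditional simulation models the whole platform in code, while a web-based simulation routes the bots’ actions through the real platform stack, isolated from real users.

```python
# Illustrative only: contrasting a fully modeled backend with one that would
# route bot actions through real (but isolated) platform infrastructure.
from abc import ABC, abstractmethod

class Backend(ABC):
    @abstractmethod
    def send_friend_request(self, sender: str, receiver: str) -> bool: ...

class TraditionalSimBackend(Backend):
    """Traditional simulation: the platform itself is modeled in memory."""
    def __init__(self):
        self.requests = []
    def send_friend_request(self, sender, receiver):
        self.requests.append((sender, receiver))  # toy stand-in for the platform
        return True

class WebBasedSimBackend(Backend):
    """Web-based simulation: actions go through the real stack, bots only."""
    def send_friend_request(self, sender, receiver):
        # Placeholder for a call into real, isolated infrastructure.
        return call_real_platform_api("friend_request", sender, receiver)

def call_real_platform_api(action, *args):
    raise NotImplementedError("stands in for the real, isolated infrastructure")

if __name__ == "__main__":
    backend = TraditionalSimBackend()
    backend.send_friend_request("bad_bot_1", "innocent_bot_7")
    print(backend.requests)
```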

Currently, WW is still in the research stage, and none of the simulations run with the bots has resulted in any real-life changes to Facebook. According to Harman, though, the engineers are running tests to check whether the simulations match real-life behavior with high enough fidelity; if they do, the work might result in modifications to Facebook’s code by the end of the year.

The biggest advantage of WW is its ability to work at huge scale. It lets Facebook run thousands of simulations at once to test all sorts of changes to the site without affecting real users. One of its most exciting aspects is its potential to uncover new weaknesses in Facebook’s architecture through the bots’ actions.
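As a rough illustration of that scale argument, the sketch below (again purely hypothetical, with a stub simulate() standing in for a full simulation like the one above) evaluates many candidate intervention settings in parallel, each in its own isolated run, with no real users involved.

```python
# Hypothetical sketch of running many isolated simulations in parallel to
# compare candidate interventions. simulate() is a stub, not Facebook's code.
import random
from concurrent.futures import ProcessPoolExecutor

def simulate(policy_strength: int) -> float:
    """Stand-in simulation: return a toy 'harm score' for a policy setting."""
    rng = random.Random(policy_strength)          # deterministic per setting
    return rng.random() / (1 + policy_strength)

if __name__ == "__main__":
    policies = range(10)                          # thousands, in the real setting
    with ProcessPoolExecutor() as pool:
        for policy, harm in zip(policies, pool.map(simulate, policies)):
            print(f"policy strength {policy}: simulated harm {harm:.3f}")
```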

“As of now, we are mainly focusing on training the bots to perform and imitate what we know is happening on Facebook. But in theory and in practice, the bots are capable of much more than what we have seen before,” says Harman. “Eventually that is our aim, as our ultimate goal is to get ahead of the bad behavior rather than continually playing catch up,” he added.

Harman stated that the group has seen some unanticipated behavior from the bots but did not share any details, saying he did not want to tip off scammers about how the system works.