Facebook creates AI that behaves like a harmful user; understand why

Facebook has created an artificial intelligence (AI) capable of behaving like a harmful user on the social network. The obvious question is: why would the company go to the trouble of recreating bad behavior when the platform already has plenty of harmful people? The idea is precisely to simulate how these people act maliciously on the social network and then build defense mechanisms against them.

In a presentation to journalists in the United States, the company described the mechanism. The simulator was named WW (pronounced "dub dub", a shortened form of WWW) and runs on a simulated, parallel version of Facebook used for testing purposes only.

What the company realized is that the first thing a malicious person does on the social network is to co-opt a group of other users, convincing them to spread false news or take harmful actions together. To represent this, the researchers created one artificial intelligence (the "bad AI") to act as the recruiter and another (the "innocent AI") to behave like the co-opted user.
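
A minimal, hypothetical sketch of the two roles described above; Facebook has not published WW's code, so the class names, the "gullibility" parameter and the toy recruitment logic here are illustrative assumptions only:

```python
import random

class InnocentAgent:
    """Simulated ordinary user who accepts a malicious approach only occasionally."""
    def __init__(self, agent_id, gullibility=0.1):
        self.agent_id = agent_id
        self.gullibility = gullibility      # assumed probability of being co-opted
        self.recruited = False

    def receive_approach(self):
        if random.random() < self.gullibility:
            self.recruited = True
        return self.recruited

class BadAgent:
    """Simulated malicious user that tries to co-opt a group of innocent agents."""
    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.recruits = []

    def attempt_recruitment(self, targets, max_messages):
        # The simulator can cap how many approaches the bad agent is allowed to make.
        for target in targets[:max_messages]:
            if target.receive_approach():
                self.recruits.append(target.agent_id)
        return self.recruits

# Example run: one bad agent approaching a population of 50 innocent agents.
population = [InnocentAgent(i) for i in range(50)]
attacker = BadAgent("bad-0")
print(attacker.attempt_recruitment(population, max_messages=20))
```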

With that, Facebook studies how the bad AI interacts with the innocent AI to form the groups that are then used for malicious actions. Within the simulator, the company starts limiting the bad AI's actions, for example by reducing the number of private messages it can send or how many new friends it can make. The team then analyzes the impact of each of these changes on preventing negative behavior on the network.
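
The question the simulator answers, then, is how a given restriction changes the outcome. A rough illustration of that idea follows, using a toy probabilistic model rather than anything Facebook has described in detail; the message caps and the 10% co-option rate are assumptions:

```python
import random

def simulate_recruitment(num_targets, max_messages, gullibility=0.1, trials=1000):
    """Average number of users co-opted when the bad actor's messages are capped."""
    total = 0
    for _ in range(trials):
        approaches = min(num_targets, max_messages)
        total += sum(1 for _ in range(approaches) if random.random() < gullibility)
    return total / trials

# Tightening the cap should shrink the group the bad actor manages to assemble.
for cap in (5, 10, 20, 40):
    print(f"message cap {cap:>2}: ~{simulate_recruitment(200, cap):.2f} users co-opted")
```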

And how does it improve the social network experience?

Using this AI has two advantages. The first is that it does not depend on the participation of real profiles, which could have negative consequences in a behavioral experiment. The other is speed. "We can scale this up to tens or hundreds of thousands of bots and then, in parallel, search many different possibilities for restrictions," says Mark Harman, the engineer leading the research at Facebook.
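
One way to picture that scale-out is sketched below with Python's standard concurrent.futures rather than Facebook's actual infrastructure; the restriction values, bot count and recruitment model are assumptions for illustration:

```python
import random
from concurrent.futures import ProcessPoolExecutor

def run_experiment(config):
    """Simulate many bad bots under one restriction setting; return avg recruits per bot."""
    max_messages, max_new_friends = config
    random.seed(max_messages * 1000 + max_new_friends)  # reproducible per config
    num_bots = 10_000                                   # "tens of thousands of bots"
    recruits = 0
    for _ in range(num_bots):
        budget = min(max_messages, max_new_friends)     # a recruit needs both a contact and a message
        recruits += sum(1 for _ in range(budget) if random.random() < 0.05)
    return config, recruits / num_bots

if __name__ == "__main__":
    # Each restriction combination is explored in parallel, in its own process.
    configs = [(m, f) for m in (5, 20, 50) for f in (10, 100)]
    with ProcessPoolExecutor() as pool:
        for (msgs, friends), avg in pool.map(run_experiment, configs):
            print(f"messages={msgs:>2}, new friends={friends:>3} -> {avg:.3f} recruits per bot")
```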

The bots can search, send friend requests, write comments and posts, and send private messages. However, these actions are only inputs to the simulation, meaning the content of the messages is not actually simulated.
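
One way to read that "only an input" remark, again as a hypothetical sketch rather than Facebook's actual data model, is that the simulator records which action happened and toward whom, but never any message text:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class ActionType(Enum):
    SEARCH = auto()
    FRIEND_REQUEST = auto()
    POST = auto()
    COMMENT = auto()
    PRIVATE_MESSAGE = auto()

@dataclass
class BotAction:
    actor_id: str
    action: ActionType
    target_id: Optional[str] = None  # no message body: only that the action occurred

action_log = [
    BotAction("bot-1", ActionType.SEARCH),
    BotAction("bot-1", ActionType.FRIEND_REQUEST, target_id="bot-7"),
    BotAction("bot-1", ActionType.PRIVATE_MESSAGE, target_id="bot-7"),
]
for event in action_log:
    print(event)
```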

Although it runs on a parallel version of the social network, Harman points out that this version is very similar to the real one. "Unlike other simulations, this network-based one has actions and observations based on the infrastructure of the real network, which makes it much more realistic," the engineer explained.

According to the researchers, WW is currently being used only for research. This means it has not yet produced practical results that led to changes in the real version of Facebook.

"Currently, the focus is on training the bots to imitate things we know are already happening on the platform. But in theory, and in practice, the bots can do things we haven't seen on the platform yet. That is actually what we want, because we seek to stay one step ahead of negative behavior rather than always chasing after it," he added.