St. Petersburg Federal Research Center
of the Russian Academy of Sciences

Researchers at the St. Petersburg Federal Research Center of the Russian Academy of Sciences (SPC RAS) have trained an artificial intelligence system to identify groups of malicious bots in social networks by analyzing public data about them, regardless of the language in which they write posts and comments. This approach can be adopted by companies that use social networks for commercial purposes to identify and counter information attacks. The results of the study have been published in the international journal JoWUA (http://jowua.com/vol12no2.php).

Bots are an important tool in the functioning of social networks. For instance, they run support chats and distribute advertising, and can replace an entire team of real people by spreading information automatically. At the same time, bots are also used for unethical purposes, such as inflating ratings, writing fake positive product reviews, and spreading misinformation. Some types of bots can mimic the behavior of real people well enough that recognizing them is extremely difficult.

"The problem is that there exist a great number of social networks all over the world; they differ from each other and contain information in different languages. However, we have developed a prototype system for monitoring the group activity of bots based on the analysis of general input data about the bots themselves, independently of communications’ language and  structure of social networks. Such data are present in any social network, based on the principle of its operation, «says a Leading Researcher of the Laboratory of Computer Security Problems at the St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences (SPIIRAS – SPC RAS structural division) Andrey Chechulin.

To analyze groups of bots, the developers used open information about implicit social connections between accounts; this information serves as the input data for the artificial intelligence. The scientists study bots, their activity in social networks, and the way they interact with each other and with other users. The resulting data make it possible to determine with high probability which accounts belong to people and which are bots.
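The article does not disclose the specific features or model used by the researchers. The sketch below only illustrates the general idea of classifying accounts from connection-derived data: the feature names, training values, and the choice of a random forest classifier from scikit-learn are all illustrative assumptions, not the SPC RAS method itself.

```python
# A minimal, illustrative sketch of connection-based bot classification.
# The features and training data below are hypothetical examples.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row describes one account through connection-derived features:
# [number of friends, share of mutually connected friends,
#  average friend account age in days, posts per day]
X_train = np.array([
    [450, 0.42, 900, 1.2],    # typical human account
    [380, 0.55, 1200, 0.8],   # typical human account
    [2900, 0.03, 40, 35.0],   # crude bot: huge, loosely connected network
    [3100, 0.02, 25, 50.0],   # crude bot
    [500, 0.08, 60, 4.0],     # disguised bot: human-like volume, odd links
])
y_train = np.array([0, 0, 1, 1, 1])  # 0 = human, 1 = bot

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Classify a previously unseen account from the same kind of features.
unknown_account = np.array([[520, 0.05, 70, 6.0]])
print("bot" if model.predict(unknown_account)[0] == 1 else "human")
```

In a real system such features would be computed automatically from the public connection graph of each account, which is why the approach does not depend on the language of posts or comments.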

"To train the neural network, we have developed special groups in social networks, where were introduced bots of different quality - both simple and those that can well disguise themselves as real users. Upon an analysis completion, we evaluated to what extent our methods correctly identify bots and cope with their disguise. The experiments have shown that our approaches can detect even the masked bots" says Andrey Chechulin.

According to Maxim Kolomeets, a project participant and Junior Researcher at SPC RAS, the effectiveness of the system was evaluated by analyzing various groups of bots and control groups of users. The bot groups included automatically created and managed bots, bots created and controlled by real users, and another group composed of hacked and abandoned accounts. The user groups included people performing actions for money and, of course, regular social network users.

"The system could be fooled by creating a truly realistic account. Even so, over time such an account will still accumulate enough anomalies for our tool to detect it. The recognition accuracy varies with the quality of the bots, from 60 to 90%, with 5-10% false positives," the researcher explained.
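The quoted figures correspond to standard detection metrics. As a point of reference only, the short sketch below shows how accuracy and a false positive rate could be computed over a labeled set of test accounts; the labels and predictions are made up for illustration and are not the study's data.

```python
# Illustrative computation of the metrics quoted above: detection accuracy
# and false positive rate over labeled test accounts (1 = bot, 0 = human).
import numpy as np

y_true = np.array([1] * 10 + [0] * 10)            # 10 known bots, 10 known humans
y_pred = np.array([1] * 8 + [0] * 2 + [0] * 9 + [1])  # classifier output

accuracy = np.mean(y_pred == y_true)                      # share of correct calls
false_positives = np.sum((y_pred == 1) & (y_true == 0))   # humans flagged as bots
false_positive_rate = false_positives / np.sum(y_true == 0)

print(f"accuracy: {accuracy:.0%}, false positive rate: {false_positive_rate:.0%}")
# -> accuracy: 85%, false positive rate: 10%
```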

The method developed by the SPC RAS researchers can identify bots, as well as evaluate their quality and roughly estimate the cost of an attack. These data can be used to investigate security incidents. "Say we are looking at the social network account of some restaurant and see a lot of negative comments there. We can identify whether they were made by bots or by real people. If they were bots, the restaurant knows that it is under attack. In addition, we can determine the quality and capabilities of the bots and understand how much money was invested in carrying out the attack. Based on these data, it becomes easier for the business to take effective countermeasures," Andrey Chechulin summed up.

The project is supported by a grant from the Russian Science Foundation.