St. Petersburg Federal Research Center
of the Russian Academy of Sciences

Staff members of the St. Petersburg Federal Research Center of the Russian Academy of Sciences (SPC RAS) have developed an application that automatically detects bots engaged in cyberattacks. The new approach primarily studies open data on how bots develop over time, based on a variety of parameters (metrics) whose analysis also makes it possible to determine a bot's type. The application could be used by companies that employ social networks for commercial purposes to uncover and counter targeted malicious activity. The results of the study are published in the international journal Social Network Analysis and Mining.

Today, bots are an important tool for the effective functioning of social networks. For example, they handle user-support chats and distribute advertising, where they can replace a fairly large team of real people by spreading information automatically.

However, bots are also used for unethical activities, for example, to inflate ratings, write fake positive product reviews, and spread misinformation. Moreover, some types of malicious bots can copy the behavior of real people quite convincingly and compose plausible text messages with the help of neural networks. This makes them extremely difficult to recognize.

“Our research team has developed an application to detect bots, which are now actively used in the competitive and reputational digital space. The development is based on a neural network that accounts for over a thousand metrics distinguishing bots from living people. Notably, these metrics relate not so much to current activity, which bots have learned to simulate well, as to how accounts develop over time,” says Andrey Chechulin, Leading Researcher of the Laboratory of Computer Security Problems at SPC RAS.

Among the metrics the scientists used to analyze a potential bot are the account's “age”, the profile description, the originality of photo and video content, the characteristics of the account's friends and their connections with the bot and with each other, and many others. “For example, a user's account can exist for many years, while a bot is built quickly and for a specific task. As a rule, a bot has no original photos. A person's account develops steadily on social networks: the person studies, works, gets married, makes friends. The dynamics of these characteristics are different for a bot, and the number of its friends and the connections between them are more chaotic,” explains Andrey Chechulin.
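To make the idea of such metrics concrete, here is a minimal sketch of how a few of them could be computed from a raw profile. The field names, profile schema, and chosen metrics are invented for illustration; the actual system described above uses over a thousand metrics and a different data model.

```python
from datetime import date

# Hypothetical profile record; the field names are illustrative,
# not the actual schema used by the SPC RAS application.
profile = {
    "created": date(2024, 11, 1),
    "has_description": False,
    "original_photos": 0,
    "total_photos": 25,
    # Friends added almost immediately after creation (bot-like burst)
    "friend_join_dates": [date(2024, 11, 2)] * 40,
}

def extract_metrics(p, today=date(2025, 1, 1)):
    """Turn a raw profile into a small metric vector (a toy subset)."""
    age_days = (today - p["created"]).days
    photo_originality = (
        p["original_photos"] / p["total_photos"] if p["total_photos"] else 0.0
    )
    # Burstiness of friend acquisition: share of friends added within
    # 7 days of account creation; bots often build their network at once.
    burst = sum(
        1 for d in p["friend_join_dates"] if (d - p["created"]).days <= 7
    ) / max(len(p["friend_join_dates"]), 1)
    return {
        "account_age_days": age_days,
        "has_description": int(p["has_description"]),
        "photo_originality": photo_originality,
        "friend_burstiness": burst,
    }

print(extract_metrics(profile))
```

A young account with no original photos and a burst of simultaneously added friends scores high on the bot-like features, matching the dynamics described in the quote above.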

To train the neural network to recognize bots, the researchers created experimental groups in social networks into which specially prepared bots of various types were introduced. The bots varied in cost, functionality, goals, and operating speed: the factors on which a bot's ability to successfully imitate real users depends. The data gathered on these bots' characteristics were used to compile the metrics on which the neural network was trained.
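The training step above can be sketched with a toy stand-in for the neural network: a single logistic unit trained by gradient descent on synthetic metric vectors. The feature layout and the data are invented assumptions for illustration; the real system is a larger network trained on metrics collected from the experimental groups.

```python
import math
import random

random.seed(0)

# Synthetic metric vectors: [scaled account age, photo originality,
# friend burstiness]. Bots skew young, unoriginal, and bursty.
bots = [[random.uniform(0.0, 0.1), random.uniform(0.0, 0.2),
         random.uniform(0.8, 1.0)] for _ in range(50)]
humans = [[random.uniform(0.3, 1.0), random.uniform(0.4, 1.0),
           random.uniform(0.0, 0.3)] for _ in range(50)]
data = [(x, 1) for x in bots] + [(x, 0) for x in humans]  # 1 = bot

# A single sigmoid unit trained with plain gradient descent.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.5
for _ in range(200):
    for x, y in data:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        p = 1.0 / (1.0 + math.exp(-z))        # sigmoid activation
        g = p - y                              # gradient of log loss
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

def is_bot(x):
    """Classify a metric vector: True if the unit predicts 'bot'."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z)) > 0.5

print(is_bot([0.05, 0.0, 0.95]))   # fresh, unoriginal, bursty profile
print(is_bot([0.80, 0.90, 0.10]))  # mature, original, gradual profile
```

Because the two classes are well separated in this toy feature space, even a single unit learns the boundary; the point is only to show how metric vectors feed a trainable classifier.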

“The experiment results showed that the new approach really works and makes it possible to extract the metrics needed to better identify the properties of targeted attacks in social networks and to analyze the evolution of malicious bots. In the future, our application could become a basis for moving from simple detection of a bot-driven cyberattack to a deeper analysis of the attacker and their capabilities,” says Andrey Chechulin.

The work was supported by RSF Grant No. 18-71-10094.

Detailed information from the Ministry of Science and Higher Education of Russia is available at: https://minobrnauki.gov.ru/press-center/news/nauka/68783/?lang=ru