UDE Study on Opinion Robots: How Social Bots Create Mood

They post and like, and yet they are not human beings. Can social bots really shape opinions, and are they even a threat to democracy?

There is no scientific evidence for this yet. An interdisciplinary team from the University of Duisburg-Essen (UDE) has conducted a virtual experiment to investigate the influence of these software robots in social media. According to the study, only a few bots are needed to steer the mood in a network. The results were published in the European Journal of Information Systems (EJIS).

Social bots are computer programs that act like real users in social media and automatically spread messages. Bots are said to be able to artificially push topics and distort debates; they can also have a political impact. This is feared, for example, in the forthcoming European elections. "The extent to which bots can influence users has so far not been proven, owing to a lack of scientific methods," explains computer scientist and project manager Björn Ross.

"We therefore simulated a network of thousands of virtual actors and assumed that opinions on a topic are split 50:50, positive and negative. In half the cases, one side gains the upper hand, even without bots," says Ross. "We know from research on the so-called spiral of silence," says co-author Dr. German Neubaum, "that people are less confident about expressing their opinions if they think they are in the minority. That is why we investigated how bots can trigger such a spiral."

What the UDE team found: even a small share of bots, two to four percent, is enough to make users in a controversial discussion prefer to stay quiet. This increases the probability that the opinion backed by the bots will prevail from 50 percent to two thirds. A false impression of the prevailing mood is created.
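The mechanism described above, agents in a network who fall silent when they perceive their side to be in the minority while a few bots keep voicing one opinion, can be sketched as a toy agent-based model. Everything below (the `simulate` function, the random network, the neighbor counts, the silence threshold) is an illustrative assumption for this article, not the study's actual model or parameters.

```python
import random

def simulate(n_agents=1000, n_bots=30, n_neighbors=10,
             rounds=50, threshold=0.5, seed=1):
    """Toy spiral-of-silence simulation: bots always voice opinion +1;
    human agents fall silent when their side seems to be in the minority."""
    rng = random.Random(seed)
    n = n_agents + n_bots
    is_bot = [i >= n_agents for i in range(n)]
    # Opinions split roughly 50:50 among humans; bots all push +1.
    opinion = [1 if is_bot[i] else rng.choice([-1, 1]) for i in range(n)]
    # Random undirected network: each node linked to a few random others.
    neighbors = [set() for _ in range(n)]
    for i in range(n):
        for j in rng.sample(range(n), n_neighbors):
            if j != i:
                neighbors[i].add(j)
                neighbors[j].add(i)
    speaking = [True] * n  # everyone starts out willing to speak
    for _ in range(rounds):
        new_speaking = []
        for i in range(n):
            if is_bot[i]:
                new_speaking.append(True)  # bots never fall silent
                continue
            voiced = [opinion[j] for j in neighbors[i] if speaking[j]]
            if not voiced:
                new_speaking.append(True)
                continue
            share = sum(1 for o in voiced if o == opinion[i]) / len(voiced)
            # Spiral of silence: stay quiet if the own side seems to be losing.
            new_speaking.append(share >= threshold)
        speaking = new_speaking
    # Net opinion among humans still speaking; > 0 means the
    # bot-backed opinion dominates the visible discussion.
    return sum(opinion[i] for i in range(n) if speaking[i] and not is_bot[i])

if __name__ == "__main__":
    wins = sum(simulate(seed=s) > 0 for s in range(20))
    print(f"bot-backed side dominated in {wins}/20 runs")
```

Running the simulation repeatedly with and without bots (`n_bots=0`) lets one compare how often each side ends up dominating the visible debate, which is the kind of comparison the study's probability figures refer to.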

Björn Ross says: "How successfully bots influence the mood depends on three factors, among others: How many connections are there between the users of a network? Where are the bots placed in the network, centrally or at the edge? And above all, are they so well programmed that they act like humans?"

The researchers say that social bots are not yet so sophisticated that they cannot be recognized. But they are likely to be gradually refined, including for undesirable purposes such as deception. Then software robots would indeed become a threat to democracy.

* "Are social bots a real threat? An agent-based model of the spiral of silence to analyze the impact of manipulative actors in social networks". The six authors of this study belong to the Research Training Group "User-Centred Social Media" and the Research Group of Professional Communication in Electronic Media/Social Media.