How next-generation bots are interfering with the US election

Social-media platforms such as Twitter were used to sow discord in the United States in the run-up to the 2016 presidential election, according to a report finalised this year by the US Senate. Russian operatives used tools such as bots — automated accounts that share content — in an attempt to deceive social-media users in the United States and sway the election in favour of President Donald Trump, the report found.

With the 2020 election now less than a week away, researchers are more worried than ever that bots will interfere. The fake accounts have become more sophisticated and harder to detect, says Emilio Ferrara, a data scientist at the University of Southern California in Los Angeles. Ferrara studies social-media bots to understand how they can change people’s beliefs and behaviours.

Question: In 2016, you found that nearly 19 per cent of all tweets related to the election that year were generated by bots [1]. How did you identify them?

Answer: We have developed machine-learning methods that look at the behaviour of an account — the language it uses and the sentiment it conveys. We look at when the account was created, how active it is and who its followers are. If an account is active 24 hours a day, seven days a week, and posts every minute, that’s a strong signature of a bot. The majority of bots are not malicious. But you can also have malicious bots that deliberately spread disinformation [falsehoods intended to deceive], as happened on Twitter before the 2016 US presidential election.
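
To make the behavioural signals Ferrara describes concrete, here is a minimal sketch of a feature-based bot classifier. The features, toy data and model choice are assumptions for illustration only; this is not the team's actual detection pipeline.

```python
# A minimal sketch of behaviour-based bot detection, in the spirit of the
# signals described above. Features, data and thresholds are illustrative
# assumptions, not the team's actual pipeline.
from dataclasses import dataclass

import numpy as np
from sklearn.ensemble import RandomForestClassifier


@dataclass
class Account:
    age_days: float        # time since the account was created
    tweets_per_day: float  # posting rate
    followers: int
    following: int
    active_hours: int      # distinct hours of the day with activity (0-24)


def features(a: Account) -> list:
    """Turn raw account metadata into a numeric feature vector."""
    return [
        a.age_days,
        a.tweets_per_day,
        a.followers / (a.following + 1),  # +1 avoids division by zero
        a.active_hours / 24.0,            # 1.0 means round-the-clock activity
    ]


# Toy labelled examples: one round-the-clock, high-volume account (bot-like)
# and one older, slower account (human-like).
X = np.array([
    features(Account(30, 500, 12, 4000, 24)),   # label 1 = bot
    features(Account(2000, 6, 800, 400, 10)),   # label 0 = human
])
y = np.array([1, 0])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

candidate = Account(age_days=10, tweets_per_day=1400,
                    followers=3, following=5000, active_hours=24)
print(clf.predict_proba([features(candidate)])[0][1])  # estimated P(bot)
```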

Question: You have analysed billions of tweets in the past few years. Have bots changed over that time?

Answer: Back in 2016, bots used simple strategies that were easy to detect. But today, there are artificial intelligence (AI) tools that produce human-like language. We are not able to detect bots that use AI, because we can’t distinguish them from human accounts. These bots survive longer on social-media platforms and can create botnets, which are networks of bots that push the same messages. To detect botnets, we have developed methods that identify accounts that appear to be synchronised: by accounting for things such as the hashtags and keywords they propagate, we can isolate them.
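
The synchronisation idea can be pictured as a pairwise similarity check over the hashtag sets that accounts push. Everything in the sketch below (account names, hashtags, the 0.9 threshold) is invented for illustration and is not the team's published method.

```python
# A minimal sketch of botnet detection via synchronised hashtag use:
# accounts pushing near-identical hashtag sets are candidate members of
# one botnet. All data and the threshold are illustrative assumptions.
from itertools import combinations

account_hashtags = {
    "acct_a": {"#vote2020", "#qanon", "#wwg1wga"},
    "acct_b": {"#vote2020", "#qanon", "#wwg1wga"},
    "acct_c": {"#vote2020", "#debate"},
    "acct_d": {"#qanon", "#wwg1wga", "#vote2020"},
}

def jaccard(s1: set, s2: set) -> float:
    """Overlap between two hashtag sets: 0 (disjoint) to 1 (identical)."""
    return len(s1 & s2) / len(s1 | s2)

THRESHOLD = 0.9  # assumed cut-off for "synchronised" behaviour
for a, b in combinations(account_hashtags, 2):
    sim = jaccard(account_hashtags[a], account_hashtags[b])
    if sim >= THRESHOLD:
        print(f"{a} and {b} look synchronised (Jaccard = {sim:.2f})")
```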

Question: How effective are bots at spreading disinformation?

Answer: In 2016, people retweeted content originated by bots at almost the same rate at which they retweeted content originated by human accounts. Today, the number of users retweeting bots has dramatically diminished. One explanation is that companies such as Twitter have got better at detecting bots and suspending them. Another explanation is that people have got better at spotting content originated by bots, so they engage less. But another possibility is that we can’t identify the more sophisticated bots, so we can’t detect when they are retweeted by human users.
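
As a rough illustration of what "retweeted at almost the same rate" means in practice, the toy calculation below compares mean retweets per tweet by origin. The figures are made up; they are not data from the study.

```python
# Toy version of the engagement comparison: mean retweets per tweet,
# split by whether a bot or a human originated the content.
tweets = [
    {"origin": "bot",   "retweets": 14},
    {"origin": "bot",   "retweets": 2},
    {"origin": "human", "retweets": 11},
    {"origin": "human", "retweets": 5},
]

for origin in ("bot", "human"):
    counts = [t["retweets"] for t in tweets if t["origin"] == origin]
    print(f"{origin}: {sum(counts) / len(counts):.1f} retweets per tweet")
```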

Question: What worries you most about bots today?

Answer: In 2016, I was worried that no one was paying attention to social-media manipulation. Today, the situation is different: there are millions of eyes on this. Governments and companies are involved in monitoring social-media platforms. My biggest concern now is, what are we doing with these platforms? Are we okay with them being incubators of misinformation? Do we want to have some regulations on them, and where should the regulations come from? If people on these platforms are exposed to unsubstantiated claims, such as ‘Covid-19 is a scam’ or ‘masks will kill you’, that can affect public health.

Question: Earlier this year, your team analysed more than 240 million tweets related to the 2020 election. Tell us about the findings you’ve just published.

Answer: Human accounts usually outnumber bots. But around certain political events (such as the national conventions of the US Democratic and Republican parties), we observed that the amount of bot activity dwarfed human activity. We also found an enormous amount of bot activity associated with conspiracy theories such as QAnon and the one depicting Covid-19 as a liberal scam. About one in four accounts that use QAnon hashtags and retweet [far-right outlets] Infowars and One America News Network are bots. (QAnon is a baseless far-right conspiracy theory alleging that a group of paedophiles is running a global child sex-trafficking ring and plotting against Trump.)
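
For readers who want to see how such a share is computed, here is a toy recomputation of the "one in four" figure using invented account records. In the real analysis, the bot labels would come from a detection model like the one described earlier.

```python
# Illustrative recomputation of the "about one in four" figure: the share
# of bot-flagged accounts among those using QAnon hashtags. The records
# are invented for illustration.
accounts = [
    {"id": 1, "uses_qanon_tags": True,  "is_bot": True},
    {"id": 2, "uses_qanon_tags": True,  "is_bot": False},
    {"id": 3, "uses_qanon_tags": True,  "is_bot": False},
    {"id": 4, "uses_qanon_tags": True,  "is_bot": False},
    {"id": 5, "uses_qanon_tags": False, "is_bot": False},
]

qanon = [a for a in accounts if a["uses_qanon_tags"]]
bot_share = sum(a["is_bot"] for a in qanon) / len(qanon)
print(f"Bot share among QAnon-hashtag accounts: {bot_share:.0%}")  # 25%
```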

Question: US intelligence agencies have warned that countries such as Russia will use bots to sow discord during this election. Does your analysis agree?

Answer: Using data on accounts banned from Twitter, we found that interference operations from China and Russia are targeting both right-leaning and left-leaning users, whereas operations from other countries, such as Ghana and Nigeria, mostly interact with left-leaning users. Some researchers think that foreign actors tend to inject themselves into fringe communities. But, at least from our analysis, it turns out that they target mainstream conservatives or liberals.
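
A simple way to picture this finding is as a cross-tabulation of each operation's country of origin against the political lean of the users it interacts with. The countries below follow the interview, but the counts themselves are invented purely for illustration.

```python
# Sketch of the targeting cross-tabulation: interactions between
# state-linked operations and users, broken down by political lean.
# Counts are invented for illustration.
from collections import Counter

interactions = [
    ("Russia", "right"), ("Russia", "left"),
    ("China",  "right"), ("China",  "left"),
    ("Ghana",  "left"),  ("Ghana",  "left"),
    ("Nigeria", "left"),
]

counts = Counter(interactions)
for (country, lean), n in sorted(counts.items()):
    print(f"{country:8s} -> {lean:5s} users: {n}")
```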

Question: What are you looking out for in the week before election day on November 3?

Answer: Manipulation events tend to occur close to an election, sometimes the week before. We limit ourselves to researching these instances and then leave it to the appropriate authorities to investigate independently. We’re keeping our eyes open for evidence of interference campaigns that have not yet been uncovered.

  • A Nature magazine report