Russian trolls experimented with different methods to maximise political disruption | E&T
A study has described in detail how employees of Russia’s Internet Research Agency (IRA) experimented with different methods in the run-up to the election of President Donald Trump in 2016.
The IRA, based in Saint Petersburg, is an agency engaged in online influence operations on behalf of the interests of President Putin. Its employees set up fake accounts and buy ads on social networks and news websites in order to spread viral deceptions, sow social division in the US and among its allies, praise Vladimir Putin, denigrate his political opponents, and bolster support for foreign leaders backed by Putin, such as Bashar al-Assad and Donald Trump.
In February 2018, a US grand jury indicted 13 Russian nationals and three Russian entities, including the IRA, on charges of committing crimes with the intent to interfere in the 2016 presidential election.
In October 2018, Twitter released a dataset relating to 3,841 accounts affiliated with the IRA and 770 other accounts potentially originating in Iran. The dataset included more than 10 million tweets in both English and Russian, and more than 2 million images, videos and GIFs dating back to 2009.
A pair of researchers – security researcher Charles Kriel, who is special adviser to MPs on disinformation and addictive technologies, and data science MSc student Alexa Pavliuc – have now presented an analysis of this dataset in Defence Strategic Communications.
Their analysis shows that the IRA trolls were able to use “innocuous hashtags” to inject themselves into broader Twitter conversations, with tactics and methods changing over time.
Analysis of the Russian-language dataset showed that IRA users experimented with various methods, such as first retweeting other accounts, then sending original tweets, or repeatedly targeting the same sets of users and hashtags. The trolls' period of greatest activity was the day following the July 2014 downing of Malaysia Airlines Flight MH17 over Ukraine, which killed all 283 passengers and 15 crew on board. Russian-language tweeting tapered off at the start of 2016 as the English-language effort increased.
Analysis of the English-language dataset showed that several automated bot networks were set up in 2012 and 2013 but only fully activated and manually controlled years later. Many of these 'sleeper' accounts went live in the spring of 2015, increasing their coverage of US topics and some UK topics.
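The 'sleeper' pattern described above — accounts created long before their first sustained activity — can be surfaced with a simple dormancy check on account metadata. A minimal sketch, using hypothetical account names and dates rather than the released dataset:

```python
from datetime import date

# Toy records of (account name, creation date, date of first sustained
# activity) — illustrative only, not the actual dataset schema.
accounts = [
    ("bot_a",  date(2012, 6, 1),  date(2015, 4, 10)),
    ("bot_b",  date(2013, 2, 15), date(2015, 5, 2)),
    ("user_c", date(2014, 8, 20), date(2014, 8, 21)),
]

def sleepers(records, min_dormant_days=365):
    """Flag accounts dormant for at least min_dormant_days after creation."""
    return [name for name, created, first_active in records
            if (first_active - created).days >= min_dormant_days]

print(sleepers(accounts))  # -> ['bot_a', 'bot_b']
```

Here the 2012/2013 accounts that only became active in 2015 are flagged, while the account that tweeted straight after creation is not.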
IRA employees also experimented with programming bots to tweet and retweet either mundane (mostly bots created in 2014) or strongly polarising (mostly bots created in 2013) content. Bots were able to maximise their follower numbers by retweeting mundane trending content (using hashtags like #myamazonwishlist, #reasonsmymomisbetter and #ifgooglewasagirl) and sports-related content, as well as by posing as local news sources such as @todaypittsburgh. This technique was also used by manually operated accounts to gain followers from a broad range of Twitter users.
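The follower-farming tactic above can be illustrated by counting how often each account piggybacks on a fixed set of trending hashtags. The hashtags come from the article; the account names, tweet texts and the helper function are illustrative assumptions:

```python
import re
from collections import Counter

# Mundane trending hashtags named in the article.
TRENDING = {"#myamazonwishlist", "#reasonsmymomisbetter", "#ifgooglewasagirl"}

# Toy (account, tweet text) rows — hypothetical examples.
tweets = [
    ("troll1", "RT @someone Socks again #myamazonwishlist"),
    ("troll1", "RT @other Mum made pancakes #reasonsmymomisbetter"),
    ("troll2", "Breaking: traffic on I-376 #pittsburgh"),
]

def trending_hashtag_counts(rows):
    """Count, per account, tweets' use of the watched trending hashtags."""
    counts = Counter()
    for account, text in rows:
        tags = set(re.findall(r"#\w+", text.lower()))
        counts[account] += len(tags & TRENDING)
    return counts

print(trending_hashtag_counts(tweets))  # troll1 scores 2, troll2 scores 0
```

An account whose timeline is dominated by such piggybacked hashtags would stand out against ordinary users in this toy measure.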
The IRA also tested spam bots to spread high volumes of URL links throughout 2015, abandoning them as they failed to gain large numbers of followers. Very few new English-language bots were created in 2016. During the autumn of 2016 (the US presidential election period), there was a build-up of tweets peaking on Election Day.
The researchers identified that the English-language IRA Twitter network seemed to have two main focuses: one weighted towards the US election and another related to #BlackLivesMatter tweets.
“Although we examine only Twitter here, in nearly every exposure of the IRA’s activities the common element of each campaign is social media amplification,” the authors wrote. “As we’ve shown, the IRA’s work is highly organised, sophisticated, and well-resourced, with as many as 1,000 employees working for them in 2015.”