By Emily Tatum
The Havighurst Center for Russian and Post-Soviet Studies continued its Colloquium lecture series on “Russian Media Strategies at Home and Abroad” this past Monday, February 25, with guest lecturer Joshua Tucker, Professor of Politics and co-founder and co-director of the Social Media and Political Participation (SMaPP) laboratory at New York University (NYU). Tucker discussed the SMaPP lab’s efforts to detect Russian bots and how bots operate in the current Russian media system.
Tucker began by posing the question, “How do authoritarian regimes respond to online opposition?” He explained that such regimes now respond by engaging on the internet in addition to relying on traditional censorship or direct repression. Through engagement, Tucker claimed, authoritarian regimes are “taking an old technique, censorship, and importing it into a new context.”
Over the past 20 years in Russia, Vladimir Putin has responded to online opposition by reshaping the Russian media system. During his first two terms as president (2000-2008), Putin quickly brought the private television industry under state control but largely left the internet and social media unregulated. After a brief interlude under Dmitry Medvedev (2008-2012), Tucker said, Putin’s return to the presidency marked “the beginning of a different story.” With the annexation of Crimea in 2014, Putin began systematically filtering media content, using media as a weapon of foreign policy.
Tucker explained that authoritarian regimes’ internet engagement can involve the use of bots, which he defines as “accounts operated by automated computer algorithms.” The SMaPP lab focuses its research on Russian Twitter accounts from early 2014 to late 2017, compiling 25 million tweets from 1.5 million Russian Twitter users drawn from the Twitter Search API. Tucker’s team ultimately created a comprehensive coding system in which every Twitter account was categorized as one of the following: official account, bot, human, cyborg, or spam.
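A five-way labeling scheme like the one Tucker describes could be sketched as a simple rule-based classifier. The rules and thresholds below are purely hypothetical illustrations, not the SMaPP lab’s actual criteria, and the account fields and handle list are invented for the example.

```python
# Hypothetical sketch of a five-way Twitter-account labeler.
# All heuristics and thresholds here are illustrative assumptions,
# not the SMaPP lab's actual classification criteria.

OFFICIAL_HANDLES = {"KremlinRussia_E"}  # assumed example list of official accounts

def classify_account(account: dict) -> str:
    """Return one of: 'official', 'bot', 'human', 'cyborg', 'spam'."""
    if account["handle"] in OFFICIAL_HANDLES:
        return "official"
    tweets_per_day = account["tweet_count"] / max(account["days_active"], 1)
    duplicate_ratio = account["duplicate_tweets"] / max(account["tweet_count"], 1)
    if duplicate_ratio > 0.8:
        return "spam"     # mostly near-identical content
    if tweets_per_day > 100:
        return "bot"      # posting rate implausible for a human
    if tweets_per_day > 20:
        return "cyborg"   # mix of automated and human activity
    return "human"

example = {"handle": "some_user", "tweet_count": 300,
           "days_active": 60, "duplicate_tweets": 10}
print(classify_account(example))  # -> human (5 tweets/day, low duplication)
```

In practice, as the lecture makes clear, such classification is far harder: the lab’s real system rests on human coding rather than fixed thresholds.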
The data collected allowed Tucker to evaluate larger trends in bot usage in Russia. Tucker explained, “You would think the real value of this data is the text, whereas it’s really the 50 pieces of metadata attached.” For example, Tucker presented a case in which a group of bots tweeted only news headlines without links to articles. Tucker’s team concluded that these bots were created to affect trending topics and to optimize Google search results.
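A pattern like the headline-only bots Tucker mentions could, in principle, be flagged with a simple heuristic over an account’s tweet texts. The function below is a minimal sketch under assumed definitions of “headline-like” (short, link-free, no first-person pronouns); it is not the SMaPP lab’s method.

```python
import re

def looks_like_headline_bot(tweets: list[str]) -> bool:
    """Illustrative heuristic: flag accounts whose tweets are almost
    entirely short, link-free, headline-style text.
    Thresholds are assumptions, not the SMaPP lab's criteria."""
    if not tweets:
        return False
    headline_like = [
        t for t in tweets
        if not re.search(r"https?://", t)          # no link to an article
        and len(t.split()) <= 12                   # short, headline length
        and not re.search(r"\b(I|my|we|our)\b", t) # no first-person voice
    ]
    return len(headline_like) / len(tweets) > 0.9

suspect = ["Duma passes new media law", "Ruble falls against dollar"]
print(looks_like_headline_bot(suspect))  # -> True
```

Note that this uses only the tweet text; the metadata Tucker emphasizes (posting times, client software, follower graphs) would give far stronger signals than any text rule.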
Tucker concluded by stating that during the 2014-2017 window, roughly half of Russian political Twitter accounts were bots, with comparable numbers of pro-Kremlin and pro-opposition bots among them. This finding counters the widely held public perception that Russian bots are primarily pro-Kremlin, and it reflects the growing prevalence and diversity of bots in the Russian domestic media landscape.
Throughout the lecture, Tucker repeatedly stated, “We don’t have ground truth.” He stressed that the SMaPP lab’s findings still rest on human classification. Nevertheless, he argued that the system the lab has built is the best currently available, relying on neither leaked data nor outside influence, and designed to be “scalable, transparent, replicable, and retrospective.” Tucker plans to extend the lab’s bot testing to other pressing opposition movements and elections across the world.
Emily Tatum is a senior majoring in International Studies with a Latin American focus and in Political Science.