Introduction
Automation forms a foundational part of the device network known as the Internet. Computerized algorithms determine what users see on their Facebook news feed or the Uber map. Near-instantaneous communication occurs via pre-arranged technical networks of nodes and edges. The rise of the Internet of Things (IoT) means that automated connection and organization will only grow, and the effects upon society are manifold. How, though, are particular values (even racist, classist, or sexist ones) programmed in at the algorithmic level of the Internet? How might digitally enabled automation be used for user manipulation? Could the devices in the IoT be used to eavesdrop on publics or predict their behavior? Could automated online personas (social bots) be used to alter public opinion?
Questions like these, and the pervasive nature of automation in our daily lives, lie at the center of this chapter. Specifically, I focus on the rise of political bots: software-driven social actors that can be used to spread propaganda and obstruct activism. I draw together in-depth analyses of the literature on political communication, science and technology studies, qualitative research methods, and bot history in order to develop a multifaceted understanding of the political economy of the political bot. Mapping continuities between political communication and the current literature on technology and politics illuminates how uses of social bot technology interact with ideas related to communication, power, and structures of agency. Analysis of the literature from science and technology studies, especially from the emerging sub-field of ethnography of information, explains how qualitative methods and historical context can help us to understand the phenomenon of the political bot.
In order to understand the ways in which historical context, political communication, and qualitative methods can help build an understanding of bots, it is first important to understand what a bot is. In the simplest sense, bots are pieces of automated software used to do tasks online on behalf of human coders. They can crawl the Internet for information on a particular topic or perform repetitive tasks that would otherwise take humans days. Social bots are a particular kind of bot, unique in that they have a front-facing function wherein they communicate with real human users. Social bots are closely related to chat bots, programs built to talk to people across numerous parts of the Internet, but differ in that they exist only on social networking services (SNS) such as WeChat, Twitter, Weibo, VK, and Facebook.
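The "fetching" kind of bot described above can be sketched in a few lines. The following is a minimal, illustrative example of the link-harvesting step a crawler bot performs; for self-containment the page HTML is hardcoded here, where a real bot would fetch it over HTTP.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects href targets from anchor tags, as a crawling bot would."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# In a real crawler this HTML would come from an HTTP fetch;
# a hardcoded page keeps the sketch self-contained.
page = ('<html><body><a href="https://example.org/one">1</a>'
        '<a href="https://example.org/two">2</a></body></html>')

collector = LinkCollector()
collector.feed(page)
print(collector.links)
```

A crawler repeats this step on each discovered link, which is how software like the Googlebot (discussed below) builds a database of sites.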
The ability to build bots, whether as automated fetching software or as chat-capable personas, is bound up in the digital divide: while anyone can hypothetically learn to code from open educational resources, the cost of equipment, the free time necessary, and the background knowledge required place limits upon the sorts of people who can develop such skills. In other words, to create bots Internet users must have a good understanding of computer code and the time and money to do so. Furthermore, a high degree of Internet literacy is required simply to identify an automated social media account (in the form of a political bot) on a site like Twitter. Politics exists in the very ability to build a social bot, let alone launch one for political means.
It's not surprising, then, that the powerful (here, those with the resources required to build, deploy, and sustain bots) can use bots in attempts to control those who do not have a sophisticated knowledge of coding and the Internet. Social bots have been harnessed by technologically adept marketers to send blatant spam in the form of automatically messaged or recycled advertising content since the beginning of Twitter and on the social platforms that came before (Chu et al. 2010). Politicians have, in the last several years, taken note of and emulated celebrity Twitter users' tactics of purchasing massive numbers of bots to significantly boost follower numbers (Chu et al. 2012). Militaries, state-contracted firms, and elected officials worldwide now use political bots to invasively spread various forms of propaganda and flood newsfeeds with political spam (Cook et al. 2014; Forelle et al. 2015).
Recent research reveals the pervasive breadth of global political bot use across online social networks (Boshmaf et al. 2011). Automated political bots have been the main tools for online astroturf (fake grassroots) smear campaigns during political moments worldwide: the US mid-term elections of 2010 (Metaxas and Mustafaraj 2012a), the ongoing crisis in Syria (Qtiesh 2011), and the 2014–2015 disputes over Crimea (Alexander 2015).
Politically oriented bots are emerging phenomena and are among the most important recent innovations in political strategy and communication technology. Bots are prevalent, and active, in social media conversations, and their presence in these spaces continues to grow. The noise, spam, and manipulation inherent in many bot deployment techniques threaten to disrupt civic conversations and organization worldwide, jeopardizing equal participation in the public sphere and democracy more generally.
A Brief History of Bots
An understanding of bot history enables insight into how political bots came to be. Bots have existed online as long as the Internet has been public. In the 1990s, they played a prominent role in Internet Relay Chat (IRC), an early online chat system. Bots on IRC appear as normal users to others on a channel. In other words, they function as chat bots, though they are mainly used to perform automated functions on behalf of other users. IRC bots perform several useful tasks: keeping the channel from being overtaken by malicious or foreign users, preventing other types of IRC warfare such as malicious file sharing, keeping the channel open while no other users are online, and controlling who is able to join. Robey Pointer is credited by some as having developed the first bot, "Eggdrop," for IRC regulation in 1993. This program is, to date, the oldest IRC bot still in active development.
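The channel-keeping tasks listed above reduce to simple message-handling rules. The sketch below is a toy illustration of that logic, not Eggdrop's actual code: the message format is heavily simplified (real IRC bots parse full RFC 1459 lines over a live socket), and the ban list and channel name are invented for the example.

```python
# Toy sketch of the channel-keeping logic an IRC bot automates.
BANNED = {"malicious_user"}  # hypothetical ban list

def handle_line(line):
    """Return the bot's response to one incoming (simplified) IRC line."""
    if line.startswith("PING"):
        # Answer server keepalives so the channel stays open unattended.
        return "PONG" + line[len("PING"):]
    if line.startswith("JOIN "):
        nick = line.split(" ", 1)[1]
        # Control who is able to join the channel.
        if nick in BANNED:
            return f"KICK #channel {nick} :banned"
        return None
    return None

print(handle_line("PING :server1"))
print(handle_line("JOIN malicious_user"))
```

Each rule corresponds to one of the maintenance tasks above: answering pings keeps the channel alive when no humans are present, and the join check controls membership.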
Throughout the 1990s and early 2000s, the number of bots active across the Internet grew exponentially. According to Stone-Gross et al. (2009), over six months of 2004, the number of new bots launched online each day rose from 2,000 to 30,000. Search engines like Google built their platforms around the ability to use bots in a connective way; the Googlebot is an Internet-crawling bot that builds out the platform's database of Internet sites. Advertisers, marketers, and scammers also began to use bots in large numbers. Automated emails and messages containing sales links on chat sites are only one way these groups have used bots to spread their content.
Newer online social platforms have mimicked IRC by building bots into their core functions. Twitter, for instance, has an open application programming interface (API) that allows coders to easily build and launch bots on the platform. Bots can act as useful tools on an SNS, and social bots have been constructed that draw attention to social issues, make jokes, and send out news. The last five years, however, have seen social bots deployed for purposes of political manipulation.
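A news-sharing social bot of the kind just described is, at its core, a small posting loop. The sketch below illustrates that shape; `send_post` is a hypothetical stand-in for a real, authenticated API call (the actual endpoints and credentials vary by platform), and here it simply records what would have been posted.

```python
# Sketch of the posting loop behind a simple news-sharing social bot.
sent = []

def send_post(text):
    # Hypothetical stand-in: a real bot would call the platform's API here.
    sent.append(text)

def run_bot(headlines, limit=3):
    """Post up to `limit` headlines, as a news-sharing social bot might."""
    for headline in headlines[:limit]:
        send_post(f"Breaking: {headline}")

run_bot(["Election results in", "New bot study published"])
print(sent)
```

The same loop, pointed at different content and run across many coordinated accounts, is what turns a benign utility into a political spam tool.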
Although socially oriented automation offers unique benefits, the use of bots and automated technology for aggressive means is not new. The development of malicious uses for bots came alongside that of more strictly utilitarian bots like Eggdrop. Both denial of service (DoS) and distributed denial of service (DDoS) attacks function via the use of bots and botnets, networked collections of bots.
It is important to draw a distinction between botnets and social bots. Both are software-driven and automated, but they diverge in the interactive arena. Whereas botnets are designed to simultaneously attack a particular Internet site or group of sites, social bots tend to be independently functioning agents designed to have front-end interactions with human users. The "bot" in botnets refers to an infected computer used to log on to a site in conjunction with other bot computers; the bot in social bot refers to an automated persona, though one that might exist in a network with other stand-alone personas.
Today, both the number of bots online and their diversity are astounding. The New Yorker sums up this trend well: "bots, whose DNA can be written in nearly any modern programming language, live on cloud servers, which never go dark and grow cheaper by the day" (Dubbin 2013, para. 1). Filippo Menczer, a professor at the Indiana University School of Informatics and Computing and one of the early researchers of politicized social bot technology, makes a similar point on the rise of the social bot: "Bots are getting smarter and easier to create, and people are more susceptible to being fooled by them" (Urbina 2013).
A Bot Timeline
1950: Alan Turing publishes "Computing Machinery and Intelligence."
1954: The "Georgetown Experiment" successfully translates over 60 Russian sentences into English automatically.
1964–1966: ELIZA, an early natural language processing program, is created.
1968–1970: SHRDLU, an early natural language processing (NLP) program that uses pre-programmed handwritten NLP rules, is created.
1972: PARRY, an early interactive chat bot, is created.
1975–1981: Several "conceptual ontologies," which translate embodied information into computational data, are created.
1984: Racter, an early automatic prose generator, is created.
1988–1989: Machine learning algorithms begin to revolutionize the field of natural language processing.
1989: IBM research s...