Drawing on research from multiple disciplines and international case studies, this book provides a comprehensive and up-to-date understanding of online disinformation and its potential countermeasures.
Disinformation and Manipulation in Digital Media presents a model of the disinformation process which incorporates four cross-cutting dimensions or themes: bad actors, platforms, audiences, and countermeasures. The dynamics of each dimension are analysed alongside a diverse range of international case studies drawn from different information domains including politics, health, and society. In elucidating the interrelationship between the four dimensions of online disinformation and their manifestation in different international contexts, the book demonstrates that online disinformation is a complex problem with multiple, overlapping causes and no easy solutions. The book's conclusion contextualises the problem of disinformation within broader social and political trends and discusses the relevance of radical innovations in democratic participation to counteract the post-truth environment.
This up-to-date and thorough analysis of the disinformation landscape will be of interest to students and scholars in the fields of journalism, communications, politics, and policy as well as policymakers, technologists, and media practitioners.
This research received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 825227.
Across the world, all spheres of life have become subject to false information, conspiracy theories, and propaganda. These information pathologies are implicated in the global resurgence of vaccine-preventable diseases, the subversion of national politics, and the amplification of social divisions. In 2018, the United Nations Human Rights Council cited Facebook as a "determining" factor in the ethnic cleansing of Myanmar's Rohingya population. A year later, the Oxford Internet Institute found evidence of efforts to manipulate public opinion in 70 countries (Bradshaw and Howard 2019). More recently, the Covid-19 pandemic brought an onslaught of conflicting reports, hoaxes, and conspiracy theories. The World Health Organisation (WHO) called it an "infodemic": an overabundance of accurate and inaccurate claims that left many people confused about what to believe. In this context, it is unsurprising that a sense of crisis has become entrenched among policymakers, scholars, technologists, and others (see Farkas and Schou 2019).
While the need to develop countermeasures for disinformation is urgent, it is also challenging on many fronts. First, there are significant conceptual difficulties surrounding definitions of the problem. Second, there are practical impediments to developing fair and consistent moderation principles for enormous volumes of online content. Third, any proposed restriction on freedom of expression is necessarily accompanied by legal, ethical, and democratic reservations. Fourth, communication technologies are constantly evolving, which makes it difficult to design countermeasures that will be effective for the future. Fifth, and perhaps most crucially, there are major gaps in our understanding of the problem owing to the nascency of the research area and the lack of access to the platforms' data. As a result, there is broad agreement that something needs to be done, but there is far less clarity about what that should be.
Undoubtedly, our current digital age is predisposed to a "shock of the new" whereby digital media phenomena can seem more radical than they are because we untether them from their historical precedents. Taking a long view of human history, there is nothing new about disinformation. To take one example, "The Protocols of the Elders of Zion" emerged from Russia in 1903 and, in the guise of a leaked document, appeared to reveal a Jewish plot for global domination. It gained international traction through the endorsement of major public figures, including the US industrialist Henry Ford, and through news media coverage and the distribution of pamphlets. As with disinformation generally, it is difficult to delineate the direct effects of the document, but two important lessons can be drawn from this case: successful disinformation amplifies existing prejudices and relies on structures of communication power and influence.
There is much to be gained by adopting a historical understanding of disinformation (see Cortada and Aspray 2019). Nevertheless, while cognisant of historical continuities, we argue that online disinformation represents a fundamental change. The affordances of digital platforms, with their design features, business models, and content policies, distinguish contemporary disinformation from its historical precursors. Digital media have unprecedented consequences in terms of the scale and speed at which disinformation is dispersed as well as the range of content types and platforms in which it is manifest. While the motivations that lie behind the production and consumption of disinformation may not have changed substantially over time, the rapid evolution of digital platforms has created new opportunities for bad actors while leaving regulators struggling to keep pace.
All this is predicated on the wider "platformization" of economic, political, and social life (Plantin and Punathambekar 2019). Entire sectors have become institutionally dependent on the major online platforms. The news media is an important case in point. By dominating how people access information, the platforms became integral to news publishers' distribution strategies (Cornia and Sehl 2018). However, the relationship was fundamentally asymmetrical. News publishers were subject to unpredictable changes in platform policies, such as changes to recommendation algorithms, and were largely unable to monetise the content they created. Meanwhile, the platforms, Google and Facebook in particular, came to dominate online advertising, largely thanks to their ability to collect data from users who enjoyed free access to content. Because these conditions contributed to a dramatic decline in the news media's advertising revenue, finding ways to support high-quality journalism has become a major consideration within the broader effort to counteract online disinformation.
Of course, the challenges faced by journalism are just one contributing factor to the proliferation of online disinformation. The key aim of this book is to provide an overarching context for understanding this multifaceted and evolving problem. In what follows, we present our model of the online disinformation process and its potential mitigation.
The components of online disinformation
Reduced to its basic constituents, successful online disinformation is a process involving different actors and consecutive stages (see Figure 1.1). We model online disinformation in terms of the bad actors who create and push manipulative content, the platforms that enable the distribution and promotion of this content, and the audiences who give it meaning and impact through their willingness to engage with it. Of course, any given scenario of online disinformation is more complex than this basic model suggests, and we elucidate this complexity in the succeeding chapters by interrogating each component of the process. Nevertheless, we suggest the value of this model is that it allows us to simultaneously map and assess various countermeasures as efforts to intervene in different stages of the online disinformation process. In so doing, we emphasise the need for a multi-pronged approach, and the concluding chapter takes this further to argue that countermeasures are likely to be ineffective unless they are accompanied by broader efforts to address deep-seated issues relating to public trust and democratic legitimacy.
Figure 1.1: The online disinformation process
The first stage in the process involves the so-called bad actors who create and push online disinformation. Bad actors may be defined collectively by their common intention to deceive or manipulate the public, but it is important to recognise that the nature of bad actors is multifarious. To date, much of the scholarly and journalistic attention has focused on state-backed bad actors in the political domain; primarily on Russia's Internet Research Agency. As outlined in Chapter 2, we are also interested in the broader range of bad actors who are intent on misinforming the public or subverting public debate. A nuanced understanding of bad actors is complicated by the fact that much of what we know is derived from leaks and investigative journalism. Moreover, robust investigations, whether academic, journalistic, or parliamentary, have been hampered by a lack of useful data from the platforms (Boffey 2019). Nevertheless, we suggest that a broad understanding of bad actors may be derived by assessing: who they are or represent (e.g. states, corporations, social movements); their primary motivations (e.g. political, financial, ideological); and, of course, their tactics (e.g. creating deceptive content, influencing media agendas). The answers to these questions are typically inferred from the digital traces left online by bad actors; that is, by analysing disinformation content and how it has propagated through online networks. This brings us to the second component of our model, the platforms, as the strategies and tactics of bad actors take shape in line with the affordances of digital platforms.
The infrastructures of the platforms facilitate disinformation and incentivise low-quality content in many ways. As noted above, platform advertising models have had a detrimental impact on professional news. They also allow bad actors to monetise their disinformation. In addition, recommendation algorithms appear to have "filter bubble effects" that amplify existing biases and potentially push people towards more extreme positions (Hussein et al. 2020). Recommendation algorithms aim to provide users with relevant content by grouping them according to their shared interests. This approach is relatively benign when those interests centre on sports and hobbies, but the implications are severe when those interests are defined by conspiracy theories and hate. More generally, the platforms' engagement metrics (likes, shares, and followers) incentivise attention-grabbing content, including clickbait journalism and hoaxes. These metrics can be manipulated by bad actors who piggyback on trending content and use false accounts and automated bots to inflate the popularity of content (Shao et al. 2018).
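The manipulation dynamic can be made concrete with a minimal sketch. The following hypothetical Python example illustrates how coordinated fake accounts can inflate the engagement signals that an attention-based ranking rewards; the scoring weights, post data, and bot behaviour are our own illustrative assumptions, not any platform's actual system.

```python
# Hypothetical sketch: how bot amplification games an engagement-based ranking.
# The weights and behaviour below are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class Post:
    title: str
    likes: int = 0
    shares: int = 0


def engagement_score(post: Post) -> int:
    # Toy ranking signal: shares weighted more heavily than likes,
    # reflecting the common assumption that reshares signal stronger interest.
    return post.likes + 3 * post.shares


def bot_amplify(post: Post, n_bots: int) -> None:
    # Each fake account likes and reshares the post once, directly
    # inflating the very metrics the ranking rewards.
    post.likes += n_bots
    post.shares += n_bots


organic = Post("Local council publishes budget report", likes=120, shares=15)
hoax = Post("Miracle cure suppressed by doctors", likes=40, shares=5)

bot_amplify(hoax, n_bots=200)  # coordinated inauthentic boost

ranked = sorted([organic, hoax], key=engagement_score, reverse=True)
print([p.title for p in ranked])  # the hoax now outranks the organic post
```

In this toy setting, a few hundred coordinated accounts are enough to push the hoax above legitimately popular content; real manipulation operations exploit the same basic logic at far greater scale and with more sophisticated evasion.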
Nevertheless, receptive audiences are arguably the most important component of the process. After all, disinformation only becomes a problem when it finds a receptive audience that is willing, for whatever reasons, to believe, endorse, or share it. Understanding what makes audiences receptive to disinformation, and in what circumstances, is therefore crucial. Many researchers are trying to answer this question, and what they find is a complex overlap of factors relating to biased reasoning and the triggering of negative emotions such as fear and anger. These tendencies are amplified on social media, where our attention is perpetually distracted. Moreover, quite apart from any bias on the part of the individual, repeated exposure to disinformation can increase perceptions of credibility over time (De keersmaecker et al. 2020; Fazio et al. 2015). Thus, reducing exposure to disinformation and providing support to help audiences evaluate content have been at the forefront of efforts to mitigate disinformation.
There are ongoing debates about how to counteract online disinformation without undermining freedom of expression. Since 2016, a wide range of technological, audience-focused, and legal and regulatory interventions have been proposed (see Funke and Flamini 2020). Technological interventions aim to advance the ability to detect and monitor disinformation. For their part, the platforms have variously taken action to reduce the visibility of certain content, but face calls for more radical action to improve transparency and accountability. Within the media and educational sectors, there has been a rapid growth in verification and fact-checking services and a renewed focus on media and information literacy. Legal and regulatory interventions are perhaps the most controversial, ranging from new laws prohibiting the spread of false information to proposals for the regulation of the platforms. Authoritarian states and democratic states that are "backsliding" into authoritarianism are both exploiting concerns about disinformation to silence critics and increase their control over the media. For example, Hungary recently introduced emergency Covid-19 measures that permit prison terms for publicising disinformation (Walker 2020). These and similar bills are widely criticised for their potentially chilling impact on freedom of expression, and such cases accentuate the need for international leadership to protect fundamental rights and freedoms.
Conceptual approach
This book adopts an international and multi-disciplinary perspective on online disinformation and its mitigation. In this growing research area, important empirical insights are emerging from multiple disciplines including communication studies, computer science, cognitive science, information psychology, and policy studies. At the same time, technologists and investigative journalists are deepening our understanding of the problem, and a range of actors are developing new initiatives and countermeasures. While grounded primarily in communication studies, we draw on developments in all of these areas to provide a comprehensive and up-to-date understanding of the disinformation environment.
Throughout the book, we utilise a selection of international case studies that represent different information domains including politics, health, and social relations. While there are valuable studies of disinformation within specific countries (primarily the US) and thematic areas (primarily politics), we present a wider perspective in order to elucidate the dynamics of the disinformation process. Context is vital. The architectures, interfaces, moderation mechanisms, and participatory behaviours of social media platforms are neither static nor universal (see Karpf 2019; Munger 2019). Rather, they are temporally situated and patterns of audience engagement are relative to their media, political, and social contexts. It follows that the dynamics of online disinformation are highly variable and understanding this variability is essential for assessing the threats and developing effective countermeasures. In elucidating the interrelationship between the four key components of the online disinformation process and their manifestation in different international contexts, we emphasise that online disinformation is a complex problem with multiple, overlapping causes and no easy solutions.
Throughout the book, we use the term online disinformation rather than the more popular term "fake news". The latter is a specific subset of disinformation, and the term is already polluted through its invocation as a term of abuse for the news media. Nevertheless, we note that current definitions of the problem are broad, encompassing disinformation, "fake news", manipulation, propaganda, fabrication, and satire (see Tandoc et al. 2018). In part, this definitional confusion is a consequence of the variety of forms and genres in which disinformation is manifest. It may appear as news articles, memes, or tweets, and its substantive content can range from the complete fabrication of facts to their distortion or decontextualisation (Wardle and Derakhshan 2017). We take the view that it is not necessarily helpful to think in strict terms of true and false or fake and real. Disinformation is often multi-layered, containing a mix of verified, dubious, and false statements. Moreover, in many cases, the distinction between disinformation and ideological opinion may be difficult to define because "political truth is never neutral, objective or absolute" (Coleman 2018: 157). Ultimately, we suggest the threat of disinformation has less to do with individual claims than with the cumulative assault on trust and evidence-based deliberation.
Book outline
Following this introduction, successive chapters focus on each element of our disinformation process model: bad actors, platforms, audiences, and countermeasures. The second chapter examines the bad actors who produce and distribute disinformation. With a specific focus on disinformation about politics, climate change, and immigration, we examine different types of actors, their motivations, and the tactics through which they seek influence. Ultimately, we suggest that fo...