Does alcohol protect you against a virus?
The English-language Wikipedia article now named "COVID-19 misinformation" has a version history going back to early February 2020, a good month before the World Health Organisation (WHO) even declared the outbreak a pandemic (Wikipedia 9-9-2021). Since then, the article has grown long and rich in content, and similar articles exist in over 30 other languages on Wikipedia. It includes well-known and widespread conspiracy theories, such as the claim that the spread of the virus is caused by electromagnetic fields from 5G mobile networks, as well as less popular ones, such as the idea that the virus arrived in the Wuhan region of China with a meteorite.
The article also contains a section on the misinformation that drinking pure alcohol supposedly protects people against the virus. This false claim has led to hundreds of deaths in Iran as a result of people drinking methanol (Islam et al., 2020). CNN reports that the former US President Donald Trump alone made 654 false claims about the virus in just 14 weeks during spring 2020 (Dale and Subramaniam, 2020). Some of the false information circulating about the virus is easy to debunk; for example, the supposed danger of 5G mobile communications masts, an idea that has resulted in sabotage of such masts and threats against people working on the network. The idea that cocaine use can prevent COVID-19 was at one point so widespread that the French Ministry of Health found itself officially refuting the claim. Other theories or false claims arise from a lack of context. This complicates comparison between different countries and leads to certain explanations or statements changing their meaning when transferred from one country or context to another. Some theories are based on unsubstantiated correlation, while yet others might even start from actual science-based observations, which are then interpreted and developed further in ways that are unsupported by the scientific study in question. What Wikipedia's taxonomy of COVID-19 misinformation and similar taxonomies show is not so much how many and which different types of misinformation and conspiracy theories exist about COVID-19, but rather that what we call misinformation is by no means a homogeneous entity, and neither are so-called conspiracy theories. We will return to this important point in due course.
Before the WHO classified the outbreak of the then-novel coronavirus as a pandemic, and even before the virus and the disease it causes had been assigned their official names, SARS-CoV-2 and COVID-19, the organisation issued a warning about the new coronavirus being "accompanied by a massive 'infodemic'" (WHO, 2020). This, they explained, is "an over-abundance of information – some accurate and some not – that makes it hard for people to find trustworthy sources and reliable guidance when they need it" (ibid.). Since then, talk of an infodemic has joined the older image of information overload (Bawden and Robinson, 2020) in public debate and media reporting. Infodemic makes for a powerful metaphor, yet it invites problematic oversimplification of a complex social phenomenon by biologising it (Simon and Camargo, 2021). The image fuses well with already established analogies and images, such as those of computer viruses and of memes or other content going viral on social media, and engenders new ones, including the notion that people can inoculate themselves against fact resistance, that psychological vaccinations are possible, or that so-called "fake news" can be stopped by creating herd immunity to it. The term "fake news", in particular, has come to be used by politicians and other public figures to describe news they reject or to discredit positions they disagree with. In the current debate it is best understood as a "floating signifier", as Farkas and Schou (2018) suggest, which is also why we refrain from using the notion as an analytical concept.
That said, if acted on, certain types of incorrect health information can be directly lethal. For example, the US Centers for Disease Control and Prevention (CDC) reported deaths that had resulted from people drinking hand sanitiser in attempts to treat or prevent COVID-19. Drinking urine, on the other hand, may not be lethal or even dangerous in itself, but using it to treat COVID-19 can still set in motion chains of events leading to deadly outcomes, and it adds to an already ongoing destabilisation of trust in healthcare professions and government institutions more broadly (Islam et al., 2020).
Crisis, co-constitution, and digital culture
Paradoxes of Media and Information Literacy: The Crisis of Information is a book about media and information literacy in digital culture. The word crisis marks a turning point; in its original Greek meaning it denotes the turning point of a disease. While we want to be careful not to simplify complex, multi-layered social phenomena using poorly understood medical metaphors, there is a certain appeal in describing the extreme volatility of information that characterises contemporary society as just that: a crisis and a potential turning point. The appeal lies in the way that crisis, in such an understanding, conjures up hope on the one hand, while leaving open the possibility of different outcomes on the other. Further, a crisis is not a sudden event that comes out of nowhere, but one that has multiple causes and different developments and that, regardless of how it unfolds, inevitably leaves traces or even scars and opens up new paths. The crisis of information cuts through numerous social arrangements and reveals (and also challenges) their interdependence in new and profound ways. In the process, many taken-for-granted assumptions – not only about what counts as information, but also about how information should be produced, questioned, organised, and sustained – are being challenged. But what are the most tangible facets of this crisis, and how can they be traced out for our exploration of media and information literacy?
During the COVID-19 pandemic, the increased volatility of information has become ever more palpable. Rumours, governmental information, research reports, pre-prints, policy documents, official statistics, research data, journal articles, and conspiracy theories all amalgamate on the same platforms, detached from their original contexts and imbued with different meanings. Often, it is nearly impossible to confidently establish the origin or status of the fast-changing, continuously updated information that is aggregated in social media feeds, in search engine results, or even in spreadsheets. Fragmentation describes how a complex body of knowledge is arranged into a continuously shifting shape provided by networks of ever fewer corporate information platforms. Not only is information becoming more fragmented, but access to information is also becoming more individualised, depending on, for example, who you are, who you follow, and who you interact with. The extent of this personalisation and its societal effects are invisible to the individual user. Nobody knows for sure what others encounter: which unique combination of apps, search engines, and social media makes up other people's information ecosystems, how they arrive at which terms to enter into which search engine, or how the feed will reconfigure itself on the next reload, in response to the next swipe, after the next search or engagement. Positioning the fragmentation of information at the level of experience highlights how intimately the ongoing fragmentation of the collective understanding of society's knowledge base relates to the information infrastructure within which everything from everyday life to politics plays out. Both the foundations of trust and the possibilities for the creation of shared meaning are called into question.
This points to a number of interrelated challenges that contribute to the destabilisation of information – fragmentation, individualisation, emotionalisation, and the erosion of the collective basis for trust – all of which are clearly not created, yet are exacerbated, by society's commercial, algorithmic information infrastructure and by a qualitatively new form of politicisation of information, in which information is increasingly tailored to the form provided by multi-sided platforms and their specific logic of amplification. The importance of algorithms, user data, and increasingly AI-based systems for contemporary culture, and more specifically for multi-sided platforms such as search engines, recommender systems, intelligent household assistants, streaming services, or dating apps, cannot be overstated. It is becoming increasingly important to understand how algorithmic systems work and how they are trained to perform in specific situations, while at the same time they are becoming ever more elusive and ever more deeply embedded in society and everyday life at all levels. One way to approach this is to use the term "digital culture", not as a historical period with a clear definition at the expense of other understandings of culture and society, but as a perspective that allows us to foreground the prevalence of certain socialities and ways of knowing and being in the world that are embedded in the organisation of society at different levels. To borrow from media researcher Ted Striphas, "algorithmic culture" is one in which "human beings have been delegating the work of culture – the sorting, classifying and hierarchizing of people, places, objects and ideas – to data-intensive computational processes" (Striphas, 2015, p. 396; see also Lloyd, 2019).
Digital culture as a perspective takes this into account, but it also attends to how algorithms, along with the platforms in which they are embedded and the data extracted from users, exist within wider rationalities and programmes of social change (see also Beer, 2017).
Like all knowledge organisation and information systems, multi-sided platforms, algorithms, and data are always contingent and never impartial. In other words, algorithmic configurations and multi-sided platforms are active and co-constitutive of the wider social fabric. Co-constitutive implies not only that this is not a one-way relation, but also that people, society, algorithms, data, platforms, regulators, and so on are all constitutive of each other (Barad, 2003; 2007; Orlikowski and Scott, 2015). For the purpose of understanding society from the vantage point of digital culture, they cannot be meaningfully separated. People create meaning from platforms and algorithms in the various practices they are involved in, but they also resist them in more or less consequential ways. Platforms and their algorithms do not just influence people in one direction; through their interactions with these systems, people also change them. Fragmentation, individualisation, and emotionalisation are integral to how commercial algorithmic information systems operate, at least in the sense that the categorisations applied to cluster, target, and extract data from people are intentionally invisible to those subjected to them. Trust in public knowledge, on the other hand, requires collectively shared and societally accepted methods for producing, challenging, and vetting knowledge, which fragmentation obscures and undercuts.