Automated Media
eBook - ePub

Mark Andrejevic

172 pages
English
ePUB (mobile friendly)
Available on iOS & Android

About This Book

In this era of pervasive automation, Mark Andrejevic provides an original framework for tracing the logical trajectory of automated media and their social, political, and cultural consequences.

This book explores the cascading logic of automation, which develops from the information collection process through to data processing and, finally, automated decision making. It argues that pervasive digital monitoring combines with algorithmic decision making and machine learning to create new forms of power and control that pose challenges to democratic forms of accountability and individual autonomy alike. Andrejevic provides an overview of the implications of these developments for the fate of human experience, describing the "bias of automation" through the logics of pre-emption, operationalism, and "framelessness."

Automated Media is a fascinating and groundbreaking new volume: a must-read for students and researchers of critical media studies interested in the intersections of media, technology, and the digital economy.


Information

Publisher: Routledge
Year: 2019
ISBN: 9780429515774
Edition: 1

1 The Subject of Automation

The inventor, futurist, and Google guru Ray Kurzweil has secular fantasies of immortality and resurrection. Not only does he take “thousands of dollars” of vitamins a day to help him live until the technology is available to upload his consciousness into a machine (Blodget 2015), but he is also collecting information about his deceased father so that his father can be reincarnated in the form of an AI (artificial intelligence). Kurzweil imagines that some combination of archival material and machine learning will be able to construct a digital version of his father that he can converse with – perhaps forever – if he succeeds in surviving until the “singularity” (the moment when human and machine consciousness merge). Until then, he believes his paternal AI will be, for conversational purposes, not just an accurate reproduction of his father but, as he puts it, “more like my father than my father would be” (Berman 2011). At first glance, such a formulation sounds like little more than the hyperbole of a practiced futurist, albeit with an interesting post-Oedipal psychoanalytic twist. However, there is a lurking psychoanalytic insight in this formulation: the subject is, in an important sense, non-self-identical; that is, there are aspects of the subject that remain unavailable to it. The paradox, then, is that a self-identical digital subject would not be like itself at all – it would be “more” coherent and fully specified than an actual subject. To say that a data-driven simulation might be “more like” someone than they are themselves is to suggest that it would be more consistent, perhaps living up to some idealized image of the self that the subject itself was unable to attain. However, if a subject is, in important ways, constituted by its gaps and inconsistencies, the attempt to “perfect” it amounts to an attempt to obliterate it. That Kurzweil would aspire to this version of perfection is unsurprising, given his goal of achieving digital immortality by shedding the spatial and temporal limits of subjectivity.
Perhaps predictably, the promise of technological immortality is inseparable from that of automation, which offers to supplant human limitations at every turn. When it comes to the fate of the subject, the forms of automation at stake are not simply mechanical (as in the factory) but informational. Creating a digital model of individuals relies on automated forms of data collection and information processing facilitated by digital media technologies. Automated media thus anticipate the automation of subjectivity. Consider, for example, the familiar promise of data-driven target marketing: that with enough information, marketers can fulfill our needs and desires before we experience them. Predictive policing systems draw upon burgeoning databases to target crime at its moment of emergence. “Smart” interfaces are preparing to monitor the rhythms of our daily lives to track minor deviations that signal and anticipate shifts not yet detectable to us: signs of aging or depression, happiness, or illness. These developments amount to more than matters of convenience – more than the notion that automation might more effectively “serve” subjects by anticipating their wants and needs or, on the other hand, secure society more efficiently by pre-empting anti-social desires. Rather, they address a perceived problem: the moment of uncertainty, unpredictability, inconsistency, or resistance posed by the figure of the subject. The problem stems from the fact that subjects can be unpredictable, recalcitrant, and otherwise irrational in ways that threaten systems of control, management, and governance. An automated subject would allow a fully automated society to run smoothly and frictionlessly – whereas actual subjects threaten to gum up the works. This is a familiar sentiment, for example, with respect to the deployment of self-driving cars, whose promoters identify the unpredictable behavior of humans as a stubborn obstacle.
There is an additional psychoanalytic twist to Kurzweil’s account: the post-Oedipal attempt to resuscitate the figure of a father without an unconscious. The automated, immortal father reflects a version of AI as the data-devouring big “other” (deus quod machina) that can make sense of the world in ways humans themselves cannot. In the face of the welter of automatically generated data that permeates the information landscape, the AI foregrounds the limitations of human information processing. The technological fantasy of automated information processing is that for the first time in human history, instead of simply conceding the impossibility of absorbing and making sense of all available information or relegating the process to an inaccessible metaphysical position, humans anticipate the possibility of building such a position for themselves – and putting it to work in the form of an enslaved mechanical god. The lure of such a prospect is that the perspective of totality would no longer need to be taken on faith but could be built out in reality. Finally, all can be known: no more doubts about human or natural risks; climate change can be definitively measured; we will no longer have to rely on the vagaries or deceptions of testimony but can go directly to the data record. Kurzweil’s paternal fantasy writ large is the computerized installation of a real big “Other” – not imagined or symbolic, but actual.

Social De-skilling

Kurzweil is, admittedly, something of an outlier, despite his prestigious position at Google – but only in the sense that he has taken contemporary logics of automation to an extreme while nonetheless remaining true to their inherent tendencies. His fantasy of the “singularity” as the end of finitude and thus subjectivity captures a recurring theme of the contemporary information society: the promise that, with enough information, anything can be automated, including, and perhaps especially, the subject. We have become familiar with this logic of specification in the era of large-scale data mining of personal information: it manifests itself in the claims by marketers to know what we want “better than we do ourselves”; in the deployment of a growing range of automated screening systems to determine whether we are likely to be good (or bad) students, employees, or citizens; whether we can be viewed as risks or opportunities, assets or liabilities. The subject is a target of automation because of the role it plays in consumption (as a desiring subject); in production (as the locus of labor and creativity); in politics (as voter, protester, or subversive); and in security (as both victim and threat). All of these roles mark potential points of friction or resistance to the acceleration of social and economic processes that shape contemporary life. Digital platforms have made it possible to create and circulate information at an increasingly rapid pace, transforming the processes of both consumption and production.
We are experiencing what might be described as the reflexive stage of what James Beniger (1999) called a “control revolution.” Beniger describes the ways in which electronic information systems helped rationalize the circulation and distribution of products to keep pace with their manufacture. Early forms of automated production combined with bureaucratic rationalization resulted in a “flood of mass produced goods” that, in turn, drew on steam-powered transport that required “a corresponding infrastructure of information processing and telecommunication” (17). This production-driven process flowed outward from the factory floor as the speed and volume of manufacturing increased. Mechanized transport assisted in the circulation of both raw materials and finished commodities, but these in turn required new information control systems, including the telegraph and eventually digital communication technology. Media systems served not only as the “nervous system” for production and transport but also as the means of promoting consumption via publicity and advertising. With the rise of mass customization and a growing range of information services, we have reached the point at which increasingly detailed information about consumers, some of it generated by metadata about their behavior and communications, is fed directly back into the production process. Now the realms of consumption and sociality generate information products that flow back into factories and advertising firms to further rationalize production and advertising. Notionally, the automation of production would be complemented by that of consumption in a self-stimulating spiral.
In the industrial era, the focus on production foregrounded the figure of automated labor: the robot whose physical force, speed, endurance, and reliability promised to outstrip its human predecessors. However, as automatically generated information comes to play a central role in the rationalization of production, distribution, and consumption, artificial intelligence “robotizes” mental labor: it promises to augment or displace the human role in communication, information processing, and decision-making. AI resuscitates the promise of automation in the mental sphere: to be faster, more efficient, and more powerful than humans. The activities that are automated in this context are not forms of physical labor, like welding or drilling, but of informational and communicative work: collecting, sorting, and processing information to generate correlations and decisions: the work of what Robert Reich dubbed the “symbolic analysts” (1992). These are meaning-making processes and rely on an understanding of what counts as relevant information, accurate understanding, and effective judgment. Such processes are distinct from forms of physical work, which can be automated without necessarily reconfiguring conceptions of significance and representation. Nevertheless, there is a tendency, perhaps an artifact of earlier discourses about automation, to portray human limitations as easily surpassed by automation. According to such accounts, the current generation of humans is to AI as John Henry was to the steam-powered drill: “In the next six years, AI will be able to translate better and quicker than humans. Within ten years, they will start replacing truck drivers … Need an essay written? Turn to AI” (Calderone 2018) or, as Newsweek once put it, “AI and automation will replace most human workers because they don’t have to be perfect – just better than you” (Shell 2018).
Mental production is analogized to physical production: both can be sped up by augmenting or replacing humans with machines. However, the speed bump in conceptual processes is not simply the pace of human calculation (as it is in physical production) but also the complexities introduced by desire and judgment – the internal tensions that accompany the divisions in the subject: between conscious and unconscious, individual and collective, culture and nature. Automating communication processes therefore requires reconfiguring the subject: making it more like itself than it actually is, to borrow from Kurzweil’s formulation. Typically, automation results in the abstraction of a task away from the motivations and intentions in which it is embedded. Thus, examples of automated “intelligence” tend to sidestep the reflexive layer of subjectivity in order to focus on the latest computer achievements: the fact that machines can now beat us in chess, Go, and some computer games. But there is little talk about whether the machines “want” to beat us or whether they get bored or depressed by having to play creatures they can beat so easily when there are so many other things they could be doing. That such observations seem absurd indicates how narrowly we have defined human subjective capacities in order to set the stage for their automation. We abstract away from human desire to imagine that the real measure of human intelligence lies in calculating a series of chess moves rather than inventing and popularizing the game in the first place, entertaining oneself by playing it, and wanting to win (or perhaps letting someone else win). Such activities really lie at the heart of whatever we might mean by human intelligence, although they fall by the wayside when we consider examples of machine “intelligence.” Perhaps this omission derives from the way we think about industrial automation. We are not interested in attributing intelligence or desire to industrial machines or drawing upon them as models of cognition and judgment: they remain embedded in a familiar division of labor between mental and manual, planning and execution, that absolves them from anything akin to intelligence or cognition.
To make this observation, however, is not to concede some inherent division between mental and material. The implementation of robotics relied on a long history of the de-skilling, de-socialization, and standardization of labor that helped drive down costs and routinize production processes to the point that they could be taken over by machines (Braverman 1998). The mental and material had to be systematically and forcibly separated to facilitate industrial automation. The production processes had to be disembedded from traditional labor relations, stripped of their social and mental character, and reconfigured as forms of unthinking, rote repetition. By the same token, information practices need to be de-socialized to pave the way for their automation. Communication and the subject have to be pried apart from one another, a process that I will examine over the course of the following chapters.
In other words, it is possible to trace, in the communicative realm, a trajectory parallel to the social de-skilling of physical labor. This is not uniform across professions and practices, but it manifests itself in the communication and information systems that facilitate social fragmentation and the automation of information collection, processing, and response. In the academic realm, for example, the rise of “course software” that links to readings, administers quizzes, and calculates grades encourages the standardization of instructional procedures. In many cases, such as plagiarism detection, this software already relies on algorithmic sorting. As these platforms take over, they will likely rely upon increasingly intensive forms of “preprocessing” on the part of instructors and on new dimensions of automated data collection about students. The same can be said for college applications, job applications, health care forms, and a growing number of online forms that feed into bureaucratic systems. Relatedly, the standardized formats of many social media posting systems facilitate their algorithmic sorting and processing. It is much easier to apply sentiment analysis to a 140-character tweet than to, say, an essay or a letter. As we participate in “preprocessing” information (by filling out a growing array of online forms) and place our content in the cloud, these practices come to feel like steps on the path to automation. They systematically fragment and standardize the components of expression and evaluation, and in so doing run the danger of eroding underlying forms of coherence and overarching logics that cannot be broken down into their constituent bits.
The automated collection and processing of data promises to achieve what, drawing on Marxist terminology, we might describe as the perfection of “real subsumption.” As David Harvey (2018) puts it, “real” subsumption hinges on the entry of surveillance and rationalization into the labor process – and thus on the rise of monitored factory spaces and waged labor. According to this account, hourly wage labor was unfeasible for work that took place in the home because of its unsupervised character. Early forms of home production, then, were compensated by the piece rather than the hour. Only when labor migrated into the supervised space of the factory enclosure could hourly wages be instituted. Since supervisors also had to be paid, these spaces were necessarily large enough to allow as many people as possible to be supervised by as few as possible (the guiding principle of panoptic surveillance).
The difference between formal and real subsumption lies in the fact that the latter reorganizes the labor process internally. The assembly line, for example, and the rise of scientific management provide examples of the reconfiguration and rationalization of the labor process: the worker’s every movement is subject to both monitoring and management in the name of extracting maximum value from each hour of wage labor. The rationalization process reconfigures the physical actions, postures, and dispositions of the worker. The process of real subsumption is an ongoing one, facilitated by developments in monitoring technologies and innovations that reconfigure the labor process. Consider, for exampl...
