Augmented Exploitation

Artificial Intelligence, Automation and Work
About this book

Artificial Intelligence is a seemingly neutral technology, but it is increasingly used to manage workforces and make decisions to hire and fire employees. Its proliferation in the workplace gives the impression of a fairer, more efficient system of management. A machine can't discriminate, after all. Augmented Exploitation explores the reality of the impact of AI on workers' lives. While the consensus is that AI is a completely new way of managing a workplace, the authors show that, on the contrary, AI is used as most technologies are used under capitalism: as a smokescreen that hides the deep exploitation of workers. Going beyond platform work and the gig economy, the authors explore emerging forms of algorithmic governance and AI-augmented apps that have been developed to utilise innovative ways to collect data about workers and consumers, as well as to keep wages and worker representation under control. They also show that workers are not taking this lying down, providing case studies of new and exciting forms of resistance that are springing up across the globe.


PART I

Making It

1

AI Trainers:
Who is the Smart Worker Today?

Phoebe V. Moore

Most scholarly and governmental discussions about artificial intelligence (AI) today focus on a country’s technological competitiveness and try to identify how this supposedly new technological capability will improve productivity. Some discussions look at AI ethics. But AI is more than a technological advancement. It is a social question and requires philosophical inquiry. From the time of the Victorians who built tiny machines resembling maids, to the development of humanoid carebots such as are seen in Japan today, we have been reifying machines with our characteristics. Malabou (2015) discusses the cyberneticians’ assumptions that intelligence is primarily associated with reason as per the Enlightenment ethos. Indeed, the cyberneticians’ fascination with similarities between living tissue and nerves and electronic circuitry ‘gave rise to darker man-machine fantasies: zombies, living dolls, robots, brain washing, and hypnotism’ (Pinto 2015: 31). Pasquinelli (2015) argues that cybernetics, AI and current ‘algorithmic capitalism’ researchers believed and still believe in instrumental or technological rationality and the ontological and epistemological determinism and positivism that permeate these assumptions. The mysticism and curiosity about how smart machines can be, and how this is manifest, predates our current era. But unlike in the first stages of AI research, where scholars such as Hubert Dreyfus (1979) directly challenged the idea that it would be relatively easy to get a machine to behave as though it were a human, today very little AI research looks for a relationship between the machine and the workings of the human mind.
Nonetheless, software engineers and designers – and software users, who in the cases set out below are human resource professionals and managers – unconsciously as well as consciously project direct forms of intelligence onto machines themselves, without considering in any depth the practical or philosophical implications of this, when weighed against human actual or perceived intelligences. Neither do they think about the relations of production that are required for the development and production of AI and its capabilities, where workers are expected not only to accept the intelligences of machines, now called ‘smart machines’, but also to endure particularly difficult working conditions in the process of creating and expanding the datasets that are required for the development of AI itself.
If AI does actually become as prevalent and as significant as predictions would have it – and we really do make ourselves the direct mirror reflection of machines, and/or simply resources for fuelling them through the production of datasets via our own supposed intelligence of, e.g., image recognition – then we will have a very real set of problems on our hands. Potentially, workers will only be necessary for machinic maintenance or, as discussed later in this chapter, as AI trainers. AI is often linked to automation and potential job losses, but there is very little discussion of the quality of the jobs that replace previously existing ones. AI is not automation, in fact. AI is most suitably described as an augmentation tool and/or application that builds on data collection and allows advances in dataset usage and decision-making, rather than as a stand-alone entity. While the Internet of Things, automation and digitalisation sometimes overlap with discussions of AI, the European Commission’s more precise definition of AI in its 2020 White Paper is quite useful: a ‘collection of technologies that combine data, algorithms and computing power. Advances in computing and the increasing availability of data are therefore key drivers of the current upsurge of AI’ (European Commission 2020). The European Commission’s definition as provided in its 2018 Communication is also useful in indicating that AI ‘refers to systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals’ (European Commission 2018). A 2018 report for the European Parliament’s Committee on Industry, Research and Energy, entitled ‘European Artificial Intelligence Leadership, the Path for an Integrated Vision’, defines AI as a ‘cover term for techniques associated with data analysis and pattern recognition’ (Delponte 2018: 11).
In 2019, the OECD published its ‘Recommendations of the Council on Artificial Intelligence’, stating that AI will be a good opportunity for ‘augmenting human capabilities and enhancing creativity, advancing inclusion of underrepresented populations, reducing economic, social, gender and other inequalities, and protecting natural environments, thus invigorating inclusive growth, sustainable development and well-being’ (OECD 2019), and differentiating AI from other digital technologies in that ‘AI are set to learn from their environments in order to take autonomous decisions’.
These definitions not only identify the scope and context within which AI is understood to have the potential to affect workspaces, but also take into account the often incorrect blanket use of the term. AI machines and systems are seen to demonstrate competences which are increasingly similar to human decision-making and prediction. AI-augmented tools and applications are intended to improve human resources and allow more sophisticated tracking of productivity, attendance and even health data for workers. These tools are often seen to perform much faster and more accurately than humans and, thus, managers.
However, as Aloisi and Gramano (2019) point out, once management is fully automated, AI may also engender or push forward ‘authoritative attitudes … perpetuate bias, promote discrimination, and exacerbate inequality, thus paving the way to social unrest and political turmoil’. Sewell (2005) warned of the ways in which nudges and penalties introduced by AI-augmented incentivisation schemes can create tensions within working situations, and Tucker (2019) cautioned that AI-influenced ranking systems and metrics can be ‘manipulated and repurposed to infer unspecified characteristics or to predict unknown behaviours’ (discussed in Aloisi and Gramano 2019: 119).
This chapter paves the way for discussions in later chapters of Augmented Exploitation, through exploring the ontological premise for recognising human ‘intelligence’ in machines. Exploitation in the labour process relation is tried and tested and widely reported. Workers resist, often also in highly creative ways. However, the forms of control discussed in work and organisation studies are no longer restricted to the analogue but are now increasingly augmented through sophisticated or ‘smart’ technological capabilities. Neither Marx nor subsequent Marxist, Marxian or post-Marxist researchers fully acknowledged or interrogated the assumptions around what is necessary scientifically to build an intelligent machine, or what we today call a smart machine, given that defining the ‘smart’ or ‘intelligent’ human is already highly problematic in itself.
After outlining smart machines’ demonstrations of seeming and hoped-for intelligences, and then indicating how that is translated into explicit social relations of production, this chapter makes the argument that workers, in collaboration, should be appropriating and intervening in the understandings of ‘smart’, to critique and challenge the dominant ideas surrounding supposed machinic smartness or intelligence. A range of human resources assistive machines seen in ‘people analytics’ have, after all, shown evidence of discriminatory, racist, sexist and psychosocially violent traits of human intelligence in digitalised work contexts. If these are the core tenets of the dominant forms of human intelligence today, then we may be, or perhaps should be, heading for a new line of questioning, where these assumptions must be challenged. In that light, this chapter begins a discussion to devise a means for a war of position, in the Gramscian sense, for the smart worker today.

SMART MACHINES

We hear about smart cars, smartphones, smart watches and even smart cities in the news and in scientific research, but there seems to be no critique around what ‘smart’ means. Heuristically, we can say that ‘smart’ as a definitional category for these kinds of objects refers to machines’ ability to perform an activity on behalf of humans, or to perfect reality for us by performing menial tasks, providing convenience and services, and enhancing possibilities for ecological sustainability. Smart cars are smaller than average and can run on electricity instead of petrol, thereby helping the environment and so hopefully extending humans’ stay on this planet. Smart cars are also expected to eliminate the need for a human driver altogether. Low productivity in the UK has been attributed to the time wasted in commuting to work. If we are being driven to work by robots, we could read our Kindles and write on our iPads in the back seat, relying on the intelligence of machines and at the same time ideally developing our own. We might even eliminate ‘bullshit jobs’ (Graeber 2018) through achieving more quality work, upskilling and so on, and improving the country’s productivity altogether.
Of course, these utopian ideas could be stymied due to the Covid-19 global pandemic, as a result of which many knowledge workers are increasingly being required to stay home to work. The ‘smart office’ may thus increasingly come to be defined within the laboratory of personal environments, where a range of devices used to calculate working time electronically and to measure other aspects of work are normalised via experimentation. Smartphones offer a further chance for work mobility in terms of documenting workers’ geographical position and offering the use of the internet and a camera. Phone conversations in which we can see people’s faces, as well as the array of applications enabling us to find our way to the nearest restaurant or shop, listen to almost any music we want, track patterns in our steps and heart rate, order transport, set goals, do yoga, read books and get the latest news, are other features of smartphones that can be used for workplace benefit. ‘Smart cities’, furthermore, provide convenience for citizens and tourists in terms of better connectedness and travel options.
While these smart products and environments sound quite attractive and exciting, they rely on the acquisition of big datasets extracted from human activity or objects that are based originally in human activity. Self-driving cars must learn to recognise specific images which are originally categorised by human labour. Smartphones’ provisions such as digital maps rely on data about locations provided by human input. The smart office relies on data collected from workers’ keystrokes, timestamps for entering and exiting work platforms, and so on. With regards to smart services and social media, products are provided in exchange for, in some cases, a small monetary fee, or, more often than not, in the expectation that the reams of data gathered about us will be used to profile our ‘selves’ for advertising and possibly for governmental use.
Based on human data, smart technologies, via machine learning, algorithms, robotics and emotion coding, demonstrate a series of forms of active ‘smarts’ or intelligences which I have previously categorised as collaborative, assistive, prescriptive and proscriptive (Moore 2020), where machines’ functionality towards these active intelligences is facilitated and augmented by AI. These are human/machine mirror intelligences, but they are based more on active potential than on expected social cognitive conditions, which are then evidenced in what Marx referred to as the social relations of production. This chapter therefore builds on my previous arguments about human/machine reflections of intelligence, looking more closely at the social relations of production and the surrounding expectations placed on the smart worker.

AI TRAINERS AND THE RELATIONS OF PRODUCTION

Karl Marx observed, in the ‘Fragment on Machines’ section of Grundrisse: Foundations of the Critique of Political Economy (Marx 1993), that we as humans often attribute to machines our own characteristics, and, by association, also intelligence. However, since the site of introduction into the labour process is one of class struggle, the attribution of intelligence to machines relies on specific categories of ‘intelligence’ in socially dominant understandings of that sphere. As Marx observed, the employment relationship in the early stages of industrialisation divided people along class lines, whereby a handful of people were assumed to have the superior intelligence required to design machines and to organise and manage workplaces, as well as manage workers and control labour processes and operations. The other main category for intelligence explicitly subordinated workers, who were expected to carry out physical labour and to build and maintain the very same machines that were ultimately considered to be more intelligent than the average person.
All this being said, intelligence is by no means a homogeneous category, and so-called symbolic and connectionist AI researchers have never got to grips with nor agreed on what the most important features of intelligence are. John Haugeland, who coined the term ‘GOFAI’, described intelligent beings as demonstrating the following characteristics:
our ability to deal with things intelligently … due to our capacity to think about them reasonably (including subconscious thinking); and the capacity to think about things reasonably [which] amounts to a faculty for internal ‘automatic’ symbol manipulation. (Haugeland 1985: 113)
Marcus Hutter, who designed a well-known theory of universal AI, later argued: ‘The human mind … is connected to consciousness and identity which define who we are … Intelligence is the most distinct characteristic of the human mind … It enables us to understand, explore, and considerably shape our w...

Table of contents

  1. Cover
  2. Title
  3. Copyright
  4. Contents
  5. Figures
  6. Series Preface
  7. Acknowledgements
  8. Introduction: AI: Making it, Faking it, Breaking it Phoebe V. Moore and Jamie Woodcock
  9. Part I: Making It
  10. Part II: Faking It
  11. Part III: Breaking It
  12. Notes on Contributors
  13. Index
