PART I
Making It
1
AI Trainers:
Who is the Smart Worker Today?
Phoebe V. Moore
Most scholarly and governmental discussions about artificial intelligence (AI) today focus on a country's technological competitiveness and try to identify how this supposedly new technological capability will improve productivity. Some discussions look at AI ethics. But AI is more than a technological advancement. It is a social question and requires philosophical inquiry. From the time of the Victorians who built tiny machines resembling maids, to the development of humanoid carebots such as those seen in Japan today, we have been reifying machines with our characteristics. Malabou (2015) discusses the cyberneticians' assumptions that intelligence is primarily associated with reason, as per the Enlightenment ethos. Indeed, cyberneticists' fascination with similarities between living tissue and nerves and electronic circuitry 'gave rise to darker man-machine fantasies: zombies, living dolls, robots, brain washing, and hypnotism' (Pinto 2015: 31). Pasquinelli (2015) argues that cybernetics, AI and current 'algorithmic capitalism' researchers believed and still believe in instrumental or technological rationality and the ontological and epistemological determinism and positivism that permeate these assumptions. The mysticism and curiosity about how smart machines can be, and how this is manifest, predate our current era. But unlike in the first stages of AI research, where scholars such as Hubert Dreyfus (1979) directly challenged the idea that it would be relatively easy to get a machine to behave as though it were a human, today very little AI research looks for a relationship between the machine and the workings of the human mind. Nonetheless, software engineers and designers – and software users, who in the cases set out below are human resource professionals and managers – unconsciously as well as consciously project direct forms of intelligence onto machines themselves, without considering in any depth the practical or philosophical implications of this when weighed against actual or perceived human intelligences. Neither do they think about the relations of production that are required for the development and production of AI and its capabilities, where workers are expected not only to accept the intelligences of machines, now called 'smart machines', but also to endure particularly difficult working conditions in the process of creating and expanding the datasets that are required for the development of AI itself.
If AI does actually become as prevalent and as significant as predictions would have it – and we really do make ourselves the direct mirror reflection of machines, and/or simply resources for fuelling them through the production of datasets via our own supposed intelligence of, e.g., image recognition – then we will have a very real set of problems on our hands. Potentially, workers will only be necessary for machinic maintenance or, as discussed later in this chapter, as AI trainers. AI is often linked to automation and potential job losses, but there is very little discussion of the quality of the jobs that replace previously existing ones. In fact, AI is not automation. AI is most suitably described as an augmentation tool and/or application that builds on data collection and allows advances in dataset usage and decision-making, rather than as a stand-alone entity. While the Internet of Things, automation and digitalisation sometimes overlap with discussions of AI, the European Commission's more precise definition of AI in its 2020 White Paper is quite useful: a 'collection of technologies that combine data, algorithms and computing power. Advances in computing and the increasing availability of data are therefore key drivers of the current upsurge of AI' (European Commission 2020). The European Commission's definition as provided in its 2018 Communication is also useful in indicating that AI 'refers to systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals' (European Commission 2018). A 2018 report for the European Parliament's Committee on Industry, Research and Energy, entitled 'European Artificial Intelligence Leadership, the Path for an Integrated Vision', defines AI as a 'cover term for techniques associated with data analysis and pattern recognition' (Delponte 2018: 11).
In 2019, the OECD published its 'Recommendation of the Council on Artificial Intelligence', stating that AI will be a good opportunity for 'augmenting human capabilities and enhancing creativity, advancing inclusion of underrepresented populations, reducing economic, social, gender and other inequalities, and protecting natural environments, thus invigorating inclusive growth, sustainable development and well-being' (OECD 2019), and differentiating AI from other digital technologies in that 'AI are set to learn from their environments in order to take autonomous decisions'.
These definitions not only identify the scope and context within which AI is understood to have the potential to affect workspaces, but also take into account the often incorrect blanket use of the term. AI machines and systems are seen to demonstrate competences which are increasingly similar to human decision-making and prediction. AI-augmented tools and applications are intended to improve human resources and allow more sophisticated tracking of productivity, attendance and even health data for workers. These tools are often seen to perform much faster and more accurately than humans and, thus, than managers.
However, as Aloisi and Gramano (2019) point out, once management is fully automated, AI may also engender or push forward 'authoritative attitudes … perpetuate bias, promote discrimination, and exacerbate inequality, thus paving the way to social unrest and political turmoil'. Sewell (2005) warned of the ways in which nudges and penalties introduced by AI-augmented incentivisation schemes can create tensions within working situations, and Tucker (2019) cautioned that AI-influenced ranking systems and metrics can be 'manipulated and repurposed to infer unspecified characteristics or to predict unknown behaviours' (discussed in Aloisi and Gramano 2019: 119).
This chapter paves the way for discussions in later chapters of Augmented Exploitation, through exploring the ontological premise for recognising human 'intelligence' in machines. Exploitation in the labour process relation is tried and tested and widely reported. Workers resist, often also in highly creative ways. However, the forms of control discussed in work and organisation studies are no longer restricted to the analogue but are now increasingly augmented through sophisticated or 'smart' technological capabilities. Neither Marx nor subsequent Marxist, Marxian or post-Marxist researchers fully acknowledged or interrogated the assumptions around what is necessary scientifically to build an intelligent machine, or what we today call a smart machine, given that defining the 'smart' or 'intelligent' human is already highly problematic in itself.
After outlining smart machines' demonstrations of seeming and hoped-for intelligences, and then indicating how this is translated into explicit social relations of production, this chapter makes the argument that workers, in collaboration, should be appropriating and intervening in the understandings of 'smart', to critique and challenge the dominant ideas surrounding supposed machinic smartness or intelligence. A range of human resources assistive machines seen in 'people analytics' have, after all, shown evidence of discriminatory, racist, sexist and psychosocially violent traits of human intelligence in digitalised work contexts. If these are the core tenets of the dominant forms of human intelligence today, then we may be, or perhaps should be, heading for a new phase of questioning, in which these assumptions must be challenged. In that light, this chapter begins a discussion to devise a means for a war of position, in the Gramscian sense, for the smart worker today.
SMART MACHINES
We hear about smart cars, smartphones, smart watches and even smart cities in the news and in scientific research, but there seems to be no critique around what 'smart' means. Heuristically, we can say that 'smart' as a definitional category for these kinds of objects refers to machines' ability to perform an activity on behalf of humans, or to perfect reality for us by performing menial tasks, providing convenience and services, and enhancing possibilities for ecological sustainability. Smart cars are smaller than average and can run on electricity instead of petrol, thereby helping the environment and so hopefully extending humans' stay on this planet. Smart cars are also expected to eliminate the need for a human driver altogether. Low productivity in the UK has been attributed to the time wasted in commuting to work. If we are being driven to work by robots, we could read our Kindles and write on our iPads in the back seat, relying on the intelligence of machines and at the same time ideally developing our own. We might even eliminate 'bullshit jobs' (Graeber 2018) through achieving more quality work, upskilling and so on, and improving the country's productivity altogether.
Of course, these utopian ideas could be stymied due to the Covid-19 global pandemic, as a result of which many knowledge workers are increasingly being required to stay home to work. The 'smart office' may thus increasingly come to be defined within the laboratory of personal environments, where a range of devices used to calculate working time electronically and to measure other aspects of work are normalised via experimentation. Smartphones offer a further chance for work mobility in terms of documenting workers' geographical position and offering the use of the internet and a camera. Phone conversations in which we can see people's faces, as well as the array of applications enabling us to find our way to the nearest restaurant or shop, listen to almost any music we want, track patterns in our steps and heart rate, order transport, set goals, do yoga, read books and get the latest news, are other features of smartphones that can be used for workplace benefit. 'Smart cities', furthermore, provide convenience for citizens and tourists in terms of better connectedness and travel options.
While these smart products and environments sound quite attractive and exciting, they rely on the acquisition of big datasets extracted from human activity or from objects that are based originally in human activity. Self-driving cars must learn to recognise specific images which are originally categorised by human labour. Smartphones' provisions such as digital maps rely on data about locations provided by human input. The smart office relies on data collected from workers' keystrokes, timestamps for entering and exiting work platforms, and so on. With regard to smart services and social media, products are provided in exchange, in some cases, for a small monetary fee or, more often than not, in the expectation that the reams of data gathered about us will be used to profile our 'selves' for advertising and possibly for governmental use.
Based on human data, smart technologies, via machine learning, algorithms, robotics and emotion coding, demonstrate a series of forms of active 'smarts' or intelligences which I have previously categorised as collaborative, assistive, prescriptive and proscriptive (Moore 2020), where machines' functionality towards these active intelligences is facilitated and augmented by AI. These are human/machine mirror intelligences, but they are based more on active potential than on the expected social cognitive conditions that are then evidenced in what Marx referred to as the social relations of production. This chapter therefore builds on my previous arguments about human/machine reflections of intelligence, looking more closely at the social relations of production and the surrounding expectations placed on the smart worker.
AI TRAINERS AND THE RELATIONS OF PRODUCTION
Karl Marx observed, in the 'Fragment on Machines' section of Grundrisse: Foundations of the Critique of Political Economy (Marx 1993), that we as humans often attribute to machines our own characteristics and, by association, also intelligence. However, since the site of machines' introduction into the labour process is one of class struggle, the attribution of intelligence to machines relies on specific categories of 'intelligence' in socially dominant understandings of that sphere. As Marx observed, the employment relationship in the early stages of industrialisation divided people along class lines, whereby a handful of people were assumed to have the superior intelligence required to design machines and to organise and manage workplaces, as well as to manage workers and control labour processes and operations. The other main category for intelligence explicitly subordinated workers, who were expected to carry out physical labour and to build and maintain the very same machines that were ultimately considered to be more intelligent than the average person.
All this being said, intelligence is by no means a homogeneous category, and so-called symbolic and connectionist AI researchers have never got to grips with, nor agreed on, what the most important features of intelligence are. John Haugeland, who coined the term 'GOFAI' (good old-fashioned AI), described intelligent beings as demonstrating the following characteristics:
our ability to deal with things intelligently … due to our capacity to think about them reasonably (including subconscious thinking); and the capacity to think about things reasonably [which] amounts to a faculty for internal 'automatic' symbol manipulation. (Haugeland 1985: 113)
Marcus Hutter, who designed a well-known theory of universal AI, later argued: 'The human mind … is connected to consciousness and identity which define who we are … Intelligence is the most distinct characteristic of the human mind … It enables us to understand, explore, and considerably shape our w...