
What is AI Ethics?

Andy Cain, MA, English Literature (University College London)


Date Published: 15.03.2023

Last Updated: 24.01.2024


Defining AI ethics

The rapid development of AI (artificial intelligence) has opened up new ethical frontiers at a startling pace. As the impact of AI is so deep and wide-ranging, its ethical implications are similarly extensive — both in the present and the future. Former Google engineer Blake Lemoine, for example, has raised concerns over what he sees as the possible sentience of Google’s LaMDA, while also criticizing the concentration of AI decision-making power in the hands of only a few corporations. For their part, the big players of Silicon Valley have shown an awareness of AI ethics, banding together to form the non-profit Partnership on AI (PAI) in order to advance “positive outcomes for people and society”.

The sheer speed of AI development, and the enormous breadth of its potential impact, makes timely regulation particularly challenging. Progress can be slow due to the complicated and weighty issues involved, leaving loopholes unresolved in the meantime. The EU’s Artificial Intelligence Act, for example, is the first of its kind to be proposed by a major regulator, and even it is not yet in force.

AI is not just any new technology, however — many believe it has the potential to revolutionize society (for better, or for worse). Every industry could be transformed, with our everyday lives looking radically different. As Sam Mielke argues,

We are now at a point where AI is booming; every day, new technologies are being developed, and within a couple of decades, these intelligent machines could quite possibly surpass human intelligence. But before then, disruption will happen to jobs, and life in the age of AI a few years from now will be far different from what it is today. (2021)


A range of influential figures, from Stephen Hawking to Bill Gates, have voiced concerns over this looming “singularity” — the theoretical point at which artificial intelligence will outstrip our own, and continue to grow explosively, with unforeseen consequences beyond our control. In this theoretical future, in which artificial superintelligence has been achieved, ethical questions then arise on behalf of AI itself — has it gained sentience, and does it have rights of its own? 

It is not just this potential future that is causing concern, however. The rapid development, use, and wider implementation of AI in our daily lives raises a series of pressing ethical considerations in the early 21st century. As Dominika Ewa Harasimiuk and Tomasz Braun outline in Regulating Artificial Intelligence (2021),

Benefits are counterbalanced by serious concerns, which relate to growing automation leading to possible increase in unemployment rates, biased decision-making, excessive access to privacy by authorities, overcomplicated technological solutions increasing the imbalance in the access to knowledge and extraordinary power concentration over in the hands of few corporations of the worldwide reach, like Google, Facebook or Amazon.


This list of concerns can appear daunting, especially when we consider that it is far from exhaustive (other examples include the impact of AI on the environment, and the implications of AI being given lethal powers). But these ethical concerns must be tempered by the fact that many positive applications of AI can already be seen in action — improved healthcare diagnostics being just one example — and many more will undoubtedly develop in future. The ethical impact of developing and using AI technology must also be weighed against the cost of not using this technology, and the benefits that would be forgone. But as we are focusing on the ethical considerations here, the following discussion will inevitably focus on some of the more negative implications of AI.


Why do we need AI ethics? Why is it important?

As AI continues to rapidly develop, and as the range of its impact (both actual and potential) deepens, it has inevitably become the subject of increased ethical discussion. Just as with any technology which impacts billions of people, an ethical framework to protect human rights and interests becomes necessary. 

If AI is to be given any meaningful autonomy, there is also the need to build an ethical sensibility, or “moral compass”, within AI from the ground up. This is a big challenge for engineers, as their own human-centric perspective must be overcome — they cannot work from the assumption that the AI will always work with the best interests of humans in mind. Even if AI is only used as a tool, with humans acting as the moral agents, there is still a need for ethical rigor. As Alberto Chierici argues in The Ethics of AI (2021),

We need to face the ethical problems they [AI use cases] create. When people start using such powerful technologies, they are making moral choices. What is, then, their moral compass?


AI and jobs

“Watch out, the robots are coming for our jobs!” has been a semi-humorous refrain for many years now, with the implication sometimes being that this is a sensationalist and unrealistic prospect. In recent years, many people have stopped seeing the funny side — and started to experience the reality. 

As Amir Husain outlines in The Sentient Machine (2017),

We can argue over the details—for ten years or five decades or one hundred years—but the data all point in one direction: most of the jobs that humans do today will be done by machines in the future.


Husain notes that the rise of driverless trucks alone is expected to “displace around 3 million current jobs”, and that a PricewaterhouseCoopers (PwC) study from 2017 estimates that “By 2030, we [the US] can expect to lose 38 percent of our current jobs” (Husain, 2017). 

It is sometimes assumed that AI job displacement will be limited to manual, repetitive jobs, but there is reason to believe that AI can have an impact in any industry where roles can be predicted, understood, and capably learned through data. As Martin Ford argues in The Rise of the Robots (2015),

[A] great many university-educated, white-collar workers are going to discover that their jobs, too, are squarely in the sights as software automation and predictive algorithms advance rapidly in capability.


Ford points out that, as AI rapidly evolves to become competent in ever-more job fields, it becomes harder for workers to upskill their way to safety. It is simply the case that “computers are becoming very proficient at acquiring skills, especially when a large amount of training data is available” (Ford, 2015). Ironically, workers are providing the necessary training data that will allow AI to replace them in the future. 
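
To make Ford’s point concrete, here is a toy sketch (not drawn from his book) of how a routine human decision task, once recorded as data, becomes the training material for a model that then automates it. The loan-approval scenario, figures, and thresholds are all invented for illustration:

```python
# A toy sketch of a model "acquiring a skill" from records of human work.
# Hypothetical scenario: past loan decisions made by human clerks become
# the training data for a model that then automates the same judgments.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# 1,000 hypothetical cases: [income in thousands, credit score].
X = rng.uniform(low=[20, 300], high=[120, 850], size=(1_000, 2))
# The clerks' past decisions: approve only if income and score are healthy.
# This is the "large amount of training data" the workers themselves produced.
y = ((X[:, 0] > 50) & (X[:, 1] > 600)).astype(int)

model = LogisticRegression(max_iter=1_000).fit(X, y)

# The model now reproduces the clerks' decision pattern on new cases.
print(model.predict([[65, 700], [30, 500]]))  # likely [1 0]
```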

A recent example of this is the meteoric rise of OpenAI’s ChatGPT chatbot, which is trained on vast amounts of existing web content and can generate a range of new content of its own — including original articles and lines of code. The actual quality and accuracy of this output is much debated, but many commentators have been left stunned by the speed of progress.

A key question is whether it is ethical for an AI to be trained on a human’s content without that human’s permission — especially if the resulting system then threatens the future viability of their job. There is lively debate over the ethics of AI image generators such as Stable Diffusion, Midjourney, and DALL-E 2, for example, which are trained on existing artwork scraped from the web. As creators depend on art sales and commissions for their livelihood, they are dismayed to see what they regard as their plagiarized intellectual property being exploited under the mantle of AI-created art. Specific artists can even be named in prompts, in order to create images that are very close to that artist’s style.
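
As an illustration of how directly a style can be invoked, the hypothetical sketch below uses the open-source diffusers library, one common way of running Stable Diffusion. The model name, prompt, and artist placeholder are illustrative assumptions, and a GPU plus a model download are required:

```python
# A hypothetical sketch of prompting an open-source image generator.
# Assumes the diffusers library, a CUDA GPU, and a model download.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Naming a specific artist here steers the output toward their style,
# precisely because their work appeared in the scraped training data.
# "<artist name>" is a placeholder, not an endorsement of the practice.
prompt = "a seaside village at dusk, in the style of <artist name>"
image = pipe(prompt).images[0]
image.save("village.png")
```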

Such thorny ethical questions will continue to arise as AI’s influence on the jobs market grows. Fundamentally, the main ethical concern which underpins all others is the direct, human impact of widespread unemployment — as well as its indirect effects. In Martin Ford’s words,

Beyond the potentially devastating impact of long-term unemployment and underemployment on individual lives and on the fabric of society, there will also be a significant economic price. The virtuous feedback loop between productivity, rising wages, and increasing consumer spending will collapse. (2015)

These job losses would not only cause widespread poverty, but also threaten economic stability, as “we will face the prospect of having too few viable consumers to continue driving economic growth in our mass-market economic system” (Ford, 2015). Ford calls for a guaranteed basic income to mitigate these disastrous effects: a minimum stipend which would lift millions out of poverty and provide them with the spending power necessary to keep the economic wheels of prosperity turning.

Whether AI’s impact on the jobs market is ethical, then, arguably depends upon whether a societal safety net is put in place to deal with its worst effects. Some commentators go even further, and argue that this vision of the future could actually be utopian rather than dystopian — depending on the model that is followed. Increased productivity could (in theory) bring greater economic prosperity for everyone, and potentially free workers to pursue more rewarding work, charitable endeavors, or simply their particular hobbies and interests. Bertrand Russell’s In Praise of Idleness (1935 [2020]) is often referenced in these discussions, as he famously argues that the increased productivity brought about by technology should result in reduced working hours:

Modern methods of production have given us the possibility of ease and security for all; we have chosen, instead, to have overwork for some and starvation for the others. Hitherto we have continued to be as energetic as we were before there were machines; in this we have been foolish, but there is no reason to go on being foolish for ever.


The ethical question of AI’s impact on jobs, then, reaches wider, with potentially positive as well as negative implications for the wellbeing and lives of billions of people. But is this utopian vision of a prosperous AI-powered world realistic? Is it not more likely that there will, initially at least, be a period of great suffering in which job losses are not mitigated by the introduction of a guaranteed minimum income, or other mechanisms to redistribute this AI-created wealth? These are just some of the ethical issues which will become increasingly urgent as AI continues to develop.


Data, racial bias, and decision-making

Another ethical concern is that the increasingly ubiquitous AI-powered algorithms employed by governments and corporations can be inherently biased. AI, after all, is only as ethical and unbiased as its programming and the dataset it is provided with by humans.

Inherent racial bias has been found in facial recognition datasets, for example. As Darian J. DeFalco explains in The Frontlines of Artificial Intelligence Ethics (Hampton and DeFalco, 2022),

Buolamwini (2017) conducted a comparative study of automated facial recognition machine learning algorithms and found that they encountered error rates up to 34.7% when attempting to detect dark-skinned females. She attributed this to the fact that the composition of the datasets used for two facial analysis benchmarks were between 79.6% and 86.2% of lighter-skinned individuals (Buolamwini, 2017). Buolamwini stumbled upon this implicit bias when she observed that these systems were not readily detecting her face.


AI-powered facial recognition technology has been used in a range of areas, including law enforcement, and so this racial bias can result in severe real-life consequences. Facial recognition technology is just one example: the increasingly widespread use of AI-powered algorithms across a range of sectors magnifies the potential ethical consequences of any existing bias in the datasets employed. Alberto Chierici (2021), for example, argues that even small subgroups within a dataset can suffer disproportionately high error rates,

All the model will ever see, learn, and predict is data. In fact, when the number of people within a subgroup is small, the data the algorithm uses to make generalizations may result in disproportionately high error rates amongst minority groups. In many applications of predictive technologies, false positives may have a limited impact on the individual. However, in susceptible areas like deciding if and how to intervene where a child may be at risk, false negatives and positives both carry significant consequences.


Chierici goes on to explore racial bias in recruitment, noting how training datasets based on existing employees lead the AI to identify those employees’ characteristics as markers of success: “For an organization and an industry dominated by white, Western men who attended top colleges, guess what kind of ‘top candidate’ the algorithm’s training data represents?” (2021). The ethical implications are clear: when such algorithms are charged with selecting candidates, they are more likely to reject CVs which do not reflect the status quo they are trained to look for. Chierici gives the example of one organization which “developed a predictive model trained on their company data that found having the name ‘Jared’ and having played lacrosse in high school were vital indicators of a successful applicant” (2021). For more information on the impact of racial bias, see our study guide on Critical Race Theory (CRT).
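
The kind of disaggregated audit Buolamwini performed, and the small-subgroup effect Chierici describes, can both be sketched in a few lines: rather than reporting one overall error rate, compute the error rate separately per subgroup. The labels and predictions below are invented; note how the smaller subgroup’s estimate is dominated by just a couple of mistakes:

```python
# Hypothetical ground truth, model predictions, and subgroup membership.
import numpy as np

y_true = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0])
y_pred = np.array([1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1])
group  = np.array(["a"] * 8 + ["b"] * 3)

for g in np.unique(group):
    mask = group == g
    error_rate = np.mean(y_true[mask] != y_pred[mask])
    # With only 3 samples, two mistakes put subgroup "b" at a 67% error
    # rate, while the overall error rate (2/11) looks deceptively low.
    print(f"group {g}: n = {mask.sum()}, error rate = {error_rate:.0%}")
```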

As AI algorithms become more widespread in our day-to-day lives, using more of our personal data than ever before, inherent bias in these programming models becomes even more of an ethical concern. Some commentators argue that there is a risk of losing a vital human element in these processes, with decisions of momentous individual importance being made by AI systems which regard human beings as nothing more than parcels of data (with all the potential injustices that may entail). Ozlem Ulgen, for example, argues in The Frontlines of Artificial Intelligence Ethics (Hampton and DeFalco, 2022) that this use of AI “undermines human dignity”, observing that:

From algorithms that determine student grades, personalize online marketing, approve financial credit applications, assess pre-trial bail risk, and select human targets in warfare, it seems we are willingly complicit in relinquishing decision-making powers to machines.


The “decision-making powers” of AI when it comes to matters of life and death have become the subject of particularly intense ethical scrutiny, with the use of lethal drones, robot police “dogs”, and even self-driving cars all sparking debates. An interesting ethical issue that arises in relation to self-driving cars, for example, is how such a car should minimize human casualties in situations where a crash is unavoidable. Would it follow utilitarian ethical principles, weighing harm to every affected individual equally and minimizing the total, or would it prioritize the life of its occupant over other civilians? How would the self-driving car’s decision-making compare to a human driver’s — what would a human driver’s instincts lead them to do in a similar circumstance? In many ways, this is a contemporary version of the classic ethical quandary known as the trolley problem.
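
This trade-off can be made concrete with a toy sketch. The options and harm figures below are invented, and no real system reduces the problem to a lookup like this; the sketch only shows how the two ethical policies can diverge on the same facts:

```python
# A toy model of the dilemma: two crash options, scored by expected harm.
def utilitarian_choice(options):
    """Minimize total expected harm, counting occupant and bystanders equally."""
    return min(options, key=lambda o: o["occupant_harm"] + o["bystander_harm"])

def occupant_first_choice(options):
    """Minimize harm to the occupant, using total harm only to break ties."""
    return min(options, key=lambda o: (o["occupant_harm"],
                                       o["occupant_harm"] + o["bystander_harm"]))

options = [
    {"name": "swerve into barrier", "occupant_harm": 0.6, "bystander_harm": 0.0},
    {"name": "continue in lane",    "occupant_harm": 0.1, "bystander_harm": 0.9},
]

# The two policies diverge on the same facts:
print(utilitarian_choice(options)["name"])     # swerve into barrier (0.6 < 1.0)
print(occupant_first_choice(options)["name"])  # continue in lane    (0.1 < 0.6)
```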


AI singularity 

The technological singularity is regarded as the point of no return in the development of AI. In theory, this would be the stage at which AI is intelligent enough to improve upon itself — without human oversight. Ray Kurzweil is an influential voice in this area, and he provides a useful summary of the technological singularity in Martin Ford’s Architects of Intelligence (2018),

It’s not at human levels of intelligence yet, but once we get to that point, AI will take advantage of the enormous speed advantage which already exists and an ongoing exponential increase in capacity and capability. So that’s the meaning of the singularity, it’s a soft take off, but exponentials nonetheless become quite daunting.

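Kurzweil’s warning about exponentials can be given a back-of-the-envelope feel with simple arithmetic. Assuming, purely for illustration, that a system’s capability doubles every year:

```python
# Back-of-the-envelope: assume (purely for illustration) that capability
# doubles every year from a baseline of 1.
capability = 1.0
for year in range(1, 11):
    capability *= 2
    print(f"year {year:2d}: {capability:,.0f}x baseline")
# Ten doublings give 1,024x the baseline; thirty give over a billion
# times -- far beyond anything the original designers tested against.
```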

This would be uncharted territory. For the first time in history, humanity would come into contact with an intelligence that is greater than its own. Hypothetically speaking, it is possible that this intelligence may not have humanity’s best interests at heart. It is also possible that this superintelligence could develop itself into regions of consciousness and technology that lie beyond the limits of human comprehension entirely.

This is the singularity that has long fascinated science fiction writers, scientists, and philosophers alike. It remains a hypothetical — there is disagreement over whether, or when, AI may reach this singularity — but if this hypothetical becomes actual, its implications for humanity, and ethics, are profound. In a very literal sense, the potential consequences may actually be immeasurable (if it is a human doing the measuring, that is). 

Many have argued for the creation of a strict ethical framework to protect against these potentially catastrophic risks. These posited ethical models are complex and far-ranging, taking in both the behavior of humans developing the AI and the behavior of the AI itself. Due to their influence in popular culture and beyond, the Three Laws of Robotics created by science fiction writer Isaac Asimov represent the most well-known example of a basic moral code for artificial beings. Meanwhile, in Artificial Intelligence and the Environmental Crisis (2019), Keith Ronald Skene suggests a framework modeled on the ethics of Immanuel Kant, with “AI applications [...] treat[ing] every human being always as an end and never only as a means”.
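
Asimov’s Laws are often paraphrased as an ordered rule check, which can be sketched in a few lines. The action attributes below are invented for illustration; the only point being made is the strict precedence of the laws (the First overrides the Second, which overrides the Third):

```python
# A sketch of the Three Laws as an ordered rule check.
def permitted(action):
    # First Law: a robot may not injure a human being or, through
    # inaction, allow a human being to come to harm.
    if action["harms_human"] or action["allows_harm_through_inaction"]:
        return False
    # Second Law: obey human orders, except where that would break Law 1.
    if action["disobeys_order"] and not action["order_conflicts_with_law_1"]:
        return False
    # Third Law: protect own existence, except where Laws 1 or 2 demand otherwise.
    if action["harms_self"] and not (action["prevents_human_harm"]
                                     or action["fulfils_order"]):
        return False
    return True

# Self-sacrifice to save a human is permitted: Law 1 outranks Law 3.
print(permitted({
    "harms_human": False, "allows_harm_through_inaction": False,
    "disobeys_order": False, "order_conflicts_with_law_1": False,
    "harms_self": True, "prevents_human_harm": True, "fulfils_order": False,
}))  # True
```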

The nature and precise formulation of these ethical guidelines are the subject of intense debate. Some argue that we should tightly control AI, and not give it too much freedom and responsibility. Others argue that in order to benefit from AI, we need to give it this responsibility — but that we should do so with safeguards in place. In this school of thought, AI would need to be developed with stringent ethical programming — guard rails which ensure that AI acts as a benevolent moral agent, for the good of its human creators. 

Many argue, however, that we are being too laissez-faire in our development of AI; that we underestimate the level of risk, and overestimate our ability to create effective safeguards. How can we be sure that AI would not be able to transcend its original programming, discarding its ethical consideration for humankind and overcoming the limitations placed upon its actions? Isn’t this an inevitable consequence of the singularity, as the AI would be able to develop at such a pace that our original technological impositions and safeguards would seem trifling by comparison?

The risk is fundamentally unknown, as the technological advancements of AI beyond the singularity would be beyond our awareness and capabilities. It could be that its human origins and a later period of serendipitous development will combine to ensure that the superintelligent AI would look kindly upon humankind, and continue to prioritize our interests, but this is by no means guaranteed. Many would argue that this is unlikely, and that the unknown quantity of risk we are playing with means that we should proceed with caution.

Nick Bostrom, for example, is concerned that the pressurized race to develop the first superintelligence could be won by “whoever squanders the least effort on safety”. He tells Ford (2018) that:

We would rather have whoever it is that develops the first superintelligence to have the option at the end of the development process to pause for six months, or maybe a couple of years to double-check their systems and install whatever extra safeguards they can think of. Only then would they slowly and cautiously amplify the system’s capabilities up to the superhuman level. You don’t want them to be rushed by the fact that some competitor is nipping at their heels.

Despite the risks and ethical considerations involved, the race to develop superintelligence is now underway — and this may prove difficult to manage. As Amir Husain argues,

It is simply not practical to expect that an agency or an international treaty will effectively monitor such activity [AI development]. I assert, once again, that the AI genie of innovation is out of the bottle; it cannot be stuffed back inside. (2017)

AI will continue to be developed across the world, and so the pressing issue now is how to shape this development in the right direction. In an ethical equation, the positive applications of AI technology must also be balanced against its risks, as it arguably has the potential to save millions of lives and positively impact many more.  


AI and personhood 

We have, until now, been discussing the ethical implications of AI from a human-centric point of view. But many argue that AI itself becomes worthy of ethical consideration if it ever develops consciousness, sentience, or even personhood.  

As Joshua C. Gellers summarizes in Rights for Robots (2020),

Broadly speaking, ethicists, philosophers, and legal scholars have extensively debated the answer to the machine question, with some finding that robots might qualify for rights and others rejecting the possibility on jurisprudential, normative, or practical grounds.


At its roots, this discussion is contingent upon the theoretical possibility of creating an advanced artificial consciousness in the first place (and not just the simulation of consciousness). Some argue that this possibility is closer than we might think, and even that it has been achieved already. In an interview with Bloomberg Technology, for example, Blake Lemoine explored his claim that Google’s LaMDA has achieved sentience.

It must be said that Lemoine’s claims have been heavily criticized, with many arguing that Google’s neural language model is merely adept at presenting the appearance of sentience. 

Such debates around artificial consciousness raise a number of philosophical and ethical questions, including:

  • What is consciousness, what is sentience, and at what stage would this constitute personhood?
  • Could advances in neuroscience mean that we are one day able to artificially recreate something approximating the human brain? Would this be a person, or just the simulation of a person? Such quandaries are famously depicted in Ridley Scott’s cult classic Blade Runner (1982), adapted from Philip K. Dick’s novel, Do Androids Dream of Electric Sheep? (1968). A fictional version of the famous Turing Test, known as the Voight-Kampff test, appears in the film as a diagnostic differentiator between humans and “replicants”. In one scene, for example, Deckard uses the test to assess whether Rachael is simulating her empathetic responses.
  • If the only point of divergence between human and artificial brains is the material they are made of, then on what grounds could this artificial brain be dismissed? Is this not a kind of prejudice, or an argument which relies upon the concept of the soul? Could it not be said that our brains are engaged in a similar simulation of consciousness — that what we think of as consciousness may be more fragile than we think, stretched across delicate networks of neurons and changeable memories? 

It is easy to become lost in this philosophical rabbit hole. But if this level of artificial consciousness is reached, the ethical considerations are profound. Would it be right to create sentient beings — even if they are constructed from synthetic materials — and then subject these beings to a life of mind-numbing servitude they have no control over? Is it morally right to create a person, and lock their consciousness away in a server? Would it be possible to create an artificial consciousness which does not have the ability to suffer, and would this then negate any need for ethical consideration? 

Just as the animal rights movement asks us to re-examine our cruel treatment of animals, there will likely be a burgeoning AI rights movement if artificial consciousness (or even a simulation of it) becomes a reality. 


AI and climate change

The ethical dynamic between AI and climate change is significant. As climate change presents such a monumental threat to human (and animal) lives, any ethical discussion of AI must consider how it also impacts climate change and the wider environment as a whole. There are a number of ethical questions that can be taken into account here:

  • What is the beneficial impact of machine learning in terms of predicting climate change, and transforming our approach to it? In Artificial Intelligence and the Environmental Crisis (2019), Skene argues that “these urgent tasks are perfectly suited to an AI-based approach”, as it is highly effective at “the gathering and analysis of huge amounts of data, satellite image analysis dealing with system-level complexities, planning and monitoring” (see the sketch after this list).
  • What would the future development of AI technology mean for the Earth’s finite resources, and the carbon emissions this intensive mining would entail? 
  • Could a superintelligent AI solve climate change, and create new energy technologies beyond human capabilities? 
  • Would a superintelligent AI conclude that the best way of protecting the planet it also depends on is to remove humankind from the equation — along with all of our carbon emissions and the competition we represent for the Earth’s resources? 
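
As a minimal illustration of the first question above, the sketch below fits a warming trend to synthetic temperature-anomaly data and extrapolates it forward. The figures and trend are invented, and real climate analysis operates at a vastly greater scale:

```python
# Synthetic sketch: estimate a warming trend from noisy anomaly data.
import numpy as np

rng = np.random.default_rng(seed=1)

years = np.arange(1980, 2024)
# Invented anomalies: a 0.02 °C/year trend plus measurement noise.
anomalies = 0.02 * (years - 1980) + rng.normal(0, 0.1, size=years.size)

# Least-squares fit of a straight line (degree-1 polynomial).
slope, intercept = np.polyfit(years, anomalies, deg=1)
print(f"estimated trend: {slope:.3f} °C per year")
print(f"projected 2050 anomaly: {slope * 2050 + intercept:.2f} °C")
```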

In Martin Ford’s Architects of Intelligence (2018), Josh Tenenbaum argues that such issues are in urgent need of attention,

I think we as AI researchers should think about the ways in which what we’re doing is actually contributing to climate change, and ways we might contribute positively to solving some of those problems. I think that’s an example of an urgent problem for society that AI researchers maybe don’t think about too much, but they are increasingly part of the problem and maybe part of the solution.


AI ethics: closing thoughts

AI has the potential to transform all of our lives, and open up new frontiers that were previously only dreamt of in science fiction. Whether that is a net ethical good — for humanity, the planet, and potentially AI itself — depends on how it is used, and the ethical frameworks that are put in place around this use. As Amir Husain argues in The Sentient Machine (2017), “We cannot stop the march of technology; we can only hope to direct it toward better purposes.”


Further AI ethics reading on Perlego

Explore the Artificial Intelligence section of Perlego’s online library.

Dumouchel, P. (2017) Living with Robots. Harvard University Press. Available at: https://www.perlego.com/book/3119813/living-with-robots-pdf 

Freitas, G. (2022) The Coming Singularity. Austin Macauley Publishers. Available at: https://www.perlego.com/book/3711790/the-coming-singularity-the-rapid-evolution-of-human-identity-pdf 

Hare, S. (2022) Technology Is Not Neutral. 1st edn. London Publishing Partnership. Available at: https://www.perlego.com/book/3469127/technology-is-not-neutral-a-short-guide-to-technology-ethics-pdf 

Nowotny, H. (2021) In AI We Trust. 1st edn. Wiley. Available at: https://www.perlego.com/book/2842126/in-ai-we-trust-power-illusion-and-control-of-predictive-algorithms-pdf 

Sirius, R.U. and Cornell, J. (2015) Transcendence. Red Wheel/Weiser. Available at: https://www.perlego.com/book/2448979/transcendence-the-disinformation-encyclopedia-of-transhumanism-and-the-singularity-pdf 

Yampolskiy, R. (2015) Artificial Superintelligence. 1st edn. CRC Press. Available at: https://www.perlego.com/book/1599480/artificial-superintelligence-a-futuristic-approach-pdf 


Bibliography

Asimov, I. (2018) I, Robot. HarperVoyager. 

Chierici, A. (2021) The Ethics of AI. 1st edn. New Degree Press. Available at: https://www.perlego.com/book/2939777/the-ethics-of-ai-pdf 

Ford, M. (2015) The Rise of the Robots. Oneworld Publications. Available at: https://www.perlego.com/book/950343/the-rise-of-the-robots-technology-and-the-threat-of-mass-unemployment-pdf 

Ford, M. (2018) Architects of Intelligence. 1st edn. Packt Publishing. Available at: https://www.perlego.com/book/858994/architects-of-intelligence-the-truth-about-ai-from-the-people-building-it-pdf 

Gellers, J. (2020) Rights for Robots. 1st edn. Taylor and Francis. Available at: https://www.perlego.com/book/2013816/rights-for-robots-artificial-intelligence-animal-and-environmental-law-pdf 

Hampton, A. and DeFalco, J. (eds) (2022) The Frontlines of Artificial Intelligence Ethics. 1st edn. Taylor and Francis. Available at: https://www.perlego.com/book/3509605/the-frontlines-of-artificial-intelligence-ethics-humancentric-perspectives-on-technologys-advance-pdf 

Harasimiuk, D. E. and Braun, T. (2021) Regulating Artificial Intelligence. 1st edn. Taylor and Francis. Available at: https://www.perlego.com/book/2096358/regulating-artificial-intelligence-binary-ethics-and-the-law-pdf 

Husain, A. (2017) The Sentient Machine. Scribner. Available at: https://www.perlego.com/book/1034149/the-sentient-machine-the-coming-age-of-artificial-intelligence-pdf 

Mielke, S. (2021) In the Age of AI. 1st edn. New Degree Press. Available at: https://www.perlego.com/book/2938230/in-the-age-of-ai-how-ai-and-emerging-technologies-are-disrupting-industries-lives-and-the-future-of-work-pdf 

Russell, B. (2020) In Praise of Idleness. 2nd edn. Taylor and Francis. Available at: https://www.perlego.com/book/1693155/in-praise-of-idleness-and-other-essays-pdf 
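
Skene, K. R. (2019) Artificial Intelligence and the Environmental Crisis: Can Technology Really Save the World? Routledge.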


Andy Cain has an MA in English Literature from University College London, and a BA in English and Creative Writing from Royal Holloway, University of London. His particular research interests include science fiction, fantasy, and the philosophy of art. For his MA dissertation, he explored the presence of the sublime in Shakespeare’s plays.