
AI for Games
About this book
What is artificial intelligence? How is artificial intelligence used in game development?
Game development lives in its own technical world. It has its own idioms, skills, and challenges. That's one of the reasons games are so much fun to work on. Each game has its own rules, its own aesthetic, and its own trade-offs, and the hardware it will run on keeps changing. AI for Games is designed to help you understand one element of game development: artificial intelligence (AI).
AI for Games by Ian Millington is available in PDF and ePUB format.
1 What Is AI?
DOI: 10.1201/9781003124047-2
Artificial intelligence is about making computers able to perform the thinking tasks that humans and animals are capable of.
We can program computers to have superhuman abilities in solving many problems: arithmetic, sorting, searching, and so on. Some of these problems were originally considered AI problems, but as they have been solved in more and more comprehensive ways, they have slipped out of the domain of AI developers.
But there are many things that computers aren't good at which we find trivial: recognizing familiar faces, speaking our own language, deciding what to do next, and being creative. These are the domains of AI: trying to work out what kinds of algorithms are needed to display these properties.
Often the dividing line between AI and not-AI is merely difficulty: things we can't do require AI, things we can are tricks and math. It is tempting to get into a discussion of what is "real" AI, to try defining "intelligence," "consciousness," or "thought." In my experience, it is an impossible task, largely irrelevant to the business of making games.
In academia, some AI researchers are motivated by those philosophical questions: understanding the nature of thought and the nature of intelligence and building software to model how thinking might work. Others are motivated by psychology: understanding the mechanics of the human brain and mental processes. And yet others are motivated by engineering: building algorithms to perform human-like tasks. This threefold distinction is at the heart of academic AI, and the different concerns are responsible for different subfields of the subject.
As games developers, we are practical folks, interested in only the engineering side. We build algorithms that make game characters appear human or animal-like. Developers have always drawn from academic research, where that research helps them get the job done, and ignored the rest.
It is worth taking a quick overview of the AI work done in academia to get a sense of what exists in the subject and what might be worth plagiarizing.
Academic AI
To tell the story, I will divide academic AI into three periods: the early days, the symbolic era, and the natural computing and statistical era. This is a gross oversimplification, of course, and they all overlap to some extent, but I find it a useful distinction.
The Early Days
The early days include the time before computers, where philosophy of mind occasionally made forays into AI with such questions as: âWhat produces thought?â âCould you give life to an inanimate object?â âWhat is the difference between a cadaver and the human it previously was?â Tangential to this was the popular taste in automata, mechanical robots, from the 18th century onward. Intricate clockwork models were created that displayed the kind of animated, animal or human-like behaviors that we now employ game artists to create in a modeling package.
In the war effort of the 1940s, the need to break enemy codes and to perform the calculations required for atomic warfare motivated the development of the first programmable computers. Given that these machines were being used to perform calculations that would otherwise be done by a person, it was natural for programmers to be interested in AI. Several computing pioneers (such as Turing, von Neumann, and Shannon) were also pioneers in early AI.
The Symbolic Era
From the late 1950s through to the early 1980s, the main thrust of AI research was âsymbolicâ systems. A symbolic system is one in which the algorithm is divided into two components: a set of knowledge (represented as symbols such as words, numbers, sentences, or pictures) and a reasoning algorithm that manipulates those symbols to create new combinations that hopefully represent problem solutions or new knowledge.
An expert system, one of the purest expressions of this approach, is among the most famous AI techniques. If today's AI headlines talk about "deep learning," in the 1980s they name-dropped "expert systems." An expert system has a large database of knowledge, and it applies a collection of rules to draw conclusions or to discover new things. Other symbolic approaches applicable to games include blackboard architectures, pathfinding, decision trees, and state machines.
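To make the knowledge-plus-reasoning split concrete, here is a minimal forward-chaining sketch in Python. The facts and rules are invented for illustration; a real expert system would have thousands of both and a far more sophisticated matching engine.

```python
# Minimal forward-chaining rule engine: the expert-system idea in miniature.
# Facts are strings; each rule pairs a set of premises with a conclusion.
# The specific facts and rules below are invented for illustration.

def forward_chain(facts, rules):
    """Repeatedly apply rules whose premises all hold, until nothing new is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)
                changed = True
    return facts

RULES = [
    ({"enemy visible", "low health"}, "retreat"),
    ({"retreat", "health pack nearby"}, "grab health pack"),
]

derived = forward_chain({"enemy visible", "low health", "health pack nearby"}, RULES)
# derived now also contains "retreat" and "grab health pack"
```

Note how the knowledge (the rule list) and the reasoning algorithm (`forward_chain`) are completely separate components, which is the defining feature of a symbolic system.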
A common feature of symbolic systems is a trade-off: when solving a problem, the more knowledge you have, the less work your reasoning algorithm needs to do. Often, reasoning algorithms consist of searching: trying different possibilities to get the best result. This leads us to the golden rule of AI:
Search and knowledge are intrinsically linked. The more knowledge you have, the less searching for an answer you need; the more search you can do (i.e., the faster you can search), the less knowledge you need.
Some have suggested that knowledge-infused search (known as "heuristic search") is the way all intelligent behavior arises. Unfortunately, despite having several solid and important features, this theory has largely been discredited as an account of all intelligence. Nevertheless, many people with a recent education in AI are not aware that, as an engineering trade-off, knowledge versus search is unavoidable. At a practical level, AI engineers have always known it.
The Natural Computing/Statistical Era
Through the 1980s and into the early 1990s, there was an increasing frustration with symbolic approaches. The frustration came from various directions.
From an engineering point of view, the early successes on simple problems didn't seem to scale to more difficult problems. For example, it seemed easy to develop AI that understood (or appeared to understand) simple sentences, but developing an understanding of a full human language seemed no nearer. This was compounded by hype: when AI touted as "the next big thing" failed to live up to its billing, confidence in the whole sector crashed.
There was also an influential philosophical argument that symbolic approaches weren't biologically plausible. You can't understand how a human being plans a route by using a symbolic route-planning algorithm any more than you can understand how human muscles work by studying a forklift truck.
The effect was a move toward natural computing: techniques inspired by biology or other natural systems. These techniques include neural networks, genetic algorithms, and simulated annealing. Many natural computing techniques have been around for a long time.
But in the 1980s through to the early 2000s, they received the bulk of the research effort. When I began my PhD in artificial intelligence in the 1990s, it was difficult to find research places in Expert Systems, for example. I studied genetic algorithms; most of my peers were working on neural networks.
Despite their biological inspiration, these techniques were heavily analyzed with mathematics, particularly probability and statistics, to understand and optimize them. The ability to handle the uncertainty and messiness of real-world data, in contrast to the clean, rigid boundaries of symbolic approaches, led to the development of a wide range of other probabilistic techniques, such as Bayes nets, support vector machines (SVMs), and Gaussian processes.
The biggest change in AI in the last decade has not come from a breakthrough in academia. We are living in a time when AI is again back in the newspapers: self-driving cars, deep fakes, world champion Go programs, and home virtual assistants. This is the era of deep learning. Though many academic innovations are used, these systems are still fundamentally powered by neural networks, now made practical by the increase in computing power.
Engineering
Though newspaper headlines and high-profile applications have flourished in the last 5 years, AI has been a key technology relevant to solving real-world problems for decades. Navigation systems in cars, job scheduling in factories, voice recognition and dictation, and large-scale search are all more than 20 years old. Google's search technology, for example, has long been underpinned by AI.
When something is hot, it is tempting to assume it is the only thing that matters. When natural computing techniques took center stage, there was a tendency to assume that symbolic approaches were dead. Similarly, with talk of deep learning everywhere, you might be forgiven for thinking that is what should be used.
But we always come back to the same trade-off: search vs. knowledge. Deep learning is the ultimate in compute-intensive search: AlphaGo Zero (the third iteration of the AlphaGo software) was given only minimal knowledge of the rules of the game, but extraordinary amounts of processing time to try different strategies and learn the best. On the other hand, a character that needs to use a health pack when injured can be told that explicitly:
IF injured THEN use health pack
No search required.
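The rule translates directly into a constant-time, knowledge-driven check. The sketch below is a toy illustration; the class, field names, and thresholds are invented, not taken from any particular engine.

```python
# The health-pack rule as explicit knowledge: a single conditional,
# no search over possible actions. All names and numbers are invented
# for illustration.

class Character:
    def __init__(self, health, health_packs):
        self.health = health            # current hit points
        self.health_packs = health_packs  # health packs carried

    def choose_action(self):
        # IF injured THEN use health pack
        if self.health < 50 and self.health_packs > 0:
            return "use health pack"
        return "keep fighting"

print(Character(health=30, health_packs=1).choose_action())  # use health pack
print(Character(health=90, health_packs=1).choose_action())  # keep fighting
```

Contrast this with a learning system, which would need many trials to discover the same policy the designer could simply write down.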
The only way any algorithm can outperform another is either to consume more processing power (more search), or to be optimized toward a specific set of problems (more knowledge of the problem).
In practice, engineers work from both sides. A voice recognition program, for example, converts the input signals using known formulae into a format where the neural network can decode it. The results are then fed through a series of symbolic algorithms that look at words from a dictionary and the way words are combined in the language. A statistical algorithm optimizing the order of a production line will have the rules about production encoded into its structure, so it canât possibly suggest an illegal timetable: The knowledge is used to reduce the amount of search required.
Unfortunately, games are usually designed to run on consumer hardware. And while AI is important, graphics have always taken the majority of the processing power. This seems in no danger of changing. For AI designed to run on the device during the game, low computation/high knowledge approaches are often the clear winners. And these are very often symbolic: approaches pioneered in academia in the 1970s and 1980s.
Game AI
Pac-Man was the first game many people remember playing with fledgling AI. Up to that point, there had been Pong clones with opponent-controlled bats (following the ball up and down) and countless shooters in the Space Invaders mold. But Pac-Man had definite enemy characters that seemed to conspire against you, moved around the level just as you did, and made life tough.
Pac-Man relied on a very simple AI technique: a state machine. Each of the four monsters (later called ghosts after a disastrously flickering port to the Atari 2600) occupied one of three states: chasing, scattering (heading for the corners at specific time intervals), and frightened (when Pac-Man eats a power-up). In each state, a ghost chooses a tile as its target and turns toward it at each junction. In chase mode, each ghost chooses the target according to a slightly different hard-coded rule, giving them their personalities.
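A state machine of this kind takes only a few lines of code. The sketch below is illustrative rather than the arcade original's logic: the events and target-selection rules are simplified stand-ins for the real per-ghost formulas.

```python
# A minimal ghost state machine in the spirit of Pac-Man's AI.
# Events and targeting rules are simplified stand-ins, not the
# arcade game's actual behavior.

CORNER = (0, 0)  # this ghost's scatter corner

class Ghost:
    def __init__(self):
        self.state = "chase"

    def handle_event(self, event):
        # Events move the ghost between its three states.
        if event == "power_up_eaten":
            self.state = "frightened"
        elif event == "scatter_timer":
            self.state = "scatter"
        elif event == "resume_chase":
            self.state = "chase"

    def target_tile(self, pacman_pos):
        if self.state == "chase":
            return pacman_pos  # head straight for the player
        if self.state == "scatter":
            return CORNER      # retreat to the home corner
        # frightened: no fixed target (the original picks turns at random)
        return None

g = Ghost()
assert g.target_tile((5, 9)) == (5, 9)   # chasing: target is Pac-Man
g.handle_event("power_up_eaten")
assert g.target_tile((5, 9)) is None     # frightened: no target
```

Giving each ghost its own chase-mode targeting rule, while sharing this skeleton, is all it takes to produce the four distinct "personalities."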
Game AI didnât change much until the mid-1990s. Most computer-controlled characters prior to then were about as sophisticated as a Pac-Man ghost.
Take a classic like Golden Axe, eight years later. Enemy characters stood still (or walked back and forth a short distance) until the player got close to them, whereupon they homed in on the player. Golden Axe had a neat innovation: enemies that would enter a running state to rush past the player and then switch back to homing mode, attacking from behind. Surrounding the player looks impressive, but the underlying AI is no more complex than Pac-Man's.
In the mid-1990s, AI began to be a selling point for games. Games like Beneath a Steel Sky even mentioned AI on the back of the box. Unfortunately, its much-hyped "Virtual Theater" AI system simply allowed characters to walk backward and forward through the game, hardly a real advancement.
Goldeneye 007 probably did the most to show gamers what AI could do to improve gameplay. Still relying on characters with a small number of well-defined states, Goldeneye added a sense simulation system: Characters could see their colleagues and would notice if they were killed. Sense simulation was the topic of the moment, with Thief: The Dark Project and Metal Gear Solid basing their whole game design on the technique.
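A basic sense simulation boils down to a visibility test: a character "sees" a target only if it is within viewing range and inside a field-of-view cone. The sketch below is a toy version of that idea; the range and angle values are invented, and a real game would add line-of-sight checks against level geometry.

```python
# A toy sense-simulation check: range plus field-of-view cone.
# Numbers are invented for illustration; real games also test
# occlusion against the level geometry.
import math

def can_see(pos, facing, target, view_distance=10.0, fov_degrees=90.0):
    """True if target is within range and inside the view cone.

    pos, target: (x, y) positions; facing: unit direction vector.
    """
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return True   # standing on top of the target
    if dist > view_distance:
        return False  # out of range
    # Cosine of the angle between facing direction and direction to target.
    cos_angle = (facing[0] * dx + facing[1] * dy) / dist
    return cos_angle >= math.cos(math.radians(fov_degrees / 2))

# A guard facing along +x sees a target ahead, but not one behind.
assert can_see((0, 0), (1, 0), (5, 0)) is True
assert can_see((0, 0), (1, 0), (-5, 0)) is False
```

Run for every guard each frame (or a few times a second), a check like this is enough to drive "notice the body" and "raise the alarm" behaviors by feeding events into each character's state machine.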
In the mid-1990s, real-time strategy (RTS) games were also beginning to take off. Warcraft was one of the first times pathfinding was widely noticed in action (though it had been used several times before). AI researchers were working with emotional models of soldiers in a military battlefield simulation in 1998 when they saw Warhammer: Dark Omen doing the same thing. It was also one of the first times people saw robust formation motion in action.
Halo introduced decision trees, now a standard method for characters to decide what to do. ...
Table of contents
- Cover
- Half Title
- Series Page
- Title Page
- Copyright Page
- Table of Contents
- Author
- Introduction
- 1 What Is AI?
- 2 Model of Game AI
- 3 Algorithms and Data Structures
- 4 Game AI
- 5 Techniques
- 6 Supporting Technologies
- Index