Smart Machines

IBM's Watson and the Era of Cognitive Computing

About this book

We are crossing a new frontier in the evolution of computing and entering the era of cognitive systems. The victory of IBM's Watson on the television quiz show Jeopardy! revealed how scientists and engineers at IBM and elsewhere are pushing the boundaries of science and technology to create machines that sense, learn, reason, and interact with people in new ways to provide insight and advice.

In Smart Machines, John E. Kelly III, director of IBM Research, and Steve Hamm, a writer at IBM and a former business and technology journalist, introduce the fascinating world of "cognitive systems" to general audiences and provide a window into the future of computing. Cognitive systems promise to penetrate complexity and assist people and organizations in better decision making. They can help doctors evaluate and treat patients, augment the ways we see, anticipate major weather events, and contribute to smarter urban planning. Kelly and Hamm's comprehensive perspective describes this technology inside and out and explains how it will help us harness and understand "big data," one of the major computing challenges facing businesses and governments in the coming decades. Absorbing and impassioned, their book will inspire governments, academics, and the global tech industry to work together to power this exciting wave of innovation.

1
A NEW ERA OF COMPUTING
IBM’s Watson computer created a sensation when it bested two past grand champions on the TV quiz show Jeopardy! Tens of millions of people suddenly understood how “smart” a computer could be. This was no mere parlor trick; the scientists who designed Watson built upon decades of research in the fields of artificial intelligence and natural-language processing and produced a series of breakthroughs. Their ingenuity made it possible for a system to excel at a game that requires both encyclopedic knowledge and lightning-quick recall. In preparation for the match, the machine ingested millions of pages of information. On the TV show, first broadcast in February 2011, the system was able to search that vast storehouse in response to questions, size up its confidence level, and, when sufficiently confident, beat the humans to the buzzer. After more than five years of intense research and development, a core team of about twenty scientists had made a very public breakthrough. They demonstrated that a computing system—using traditional strengths and overcoming assumed limitations—could beat expert humans in a complex question-and-answer competition using natural language.
Now IBM scientists and software engineers are busy improving the Watson technology so it can take on much bigger and more useful tasks. The Jeopardy! challenge was relatively limited in scope. It was bound by the rules of the game and the fact that all the information Watson required could be expressed in words on a page. In the future, Watson will take on more open-ended problems. It will ultimately be able to interpret images, numbers, voices, and sensory information. It will participate in dialogue with human beings aimed at navigating vast quantities of information to solve extremely complicated yet common problems. The goal is to transform the way humans get things done, from health care and education to financial services and government.
One of the next challenges for Watson is to help doctors diagnose diseases and assess the best treatments for individual patients. IBM is working with physicians at Cleveland Clinic and Memorial Sloan-Kettering Cancer Center in New York to train Watson for this new role. The idea is not to prove that Watson could do the work of a doctor but to make Watson a useful aid to a physician. The Jeopardy! challenge pitted man against machine; with Watson and medicine, man and machine are taking on a challenge together—and going beyond what either could do on its own. It’s impossible for even the most accomplished doctors to keep up with the explosion of new knowledge in their fields. Watson can keep up to date, though, and provide doctors with the information they need. Diseases can be freakishly complicated, and they express themselves differently in each individual. Within the human genome, there are billions of combinations of variables that can figure in the course of a disease. So it’s no wonder that an estimated 15 to 20 percent of medical diagnoses are inaccurate or incomplete.1 Doctors know a lot about diseases and the practice of medicine. What they need help with is using evidence-based medicine to better evaluate and treat individuals.
Dr. Larry Norton, a world-renowned oncologist at Memorial Sloan-Kettering Cancer Center who is helping to train Watson, believes the computer will be able to synthesize encyclopedic medical and patient information to help physicians more quickly and easily identify treatment options for complex health conditions. “This is more than a machine,” Larry says. “Computer science is going to evolve rapidly and medicine will evolve with it. This is coevolution. We’ll help each other.”2
THE COMING ERA OF COGNITIVE COMPUTING
Watson’s potential to help with health care is just one of the possibilities opening up for next-generation technologies. Scientists at IBM and elsewhere are pushing the boundaries of science and technology fields ranging from nanotechnology to artificial intelligence with the goal of creating machines that do much more than calculate and organize and find patterns in data—they sense, learn, reason and interact naturally with people in powerful new ways. Watson’s exploits on TV were one of the first steps into a new phase in the evolution of information technology—the era of cognitive computing.
During this era, humans and machines will become more interconnected. Thomas Malone, director of the MIT Center for Collective Intelligence, says a big question for researchers as the era of cognitive computing unfolds is: How can people and computers be connected so that collectively they act more intelligently than any person, group, or computer has ever done before?3 This avenue of thought stretches back to the computing pioneer J.C.R. Licklider, who led the U.S. government project that evolved into the Internet. In 1960 he authored a paper, “Man-Computer Symbiosis,” in which he predicted that “in not too many years, human brains and computing machines will be coupled together very tightly and the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.”4 That time is fast approaching.
The new era of computing is not just an opportunity for society; it’s also a necessity. Only with the help of smart machines will we be able to deal adequately with the exploding complexity of today’s world and successfully address interlocking problems like disease and poverty and stress on natural systems. Computers today are brilliant idiots. They have tremendous capacities for storing information and performing numerical calculations—far superior to those of any human. Yet when it comes to another class of skills, the capacities for understanding, learning, adapting, and interacting, computers are woefully inferior to humans; there are many situations where computers can’t do a lot to help us.
Up until now, that hasn’t mattered much. Over the past sixty-plus years, computers have transformed the world by automating defined tasks and processes that can be codified in software programs as series of procedural “if A, then B” statements expressing logic or mathematical equations. Faced with more complex tasks or changes in tasks, software programmers add to or modify the steps in the operations they want the machine to perform. This model of computing—in which every step and scenario is determined in advance by a person—can’t keep up with the world’s evolving social and business dynamics or deliver on its potential. The emergence of social networking, sensor networks, and huge storehouses of business, scientific, and government records creates an abundance of information that tech-industry insiders call “big data.” Think of it as a parallel universe to the world of people, places, things, and their interrelationships. This digital universe is growing at about 60 percent each year.5
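A minimal sketch of that procedural style (our illustration, not an example from the book; the task and department names are invented): every case the program can handle must be spelled out by a programmer in advance, and anything unforeseen falls through.

```python
# Conventional "if A, then B" programming: each scenario is anticipated
# and hand-coded. The program cannot handle cases its author never wrote.

def route_claim(claim_type: str) -> str:
    """Route an insurance claim using explicit, pre-written rules."""
    if claim_type == "auto":
        return "auto-claims department"
    elif claim_type == "home":
        return "property-claims department"
    elif claim_type == "health":
        return "medical-review department"
    else:
        # Any situation the programmer did not foresee falls through here.
        return "manual triage"

print(route_claim("auto"))   # auto-claims department
print(route_claim("drone"))  # manual triage -- a new case the rules miss
```

Adding a new claim type means a programmer must edit the rules themselves, which is exactly the maintenance burden the paragraph above describes.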
The volume of data creates the potential for people to understand the environment around us with a depth and clarity that was simply not possible before. Governments and businesses struggle to come to grips with complex situations, such as the inner workings of a city or the behavior of global financial markets. In the cognitive era, using the new tools of decision science, we will be able to apply new kinds of computing power to huge amounts of data and achieve deeper insight into how things really work. Armed with those insights, we can develop strategies and design systems for achieving the best outcomes—taking into account the effects of the variable and the unknowable. Think of big data as a natural resource waiting to be mined. And in order to tap this vast resource, we need computers that “think” and interact more like we do.
The human brain evolved over millions of years to become a remarkable instrument of cognition. We are capable of sorting through multitudes of sensory impressions in the blink of an eye. For instance, faced with the chaotic scene of a busy intersection, we’re able to instantly identify people, vehicles, buildings, streets, and sidewalks and see how they relate to one another. We can recognize and greet a friend we haven’t seen for ten years even while sensing and prioritizing the need to avoid stepping in front of a moving bus. Today’s computers can’t do that.
With the exception of robots, tomorrow’s computers won’t need to navigate in the world the way humans do. But to help us think better they will need the underlying humanlike characteristics—learning, adapting, interacting, and some form of understanding—that make human navigation possible. New cognitive systems will extract insights from data sources that are almost totally opaque today, such as population-wide health-care records, or from new sources of information, such as sensors monitoring pollution in delicate marine environments. Such systems will still sometimes be programmed by people using “if A, then B” logic, but programmers won’t have to anticipate every procedure and every rule. Instead, computers will be equipped with interpretive capabilities that will let them learn from the data and adapt over time as they gain new knowledge or as the demands on them change.
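By contrast, a system with interpretive capabilities derives its behavior from data. A deliberately tiny sketch of the idea (a one-nearest-neighbor classifier in plain Python; the data and labels are invented for illustration, and real cognitive systems use far richer learning methods):

```python
# Learning from examples rather than hand-written rules: the behavior
# comes entirely from labeled data, and adding knowledge means adding
# examples, not editing program logic.

def nearest_label(examples, point):
    """Classify `point` with the label of its closest labeled example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda ex: dist(ex[0], point))[1]

# Labeled observations: (features, label)
examples = [((1.0, 1.0), "low risk"), ((8.0, 9.0), "high risk")]
print(nearest_label(examples, (2.0, 1.5)))  # low risk

# The system adapts simply by absorbing new data:
examples.append(((2.0, 6.0), "medium risk"))
print(nearest_label(examples, (2.5, 5.0)))  # medium risk
```

No rule about what makes a case "medium risk" was ever written; the new category was learned from a new example, which is the shift the paragraph above points to.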
The goal isn’t to replicate human brains, though. This isn’t about replacing human thinking with machine thinking. Rather, in the era of cognitive systems, humans and machines will collaborate to produce better results, each bringing their own superior skills to the partnership. The machines will be more rational and analytic—and, of course, possess encyclopedic memories and tremendous computational abilities. People will provide expertise, judgment, intuition, empathy, a moral compass, and human creativity.
To understand what’s different about this new era, it helps to compare it to the two previous eras in the evolution of information technology. The tabulating era began in the nineteenth century and continued into the 1940s. Mechanical tabulating machines automated the process of recording numbers and making calculations. They were essentially elaborate mechanical abacuses. People used them to organize data and make calculations that were helpful in everything from conducting a national population census to tracking the performance of a company’s sales force. The programmable computing era—today’s technologies—emerged in the 1940s. Programmable machines are still based on a design laid out by the Hungarian American mathematician John von Neumann. Electronic devices governed by software programs perform calculations, execute logical sequences of steps, and store information using millions of zeros and ones. Scientists built the first such computers for use in decrypting encoded messages in wartime. Successive generations of computing technology have enabled everything from space exploration to global manufacturing-supply chains to the Internet.
Tomorrow’s cognitive systems will be fundamentally different from the machines that preceded them. While traditional computers must be programmed by humans to perform specific tasks, cognitive systems will learn from their interactions with data and humans and be able to, in a sense, program themselves to perform new tasks. Traditional computers are designed to calculate rapidly; cognitive systems will be designed to draw inferences from data and pursue the objectives they were given. Traditional computers have only rudimentary sensing capabilities, such as license-plate-reading systems on toll roads. Cognitive systems will augment our hearing, sight, taste, smell, and touch. In the programmable-computing era, people have to adapt to the way computers work. In the cognitive era, computers will adapt to people. They’ll interact with us in ways that are natural to us.
Von Neumann’s architecture has persisted for such a long time because it provides a powerful means of performing many computing tasks. His scheme called for the processing of data via calculations and the application of logic in a central processing unit. Today, the CPU is a microprocessor, a stamp-sized sliver of silicon and metal that’s the brains of everything from smartphones and laptops to the largest mainframe computers. Other major components of the von Neumann design are the memory, where data are stored in the computer while waiting to be processed, and the technologies that bring data into the system or push it out. These components are connected to the central processing unit via a “bus”—essentially a highway for data. Most of the software programs written for today’s computers are based on this architecture.
But the design has a flaw that makes it inefficient: the von Neumann bottleneck. Each element of the process requires multiple steps where data and instructions are moved back and forth between memory and the CPU. That requires a tremendous amount of data movement and processing. It also means that discrete processing tasks have to be completed linearly, one at a time. While we have introduced some parallelism, it’s not enough. For decades, computer scientists have been able to rapidly increase the capabilities of CPUs by making them smaller and faster. But we’re reaching the limits of our ability to make those gains at a time when we need even more computing power to deal with complexity and big data. And that’s putting unbearable demands on today’s computing technologies—mainly because today’s computers require so much energy to perform their work.
What’s needed is a new architecture for computing, one that takes more inspiration from the human brain. Data processing should be distributed throughout the computing system rather than concentrated in a CPU. The processing and the memory should be closely integrated to reduce the shuttling of data and instructions back and forth. And discrete processing tasks should be executed simultaneously rather than serially. A cognitive computer employing these systems will respond to inquiries more quickly than today’s computers; less data movement will be required and less energy will be used.
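A toy model of why this matters (our illustration, not any actual IBM design): summing numbers on a von Neumann-style machine moves every operand across the bus into the CPU, while a memory-integrated design computes partial results where the data live and moves only those.

```python
# Counting data movement in two architectural styles. The "bus transfer"
# counts are the point of the sketch, not the sums themselves.

def von_neumann_sum(memory):
    """Every operand is shuttled from memory to the CPU: N bus transfers."""
    transfers, accumulator = 0, 0
    for operand in memory:
        transfers += 1          # one trip across the bus per operand
        accumulator += operand  # arithmetic happens only in the CPU
    return accumulator, transfers

def in_memory_sum(banks):
    """Each memory bank reduces its own data; only partial sums travel."""
    partials = [sum(bank) for bank in banks]  # computed where the data live
    transfers = len(partials)                 # one transfer per bank
    return sum(partials), transfers

data = list(range(1000))
print(von_neumann_sum(data))  # (499500, 1000)
banks = [data[i:i + 100] for i in range(0, 1000, 100)]
print(in_memory_sum(banks))   # (499500, 10)
```

Both designs reach the same answer, but the memory-integrated version moves a hundredth of the data, which is the energy and latency argument made above.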
Today’s von Neumann–style computing won’t go away when cognitive systems come online. New chip and computing technologies will extend its life far into the future. In many cases, the cognitive architecture and the von Neumann architecture will be employed side by side in hybrid systems. Traditional computing will become ever more capable while cognitive technologies will do things that were not possible before. Already, cloud, social networking, mobile, and new ways to interact with computing from tablets to glasses are fueling the desire for cognitive systems that will, for example, both harvest insights from social networks and enhance our experiences within them.
Should we fear the cognitive machines? MIT professors Erik Brynjolfsson and Andrew McAfee warn in their book Race Against the Machine that one side effect of this generation of advances in computing is that they come at the expense of existing jobs. We believe, though, that the most important effect of these technologies will be in assisting people to do what they are unable to do today, vastly expanding the problems we can solve and creating new spheres of innovation for every industry. And like previous eras of computing, this one will take a tremendous amount of innovation over decades. “These new capabilities will affect everything. It will be like the discovery of DNA,” predicts Ralph Gomory, a pioneer of applied mathematics who was director of IBM Research in the 1970s and 1980s and later head of the Alfred P. Sloan Foundation.6
HOW COGNITIVE SYSTEMS WILL HELP US BE SMARTER
As smart as human beings are, there are many things that we can’t do or simply can’t process in time to affect the outcome of a situation. Cognitive systems in many cases help us overcome our limitations.
COMPLEXITY
We have difficulty rapidly processing large amounts of information. We also have problems understanding the interactions among elements of large systems, such as the interplay of chemical compounds in the human body or the dynamics of financial markets. With cognitive computing, we will be able to harvest insights from huge quantities of data to handle complex situations, make more accurate predictions about the future, and better anticipate the unintended consequences of actions.
City mayors, for instance, already can begin to make sense of the interrelationships among urban subsystems—everything from electrical grids to weather to subways to demographic trends to issues reported or expressed by citizens. One example is monitoring social media during a major storm to spot patterns of words and images that indicate critical problems in particular neighborhoods. Much of this information will come from sensors—video cameras, instruments that detect motion, and devices that spot anomalies. Mobile phones will also be used as anonymized sensors that help city planners understand the movements of people and accurately predict the effects and financial impact of various actions.
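A hypothetical sketch of the storm-monitoring idea: scanning a stream of geotagged messages for storm-related keywords and flagging neighborhoods where mentions cluster. The keywords, messages, and threshold here are invented for illustration; a real deployment would use far more sophisticated language analysis.

```python
# Spotting neighborhoods with spikes of storm-related chatter in a
# social-media stream by simple keyword counting.

from collections import Counter

STORM_KEYWORDS = {"flood", "outage", "downed", "blocked"}

def flag_neighborhoods(messages, threshold=2):
    """Count storm-keyword mentions per neighborhood; flag hot spots."""
    counts = Counter()
    for neighborhood, text in messages:
        if STORM_KEYWORDS & set(text.lower().split()):
            counts[neighborhood] += 1
    return sorted(n for n, c in counts.items() if c >= threshold)

messages = [
    ("riverside", "flood water rising fast"),
    ("riverside", "street totally blocked by debris"),
    ("hilltop", "power outage on elm ave"),
    ("hilltop", "lights are back on now"),
]
print(flag_neighborhoods(messages))  # ['riverside']
```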
EXPERTISE
With the help of cognitive systems, we will be able to see the big picture and make better decisions. This is especially important when experience in an area is limited or we’re trying to address problems that cut across professional or practical domains.
For instance, police are beginning to gather crime statistics and combine them with information about demographics, events, building blueprints, and weather to produce better analysis and safer cities. Armed with abundant data, police chiefs can set strategies and deploy res...

Table of contents

  1. Cover 
  2. Half title
  3. Title
  4. Copyright
  5. Contents 
  6. Preface
  7. 1. A New Era of Computing
  8. 2. Building Learning Systems
  9. 3. Handling Big Data
  10. 4. Augmenting Our Senses
  11. 5. Designing Data-Centric Computers
  12. 6. Inventing a New Physics of Computing
  13. 7. Imagining the Cognitive City
  14. Coda: An Alliance of Human and Machine
  15. Notes