A NEW ERA OF COMPUTING
IBM's Watson computer created a sensation when it bested two past grand champions on the TV quiz show Jeopardy! Tens of millions of people suddenly understood how "smart" a computer could be. This was no mere parlor trick; the scientists who designed Watson built upon decades of research in the fields of artificial intelligence and natural-language processing and produced a series of breakthroughs. Their ingenuity made it possible for a system to excel at a game that requires both encyclopedic knowledge and lightning-quick recall. In preparation for the match, the machine ingested millions of pages of information. On the TV show, first broadcast in February 2011, the system was able to search that vast storehouse in response to questions, size up its confidence level, and, when sufficiently confident, beat the humans to the buzzer. After more than five years of intense research and development, a core team of about twenty scientists had made a very public breakthrough. They demonstrated that a computing system, using traditional strengths and overcoming assumed limitations, could beat expert humans in a complex question-and-answer competition using natural language.
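To picture that buzz-in behavior in schematic terms, here is a minimal sketch, purely hypothetical and in no way IBM's actual pipeline, of a question-answering routine that answers only when its confidence in the best candidate clears a threshold. The candidate answers, scores, and threshold are invented for illustration.

    # Schematic sketch (not IBM's actual system): answer only when the
    # top-scoring candidate's confidence clears a threshold.

    def answer_if_confident(candidates, threshold=0.7):
        """candidates: list of (answer_text, confidence) pairs scored elsewhere."""
        best_answer, confidence = max(candidates, key=lambda pair: pair[1])
        if confidence >= threshold:
            return best_answer       # confident enough: buzz in
        return None                  # otherwise stay silent

    # Hypothetical candidate answers with made-up confidence scores.
    print(answer_if_confident([("Answer A", 0.31), ("Answer B", 0.83), ("Answer C", 0.14)]))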
Now IBM scientists and software engineers are busy improving the Watson technology so it can take on much bigger and more useful tasks. The Jeopardy! challenge was relatively limited in scope. It was bound by the rules of the game and the fact that all the information Watson required could be expressed in words on a page. In the future, Watson will take on more open-ended problems. It will ultimately be able to interpret images, numbers, voices, and sensory information. It will participate in dialogue with human beings aimed at navigating vast quantities of information to solve extremely complicated yet common problems. The goal is to transform the way humans get things done, from health care and education to financial services and government.
One of the next challenges for Watson is to help doctors diagnose diseases and assess the best treatments for individual patients. IBM is working with physicians at Cleveland Clinic and Memorial Sloan-Kettering Cancer Center in New York to train Watson for this new role. The idea is not to prove that Watson could do the work of a doctor but to make Watson a useful aid to a physician. The Jeopardy! challenge pitted man against machine; with Watson and medicine, man and machine are taking on a challenge together, going beyond what either could do on its own. It's impossible for even the most accomplished doctors to keep up with the explosion of new knowledge in their fields. Watson can keep up to date, though, and provide doctors with the information they need. Diseases can be freakishly complicated, and they express themselves differently in each individual. Within the human genome, there are billions of combinations of variables that can figure in the course of a disease. So it's no wonder that an estimated 15 to 20 percent of medical diagnoses are inaccurate or incomplete.1 Doctors know a lot about diseases and the practice of medicine. What they need help with is using evidence-based medicine to better evaluate and treat individuals.
Dr. Larry Norton, a world-renowned oncologist at Memorial Sloan-Kettering Cancer Center who is helping to train Watson, believes the computer will be able to synthesize encyclopedic medical and patient information to help physicians more quickly and easily identify treatment options for complex health conditions. "This is more than a machine," Larry says. "Computer science is going to evolve rapidly and medicine will evolve with it. This is coevolution. We'll help each other."2
THE COMING ERA OF COGNITIVE COMPUTING
Watson's potential to help with health care is just one of the possibilities opening up for next-generation technologies. Scientists at IBM and elsewhere are pushing the boundaries of science and technology in fields ranging from nanotechnology to artificial intelligence, with the goal of creating machines that do much more than calculate, organize, and find patterns in data: they sense, learn, reason, and interact naturally with people in powerful new ways. Watson's exploits on TV were one of the first steps into a new phase in the evolution of information technology: the era of cognitive computing.
During this era, humans and machines will become more interconnected. Thomas Malone, director of the MIT Center for Collective Intelligence, says a big question for researchers as the era of cognitive computing unfolds is: How can people and computers be connected so that collectively they act more intelligently than any person, group, or computer has ever done before?3 This avenue of thought stretches back to the computing pioneer J.C.R. Licklider, who led the U.S. government project that evolved into the Internet. In 1960 he authored a paper, "Man-Computer Symbiosis," in which he predicted that "in not too many years, human brains and computing machines will be coupled together very tightly and the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today."4 That time is fast approaching.
The new era of computing is not just an opportunity for society; it's also a necessity. Only with the help of smart machines will we be able to deal adequately with the exploding complexity of today's world and successfully address interlocking problems like disease and poverty and stress on natural systems. Computers today are brilliant idiots. They have tremendous capacities for storing information and performing numerical calculations, far superior to those of any human. Yet when it comes to another class of skills, the capacities for understanding, learning, adapting, and interacting, computers are woefully inferior to humans; there are many situations where computers can't do a lot to help us.
Up until now, that hasn't mattered much. Over the past sixty-plus years, computers have transformed the world by automating defined tasks and processes that can be codified in software programs as a series of procedural "if A, then B" statements expressing logic or mathematical equations. Faced with more complex tasks or changes in tasks, software programmers add to or modify the steps in the operations they want the machine to perform. This model of computing, in which every step and scenario is determined in advance by a person, can't keep up with the world's evolving social and business dynamics or deliver on its potential. The emergence of social networking, sensor networks, and huge storehouses of business, scientific, and government records creates an abundance of information that tech-industry insiders call "big data." Think of it as a parallel universe to the world of people, places, things, and their interrelationships. This digital universe is growing at about 60 percent each year.5
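To make the "if A, then B" model concrete, here is a minimal, hypothetical sketch of rule-based programming: every condition and response must be spelled out in advance by a person, and anything the rules did not anticipate falls through unhandled. The ticket-routing scenario and function name are invented for illustration.

    # Minimal illustration of procedural "if A, then B" programming: every case
    # must be anticipated and written down in advance by a person.

    def route_support_ticket(ticket_text: str) -> str:
        text = ticket_text.lower()
        if "invoice" in text:
            return "billing"
        elif "password" in text:
            return "account security"
        elif "crash" in text:
            return "engineering"
        else:
            # Anything the programmer did not foresee falls through unhandled.
            return "manual review"

    print(route_support_ticket("My invoice total looks wrong"))   # billing
    print(route_support_ticket("The app freezes on startup"))     # manual review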
The volume of data creates the potential for people to understand the environment around us with a depth and clarity that was simply not possible before. Governments and businesses struggle to come to grips with complex situations, such as the inner workings of a city or the behavior of global financial markets. In the cognitive era, using the new tools of decision science, we will be able to apply new kinds of computing power to huge amounts of data and achieve deeper insight into how things really work. Armed with those insights, we can develop strategies and design systems for achieving the best outcomes, taking into account the effects of the variable and the unknowable. Think of big data as a natural resource waiting to be mined. And in order to tap this vast resource, we need computers that "think" and interact more like we do.
The human brain evolved over millions of years to become a remarkable instrument of cognition. We are capable of sorting through multitudes of sensory impressions in the blink of an eye. For instance, faced with the chaotic scene of a busy intersection, we're able to instantly identify people, vehicles, buildings, streets, and sidewalks and see how they relate to one another. We can recognize and greet a friend we haven't seen for ten years even while sensing and prioritizing the need to avoid stepping in front of a moving bus. Today's computers can't do that.
With the exception of robots, tomorrow's computers won't need to navigate in the world the way humans do. But to help us think better they will need the underlying humanlike characteristics (learning, adapting, interacting, and some form of understanding) that make human navigation possible. New cognitive systems will extract insights from data sources that are almost totally opaque today, such as population-wide health-care records, or from new sources of information, such as sensors monitoring pollution in delicate marine environments. Such systems will still sometimes be programmed by people using "if A, then B" logic, but programmers won't have to anticipate every procedure and every rule. Instead, computers will be equipped with interpretive capabilities that will let them learn from the data and adapt over time as they gain new knowledge or as the demands on them change.
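By contrast, a system that learns takes its behavior from examples rather than from hand-written rules. The toy sketch below, again hypothetical and far simpler than any real learning system, routes the same kinds of requests by counting words seen in labeled training examples, so its behavior can adapt as new examples arrive instead of being hard-coded.

    # Contrast with the fixed rules above: a toy router that learns its behavior
    # from labeled examples and can keep adapting as new examples arrive.
    from collections import Counter, defaultdict

    class LearnedRouter:
        def __init__(self):
            self.word_counts = defaultdict(Counter)   # label -> word frequencies

        def train(self, text: str, label: str):
            self.word_counts[label].update(text.lower().split())

        def route(self, text: str) -> str:
            words = text.lower().split()
            # Score each label by how often its training examples used these words.
            scores = {label: sum(counts[w] for w in words)
                      for label, counts in self.word_counts.items()}
            return max(scores, key=scores.get) if scores else "unknown"

    router = LearnedRouter()
    router.train("invoice total is wrong", "billing")
    router.train("app crashes on startup", "engineering")
    router.train("screen freezes and crashes", "engineering")
    print(router.route("the app freezes again"))   # engineering, learned rather than hard-coded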
The goal isn't to replicate human brains, though. This isn't about replacing human thinking with machine thinking. Rather, in the era of cognitive systems, humans and machines will collaborate to produce better results, each bringing their own superior skills to the partnership. The machines will be more rational and analytic and will, of course, possess encyclopedic memories and tremendous computational abilities. People will provide expertise, judgment, intuition, empathy, a moral compass, and human creativity.
To understand what's different about this new era, it helps to compare it to the two previous eras in the evolution of information technology. The tabulating era began in the nineteenth century and continued into the 1940s. Mechanical tabulating machines automated the process of recording numbers and making calculations. They were essentially elaborate mechanical abacuses. People used them to organize data and make calculations that were helpful in everything from conducting a national population census to tracking the performance of a company's sales force. The programmable computing era, the era of today's technologies, emerged in the 1940s. Programmable machines are still based on a design laid out by the Hungarian American mathematician John von Neumann. Electronic devices governed by software programs perform calculations, execute logical sequences of steps, and store information using millions of zeros and ones. Scientists built the first such computers for use in decrypting encoded messages in wartime. Successive generations of computing technology have enabled everything from space exploration to global manufacturing-supply chains to the Internet.
Tomorrow's cognitive systems will be fundamentally different from the machines that preceded them. While traditional computers must be programmed by humans to perform specific tasks, cognitive systems will learn from their interactions with data and humans and be able to, in a sense, program themselves to perform new tasks. Traditional computers are designed to calculate rapidly; cognitive systems will be designed to draw inferences from data and pursue the objectives they were given. Traditional computers have only rudimentary sensing capabilities, such as license-plate-reading systems on toll roads. Cognitive systems will augment our hearing, sight, taste, smell, and touch. In the programmable-computing era, people have to adapt to the way computers work. In the cognitive era, computers will adapt to people. They'll interact with us in ways that are natural to us.
Von Neumann's architecture has persisted for such a long time because it provides a powerful means of performing many computing tasks. His scheme called for the processing of data via calculations and the application of logic in a central processing unit. Today, the CPU is a microprocessor, a stamp-sized sliver of silicon and metal that's the brains of everything from smartphones and laptops to the largest mainframe computers. Other major components of the von Neumann design are the memory, where data are stored in the computer while waiting to be processed, and the technologies that bring data into the system or push it out. These components are connected to the central processing unit via a "bus," essentially a highway for data. Most of the software programs written for today's computers are based on this architecture.
But the design has a flaw that makes it inefficient: the von Neumann bottleneck. Each element of the process requires multiple steps where data and instructions are moved back and forth between memory and the CPU. That requires a tremendous amount of data movement and processing. It also means that discrete processing tasks have to be completed linearly, one at a time. While we have introduced some parallelism, it's not enough. For decades, computer scientists have been able to rapidly increase the capabilities of CPUs by making them smaller and faster. But we're reaching the limits of our ability to make those gains at a time when we need even more computing power to deal with complexity and big data. And that's putting unbearable demands on today's computing technologies, mainly because today's computers require so much energy to perform their work.
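A toy simulation can make that shuttling visible. The sketch below is purely illustrative, with an invented "memory" dictionary and a bus-transfer counter; it shows how even a single addition requires several serial trips between memory and the processor.

    # Toy model of the von Neumann pattern: data shuttles over a shared "bus"
    # between memory and the CPU, and operations complete one at a time.

    memory = {"a": 3, "b": 4, "result": None}
    bus_transfers = 0

    def fetch(address):
        global bus_transfers
        bus_transfers += 1            # memory -> CPU
        return memory[address]

    def store(address, value):
        global bus_transfers
        bus_transfers += 1            # CPU -> memory
        memory[address] = value

    # Even one addition costs several trips across the bus, performed serially.
    x = fetch("a")
    y = fetch("b")
    store("result", x + y)
    print(memory["result"], "computed with", bus_transfers, "bus transfers")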
What's needed is a new architecture for computing, one that takes more inspiration from the human brain. Data processing should be distributed throughout the computing system rather than concentrated in a CPU. The processing and the memory should be closely integrated to reduce the shuttling of data and instructions back and forth. And discrete processing tasks should be executed simultaneously rather than serially. A cognitive computer employing these principles will respond to inquiries more quickly than today's computers; less data movement will be required and less energy will be used.
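A rough sketch of that alternative, under the simplifying assumption that each processing unit already holds its own shard of data: the units compute locally and in parallel, and only small partial results are combined at the end. The shards and worker function are made up for illustration.

    # Sketch of the alternative: many units, each pairing its own local memory
    # with its own processing, working at the same time.
    from concurrent.futures import ProcessPoolExecutor

    def process_locally(shard):
        # Each unit computes on data it already holds; nothing shuttles to a central CPU.
        return sum(shard)

    data_shards = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]   # data distributed across units

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            partial_results = list(pool.map(process_locally, data_shards))
        print(sum(partial_results))   # 45: results combined after parallel local work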
Today's von Neumann-style computing won't go away when cognitive systems come online. New chip and computing technologies will extend its life far into the future. In many cases, the cognitive architecture and the von Neumann architecture will be employed side by side in hybrid systems. Traditional computing will become ever more capable while cognitive technologies will do things that were not possible before. Already, cloud computing, social networking, mobile devices, and new ways to interact with computers, from tablets to glasses, are fueling the desire for cognitive systems that will, for example, both harvest insights from social networks and enhance our experiences within them.
Should we fear the cognitive machines? MIT professors Erik Brynjolfsson and Andrew McAfee warn in their book Race Against the Machine that one side effect of this generation of advances in computing is that they come at the expense of existing jobs. We believe, though, that the most important effect of these technologies will be to assist people in doing what they are unable to do today, vastly expanding the problems we can solve and creating new spheres of innovation for every industry. And, like previous eras of computing, this one will take a tremendous amount of innovation over decades. "These new capabilities will affect everything. It will be like the discovery of DNA," predicts Ralph Gomory, a pioneer of applied mathematics who was director of IBM Research in the 1970s and 1980s and later head of the Alfred P. Sloan Foundation.6
HOW COGNITIVE SYSTEMS WILL HELP US BE SMARTER
As smart as human beings are, there are many things that we can't do or simply can't process in time to affect the outcome of a situation. In many cases, cognitive systems will help us overcome those limitations.
COMPLEXITY
We have difficulty rapidly processing large amounts of information. We also have problems understanding the interactions among elements of large systems, such as the interplay of chemical compounds in the human body or the dynamics of financial markets. With cognitive computing, we will be able to harvest insights from huge quantities of data to handle complex situations, make more accurate predictions about the future, and better anticipate the unintended consequences of actions.
City mayors, for instance, already can begin to make sense of the interrelationships among urban subsystems: everything from electrical grids to weather to subways to demographic trends to issues reported or expressed by citizens. One example is monitoring social media during a major storm to spot patterns of words and images that indicate critical problems in particular neighborhoods. Much of this information will come from sensors: video cameras, instruments that detect motion, and devices that spot anomalies. Mobile phones will also be used as anonymized sensors that help city planners understand the movements of people and accurately predict the effects and financial impact of various actions.
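As a toy illustration of the storm-monitoring example, the sketch below scans a handful of invented social posts for distress keywords and tallies them by neighborhood; the neighborhoods, posts, and keyword list are all hypothetical.

    # Toy illustration: flag (made-up) social posts containing distress keywords
    # and tally them by neighborhood to surface trouble spots during a storm.
    from collections import Counter

    DISTRESS_TERMS = {"flood", "flooding", "power", "outage", "trapped", "downed"}

    posts = [   # hypothetical posts tagged with a neighborhood
        ("Riverside", "basement flooding fast, need sandbags"),
        ("Riverside", "power outage on elm street"),
        ("Hillcrest", "windy but fine here"),
        ("Riverside", "tree downed across the road"),
    ]

    alerts = Counter()
    for neighborhood, text in posts:
        if DISTRESS_TERMS & set(text.lower().split()):
            alerts[neighborhood] += 1

    print(alerts.most_common())   # Riverside surfaces as the area needing attention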
EXPERTISE
With the help of cognitive systems, we will be able to see the big picture and make better decisions. This is especially important when experience in an area is limited or we're trying to address problems that cut across professional or practical domains.
For instance, police are beginning to gather crime statistics and combine them with information about demographics, events, building blueprints, and weather to produce better analysis and safer cities. Armed with abundant data, police chiefs can set strategies and deploy resources ...