Trillions

Thriving in the Emerging Information Ecology

About this book

We are facing a future of unbounded complexity. Whether that complexity is harnessed to build a world that is safe, pleasant, humane, and profitable, or whether it causes us to careen off a cliff into an abyss of mind-numbing junk is an open question. The challenges and opportunities—technical, business, and human—that this technological sea change will bring are without precedent. Entire industries will be born and others will be laid to ruin as our society navigates this journey.

There are already many more computing devices in the world than there are people. In a few more years, their number will climb into the trillions. We put microprocessors into nearly every significant thing that we manufacture, and the cost of routine computing and storage is rapidly becoming negligible. We have literally permeated our world with computation. But more significant than mere numbers is the fact we are quickly figuring out how to make those processors communicate with each other, and with us. We are about to be faced, not with a trillion isolated devices, but with a trillion-node network: a network whose scale and complexity will dwarf that of today's Internet. And, unlike the Internet, this will be a network not of computation that we use, but of computation that we live in.

Written by the leaders of one of America's leading pervasive computing design firms, this book gives a no-holds-barred insiders' account of both the promise and the risks of the age of Trillions. It is also a cautionary tale of the head-in-the-sand attitude with which many of today's thought leaders are approaching these issues. Trillions is a field guide to the future—designed to help businesses and their customers prepare to prosper in the emerging information ecology.

Information

Authors
Peter Lucas, Joe Ballay, Mickey McManus
Publisher
Wiley
Year
2012
Print ISBN
9781118176078
eBook ISBN
9781118240069
Edition
1
CHAPTER 1
The Future, So Far
Behind all the great material inventions of the last century and a half was not merely a long internal development of technics: There was also a change of mind.
—LEWIS MUMFORD
There is a point of view—generally called “technological determinism”—that essentially says that each technological breakthrough inexorably leads to the next. Once we have light bulbs, we will inevitably stumble upon vacuum tubes. When we see what they can do, we will rapidly be led to transistors, and integrated circuits and microprocessors will not be far behind. This process—goes the argument—is essentially automatic, with each domino inevitably knocking down the next, as we careen toward some unknown but predetermined future.
We are not sure we would go that far, but it is certainly the case that each technological era sets the stage for the next. The future may or may not be determined, but a discerning observer can do a credible job of paring down the alternatives. All but the shallowest of technological decisions are necessarily made far in advance of their appearance in the market, and by the time we read about an advance on the cover of Time magazine, the die has long since been cast. Indeed, although designers of all stripes take justifiable pride in their role of “inventing the future,” a large part of their day-to-day jobs involves reading the currents and eddies of the flowing river of science and technology in order to help their clients navigate.
Although we are prepared to go out on a limb or two, it won’t be in this chapter. Many foundational aspects of the pervasive-computing future have already been determined, and many others will follow all but inevitably from well-understood technical, economic, and social processes. In this chapter, we will make predictions about the future, some of which may not be immediately obvious. But we will try to limit these predictions to those that most well-informed professionals would agree with. If you are one of these professionals (that is to say, if you find the term pervasive computing and its many synonyms commonplace), you may find this chapter tedious, and you should feel free to skip ahead. But if the sudden appearance of the iPad took you by surprise, or if you have difficulty imagining a future without laptops or web browsers, then please read on.
TRILLIONS IS A DONE DEAL
To begin with, there is this: There are now more computers in the world than there are people. Lots more. In fact, more computers, in the form of microprocessors, are now manufactured each year than there are living people. If you step down a level and count the building blocks of computing—transistors—you find an even more startling statistic: as early as 2002, the semiconductor industry was touting that the world produces more transistors than grains of rice, and more cheaply. But counting microprocessors is eye-opening enough. Accurate production numbers are hard to come by, but a reasonable estimate is ten billion processors per year. And the number is growing rapidly.
Many people find this number implausible. Where could all these computers be going? Many American families have a few PCs or laptops—you probably know some geeks that have maybe eight or ten. But many households still have none. Cell phones and iPads count, too. But ten billion a year? Where could they all possibly be going?
The answer is everywhere. Only a tiny percentage of processors find their way into anything that we would recognize as a computer. Every modern microwave oven has at least one, as do washing machines, stoves, vacuum cleaners, wristwatches, and so on. Indeed, it is becoming increasingly difficult to find a recently designed electrical device of any kind that does not employ microprocessor technology.
Why would one put a computer in a washing machine? There are some quite interesting answers to this question that we will get to later. But for present purposes, let’s just stick to the least interesting answer: It saves money. If you own a washer more than ten years old, it most likely has one of those big, clunky knobs that you pull and turn in order to set the cycle. A physical pointer turns with it, showing at a glance which cycle you have chosen and how far into that cycle the machine has progressed. This is actually a pretty good bit of human-centered design. The pointer is clear and intuitive, and the act of physically moving the pointer to where you want it to be is satisfyingly literal. However, if you have a recently designed washer, this knob has probably been replaced with a bunch of buttons and a digital display, which, quite possibly, is not as easy to use.
So why the step backward? Well, let’s think for a second about that knob and pointer. They are the tip of an engineering iceberg. Behind them is a complex and expensive series of cams, clockwork, and switch contacts whose purpose is to turn on and off all the different valves, lights, buzzers, and motors throughout the machine. It even has a motor of its own, needed to keep things moving forward. That knob is the most complex single part in the appliance. A major theme of twentieth-century industrialization involved learning how to build such mechanically complex devices cheaply and reliably. The analogous theme of the early twenty-first century is the replacement of such components with mechanically trivial microprocessor-based controllers. This process is now ubiquitous in the manufacturing world.
In essence, the complexity that formerly resided in intricate electromechanical systems has almost completely migrated to the ethereal realm of software. Now, you might think that complexity is complexity and we will pay for it one way or another. There is truth in this statement, as we will see. However, there is a fundamental economic difference between complexity-as-mechanism and complexity-as-software. The former represents a unit cost, and the latter is what is known as a nonrecurring engineering expense (NRE). That is to say, the manufacturing costs of mechanical complexity recur for every unit made, whereas the replication cost of a piece of software—no matter how complex—approaches zero.
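To make that difference concrete, here is a minimal back-of-the-envelope sketch, using entirely hypothetical cost figures, of how a one-time software expense (the NRE) amortizes across a production run while the cost of a mechanical assembly recurs on every unit.

# A back-of-the-envelope sketch with entirely hypothetical figures: a mechanical
# timer assembly costs the same on every unit built, while a software controller's
# development cost is a nonrecurring engineering expense (NRE) amortized over the
# whole production run, plus a small marginal cost for a cheap microcontroller.

def per_unit_cost_mechanical(unit_cost):
    # Mechanical complexity: the full cost recurs for every unit made.
    return unit_cost

def per_unit_cost_software(nre, marginal_cost, units):
    # Software complexity: one-time NRE spread across the run, plus marginal hardware.
    return nre / units + marginal_cost

if __name__ == "__main__":
    units = 1_000_000                                  # hypothetical production run
    mech = per_unit_cost_mechanical(unit_cost=12.00)   # hypothetical cost of the clockwork knob
    soft = per_unit_cost_software(nre=500_000, marginal_cost=1.50, units=units)
    print(f"Mechanical controller: ${mech:.2f} per unit")
    print(f"Software controller:   ${soft:.2f} per unit")
    # At this volume the amortized NRE adds only $0.50 per unit, so the
    # software-based controller wins by a wide margin; at a run of 10 units
    # the same NRE would add $50,000 per unit and the clockwork would win.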
This process of substituting "free" software for expensive mechanism repeats itself in product after product, and industry after industry. It is in itself a powerful driver of our climb toward Trillions. As manufacturing costs increase and computing costs decrease, the process works its way down the scale of complexity. It is long since complete in critical and subtle applications such as automotive engine control and industrial automation. It is nearly done in middling applications such as washing machines and blenders, and it has made significant inroads in low-end devices such as light switches and air-freshener dispensers.
Money-saving is a powerful engine for change. As the generalization from these few examples makes clear, even if computerized products had no functional advantage whatsoever over their mechanical forebears, the rapid computerization of the built world would be assured. But this is just the beginning of the story. So far, we have been considering only the use of new technology to do old things. The range of products and services that were not practical before computerization is far larger. For every opportunity to replace some existing mechanism with a processor, there are hundreds of new products that were either impossible or prohibitively expensive in the precomputer era. Some of these are obvious: smartphones, GPS devices, DVD players, and all the other signature products of our age. But many others go essentially unnoticed, often written off as trivialities or gimmicks. Audio birthday cards are old news, even cards that can record the voice of the sender. Sneakers that send runners’ stride data to mobile devices are now commonplace. Electronic tags sewn into hotel towels that guard against pilferage, and capture new forms of revenue from souvenirs, are becoming common. The list is nearly endless.
Automotive applications deserve a category of their own. Every modern automobile contains many dozens of processors. High-end cars contain hundreds. Obvious examples include engine-control computers and GPS screens. Less visible are the controllers inside each door that implement a local network for controlling and monitoring the various motors, actuators, and sensors inside the door—thus saving the expense and weight of running bulky cables throughout the vehicle. Similar networks direct data from accelerometers and speed sensors, not only to the vehicle’s GPS system, but also to advanced braking and stability control units, each with its own suite of processors. Drilling further down into the minutiae of modern vehicle design, one finds intelligent airbag systems that deploy with a force determined by the weight of the occupant of each seat. How do they know that weight? Because the bolts holding the seats in place each contain a strain sensor and a microprocessor. The eight front-seat bolts plus the airbag controller form yet another local area network dedicated to the unlikely event of an airbag deployment.
We will not belabor the point, but such lists of examples could go on indefinitely. Computerization of almost literally everything is a simple economic imperative. Clearly, ten billion processors per year is not the least bit implausible. And that means that a near-future world containing trillions of computers is simply a done deal. Again, we wish to emphasize that the argument so far in no way depends upon a shift to an information economy or a desire for a smarter planet. It depends only on simple economics and basic market forces. We are building the trillion-node network not because we can, but because it makes economic sense. In this light, a world containing a trillion processors is no more surprising than a world containing a trillion nuts and bolts. But, of course, the implications are very different.
CONNECTIVITY WILL BE THE SEED OF CHANGE
In his 1989 book Disappearing through the Skylight, O. B. Hardison draws a distinction between two modes in the introduction of new technologies—what he calls “classic” versus “expressive”:
To review types of computer music is to be reminded of an important fact about the way technology enters culture and influences it. Some computer composers write music that uses synthesized organ pipe sounds, the wave forms of Stradivarius violins, and onstage Bösendorfer grands in order to sound like traditional music. In this case the technology is being used to do more easily or efficiently or better what is already being done without it. This can be called "classic" use of the technology. The alternative is to use the capacities of the new technology to do previously impossible things, and this second use can be called "expressive." . . .
It should be added that the distinction between classic and expressive is provisional because whenever a truly new technology appears, it subverts all efforts to use it in a classic way. . . . For example, although Gutenberg tried to make his famous Bible look as much like a manuscript as possible and even provided for hand-illuminated capitals, it was a printed book. What it demonstrated in spite of Gutenberg—and what alert observers throughout Europe immediately understood—was that the age of manuscripts was over. Within fifty years after Gutenberg’s Bible, printing had spread everywhere in Europe and the making of fancy manuscripts was an anachronism. In twenty more years, the Reformation had brought into existence a new phenomenon—the cheap, mass-produced pamphlet-book.
Adopting Hardison’s terminology, we may state that the substitution of software for physical mechanism, no matter how many billions of times we do it, is an essentially classic use of computer technology. That is to say, it is not particularly disruptive. The new washing machines may be cheaper, quieter, more reliable, and conceivably even easier to use than the old ones, but they are still just washing machines and hold essentially the same position in our homes and lives as their more mechanical predecessors. Cars with computers instead of carburetors are still just cars. At the end of the day, a world in which every piece of clockwork has experienced a one-to-one replacement by an embedded processor is a world that has not undergone fundamental change.
But, this is not the important part of the story. Saving money is the proximal cause of the microprocessor revolution, but its ultimate significance lies elsewhere. A world with billions of isolated processors is a world in a kind of supersaturation—a vapor of potential waiting only for an appropriate seed to suddenly trigger a condensation into something very new. The nature of this seed is clear, and as we write it is in the process of being introduced. That seed is connectivity. All computing is about data-in and data-out. So, in some sense, all computing is connected computing—we shovel raw information in and shovel processed information out. One of the most important things that differentiates classic from expressive uses of computers is who or what is doing the shoveling. In the case of isolated processors such as our washing-machine controller, the shoveler is the human being turning that pointer. Much of the story of early twenty-first century computing is a story of human beings spending their time acquiring information from one electronic venue and re-entering it into another. We read credit card numbers from our cell phone screens, only to immediately speak or type them back into some other computer. So we already have a network. But as long as the dominant transport mechanism of that network involves human attention and effort, the revolution will be deferred.
Things are changing fast, however. Just as the advent of cheap, fast modems very rapidly transformed the PC from a fancy typewriter/calculator into an end node of the modern Internet, so too is a new generation of data-transport technologies rapidly transforming a trillion fancy clockwork-equivalents into the trillion-node network.
An early essay in such expressive networking can be found in a once wildly popular but now largely forgotten product from the 1990s. It was called the Palm Pilot. This device was revolutionary not because it was the first personal digital assistant (PDA)—it was not. It was revolutionary because it was designed from the bottom up with the free flow of information across devices in mind. The very first Palm Pilot came with “HotSync” capabilities. Unlike previous PDAs, the Pilot was designed to seamlessly share data with a PC. It came with a docking station having a single, inviting button. One push, and your contact and calendar data flowed effortlessly to your desktop—no stupid questions or inscrutable fiddling involved. Later versions of the Palm also included infrared beaming capabilities—allowing two Palm owners to exchange contact information almost as easily as they could exchange physical business cards.
In this day—only a decade later—of always-connected smartphones, these capabilities seem modest—even quaint. But they deserve our attention. It is one thing to shrink a full-blown PC with all its complexity down to the size of a bar of soap and then put it onto the Internet. It is quite another to do the same for a device no more complex than a fancy pocket calculator. The former is an impressive achievement indeed. But, it is an essentially classic application of traditional client-server networking technology. The iPhone truly is magical, but in most ways, it stands in the same relation to the Internet as the PC, which it is rapidly supplanting—namely it is a terminal for e-mail and web access and a platform for the execution of discrete apps. It is true that some of those apps give the appearance of direct phone-phone communications. (Indeed, a few really do work that way, and Apple has begun to introduce new technologies to facilitate such communication). But it is fair to say that the iPhone as it was originally introduced—the one that swept the world—was essentially a client-server device. Its utility was almost completely dependent upon frequent (and for many purposes, constant) connections to fixed network infrastructure.
The Palm Pilot, in its modest way, was different. It communicated with its associated PC or another Palm Pilot in a true peer-to-peer way, with no centralized "service" intervening. Its significance is that it hinted at a swarm of relatively simple devices intercommunicating directly, with no single point of failure capable of bringing down the whole system. It pointed the way toward a new, radically decentralized ecology of computational devices.
The Pilot turned out to be ...

Table of contents

  1. Cover
  2. Title Page
  3. Copyright
  4. Dedication
  5. Route Map for the Ascent of Trillions Mountain
  6. Preface
  7. Acknowledgments
  8. Chapter 1: The Future, So Far
  9. Chapter 2: The Next Mountain
  10. Interlude: Yesterday, Today, Tomorrow: Platforms and User Interfaces
  11. Interlude: Yesterday, Today, Tomorrow: Data Storage
  12. Notes
  13. About the Authors
  14. Index