Chapter One
The Harbinger
Before I knew what Silicon Valley was, I had seen a computer. It was December 1979, and our next-door neighbor had brought home a build-it-yourself computer kit. I remember him assembling the device on his living room floor and plugging it into a black-and-white television set. After my neighbor meticulously punched in a series of commands, the screen transformed into a tapestry of blocky pixels.
I took the machine in with all the wonder of a seven-year-old. Until then, I had only seen computers depicted in TV shows and movies. Here was one I could touch. But it was more remarkable, I think now, that such a contraption had even made it to a small suburb of Lusaka, Zambia, in the 1970s. The global supply chain was primordial and remote shopping all but non-existent, and yet the first signs of the digital revolution were already visible.
The build-it-yourself kit piqued my interest. Two years later, I got my own first computer: a Sinclair ZX81, picked up in the autumn of 1981, a year after moving to a small town in the hinterlands beyond London. The ZX81 still sits on my bookshelf at home. It has the footprint of a seven-inch record sleeve and is about as deep as your index and middle fingers. Compared to the other electronic items in early-1980s living rooms, such as the vacuum-tubed television or the large cassette deck, the ZX81 was compact and light. Pick-up-with-your-thumb-and-forefinger light. The built-in keyboard, unforgiving and taut when pressed, wasn't something you could type on. It only responded to stiff, punctuated jabs of the kind you might use to admonish a friend. But you could get a lot out of this little box. I remember programming simple calculations, drawing basic shapes, and playing primitive games on it.
This device, advertised in daily newspapers across the UK, was a breakthrough. For £69 (or about $145 at the time), we got a fully functional computer. Its simple programming language was, in principle, capable of solving any computational problem, however complicated (although it might have taken a long time).10 But the ZX81 wasn't around for long. Technology was developing quickly. Within a few years, my computer, with its blocky black-and-white graphics, clumsy keyboard, and slow processing, was approaching obsolescence. Within six years, my family had upgraded to a more modern device, made by Britain's Acorn Computers. The Acorn BBC Master was an impressive beast, with a full-sized keyboard and a numeric keypad. Its row of orange special-function keys wouldn't have looked out of place on a prop from a 1980s space opera.
If the exterior looked different from the ZX81's, the interior had undergone a complete transformation. The BBC Master ran several times faster. It had 128 times as much memory. It could muster as many as sixteen different colors, although it was limited to displaying eight at a time. Its tiny speaker could emit up to four distinct tones, just enough for simple renditions of music; I recall it beeping its way through Bach's Toccata and Fugue in D Minor. The BBC Master's relative sophistication allowed for powerful applications, including spreadsheets (which I never used) and games (which I did).
Another six years later, in the early 1990s, I upgraded again. By then, the computer industry had been through a period of brutal consolidation. Devices like the TRS-80, Amiga 500, Atari ST, Osborne 1, and Sharp MZ-80 had vied for success in the market. Some small companies enjoyed short-lived success but found themselves losing out to a handful of ascendant new tech firms.
It was Microsoft and Intel that emerged from the evolutionary death match of the 1980s as the fittest of their respective species: the operating system and the central processing unit. They spent the next couple of decades in a symbiotic relationship, with Intel delivering more computational power and Microsoft using that power to deliver better software. Each generation of software taxed the computers a little more, forcing Intel to improve its next processor. "What Andy giveth, Bill taketh away," went the industry joke (Andy Grove was Intel's CEO; Bill Gates, Microsoft's cofounder).
At the age of nineteen, I was oblivious to these industry dynamics. All I knew was that computers were getting faster and better, and I wanted to be able to afford one. Students tended to buy so-called PC clones: cheap, half-branded boxes that copied the eponymous IBM Personal Computer. These were computers built from various components that adhered to the PC standard, meaning they were equipped with Microsoft's latest operating system, the software that allowed users (and programmers) to control the hardware.
My clone, an ugly cuboid, sported the latest Intel processor: an 80486. This processor could crunch through eleven million instructions per second, probably four or five times more than my previous computer. A button on the case marked "Turbo" could force the processor to run some 20 percent faster. Like a car whose driver keeps their foot on the accelerator, however, the added speed came at the cost of frequent crashes.
This computer came with four megabytes of memory (or RAM), a four-thousand-fold improvement on the ZX81. The graphics were jaw-dropping, though not state-of-the-art. I could throw 32,768 colors on the screen, using a not-quite cutting-edge graphics adaptor that I plugged into the machine. This rainbow palette was impressive but not lifelike; blues in particular displayed poorly. If my budget had stretched £50 (or about $85 at the time) further, I might have bought a graphics card that painted sixteen million colors, so many that the human eye could barely discern between some of the hues.
The ten-year journey from the ZX81 to the PC clone reflected a period of exponential technological change. The PC cloneâs processor was thousands of times more powerful than the ZX81âs, and the computer of 1991 was millions of times more capable than that of 1981. That transformation was a result of swift progress in the nascent computing industry, which approximately translated to a doubling of the speed of computers every couple of years.
To understand this transformation, we need to examine how computers work. Writing in the nineteenth century, the English mathematician and philosopher George Boole set out to represent logic as a series of binaries. These binary digits, known as "bits," can be represented by anything, really. You could represent them mechanically by the positions of a lever: one up, one down. You could, theoretically, represent bits with M&Ms: some blue, some red. (This is certainly tasty, but not practical.) Scientists eventually settled on 1 and 0 as the best binary to use.
In the earliest days of computing, getting a machine to execute Boolean logic was difficult and cumbersome. A computer (basically, any device that could carry out operations using Boolean logic) required dozens of clumsy mechanical parts. But a key breakthrough came in 1938, when Claude Shannon, then a master's student at the Massachusetts Institute of Technology, realized that electronic circuits could be built to embody Boolean logic, with on and off representing 1 and 0. It was a transformative discovery, one which paved the way for computers built from electronic components. The first programmable, electronic, digital computer would famously be used by a team of Allied codebreakers, including Alan Turing, during World War II.
Two years after the end of the war, scientists at Bell Labs developed the transistor, a device built from a semiconductor: a material that conducts electricity under some conditions but not others. Semiconductors could be made into useful switches. These switches in turn could be used to build "logic gates," devices that perform elementary logic calculations. Many of these logic gates could be stacked together to form a useful computing device.
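The idea that gates can be stacked into something that computes can be sketched in a few lines of code. This is an illustration only (the function names are mine, and real hardware is not designed this way): a single NAND gate is enough to compose every other gate, and those gates can be stacked into a circuit that adds two bits.

```python
def nand(a, b):
    """The primitive switch-like gate: outputs 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

# Every other gate can be stacked from NAND alone.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b): return and_(or_(a, b), nand(a, b))

def half_adder(a, b):
    """Add two bits, returning (sum_bit, carry_bit)."""
    return xor_(a, b), and_(a, b)

print(half_adder(1, 1))  # (0, 1): binary 10, i.e. 1 + 1 = 2
```

Stack enough of these adders together and you can add numbers of any size; stack enough of those circuits together and, in essence, you have a computer.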
This may sound technical, but the implications were simple: the new transistors were smaller and more reliable than the valves used in the earliest electronic computers, and they paved the way for more sophisticated machines. In December 1947, when scientists built the first transistor, it was clunky, patched together from a large number of components, including a paper clip. But it worked. Over the years, transistors would become less ad hoc and more consistently engineered.
From the 1940s onwards, the goal became to make transistors smaller. In 1960, Robert Noyce at Fairchild Semiconductor developed the world's first "integrated circuit," which combined several transistors into a single component. These transistors were tiny and could not be handled individually by man or machine. They were made through an elaborate process a little like chemical photography, called photolithography. Engineers would shine ultraviolet light through a film bearing a circuit design, much like a child's stencil. This imprinted the circuit onto a silicon wafer, and the process could be repeated several times on a single wafer, until several transistors sat on top of one another. Each wafer might contain several identical copies of a circuit, laid out in a grid. Slice off one copy and you have a silicon "chip."
One of the first people to understand the power of this technology was Gordon Moore, a researcher working for Noyce. Five years after his boss's invention, Moore realized that the physical area of integrated circuits was shrinking by about 50 percent every year, without any decrease in the number of transistors. The films, or "masks," used in photolithography were getting more detailed; the transistors and connections smaller; the components themselves more intricate. This reduced costs and improved performance. Newer chips, with their smaller components and tighter packing, were faster than older ones.
Moore looked at these advances, and in 1965 he came up with a hypothesis. He postulated that these developments would double the effective speed of a chip for the same cost over a certain period of time.11 He eventually settled on the estimate that, every eighteen to twenty-four months, chips would get twice as powerful for the same cost. Moore went on to cofound Intel, the greatest chip manufacturer of the twentieth century. But he is probably more famous for his theory, which became known as "Moore's Law."
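Moore's estimate is easy to turn into arithmetic. A minimal sketch of the compounding it implies, assuming the two-year end of his eighteen-to-twenty-four-month range:

```python
# If chip performance per dollar doubles every two years (the upper
# end of Moore's 18-24 month estimate), a decade of compounding
# yields a 32-fold improvement.
years = 10
doubling_period = 2.0  # years; an assumption from Moore's range
improvement = 2 ** (years / doubling_period)
print(improvement)  # 32.0
```

At the faster eighteen-month cadence, the same decade would deliver roughly a hundredfold gain: such is the sensitivity of exponential growth to its doubling period.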
This "law" is easy to misunderstand; it is not like a law of physics. Laws of physics, based on robust observation, have a predictive quality. Newton's Laws of Motion cannot be refuted by everyday human behavior. Newton told us that force equals mass times acceleration, and this is almost always true.12 It doesn't matter what you do or don't do, what time of day it is, or whether you have a profit target to hit.
Moore's Law, on the other hand, is not predictive; it is descriptive. Once Moore outlined his law, the computer industry, from chipmakers to the myriad suppliers who supported them, came to see it as an objective. And so it became a "social fact": not something inherent to the technology itself, but something wished into existence by the computer industry. The materials firms, the electronic designers, the laser manufacturers: they all wanted Moore's Law to hold true. And so it did.13
But that did not make Moore's Law any less powerful. It has been a pretty good guide to computers' progress since Moore first articulated it. Chips did get more transistors. And they followed an exponential curve: at first getting imperceptibly faster, and then racing away at rates that are hard to comprehend.
Take the graphs below. The top one shows the growth of transistors per microchip from 1971 to 2017. That this graph looks almost flat until 2005 reflects the power of exponential growth. On the second graph, which shows the same data on a logarithmic scale, one that converts an exponential increase into a straight line, we see that, between 1971 and 2015, the number of transistors per chip multiplied nearly ten million times.
Source: Our World In Data
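The arithmetic behind that headline figure is worth checking. A short sketch (the ten-million multiplier and the 1971 to 2015 window come from the text; the doubling-period calculation is mine):

```python
import math

# If transistor counts multiplied roughly ten million times between
# 1971 and 2015, what doubling period does that growth imply?
growth = 10_000_000
years = 2015 - 1971                 # 44 years
doublings = math.log2(growth)       # about 23.3 doublings
doubling_time = years / doublings   # about 1.9 years
print(round(doubling_time, 2))      # 1.89
```

A doubling roughly every two years: almost exactly the cadence Moore predicted.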
The magnitude of this shift is almost impossible to conceptualize, but we can try to grasp it by focusing on the price of a single transistor. In 1958, Fairchild Semiconductor sold one hundred transistors to IBM for $150 apiece.14 By the 1960s, the price had fallen to $8 or so per transistor. By 1972, the year of my birth, the average cost of a transistor had fallen to fifteen cents,15 and the semiconductor industry was churning out between one hundred billion and one trillion transistors a year. By 2014, humanity produced 250 billion billion transistors annually: twenty-five times the number of stars in the Milky Way. Each second, the world's "fabs," the specialized factories that turn out transistors, spewed out eight trillion transistors.16 The cost of a transistor had dropped to a few billionths of a dollar.
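The price collapse can be framed the same way. A sketch, assuming two billionths of a dollar for the text's "a few billionths" (the $150 starting price is from the paragraph above):

```python
import math

# How many price halvings separate the $150 transistor of 1958 from
# one costing a few billionths of a dollar by the 2010s?
price_1958 = 150.0
price_modern = 2e-9  # assumed value for "a few billionths of a dollar"
halvings = math.log2(price_1958 / price_modern)
print(round(halvings))  # roughly 36 halvings
```

Thirty-six halvings across some fifty-odd years: a halving roughly every year and a half, the other face of Moore's cadence.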
Why does this matter? Because it led computers to improve at an astonishing rate. The speed at which a computer can process information is roughly proportional to the number of transistors that make up its processing unit. As chips gained transistors, they got faster. Much faster. At the same time, the chips themselves were getting cheaper.
This extraordinary drop in price is what drove the computing revolution of my teenage years, making my BBC Master so much better than my ZX81. And since then, it has transformed all of our lives again. When you pick up your smartphone, you hold a device with several chips and billions of transistors. Computers, once limited to the realms of the military and scientific research, have become quotidian. Think of the first electronic computer, executing Alan Turing's codebreaking algorithms in Bletchley Park in 1945. A decade later, there were still only 264 computers in the world, many costing tens of thousands of dollars a month to rent.17 Six decades on, there are more than five billion computers in use, including smartphones, the supercomputers in our pockets. Our kitchen cupboards, storage boxes, and attics are littered with computing devices that, at only a few years old, are already too outdated for any conceivable use.
Moore's Law is the most famous distillation of the exponential development of digital technology. Over the last half-century, computers have got inexorably faster, bringing with them untold technological, economic, and social transformations. The goal of this chapter is to explain how this shift has come about, and why it looks set to continue for the foreseeable future. It will also serve as an introduction to the defining force of our age: the rise of exponential technologies.
•
Put in the simplest terms, an exponential increase is anything that goes up by a constant proportion. A linear process is what happens...