Understanding the Digital World
What You Need to Know about Computers, the Internet, Privacy, and Security

Brian Kernighan

About This Book

The basics of how computer hardware, software, and systems work, and the risks they create for our privacy and security

Computers are everywhere. Some of them are highly visible, in laptops, tablets, cell phones, and smart watches. But most are invisible, like those in appliances, cars, medical equipment, transportation systems, power grids, and weapons. We never see the myriad computers that quietly collect, share, and sometimes leak vast amounts of personal data about us. Through computers, governments and companies increasingly monitor what we do. Social networks and advertisers know far more about us than we should be comfortable with, using information we freely give them. Criminals have all-too-easy access to our data. Do we truly understand the power of computers in our world?

Understanding the Digital World explains how computer hardware, software, networks, and systems work. Topics include how computers are built and how they compute; what programming is and why it is difficult; how the Internet and the web operate; and how all of these affect our security, privacy, property, and other important social, political, and economic issues. This book also touches on fundamental ideas from computer science and some of the inherent limitations of computers. It includes numerous color illustrations, notes on sources for further exploration, and a glossary to explain technical terms and buzzwords.

Understanding the Digital World is a must-read for all who want to know more about computers and communications. It explains, precisely and carefully, not only how they operate but also how they influence our daily lives, in terms anyone can understand, no matter what their experience and knowledge of technology.

Information

Year: 2017
ISBN: 9781400884803
Part I
Hardware
“I wish to God that this computation had been executed by steam.”
Charles Babbage, 1821, quoted in Harry Wilmot Buxton, Memoir of the Life and Labours of the Late Charles Babbage, 1872.
Hardware is the solid, visible part of computing: devices and equipment that you can see and put your hands on. The history of computing devices is interesting, but I will only mention a little of it here. Some trends are worth noting, however, especially the exponential increase in how much circuitry and how many devices can be packed into a given amount of space, often for a fixed price. As digital equipment has become more powerful and cheaper, widely disparate mechanical systems have been superseded by much more uniform electronic ones.
Computing machinery has a long history, though most early computational devices were specialized, often for predicting astronomical events and positions. For example, one theory holds that Stonehenge was an astronomical observatory, though the theory remains unproven. The Antikythera mechanism, from about 100 BCE, is an astronomical computer of remarkable mechanical sophistication. Arithmetic devices like the abacus have been used for millennia, especially in Asia. The slide rule was invented in the early 1600s, not long after John Napier’s description of logarithms. I used one as an undergraduate engineer in the 1960s, but slide rules are now curiosities, replaced by calculators and computers, and my painfully acquired expertise is useless.
The most relevant precursor to today’s computers is the Jacquard loom, invented in France by Joseph Marie Jacquard around 1800. It used rectangular cards with multiple rows of holes that specified weaving patterns. The loom could thus be “programmed” to weave a wide variety of different patterns under the control of instructions provided on punched cards; changing the cards caused a different pattern to be woven. The creation of labor-saving machines for weaving led to social disruption as weavers were put out of work; the Luddite movement in England in 1811–1816 was a violent protest against mechanization. Modern computing technology has similarly led to disruption.
[Image: Modern implementation of Babbage’s Difference Engine.]
Computing in today’s sense began in England in the mid-19th century with the work of Charles Babbage. Babbage was a scientist who was interested in navigation and astronomy, both of which required tables of numeric values for computing positions. Babbage spent much of his life trying to build computing devices that would mechanize the tedious and error-prone arithmetic calculations that were performed by hand. You can sense his exasperation in the quotation above. For a variety of reasons, including alienating his financial backers, he never succeeded in his ambitions, but his designs were sound. Modern implementations of some of his machines, built with tools and materials from his time, can be seen in the Science Museum in London and the Computer History Museum in Mountain View, California (in the figure).
Babbage encouraged a young woman, Augusta Ada Byron, the daughter of the poet George Gordon, Lord Byron, and later Countess of Lovelace, in her interests in mathematics and his computational devices. Lovelace wrote detailed descriptions of how to use Babbage’s Analytical Engine (the most advanced of his planned devices) for scientific computation and speculated that machines could do non-numeric computation as well, such as composing music. “Supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.” Ada Lovelace is often called the world’s first programmer, and the Ada programming language is named in her honor.
[Image: Ada Lovelace. Detail from a portrait by Margaret Sarah Carpenter (1836).]
Herman Hollerith, working with the US Census Bureau in the late 1800s, designed and built machines that could tabulate census information far more rapidly than could be done by hand. Using ideas from the Jacquard loom, Hollerith used holes punched in paper cards to encode census data in a form that could be processed by his machines; famously, the 1880 census had taken eight years to tabulate, but with Hollerith’s punch cards and tabulating machines, the 1890 census took only one year. Hollerith founded a company that in 1924 became, through mergers and acquisitions, International Business Machines, which we know today as IBM.
Babbage’s machines were complex mechanical assemblies of gears, wheels, levers and rods. The development of electronics in the 20th century made it possible to imagine computers that did not rely on mechanical components. The first significant all-electronic machine was ENIAC, the Electronic Numerical Integrator and Computer, which was built during the 1940s at the University of Pennsylvania in Philadelphia by Presper Eckert and John Mauchly. ENIAC occupied a large room and consumed a great deal of electric power; it could do about 5,000 additions in a second. It was intended for ballistics computations and the like, but it was not completed until 1946, after World War II was over. (Parts of ENIAC are on display in the Moore School of Engineering at the University of Pennsylvania.)
Babbage saw clearly that a computing device could store its operating instructions and its data in the same form, but ENIAC did not store instructions in memory along with data; instead it was programmed by setting up connections through switches and re-cabling. The first computers that truly stored programs and data together were built in England, notably EDSAC, the Electronic Delay Storage Automatic Calculator, in 1949.
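To make the stored-program idea concrete, here is a minimal sketch in Python of a toy machine whose instructions and data occupy one and the same memory. The instruction names (LOAD, ADD, STORE, HALT) and the layout of the cells are invented purely for illustration; they are not the instruction set of EDSAC or of any other machine described here.

    # A toy stored-program machine: the program and its data share one memory.
    # Cells 0-3 hold instructions; cells 4-6 hold data. The instruction set is
    # hypothetical, chosen only to show that a program is itself data in memory.
    memory = [
        ("LOAD", 4),    # 0: copy the value in cell 4 into the accumulator
        ("ADD", 5),     # 1: add the value in cell 5 to the accumulator
        ("STORE", 6),   # 2: write the accumulator back into cell 6
        ("HALT", None), # 3: stop
        2,              # 4: data
        3,              # 5: data
        0,              # 6: the result will be placed here
    ]

    def run(memory):
        accumulator = 0
        pc = 0                      # program counter: index of the next instruction
        while True:
            op, addr = memory[pc]   # fetch the next instruction from memory
            pc += 1
            if op == "LOAD":
                accumulator = memory[addr]
            elif op == "ADD":
                accumulator += memory[addr]
            elif op == "STORE":
                memory[addr] = accumulator
            elif op == "HALT":
                return

    run(memory)
    print(memory[6])                # prints 5: the program computed 2 + 3

Because the program lives in ordinary memory cells, running a different computation means loading different values into those cells, not re-cabling anything; that is precisely the flexibility that ENIAC’s switches and cables lacked.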
Early electronic computers used vacuum tubes as computing elements. Vacuum tubes are electronic devices roughly the size and shape of a cylindrical light bulb (see Figure 1.6); they were expensive, fragile, bulky, and power hungry. With the invention of the transistor in 1947, and then of integrated circuits in 1958, the modern era of computing really began. These technologies have allowed electronic systems to become smaller, cheaper and faster.
The next three chapters describe computer hardware, focusing on the logical architecture of computing systems more than on the physical details of how they are built. The architecture has been largely unchanged for decades, while the hardware has changed to an astonishing degree. The first chapter is an overview of the components of a computer. The second chapter shows how computers represent information with bits, bytes and binary numbers. The third chapter explains how computers actually compute: how they process the bits and bytes to make things happen.
1
What’s in a Computer?
“Inasmuch as the completed device will be a general-purpose computing machine it should contain certain main organs relating to arithmetic, memory-storage, control and connection with the human operator.”
Arthur W. Burks, Herman H. Goldstine, John von Neumann, “Preliminary discussion of the logical design of an electronic computing instrument,” 1946.
Let’s begin our discussion of hardware with an overview of what’s inside a computer. We can look at a computer from at least two viewpoints: the logical or functional organization—what the pieces are, what they do and how they are connected—and the physical structure—what the pieces look like and how they are built. The goal of this chapter is to see what’s inside, learn roughly what each part does, and get some sense of what the myriad acronyms and numbers mean.
Think about your own computing devices. Many readers will have some kind of “PC,” that is, a laptop or desktop computer descended from the Personal Computer that IBM first sold in 1981, running some version of the Windows operating system from Microsoft. Others will have an Apple Macintosh that runs a version of the Mac OS X operating system. Still others might have a Chromebook or similar laptop that relies on the Internet for storage and computation. More specialized devices like smartphones, tablets and ebook readers are also powerful computers. These all look different and when you use them they feel different as well, but underneath the skin, they are fundamentally the same. We’ll talk about why.
There’s a loose analogy to cars. Functionally, cars have been the same for over a hundred years. A car has an engine that uses some kind of fuel to make the engine run and the car move. It has a steering wheel that the driver uses to control the car. There are places to store the fuel and places to store the passengers and their goods. Physically, however, cars have changed greatly over a century: they are made of different materials, and they are faster, safer, and much more reliable and comfortable. There’s a world of difference between my first car, a well-used 1959 Volkswagen Beetle, and a Ferrari, but either one will carry me and my groceries home from the store or across the country, and in that sense they are functionally the same. (For the record, I have never even sat in a Ferrari, let alone owned one, so I’m speculating about whether there’s room for the groceries.)
The same is true of computers. Logically, today’s computers are very similar to those of the 1950s, but the physical differences go far beyond the kinds of changes that have occurred with the automobile. Today’s computers are much smaller, cheaper, faster and more reliable than those of 50 years ago, literally a million times better in some properties. Such improvements are the fundamental reason why computers are so pervasive.
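A back-of-the-envelope calculation shows how steady exponential improvement produces factors that large; the 2.5-year doubling period below is an assumption chosen for illustration, not a figure taken from the text.

    # If some property of computers doubles every 2.5 years (an assumed rate,
    # used only for illustration), then over 50 years it improves by 2**20,
    # which is about a million-fold.
    years = 50
    doubling_period = 2.5                         # assumed years per doubling
    improvement = 2 ** (years / doubling_period)
    print(f"{improvement:,.0f}")                  # 1,048,576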
The distinction between the functional behavior of something and its physical properties—the difference between what it does and how it’s built or works inside—is an important idea. For computers, the “how it’s built” part changes at an amazing rate, as does how fast it runs, but the “how it does what it does” part is quite stable. This distinction between an abstract description and a concrete implementation will come up repeatedly in what follows.
I sometimes do a survey in my class in the first lecture. How many have a PC? How many have a Mac? The ratio was fairly constant at 10 to 1 in favor of PCs in the first half of the 2000s, but changed rapidly over a few years, to the point where Macs now account for well over three quarters of the computers. This is not typical of the world at large, however, where PCs dominate by a wide margin.
Is the ratio unbalanced because one is superior to the other? If so, what changed so dramatically in such a short time? I ask my students which kind is better, and for objective criteria on which to base that opinion. What led you to your choice when you bought your computer?
Naturally, price is one answer. PCs tend to be cheaper, the result of fierce competition in a marketplace with many suppliers. A wider range of hardware add-ons, more software, and more expertise are all readily available. This is an example of what economists call a network effect: the more other people use something, the more useful it will be for you, roughly in proportion to how many others there are.
On the Mac side are perceived reliability, quality, esthetics, and a sense that “things just work,” for which many consumers are willing to pay a premium.
The debate goes on, with neither side convincing the other, but it raises some good questions and helps to get people thinking about what is different between different kinds of computing devices and what is really the same.
There’s an analogous debate about phones. Almost everyone has a “smart phone” that can run programs…
