Hacking Capitalism

Johan Söderberg


About This Book

The Free and Open Source Software (FOSS) movement demonstrates how labour can self-organise production and, as is shown by the free operating system GNU/Linux, even compete with some of the world's largest firms. The book examines the hopes of such thinkers as Friedrich Schiller, Karl Marx, Herbert Marcuse and Antonio Negri in the light of the recent achievements of the hacker movement. This book is the first to examine a different kind of political activism that consists in the development of technology from below.


Information

Publisher: Routledge
Year: 2015
ISBN: 9781135916381

1
A Background of the Hacker Movement

The History of the Internet

One could argue that cyberspace emerged in 1876 with the telephone. The Internet, as we presently know it, is commonly thought of as the merger of telephony and computers. Leading on from the Internet’s heritage in telephony, Bruce Sterling light-heartedly proclaims that the first hackers were the boys employed as switchboard operators by the telephone companies. The boys played pranks while connecting customers and were soon replaced with more reliable, female personnel.1 This historical anecdote is in accordance with the portrayal of hacking as it comes across in mainstream media. Hacking is regularly reduced to an apolitical stunt of male, juvenile mischievousness, and, ultimately, it is framed as a control issue. In order to emphasise the political dimension of hacking, it is apt to outline a different ‘mythical past’ for hackers. This story too begins with the invention of the telephone. Alexander Graham Bell was not only a prominent inventor but also a forerunner in exercising his patent rights. The business model which his family built up around the patent was no less prophetic. Telephones were leased rather than sold to customers, and the monopoly service was provided through a network of franchised subsidiaries. All in all, Bell established one of the most controversial and long-standing monopolies in twentieth-century American corporate history. When the communication infrastructure was built, the Bell Telephone Company concentrated on catering to urban dwellers while rural areas fell by the wayside. The telephone had its biggest impact on life in the countryside, but it was not profitable for companies to connect distant farmhouses. Long before Bell’s patent had expired, farmers began to construct their own telephone lines, sometimes using fence wire to pass the signal from one farm to the next. The movement spread rapidly in rural areas. The first telephone census, made in 1902, counted more than 6,000 small farmer lines and cooperatives.
Over the years, the farmer lines were incorporated into the national dialling system.2 The most direct parallel to those farmers today are community activists establishing gratis, wireless Internet access in their neighbourhoods. The farmers and the hackers both demonstrate the ingeniousness of living labour in routing around constraints and appropriating tools (even when it takes fence wire) for its own purposes. This interpretation is rather consistent with the original meaning of the term hacking. The word was first used by computer scientists in the 1950s to express approval of an ingenious and playful solution to a technical problem. These privileged few enjoyed a great amount of autonomy to do research and ‘hack’ while having access to very expensive equipment. After the end of the Cold War, when computer equipment got cheaper while researchers lost some of their former autonomy, the joy of playing with computers was picked up by groups outside the institutions, by people calling themselves hackers. Though this book is mainly about the latter group, the story begins in the science laboratory. Readers who are familiar with the background of the Internet can skip ahead to the next heading.
If any time and place could be pinpointed as the springboard for the merger of computing and telephony, later to become the Internet, John Naughton suggests that it was the thriving experimental milieu at the Massachusetts Institute of Technology (MIT) before and around the two World Wars.3 When Vannevar Bush completed the first Differential Analyser in 1928, it was a massive assembly of gears and pressure cylinders. The machine was used for advanced equations in engineering projects and for calculating ballistic trajectories for the military. To build such a computer represented a huge investment affordable only to the biggest institutions. Despite the immense costs, the computer could only perform a limited set of operations, and each calculation had to be hardwired into the machine. To give a significantly different instruction, or to correct a bug, meant physically replacing hardware components. The cost efficiency of computing resources would be vastly improved if computers were made more flexible. This required an architecture where the physical components were given an open-ended function so that more instructions could be provided in software code. Norbert Wiener, the founder of cybernetics, sketched out such a digital computer, and his ideas were implemented towards the end of the Second World War. MIT scientists hoped for a deepened symbiosis between man and machine. By shortening the feedback loops between the computer and the user, they envisioned a computer that would function as a complement to the human brain. The computer could take care of intricate and monotonous calculations and leave the humans free to engage in innovative and associative exploration. This dream was held back by the computer design of the time. Batch-run computers were provided with a set of instructions which had to be prepared in advance. The computer processed the instructions in one chunk without allowing for any human interruptions.
If something went wrong the researcher had no choice but to rewrite the program and start all over from square one.
A solution to this awkwardness was found in an alternative design, the time-sharing computer. The selling point of time-sharing computers was that several users could share the capacity of a single machine, economising on a very expensive resource: computer calculation time. Later on, the principle of time-sharing was extended beyond the confines of the computer box, from a number of users in one place sharing a single computer to many users in a wide area pooling and sharing their combined computer resources. This idea occurred to Bob Taylor, who presided over the Advanced Research Projects Agency (ARPA). The organisation had been set up in the aftermath of the launch of Sputnik, as part of an American policy to catch up with the Soviet Union in the race for technological supremacy. Bob Taylor realised that ARPA was in possession of a cacophony of computer terminals unable to exchange information with each other, and that internal communication would be eased if these computers were linked together. His ambitious plan was stalled by the fact that the terminals had not been manufactured with the intent to speak to one another. Furthermore, the complexity of the system would grow exponentially with every new computer added to the cluster. To overcome these two problems, of incompatibility and complexity, ARPA researchers placed nodes in between the terminals. The nodes consisted of small computers that served as network administrators, receiving and sending data, checking for errors, and verifying that messages arrived at the prescribed destinations. The nodes bridged the incompatibilities of end-user terminals in a decentralised fashion. By dispersing the intelligence to the edges of the network, rather than collecting information on the whole system in a central server and guiding every intricate detail of the network from this centre, the problem of complexity was somewhat reduced. This end-to-end solution still remains basic to the architecture of the Internet.
The common notion that the Internet originated in the Pentagon is partly misleading. It is correct, though, that a theory of a networked mode of communication had been devised prior to ARPA’s undertaking, in an organisation affiliated with the Pentagon. The individual behind this feat was Paul Baran and his employer was RAND (Research ANd Development).4 Nuclear holocaust was the policy area of RAND’s strategists. A major concern of theirs was the game-theoretical benefit of a nuclear first strike. A first strike, or an accident for that matter, could sever the connections running between the headquarters and the missile silos. The mere possibility of such an outcome created uncertainties and jeopardised the MAD (Mutual Assured Destruction) doctrine. A resilient communication system was therefore crucial to guarantee retaliation capacity. Vulnerability was located in the single line which carried the message ‘fire’, or ‘hold fire’. Hence, the model envisioned by Paul Baran distanced itself as far as possible from a centralised communication infrastructure. In a distributed network all the nodes are linked with their neighbours and there are several possible routes connecting any two destinations. Baran had the plan sketched out in 1962, but he ran into opposition from the phone company. AT&T was entrenched in analogue telecommunication technology and refused to build the infrastructure that Baran’s system required. In analogue communication systems the sound waves are faithfully reproduced in a single stream running through the phone line. In a digital communication system, in contrast, the signal is translated into ones and zeroes and sent as a number of packets. Once the data arrives at the destination, the packets are reassembled so that they appear to the receiver as a continuous stream of sound.
Baran’s idea demanded a digital communication system in which the signal was divided up into packets and each packet could individually take the best available route. If a channel was blocked, a packet could take a different route. Because of the resistance from AT&T, Paul Baran’s plans were left in a drawer, and he did not learn about the work at ARPA until much later.5
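The principle Baran envisioned can be illustrated with a toy sketch. The function names and fixed packet size below are illustrative inventions, not part of any historical protocol: a message is cut into numbered packets, the packets may arrive in any order (here simulated by shuffling), and the receiver restores the original message from the sequence numbers.

```python
import random

def send(message: str, packet_size: int = 4):
    """Split a message into (sequence number, data) packets."""
    return [(seq, message[i:i + packet_size])
            for seq, i in enumerate(range(0, len(message), packet_size))]

def network(packets):
    """Simulate independent routing: packets may arrive in any order."""
    routed = packets[:]
    random.shuffle(routed)
    return routed

def receive(packets) -> str:
    """Reassemble the message by sorting on sequence numbers."""
    return "".join(data for _, data in sorted(packets))

message = "fire / hold fire"
assert receive(network(send(message))) == message
```

Whatever path each packet takes, the sequence numbers let the receiver reconstruct the stream, which is why no single blocked line can sever the connection.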
Towards the end of the 1960s, ARPA built the first computer-to-computer connection and named it ARPANET. It linked together a small selection of universities and military bases. For a long time it remained an exclusive system confined to the top academic and military echelons. Over the years, however, other networks began to crop up in the US and elsewhere. The Télétel service in France is the best-known example, though less successful trials were made in England and Germany too. It was implemented by the French telephone company in 1982 after many years of testing. The terminals, known as Minitel, were handed out for free with the intention that they would replace printed white-pages directories. Instead, the users quickly found out how to communicate with each other through their Minitels. Most of the traffic was driven by conversations between users and by erotic bulletin boards, the so-called ‘messageries roses’.6 The Internet, the network of networks, took shape as these diverging net-clusters were joined together. To cope with a growing diversity of standards, Robert Kahn and Vint Cerf designed a system of gateways in the mid-1970s. The Transmission Control Protocol (TCP) and the Internet Protocol (IP) link together, and carry the traffic over, the many networks of the Internet.
The increased flexibility of computer hardware has allowed important advances in the utilisation of computers to be made solely at the level of software code. This in turn implies lower costs to innovate, and thus less dependency on government or business support. UNIX is a landmark in the history of software development, but it is also archetypical in that it partially emerged to the side of institutions.7 The two enthusiasts responsible for UNIX, Ken Thompson and Dennis Ritchie, had been working on an operating system for Bell Laboratories, a subsidiary of AT&T, for some time. They had become disheartened and started their own small-scale experiment to build an operating system. The hobby project was taken up in part to the side of, in part under the wings of, the American telephone company. UNIX rapidly grew in popularity and became so widely used by AT&T staff that the company eventually endorsed it. Moreover, it also became known among users outside the phone company. An anti-trust settlement against AT&T in 1956 was of utmost significance for the success of UNIX. As part of the settlement the phone company had agreed not to enter the computer business. AT&T was thus barred from selling UNIX or charging a higher tariff on computer transmissions running over its phone lines. Consequently, UNIX could be freely distributed and became widely popular in universities and in the private sector. John Naughton’s explanation for the success of the operating system is instructive: “The main reason was that it was the only powerful operating system which could run on the kinds of inexpensive minicomputers university departments could afford.
Because the source code was included, and the AT&T licence included the right to alter the source and share changes with other licensees, academics could tamper with it at will, tailoring it to the peculiar requirements of their sites.”8 It is logical that UNIX was designed to run on relatively inexpensive computers, since for the most part it was developed on such computers, by users with limited access to large-scale facilities. The same pattern was repeated when AT&T’s original UNIX program metamorphosed into versions of BSD UNIX, and later inspired GNU/Linux. This time around the code was written on computers that were just about affordable to individuals. Using personal computers to write software must have felt like an impediment at the time. And yet the accessibility of small computers was the key factor in the eventual success of operating systems like BSD UNIX and GNU/Linux. The point should be stressed, since it highlights two important lessons. First, the success of this technology often stands in inverse relationship to the size of the fixed capital (i.e. machinery and facilities) invested in it. Second, as a consequence, much computer technology has been advanced by enthusiasts who were, at least partially, independent of institutions and corporations. Users joined forces in a collaborative effort to improve UNIX, fix bugs, make extensions, and share the results with each other. The environment of sharing and mutual support was spurred on in the early 1980s by the invention of a protocol that let UNIX computers share files with each other over the phone line. It facilitated community building and fostered values that foreshadowed later developments. With the option to connect computers over the telephone infrastructure, a cheaper and more accessible communication channel than ARPANET had been created. The stage was set for hackers to enter.

The History of the Computer Underground

It is one of history’s ironies that the roots of the Internet can be traced to two sources: U.S. Cold War institutions and the anti-war movement. The hacker community grew out of American universities in the 1960s. Bruce Sterling attributes the potent ideological hotbed of the computer underground to a side effect of the Vietnam War. Many youngsters then chose to enter college to avoid being sent into battle. A disposition for civil disobedience was reinforced by the communicating vessels between university drop-outs, peace activists, and hippies. The radicalism of students mixed with the academic kudos of researchers.9 In the following decade, the mixture of hippie lifestyle and technological know-how was adopted by the so-called Phone Phreaks, a subculture specialised in tapping phone lines and high-tech petty theft. Political self-awareness within the movement was propagated in the pioneering newsletter Youth International Party Line, started in 1971 and edited by Abbie Hoffman. He saw the liberation of the means of communication as the first step towards a mass revolt. Two years later the newsletter was superseded by the Technological American Party. The new publication jettisoned most of the political ambitions of its predecessor and concentrated on circulating technical know-how. The forking of the fanzine epitomises two polarities within the movement, still in force today. On one side are activists motivated by ideology; on the other side are ‘techies’ who find satisfaction in mastering technology. Some techies have come to look unkindly upon the efforts of activists to politicise the movement. Techies tend to perceive ‘hacktivists’ as latecomers and outsiders claiming the hobby for their own purposes. The truth of the matter is that the subculture has always been deeply rooted in both traditions. Indeed, the hacker movement was more or less forked out of the New Left.10 De-politicisation came later, mirroring general trends in society.
In the aftermath of the clashes of 1968, the line of thought within the hippie and environmentalist movements changed. Rather than engaging in head-on confrontations with the system and the police, hopes were placed in the building of an alternative system. The leading thought was to develop small-is-beautiful, bottom-up, decentralised technology. The personal computer fits into this picture. A central figure in advocating such an approach, with a foothold in both the environmental movement and the embryonic hacker movement, was Stewart Brand, publisher of the Whole Earth Catalog. Another key name in the philosophy of ‘appropriate technology’ was the industrial designer Victor Papanek. They denounced mass production in the same breath as they provided blueprints for Do-It-Yourself technologies. The underlying assumption was that a ‘better mousetrap’ would win out against faulty industrial products on the merits of its technical qualities. Hackers show no less confidence in the superiority of Free/Open Source Software (FOSS) code and are assured of their victory over flawed, proprietary code. The historian of technology Langdon Winner was more sceptical when he wrote, a few years after the Reagan administration had quashed the high spirits of the ‘appropriate technology’ campaigners.11 The ease with which the government purged the programmes for appropriate technology is a sobering lesson in the raw power of the state. Winner complained that the thrust of the hippie and environmental movements had quickly been deflected inwards, into the consumption of bohemian lifestyles and mysticism. His pessimistic account of events is understandable, but it must be amended by the fact that he was unaware of the sprouting activity of phone phreaks and hackers at the time he wrote. The ideals of the appropriate technology movement jumped ship and thrived long after the zenith of the hippies and the environmentalists.
This could be a precious reminder in a possible future when the hacker movement has faded and its heirs have not yet announced themselves. But it is also possible that the hacker movement will prove itself more resilient than the New Left. A principal difference, though not the only one, is the motivational force behind hacking. The advocates of appropriate technology were led to experiment with Do-It-Yourself techniques as a deduction from their politics. Hackers, on the other hand, write code primarily for the sake of it, and politics flows from this playfulness.
Steven Levy writes about the hardware hackers gathering at the Homebrew Computer Club in the mid-1970s. His retrospective gives an account of two partially coinciding, partially inconsistent sentiments expressed by the people involved. They were drawn together by the excitement of tinkering with electronics. Even so, the pleasure they experienced from hacking was tied up with a political vision and messianic hopes. By constructing a cheap and available computer able to ‘run on the kitchen table’, they set out to liberate computing from elite universities and from corporate and military headquarters. But persons with overtly political motives found themselves out of place. The initiator of the Homebrew Computer Club, Fred Moore, eventually dropped out, expressing disappointment with the lack of political awareness among club members. Reflecting on his departure, the activist and long-time moderator of the Homebrew Computer Club, Lee Felsenstein, suggested that Fred Moore had got his politics wrong. The politics of the Homebrew Computer Club was the “propaganda of the deed” rather than “gestures of protest”.12 Indeed, what the hardware hackers accomplished from playing with electronic junk is impressive. The microprocessor had recently been invented by Intel, and the company expected the item to be used in things like traffic-light controllers. Hardware hackers thought differently. They combined Intel’s microprocessor with spare parts and built small computers. Ed Roberts’s Altair marked a watershed in 1975. Altair was not the first hacker computer but it was the first computer bu...
