The Evolution of Cloud Computing

How to plan for change

  1. 206 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS & Android
About this book

Cloud computing has been positioned as today's ideal IT platform. However, this has been said before of other IT architectures. How is cloud different? This book looks at what cloud promises now, and how cloud is likely to evolve as the future unfolds. Readers will be better able to ensure that decisions made now will hold them in good stead for the future and will gain a better understanding of how cloud can deliver the best outcome for their organisations.

The Evolution of Cloud Computing by Clive Longbottom is available in PDF and ePUB formats, in the Computer Science & Cloud Computing category.
PART 1
LOOKING BACK
Cloud computing in context
1 BACKGROUND
On two occasions I have been asked, ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ In one case a member of the Upper, and in the other a member of the Lower, House put this question. I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.
Charles Babbage (‘the Father of Computing’) in Passages from the Life of a Philosopher, 1864
It is interesting to see how far we have come in such a short time. Before we discuss where we are now, it can be instructive to see the weird but wonderful path that has been taken to get us to our current position. The history of electronic computing is not that long: indeed, much of it has occurred over just three or four human generations. By all means, miss out this chapter and move directly to where cloud computing really starts, in Chapter 2. However, reading this chapter will help to place into perspective how we have got here – and why that is important.
LOOKING BACKWARD TO LOOK FORWARD
That men do not learn very much from the lessons of history is the most important of all the lessons that history has to teach.
Aldous Huxley in Collected Essays, 1958
Excluding specialised electromechanical computational systems such as the German Zuse Z3, the British Enigma code-breaking Bombes and the Colossus of the Second World War, the first real fully electronic general-purpose computer is generally considered to be the US’s Electronic Numerical Integrator And Computer (ENIAC). First operated in 1946, by the time it was retired in 1955 it had grown to use 17,500 vacuum tubes, 7,200 crystal diodes, 1,500 relays, 70,000 resistors, 10,000 capacitors and around 5,000,000 hand-soldered joints, all in a space measuring 168 m². Compare this with Intel’s 2016 Broadwell-EP Xeon chip, which contains 7.2 billion transistors in a chip of 456 mm².
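To put those two sets of figures in proportion, a rough back-of-the-envelope density comparison can be made using only the numbers quoted above (illustrative only: a vacuum tube and a modern transistor are not equivalent devices, and the calculation below is not taken from the book):

```python
# Rough switching-element density comparison using the figures quoted above.
# Illustrative only: vacuum tubes and transistors are not equivalent devices.

eniac_elements = 17_500        # vacuum tubes
eniac_area_m2 = 168.0          # floor area, square metres

xeon_elements = 7_200_000_000  # 7.2 billion transistors (Broadwell-EP)
xeon_area_m2 = 456e-6          # 456 mm^2 expressed in square metres

eniac_density = eniac_elements / eniac_area_m2   # ~100 tubes per m^2
xeon_density = xeon_elements / xeon_area_m2      # ~1.6e13 transistors per m^2

print(f"ENIAC:        ~{eniac_density:,.0f} switching elements per m^2")
print(f"Broadwell-EP: ~{xeon_density:,.3e} switching elements per m^2")
print(f"Density gap:  ~{xeon_density / eniac_density:,.0f}x")
```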
Weighing in at around 27 tonnes and needing 150 kW of electricity, ENIAC could compute a military projectile’s trajectory around 2,400 times faster than a human. Its longest continuous operating period without breaking down was less than five days.1 It has to be noted, though, that a 1973 legal case found that the designers of ENIAC had seen the earlier Atanasoff-Berry Computer and that ENIAC shared certain design and functional approaches with it.
Meanwhile, in Manchester, UK, the first stored-program computer was developed and ran its first program in 1948. The Small-Scale Experimental Machine, developed at the Victoria University of Manchester, was the first such machine and led to the development of the first commercially available stored-program computer, the Ferranti Mark 1, launched in 1951.
In the 70 years since ENIAC, the use of computers has exploded. The development and wide availability of transistors drove the digital computer market strongly in the 1950s, leading to IBM’s development of its original 700 and 7000 series machines. These were soon replaced by its first ‘true’ mainframe computer, the System/360. This then created a group of mainframe competitors that, as the market stabilised, became known as ‘IBM and the BUNCH’ (IBM along with Burroughs, UNIVAC, NCR, Control Data Corporation and Honeywell). A major plus-point for mainframes was that everything was in one place – and through the use of virtualisation, as launched in 1966 on the IBM System/360-67, multiple workloads could be run on the same platform, keeping resource utilisation at 80% or higher.
However, mainframes were not suitable for all workloads, or for all budgets, and a new set of competitors began to grow up around smaller, cheaper systems that were within the reach of smaller organisations. These midicomputer vendors included companies such as DEC (Digital Equipment Corporation), Texas Instruments, Hewlett Packard (HP) and Data General, along with many others. These systems were good for single workloads: they could be tuned individually to carry a single workload, or, in many cases, several similar workloads. Utilisation levels were still reasonable but tended to be around half, or less, of those of mainframes.
The battle was on. New mass-produced integrated chip architectures, in which transistors were embedded into a single central processing unit (CPU), were built around CISC/RISC (complex and reduced instruction set computing) designs. Each of these systems used a different operating system, and compatibility between systems was completely disregarded.
Up until this point, computers were generally accessed through either completely dumb or semi-dumb terminals. These were screen-based, textually focused devices, such as IBM’s 3270 and DEC’s VT100/200, which were the prime means of interfacing with the actual program and data that were permanently tied to the mainframe or midicomputer. Although prices were falling, these machines were still not within the reach of the mass of small and medium enterprises around the globe.
THE PRICE WAR
Technological innovation has dramatically lowered the cost of computing, making it possible for large numbers of consumers to own powerful new technologies at reasonably low prices.
James Surowiecki (author of The Wisdom of Crowds) in The New Yorker, 2012
The vendors continued to try to drive computing down to a price point where they could penetrate even more of the market. It was apparent that hobbyists and techno-geeks were already embracing computing in the home, led at first by expensive and complex build-your-own kits such as the Altair 8800 and the Apple I. Commodore launched its PET (Personal Electronic Transactor) in mid-1977 but suffered from production issues, which allowed Apple to offer a pre-built computer for home use, the Apple II. This had colour graphics and expansion slots, but cost was an issue at £765/$1,300 (over £3,300/$5,000 now). However, costs were driven down: the Radio Shack TRS-80 came through a couple of months after the Apple II, managing to provide a complete system for under £350/$600, and Clive Sinclair then launched the Sinclair ZX80 in 1980 at a cost of £99.95/$230, ready built. Although the machine was low-powered, it drove the emergence of a raft of low-cost home computers, including the highly popular BBC Micro, which launched in 1981, the same year as the IBM Personal Computer, or PC.
Suddenly, computing power was outside the complete control of large organisations, and individuals had a means of writing, using and passing on programs. Although Olivetti had brought out a stand-alone desktop computer in 1965 called the Programma 101, it was not a big commercial success, and other attempts also failed due to the lack of standardisation across the machines. The fragility of the hardware and poor operating systems led to a lack of customers, who at this stage still did not fully understand the promise of computing for the masses. Companies had also attempted to bring out desktop machines, such as IBM’s SCAMP and Xerox’s Alto machine, the latter of which introduced the concept of the graphical user interface using windows, icons and a mouse with a screen pointer (which became known as the WIMP system, now commonly adopted by all major desktop operating systems). But heterogeneity was still holding everybody back; the lack of a standard to which developers could write applications meant that there was little opportunity to build and sell sufficient copies of any software to recoup the time and money invested in development and the associated costs. Unlike on the mainframe, where software licence costs could be in the millions of dollars, personal computer software had to be in the tens or hundreds of dollars, with a few programs possibly going into the thousands.
THE RISE OF THE PC
Computers in the future may … weigh only 1.5 tons.
Popular Mechanics magazine, 1949
It all changed with the IBM PC. After a set of serendipitous events, Microsoft’s founder, Bill Gates, found himself with an opportunity. IBM had been wanting to go with the existing CP/M (Control Program/Monitor, or latterly Control Program for Microcomputers) operating system for its new range of personal computers but had come up against various problems in gaining a licence to use it. Gates had been a key part of trying to broker a deal between IBM and CP/M’s owner, Digital Research, and he did not want IBM to go elsewhere. At this time, Microsoft was a vendor of programming language software, including BASIC, COBOL, FORTRAN and Pascal. Gates therefore needed a platform on which these could easily run, and CP/M was his operating system of choice. Seeing that the problems with Digital Research were threatening the deal between IBM and Microsoft, Gates took a friend’s home-built operating system (then known as QDOS – a quick and dirty operating system), combined it with work done by Seattle Computer Products on a fledgling operating system known as SCP-DOS (or 86-DOS) and took it to IBM. As part of this, Gates also got Tim Paterson to work for Microsoft; Paterson would become the prime mover behind the operating system that became widespread across personal computers.
So was born MS-DOS (used originally by IBM as PC-DOS), and the age of the standardised personal computer (PC) came about. Once PC vendors started to settle on standardised hardware, such that any software that needed to make a call to the hardware could do so across a range of different PC manufacturers’ systems, software development took off in a major way. Hardware companies such as Compaq, Dell, Eagle and Osborne brought out ‘IBM-compatible’ systems, and existing companies such as HP and Olivetti followed suit.
The impact of the PC was rapid. With software being made available to emulate the dumb terminals, users could both run programs natively on a PC and access programs being run on mainframes and midicomputers. This seemed like nirvana, until organisations began to realise that data was now being spread across multiple storage systems, some directly attached to mainframes, some loosely attached to midicomputers and some inaccessible to the central IT function, as the data was tied to the individual’s PC.
Another problem related to the fact that PCs have always been massively inefficient when it comes to resource use. The CPU is only stressed when its single workload is being run heavily. Most of the time, the CPU is running at around 5% or less utilisation. Hard disk drives have to be big enough to carry the operating system – the same operating system that every other PC in a company is probably running. Memory has to be provided to keep the user experience smooth and effective, yet most of this memory is rarely used.
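The scale of that waste can be sketched with the utilisation figures used in this chapter: roughly 5 per cent for a dedicated PC and 80 per cent or more for a virtualised, shared platform. The numbers below are illustrative assumptions rather than measurements:

```python
# Illustrative consolidation arithmetic using the utilisation figures
# discussed in this chapter (assumed for the sketch, not measured).

pcs = 1_000                 # a hypothetical estate of single-workload PCs
pc_utilisation = 0.05       # ~5% average utilisation per PC
shared_utilisation = 0.80   # ~80% utilisation on a well-virtualised platform

# Useful work delivered, expressed as "fully busy machine" equivalents.
useful_work = pcs * pc_utilisation                   # 50 machine-equivalents

# Machines of similar capacity needed to do the same work at 80% utilisation.
shared_machines = useful_work / shared_utilisation   # 62.5, i.e. ~63 machines

print(f"Useful work from {pcs} PCs: {useful_work:.0f} machine-equivalents")
print(f"Equivalent shared machines: {shared_machines:.1f} (a ~16:1 consolidation)")
```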
CHANGING TO A DISTRIBUTED MODEL
The future is already here – it’s just not very evenly distributed.
William Gibson (author of Neuromancer) on Talk of the Nation, NPR, 1999
Then the idea of distributed computing came about. As networking technology had improved, moving from IBM’s Token Ring configurations (or even the use of low-speed modems over twisted copper pairs) and DEC’s DECnet to fully standardised Ethernet connections, the possibility had arisen of different computers carrying out compute actions on different parts or types of data. This opened up the possibility of optimising the use of available resources across a whole network. Companies began to realise: with all of these underutilised compute and storage resources around an organisation, why not try to pull them all together in a manner that allowed greater efficiency?
In came client–server computing. The main business logic would be run on the larger servers in the data centre (whether these were mainframes, midicomputers or the new generation of Intel-based minicomputer servers) while the PC acted as the client, running the visual front end and any data processing that it made sense to keep on the local machine.
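As a minimal sketch of that split (a hypothetical example, not taken from the book), the server below owns the business rule while the client simply sends a request and formats the answer for the user:

```python
# Minimal client-server sketch (hypothetical example):
# the server owns the business rule, the client only handles presentation.
import socket
import threading

HOST, PORT = "127.0.0.1", 9000
ready = threading.Event()

def server() -> None:
    """Server side: applies the business rule (here, adding 20% VAT)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                      # signal that we are accepting connections
        conn, _ = srv.accept()
        with conn:
            net_price = float(conn.recv(1024).decode())
            gross_price = net_price * 1.20          # business rule lives on the server
            conn.sendall(str(gross_price).encode())

def client(net_price: float) -> None:
    """Client side: sends the request and formats the result for display."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(str(net_price).encode())
        answer = float(cli.recv(1024).decode())
        print(f"Price including VAT: {answer:.2f}")  # presentation stays on the client

threading.Thread(target=server, daemon=True).start()
ready.wait()
client(100.0)   # prints: Price including VAT: 120.00
```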
While this client–server split seemed logical and worked to a degree, it did bring its own problems. Now, the client software was distributed across tens, hundreds or thousands of different machines, many of which used different versions of operating system, device driver or even motherboard and BIOS (Basic Input/Output System). Over time, maintaining this overall estate of PCs has led to the need for complex management tools that can carry out tasks such as asset discovery, lifecycle management, and firmware and software upgrade management (including remediation actions and roll-back as required), and has also resulted in a m...

Table of contents

  1. Front Cover
  2. Half-Title Page
  3. BCS, THE CHARTERED INSTITUTE FOR IT
  4. Title Page
  5. Copyright Page
  6. Contents
  7. List of figures
  8. About the Author
  9. Foreword
  10. Acknowledgements
  11. Abbreviations
  12. Glossary
  13. Preface
  14. PART 1 LOOKING BACK: CLOUD COMPUTING IN CONTEXT
  15. PART 2 THE CLOUD NOW: CLOUD AT ITS SIMPLEST, AS IT SHOULD BE IMPLEMENTED
  16. PART 3 THE VERY NEAR FUTURE: CLOUD AT A MORE COMPLEX LEVEL, AS YOU SHOULD BE IMPLEMENTING IT
  17. PART 4 THE FUTURE OF CLOUD: CLOUD AS YOU SHOULD BE PLANNING FOR IT IN THE FURTHER-OUT FUTURE
  18. Index
  19. Back Cover