Von Neumann Architecture
Von Neumann Architecture is a computer design concept that separates the processing unit from memory while storing data and instructions in the same memory space. The architecture features a central processing unit (CPU) that fetches, decodes, and executes instructions and reads and writes data held in that shared memory. It forms the basis for most modern computer systems.
Written by Perlego with AI-assistance
Key excerpts on "Von Neumann Architecture"
- eBook - PDF
- Alvin Albuero De Luna(Author)
- 2023(Publication Date)
- Arcler Press(Publisher)
2 CLASSIFICATION OF COMPUTER ARCHITECTURE
Contents: 2.1. Introduction; 2.2. Von-Neumann Architecture; 2.3. Harvard Architecture; 2.4. Instruction Set Architecture; 2.5. Microarchitecture; 2.6. System Design; References
2.1. INTRODUCTION
The architecture of a computer system comprises the mathematical laws, methodologies, and procedures that describe how computer systems are constructed and how they function. An architecture is designed to meet the demands of the consumer while also accommodating economic and budgetary restrictions. Previously, an architecture was designed on paper and then implemented as hardware (Hwang and Jotwani, 1993). Once the transistor-transistor logic has been built in, the architecture is created, tested, and produced in hardware form. Computer architecture may be characterized by the productivity, efficiency, dependability, and cost of a computer system. It is concerned with technological standards for hardware and software, covering the CPU, memory, input/output devices, and the channels of communication that link them (Loo, 2007; Shaout and Eldos, 2003). The several types of computer architecture are discussed in the following sections.
2.2. VON-NEUMANN ARCHITECTURE
John von Neumann is the author of this design proposal. Modern computers, such as those we use today, are built on the von-Neumann architecture.
- eBook - PDF
Low-Level Programming
C, Assembly, and Program Execution on Intel® 64 Architecture
- Igor Zhirkov(Author)
- 2017(Publication Date)
- Apress(Publisher)
It is a relatively high-level description compared to a calculation model, which does not omit even a slight detail. The Von Neumann Architecture had two crucial advantages: it was robust (in a world where electronic components were highly unstable and short-lived) and easy to program. In short, this is a computer consisting of one processor and one memory bank, connected to a common bus. A central processing unit (CPU) can execute instructions, fetched from memory by a control unit. The arithmetic logic unit (ALU) performs the needed computations. The memory also stores data. See Figures 1-1 and 1-2. Following are the key features of this architecture:
- Memory stores only bits (a unit of information, a value equal to 0 or 1).
- Memory stores both encoded instructions and data to operate on. There are no means to distinguish data from code: both are in fact bit strings.
- Memory is organized into cells, which are labeled with their respective indices in a natural way (e.g., cell #43 follows cell #42). The indices start at 0. Cell size may vary (John von Neumann thought that each bit should have its address); modern computers take one byte (eight bits) as a memory cell size. So, the 0-th byte holds the first eight bits of the memory, etc.
- The program consists of instructions that are fetched one after another. Their execution is sequential unless a special jump instruction is executed.
Assembly language for a chosen processor is a programming language consisting of mnemonics for each possible binary encoded instruction (machine code). It makes programming in machine code much easier, because the programmer then does not have to memorize the binary encoding of instructions, only their names and parameters. Note that instructions can have parameters of different sizes and formats. An architecture does not always define a precise instruction set, unlike a model of computation.
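The stored-program behavior described above can be made concrete with a small simulation. The following is a minimal sketch, not taken from the book: a single Python list stands in for the one memory bank, holding both an invented instruction encoding and the data it operates on, while a loop performs the sequential fetch-and-execute cycle.

```python
# Minimal von Neumann machine sketch (illustrative only; the instruction
# encoding below is invented for this example, not a real ISA).
# A single list serves as the one memory bank: it holds both the encoded
# instructions and the data they operate on, addressed by index from 0.

MEMORY = [
    # addresses 0-7: program (opcode, operand pairs flattened into cells)
    1, 9,    # LOAD  the value in cell 9 into the accumulator
    2, 10,   # ADD   the value in cell 10 to the accumulator
    3, 11,   # STORE the accumulator into cell 11
    0, 0,    # HALT  (operand unused)
    0,       # address 8:  padding cell
    5,       # address 9:  data operand
    7,       # address 10: data operand
    0,       # address 11: result goes here
]

def run(memory):
    pc = 0           # program counter: index of the next cell to fetch
    acc = 0          # accumulator register
    while True:
        opcode = memory[pc]        # fetch: instructions come from the same memory as data
        operand = memory[pc + 1]
        pc += 2                    # execution is sequential unless a jump changes pc
        if opcode == 0:            # HALT
            return acc
        elif opcode == 1:          # LOAD addr
            acc = memory[operand]
        elif opcode == 2:          # ADD addr
            acc += memory[operand]
        elif opcode == 3:          # STORE addr
            memory[operand] = acc
        elif opcode == 4:          # JUMP addr
            pc = operand

print(run(MEMORY))   # prints 12; MEMORY[11] now also holds the result
```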
- eBook - ePub
Modern Computer Architecture and Organization
Learn x86, ARM, and RISC-V architectures and the design of smartphones, PCs, and cloud servers
- Jim Ledin(Author)
- 2020(Publication Date)
- Packt Publishing(Publisher)
The Von Neumann Architecture was introduced by John von Neumann in 1945. This processor configuration consists of a control unit, an arithmetic logic unit, a register set, and a memory region containing program instructions and data. The key feature distinguishing the Von Neumann Architecture from the Harvard architecture is the use of a single area of memory for program instructions and the data acted upon by those instructions. It is conceptually straightforward for programmers, and relatively easy for circuit designers, to locate all of the code and data a program requires in a single memory region.
This diagram shows the elements of the Von Neumann Architecture:
Figure 7.1: The Von Neumann Architecture
Although the single-memory architectural approach simplified the design and construction of early generations of processors and computers, the use of shared program and data memory has presented some significant challenges related to system performance and, in recent years, security. Some of the more significant issues were as follows:
- The von Neumann bottleneck: Using a single interface between the processor and the main memory for instruction and data access frequently requires multiple memory cycles to retrieve a single instruction and to access the data it requires. In the case of an immediate value stored next to its instruction opcode, there might be little or no bottleneck penalty because, at least in some cases, the immediate value is loaded along with the opcode in a single memory access. Most programs, however, spend much of their time working with data stored in memory allocated separately from the program instructions. In this situation, multiple memory access operations are required to retrieve the opcode and any required data items.
The use of cache memories for program instructions and data, discussed in detail in Chapter 8, Performance-Enhancing Techniques
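To make the bottleneck discussion concrete, the following back-of-envelope sketch tallies memory-bus accesses for an assumed instruction mix; the fractions and per-instruction access counts are illustrative guesses, not figures from the book.

```python
# Rough tally of memory-bus accesses per instruction (assumed mix, for
# illustration only):
# - an instruction with an immediate operand may need only the one access
#   that fetches the opcode together with the immediate value;
# - an instruction that reads a data operand from memory needs the opcode
#   fetch plus a separate data access;
# - a store needs the opcode fetch plus a data write.
instruction_mix = [
    ("immediate operand",    0.30, 1),
    ("memory read operand",  0.50, 2),
    ("memory write operand", 0.20, 2),
]

average = sum(frac * accesses for _, frac, accesses in instruction_mix)
print(f"average bus accesses per instruction: {average:.2f}")
# With a single shared bus, every one of these accesses is serialized,
# which is the von Neumann bottleneck; split instruction/data paths or
# caches let instruction and data traffic overlap instead.
```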
- eBook - ePub
Microprocessor 1
Prolegomena - Calculation and Storage Functions - Models of Computation and Computer Architecture
- Philippe Darche(Author)
- 2020(Publication Date)
- Wiley-ISTE(Publisher)
3 Computation Model and Architecture: Illustration with the von Neumann Approach
A user working today in front of his or her microcomputer workstation hardly suspects that he or she is in front of a machine whose operation is governed by principles described by the mathematician John von Neumann in the 1940s (Ceruzzi 2000). This remains the case when modern terms such as “superscalar architectures” and “multicore” or accelerating mechanisms like the pipeline, concepts discussed in the forthcoming Volume 2, are mentioned. Before studying the functioning of the microprocessor, we need to clarify the theoretical concepts of the computational model and computer architecture. The so-called von Neumann approach, which still governs the functioning of computers internally despite all the progress made since it was developed, is described by presenting the basic execution diagram for an instruction. This architecture has given rise to variations, which are also presented. Finally, the programmer needs an abstraction of the machine in order to simplify his or her work, which is called the “Instruction Set Architecture” (ISA). It is described before the basic definitions for this book, which complete this chapter.
NOTE.– In this book, the term CU for “Central Unit” (or CPU for Central Processing Unit) is taken from the original word, that is, the unit which performs the computations, and not from the microcomputer itself. It most often describes the microprocessor, also referred to as an MPU (MicroProcessor Unit) or μP for short, which is a modern integrated form of the CU. We are also adapting the level of discourse to the component’s scale. However, we do not include main memory, as do some definitions, which generally rely on the vocabulary of mainframes from the 1960s.
3.1. Basic concepts
Definitions of the fundamental concepts of the Model of Computation (MoC) and of architecture have evolved over time and vary from author to author (Reddi and Feustel 1976, Baer 1984). The same is true for associated terms such as “implementation” or “achievement”. Before presenting them, the concepts of program, control and data mechanisms and flows must be clarified.
- eBook - PDF
Rethinking Cognitive Computation
Turing and the Science of the Mind
- Andy Wells(Author)
- 2017(Publication Date)
- Red Globe Press(Publisher)
computers still follows the principles worked out by John von Neumann and his colleagues in the late 1940s and early 1950s. This chapter discusses the hardware ‘architecture’ of von Neumann computers and Chapter 15 discusses software. The term ‘architecture’ is most commonly understood in its application to buildings, but in recent years it has also frequently been applied to both the software and hardware of computers. The hardware architecture of a computer is the set of parts it contains and the connections between them. To understand why Von Neumann Architecture has the characteristics it has, it is valuable to know a little about the ENIAC, the machine whose construction and operating principles stimulated von Neumann’s thinking.
The ENIAC
In 1943 the Moore School of Electrical Engineering at the University of Pennsylvania was commissioned to construct an electronic computer for the United States Army. The machine was formally accepted by the US Government in 1946 and operated successfully until it was retired to the Smithsonian museum in 1955. The ENIAC (Electronic Numerical Integrator and Computer) was, by modern standards, a physical giant but a computational midget. It was 100 feet long, 10 feet high, 3 feet deep and weighed 30 tons. In operation it consumed 140 kilowatts of power. The clock functioned at 100 kilohertz and the machine performed some 330 multiplications per second. For all its impressive size, it had storage for only 20 ten-digit decimal numbers. Nonetheless, it represented a huge step forward both in the sophistication and reliability of its engineering and in its speed of operation, which was some 500 times faster than its closest electro-mechanical rival, the IBM Automatic Sequence Controlled Calculator (Goldstine 1972, p. 117). The ENIAC owed its speed to the use of electronic rather than electro-mechanical components, which was both controversial and risky at the time.
- eBook - PDF
- Aharon Yadin(Author)
- 2016(Publication Date)
- Chapman and Hall/CRC(Publisher)
This type of modularity is currently used in many other industries, but for computers it was made possible due to von Neumann’s ideas. Prior to von Neumann, the computer’s logic was tightly integrated, and each change introduced affected other parts as well. To borrow an analogy from a different industry, it is as if a driver who replaced a car’s tires also had to modify the engine. When the modularity that stems from von Neumann’s ideas was introduced to computer systems, it reduced complexity and supported fast technological developments. For example, increasing the memory capacity and capabilities was easily implemented without any required change to other system components. One of the most important developments was to increase the capacity of computer memory, which manifested itself in the ability to better utilize the system. This improved utilization was achieved by enabling the central processing unit (CPU) to run more than one program at a time. This way, during the input and output (I/O) activities of one program, the CPU did not have to sit idle waiting for I/O operations to complete. As several programs can reside in the memory, even if one or more are waiting for input or output, the CPU can still be kept busy. One of the consequences of dividing the system into distinct components was the need to design a common mechanism for data transfers between the different components. The mechanism, which will be explained later in this book, is called a bus. As it relates to memory activities, the bus is responsible for transferring instructions and data from the memory to the CPU as well as for transferring some of the data back to be stored in the memory. It should be noted that when there is just one bus, as is the case with the Von Neumann Architecture, the instructions and data have to compete in accessing the bus.
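The utilization gain from keeping several programs resident in memory can be approximated with a standard rule of thumb (an assumption introduced here, not a formula from the excerpt): if each program spends a fraction p of its time waiting on I/O, the CPU is idle only when all n resident programs happen to be waiting at once.

```python
# Rough model of why multiprogramming keeps the CPU busy (a common first
# approximation, assumed here for illustration): with n resident programs
# that each wait on I/O a fraction p of the time, the CPU is idle only when
# all of them wait simultaneously, so utilization is roughly 1 - p**n.

def cpu_utilization(io_wait_fraction: float, resident_programs: int) -> float:
    return 1.0 - io_wait_fraction ** resident_programs

for n in (1, 2, 4, 8):
    u = cpu_utilization(0.8, n)   # programs that wait on I/O 80% of the time
    print(f"{n} resident program(s): ~{u:.0%} CPU utilization")
# Larger memory -> more resident programs -> higher utilization, which is
# the payoff of the memory-capacity growth the excerpt describes.
```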
- eBook - ePub
Computer Architecture and Security
Fundamentals of Designing Secure Computer Systems
- Shuangbao Paul Wang, Robert S. Ledley(Authors)
- 2012(Publication Date)
- Wiley(Publisher)
Hewlett-Packard (HP-Compaq, 2002) has been working on a new type of Secure Platform Architecture (SPA). It is a set of software interfaces built on top of HP's Itanium-based product line. SPA will enable operating systems and device drivers to run as unprivileged tasks and will allow services to be authenticated and identified. The problem with the SPA is that, as the company describes, it relies on a set of software interfaces to authenticate and identify tasks. Once the system is compromised, SPA will not be able to function well.
Sean Smith and Steve Weingart (Smith and Weingart, 1999) developed a prototype using a high-performance, programmable secure coprocessor. It is a type of software, hardware, and cryptographic architecture (Suh et al., 2005). This architecture addressed some issues, especially how to secure programs running on coprocessors and how to handle system recovery. In terms of securing information and data, there is a lot of work that needs to be done.
Recently, MIT researchers proposed secure processors that enable new applications by ensuring private and authentic program execution even in the face of physical attack.
10.2 Single-Bus View of Neumann Architecture
Neumann architecture is the foundation of modern computer systems. It is a single-bus, stored-program computer architecture that consists of a CPU, memory, I/O, and storage. The CPU is composed of a control unit (CU) and an arithmetic logical unit (ALU) (von Neumann, 1945). Almost all modern computers are Neumann computers, characterized by a single system bus (control, data, address) with all circuits attached to it.
10.2.1 John von Neumann Computer Architecture
John von Neumann wrote “First Draft of a Report on the EDVAC”, in which he outlined the architecture of a stored-program computer. He proposed a concept that has characterized mainstream computer architecture since 1945. Figure 10.1 shows the Neumann model.
Figure 10.1 Block diagram of John von Neumann's computer architecture model
A “system bus” representation of the Neumann model is shown in Figure 10.2
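The single system bus described in this excerpt can be pictured as one shared set of address, data, and control lines that every transfer must use, instruction fetches and data accesses alike. The following is an illustrative sketch under that reading; the class and method names are assumptions made for the example, not code from the book.

```python
# Illustrative sketch of the single-system-bus view (names and structure
# are assumptions for this example). Every transfer - instruction fetch,
# data read, data write - drives the same address, data, and control lines,
# so transfers cannot overlap and are counted one bus cycle at a time.

class SystemBus:
    def __init__(self, memory_size: int = 16):
        self.memory = [0] * memory_size
        self.transactions = 0          # every use of the bus is counted

    def cycle(self, control: str, address: int, data: int = 0) -> int:
        self.transactions += 1         # one bus cycle per transfer
        if control == "READ":          # memory drives the data lines
            return self.memory[address]
        elif control == "WRITE":       # CPU drives the data lines
            self.memory[address] = data
            return data
        raise ValueError(f"unknown control signal: {control}")

bus = SystemBus()
bus.cycle("WRITE", 4, 99)              # e.g. the control unit storing a result
opcode = bus.cycle("READ", 0)          # an instruction fetch uses the same bus
value = bus.cycle("READ", 4)           # ...as a data read does
print(opcode, value, bus.transactions) # 0 99 3
```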
- eBook - PDF
Computer Architecture
Fundamentals and Principles of Computer Design, Second Edition
- Joseph D. Dumas II(Author)
- 2016(Publication Date)
- CRC Press(Publisher)
chapter one Introduction to computer architecture
“Computer architecture” is not the use of computers to design buildings (although that is one of many useful applications of computers). Rather, computer architecture is the design of computer systems, including all of their major subsystems: the central processing unit (CPU), the memory system, and the input/output (I/O) system. In this introductory chapter, we take a brief look at the history of computers and consider some general topics applicable to the study of computer architectures. In subsequent chapters, we examine in more detail the function and design of specific parts of a typical modern computer system. If your goal is to be a designer of computer systems, this book provides an essential introduction to general design principles that can be expanded upon with more advanced study of particular topics. If (as is perhaps more likely) your career path involves programming, systems analysis or administration, technical management, or some other position in the computer or information technology field, this book provides you with the knowledge required to understand, compare, specify, select, and get the best performance out of computer systems for years to come. No one can be a true computer professional without at least a basic understanding of computer architecture concepts. So let’s get underway!
1.1 What is computer architecture?
Computer architecture is the design of computer systems, including all major subsystems: the CPU and the memory and I/O systems. All of these parts play a major role in the operation and performance of the overall system, so we will spend some time studying each.
- Hesham El-Rewini, Mostafa Abd-El-Barr(Authors)
- 2005(Publication Date)
- Wiley-Interscience(Publisher)
Based on the interface between different levels of the system, a number of computer architectures can be defined. The interface between the application programs and a high-level language is referred to as a language architecture. The instruction set architecture defines the interface between the basic machine instruction set and the runtime and I/O control. A different definition of computer architecture is built on four basic viewpoints. These are the structure, the organization, the implementation, and the performance. In this definition, the structure defines the interconnection of various hardware components, the organization defines the dynamic interplay and management of the various components, the implementation defines the detailed design of hardware components, and the performance specifies the behavior of the computer system. Architectural development and styles are covered in Section 1.2. A number of technological developments are presented in Section 1.3. Our discussion in this chapter concludes with a detailed coverage of CPU performance measures.
1.1. HISTORICAL BACKGROUND
In this section, we would like to provide a historical background on the evolution of cornerstone ideas in the computing industry. We should emphasize at the outset that the effort to build computers has not originated at one single place. There is every reason for us to believe that attempts to build the first computer existed in different geographically distributed places. We also firmly believe that building a computer requires teamwork. Therefore, when some people attribute a machine to the name of a single researcher, what they actually mean is that such a researcher may have led the team who introduced the machine.
- No longer available
- (Author)
- 2014(Publication Date)
- College Publishing House(Publisher)
This has allowed synchronous microprocessors to have clock rates ranging from tens of megahertz to several gigahertz. Additionally, as the ability to construct exceedingly small transistors on an IC has increased, the complexity and number of transistors in a single CPU has increased dramatically. This widely observed trend is described by Moore's law, which has proven to be a fairly accurate predictor of the growth of CPU (and other IC) complexity to date. While the complexity, size, construction, and general form of CPUs have changed drastically over the past sixty years, it is notable that the basic design and function has not changed much at all. Almost all common CPUs today can be very accurately described as von Neumann stored-program machines. As the aforementioned Moore's law continues to hold true, concerns have arisen about the limits of integrated circuit transistor technology. Extreme miniaturization of electronic gates is causing the effects of phenomena like electromigration and subthreshold leakage to become much more significant. These newer concerns are among the many factors causing researchers to investigate new methods of computing such as the quantum computer, as well as to expand the usage of parallelism and other methods that extend the usefulness of the classical von Neumann model.
Operation
The fundamental operation of most CPUs, regardless of the physical form they take, is to execute a sequence of stored instructions called a program. The program is represented by a series of numbers that are kept in some kind of computer memory. There are four steps that nearly all CPUs use in their operation: fetch, decode, execute, and writeback. The first step, fetch, involves retrieving an instruction (which is represented by a number or sequence of numbers) from program memory.
- eBook - PDF
- Architecture Technology Corporation(Author)
- 2013(Publication Date)
- Elsevier Science(Publisher)
Computer Architecture Technology Trends
2. Machine Architecture
The description of a computing machine's architecture describes organizational aspects which allow us to describe its operation and compare it to other machines. An architecture consists of a machine organization and an instruction set; it resolves the division of labor between hardware and software. The classification of this architecture is useful for three primary reasons. First, it helps us to understand what has already been achieved. Second, it reveals possible configurations which may have not originally occurred to designers. Third, it allows useful models of performance to be built and tested. In very simple terms, a computing machine can be thought of as applying a sequence (stream) of instructions to a sequence (stream) of data. To achieve better performance, it is necessary at some point to find ways to do more than one thing at a time within this machine. In order to define and better understand the parallelism possible within computing machines, Flynn categorized machine organization into a generally accepted set based on instruction and data stream multiplicity. Flynn allows for both single and multiple data and instruction streams, giving rise to four categories of architectures (Figure 1):
- Single Instruction Stream, Single Data Stream (SISD): uniprocessor
- Multiple Instruction Stream, Single Data Stream (MISD)
- Single Instruction Stream, Multiple Data Stream (SIMD): parallel processor
- Multiple Instruction Stream, Multiple Data Stream (MIMD): multiprocessor
Figure 1: Flynn Machine Organization
Recently, Skillicorn further extended this taxonomy in order to categorize and relate the growing variety of multiprocessors. His classification scheme involves four abstraction levels (Figure 2). The highest level classifies the model of computation being used. Most computing architectures to date have used the traditional von Neumann machine model of computation.
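As a rough illustration of the difference between two of Flynn's categories, the sketch below contrasts an SISD-style loop (one instruction applied to one data element per step) with a SIMD-style whole-stream operation; it is a conceptual stand-in in plain Python, not a description of real vector hardware.

```python
# Toy illustration of two of Flynn's categories (illustrative only).
# SISD: a single instruction stream applied to a single data stream,
# one element at a time - the classic von Neumann uniprocessor style.
# SIMD: one instruction stream applied to many data elements at once,
# here emulated with a list comprehension standing in for a vector unit.

data = [1, 2, 3, 4, 5, 6, 7, 8]

# SISD-style: one scalar operation per step
sisd_result = []
for x in data:                      # each iteration: one instruction, one datum
    sisd_result.append(x * 2)

# SIMD-style: conceptually a single "multiply by 2" applied to the whole
# data stream in one go (real SIMD hardware would use wide registers)
simd_result = [x * 2 for x in data]

assert sisd_result == simd_result
print(simd_result)
```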
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.