Primary storage
Primary storage, also known as main memory or internal memory, is a type of computer memory that is directly accessible by the CPU. It is used to store data and instructions that are currently being processed by the computer. Primary storage is volatile, meaning that its contents are lost when the computer is turned off.
Written by Perlego with AI-assistance
9 Key excerpts on "Primary storage"
- eBook - ePub
Programming for Problem-solving with C
Formulating algorithms for complex problems (English Edition)
- Dr. Kamaldeep (Author)
- 2023 (Publication Date)
- BPB Publications (Publisher)
Figure 2.7 illustrates the connection between memory and the processor.
Figure 2.7: The connection between memory and processor
Computer memory is of two types, which are as follows:
- Primary/main/working memory
- Secondary/auxiliary memory/external memory
Primary memory is temporary storage, also known as working memory or main memory. This memory is directly (primarily) accessed by the CPU, which is why it is known as primary memory. It stores all the current temporary data/instructions while the computer runs. Primary memory is further categorized into the following three parts:
- RAM
- ROM
- Cache memory
Generally, RAM is considered primary memory. It is volatile in nature (the data is erased when the computer is switched off). Secondary memory is not directly connected to the CPU; it is also called auxiliary memory or tertiary memory.
RAM
RAM is an abbreviation for random access memory. It stores data as well as instructions before execution by the CPU and the results after execution by the CPU. It is known as random access because the stored content can be accessed directly from any location, in any order. The data currently being used by the CPU is placed in it. It holds data or instructions only temporarily because it is volatile (the data is erased when the power goes off), and it needs a constant power supply to retain data. Before data or a program can run on a computer, it must first be loaded into RAM. If the RAM is too small, it may be unable to hold all the data or programs that the CPU needs; the CPU then has to fetch the required data from secondary memory, which is very slow and slows the computer down. The usual remedy is simply to increase the amount of RAM in the computer. There are the following two types of RAM:
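The excerpt breaks off before listing the two RAM types, but its central point, that programs and data must be loaded into RAM before the CPU can use them and that RAM loses its contents when power is removed, can be illustrated with a minimal C sketch. This is not code from the book; the file name data.bin and the 4 KB buffer size are arbitrary placeholders.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Data on secondary storage (a file) cannot be used by the CPU directly. */
    FILE *fp = fopen("data.bin", "rb");   /* hypothetical input file */
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }

    /* Allocate a buffer in primary storage (RAM). */
    unsigned char *buf = malloc(4096);
    if (buf == NULL) {
        fclose(fp);
        return 1;
    }

    /* Copy the data from disk into RAM so the CPU can process it. */
    size_t n = fread(buf, 1, 4096, fp);

    /* The CPU now works on the in-RAM copy, here just summing the bytes. */
    unsigned long sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += buf[i];
    printf("read %zu bytes, checksum = %lu\n", n, sum);

    /* RAM is volatile: once the process exits (or power is lost), the buffer
       is gone. Anything worth keeping must be written back to secondary
       storage explicitly. */
    free(buf);
    fclose(fp);
    return 0;
}
```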
- eBook - PDF
Computer Organisation and Architecture
An Introduction
- B.S. Chalk, Antony Carter, Robert Hind (Authors)
- 2017 (Publication Date)
- Red Globe Press (Publisher)
Main memory provides the largest internal storage area that can be directly accessed by the CPU, having a typical storage capacity of between 128 MBytes and 512 MBytes in a PC. There may be more in PCs being used as network servers and in many mainframe computer systems. To reduce cost, main memory is normally implemented using Dynamic Random Access Memory (DRAM) chips (see Section 6.3). Because DRAM operates at around one-tenth of the speed of CPU logic, it tends to act as a bottleneck, reducing the rate at which the processor can fetch and execute instructions. To compensate for this, many systems include a small high-speed cache memory. Cache memory sits between the CPU and main memory and is usually implemented using more expensive Static Random Access Memory (SRAM) technology (see Section 6.3). This transparent memory is used for storing frequently used program segments. Each time the CPU requests an instruction or data word, the cache is checked first. If the information is found in the cache, a ‘hit’ occurs and the instruction or data word is rapidly retrieved and passed directly to the processor. If the information is not in the cache, a ‘miss’ takes place and the slower memory is accessed instead. Once the data has been retrieved from main memory after a miss, it is placed in cache memory in case it will be required again in the near future. Memory that is not directly accessible by the CPU is called external memory and includes secondary storage devices such as magnetic disks, tapes and optical storage devices, such as CD-ROMs and DVDs. These devices, which must be accessed through input–output (I/O) interface circuits, are the slowest components in the memory hierarchy. They provide a high-capacity storage area for programs and data not immediately required by the processor.
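To make the hit/miss mechanism concrete, here is a small C sketch of a direct-mapped cache lookup. It is only an illustration, not code from the book: the 8-line cache, 16-byte line size, and the sample address list are invented values chosen to show how an address splits into an index and a tag, and how a miss causes the line to be filled from the slower main memory.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define NUM_LINES 8          /* toy cache: 8 lines         */
#define LINE_SIZE 16         /* 16 bytes per cache line    */

struct cache_line {
    bool     valid;
    uint32_t tag;
};

static struct cache_line cache[NUM_LINES];

/* Returns true on a hit, false on a miss (and fills the line). */
static bool access_cache(uint32_t addr) {
    uint32_t index = (addr / LINE_SIZE) % NUM_LINES;
    uint32_t tag   = addr / (LINE_SIZE * NUM_LINES);

    if (cache[index].valid && cache[index].tag == tag)
        return true;                     /* hit: served from fast SRAM      */

    cache[index].valid = true;           /* miss: fetch the line from the   */
    cache[index].tag   = tag;            /* slower main memory and cache it */
    return false;
}

int main(void) {
    uint32_t addresses[] = {0x000, 0x004, 0x008, 0x100, 0x000, 0x004};
    int hits = 0, total = (int)(sizeof addresses / sizeof addresses[0]);

    for (int i = 0; i < total; i++) {
        bool hit = access_cache(addresses[i]);
        printf("addr 0x%03X -> %s\n", (unsigned)addresses[i], hit ? "hit" : "miss");
        hits += hit;
    }
    printf("hit rate: %d/%d\n", hits, total);
    return 0;
}
```

Addresses in the same 16-byte line hit after the first miss, while address 0x100 maps to the same index as 0x000 with a different tag and therefore evicts it, a conflict miss.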
- eBook - PDF
- Lubomir Stanchev (Author)
- 2013 (Publication Date)
- CRC Press (Publisher)
This means that with time, both main memory and CPUs become cheaper. Note that there is a significant difference between the hard disk and the main memory of a computer. When the computer shuts down, everything that is written in main memory disappears. This is why the main memory of a computer is referred to as volatile memory. By contrast, the data on the hard disk persists even after the computer shuts down and there is no power. Usually, magnetic fields are used to store the data permanently. This is why hard disks are referred to as persistent storage. Since the hard disk of a computer contains physical components, such as moving heads and rotating platters, accessing data from the hard disk is significantly slower than accessing data from main memory. Since the hard disk contains moving components, Moore’s law does not apply to it. Main memory is significantly more expensive than hard disk memory. For example, currently one can buy a 1 TB hard disk for around $100; buying that much main memory costs more than ten times as much. Note that the CPU cannot directly access data from the hard disk. The data needs to be brought into main memory first before it can be accessed by the CPU. Main memory is often referred to as Random Access Memory (RAM). This means that it takes exactly the same time to access any cell of main memory. By contrast, accessing different parts of the hard disk can take different amounts of time. For example, the sector (the unit of division of the hard disk) that is closest to the reading head can be accessed the fastest. Accessing a sector that is far away from the reading head incurs rotational delay, the time needed for the sector to reach the reading head. Input devices and output devices can be connected to a computer. Examples of input devices include the keyboard, joystick, mouse, and microphone. Examples of output devices include the monitor, speakers, and printer.
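The rotational delay described above can be estimated directly from the spindle speed. A short C sketch, assuming a 7,200 RPM drive (a common consumer figure, not one given in the excerpt), computes the time for one full rotation and the average half-rotation wait:

```c
#include <stdio.h>

int main(void) {
    double rpm = 7200.0;                             /* assumed spindle speed    */
    double full_rotation_ms = 60.0 * 1000.0 / rpm;   /* one revolution, in ms    */

    /* Worst case the sector has just passed the head (one full rotation);
       on average the head waits half a rotation. */
    printf("full rotation        : %.2f ms\n", full_rotation_ms);        /* ~8.33 ms */
    printf("avg rotational delay : %.2f ms\n", full_rotation_ms / 2.0);  /* ~4.17 ms */
    return 0;
}
```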
- eBook - PDF
- Gilbert Held (Author)
- 2000 (Publication Date)
- Auerbach Publications (Publisher)
The short-term memory focuses on work at hand, but can keep only so many facts in view at one time. If short-term memory fills up, the brain sometimes is able to refresh it from facts stored in long-term memory. A computer also works this way. If RAM fills up, the processor needs to continually go to the hard disk to overlay old data in RAM with new, slowing down the computer’s operation. Unlike the hard disk, which can theoretically fill up and put the server out of business, so to speak, RAM never runs out of memory. It keeps operating, but much more slowly than desirable. RAM is physically smaller than the hard disk and holds much less data. A typical client computer may come with 32 million bytes of RAM and a 4-billion-byte hard disk. A server can be much larger, up to gigabytes of RAM or more. RAM comes in the form of discrete (meaning separate) microchips and also in the form of modules that plug into slots in the computer’s motherboard. These slots connect through a bus or set of electrical paths to the processor. The hard drive, on the other hand, stores data on a magnetized surface that looks like a stack of CD-ROMs. Having more RAM reduces the number of times that the computer processor has to read data from the hard disk, an operation that takes much longer than reading data from RAM. (RAM access time is in nanoseconds; hard disk access time is in milliseconds.) RAM is called random access because any storage location can be accessed directly. Originally, the term distinguished regular core memory from offline memory, such as magnetic tape, in which an item of data could be accessed only by starting from the beginning of the tape and finding an address sequentially. Perhaps it should have been called “nonsequential memory” because RAM access is hardly random. RAM is organized and controlled in a way that enables data to be stored and retrieved directly to specific locations.
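A quick calculation puts the nanoseconds-versus-milliseconds gap in perspective. The C sketch below uses assumed round numbers (about 100 ns per RAM access and 10 ms per hard-disk access, not figures from the excerpt) to show how many RAM reads fit into the time of a single disk read:

```c
#include <stdio.h>

int main(void) {
    double ram_access_ns  = 100.0;                 /* assumed RAM latency   */
    double disk_access_ms = 10.0;                  /* assumed disk latency  */
    double disk_access_ns = disk_access_ms * 1e6;  /* convert ms to ns      */

    /* One disk access costs as much time as this many RAM accesses. */
    printf("1 disk read ~= %.0f RAM reads\n", disk_access_ns / ram_access_ns);
    return 0;
}
```

With these figures a single disk read costs roughly 100,000 RAM reads, which is why adding RAM (and so avoiding disk accesses) speeds a machine up so noticeably.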
- eBook - PDF
Computer Practice N4 SB
TVET FIRST
- S Sasti, D Sasti (Authors)
- 2021 (Publication Date)
- Macmillan (Publisher)
1.2.5 Computer memory
The computer’s memory is primary storage, and there are three important types of memory:
• Read-only memory (ROM).
• Random access memory (RAM), which you have already learnt about.
• Flash memory.
Table 1.1: Computer memory
- ROM (read-only memory): the computer’s permanent memory. Data stored cannot be changed electronically or erased after it is made. Example: the ROM stores the instructions that the computer needs in order to start up.
- RAM (random access memory): the computer’s temporary memory. Memory is volatile and is erased when the computer is switched off. Example: this is the computer’s main memory; it holds data, instructions and programs while the computer is running, and it is the space the CPU uses to perform processing.
- Flash memory: electronic non-volatile memory; a type of electronically erasable programmable read-only memory (EEPROM) that stores data permanently and can be electronically changed or erased. Example: flash memory is inside your smartphone, GPS, MP3 player, digital camera, PC and the USB flash stick; it is also used on solid state drives (SSDs), which are fast but also expensive.
read-only memory (ROM): the computer’s permanent memory that holds instructions to start up the computer
USB (universal serial bus): a commonly used computer port for connecting peripheral devices
solid state drive (SSD): a type of permanent storage similar to a hard disk drive
The size of the memory is measured by the amount of data it can store.
- eBook - PDF
- Henry M. Walker (Author)
- 2012 (Publication Date)
- Chapman and Hall/CRC (Publisher)
Chapter 4: Where Are Programs and Data Stored?
As with the representation of data, many applications make the details of program and data storage transparent to users as they run programs. Thus, when running a program, we rarely worry about where the program was stored, how the main memory is allocated for our work, or how data move into the main memory or the CPU. Similarly, when we work with a file from a word processor, spreadsheet, database, or Web browser, we normally do not think much about where on the disk the material is stored, how we open a file, or how the machine knows where to look for it. However, it is natural to wonder how these materials are stored and retrieved. Sometimes a basic understanding of such matters can guide us in getting work done efficiently. This chapter reviews some basics of program storage and data storage and considers how we can use this knowledge in our regular use of computers.
What are the types of memory? Chapter 1 began an answer to this question by describing basic functions of the CPU and its registers, cache memory, main memory, and I/O devices—all connected by a bus. Addressing this question here allows us to consider four additional elements of a computer’s memory:
1. Types of main memory (RAM and ROM)
2. Transitory versus permanent memory
3. Files and their organization
4. Virtual memory and its relationship to files and main memory
When we are finished, we will have the hierarchical view of computer storage that is shown in Figure 4.1. (Although Figure 4.1 serves as a nice summary to this section of the chapter, you may have to keep reading to learn what all of the terms in the figure mean; please be patient as you read ahead!)
Types of main memory: As we begin our consideration of computer storage, we need to consider a computer’s main memory a bit more carefully. When considering the main memory, most of it, called random access memory (RAM), functions as described in Chapter 1.
- eBook - PDF
- Stephen D. Burd (Author)
- 2015 (Publication Date)
- Cengage Learning EMEA (Publisher)
Wait states reduce CPU and computer system performance. As discussed in Chapter 2, registers in the CPU are storage locations for instructions and data. Their location enables zero wait states for access, but CPUs have a limited number of registers—far fewer than are needed to hold typical programs and their data.
Figure 5.2: Primary and secondary storage and their component devices
Primary storage extends the limited capacity of CPU registers. The CPU moves data and instructions continually between registers and primary storage. To ensure that this movement incurs few or no wait states, all or part of primary storage is implemented with the fastest available storage devices. With current technology, primary storage speed is typically faster than secondary storage speed by a factor of 10^5 or more. Speed is also an important issue for secondary storage. Many information system applications need access to large databases to support ongoing processing. Program response time in these systems depends on secondary storage access speed, which also affects overall computer performance in other ways. Before a program can be executed, its executable code is copied from secondary to primary storage. The delay between a user request for program execution and the first prompt for user input depends on the speed of both primary and secondary storage.
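A back-of-the-envelope calculation shows why wait states matter. The C sketch below is illustrative only: the 1 ns cycle time and 10 wait states per memory access are assumed values, not figures from the excerpt.

```c
#include <stdio.h>

int main(void) {
    double cycle_ns    = 1.0;   /* assumed CPU clock cycle time            */
    int    wait_states = 10;    /* assumed extra cycles per memory access  */

    /* With zero wait states (a register or cache access), the access
       completes in one cycle; otherwise the CPU idles for the wait states. */
    double fast_access_ns = cycle_ns;
    double slow_access_ns = cycle_ns * (1 + wait_states);

    printf("access with 0 wait states : %.1f ns\n", fast_access_ns);
    printf("access with %d wait states: %.1f ns\n", wait_states, slow_access_ns);
    printf("slowdown factor           : %.1fx\n", slow_access_ns / fast_access_ns);
    return 0;
}
```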
- Jocelyn O. Padallan (Author)
- 2023 (Publication Date)
- Arcler Press (Publisher)
Figure 4.10: Memory caches (CPU caches) employ high-speed static RAM (SRAM) chips, while disk caches are often a portion of main memory consisting of standard dynamic RAM (DRAM) chips. Source: https://www.pcmag.com/encyclopedia/term/cache
The efficacy of a cache is determined by the proportion of memory accesses that “hit” in the cache and, as a consequence, are served from the cache rather than the considerably slower main memory. Because several computer programs exhibit locality in the way they access memory, a cache can achieve a high hit rate even with a very limited capacity. This is because data that have been accessed in the past are likely to be accessed again shortly. Thus, because the cache stores the data that were most recently accessed (or that are accessed most often), a tiny cache, whose capacity is significantly smaller than that of main memory, can handle the majority of memory accesses (McFarling, 1989).
4.4.1. Primary Considerations of Design
There are three fundamental design factors to consider whenever implementing a cache: (i) capacity, (ii) physical substrate, and (iii) granularity of data management. We’ll go through each of them one by one in the sections below (Hill & Smith, 1989). Firstly, the physical substrate used to implement the cache should be capable of delivering much lower latencies than the physical substrate used to implement the main memory. Because of this, SRAM (pronounced “es-ram”) has been and continues to be the most common physical substrate for caches. SRAM’s primary advantage is that it can operate at speeds comparable to those of the CPU (Iyer, 2004). Furthermore, SRAM is made up of the same sort of semiconductor-based transistors as the CPU. As a result, a lower-cost SRAM-based cache may be installed beside the CPU on the same semiconductor chip, enabling even lower latencies.
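The “efficacy” discussed here is commonly summarised as average memory access time (AMAT = hit time + miss rate × miss penalty). The C sketch below evaluates that formula for a few hit rates; the 2 ns cache latency and 60 ns main-memory penalty are assumed values for illustration, not numbers from the excerpt.

```c
#include <stdio.h>

int main(void) {
    double cache_ns = 2.0;     /* assumed cache (SRAM) hit time                    */
    double mem_ns   = 60.0;    /* assumed extra time to reach main memory on a miss */

    double hit_rates[] = {0.50, 0.80, 0.95, 0.99};

    /* AMAT = hit_time + miss_rate * miss_penalty */
    for (int i = 0; i < 4; i++) {
        double miss_rate = 1.0 - hit_rates[i];
        double amat = cache_ns + miss_rate * mem_ns;
        printf("hit rate %.2f -> average access time %.1f ns\n", hit_rates[i], amat);
    }
    return 0;
}
```

With these assumptions the average access time falls from 32 ns at a 50% hit rate to under 3 ns at 99%, which is why locality-driven hit rates make a small cache so effective.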
- eBook - PDF
Computer Architecture
Fundamentals and Principles of Computer Design, Second Edition
- Joseph D. Dumas II (Author)
- 2016 (Publication Date)
- CRC Press (Publisher)
The technique of main memory caching is a somewhat more general way of improving main memory performance that we will examine in this section. A cache memory is a high-speed buffer memory that is logically placed between the CPU and main memory. (It may be physically located on the same integrated circuit as the processor core, nearby in a separate chip on the system board, or in both places.) Its purpose is to hold data and/or instructions that are most likely to be needed by the CPU in the near future so that they may be accessed as rapidly as possible—ideally, at the full speed of the CPU with no “wait states,” which are usually necessary if data are to be read from or written to main memory. The idea is that if the needed data or instructions can usually be found in the faster cache memory, those are accesses for which the processor will not have to wait on the slower main memory. The concept of cache memory goes back at least to the early 1960s, when magnetic core memories (fast for the time) were used as buffers between the CPU and main storage, which may have been a rotating magnetic drum. The word cache comes from the French verb cacher, which means “to hide.” The operation of the cache is transparent to, or in effect hidden from, the programmer. With no effort on his or her part, the programmer’s code (or at least portions of it) runs “invisibly” from cache, and main memory appears to be faster than it really is. This does not mean that no effort is required to design and manage the cache; it just means that the effort is expended in the design of the hardware rather than in programming. We examine aspects of this in the next sections.
2.4.1 Locality of reference
Typically, due to cost factors (modern cache is built from more expensive SRAM rather than the DRAM used for main memory), cache is much smaller in size than main memory. For example, a system with 4 GB of
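The locality of reference introduced above is easy to observe in practice. The C sketch below (not from the book) sums the same matrix twice, once row by row with good spatial locality and once column by column with poor locality; on most machines the first traversal is noticeably faster, though actual timings depend on the cache sizes of the hardware at hand.

```c
#include <stdio.h>
#include <time.h>

#define N 2048

static double a[N][N];     /* stored in row-major order in main memory */

int main(void) {
    long sum = 0;
    clock_t t0, t1;

    /* Row-major traversal: consecutive accesses fall in the same cache
       line, so most of them hit in the cache. */
    t0 = clock();
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += (long)a[i][j];
    t1 = clock();
    printf("row-major   : %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    /* Column-major traversal: each access jumps N * sizeof(double) bytes,
       defeating spatial locality and causing many more cache misses. */
    t0 = clock();
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += (long)a[i][j];
    t1 = clock();
    printf("column-major: %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    printf("checksum: %ld\n", sum);   /* keeps the loops from being optimized away */
    return 0;
}
```

Swapping the loop order is the only change; any difference in run time comes entirely from how well each traversal order exploits the cache.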
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.








