Semiconductor Memory Devices and Circuits

Shimeng Yu

About this book

This book covers semiconductor memory technologies from device bit-cell structures to memory array design with an emphasis on recent industry scaling trends and cutting-edge technologies. The first part of the book discusses the mainstream semiconductor memory technologies. The second part of the book discusses the emerging memory candidates that may have the potential to change the memory hierarchy, and surveys new applications of memory technologies for machine/deep learning applications. This book is intended for graduate students in electrical and computer engineering programs and researchers or industry professionals in semiconductors and microelectronics.

  • Explains the design of basic memory bit-cells including 6-transistor SRAM, 1-transistor-1-capacitor DRAM, and floating gate/charge trap FLASH transistor
  • Examines the design of the peripheral circuits including the sense amplifier and array-level organization for the memory array
  • Examines industry trends of memory technologies such as FinFET-based SRAM, High Bandwidth Memory (HBM), 3D NAND Flash, and 3D X-point array
  • Discusses the prospects and challenges of emerging memory technologies such as PCM, RRAM, STT-MRAM/SOT-MRAM and FeRAM/FeFET
  • Explores new applications such as in-memory computing for AI hardware acceleration


1 Semiconductor Memory Technologies Overview

DOI: 10.1201/9781003138747-1

1.1 Introduction to Memory Hierarchy

1.1.1 Data Explosion to Zetta-scale

Digital data volume is exploding. A recent analysis [1] predicted that the number of devices connected via the Internet could reach almost 75 billion globally by 2025. Furthermore, the data generated by these devices will reach 175 zettabytes¹ by 2025, with most of it coming from videos and security camera surveillance. Therefore, analyzing the data and providing both short-term and long-term storage for it are essential, as is archiving these vast volumes of data. New data are generally collected at the edge, partially processed at the edge, partially transmitted to the cloud, and mostly stored in the cloud. The ever-increasing demand for data analytics and data storage drives the continuing development of memory/storage technologies toward higher density and wider bandwidth. A rough differentiation factor between memory and storage is the lifetime of the data.² Memory holds short-term data with faster and more frequent read/write access, while storage holds long-term data with slower and less frequent read/write access.
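To get a feel for these zetta-scale projections, the back-of-envelope sketch below simply divides the projected data volume by the projected device count; the resulting ~2.3 TB per device is only an illustrative average derived from the figures above, not a number from the book.

```python
# Back-of-envelope scale check for the projections quoted above:
# 75 billion connected devices and 175 zettabytes of data by 2025.

ZETTA = 1e21   # 1 zettabyte = 10**21 bytes
TERA = 1e12    # 1 terabyte = 10**12 bytes

total_data_bytes = 175 * ZETTA
device_count = 75e9

avg_per_device_tb = total_data_bytes / device_count / TERA
print(f"Average data per connected device: {avg_per_device_tb:.1f} TB")  # ~2.3 TB
```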

1.1.2 Memory Hierarchy in the Memory Sub-system

A modern computer system generally follows the von Neumann architecture, where the data are processed in the processor core (e.g., the arithmetic logic unit, ALU) and stored in the memory components. Ideally, a memory device should have both a sufficiently large capacity and a fast access speed. Unfortunately, there exists no such “universal” memory that can satisfy both needs. Therefore, different memory components are required to build the memory sub-system, establishing a memory hierarchy, which is typically shown as a pyramid (Figure 1.1). Going up the hierarchy, the memory components become faster; going down the hierarchy, they offer larger capacity.
Figure 1.1 The memory hierarchy showing mainstream SRAM-based cache, DRAM-based main memory, and NAND Flash-based solid-state drive, and opening gaps for eDRAM or emerging memories to serve the last level cache and/or storage class memory.
The top of the pyramid is the processor core, which typically has the logic units and the cache integrated on the same chip (as indicated by the box in Figure 1.1). The cache stores the most frequently used data and is typically composed of multiple levels (L1, L2, L3, and/or the last level). Static random access memory (SRAM) is the primary on-chip memory technology that implements the L1 to L3 caches. A trade-off between access latency and storage capacity is also present within the cache levels. For example, the L1 cache could be accessed in sub-ns and has a capacity of ~100 kB,³ the L2 cache could be accessed in 1–3 ns and has a capacity of ~1 MB, and the L3 cache could be accessed in 5–10 ns and has a capacity of tens of MB. In some high-performance computing systems, for instance, IBM’s POWER series microprocessors [2], the last-level cache is implemented with embedded dynamic random access memory (eDRAM).
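To make the latency/capacity trade-off across the cache levels concrete, here is a minimal sketch that estimates the average memory access time for a three-level cache backed by main memory. The latencies are the representative values quoted above; the hit rates and the 50 ns DRAM figure are illustrative assumptions, not numbers taken from the book.

```python
# Minimal sketch of the latency/capacity trade-off across cache levels.
# Latencies are the representative values quoted in this section;
# the hit rates and the 50 ns DRAM figure are illustrative assumptions.

cache_levels = [
    # (level, access latency in ns, assumed hit rate)
    ("L1", 0.5, 0.90),
    ("L2", 2.0, 0.80),
    ("L3", 7.0, 0.70),
]
dram_latency_ns = 50.0  # main memory, "tens of ns"

def average_access_time(levels, backstop_ns):
    """Average memory access time, falling through to main memory on misses:
    AMAT = t_L1 + miss_L1 * (t_L2 + miss_L2 * (t_L3 + miss_L3 * t_DRAM))."""
    amat = backstop_ns
    for _level, latency_ns, hit_rate in reversed(levels):
        amat = latency_ns + (1.0 - hit_rate) * amat
    return amat

print(f"Average access time: {average_access_time(cache_levels, dram_latency_ns):.2f} ns")
# Prints ~1.14 ns: a small, fast cache in front of a large, slow memory keeps
# the average access time close to the L1 latency rather than the DRAM latency.
```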
Off the processor’s chip, the main memory in the hierarchy is generally implemented by the standalone DRAM,⁴ which could be accessed in tens of ns and has a capacity of tens of GB. Most of the data that is readily used by a software program is stored in the main memory. Both the SRAM-based cache and the DRAM-based main memory are classified as volatile memories, which means that the data will not be preserved when the power supply is removed; sometimes, they are also referred to as working memories. If the data need to be preserved for a long time even when the power supply is removed, non-volatile memories (NVMs) are required as the storage memories. The line indicating the boundary between the volatile memories and the NVMs is shown in the pyramid. The widely used NVMs are the NAND Flash-based solid-state drive (SSD) and the magnetic hard-disk drive (HDD). The SSD could be accessed in tens of μs and has a capacity of hundreds of GB to a few TB, and the HDD could be accessed in ~ms and has a capacity of tens of TB.
The perceived and ever-increasing gap between the main memory’s bandwidth and the SSD’s bandwidth motivates a recent trend of creating a new level in the memory hierarchy, namely the storage class memory, which could be accessed in hundreds of ns and has a capacity of tens to hundreds of GB. The storage class memory is placed at the boundary between the working memories and the storage memories, and since it very often belongs to the NVMs, it is sometimes also referred to as persistent memory. Emerging memories are actively being researched to fill this vacant position as storage class memory; a notable example is the three-dimensional (3D) X-point memory introduced by Intel and Micron [3]. Figure 1.2 shows the trade-offs between access time, integration density (Mb/mm²), and cycling endurance of different memory technologies in the memory hierarchy. The recent industrial trends in 3D vertical NAND Flash and 3D stacked DRAM are also shown. The necessity of creating a storage class memory to bridge the gap between NAND Flash and DRAM is indicated, which opens opportunities for emerging memories such as resistive random access memory (RRAM) and phase-change memory (PCM). To bridge the gap between DRAM and SRAM, emerging memories such as magnetic random access memory (MRAM) are suited for the last-level cache. The newest technology that may find its position in this chart is the ferroelectric field-effect transistor (FeFET): a 3D stacked high-density FeFET may serve as storage class memory, and a FeFET with optimized write endurance may serve as the last-level cache.
Another important aspect of the trade-off in this chart is the cycling endurance, which specifies how many times the memory device could be written before it fails. As working memories, SRAM and DRAM generally have >10¹⁶ endurance in a 10-year lifetime. NAND Flash is written less often, with 10³–10⁵ endurance, and the storage class memory is expected to have 10⁹–10¹² endurance.
Figure 1.2 The trade-offs in access time, integration density, and cycling endurance for different memory technologies, indicating the opportunities for emerging memories in the storage-class memory and the last level cache.
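As a quick sanity check on the >10¹⁶ endurance figure for working memories, the sketch below counts how many writes a single cell would see if it were rewritten continuously over a 10-year lifetime; the ~30 ns write interval is an illustrative assumption, not a number from the book.

```python
# Why working memories need >10**16 endurance: count the writes a single
# cell would see if it were rewritten continuously over a 10-year lifetime.
# The ~30 ns write interval is an illustrative assumption, not book data.

SECONDS_PER_YEAR = 365.25 * 24 * 3600
lifetime_s = 10 * SECONDS_PER_YEAR   # 10-year lifetime from the text
write_interval_s = 30e-9             # one write per cell every ~30 ns (assumed)

total_writes = lifetime_s / write_interval_s
print(f"Writes over 10 years: {total_writes:.1e}")  # ~1.1e16

# By contrast, NAND Flash (10**3-10**5 endurance) and storage class memory
# (10**9-10**12 endurance) are written far less frequently.
```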
There are other storage media beyond the technologies listed in this pyramid. For archiving purposes, magnetic tapes are still being used for large-volume “cold” data storage due to their ultra-low cost. Optical discs such as CDs, DVDs, or Blu-ray discs are also used for information distribution. Magnetic tapes, optical discs, and HDDs will not be covered in this book, as they are not fabricated with silicon manufacturing processes. To summarize, the focus of this book is placed on semiconductor memory technologies such as SRAM, DRAM, NAND Flash, and the emerging memories.

1.2 Semiconductor Memory Industry Landscape

As introduced in the memory hierarchy, semiconductor memories could be categorized into two types: the embedded memories that are on the same chip as the processor and the standalone memories that are off the processor’s chip. As of 2020, the total semiconductor revenue was $466 billion, of which $126 billion was contributed by the standalone memories [4]. Of the standalone memory revenue, 98% came from DRAM (53%) and NAND Flash (45%). Figure 1.3 shows the market share of DRAM and NAND Flash as of 2020 with respect to the total semiconductor revenue. The standalone memory industry has consolidated over the decades and is now dominated by a few key vendors. The major DRAM vendors are Samsung, Micron, and SK Hynix, and the major NAND Flash vendors are Samsung, Micron, SK Hynix, Kioxia (a spin-off from Toshiba in 2018), Western Digital (which acquired SanDisk in 2016), and Intel (which is scheduled to sell its memory business to SK Hynix by 2025).
Figure 1.3 Semiconductor revenue (2020) and its distribution into standalone memories (DRAM and NAND Flash), which take a 27% share worth $126 billion.
The embedded memories are components of a microprocessor (e.g., a central processing unit, CPU, or a graphics processing unit, GPU); therefore, it is more challenging to estimate their market share.⁵ It was estimated that ~$35 billion of revenue was generated by embedded memory components as of 2020, mostly contributed by the SRAM in microprocessors and microcontrollers,⁶ and partly by the embedded NVMs in microcontrollers. Foundries are the primary vendors of the embedded memories, and the major players are Taiwan Semiconductor Manufacturing Corporation (TSMC), Samsung, GlobalFoundries, etc. SRAM scales well with the logic transistor process down to the leading-edge node, but the conventional embedded Flash (eFlash) is difficult to scale beyond the 28 nm node. Therefore, foundries are actively researching emerging NVMs (eNVMs) that could be extended to more advanced nodes.

1.3 Introduction to Memory Array Architecture

1.3.1 Generic Memory Array Diagram

Regardless of the types of semiconductor memory technologies, they are commonly organiz...

Table of contents

  1. Cover Page
  2. Half Title page
  3. Title Page
  4. Copyright Page
  5. Dedication
  6. Contents
  7. Preface
  8. Acknowledgments
  9. Author
  10. 1 Semiconductor Memory Technologies Overview
  11. 2 Static Random Access Memory (SRAM)
  12. 3 Dynamic Random Access Memory (DRAM)
  13. 4 Flash Memory
  14. 5 Emerging Non-volatile Memories
  15. Index