Event-Based Neuromorphic Systems

About this book

Neuromorphic electronic engineering takes its inspiration from the functioning of nervous systems to build more power efficient electronic sensors and processors. Event-based neuromorphic systems are inspired by the brain's efficient data-driven communication design, which is key to its quick responses and remarkable capabilities. This cross-disciplinary text establishes how circuit building blocks are combined in architectures to construct complete systems. These include vision and auditory sensors as well as neuronal processing and learning circuits that implement models of nervous systems.

Techniques for building multi-chip scalable systems are considered throughout the book, including methods for dealing with transistor mismatch, extensive discussions of communication and interfacing, and making systems that operate in the real world. The book also provides historical context that helps relate the architectures and circuits to each other and that guides readers to the extensive literature. Chapters are written by founding experts and have been extensively edited for overall coherence.

This pioneering text is an indispensable resource for practicing neuromorphic electronic engineers, advanced electrical engineering and computer science students and researchers interested in neuromorphic systems.

Key features:

  • Summarises the latest design approaches, applications, and future challenges in the field of neuromorphic engineering.
  • Presents examples of practical applications of neuromorphic design principles.
  • Covers address-event communication, retinas, cochleas, locomotion, learning theory, neurons, synapses, floating gate circuits, hardware and software infrastructure, algorithms, and future challenges.


1 Introduction

The effortless ability of animal brains to engage with their world provides a constant challenge for technology. Despite vast progress in digital computer hardware, software, and system concepts, it remains true that brains far outperform technological computers across a wide spectrum of tasks, particularly when these are considered in the light of power consumption. For example, the honeybee demonstrates remarkable task, navigational, and social intelligence while foraging for nectar, and achieves this performance using less than a million neurons, burning less than a milliwatt, using ionic device physics with a bulk mobility that is about 10 million times lower than that of electronics. This performance is many orders of magnitude more task-competent and power-efficient than current neuronal simulations or autonomous robots. For example, a 2009 ‘cat-scale’ neural simulation on a supercomputer simulated 10¹³ synaptic connections at 700 times slower than real time, while burning about 2 MW (Ananthanarayanan et al. 2009); and the DARPA Grand Challenge robotic cars drove along a densely GPS-defined path, carrying over a kilowatt of sensing and computing power (Thrun et al. 2007).
Although we do not yet grasp completely nature’s principles for generating intelligent behavior at such low cost, neuroscience has made substantial progress toward describing the components, connection architectures, and computational processes of brains. All of these are remarkably different from current technology. Processing is distributed across billions of elementary units, the neurons. Each neuron is wired to thousands of others, receiving input through specialized modifiable connections, the synapses. The neuron collects and transforms this input via its tree-like dendrites, and distributes its output via tree-like axons. Memory instantiated through the synaptic connections between neurons is co-localized with processing through their spatial arrangements and analog interactions on the neurons’ input dendritic trees. Synaptic plasticity is wonderfully complex, yet allows animals to retain important memories over a lifetime while learning on the time scale of milliseconds. The output axons convey asynchronous spike events to their many targets via complex arborizations. In the neocortex the majority of the targets are close to the source neuron, indicating that network processing is strongly localized, with relatively smaller bandwidth devoted to long-range integration.
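The neuron's behavior sketched above can be caricatured in a few lines of code. The following is a minimal leaky integrate-and-fire sketch, not a model from this book: weighted synaptic inputs are summed on a leaky "membrane" variable, and a spike event is emitted when it crosses threshold. All constants (`weight`, `leak`, `threshold`) are illustrative.

```python
# Minimal leaky integrate-and-fire neuron: synaptic input is weighted and
# accumulated on a leaky membrane variable; a threshold crossing emits a
# spike event and resets the membrane. Constants are illustrative only.

def lif_neuron(input_spikes, weight=0.3, leak=0.9, threshold=1.0):
    """Return the time steps at which the neuron spikes.

    `input_spikes` is a list of 0/1 values, one per time step.
    """
    v = 0.0
    out = []
    for t, s in enumerate(input_spikes):
        v = leak * v + weight * s      # leaky integration of synaptic input
        if v >= threshold:             # threshold crossing -> spike event
            out.append(t)
            v = 0.0                    # reset after the spike
    return out

spike_times = lif_neuron([1] * 10)     # regular input drives periodic output
```

Note how, as in the biological description, the output is a sparse sequence of asynchronous events rather than a continuously sampled value; this sparseness is what event-based communication exploits.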
The various perceptual, cognitive, and behavioral functions of the brain are systematically organized across the space of the brain. Nevertheless at least some aspects of these various processes can be discerned within each specialized area, and their organization suggests a coalition of richly intercommunicating specialists. Overall then, the brain is characterized by vast numbers of processors, with asynchronous message passing on a vast point-to-point wired communication infrastructure. Constraints on the construction and maintenance of this wiring enforce a strategy of local collective specialization, with longer range coordination.
For the past two decades neuromorphic engineers have grappled with the implementation of these principles in integrated circuits and systems. The opportunity of this challenge is the realization of a technology for computing that combines the organizing principles of the nervous system with the superior charge carrier mobility of electronics. This book provides some insights and many practical details into the ongoing work toward this goal. These results become ever more important for more mainstream computing, as limits on component density force ever more distributed processing models.
The origin of this neuromorphic approach dates from the 1980s, when Carver Mead’s group at Caltech came to understand that they would have to emulate the brain’s style of communication if they were to emulate its style of computation. These early developments continued in a handful of laboratories around the world, but more recently there has been an increase of development both in academic and industrial labs across North America, Europe, and Asia. The relevance of the neuromorphic approach to the broader challenges of computation is now clearly recognized (Hof 2014). Progress in neuromorphic methods has been facilitated by the strongly cooperative community of neuroscientists and engineers interested in this field. That cooperation has been promoted by practical workshops such as the Telluride Neuromorphic Cognition Engineering Workshop in the United States, and the CapoCaccia Cognitive Neuromorphic Engineering Workshop in Europe.
Event-Based Neuromorphic Systems arose from this community’s wish to disseminate state-of-the-art techniques for building neuromorphic electronic systems that sense, communicate, compute, and learn using asynchronous event-based communication. This book complements the introductory textbook (Liu et al. 2002) that explained the basic circuit building blocks for neuromorphic engineering systems. Event-Based Neuromorphic Systems now shows how those building blocks can be used to construct complete systems, with a primary focus on the hot field of event-based neuromorphic systems. The systems described in this book include sensors and neuronal processing circuits that implement models of nervous systems. Communication between the modules is based on the crucial asynchronous event-driven protocol called the address-event representation (AER), which transposes the communication of spike events on slow point-to-point axons into digital communication of small data packets on fast buses (see, for example, Chapter 2). The book as a whole describes the state of the art in the field of neuromorphic engineering, including the building blocks necessary for constructing complete neuromorphic chips and for solving the technological challenges necessary to make multi-chip scalable systems. A glance at the index shows the wide breadth of topics, for example, next to ‘Moore’s law’ is ‘motion artifact’ and next to ‘bistable synapse’ is ‘bootstrapped mirror.’
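The core idea of AER, as described above, can be sketched in software. The sketch below assumes a hypothetical word format in which each spike becomes a (timestamp, address) pair, time-multiplexed onto one shared digital bus instead of a dedicated axon wire; real AER hardware uses asynchronous handshaking and an arbiter, which sorting merely stands in for here.

```python
# Sketch of the address-event representation (AER): many slow per-neuron
# spike trains are merged into one time-ordered stream of (timestamp,
# address) events on a shared fast bus, and can be demultiplexed again at
# the receiver. The word format here is hypothetical.

from collections import namedtuple

Event = namedtuple("Event", ["timestamp_us", "address"])

def encode_events(spikes):
    """Merge per-neuron spike trains into one time-ordered event stream.

    `spikes` maps a neuron address to a list of spike times (microseconds).
    Simultaneous events are serialized by address, standing in for the
    arbiter circuit a real AER chip would use.
    """
    stream = [Event(t, addr) for addr, times in spikes.items() for t in times]
    return sorted(stream)  # namedtuples sort like (timestamp, address) tuples

def decode_events(stream):
    """Recover per-neuron spike trains from the shared event stream."""
    out = {}
    for ev in stream:
        out.setdefault(ev.address, []).append(ev.timestamp_us)
    return out

spikes = {3: [100, 450], 7: [100, 300]}
stream = encode_events(spikes)
assert decode_events(stream) == spikes   # round trip preserves spike trains
```

The design choice this illustrates is the one the chapter highlights: because the bus is orders of magnitude faster than neural time scales, many point-to-point axons can share it without losing the timing information carried by the events.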
The book is organized into two parts: Part I (Chapters 2–6) is accessible to readers from a wider range of backgrounds. It describes the range of AER communication architectures, AER sensors, and electronic neural models that are being constructed without delving exhaustively into the underlying technological details. Several of these chapters also include a historical tree that helps relate the architectures and circuits to each other, and that guides readers to the extensive literature. It also includes the largely theoretical Chapter 6 on learning in event-based systems.
Part II (Chapters 7–16) is addressed to readers who intend to construct neuromorphic electronic systems. These readers are assumed to be familiar with transistor physics (particularly subthreshold operation), and in general to be comfortable with reasoning about analog CMOS circuits. A mixed-signal CMOS designer should be comfortable reading these more specialized topics, while an application engineer would easily be able to follow the chapters on hardware and software infrastructure. This part of the book provides information about the various approaches used to construct the building blocks for the sensors and computational units modeling the nervous system, including details of silicon neurons, silicon synapses, silicon cochlea circuits, floating-gate circuits, and programmable digital bias generators. It also includes chapters on hardware and software communication infrastructure and algorithmic processing of event-based sensor output.
The book concludes with Chapter 17, which considers differences between current computers and nervous systems in the ways that computational processing is implemented, and discusses the long-term route toward more cognitive neuromorphic systems.

1.1 Origins and Historical Context

Many of the authors of Event-Based Neuromorphic Systems were strongly influenced by Analog VLSI and Neural Systems (Mead 1989). Carver Mead’s book was the story of an extended effort to apply the subthreshold transistor operating region of CMOS electronics to realize a neural style and scale of computation. The book was written at a time when automatically compiled synchronous logic circuits were just beginning to dominate silicon production, a field that Mead was central in creating. Much like the famous Mead and Conway (1980) book on logic design, which aimed at instilling a set of methodologies for practical realization of logic chips in digital designers, Analog VLSI and Neural Systems was focused on providing a set of organizing principles for neuromorphic designers. These ideas were associated with the name of Mead’s group at Caltech, the Physics of Computation group, and emphasized notions such as signal aggregation by current summing on wires, multiplication by summed exponentials, and relations between the fundamental Boltzmann physics of energy barriers and the physics of activation of voltage-sensitive nerve channels.
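The "multiplication by summed exponentials" mentioned above rests on the subthreshold transfer characteristic of the MOS transistor. Under the usual simplifying assumptions (saturation, source at ground), the drain current is exponential in the gate voltage:

```latex
I_D = I_0 \, e^{\kappa V_g / U_T},
\qquad U_T = \frac{kT}{q} \approx 25\ \mathrm{mV}\ \text{at room temperature},
```

where $\kappa$ (typically 0.6–0.9) captures the capacitive coupling of the gate to the channel. Because currents are exponential in voltages, adding two gate voltages multiplies the corresponding currents:

```latex
I_0 \, e^{\kappa (V_1 + V_2)/U_T}
= \frac{1}{I_0}
  \left( I_0 \, e^{\kappa V_1 / U_T} \right)
  \left( I_0 \, e^{\kappa V_2 / U_T} \right)
= \frac{I_1 I_2}{I_0},
```

the same Boltzmann-type exponential that governs the activation of voltage-sensitive nerve channels, which is the physical analogy the Physics of Computation group exploited.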
However, at that time the field was so new that there were many practical aspects that did not work out in the long run, mainly because they suffered from transistor mismatch effects. So the early systems were good for demonstration but not for real-world application and mass production. The fact that current copying is one of the least precise operations that can be implemented in practical CMOS was barely mentioned in the book. This omission led to designs that worked ideally in simulation but functioned poorly in practice. In relation to Event-Based Neuromorphic Systems, the central importance of communication of information was not realized until after the book was completed, and so none of the systems described in the book had an AER output; rather the analog information was scanned out serially from the systems described there. Even a later collection of chapters (Mead and Ismail 1989) about Mead-lab systems and Mead’s review paper in Proceedings of the IEEE (1990) barely touched on communication aspects.
Since 1989 there has been a continued drive to improve the technology of neuromorphic engineering. But to place the progress of neuromorphic engineering in context, we can consider logic, that is, digital chip design. Around 1990, a high-end personal computer had about 8 MB of RAM and about 25 MHz clock speed (one of the authors remembers being a proud owner of a personal CAD station that could be used to work on chip design at home). As of 2013, a state-of-the-art personal computer has about 16 GB of memory and 3 GHz clock speed. So in about 20 years we have seen approximately a 1000-fold increase in memory capacity and a 100-fold increase in clock speed. These of course are reflections of Moore’s law and investments of hundreds of billions of dollars. But the basic organizing principles used in computation have hardly changed at all. Most advances have come about because of the availability of more raw memory and computing power, not by fundamental advances in architectures.
During this period the neuromorphic engineering community has expanded considerably from its origins at Caltech, Johns Hopkins, and EPFL (Figure 1.1). At first only a few modest, rather unconvincing lab prototypes could be shown in a couple of l...

Table of contents

  1. Cover
  2. Title Page
  3. Copyright
  4. Dedication
  5. List of Contributors
  6. Foreword
  7. Acknowledgments
  8. List of Abbreviations and Acronyms
  9. Chapter 1: Introduction
  10. Part I: Understanding Neuromorphic Systems
  11. Part II: Building Neuromorphic Systems
  12. Index
  13. End User License Agreement

Event-Based Neuromorphic Systems by Shih-Chii Liu, Tobi Delbruck, Giacomo Indiveri, Adrian Whatley, and Rodney Douglas is available in PDF and ePUB formats, in Technology & Engineering > Electrical Engineering & Telecommunications.