Learning in Energy-Efficient Neuromorphic Computing: Algorithm and Architecture Co-Design

Nan Zheng, Pinaki Mazumder
About This Book

Explains current co-design and co-optimization methodologies for building hardware neural networks and algorithms for machine learning applications

This book focuses on how to build energy-efficient hardware for neural networks with learning capabilities, and it provides co-design and co-optimization methodologies for building hardware neural networks that can learn. Presenting a complete picture from high-level algorithms to low-level implementation details, Learning in Energy-Efficient Neuromorphic Computing: Algorithm and Architecture Co-Design also covers the fundamentals and essentials of neural networks (e.g., deep learning), as well as the hardware implementation of neural networks.

The book begins with an overview of neural networks. It then discusses algorithms for utilizing and training rate-based artificial neural networks. Next comes an introduction to various options for executing neural networks, ranging from general-purpose processors to specialized hardware and from digital to analog accelerators. A design example of an energy-efficient accelerator for adaptive dynamic programming with neural networks is also presented. An examination of fundamental concepts and popular learning algorithms for spiking neural networks follows, along with a look at hardware for spiking neural networks. A subsequent chapter offers readers three design examples (two based on conventional CMOS and one on emerging nanotechnology) that implement the learning algorithms from the previous chapter. The book concludes with an outlook on the future of neural network hardware.

  • Includes a cross-layer survey of hardware accelerators for neuromorphic algorithms
  • Covers the co-design of architecture and algorithms with emerging devices for much-improved computing efficiency
  • Focuses on the co-design of algorithms and hardware, which is especially critical for using emerging devices, such as traditional memristors or diffusive memristors, for neuromorphic computing

Learning in Energy-Efficient Neuromorphic Computing: Algorithm and Architecture Co-Design is an ideal resource for researchers, scientists, software engineers, and hardware engineers dealing with ever-tightening requirements on power consumption and response time. It is also an excellent text for teaching undergraduate and graduate students about the latest generation of neural networks with powerful learning capabilities.


Information

Year: 2019
ISBN: 9781119507406
Edition: 1

1 Overview

Learning never exhausts the mind.
Leonardo da Vinci

1.1 History of Neural Networks

Even though modern von Neumann architecture-based processors are able to conduct logic and scientific computations at extremely high speed, they perform poorly on many tasks that are easy for human beings, such as image recognition, video motion detection, and natural language processing. Aiming to emulate the capabilities of the human brain, a non-Boolean paradigm of computation, called the neural network, has been under development since the early 1950s and has evolved slowly over many decades. So far, at least three major forms of neural networks have been presented in the literature, as shown in Figure 1.1.
Figure 1.1 The development of neural networks over time. One of the earliest neural networks, called the perceptron, is similar to a linear classifier. The type of neural network that is widely used nowadays is referred to as an artificial neural network in this book; this kind of neural network uses real numbers to carry information. The spiking neural network is another type of neural network that has been gaining popularity in recent years; it uses spikes to represent information.
The simplest neural network is the perceptron, where hand-crafted features are employed as input to the network. Outputs of the perceptron are binary numbers obtained through hard thresholding, so the perceptron can be conveniently used for classification problems whose inputs are linearly separable. The second type of neural network is sometimes called a multilayer perceptron (MLP). Nevertheless, the “perceptrons” in an MLP differ from the simple perceptrons of the earlier neural network: in an MLP, a non-linear activation function is associated with each neuron, and popular choices for this function are the sigmoid function, the hyperbolic tangent function, and the rectifier function. The output of each neuron is thus a continuous variable instead of a binary state. The MLP is widely adopted by the machine-learning community, as it can be easily implemented on general-purpose processors. This type of neural network is so popular that the phrase “artificial neural network” (ANN) is often used to refer to it exclusively, even though the term ANN should properly cover any neural network other than a biological one. The ANN is the backbone of deep learning, a widely popular mode of learning. A less well-known type of neural network is the spiking neural network (SNN). Compared to the previous two types, an SNN more closely resembles a biological neural network in the sense that spikes are used to transport information. It is believed that SNNs are more powerful and advanced than ANNs, as the dynamics of an SNN are much more complicated and the information carried by an SNN can be much richer.
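To make this distinction concrete, the short sketch below contrasts a perceptron's hard threshold with the continuous activations of an MLP neuron. It is a minimal illustration in plain NumPy; the input vector, weights, and bias are arbitrary values chosen for the example, not anything from the book.

```python
import numpy as np

def perceptron(x, w, b):
    # Classic perceptron: hard threshold produces a binary output.
    return 1 if np.dot(w, x) + b > 0 else 0

# Continuous activation functions used in MLP neurons.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    return np.tanh(z)

def relu(z):  # the "rectifier" function
    return np.maximum(0.0, z)

def mlp_neuron(x, w, b, act=sigmoid):
    # An MLP neuron outputs a continuous value rather than a binary state.
    return act(np.dot(w, x) + b)

x = np.array([0.5, -1.2, 3.0])   # illustrative input features
w = np.array([0.4, 0.1, -0.6])   # illustrative weights

print(perceptron(x, w, b=0.2))             # -> 0 or 1
print(mlp_neuron(x, w, 0.2, act=sigmoid))  # -> a value in (0, 1)
```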

1.2 Neural Networks in Software

1.2.1 Artificial Neural Network

Tremendous advancements occurred in the late 1980s and early 1990s for neural networks constructed in software. One powerful technique that significantly propelled the development of ANNs was the invention of backpropagation [1]. Backpropagation turned out to be very efficient and effective in training multilayer neural networks, and it was this algorithm that enabled neural networks to solve numerous real-life problems, such as image recognition [2,3], control [4,5], and prediction [6,7].
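For readers unfamiliar with the algorithm, the following is a minimal sketch of backpropagation training a one-hidden-layer network on the XOR problem. The network size, learning rate, squared-error loss, and sigmoid activations are illustrative choices for this example only, not the book's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# The XOR problem: not linearly separable, so a hidden layer is required.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # hidden layer
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)       # hidden activations
    out = sigmoid(h @ W2 + b2)     # network output
    # Backward pass: the chain rule applied layer by layer.
    d_out = (out - y) * out * (1 - out)   # squared-error gradient at output
    d_h = (d_out @ W2.T) * h * (1 - h)    # error propagated to hidden layer
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(3))  # should approach [[0], [1], [1], [0]]
```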
In the late 1990s, it was found that other machine-learning tools, such as support vector machines (SVMs) and even much simpler linear classifiers, were able to achieve comparable or even better performance on classification tasks, which were among the most important applications of neural networks at that time. In addition, it was observed that the training of neural networks often got stuck at local minima and consequently failed to converge to the global minimum. Furthermore, it was generally believed that one hidden layer was enough, as more hidden layers did not improve performance remarkably. Research interest in neural networks then started to decline in the computational-intelligence community.
Interest in neural networks was revived around 2006, when researchers demonstrated that a deep feedforward neural network could achieve outstanding classification accuracy with proper unsupervised pretraining [8,9]. Despite this success, deep neural networks were not fully recognized by the computer vision and machine learning communities until 2012, when astonishing results were achieved by AlexNet, a deep convolutional neural network (CNN) [10]. Since then, deep learning has emerged as the mainstream method for tasks such as image recognition and audio recognition.

1.2.2 Spiking Neural Network

As another important type of neural network, SNNs have not received much attention in comparison to the widely used ANNs; interest in SNNs has come mainly from the neuroscience community. Despite being less popular, many researchers believe that SNNs have a more powerful computational capability than their ANN counterparts, thanks to the spatiotemporal patterns used to carry information in SNNs. Even though SNNs are potentially more advanced, there are difficulties in harnessing their power. The dynamics of an SNN are much more complicated than those of an ANN, which makes a purely analytical approach intractable. Furthermore, it is considerably harder to implement event-driven SNNs efficiently on a conventional general-purpose processor. This is one of the main reasons that SNNs are not as popular as ANNs in the computational-intelligence community.
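To give a flavor of the dynamics involved, below is a minimal simulation of a leaky integrate-and-fire (LIF) neuron, one of the simplest and most common spiking neuron models. All constants here are illustrative assumptions for the sketch; the models discussed later in the book can be considerably richer.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, simulated with forward Euler.
dt, T = 1e-4, 0.1            # time step and total duration (s); illustrative
tau_m = 20e-3                # membrane time constant (s); illustrative
v_rest, v_th, v_reset = 0.0, 1.0, 0.0
drive = 1.5                  # effective input drive (R * I), illustrative

v = v_rest
spike_times = []
for step in range(int(T / dt)):
    # The membrane potential leaks toward rest while integrating the input.
    v += (dt / tau_m) * (v_rest - v + drive)
    if v >= v_th:                      # crossing the threshold emits a spike...
        spike_times.append(step * dt)
        v = v_reset                    # ...and the potential is reset

print(spike_times)  # the information is carried by these spike times
```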
Over the past few decades, there have been numerous efforts from both the computational-intelligence community and the neuroscience community to develop learning algorithms for SNNs. Spike-timing-dependent plasticity (STDP), which was first observed in biological experiments, was proposed as an empirically successful rule for unsupervised learning [11–14]. In a typical STDP protocol, the synaptic weight is updated according to the relative order of, and the time difference between, the presynaptic and postsynaptic spikes. Unsupervised learning is useful for discovering the underlying structure of data, yet it is not as powerful as supervised learning in many real-life applications, at least at the current stage.
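As a concrete illustration, the sketch below implements the commonly used pair-based exponential STDP window. The amplitudes and time constants are illustrative assumptions; biological measurements and hardware implementations vary.

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012,
            tau_plus=20e-3, tau_minus=20e-3):
    """Pair-based exponential STDP window (illustrative constants).

    Pre-before-post pairings potentiate the synapse; post-before-pre
    pairings depress it, with a magnitude that decays exponentially
    in the timing difference.
    """
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)    # potentiation (LTP)
    return -a_minus * math.exp(dt / tau_minus)      # depression (LTD)

w = 0.5
w += stdp_dw(t_pre=10e-3, t_post=15e-3)   # pre leads post: weight increases
w += stdp_dw(t_pre=30e-3, t_post=22e-3)   # post leads pre: weight decreases
print(round(w, 4))
```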

1.3 Need for Neuromorphic Hardware

The development of hardware-based neural networks, or neuromorphic hardware, started alongside their software counterparts. There was a period (from the late 1980s to the early 1990s) when many neuromorphic chips and hardware systems were introduced [15–18]. Later, after it became clear that the performance of neural network hardware could not keep pace with digital computers, owing to the inadequate level of integration of synapses and neurons, hardware research on neural networks took a back seat while Boolean computing advanced by leaps and bounds, leveraging transistor scaling and Moore's law. ...
