Learning in Energy-Efficient Neuromorphic Computing: Algorithm and Architecture Co-Design

Nan Zheng, Pinaki Mazumder

Explains current co-design and co-optimization methodologies for building hardware neural networks and algorithms for machine learning applications

This book focuses on how to build energy-efficient hardware for neural networks with learning capabilities, providing co-design and co-optimization methodologies for building hardware neural networks that can learn. Presenting a complete picture from high-level algorithms to low-level implementation details, Learning in Energy-Efficient Neuromorphic Computing: Algorithm and Architecture Co-Design also covers many fundamentals and essentials of neural networks (e.g., deep learning), as well as the hardware implementation of neural networks.

The book begins with an overview of neural networks. It then discusses algorithms for utilizing and training rate-based artificial neural networks. Next comes an introduction to the various options for executing neural networks, ranging from general-purpose processors to specialized hardware and from digital accelerators to analog accelerators. A design example of building an energy-efficient accelerator for adaptive dynamic programming with neural networks is also presented. An examination of fundamental concepts and popular learning algorithms for spiking neural networks follows, along with a look at hardware for spiking neural networks. A subsequent chapter offers readers three design examples (two based on conventional CMOS and one on emerging nanotechnology) that implement the learning algorithm from the previous chapter. The book concludes with an outlook on the future of neural network hardware.

  • Includes a cross-layer survey of hardware accelerators for neuromorphic algorithms
  • Covers the co-design of architecture and algorithms with emerging devices for much-improved computing efficiency
  • Focuses on the co-design of algorithms and hardware, which is especially critical for using emerging devices, such as traditional memristors or diffusive memristors, for neuromorphic computing

Learning in Energy-Efficient Neuromorphic Computing: Algorithm and Architecture Co-Design is an ideal resource for researchers, scientists, software engineers, and hardware engineers dealing with ever-tighter requirements on power consumption and response time. It is also excellent for teaching and training undergraduate and graduate students on the latest generation of neural networks with powerful learning capabilities.


Book information

Year: 2019
ISBN: 9781119507406
Edition: 1
Category: Informatique

1 Overview

Learning never exhausts the mind.
Leonardo da Vinci

1.1 History of Neural Networks

Even though modern von Neumann architecture‐based processors can carry out logic and scientific computations at extremely high speed, they perform poorly on many tasks that come naturally to human beings, such as image recognition, video motion detection, and natural language processing. Aiming to emulate the capabilities of the human brain, a non‐Boolean paradigm of computation, called the neural network, has been developed since the early 1950s, evolving slowly over many decades. So far, at least three major forms of neural networks have been presented in the literature, as shown in Figure 1.1.
Figure 1.1 The development of neural networks over time. One of the earliest neural networks, the perceptron, is similar to a linear classifier. The type of neural network widely used nowadays is referred to as an artificial neural network in this book; this kind of network uses real numbers to carry information. The spiking neural network is another type of neural network that has been gaining popularity in recent years; it uses spikes to represent information.
The simplest neural network is the perceptron, where hand‐crafted features are employed as input to the network. Outputs of the perceptron are binary numbers obtained through hard thresholding, so the perceptron can be conveniently used for classification problems whose inputs are linearly separable. The second type of neural network is often called a multilayer perceptron (MLP). Nevertheless, the "perceptrons" in an MLP differ from the simple perceptrons of the earlier network: in an MLP, a non‐linear activation function is associated with each neuron, with popular choices being the sigmoid function, the hyperbolic tangent function, and the rectifier function. The output of each neuron is therefore a continuous variable instead of a binary state. The MLP is widely adopted by the machine learning community, as it can be easily implemented on general‐purpose processors. This type of neural network is so popular that the phrase "artificial neural network" (ANN) is often used to denote it exclusively, even though, strictly speaking, the term covers any neural network other than biological ones. ANNs are also the backbone of deep learning, a widely popular mode of learning. A less well‐known type of neural network is the spiking neural network (SNN). Compared to the previous two types, an SNN more closely resembles a biological neural network in the sense that spikes are used to transport information. It is believed that SNNs are more powerful and advanced than ANNs, as the dynamics of an SNN are much more complicated and the information carried by an SNN can be much richer.
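
To make the contrast concrete, below is a minimal Python sketch (with hypothetical weights and inputs) of the two neuron types just described: a perceptron producing a binary output through hard thresholding, and an MLP neuron producing a continuous output through a sigmoid activation.

```python
import numpy as np

def perceptron(x, w, b):
    """Classic perceptron: binary output via hard thresholding."""
    return 1 if np.dot(w, x) + b > 0 else 0

def mlp_neuron(x, w, b):
    """MLP neuron: continuous output via a non-linear activation (sigmoid)."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

# Hypothetical 2-D input, weights, and bias, purely for illustration.
x = np.array([0.5, -1.2])
w = np.array([0.8, 0.4])
b = 0.1

print(perceptron(x, w, b))   # 1 or 0: a hard class label
print(mlp_neuron(x, w, b))   # a value in (0, 1): a graded activation
```
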

1.2 Neural Networks in Software

1.2.1 Artificial Neural Network

Tremendous advances occurred in the late 1980s and early 1990s for neural networks constructed in software. One powerful technique that significantly propelled the development of ANNs was the invention of backpropagation [1], which turned out to be very efficient and effective at training multilayer neural networks. It was the backpropagation algorithm that enabled neural networks to solve numerous real‐life problems, such as image recognition [2,3], control [4,5], and prediction [6,7].
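
As a minimal illustration of the idea (not the specific formulation of [1]), the sketch below trains a tiny one-hidden-layer sigmoid network on the XOR problem with plain backpropagation and gradient descent; the network size, learning rate, and iteration count are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic task a single perceptron cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 2 inputs -> 4 hidden units -> 1 output, with biases.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

for _ in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: apply the chain rule layer by layer.
    d_out = (out - y) * out * (1 - out)   # error gradient at the output
    d_h = (d_out @ W2.T) * h * (1 - h)    # error gradient at the hidden layer

    # Gradient-descent weight updates (learning rate 0.5, arbitrary).
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```
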
In the late 1990s, it was found that other machine‐learning tools, such as support vector machines (SVMs) and even much simpler linear classifiers, could achieve comparable or even better performance on classification tasks, which were among the most important applications of neural networks at that time. In addition, it was observed that the training of neural networks often became stuck in local minima and consequently failed to converge to the global minimum. Furthermore, it was generally believed that one hidden layer was enough, as additional hidden layers did not improve performance remarkably. Research interest in neural networks then started to decline in the computational intelligence community.
Interest in neural networks was revived around 2006, when researchers demonstrated that a deep feedforward neural network could achieve outstanding classification accuracy with proper unsupervised pretraining [8,9]. Despite this success, the deep neural network was not fully recognized by the computer vision and machine learning communities until 2012, when astonishing results were achieved by AlexNet, a deep convolutional neural network (CNN) [10]. Since then, deep learning has emerged as the mainstream method for tasks such as image recognition and audio recognition.

1.2.2 Spiking Neural Network

As another important type of neural network, SNNs have not received much attention in comparison to the widely used ANNs; interest in SNNs has mainly come from the neuroscience community. Despite being less popular, many researchers believe that SNNs have a more powerful computational capability than their ANN counterparts, thanks to the spatiotemporal patterns used to carry information in SNNs. Even though SNNs are potentially more advanced, there are difficulties in harnessing their power. The dynamics of an SNN are much more complicated than those of an ANN, which makes a purely analytical approach intractable. Furthermore, it is considerably harder to implement event‐driven SNNs efficiently on a conventional general‐purpose processor. This is one of the main reasons that SNNs are not as popular as ANNs in the computational intelligence community.
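
To give a flavor of these dynamics, the sketch below simulates a leaky integrate-and-fire (LIF) neuron, one of the simplest and most widely used spiking neuron models; the choice of model and all constants here are illustrative assumptions, not specifics from this book.

```python
import numpy as np

# Leaky integrate-and-fire neuron: the membrane potential leaks toward
# rest while integrating input; a spike fires on crossing the threshold.
dt, tau = 1.0, 20.0                      # time step and membrane time constant (ms)
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0
v, spike_times = v_rest, []

# Step input: silence for 20 ms, then a constant drive (arbitrary units).
current = np.concatenate([np.zeros(20), 0.08 * np.ones(180)])

for t, i_in in enumerate(current):
    v += (dt / tau) * (v_rest - v) + i_in   # leak plus input integration
    if v >= v_thresh:                       # threshold crossing: emit a spike
        spike_times.append(t)
        v = v_reset                         # reset after firing

print(spike_times)  # the neuron fires periodically once the drive is on
```
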
Over the past decades, there have been numerous efforts from both the computational intelligence community and the neuroscience community to develop learning algorithms for SNNs. Spike‐timing‐dependent plasticity (STDP), which was first observed in biological experiments, was proposed as an empirically successful rule for unsupervised learning [11–14]. In a typical STDP protocol, the synaptic weight is updated according to the relative order of, and the time difference between, the presynaptic and postsynaptic spikes. Unsupervised learning is useful for discovering the underlying structure of data, yet it is not as powerful as supervised learning in many real‐life applications, at least at the current stage.
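
A common pair-based formulation of this rule (one of several in the literature; the parameters below are illustrative, not taken from [11–14]) potentiates a synapse when the presynaptic spike precedes the postsynaptic one and depresses it otherwise, with a magnitude that decays exponentially in the timing difference.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight update for dt = t_post - t_pre (ms).

    Pre-before-post (dt > 0) yields potentiation; post-before-pre
    (dt < 0) yields depression. All parameters are illustrative.
    """
    if dt > 0:
        return a_plus * np.exp(-dt / tau_plus)    # long-term potentiation
    if dt < 0:
        return -a_minus * np.exp(dt / tau_minus)  # long-term depression
    return 0.0

for dt in (-40, -10, 10, 40):
    print(dt, round(stdp_dw(dt), 5))  # weight change shrinks as |dt| grows
```
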

1.3 Need for Neuromorphic Hardware

The development of hardware‐based neural networks, or neuromorphic hardware, began alongside that of their software counterparts. There was a period, from the late 1980s to the early 1990s, when many neuromorphic chips and hardware systems were introduced [15–18]. Later, after it was found that the performance of neural network hardware could hardly keep pace with digital computers, owing to the inadequate level of integration of synapses and neurons, hardware research on neural networks took a back seat while Boolean computing advanced by leaps and bounds, leveraging transistor scaling and Moore's Law. ...
