Deep Learning through Sparse and Low-Rank Modeling
eBook - ePub

Deep Learning through Sparse and Low-Rank Modeling

  1. 296 pages
  2. English

About this book

Deep Learning through Sparse Representation and Low-Rank Modeling bridges classical sparse and low-rank models, which emphasize problem-specific interpretability, with recent deep network models that have enabled larger learning capacity and better utilization of big data. It shows how the toolkit of deep learning is closely tied to sparse/low-rank methods and algorithms, providing a rich variety of theoretical and analytic tools to guide the design and interpretation of deep learning models. The development of the theory and models is supported by a wide variety of applications in computer vision, machine learning, signal processing, and data mining. This book will be highly useful for researchers, graduate students, and practitioners working in the fields of computer vision, machine learning, signal processing, optimization, and statistics.

  • Combines classical sparse and low-rank models and algorithms with the latest advances in deep learning networks
  • Shows how the structure and algorithms of sparse and low-rank methods improve the performance and interpretability of deep learning models
  • Provides tactics on how to build and apply customized deep learning models for various applications

Deep Learning through Sparse and Low-Rank Modeling is edited by Zhangyang Wang, Yun Fu, and Thomas S. Huang.


Chapter 1

Introduction

Zhangyang Wang⁎; Ding Liu†    ⁎Department of Computer Science and Engineering, Texas A&M University, College Station, TX, United States
†Beckman Institute for Advanced Science and Technology, Urbana, IL, United States

Abstract

Deep learning has achieved prevailing success across a wide range of machine learning and computer vision tasks. Meanwhile, sparsity and low-rankness have long been popular regularizers in classical machine learning. This section gives a brief introduction to the basics of deep learning, and then focuses on its inherent connections to the concepts of sparsity and low-rankness.

Keywords

Sparsity; Low rank; Deep learning

1.1 Basics of Deep Learning

Machine learning enables computers to learn from data without being explicitly programmed. However, classical machine learning algorithms often find it challenging to extract semantic features directly from raw data, e.g., due to the well-known "semantic gap" [1], which calls for assistance from domain experts to hand-craft well-engineered feature representations on which the machine learning models can operate more effectively. In contrast, the recently popular deep learning relies on multilayer neural networks to derive semantically meaningful representations, composing multiple simple features to represent a sophisticated concept. Deep learning thus requires fewer hand-engineered features and less expert knowledge. Taking image classification as an example [2], a deep learning-based image classification system represents an object by gradually extracting edges, textures, and structures from lower- to middle-level hidden layers, and these representations become more and more associated with the target semantic concept as the model grows deeper. Driven by the emergence of big data and hardware acceleration, the intricate structure of raw data can be captured by representations at ever higher and more abstract levels, giving deep learning the power to solve complicated, even traditionally intractable, problems. Deep learning has achieved tremendous success in visual object recognition [2–5], face recognition and verification [6,7], object detection [8–11], image restoration and enhancement [12–17], clustering [18], emotion recognition [19], aesthetics and style recognition [20–23], scene understanding [24,25], speech recognition [26], machine translation [27], image synthesis [28], and even playing Go [29] and poker [30].
A basic neural network is composed of a set of perceptrons (artificial neurons), each of which maps its inputs to an output value with a simple activation function. Among recent deep neural network architectures, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are the two main streams, differing in their connectivity patterns. CNNs deploy convolution operations on hidden layers for weight sharing and parameter reduction; they can extract local information from grid-like input data, and have mainly shown success in computer vision and image processing, with many popular instances such as LeNet [31], AlexNet [2], VGG [32], GoogLeNet [33], and ResNet [34]. RNNs are dedicated to processing sequential input data of variable length, producing an output at each time step; the hidden state at each time step is computed from the current input and the hidden state at the previous time step. To av...
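The three building blocks described above (a perceptron's weighted sum plus activation, a CNN's shared convolution kernel, and an RNN's recurrent hidden state) can be illustrated with a short NumPy sketch. All weights, dimensions, and the choice of sigmoid/tanh activations here are illustrative assumptions for exposition, not the formulation used later in the book:

```python
import numpy as np

def perceptron(x, w, b):
    """A single artificial neuron: a weighted sum of the inputs
    passed through a simple activation function (here, a sigmoid)."""
    z = np.dot(w, x) + b
    return 1.0 / (1.0 + np.exp(-z))

def conv2d_valid(img, kernel):
    """2D convolution ('valid' padding): the SAME kernel weights are
    applied at every spatial location, which is the weight sharing
    that reduces CNN parameter counts."""
    H, W = img.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kH, j:j + kW] * kernel)
    return out

def rnn_step(x_t, h_prev, W_x, W_h, b):
    """One recurrent step: the hidden state at time t is computed
    from the current input and the previous hidden state."""
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

def rnn_forward(xs, W_x, W_h, b):
    """Process a variable-length sequence, producing one hidden
    state (and hence one output) per time step."""
    h = np.zeros(W_h.shape[0])
    states = []
    for x_t in xs:
        h = rnn_step(x_t, h, W_x, W_h, b)
        states.append(h)
    return states

# Illustrative dimensions: 3-dim inputs, 4-dim hidden state, length-5 sequence.
rng = np.random.default_rng(0)
W_x = rng.normal(size=(4, 3))
W_h = rng.normal(size=(4, 4))
b = np.zeros(4)
seq = [rng.normal(size=3) for _ in range(5)]
states = rnn_forward(seq, W_x, W_h, b)
print(len(states), states[0].shape)  # one hidden state per time step
```

Note how `conv2d_valid` reuses one small kernel across the whole image, whereas a fully connected layer would need a separate weight per input position, and how `rnn_forward` threads the hidden state `h` through time so the sequence length need not be fixed in advance.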

Table of contents

  1. Cover image
  2. Title page
  3. Table of Contents
  4. Copyright
  5. Contributors
  6. About the Editors
  7. Preface
  8. Acknowledgments
  9. Chapter 1: Introduction
  10. Chapter 2: Bi-Level Sparse Coding: A Hyperspectral Image Classification Example
  11. Chapter 3: Deep ℓ0 Encoders: A Model Unfolding Example
  12. Chapter 4: Single Image Super-Resolution: From Sparse Coding to Deep Learning
  13. Chapter 5: From Bi-Level Sparse Clustering to Deep Clustering
  14. Chapter 6: Signal Processing
  15. Chapter 7: Dimensionality Reduction
  16. Chapter 8: Action Recognition
  17. Chapter 9: Style Recognition and Kinship Understanding
  18. Chapter 10: Image Dehazing: Improved Techniques
  19. Chapter 11: Biomedical Image Analytics: Automated Lung Cancer Diagnosis
  20. Index