
The Principles of Deep Learning Theory
An Effective Theory Approach to Understanding Neural Networks
About this book
This textbook establishes a theoretical framework for understanding deep learning models of practical relevance. With an approach that borrows from theoretical physics, Roberts and Yaida provide clear and pedagogical explanations of how realistic deep neural networks actually work. To make results from the theoretical forefront accessible, the authors eschew the subject's traditional emphasis on intimidating formality without sacrificing accuracy. Straightforward and approachable, this volume balances detailed first-principle derivations of novel results with insight and intuition for theorists and practitioners alike. This self-contained textbook is ideal for students and researchers interested in artificial intelligence with minimal prerequisites of linear algebra, calculus, and informal probability theory, and it can easily fill a semester-long course on deep learning theory. For the first time, the exciting practical advances in modern artificial intelligence capabilities can be matched with a set of effective principles, providing a timeless blueprint for theoretical research in deep learning.
Table of contents
- Cover
- Half-title
- Endorsements
- Title page
- Copyright information
- Contents
- Preface
- 0 Initialization
- 1 Pretraining
- 2 Neural Networks
- 3 Effective Theory of Deep Linear Networks at Initialization
- 4 RG Flow of Preactivations
- 5 Effective Theory of Preactivations at Initialization
- 6 Bayesian Learning
- 7 Gradient-Based Learning
- 8 RG Flow of the Neural Tangent Kernel
- 9 Effective Theory of the NTK at Initialization
- 10 Kernel Learning
- 11 Representation Learning
- ∞ The End of Training
- Epilogue ε: Model Complexity from the Macroscopic Perspective
- A Information in Deep Learning
- B Residual Learning
- References
- Index