
- 336 pages
- English
- ePUB (mobile friendly)
Grokking Deep Learning
About this book
Summary
Grokking Deep Learning teaches you to build deep learning neural networks from scratch! In his engaging style, seasoned deep learning expert Andrew Trask shows you the science under the hood, so you grok for yourself every detail of training neural networks. Purchase of the print book includes a free eBook in PDF, Kindle, and ePub formats from Manning Publications.
About the Technology
Deep learning, a branch of artificial intelligence, teaches computers to learn by using neural networks, technology inspired by the human brain. Online text translation, self-driving cars, personalized product recommendations, and virtual voice assistants are just a few of the exciting modern advancements possible thanks to deep learning.
About the Book
Using only Python and its math-supporting library, NumPy, you'll train your own neural networks to see and understand images, translate text into different languages, and even write like Shakespeare! When you're done, you'll be fully prepared to move on to mastering deep learning frameworks.
What's inside
- The science behind deep learning
- Building and training your own neural networks
- Privacy concepts, including federated learning
- Tips for continuing your pursuit of deep learning
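To give a flavor of the from-scratch approach the blurb describes, here is a minimal sketch of forward propagation and a gradient descent update in plain NumPy. This is an illustration, not code from the book; the input values, initial weights, target, and learning rate are all made-up assumptions.

```python
import numpy as np

# Toy example: learn a weight vector so that inputs . weights ~= target,
# using only forward propagation and repeated gradient descent updates.
inputs = np.array([0.5, 0.8, 0.2])   # illustrative input values
weights = np.array([0.1, 0.1, 0.1])  # illustrative starting weights
target = 0.7
alpha = 0.3                          # learning rate

for _ in range(200):
    pred = inputs.dot(weights)          # forward propagation
    delta = pred - target               # how far off the prediction is
    weights -= alpha * delta * inputs   # gradient descent step

print(round(float(inputs.dot(weights)), 3))  # prediction converges to ~0.7
```

The same pattern, scaled up with more layers and nonlinearities, is what the book builds out chapter by chapter.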
About the Reader
For readers with high school-level math and intermediate programming skills.
About the Author
Andrew Trask is a PhD student at Oxford University and a research scientist at DeepMind. Previously, Andrew was a researcher and analytics product manager at Digital Reasoning, where he trained the world's largest artificial neural network and helped guide the analytics roadmap for the Synthesys cognitive computing platform.
Table of Contents
- Introducing deep learning: why you should learn it
- Fundamental concepts: how do machines learn?
- Introduction to neural prediction: forward propagation
- Introduction to neural learning: gradient descent
- Learning multiple weights at a time: generalizing gradient descent
- Building your first deep neural network: introduction to backpropagation
- How to picture neural networks: in your head and on paper
- Learning signal and ignoring noise: introduction to regularization and batching
- Modeling probabilities and nonlinearities: activation functions
- Neural learning about edges and corners: intro to convolutional neural networks
- Neural networks that understand language: king - man + woman == ?
- Neural networks that write like Shakespeare: recurrent layers for variable-length data
- Introducing automatic optimization: let's build a deep learning framework
- Learning to write like Shakespeare: long short-term memory
- Deep learning on unseen data: introducing federated learning
- Where to go from here: a brief guide
Chapter 1. Introducing deep learning: why you should learn it
- Why you should learn deep learning
- Why you should read this book
- What you need to get started
"Do not worry about your difficulties in Mathematics. I can assure you mine are still greater." (Albert Einstein)
Welcome to Grokking Deep Learning
You're about to learn some of the most valuable skills of the century!
Why you should learn deep learning
It's a powerful tool for the incremental automation of intelligence
Deep learning has the potential for significant automation of skilled labor
It's fun and creative. You'll discover much about what it is to be human by trying to simulate intelligence and creativity
Will this be difficult to learn?
How hard will you have to work before there's a "fun" payoff?
Why you should read this book
It has a uniquely low barrier to entry
It will help you understand what's inside a framework (Torch, TensorFlow, and so on)
All math-related material will be backed by intuitive analogies
"Everything should be made as simple as possible, but not simpler." (Attributed to Albert Einstein)
Everything after the introduction chapters is "project" based
What you need to get started
Install Jupyter Notebook and the NumPy Python library
Pass high school mathematics
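The setup step can be checked with a short script once the tools are installed (typically `pip install numpy notebook`, though environments vary). This sketch is illustrative; any recent NumPy version works for the book's examples.

```python
import numpy as np

# Sanity check that NumPy is installed and working:
# a small matrix-vector product of the kind used throughout the book.
weights = np.array([[0.1, 0.2],
                    [0.3, 0.4]])
inputs = np.array([1.0, 2.0])

print(np.__version__)    # installed NumPy version
print(weights @ inputs)  # expect [0.5 1.1]
```

If both lines print without errors, the environment is ready.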
Table of contents
- Copyright
- Brief Table of Contents
- Table of Contents
- Preface
- Acknowledgments
- About this book
- About the author
- Chapter 1. Introducing deep learning: why you should learn it
- Chapter 2. Fundamental concepts: how do machines learn?
- Chapter 3. Introduction to neural prediction: forward propagation
- Chapter 4. Introduction to neural learning: gradient descent
- Chapter 5. Learning multiple weights at a time: generalizing gradient descent
- Chapter 6. Building your first deep neural network: introduction to backpropagation
- Chapter 7. How to picture neural networks: in your head and on paper
- Chapter 8. Learning signal and ignoring noise: introduction to regularization and batching
- Chapter 9. Modeling probabilities and nonlinearities: activation functions
- Chapter 10. Neural learning about edges and corners: intro to convolutional neural networks
- Chapter 11. Neural networks that understand language: king - man + woman == ?
- Chapter 12. Neural networks that write like Shakespeare: recurrent layers for variable-length data
- Chapter 13. Introducing automatic optimization: let's build a deep learning framework
- Chapter 14. Learning to write like Shakespeare: long short-term memory
- Chapter 15. Deep learning on unseen data: introducing federated learning
- Chapter 16. Where to go from here: a brief guide
- Index