
Statistical Mechanics of Learning
About this book
Learning is one of the things that humans do naturally, and understanding the process has always been a challenge. Today this challenge has another dimension, as we try to build machines that can learn and undertake tasks such as data mining, image processing and pattern recognition. Artificial neural networks provide a simple framework in which learning from examples can be described and understood. This book covers the contributions made to this subject over the last decade by researchers applying the techniques of statistical mechanics. The authors provide a coherent account of important concepts and techniques that are currently found only scattered in papers, supplement this with background material in mathematics and physics, and include many examples and exercises, making a book suitable for use with courses, for self-study, or as a handy reference.
Table of contents
- Cover
- Half-title
- Title
- Copyright
- Contents
- Preface
- 1 Getting Started
- 2 Perceptron Learning – Basics
- 3 A Choice of Learning Rules
- 4 Augmented Statistical Mechanics Formulation
- 5 Noisy Teachers
- 6 The Storage Problem
- 7 Discontinuous Learning
- 8 Unsupervised Learning
- 9 On-line Learning
- 10 Making Contact with Statistics
- 11 A Bird’s Eye View: Multifractals
- 12 Multilayer Networks
- 13 On-line Learning in Multilayer Networks
- 14 What Else?
- Appendix 1 Basic Mathematics
- Appendix 2 The Gardner Analysis
- Appendix 3 Convergence of the Perceptron Rule
- Appendix 4 Stability of the Replica Symmetric Saddle Point
- Appendix 5 One-step Replica Symmetry Breaking
- Appendix 6 The Cavity Approach
- Appendix 7 The VC theorem
- Bibliography
- Index