Hands-on Machine Learning with JavaScript
Solve complex computational web problems using machine learning
Burak Kanber
- 356 pages
- English
- ePUB (mobile friendly)
- Available on iOS & Android
About This Book
A definitive guide to creating an intelligent web application with the best of machine learning and JavaScript.
- Solve complex computational problems in the browser with JavaScript
- Teach your browser how to learn from rules using the power of machine learning
- Understand discoveries on web interfaces and APIs in machine learning
Who This Book Is For
This book is for you if you are a JavaScript developer who wants to implement machine learning to make applications smarter, gain insightful information from data, and enter the field of machine learning without switching to another language. A working knowledge of the JavaScript language is expected to get the most out of the book.
What You Will Learn
- Get an overview of state-of-the-art machine learning
- Understand the pre-processing of data: handling, cleaning, and preparation
- Learn mining and pattern extraction with JavaScript
- Build your own models for classification, clustering, and prediction
- Identify the most appropriate model for each type of problem
- Apply machine learning techniques to real-world applications
- Learn how JavaScript can be a powerful language for machine learning
In Detail
In over 20 years of existence, JavaScript has pushed beyond the boundaries of web evolution, with a proven presence on servers, embedded devices, Smart TVs, IoT, smart cars, and more. Today, with the added advantage of machine learning research and support from JS libraries, JavaScript makes your browsers smarter than ever, with the ability to learn patterns and reproduce them to become part of innovative products and applications.
Hands-on Machine Learning with JavaScript presents various avenues of machine learning in a practical and objective way, and helps you implement them using the JavaScript language. Predicting behaviors, analyzing sentiment, grouping data, and building neural models are some of the skills you will develop with this book.
You will learn how to train your machine learning models and work with different kinds of data. During this journey, you will come across use cases such as face detection, spam filtering, recommendation systems, character recognition, and more. Moreover, you will learn how to work with deep neural networks and guide your applications to gain insights from data. By the end of this book, you'll have gained hands-on knowledge of evaluating and implementing the right model, and of choosing from different JS libraries, such as NaturalNode, brain, harthur, and classifier, to design smarter applications.
Style and approach
This is a practical tutorial that uses hands-on examples to step through real-world applications of machine learning. Without shying away from the technical details, you will explore machine learning with JavaScript using clear and practical examples.
Classification Algorithms
- The KNN algorithm is one of the simplest classifiers, and works well when your dataset has numerical features and clustered patterns. It is similar in nature to the k-means clustering algorithm, in that it relies on plotting data points and measuring distances from point to point.
- The Naive Bayes classifier is an effective and versatile classifier based on Bayesian probability. While it can be used for numerical data, it's most commonly used in text classification problems, such as spam detection and sentiment analysis. Naive Bayes classifiers, when implemented properly, can be both fast and highly accurate for narrow domains. The Naive Bayes classifier is one of my go-to algorithms for classification.
- SVMs are, in spirit, a very advanced form of the KNN algorithm. The SVM graphs your data and attempts to find dividing lines between the categories you've labeled. Using some non-trivial mathematics, SVMs can linearize non-linear patterns, so this tool can be effective for both linear and non-linear data.
- Random forests are a relatively recent development in classification algorithms, but they are effective and versatile and therefore a go-to classifier for many researchers, myself included. Random forests build an ensemble of decision trees (another type of classifier we'll discuss later), each with a random subset of the data's features. Decision trees can handle both numerical and categorical data, they can perform both regression and classification tasks, and they also assist in feature selection, so they are becoming many researchers' first tool to grab when facing new problems.
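To make the Naive Bayes description above concrete, here is a minimal sketch of a multinomial Naive Bayes text classifier with add-one smoothing. All names (`trainNaiveBayes`, `classify`, `tokenize`) are illustrative, not an API from this book or any library:

```javascript
// Tiny multinomial Naive Bayes sketch for text classification.
// Illustrative only; real implementations add more careful tokenization.

const tokenize = text => text.toLowerCase().split(/\W+/).filter(Boolean);

function trainNaiveBayes(docs) {
  // Count word frequencies per label, plus document counts and vocabulary size
  const counts = {}, totals = {}, docCounts = {}, vocab = new Set();
  for (const { text, label } of docs) {
    docCounts[label] = (docCounts[label] || 0) + 1;
    counts[label] = counts[label] || {};
    for (const word of tokenize(text)) {
      vocab.add(word);
      counts[label][word] = (counts[label][word] || 0) + 1;
      totals[label] = (totals[label] || 0) + 1;
    }
  }
  return { counts, totals, docCounts, vocabSize: vocab.size, nDocs: docs.length };
}

function classify(model, text) {
  const { counts, totals, docCounts, vocabSize, nDocs } = model;
  let best = null, bestScore = -Infinity;
  for (const label of Object.keys(docCounts)) {
    // Log prior plus the log likelihood of each word, with add-one smoothing
    let score = Math.log(docCounts[label] / nDocs);
    for (const word of tokenize(text)) {
      score += Math.log(((counts[label][word] || 0) + 1) / (totals[label] + vocabSize));
    }
    if (score > bestScore) { bestScore = score; best = label; }
  }
  return best;
}

const model = trainNaiveBayes([
  { text: 'win money now', label: 'spam' },
  { text: 'free money win', label: 'spam' },
  { text: 'meeting schedule today', label: 'ham' },
  { text: 'project meeting notes', label: 'ham' }
]);
console.log(classify(model, 'free money')); // 'spam'
```

The classifier picks the label whose prior probability, multiplied by the probability of each word given that label, is highest; working in log space avoids numeric underflow on long documents.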
k-Nearest Neighbor
- Record all training data and their labels
- Given a new point to evaluate, generate a list of its distances to all training points
- Sort the list of distances in order of closest to farthest
- Throw out all but the k nearest distances
- Determine which label represents the majority of your k nearest neighbors; this is the result of the algorithm
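The five steps above can be sketched in a few lines of JavaScript. The function and variable names here (`knnClassify`, `euclidean`) are illustrative, not the implementation we build later in the chapter:

```javascript
// Minimal k-nearest-neighbor sketch following the steps above.

// Euclidean distance between two points (arrays of numbers)
function euclidean(a, b) {
  return Math.sqrt(a.reduce((sum, ai, i) => sum + (ai - b[i]) ** 2, 0));
}

function knnClassify(trainingData, point, k) {
  // Steps 1-2: record the distance from the new point to every training point
  const distances = trainingData.map(({ features, label }) => ({
    label,
    distance: euclidean(features, point)
  }));
  // Step 3: sort closest to farthest; step 4: keep only the k nearest
  distances.sort((a, b) => a.distance - b.distance);
  const nearest = distances.slice(0, k);
  // Step 5: majority vote among the k nearest labels
  const votes = {};
  for (const { label } of nearest) {
    votes[label] = (votes[label] || 0) + 1;
  }
  return Object.keys(votes).reduce((a, b) => (votes[a] >= votes[b] ? a : b));
}

const training = [
  { features: [1, 1], label: 'red' },
  { features: [1, 2], label: 'red' },
  { features: [8, 8], label: 'blue' },
  { features: [9, 8], label: 'blue' }
];
console.log(knnClassify(training, [2, 1], 3)); // 'red': two of its three nearest neighbors are red
```

Note that an even value of k can produce ties in the vote; choosing an odd k (for two-class problems) is a common way to avoid them.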
Building the KNN algorithm
- Create a new folder and name it Ch5-knn.
- To the folder, add the following package.json file. Note that this file is a little different from previous examples because we have added a dependency for the jimp library, which is an image processing library that we'll use in the second example:
{
  "name": "Ch5-knn",
  "version": "1.0.0",
  "description": "ML in JS Example for Chapter 5 - k-nearest-neighbor",
  "main": "src/index.js",
  "author": "Burak Kanber",
  "license": "MIT",
  "scripts": {
    "build-web": "browserify src/index.js -o dist/index.js -t [ babelify --presets [ env ] ]",
    "build-cli": "browserify src/index.js --node -o dist/ind...