Scala Machine Learning Projects
eBook - ePub

Scala Machine Learning Projects

Md. Rezaul Karim

  1. 470 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS and Android

About This Book

Build powerful smart applications using deep learning algorithms, combining numerical computing, deep learning, and functional programming.

Key Features

  • Explore machine learning techniques with prominent open source Scala libraries such as Spark ML, H2O, MXNet, Zeppelin, and DeepLearning4j
  • Solve real-world machine learning problems by combining complex numerical computing with Scala's functional programming in a scalable, high-performance way
  • Cover all key aspects, such as collecting, storing, processing, analyzing, and evaluating data, required to build and deploy machine learning models on computing clusters using the Scala Play framework

Book Description

Machine learning has had a huge impact on academia and industry by turning data into actionable information. Scala has seen a steady rise in adoption over the past few years, especially in the fields of data science and analytics. This book is for data scientists, data engineers, and deep learning enthusiasts who have a background in complex numerical computing and want hands-on experience of machine learning application development.

If you're well versed in machine learning concepts and want to expand your knowledge by delving into the practical implementation of these concepts using the power of Scala, then this book is what you need! Through 11 end-to-end projects, you will be acquainted with popular machine learning libraries such as Spark ML, H2O, DeepLearning4j, and MXNet.

At the end, you will be able to use numerical computing and functional programming to carry out complex numerical tasks to develop, build, and deploy research or commercial projects in a production-ready environment.

What you will learn

  • Apply advanced regression techniques to boost the performance of predictive models
  • Use different classification algorithms for business analytics
  • Generate trading strategies for Bitcoin and stock trading using ensemble techniques
  • Train Deep Neural Networks (DNN) using H2O and Spark ML
  • Utilize NLP to build scalable machine learning models
  • Learn how to apply reinforcement learning algorithms such as Q-learning for developing ML applications
  • Learn how to use autoencoders to develop a fraud detection application
  • Implement LSTM and CNN models using DeepLearning4j and MXNet

Who this book is for

If you want to leverage the power of both Scala and Spark to make sense of Big Data, then this book is for you. If you are well versed in machine learning concepts and want to expand your knowledge by delving into their practical implementation using the power of Scala, then this book is what you need! A strong understanding of the Scala programming language is recommended. Basic familiarity with machine learning techniques will also be helpful.


Information

Year
2018
ISBN
9781788471473

Options Trading Using Q-learning and Scala Play Framework

As human beings, we learn from experience. We have not become so charming by accident; years of positive compliments as well as negative criticism have all helped shape us into who we are today. We learn how to ride a bike by trying out different muscle movements until it just clicks. When you perform actions, you are sometimes rewarded immediately. This feedback loop is the essence of Reinforcement Learning (RL).
This chapter is all about designing a machine learning system driven by criticisms and rewards. We will see how to apply RL algorithms for a predictive model on real-life datasets.
From the trading point of view, an option is a contract that gives its owner the right to buy (call option) or sell (put option) a financial asset (underlying) at a fixed price (the strike price) at or before a fixed date (the expiry date).
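The payoff side of that definition can be made concrete in a few lines of Scala. The sketch below is our own illustration (the names `OptionContract` and `payoff` are hypothetical, not the book's code); it computes the intrinsic payoff of a call or put at expiry:

```scala
// Illustrative sketch: intrinsic payoff of an option at expiry.
// Names and structure are our own, not the chapter's actual code.
sealed trait OptionType
case object Call extends OptionType
case object Put extends OptionType

final case class OptionContract(kind: OptionType, strike: Double)

object OptionContract {
  // Payoff at expiry for underlying price `spot`:
  //   call: max(spot - strike, 0); put: max(strike - spot, 0)
  def payoff(c: OptionContract, spot: Double): Double = c.kind match {
    case Call => math.max(spot - c.strike, 0.0)
    case Put  => math.max(c.strike - spot, 0.0)
  }
}
```

For example, a call with strike 100 on an underlying trading at 110 pays 10 at expiry, while the corresponding put pays nothing.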
We will see how to develop a real-life application for such options trading using an RL algorithm called Q-learning. To be more precise, we will solve the problem of computing the best strategy in options trading: we want to trade certain types of options given some market conditions and trading data.
The IBM stock datasets will be used to design a machine learning system driven by criticisms and rewards. We will start from RL and its theoretical background so that the concept is easier to grasp. Finally, we will wrap up the whole application as a web app using Scala Play Framework.
Concisely, we will learn the following topics throughout this end-to-end project:
  • Using Q-learning—an RL algorithm
  • Options trading—what is it all about?
  • Overview of technologies
  • Implementing Q-learning for options trading
  • Wrapping up the application as a web app using Scala Play Framework
  • Model deployment

Reinforcement versus supervised and unsupervised learning

Whereas supervised and unsupervised learning appear at opposite ends of the spectrum, RL exists somewhere in the middle. It is not supervised learning, because the training data comes from the algorithm deciding between exploration and exploitation. In addition, it is not unsupervised, because the algorithm receives feedback from the environment. As long as you are in a situation where performing an action in a state produces a reward, you can use RL to discover a sequence of actions that yields the maximum expected reward.
The goal of an RL agent is to maximize the total reward that it receives in the end. The third main subelement is the value function. While rewards determine the immediate desirability of states, values indicate the long-term desirability of states, taking into account the states that may follow and the rewards available in those states. The value function is specified with respect to the chosen policy. During the learning phase, an agent tries actions that determine the states with the highest value, because these actions will yield the highest cumulative reward in the end.
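This long-term value is exactly what Q-learning estimates. A minimal tabular sketch of the update rule, in our own Scala (the class name, the learning rate `alpha`, and the discount factor `gamma` are illustrative choices, not taken from the chapter's implementation):

```scala
// Tabular Q-learning update:
//   Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
// States and actions are plain Ints here purely for illustration.
class QTable(numStates: Int, numActions: Int,
             alpha: Double = 0.1, gamma: Double = 0.9) {
  private val q = Array.fill(numStates, numActions)(0.0)

  def value(s: Int, a: Int): Double = q(s)(a)

  // One update after observing (state, action, reward, nextState).
  def update(s: Int, a: Int, reward: Double, sNext: Int): Unit = {
    val bestNext = q(sNext).max
    q(s)(a) += alpha * (reward + gamma * bestNext - q(s)(a))
  }
}
```

With `alpha = 0.1` and all values initialized to zero, observing a reward of 1.0 moves the corresponding entry to 0.1; repeated updates let the table converge toward the long-term values the text describes.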

Using RL

Figure 1 shows a person making decisions to arrive at their destination. For example, suppose that on your drive from home to work you always choose the same route. One day your curiosity takes over and you decide to try a different path, hoping for a shorter commute. This dilemma of trying out new routes versus sticking to the best-known route is an example of exploration versus exploitation:
Figure 1: An agent always tries to reach the destination by passing through the route
RL techniques are being used in many areas. A general idea that is being pursued right now is creating an algorithm that does not need anything apart from a description of its task. When this kind of performance is achieved, it will be applied virtually everywhere.
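The commute dilemma above is usually resolved with an epsilon-greedy rule: with probability epsilon, pick a random action (explore); otherwise pick the action with the highest estimated value (exploit). A sketch, assuming a simple array of estimated action values (the function name and parameters are our own illustration):

```scala
import scala.util.Random

// Epsilon-greedy action selection over estimated action values.
// With probability `epsilon`, explore (random action index);
// otherwise exploit the action with the highest estimated value.
def epsilonGreedy(values: Array[Double], epsilon: Double, rng: Random): Int =
  if (rng.nextDouble() < epsilon) rng.nextInt(values.length) // explore
  else values.indices.maxBy(values(_))                       // exploit
```

Setting `epsilon = 0` always exploits the best-known route; a small positive value keeps occasionally trying new ones, which is what allows better routes to be discovered at all.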

Notation, policy, and utility in RL

You may notice that RL jargon personifies the algorithm as taking actions in situations to receive rewards. In fact, the algorithm is often referred to as an agent that interacts with the environment. You can think of it as an intelligent hardware agent that senses the environment with sensors and acts on it using actuators. Therefore, it should not be a surprise that much of RL theory is applied in robotics. Now, to extend our discussion further, we need to know a few terms:
  • Environment: An environment is any system having states and mechanisms to transition between different states. For example, the environment for a robot is the landscape or facility it operates in.
  • Agent: An agent is an automated system that interacts with the environment.
  • State: The state of the environment or system is the set of variables or features that fully describe the environment.
  • Goal: A goal is a state that provides a higher discounted cumulative reward than any other state. A high cumulative reward prevents the best policy from being dependent on the initial state during training.
  • Action: An action defines the transition between states, where an agent is responsible for performing, or at least recommending, an action. Upon execution of an action, the agent collects a reward (or punishment) from the environment.
  • Policy: The policy defines the action to be performed and executed for any state of the environment.
  • Reward: A reward quantifies the positive or negative interaction of the agent with the environment. Rewards are essentially the training set for the learning engine.
  • Episode (also known as a trial): This defines the number of steps necessary to reach the goal state from an initial state.
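These terms map naturally onto Scala types. The sketch below is a hypothetical design of our own (not the chapter's actual classes), showing how environment, state, action, reward, policy, and episode fit together:

```scala
// Illustrative type-level sketch of the RL vocabulary above;
// the chapter's real code differs, this only shows how the pieces relate.
trait Environment[S, A] {
  // Performing action `a` in state `s` yields a reward and a next state.
  def step(s: S, a: A): (Double, S)
  def isGoal(s: S): Boolean
}

// A policy maps each state to the action to perform.
type Policy[S, A] = S => A

// One episode: follow the policy until the goal (or a step limit) is
// reached, returning the cumulative reward collected along the way.
def runEpisode[S, A](env: Environment[S, A], policy: Policy[S, A],
                     start: S, maxSteps: Int = 1000): Double = {
  var s = start
  var total = 0.0
  var steps = 0
  while (!env.isGoal(s) && steps < maxSteps) {
    val (r, next) = env.step(s, policy(s))
    total += r; s = next; steps += 1
  }
  total
}
```

For instance, in a toy "line world" where each step costs a reward of -1 and state 3 is the goal, an episode starting at state 0 accumulates a total reward of -3.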
We will discuss more on policy and utility later in this section. Figure 2 demonstrates the interplay between states, actions, and rewards. If you start at state s1, you can perform action a1 to obtain a reward r (s1, a1). Arrows represent actions, and states are represented by circles:
Figure 2: An agent performing an action on a state produces a reward
A robot performs actions to change between different states. But how does it decide which action to take? Well, it is all about using a different or concrete policy.

Policy

In RL lingo, a strategy is called a policy. The goal of RL is to discover a good policy. One of the most common ways to find one is by observing the long-term consequences of actions in each state. The short-term consequence is easy to calculate: it is just the reward. Although performing an action yields an immediate reward, it is not always a good idea to greedily choose the action with the best reward. That is a lesson in life too, because the most immediately rewarding choice may not always be the most satisfying in the long run. The best possible policy is called the optimal policy, and it is often the holy grail of ...
