Scala Machine Learning Projects

Md. Rezaul Karim

470 pages · English · ePUB (mobile friendly)
About This Book

Build powerful smart applications that use deep learning algorithms, combining numerical computing, deep learning, and functional programming.

Key Features

  • Explore machine learning techniques with prominent open source Scala libraries such as Spark ML, H2O, MXNet, Zeppelin, and DeepLearning4j
  • Solve real-world machine learning problems by delving into complex numerical computing with Scala functional programming, in a scalable and fast way
  • Cover all key aspects, such as data collection, storage, processing, analysis, and evaluation, required to build and deploy machine learning models on computing clusters using the Scala Play Framework

Book Description

Machine learning has had a huge impact on academia and industry by turning data into actionable information. Scala has seen a steady rise in adoption over the past few years, especially in the fields of data science and analytics. This book is for data scientists, data engineers, and deep learning enthusiasts who have a background in complex numerical computing and want to learn more about hands-on machine learning application development.

If you're well versed in machine learning concepts and want to expand your knowledge by delving into the practical implementation of these concepts using the power of Scala, then this book is what you need! Through 11 end-to-end projects, you will be acquainted with popular machine learning libraries such as Spark ML, H2O, DeepLearning4j, and MXNet.

At the end, you will be able to use numerical computing and functional programming to carry out complex numerical tasks to develop, build, and deploy research or commercial projects in a production-ready environment.

What you will learn

  • Apply advanced regression techniques to boost the performance of predictive models
  • Use different classification algorithms for business analytics
  • Generate trading strategies for Bitcoin and stock trading using ensemble techniques
  • Train Deep Neural Networks (DNN) using H2O and Spark ML
  • Utilize NLP to build scalable machine learning models
  • Learn how to apply reinforcement learning algorithms such as Q-learning for developing ML applications
  • Learn how to use autoencoders to develop a fraud detection application
  • Implement LSTM and CNN models using DeepLearning4j and MXNet

Who this book is for

If you want to leverage the power of both Scala and Spark to make sense of Big Data, then this book is for you. If you are well versed in machine learning concepts and want to expand your knowledge by delving into their practical implementation using the power of Scala, then this book is what you need! A strong understanding of the Scala programming language is recommended, and basic familiarity with machine learning techniques will be helpful.


Information

Year: 2018
ISBN: 9781788471473

Options Trading Using Q-learning and Scala Play Framework

As human beings, we learn from experience. We have not become so charming by accident: years of positive compliments as well as negative criticism have helped shape us into who we are today. We learn how to ride a bike by trying out different muscle movements until it just clicks. When you perform actions, you are sometimes rewarded immediately. This is what Reinforcement Learning (RL) is all about.
This chapter is all about designing a machine learning system driven by criticisms and rewards. We will see how to apply RL algorithms for a predictive model on real-life datasets.
From the trading point of view, an option is a contract that gives its owner the right to buy (call option) or sell (put option) a financial asset (underlying) at a fixed price (the strike price) at or before a fixed date (the expiry date).
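To make the contract terms concrete, here is a minimal Scala sketch of an option as a data type; the names OptionType, OptionContract, and payoff are illustrative, not classes from the book's code:

```scala
// Illustrative model of an option contract; names are hypothetical,
// not taken from the book's actual code.
sealed trait OptionType
case object Call extends OptionType // right to buy the underlying
case object Put  extends OptionType // right to sell the underlying

final case class OptionContract(
  optionType: OptionType,
  strike: Double,              // fixed price at which the holder may buy/sell
  expiry: java.time.LocalDate  // last date the option can be exercised
) {
  // Intrinsic value at exercise for a given price of the underlying asset
  def payoff(underlyingPrice: Double): Double = optionType match {
    case Call => math.max(underlyingPrice - strike, 0.0)
    case Put  => math.max(strike - underlyingPrice, 0.0)
  }
}
```

Note that payoff captures only the intrinsic value at exercise; an option's market price also depends on factors such as time to expiry and volatility.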
We will see how to develop a real-life application for such options trading using an RL algorithm called Q-learning. To be more precise, we will solve the problem of computing the best strategy in options trading: we want to trade certain types of options given some market conditions and trading data.
The IBM stock datasets will be used to design a machine learning system driven by criticisms and rewards. We will start from RL and its theoretical background so that the concept is easier to grasp. Finally, we will wrap up the whole application as a web app using Scala Play Framework.
Concisely, we will learn the following topics throughout this end-to-end project:
  • Using Q-learning—an RL algorithm
  • Options trading—what is it all about?
  • Overview of technologies
  • Implementing Q-learning for options trading
  • Wrapping up the application as a web app using the Scala Play Framework (see the controller sketch after this list)
  • Model deployment
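As a preview of the final step, the following is a minimal sketch of how a trained model might be exposed through a Play Framework controller; the controller name, route, and placeholder recommendation are hypothetical, not the book's actual code:

```scala
// Hypothetical Play controller exposing the trading model over HTTP.
import javax.inject._
import play.api.mvc._

@Singleton
class TradingController @Inject()(cc: ControllerComponents)
    extends AbstractController(cc) {

  // GET /predict?symbol=IBM — would run the trained Q-learning model
  // (not shown) and return the recommended action for the given symbol
  def predict(symbol: String): Action[AnyContent] = Action {
    val recommendation = "HOLD" // placeholder for the model's output
    Ok(s"$symbol: $recommendation")
  }
}
```

A matching line in conf/routes, such as `GET /predict controllers.TradingController.predict(symbol: String)`, would wire the endpoint up.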

Reinforcement versus supervised and unsupervised learning

Whereas supervised and unsupervised learning appear at opposite ends of the spectrum, RL exists somewhere in the middle. It is not supervised learning, because the training data comes from the algorithm deciding between exploration and exploitation. And it is not unsupervised, because the algorithm receives feedback from the environment. As long as you are in a situation where performing an action in a state produces a reward, you can use RL to discover a good sequence of actions that yields the maximum expected reward.
The goal of an RL agent is to maximize the total reward that it receives in the end. Besides the policy and the reward signal, a third main subelement is the value function. While rewards determine the immediate desirability of states, values indicate the long-term desirability of states, taking into account the states that may follow and the rewards available in those states. The value function is specified with respect to the chosen policy. During the learning phase, an agent tries actions that determine the states with the highest value, because these actions will yield the greatest cumulative reward in the end.
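To make the distinction between immediate rewards and long-term values concrete, here is a minimal sketch of the standard tabular Q-learning update, Q(s, a) ← Q(s, a) + α(r + γ max_a' Q(s', a') − Q(s, a)); the Int encoding of states and actions is purely illustrative:

```scala
import scala.collection.mutable

// Minimal tabular Q-learning; states and actions are plain Ints here
// purely for illustration.
class QTable(numActions: Int, alpha: Double = 0.1, gamma: Double = 0.9) {
  private val q = mutable.Map.empty[(Int, Int), Double].withDefaultValue(0.0)

  def value(state: Int, action: Int): Double = q((state, action))

  // Standard update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
  def update(state: Int, action: Int, reward: Double, nextState: Int): Unit = {
    val bestNext = (0 until numActions).map(a => q((nextState, a))).max
    q((state, action)) += alpha * (reward + gamma * bestNext - q((state, action)))
  }
}
```

The learned table approximates long-term value: even if a transition pays a small immediate reward, its entry grows when it leads to states with high-value follow-up actions.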

Using RL

Figure 1 shows a person making decisions to arrive at their destination. Suppose that on your drive from home to work, you always choose the same route. One day your curiosity takes over and you decide to try a different path, hoping for a shorter commute. This dilemma of trying out new routes versus sticking to the best-known route is an example of exploration versus exploitation:
Figure 1: An agent always tries to reach the destination by passing through the route
RL techniques are being used in many areas. A general idea that is being pursued right now is creating an algorithm that does not need anything apart from a description of its task. When this kind of performance is achieved, it will be applied virtually everywhere.
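A common way to resolve the exploration-versus-exploitation dilemma described above is the epsilon-greedy rule: with a small probability, try a random action; otherwise, exploit the best-known one. Here is a minimal sketch, where the value estimates for three hypothetical routes are made-up numbers:

```scala
import scala.util.Random

// Epsilon-greedy action selection: explore with probability epsilon,
// otherwise pick the action with the highest estimated value.
def epsilonGreedy(qValues: Vector[Double], epsilon: Double, rng: Random): Int =
  if (rng.nextDouble() < epsilon) rng.nextInt(qValues.length) // explore
  else qValues.indices.maxBy(qValues)                         // exploit

// Example: three candidate routes with (made-up) learned value estimates
val routeValues = Vector(4.2, 5.1, 3.7)
val chosenRoute = epsilonGreedy(routeValues, epsilon = 0.1, new Random(42))
```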

Notation, policy, and utility in RL

You may notice that RL jargon involves casting the algorithm as an agent that takes actions in situations to receive rewards. In fact, the algorithm is often referred to as an agent that interacts with the environment. You can think of it as an intelligent hardware agent that senses with sensors and interacts with the environment using its actuators. Therefore, it should not be a surprise that much of RL theory is applied in robotics. Now, to extend our discussion further, we need to know a few terminologies (a sketch after the list shows one way these terms might map to Scala types):
  • Environment: An environment is any system having states and mechanisms to transition between different states. For example, the environment for a robot is the landscape or facility it operates in.
  • Agent: An agent is an automated system that interacts with the environment.
  • State: The state of the environment or system is the set of variables or features that fully describe the environment.
  • Goal: A goal is a state that provides a higher discounted cumulative reward than any other state. A high cumulative reward prevents the best policy from being dependent on the initial state during training.
  • Action: An action defines the transition between states, where an agent is responsible for performing, or at least recommending, an action. Upon execution of an action, the agent collects a reward (or punishment) from the environment.
  • Policy: The policy defines the action to be performed and executed for any state of the environment.
  • Reward: A reward quantifies the positive or negative interaction of the agent with the environment. Rewards are essentially the training set for the learning engine.
  • Episode (also known as a trial): This defines the number of steps necessary to reach the goal state from an initial state.
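One way the terms above might be encoded as Scala types is sketched below; all of these names are illustrative rather than taken from the book's code:

```scala
// Illustrative encoding of the RL vocabulary as Scala types.
trait State
trait Action

// A policy maps every state of the environment to an action
trait Policy[S <: State, A <: Action] {
  def act(state: S): A
}

// The environment applies an action to a state and returns the
// successor state together with the collected reward (or punishment)
trait Environment[S <: State, A <: Action] {
  def step(state: S, action: A): (S, Double)
  def isGoal(state: S): Boolean
}

// An episode: follow the policy from an initial state until the goal
// is reached, accumulating rewards along the way
def runEpisode[S <: State, A <: Action](
    env: Environment[S, A], policy: Policy[S, A], start: S): Double = {
  var state = start
  var total = 0.0
  while (!env.isGoal(state)) {
    val (next, reward) = env.step(state, policy.act(state))
    total += reward
    state = next
  }
  total
}
```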
We will discuss more on policy and utility later in this section. Figure 2 demonstrates the interplay between states, actions, and rewards. If you start at state s1, you can perform action a1 to obtain a reward r(s1, a1). Arrows represent actions, and states are represented by circles:
Figure 2: An agent performing an action on a state produces a reward
A robot performs actions to change between different states. But how does it decide which action to take? Well, it is all about following a defined, concrete policy.

Policy

In RL lingo, we call a strategy a policy. The goal of RL is to discover a good strategy. One of the most common ways to do this is by observing the long-term consequences of actions in each state. The short-term consequence is easy to calculate: it is just the reward. Although performing an action yields an immediate reward, it is not always a good idea to greedily choose the action with the best reward. That is a lesson in life too, because the most immediately rewarding thing to do may not always be the most satisfying in the long run. The best possible policy is called the optimal policy, and it is often the holy grail of ...
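To see numerically why the greedy choice can lose out, compare immediate rewards against the discounted cumulative reward G = r0 + γ·r1 + γ²·r2 + ...; the two reward sequences below are made-up numbers:

```scala
// Discounted cumulative reward: G = r0 + gamma*r1 + gamma^2*r2 + ...
def discountedReturn(rewards: Seq[Double], gamma: Double): Double =
  rewards.zipWithIndex.map { case (r, t) => r * math.pow(gamma, t) }.sum

// Hypothetical example: the greedy path pays more now, the patient path
// pays more later; with gamma = 0.9 the patient path wins overall.
val greedyPath  = Seq(5.0, 1.0, 1.0) // best immediate reward
val patientPath = Seq(1.0, 4.0, 4.0)
discountedReturn(greedyPath, 0.9)  // 5.0 + 0.90 + 0.81 = 6.71
discountedReturn(patientPath, 0.9) // 1.0 + 3.60 + 3.24 = 7.84
```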
