Applied Deep Learning with Python
eBook - ePub

Applied Deep Learning with Python

Use scikit-learn, TensorFlow, and Keras to create intelligent systems and machine learning solutions

Alex Galea, Luis Capelo

334 pages · English · ePUB (mobile friendly)

About This Book

A hands-on guide to deep learning that's filled with intuitive explanations and engaging practical examples

Key Features

  • Designed to iteratively develop the skills of Python users who don't have a data science background
  • Covers the key foundational concepts you'll need to know when building deep learning systems
  • Full of step-by-step exercises and activities to help build the skills that you need for the real world

Book Description

Taking an approach that uses the latest developments in the Python ecosystem, you'll first be guided through the Jupyter ecosystem, key visualization libraries, and powerful data sanitization techniques before we train our first predictive model. We'll explore a variety of approaches to classification, such as support vector machines, random decision forests, and k-nearest neighbours, to build out your understanding before we move into more complex territory. It's okay if these terms seem overwhelming; we'll show you how to put them to work.

We'll build upon our classification coverage by taking a quick look at ethical web scraping and interactive visualizations to help you professionally gather and present your analysis. It's after this that we start building out our keystone deep learning application, one that aims to predict the future price of Bitcoin based on historical public data.

By guiding you through a trained neural network, we'll explore common deep learning network architectures (convolutional, recurrent, generative adversarial) and branch out into deep reinforcement learning before we dive into model optimization and evaluation. We'll do all of this whilst working on a production-ready web application that combines TensorFlow and Keras to produce a meaningful user-friendly result, leaving you with all the skills you need to tackle and develop your own real-world deep learning projects confidently and effectively.

What you will learn

  • Discover how you can assemble and clean your very own datasets
  • Develop a tailored machine learning classification strategy
  • Build, train and enhance your own models to solve unique problems
  • Work with production-ready frameworks like TensorFlow and Keras
  • Explain how neural networks operate in clear and simple terms
  • Understand how to deploy your predictions to the web

Who this book is for

If you're a Python programmer stepping into the world of data science, this is the ideal way to get started.


Information

Year
2018
ISBN
9781789806991
Edition
1

Data Cleaning and Advanced Machine Learning

The goal of data analytics, in general, is to uncover actionable insights that result in positive business outcomes. In the case of predictive analytics, the aim is to do this by determining the most likely future outcome of a target, based on previous trends and patterns.
The benefits of predictive analytics are not restricted to big technology companies. Any business can find ways to benefit from machine learning, given the right data.
Companies all around the world are collecting massive amounts of data and using predictive analytics to cut costs and increase profits. Some of the most prevalent examples of this are from the technology giants Google, Facebook, and Amazon, who utilize big data on a huge scale. For example, Google and Facebook serve you personalized ads based on predictive algorithms that guess what you are most likely to click on. Similarly, Amazon recommends personalized products that you are most likely to buy, given your previous purchases.
Modern predictive analytics is done with machine learning, where computer models are trained to learn patterns from data. As we saw briefly in the previous chapter, software such as scikit-learn can be used with Jupyter Notebooks to efficiently build and test machine learning models. As we will continue to see, Jupyter Notebooks are an ideal environment for doing this type of work, as we can perform ad-hoc testing and analysis, and easily save the results for reference later.
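The scikit-learn workflow described above can be sketched in a few lines. This is a minimal, illustrative example using a built-in dataset and a random forest classifier; the specific dataset and model are stand-ins, not the chapter's employee retention example.

```python
# Minimal sketch of the build-and-test loop scikit-learn enables
# in a Jupyter Notebook: split data, fit a model, score it.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))
```

In a notebook, each of these steps would typically live in its own cell, so models can be re-fit and re-scored interactively.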
In this chapter, we will again take a hands-on approach by running through various examples and activities in a Jupyter Notebook. Where we saw a couple of examples of machine learning in the previous chapter, here we'll take a much slower and more thoughtful approach. Using an employee retention problem as our overarching example for the chapter, we will discuss how to approach predictive analytics, what things to consider when preparing the data for modeling, and how to implement and compare a variety of models using Jupyter Notebooks.
By the end of this chapter, you will be able to:
  • Plan a machine learning classification strategy
  • Preprocess data to prepare it for machine learning
  • Train classification models
  • Use validation curves to tune model parameters
  • Use dimensionality reduction to enhance model performance
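As a preview of the validation-curve objective listed above, here is a hedged sketch of tuning a single model parameter with scikit-learn's `validation_curve`. The dataset and the parameter range are illustrative choices, not taken from the chapter.

```python
# Sweep a tree-depth parameter and pick the value with the best
# mean cross-validated score.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import validation_curve
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
depths = np.arange(1, 8)
train_scores, val_scores = validation_curve(
    DecisionTreeClassifier(random_state=0), X, y,
    param_name="max_depth", param_range=depths, cv=5)

# Each row of val_scores holds the CV scores for one depth value
best_depth = depths[val_scores.mean(axis=1).argmax()]
print(best_depth)
```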

Preparing to Train a Predictive Model

Here, we will cover the preparation required to train a predictive model. Although not as technically glamorous as training the models themselves, this step should not be taken lightly. It's very important to ensure you have a good plan before proceeding with the details of building and training a reliable model. Furthermore, once you've decided on the right plan, there are technical steps in preparing the data for modeling that should not be overlooked.
We must be careful not to go so deep into the weeds of technical tasks that we lose sight of the goal. Technical tasks include things that require programming skills, for example, constructing visualizations, querying databases, and validating predictive models. It's easy to spend hours trying to implement a specific feature or get the plots looking just right. Doing this sort of thing is certainly beneficial to our programming skills, but we should not forget to ask ourselves if it's really worth our time with respect to the current project.
Also, keep in mind that Jupyter Notebooks are particularly well-suited for this step, as we can use them to document our plan, for example, by writing rough notes about the data or a list of models we are interested in training. Before starting to train models, it's good practice to take this a step further and write out a well-structured plan to follow. Not only will this help you stay on track as you build and test the models, but it will allow others to understand what you're doing when they see your work.
After discussing the preparation, we will also cover another step in preparing to train the predictive model, which is cleaning the dataset. This is another thing that Jupyter Notebooks are well-suited for, as they offer an ideal testing ground for performing dataset transformations and keeping track of the exact changes. The data transformations required for cleaning raw data can quickly become intricate and convoluted; therefore, it's important to keep track of your work. As discussed in the first chapter, tools other than Jupyter Notebooks just don't offer very good options for doing this efficiently.
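The kinds of transformations involved in cleaning raw data can be sketched with pandas. The column names, the comma-formatted salary strings, and the median-imputation strategy below are all invented for illustration; real cleaning steps depend on the dataset at hand.

```python
# Illustrative cleaning steps: impute missing numbers, convert
# string-formatted values to numeric, and normalize categories.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "age": [34, np.nan, 29, 41],
    "salary": ["50,000", "62,000", None, "48,000"],
    "department": ["sales", "Sales", "IT", "it"],
})

df["age"] = df["age"].fillna(df["age"].median())       # impute missing ages
df["salary"] = (df["salary"].str.replace(",", "")
                            .astype(float))            # strings -> numbers
df["department"] = df["department"].str.lower()        # normalize categories
print(df.isnull().sum().sum())                         # remaining missing values
```

Keeping each transformation in its own notebook cell, with a note on why it was applied, is exactly the kind of record-keeping the text recommends.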

Determining a Plan for Predictive Analytics

When formulating a plan for doing predictive modeling, one should start by considering stakeholder needs. A perfect model will be useless if it doesn't solve a relevant problem. Planning a strategy around business needs ensures that a successful model will lead to actionable insights.
Although it may be possible in principle to solve many business problems, the ability to deliver a solution will always depend on the availability of the necessary data. Therefore, it's important to consider the business needs in the context of the available data sources. When data is plentiful, this constraint matters little, but as the amount of available data shrinks, so does the scope of problems that can be solved.
These ideas can be formed into a standard process for determining a predictive analytics plan, which goes as follows:
  1. Look at the available data to understand the range of realistically solvable business problems. At this stage, it might be too early to think about the exact problems that can be solved. Make sure you understand the data fields available and the time frames they apply to.
  2. Determine the business needs by speaking with key stakeholders. Seek out a problem where the solution will lead to actionable business decisions.
  3. Assess the data for suitability by considering whether it offers a sufficiently large and diverse feature space. Also, take into account the condition of the data: are there large chunks of missing values for certain variables or time ranges?
Steps 2 and 3 should be repeated until a realistic plan has taken shape. At this point, you will already have a good idea of what the model input will be and what you might expect as output.
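The data-condition check in step 3 can be done quickly with pandas. The DataFrame below is a hypothetical stand-in (its columns loosely echo an employee retention dataset); the point is the per-column missing-value summary.

```python
# Fraction of missing values per column, sorted worst-first,
# as a quick assessment of data condition.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "satisfaction": [0.4, np.nan, 0.7, np.nan, 0.9],
    "tenure_years": [3, 5, np.nan, 2, 7],
    "left_company": [1, 0, 0, 1, 0],
})

missing = df.isnull().mean().sort_values(ascending=False)
print(missing)
```

A column with a large missing fraction, or gaps concentrated in one time range, may need to be imputed or dropped before modeling.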
Once we've identified a problem that can be solved with machine learning, along with the appropriate data sources, we should answer the following questions to lay a framework for the project. Doing this will help us determine which types of machine learning models we can use to solve the problem:
  • Is the training data labeled with the target variable we want to predict?
If the answer is yes, then we will be doing supervised machine learning. Supervised learning has many real-world use cases, whereas it's much rarer to find business cases for doing predictive analytics on unlabeled data.
If the answer is no, then you are using unlabeled data and hence doing unsupervised machine learning.
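In code, this first question reduces to checking whether the dataset contains the target variable. The column names here are hypothetical, chosen to match the chapter's employee retention theme.

```python
# Decide the learning setting based on whether the target label
# is present in the data.
import pandas as pd

df = pd.DataFrame({
    "satisfaction": [0.4, 0.8, 0.7],
    "left_company": [1, 0, 0],   # the labeled target
})

target = "left_company"
if target in df.columns:
    task = "supervised"      # predict the labeled target
else:
    task = "unsupervised"    # e.g. cluster to find structure
print(task)
```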
