Intelligent Mobile Projects with TensorFlow
eBook - ePub

Build 10+ Artificial Intelligence apps using TensorFlow Mobile and Lite for iOS, Android, and Raspberry Pi

  1. 404 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS & Android

About this book

Create Deep Learning and Reinforcement Learning apps for multiple platforms with TensorFlow

About This Book
  • Build TensorFlow-powered AI applications for mobile and embedded devices
  • Learn modern AI topics such as computer vision, NLP, and deep reinforcement learning
  • Get practical insights and exclusive working code not available in the TensorFlow documentation

Who This Book Is For
If you're an iOS/Android developer interested in building and retraining others' TensorFlow models and running them in your mobile apps, or if you're a TensorFlow developer who wants to run your new TensorFlow models on mobile devices, this book is for you. You'll also benefit from this book if you're interested in TensorFlow Lite, Core ML, or TensorFlow on Raspberry Pi.

What You Will Learn
  • Classify images with transfer learning
  • Detect objects and their locations
  • Transform pictures with amazing art styles
  • Understand simple speech commands
  • Describe images in natural language
  • Recognize drawings with Convolutional Neural Networks and Long Short-Term Memory
  • Predict stock prices with Recurrent Neural Networks in TensorFlow and Keras
  • Generate and enhance images with generative adversarial networks
  • Build an AlphaZero-like mobile game app in TensorFlow and Keras
  • Use TensorFlow Lite and Core ML on mobile
  • Develop TensorFlow apps on Raspberry Pi that can move, see, listen, speak, and learn

In Detail
As a developer, you always need to keep an eye out and be ready for what will be trending soon, while also focusing on what's trending currently. So, what's better than learning about the integration of the best of both worlds, the present and the future? Artificial Intelligence (AI) is widely regarded as the next big thing after mobile, and Google's TensorFlow is the leading open source framework for machine learning, the hottest branch of AI.
This book covers more than 10 complete iOS, Android, and Raspberry Pi apps powered by TensorFlow and built from scratch, running all kinds of cool TensorFlow models offline on-device: from computer vision, speech, and language processing to generative adversarial networks and AlphaZero-like deep reinforcement learning. You'll learn how to use or retrain existing TensorFlow models, build your own models, and develop intelligent mobile apps running those TensorFlow models. You'll learn how to quickly build such apps with step-by-step tutorials and how to avoid many pitfalls in the process with lots of hard-earned troubleshooting tips.

Style and approach
This book takes a practical, project-based approach to teach the specifics of mobile development with TensorFlow. Using a reader-friendly approach, it provides detailed instructions and also discusses the broader context of the material covered.

You can access Intelligent Mobile Projects with TensorFlow by Jeff Tang in PDF and/or ePUB format, as well as other popular books in Computer Science & Artificial Intelligence (AI) & Semantics.

Building an AlphaZero-like Mobile Game App

Although the ever-increasing popularity of modern Artificial Intelligence (AI) was essentially sparked by the breakthrough of deep learning in 2012, the historic events of Google DeepMind's AlphaGo beating Lee Sedol, the 18-time world champion of GO, 4-1 in March 2016, and then beating Ke Jie, the then #1-ranked GO player, 3-0 in May 2017, contributed in large part to making AI a household term. Due to the complexity of the game of GO, it was widely considered impossible, or at least a decade away, for a computer program to beat top GO players.
After the match between AlphaGo and Ke Jie in May 2017, Google retired AlphaGo; DeepMind, the startup Google acquired for its pioneering deep reinforcement learning technologies and the developer of AlphaGo, decided to focus its AI research on other areas. Then, interestingly, in October 2017, DeepMind published another paper on GO, Mastering the Game of GO without Human Knowledge (https://deepmind.com/research/publications/mastering-game-go-without-human-knowledge), which describes an improved algorithm, called AlphaGo Zero, that learns to play GO solely by self-play reinforcement learning, with no reliance on any human expert knowledge, such as the large number of professional GO games that AlphaGo used to train its model. Amazingly, AlphaGo Zero completely defeated AlphaGo, which had humbled the world's best human GO players only a few months earlier, by a score of 100-0!
It turns out this is just one step toward Google's more ambitious goal of applying and improving the AI techniques behind AlphaGo in other domains. In December 2017, DeepMind published yet another paper, Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm (https://arxiv.org/pdf/1712.01815.pdf), which generalized the AlphaGo Zero program into a single algorithm, called AlphaZero. Starting from random play with no domain knowledge except the game rules, the algorithm quickly learned to play Chess and Shogi from scratch and, within 24 hours, reached a superhuman level, defeating world-champion programs.
In this chapter, we'll take you on a tour of the latest and coolest in AlphaZero, showing you how to build and train an AlphaZero-like model to play a simple but fun game called Connect 4 (https://en.wikipedia.org/wiki/Connect_Four) in TensorFlow and Keras, the popular high-level deep learning library we used in Chapter 8, Predicting Stock Price with RNN. We'll also cover how to use the trained AlphaZero-like model to get a trained expert policy to guide the gameplay on mobile, with the source code of complete iOS and Android apps that play the Connect 4 game using the model.
In summary, we'll cover the following topics in this chapter:
  • AlphaZero – how does it work?
  • Building and training an AlphaZero-like model for Connect 4
  • Using the model in iOS to play Connect 4
  • Using the model in Android to play Connect 4

AlphaZero – how does it work?

The AlphaZero algorithm consists of three main components:
  • A deep convolutional neural network, which takes the board position (or state) as input and outputs both a value, the predicted game result from that position, and a policy, a list of probabilities for each possible move from the input board state.
  • A general-purpose reinforcement learning algorithm, which learns via self-play from scratch with no specific domain knowledge except the game rules. The deep neural network's parameters are learned by self-play reinforcement learning to minimize the loss between the predicted value and the actual self-play game result, and maximize the similarity between the predicted policy and the search probabilities, which come from the following algorithm.
  • A general-purpose (domain-independent) Monte-Carlo Tree Search (MCTS) algorithm, which simulates games of self-play from start to end, selecting each move during the simulation by considering the predicted value and policy probabilities returned from the deep neural network, as well as how frequently a node has been visited. Occasionally selecting a node with a low visit count is called exploration in reinforcement learning, versus taking the move with a high predicted value and policy, which is called exploitation. A good balance between exploration and exploitation leads to better results.
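To make the exploration-exploitation trade-off in the last bullet concrete, here is a minimal Python sketch of the PUCT selection rule that AlphaZero-style MCTS implementations typically use to pick which child node to visit next. The function names and the dict-based node representation are illustrative assumptions, not the book's code:

```python
import math

def puct_score(q, p, n_parent, n_child, c_puct=1.0):
    """PUCT score used by AlphaZero-style MCTS to rank child nodes.

    q        -- mean value of the child from previous simulations (exploitation)
    p        -- prior probability of the move, from the network's policy head
    n_parent -- visit count of the parent node
    n_child  -- visit count of the child node
    """
    # Exploration bonus: large for moves with a high prior and few visits,
    # shrinking as the child gets visited more often.
    u = c_puct * p * math.sqrt(n_parent) / (1 + n_child)
    return q + u

def select_child(children, c_puct=1.0):
    """Pick the child with the highest PUCT score.

    children is a list of dicts with keys 'q', 'p', and 'n' (visit count);
    the parent visit count is taken as the sum of the child visit counts.
    """
    n_parent = sum(c["n"] for c in children) or 1
    return max(children,
               key=lambda c: puct_score(c["q"], c["p"], n_parent, c["n"], c_puct))
```

Note how an unvisited move with a reasonable prior can outscore a heavily visited one even when their values are equal; this is exactly the exploration behavior described above.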
Reinforcement learning has a long history, dating back to the 1960s, when the term was first used in the engineering literature. But the breakthrough came in 2013, when DeepMind combined reinforcement learning with deep learning and developed deep reinforcement learning apps that learned to play Atari games from scratch, with raw pixels as input, and were soon able to beat humans. Unlike supervised learning, which requires labeled data for training, as we have seen in many of the models we built or used in previous chapters, reinforcement learning uses a trial-and-error method to improve: an agent interacts with an environment and receives a reward (positive or negative) for every action it takes in every state. In the example of AlphaZero playing Chess, the reward comes only after the game is over: +1 for winning, -1 for losing, and 0 for a draw. The reinforcement learning algorithm in AlphaZero uses gradient descent on the loss we mentioned before to update the parameters of the deep neural network, which acts as a universal function approximator that learns and encodes the gameplay expertise.
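The loss described above, squared error on the value plus cross-entropy between the policy and the MCTS search probabilities plus L2 weight regularization, can be sketched in plain NumPy as follows. The function and parameter names are illustrative; a real implementation would express this as per-head Keras losses on a two-output model rather than computing it by hand:

```python
import numpy as np

def alphazero_loss(z, v, pi, p, theta, c=1e-4):
    """Combined AlphaZero training loss for one position (a NumPy sketch).

    z     -- actual game outcome from self-play (+1 win, -1 loss, 0 draw)
    v     -- value predicted by the network for the position
    pi    -- MCTS search probabilities (the training target for the policy)
    p     -- move probabilities from the network's policy head
    theta -- flattened network parameters, for L2 regularization
    """
    value_loss = (z - v) ** 2                     # squared error on the outcome
    policy_loss = -np.sum(pi * np.log(p + 1e-8))  # cross-entropy vs. search probs
    l2 = c * np.sum(theta ** 2)                   # weight regularization
    return value_loss + policy_loss + l2
```

Minimizing this loss simultaneously pulls the value head toward the actual self-play results and the policy head toward the (stronger) MCTS search probabilities, which is how the network improves from one self-play iteration to the next.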
The result of the learning or training process can be a policy generated by the deep neural network that says what action should be taken on any state, or a value function that maps each state and each possible action from that state to a long-term reward.
If the policy learned by the deep neural network using self-play reinforcement learning were ideal, we might not need the program to perform any MCTS during gameplay; it could simply always choose the move with the maximum probability. But in complicated games such as Chess or GO, a perfect policy can't be generated, so MCTS must work together with the trained deep network to guide the search for the best possible action in each game state.
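To illustrate the difference between simply taking the most probable move and using the MCTS search results, here is a small, hypothetical helper that converts MCTS visit counts into a move choice: greedily for competitive play, or by temperature-weighted sampling for self-play exploration. The function name and signature are assumptions for illustration, not the book's code:

```python
import numpy as np

def choose_move(visit_counts, temperature=1.0):
    """Pick a move index from MCTS visit counts, AlphaZero style.

    With temperature near 0 this is greedy (pick the most-visited move,
    as used in evaluation or competitive play); with temperature 1 it
    samples proportionally to visit counts, adding exploration during
    self-play training.
    """
    counts = np.asarray(visit_counts, dtype=np.float64)
    if temperature < 1e-3:               # effectively greedy
        return int(np.argmax(counts))
    probs = counts ** (1.0 / temperature)
    probs /= probs.sum()                 # normalize into a distribution
    return int(np.random.choice(len(counts), p=probs))
```

Using visit counts rather than the raw policy output is the key point: the counts reflect the deeper MCTS search guided by the network, not just the network's one-shot guess.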
If you're not familiar with reinforcement learning or MCTS, there's lots of information about them on the internet. Consider checking out Richard Sutton and Andrew Barto's classic book, Reinforcement Learning: An Introduction, which is publicly available at http://incompleteideas.net/book/the-book-2nd.html. You can also watch the reinforcement learning course videos by David Silver, the technical lead for AlphaGo at DeepMind, on YouTube (search "reinforcement learning David Silver"). A fun and useful toolkit for reinforcement learning is OpenAI Gym (https://gym.openai.com). In the last chapter of the book, we'll go deeper into reinforcement learning and OpenAI Gym. For MCTS, check out its Wiki page, https://en.wikipedia.org/wiki/Monte_Carlo_tree_search, as well as this blog: http://tim.hibal.org/blog/alpha-zero-how-and-why-it-works.
In the next section, we'll take a look at a Keras implementation, with TensorFlow as the backend, of the AlphaZero algorithm, with the goal of building and training a model using the algorithm to play Connect 4. You'll see what the model architec...

Table of contents

  1. Title Page
  2. Copyright and Credits
  3. Dedication
  4. Packt Upsell
  5. Foreword
  6. Contributors
  7. Preface
  8. Getting Started with Mobile TensorFlow
  9. Classifying Images with Transfer Learning
  10. Detecting Objects and Their Locations
  11. Transforming Pictures with Amazing Art Styles
  12. Understanding Simple Speech Commands
  13. Describing Images in Natural Language
  14. Recognizing Drawing with CNN and LSTM
  15. Predicting Stock Price with RNN
  16. Generating and Enhancing Images with GAN
  17. Building an AlphaZero-like Mobile Game App
  18. Using TensorFlow Lite and Core ML on Mobile
  19. Developing TensorFlow Apps on Raspberry Pi
  20. Other Books You May Enjoy