Learn ARCore - Fundamentals of Google ARCore
Micheal Lanham
- 274 pages
- English
About This Book
Create next-generation Augmented Reality and Mixed Reality apps with the latest version of Google ARCore
Key Features
- Harness the power of Google's new augmented reality (AR) platform, ARCore, to build cutting-edge AR apps
- Learn core concepts of Environmental Understanding, Immersive Computing, and Motion Tracking with ARCore
- Extend your application by combining ARCore with OpenGL, machine learning, and more
Book Description
Are you a mobile or web developer who wants to create immersive Augmented Reality apps with the latest Google ARCore platform? If so, this book will help you jump right into developing with ARCore and walk you step by step through building your first AR app.
This book will teach you how to implement the core features of ARCore starting from the fundamentals of 3D rendering to more advanced concepts such as lighting, shaders, Machine Learning, and others.
We'll begin with the basics of building a project on three platforms: web, Android, and Unity. Next, we'll go through the ARCore concepts of motion tracking, environmental understanding, and light estimation. For each core concept, you'll work on a practical project to use and extend the ARCore feature, from learning the basics of 3D rendering and lighting to exploring more advanced concepts.
You'll write custom shaders to light virtual objects in AR, then build a neural network to recognize the environment and explore even grander applications by using ARCore in mixed reality. At the end of the book, you'll see how to implement motion tracking and environment learning, create animations and sounds, generate virtual characters, and simulate them on your screen.
What you will learn
- Build and deploy your Augmented Reality app to the Android, Web, and Unity platforms
- Implement ARCore to identify and visualize objects as point clouds, planes, surfaces, and meshes
- Explore advanced concepts of environmental understanding using Google ARCore and OpenGL ES with Java
- Create light levels from ARCore and create a C# script to watch and propagate lighting changes in a scene
- Develop graphics shaders that react to changes in lighting and map the environment to place objects in Unity/C#
- Integrate motion tracking with the Web ARCore API and Google Street View to create a combined AR/VR experience
Who this book is for
This book is for web and mobile developers with broad programming knowledge of Java, JavaScript, or C# who want to develop Augmented Reality applications with Google ARCore. No prior experience with AR development, 3D rendering, or 3D math is needed to follow this book.
Recognizing the Environment
- Introduction to ML
- Deep reinforcement learning
- Programming a neural network
- Training a neural network
- TensorFlow
Introduction to ML
- Target detection: Targets have been used in AR for some time; they were the primary tracking and reference points for many AR apps before ARCore.
- Image recognition: This spawns a whole set of sub-applications, all of which we will cover in detail later.
- Object detection: Being able to detect an object in 3D from point cloud data is no easy feat, but it has been done and is getting better.
- Face detection: Detecting a person's face in an image has been around for years and has been used to great effect in many apps.
- Person detection: Detecting people or motion has great possibilities. Think of the Kinect coming to AR.
- Hand/Gesture detection: Not to be confused with touch gestures. This is where we detect a user's hand motions or gestures in front of a device's camera.
- Pose detection on object: Related to object detection, but now we also detect the position and orientation of the object.
- Light source detection: Detecting light sources lets us place realistic lights in a scene, making virtual object rendering more convincing. We already looked at the importance of lighting in Chapter 7, Light Estimation.
- Environment detection: Recognizing the environment a user has moved into has great application in mapping buildings or other locations where GPS is unavailable, which applies to most internal spaces.
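Later in this chapter we program and train a neural network. As a taste of what that involves, here is a minimal sketch, not taken from the book, of a single sigmoid neuron trained by plain gradient descent; the AND-gate data and all function names are illustrative assumptions, but the update rule is the same idea scaled down from the networks used for recognition tasks above.

```python
import math
import random

def sigmoid(x):
    # Squash any input into the (0, 1) range
    return 1.0 / (1.0 + math.exp(-x))

def train_neuron(samples, epochs=5000, lr=0.5):
    """Train a single sigmoid neuron with plain gradient descent.

    samples: list of ((x1, x2), target) pairs.
    Returns the learned weights and bias.
    """
    random.seed(42)
    w1, w2, b = random.random(), random.random(), random.random()
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = sigmoid(w1 * x1 + w2 * x2 + b)
            # Gradient of the squared error, back through the sigmoid
            grad = (out - target) * out * (1.0 - out)
            w1 -= lr * grad * x1
            w2 -= lr * grad * x2
            b -= lr * grad
    return w1, w2, b

# Teach the neuron the logical AND function (a toy "detection" task)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_neuron(data)
for (x1, x2), _ in data:
    print(x1, x2, round(sigmoid(w1 * x1 + w2 * x2 + b)))
```

Real detection networks replace the two inputs with thousands of pixels and the single neuron with many layers, but training still follows this same loop: forward pass, error gradient, weight update.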
The waiter asks, "What will you have?"
The algorithm says, "What's everyone else having?"
- Unknown
| Toolset | Pros/Cons | Targets/Image | Object/Pose | Face | Person | Hand | Light | Environment |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Vuforia | Mature and easy to use. Requires internet connectivity. | Yes | Yes/Paid | | | | | |
| XZIMG | Face and image/target tracking supported for Unity and other platforms. | Yes | | Yes | | | | |
| ARToolkit | Mature open source platform for image tracking and feature detection. | Yes | | | | | | |
| EasyAR | Pro license adds object and feature tracking. | Yes | Yes/Paid | | | | | |
| Google Face Detection API | Low-level Android API. | | | Yes | | | | |
| OpenCV | A mature low-level API for Android; a commercial version is ported to Unity. Still requires low-level knowledge. | Yes | Yes | Yes | Yes | Yes | Coming | Coming |
| Google TensorFlow | Still in its infancy but quickly becoming the platform standard for CNNs. Low-level and advanced ML knowledge required. | Yes | Yes | Yes | Yes | Yes | Coming | Coming |
| Google ARCore | Currently identifies planes, feature points, and light. | | | | | | Yes | Yes |