This book takes you on a fantastic journey to discover the attributes of big data using Apache Hive.

About This Book
- Grasp the skills needed to write efficient Hive queries to analyze Big Data
- Discover how Hive can coexist and work with other tools within the Hadoop ecosystem
- Use practical, example-oriented scenarios to cover all the newly released features of Apache Hive 2.3.3

Who This Book Is For
If you are a data analyst, developer, or simply someone who wants to quickly get started with Hive to explore and analyze Big Data in Hadoop, this is the book for you. Since Hive is an SQL-like language, some previous experience with SQL will be useful to get the most out of this book.

What You Will Learn
- Create and set up the Hive environment
- Discover how to use Hive's definition language to describe data
- Discover interesting data by joining and filtering datasets in Hive
- Transform data by using Hive sorting, ordering, and functions
- Aggregate and sample data in different ways
- Boost Hive query performance and enhance data security in Hive
- Customize Hive to your needs by using user-defined functions and integrate it with other tools

In Detail
In this book, we prepare you for your journey into big data by first introducing you to the big data domain, along with the process of setting up and getting familiar with your Hive working environment. Next, the book guides you through discovering and transforming the values of big data with the help of examples. It also hones your skills in using the Hive language in an efficient manner. Toward the end, the book focuses on advanced topics, such as performance, security, and extensions in Hive, which will guide you on exciting adventures on this worthwhile big data journey. By the end of the book, you will be familiar with Hive and able to work efficiently to find solutions to big data problems.

Style and Approach
This book takes a practical approach that will get you familiarized with Apache Hive and how to use it efficiently to find solutions to your big data problems. It covers crucial topics, such as performance and data security, to help you make the most of the Hive working environment.

eBook - ePub
Apache Hive Essentials
Essential techniques to help you process, and get unique insights from, big data, 2nd Edition
- 210 pages
- English
- ePUB (mobile friendly)
- Available on iOS & Android
Data Definition and Description
This chapter introduces the basic data types, data definition language, and schema in Hive to describe data. It also covers best practices to describe data correctly and effectively by using internal or external tables, partitions, buckets, and views. In this chapter, we will cover the following topics:
- Understanding data types
- Data type conversions
- Data definition language
- Databases
- Tables
- Partitions
- Buckets
- Views
Understanding data types
Hive data types are categorized into two groups: primitive and complex. STRING and INT are the most commonly used primitive types, and they are supported by most HQL functions. The details of primitive types are as follows:
| Primitive type | Description | Example |
| --- | --- | --- |
| TINYINT | It has 1 byte, from -128 to 127. The postfix is Y. It is used for a small range of numbers. | 10Y |
| SMALLINT | It has 2 bytes, from -32,768 to 32,767. The postfix is S. It is used for a regular short-range number. | 10S |
| INT | It has 4 bytes, from -2,147,483,648 to 2,147,483,647. | 10 |
| BIGINT | It has 8 bytes, from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. The postfix is L. | 100L |
| FLOAT | This is a 4-byte single-precision floating-point number, from 1.40129846432481707e-45 to 3.40282346638528860e+38 (positive or negative). Scientific notation is not yet supported. It stores very close approximations of numeric values. | 1.2345679 |
| DOUBLE | This is an 8-byte double-precision floating-point number, from 4.94065645841246544e-324d to 1.79769313486231570e+308d (positive or negative). Scientific notation is not yet supported. It stores very close approximations of numeric values. | 1.2345678901234567 |
| BINARY | This was introduced in Hive 0.8.0 and only supports CAST to STRING and vice versa. | 1011 |
| BOOLEAN | This is a TRUE or FALSE value. | TRUE |
| STRING | This includes characters expressed with either single quotes (') or double quotes ("). Hive uses C-style escaping within the strings. The maximum size is around 2 GB. | 'Books' or "Books" |
| CHAR | This is available starting with Hive 0.13.0. Most UDFs work for this type as of Hive 0.14.0. The maximum length is fixed at 255. | 'US' or "US" |
| VARCHAR | This is available starting with Hive 0.12.0. Most UDFs work for this type as of Hive 0.14.0. The maximum length is fixed at 65,535. If a string value being converted/assigned to a varchar value exceeds the specified length, the string is silently truncated. | 'Books' or "Books" |
| DATE | This describes a specific year, month, and day in the format YYYY-MM-DD. It is available starting with Hive 0.12.0. The range of dates is from 0000-01-01 to 9999-12-31. | 2013-01-01 |
| TIMESTAMP | This describes a specific year, month, day, hour, minute, second, and optional fraction of a second in the format YYYY-MM-DD HH:MM:SS[.fff...]. It is available starting with Hive 0.8.0. | 2013-01-01 12:00:01.345 |
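The numeric postfixes and conversions in the table above can be tried directly in HQL. The following is a minimal sketch (the truncation behavior shown in the comments follows the table's descriptions):

```sql
-- Literal postfixes select the numeric type explicitly
SELECT 10Y  AS tiny_val,   -- TINYINT
       10S  AS small_val,  -- SMALLINT
       100L AS big_val;    -- BIGINT

-- Explicit conversions with CAST
SELECT CAST('2013-01-01' AS DATE) AS d,          -- string to DATE
       CAST(1.99 AS INT)          AS truncated,  -- fractional part dropped
       CAST('Books' AS CHAR(2))   AS c;          -- silently truncated to 'Bo'
```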
Hive has three main complex types: ARRAY, MAP, and STRUCT. These data types are built on top of the primitive data types. ARRAY and MAP are similar to that in Java. STRUCT is a record type, which may contain a set of any type of fields. Complex types allow the nesting of types. The details of complex types are as follows:
| Complex type | Description | Example |
| --- | --- | --- |
| ARRAY | This is a list of items of the same type, such as [val1, val2, and so on]. You can access a value using array_name[index], for example, fruit[0]="apple". The index starts from 0. | ["apple","orange","mango"] |
| MAP | This is a set of key-value pairs, such as {key1, val1, key2, val2, and so on}. You can access a value using map_name[key], for example, fruit[1]="apple". | {1: "apple",2: "orange"} |
| STRUCT | This is a user-defined structure of any type of field, such as {val1, val2, val3, and so on}. By default, STRUCT field names are col1, col2, and so on. You can access a value using struct_name.column_name, for example, fruit.col1=1. | {1, "apple"} |
| NAMED STRUCT | This is a user-defined structure of any number of typed fields, such as {name1, val1, name2, val2, and so on}. You can access a value using struct_name.column_name, for example, fruit.apple="gala". | {"apple":"gala","weight kg":1} |
| UNION | This is a structure that holds exactly one of its specified data types at a time. It is available starting with Hive 0.7.0. It is not commonly used. | {2:["apple","orange"]} |
For MAP, all keys must share one type and all values another, whereas STRUCT is more flexible. STRUCT is more like a table, whereas MAP is more like an ARRAY with a customized index.
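Complex values can also be constructed inline with Hive's built-in array, map, struct, and named_struct functions and accessed as described above. A quick sketch (the column aliases are illustrative):

```sql
SELECT array('apple','orange','mango')[0] AS first_fruit,  -- 'apple'
       map(1,'apple',2,'orange')[1]       AS keyed_fruit,  -- 'apple'
       struct(1,'apple').col1             AS struct_field, -- 1 (default field names col1, col2, ...)
       named_struct('apple','gala').apple AS named_field;  -- 'gala'
```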
The following is a short exercise for all the commonly-used data types. The details of the CREATE, LOAD, and SELECT statements will be introduced in later chapters. Let's take a look at the exercise:
- Prepare the data as follows:
$vi employee.txt
Michael|Montreal,Toronto|Male,30|DB:80|Product:Developer^DLead
Will|Montreal|Male,35|Perl:85|Product:Lead,Test:Lead
Shelley|New York|Female,27|Python:80|Test:Lead,COE:Architect
Lucy|Vancouver|Female,57|Sales:89,HR:94|Sales:Lead
- Log in to beeline with the JDBC URL:
$beeline -u "jdbc:hive2://localhost:10000/default"
- Create a table using various data types (> indicates the beeline interactive mode):
> CREATE TABLE employee (
> name STRING,
> work_place ARRAY<STRING>,
> gender_age STRUCT<gender:STRING,age:INT>,
> skills_score MAP<STRING,INT>,
> depart_title MAP<STRING,ARRAY<STRING>>
> )
> ROW FORMAT DELIMITED
> FIELDS TERMINATED BY '|'
> COLLECTION ITEMS TERMINATED BY ','
> MAP KEYS TERMINATED BY ':'
> STORED AS TEXTFILE;
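Assuming the CREATE TABLE statement above is completed with ROW FORMAT delimiters matching employee.txt (| for fields, , for collection items, and : for map keys), the data can then be loaded and queried as follows; the local file path is illustrative:

```sql
-- Load the prepared file into the employee table
LOAD DATA LOCAL INPATH '/home/hadoop/employee.txt'
OVERWRITE INTO TABLE employee;

-- Access each complex-type column
SELECT name,
       work_place[0]      AS first_city,  -- ARRAY element by index
       gender_age.gender  AS gender,      -- STRUCT field by name
       skills_score['DB'] AS db_score     -- MAP value by key
FROM employee;
```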
Table of contents
- Title Page
- Copyright and Credits
- Dedication
- Packt Upsell
- Contributors
- Preface
- Overview of Big Data and Hive
- Setting Up the Hive Environment
- Data Definition and Description
- Data Correlation and Scope
- Data Manipulation
- Data Aggregation and Sampling
- Performance Considerations
- Extensibility Considerations
- Security Considerations
- Working with Other Tools
- Other Books You May Enjoy
Apache Hive Essentials by Dayong Du is available in PDF and ePUB formats, alongside other books in Computer Science & Data Processing.