Apache Hive Essentials
Essential techniques to help you process, and get unique insights from, big data, 2nd Edition
Dayong Du
- 210 pages
- English
- ePUB (mobile friendly)
- Available on iOS and Android
Book information
This book takes you on a fantastic journey to discover the attributes of big data using Apache Hive.

About This Book
- Grasp the skills needed to write efficient Hive queries to analyze big data
- Discover how Hive can coexist and work with other tools within the Hadoop ecosystem
- Practical, example-oriented scenarios cover all the newly released features of Apache Hive 2.3.3

Who This Book Is For
If you are a data analyst, developer, or simply someone who wants to quickly get started with Hive to explore and analyze big data in Hadoop, this is the book for you. Since Hive is an SQL-like language, some previous experience with SQL will be useful to get the most out of this book.

What You Will Learn
- Create and set up the Hive environment
- Discover how to use Hive's definition language to describe data
- Discover interesting data by joining and filtering datasets in Hive
- Transform data by using Hive sorting, ordering, and functions
- Aggregate and sample data in different ways
- Boost Hive query performance and enhance data security in Hive
- Customize Hive to your needs by using user-defined functions and integrate it with other tools

In Detail
In this book, we prepare you for your journey into big data by first introducing you to the big data domain, along with the process of setting up and getting familiar with your Hive working environment. Next, the book guides you through discovering and transforming the values of big data with the help of examples. It also hones your skills in using the Hive language in an efficient manner. Toward the end, the book focuses on advanced topics, such as performance, security, and extensions in Hive, which will guide you on exciting adventures on this worthwhile big data journey. By the end of the book, you will be familiar with Hive and able to work efficiently to find solutions to big data problems.

Style and approach
This book takes a practical approach that will familiarize you with Apache Hive and how to use it efficiently to find solutions to your big data problems. It covers crucial topics, such as performance and data security, to help you make the most of the Hive working environment.
Data Definition and Description
- Understanding data types
- Data type conversions
- Data definition language
- Databases
- Tables
- Partitions
- Buckets
- Views
Understanding data types
| Primitive type | Description | Example |
| --- | --- | --- |
| TINYINT | A 1-byte signed integer, from -128 to 127. The postfix is Y. Used for a small range of numbers. | 10Y |
| SMALLINT | A 2-byte signed integer, from -32,768 to 32,767. The postfix is S. Used for a regular small number. | 10S |
| INT | A 4-byte signed integer, from -2,147,483,648 to 2,147,483,647. | 10 |
| BIGINT | An 8-byte signed integer, from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. The postfix is L. | 100L |
| FLOAT | A 4-byte single-precision floating-point number, from 1.40129846432481707e-45 to 3.40282346638528860e+38 (positive or negative). Scientific notation is not yet supported. It stores very close approximations of numeric values. | 1.2345679 |
| DOUBLE | An 8-byte double-precision floating-point number, from 4.94065645841246544e-324d to 1.79769313486231570e+308d (positive or negative). Scientific notation is not yet supported. It stores very close approximations of numeric values. | 1.2345678901234567 |
| BINARY | Introduced in Hive 0.8.0; it only supports CAST to STRING and vice versa. | 1011 |
| BOOLEAN | A TRUE or FALSE value. | TRUE |
| STRING | A sequence of characters expressed with either single quotes (') or double quotes ("). Hive uses C-style escaping within strings. The maximum size is around 2 GB. | 'Books' or "Books" |
| CHAR | Available starting with Hive 0.13.0; most UDFs work for this type as of Hive 0.14.0. The maximum length is fixed at 255. | 'US' or "US" |
| VARCHAR | Available starting with Hive 0.12.0; most UDFs work for this type as of Hive 0.14.0. The maximum length is 65,535. If a string value being converted/assigned to a varchar value exceeds the specified length, the string is silently truncated. | 'Books' or "Books" |
| DATE | A specific year, month, and day in the format YYYY-MM-DD. Available starting with Hive 0.12.0. The range of dates is from 0000-01-01 to 9999-12-31. | 2013-01-01 |
| TIMESTAMP | A specific year, month, day, hour, minute, second, and fractional second in the format YYYY-MM-DD HH:MM:SS[.fff...]. Available starting with Hive 0.8.0. | 2013-01-01 12:00:01.345 |
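As a quick illustration of these types (a sketch, runnable in any Hive session; the column aliases are arbitrary), the postfix letters mark integer literals, and CAST converts between types explicitly:

```sql
-- Integer literals with type postfixes; no postfix means INT
SELECT 10Y AS tiny, 10S AS small, 10 AS normal, 100L AS big;

-- Explicit conversions between types
SELECT CAST('2013-01-01' AS DATE) AS d,   -- STRING to DATE
       CAST(1.99 AS INT)          AS i,   -- DOUBLE to INT, fraction dropped
       CAST('Books' AS VARCHAR(3)) AS v;  -- silently truncated to 'Boo'
```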
| Complex type | Description | Example |
| --- | --- | --- |
| ARRAY | A list of items of the same type, such as [val1, val2, and so on]. Access a value with array_name[index], for example, fruit[0]="apple". The index starts from 0. | ["apple","orange","mango"] |
| MAP | A set of key-value pairs, such as {key1: val1, key2: val2, and so on}. Access a value with map_name[key], for example, fruit[1]="apple". | {1: "apple", 2: "orange"} |
| STRUCT | A user-defined structure of fields of any type, such as {val1, val2, val3, and so on}. By default, the STRUCT field names are col1, col2, and so on. Access a value with struct_name.column_name, for example, fruit.col1=1. | {1, "apple"} |
| NAMED STRUCT | A user-defined structure of any number of typed and named fields, such as {name1: val1, name2: val2, and so on}. Access a value with struct_name.column_name, for example, fruit.apple="gala". | {"apple":"gala","weight kg":1} |
| UNION | A structure that holds exactly one of its specified data types at a time. Available starting with Hive 0.7.0. It is not commonly used. | {2:["apple","orange"]} |
For a MAP, all keys must share one type and all values another; STRUCT is more flexible in this regard. A STRUCT is more like a table row, whereas a MAP is more like an ARRAY with a customized index.
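These complex types can also be built inline with Hive's constructor functions (array, map, struct, and named_struct), which is handy for quick experiments before creating a real table. A minimal sketch (the aliases and field names are arbitrary):

```sql
SELECT
  array('apple','orange','mango')[0]            AS first_fruit,  -- ARRAY access by index
  map(1,'apple',2,'orange')[1]                  AS fruit_by_key, -- MAP access by key
  struct(1,'apple').col1                        AS struct_col,   -- default field names col1, col2, ...
  named_struct('apple','gala','weight',1).apple AS apple_kind;   -- NAMED STRUCT access by field name
```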
- Prepare the data as follows:
$vi employee.txt
Michael|Montreal,Toronto|Male,30|DB:80|Product:Developer^DLead
Will|Montreal|Male,35|Perl:85|Product:Lead,Test:Lead
Shelley|New York|Female,27|Python:80|Test:Lead,COE:Architect
Lucy|Vancouver|Female,57|Sales:89,HR:94|Sales:Lead
- Log in to beeline with the JDBC URL:
$beeline -u "jdbc:hive2://localhost:10000/default"
- Create a table using various data types (> indicates the beeline interactive mode):
> CREATE TABLE employee (
> name STRING,
> work_place ARRAY<STRING>,
> gender_age STRUCT<gender:STRING,age:INT>,
> skills_score MAP<STRING,INT>,
> depart_title MAP<STRING,ARRAY<STRING>>
> )
> ROW FORMAT DELIMITED
> FIELDS TERMINATED BY '|'
> COLLECTION ITEMS TERMINATED BY ','
> MAP KEYS TERMINATED BY ':'
> STORED AS TEXTFILE;
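Once the table exists and employee.txt has been loaded, the complex-type columns can be queried with the accessors described in the tables above. A sketch (the local file path is an assumption; adjust it to wherever you saved the file):

```sql
> LOAD DATA LOCAL INPATH '/tmp/employee.txt'
> OVERWRITE INTO TABLE employee;
> SELECT
>   name,
>   work_place[0]              AS main_city,    -- ARRAY element
>   gender_age.gender          AS gender,       -- STRUCT field
>   skills_score['DB']         AS db_score,     -- MAP value (NULL if the key is absent)
>   depart_title['Product'][0] AS product_role  -- ARRAY inside a MAP
> FROM employee;
```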