
Apache Hive Essentials

Essential techniques to help you process, and get unique insights from, big data, 2nd Edition

Dayong Du

About the book

This book takes you on a fantastic journey to discover the attributes of big data using Apache Hive.

About This Book
  • Grasp the skills needed to write efficient Hive queries to analyze big data
  • Discover how Hive can coexist and work with other tools within the Hadoop ecosystem
  • Covers all the newly released features of Apache Hive 2.3.3 through practical, example-oriented scenarios

Who This Book Is For
If you are a data analyst, developer, or simply someone who wants to quickly get started with Hive to explore and analyze big data in Hadoop, this is the book for you. Since Hive is an SQL-like language, some previous experience with SQL will be useful to get the most out of this book.

What You Will Learn
  • Create and set up the Hive environment
  • Discover how to use Hive's definition language to describe data
  • Discover interesting data by joining and filtering datasets in Hive
  • Transform data by using Hive sorting, ordering, and functions
  • Aggregate and sample data in different ways
  • Boost Hive query performance and enhance data security in Hive
  • Customize Hive to your needs by using user-defined functions and integrate it with other tools

In Detail
In this book, we prepare you for your journey into big data by first introducing you to the big data domain, along with the process of setting up and getting familiar with your Hive working environment. Next, the book guides you through discovering and transforming the values of big data with the help of examples. It also hones your skills in using the Hive language in an efficient manner. Toward the end, the book focuses on advanced topics, such as performance, security, and extensions in Hive, which will guide you on an exciting adventure on this worthwhile big data journey. By the end of the book, you will be familiar with Hive and able to work efficiently to find solutions to big data problems.

Style and approach
This book takes a practical approach that familiarizes you with Apache Hive and shows you how to use it efficiently to find solutions to your big data problems. It covers crucial topics, such as performance and data security, to help you make the most of the Hive working environment.

Book details

Year: 2018
ISBN: 9781789136517
Edition: 2
Pages: 210
Language: English
Subject: Computer Science
Category: Database

Data Definition and Description

This chapter introduces the basic data types, data definition language, and schema in Hive to describe data. It also covers best practices to describe data correctly and effectively by using internal or external tables, partitions, buckets, and views. In this chapter, we will cover the following topics:
  • Understanding data types
  • Data type conversions
  • Data definition language
  • Databases
  • Tables
  • Partitions
  • Buckets
  • Views

Understanding data types

Hive data types are categorized into two types: primitive and complex. String and Int are the most useful primitive types, which are supported by most HQL functions. The details of primitive types are as follows:
Primitive type | Description | Example
TINYINT | A 1-byte signed integer, from -128 to 127. The postfix is Y. It is used for a small range of numbers. | 10Y
SMALLINT | A 2-byte signed integer, from -32,768 to 32,767. The postfix is S. It is used as a regular short number. | 10S
INT | A 4-byte signed integer, from -2,147,483,648 to 2,147,483,647. | 10
BIGINT | An 8-byte signed integer, from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. The postfix is L. | 100L
FLOAT | A 4-byte single-precision floating-point number, from 1.40129846432481707e-45 to 3.40282346638528860e+38 (positive or negative). Scientific notation is not yet supported. It stores very close approximations of numeric values. | 1.2345679
DOUBLE | An 8-byte double-precision floating-point number, from 4.94065645841246544e-324d to 1.79769313486231570e+308d (positive or negative). Scientific notation is not yet supported. It stores very close approximations of numeric values. | 1.2345678901234567
BINARY | Introduced in Hive 0.8.0; it only supports CAST to STRING and vice versa. | 1011
BOOLEAN | A TRUE or FALSE value. | TRUE
STRING | Characters expressed with either single quotes (') or double quotes ("). Hive uses C-style escaping within strings. The maximum size is around 2 GB. | 'Books' or "Books"
CHAR | Available starting with Hive 0.13.0; most UDFs work with this type as of Hive 0.14.0. The maximum length is fixed at 255. | 'US' or "US"
VARCHAR | Available starting with Hive 0.12.0; most UDFs work with this type as of Hive 0.14.0. The maximum length is fixed at 65,535. If a string value being converted or assigned to a varchar value exceeds the specified length, it is silently truncated. | 'Books' or "Books"
DATE | A specific year, month, and day in the format YYYY-MM-DD. Available starting with Hive 0.12.0. The range of dates is from 0000-01-01 to 9999-12-31. | 2013-01-01
TIMESTAMP | A specific year, month, day, hour, minute, second, and fractional second in the format YYYY-MM-DD HH:MM:SS[.fff...]. Available starting with Hive 0.8.0. | 2013-01-01 12:00:01.345
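As a quick sketch (separate from the exercise later in this chapter), the numeric postfixes and a couple of CAST conversions from the table above can be tried directly in beeline. Hive 0.13.0 and later allow a SELECT without a FROM clause, and the column aliases here are purely illustrative:
 > SELECT 10Y AS tiny_col, -- TINYINT literal
 > 10S AS small_col, -- SMALLINT literal
 > 100L AS big_col, -- BIGINT literal
 > CAST('2013-01-01' AS DATE) AS date_col, -- STRING to DATE
 > CAST('100' AS INT) AS int_col; -- STRING to INT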
Hive has three main complex types: ARRAY, MAP, and STRUCT. These data types are built on top of the primitive data types. ARRAY and MAP are similar to their counterparts in Java, while STRUCT is a record type that may contain a set of fields of any type. Complex types allow the nesting of types. The details of complex types are as follows:
Complex type | Description | Example
ARRAY | A list of items of the same type, such as [val1, val2, and so on]. You can access a value using array_name[index], for example, fruit[0]="apple". The index starts from 0. | ["apple","orange","mango"]
MAP | A set of key-value pairs, such as {key1: val1, key2: val2, and so on}. You can access a value using map_name[key], for example, fruit[1]="apple". | {1: "apple", 2: "orange"}
STRUCT | A user-defined structure of fields of any type, such as {val1, val2, val3, and so on}. By default, the STRUCT field names are col1, col2, and so on. You can access a value using struct_name.column_name, for example, fruit.col1=1. | {1, "apple"}
NAMED STRUCT | A user-defined structure of any number of typed and named fields, such as {name1: val1, name2: val2, and so on}. You can access a value using struct_name.column_name, for example, fruit.apple="gala". | {"apple":"gala","weight kg":1}
UNION | A structure that holds exactly one of its specified data types at a time. Available starting with Hive 0.7.0. It is not commonly used. | {2:["apple","orange"]}

For MAP, all keys must share one type and all values must share one type. STRUCT is more flexible: STRUCT is more like a table, whereas MAP is more like an ARRAY with a customized index.
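The access patterns described above can also be tried without creating any table, using Hive's built-in constructor functions array(), map(), and named_struct(). This is only a minimal sketch; the literal values and column aliases are illustrative:
 > SELECT fruits[0] AS first_fruit, -- ARRAY access by index
 > prices['apple'] AS apple_price, -- MAP access by key
 > info.name AS fruit_name -- NAMED STRUCT access by field name
 > FROM (
 > SELECT array('apple','orange','mango') AS fruits,
 > map('apple',1,'orange',2) AS prices,
 > named_struct('name','gala','weight',1) AS info
 > ) t;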
The following is a short exercise covering all of the commonly used data types. The details of the CREATE, LOAD, and SELECT statements will be introduced in later chapters. Let's take a look at the exercise:
  1. Prepare the data as follows:
 $vi employee.txt
Michael|Montreal,Toronto|Male,30|DB:80|Product:Developer^DLead
Will|Montreal|Male,35|Perl:85|Product:Lead,Test:Lead
Shelley|New York|Female,27|Python:80|Test:Lead,COE:Architect
Lucy|Vancouver|Female,57|Sales:89,HR:94|Sales:Lead
  2. Log in to beeline with the JDBC URL:
  $beeline -u "jdbc:hive2://localhost:10000/default"
  3. Create a table using various data types (> indicates the beeline interactive mode):
 > CREATE TABLE employee (
> name STRING,
> work_place ARRAY<STRING>,
> gender_age STRUCT<gender:STRING,age:INT>,
> skills_score MAP<STRING,INT>,
> depart_title MAP<STRING,ARRAY<STRING>>
> )
> ROW FORMAT DELIMITED
> FIELDS TERMINATED BY '|'
> COLLECTION ITEMS TERMINATED BY ','
> MAP KEYS TERMINATED BY ':';
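The exercise continues by loading the sample file and querying the complex-type columns. The following is a hedged sketch of those steps; the local path /home/hadoop/employee.txt is an assumption, so adjust it to wherever employee.txt was actually saved:
 > LOAD DATA LOCAL INPATH '/home/hadoop/employee.txt'
 > OVERWRITE INTO TABLE employee;
 > SELECT name, -- primitive column
 > work_place[0] AS main_place, -- ARRAY element
 > gender_age.gender AS gender, -- STRUCT field
 > skills_score['DB'] AS db_score, -- MAP value by key
 > depart_title['Product'][0] AS product_role -- element of the ARRAY stored under a MAP key
 > FROM employee;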
