Hive is data warehouse software used to structure, manage, and query data stored in distributed storage systems such as HDFS.

This article explains the basics to start working with Hive in Stambia.

Prerequisites:

You must install the Hadoop connector to be able to work with Hive.

Please refer to the dedicated installation article, which will guide you through this.

 

Metadata

The first step when working with Hive in Stambia DI is to create and configure the Hive Metadata.

Here is a summary of the main steps to follow:

  1. Creation of the Hive Metadata
  2. (Optional) Configuration of the Kerberos security
  3. Configuration of the server JDBC properties
  4. Reverse of the schemas and tables
  5. Configuration of HDFS

 

Here is an example of a common Metadata Configuration:

 

[Screenshot: Metadata overview]

 

Metadata creation

First, create the Hive Metadata, as usual, by selecting the technology in the Metadata Creation Wizard:

[Screenshot: New Metadata wizard]

 

Click Next, choose a name, and click Finish.

 

Configuration of the Kerberos security

When working with Kerberos-secured Hadoop clusters, connections are protected, so you need to provide Stambia with the credentials and information required to establish the Kerberos connection.

If your cluster is secured with Kerberos, close the Server Wizard popup (if it is displayed) and follow the steps below before trying to connect and reverse Hive objects.

Otherwise, you can skip to the next section.

  1. Create a new Kerberos Metadata (or use an existing one)
  2. Define in it the Kerberos Principal to use for Hive
  3. Drag and drop it in the Hive Metadata
  4. Rename the Metadata Link to 'KERBEROS'

 

[Screenshot: KERBEROS Metadata Link]

 

Refer to the dedicated article for further information about the Kerberos Metadata configuration.
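
For context, here is a minimal standalone Java sketch of what a Kerberos-authenticated Hive connection involves outside Stambia; the custom handler performs the equivalent for you. The principal, keytab path, host, and realm below are hypothetical, and hadoop-common and hive-jdbc are assumed to be on the classpath:

// Minimal sketch, not Stambia's actual handler code
import java.sql.Connection;
import java.sql.DriverManager;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberosHiveConnection {
    public static void main(String[] args) throws Exception {
        // Switch Hadoop's security mode to Kerberos
        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);

        // Authenticate with a keytab (hypothetical principal and path)
        UserGroupInformation.loginUserFromKeytab(
                "stambia@CLOUDERA", "/etc/security/keytabs/stambia.keytab");

        // The Kerberized JDBC URL carries the Hive service principal
        String url = "jdbc:hive2://quickstart.cloudera:10000/default;"
                + "principal=hive/quickstart.cloudera@CLOUDERA";
        try (Connection connection = DriverManager.getConnection(url)) {
            System.out.println("Connected: " + !connection.isClosed());
        }
    }
}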

 

Configuration of the server properties

You are now ready to configure the JDBC properties that will be used to connect to Hive.

We're going to use the Server Wizard to configure everything.

If the Server Wizard popup is not displayed (because you closed it to configure Kerberos, or for any other reason), you can open it again by right-clicking the server node and selecting Actions > Launch Server Wizard.

Define the JDBC properties to connect to Hive, then click Connect.

[Screenshot: Server Wizard]

Specifying a User/Password is not required when using Kerberos.

 

About the JDBC URL

Defining the correct JDBC URL and parameters can be tricky, as it depends a lot on the Hive server and network configuration, whether Kerberos is used, which Hadoop distribution is used, and so on.

We will therefore take a little time here to give advice and example URLs, with explanations of their structure.

First, the Hive JDBC URL must follow the given syntax:

jdbc:stambia:handler1:<Hive JDBC Class>:<JDBC URL>

The first part may look unusual: it is present because Stambia uses a custom handler that manages the Kerberos security seamlessly. It is mandatory.

The 'Hive JDBC Class' will be the following in most cases, if you are using the standard Hive JDBC driver:

jdbc:stambia:handler1:org.apache.hive.jdbc.HiveDriver:<JDBC_URL>

You should therefore only have to configure the <JDBC_URL> part.

 

Example of a URL to connect to a Hive server:

jdbc:stambia:handler1:org.apache.hive.jdbc.HiveDriver:jdbc:hive2://quickstart.cloudera:10000/default

 

Example of a URL to connect to a Hive server secured with Kerberos:

jdbc:stambia:handler1:org.apache.hive.jdbc.HiveDriver:jdbc:hive2://quickstart.cloudera:10000/default;principal=hive/quickstart.cloudera@CLOUDERA

 

Below are some JDBC URL properties that are usually required when using Kerberos:

Property  | Mandatory | Description                         | Example
principal | Yes       | Kerberos principal to connect with. | principal=hive/quickstart.cloudera@CLOUDERA
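
If you need to validate the <JDBC_URL> part outside Stambia, a minimal Java sketch such as the following can help. The host, port, and credentials are hypothetical, and the standard hive-jdbc driver is assumed to be on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveUrlTest {
    public static void main(String[] args) throws Exception {
        // Only the <JDBC_URL> part; the jdbc:stambia:handler1:... prefix is Stambia-specific
        String url = "jdbc:hive2://quickstart.cloudera:10000/default";
        try (Connection conn = DriverManager.getConnection(url, "cloudera", "cloudera");
                Statement stmt = conn.createStatement();
                ResultSet rs = stmt.executeQuery("SHOW TABLES")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}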

 

Reversing schemas and tables

Once the connection properties are set and Kerberos optionally configured, you can click Connect and reverse your schemas and tables, as usual.

Simply follow the wizard as for any other traditional database:

[Screenshot: Reverse wizard]
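
Conceptually, reverse-engineering retrieves the same information that is available through standard JDBC metadata. The following sketch, with a hypothetical URL, lists the tables of a schema the way any JDBC client could:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class ListHiveTables {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:hive2://quickstart.cloudera:10000/default";
        try (Connection conn = DriverManager.getConnection(url)) {
            // Ask the driver for all tables of the 'default' schema
            DatabaseMetaData metadata = conn.getMetaData();
            try (ResultSet rs = metadata.getTables(null, "default", "%",
                    new String[] { "TABLE" })) {
                while (rs.next()) {
                    System.out.println(rs.getString("TABLE_NAME"));
                }
            }
        }
    }
}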

 

About HDFS

Most of the Hive Templates use HDFS operations to optimize processing and to take advantage of the native loaders.

The Hive Metadata therefore requires an HDFS connection to create temporary files while processing.

  1. Create an HDFS Metadata or use an existing one
  2. Define in this Metadata the temporary HDFS folder where those operations should be performed (Hive must be able to access it).
  3. Drag and drop the HDFS Folder Metadata in the Hive Metadata
  4. Rename the Metadata Link to HDFS

 

[Screenshot: HDFS Metadata Link]
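
To illustrate the kind of operation involved, here is a minimal Java sketch that stages a file in a temporary HDFS folder and then asks Hive to load it natively. All paths, URLs, and table names are hypothetical, hadoop-client and hive-jdbc are assumed to be on the classpath, and the actual Templates handle this for you:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class StageAndLoadIntoHive {
    public static void main(String[] args) throws Exception {
        // 1. Copy a local file into the temporary HDFS folder defined in the HDFS Metadata
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://quickstart.cloudera:8020"); // hypothetical NameNode
        try (FileSystem fs = FileSystem.get(conf)) {
            fs.copyFromLocalFile(new Path("/tmp/customers.csv"),
                    new Path("/user/stambia/tmp/customers.csv"));
        }

        // 2. Let Hive move the staged file into the table
        //    (Hive must be able to access the temporary folder)
        String url = "jdbc:hive2://quickstart.cloudera:10000/default";
        try (Connection conn = DriverManager.getConnection(url);
                Statement stmt = conn.createStatement()) {
            stmt.execute("LOAD DATA INPATH '/user/stambia/tmp/customers.csv' "
                    + "INTO TABLE customers");
        }
    }
}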

 

Creating your first Mappings

With your Metadata ready and your tables reversed, you can now start creating your first Mappings.

The Hive technology in Stambia is no different from any other database technology you might usually use.

Drag and drop your sources and targets, map the columns as usual, and configure the templates according to your requirements.

Example of Mapping loading data from HSQL into Hive:

[Screenshot: Mapping from HSQL to Hive]

 

Example of Mapping loading data from HSQL to Hive with rejects enabled:

[Screenshot: Mapping from HSQL to Hive with rejects]

 

Example of Mapping loading a delimited file into Hive:

[Screenshot: Mapping from a delimited file to Hive]

 

Example of Mapping loading data from Hive to HSQL, using a filter and performing joins:

[Screenshot: Mapping from Hive to HSQL with a filter and joins]

 

Note: For further information, please consult each Template's process and parameter descriptions.

 

Demonstration Project

The Hadoop demonstration project that you can find on the download page contains Hive examples.

Do not hesitate to have a look at this project for ready-to-use examples.

 
