

Stambia Data Integration allows you to work with Spark.

Note:

Stambia DI is a flexible and agile solution. It can be quickly adapted to your needs.

If you have any questions, feature requests, or issues, do not hesitate to contact us.

This component may require a dedicated license.

Please contact the Stambia support team if you have any doubt or if you need to use it in a production environment.

 

Prerequisites:

You must install the Hadoop connector to be able to work with Spark.

Please refer to the following article, which will guide you through the installation.

When the installation is completed, you can proceed to the Getting started with Spark article, which explains how to work with Spark in Stambia.

Download

You can find below the necessary resources to work with this Component in Stambia DI.

Name | Description | Download
Component resources | The resources required to work with this Component. | Component resources download

 

Supported features

You can find below an overview of what Stambia can do on Spark 2:

Name | Description
LOAD | Data can be loaded to Spark: HBase, HDFS, Hive, RDBMS, Vertica. Data can also be loaded from Spark: Hive, RDBMS, Vertica.
INTEGRATE | Data can be integrated from Spark: HDFS, Hive, RDBMS.
STAGE | Spark metadata can be used as a stage (between loading and integration) to boost Hadoop mappings. A Spark Stage can be: SQL or Java.
