This article describes the principal changes to the Snowflake Templates.
The Template download section can be found on this page.
Stambia DI is a flexible and agile solution that can be quickly adapted to your needs.
If you have any questions, feature requests, or issues, do not hesitate to contact us.
Small update to the Snowflake product to properly handle the new default fetch size feature introduced in Stambia DI Designer S19.0.21.
This has no impact on usage; it only ensures the product is properly configured should the default fetch size feature be needed in an upcoming version.
LOAD Rdbms to Snowflake
Support for splitting temporary data into multiple files
A new parameter named 'Split File Size' has been added to define the maximum size of the temporary files that will be loaded into Snowflake.
When exporting source data into temporary files, the Template automatically splits the data into multiple files once this size is reached.
Finally, all those files are loaded into Snowflake.
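As a minimal sketch of the resulting behavior (the table and file names below are hypothetical, and the staged files are shown with the .gz suffix Snowflake adds when auto-compression is enabled), each chunk is uploaded and then all chunks are loaded together:

```sql
-- Upload the split chunks of one export to the target table's stage
PUT 'file:///tmp/export_part_1.csv' @%TARGET_TABLE;
PUT 'file:///tmp/export_part_2.csv' @%TARGET_TABLE;
PUT 'file:///tmp/export_part_3.csv' @%TARGET_TABLE;

-- Load all chunks with a single COPY statement
COPY INTO TARGET_TABLE
FROM @%TARGET_TABLE
FILES = ('export_part_1.csv.gz', 'export_part_2.csv.gz', 'export_part_3.csv.gz')
FILE_FORMAT = (TYPE = CSV);
```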
LOAD Rdbms to Snowflake
New parameter 'Empty Field As Null'
A new parameter named 'Empty Field As Null' has been added to allow customizing the corresponding Snowflake file format option.
This parameter indicates to Snowflake's loader that empty values must be treated as NULL values when loading data.
Refer to Snowflake's reference documentation about "EMPTY_FIELD_AS_NULL" file format option for further information.
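For illustration (table and stage names are hypothetical), the option ends up in the file format of the generated COPY statement; with it enabled, an input line such as `a,,c` is loaded with a NULL in the second column:

```sql
COPY INTO TARGET_TABLE
FROM @%TARGET_TABLE
FILE_FORMAT = (TYPE = CSV, EMPTY_FIELD_AS_NULL = TRUE);
```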
Position of fields might be incorrect in load command
In some particular cases, the position of the fields generated in the load statement did not correspond to their real position in the temporary file.
The calculation of those positions has been fixed and is now correct in all cases.
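To illustrate what a correct positional mapping looks like (table and column names are hypothetical), Snowflake references the fields of a staged file by position with $1, $2, and so on; those references must follow the order of the fields in the file, not the order of the target columns:

```sql
-- The temporary file stores the fields in the order ID, CITY, NAME,
-- so $2 and $3 are swapped relative to the target column list.
COPY INTO TARGET_TABLE (ID, NAME, CITY)
FROM (SELECT $1, $3, $2 FROM @%TARGET_TABLE)
FILE_FORMAT = (TYPE = CSV);
```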
Cleaning of temporary local files
When disabling the 'Clean Temporary Objects' parameter so that the Template keeps all temporary tables, files, containers, etc. created while processing, some of the local files were still cleaned.
This has been fixed: all local temporary files are now kept when cleaning is disabled.
Ability to customize and manage compression behavior
A whole new mechanism has been added to provide the ability to customize how compression is managed when loading data into Snowflake.
New parameters have been added in the Metadata to define the default compression strategy used when loading data; it can also be overridden on each Mapping through corresponding new parameters in the Templates.
For further information, refer to the getting started article, which has been updated with more details about compression.
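As a sketch of what the generated statements may look like under a GZIP strategy (the names are hypothetical and the exact statements depend on the chosen parameters), the compression declared in the COPY must match how the file was staged:

```sql
-- Compress the file while uploading it to the stage
PUT 'file:///tmp/export.csv' @%TARGET_TABLE AUTO_COMPRESS = TRUE;

-- Declare the matching compression in the COPY file format
COPY INTO TARGET_TABLE
FROM @%TARGET_TABLE
FILE_FORMAT = (TYPE = CSV, COMPRESSION = GZIP);
```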
Ability to choose load method
A new parameter named 'Load Method' has been added in Load Templates, which is used to define how loading should be performed.
Two methods are available for now, "insert" and "copy", which correspond to the type of SQL Query used for loading data.
Depending on the chosen method, an INSERT or a COPY statement is used.
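As a minimal sketch (table, stage, and column names are hypothetical), the two methods roughly translate to:

```sql
-- "insert" method: data is loaded with a plain SQL INSERT from a temporary table
INSERT INTO TARGET_TABLE (ID, NAME)
SELECT ID, NAME FROM TMP_TABLE;

-- "copy" method: data is bulk loaded from staged files with a COPY statement
COPY INTO TARGET_TABLE
FROM @%TARGET_TABLE
FILE_FORMAT = (TYPE = CSV);
```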
New parameters to adjust loading behaviors
A set of new parameters has also been added to the Load Templates to adjust loading behavior more precisely: "Escape Unenclosed Field", "No File Behavior", "Work Folder", "Time Format", and "Timestamp Format".
Refer to their documentation for further information.
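Some of these parameters correspond to Snowflake file format options of the same name; a hypothetical COPY statement using such options could look like the following (the exact mapping performed by the Templates is an assumption here, and the format masks are only examples):

```sql
COPY INTO TARGET_TABLE
FROM @%TARGET_TABLE
FILE_FORMAT = (
  TYPE = CSV,
  ESCAPE_UNENCLOSED_FIELD = '\\',
  TIME_FORMAT = 'HH24:MI:SS',
  TIMESTAMP_FORMAT = 'YYYY-MM-DD HH24:MI:SS'
);
```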
Loading of files improved to match source file Metadata as closely as possible
When loading a source file into Snowflake, the generated queries and temporary objects now match the source file Metadata as closely as possible.
More options from the source file Metadata are retrieved and used to generate the corresponding options and formats in the queries, ensuring that loading is efficient and consistent.
Multiple issues have been fixed in this new version.
Refer to the history section of each Template for further information.
Incorrect datatype generated in temporary tables when using NUMBER datatype
The datatype used in temporary tables for NUMBER columns was incorrectly generated as INT instead of NUMBER.
The NUMBER datatype creation mask has been updated to generate the correct syntax.
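As an illustration with a hypothetical table and column, the difference between the old and new creation mask is:

```sql
-- Before the fix, precision and scale were lost:
--   CREATE TEMPORARY TABLE TMP_TABLE (AMOUNT INT);

-- With the updated mask, the NUMBER definition is preserved:
CREATE TEMPORARY TABLE TMP_TABLE (AMOUNT NUMBER(18, 4));
```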
Recycling of previous rejects fixed
When using the option to recycle the rejects of a previous execution, an extra step is executed to add those previous rejects to the integration flow.
This step had several issues which prevented it from working properly; they have been fixed in this version.
Moreover, possible duplicates while retrieving those rejects are now filtered using the DISTINCT keyword.
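As a sketch of the fixed step (the table and column names are hypothetical), previous rejects are re-added to the flow with duplicates filtered out:

```sql
INSERT INTO TMP_INTEGRATION_TABLE (ID, NAME)
SELECT DISTINCT ID, NAME
FROM REJECT_TABLE;
```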