Setting general connection parameters
This section describes how to configure general connection properties. For an explanation of how to configure advanced connection properties, see Setting advanced connection properties.
To add a Snowflake target endpoint to Qlik Replicate:
- In the Qlik Replicate Console, click Manage Endpoint Connections to open the Manage Endpoint Connections dialog box.
- In the Manage Endpoint Connections dialog box, click New Endpoint Connection.
- In the Name field, enter a display name for your Snowflake endpoint.
- Optionally, in the Description field, enter a description for your Snowflake target endpoint.
- Select Target as the role.
- Select Snowflake as the Type.
- Configure the Snowflake access settings as follows:
- Server: The host name for accessing Snowflake.
- Authentication: Select one of the following:
- Key Pair: Select and then provide the following information:
  - Username: The username of a user authorized to access the Snowflake database.
  - Private key file: The full path to your private key file (in PEM format).
    Example: C:\Key\snow.pem
  - Private key passphrase: If the private key file is encrypted, specify the passphrase.
- OAuth: To use OAuth authentication, your Snowflake database must be configured to use OAuth. The process is described in the Snowflake online help:
Configure Snowflake OAuth for Custom Clients
Authorize URL: The IdP server for requesting authorization codes. The authorization URL format depends on the IdP.
For Snowflake:
https://<yourSnowflakeAccount>/oauth/authorize
For Okta:
https://<yourOktaDomain>/oauth2/<authorizationServerId>/v1/authorize
Token URL: The IdP server used to exchange the authorization code for an access token. The access token URL format depends on the IdP.
For Snowflake:
https://<yourSnowflakeAccount>/oauth/token-request
For Okta:
https://<yourOktaDomain>/oauth2/<authorizationServerId>/v1/token
Client ID: The client ID of your application.
Client secret: The client secret of your application.
Scope: You might be required to specify at least one scope attribute, depending on your IdP configuration. Scope attributes must be separated by a space. Refer to your IdP's online help for information about the available scopes and their respective formats.
Use default proxy settings: Select this option to connect via a proxy server when clicking Generate. Note that the proxy settings must be defined in the Endpoints tab of the server settings.
Refresh token: The refresh token value. Click Generate to generate a new refresh token. When you click Generate, your IdP will prompt you for your access credentials. Once you have provided the credentials, the Refresh token field will be populated with the token value.
Warning note: The IdP must not be configured to rotate the refresh token.
Information note: When using Replicate Console, the OAuth redirect URL is https://{hostname}/attunityreplicate/rest/oauth_complete.
When using Enterprise Manager, the OAuth redirect URL is https://{hostname}/attunityenterprisemanager/rest/oauth_complete.
The {hostname} part of the URL should be replaced by the domain from which you want to connect (Enterprise Manager, Replicate on Windows, Replicate on Linux, or Replicate on Windows using port 3552).
If you connect to Replicate with a hostname that differs from the hostname in the redirect URL (configured in your IdP), you need to add that name to the end of the <REPLICATE-INSTALL-DIR>\bin\repctl.cfg file in the following format (using localhost as an example):
"address": localhost
Then restart the Qlik Replicate Server service.
- Username and password: Enter the username and password of a user authorized to access the Snowflake database.
  Information note: This authentication method is not supported when Snowpipe Streaming is the Loading method.
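The OAuth settings above describe a standard authorization-code flow: the IdP issues a long-lived refresh token once (via the Generate button), which is then exchanged at the Token URL for short-lived access tokens. As a rough illustration of that exchange, and not Replicate's actual implementation, the following sketch builds a refresh-token request; the token URL, client ID, client secret, and token values are hypothetical placeholders:

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_refresh_request(token_url: str, client_id: str,
                          client_secret: str, refresh_token: str) -> Request:
    """Build a standard OAuth 2.0 refresh-token request (RFC 6749, section 6).

    All argument values here are hypothetical placeholders; your IdP defines
    the real token URL and credentials.
    """
    body = urlencode({
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,
        "client_secret": client_secret,
    }).encode("ascii")
    return Request(
        token_url,
        data=body,  # POST body, form-encoded as the OAuth spec requires
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )

# Hypothetical Snowflake-style token URL; no request is actually sent here.
req = build_refresh_request(
    "https://myaccount.snowflakecomputing.com/oauth/token-request",
    client_id="my-client-id",
    client_secret="my-client-secret",
    refresh_token="my-refresh-token",
)
```

Sending such a request returns a JSON document containing the access token. This also illustrates why the warning above matters: the stored refresh token is reused on every exchange, so the IdP must not rotate it.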
- Role: (Optional) The role to use for the connection. The specified role should be assigned to the specified Snowflake user. If no role is specified, the default role will be used.
- Database name: The name of your Snowflake database.
- Configure the Data Loading settings as follows:
- Loading method: Select Bulk Loading (the default) or Snowpipe Streaming.
Information note: If you select Snowpipe Streaming, make sure that you are aware of the limitations of this method.
The main reasons to choose Snowpipe Streaming over Bulk Loading are:
- Less costly: As Snowpipe Streaming does not use the Snowflake warehouse, operating costs should be significantly lower, although this will depend on your specific use case.
- Reduced latency: As the data is streamed directly to the target tables (as opposed to via staging), replication from source to target should be faster.
When Bulk Loading is selected, the following properties are available:
- Warehouse: The name of your Snowflake warehouse.
- Staging type: Select either Snowflake (the default) or one of the available staging types.
Information note: When Snowflake is selected, Snowflake's internal storage will be used.
- Amazon S3
  - Bucket name: The name of the Amazon S3 bucket to where the files will be copied.
  - Bucket region: The region where your bucket is located. It is recommended to leave the default (Auto-Detect) as it usually eliminates the need to select a specific region. However, due to security considerations, for some regions (for example, AWS GovCloud) you might need to explicitly select the region. If the region you require does not appear in the regions list, select Other and specify the code in the Region code field.
    For a list of region codes, see AWS Regions.
  - Access type: Choose one of the following:
    - Key pair - Choose this method to authenticate with your Access Key and Secret Key. Then provide the following additional information:
      - Access key: Type the access key information for Amazon S3.
      - Secret key: Type the secret key information for Amazon S3.
    - IAM Roles for EC2 - Choose this method if the machine on which Qlik Replicate is installed is configured to authenticate itself using an IAM role. Then provide the following additional information:
      - External stage name: The name of your external stage. To use the IAM Roles for EC2 access type, you must create an external stage that references the S3 bucket.
        To use the IAM Roles for EC2 access method, you also need to fulfill the prerequisites described in Prerequisite for using the IAM Roles for EC2 access type.
  - Folder: The bucket folder to where the files will be copied.
  - Stage authentication: Only available when Key pair is selected as the Access type. Choose Storage integration (the default) or Key pair. If you choose Key pair, the Access key and Secret key defined for the Access type will be used.
    - Storage integration name: Your storage integration name. Integrations avoid the need for passing explicit cloud provider credentials such as secret keys or access tokens; instead, integration objects reference a Cloud Storage service account. For more information on creating a storage integration name, see Configuring a Snowflake storage integration to access Amazon S3.
- Azure Blob Storage
  - Storage account: The name of an account with write permissions to the container.
    Information note: To connect to an Azure resource on Government Cloud or China Cloud, you need to specify the full resource name of the storage account. For example, assuming the storage account is MyBlobStorage, the resource name for China Cloud would be MyBlobStorage.dfs.core.chinacloudapi.cn.
    For information on setting internal parameters, see Setting advanced connection properties.
  - Access key: The account access key.
  - Container name: The container name.
  - Folder: The container folder to where the files will be copied.
  - Stage authentication: Choose Storage integration (the default) or SAS Token.
    - Storage integration name: Your storage integration name. Integrations avoid the need for passing explicit cloud provider credentials such as secret keys or access tokens; instead, integration objects reference a Cloud Storage service account. For more information on creating a storage integration name, see Configuring a Snowflake storage integration.
    - SAS token: Your SAS (Shared Access Signature) token for accessing the container.
- Google Cloud Storage
  - Bucket name: The Google Cloud Storage bucket.
  - JSON credentials: The JSON credentials for the service account key with read and write access to the Google Cloud Storage bucket.
  - Folder: Where to create the data files in the specified bucket.
  - Stage authentication: The value is set to Storage integration and cannot be changed.
    - Storage integration name: Your storage integration name. Integrations avoid the need for passing explicit cloud provider credentials such as secret keys or access tokens; instead, integration objects reference a Cloud Storage service account. For more information on creating a storage integration name, see Configuring an integration for Google Cloud Storage.
- Max file size (MB): Relevant for Full Load and CDC. The maximum size a file can reach before it is loaded to the target. If you encounter performance issues, try adjusting this parameter.
- File upload threads: Only available when the selected Staging type is Snowflake. The number of threads to use when uploading a single file. If you encounter performance issues with large files, try increasing this value.
- Number of files to load in a batch: Relevant for Full Load only. The number of files to load in a single batch. If you encounter performance issues, try adjusting this parameter.
- Batch load timeout (seconds): If you encounter frequent timeouts when loading the files, try increasing this value.
- To determine whether you are connected to the database you want to use and whether the connection information you entered is correct, click Test Connection.
If the connection is successful, a message in green is displayed. If the connection fails, an error message is displayed at the bottom of the dialog box.
To view the log entry if the connection fails, click View Log. The server log is displayed with the information for the connection failure. Note that this button is not available unless the test connection fails.
Prerequisite for using the IAM Roles for EC2 access type
To use the IAM Roles for EC2 access type, you must run the following commands on the Snowflake database before running the task:
Command 1:
create or replace file format MY_FILE_FORMAT TYPE='CSV' field_delimiter=',' compression='GZIP' record_delimiter='\n' null_if=('attrep_null') skip_header=0 FIELD_OPTIONALLY_ENCLOSED_BY='\"';
Command 2:
create or replace stage "PUBLIC".MY_S3_STAGE file_format=MY_FILE_FORMAT url='s3://MY_STORAGE_URL' credentials=(aws_role='MY_IAM_ROLE');
Where:
MY_FILE_FORMAT
- Can be any value.
MY_S3_STAGE
- The name specified in the External stage name field above.
MY_STORAGE_URL
- The URL of your Amazon S3 bucket.
MY_IAM_ROLE
- Your IAM role name.
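As the placeholder names indicate, the two commands are templates into which you substitute your own values. The sketch below renders them with hypothetical values (the format name, stage name, bucket URL, and role ARN are all made up) to make the intended result concrete:

```python
# Hypothetical substitution values; replace them with your own format name,
# the stage name entered in the External stage name field, your bucket URL,
# and your IAM role.
values = {
    "MY_FILE_FORMAT": "REPLICATE_CSV_FORMAT",
    "MY_S3_STAGE": "REPLICATE_S3_STAGE",
    "MY_STORAGE_URL": "s3://my-replicate-bucket/staging",
    "MY_IAM_ROLE": "arn:aws:iam::123456789012:role/my-replicate-role",
}

# Command 1: the file format used for the staged CSV files.
command_1 = (
    "create or replace file format {MY_FILE_FORMAT} TYPE='CSV' "
    "field_delimiter=',' compression='GZIP' record_delimiter='\\n' "
    "null_if=('attrep_null') skip_header=0 FIELD_OPTIONALLY_ENCLOSED_BY='\\\"';"
).format(**values)

# Command 2: the external stage referencing the S3 bucket via the IAM role.
command_2 = (
    "create or replace stage \"PUBLIC\".{MY_S3_STAGE} "
    "file_format={MY_FILE_FORMAT} url='{MY_STORAGE_URL}' "
    "credentials=(aws_role='{MY_IAM_ROLE}');"
).format(**values)

print(command_1)
print(command_2)
```

Run the rendered statements in your Snowflake worksheet before starting the task; the stage name (REPLICATE_S3_STAGE in this sketch) must match the External stage name field exactly.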