Early Access: The content on this website is provided for informational purposes only in connection with pre-General Availability Qlik Products.
All content is subject to change and is provided without warranty.

What's new in Replicate May 2025?

This section describes the new and enhanced features in Replicate May 2025.

Information note: In addition to these release notes, customers who are not upgrading from the latest GA version are advised to review the release notes for all versions released since their current version.

Customers should also review the Replicate release notes in Qlik Community for information about the following:

  • Migration and upgrade
  • End of life/support features
  • Deprecated versions
  • Resolved issues
  • Known issues

New and enhanced endpoints

New Snowflake source endpoint

With the introduction of the new Snowflake source endpoint, customers can now leverage Snowflake's unique processing capabilities, and then replicate the processed data to any of the supported targets.

Using Snowflake as a source

Unified Snowflake target endpoint

In previous versions, customers needed to choose one of the three Snowflake endpoints according to their Snowflake cloud provider: Snowflake on AWS, Snowflake on Google, or Snowflake on Azure. This version unifies these endpoints into a single Snowflake target endpoint and adds the following functionality:

  • Support for connecting via a proxy server when replicating to Snowflake on Azure
  • Support for the Storage Integration Name property when replicating to Snowflake on Azure or Snowflake on AWS
  • Support for the use of internal Snowflake staging when replicating to Snowflake on Google
  • Support for connecting via a proxy server to the Snowflake database, to the external storage, or both
Information note: The standalone Snowflake target endpoints will be removed from the product in Replicate November 2025. Upgrading to Replicate November 2025 will automatically merge any standalone Snowflake target endpoints into the new unified Snowflake target endpoint, using your existing license.

Using Snowflake as a target

Databricks Delta enhancements

Primary key support

From this version, source primary key columns will be created with the RELY keyword in Databricks Delta. Although Databricks does not enforce primary key constraints, columns with the RELY keyword are assumed to have no duplicates and can therefore be used by Databricks for query optimization.
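The effect of the RELY keyword can be illustrated with a short sketch of the kind of DDL this feature implies. This is a hypothetical example for illustration only; the table, column, and constraint names are invented, and the exact statement Replicate generates may differ.

```python
# Sketch: the kind of Databricks Delta DDL implied by creating primary key
# columns with RELY. Names here are illustrative, not Replicate's output.
def build_create_table(table, columns, pk_columns):
    """Build a CREATE TABLE statement whose primary key constraint carries
    RELY, telling the Databricks optimizer it may assume the key has no
    duplicates (the constraint itself is not enforced)."""
    cols = ", ".join(f"{name} {dtype}" for name, dtype in columns)
    pk = ", ".join(pk_columns)
    return (
        f"CREATE TABLE {table} ({cols}, "
        f"CONSTRAINT pk_{table} PRIMARY KEY ({pk}) RELY)"
    )

ddl = build_create_table(
    "orders",
    [("order_id", "BIGINT"), ("amount", "DECIMAL(18,2)")],
    ["order_id"],
)
print(ddl)
# → CREATE TABLE orders (order_id BIGINT, amount DECIMAL(18,2),
#   CONSTRAINT pk_orders PRIMARY KEY (order_id) RELY)
```

Because the constraint is informational only, deduplication must still happen upstream; RELY simply lets the optimizer exploit the assumed uniqueness.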

Information note

Requirements:

  • Use Unity Catalog must be enabled in the endpoint settings
  • Databricks 14.2 or later

Iceberg reads (UniForm) support

A new Enable Iceberg read (UniForm) option has been added to the endpoint settings' Advanced tab. When this option is selected, the table will be created in a format that is readable by Iceberg consumers.

Information note

Requirements:

  • Use Unity Catalog must be enabled in the endpoint settings
  • Databricks 14.3 or later

Support for Liquid clustering using primary key columns

Liquid clustering is a replacement for table partitioning and is aimed at improving performance. To support this new functionality a new Enable Liquid clustering using primary key columns option has been added to the endpoint settings' Advanced tab.
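As a rough sketch of what this option does, liquid clustering appears in Databricks DDL as a CLUSTER BY clause in place of a PARTITIONED BY clause. The helper and names below are illustrative assumptions, not Replicate internals.

```python
# Sketch: liquid clustering replaces table partitioning with a CLUSTER BY
# clause on the primary key columns. Names are illustrative.
def add_liquid_clustering(create_stmt, pk_columns):
    """Append a CLUSTER BY clause (Databricks liquid clustering) using the
    source table's primary key columns."""
    return f"{create_stmt} CLUSTER BY ({', '.join(pk_columns)})"

stmt = add_liquid_clustering(
    "CREATE TABLE customers (id BIGINT, name STRING)", ["id"]
)
print(stmt)
# → CREATE TABLE customers (id BIGINT, name STRING) CLUSTER BY (id)
```

Unlike static partitioning, the clustering layout is maintained incrementally by Databricks, which is why it is positioned as a performance-oriented replacement.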

Information note

Requirements:

  • The source table must have Primary Key or Unique Index
  • Databricks 13.3 or later

Using Databricks Lakehouse (Delta) as a target

Data type enhancements

IBM DB2 for iSeries source endpoint

Support for the following data types has been added: BINARY-DECIMAL and ZONED-DECIMAL.

Databricks Delta target endpoint

In previous versions, BYTES was mapped to STRING. From this version, it is mapped to VARCHAR (length in bytes).

Support for the JSON subtype

  • The JSON subtype is now supported by the Google Cloud BigQuery endpoint with the ability to set precision on the target.

    Setting advanced connection properties

  • The JSON subtype can now be captured from MongoDB without needing to define a separate transformation. This will allow target endpoints that support the JSON subtype (currently Snowflake and Google Cloud BigQuery) to handle it accordingly on the target.

S3 on-premises support

This version introduces support for replicating data to on-premises systems using S3-compatible storage (for example, DELL ECS). To this end, an On-Premises Compatibility section with the required settings has been added to the Amazon S3 target endpoint's Advanced tab.

Setting advanced connection properties

SAP HANA target endpoint: Support for SSL

From this version, customers will be able to connect to their SAP HANA target using SSL.

Setting general connection properties

Kafka-based target endpoint enhancements

This version introduces the following enhancements to the Confluent Cloud, Amazon MSK, and Kafka target endpoints:

  • Tombstone support: A new Send tombstone on DELETE option has been added to the Advanced tab. When this option is selected, only the message key is populated; the message itself is null, indicating that the item has been deleted. This helps consumers detect that a DELETE operation has been performed.
  • Proxy support: Customers can now connect to Confluent Schema Registry via a proxy server.
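The tombstone convention above can be sketched from the consumer's side. The message shape and function below are illustrative assumptions about how a consumer might maintain a key/value view of the replicated table; they are not part of Replicate or the Kafka client API.

```python
# Sketch: how a consumer can interpret tombstone messages produced by the
# "Send tombstone on DELETE" option. A tombstone carries the record key
# with a null (None) value; the message shape here is illustrative.
def apply_message(state, key, value):
    """Apply one message to an in-memory key/value view of the table."""
    if value is None:          # tombstone: the source row was deleted
        state.pop(key, None)
    else:                      # insert or update: keep the latest value
        state[key] = value
    return state

table = {}
apply_message(table, "42", {"name": "Ada"})   # INSERT arrives as key + value
apply_message(table, "42", None)              # DELETE arrives as a tombstone
print(table)  # → {}
```

Tombstones also allow Kafka log compaction to eventually drop the deleted key entirely, so downstream state stays consistent with the source.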

Removal of the Azure Gen1 storage option from the Microsoft Azure ADLS and Microsoft Azure HDInsight endpoints

From this version, the Azure Gen1 storage option will no longer be available for the Microsoft Azure ADLS and Microsoft Azure HDInsight endpoints. After upgrading to Replicate May 2025, endpoints configured with Azure Gen1 storage will be automatically switched to Azure Gen2 storage, and customers will need to provide the required connection parameters.

PostgreSQL-based source endpoints: Partitioned tables now supported by default

Replication (Full Load and CDC) of partitions and sub-partitions (and sub-sub partitions) from PostgreSQL-based data sources is now supported by default. Consequently, the Support partitioned tables in CDC option has been removed from the endpoint settings' Advanced tab in all PostgreSQL-based source endpoints.

Google Cloud BigQuery target endpoint: Support for data truncation error handling

The Google Cloud BigQuery target endpoint now supports data truncation error handling. When a data truncation error occurs, you can choose whether to log the record to the exceptions control table (default), ignore the record, suspend the table, or stop the task.

Data Errors

SAP HANA source endpoint enhancements

Support for accessing SAP HANA via SAP Application Server

In previous versions, it was only possible to access the SAP HANA database directly. From this version, customers can choose whether to access SAP HANA directly or via SAP Application Server. Accessing SAP HANA via SAP Application Server is especially beneficial for customers with an SAP Runtime License, as it allows them to use the SAP HANA source endpoint with log-based CDC without requiring direct access to SAP HANA.

Support for "Full record" mode when using trigger-based CDC

This version introduces a new Record mode field in the endpoint settings' Advanced tab, with the following options:

  • Primary Key only: In this mode, for each updated record, only the Primary Key values are captured and stored. The stored Primary Keys are then used to retrieve the data from the source. This mode uses less storage, but more memory to process the data being retrieved.
  • Full record: In this mode, for each captured table, the entire data record is retrieved and stored in a "shadow" table. This mode provides a full history of changes with all values, including the before-images of each UPDATE and DELETE operation. Best practice is to use this mode as it provides the following benefits:

    • Improved accuracy of the latency calculation in the Replicate monitor.
    • Does not require joining SELECT statements to retrieve the full data, which reduces the load on the SAP HANA system.
    • Soft deletes on the target are supported, as a full history of all the data is available.
    • Significantly fewer limitations than Primary Key only mode.

    This mode uses more storage on the SAP HANA system, but requires less memory to process the data being retrieved.
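The storage/memory trade-off between the two record modes can be sketched as follows. The shadow-table structures and function names are illustrative only; they are not Replicate's actual schema.

```python
# Sketch contrasting the two trigger-based CDC record modes described above.
# The change-log structures are illustrative, not Replicate internals.
def capture_pk_only(change_log, pk):
    """Primary Key only: store just the key. The data itself is fetched
    from the source later (a join-back: less storage, more read memory)."""
    change_log.append({"pk": pk})

def capture_full_record(change_log, pk, before, after):
    """Full record: store the entire before/after images in a "shadow"
    table (more storage, but no join-back and a full change history)."""
    change_log.append({"pk": pk, "before": before, "after": after})

pk_log, full_log = [], []
capture_pk_only(pk_log, 7)
capture_full_record(full_log, 7, {"qty": 1}, {"qty": 2})
print(full_log[0]["before"])  # → {'qty': 1}
```

The before-image kept by full record mode is what makes soft deletes and accurate latency calculation possible without re-querying the source.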

Using SAP HANA as a source

Newly supported endpoint and driver versions

Newly supported endpoint versions

The following source and target endpoint versions are now supported:

  • MariaDB (on-premises) and Amazon RDS for MariaDB: 11.4
  • MySQL (on-premises), MySQL Percona, Google Cloud SQL for MySQL, Amazon RDS for MySQL, and Microsoft Azure Database for MySQL - Flexible Server: 8.4
  • PostgreSQL (on-premises), Google Cloud SQL for PostgreSQL, Amazon RDS for PostgreSQL, and Microsoft Azure Database for PostgreSQL - Flexible Server: 17.x

Newly supported driver versions

The following driver versions are now supported:

IBM DB2 for LUW and IBM DB2 for z/OS: IBM Data Server Client 11.5.9

Teradata Database ODBC Driver: 20.00

End-of-life endpoints and driver versions

End-of-life endpoints

Support for Microsoft Azure Database for MySQL and Microsoft Azure Database for PostgreSQL, which have been officially retired by Microsoft, has been discontinued.

End-of-life endpoint versions

Support for IBM DB2 for iSeries 7.2, which is EOL, has been discontinued.

End-of-life driver versions

Support for IBM Data Server Client 11.5.6 has been discontinued.

Server-side enhancements

Notification suppression

This version introduces the ability to suppress task notifications based on conditions and event duration.

Suppression conditions

You can now suppress task notifications based on any of the following conditions:

  • Last notification was sent less than [N] [minutes|hours] ago
  • Error message [contains|doesn't contain] ______
  • Task is still trying to recover
  • The expression (created using the Expression Builder) conditions are met
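The first three conditions can be sketched as a simple predicate. The function, parameter names, and thresholds below are illustrative assumptions, not Replicate's configuration model (the expression-based condition is omitted).

```python
import time

# Sketch: evaluating the suppression conditions listed above before sending
# a task notification. Names and thresholds are illustrative.
def should_suppress(last_sent_at, min_interval_s, error_msg,
                    suppress_if_contains, task_recovering):
    """Return True if the notification should be suppressed."""
    if last_sent_at is not None and time.time() - last_sent_at < min_interval_s:
        return True                      # last notification sent too recently
    if suppress_if_contains and suppress_if_contains in error_msg:
        return True                      # error message matches the pattern
    if task_recovering:
        return True                      # task is still trying to recover
    return False

print(should_suppress(None, 600, "timeout connecting", "timeout", False))
# → True (error message contains the pattern)
```

Note that the real feature also supports the inverse match ("doesn't contain") and arbitrary expressions via the Expression Builder.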

Suppress the notification

Event duration suppression

In addition to the general suppression conditions, you can also suppress notifications based on the duration of a particular event (latency, memory utilization, or disk utilization). For example, you can configure Replicate to send a notification when memory utilization exceeds 200 MB, but only if the excessive memory utilization lasts longer than 5 minutes.
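The duration check from the example above (memory over 200 MB for more than 5 minutes) can be sketched like this. The sampling format and helper are illustrative, not Replicate internals.

```python
# Sketch: duration-based suppression. A notification fires only after the
# condition (e.g. memory above a threshold) has held continuously for the
# configured duration. Sample values are illustrative.
def check_event(samples, threshold_mb, min_duration_s):
    """samples: list of (timestamp_s, memory_mb) pairs, in time order.
    Return True if memory stayed above threshold_mb for min_duration_s."""
    start = None
    for ts, mb in samples:
        if mb > threshold_mb:
            if start is None:
                start = ts                 # condition just became true
            if ts - start >= min_duration_s:
                return True                # held long enough: notify
        else:
            start = None                   # condition broke: reset the clock
    return False

samples = [(0, 250), (120, 260), (360, 270)]   # memory > 200 MB for 6 min
print(check_event(samples, 200, 300))  # → True
```

Resetting the clock whenever the condition breaks is what distinguishes a sustained event from a brief spike.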

Define which changes of status trigger the notification

Log rollover and cleanup defaults for new installations

In previous versions, log file management was not enabled by default. From this version, after a new installation (as opposed to an upgrade), the Log File Management tab in the server settings will have the following defaults:

Automatic Rollover

Roll over the log if the log file is larger than (MB): 100

Roll over the log if the log file is older than (days): 7

Automatic Cleanup

Delete log files that are older than (days): 45
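The default policy above amounts to two simple checks. The helper functions below are an illustrative sketch of that policy, not Replicate's actual log-management code.

```python
# Sketch of the documented default rollover/cleanup policy for new
# installations: roll over at 100 MB or 7 days; delete after 45 days.
ROLLOVER_MAX_MB = 100
ROLLOVER_MAX_AGE_DAYS = 7
CLEANUP_AFTER_DAYS = 45

def needs_rollover(size_bytes, age_days):
    """True if the active log should be rolled over to a new file."""
    return (size_bytes > ROLLOVER_MAX_MB * 1024 * 1024
            or age_days > ROLLOVER_MAX_AGE_DAYS)

def needs_cleanup(age_days):
    """True if a rolled-over log file should be deleted."""
    return age_days > CLEANUP_AFTER_DAYS

print(needs_rollover(150 * 1024 * 1024, 1))  # → True (size over 100 MB)
print(needs_cleanup(50))                     # → True (older than 45 days)
```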

Setting automatic roll over and cleanup

Removal of the license request option

The option to request a Replicate license via the Replicate Console has been removed.

Features introduced in Replicate November 2024 Service Release 1

The following section describes the features introduced in Replicate November 2024 Service Release 1.

MongoDB source endpoint: Expanded support for mutual authentication

This version introduces the ability to configure mutual authentication for all authentication methods (except None).

Setting general connection properties

SAP OData source endpoint: Support for the Client identifier property

The Client identifier setting was added to the General tab in the endpoint settings.
