Onboarding data
The first step of creating a data pipeline in a Qlik Open Lakehouse project is onboarding the data. This process involves transferring data from the source and storing datasets in optimized Iceberg tables. Changes from the data sources are continuously applied to the storage tables in efficient mini-batches.
Onboarding is set up in a single operation, but is performed in two steps.
-
Landing the data
This involves transferring the data continuously from the on-premises data source to a landing area, using a Landing data task.
Landing data from data sources
You can also land data to a lakehouse, where the data is landed to S3 file storage.
-
Storing datasets
This involves reading the initial load of landing data or incremental loads, and applying the data in read-optimized format using a Storage data task.
When you have onboarded the data, you can use the stored datasets in several ways.
-
You can use the datasets in an analytics app.
-
You can mirror data to Snowflake by adding a Mirror data task directly to the Storage data task.
-
You can transform data in Snowflake by creating a cross-project pipeline that consumes data from your onboarding project.
Onboard data
You start onboarding data in a project. Datasets will be stored in the S3 location defined in the project. For more information about projects, see Creating a data pipeline project.
-
In your project, click Create and then Onboard data.
Tip note: You can also click on an existing source in the project, and then click Onboard data.
-
Add a Name and Description for the onboarding.
Click Next.
-
Select the source connection.
You can select an existing source connection or create a new connection to the source.
For more information, see Setting up connections to data sources.
Click Next.
-
Select data to load.
For more information, see Selecting data.
Click Next.
The Settings step is displayed, where you can select the update method and history settings.
-
Select which method to use to update data in Update method:
-
Change data capture (CDC)
If your data contains tables that do not support CDC, or views, two data pipelines will be created: one pipeline with all tables supporting CDC, and another pipeline with all other tables and views using Reload and compare.
-
Reload and compare
-
-
In History, select whether you want to replicate the history of previous data in addition to the current data.
-
Click Next when you are ready.
-
Preview the data tasks that are created to onboard data, and rename them if you prefer.
Tip note: The names are used when naming database schemas in the Storage data task. Consider using unique names to avoid conflicts with data tasks in other projects that use the same data platform.
-
Select if you want to open any of the data tasks that are created, or return to the project.
When you are ready, click Finish.
-
The onboarding data tasks are now created. To start replicating data you need to:
-
Prepare and run the Landing data task.
For more information, see Landing data from data sources.
-
Prepare and run the Storage data task.
For more information, see Storing datasets.
Selecting data
You can select specific tables or views, or use selection rules to include or exclude groups of tables.
Use % as a wildcard when defining selection criteria for schemas and tables.
-
%.% defines all tables in all schemas.
-
Public.% defines all tables in the schema Public.
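The % wildcard behaves like the SQL LIKE wildcard: it matches any run of characters, while everything else in the pattern must match literally. As a minimal sketch of this matching logic (the function name, sample table names, and case-insensitive comparison are illustrative assumptions, not part of the product):

```python
import re

def matches(pattern: str, name: str) -> bool:
    """Return True if a schema.table name matches a %-wildcard pattern.

    Each % matches any run of characters (including none); all other
    characters must match literally. Case-insensitive comparison is an
    assumption made here for illustration.
    """
    regex = "^" + "".join(".*" if c == "%" else re.escape(c) for c in pattern) + "$"
    return re.match(regex, name, re.IGNORECASE) is not None

tables = ["Public.Orders", "Public.Customers", "Sales.Invoices"]
print([t for t in tables if matches("%.%", t)])       # all three tables
print([t for t in tables if matches("Public.%", t)])  # only the Public schema
```

With these sample names, %.% selects every table, while Public.% narrows the selection to the Public schema.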
The selection criteria give you a preview based on your selections.
You can now either:
-
Create a rule to include or exclude a group of tables based on the selection criteria.
Click Add rule from selection criteria to create a rule, and select either Include or Exclude.
You can see the rule under Selection rules.
-
Select one or more datasets, and click Add selected datasets.
You can see the added datasets under Explicitly selected datasets.
Selection rules only apply to the current set of tables and views, not to tables and views that are added in the future.