
Navigating the ML deployment interface

When you open an ML deployment, you can manage and monitor it, and use it to create predictions on datasets.

Open an ML deployment from the catalog. There are navigation options for the following:

Deployment overview

The Deployment overview shows the features used in the model training and details for the deployment.

Overview of the ML deployment, showing the Model overview pane

If the default model in the deployment is inactive, you are notified in a banner at the top of the screen. If you have the correct permissions, you can activate the model by clicking Activate model. For more information, see Using multiple models in your ML deployment.

Deployable models

The Deployable models pane is where you can manage model aliases and configure which models are used for predictions.

For more information, see Using multiple models in your ML deployment.

Deployable models pane in the ML deployment interface

Batch predictions

In Batch predictions, you can manage and run batch predictions using the ML deployment. Click Create prediction to create a prediction configuration, from which you run batch predictions. You can have several prediction configurations for an ML deployment.

You can use the Actions menu (three-dot menu) in the table to:

  • View details for prediction configurations

  • Run predictions from existing configurations

  • Edit and delete configurations

  • Create, edit, and delete prediction schedules for an existing configuration

Batch predictions pane with an overview and the Actions menu expanded

If you select Edit prediction configuration, the Prediction configuration pane opens.

Batch predictions pane with the Prediction configuration side pane open, showing configuration options and dataset schemas

Real-time predictions

The Real-time predictions pane gives you access to the real-time prediction endpoint in the Machine Learning API. The pane is visible when the default model in the ML deployment is activated for making predictions.

For information about creating real-time predictions, see Creating real-time predictions.

Information note

The standalone real-time predictions API is deprecated and has been replaced by the real-time prediction endpoint in the Machine Learning API. The functionality itself is not being deprecated; for future real-time predictions, use the real-time prediction endpoint in the Machine Learning API.
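
For orientation, here is a minimal sketch of calling the real-time prediction endpoint from Python. The tenant URL, API key, deployment ID, feature names, and exact payload shape below are placeholder assumptions for illustration only; see the Machine Learning API reference for the authoritative request and response contract.

```python
import requests

# Minimal sketch only. Every value below is a placeholder assumption;
# consult the Machine Learning API reference for the actual contract.
TENANT_URL = "https://your-tenant.us.qlikcloud.com"  # hypothetical tenant URL
API_KEY = "<your-api-key>"                           # Qlik Cloud API key
DEPLOYMENT_ID = "<deployment-id>"                    # ID of the ML deployment

response = requests.post(
    f"{TENANT_URL}/api/v1/ml/deployments/{DEPLOYMENT_ID}/realtime-predictions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        # One entry per record to score; keys must match the features the
        # model was trained on (these two feature names are invented).
        "rows": [
            {"tenure_months": 12, "monthly_charges": 70.5},
        ],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())  # predicted values for each submitted row
```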

Real-time predictions pane

Operations monitoring

In Operations monitoring, you can monitor usage information for the ML deployment. You can view details about how the ML deployment is being used, such as how many prediction events succeed or fail, and how prediction events are typically triggered.

For more information, see Monitoring deployed model operations.

Operations monitoring pane in AutoML, with an embedded analysis of operations for an ML deployment

Data drift monitoring

In Data drift monitoring, you can monitor data drift for the ML deployment.

With data drift monitoring, you can assess how the distribution of feature values in the data used for predictions has changed relative to the data the model was trained on. When significant drift is observed, it is recommended that you retrain or reconfigure your model to account for the latest data, which may indicate new patterns in data trends.

For more information, see Monitoring data drift in deployed models.
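
AutoML calculates feature drift for you in this pane. Purely to illustrate the underlying idea, the sketch below computes the population stability index (PSI), one common way to quantify how much a feature's distribution has shifted between training data and recent prediction data. This is a generic example, not Qlik's documented calculation, and the thresholds in the comments are conventional rules of thumb rather than AutoML's settings.

```python
import numpy as np

def population_stability_index(train: np.ndarray, recent: np.ndarray, bins: int = 10) -> float:
    """Quantify how much one numeric feature's distribution has shifted.

    PSI = sum over bins of (p - q) * ln(p / q), where p and q are the
    fractions of training and recent values falling in each bin. Common
    rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant.
    """
    # Build bin edges from the training distribution so both samples are
    # compared on the same grid; widen the outer edges to catch outliers.
    edges = np.histogram_bin_edges(train, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf
    p = np.histogram(train, bins=edges)[0] / len(train)
    q = np.histogram(recent, bins=edges)[0] / len(recent)
    # Clip empty bins to avoid division by zero and log(0).
    p = np.clip(p, 1e-6, None)
    q = np.clip(q, 1e-6, None)
    return float(np.sum((p - q) * np.log(p / q)))

# A shifted feature yields a clearly larger PSI than an unchanged one.
rng = np.random.default_rng(0)
training_values = rng.normal(50, 10, 5000)
print(population_stability_index(training_values, rng.normal(50, 10, 5000)))  # near 0
print(population_stability_index(training_values, rng.normal(58, 12, 5000)))  # much larger
```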

Data drift monitoring pane in AutoML, with an embedded analysis showing feature drift calculations for a deployed model: feature drift over time, value distributions, and a comparison of feature drift and importance in the same chart
