
Monitoring performance and usage of deployed models

As you use your ML deployment to generate predictions, you can monitor the performance of the source model by analyzing data drift over time. You can also view details about how the deployment is being used for predictions, such as how predictions have been triggered and the rate of prediction failures.

Monitoring of deployed models and ML deployments is performed with embedded analytics.

Data drift monitoring

With data drift monitoring, you can analyze how the input data for model predictions has changed over time, and how it differs from the original training dataset. With these tools, you can determine the point at which your model needs to be re-trained or replaced due to significant feature drift.
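
AutoML computes and visualizes drift metrics for you, but as a conceptual illustration of what measuring feature drift involves, the following is a minimal Python sketch of the Population Stability Index (PSI), one common drift statistic. This is not Qlik's implementation; the function, the binning strategy, and the 0.2 threshold mentioned in the comments are illustrative assumptions.

```python
import numpy as np

def population_stability_index(train_values, recent_values, bins=10):
    """Compare a feature's recent prediction inputs against its training
    distribution. Higher PSI means more drift; a common rule of thumb
    (an assumption here, not a Qlik threshold) is that PSI > 0.2
    suggests significant drift worth investigating."""
    # Derive bin edges from the training data so both samples share bins;
    # widen the outer edges so out-of-range recent values are still counted.
    edges = np.histogram_bin_edges(train_values, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf
    train_pct = np.histogram(train_values, bins=edges)[0] / len(train_values)
    recent_pct = np.histogram(recent_values, bins=edges)[0] / len(recent_values)
    # Clip to avoid log(0) in empty bins.
    train_pct = np.clip(train_pct, 1e-6, None)
    recent_pct = np.clip(recent_pct, 1e-6, None)
    return float(np.sum((recent_pct - train_pct) * np.log(recent_pct / train_pct)))

# Example: a recent sample whose mean and spread have shifted
# relative to the training data.
rng = np.random.default_rng(42)
train = rng.normal(loc=0.0, scale=1.0, size=10_000)
recent = rng.normal(loc=0.5, scale=1.2, size=2_000)
print(f"PSI = {population_stability_index(train, recent):.3f}")
```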

For more information about data drift monitoring in AutoML, see Monitoring data drift in deployed models.

For general information about data drift, see Data drift.

Operations monitoring

As your ML deployment is used to create predictions, it is helpful to monitor details about its operations. With operations monitoring in AutoML, you can:

  • View the number of requests, predictions, and prediction failures for the deployment.

  • Analyze prediction events by trigger (for example, how many were initiated manually versus run on a schedule); a conceptual sketch of this kind of aggregation follows the list.

  • View a detailed log showing each prediction event along with key details.
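
The event log and counters above are surfaced in the embedded analytics, but as a conceptual sketch of the kind of aggregation operations monitoring performs, here is a minimal Python example that tallies prediction events by trigger and computes a failure rate. The PredictionEvent fields and trigger names are hypothetical, not Qlik's actual log schema.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class PredictionEvent:
    """One row of a hypothetical prediction event log."""
    trigger: str       # e.g. "manual" or "scheduled" (assumed labels)
    succeeded: bool
    rows_predicted: int

def summarize(events):
    """Aggregate a deployment's prediction events: counts per trigger,
    total rows predicted, and the overall failure rate."""
    by_trigger = Counter(e.trigger for e in events)
    failures = sum(1 for e in events if not e.succeeded)
    total_rows = sum(e.rows_predicted for e in events if e.succeeded)
    failure_rate = failures / len(events) if events else 0.0
    return by_trigger, failures, total_rows, failure_rate

# Example log with one failed scheduled run.
log = [
    PredictionEvent("scheduled", True, 1200),
    PredictionEvent("scheduled", False, 0),
    PredictionEvent("manual", True, 300),
]
by_trigger, failures, rows, rate = summarize(log)
print(f"By trigger: {dict(by_trigger)}")  # {'scheduled': 2, 'manual': 1}
print(f"Failures: {failures}, rows predicted: {rows}, failure rate: {rate:.0%}")
```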

For more information, see Monitoring deployed model operations.
