
Deploying and approving your model

The next step is to deploy your model, and then approve it so that it can generate predictions.

Deploying a model

Deploying a model allows you to use it to generate predictions on new data.

The model refinement process is different for each project you work on. Once you have a model that meets the criteria for your use case, you can deploy it. This will create an ML deployment, which is available in the catalog.

For more information about deploying your models in Qlik AutoML, see Working with ML deployments.

Information note: AutoML is continually improving its model training processes. Your model metrics, as well as the algorithm of the model that you deploy, might not be identical to those shown in the images on this page.
  1. Switch back to the Models tab in the experiment.

  2. Click the three-dot menu next to the top-performing model from v3.

  3. Click Deploy.

  4. Type a name for your deployment, such as Customer churn deployment.

    Alternatively, keep the default deployment name.

  5. If needed, adjust the space, description, and tags.

  6. Click Deploy.

Image: Deploying a model in Qlik AutoML by selecting the Deploy option for the chosen model.

Your new ML deployment will now be available in the catalog.

Click Open, or navigate back to the catalog and open the ML deployment.
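If you prefer to script this step, Qlik Cloud also exposes REST APIs for machine learning. The sketch below is illustrative only: the endpoint path, payload fields, and environment variable names are assumptions made for this example, not the documented API contract, so check the Qlik developer documentation for the exact details before using it.

```python
import os
import requests

# Hypothetical sketch: the tenant URL, API key variables, endpoint path, and
# payload fields are assumptions for illustration, not the documented Qlik Cloud ML API.
TENANT_URL = os.environ["QLIK_TENANT_URL"]   # e.g. https://your-tenant.us.qlikcloud.com
API_KEY = os.environ["QLIK_API_KEY"]         # API key generated in your tenant

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

# Assumed payload: a name for the deployment and the id of the trained model
# you want to deploy (taken from the experiment).
payload = {
    "name": "Customer churn deployment",
    "modelId": "<model-id-from-your-experiment>",
}

response = requests.post(
    f"{TENANT_URL}/api/v1/ml/deployments",  # assumed path; verify against the Qlik API reference
    headers=headers,
    json=payload,
    timeout=30,
)
response.raise_for_status()
print("Created deployment:", response.json())
```

Whether you deploy from the UI or from a script, the result is the same: an ML deployment object in the catalog that you open in the next step.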

Approving your model

At the top of the ML deployment interface, notice that the model in the deployment is in the Requested state. This means that a model approver needs to activate it before it can generate predictions.

Image: Deployment overview in the ML deployment interface. A toggle switch at the top shows that the model in the deployment is in the 'Requested' state and not yet approved.
  1. Use the toggle switch at the top of the ML deployment to activate the model.

  2. In the dialog that opens, click Activate to confirm.

The model status should now say Active.
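If you want to verify the approval programmatically, a status check along the following lines is possible. Again, this is a hedged sketch: the endpoint path and the response field name are assumptions for illustration, not the documented API, so confirm them in the Qlik developer documentation.

```python
import os
import requests

# Hypothetical status check; the endpoint path and response fields are
# assumptions for illustration, not the documented Qlik Cloud ML API.
TENANT_URL = os.environ["QLIK_TENANT_URL"]
API_KEY = os.environ["QLIK_API_KEY"]

deployment_id = "<your-deployment-id>"

response = requests.get(
    f"{TENANT_URL}/api/v1/ml/deployments/{deployment_id}",  # assumed path
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()

deployment = response.json()
# Assumed field name: the deployed model's approval status, e.g. "requested" or "active".
print("Model status:", deployment.get("modelStatus"))
```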

For more information about approving models, see Approving deployed models.

You can now proceed to creating predictions with your ML deployment. Move to the next topic in this tutorial.
