
Working with experiments

Load historical data into an automated machine learning experiment and train a model to analyze a business problem and predict outcomes.

You can create and edit experiments in personal or shared spaces.

Workflow

Before you create an automated machine learning experiment in Qlik Cloud Analytics, you need to have a well-defined machine learning question and a suitable training dataset available in Catalog. For more information, see Defining machine learning questions and Getting your dataset ready for training.
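Before training, it helps to sanity-check the training dataset against the basics covered in Getting your dataset ready for training. As a rough illustration, the kind of checks involved can be sketched in plain Python; the dataset shape, the "churned" target column, and the row-count threshold below are hypothetical examples, not Qlik requirements:

```python
# Illustrative pre-training sanity checks on historical data, independent of
# Qlik tooling. The rows and the "churned" target column are hypothetical.
def check_training_data(rows, target):
    """Return a list of potential problems with a training dataset."""
    if not rows or target not in rows[0]:
        return [f"target column '{target}' is missing"]
    problems = []
    values = [row[target] for row in rows]
    if any(v is None for v in values):
        problems.append("target column contains null values")
    if len(set(values)) < 2:
        problems.append("target column has fewer than two distinct classes")
    if len(rows) < 1000:  # arbitrary example threshold, not a Qlik limit
        problems.append("fewer than 1000 rows of history")
    return problems

# Hypothetical sample rows.
rows = [
    {"tenure_months": 3, "monthly_spend": 20.0, "churned": "yes"},
    {"tenure_months": 48, "monthly_spend": 75.5, "churned": "no"},
    {"tenure_months": 12, "monthly_spend": 31.0, "churned": "yes"},
    {"tenure_months": 30, "monthly_spend": 55.0, "churned": "no"},
]
print(check_training_data(rows, "churned"))
```

A dataset that passes checks like these still needs a well-defined machine learning question; the checks only catch mechanical problems, not a poorly framed target.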

The following steps describe an experiment workflow.

  1. Create your experiment

    Create a new experiment in Qlik Sense. Add it to a shared space if you want to work collaboratively.

    Creating experiments

  2. Configure your experiment

    Select a target to make predictions on and features to support the prediction.

    Configuring experiments

  3. Start the training

    Start the training of your first experiment version.

    Training experiments

  4. Refine the model

    During the training, suitable machine learning algorithms are applied to the training data and performance metrics are generated. Review the metrics to see how you can refine the model.

    Reviewing models

    Adjust parameters such as features and algorithms, then train new versions of the experiment until you have a good model.

    Refining models

  5. Deploy the model

    When you have a good model, it’s time to deploy it and start making predictions.

    Working with ML deployments
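The train, review, and refine loop in steps 3 and 4 is what the automated training performs for you: several candidate algorithms are fitted to the training data and scored so the best can be kept. As a conceptual analogy only, here is a stdlib-only Python sketch of that loop; the two toy models and the generated data are hypothetical stand-ins, not the algorithms or metrics Qlik actually uses:

```python
# A stdlib-only sketch of the train/review/refine loop that automated ML
# performs: fit several candidate models, score each on held-out data, and
# keep the best. The toy models and data are hypothetical stand-ins.
import random

def majority_model(train):
    # Baseline: always predict the most common training label.
    labels = [y for _, y in train]
    majority = max(set(labels), key=labels.count)
    return lambda x: majority

def threshold_model(train):
    # Predict "yes" when the single numeric feature is below the training mean.
    mean = sum(x for x, _ in train) / len(train)
    return lambda x: "yes" if x < mean else "no"

def accuracy(model, data):
    # Fraction of held-out rows the model labels correctly.
    return sum(model(x) == y for x, y in data) / len(data)

random.seed(0)
# Hypothetical data: churn is likelier for short tenures.
data = [(t, "yes" if t < 12 else "no")
        for t in (random.randint(1, 48) for _ in range(200))]
train, holdout = data[:150], data[150:]

candidates = {"majority": majority_model, "threshold": threshold_model}
scores = {name: accuracy(fit(train), holdout) for name, fit in candidates.items()}
best = max(scores, key=scores.get)
print(scores, best)
```

In the real workflow, the performance metrics appear in the experiment's results for you to review, and refining means adjusting the target, features, or algorithms and training another version rather than editing code.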

Requirements and permissions

For information about the user permissions required for working with ML deployments and predictions, see Access controls for ML experiments.
