
App performance evaluation

Performance evaluation is a feature of Qlik Sense SaaS that lets you run a tailored evaluation as you develop your app and presents simple and understandable metrics such as response times for public sheets and objects. The performance evaluator provides insights on which public sheets and objects to focus on when optimizing performance, and it lets you compare specific performance metrics between evaluated versions of your app.

Information note: Only published sheets are factored into the evaluation. This prevents sheets that are in development from affecting the performance evaluation results.

Who should use performance evaluation

Performance evaluation is designed for app developers on Qlik Sense SaaS Enterprise and Business. To run a performance evaluation on an app, you must be the app owner, or be a member of the space that contains the app with one of the following space roles: Can edit, Can edit data in apps, Can manage, Is admin, or Can operate (in managed spaces).

How to use performance evaluation

There are two ways to use performance evaluation:

  1. To evaluate your app's performance.

  2. To determine whether changes to your app affected its performance.

When you run a performance evaluation, it examines response times for all public sheets and objects in the app to identify which objects to focus on when optimizing performance. The results are provided as guidance and are not guaranteed to reflect actual user-perceived performance in production environments.

To learn about the types of resources that can affect your app's performance, see Optimizing app performance.

Information note: Some variation in the performance evaluation results is expected. Because the evaluation runs in a cloud-based environment, response times vary with latency and bandwidth fluctuations. To minimize this variation when comparing two performance evaluations, run them as close together in time as possible.

Running performance evaluations on apps

To run a performance evaluation, you must have reload permission for the app. You can run a performance evaluation from your activity centers or from app details.

Running performance evaluations from your activity centers

  1. In your activity center, click More on the app you want to evaluate.

  2. Select Evaluate performance.

    You will get a notification when the evaluation has completed.

Running performance evaluations from app details

  1. In your activity center, click More on the app you want to evaluate.

  2. Select Details, then click Performance evaluation.

  3. Click Evaluate now.

Viewing evaluation results

Depending on whether you view a single performance evaluation or compare two performance evaluations, the results tables differ.

Information note: App performance evaluation results are kept for 90 days.

Viewing a single performance evaluation

  1. To view the performance evaluation, click More on the app.

  2. Select Details, then click Performance evaluation. All evaluations are listed in the performance evaluations table.

    Tip note: You can also go to the results by clicking View results in the notification.
  3. Click View on the evaluation you want to view.

    Screenshot: the performance evaluation table showing several performance evaluation runs
  4. The results window provides an overview of the performance evaluation results.

    Tip note: See Performance evaluation information for details on the specific metrics.
    Screenshot: the performance evaluation details window showing the overview tab
  5. Select the Results tab to view more specific performance information.

    Screenshot: the Results tab showing a single performance evaluation table
  6. Click the down arrow to show the details for each row. You can also click the down arrow for each sheet to show the objects with the longest load times.

  7. Click the new tab icon to open the app containing the object. The specific object is highlighted on the sheet.

Comparing performance evaluations

  1. To compare performance evaluations, click More on the app.

  2. Select Details, then click Performance evaluation. All evaluations are listed in the performance evaluations table.

  3. Select the two evaluations you want to compare, then click Compare.

    Screenshot: the performance evaluation table with two rows selected and the Compare button shown
  4. The evaluation results open in a window. See Performance evaluation information for details on the specific metrics. The Info tab shows the metrics for each of the selected performance evaluations, along with the difference between them.

    Screenshot: the comparison view for a performance evaluation
  5. Select the Results tab to view more specific performance information. For each row, the absolute and relative change is shown. For example, if a sheet loaded in 800 ms in the first evaluation and in 600 ms in the second, the absolute change is -200 ms and the relative change is -25%.

    You can sort on the Absolute change and Relative change columns. Click a column heading to sort in ascending or descending order.

    Screenshot: the details view of a performance evaluation comparison
  6. Click the down arrow to show the details for each row.

Information note: When comparing two evaluations, differences are highlighted only when they are significant enough to indicate a degradation or improvement in performance.

Performance evaluation information

The metrics are obtained either from the app metadata, or they are measured during the performance evaluation.

Information note: Click the Download log button in the performance evaluation window to download a log file for the selected evaluation.

Info tab

The Info tab shows basic app information for the selected version.

Status

  • Shows the status of the performance evaluation.

    • Ready to be reviewed - the performance evaluation completed successfully.

    • Warning - the performance evaluation completed but some results are missing or inaccurate.

    • Failed to evaluate - the performance evaluation did not complete successfully and results are missing or inaccurate.

App size

  • Source of metric: App metadata

  • Shows the total size of the app data model in memory, with no initial selections.

Number of rows

  • Source of metric: App metadata

  • Shows the total number of rows contained in the tables of the app's data model.

Public sheets in app

  • Source of metric: App metadata

  • Shows the total number of public sheets in the app.

Public objects in app

  • Source of metric: App metadata

  • Shows the total number of public objects in the app.

    Information note: In the performance evaluation results, the public sheets are not counted as public objects.

Not evaluated

  • Source of metric: Measured

  • Lists all objects that could not be completely evaluated. Typical reasons include a calculation condition that has not been met (see the sketch below for an example) or an object type that is not supported for evaluation. For example, the behavior of customer-developed extensions is not known to the app evaluator, and they may not be evaluated correctly.
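
A calculation condition tells the engine to skip calculating an object until the condition is met, which also means the evaluator cannot measure that object. As a minimal, hypothetical sketch (the field name OrderID is a placeholder, not taken from this page), a condition such as the following might be entered in the object's data handling settings:

    // Calculate the chart only when fewer than 100,000 distinct orders remain
    // after selections; until then, the engine skips the calculation and the
    // evaluator reports the object as not evaluated.
    Count(DISTINCT OrderID) < 100000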

Warnings

  • Source of metric: Measured

  • Lists objects that have issues related to app development, which might need to be addressed. For example, an object that functions in a sheet but returns error codes, such as an object without measures or dimensions, is listed under Warnings. An object that returns a data page over a specified size is also listed here, with the warning Payload too large.

Critical Errors

  • Source of metric: Measured

  • Lists errors that stopped the evaluation from completing, along with tenant or app quota issues. These may include app evaluator errors or other infrastructure issues that prevent completion, for example if the quota for app evaluation is exceeded, or if the app exceeds the app evaluation size limit of 20 GB and cannot be opened.

Results tab

The Results tab provides more specific information about the performance evaluation.

Objects exhibiting caching problems

  • Source of metric: Measured

  • Lists objects that are not being cached efficiently. This is determined by loading each object twice: after an object has been loaded once, a faster response time is expected because the result set should have entered the cache. Improvements can potentially be made by adjusting the data model or expressions (see the sketch after this list). For more information, see best practices for data modeling and using expressions in visualizations.

  • For more information about general app optimization, see Optimizing app performance.
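
One common cause of poor cache reuse is an expression whose value changes on every evaluation, such as one built around Now(). As a minimal, hypothetical sketch (the variable and field names are placeholders), a moving date cutoff can instead be computed once per reload in the load script, so that the expression text and its result stay stable between reloads:

    // Compute the cutoff once at reload time instead of calling Now() in the
    // chart. ReloadTime() is fixed until the next reload, so the resulting
    // expression can be served from cache.
    LET vCutoffDate = Date(ReloadTime() - 30);

    // A chart expression can then reference the stable variable:
    // Sum({<OrderDate={">=$(vCutoffDate)"}>} Sales)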

Single-threaded objects

  • Source of metric: Measured

  • This section contains objects whose performance metrics indicate predominantly single-threaded processing during loading. If an object appears in this section and its response time is deemed too long for users, review the queries resulting from any expressions in the object for bottlenecks. Improvements can potentially be made by adjusting the data model or expressions (see the sketch after this list).

  • For more information about single-threaded performance, see Fields from different tables inside an aggregation table are avoided where possible.
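
When a chart expression combines fields stored in different tables, such as Sum(Quantity * UnitPrice) with Quantity and UnitPrice in separate tables, the engine must resolve the link between the tables at calculation time, which can limit parallelism. As a minimal, hypothetical sketch (all table, field, and file names are placeholders), the value can instead be precomputed in the load script:

    OrderLines:
    LOAD OrderID, ProductID, Quantity
    FROM [lib://DataFiles/OrderLines.qvd] (qvd);

    // Bring UnitPrice onto the fact table so the multiplication happens once,
    // at reload time, rather than in every chart calculation.
    Left Join (OrderLines)
    LOAD ProductID, UnitPrice
    FROM [lib://DataFiles/Products.qvd] (qvd);

    // Store the precomputed line value; charts can then use the single-table
    // expression Sum(LineValue) instead of Sum(Quantity * UnitPrice).
    Facts:
    LOAD *, Quantity * UnitPrice AS LineValue
    RESIDENT OrderLines;
    DROP TABLE OrderLines;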

Objects exceeding memory limit

  • Source of metric: Measured

  • This section contains objects that have reached a memory limit, with a corresponding error code. These may include objects that reached an engine object sandboxing limit, exceeded total engine memory, or hit a related memory boundary.

Public sheets by initial load time

  • Source of metric: Measured

  • Measurement of response time per sheet. These measurements are taken the first time the app is traversed, when each sheet is requested one by one. The values in this section represent a worst-case load time per sheet. For each sheet, you can view the top 5 slowest objects it contains by clicking the arrow icon to the right of the row. This gives you a quick breakdown of where time is spent while loading the sheet.

Cached sheet load time

  • Source of metric: Measured

  • Measurement of response time per sheet. After all sheets have been requested once, they should typically be cached. These measurements are taken the second time the app is traversed, when each sheet is requested one by one. Here too, you can retrieve an object-level breakdown of where time is spent by expanding a row using the button to the right.

Initial object load time

  • Source of metric: Measured

  • Measurement of response time per object. These measurements are taken the first time the app is traversed, when each object is requested one by one. The values in this section represent a worst-case load time per object.

  • For example, you can improve the use of caching by using master items or variables (a variable-based sketch follows). For more information, see Master items or variables used for expressions.
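
Reusing a single definition across many objects increases the chance that repeated calculations can be served from cache. As a minimal, hypothetical sketch (the variable and field names are placeholders), a shared expression can be defined once as a variable in the load script:

    // Define the expression text once; every object that references the variable
    // then uses identical expression text, which helps cache reuse.
    SET vSalesMargin = Sum(Sales) - Sum(Costs);

    // In a chart, reference it with dollar-sign expansion:
    // $(vSalesMargin)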

Cached object load time

  • Source of metric: Measured

  • Measurement of response time per object. After all objects have been requested once, they should typically be cached. These measurements are taken the second time the app is traversed, when each object is requested one by one.

Memory allocation per table

  • Source of metric: App metadata

  • A list of the tables included in the data model and their sizes. This section is of interest when attempting to minimize the size of the data model, which translates to improved responsiveness.

  • You can drop fields and tables that are not used in any expression from the load script to improve speed and resource usage (a sketch follows). For more information, see Data model performance.
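
If the evaluation shows a large table that no expression uses, it can be removed at the end of the load script. A minimal, hypothetical sketch (the table name TempStaging is a placeholder):

    // Remove a staging table that no expression references; this shrinks the
    // in-memory data model and can improve responsiveness.
    DROP TABLE TempStaging;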

Memory allocation per field

  • Source of metric: App metadata

  • A list of the fields included in the data model and their sizes. This section is of interest when attempting to minimize the size of the data model, which translates to improved responsiveness.

  • You can drop fields and tables that are not used in any expression from the load script to improve speed and resource usage (a sketch of dropping individual fields follows). For more information, see Data model performance.
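
Individual unused fields can be dropped in the same way. A minimal, hypothetical sketch (the field names are placeholders):

    // Remove fields that no expression, dimension, or visualization uses.
    DROP FIELDS InternalComment, AuditFlag;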

Notification preferences

You can choose to be notified when a performance evaluation has completed or failed.

Click More on the app and select Notifications. The following notifications for performance evaluations are available:

  • Performance evaluation for this app is ready to be reviewed

  • Performance evaluation for this app has failed to run

Limitations

  • Only public sheets in the app, including all objects on them, are evaluated.

  • It is not possible to evaluate performance of apps that are distributed from Qlik Sense Enterprise on Windows.

  • Not all chart objects are supported. If an object is not supported, it is mentioned in the Not evaluated section of the results.

  • Chart objects created from chart suggestions prior to June 2020 need to be manually updated to be supported.

  • If the app uses section access to reduce data, the evaluation is performed with data reduced for the current user. This means you need to run the evaluation as a user with access to the data set that you want to evaluate. It is not relevant to compare results from users with different section access.

  • App performance evaluation is limited to a capacity of 20 GB, but it attempts to evaluate all apps. Apps requiring more memory than this capacity fail the evaluation with an error message.
