Model Evaluation Report Metrics#

Learn about the structure of your Model Evaluation Report (MER) and the key metrics that can assist you in analyzing the performance of a model.

Important

The sections (with their associated metrics) available in your Model Evaluation Report are dependent on your AI model configuration.

Dataset#

The Dataset section lists all the training data used to build your model, the distribution of that data between the training and test subsets performed during training, and the available postprocessings.

The Ansys SimAI Pro application automatically uses 90% of your simulations for training and 10% for testing, picked randomly:

  • The training subset is composed of all the simulations learned by the AI model. A small amount of error on the training subset is necessary for the AI model to perform well on new data. This property is called genericity: it can be seen as how coherent your reference simulations are with each other.

  • The test subset is a dedicated sample of data used to assess the final performance of the model. It has never been learned by the model and therefore provides a realistic estimate of its genericity.
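The split described above can be illustrated with a minimal Python sketch. This is not the SimAI implementation, only an illustration of a random 90/10 partition; the function name and seed are assumptions:

```python
import random

def split_simulations(simulations, train_fraction=0.9, seed=42):
    """Randomly partition simulations into training and test subsets.

    Illustrative sketch only: SimAI performs this split internally.
    """
    shuffled = list(simulations)
    random.Random(seed).shuffle(shuffled)  # fixed seed for reproducibility
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# Example: 20 simulations -> 18 for training, 2 for testing
train, test = split_simulations([f"sim_{i}" for i in range(20)])
print(len(train), len(test))  # 18 2
```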

Global Coefficients#

If Global coefficients are defined in the project’s model, then the MER contains a dedicated section with the following information:

  • Metrics of the error on the test subset for the coefficient predictions.

    • The performance of your model is evaluated based on the mean and the standard deviation of the following metrics, calculated on your test subset.

    • The following two metrics should be as low as possible:

      • L1 norm: Mean of the absolute value of the errors between the target Global coefficients and the corresponding estimations.

      • Relative L1 norm: Mean of the absolute percentage error between the target Global coefficients and the corresponding estimations.

  • Trend comparison plots / trend order plots.

    • Squared correlation coefficient (R²): Measures the proportion of variance in the target Global coefficients that is explained by the corresponding estimations.

      • R² equals 1 when the estimations are identical to the targets.

      • R² equals 0 when the estimations are equal to the average of the targets.

  • Confidence score for the data in the test subset.

    • The confidence score can be High or Low.

    • For more information, see Confidence Score.
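The three error metrics above can be sketched in plain Python. This is a minimal illustration of the formulas, not the SimAI implementation; the function names and sample values are assumptions:

```python
def l1_norm(targets, preds):
    """Mean of the absolute errors between targets and estimations."""
    return sum(abs(t - p) for t, p in zip(targets, preds)) / len(targets)

def relative_l1_norm(targets, preds):
    """Mean of the absolute percentage errors (targets must be nonzero)."""
    return sum(abs(t - p) / abs(t) for t, p in zip(targets, preds)) / len(targets)

def r_squared(targets, preds):
    """Proportion of target variance explained by the estimations."""
    mean_t = sum(targets) / len(targets)
    ss_res = sum((t - p) ** 2 for t, p in zip(targets, preds))
    ss_tot = sum((t - mean_t) ** 2 for t in targets)
    return 1.0 - ss_res / ss_tot

# Hypothetical target coefficients vs. model estimations
targets = [1.0, 2.0, 3.0, 4.0]
preds = [1.1, 1.9, 3.2, 3.8]

print(round(l1_norm(targets, preds), 3))   # 0.15
print(round(r_squared(targets, preds), 3))  # 0.98
```

Note how the R² properties stated above fall out of the formula: identical estimations give `ss_res = 0`, hence R² = 1; predicting the target mean everywhere gives `ss_res = ss_tot`, hence R² = 0.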

Surface#

The Surface section contains:

  • Metrics of the error in the surface predictions computed on the test subset.

  • Surface contour comparison plots for the variables selected as “Model Output” during model configuration.

  • Evolution plots for the defined Global coefficients.