Compare Models


This is the first step we recommend in the workflow of any supervised experiment. This function trains all the models available in the model library of the module you have imported and scores them using k-fold cross validation on common evaluation metrics. The evaluation metrics used are:

  • Classification: Accuracy, AUC, Recall, Precision, F1, Kappa
  • Regression: MAE, MSE, RMSE, R2, RMSLE, MAPE

The output of the function is a table showing the average score of all models across the folds. The number of folds can be defined using the fold parameter within the compare_models function. By default, fold is set to 10. The table is sorted (highest to lowest) by the metric of choice, which can be defined using the sort parameter. By default, the table is sorted by Accuracy for classification experiments and by R2 for regression experiments. Certain models are excluded from the comparison because of their longer run-time. To include them, the turbo parameter can be set to False.
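For example, the three parameters can be combined as in the minimal sketch below, which runs the comparison with 5 folds, sorts the results by AUC, and includes the slower models (it assumes setup has already been run on a classification dataset, as in the example that follows):

Code
# comparing all models with 5 folds, sorted by AUC,
# and including the models normally excluded by turbo
compare_models(fold = 5, sort = 'AUC', turbo = False)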

This function is only available in the pycaret.classification and pycaret.regression modules.


Classification Example


Code
# Importing dataset
from pycaret.datasets import get_data
diabetes = get_data('diabetes')

# Importing module and initializing setup
from pycaret.classification import *
clf1 = setup(data = diabetes, target = 'Class variable')

# comparing all models
compare_models()


Output

Regression Example


Code
# Importing dataset
from pycaret.datasets import get_data
boston = get_data('boston')

# Importing module and initializing setup
from pycaret.regression import *
reg1 = setup(data = boston, target = 'medv')

# comparing all models
compare_models()


Output
