Interpret Model
Interpreting complex models is of fundamental importance in machine learning. Model interpretability helps debug a model by analyzing what the model really considers important. Interpreting models in PyCaret is as simple as writing interpret_model. The function takes a trained model object and the plot type as a string. Interpretation is implemented based on SHAP (SHapley Additive exPlanations) and is only available for tree-based models.
This function is only available in the pycaret.classification and pycaret.regression modules.
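The same workflow applies in the regression module. As a minimal sketch only (the 'insurance' dataset, its 'charges' target, and the choice of a LightGBM model are illustrative assumptions, not part of the example above):

# Importing a regression dataset (illustrative choice)
from pycaret.datasets import get_data
insurance = get_data('insurance')

# Importing the regression module and initializing setup
from pycaret.regression import *
reg1 = setup(data = insurance, target = 'charges')

# creating a tree-based regressor
lightgbm = create_model('lightgbm')

# interpret_model works the same way as in classification
interpret_model(lightgbm)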
Summary Plot
Code
# Importing dataset
from pycaret.datasets import get_data
diabetes = get_data('diabetes')

# Importing module and initializing setup
from pycaret.classification import *
clf1 = setup(data = diabetes, target = 'Class variable')

# creating a model
xgboost = create_model('xgboost')

# interpreting model
interpret_model(xgboost)
Output
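Under the hood, the summary plot is produced with the SHAP library. The sketch below shows roughly the equivalent direct SHAP calls; it is not PyCaret's implementation, and the XGBClassifier and variable names are assumptions used only for illustration.

# Hedged sketch: roughly the direct SHAP equivalent of the summary plot above.
import shap
from xgboost import XGBClassifier
from pycaret.datasets import get_data

diabetes = get_data('diabetes')
X = diabetes.drop('Class variable', axis=1)
y = diabetes['Class variable']

model = XGBClassifier().fit(X, y)           # illustrative tree-based model

explainer = shap.TreeExplainer(model)       # tree models only, hence the restriction above
shap_values = explainer.shap_values(X)      # one SHAP value per row and feature
shap.summary_plot(shap_values, X)           # global view of feature impact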
Correlation Plot
Code
# Importing dataset
from pycaret.datasets import get_data
diabetes = get_data('diabetes')

# Importing module and initializing setup
from pycaret.classification import *
clf1 = setup(data = diabetes, target = 'Class variable')

# creating a model
xgboost = create_model('xgboost')

# interpreting model
interpret_model(xgboost, plot = 'correlation')
Output
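The correlation plot corresponds roughly to a SHAP dependence plot for a single feature. A minimal sketch with the shap library directly is shown below; the model, variable names, and the choice of feature index 0 are assumptions for illustration.

# Hedged sketch: the 'correlation' plot is essentially a SHAP dependence plot.
import shap
from xgboost import XGBClassifier
from pycaret.datasets import get_data

diabetes = get_data('diabetes')
X = diabetes.drop('Class variable', axis=1)
y = diabetes['Class variable']

model = XGBClassifier().fit(X, y)           # illustrative tree-based model

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.dependence_plot(0, shap_values, X)     # SHAP value of one feature vs. its value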
Reason Plot at Observation Level
Code
# Importing dataset
from pycaret.datasets import get_data
diabetes = get_data('diabetes')

# Importing module and initializing setup
from pycaret.classification import *
clf1 = setup(data = diabetes, target = 'Class variable')

# creating a model
xgboost = create_model('xgboost')

# interpreting model
interpret_model(xgboost, plot = 'reason', observation = 10)
Output
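The observation-level reason plot corresponds roughly to a SHAP force plot for a single row. A minimal sketch with the shap library directly is shown below; row 10 mirrors observation = 10 above, and the model and variable names are assumptions for illustration.

# Hedged sketch: the 'reason' plot for one observation is essentially a SHAP force plot.
import shap
from xgboost import XGBClassifier
from pycaret.datasets import get_data

diabetes = get_data('diabetes')
X = diabetes.drop('Class variable', axis=1)
y = diabetes['Class variable']

model = XGBClassifier().fit(X, y)           # illustrative tree-based model

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.force_plot(explainer.expected_value, shap_values[10, :], X.iloc[10, :],
                matplotlib=True)            # contribution of each feature to this one prediction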