Category "shap"

SHAP for a single data point, instead of the average prediction of the entire dataset

I am trying to explain a regression model based on LightGBM using SHAP. I'm using the shap.TreeExplainer(<lightgbm model>).shap_values(X) method to get …
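A minimal sketch of this setup, assuming model is a fitted LightGBM regressor and X is its feature DataFrame (both names are assumptions): the per-row attributions plus the explainer's base value recover that individual row's prediction, not the dataset average.

```python
import shap

# Assumes `model` is a fitted LightGBM regressor and X its feature DataFrame.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

i = 0  # the single data point of interest
# For one row, base value + sum of its SHAP values equals that row's prediction.
print("base value:", explainer.expected_value)
print("reconstructed prediction:", explainer.expected_value + shap_values[i].sum())

# Visualize the single observation.
shap.force_plot(explainer.expected_value, shap_values[i], X.iloc[i])
```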

Is there a way to customize the feature order in a SHAP beeswarm plot?

I'm wondering if there's a way to change the order in which the features in a SHAP beeswarm plot are displayed. The docs describe "transforms" like using shap_values…
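For what it's worth, the beeswarm plot takes an order argument; a hedged sketch, where model and X are assumptions standing in for the questioner's setup:

```python
import numpy as np
import shap

# Assumes `model` and `X` as in the question; shap.Explainer returns an
# Explanation object that shap.plots.beeswarm accepts directly.
explainer = shap.Explainer(model, X)
shap_values = explainer(X)

# Default ordering is by mean(|SHAP value|); other "transforms" of the
# Explanation, such as the max, can be passed instead.
shap.plots.beeswarm(shap_values, order=shap_values.abs.max(0))

# An explicit permutation of column indices also appears to be accepted
# (assumption): here, the raw column order reversed.
shap.plots.beeswarm(shap_values, order=np.arange(shap_values.shape[1])[::-1])
```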

SHAP local_accuracy

When calculating local_accuracy from metrics.py, I got the following error: NameError: name 'pickle' is not defined. The code run was: from shap.benchmark import metrics; metrics.local_accuracy…
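The error suggests metrics.py references pickle without importing it. One possible workaround sketch; note the local_accuracy call itself is left commented out because its signature is not visible in the truncated question:

```python
import pickle
import shap.benchmark.metrics as metrics

# Workaround sketch: if metrics.py uses `pickle` without importing it,
# injecting the module into its namespace sidesteps the NameError.
metrics.pickle = pickle

# Then retry the failing call (exact arguments not shown in the question):
# metrics.local_accuracy(...)
```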

Custom features in a SHAP beeswarm plot

I have a causal inference model with featurizer=PolynomialFeatures(degree=3), which includes a degree-3 polynomial in the X variable. I get the plot for interpretab…
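One way to keep the expanded polynomial columns readable in the beeswarm plot is to relabel the Explanation with the featurizer's generated term names. A sketch under the assumption that est is a model fitted on the expanded matrix (est and X are illustrative names):

```python
import shap
from sklearn.preprocessing import PolynomialFeatures

# Assumes `X` is the raw design matrix and `est` a model fitted on X_poly.
featurizer = PolynomialFeatures(degree=3)
X_poly = featurizer.fit_transform(X)

explainer = shap.Explainer(est, X_poly)
shap_values = explainer(X_poly)

# Relabel the expanded columns with human-readable polynomial-term names,
# e.g. "x0^2 x1", before plotting.
shap_values.feature_names = list(featurizer.get_feature_names_out())
shap.plots.beeswarm(shap_values)
```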

Difference between Shapley values and SHAP for interpretable machine learning

The paper on the shap package gives a formula for the Shapley values in (4) and for SHAP values apparently in (8). Still, I don't really understand the difference…
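For reference, the two formulas being compared are roughly the following (my paraphrase of Lundberg & Lee, 2017: eq. (4) is the classical Shapley value; eq. (8) keeps the same weighting but replaces the retrained subset models with conditional expectations of a single model f):

```latex
% Classical Shapley value, eq. (4): weighted marginal contributions of
% feature i over all subsets S of the full feature set F.
\phi_i = \sum_{S \subseteq F \setminus \{i\}}
  \frac{|S|!\,(|F|-|S|-1)!}{|F|!}
  \Big[ f_{S \cup \{i\}}\big(x_{S \cup \{i\}}\big) - f_S\big(x_S\big) \Big]

% SHAP, eq. (8) in spirit: rather than retraining a model f_S per subset,
% the value function is a conditional expectation of one trained model f.
f_S(x_S) \;=\; \mathbb{E}\big[\, f(x) \mid x_S \,\big]
```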

SHAP function throws exception in plotting method

samples.zip The zipped sample folder contains: model.pkl, x_test.csv. To reproduce the problem, do the following steps: use lin2 = joblib.load('model.pkl') to load the model…
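Spelled out, the reproduction steps appear to be the following; the plotting call that raises is an assumption, since the snippet is cut off before it:

```python
import joblib
import pandas as pd
import shap

# Reproduction sketch based on the files listed in samples.zip.
lin2 = joblib.load('model.pkl')
x_test = pd.read_csv('x_test.csv')

explainer = shap.Explainer(lin2, x_test)
shap_values = explainer(x_test)

# The plotting step that reportedly throws; the exact plot type is not
# visible in the truncated question.
shap.plots.beeswarm(shap_values)
```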

How are SHAP's feature contributions calculated for models with word embeddings as output?

In a typical Shapley value estimation for a numerical regression task, there is a clear way in which the marginal contribution of an input feature i to the final prediction…
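For a vector-valued model, shap adds an output axis: each embedding dimension gets its own (samples × features) attribution slice, so "contribution" is defined per output coordinate rather than for the vector as a whole. A toy sketch with a made-up linear "embedding" (everything here is illustrative):

```python
import numpy as np
import shap

# Toy stand-in for a model whose output is a d-dimensional embedding; the
# projection W is random and purely illustrative.
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 4))  # 5 input features -> 4 embedding dimensions

def embed(X):
    return X @ W

background = np.zeros((1, 5))
explainer = shap.Explainer(embed, background)
shap_values = explainer(np.ones((3, 5)))

# Expected shape (assumption): (3, 5, 4) - one (samples x features)
# attribution matrix per embedding dimension.
print(shap_values.values.shape)
```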