Difference between Shapley values and SHAP for interpretable machine learning

The paper behind the shap package gives a formula for Shapley values in (4) and for SHAP values apparently in (8).

Still, I don't really understand the difference between Shapley and SHAP values. As far as I understand, for Shapley I need to retrain my model on each possible subset of parameters, while for SHAP I just use the basic model trained on all parameters. Is that it? Is SHAP simply computationally easier?



Solution 1:[1]

SHAP combines the local interpretability of other model-agnostic methods (such as LIME, where the model f(x) is locally approximated with an explainable model g(x) around each instance x) with the game-theoretic approach of Shapley values. This results in some desirable properties (local accuracy, missingness, consistency).

Recall that in formula (4) the "local" part is missing: Shapley (regression) values assign one contribution score to the factor X as a whole. In formula (8) we see that SHAP is now a function of x, which implies we get a contribution for each factor and, in particular, for each realized instance X_i = x_i of that factor. This makes it locally interpretable and lets it inherit the desirable properties.
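For reference, this is roughly how I read the two formulas; it is a reconstruction from memory of the paper, so the notation may differ slightly from the original:

```latex
% Shapley regression values, Eq. (4): one global score per feature i,
% obtained by retraining a separate model f_S on every feature subset S of F.
\phi_i \;=\; \sum_{S \subseteq F \setminus \{i\}}
  \frac{|S|!\,(|F|-|S|-1)!}{|F|!}
  \left[ f_{S \cup \{i\}}\!\left(x_{S \cup \{i\}}\right) - f_S\!\left(x_S\right) \right]

% SHAP values, Eq. (8): the same Shapley weighting, but the value function is
% the conditional expectation of the single trained model f at the point x.
\phi_i(f, x) \;=\; \sum_{S \subseteq F \setminus \{i\}}
  \frac{|S|!\,(|F|-|S|-1)!}{|F|!}
  \left[ \mathbb{E}\!\left[f(x) \mid x_{S \cup \{i\}}\right]
       - \mathbb{E}\!\left[f(x) \mid x_S\right] \right]
```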

SHAP can thus be understood as a combination of LIME (or related concepts) and Shapley values. In the end, SHAP values are simply "the Shapley values of a conditional expectation function of the original model" (Lundberg and Lee, 2017). Basically, the Shapley value is defined for any value function, and SHAP is just the special case obtained by a particular choice of value function.

I had the same question as you and this is my intuitive understanding of the Lundberg and Lee (2017) paper. Hope this helps.

Solution 2:[2]

The answer to your question lies in the first three lines of the SHAP GitHub project:

SHAP (SHapley Additive exPlanations) is a game theoretic approach to explain the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions.

The story of SHAP started with Scott Lundberg surveying the available ML explanation methods and noting that all of them satisfy the additivity property (Definition 1 in the aforementioned paper):

Additive feature attribution methods have an explanation model that is a linear function of binary variables
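Concretely, an explanation model g of this class has the form below (Eq. (1) in the paper, reproduced here from memory, so treat the notation as approximate):

```latex
% Additive feature attribution (Definition 1 / Eq. (1)):
% z' \in \{0,1\}^M indicates which of the M simplified features are "present",
% and \phi_i is the attribution assigned to feature i.
g(z') \;=\; \phi_0 + \sum_{i=1}^{M} \phi_i \, z'_i
```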

On top of that, he added three other desirable properties:

  • Property 1: Local accuracy (local explanations should add up to the model prediction; equivalent to the original Shapley efficiency axiom)

  • Property 2: Missingness (a missing feature contributes nothing; close to the original dummy axiom)

  • Property 3: Consistency (if the model changes so that a feature's contribution increases or stays the same, its explanation value should not decrease; close to the original additivity axiom)

It turns out that:

  • Shapley values satisfy Properties 1, 2, and 3 ("satisfy" here means that all four original Shapley axioms hold as soon as Properties 1, 2, and 3 hold)
  • They provide a unique solution (a unique set of marginal contributions), as was already proved mathematically by Young (1985)

Having fixed Shapley values as the solution to the model-explainability problem with the desired Properties 1, 2, and 3, the question arises:

How to calculate Shapley values with/without a feature?

Every ML practitioner knows that a model changes if we drop or add a feature. On top of that, for a non-linear model the order in which we add features matters. So calculating Shapley values exactly by searching through all 2^M feature subsets, retraining a model for each one, is inefficient (and quickly becomes computationally prohibitive).
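To make the combinatorial blow-up concrete, here is a minimal sketch of that brute-force approach. It is purely illustrative; `train_and_predict` is a hypothetical helper that retrains the model on a given feature subset and returns its prediction for the one instance being explained:

```python
from itertools import combinations
from math import factorial

def exact_shapley(features, train_and_predict):
    """Brute-force Shapley values: retrain on every subset S of features.

    `features` is a list of feature names; `train_and_predict(subset)` is a
    hypothetical callable that trains a model using only `subset` and returns
    its prediction for the instance being explained.  Cost: O(2^M) retrainings.
    """
    M = len(features)
    phi = {}
    for i in features:
        others = [f for f in features if f != i]
        contrib = 0.0
        for size in range(M):  # all subsets S of the remaining features
            for S in combinations(others, size):
                weight = factorial(len(S)) * factorial(M - len(S) - 1) / factorial(M)
                contrib += weight * (train_and_predict(set(S) | {i}) - train_and_predict(set(S)))
        phi[i] = contrib
    return phi
```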

Now, to the answer to your question, "Difference between Shapley values and SHAP":

SHAP provides a computationally efficient, theoretically robust way to calculate Shapley values for ML models (see the sketch below) by:

  1. Using a model trained only once (this doesn't apply to the Exact and Kernel explainers)
  2. Averaging over dropped-out features by sampling from background data.
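As a rough usage sketch (assuming a scikit-learn-style model; the toy data and variable names are mine, not from the answer):

```python
# Minimal sketch: a single trained model plus a background sample is all
# SHAP needs -- no retraining per feature subset.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

X = np.random.rand(200, 4)                       # toy data with 4 features
y = X[:, 0] * 2 + X[:, 1] ** 2 + np.random.rand(200) * 0.1

model = RandomForestRegressor().fit(X, y)        # the model is trained ONCE

background = X[:50]                              # background data stands in for "missing" features
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X[:5])       # local explanation per instance

print(shap_values)                               # one value per instance and per feature
```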

As a side note, the SHAP solution is unique, unlike that of LIME, but this is unrelated to your question.

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

[1] Solution 1: Stack Overflow
[2] Solution 2: Stack Overflow