KernelSHAP is a popular implementation of SHAP, but it has several shortcomings that make it difficult to use in practice. In our recent AISTATS paper [1], we revisited the regression-based approach for estimating Shapley values to better understand and ultimately improve upon KernelSHAP.
Our goal was to make Shapley value estimation more practical, and we did so by developing techniques for:

- quantifying uncertainty (confidence intervals for the estimates),
- detecting convergence automatically (determining the required number of samples), and
- calculating results faster.
This post describes our paper at a high level, while avoiding some of the mathematical details.
First, let's review the differences between KernelSHAP, SHAP and Shapley values. (If you're unfamiliar with cooperative games or Shapley values, check out this post.)
Shapley values are a credit allocation scheme for cooperative games (i.e., set functions) [2]. Given a game $v$ that accepts coalitions/subsets $S \subseteq D = \{1, \ldots, d\}$, the Shapley value $\phi_i(v)$ for each player $i$ is defined as follows:

$$\phi_i(v) = \frac{1}{d} \sum_{S \subseteq D \setminus \{i\}} \binom{d - 1}{|S|}^{-1} \big( v(S \cup \{i\}) - v(S) \big)$$
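To make the definition concrete, here's a brute-force calculation of the formula above (a minimal sketch: `shapley_values` and the toy additive game are our own illustrations, and the cost is exponential in $d$):

```python
import itertools
import math

def shapley_values(game, d):
    """Compute exact Shapley values by enumerating all coalitions.

    `game` maps a frozenset of player indices to a real value and
    `d` is the number of players.
    """
    values = [0.0] * d
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for size in range(d):
            for S in itertools.combinations(others, size):
                S = frozenset(S)
                # Weight from the formula: (1/d) * 1/binom(d-1, |S|).
                weight = 1.0 / (d * math.comb(d - 1, size))
                values[i] += weight * (game(S | {i}) - game(S))
    return values

# Toy additive game: each coalition's value is the sum of member weights.
weights = [1.0, 2.0, 3.0]
game = lambda S: sum(weights[j] for j in S)
print(shapley_values(game, 3))  # [1.0, 2.0, 3.0]: each player gets its own weight
```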
Next, SHAP is an interpretability method that explains individual predictions by assigning attribution scores to each feature using Shapley values [3]. Given a model $f$ and an input $x$, SHAP is based on a cooperative game $v_x$ defined as follows (when using the conditional distribution for missing features):

$$v_x(S) = \mathbb{E}\big[ f(X) \mid X_S = x_S \big]$$
SHAP values are the feature attribution scores given by the Shapley values $\phi_i(v_x)$. Because of how Shapley values are derived, SHAP values represent how much each feature moves the prediction up or down relative to the base-rate prediction $v_x(\emptyset) = \mathbb{E}[f(X)]$.
Finally, KernelSHAP is a method for approximating SHAP values [3]. Shapley values are hard to calculate, so SHAP is only useful in theory without an approximation technique. KernelSHAP is the main practical implementation of SHAP, and since the features' conditional distributions are often unknown, KernelSHAP is typically run with a slightly different cooperative game $\hat{v}_x$ that uses the marginal distribution for missing features:

$$\hat{v}_x(S) = \mathbb{E}\big[ f(x_S, X_{\bar{S}}) \big]$$

where $X_{\bar{S}}$ denotes the held-out features, sampled from their marginal distribution.
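For intuition, here's roughly what this game looks like in code (a sketch under the assumption that the marginal expectation is approximated by averaging over a background dataset; `marginal_game` and its arguments are our own names):

```python
import numpy as np

def marginal_game(model, x, background):
    """Cooperative game that imputes missing features from a background set.

    `model` maps a batch of inputs to predictions, `x` is the input being
    explained, and `background` is an array of reference samples used to
    approximate the marginal distribution of the held-out features.
    """
    def v(S):
        # S is a boolean mask over features: True = feature is present.
        data = background.copy()
        data[:, S] = x[S]          # fix present features at their values in x
        return model(data).mean()  # average predictions over the background
    return v

# Hypothetical usage with any sklearn-style model:
# v = marginal_game(model.predict, x, X_train[:100])
# v(np.array([True, False, True]))  # value of the coalition {0, 2}
```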
Table 1 summarizes these three related ideas.
| | What it means | Introduced by |
|---|---|---|
| Shapley values | Credit allocation method for cooperative games | [2] |
| SHAP | Model explanation method (Shapley values of a particular cooperative game) | [3] |
| KernelSHAP | Approximation approach for SHAP values | [3] |
KernelSHAP has become the preferred method for model-agnostic SHAP value calculation [3], but recent work has exposed certain weaknesses in KernelSHAP. For example, Merrick and Taly [4] raised questions about whether KernelSHAP is an unbiased estimator (or a statistical estimator in any sense), and the SAGE paper [5] showed that sampling-based approximations permit convergence detection and uncertainty estimation, whereas KernelSHAP offers neither. These are some of the issues we address in our paper, "Improving KernelSHAP: Practical Shapley Value Estimation via Linear Regression" [1].
KernelSHAP approximates Shapley values by solving a linear regression problem. Thanks to the original SHAP paper [3], we know that the Shapley values for a cooperative game $v$ are the solution $\phi_i(v) = \beta_i$ to the weighted least squares problem

$$\min_{\beta_0, \ldots, \beta_d} \; \sum_{0 < |S| < d} \mu_{\mathrm{Sh}}(S) \Big( \beta_0 + \sum_{i \in S} \beta_i - v(S) \Big)^2$$

$$\text{s.t.} \quad \beta_0 = v(\emptyset), \qquad \beta_0 + \sum_{i=1}^{d} \beta_i = v(D),$$

where the weighting function $\mu_{\mathrm{Sh}}$ (the Shapley kernel) is given by:

$$\mu_{\mathrm{Sh}}(S) = \frac{d - 1}{\binom{d}{|S|} \, |S| \, (d - |S|)}$$
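In code, the weighting function looks like this (a minimal sketch; `shapley_kernel` is our own name for it):

```python
import math

def shapley_kernel(d, s):
    """Shapley kernel weight for a coalition of size s among d players."""
    if s == 0 or s == d:
        # Infinite weight: the empty and full coalitions act as equality
        # constraints rather than ordinary data points.
        return math.inf
    return (d - 1) / (math.comb(d, s) * s * (d - s))
```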
Solving weighted least squares problems is usually easy, but this one is difficult because it has $2^d - 2$ input-output pairs (or data points) $\big(S, v(S)\big)$, each of which may be costly to calculate. KernelSHAP therefore uses a dataset sampling approach (described in detail in our paper) that involves solving a constrained least squares problem with a manageable number of data points. I'll omit the equations here, but this problem can be solved without much difficulty using its KKT conditions.
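To show how the KKT conditions come in, here's an exact solver for the constrained problem that enumerates all $2^d - 2$ coalitions (illustrative only: this is the brute-force version, not the paper's sampling-based estimator, and `exact_kernelshap` is our own name):

```python
import itertools
import math
import numpy as np

def exact_kernelshap(v, d):
    """Recover Shapley values by solving the constrained weighted least
    squares problem via its KKT conditions. `v` maps a boolean feature
    mask to the game's value; feasible only for small d.
    """
    A = np.zeros((d, d))
    b = np.zeros(d)
    v0 = v(np.zeros(d, dtype=bool))
    v1 = v(np.ones(d, dtype=bool))
    for size in range(1, d):
        w = (d - 1) / (math.comb(d, size) * size * (d - size))  # Shapley kernel
        for S in itertools.combinations(range(d), size):
            z = np.zeros(d)
            z[list(S)] = 1.0
            A += w * np.outer(z, z)
            b += w * z * (v(z.astype(bool)) - v0)
    # KKT system: A @ beta + lam * 1 = b, subject to 1^T beta = v1 - v0.
    A_inv_b = np.linalg.solve(A, b)
    A_inv_1 = np.linalg.solve(A, np.ones(d))
    lam = (A_inv_b.sum() - (v1 - v0)) / A_inv_1.sum()
    return A_inv_b - lam * A_inv_1
```

On an additive toy game like the one shown earlier, this recovers exactly the same values as the direct Shapley formula, which makes for a handy sanity check of any regression-based implementation.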
This describes KernelSHAP at a high level. It's worth mentioning, however, that KernelSHAP need not be used in the context of SHAP: the equations above are not specific to the game used by SHAP ($v_x$ or $\hat{v}_x$), so this regression-based approach can be applied to any cooperative game. (We will continue to use the name KernelSHAP, but we now mean the regression-based approach to approximating Shapley values.)
KernelSHAP is simple to implement, but its statistical properties were unknown until now. As prior work pointed out, it's not obvious in what sense the dataset sampling approach even represents a statistical estimator. In our paper, we improved our understanding of KernelSHAP by discovering the following properties:

- Its estimates are not provably unbiased, but their bias is empirically near zero.
- Its variance shrinks at a rate of $\mathcal{O}(1/n)$, where $n$ is the number of sampled data points.
Given these properties, the original KernelSHAP approach seems preferable to our provably unbiased estimator (described in the paper). The dataset sampling approach is not provably unbiased, but because it converges faster and its bias is empirically near zero, users should prefer this approach in practice.
Beyond improving our understanding of KernelSHAP, knowing that its variance shrinks at a rate of $\mathcal{O}(1/n)$ allows us to modify KernelSHAP with an online variance approximation. Over the course of estimation with $n$ samples, we produce multiple independent estimates with $m$ samples each (for $m \ll n$) and use these to approximate the variance of the final results. The variance approximation can then be used to provide confidence intervals, and to automatically determine the required number of samples (rather than using an arbitrary number or the default argument). Figure 2 shows what explanations look like when run with these new modifications.
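As a rough sketch of the idea (not the paper's exact procedure: we assume here that the final estimate averages $t$ independent intermediate estimates of $m$ samples each, and `confidence_intervals` is our own name), the intermediate estimates' spread, rescaled using the $\mathcal{O}(1/n)$ rate, yields confidence intervals:

```python
import numpy as np

def confidence_intervals(estimates, m, n, z=1.96):
    """Approximate 95% confidence intervals for Shapley value estimates.

    `estimates` has shape (t, d): t independent estimates, each based on
    m samples, whose average is the final estimate over n = t * m samples.
    Rescaling by m / n relies on the variance shrinking at a O(1/n) rate.
    """
    final = estimates.mean(axis=0)
    var_m = estimates.var(axis=0, ddof=1)  # per-feature variance at m samples
    var_n = var_m * m / n                  # scaled down to n samples
    half_width = z * np.sqrt(var_n)
    return final - half_width, final + half_width

# A hypothetical convergence criterion: keep sampling until the widest
# interval is small relative to the largest estimate, e.g.
# half_width.max() < 0.01 * np.abs(final).max()
```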
Finally, our paper presented two more innovations for KernelSHAP:
With these new insights and tools, we hope to provide users with a more practical approach to Shapley value estimation via linear regression. Shapley values still aren't cheap to calculate relative to certain other methods, but you can now calculate them faster and be confident that you're using the right number of samples.
Shapley values are used by several promising model explanation methods (SHAP, SAGE, Shapley Effects, etc.), but fast approximations are required to use these methods in practice. Our paper develops a practical approach using a linear regression-based estimator, providing features like uncertainty estimation and automatic convergence detection. If you want to try out our approach, our implementation is online, and you can apply it with model explanation methods (such as SHAP) or with arbitrary cooperative games.