Sklearn - Permutation Importance leads to non-zero values for zero-coefficients in model

I'm confused by sklearn's permutation_importance function. I have fitted a pipeline with a regularized logistic regression, leading to several feature coefficients being 0. However, when I want to calculate the features' permutation importance on the test data set, some of these features get non-zero importance values. How can this be when they do not contribute to the classifier?

Here's some example code & data:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import RepeatedStratifiedKFold
import scipy.stats as stats
from scipy.stats import loguniform # sklearn.utils.fixes.loguniform was removed in newer scikit-learn versions
from sklearn.preprocessing import StandardScaler
from sklearn.impute import KNNImputer
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import RandomizedSearchCV
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.inspection import permutation_importance


# create example data with missings
X, y = make_classification(n_samples = 500,
                           n_features = 100,
                           n_informative = 25,
                           n_redundant = 75,
                           random_state = 0)
c = 10000 # number of missings
X.ravel()[np.random.choice(X.size, c, replace = False)] = np.nan # introduce random missings
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size = 0.2, random_state = 0)

folds = 5
repeats = 5
n_iter = 25
rskfold = RepeatedStratifiedKFold(n_splits = folds, n_repeats = repeats, random_state = 1897)

scl = StandardScaler()
imp = KNNImputer(n_neighbors = 5, weights = 'uniform')
sgdc = SGDClassifier(loss = 'log_loss', penalty = 'elasticnet', class_weight = 'balanced', random_state = 0) # loss was named 'log' in scikit-learn < 1.1

pipe = Pipeline([('scaler', scl),
                 ('imputer', imp),
                 ('clf', sgdc)])
param_rand = {'clf__l1_ratio': stats.uniform(0, 1),
              'clf__alpha': loguniform(0.001, 1)}

m = RandomizedSearchCV(pipe, param_rand, n_iter = n_iter, cv = rskfold, scoring = 'accuracy', random_state = 0, verbose = 1, n_jobs = -1)
m.fit(Xtrain, ytrain)

coefs = m.best_estimator_.named_steps['clf'].coef_
print('Number of non-zero feature coefficients in classifier:')
print(np.sum(coefs != 0))

imps = permutation_importance(m, Xtest, ytest, n_repeats = 25, random_state = 0, n_jobs = -1)

print('Number of non-zero feature importances after permutations:')
print(np.sum(imps['importances_mean'] != 0))

You will see that the second printed number is larger than the first: features whose coefficients are exactly zero still end up with nonzero mean permutation importance.

Any help is highly appreciated!



Solution 1:[1]

It's because you have a KNNImputer in the pipeline. permutation_importance shuffles the raw input columns before they enter the pipeline, so a feature with a zero coefficient in the final model still participates in the nearest-neighbor distance computations inside the imputer. Permuting that feature therefore changes the imputed values of other columns, which can change the predictions of the entire pipeline, and hence give the feature a nonzero permutation importance.
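A minimal sketch of the mechanism on standalone toy data (not the pipeline above): permuting one fully observed column changes the values KNNImputer fills into a different column, because the permuted column still enters the nearest-neighbor distances.

```python
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
X[rng.random(200) < 0.2, 1] = np.nan  # missing values in column 1 only

imp = KNNImputer(n_neighbors=5, weights='uniform')
imp.fit(X)
baseline = imp.transform(X)

# Permute column 2 -- think of it as a feature whose model coefficient is zero
X_perm = X.copy()
X_perm[:, 2] = rng.permutation(X_perm[:, 2])
permuted = imp.transform(X_perm)

# The imputed values in column 1 differ: permuting column 2 changed which
# rows count as nearest neighbors, even though column 2 itself has no missings.
print(np.allclose(baseline[:, 1], permuted[:, 1]))
```

To confirm this in the original setup, one could apply the fitted scaler and imputer to Xtest first and then run permutation_importance on the classifier step alone; with the imputer out of the permutation loop, the zero-coefficient features come out with exactly zero importance.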

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 Ben Reiniger