Plot feature importance with xgboost
When I plot the feature importance, I get this messy plot. I have more than 7,000 variables. I understand that the built-in function selects only the most important features, but the resulting graph is still unreadable. This is the complete code:
```python
import numpy as np
import pandas as pd
df = pd.read_csv('ricerice.csv')
array = df.values
X = array[:, 0:7803]
Y = array[:, 7804]
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
seed = 0
test_size = 0.30
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=test_size, random_state=seed)
model = XGBClassifier()
model.fit(X, Y)
import matplotlib.pyplot as plt
from xgboost import plot_importance
fig1 = plt.gcf()
plot_importance(model)
plt.draw()
fig1.savefig('xgboost.png', figsize=(50, 40), dpi=1000)
```
Solution 1:[1]
There are a couple of points:
- To fit the model, you want to use the training dataset (`X_train`, `y_train`), not the entire dataset (`X`, `Y`).
- You may use the `max_num_features` parameter of the `plot_importance()` function to display only the top `max_num_features` features (e.g. the top 10).
With the above modifications to your code, and with some randomly generated data for demonstration, the code is as below:
```python
import numpy as np
import matplotlib.pyplot as plt
from xgboost import XGBClassifier, plot_importance
from sklearn.model_selection import train_test_split

# generate some random data for demonstration purposes, use your original dataset here
X = np.random.rand(1000, 100)     # 1000 samples x 100 features
y = np.random.rand(1000).round()  # binary 0/1 labels

seed = 0
test_size = 0.30
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, random_state=seed)

model = XGBClassifier()
model.fit(X_train, y_train)  # fit on the training split only

plot_importance(model, max_num_features=10)  # top 10 most important features
plt.show()
```
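The question imports `accuracy_score` but never uses it; for completeness, a minimal sketch of evaluating the model on the held-out split (variable names follow the snippet above):

```python
from sklearn.metrics import accuracy_score

# predict on the held-out test split and report accuracy
y_pred = model.predict(X_test)
print("Test accuracy: %.3f" % accuracy_score(y_test, y_pred))
```

If you also want to save a readable high-resolution figure, note that `figsize` is a figure-creation argument, not a `savefig()` one (as used in the question). A minimal sketch using the `ax` parameter of `plot_importance()`:

```python
# create a figure of the desired size first, then draw into it
fig, ax = plt.subplots(figsize=(10, 8))
plot_importance(model, ax=ax, max_num_features=10)
fig.savefig('xgboost.png', dpi=300, bbox_inches='tight')
```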
Solution 2:[2]
You can obtain feature importances from an XGBoost model with the `feature_importances_` attribute. In your case, it will be:

```python
model.feature_importances_
```

This attribute is an array with the gain importance of each feature. You can then plot it:
```python
from matplotlib import pyplot as plt
plt.barh(feature_names, model.feature_importances_)
```
(`feature_names` holds the feature names; for the fancy indexing used below it should be a NumPy array rather than a plain Python list.)
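A hypothetical sketch of building `feature_names` from the question's DataFrame (assuming, as in the question's slicing, that the first 7803 columns are the features):

```python
import numpy as np

# hypothetical: take the feature names from the original DataFrame's columns
feature_names = np.array(df.columns[0:7803])
```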
You can sort the array and select the number of features you want (for example, the 10 largest; note that `argsort()` sorts in ascending order, so the most important features are at the end of the sorted array):

```python
sorted_idx = model.feature_importances_.argsort()
plt.barh(feature_names[sorted_idx][-10:], model.feature_importances_[sorted_idx][-10:])
plt.xlabel("Xgboost Feature Importance")
```
There are two more methods to get feature importance (both are sketched below):

- you can use `permutation_importance` from `scikit-learn` (from version 0.22)
- you can use SHAP values
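A minimal sketch of the permutation-importance route, assuming the `model`, `X_test`, and `y_test` variables from Solution 1 (requires scikit-learn >= 0.22):

```python
import matplotlib.pyplot as plt
from sklearn.inspection import permutation_importance

# shuffle each feature in turn and measure the drop in the model's score
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
sorted_idx = result.importances_mean.argsort()[-10:]  # indices of the 10 largest

plt.barh([f"feature {i}" for i in sorted_idx], result.importances_mean[sorted_idx])
plt.xlabel("Permutation Importance")
plt.show()
```

And a sketch of the SHAP route, assuming the third-party `shap` package is installed:

```python
import shap

explainer = shap.TreeExplainer(model)        # tree explainer works with XGBoost models
shap_values = explainer.shap_values(X_test)  # per-sample, per-feature contributions
shap.summary_plot(shap_values, X_test, max_display=10)
```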
You can read more in this blog post of mine.
Solution 3:[3]
You need to sort your feature importances in descending order first:
```python
sorted_idx = trained_mdl.feature_importances_.argsort()[::-1]
```
Then just plot them with the column names from your DataFrame:
```python
from matplotlib import pyplot as plt

n_top_features = 10
sorted_idx = trained_mdl.feature_importances_.argsort()[::-1]
plt.barh(X_test.columns[sorted_idx][:n_top_features],
         trained_mdl.feature_importances_[sorted_idx][:n_top_features])
```
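Note that `X_test.columns` assumes `X_test` is a pandas DataFrame; the NumPy arrays from the question have no `columns` attribute. A hypothetical adaptation of the question's code that keeps the data as DataFrames so the splits retain their column names:

```python
from sklearn.model_selection import train_test_split

# hypothetical: slice the DataFrame directly instead of going through df.values
X = df.iloc[:, 0:7803]
Y = df.iloc[:, 7804]
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.30, random_state=0)
```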
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Sandipan Dey |
| Solution 2 | desertnaut |
| Solution 3 | Amirkhm |


