Remove zero importance features from XGBClassifier

I've trained an XGBClassifier and discovered there are some features with zero importance.

I retrained the model, leaving the zero-importance features out of the training dataset. However, the resulting model gives slightly different probability predictions from the original, and some previously useful features have now dropped to zero importance, despite the random state being identical for both training rounds.

Is it possible to preserve the same model training path in the second round of training as in the first? Essentially, I want a classifier that gives exactly the same predictions as the model trained with the zero-importance features, but without requiring those unused features as input at scoring time.



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow
