How to use F1 score as an evaluation metric for XGBoost validation?
I'm trying to validate a model using GridSearchCV and XGBoost, and I want my evaluation metric to be F1 score. I've seen some people use scoring='f1', others use eval_metric=f1_score, and other variations. I'm confused about a couple of points. Why do some people use scoring= while others use eval_metric=?
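My current understanding (which may be wrong) is that scoring= is a scikit-learn argument that controls how GridSearchCV ranks hyperparameter combinations, while eval_metric= goes to XGBoost itself. For the grid search part alone, something like this seems to run (toy grid just for illustration):

from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

# scoring='f1' makes scikit-learn rank each hyperparameter
# combination by F1 score computed on the CV folds
grid = GridSearchCV(
    estimator=XGBClassifier(n_jobs=-1, random_state=0),
    param_grid={'max_depth': [2, 4, 6]},
    scoring='f1',
)

But that doesn't seem to touch what XGBoost itself reports during training, which is where eval_metric= comes in, and where I'm stuck.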
In the XGBoost documentation, there's no F1 score evaluation metric (which seems strange, btw, considering some of the others they do have). But I see lots of advice online to "just use XGBoost's built-in F1 score evaluator." Where is it??
No matter what I pass to eval_metric, my code throws an error on that line.
Here is my code:
import numpy as np
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

params = {
    'max_depth': range(2, 10, 2),
    'learning_rate': np.linspace(.1, .6, 6),
    'min_child_weight': range(1, 10, 2),
}

grid = GridSearchCV(
    estimator=XGBClassifier(n_jobs=-1,
                            n_estimators=500,
                            random_state=0),
    param_grid=params,
)

eval_set = [(X_tr, y_tr),
            (X_val, y_val)]

grid.fit(X_tr, y_tr,
         eval_set=eval_set,
         eval_metric='f1',  # <------ what do I put here to make this evaluate based on F1 score???
         early_stopping_rounds=25,
)
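I also tried writing a custom metric from sklearn's f1_score, following the feval-style (preds, dtrain) signature I've seen used with XGBoost's native xgboost.train API, but I'm not sure that's what the sklearn wrapper's eval_metric expects either:

from sklearn.metrics import f1_score

def f1_eval(preds, dtrain):
    # dtrain is an xgboost DMatrix, so labels come from get_label();
    # preds should be predicted probabilities for binary:logistic,
    # so I threshold at 0.5 to get hard class predictions
    y_true = dtrain.get_label()
    y_pred = (preds > 0.5).astype(int)
    return 'f1', f1_score(y_true, y_pred)

Even if that signature is right, I don't know whether early stopping would treat a higher F1 as better or worse.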
Thanks!
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
