"Classification metrics can't handle a mix of continuous and binary targets" when setting a custom eval_metric with LGBMClassifier

My target going in, both y_train and y_eval, is binary int. What am I doing wrong?

I noticed the predictions coming out look like [0., 1., 0., ...], which is probably the culprit, but I'm not sure what is causing it.

lgbm_clf = lgbm.LGBMClassifier(
    objective="binary",
    random_state=1,
    n_estimators=10000,
    boosting="gbdt",
    is_unbalance=True,
    metric=None)

lgbm_clf.fit(
    X_train,
    y_train,
    eval_set=[(X_eval, y_eval)],
    eval_metric=evalerror,
    early_stopping_rounds=150)

preds = lgbm_clf.predict(X_eval)
print(f"LightGBM AUC on the evaluation set: {roc_auc_score(y_eval, preds):.5f}")
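(An aside on the AUC line above: `predict` on a sklearn-style classifier returns hard class labels, which is why the output looks like [0., 1., 0., ...]; `predict_proba` returns the scores that `roc_auc_score` is meant to rank. A minimal sketch, using scikit-learn's `LogisticRegression` as a stand-in since `LGBMClassifier` exposes the same interface and synthetic data just to make it runnable:)

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# toy data, only so the sketch is self-contained
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)

clf = LogisticRegression().fit(X, y)
labels = clf.predict(X)             # hard 0/1 class labels
proba = clf.predict_proba(X)[:, 1]  # P(y == 1), what roc_auc_score expects
auc = roc_auc_score(y, proba)       # AUC from scores, not hard labels
```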

My custom metric is:

def evalerror(preds, y_eval):
    tn, fp, fn, tp = confusion_matrix(y_eval, preds).ravel()
    return 'credimiMetric', (3*tp - 3*fp - 50*fn) * 1000, True

Any idea why I'm getting this error when fitting?

Classification metrics can't handle a mix of continuous and binary targets
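(For context: in LightGBM's sklearn API the custom `eval_metric` callable receives `(y_true, y_pred)`, and for a binary objective `y_pred` holds raw probabilities, so `confusion_matrix` is being fed continuous values. A hedged sketch of what a working version of the metric above might look like, assuming that signature; the 0.5 threshold is an illustrative choice, not anything LightGBM prescribes:)

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def evalerror(y_true, y_pred):
    # y_pred arrives as raw probabilities for objective="binary",
    # so binarize before computing the confusion matrix
    y_hat = (y_pred > 0.5).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_hat).ravel()
    return 'credimiMetric', (3*tp - 3*fp - 50*fn) * 1000, True
```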


Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
