XGBoost custom squarederror loss function does not match the default implementation
I've recently been trying to reimplement the default reg:squarederror objective for XGBoost regression, so that I can later modify it into an asymmetric loss. However, I can't get my custom version to produce the same results as the built-in implementation.
Here's the code I've been trying:
import xgboost as xgb
import numpy as np
import pandas as pd
a = np.array([1,2,3,4,5,6])
b = np.array([2,3,4,5,6,7])
a = pd.DataFrame(data=a)
b = pd.DataFrame(data=b)
model = xgb.XGBRegressor(random_state=0, objective='reg:squarederror')
model.fit(a, b)
print(model.predict(a))
def squared_error(predt: np.ndarray, dtrain: xgb.DMatrix):
    # Gradient and hessian of 0.5 * (predt - y)**2
    y = dtrain.get_label()
    grad = predt - y
    hess = np.ones(predt.shape)
    return grad, hess
dtrain = xgb.DMatrix(a.values, label=b.values)
dtest = xgb.DMatrix(a.values)
model2 = xgb.train({'seed': 0}, dtrain=dtrain, obj=squared_error)
print(model2.predict(dtest))
The problem is that the two models don't give the same results. Any ideas what's wrong with my code?
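For what it's worth, the gradient/hessian pair in the question does correspond to the squared-error objective 0.5 * (predt - y)**2, which is the form XGBoost's built-in reg:squarederror uses. That can be checked numerically with finite differences (a pure-NumPy sketch; the function and variable names here are mine, not from any XGBoost API):

```python
import numpy as np

def squared_error_grad_hess(predt, y):
    # Same formulas as in the question: first and second derivative
    # of 0.5 * (predt - y)**2 with respect to predt
    grad = predt - y
    hess = np.ones(predt.shape)
    return grad, hess

def loss(predt, y):
    return 0.5 * (predt - y) ** 2

y = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
predt = np.array([1.5, 2.0, 4.5, 5.0, 5.5, 8.0])
grad, hess = squared_error_grad_hess(predt, y)

# Central finite differences of the loss around predt
eps = 1e-6
num_grad = (loss(predt + eps, y) - loss(predt - eps, y)) / (2 * eps)
num_hess = (loss(predt + eps, y) - 2 * loss(predt, y) + loss(predt - eps, y)) / eps**2

print(np.allclose(grad, num_grad, atol=1e-5))   # analytic gradient matches
print(np.allclose(hess, num_hess, atol=1e-3))   # analytic hessian matches
```

So the objective itself looks correct, which points the search toward the training setup rather than the math.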
I've also tried the same thing with reg:squaredlogerror and the example from the documentation (https://xgboost.readthedocs.io/en/stable/tutorials/custom_metric_obj.html), which did give identical results for both models. That leads me to believe the problem is in my code.
I'd appreciate any help in finding my mistake.
-Timo
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
