A column-vector y was passed when a 1d array was expected
I need to fit RandomForestRegressor from sklearn.ensemble.
forest = ensemble.RandomForestRegressor(**RF_tuned_parameters)
model = forest.fit(train_fold, train_y)
yhat = model.predict(test_fold)
This code always worked until I did some preprocessing of the data (train_y).
The error message says:
DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().
model = forest.fit(train_fold, train_y)
Previously train_y was a Series; now it's a numpy array (a column vector). If I apply train_y.ravel(), it becomes a 1d array and the warning disappears, though the prediction step then takes a very long time (actually, it never finishes...).
In the RandomForestRegressor docs I found that y should be defined as: y : array-like, shape = [n_samples] or [n_samples, n_outputs]
Any idea how to solve this issue?
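For reference, the warning is easy to reproduce with a small hypothetical target array of shape (n, 1), which is not part of the original question's data:

```python
import numpy as np

# Hypothetical target stored as a column vector, shape (5, 1)
train_y = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
print(train_y.shape)   # (5, 1) -- this shape triggers the DataConversionWarning in fit()

# Flattening to shape (n,) is what sklearn expects for single-output y
flat_y = train_y.ravel()
print(flat_y.shape)    # (5,)
```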
Solution 1:[1]
Change this line:
model = forest.fit(train_fold, train_y)
to:
model = forest.fit(train_fold, train_y.values.ravel())
Explanation:
.values gives the values as a numpy array (shape: (n, 1))
.ravel() flattens that array to shape (n,)
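A quick shape check of the .values.ravel() chain, using a made-up single-column DataFrame in place of the asker's train_y:

```python
import pandas as pd

# Hypothetical single-column DataFrame standing in for train_y
train_y = pd.DataFrame({"target": [0.5, 1.5, 2.5]})

values = train_y.values        # numpy array, shape (3, 1)
flat = train_y.values.ravel()  # flattened to shape (3,)
print(values.shape, flat.shape)   # (3, 1) (3,)
```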
Solution 2:[2]
I also encountered this situation when I was trying to train a KNN classifier, but the warning went away after I changed:
knn.fit(X_train, y_train)
to:
knn.fit(X_train, np.ravel(y_train, order='C'))
Before this line I added import numpy as np.
Solution 3:[3]
I had the same problem. The labels were in column format while the method expected a 1d array.
Use np.ravel():
knn.score(training_set, np.ravel(training_labels))
Hope this solves it.
Solution 4:[4]
Use the code below:
model = forest.fit(train_fold, train_y.ravel())
If you still get an error like the one below:
Unknown label type: %r" % y
use this code instead:
y = train_y.ravel()
train_y = np.array(y).astype(int)
model = forest.fit(train_fold, train_y)
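A sketch of that two-step fix (flatten, then cast the labels to int), using a hypothetical column vector of float-typed class labels rather than the asker's actual data:

```python
import numpy as np

# Hypothetical float-typed labels stored as a column vector, shape (4, 1)
train_y = np.array([[0.0], [1.0], [1.0], [0.0]])

y = train_y.ravel()                # flatten to shape (4,)
train_y = np.array(y).astype(int)  # cast to int so classifiers see discrete labels
print(train_y)                     # [0 1 1 0]
```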
Solution 5:[5]
Y = y.values[:,0]
where Y is the formatted train_y and y is the original train_y.
Solution 6:[6]
Another way of doing this is to use reshape(-1,), which produces the same flat array as ravel:
model = forest.fit(train_fold, train_y.values.reshape(-1,))
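For a (n, 1) array, reshape(-1,) and ravel() give identical results, as this small check with made-up data shows:

```python
import numpy as np

col = np.array([[10], [20], [30]])  # shape (3, 1)

# Both calls flatten the column vector to shape (3,)
assert np.array_equal(col.reshape(-1,), col.ravel())
print(col.reshape(-1,).shape)       # (3,)
```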
Solution 7:[7]
With Neuraxle, you can easily solve this:
p = Pipeline([
    # expected outputs shape: (n, 1)
    OutputTransformerWrapper(NumpyRavel()),
    # expected outputs shape: (n,)
    RandomForestRegressor(**RF_tuned_parameters),
])
p, outputs = p.fit_transform(data_inputs, expected_outputs)
Neuraxle is a sklearn-like framework for hyperparameter tuning and AutoML in deep learning projects!
Solution 8:[8]
This builds a flat Python list by taking the first element of each row of the column vector:
format_train_y = []
for n in train_y:
    format_train_y.append(n[0])
Solution 9:[9]
TL;DR
use
y = np.squeeze(y)
instead of
y = y.ravel()
While ravel() may be a valid way to achieve the desired result in this particular case, I would recommend using numpy.squeeze() instead.
The problem is that if the shape of your y (numpy array) is e.g. (100, 2), then y.ravel() will flatten both columns into a single axis, resulting in shape (200,). This is probably not what you want when the columns are independent variables that have to be regarded on their own.
On the other hand, numpy.squeeze() will just trim any redundant dimensions (i.e. which are of size 1). So, if your numpy array's shape is (100, 1), this will result in an array of shape (100,), whereas the result for a numpy array of shape (100, 2) will not change, as none of the dimensions have size 1.
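The difference can be seen with two small made-up arrays, one of shape (100, 1) and one of shape (100, 2):

```python
import numpy as np

y1 = np.zeros((100, 1))
# Both flatten a true column vector the same way
print(np.ravel(y1).shape, np.squeeze(y1).shape)   # (100,) (100,)

y2 = np.zeros((100, 2))
# ravel merges the two columns; squeeze leaves the array untouched
print(np.ravel(y2).shape, np.squeeze(y2).shape)   # (200,) (100, 2)
```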
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | mirekphd |
| Solution 2 | sorak |
| Solution 3 | Pramesh Bajracharya |
| Solution 4 | |
| Solution 5 | TheFunSideofData |
| Solution 6 | sushmit |
| Solution 7 | |
| Solution 8 | Dharman |
| Solution 9 | Marcel H. |
