I want to use Kernel PCA on a huge dataset of shape (100000, 1000). It runs, but it consumes a huge amount of memory and gets killed.
from sklearn.decomposition import KernelPCA

reducer = KernelPCA(eigen_solver='auto', n_components=260, kernel='rbf')
reducer.fit(X_train)
X_reduced = reducer.transform(X_train)
X_train is a dataset of shape (100000, 1000). The snippet above is a sample of how I'm using KernelPCA. Is there some way I can achieve the reduction without using a huge amount of memory?
The program is run on Linux, and after it is killed Linux reports an out-of-memory error.
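Exact KernelPCA materializes the full 100000 x 100000 kernel matrix, which is what exhausts memory here. One common workaround, sketched below rather than taken from the question, is scikit-learn's `Nystroem` transformer: it approximates the RBF kernel feature map using a small set of landmark samples, after which ordinary linear `PCA` can be applied. The dataset below is a small random stand-in; the parameter values are illustrative assumptions, not tuned settings.

```python
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline

# Small random stand-in for the real (100000, 1000) dataset.
rng = np.random.RandomState(0)
X_train = rng.rand(1000, 50)

# Approximate the RBF kernel feature map with 300 landmark points,
# then run linear PCA in that approximate feature space. Memory now
# scales with n_samples * n_landmarks instead of n_samples ** 2.
feature_map = Nystroem(kernel='rbf', n_components=300, random_state=0)
reducer = make_pipeline(feature_map, PCA(n_components=20))

X_reduced = reducer.fit_transform(X_train)
print(X_reduced.shape)  # (1000, 20)
```

On the full dataset, `Nystroem` with a few hundred landmarks keeps the intermediate representation at roughly 100000 x 300 floats, which fits comfortably in memory where the exact 100000 x 100000 kernel matrix does not.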
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
