KFold Cross-Validation or Train/Test Split when using transfer learning with fine-tuning?
I'm wondering which is more appropriate when doing transfer learning with fine-tuning: K-fold cross-validation (train and validation folds drawn from the training set, with a held-out test set evaluated only after fine-tuning), or a simple train/test split where I train the model and then test it once. The goal is an accurate estimate of the model's performance.
Let's assume there is only just enough data overall (fewer than 8,000 samples) and that data augmentation is also applied in the new model on top of the pretrained base.
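To make the K-fold option concrete, here is a minimal sketch of the split structure being asked about: a test set is held out first and never touched during fine-tuning, and K-fold cross-validation is run only on the remaining data. The dataset, the 80/20 split, and the commented-out `fine_tune_model`/`evaluate` calls are hypothetical placeholders, not part of the original question.

```python
import numpy as np
from sklearn.model_selection import KFold, train_test_split

# Hypothetical dataset of ~8000 samples (features and labels are placeholders).
rng = np.random.default_rng(0)
X = np.arange(8000).reshape(-1, 1)
y = rng.integers(0, 2, size=8000)

# Hold out the test set first; it is only used once, after fine-tuning.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# K-fold cross-validation on the remaining data: each fold serves as the
# validation set exactly once while the other folds are used for training.
kf = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(kf.split(X_trainval)):
    X_train, X_val = X_trainval[train_idx], X_trainval[val_idx]
    y_train, y_val = y_trainval[train_idx], y_trainval[val_idx]
    # fine_tune_model(X_train, y_train)  # hypothetical: load pretrained base,
    # evaluate(X_val, y_val)             # freeze layers, train head, validate
```

With 8,000 samples this yields a 1,600-sample test set and five validation folds of 1,280 samples each; data augmentation would be applied only to the training folds inside the loop, never to the validation or test data.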
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
