Is k-fold cross-validation a smarter idea than using a validation set?

I have a somewhat large (~2000 images) set of medical images that I plan to use to train a CV model (using the EfficientNet architecture) at my workplace. In preparation, I have been reading up on good practices for training on medical images. I split the dataset by patient to prevent leakage, dividing it into train:test:val in the ratio 60:20:20. However, I read that k-fold cross-validation is a more modern practice than using a single validation set, yet I was advised against it because k-fold is supposedly far more complicated. What would you recommend in this instance, and are there any other good practices to adopt?
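To make the patient-level split concrete, here is a minimal sketch of how images can be grouped into k folds by patient, so that no patient's images leak across folds. The function and the sample IDs are hypothetical illustrations, not from the question; in practice `sklearn.model_selection.GroupKFold` does the same thing with `groups=patient_ids`.

```python
from collections import defaultdict

def patient_kfold(samples, patient_of, k=5):
    """Assign each *patient* (not each image) to one of k folds,
    so all images from a given patient land in the same fold.

    samples:    list of image identifiers
    patient_of: dict mapping image identifier -> patient identifier
    """
    # Round-robin patients over the k folds (a deliberately simple scheme;
    # a real split might also stratify by label or balance fold sizes).
    patients = sorted({patient_of[s] for s in samples})
    fold_of_patient = {p: i % k for i, p in enumerate(patients)}

    folds = defaultdict(list)
    for s in samples:
        folds[fold_of_patient[patient_of[s]]].append(s)
    return [folds[i] for i in range(k)]

# Hypothetical data: 10 images from 5 patients, 2 images each.
samples = [f"img{i}" for i in range(10)]
patient_of = {s: f"p{i // 2}" for i, s in enumerate(samples)}
folds = patient_kfold(samples, patient_of, k=5)
```

The key property, whether you use a fixed validation set or k-fold, is the same one the question already applies: the split boundary runs between patients, never between images of one patient.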



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow
