How to implement cross-validation training to improve a PyTorch CNN

I am currently working on a PyTorch CNN for indoor positioning, and while building the training loop I read about cross-validation training as a way to avoid overfitting and implemented it.

My loop looks similar to this example training loop, but I forgot to reset the weights of my model for each fold (each fold only resets the optimizer). This training loop yielded better results than normal batch training: normal batch training resulted in an average error of 3 m (loss ~10), while my (incorrect) cross-validation gave an error of 0.3 m (loss ~0.1).

Now I have two questions I find hard to answer:

  1. Why does the incorrect implementation of cross-validation outperform batch training?
  2. Is there a better training loop that reproduces the improvement of my incorrect implementation, for example by introducing new parts of the dataset at given points during training?
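Since the fix the question describes amounts to re-creating the model (and its optimizer) at the start of every fold, a minimal sketch of that loop structure may help. This is not the asker's code: the Model class and the commented-out train/evaluate calls are stand-ins for the real CNN and training routine; only the fold structure and the per-fold re-initialization are the point.

```python
# Sketch of a k-fold loop where the model is re-initialized per fold.
import random

def kfold_indices(n, k, seed=0):
    """Yield (train_indices, val_indices) for each of k folds over n samples."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]          # round-robin split into k folds
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val

class Model:
    """Stand-in for the CNN: constructing it gives fresh random weights."""
    def __init__(self):
        self.weights = [random.gauss(0.0, 1.0) for _ in range(4)]

def cross_validate(data, k=5):
    fold_losses = []
    for train_idx, val_idx in kfold_indices(len(data), k):
        model = Model()          # fresh weights: no carry-over between folds
        # optimizer = make_optimizer(model)    # hypothetical: re-create with new params
        # train_one_fold(model, optimizer, [data[i] for i in train_idx])
        # fold_losses.append(evaluate(model, [data[i] for i in val_idx]))
        fold_losses.append(0.0)  # placeholder so the sketch runs as-is
    return sum(fold_losses) / k
```

Without the `Model()` re-creation inside the loop, weights trained in earlier folds survive into later folds, whose validation sets overlap those earlier training sets, so the measured validation error is optimistically biased rather than a true generalization estimate.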


Solution 1:[1]

I was able to get the results I needed using the rex command instead:

rex field=message.log "(?<Result>(CCC\-(\S)+\-\d{4,5}))"
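For testing the same extraction outside Splunk, the pattern can be checked with Python's re module; note that Python spells the named group (?P<Result>...) rather than (?<Result>...). The log line below is a made-up example, not from the original question:

```python
# Python equivalent of the Splunk rex extraction above.
# (?<Result>...) in Splunk becomes (?P<Result>...) in Python's re module.
import re

pattern = re.compile(r"(?P<Result>CCC\-\S+\-\d{4,5})")

# Hypothetical log line for illustration.
m = pattern.search("2023-01-01 ingest id=CCC-ABC-1234 ok")
print(m.group("Result"))  # -> CCC-ABC-1234
```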

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1: MSkiLLz