Overfitting? #670
-
How do I deal with overfitting? Is there a way to calculate the loss on a validation dataset?
-
I think overfitting here refers to how well the model renders unseen views of the same scene, not how it generalizes to other datasets. To prevent it, the most effective approach is to increase the number of training viewpoints and distribute them as evenly as possible in 3D space.
-
Thanks for the answer. Let me try to formulate my question a bit differently: traditionally, we have two types of losses - the training loss and the validation loss. The training loss decreases monotonically during training, but at some point the validation loss starts to increase, and that is the signal to stop training to prevent overfitting. If I don't have a validation loss, how do I know the optimal number of iterations?
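To make the stopping criterion I mean concrete, here is a minimal sketch, assuming a small set of held-out validation views and a hypothetical `render_view(model, pose)` full-image renderer (both are placeholders, not part of this repo):

```python
import torch

@torch.no_grad()
def validation_loss(model, val_views, render_view):
    """Mean MSE over held-out (pose, ground-truth image) pairs."""
    total = 0.0
    for pose, gt_image in val_views:
        pred = render_view(model, pose)  # hypothetical full-image renderer
        total += torch.mean((pred - gt_image) ** 2).item()
    return total / len(val_views)

def should_stop(val_history, patience=5):
    """Stop once validation loss has not improved for `patience` evaluations."""
    if len(val_history) <= patience:
        return False
    best_earlier = min(val_history[:-patience])
    return min(val_history[-patience:]) >= best_earlier
```

The idea would be to call `validation_loss` every few thousand iterations, append the result to a history list, and stop (or keep the best checkpoint) once `should_stop` returns True.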
-
You need a test set or validation set to quantitatively evaluate the PSNR of the rendered views. But it seems to me that NeRF does not really overfit in practice: the longer the training, the better the rendering.
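For reference, PSNR follows directly from the per-image MSE when pixel values are scaled to [0, 1]; a minimal sketch, where `pred` and `gt` are assumed to come from whatever renderer and data loader you use:

```python
import torch

def psnr(pred: torch.Tensor, gt: torch.Tensor) -> float:
    """PSNR in dB for image tensors with values in [0, 1]."""
    mse = torch.mean((pred - gt) ** 2)
    return (-10.0 * torch.log10(mse)).item()
```

Following the usual NeRF evaluation protocol, this is computed on held-out test views rather than on the training views.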