Why is x_test run using inference_mode, and not put through the usual process such as backprop, optimizer step, and so on? #215
-
In your tutorial, there is a part where you run predictions on X_test before the model has been trained on the train dataset. How can the subclass model make predictions at that point, and why is the X_test prediction run in inference_mode (which skips the typical training process such as optimizing, backprop, and the optimizer step)?
Replies: 1 comment 1 reply
-
When a model is initialized, random weights are assigned, and based on those particular weights it will make a prediction on X_test. Since it is not trained yet, it will output some incorrect values. Validation and testing should be done in inference mode because we are not concerned with gradients there. During training, gradients need to be computed, and torch does this with the help of autograd; during testing there is no need for them, so we perform the forward pass in inference mode. Since we don't train on the test data, there are no steps such as optimizer.zero_grad(), loss.backward(), and optimizer.step().
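Here is a minimal sketch of the difference (the toy data, model, and variable names are illustrative, not taken from the tutorial):

```python
import torch
from torch import nn

# Hypothetical toy data (not from the tutorial)
X_train, y_train = torch.rand(80, 1), torch.rand(80, 1)
X_test = torch.rand(20, 1)

model = nn.Linear(in_features=1, out_features=1)  # weights are randomly initialized

# Predicting on X_test before any training: no gradients are needed,
# so we wrap the forward pass in inference_mode. The outputs will be
# poor because the weights are still random.
with torch.inference_mode():
    untrained_preds = model(X_test)

# A single training step, by contrast, needs the full gradient machinery.
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

model.train()
y_pred = model(X_train)          # forward pass (autograd tracks operations)
loss = loss_fn(y_pred, y_train)  # compute loss
optimizer.zero_grad()            # clear old gradients
loss.backward()                  # backprop: compute gradients via autograd
optimizer.step()                 # update the weights

# Evaluation after training again skips gradient tracking.
model.eval()
with torch.inference_mode():
    test_preds = model(X_test)
```

The forward pass works the same way in both cases; inference_mode simply tells autograd not to track operations, which saves memory and time when no backward pass will follow.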