Why do we need to calculate loss in the training loop? We can calculate it in the eval loop. #170
-
My question is: why do we need to calculate the loss (`loss = loss_fn(y_pred, y)`) in the training loop? We can still train the model by skipping Step #2 completely, right? And besides, we can calculate the loss during `eval()` mode. So what's the point of knowing the loss value during the training loop? What am I missing? BTW, I really, really want to thank you for this course. This kind of effort and hard work should be rewarded, so I bought the course on Udemy!
Replies: 1 comment
-
There is another line, `loss.backward()`, that uses exactly the result of `loss_fn(y_pred, y)`. When you call `loss_fn(y_pred, y)`, it computes the scalar loss and builds the computation graph linking it back to the model's parameters. When you then call `loss.backward()`, it looks at every parameter you created with `requires_grad=True`, calculates the gradient of the loss with respect to each of them, and saves it. The gradient itself gives info about which direction to move in order to decrease the loss. So by `optimizer.step()` you change the parameters and make a courageous step towards a bright future )))