Why do we need to calculate loss in the training loop? We can calculate this in the eval loop. #170

Answered by ghost
fivefishstudios asked this question in Q&A

There is another line, loss.backward(), that uses the result of loss_fn(y_pred, y) directly.

When you call loss_fn(y_pred, y), it computes the loss and, along the way, autograd records how each parameter influenced the result. When you then call loss.backward(), it looks for every parameter with requires_grad = True, calculates the gradient of the loss with respect to each one, and saves it in that parameter's .grad attribute.
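Here is a minimal sketch of that mechanism (a toy example, not code from this repo): one parameter with requires_grad=True, a squared-error loss standing in for loss_fn, and a backward() call that fills in the gradient.

```python
import torch

# One trainable parameter; autograd tracks operations on it.
w = torch.tensor(2.0, requires_grad=True)
x = torch.tensor(3.0)
y = torch.tensor(7.0)

y_pred = w * x            # forward pass: autograd records this op
loss = (y_pred - y) ** 2  # squared error, standing in for loss_fn

loss.backward()           # compute d(loss)/dw and save it in w.grad
print(w.grad)             # tensor(-6.) since 2*(w*x - y)*x = 2*(6-7)*3 = -6
```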

The gradient itself tells you which direction to move each parameter in order to decrease the loss. So with optimizer.step() you change the parameters and make a courageous step towards a bright future )))
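Putting it together, one step of a typical PyTorch training loop looks like this (a sketch with an assumed toy model, data, and learning rate, just to show the order of the calls):

```python
import torch
from torch import nn

model = nn.Linear(1, 1)                  # assumed toy model
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

X = torch.randn(8, 1)                    # assumed toy data
y = 3 * X + 1

y_pred = model(X)                        # forward pass
loss = loss_fn(y_pred, y)                # this value is what backward() needs
optimizer.zero_grad()                    # clear gradients from the previous step
loss.backward()                          # compute gradients for params with requires_grad=True
optimizer.step()                         # update parameters to decrease the loss
```

In the eval loop there is no backward() or step(), so computing the loss there is only for monitoring; the loss in the training loop is what actually drives the parameter updates.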

Answer selected by fivefishstudios