Loss vs Accuracy: Binary Classification #934
Replies: 2 comments
-
Just a preliminary observation that may help you find a solution: since your accuracy is 100% on the training dataset, it is quite possible that your model is fitting unrelated features (or noise) rather than the underlying pattern. (To learn more, look up "overfitting and underfitting".) A quick check is to compare train and test loss, as in the sketch below.
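A minimal sketch of that check, assuming you already have a trained `model`, a `loss_fn`, and the split tensors from your notebook (all names here are placeholders):

```python
import torch

def overfitting_gap(model, loss_fn, X_train, y_train, X_test, y_test):
    """Compare train vs test loss: a large gap suggests the model is
    memorising noise instead of learning the underlying pattern."""
    model.eval()
    with torch.inference_mode():
        train_loss = loss_fn(model(X_train).squeeze(), y_train).item()
        test_loss = loss_fn(model(X_test).squeeze(), y_test).item()
    print(f"train loss: {train_loss:.5f} | test loss: {test_loss:.5f} | "
          f"gap: {test_loss - train_loss:.5f}")
```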
-
It is a typical overfitting issue. The different accuracy on different runs comes from not setting a random seed, so the train/test split (and the model's initial weights) differ every time; fixing the seeds makes each run reproducible, as in the sketch below.
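For example, a minimal sketch (the `make_circles` data and the seed value 42 are illustrative stand-ins for your own setup):

```python
import torch
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split

# Fixing both seeds makes the train/test split and the model's initial
# weights identical across runs, so accuracy stops varying between runs.
torch.manual_seed(42)

X, y = make_circles(n_samples=1000, noise=0.03, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42  # same split every run
)
```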
-
Thanks, Daniel Bourke, for this great tutorial.
You challenged us to improve the accuracy of the binary classification model to 80% (12:33:03). I was able to achieve 100% accuracy (Epoch: 3900 | Loss: 0.10130, Accuracy: 100.00% | Test Loss: 0.13714, Test Accuracy: 100.00%).
During my experiments, I noticed that while accuracy was improving, the loss was not decreasing (e.g., Loss: 0.49138, Accuracy: 93.50% | Test Loss: 0.50832, Test Accuracy: 89.50%). Could you please help me understand why this happens? (A sketch illustrating the difference between the two metrics follows this comment.)
I am also quite frustrated that the model gives different accuracy results on different runs.
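To illustrate why the two metrics can decouple: accuracy only counts whether each prediction lands on the correct side of the 0.5 threshold, while a cross-entropy loss also measures how confident the predicted probabilities are, so barely-correct predictions yield perfect accuracy but still leave the loss high. A minimal sketch, assuming `BCEWithLogitsLoss` as the loss function (the logit values are made up for illustration):

```python
import torch
from torch import nn

loss_fn = nn.BCEWithLogitsLoss()
labels = torch.tensor([1., 1., 0., 0.])

# Barely-correct logits: every prediction is on the right side of the
# 0.5 threshold, so accuracy is 100%, but confidence is low -> high loss.
barely = torch.tensor([0.1, 0.2, -0.1, -0.2])
# Confident logits: same 100% accuracy, probabilities near 0/1 -> low loss.
confident = torch.tensor([4.0, 5.0, -4.0, -5.0])

def accuracy(logits, labels):
    preds = (torch.sigmoid(logits) > 0.5).float()
    return (preds == labels).float().mean().item() * 100

for name, logits in [("barely correct", barely), ("confident", confident)]:
    print(f"{name}: accuracy = {accuracy(logits, labels):.1f}% | "
          f"loss = {loss_fn(logits, labels).item():.5f}")
```

Both sets of logits score 100% accuracy, yet their losses differ by more than an order of magnitude: accuracy can plateau at a high value while the loss still has plenty of room to fall.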