Eager-style debugging #22
Note: We have a first implementation of this now. See test_nn_debug_eager_mode.py for examples. It actually uses TF eager mode. The changes on the RETURNN side are minor so far. For `data = nn.get_extern_data(...)`, the placeholder is reset to a concrete tensor:

```
data.data.placeholder = ...  # reset
```

Or alternatively, the user can simply provide it directly. It seems to work fine, at least so far. E.g. control flow logic (…
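To illustrate the pattern described above with a minimal, self-contained sketch (the class `DataRef` and its `set_value` method are invented for illustration; the actual returnn_common / RETURNN API differs): a data reference starts out symbolic, and resetting its backing value to a concrete array lets downstream code compute immediately.

```python
# Hypothetical sketch, not the returnn_common implementation:
# a data reference whose backing value can be reset to a concrete
# value, so that later computations run eagerly.
class DataRef:
    def __init__(self, name):
        self.name = name
        self.value = None  # analogous to data.data.placeholder being symbolic

    def set_value(self, value):
        # Replace the symbolic placeholder with a concrete value.
        self.value = value


data = DataRef("data")
data.set_value([1.0, 2.0, 3.0])

# With a concrete value attached, a "layer" can compute right away:
doubled = [x * 2 for x in data.value]
print(doubled)  # -> [2.0, 4.0, 6.0]
```

The point is only that once the input holds a concrete value, every subsequent module call can be evaluated at the Python call site, which is what makes stepping through the model in a debugger useful.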
I guess we can close this for now.
The way models are defined with the PyTorch-style API is oriented toward a simple mental model for the user, which allows eager-like thinking and reasoning about the code and model definitions. This holds even for recurrent definitions (#16).
For debugging purposes, it would be helpful to also allow eager execution. This would be an opt-in option, not the default (the default would remain graph mode), because eager execution would be far less efficient. The observable behavior of the code should not change at all.
This should be technically possible, though, because for all definitions / module calls, all values can be calculated at the time the Python code is called. Some details of how we do this internally still need to be sorted out, and it is not clear yet which way is easiest. E.g. what to do with tf.placeholder (…).
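The claim that all values can be computed at module-call time can be sketched with a single dispatch point that either records each op into a graph (the default) or computes it immediately (eager). This is a hypothetical illustration, assuming invented names (`call_op`, `EAGER`, `GRAPH`), not the actual RETURNN mechanism:

```python
# Hypothetical sketch: one dispatch point per module call that either
# records the op into a graph (default, graph mode) or computes the
# value immediately (opt-in eager mode, for debugging).
EAGER = False
GRAPH = []  # ops recorded in graph mode, executed later


def call_op(fn, *args):
    if EAGER:
        return fn(*args)         # compute now, value visible in the debugger
    GRAPH.append((fn, args))     # record for later execution
    return None                  # stands in for a symbolic tensor


def add(a, b):
    return a + b


# Graph mode: nothing is computed yet, the call is only recorded.
result = call_op(add, 1, 2)
assert result is None
assert GRAPH[0][1] == (1, 2)

# Eager mode: the value is available right at the call site.
EAGER = True
print(call_op(add, 1, 2))  # -> 3
```

Since every module call already goes through such a central point when building the graph, switching that point to compute values immediately changes efficiency but not the user-visible model definition, which is why eager execution can stay purely optional.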