feat: add universal one hot feature and sliding window for rop in iter mode #231
base: main
Conversation
1. Add a universal one-hot feature. It only supports lem and slem; support for se2 is dropped because se2 does not have atom types.
2. Add a sliding window for the loss monitor, for the case of rop combined with the update-lr-per-step scheme.
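For context on item 2, a minimal sketch of the sliding-window idea: ReduceLROnPlateau is stepped every iteration on a windowed average of recent losses instead of a single noisy batch loss. Everything below (the window size of 50, the toy model, the scheduler settings) is an illustrative assumption, not the DeePTB implementation.

```python
# Sketch only: smooth per-iteration losses with a sliding window before
# feeding them to ReduceLROnPlateau, which expects a metric per step() call.
from collections import deque

import torch

win = deque(maxlen=50)  # assumed window size
model = torch.nn.Linear(4, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5, patience=100)

for it in range(1000):
    x, y = torch.randn(8, 4), torch.randn(8, 1)
    loss = torch.nn.functional.mse_loss(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    win.append(loss.item())
    # step the plateau scheduler every iteration on the windowed average
    scheduler.step(sum(win) / len(win))
```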
dptb/nn/embedding/lem.py
Outdated
@@ -110,7 +114,7 @@ def __init__(
        self.sh = SphericalHarmonics(
            irreps_sh, sh_normalized, sh_normalization
        )
-       self.onehot = OneHotAtomEncoding(num_types=self.n_atom, set_features=False)
+       self.onehot = OneHotAtomEncoding(num_types=self.n_atom, set_features=False, idp=self.idp, universal_flag=universal_flag)
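To illustrate what the new universal_flag is getting at (an editor's sketch under assumptions, not the actual OneHotAtomEncoding code): a dataset-local one-hot encodes atoms over the types present in the training set, while a "universal" encoding uses a fixed element vocabulary so the embedding stays consistent across datasets. The 118-element vocabulary and the helper name below are hypothetical.

```python
# Hypothetical sketch of the idea behind universal_flag.
import torch

def one_hot_atoms(atomic_numbers: torch.Tensor, num_types: int, universal: bool = False) -> torch.Tensor:
    if universal:
        # fixed-size encoding over all elements (118 assumed here), keyed by atomic number
        return torch.nn.functional.one_hot(atomic_numbers - 1, num_classes=118).float()
    # dataset-local encoding: indices already remapped to 0..num_types-1
    return torch.nn.functional.one_hot(atomic_numbers, num_classes=num_types).float()

# e.g. a dataset containing only two atom types
print(one_hot_atoms(torch.tensor([0, 1]), num_types=2).shape)                   # torch.Size([2, 2])
print(one_hot_atoms(torch.tensor([1, 6]), num_types=2, universal=True).shape)   # torch.Size([2, 118])
```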
Should we remove the _flag suffix? Other parameters do not have this suffix format, so I think we should be consistent.
It will be changed to universal.
dptb/nnops/trainer.py
Outdated
@@ -131,8 +131,10 @@ def iteration(self, batch, ref_batch=None):
        #TODO: add clip large gradient
        self.optimizer.step()
        if self.update_lr_per_step_flag:
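For readers unfamiliar with the per-step scheme, a small sketch of the difference between stepping a scheduler once per epoch and once per optimizer step. The attribute and flag names here are illustrative, not the trainer's actual API.

```python
# Sketch only: per-iteration vs. per-epoch learning-rate updates.
import torch

model = torch.nn.Linear(2, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.999)
update_lr_per_step = True  # hypothetical flag, mirrors update_lr_per_step_flag

for epoch in range(3):
    for batch in range(10):
        optimizer.zero_grad()
        loss = model(torch.randn(4, 2)).pow(2).mean()
        loss.backward()
        optimizer.step()
        if update_lr_per_step:      # decay after every optimizer step
            scheduler.step()
    if not update_lr_per_step:      # conventional per-epoch decay
        scheduler.step()
```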
Same comment for this flag.
Okay, it will be changed to update_lr_per_iter.
dptb/nnops/trainer.py
Outdated
if isinstance(self.lr_scheduler, torch.optim.lr_scheduler.ReduceLROnPlateau):
    self.lr_scheduler.step(self.stats["train_loss"]["epoch_mean"])
    if 'latest_avg_iter_loss' in self.stats['train_loss'].keys() and self.iter > 0:
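A hedged sketch of the branch being discussed (the stats layout is assumed from the diff above, not taken from the real trainer): step ReduceLROnPlateau on the windowed iteration loss when it has been recorded, otherwise fall back to the epoch mean.

```python
# Sketch only: which metric to hand to ReduceLROnPlateau.
def step_rop(lr_scheduler, stats, iteration):
    train_loss = stats["train_loss"]
    if "latest_avg_iter_loss" in train_loss and iteration > 0:
        # smoothed per-iteration loss recorded by the monitor's sliding window
        lr_scheduler.step(train_loss["latest_avg_iter_loss"])
    else:
        # conventional per-epoch mean
        lr_scheduler.step(train_loss["epoch_mean"])
```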
What if latest_avg_iter_loss is not in the train_loss?
This condition is effectively always true. I have added a function to the argument check to ensure that when updating the learning rate per iteration is enabled and the lr_scheduler is rop, this flag is set to true.
https://github.com/Franklalalala/DeePTB/blob/1d4dffe68b738c53de27e1c10c87501c4d7d12a2/dptb/utils/argcheck.py#L16
https://github.com/Franklalalala/DeePTB/blob/1d4dffe68b738c53de27e1c10c87501c4d7d12a2/dptb/entrypoints/train.py#L210
I'll remove this part.
dptb/plugins/monitor.py
Outdated
@@ -13,7 +14,7 @@
class Monitor(Plugin):

    def __init__(self, running_average=True, epoch_average=True, smoothing=0.7,
-                precision=None, number_format=None, unit=''):
+                precision=None, number_format=None, unit='', sliding_win_size=50, avg_iter_loss_flag=False):
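As a rough illustration of how the two new parameters could behave inside a monitor (the Plugin machinery is omitted; the class below is hypothetical, not the DeePTB Monitor):

```python
# Hypothetical sketch: a monitor keeps a fixed-size window of recent values
# and exposes their mean when per-iteration averaging is enabled.
from collections import deque

class LossWindow:
    def __init__(self, sliding_win_size=50, avg_per_iter=False):
        self.avg_per_iter = avg_per_iter
        self.window = deque(maxlen=sliding_win_size)

    def update(self, value: float) -> None:
        if self.avg_per_iter:
            self.window.append(value)

    @property
    def latest_avg_iter_loss(self) -> float:
        # mean over the window; guard against an empty window
        return sum(self.window) / max(len(self.window), 1)
```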
Here Monitor is a general class for monitoring numeric values, so we should give this flag a general name, like "avg_per_iter" or something similar.
It will be changed to avg_per_iter.
dptb/utils/argcheck.py
Outdated
@@ -12,6 +12,14 @@
    'dptb-hopping_net_neuron', 'dptb-env_net_neuron', 'dptb-soc_net_neuron', 'dptb-onsite_net_neuron', 'dptb-axis_neuron', 'skfunction-skformula', 'sknetwork-sk_onsite_nhidden',
    'sknetwork-sk_hop_nhidden']

# set default values in case of rop & update lr per step
def chk_avg_iter_loss_flag(jdata):
    if jdata["train_options"]["lr_scheduler"]["type"] == 'rop' and jdata["train_options"]["update_lr_per_step_flag"]:
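A hedged reconstruction of what a check like this typically does (the key names follow the diff; the defaults it writes are assumptions, not the real argcheck behavior):

```python
# Sketch only: when the scheduler is ReduceLROnPlateau ("rop") and the learning
# rate is updated every iteration, force the loss monitor to keep a sliding-window
# average so the scheduler receives a smoothed metric.
def chk_avg_iter_loss_flag(jdata: dict) -> dict:
    train_options = jdata["train_options"]
    if (train_options["lr_scheduler"]["type"] == "rop"
            and train_options.get("update_lr_per_step_flag", False)):
        train_options.setdefault("sliding_win_size", 50)   # assumed default
        train_options["avg_iter_loss_flag"] = True
    return jdata
```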
Does the "step" in update_lr_per_step_flag refer to an iteration? Please make it explicit; the meaning of "step" is ambiguous.
It will be changed to update_lr_per_iter.