Hi,

When I create the transformer encoder with PyTorch's built-in layer and view it with a torchinfo summary, I get a different-looking output where I cannot see the layers, even though the layer itself seems to match as expected.
My code and outputs:
```python
from torch import nn

# Create the same as above with torch.nn.TransformerEncoderLayer()
torch_transformer_encoder_layer = nn.TransformerEncoderLayer(
    d_model=768,           # Hidden size D from Table 1 for ViT-Base
    nhead=12,              # Heads from Table 1 for ViT-Base
    dim_feedforward=3072,  # MLP size from Table 1 for ViT-Base
    dropout=0.1,           # Amount of dropout for dense layers from Table 3 for ViT-Base
    activation="gelu",     # GELU non-linear activation
    batch_first=True,      # Do our batches come first?
    norm_first=True)       # Normalize first or after MSA/MLP layers?
torch_transformer_encoder_layer
```
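(As a quick sanity check that the sub-layers really exist, here is a minimal sketch, not from the original post, that lists the layer's child modules:)

```python
# List the sub-modules of the encoder layer to confirm they exist
# (expects entries such as self_attn, linear1, linear2, norm1, norm2, dropout)
for name, module in torch_transformer_encoder_layer.named_children():
    print(name, "->", module.__class__.__name__)
```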
The above looks correct when following along with the video, and it looks like it matches the book as well.
But when I do the following to get the summary:
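(The exact call did not survive the page load; below is a minimal sketch of a typical torchinfo call, where the input shape `(1, 197, 768)` is an assumption for ViT-Base, i.e. batch size 1, 196 patches plus a class token, 768-dim embedding:)

```python
from torchinfo import summary

# Summarize the PyTorch-built encoder layer
# (input_size here is an assumption, not taken from the original post)
summary(model=torch_transformer_encoder_layer,
        input_size=(1, 197, 768),
        col_names=["input_size", "output_size", "num_params", "trainable"],
        col_width=20,
        row_settings=["var_names"])
```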
This does not show the individual layers, and the reported sizes also seem very small.
All the other summary views I have tried up to this point have worked fine. Any ideas why?
Here are the versions from Google Colab:
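(The version output itself did not survive the page load; a minimal sketch of how such versions are usually printed in Colab:)

```python
import importlib.metadata
import torch

# Print library versions for reproducibility
print("torch:", torch.__version__)
print("torchinfo:", importlib.metadata.version("torchinfo"))
```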