Create real_time_encoder_transformer.py #13655
base: master
Conversation
Created a real-time, encoder-only transformer model that uses Time2Vec as the positional encoding, along with a generalised classifier layer, for modelling real-time data such as EEG.
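The Time2Vec idea at the heart of this PR can be sketched in a few lines of NumPy (forward pass only; the names and shapes here are illustrative assumptions, not the PR's exact code):

```python
import numpy as np

# Time2Vec: one linear "trend" channel plus (d_model - 1) sinusoidal channels.
rng = np.random.default_rng(0)
d_model = 8

w = rng.standard_normal((1, d_model - 1))
b = rng.standard_normal((1, d_model - 1))

def time2vec(time_steps: np.ndarray) -> np.ndarray:
    """Map (batch, seq_len, 1) time steps to (batch, seq_len, d_model)."""
    linear = time_steps                    # captures the overall trend
    periodic = np.sin(time_steps @ w + b)  # captures periodic structure
    return np.concatenate([linear, periodic], axis=-1)

t = np.arange(5.0).reshape(1, 5, 1)        # 5 time steps, batch of 1
embedding = time2vec(t)
print(embedding.shape)                     # (1, 5, 8)
```

Unlike fixed sinusoidal positional encodings, the frequencies `w` and phases `b` are learned, which is the motivation for using Time2Vec on irregular real-time signals.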
🔗 Relevant Links
Repository:
Python:
Automated review generated by algorithms-keeper. If there is any problem regarding this review, please open an issue about it.
algorithms-keeper commands and options
algorithms-keeper actions can be triggered by commenting on this PR:
@algorithms-keeper review: to trigger the checks for only added pull request files
@algorithms-keeper review-all: to trigger the checks for all the pull request files, including the modified files. As we cannot post review comments on lines not part of the diff, this command will post all the messages in one comment.
NOTE: Commands are in beta, so this feature is restricted to members or owners of the organization.
# Time2Vec layer for positional encoding of real-time data like EEG
class Time2Vec(nn.Module):
    # Encodes time steps into a continuous embedding space to help the transformer learn temporal dependencies.
    def __init__(self, d_model):
Please provide return type hint for the function: __init__. If the function does not return a value, please provide the type hint as: def function() -> None:
Please provide type hint for the parameter: d_model
        self.w = nn.Parameter(torch.randn(1, d_model - 1))
        self.b = nn.Parameter(torch.randn(1, d_model - 1))

    def forward(self, t):
Please provide return type hint for the function: forward. If the function does not return a value, please provide the type hint as: def function() -> None:
As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide a doctest for the function: forward.
Please provide type hint for the parameter: t
Please provide descriptive name for the parameter: t
# positionwise feedforward network
class PositionwiseFeedForward(nn.Module):
    def __init__(self, d_model, hidden, drop_prob=0.1):
Please provide return type hint for the function: __init__. If the function does not return a value, please provide the type hint as: def function() -> None:
Please provide type hint for the parameter: d_model
Please provide type hint for the parameter: hidden
Please provide type hint for the parameter: drop_prob
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(drop_prob)

    def forward(self, x):
Please provide return type hint for the function: forward. If the function does not return a value, please provide the type hint as: def function() -> None:
As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide a doctest for the function: forward.
Please provide type hint for the parameter: x
Please provide descriptive name for the parameter: x
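For illustration, the position-wise feed-forward block with the requested annotations could be sketched like this in NumPy (inference only, dropout omitted; a hypothetical rendering, not the PR's exact code):

```python
import numpy as np

def positionwise_feed_forward(
    x: np.ndarray, w1: np.ndarray, b1: np.ndarray, w2: np.ndarray, b2: np.ndarray
) -> np.ndarray:
    """fc1 -> ReLU -> fc2, applied independently at every time step."""
    hidden = np.maximum(x @ w1 + b1, 0.0)  # fc1 followed by ReLU
    return hidden @ w2 + b2                # fc2 projects back to d_model

rng = np.random.default_rng(0)
d_model, hidden_dim = 4, 16
w1 = rng.standard_normal((d_model, hidden_dim))
b1 = np.zeros(hidden_dim)
w2 = rng.standard_normal((hidden_dim, d_model))
b2 = np.zeros(d_model)

x = rng.standard_normal((2, 5, d_model))   # (batch, seq_len, d_model)
out = positionwise_feed_forward(x, w1, b1, w2, b2)
print(out.shape)                           # (2, 5, 4)
```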
        return self.fc2(x)

# scaled dot product attention
class ScaleDotProductAttention(nn.Module):
    def __init__(self):
Please provide return type hint for the function: __init__. If the function does not return a value, please provide the type hint as: def function() -> None:
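The ScaleDotProductAttention class computes softmax(QK^T / sqrt(d_k))V. A minimal NumPy sketch of that computation (names are illustrative, and the additive masking convention is an assumption):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v, mask=None):
    """Return (softmax(q k^T / sqrt(d_k)) v, attention weights)."""
    d_k = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d_k)  # (batch, seq_q, seq_k)
    if mask is not None:
        scores = np.where(mask, scores, -1e9)       # suppress masked positions
    # numerically stable softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights

rng = np.random.default_rng(0)
q = rng.standard_normal((1, 3, 4))
k = rng.standard_normal((1, 3, 4))
v = rng.standard_normal((1, 3, 4))
out, attn = scaled_dot_product_attention(q, k, v)
print(out.shape, attn.shape)   # (1, 3, 4) (1, 3, 3)
```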
# attention pooling layer
class AttentionPooling(nn.Module):
    def __init__(self, d_model):
Please provide return type hint for the function: __init__. If the function does not return a value, please provide the type hint as: def function() -> None:
Please provide type hint for the parameter: d_model
        super().__init__()
        self.attn_score = nn.Linear(d_model, 1)

    def forward(self, x, mask=None):
Please provide return type hint for the function: forward. If the function does not return a value, please provide the type hint as: def function() -> None:
As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide a doctest for the function: forward.
Please provide type hint for the parameter: x
Please provide descriptive name for the parameter: x
Please provide type hint for the parameter: mask
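AttentionPooling collapses the time axis with a learned softmax weighting, which suits variable-length real-time windows. A hedged NumPy sketch (the mask handling here is an assumption about how padded steps would be ignored):

```python
import numpy as np

def attention_pooling(x, score_w, score_b=0.0, mask=None):
    """Collapse (batch, seq_len, d_model) to (batch, d_model) using a
    learned per-step score that is softmaxed over time."""
    scores = (x @ score_w + score_b).squeeze(-1)  # (batch, seq_len)
    if mask is not None:
        scores = np.where(mask, scores, -1e9)     # ignore padded steps
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return (weights[..., None] * x).sum(axis=1)   # weighted sum over time

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 6, 4))                # (batch, seq_len, d_model)
score_w = rng.standard_normal((4, 1))             # mirrors nn.Linear(d_model, 1)
pooled = attention_pooling(x, score_w)
print(pooled.shape)                               # (2, 4)
```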
class EEGTransformer(nn.Module):

    def __init__(self, feature_dim, d_model=128, n_head=8, hidden_dim=512,
Please provide return type hint for the function: __init__. If the function does not return a value, please provide the type hint as: def function() -> None:
Please provide type hint for the parameter: feature_dim
Please provide type hint for the parameter: d_model
Please provide type hint for the parameter: n_head
Please provide type hint for the parameter: hidden_dim
                 num_layers=4, drop_prob=0.1, output_dim=1, task_type='regression'):
Please provide type hint for the parameter: num_layers
Please provide type hint for the parameter: drop_prob
Please provide type hint for the parameter: output_dim
Please provide type hint for the parameter: task_type
        # Final output layer
        self.output_layer = nn.Linear(d_model, output_dim)

    def forward(self, x, mask=None):
Please provide return type hint for the function: forward. If the function does not return a value, please provide the type hint as: def function() -> None:
As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide a doctest for the function: forward.
Please provide type hint for the parameter: x
Please provide descriptive name for the parameter: x
Please provide type hint for the parameter: mask
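The task_type parameter suggests the generalised head dispatches between regression and classification after the final linear layer. One plausible sketch of that dispatch, in NumPy (purely illustrative; the PR's exact head may differ):

```python
import numpy as np

def output_head(pooled: np.ndarray, w_out: np.ndarray, b_out: np.ndarray,
                task_type: str = "regression") -> np.ndarray:
    """Final linear layer; classification additionally squashes the logits.

    Illustrative assumption about how task_type is used, not the PR's code.
    """
    logits = pooled @ w_out + b_out
    if task_type == "classification":
        return 1.0 / (1.0 + np.exp(-logits))   # sigmoid probabilities
    if task_type == "regression":
        return logits                          # raw real-valued outputs
    raise ValueError(f"unknown task_type: {task_type!r}")

rng = np.random.default_rng(0)
pooled = rng.standard_normal((2, 4))           # (batch, d_model)
w_out = rng.standard_normal((4, 1))            # mirrors nn.Linear(d_model, output_dim)
b_out = np.zeros(1)
probs = output_head(pooled, w_out, b_out, task_type="classification")
print(probs.shape)                             # (2, 1)
```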
        super().__init__()
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, q: Tensor, k: Tensor, v: Tensor, mask: Tensor = None) -> tuple[Tensor, Tensor]:
Please provide descriptive name for the parameter: q
Please provide descriptive name for the parameter: k
Please provide descriptive name for the parameter: v
        self.w_v = nn.Linear(d_model, d_model)
        self.w_out = nn.Linear(d_model, d_model)

    def forward(self, q: Tensor, k: Tensor, v: Tensor, mask: Tensor = None) -> Tensor:
Please provide descriptive name for the parameter: q
Please provide descriptive name for the parameter: k
Please provide descriptive name for the parameter: v
        out = self.w_out(self.concat_heads(context))
        return out

    def split_heads(self, x: Tensor) -> Tensor:
Please provide descriptive name for the parameter: x
        d_k = d_model // self.n_head
        return x.view(batch, seq_len, self.n_head, d_k).transpose(1, 2)

    def concat_heads(self, x: Tensor) -> Tensor:
Please provide descriptive name for the parameter: x
Is the torch module not installed in the testing environment?
# -------------------------------
# 🔹 Helper functions
# -------------------------------
def _softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide a doctest for the function: _softmax.
Please provide descriptive name for the parameter: x
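A doctest of the kind the bot requests could look like the following. This assumes _softmax subtracts the row maximum before exponentiating, which its stable-softmax framing and the + 1e-12 denominator below suggest; the exact body is a reconstruction, not the PR's verbatim code:

```python
import numpy as np

def _softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    """Numerically stable softmax along the given axis.

    >>> probs = _softmax(np.array([1.0, 2.0, 3.0]))
    >>> bool(np.isclose(probs.sum(), 1.0))
    True
    >>> int(probs.argmax())
    2
    """
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / (np.sum(e, axis=axis, keepdims=True) + 1e-12)

probs = _softmax(np.array([1.0, 2.0, 3.0]))
print(probs.sum())   # ~1.0
```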
    return e / (np.sum(e, axis=axis, keepdims=True) + 1e-12)


def _stable_div(x: np.ndarray, denom: np.ndarray) -> np.ndarray:
As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide a doctest for the function: _stable_div.
Please provide descriptive name for the parameter: x
        self.w = self.rng.standard_normal((1, d_model - 1))
        self.b = self.rng.standard_normal((1, d_model - 1))

    def forward(self, time_steps: np.ndarray) -> np.ndarray:
As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide a doctest for the function: forward.
        self.w2 = self.rng.standard_normal((hidden, d_model)) * math.sqrt(2.0 / (hidden + d_model))
        self.b2 = np.zeros((d_model,))

    def forward(self, input_tensor: np.ndarray) -> np.ndarray:
As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide a doctest for the function: forward.
# 🔹 ScaledDotProductAttention
# -------------------------------
class ScaledDotProductAttention:
    def forward(
As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide a doctest for the function: forward.
        self.norm1 = LayerNorm(d_model)
        self.norm2 = LayerNorm(d_model)

    def forward(self, input_tensor: np.ndarray, mask: np.ndarray | None = None) -> np.ndarray:
As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide a doctest for the function: forward.
    def __init__(self, d_model: int, n_head: int, hidden_dim: int, num_layers: int, seed: Optional[int] = None) -> None:
        self.layers = [TransformerEncoderLayer(d_model, n_head, hidden_dim, seed) for _ in range(num_layers)]

    def forward(self, input_tensor: np.ndarray, mask: np.ndarray | None = None) -> np.ndarray:
As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide a doctest for the function: forward.
        self.w = self.rng.standard_normal((d_model,)) * math.sqrt(2.0 / d_model)
        self.b = 0.0

    def forward(self, input_tensor: np.ndarray, mask: np.ndarray | None = None) -> tuple[np.ndarray, np.ndarray]:
As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide a doctest for the function: forward.
        self.w_out = self.rng.standard_normal((d_model, output_dim)) * math.sqrt(2.0 / (d_model + output_dim))
        self.b_out = np.zeros((output_dim,))

    def _input_proj(self, features: np.ndarray) -> np.ndarray:
As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide a doctest for the function: _input_proj.
    def _input_proj(self, features: np.ndarray) -> np.ndarray:
        return np.tensordot(features, self.w_in, axes=([2], [0])) + self.b_in

    def forward(self, features: np.ndarray, mask: np.ndarray | None = None) -> tuple[np.ndarray, np.ndarray]:
As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide a doctest for the function: forward.
# ------------------------------- | ||
# 🔹 Helper softmax | ||
# ------------------------------- | ||
def _softmax(x: np.ndarray, axis: int = -1) -> np.ndarray: |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py
, please provide doctest for the function _softmax
Please provide descriptive name for the parameter: x
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Click here to look at the relevant links ⬇️
🔗 Relevant Links
Repository:
Python:
Automated review generated by algorithms-keeper. If there's any problem regarding this review, please open an issue about it.
algorithms-keeper
commands and options
algorithms-keeper actions can be triggered by commenting on this PR:
@algorithms-keeper review
to trigger the checks for only added pull request files@algorithms-keeper review-all
to trigger the checks for all the pull request files, including the modified files. As we cannot post review comments on lines not part of the diff, this command will post all the messages in one comment.NOTE: Commands are in beta and so this feature is restricted only to a member or owner of the organization.
# ------------------------------- | ||
# 🔹 Helper softmax | ||
# ------------------------------- | ||
def _softmax(x: np.ndarray, axis: int = -1) -> np.ndarray: |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py
, please provide doctest for the function _softmax
Please provide descriptive name for the parameter: x
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Click here to look at the relevant links ⬇️
🔗 Relevant Links
Repository:
Python:
Automated review generated by algorithms-keeper. If there's any problem regarding this review, please open an issue about it.
algorithms-keeper
commands and options
algorithms-keeper actions can be triggered by commenting on this PR:
@algorithms-keeper review
to trigger the checks for only added pull request files@algorithms-keeper review-all
to trigger the checks for all the pull request files, including the modified files. As we cannot post review comments on lines not part of the diff, this command will post all the messages in one comment.NOTE: Commands are in beta and so this feature is restricted only to a member or owner of the organization.
# ------------------------------- | ||
# 🔹 Helper softmax | ||
# ------------------------------- | ||
def _softmax(x: np.ndarray, axis: int = -1) -> np.ndarray: |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py
, please provide doctest for the function _softmax
Please provide descriptive name for the parameter: x
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Click here to look at the relevant links ⬇️
🔗 Relevant Links
Repository:
Python:
Automated review generated by algorithms-keeper. If there's any problem regarding this review, please open an issue about it.
algorithms-keeper
commands and options
algorithms-keeper actions can be triggered by commenting on this PR:
@algorithms-keeper review
to trigger the checks for only added pull request files@algorithms-keeper review-all
to trigger the checks for all the pull request files, including the modified files. As we cannot post review comments on lines not part of the diff, this command will post all the messages in one comment.NOTE: Commands are in beta and so this feature is restricted only to a member or owner of the organization.
# ------------------------------- | ||
# 🔹 Helper softmax | ||
# ------------------------------- | ||
def _softmax(x: np.ndarray, axis: int = -1) -> np.ndarray: |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py
, please provide doctest for the function _softmax
Please provide descriptive name for the parameter: x
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Click here to look at the relevant links ⬇️
🔗 Relevant Links
Repository:
Python:
Automated review generated by algorithms-keeper. If there's any problem regarding this review, please open an issue about it.
algorithms-keeper
commands and options
algorithms-keeper actions can be triggered by commenting on this PR:
@algorithms-keeper review
to trigger the checks for only added pull request files@algorithms-keeper review-all
to trigger the checks for all the pull request files, including the modified files. As we cannot post review comments on lines not part of the diff, this command will post all the messages in one comment.NOTE: Commands are in beta and so this feature is restricted only to a member or owner of the organization.
for more information, see https://pre-commit.ci
Describe your change:
Wrote the entire transformer model from scratch instead of using the built-in encoder-transformer modules.
Added Time2Vec as positional encoding and a generalized classifier layer for real-time data modeling (e.g., EEG).
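The Time2Vec idea mentioned above can be sketched as follows. This is a minimal NumPy illustration of the encoding, not the PR's actual layer: the real implementation is a torch `nn.Module` whose weights are learned during training, whereas the weights, function name, and shapes here are assumptions chosen for the example.

```python
import numpy as np


def time2vec(time_steps: np.ndarray, embedding_dim: int, rng=None) -> np.ndarray:
    """Map scalar time steps to a (seq_len, embedding_dim) embedding:
    one linear term plus (embedding_dim - 1) periodic sine terms,
    following the Time2Vec formulation.

    Weights are randomly initialised here purely for illustration;
    in the actual layer they are trainable parameters.
    """
    rng = np.random.default_rng(rng)
    # Linear component: w0 * t + b0 captures non-periodic trends
    w0, b0 = rng.normal(), rng.normal()
    # Periodic components: sin(w * t + b) capture repeating patterns
    w = rng.normal(size=embedding_dim - 1)
    b = rng.normal(size=embedding_dim - 1)
    linear = w0 * time_steps[:, None] + b0                    # (seq_len, 1)
    periodic = np.sin(time_steps[:, None] * w[None, :] + b)   # (seq_len, d-1)
    return np.concatenate([linear, periodic], axis=1)         # (seq_len, d)
```

Because the periodic terms are bounded in [-1, 1] while the linear term grows with t, the transformer can separate long-range position from cyclic structure, which is useful for quasi-periodic signals such as EEG.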
Checklist: