Conversation

ajatshatru01

Describe your change:

Wrote the entire transformer model from scratch instead of using the built-in encoder/transformer modules.
Added Time2Vec as the positional encoding and a generalized classifier layer for real-time data modelling (e.g., EEG).
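For reference, Time2Vec (Kazemi et al., "Time2Vec: Learning a Vector Representation of Time") represents a scalar time step τ with one linear component and d_model − 1 periodic components:

```latex
\mathbf{t2v}(\tau)[i] =
\begin{cases}
\omega_i \tau + \varphi_i, & i = 0 \\
\sin(\omega_i \tau + \varphi_i), & 1 \le i \le d_{\text{model}} - 1
\end{cases}
```

The frequencies ω_i and phases φ_i are learnable, so the model can capture both trend and periodic structure in signals such as EEG.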

  • Add an algorithm?
  • Fix a bug or typo in an existing algorithm?
  • Add or change doctests? -- Note: Please avoid changing both code and tests in a single pull request.
  • Documentation change?

Checklist:

  • I have read CONTRIBUTING.md.
  • This pull request is all my own work -- I have not plagiarized.
  • I know that pull requests will not be merged if they fail the automated tests.
  • This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
  • All new Python files are placed inside an existing directory.
  • All filenames are in all lowercase characters with no spaces or dashes.
  • All functions and variable names follow Python naming conventions.
  • All function parameters and return values are annotated with Python type hints.
  • All functions have doctests that pass the automated testing.
  • All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
  • If this pull request resolves one or more open issues then the description above includes the issue number(s) with a closing keyword: "Fixes #ISSUE-NUMBER".

Created a real-time, encoder-only transformer model with Time2Vec as the positional encoding, along with a generalised classifier layer for modelling real-time data such as EEG.
@algorithms-keeper bot added the labels "require descriptive names" (this PR needs descriptive function and/or variable names), "require tests" (doctest/unittest/pytest required), and "require type hints" (https://docs.python.org/3/library/typing.html) on Oct 21, 2025

@algorithms-keeper bot left a comment

Automated review generated by algorithms-keeper. If there's any problem regarding this review, please open an issue about it.

algorithms-keeper commands and options

algorithms-keeper actions can be triggered by commenting on this PR:

  • @algorithms-keeper review to trigger the checks for only added pull request files
  • @algorithms-keeper review-all to trigger the checks for all the pull request files, including the modified files. As we cannot post review comments on lines not part of the diff, this command will post all the messages in one comment.

NOTE: Commands are in beta and so this feature is restricted only to a member or owner of the organization.

# Time2Vec layer for positional encoding of real-time data like EEG
class Time2Vec(nn.Module):
    # Encodes time steps into a continuous embedding space to help the transformer learn temporal dependencies.
    def __init__(self, d_model):


Please provide return type hint for the function: __init__. If the function does not return a value, please provide the type hint as: def function() -> None:

Please provide type hint for the parameter: d_model

        self.w = nn.Parameter(torch.randn(1, d_model - 1))
        self.b = nn.Parameter(torch.randn(1, d_model - 1))

    def forward(self, t):


Please provide return type hint for the function: forward. If the function does not return a value, please provide the type hint as: def function() -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide doctest for the function forward

Please provide type hint for the parameter: t

Please provide descriptive name for the parameter: t


# position-wise feed-forward network
class PositionwiseFeedForward(nn.Module):
    def __init__(self, d_model, hidden, drop_prob=0.1):


Please provide return type hint for the function: __init__. If the function does not return a value, please provide the type hint as: def function() -> None:

Please provide type hint for the parameter: d_model

Please provide type hint for the parameter: hidden

Please provide type hint for the parameter: drop_prob

        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(drop_prob)

    def forward(self, x):


Please provide return type hint for the function: forward. If the function does not return a value, please provide the type hint as: def function() -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide doctest for the function forward

Please provide type hint for the parameter: x

Please provide descriptive name for the parameter: x

return self.fc2(x)
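For context, the position-wise feed-forward block applies the same two-layer MLP to every time step independently; a minimal NumPy sketch (weight shapes here are illustrative, not the PR's initialization):

```python
import numpy as np


def positionwise_ffn(x: np.ndarray, w1: np.ndarray, b1: np.ndarray,
                     w2: np.ndarray, b2: np.ndarray) -> np.ndarray:
    """FFN(x) = ReLU(x @ W1 + b1) @ W2 + b2, applied independently at each position."""
    hidden = np.maximum(x @ w1 + b1, 0.0)  # ReLU
    return hidden @ w2 + b2


rng = np.random.default_rng(0)
x = rng.standard_normal((2, 5, 8))  # (batch, seq_len, d_model)
out = positionwise_ffn(x,
                       rng.standard_normal((8, 16)), np.zeros(16),
                       rng.standard_normal((16, 8)), np.zeros(8))
```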
# scaled dot-product attention
class ScaleDotProductAttention(nn.Module):
    def __init__(self):


Please provide return type hint for the function: __init__. If the function does not return a value, please provide the type hint as: def function() -> None:


# attention pooling layer
class AttentionPooling(nn.Module):
    def __init__(self, d_model):


Please provide return type hint for the function: __init__. If the function does not return a value, please provide the type hint as: def function() -> None:

Please provide type hint for the parameter: d_model

        super().__init__()
        self.attn_score = nn.Linear(d_model, 1)

    def forward(self, x, mask=None):


Please provide return type hint for the function: forward. If the function does not return a value, please provide the type hint as: def function() -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide doctest for the function forward

Please provide type hint for the parameter: x

Please provide descriptive name for the parameter: x

Please provide type hint for the parameter: mask
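Attention pooling collapses the per-timestep representations into a single vector via a learned scoring linear layer; a minimal NumPy sketch (the mask branch is omitted here for brevity):

```python
import numpy as np


def attention_pooling(x: np.ndarray, score_w: np.ndarray) -> np.ndarray:
    """Collapse (batch, seq_len, d_model) to (batch, d_model) using learned scores."""
    scores = x @ score_w                                 # (batch, seq_len, 1)
    scores = scores - scores.max(axis=1, keepdims=True)  # stable softmax over time
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=1, keepdims=True)
    return (weights * x).sum(axis=1)                     # weighted average over time


rng = np.random.default_rng(0)
x = rng.standard_normal((2, 6, 8))
pooled = attention_pooling(x, rng.standard_normal((8, 1)))
```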


class EEGTransformer(nn.Module):
    def __init__(self, feature_dim, d_model=128, n_head=8, hidden_dim=512,


Please provide return type hint for the function: __init__. If the function does not return a value, please provide the type hint as: def function() -> None:

Please provide type hint for the parameter: feature_dim

Please provide type hint for the parameter: d_model

Please provide type hint for the parameter: n_head

Please provide type hint for the parameter: hidden_dim

class EEGTransformer(nn.Module):
    def __init__(self, feature_dim, d_model=128, n_head=8, hidden_dim=512,
                 num_layers=4, drop_prob=0.1, output_dim=1, task_type='regression'):


Please provide type hint for the parameter: num_layers

Please provide type hint for the parameter: drop_prob

Please provide type hint for the parameter: output_dim

Please provide type hint for the parameter: task_type

        # Final output layer
        self.output_layer = nn.Linear(d_model, output_dim)

    def forward(self, x, mask=None):


Please provide return type hint for the function: forward. If the function does not return a value, please provide the type hint as: def function() -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide doctest for the function forward

Please provide type hint for the parameter: x

Please provide descriptive name for the parameter: x

Please provide type hint for the parameter: mask
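The `task_type` parameter apparently switches the final layer between regression and classification behaviour; one plausible sketch of that head (the softmax-on-classification choice is my assumption, not confirmed by the quoted diff):

```python
import numpy as np


def output_head(pooled: np.ndarray, w: np.ndarray, b: np.ndarray,
                task_type: str = "regression") -> np.ndarray:
    """Final layer: raw values for regression, normalized probabilities for classification."""
    logits = pooled @ w + b
    if task_type == "classification":
        logits = logits - logits.max(axis=-1, keepdims=True)  # stable softmax
        probs = np.exp(logits)
        return probs / probs.sum(axis=-1, keepdims=True)
    return logits


rng = np.random.default_rng(0)
pooled = rng.standard_normal((4, 8))
probs = output_head(pooled, rng.standard_normal((8, 3)), np.zeros(3), "classification")
```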

@algorithms-keeper bot added the "awaiting reviews" (this PR is ready to be reviewed) label on Oct 21, 2025
@algorithms-keeper bot removed the "require tests" and "require type hints" labels on Oct 21, 2025
@algorithms-keeper bot left a comment

        super().__init__()
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, q: Tensor, k: Tensor, v: Tensor, mask: Tensor = None) -> tuple[Tensor, Tensor]:


Please provide descriptive name for the parameter: q

Please provide descriptive name for the parameter: k

Please provide descriptive name for the parameter: v

        self.w_v = nn.Linear(d_model, d_model)
        self.w_out = nn.Linear(d_model, d_model)

    def forward(self, q: Tensor, k: Tensor, v: Tensor, mask: Tensor = None) -> Tensor:


Please provide descriptive name for the parameter: q

Please provide descriptive name for the parameter: k

Please provide descriptive name for the parameter: v

        out = self.w_out(self.concat_heads(context))
        return out

    def split_heads(self, x: Tensor) -> Tensor:


Please provide descriptive name for the parameter: x

        d_k = d_model // self.n_head
        return x.view(batch, seq_len, self.n_head, d_k).transpose(1, 2)

    def concat_heads(self, x: Tensor) -> Tensor:


Please provide descriptive name for the parameter: x
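The `split_heads`/`concat_heads` pair reshapes between a flat model dimension and per-head sub-spaces, and the two should be exact inverses; a NumPy sketch of the same reshape logic (`hidden` is an illustrative rename of `x`):

```python
import numpy as np


def split_heads(hidden: np.ndarray, n_head: int) -> np.ndarray:
    """(batch, seq_len, d_model) -> (batch, n_head, seq_len, d_k) with d_k = d_model // n_head."""
    batch, seq_len, d_model = hidden.shape
    d_k = d_model // n_head
    return hidden.reshape(batch, seq_len, n_head, d_k).transpose(0, 2, 1, 3)


def concat_heads(hidden: np.ndarray) -> np.ndarray:
    """Inverse of split_heads: (batch, n_head, seq_len, d_k) -> (batch, seq_len, d_model)."""
    batch, n_head, seq_len, d_k = hidden.shape
    return hidden.transpose(0, 2, 1, 3).reshape(batch, seq_len, n_head * d_k)


x = np.arange(2 * 3 * 8, dtype=float).reshape(2, 3, 8)
roundtrip = concat_heads(split_heads(x, n_head=4))
```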

@algorithms-keeper bot added the "tests are failing" (do not merge until tests pass) label on Oct 21, 2025
@algorithms-keeper bot removed the "require descriptive names" label on Oct 21, 2025
@ajatshatru01
Author

Is the torch module not installed in the testing environment?

@algorithms-keeper bot added the "require descriptive names" and "require tests" labels on Oct 22, 2025
@algorithms-keeper bot left a comment

# -------------------------------
# 🔹 Helper functions
# -------------------------------
def _softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide doctest for the function _softmax

Please provide descriptive name for the parameter: x

return e / (np.sum(e, axis=axis, keepdims=True) + 1e-12)
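The doctest the bot requests for `_softmax` could look like the sketch below (`logits` is an illustrative rename of `x`; the `1e-12` guard and max-shift mirror the quoted fragment):

```python
import numpy as np


def softmax(logits: np.ndarray, axis: int = -1) -> np.ndarray:
    """Numerically stable softmax along the given axis.

    >>> probs = softmax(np.array([0.0, 0.0]))
    >>> np.allclose(probs, [0.5, 0.5])
    True
    """
    shifted = np.exp(logits - logits.max(axis=axis, keepdims=True))  # avoid overflow
    return shifted / (shifted.sum(axis=axis, keepdims=True) + 1e-12)
```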


def _stable_div(x: np.ndarray, denom: np.ndarray) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide doctest for the function _stable_div

Please provide descriptive name for the parameter: x

        self.w = self.rng.standard_normal((1, d_model - 1))
        self.b = self.rng.standard_normal((1, d_model - 1))

    def forward(self, time_steps: np.ndarray) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide doctest for the function forward

        self.w2 = self.rng.standard_normal((hidden, d_model)) * math.sqrt(2.0 / (hidden + d_model))
        self.b2 = np.zeros((d_model,))

    def forward(self, input_tensor: np.ndarray) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide doctest for the function forward

# -------------------------------
# 🔹 ScaledDotProductAttention
# -------------------------------
class ScaledDotProductAttention:
    def forward(


As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide doctest for the function forward

        self.norm1 = LayerNorm(d_model)
        self.norm2 = LayerNorm(d_model)

    def forward(self, input_tensor: np.ndarray, mask: np.ndarray | None = None) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide doctest for the function forward
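For context, an encoder layer with `norm1`/`norm2` typically follows the post-norm pattern: add the sublayer output back to its input, then normalize. A NumPy sketch with stand-in sublayers (the lambdas below are placeholders, not the PR's attention/FFN):

```python
import numpy as np


def layer_norm(x: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Normalize each position's feature vector to zero mean, unit variance."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)


def encoder_layer(x, self_attention, feed_forward):
    """Post-norm encoder layer: x = norm(x + attn(x)); x = norm(x + ffn(x))."""
    x = layer_norm(x + self_attention(x))
    x = layer_norm(x + feed_forward(x))
    return x


rng = np.random.default_rng(0)
x = rng.standard_normal((2, 4, 8))
out = encoder_layer(x, lambda h: h * 0.1, lambda h: h * 0.1)  # stand-in sublayers
```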

    def __init__(self, d_model: int, n_head: int, hidden_dim: int, num_layers: int, seed: Optional[int] = None) -> None:
        self.layers = [TransformerEncoderLayer(d_model, n_head, hidden_dim, seed) for _ in range(num_layers)]

    def forward(self, input_tensor: np.ndarray, mask: np.ndarray | None = None) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide doctest for the function forward

        self.w = self.rng.standard_normal((d_model,)) * math.sqrt(2.0 / d_model)
        self.b = 0.0

    def forward(self, input_tensor: np.ndarray, mask: np.ndarray | None = None) -> tuple[np.ndarray, np.ndarray]:


As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide doctest for the function forward

        self.w_out = self.rng.standard_normal((d_model, output_dim)) * math.sqrt(2.0 / (d_model + output_dim))
        self.b_out = np.zeros((output_dim,))

    def _input_proj(self, features: np.ndarray) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide doctest for the function _input_proj

    def _input_proj(self, features: np.ndarray) -> np.ndarray:
        return np.tensordot(features, self.w_in, axes=([2], [0])) + self.b_in

    def forward(self, features: np.ndarray, mask: np.ndarray | None = None) -> tuple[np.ndarray, np.ndarray]:


As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide doctest for the function forward
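The `tensordot` call in `_input_proj` contracts the feature axis against the first axis of the projection matrix, which for 3-D input is equivalent to a batched matrix multiply. A standalone sketch (weight shapes are illustrative):

```python
import numpy as np


def input_projection(features: np.ndarray, w_in: np.ndarray, b_in: np.ndarray) -> np.ndarray:
    """Project (batch, seq_len, feature_dim) to (batch, seq_len, d_model)."""
    return np.tensordot(features, w_in, axes=([2], [0])) + b_in


rng = np.random.default_rng(0)
features = rng.standard_normal((2, 5, 3))  # e.g. 3 EEG channels over 5 time steps
w_in = rng.standard_normal((3, 8))
projected = input_projection(features, w_in, np.zeros(8))
```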

Copy link

@algorithms-keeper algorithms-keeper bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Click here to look at the relevant links ⬇️

🔗 Relevant Links

Repository:

Python:

Automated review generated by algorithms-keeper. If there's any problem regarding this review, please open an issue about it.

algorithms-keeper commands and options

algorithms-keeper actions can be triggered by commenting on this PR:

  • @algorithms-keeper review to trigger the checks for only added pull request files
  • @algorithms-keeper review-all to trigger the checks for all the pull request files, including the modified files. As we cannot post review comments on lines not part of the diff, this command will post all the messages in one comment.

NOTE: Commands are in beta and so this feature is restricted only to a member or owner of the organization.

# -------------------------------
# 🔹 Helper softmax
# -------------------------------
def _softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide doctest for the function _softmax

Please provide descriptive name for the parameter: x

Copy link

@algorithms-keeper algorithms-keeper bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Click here to look at the relevant links ⬇️

🔗 Relevant Links

Repository:

Python:

Automated review generated by algorithms-keeper. If there's any problem regarding this review, please open an issue about it.

algorithms-keeper commands and options

algorithms-keeper actions can be triggered by commenting on this PR:

  • @algorithms-keeper review to trigger the checks for only added pull request files
  • @algorithms-keeper review-all to trigger the checks for all the pull request files, including the modified files. As we cannot post review comments on lines not part of the diff, this command will post all the messages in one comment.

NOTE: Commands are in beta and so this feature is restricted only to a member or owner of the organization.

# -------------------------------
# 🔹 Helper softmax
# -------------------------------
def _softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide doctest for the function _softmax

Please provide descriptive name for the parameter: x

Copy link

@algorithms-keeper algorithms-keeper bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Click here to look at the relevant links ⬇️

🔗 Relevant Links

Repository:

Python:

Automated review generated by algorithms-keeper. If there's any problem regarding this review, please open an issue about it.

algorithms-keeper commands and options

algorithms-keeper actions can be triggered by commenting on this PR:

  • @algorithms-keeper review to trigger the checks for only added pull request files
  • @algorithms-keeper review-all to trigger the checks for all the pull request files, including the modified files. As we cannot post review comments on lines not part of the diff, this command will post all the messages in one comment.

NOTE: Commands are in beta and so this feature is restricted only to a member or owner of the organization.

# -------------------------------
# 🔹 Helper softmax
# -------------------------------
def _softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide doctest for the function _softmax

Please provide descriptive name for the parameter: x

Copy link

@algorithms-keeper algorithms-keeper bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Click here to look at the relevant links ⬇️

🔗 Relevant Links

Repository:

Python:

Automated review generated by algorithms-keeper. If there's any problem regarding this review, please open an issue about it.

algorithms-keeper commands and options

algorithms-keeper actions can be triggered by commenting on this PR:

  • @algorithms-keeper review to trigger the checks for only added pull request files
  • @algorithms-keeper review-all to trigger the checks for all the pull request files, including the modified files. As we cannot post review comments on lines not part of the diff, this command will post all the messages in one comment.

NOTE: Commands are in beta and so this feature is restricted only to a member or owner of the organization.

# -------------------------------
# 🔹 Helper softmax
# -------------------------------
def _softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide doctest for the function _softmax

Please provide descriptive name for the parameter: x



Labels

  • awaiting reviews: This PR is ready to be reviewed
  • require descriptive names: This PR needs descriptive function and/or variable names
  • require tests: Tests [doctest/unittest/pytest] are required
  • tests are failing: Do not merge until tests pass
