36 commits
220270e
Create real_time_encoder_transformer.py
ajatshatru01 Oct 21, 2025
23c5117
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Oct 21, 2025
c96d440
Update real_time_encoder_transformer.py
ajatshatru01 Oct 21, 2025
4a62b57
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Oct 21, 2025
1eca445
Update real_time_encoder_transformer.py
ajatshatru01 Oct 21, 2025
47ba945
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Oct 21, 2025
0974fee
Update real_time_encoder_transformer.py
ajatshatru01 Oct 21, 2025
d3a8f47
Update real_time_encoder_transformer.py
ajatshatru01 Oct 21, 2025
d30966c
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Oct 21, 2025
24c52d4
Update real_time_encoder_transformer.py
ajatshatru01 Oct 21, 2025
2a0a8f6
Update real_time_encoder_transformer.py
ajatshatru01 Oct 21, 2025
2dccc2d
Update real_time_encoder_transformer.py
ajatshatru01 Oct 21, 2025
101e305
Update real_time_encoder_transformer.py
ajatshatru01 Oct 22, 2025
53eff3c
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Oct 22, 2025
5f20061
Update real_time_encoder_transformer.py
ajatshatru01 Oct 22, 2025
986cd98
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Oct 22, 2025
0fc2b8e
Update real_time_encoder_transformer.py
ajatshatru01 Oct 22, 2025
f10a2ea
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Oct 22, 2025
86e4848
Update real_time_encoder_transformer.py
ajatshatru01 Oct 22, 2025
f9aca1e
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Oct 22, 2025
e6e2092
Update real_time_encoder_transformer.py
ajatshatru01 Oct 22, 2025
74714aa
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Oct 22, 2025
18c156e
Update real_time_encoder_transformer.py
ajatshatru01 Oct 22, 2025
e33202b
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Oct 22, 2025
33cf40a
Update real_time_encoder_transformer.py
ajatshatru01 Oct 22, 2025
9628539
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Oct 22, 2025
e33baeb
Update real_time_encoder_transformer.py
ajatshatru01 Oct 22, 2025
2665159
Update real_time_encoder_transformer.py
ajatshatru01 Oct 22, 2025
a21bd2b
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Oct 22, 2025
491e15d
Update real_time_encoder_transformer.py
ajatshatru01 Oct 22, 2025
c57d184
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Oct 22, 2025
80aff7a
Update real_time_encoder_transformer.py
ajatshatru01 Oct 22, 2025
007dcf1
Update real_time_encoder_transformer.py
ajatshatru01 Oct 22, 2025
21c18c2
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Oct 22, 2025
8b55a8f
Update real_time_encoder_transformer.py
ajatshatru01 Oct 22, 2025
195b58b
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Oct 22, 2025
185 changes: 185 additions & 0 deletions neural_network/real_time_encoder_transformer.py
@@ -0,0 +1,185 @@
#imports
import torch
import torch.nn as nn

Ruff (PLR0402): neural_network/real_time_encoder_transformer.py:3:8: Use `from torch import nn` in lieu of alias
import math

Ruff (I001): neural_network/real_time_encoder_transformer.py:2:1: Import block is un-sorted or un-formatted
#Time2Vec layer for positional encoding of real-time data like EEG
class Time2Vec(nn.Module):
#Encodes time steps into a continuous embedding space to help the transformer learn temporal dependencies.

Ruff (E501): neural_network/real_time_encoder_transformer.py:7:89: Line too long (113 > 88)
def __init__(self, d_model):

Please provide return type hint for the function: __init__. If the function does not return a value, please provide the type hint as: def function() -> None:
Please provide type hint for the parameter: d_model

super().__init__()
self.w0 = nn.Parameter(torch.randn(1, 1))
self.b0 = nn.Parameter(torch.randn(1, 1))
self.w = nn.Parameter(torch.randn(1, d_model - 1))
self.b = nn.Parameter(torch.randn(1, d_model - 1))

def forward(self, t):

Please provide return type hint for the function: forward. If the function does not return a value, please provide the type hint as: def function() -> None:
As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide doctest for the function forward
Please provide type hint for the parameter: t
Please provide descriptive name for the parameter: t

linear = self.w0 * t + self.b0

Ruff (W291): neural_network/real_time_encoder_transformer.py:16:39: Trailing whitespace
periodic = torch.sin(self.w * t + self.b)

Ruff (W291): neural_network/real_time_encoder_transformer.py:17:50: Trailing whitespace
return torch.cat([linear, periodic], dim=-1)

Ruff (W291): neural_network/real_time_encoder_transformer.py:18:53: Trailing whitespace

Ruff (W293): neural_network/real_time_encoder_transformer.py:19:1: Blank line contains whitespace
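Time2Vec maps each scalar time index to a d_model-dimensional embedding: one linear term w0 * t + b0 concatenated with d_model - 1 periodic terms sin(w * t + b). A minimal shape-check sketch using the class above, with illustrative sizes and time indices shaped (batch, seq_len, 1):
t = torch.arange(10, dtype=torch.float32).view(1, 10, 1)  # (batch=1, seq_len=10, 1), illustrative
emb = Time2Vec(d_model=16)(t)
print(emb.shape)  # torch.Size([1, 10, 16]): 1 linear + 15 periodic features per time step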
#positionwise feedforward network
class PositionwiseFeedForward(nn.Module):
def __init__(self, d_model, hidden, drop_prob=0.1):

Please provide return type hint for the function: __init__. If the function does not return a value, please provide the type hint as: def function() -> None:
Please provide type hint for the parameter: d_model
Please provide type hint for the parameter: hidden
Please provide type hint for the parameter: drop_prob

super().__init__()
self.fc1 = nn.Linear(d_model, hidden)
self.fc2 = nn.Linear(hidden, d_model)
self.relu = nn.ReLU()
self.dropout = nn.Dropout(drop_prob)

def forward(self, x):

Please provide return type hint for the function: forward. If the function does not return a value, please provide the type hint as: def function() -> None:
As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide doctest for the function forward
Please provide type hint for the parameter: x
Please provide descriptive name for the parameter: x

x = self.fc1(x)
x = self.relu(x)
x = self.dropout(x)
return self.fc2(x)
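The feed-forward block expands each position from d_model to hidden, applies ReLU and dropout, and projects back to d_model, acting on every time step independently. A minimal shape check with illustrative sizes:
ffn = PositionwiseFeedForward(d_model=64, hidden=256)
print(ffn(torch.randn(4, 100, 64)).shape)  # torch.Size([4, 100, 64]): shape is preserved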
#scaled dot product attention
class ScaleDotProductAttention(nn.Module):
def __init__(self):

Please provide return type hint for the function: __init__. If the function does not return a value, please provide the type hint as: def function() -> None:

super().__init__()
self.softmax = nn.Softmax(dim=-1)

def forward(self, q, k, v, mask=None):

Please provide return type hint for the function: forward. If the function does not return a value, please provide the type hint as: def function() -> None:
As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide doctest for the function forward
Please provide type hint and descriptive name for each of the parameters: q, k, v
Please provide type hint for the parameter: mask

_, _, _, d_k = k.size()
scores = (q @ k.transpose(2, 3)) / math.sqrt(d_k)

if mask is not None:
scores = scores.masked_fill(mask == 0, -1e9)

attn = self.softmax(scores)
context = attn @ v
return context, attn
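This implements the standard scaled dot-product attention, softmax(q @ k.T / sqrt(d_k)) @ v. An illustrative shape check, assuming the four-dimensional (batch, n_head, seq_len, d_k) layout produced by split_heads in MultiHeadAttention below:
q = k = v = torch.randn(2, 8, 50, 16)  # illustrative sizes
context, attn = ScaleDotProductAttention()(q, k, v)
print(context.shape, attn.shape)  # torch.Size([2, 8, 50, 16]) torch.Size([2, 8, 50, 50])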
#multi head attention
class MultiHeadAttention(nn.Module):
def __init__(self, d_model, n_head):

Please provide return type hint for the function: __init__. If the function does not return a value, please provide the type hint as: def function() -> None:
Please provide type hint for the parameter: d_model
Please provide type hint for the parameter: n_head

super().__init__()
self.n_head = n_head
self.attn = ScaleDotProductAttention()
self.w_q = nn.Linear(d_model, d_model)
self.w_k = nn.Linear(d_model, d_model)
self.w_v = nn.Linear(d_model, d_model)
self.w_out = nn.Linear(d_model, d_model)

def forward(self, q, k, v, mask=None):

Please provide return type hint for the function: forward. If the function does not return a value, please provide the type hint as: def function() -> None:
As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide doctest for the function forward
Please provide type hint and descriptive name for each of the parameters: q, k, v
Please provide type hint for the parameter: mask

q, k, v = self.w_q(q), self.w_k(k), self.w_v(v)
q, k, v = self.split_heads(q), self.split_heads(k), self.split_heads(v)

context, _ = self.attn(q, k, v, mask)
out = self.w_out(self.concat_heads(context))
return out

def split_heads(self, x):

Please provide return type hint for the function: split_heads. If the function does not return a value, please provide the type hint as: def function() -> None:
As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide doctest for the function split_heads
Please provide type hint for the parameter: x
Please provide descriptive name for the parameter: x

batch, seq_len, d_model = x.size()
d_k = d_model // self.n_head
return x.view(batch, seq_len, self.n_head, d_k).transpose(1, 2)

def concat_heads(self, x):

Please provide return type hint for the function: concat_heads. If the function does not return a value, please provide the type hint as: def function() -> None:
As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide doctest for the function concat_heads
Please provide type hint for the parameter: x
Please provide descriptive name for the parameter: x

batch, n_head, seq_len, d_k = x.size()
return x.transpose(1, 2).contiguous().view(batch, seq_len, n_head * d_k)
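split_heads and concat_heads are inverses: the first reshapes (batch, seq_len, d_model) into (batch, n_head, seq_len, d_model // n_head) so attention runs per head, and the second restores the original layout. A small round-trip sketch with illustrative sizes, assuming d_model is divisible by n_head:
mha = MultiHeadAttention(d_model=64, n_head=8)
x = torch.randn(4, 20, 64)
print(torch.equal(mha.concat_heads(mha.split_heads(x)), x))  # True: the two reshapes cancel out
print(mha(x, x, x).shape)  # torch.Size([4, 20, 64]): self-attention preserves the input shape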

#Layer normalization
class LayerNorm(nn.Module):
def __init__(self, d_model, eps=1e-12):

Please provide return type hint for the function: __init__. If the function does not return a value, please provide the type hint as: def function() -> None:
Please provide type hint for the parameter: d_model
Please provide type hint for the parameter: eps

super().__init__()
self.gamma = nn.Parameter(torch.ones(d_model))
self.beta = nn.Parameter(torch.zeros(d_model))
self.eps = eps

def forward(self, x):

Please provide return type hint for the function: forward. If the function does not return a value, please provide the type hint as: def function() -> None:
As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide doctest for the function forward
Please provide type hint for the parameter: x
Please provide descriptive name for the parameter: x

mean = x.mean(-1, keepdim=True)
var = x.var(-1, unbiased=False, keepdim=True)
return self.gamma * (x - mean) / torch.sqrt(var + self.eps) + self.beta
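This normalizes the last dimension with a biased variance estimate, which is what PyTorch's built-in nn.LayerNorm also uses; at default initialization (gamma of ones, beta of zeros) the two should agree. A quick equivalence check, illustrative only:
x = torch.randn(2, 5, 32)
print(torch.allclose(LayerNorm(32)(x), nn.LayerNorm(32, eps=1e-12)(x), atol=1e-6))  # True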

#transformer encoder layer
class TransformerEncoderLayer(nn.Module):
def __init__(self, d_model, n_head, hidden_dim, drop_prob=0.1):

Please provide return type hint for the function: __init__. If the function does not return a value, please provide the type hint as: def function() -> None:
Please provide type hint for the parameter: d_model
Please provide type hint for the parameter: n_head
Please provide type hint for the parameter: hidden_dim
Please provide type hint for the parameter: drop_prob

super().__init__()
self.self_attn = MultiHeadAttention(d_model, n_head)
self.ffn = PositionwiseFeedForward(d_model, hidden_dim, drop_prob)
self.norm1 = LayerNorm(d_model)
self.norm2 = LayerNorm(d_model)
self.dropout = nn.Dropout(drop_prob)

def forward(self, x, mask=None):

Please provide return type hint for the function: forward. If the function does not return a value, please provide the type hint as: def function() -> None:
As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide doctest for the function forward
Please provide type hint for the parameter: x
Please provide descriptive name for the parameter: x
Please provide type hint for the parameter: mask

attn_out = self.self_attn(x, x, x, mask)
x = self.norm1(x + self.dropout(attn_out))
ffn_out = self.ffn(x)
x = self.norm2(x + self.dropout(ffn_out))

return x

#encoder stack
class TransformerEncoder(nn.Module):
def __init__(self, d_model, n_head, hidden_dim, num_layers, drop_prob=0.1):

Please provide return type hint for the function: __init__. If the function does not return a value, please provide the type hint as: def function() -> None:
Please provide type hint for the parameter: d_model
Please provide type hint for the parameter: n_head
Please provide type hint for the parameter: hidden_dim
Please provide type hint for the parameter: num_layers
Please provide type hint for the parameter: drop_prob

super().__init__()
self.layers = nn.ModuleList([
TransformerEncoderLayer(d_model, n_head, hidden_dim, drop_prob)
for _ in range(num_layers)
])

def forward(self, x, mask=None):

Please provide return type hint for the function: forward. If the function does not return a value, please provide the type hint as: def function() -> None:
As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide doctest for the function forward
Please provide type hint for the parameter: x
Please provide descriptive name for the parameter: x
Please provide type hint for the parameter: mask

for layer in self.layers:
x = layer(x, mask)
return x
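Each layer applies post-norm residual blocks (self-attention, then the feed-forward network), and the stack simply chains num_layers of them, so the (batch, seq_len, d_model) shape is preserved end to end. A brief sketch with illustrative sizes and no mask:
enc = TransformerEncoder(d_model=64, n_head=8, hidden_dim=256, num_layers=2)
print(enc(torch.randn(4, 100, 64)).shape)  # torch.Size([4, 100, 64])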


#attention pooling layer
class AttentionPooling(nn.Module):
def __init__(self, d_model):

Please provide return type hint for the function: __init__. If the function does not return a value, please provide the type hint as: def function() -> None:
Please provide type hint for the parameter: d_model

super().__init__()
self.attn_score = nn.Linear(d_model, 1)

def forward(self, x, mask=None):

Please provide return type hint for the function: forward. If the function does not return a value, please provide the type hint as: def function() -> None:
As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide doctest for the function forward
Please provide type hint for the parameter: x
Please provide descriptive name for the parameter: x
Please provide type hint for the parameter: mask

attn_weights = torch.softmax(self.attn_score(x).squeeze(-1), dim=-1)

if mask is not None:
attn_weights = attn_weights.masked_fill(mask == 0, 0)
attn_weights = attn_weights / (attn_weights.sum(dim=1, keepdim=True) + 1e-8)

pooled = torch.bmm(attn_weights.unsqueeze(1), x).squeeze(1)
return pooled, attn_weights
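Attention pooling collapses the time dimension with learned weights: a linear layer scores every time step, the scores are softmaxed over time, and the sequence is reduced by a weighted sum, so (batch, seq_len, d_model) becomes (batch, d_model). A shape sketch with illustrative sizes and no mask:
pool = AttentionPooling(d_model=64)
pooled, weights = pool(torch.randn(4, 100, 64))
print(pooled.shape, weights.shape)  # torch.Size([4, 64]) torch.Size([4, 100])
print(torch.allclose(weights.sum(dim=1), torch.ones(4)))  # True: the weights form a distribution over time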

# transformer model

class EEGTransformer(nn.Module):

def __init__(self, feature_dim, d_model=128, n_head=8, hidden_dim=512,

Please provide return type hint for the function: __init__. If the function does not return a value, please provide the type hint as: def function() -> None:

Please provide type hint for the parameter: feature_dim

Please provide type hint for the parameter: d_model

Please provide type hint for the parameter: n_head

Please provide type hint for the parameter: hidden_dim

num_layers=4, drop_prob=0.1, output_dim=1, task_type='regression'):

Please provide type hint for the parameter: num_layers

Please provide type hint for the parameter: drop_prob

Please provide type hint for the parameter: output_dim

Please provide type hint for the parameter: task_type

super().__init__()
self.task_type = task_type
self.input_proj = nn.Linear(feature_dim, d_model)

# Time encoding for temporal understanding
self.time2vec = Time2Vec(d_model)

# Transformer encoder for sequence modeling
self.encoder = TransformerEncoder(d_model, n_head, hidden_dim, num_layers, drop_prob)

Ruff (E501): neural_network/real_time_encoder_transformer.py:154:89: Line too long (93 > 88)

# Attention pooling to summarize time dimension
self.pooling = AttentionPooling(d_model)

# Final output layer
self.output_layer = nn.Linear(d_model, output_dim)

def forward(self, x, mask=None):

Please provide return type hint for the function: forward. If the function does not return a value, please provide the type hint as: def function() -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/real_time_encoder_transformer.py, please provide doctest for the function forward

Please provide type hint for the parameter: x

Please provide descriptive name for the parameter: x

Please provide type hint for the parameter: mask


b, t, _ = x.size()

# Create time indices and embed them
t_idx = torch.arange(t, device=x.device).view(1, t, 1).expand(b, t, 1).float()
time_emb = self.time2vec(t_idx)

# Add time embedding to feature projection
x = self.input_proj(x) + time_emb

# Pass through the Transformer encoder
x = self.encoder(x, mask)

# Aggregate features across time with attention
pooled, attn_weights = self.pooling(x, mask)

# Final output (regression or classification)
out = self.output_layer(pooled)

if self.task_type == 'classification':
out = torch.softmax(out, dim=-1)

return out, attn_weights
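An end-to-end usage sketch for the full model, assuming a batch of EEG windows shaped (batch, time_steps, channels), the default regression head, and no padding mask; all sizes below are illustrative:
model = EEGTransformer(feature_dim=32, d_model=128, n_head=8, output_dim=1)
eeg = torch.randn(8, 250, 32)  # 8 windows, 250 time steps, 32 channels (illustrative)
preds, attn = model(eeg)
print(preds.shape, attn.shape)  # torch.Size([8, 1]) torch.Size([8, 250])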