Add linspace op #478

Open
wants to merge 4 commits into master
Conversation

0x45f (Collaborator) commented Mar 5, 2025

PR Category

Operator

Type of Change

New Feature

Description

Add linspace op
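
For reference, a minimal usage sketch follows. It assumes this PR targets FlagGems and that the library's `flag_gems.use_gems()` context manager is available to dispatch `torch.linspace` to the new implementation; the call is illustrative and not part of this PR.

```python
# Usage sketch (assumption: flag_gems.use_gems() patches ATen ops, including
# the new linspace, so torch.linspace runs through the Triton-backed kernel).
import torch
import flag_gems

with flag_gems.use_gems():
    out = torch.linspace(0, 65503, steps=11930, dtype=torch.float16, device="cuda")

# Compare against stock PyTorch; default float16 tolerances apply.
ref = torch.linspace(0, 65503, steps=11930, dtype=torch.float16, device="cuda")
torch.testing.assert_close(out, ref)
```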

Issue

Progress

  • Change is properly reviewed (1 reviewer required, 2 recommended).
  • Change responds to an issue.
  • Change is fully covered by a UT.

Performance

benchmark/test_tensor_constructor_perf.py 
Operator: linspace  Performance Test (dtype=torch.float16, mode=cuda,level=comprehensive)
Status       Torch Latency (ms)    Gems Latency (ms)         Gems Speedup          Size Detail
-----------------------------------------------------------------------------------------------
SUCCESS               0.007744            0.007424               1.043          {'start': 0, 'end': 65503, 'steps': 11930, 'dtype': torch.float16, 'device': 'cuda'}
SUCCESS               0.007712            0.006176               1.249          {'start': 0, 'end': 4096, 'steps': 3598, 'dtype': torch.float16, 'device': 'cuda'}
SUCCESS               0.006752            0.006528               1.034          {'start': 0, 'end': 65503, 'steps': 3165, 'dtype': torch.float16, 'device': 'cuda'}
SUCCESS               0.006688            0.006976               0.959          {'start': 0, 'end': 65503, 'steps': 24547, 'dtype': torch.float16, 'device': 'cuda'}
SUCCESS               0.006784            0.006656               1.019          {'start': 0, 'end': 65503, 'steps': 7776, 'dtype': torch.float16, 'device': 'cuda'}
SUCCESS               0.007392            0.007648               0.967          {'start': 0, 'end': 65503, 'steps': 48606, 'dtype': torch.float16, 'device': 'cuda'}
SUCCESS               0.007392            0.006176               1.197          {'start': 0, 'end': 10000, 'steps': 541, 'dtype': torch.float16, 'device': 'cuda'}
SUCCESS               0.007168            0.007008               1.023          {'start': 0, 'end': 65503, 'steps': 62128, 'dtype': torch.float16, 'device': 'cuda'}
SUCCESS               0.007264            0.007616               0.954          {'start': 0, 'end': 65503, 'steps': 30535, 'dtype': torch.float16, 'device': 'cuda'}
SUCCESS               0.006432            0.006176               1.041          {'start': 0, 'end': 10000, 'steps': 7891, 'dtype': torch.float16, 'device': 'cuda'}
SUCCESS               0.007360            0.007616               0.966          {'start': 0, 'end': 65503, 'steps': 39451, 'dtype': torch.float16, 'device': 'cuda'}
SUCCESS               0.006912            0.006656               1.038          {'start': 0, 'end': 65503, 'steps': 54982, 'dtype': torch.float16, 'device': 'cuda'}


Operator: linspace  Performance Test (dtype=torch.float32, mode=cuda,level=comprehensive)
Status       Torch Latency (ms)    Gems Latency (ms)         Gems Speedup          Size Detail
-----------------------------------------------------------------------------------------------
SUCCESS              11.912832            5.959104               1.999          {'start': 0, 'end': 1073741824, 'steps': 1028344234, 'dtype': torch.float32, 'device': 'cuda'}
SUCCESS               0.006176            0.006176               1.000          {'start': 0, 'end': 4096, 'steps': 1975, 'dtype': torch.float32, 'device': 'cuda'}
SUCCESS               0.018048            0.012000               1.504          {'start': 0, 'end': 16777216, 'steps': 981161, 'dtype': torch.float32, 'device': 'cuda'}
SUCCESS               0.038336            0.022112               1.734          {'start': 0, 'end': 16777216, 'steps': 2767021, 'dtype': torch.float32, 'device': 'cuda'}
SUCCESS               6.853696            3.430368               1.998          {'start': 0, 'end': 1073741824, 'steps': 591429131, 'dtype': torch.float32, 'device': 'cuda'}
SUCCESS               2.992288            1.499456               1.996          {'start': 0, 'end': 268435456, 'steps': 257972581, 'dtype': torch.float32, 'device': 'cuda'}
SUCCESS               0.006752            0.006528               1.034          {'start': 0, 'end': 10000, 'steps': 561, 'dtype': torch.float32, 'device': 'cuda'}
SUCCESS               0.015200            0.011104               1.369          {'start': 0, 'end': 2560000, 'steps': 766165, 'dtype': torch.float32, 'device': 'cuda'}
SUCCESS               2.924992            1.465472               1.996          {'start': 0, 'end': 655360000, 'steps': 252165756, 'dtype': torch.float32, 'device': 'cuda'}
SUCCESS               0.006784            0.007392               0.918          {'start': 0, 'end': 10000, 'steps': 6251, 'dtype': torch.float32, 'device': 'cuda'}
SUCCESS               0.011712            0.009184               1.275          {'start': 0, 'end': 2560000, 'steps': 440106, 'dtype': torch.float32, 'device': 'cuda'}
SUCCESS               0.064672            0.035264               1.834          {'start': 0, 'end': 655360000, 'steps': 5076269, 'dtype': torch.float32, 'device': 'cuda'}


Operator: linspace  Performance Test (dtype=torch.bfloat16, mode=cuda,level=comprehensive)
Status       Torch Latency (ms)    Gems Latency (ms)         Gems Speedup          Size Detail
-----------------------------------------------------------------------------------------------
SUCCESS               0.653472            0.329152               1.985          {'start': 0, 'end': 1073741824, 'steps': 55846449, 'dtype': torch.bfloat16, 'device': 'cuda'}
SUCCESS               0.006720            0.006176               1.088          {'start': 0, 'end': 4096, 'steps': 3087, 'dtype': torch.bfloat16, 'device': 'cuda'}
SUCCESS               0.153792            0.079296               1.939          {'start': 0, 'end': 16777216, 'steps': 12677224, 'dtype': torch.bfloat16, 'device': 'cuda'}
SUCCESS               0.124928            0.065760               1.900          {'start': 0, 'end': 16777216, 'steps': 10274023, 'dtype': torch.bfloat16, 'device': 'cuda'}
SUCCESS               5.398432            2.702464               1.998          {'start': 0, 'end': 1073741824, 'steps': 465792052, 'dtype': torch.bfloat16, 'device': 'cuda'}
SUCCESS               2.240704            1.123712               1.994          {'start': 0, 'end': 268435456, 'steps': 193058198, 'dtype': torch.bfloat16, 'device': 'cuda'}
SUCCESS               0.006432            0.006784               0.948          {'start': 0, 'end': 10000, 'steps': 9974, 'dtype': torch.bfloat16, 'device': 'cuda'}
SUCCESS               0.019904            0.013536               1.470          {'start': 0, 'end': 2560000, 'steps': 1178752, 'dtype': torch.bfloat16, 'device': 'cuda'}
SUCCESS               5.961792            2.984256               1.998          {'start': 0, 'end': 655360000, 'steps': 514488235, 'dtype': torch.bfloat16, 'device': 'cuda'}
SUCCESS               0.006432            0.006752               0.953          {'start': 0, 'end': 10000, 'steps': 5423, 'dtype': torch.bfloat16, 'device': 'cuda'}
SUCCESS               0.029824            0.017984               1.658          {'start': 0, 'end': 2560000, 'steps': 2017588, 'dtype': torch.bfloat16, 'device': 'cuda'}
SUCCESS               4.037504            2.022432               1.996          {'start': 0, 'end': 655360000, 'steps': 348277461, 'dtype': torch.bfloat16, 'device': 'cuda'}
