
DP4AMatMul perf refinements #23539

Merged
merged 2 commits into main from user/sushraja/dp4matmul_finetunes on Jan 31, 2025

Conversation

@sushraja-msft (Contributor) commented Jan 30, 2025

In this change:

  1. Vectorization along k is increased to 4.
  2. Tile_A and Tile_B are stored transposed in shared memory, which improves memory locality for our access pattern.
  3. Lane output is switched to individual vectors and its loop is unrolled; this fixes the issue where lane_output was previously not kept in registers. A sketch of these changes appears below.
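
For concreteness, here is a minimal WGSL sketch of a DP4A inner loop written in this style. It is not the actual ORT DP4AMatMul shader: the tile sizes, workgroup size, and the names TILE_K_VEC, tile_A, tile_B, lane_dot, and lane_output0/1 are illustrative assumptions; dot4I8Packed is the real WGSL packed integer dot-product built-in.

```
// Minimal sketch only, not the ONNX Runtime shader; sizes and names are
// assumptions chosen to illustrate the three changes above.
requires packed_4x8_integer_dot_product;

const TILE_K_VEC = 8u;  // assumed tile depth, in vec4<u32> units (16 int8 values each)

// (2) Tiles stored transposed (k-major): a lane sweeping over k reads
// consecutive shared-memory addresses.
var<workgroup> tile_A : array<vec4<u32>, 64>;  // assumed 8 rows * TILE_K_VEC
var<workgroup> tile_B : array<vec4<u32>, 64>;  // assumed 8 cols * TILE_K_VEC

// (1) k vectorized by 4: each iteration loads one vec4<u32> per tile
// (16 int8 values) and feeds it through four DP4A instructions.
fn lane_dot(row : u32, col : u32) -> i32 {
  var acc = 0i;
  for (var k = 0u; k < TILE_K_VEC; k++) {
    let a = tile_A[row * TILE_K_VEC + k];
    let b = tile_B[col * TILE_K_VEC + k];
    acc += dot4I8Packed(a.x, b.x) + dot4I8Packed(a.y, b.y)
         + dot4I8Packed(a.z, b.z) + dot4I8Packed(a.w, b.w);
  }
  return acc;
}

@compute @workgroup_size(8)
fn main(@builtin(local_invocation_id) lid : vec3<u32>) {
  // ... tile loading and workgroupBarrier() omitted ...
  // (3) Lane output kept in individual named vectors with the column loop
  // unrolled by hand: a lane_output array indexed by a loop variable is
  // typically demoted to scratch memory, while named vectors can stay in
  // registers.
  var lane_output0 = vec4<i32>(0);
  var lane_output1 = vec4<i32>(0);
  lane_output0.x += lane_dot(lid.x, 0u);
  lane_output0.y += lane_dot(lid.x, 1u);
  lane_output0.z += lane_dot(lid.x, 2u);
  lane_output0.w += lane_dot(lid.x, 3u);
  lane_output1.x += lane_dot(lid.x, 4u);
  lane_output1.y += lane_dot(lid.x, 5u);
  lane_output1.z += lane_dot(lid.x, 6u);
  lane_output1.w += lane_dot(lid.x, 7u);
  // ... writeback of lane_output0/1 omitted ...
}
```

The point of (3) is that indexing lane_output with a runtime value forces the array to have a memory address, which is what previously showed up as local-memory writebacks in the profile.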

Perf improvements with this change are not very consistent across devices. On a Tiger Lake GPU with driver 32.0.101.6460 (latest Intel drivers):

```
Baseline

model_benchmark.exe -i C:\Phi-3.5-mini-instruct-onnx-web\Phi-3.5-mini-instruct-onnx-web\ -l 1000
Batch size: 1, prompt tokens: 1001, tokens to generate: 128
Prompt processing (time to first token):
        avg (us):       7.36557e+06                         <<<<
        avg (tokens/s): 135.903
        p50 (us):       7.35498e+06
        stddev (us):    27599
        n:              5 * 1001 token(s)

With Change

model_benchmark.exe -i C:\Phi-3.5-mini-instruct-onnx-web\Phi-3.5-mini-instruct-onnx-web\ -l 1000
Batch size: 1, prompt tokens: 1001, tokens to generate: 128
Prompt processing (time to first token):
        avg (us):       6.52302e+06                           <<<<
        avg (tokens/s): 153.457
        p50 (us):       6.52224e+06
        stddev (us):    10407.3
        n:              5 * 1001 token(s)
```
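
For reference, that is 7.36557e6 / 6.52302e6 ≈ 1.13, i.e. roughly a 13% improvement in prompt-processing throughput on this machine (equivalently 153.457 / 135.903 ≈ 1.13); the run-to-run stddev also drops from ~27.6 ms to ~10.4 ms.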

However, comparing before-and-after profiles in Intel GPA, one can clearly see straight runs of ALU work that are no longer interspersed with writebacks to the local memory that previously held lane_output.

![image](https://github.com/user-attachments/assets/e01d3474-8406-4a61-b352-2ecbf0855a7f)

@guschmue added the ep:WebGPU (ort-web webgpu provider) label on Jan 31, 2025
@guschmue (Contributor) commented:

I see similar gains on Tiger Lake; no impact on M4 and RTX 3060.

@guschmue merged commit 271c509 into main on Jan 31, 2025
98 of 100 checks passed
@guschmue deleted the user/sushraja/dp4matmul_finetunes branch on Jan 31, 2025, 18:20
sfatimar pushed a commit to intel/onnxruntime that referenced this pull request Feb 5, 2025
sfatimar pushed a commit to intel/onnxruntime that referenced this pull request Feb 5, 2025