Save size in scalar scratch for bo and bq #1201
base: main
Conversation
Signed-off-by: rupengliu-meta <[email protected]>
yaochengji left a comment
Thanks for the contribution! I think the trade-off is between scalar computation and scalar load/store; do you have any performance numbers after the modification?
Yes, I will update the perf numbers later.
It seems to give only a fairly minimal throughput improvement, but the improvement is consistently around 1%-2%. Tested through the kernel benchmarking script (not e2e).
kyuyeunk left a comment
Isn't this change also applicable to bkv, i.e., saving the bkv sz to a scalar scratch as well?
@kyuyeunk yep, good idea. I just checked the bkv sz: it is offset + bkv_sz_frm_new, which is used when wait is False, and there is no existing value for it, so we would still have to do the extra calculation even if we added this in the wait=False path. So this might not be applicable for bkv?
Ping on updating the perf numbers in the PR description.
Updated, thanks!
kyuyeunk left a comment
lgtm but requires approval from @bythew3i
bo_ids_ref,  # [4] (bo_sem_0_seq_idx, bo_sem_1_seq_idx, bo_sem_0_bo_idx, bo_sem_1_bo_idx)
bo_ids_ref,  # [6] (bo_sem_0_seq_idx, bo_sem_1_seq_idx, bo_sem_0_bo_idx, bo_sem_1_bo_idx, bo_sem_0_sz, bo_sem_1_sz)
bkv_update_ids_ref,  # [6] (bkv_sem_0_seq_idx, bkv_sem_1_seq_idx, bkv_sem_0_offset, bkv_sem_1_offset, bkv_sem_0_sz, bkv_sem_1_sz)
bq_fetch_ids_ref,  # [2] (bq_sem_0_sz, bq_sem_1_sz)
nit: just call it bq_ids_ref
)
else:
  # Retrieve sz from scratch instead of recalculating
  sz = bq_fetch_ids_ref[bq_sem_idx]
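For readers skimming the diff, here is a minimal, self-contained model of the round trip in plain Python. The helper names and the size formula are illustrative assumptions, not the kernel's actual code: the fetch side computes sz once and writes it into the two-slot scratch indexed by the semaphore, and the wait side simply reads it back, which is what the `else` branch above does.

```python
# Illustrative stand-in for the [2] scalar scratch: (bq_sem_0_sz, bq_sem_1_sz).
bq_fetch_ids_ref = [0, 0]

def issue_bq_fetch(bq_sem_idx: int, seq_len: int, bq_blk: int, blk_idx: int) -> None:
    # Fetch side: do the scalar size computation once and park the result in the
    # slot tied to the semaphore that will be waited on later.
    sz = min(bq_blk, seq_len - blk_idx * bq_blk)  # placeholder size formula
    bq_fetch_ids_ref[bq_sem_idx] = sz

def wait_bq_fetch(bq_sem_idx: int) -> int:
    # Wait side: retrieve sz from scratch instead of recalculating it.
    return bq_fetch_ids_ref[bq_sem_idx]

issue_bq_fetch(bq_sem_idx=0, seq_len=100, bq_blk=32, blk_idx=3)
assert wait_bq_fetch(0) == 4
```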
We definitely need to retune and update the tuned block sizes. I understand you may not have an autotune script, but please write a benchmarking script even with the same block sizes; we want to see perf on different block sizes and different models. I am very strict about this in Google-internal kernel development as well. We don't want to just check in the code without really understanding how much it can bring for different models (shapes) and block sizes.
Even appending the throughput change on different models is acceptable. Thanks.
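As a rough sketch of the kind of block-size sweep being asked for: everything below is a placeholder (`fake_attention` stands in for the real kernel entry point, and the shapes and block-size candidates are arbitrary); when adapting it, swap in the actual ragged-paged-attention call and the block-size arguments it takes.

```python
import functools
import itertools
import time

import jax
import jax.numpy as jnp

def fake_attention(q, k, v, *, bq_blk, bkv_blk):
    # Stand-in math only; the real kernel would consume bq_blk/bkv_blk as block sizes.
    s = jnp.einsum("qhd,khd->hqk", q, k) / jnp.sqrt(jnp.float32(q.shape[-1]))
    return jnp.einsum("hqk,khd->qhd", jax.nn.softmax(s, axis=-1), v)

def bench(fn, *args, iters=20):
    fn(*args).block_until_ready()  # warm-up / compile
    start = time.perf_counter()
    for _ in range(iters):
        out = fn(*args)
    out.block_until_ready()
    return (time.perf_counter() - start) / iters

shapes = [  # (q_len, kv_len, num_heads, head_dim) -- extend per model of interest
    (1024, 2048, 8, 128),
    (2048, 2048, 16, 128),
]

for q_len, kv_len, heads, dim in shapes:
    q = jnp.ones((q_len, heads, dim), jnp.float32)
    k = jnp.ones((kv_len, heads, dim), jnp.float32)
    v = jnp.ones((kv_len, heads, dim), jnp.float32)
    for bq_blk, bkv_blk in itertools.product([32, 64, 128], [128, 256, 512]):
        fn = jax.jit(functools.partial(fake_attention, bq_blk=bq_blk, bkv_blk=bkv_blk))
        dt = bench(fn, q, k, v)
        print(f"q={q_len} kv={kv_len} h={heads} bq_blk={bq_blk} bkv_blk={bkv_blk}: {dt*1e3:.3f} ms")
```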
Description
For bo and bq, we can save the size in SMEM (scalar scratch) to avoid recomputing it. This reduces unnecessary scalar computation.
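As a rough illustration of the mechanism (a minimal sketch, not the actual ragged-paged-attention kernel; the kernel body, shapes, and the cached value are made up), Pallas on TPU lets a kernel declare a small SMEM scratch buffer via `scratch_shapes`, write a scalar into it once, and read it back on later grid steps instead of recomputing it:

```python
import jax
import jax.numpy as jnp
from jax.experimental import pallas as pl
from jax.experimental.pallas import tpu as pltpu

def kernel(x_ref, o_ref, sz_ref):
    step = pl.program_id(0)

    @pl.when(step == 0)
    def _():
        # Do the scalar computation once and cache the result in SMEM scratch.
        sz_ref[0] = jnp.int32(96)

    # Later grid steps read the cached value instead of recomputing it
    # (TPU scratch persists across the sequentially executed grid).
    o_ref[...] = x_ref[...] * sz_ref[0].astype(x_ref.dtype)

x = jnp.ones((4, 8, 128), jnp.float32)
y = pl.pallas_call(
    kernel,
    grid=(4,),
    in_specs=[pl.BlockSpec((1, 8, 128), lambda i: (i, 0, 0))],
    out_specs=pl.BlockSpec((1, 8, 128), lambda i: (i, 0, 0)),
    out_shape=jax.ShapeDtypeStruct((4, 8, 128), jnp.float32),
    scratch_shapes=[pltpu.SMEM((2,), jnp.int32)],  # two slots, e.g. one per semaphore
)(x)
```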


It seems to give only a fairly minimal throughput improvement, but the improvement is consistently around 1%-2% before tuning.
Tests have passed for both kernels.
Tests
Ran unit tests and did local e2e testing.