fix document masking for chunked attention #37429
base: main
Conversation
Hi 👋, thank you for opening this pull request! The pull request is converted to draft by default. The CI will be paused while the PR is in draft mode. When it is ready for review, please click the Ready for review button.
This does not affect L4, but yes, it fixes the document IDs!
""" | ||
chunk_mask = chunk_idxs[batch_idx, q_idx] == chunk_idxs[batch_idx, kv_idx] | ||
causal_doc_mask = causal_mask_mod(batch_idx, head_idx, q_idx, kv_idx) | ||
return chunk_mask & causal_doc_mask |
We need the attention mask as well, for padding etc.
The call to causal_mask_mod
should handle this 🤔
Ah right, yeah, my bad.
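To make the resolved point concrete, here is a minimal sketch (not the exact PR code; it reuses the names from the quoted diff and the flex-attention `mask_mod` signature) of how the chunk mask composes with the existing causal/document mask, which already accounts for padding:

```python
# Minimal sketch: `chunk_idxs` is assumed to be a (batch, seq_len) tensor of
# chunk indices and `causal_mask_mod` an existing mask_mod that already encodes
# causality, padding and document boundaries.
import torch

def make_chunked_mask_mod(chunk_idxs: torch.Tensor, causal_mask_mod):
    def mask_mod(batch_idx, head_idx, q_idx, kv_idx):
        # A query may only attend to keys inside its own chunk ...
        chunk_mask = chunk_idxs[batch_idx, q_idx] == chunk_idxs[batch_idx, kv_idx]
        # ... and must still respect the wrapped causal/document/padding mask.
        causal_doc_mask = causal_mask_mod(batch_idx, head_idx, q_idx, kv_idx)
        return chunk_mask & causal_doc_mask
    return mask_mod
```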
Flex has `and_masks`; I don't know if it's better or not.
They're basically doing the same thing in this case, to my understanding. (Code is at https://github.com/pytorch/pytorch/blob/183bca41de89825107dce1d1ecc1502d9993a684/torch/nn/attention/flex_attention.py#L726-L730)
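For reference, a hedged sketch of the same composition going through PyTorch's `and_masks` helper instead of a hand-written wrapper. `chunk_idxs`, `causal_mask_mod`, `batch_size` and `seq_len` are assumed to be defined as above:

```python
# Sketch only, not the PR code: and_masks simply ANDs several mask_mods, which
# is what the explicit `chunk_mask & causal_doc_mask` does by hand.
from torch.nn.attention.flex_attention import and_masks, create_block_mask

def chunk_mask_mod(batch_idx, head_idx, q_idx, kv_idx):
    # chunk_idxs: (batch, seq_len) tensor of chunk indices, as above
    return chunk_idxs[batch_idx, q_idx] == chunk_idxs[batch_idx, kv_idx]

combined_mask_mod = and_masks(chunk_mask_mod, causal_mask_mod)
block_mask = create_block_mask(
    combined_mask_mod, B=batch_size, H=None, Q_LEN=seq_len, KV_LEN=seq_len
)
```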
Yeah, torchtune does something similar in their latest llama4 release.
torchtune relies pretty heavily on compile too.
Can you just make sure this works well for chunked attention decoding? You never know when compile can break 😓
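One possible smoke test along those lines (purely hypothetical code, not part of the PR): build the combined block mask for two packed documents and run it through compiled flex_attention.

```python
# Hypothetical sanity check: two packed documents, a chunked + document-causal
# mask, and a compiled flex_attention call.
import torch
from torch.nn.attention.flex_attention import and_masks, create_block_mask, flex_attention

device = "cuda"  # flex_attention currently targets GPU
batch, heads, seq_len, head_dim, chunk_size = 2, 4, 256, 64, 128

# Two documents packed into each sequence, plus per-position chunk indices.
doc_ids = torch.cat([torch.zeros(100), torch.ones(156)]).long().to(device)
doc_ids = doc_ids.unsqueeze(0).expand(batch, -1)
chunk_idxs = (torch.arange(seq_len, device=device) // chunk_size).unsqueeze(0).expand(batch, -1)

def doc_causal_mask_mod(b, h, q_idx, kv_idx):
    return (q_idx >= kv_idx) & (doc_ids[b, q_idx] == doc_ids[b, kv_idx])

def chunk_mask_mod(b, h, q_idx, kv_idx):
    return chunk_idxs[b, q_idx] == chunk_idxs[b, kv_idx]

block_mask = create_block_mask(
    and_masks(chunk_mask_mod, doc_causal_mask_mod),
    B=batch, H=None, Q_LEN=seq_len, KV_LEN=seq_len, device=device,
)

q = torch.randn(batch, heads, seq_len, head_dim, device=device)
k, v = torch.randn_like(q), torch.randn_like(q)
out = torch.compile(flex_attention)(q, k, v, block_mask=block_mask)
print(out.shape)  # (batch, heads, seq_len, head_dim)
```

This is prefill-shaped; an actual decoding test would additionally feed single-query (Q_LEN=1) steps against the cached KV length.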
""" | ||
chunk_mask = chunk_idxs[batch_idx, q_idx] == chunk_idxs[batch_idx, kv_idx] | ||
causal_doc_mask = causal_mask_mod(batch_idx, head_idx, q_idx, kv_idx) | ||
return chunk_mask & causal_doc_mask |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Flex has the mask_and
don't know if it's better or not.
What does this PR do?
The chunked attention was clobbering the document IDs, so the information needed to distinguish documents within the same chunk was lost. The proper fix is to generate a chunk mask and combine it with the existing causal document mask.
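As a tiny illustration of the failure mode (hypothetical toy values, not the PR's code): when two packed documents fall into the same chunk, the chunk mask alone would let them attend to each other, so the document-aware causal mask has to be ANDed in.

```python
import torch

seq_len, chunk_size = 8, 8                        # both documents fit in one chunk
doc_ids = torch.tensor([0, 0, 0, 1, 1, 1, 1, 1])  # two packed documents
chunk_idxs = torch.arange(seq_len) // chunk_size  # all zeros here

q = torch.arange(seq_len).unsqueeze(1)   # query positions, column vector
kv = torch.arange(seq_len).unsqueeze(0)  # key/value positions, row vector

chunk_mask = chunk_idxs[q] == chunk_idxs[kv]               # chunk-only: all True
causal_doc_mask = (q >= kv) & (doc_ids[q] == doc_ids[kv])  # causal within a document
combined = chunk_mask & causal_doc_mask

assert chunk_mask[4, 2]    # chunk mask alone lets doc 1 see doc 0
assert not combined[4, 2]  # combined mask keeps the documents separate
```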
Before submitting
- Did you read the contributor guideline, Pull Request section?
- Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
- Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
Who can review?
@ArthurZucker @SunMarc
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.