
Integrate LLVM at 6d38dbf6eb56fd2b3399565af455de96a99ffa0f #4103


Merged
5 commits merged on Apr 14, 2025

Conversation

vivekkhandelwal1
Collaborator

@vivekkhandelwal1 vivekkhandelwal1 commented Mar 24, 2025

Update LLVM to llvm/llvm-project@72144d1

TOSA Updates Summary:

1: [TOSA] Update rescale input_/output_zp and double_round attribute

Update tosa.rescale input_/output_zp as inputs according to TOSA 1.0
Update double_round bool attribute to rounding_mode in alignment with
TOSA 1.0. rounding_mode supports "SINGLE_ROUND", "INEXACT_ROUND", and
"DOUBLE_ROUND". Existing double_round behaviours are mapped as follows:
double_round = true -> rounding_mode = "DOUBLE_ROUND"
double_round = false -> rounding_mode = "SINGLE_ROUND"
2: [TOSA] Update tosa.negate's zero-points to inputs

Update LIT tests and XFAIL sets

3: [TOSA] Update tosa.int_div to tosa.intdiv

Update LIT tests
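
The legacy-attribute mapping described in item 1 above can be sketched as a small helper. This is a hypothetical illustration of the mapping rule only; the function name is illustrative and is not part of the actual conversion code in the patch:

```python
# Hypothetical sketch of the double_round -> rounding_mode mapping from
# the PR description; not the actual torch-mlir/LLVM conversion code.
def legacy_double_round_to_rounding_mode(double_round: bool) -> str:
    """Map the pre-TOSA-1.0 double_round bool attribute to a TOSA 1.0
    rounding_mode string ("INEXACT_ROUND" has no legacy equivalent)."""
    return "DOUBLE_ROUND" if double_round else "SINGLE_ROUND"
```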


Signed-off-by: Vivek Khandelwal [email protected]
Co-authored-by: Justin Ngo [email protected]

@vivekkhandelwal1
Collaborator Author

This PR can't be merged until the issue is resolved as described here: #3504 (comment)

@zjgarvey
Collaborator

I'm wondering if someone wrote a conversion from reshape to expand_shape? I'll take a look to triage, but I might not have time to upstream a fix.

@zjgarvey
Collaborator

The crash occurs because of an expand shape op that gets created in the pattern FoldWithProducerReshapeOpByExpansion inside the pass --linalg-fuse-elementwise-ops (not related to #3504). The actual crash occurs when trying to fold the dim ops of the expanded tensor later on in the pass.

Because @MaheshRavishankar allowed accessing the output shape for tensor.expand_shape in llvm/llvm-project@092372d, we should be able to improve the pattern https://github.com/llvm/llvm-project/blob/1a7af2a90fb043b25cbba15ce941ebfdba0e6717/mlir/lib/Dialect/Tensor/IR/TensorOps.cpp#L1989 to fetch the dynamic dims.

@justin-ngo-arm
Contributor

Hi @vivekkhandelwal1 and @zjgarvey, is there a resolution for this blocking issue yet? Some of our work depends on these TOSA 1.0 updates being aligned upstream, and it seems like this blocking issue is not related to TOSA. Can we maybe do an integration with an LLVM hash that doesn't include the breaking issue (with all TOSA updates still included)?

cc: @sjarus

@MaheshRavishankar
Contributor

> The crash occurs because of an expand shape op that gets created in the pattern FoldWithProducerReshapeOpByExpansion inside the pass --linalg-fuse-elementwise-ops (not related to #3504). The actual crash occurs when trying to fold the dim ops of the expanded tensor later on in the pass.
>
> Because @MaheshRavishankar allowed accessing the output shape for tensor.expand_shape in llvm/llvm-project@092372d, we should be able to improve the pattern https://github.com/llvm/llvm-project/blob/1a7af2a90fb043b25cbba15ce941ebfdba0e6717/mlir/lib/Dialect/Tensor/IR/TensorOps.cpp#L1989 to fetch the dynamic dims.

I just saw these patterns. We should just drop those FoldDimOfExpandShape and FoldDimOfCollapseShape patterns from canonicalizers.

@vivekkhandelwal1
Collaborator Author

> The crash occurs because of an expand shape op that gets created in the pattern FoldWithProducerReshapeOpByExpansion inside the pass --linalg-fuse-elementwise-ops (not related to #3504). The actual crash occurs when trying to fold the dim ops of the expanded tensor later on in the pass.
>
> Because @MaheshRavishankar allowed accessing the output shape for tensor.expand_shape in llvm/llvm-project@092372d, we should be able to improve the pattern https://github.com/llvm/llvm-project/blob/1a7af2a90fb043b25cbba15ce941ebfdba0e6717/mlir/lib/Dialect/Tensor/IR/TensorOps.cpp#L1989 to fetch the dynamic dims.

> I just saw these patterns. We should just drop those FoldDimOfExpandShape and FoldDimOfCollapseShape patterns from canonicalizers.

Hi @MaheshRavishankar, can you please review this: llvm/llvm-project#134219?

@vivekkhandelwal1
Collaborator Author

> Hi @vivekkhandelwal1 and @zjgarvey, is there a resolution for this blocking issue yet? Some of our work depends on these TOSA 1.0 updates being aligned upstream, and it seems like this blocking issue is not related to TOSA. Can we maybe do an integration with an LLVM hash that doesn't include the breaking issue (with all TOSA updates still included)?
>
> cc: @sjarus

Hi @justin-ngo-arm, we should wait for the changes to be merged in LLVM, and then we can do a more recent bump.

@justin-ngo-arm
Contributor

> Hi @vivekkhandelwal1 and @zjgarvey, is there a resolution for this blocking issue yet? Some of our work depends on these TOSA 1.0 updates being aligned upstream, and it seems like this blocking issue is not related to TOSA. Can we maybe do an integration with an LLVM hash that doesn't include the breaking issue (with all TOSA updates still included)?
>
> cc: @sjarus

> Hi @justin-ngo-arm, we should wait for the changes to be merged in LLVM, and then we can do a more recent bump.

Sounds good to me. Thank you for helping with this!

@sjarus
Collaborator

sjarus commented Apr 3, 2025

Thanks a lot, @MaheshRavishankar and @vivekkhandelwal1 !

justin-ngo-arm and others added 4 commits April 14, 2025 09:17
1: [TOSA] Update rescale input_/output_zp and double_round attribute

* Update tosa.rescale input_/output_zp as inputs according to TOSA 1.0
* Update double_round bool attribute to rounding_mode in alignment with
TOSA 1.0. rounding_mode supports "SINGLE_ROUND", "INEXACT_ROUND", and
"DOUBLE_ROUND". Existing double_round behaviours are mapped as follows:
  - double_round = true -> rounding_mode = "DOUBLE_ROUND"
  - double_round = false -> rounding_mode = "SINGLE_ROUND"

2: [TOSA] Update tosa.negate's zero-points to inputs

Update LIT tests and XFAIL sets
3: [TOSA] Update tosa.int_div to tosa.intdiv

Update LIT tests
@justin-ngo-arm
Contributor

Looks good to me. Thank you @vivekkhandelwal1 !
cc: @sjarus

Collaborator

@sjarus sjarus left a comment


Looks good to me too, @vivekkhandelwal1

@AmosLewis AmosLewis merged commit 9f2ba5a into llvm:main Apr 14, 2025
3 checks passed
@vivekkhandelwal1 vivekkhandelwal1 deleted the bump-llvm branch April 15, 2025 06:16