
[SROA] Prevent load atomic vector from being generated #112432


Open · wants to merge 10 commits into base: main
llvm/lib/Transforms/Scalar/SROA.cpp: 6 additions, 0 deletions

@@ -2853,6 +2853,12 @@ class AllocaSliceRewriter : public InstVisitor<AllocaSliceRewriter, bool> {

  bool visitLoadInst(LoadInst &LI) {
    LLVM_DEBUG(dbgs() << "    original: " << LI << "\n");

    // A load atomic vector would be generated, which is illegal.
    // TODO: Generate a generic bitcast in machine codegen instead.
    if (LI.isAtomic() && NewAI.getAllocatedType()->isVectorTy())
      return false;

    Value *OldOp = LI.getOperand(0);
    assert(OldOp == OldPtr);

Review thread on the new check:

Contributor: Is it illegal from LangRef's perspective?

Contributor Author: It is at least illegal from the perspective of IR/Verifier.

Contributor Author (@jofrn, Oct 15, 2024): It looks like this is written here.

Contributor: The restriction is dumb and we should relax it. Instead of just hardcoding isVectorTy here, this should go through some kind of LoadInst::isValidAtomicType helper or similar.

Contributor: The verifier check should be moved into a LoadInst helper.

Contributor: No alloca reference, just the type. The implementation should match Verifier::visitLoadInst: the isIntOrIntPtrTy || isFloatingPointTy check and checkAtomicMemAccessSize (also add some non-atomic size tests?).
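
For illustration, a minimal sketch of what such a helper might look like, assuming a free function named isValidAtomicTy (the name and placement are made up here; the checks mirror Verifier::visitLoadInst and Verifier::checkAtomicMemAccessSize):

// Hypothetical helper, not part of this patch: mirrors the atomic-load
// type checks performed in Verifier::visitLoadInst.
static bool isValidAtomicTy(Type *Ty, const DataLayout &DL) {
  // Atomic loads accept only integer, pointer, or floating-point types.
  if (!Ty->isIntOrPtrTy() && !Ty->isFloatingPointTy())
    return false;
  // Mirrors Verifier::checkAtomicMemAccessSize: the access must be at
  // least byte-sized and a power of two in size.
  uint64_t Size = DL.getTypeSizeInBits(Ty);
  return Size >= 8 && isPowerOf2_64(Size);
}

The guard in visitLoadInst could then be phrased as LI.isAtomic() && !isValidAtomicTy(NewAI.getAllocatedType(), DL).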

Contributor Author (@jofrn, Oct 16, 2024): We need to ensure an atomic load does not have a vector type. If the load is atomic and the alloca that lends it its type has a vector type, then we will generate an atomic vector load, which is illegal. We need to ensure that this doesn't occur.
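
Concretely, in the first test below, rewriting the float load onto its <1 x float> alloca would produce IR along these lines (reconstructed for illustration), which the verifier rejects with "atomic load operand must have integer, pointer, or floating point type":

%ret = load atomic volatile <1 x float>, ptr %val acquire, align 4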

Contributor: No, you need to ensure the atomic has a valid type for an atomic load.

Alternatively, you can do the load with an equivalently sized type and then bitcast (which is why this restriction is dumb in the first place; the lowering can always do the same).
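
A rough sketch of that alternative, reusing names from the surrounding AllocaSliceRewriter context (illustrative only; this is not what the patch does):

// Hypothetical rewrite, not part of this patch: perform the atomic load
// with an equivalently sized integer type, then bitcast to the vector.
Type *VecTy = NewAI.getAllocatedType();
Type *IntTy = IntegerType::get(LI.getContext(), DL.getTypeSizeInBits(VecTy));
LoadInst *NewLI = IRB.CreateAlignedLoad(IntTy, &NewAI, NewAI.getAlign());
NewLI->setAtomic(LI.getOrdering(), LI.getSyncScopeID());
Value *V = IRB.CreateBitCast(NewLI, VecTy, "atomic.vec");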

Contributor Author (@jofrn, Oct 16, 2024): We are ensuring it is a valid type in the case in question, by checking the alloca's type. The invalid load will not be generated when the alloca has a vector type, since that is the type that would overwrite the load's type.

Contributor Author (@jofrn, Oct 17, 2024): We don't want any atomic loads to have a vector type, regardless of whether the atomic itself is already valid; checking whether the atomic is valid before translating it in AllocaSliceRewriter would miss cases where an invalid atomic is formed during SROA (and only later given a vector type in visitLoadInst). Even though this is unlikely, as SROA probably won't form these, it illustrates why we don't need the extra checks here.

llvm/test/Transforms/SROA/atomic-vector.ll: new file, 90 additions

@@ -0,0 +1,90 @@
; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 5
; RUN: opt < %s -passes='sroa' -S | FileCheck %s

define float @atomic_vector() {
; CHECK-LABEL: define float @atomic_vector() {
; CHECK-NEXT: [[TMP1:%.*]] = alloca <1 x float>, align 4
; CHECK-NEXT: store <1 x float> undef, ptr [[TMP1]], align 4
; CHECK-NEXT: [[TMP2:%.*]] = load atomic volatile float, ptr [[TMP1]] acquire, align 4
; CHECK-NEXT: ret float [[TMP2]]
;
%src = alloca <1 x float>
%val = alloca <1 x float>
%direct = alloca ptr
call void @llvm.memcpy.p0.p0.i64(ptr %val, ptr %src, i64 4, i1 false)
store ptr %val, ptr %direct
%indirect = load ptr, ptr %direct
%ret = load atomic volatile float, ptr %indirect acquire, align 4
ret float %ret
}

define i32 @atomic_vector_int() {
; CHECK-LABEL: define i32 @atomic_vector_int() {
; CHECK-NEXT: [[VAL:%.*]] = alloca <1 x i32>, align 4
; CHECK-NEXT: store <1 x i32> undef, ptr [[VAL]], align 4
; CHECK-NEXT: [[RET:%.*]] = load atomic volatile i32, ptr [[VAL]] acquire, align 4
; CHECK-NEXT: ret i32 [[RET]]
;
%src = alloca <1 x i32>
%val = alloca <1 x i32>
%direct = alloca ptr
call void @llvm.memcpy.p0.p0.i64(ptr %val, ptr %src, i64 4, i1 false)
store ptr %val, ptr %direct
%indirect = load ptr, ptr %direct
%ret = load atomic volatile i32, ptr %indirect acquire, align 4
ret i32 %ret
}

define ptr @atomic_vector_ptr() {
; CHECK-LABEL: define ptr @atomic_vector_ptr() {
; CHECK-NEXT: [[VAL_SROA_0:%.*]] = alloca <1 x ptr>, align 8
; CHECK-NEXT: store <1 x ptr> undef, ptr [[VAL_SROA_0]], align 8
; CHECK-NEXT: [[VAL_SROA_0_0_VAL_SROA_0_0_RET:%.*]] = load atomic volatile ptr, ptr [[VAL_SROA_0]] acquire, align 4
; CHECK-NEXT: ret ptr [[VAL_SROA_0_0_VAL_SROA_0_0_RET]]
;
%src = alloca <1 x ptr>
%val = alloca <1 x ptr>
%direct = alloca ptr
call void @llvm.memcpy.p0.p0.i64(ptr %val, ptr %src, i64 8, i1 false)
store ptr %val, ptr %direct
%indirect = load ptr, ptr %direct
%ret = load atomic volatile ptr, ptr %indirect acquire, align 4
ret ptr %ret
}
Contributor: Test a <2 x i16> or some other real vector; 1 x is a degenerate case.
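
The <2 x i32> test below addresses this; for reference, a <2 x i16> variant might look like the following sketch (illustrative only, not part of the patch; CHECK lines omitted):

define i16 @atomic_2vector_i16() {
  %src = alloca <2 x i16>
  %val = alloca <2 x i16>
  %direct = alloca ptr
  call void @llvm.memcpy.p0.p0.i64(ptr %val, ptr %src, i64 2, i1 false)
  store ptr %val, ptr %direct
  %indirect = load ptr, ptr %direct
  %ret = load atomic volatile i16, ptr %indirect acquire, align 2
  ret i16 %ret
}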


define i32 @atomic_2vector_int() {
; CHECK-LABEL: define i32 @atomic_2vector_int() {
; CHECK-NEXT: [[VAL_SROA_0:%.*]] = alloca i32, align 8
; CHECK-NEXT: store i32 undef, ptr [[VAL_SROA_0]], align 8
; CHECK-NEXT: [[VAL_SROA_0_0_VAL_SROA_0_0_RET:%.*]] = load atomic volatile i32, ptr [[VAL_SROA_0]] acquire, align 4
; CHECK-NEXT: ret i32 [[VAL_SROA_0_0_VAL_SROA_0_0_RET]]
;
%src = alloca <2 x i32>
%val = alloca <2 x i32>
%direct = alloca ptr
call void @llvm.memcpy.p0.p0.i64(ptr %val, ptr %src, i64 4, i1 false)
store ptr %val, ptr %direct
%indirect = load ptr, ptr %direct
%ret = load atomic volatile i32, ptr %indirect acquire, align 4
ret i32 %ret
}
Contributor: Add a test for the non-byte-sized illegal case?


define i32 @atomic_2vector_nonbyte_illegal_int() {
; CHECK-LABEL: define i32 @atomic_2vector_nonbyte_illegal_int() {
; CHECK-NEXT: [[SRC_SROA_1:%.*]] = alloca i17, align 4
; CHECK-NEXT: [[VAL_SROA_0:%.*]] = alloca i32, align 8
; CHECK-NEXT: [[VAL_SROA_2:%.*]] = alloca i17, align 4
; CHECK-NEXT: store i32 undef, ptr [[VAL_SROA_0]], align 8
; CHECK-NEXT: call void @llvm.memcpy.p0.p0.i64(ptr align 4 [[VAL_SROA_2]], ptr align 4 [[SRC_SROA_1]], i64 4, i1 false)
; CHECK-NEXT: [[VAL_SROA_0_0_VAL_SROA_0_0_RET:%.*]] = load atomic volatile i32, ptr [[VAL_SROA_0]] acquire, align 4
; CHECK-NEXT: ret i32 [[VAL_SROA_0_0_VAL_SROA_0_0_RET]]
;
%src = alloca <2 x i17>
%val = alloca <2 x i17>
%direct = alloca ptr
call void @llvm.memcpy.p0.p0.i64(ptr %val, ptr %src, i64 8, i1 false)
store ptr %val, ptr %direct
%indirect = load ptr, ptr %direct
%ret = load atomic volatile i32, ptr %indirect acquire, align 4
ret i32 %ret
}