Rewrite load-store-vectorizer.

The motivation for this change is a workload generated by the XLA compiler
targeting NVIDIA GPUs.

The kernel in question has a few hundred i8 loads and stores.  Merging these
is critical for performance.

The current LSV doesn't merge these well because it only considers instructions
within a window of 64 loads+stores.  This limit is necessary to contain the
O(n^2) behavior of the pass.  I'm hesitant to increase the limit, because this
pass is already one of the slowest parts of compiling an XLA program.

So we rewrite basically the whole thing to use a new algorithm.  Before, we
compared every load/store to every other to see if they're consecutive.  The
insight (from tra@) is that this is redundant.  If we know the offset from PtrA
to PtrB, then we don't need to compare PtrC to both of them in order to tell
whether C may be adjacent to A or B.

So that's what we do.  When scanning a basic block, we maintain a list of
chains, where we know the offset from every element in the chain to the first
element in the chain.  Each instruction gets compared only to the leaders of
all the chains.

In the worst case, this is still O(n^2), because all chains might be of length
1.  To prevent compile time blowup, we only consider the 64 most recently used
chains.  Thus we do no more comparisons than before, but we have the potential
to make much longer chains.
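
To make the bookkeeping concrete, here is a minimal sketch of the
chain-building loop, in the spirit of the description above.  It is
illustrative only, not the LSV implementation: Instr and constantOffset below
are toy stand-ins for LLVM's load/store instructions and for the
SCEV/pointer-arithmetic machinery the real pass uses to compute offsets.

  #include <cstddef>
  #include <cstdint>
  #include <list>
  #include <optional>
  #include <vector>

  // Toy stand-in for a load/store: a base object plus a constant byte
  // offset.  The real pass computes offsets between pointers via SCEV and
  // GEP analysis.
  struct Instr {
    int BaseObj;
    int64_t Offset;
  };

  // Offset from A to B, if computable (here: only when the bases match).
  std::optional<int64_t> constantOffset(const Instr &A, const Instr &B) {
    if (A.BaseObj != B.BaseObj)
      return std::nullopt;
    return B.Offset - A.Offset;
  }

  struct ChainElem {
    Instr I;
    int64_t OffsetFromLeader;  // known offset to the chain's first element
  };
  using Chain = std::vector<ChainElem>;

  constexpr size_t MaxChains = 64;  // only the most-recently-used chains

  void addToChains(std::list<Chain> &Chains, const Instr &I) {
    // Compare I against each chain's *leader* only.  Knowing I's offset from
    // the leader transitively gives its offset to every element, so one
    // comparison per chain suffices.
    for (auto It = Chains.begin(); It != Chains.end(); ++It) {
      if (auto Off = constantOffset(It->front().I, I)) {
        It->push_back({I, *Off});
        Chains.splice(Chains.begin(), Chains, It);  // move to MRU position
        return;
      }
    }
    Chains.push_front({{I, 0}});  // no match: I leads a new chain
    if (Chains.size() > MaxChains)
      Chains.pop_back();  // evict the least-recently-used chain
  }

The MRU list caps the work at 64 leader comparisons per instruction, the same
budget as before, but a chain can now grow arbitrarily long.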

This rewrite affects many tests.  The changes to tests fall into two
categories.

1. The old code had what appears to be a bug when deciding whether a misaligned
   vectorized load is fast.  Suppose TTI reports that load <4 x i32> align 4
   has relative speed 1, and that load i32 align 4 has relative speed 32.

   The intent of the code seems to be that we prefer the scalar load, because
   it's faster.  But the old code would choose the vectorized load.
   accessIsMisaligned would set RelativeSpeed to 0 for the scalar load (and not
   even call into TTI to get the relative speed), because the scalar load is
   aligned.

   After this patch, we will prefer the scalar load if it's faster (see the
   sketch after this list).

2. This patch changes the logic for how we vectorize.  Usually this results in
   vectorizing more.
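
As a self-contained sketch of the bug in #1: the struct and field names below
are made up for illustration and only model the shape of accessIsMisaligned's
decision, not the real TTI interface.

  #include <cstdio>

  // Toy model of an access and the relative speed TTI would report for it.
  // TTISpeed stands in for the speed an allowsMisalignedMemoryAccesses-style
  // query would return.
  struct Access {
    unsigned Bits;      // access width in bits
    unsigned AlignB;    // alignment in bytes
    unsigned TTISpeed;  // relative speed TTI would report
  };

  bool isMisaligned(const Access &A) { return A.AlignB * 8 < A.Bits; }

  // Old behavior: an aligned access never consulted TTI, so its relative
  // speed stayed 0 and it always lost the comparison below.
  unsigned oldRelativeSpeed(const Access &A) {
    return isMisaligned(A) ? A.TTISpeed : 0;
  }

  // New behavior: always use the real speed, so the comparison is meaningful.
  unsigned newRelativeSpeed(const Access &A) { return A.TTISpeed; }

  int main() {
    Access Vec{128, 4, 1};     // load <4 x i32>, align 4: misaligned, speed 1
    Access Scalar{32, 4, 32};  // load i32, align 4: aligned, speed 32
    // Old: 1 >= 0, so the slower vectorized load wins.
    std::printf("old prefers vector: %d\n",
                oldRelativeSpeed(Vec) >= oldRelativeSpeed(Scalar));
    // New: 1 >= 32 is false, so we keep the faster scalar loads.
    std::printf("new prefers vector: %d\n",
                newRelativeSpeed(Vec) >= newRelativeSpeed(Scalar));
  }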

Explanation of changes to tests:

 - AMDGPU/adjust-alloca-alignment.ll: #1
 - AMDGPU/flat_atomic.ll: #2, we vectorize more.
 - AMDGPU/int_sideeffect.ll: #2, there are two possible locations for the call to @foo, and the pass is brittle to this.  Before, we'd vectorize in case 1 and not case 2.  Now we vectorize in case 2 and not case 1.  So we just move the call.
 - AMDGPU/adjust-alloca-alignment.ll: #2, we vectorize more
 - AMDGPU/insertion-point.ll: #2, we vectorize more
 - AMDGPU/merge-stores-private.ll: #1 (undoes changes from git rev 86f9117d47, which appear to have hit the bug from #1)
 - AMDGPU/multiple_tails.ll: #1
 - AMDGPU/vect-ptr-ptr-size-mismatch.ll: Fix alignment (I think related to #1 above).
 - AMDGPU CodeGen: I have difficulty commenting on these changes, but many of them look like #2, we vectorize more.
 - NVPTX/4x2xhalf.ll: Fix alignment (I think related to #1 above).
 - NVPTX/vectorize_i8.ll: We don't generate <3 x i8> vectors on NVPTX because they're not legal (and eventually get split)
 - X86/correct-order.ll: #2, we vectorize more, probably because of changes to the chain-splitting logic.
 - X86/subchain-interleaved.ll: #2, we vectorize more
 - X86/vector-scalar.ll: #2, we can now vectorize scalar float + <1 x float>
 - X86/vectorize-i8-nested-add-inseltpoison.ll: Deleted the nuw test because it was nonsensical.  It was doing `add nuw %v0, -1`, but this is equivalent to `add nuw %v0, 0xffff'ffff`, which is equivalent to asserting that %v0 == 0.
 - X86/vectorize-i8-nested-add.ll: Same as nested-add-inseltpoison.ll

Differential Revision: https://reviews.llvm.org/D149893
Author: Justin Lebar
Date:   2023-05-04 12:34:43 -07:00
Commit: 2be0abb7fe (parent 924912956e)
29 changed files with 2945 additions and 1666 deletions



@@ -1277,26 +1277,26 @@ define amdgpu_kernel void @sdivrem_v4i32(ptr addrspace(1) %out0, ptr addrspace(1
define amdgpu_kernel void @sdivrem_v2i64(ptr addrspace(1) %out0, ptr addrspace(1) %out1, <2 x i64> %x, <2 x i64> %y) {
; GFX8-LABEL: sdivrem_v2i64:
; GFX8: ; %bb.0:
; GFX8-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x10
; GFX8-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x0
; GFX8-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x20
; GFX8-NEXT: s_waitcnt lgkmcnt(0)
; GFX8-NEXT: s_ashr_i32 s2, s9, 31
; GFX8-NEXT: s_ashr_i32 s16, s13, 31
; GFX8-NEXT: s_add_u32 s0, s8, s2
; GFX8-NEXT: s_addc_u32 s1, s9, s2
; GFX8-NEXT: s_add_u32 s6, s12, s16
; GFX8-NEXT: s_ashr_i32 s4, s13, 31
; GFX8-NEXT: s_ashr_i32 s16, s1, 31
; GFX8-NEXT: s_add_u32 s12, s12, s4
; GFX8-NEXT: s_addc_u32 s13, s13, s4
; GFX8-NEXT: s_add_u32 s0, s0, s16
; GFX8-NEXT: s_mov_b32 s17, s16
; GFX8-NEXT: s_addc_u32 s7, s13, s16
; GFX8-NEXT: s_xor_b64 s[8:9], s[6:7], s[16:17]
; GFX8-NEXT: v_cvt_f32_u32_e32 v0, s9
; GFX8-NEXT: v_cvt_f32_u32_e32 v1, s8
; GFX8-NEXT: s_mov_b32 s3, s2
; GFX8-NEXT: s_xor_b64 s[12:13], s[0:1], s[2:3]
; GFX8-NEXT: s_addc_u32 s1, s1, s16
; GFX8-NEXT: s_xor_b64 s[6:7], s[0:1], s[16:17]
; GFX8-NEXT: v_cvt_f32_u32_e32 v0, s7
; GFX8-NEXT: v_cvt_f32_u32_e32 v1, s6
; GFX8-NEXT: s_mov_b32 s5, s4
; GFX8-NEXT: s_xor_b64 s[12:13], s[12:13], s[4:5]
; GFX8-NEXT: v_mul_f32_e32 v0, 0x4f800000, v0
; GFX8-NEXT: v_add_f32_e32 v0, v0, v1
; GFX8-NEXT: v_rcp_iflag_f32_e32 v0, v0
; GFX8-NEXT: s_sub_u32 s6, 0, s8
; GFX8-NEXT: s_subb_u32 s7, 0, s9
; GFX8-NEXT: s_xor_b64 s[18:19], s[2:3], s[16:17]
; GFX8-NEXT: s_sub_u32 s18, 0, s6
; GFX8-NEXT: s_subb_u32 s19, 0, s7
; GFX8-NEXT: v_mul_f32_e32 v0, 0x5f7ffffc, v0
; GFX8-NEXT: v_mul_f32_e32 v1, 0x2f800000, v0
; GFX8-NEXT: v_trunc_f32_e32 v2, v1
@@ -1304,12 +1304,10 @@ define amdgpu_kernel void @sdivrem_v2i64(ptr addrspace(1) %out0, ptr addrspace(1
; GFX8-NEXT: v_add_f32_e32 v0, v1, v0
; GFX8-NEXT: v_cvt_u32_f32_e32 v3, v0
; GFX8-NEXT: v_cvt_u32_f32_e32 v4, v2
; GFX8-NEXT: s_ashr_i32 s16, s15, 31
; GFX8-NEXT: s_mov_b32 s17, s16
; GFX8-NEXT: v_mad_u64_u32 v[0:1], s[0:1], s6, v3, 0
; GFX8-NEXT: v_mad_u64_u32 v[1:2], s[0:1], s6, v4, v[1:2]
; GFX8-NEXT: v_mad_u64_u32 v[0:1], s[0:1], s18, v3, 0
; GFX8-NEXT: v_mad_u64_u32 v[1:2], s[0:1], s18, v4, v[1:2]
; GFX8-NEXT: v_mul_hi_u32 v5, v3, v0
; GFX8-NEXT: v_mad_u64_u32 v[1:2], s[0:1], s7, v3, v[1:2]
; GFX8-NEXT: v_mad_u64_u32 v[1:2], s[0:1], s19, v3, v[1:2]
; GFX8-NEXT: v_mul_lo_u32 v2, v4, v0
; GFX8-NEXT: v_mul_hi_u32 v0, v4, v0
; GFX8-NEXT: v_mul_lo_u32 v6, v3, v1
@@ -1332,14 +1330,16 @@ define amdgpu_kernel void @sdivrem_v2i64(ptr addrspace(1) %out0, ptr addrspace(1
; GFX8-NEXT: v_add_u32_e32 v1, vcc, v1, v2
; GFX8-NEXT: v_add_u32_e32 v3, vcc, v3, v0
; GFX8-NEXT: v_addc_u32_e32 v4, vcc, v4, v1, vcc
; GFX8-NEXT: v_mad_u64_u32 v[0:1], s[0:1], s6, v3, 0
; GFX8-NEXT: v_mad_u64_u32 v[1:2], s[0:1], s6, v4, v[1:2]
; GFX8-NEXT: v_mad_u64_u32 v[0:1], s[0:1], s18, v3, 0
; GFX8-NEXT: v_mad_u64_u32 v[1:2], s[0:1], s18, v4, v[1:2]
; GFX8-NEXT: v_mul_hi_u32 v6, v3, v0
; GFX8-NEXT: v_mad_u64_u32 v[1:2], s[0:1], s7, v3, v[1:2]
; GFX8-NEXT: v_mad_u64_u32 v[1:2], s[0:1], s19, v3, v[1:2]
; GFX8-NEXT: v_mul_lo_u32 v2, v4, v0
; GFX8-NEXT: v_mul_hi_u32 v0, v4, v0
; GFX8-NEXT: v_mul_lo_u32 v5, v3, v1
; GFX8-NEXT: s_load_dwordx4 s[4:7], s[4:5], 0x0
; GFX8-NEXT: s_xor_b64 s[18:19], s[4:5], s[16:17]
; GFX8-NEXT: s_ashr_i32 s16, s3, 31
; GFX8-NEXT: s_mov_b32 s17, s16
; GFX8-NEXT: v_add_u32_e32 v2, vcc, v2, v5
; GFX8-NEXT: v_cndmask_b32_e64 v5, 0, 1, vcc
; GFX8-NEXT: v_add_u32_e32 v2, vcc, v2, v6
@@ -1377,46 +1377,46 @@ define amdgpu_kernel void @sdivrem_v2i64(ptr addrspace(1) %out0, ptr addrspace(1
; GFX8-NEXT: v_cndmask_b32_e64 v3, 0, 1, vcc
; GFX8-NEXT: v_add_u32_e32 v3, vcc, v4, v3
; GFX8-NEXT: v_add_u32_e32 v4, vcc, v0, v2
; GFX8-NEXT: v_mad_u64_u32 v[0:1], s[0:1], s8, v4, 0
; GFX8-NEXT: v_mad_u64_u32 v[0:1], s[0:1], s6, v4, 0
; GFX8-NEXT: v_cndmask_b32_e64 v2, 0, 1, vcc
; GFX8-NEXT: v_add_u32_e32 v2, vcc, v3, v2
; GFX8-NEXT: v_add_u32_e32 v3, vcc, v5, v2
; GFX8-NEXT: v_mad_u64_u32 v[1:2], s[0:1], s8, v3, v[1:2]
; GFX8-NEXT: v_mad_u64_u32 v[1:2], s[0:1], s6, v3, v[1:2]
; GFX8-NEXT: v_mov_b32_e32 v6, s13
; GFX8-NEXT: v_sub_u32_e32 v7, vcc, s12, v0
; GFX8-NEXT: v_mad_u64_u32 v[1:2], s[0:1], s9, v4, v[1:2]
; GFX8-NEXT: v_mov_b32_e32 v5, s9
; GFX8-NEXT: s_ashr_i32 s12, s11, 31
; GFX8-NEXT: v_mad_u64_u32 v[1:2], s[0:1], s7, v4, v[1:2]
; GFX8-NEXT: v_mov_b32_e32 v5, s7
; GFX8-NEXT: s_ashr_i32 s12, s15, 31
; GFX8-NEXT: v_subb_u32_e64 v6, s[0:1], v6, v1, vcc
; GFX8-NEXT: v_sub_u32_e64 v0, s[0:1], s13, v1
; GFX8-NEXT: v_cmp_le_u32_e64 s[0:1], s9, v6
; GFX8-NEXT: v_cmp_le_u32_e64 s[0:1], s7, v6
; GFX8-NEXT: v_cndmask_b32_e64 v1, 0, -1, s[0:1]
; GFX8-NEXT: v_cmp_le_u32_e64 s[0:1], s8, v7
; GFX8-NEXT: v_cmp_le_u32_e64 s[0:1], s6, v7
; GFX8-NEXT: v_subb_u32_e32 v0, vcc, v0, v5, vcc
; GFX8-NEXT: v_cndmask_b32_e64 v2, 0, -1, s[0:1]
; GFX8-NEXT: v_cmp_eq_u32_e64 s[0:1], s9, v6
; GFX8-NEXT: v_subrev_u32_e32 v8, vcc, s8, v7
; GFX8-NEXT: v_cmp_eq_u32_e64 s[0:1], s7, v6
; GFX8-NEXT: v_subrev_u32_e32 v8, vcc, s6, v7
; GFX8-NEXT: v_cndmask_b32_e64 v2, v1, v2, s[0:1]
; GFX8-NEXT: v_subbrev_u32_e64 v9, s[0:1], 0, v0, vcc
; GFX8-NEXT: v_add_u32_e64 v1, s[0:1], 1, v4
; GFX8-NEXT: v_addc_u32_e64 v10, s[0:1], 0, v3, s[0:1]
; GFX8-NEXT: v_cmp_le_u32_e64 s[0:1], s9, v9
; GFX8-NEXT: v_cmp_le_u32_e64 s[0:1], s7, v9
; GFX8-NEXT: v_cndmask_b32_e64 v11, 0, -1, s[0:1]
; GFX8-NEXT: v_cmp_le_u32_e64 s[0:1], s8, v8
; GFX8-NEXT: v_cmp_le_u32_e64 s[0:1], s6, v8
; GFX8-NEXT: v_cndmask_b32_e64 v12, 0, -1, s[0:1]
; GFX8-NEXT: v_cmp_eq_u32_e64 s[0:1], s9, v9
; GFX8-NEXT: v_cmp_eq_u32_e64 s[0:1], s7, v9
; GFX8-NEXT: v_cndmask_b32_e64 v11, v11, v12, s[0:1]
; GFX8-NEXT: v_add_u32_e64 v12, s[0:1], 1, v1
; GFX8-NEXT: v_addc_u32_e64 v13, s[0:1], 0, v10, s[0:1]
; GFX8-NEXT: s_add_u32 s0, s10, s12
; GFX8-NEXT: s_addc_u32 s1, s11, s12
; GFX8-NEXT: s_add_u32 s10, s14, s16
; GFX8-NEXT: s_addc_u32 s11, s15, s16
; GFX8-NEXT: s_xor_b64 s[10:11], s[10:11], s[16:17]
; GFX8-NEXT: v_cvt_f32_u32_e32 v14, s11
; GFX8-NEXT: s_add_u32 s0, s14, s12
; GFX8-NEXT: s_addc_u32 s1, s15, s12
; GFX8-NEXT: s_add_u32 s2, s2, s16
; GFX8-NEXT: s_addc_u32 s3, s3, s16
; GFX8-NEXT: s_xor_b64 s[2:3], s[2:3], s[16:17]
; GFX8-NEXT: v_cvt_f32_u32_e32 v14, s3
; GFX8-NEXT: v_subb_u32_e32 v0, vcc, v0, v5, vcc
; GFX8-NEXT: v_cvt_f32_u32_e32 v5, s10
; GFX8-NEXT: v_subrev_u32_e32 v15, vcc, s8, v8
; GFX8-NEXT: v_cvt_f32_u32_e32 v5, s2
; GFX8-NEXT: v_subrev_u32_e32 v15, vcc, s6, v8
; GFX8-NEXT: v_subbrev_u32_e32 v16, vcc, 0, v0, vcc
; GFX8-NEXT: v_mul_f32_e32 v0, 0x4f800000, v14
; GFX8-NEXT: v_add_f32_e32 v0, v0, v5
@@ -1431,15 +1431,15 @@ define amdgpu_kernel void @sdivrem_v2i64(ptr addrspace(1) %out0, ptr addrspace(1
; GFX8-NEXT: v_add_f32_e32 v0, v1, v0
; GFX8-NEXT: v_cvt_u32_f32_e32 v13, v0
; GFX8-NEXT: s_mov_b32 s13, s12
; GFX8-NEXT: s_xor_b64 s[8:9], s[0:1], s[12:13]
; GFX8-NEXT: s_sub_u32 s3, 0, s10
; GFX8-NEXT: s_xor_b64 s[6:7], s[0:1], s[12:13]
; GFX8-NEXT: s_sub_u32 s5, 0, s2
; GFX8-NEXT: v_cmp_ne_u32_e32 vcc, 0, v2
; GFX8-NEXT: v_mad_u64_u32 v[0:1], s[0:1], s3, v13, 0
; GFX8-NEXT: v_mad_u64_u32 v[0:1], s[0:1], s5, v13, 0
; GFX8-NEXT: v_cndmask_b32_e32 v4, v4, v5, vcc
; GFX8-NEXT: v_cvt_u32_f32_e32 v5, v12
; GFX8-NEXT: s_subb_u32 s20, 0, s11
; GFX8-NEXT: s_subb_u32 s20, 0, s3
; GFX8-NEXT: v_cndmask_b32_e32 v10, v3, v10, vcc
; GFX8-NEXT: v_mad_u64_u32 v[1:2], s[0:1], s3, v5, v[1:2]
; GFX8-NEXT: v_mad_u64_u32 v[1:2], s[0:1], s5, v5, v[1:2]
; GFX8-NEXT: v_cmp_ne_u32_e64 s[0:1], 0, v11
; GFX8-NEXT: v_cndmask_b32_e64 v3, v8, v15, s[0:1]
; GFX8-NEXT: v_mad_u64_u32 v[1:2], s[14:15], s20, v13, v[1:2]
@@ -1468,22 +1468,22 @@ define amdgpu_kernel void @sdivrem_v2i64(ptr addrspace(1) %out0, ptr addrspace(1
; GFX8-NEXT: v_add_u32_e32 v2, vcc, v3, v2
; GFX8-NEXT: v_add_u32_e32 v1, vcc, v1, v2
; GFX8-NEXT: v_add_u32_e32 v8, vcc, v13, v0
; GFX8-NEXT: v_mad_u64_u32 v[2:3], s[0:1], s3, v8, 0
; GFX8-NEXT: v_mad_u64_u32 v[2:3], s[0:1], s5, v8, 0
; GFX8-NEXT: v_addc_u32_e32 v5, vcc, v5, v1, vcc
; GFX8-NEXT: v_xor_b32_e32 v1, s18, v4
; GFX8-NEXT: v_mov_b32_e32 v0, v3
; GFX8-NEXT: v_mad_u64_u32 v[3:4], s[0:1], s3, v5, v[0:1]
; GFX8-NEXT: v_mad_u64_u32 v[3:4], s[0:1], s5, v5, v[0:1]
; GFX8-NEXT: v_xor_b32_e32 v9, s19, v10
; GFX8-NEXT: v_mov_b32_e32 v10, s19
; GFX8-NEXT: v_mad_u64_u32 v[3:4], s[0:1], s20, v8, v[3:4]
; GFX8-NEXT: v_subrev_u32_e32 v0, vcc, s18, v1
; GFX8-NEXT: v_subb_u32_e32 v1, vcc, v9, v10, vcc
; GFX8-NEXT: v_xor_b32_e32 v4, s2, v7
; GFX8-NEXT: v_xor_b32_e32 v4, s4, v7
; GFX8-NEXT: v_mul_lo_u32 v7, v5, v2
; GFX8-NEXT: v_mul_lo_u32 v9, v8, v3
; GFX8-NEXT: v_mul_hi_u32 v11, v8, v2
; GFX8-NEXT: v_mul_hi_u32 v2, v5, v2
; GFX8-NEXT: v_xor_b32_e32 v6, s2, v6
; GFX8-NEXT: v_xor_b32_e32 v6, s4, v6
; GFX8-NEXT: v_add_u32_e32 v7, vcc, v7, v9
; GFX8-NEXT: v_cndmask_b32_e64 v9, 0, 1, vcc
; GFX8-NEXT: v_add_u32_e32 v7, vcc, v7, v11
@@ -1503,56 +1503,56 @@ define amdgpu_kernel void @sdivrem_v2i64(ptr addrspace(1) %out0, ptr addrspace(1
; GFX8-NEXT: v_add_u32_e32 v3, vcc, v3, v7
; GFX8-NEXT: v_add_u32_e32 v2, vcc, v8, v2
; GFX8-NEXT: v_addc_u32_e32 v3, vcc, v5, v3, vcc
; GFX8-NEXT: v_mov_b32_e32 v10, s2
; GFX8-NEXT: v_mul_lo_u32 v7, s9, v2
; GFX8-NEXT: v_mul_lo_u32 v8, s8, v3
; GFX8-NEXT: v_subrev_u32_e32 v4, vcc, s2, v4
; GFX8-NEXT: v_mov_b32_e32 v10, s4
; GFX8-NEXT: v_mul_lo_u32 v7, s7, v2
; GFX8-NEXT: v_mul_lo_u32 v8, s6, v3
; GFX8-NEXT: v_subrev_u32_e32 v4, vcc, s4, v4
; GFX8-NEXT: v_subb_u32_e32 v5, vcc, v6, v10, vcc
; GFX8-NEXT: v_mul_hi_u32 v6, s8, v2
; GFX8-NEXT: v_mul_hi_u32 v6, s6, v2
; GFX8-NEXT: v_add_u32_e32 v7, vcc, v7, v8
; GFX8-NEXT: v_cndmask_b32_e64 v8, 0, 1, vcc
; GFX8-NEXT: v_add_u32_e32 v6, vcc, v7, v6
; GFX8-NEXT: v_cndmask_b32_e64 v6, 0, 1, vcc
; GFX8-NEXT: v_mul_lo_u32 v7, s9, v3
; GFX8-NEXT: v_mul_hi_u32 v2, s9, v2
; GFX8-NEXT: v_mul_lo_u32 v7, s7, v3
; GFX8-NEXT: v_mul_hi_u32 v2, s7, v2
; GFX8-NEXT: v_add_u32_e32 v6, vcc, v8, v6
; GFX8-NEXT: v_mul_hi_u32 v8, s8, v3
; GFX8-NEXT: v_mul_hi_u32 v8, s6, v3
; GFX8-NEXT: v_add_u32_e32 v2, vcc, v7, v2
; GFX8-NEXT: v_cndmask_b32_e64 v7, 0, 1, vcc
; GFX8-NEXT: v_add_u32_e32 v2, vcc, v2, v8
; GFX8-NEXT: v_cndmask_b32_e64 v8, 0, 1, vcc
; GFX8-NEXT: v_add_u32_e32 v7, vcc, v7, v8
; GFX8-NEXT: v_add_u32_e32 v8, vcc, v2, v6
; GFX8-NEXT: v_mul_hi_u32 v9, s9, v3
; GFX8-NEXT: v_mad_u64_u32 v[2:3], s[0:1], s10, v8, 0
; GFX8-NEXT: v_mul_hi_u32 v9, s7, v3
; GFX8-NEXT: v_mad_u64_u32 v[2:3], s[0:1], s2, v8, 0
; GFX8-NEXT: v_cndmask_b32_e64 v6, 0, 1, vcc
; GFX8-NEXT: v_add_u32_e32 v6, vcc, v7, v6
; GFX8-NEXT: v_add_u32_e32 v9, vcc, v9, v6
; GFX8-NEXT: v_mad_u64_u32 v[6:7], s[0:1], s10, v9, v[3:4]
; GFX8-NEXT: v_mov_b32_e32 v10, s9
; GFX8-NEXT: v_sub_u32_e32 v2, vcc, s8, v2
; GFX8-NEXT: v_mad_u64_u32 v[6:7], s[0:1], s11, v8, v[6:7]
; GFX8-NEXT: v_mov_b32_e32 v3, s11
; GFX8-NEXT: v_mad_u64_u32 v[6:7], s[0:1], s2, v9, v[3:4]
; GFX8-NEXT: v_mov_b32_e32 v10, s7
; GFX8-NEXT: v_sub_u32_e32 v2, vcc, s6, v2
; GFX8-NEXT: v_mad_u64_u32 v[6:7], s[0:1], s3, v8, v[6:7]
; GFX8-NEXT: v_mov_b32_e32 v3, s3
; GFX8-NEXT: v_subb_u32_e64 v7, s[0:1], v10, v6, vcc
; GFX8-NEXT: v_sub_u32_e64 v6, s[0:1], s9, v6
; GFX8-NEXT: v_cmp_le_u32_e64 s[0:1], s11, v7
; GFX8-NEXT: v_sub_u32_e64 v6, s[0:1], s7, v6
; GFX8-NEXT: v_cmp_le_u32_e64 s[0:1], s3, v7
; GFX8-NEXT: v_cndmask_b32_e64 v10, 0, -1, s[0:1]
; GFX8-NEXT: v_cmp_le_u32_e64 s[0:1], s10, v2
; GFX8-NEXT: v_cmp_le_u32_e64 s[0:1], s2, v2
; GFX8-NEXT: v_cndmask_b32_e64 v11, 0, -1, s[0:1]
; GFX8-NEXT: v_cmp_eq_u32_e64 s[0:1], s11, v7
; GFX8-NEXT: v_cmp_eq_u32_e64 s[0:1], s3, v7
; GFX8-NEXT: v_subb_u32_e32 v6, vcc, v6, v3, vcc
; GFX8-NEXT: v_cndmask_b32_e64 v10, v10, v11, s[0:1]
; GFX8-NEXT: v_subrev_u32_e32 v11, vcc, s10, v2
; GFX8-NEXT: v_subrev_u32_e32 v11, vcc, s2, v2
; GFX8-NEXT: v_subbrev_u32_e64 v12, s[0:1], 0, v6, vcc
; GFX8-NEXT: v_add_u32_e64 v13, s[0:1], 1, v8
; GFX8-NEXT: v_addc_u32_e64 v14, s[0:1], 0, v9, s[0:1]
; GFX8-NEXT: v_cmp_le_u32_e64 s[0:1], s11, v12
; GFX8-NEXT: v_cmp_le_u32_e64 s[0:1], s3, v12
; GFX8-NEXT: v_cndmask_b32_e64 v15, 0, -1, s[0:1]
; GFX8-NEXT: v_cmp_le_u32_e64 s[0:1], s10, v11
; GFX8-NEXT: v_cmp_le_u32_e64 s[0:1], s2, v11
; GFX8-NEXT: v_subb_u32_e32 v3, vcc, v6, v3, vcc
; GFX8-NEXT: v_cndmask_b32_e64 v16, 0, -1, s[0:1]
; GFX8-NEXT: v_cmp_eq_u32_e64 s[0:1], s11, v12
; GFX8-NEXT: v_subrev_u32_e32 v6, vcc, s10, v11
; GFX8-NEXT: v_cmp_eq_u32_e64 s[0:1], s3, v12
; GFX8-NEXT: v_subrev_u32_e32 v6, vcc, s2, v11
; GFX8-NEXT: v_cndmask_b32_e64 v15, v15, v16, s[0:1]
; GFX8-NEXT: v_add_u32_e64 v16, s[0:1], 1, v13
; GFX8-NEXT: v_subbrev_u32_e32 v3, vcc, 0, v3, vcc
@@ -1578,38 +1578,37 @@ define amdgpu_kernel void @sdivrem_v2i64(ptr addrspace(1) %out0, ptr addrspace(1
; GFX8-NEXT: v_mov_b32_e32 v8, s12
; GFX8-NEXT: v_subrev_u32_e32 v6, vcc, s12, v6
; GFX8-NEXT: v_subb_u32_e32 v7, vcc, v7, v8, vcc
; GFX8-NEXT: s_waitcnt lgkmcnt(0)
; GFX8-NEXT: v_mov_b32_e32 v9, s5
; GFX8-NEXT: v_mov_b32_e32 v8, s4
; GFX8-NEXT: v_mov_b32_e32 v8, s8
; GFX8-NEXT: v_mov_b32_e32 v9, s9
; GFX8-NEXT: flat_store_dwordx4 v[8:9], v[0:3]
; GFX8-NEXT: s_nop 0
; GFX8-NEXT: v_mov_b32_e32 v0, s6
; GFX8-NEXT: v_mov_b32_e32 v1, s7
; GFX8-NEXT: v_mov_b32_e32 v0, s10
; GFX8-NEXT: v_mov_b32_e32 v1, s11
; GFX8-NEXT: flat_store_dwordx4 v[0:1], v[4:7]
; GFX8-NEXT: s_endpgm
;
; GFX9-LABEL: sdivrem_v2i64:
; GFX9: ; %bb.0:
; GFX9-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x10
; GFX9-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x0
; GFX9-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x20
; GFX9-NEXT: s_waitcnt lgkmcnt(0)
; GFX9-NEXT: s_ashr_i32 s2, s9, 31
; GFX9-NEXT: s_ashr_i32 s16, s13, 31
; GFX9-NEXT: s_add_u32 s0, s8, s2
; GFX9-NEXT: s_addc_u32 s1, s9, s2
; GFX9-NEXT: s_add_u32 s6, s12, s16
; GFX9-NEXT: s_ashr_i32 s4, s13, 31
; GFX9-NEXT: s_ashr_i32 s16, s1, 31
; GFX9-NEXT: s_add_u32 s12, s12, s4
; GFX9-NEXT: s_addc_u32 s13, s13, s4
; GFX9-NEXT: s_add_u32 s0, s0, s16
; GFX9-NEXT: s_mov_b32 s17, s16
; GFX9-NEXT: s_addc_u32 s7, s13, s16
; GFX9-NEXT: s_xor_b64 s[8:9], s[6:7], s[16:17]
; GFX9-NEXT: v_cvt_f32_u32_e32 v0, s9
; GFX9-NEXT: v_cvt_f32_u32_e32 v1, s8
; GFX9-NEXT: s_mov_b32 s3, s2
; GFX9-NEXT: s_xor_b64 s[12:13], s[0:1], s[2:3]
; GFX9-NEXT: s_addc_u32 s1, s1, s16
; GFX9-NEXT: s_xor_b64 s[6:7], s[0:1], s[16:17]
; GFX9-NEXT: v_cvt_f32_u32_e32 v0, s7
; GFX9-NEXT: v_cvt_f32_u32_e32 v1, s6
; GFX9-NEXT: s_mov_b32 s5, s4
; GFX9-NEXT: s_xor_b64 s[12:13], s[12:13], s[4:5]
; GFX9-NEXT: v_mul_f32_e32 v0, 0x4f800000, v0
; GFX9-NEXT: v_add_f32_e32 v0, v0, v1
; GFX9-NEXT: v_rcp_iflag_f32_e32 v0, v0
; GFX9-NEXT: s_sub_u32 s6, 0, s8
; GFX9-NEXT: s_subb_u32 s7, 0, s9
; GFX9-NEXT: s_xor_b64 s[18:19], s[2:3], s[16:17]
; GFX9-NEXT: s_sub_u32 s18, 0, s6
; GFX9-NEXT: s_subb_u32 s19, 0, s7
; GFX9-NEXT: v_mul_f32_e32 v0, 0x5f7ffffc, v0
; GFX9-NEXT: v_mul_f32_e32 v1, 0x2f800000, v0
; GFX9-NEXT: v_trunc_f32_e32 v2, v1
@@ -1617,12 +1616,10 @@ define amdgpu_kernel void @sdivrem_v2i64(ptr addrspace(1) %out0, ptr addrspace(1
; GFX9-NEXT: v_add_f32_e32 v0, v1, v0
; GFX9-NEXT: v_cvt_u32_f32_e32 v3, v0
; GFX9-NEXT: v_cvt_u32_f32_e32 v4, v2
; GFX9-NEXT: s_ashr_i32 s16, s15, 31
; GFX9-NEXT: s_mov_b32 s17, s16
; GFX9-NEXT: v_mad_u64_u32 v[0:1], s[0:1], s6, v3, 0
; GFX9-NEXT: v_mad_u64_u32 v[1:2], s[0:1], s6, v4, v[1:2]
; GFX9-NEXT: v_mad_u64_u32 v[0:1], s[0:1], s18, v3, 0
; GFX9-NEXT: v_mad_u64_u32 v[1:2], s[0:1], s18, v4, v[1:2]
; GFX9-NEXT: v_mul_hi_u32 v5, v3, v0
; GFX9-NEXT: v_mad_u64_u32 v[1:2], s[0:1], s7, v3, v[1:2]
; GFX9-NEXT: v_mad_u64_u32 v[1:2], s[0:1], s19, v3, v[1:2]
; GFX9-NEXT: v_mul_lo_u32 v2, v4, v0
; GFX9-NEXT: v_mul_hi_u32 v0, v4, v0
; GFX9-NEXT: v_mul_lo_u32 v6, v3, v1
@@ -1644,15 +1641,17 @@ define amdgpu_kernel void @sdivrem_v2i64(ptr addrspace(1) %out0, ptr addrspace(1
; GFX9-NEXT: v_add3_u32 v1, v5, v2, v1
; GFX9-NEXT: v_add_co_u32_e32 v3, vcc, v3, v0
; GFX9-NEXT: v_addc_co_u32_e32 v4, vcc, v4, v1, vcc
; GFX9-NEXT: v_mad_u64_u32 v[0:1], s[0:1], s6, v3, 0
; GFX9-NEXT: v_mov_b32_e32 v7, s9
; GFX9-NEXT: v_mad_u64_u32 v[1:2], s[0:1], s6, v4, v[1:2]
; GFX9-NEXT: v_mad_u64_u32 v[0:1], s[0:1], s18, v3, 0
; GFX9-NEXT: v_mov_b32_e32 v7, s7
; GFX9-NEXT: v_mad_u64_u32 v[1:2], s[0:1], s18, v4, v[1:2]
; GFX9-NEXT: v_mul_hi_u32 v6, v3, v0
; GFX9-NEXT: v_mad_u64_u32 v[1:2], s[0:1], s7, v3, v[1:2]
; GFX9-NEXT: v_mad_u64_u32 v[1:2], s[0:1], s19, v3, v[1:2]
; GFX9-NEXT: v_mul_lo_u32 v2, v4, v0
; GFX9-NEXT: v_mul_hi_u32 v0, v4, v0
; GFX9-NEXT: v_mul_lo_u32 v5, v3, v1
; GFX9-NEXT: s_load_dwordx4 s[4:7], s[4:5], 0x0
; GFX9-NEXT: s_xor_b64 s[18:19], s[4:5], s[16:17]
; GFX9-NEXT: s_ashr_i32 s16, s3, 31
; GFX9-NEXT: s_mov_b32 s17, s16
; GFX9-NEXT: v_add_co_u32_e32 v2, vcc, v2, v5
; GFX9-NEXT: v_cndmask_b32_e64 v5, 0, 1, vcc
; GFX9-NEXT: v_add_co_u32_e32 v2, vcc, v2, v6
@@ -1688,47 +1687,47 @@ define amdgpu_kernel void @sdivrem_v2i64(ptr addrspace(1) %out0, ptr addrspace(1
; GFX9-NEXT: v_add_co_u32_e32 v0, vcc, v0, v3
; GFX9-NEXT: v_cndmask_b32_e64 v3, 0, 1, vcc
; GFX9-NEXT: v_add_co_u32_e32 v5, vcc, v0, v2
; GFX9-NEXT: v_mad_u64_u32 v[1:2], s[0:1], s8, v5, 0
; GFX9-NEXT: v_mad_u64_u32 v[1:2], s[0:1], s6, v5, 0
; GFX9-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc
; GFX9-NEXT: v_add_u32_e32 v3, v4, v3
; GFX9-NEXT: v_add3_u32 v4, v3, v0, v6
; GFX9-NEXT: v_mov_b32_e32 v0, v2
; GFX9-NEXT: v_mad_u64_u32 v[2:3], s[0:1], s8, v4, v[0:1]
; GFX9-NEXT: v_mad_u64_u32 v[2:3], s[0:1], s6, v4, v[0:1]
; GFX9-NEXT: v_mov_b32_e32 v6, s13
; GFX9-NEXT: v_sub_co_u32_e32 v8, vcc, s12, v1
; GFX9-NEXT: v_mad_u64_u32 v[2:3], s[0:1], s9, v5, v[2:3]
; GFX9-NEXT: s_ashr_i32 s12, s11, 31
; GFX9-NEXT: v_mad_u64_u32 v[2:3], s[0:1], s7, v5, v[2:3]
; GFX9-NEXT: s_ashr_i32 s12, s15, 31
; GFX9-NEXT: v_mov_b32_e32 v0, 0
; GFX9-NEXT: v_subb_co_u32_e64 v6, s[0:1], v6, v2, vcc
; GFX9-NEXT: v_sub_u32_e32 v1, s13, v2
; GFX9-NEXT: v_cmp_le_u32_e64 s[0:1], s9, v6
; GFX9-NEXT: v_cmp_le_u32_e64 s[0:1], s7, v6
; GFX9-NEXT: v_cndmask_b32_e64 v2, 0, -1, s[0:1]
; GFX9-NEXT: v_cmp_le_u32_e64 s[0:1], s8, v8
; GFX9-NEXT: v_cmp_le_u32_e64 s[0:1], s6, v8
; GFX9-NEXT: v_subb_co_u32_e32 v1, vcc, v1, v7, vcc
; GFX9-NEXT: v_cndmask_b32_e64 v3, 0, -1, s[0:1]
; GFX9-NEXT: v_cmp_eq_u32_e64 s[0:1], s9, v6
; GFX9-NEXT: v_subrev_co_u32_e32 v9, vcc, s8, v8
; GFX9-NEXT: v_cmp_eq_u32_e64 s[0:1], s7, v6
; GFX9-NEXT: v_subrev_co_u32_e32 v9, vcc, s6, v8
; GFX9-NEXT: v_cndmask_b32_e64 v3, v2, v3, s[0:1]
; GFX9-NEXT: v_subbrev_co_u32_e64 v10, s[0:1], 0, v1, vcc
; GFX9-NEXT: v_add_co_u32_e64 v2, s[0:1], 1, v5
; GFX9-NEXT: v_addc_co_u32_e64 v11, s[0:1], 0, v4, s[0:1]
; GFX9-NEXT: v_cmp_le_u32_e64 s[0:1], s9, v10
; GFX9-NEXT: v_cmp_le_u32_e64 s[0:1], s7, v10
; GFX9-NEXT: v_cndmask_b32_e64 v12, 0, -1, s[0:1]
; GFX9-NEXT: v_cmp_le_u32_e64 s[0:1], s8, v9
; GFX9-NEXT: v_cmp_le_u32_e64 s[0:1], s6, v9
; GFX9-NEXT: v_cndmask_b32_e64 v13, 0, -1, s[0:1]
; GFX9-NEXT: v_cmp_eq_u32_e64 s[0:1], s9, v10
; GFX9-NEXT: v_cmp_eq_u32_e64 s[0:1], s7, v10
; GFX9-NEXT: v_cndmask_b32_e64 v12, v12, v13, s[0:1]
; GFX9-NEXT: v_add_co_u32_e64 v13, s[0:1], 1, v2
; GFX9-NEXT: v_addc_co_u32_e64 v14, s[0:1], 0, v11, s[0:1]
; GFX9-NEXT: s_add_u32 s0, s10, s12
; GFX9-NEXT: s_addc_u32 s1, s11, s12
; GFX9-NEXT: s_add_u32 s10, s14, s16
; GFX9-NEXT: s_addc_u32 s11, s15, s16
; GFX9-NEXT: s_xor_b64 s[10:11], s[10:11], s[16:17]
; GFX9-NEXT: v_cvt_f32_u32_e32 v15, s11
; GFX9-NEXT: s_add_u32 s0, s14, s12
; GFX9-NEXT: s_addc_u32 s1, s15, s12
; GFX9-NEXT: s_add_u32 s2, s2, s16
; GFX9-NEXT: s_addc_u32 s3, s3, s16
; GFX9-NEXT: s_xor_b64 s[2:3], s[2:3], s[16:17]
; GFX9-NEXT: v_cvt_f32_u32_e32 v15, s3
; GFX9-NEXT: v_subb_co_u32_e32 v1, vcc, v1, v7, vcc
; GFX9-NEXT: v_cvt_f32_u32_e32 v7, s10
; GFX9-NEXT: v_subrev_co_u32_e32 v16, vcc, s8, v9
; GFX9-NEXT: v_cvt_f32_u32_e32 v7, s2
; GFX9-NEXT: v_subrev_co_u32_e32 v16, vcc, s6, v9
; GFX9-NEXT: v_subbrev_co_u32_e32 v17, vcc, 0, v1, vcc
; GFX9-NEXT: v_mul_f32_e32 v1, 0x4f800000, v15
; GFX9-NEXT: v_add_f32_e32 v1, v1, v7
@@ -1743,14 +1742,14 @@ define amdgpu_kernel void @sdivrem_v2i64(ptr addrspace(1) %out0, ptr addrspace(1
; GFX9-NEXT: v_add_f32_e32 v1, v2, v1
; GFX9-NEXT: v_cvt_u32_f32_e32 v14, v1
; GFX9-NEXT: s_mov_b32 s13, s12
; GFX9-NEXT: s_xor_b64 s[8:9], s[0:1], s[12:13]
; GFX9-NEXT: s_sub_u32 s3, 0, s10
; GFX9-NEXT: v_mad_u64_u32 v[1:2], s[0:1], s3, v14, 0
; GFX9-NEXT: s_xor_b64 s[6:7], s[0:1], s[12:13]
; GFX9-NEXT: s_sub_u32 s5, 0, s2
; GFX9-NEXT: v_mad_u64_u32 v[1:2], s[0:1], s5, v14, 0
; GFX9-NEXT: v_cvt_u32_f32_e32 v13, v13
; GFX9-NEXT: v_cmp_ne_u32_e32 vcc, 0, v3
; GFX9-NEXT: s_subb_u32 s14, 0, s11
; GFX9-NEXT: s_subb_u32 s14, 0, s3
; GFX9-NEXT: v_cndmask_b32_e32 v5, v5, v7, vcc
; GFX9-NEXT: v_mad_u64_u32 v[2:3], s[0:1], s3, v13, v[2:3]
; GFX9-NEXT: v_mad_u64_u32 v[2:3], s[0:1], s5, v13, v[2:3]
; GFX9-NEXT: v_cndmask_b32_e32 v7, v4, v11, vcc
; GFX9-NEXT: v_mul_hi_u32 v11, v14, v1
; GFX9-NEXT: v_mad_u64_u32 v[2:3], s[0:1], s14, v14, v[2:3]
@@ -1778,23 +1777,23 @@ define amdgpu_kernel void @sdivrem_v2i64(ptr addrspace(1) %out0, ptr addrspace(1
; GFX9-NEXT: v_add3_u32 v2, v4, v3, v2
; GFX9-NEXT: v_add_co_u32_e64 v11, s[0:1], v14, v1
; GFX9-NEXT: v_addc_co_u32_e64 v12, s[0:1], v13, v2, s[0:1]
; GFX9-NEXT: v_mad_u64_u32 v[3:4], s[0:1], s3, v11, 0
; GFX9-NEXT: v_mad_u64_u32 v[3:4], s[0:1], s5, v11, 0
; GFX9-NEXT: v_cndmask_b32_e32 v8, v8, v9, vcc
; GFX9-NEXT: v_xor_b32_e32 v9, s18, v5
; GFX9-NEXT: v_mov_b32_e32 v1, v4
; GFX9-NEXT: v_mad_u64_u32 v[1:2], s[0:1], s3, v12, v[1:2]
; GFX9-NEXT: v_mad_u64_u32 v[1:2], s[0:1], s5, v12, v[1:2]
; GFX9-NEXT: v_cndmask_b32_e32 v6, v6, v10, vcc
; GFX9-NEXT: v_xor_b32_e32 v7, s19, v7
; GFX9-NEXT: v_mad_u64_u32 v[4:5], s[0:1], s14, v11, v[1:2]
; GFX9-NEXT: v_mov_b32_e32 v10, s19
; GFX9-NEXT: v_subrev_co_u32_e32 v1, vcc, s18, v9
; GFX9-NEXT: v_subb_co_u32_e32 v2, vcc, v7, v10, vcc
; GFX9-NEXT: v_xor_b32_e32 v5, s2, v8
; GFX9-NEXT: v_xor_b32_e32 v5, s4, v8
; GFX9-NEXT: v_mul_lo_u32 v7, v12, v3
; GFX9-NEXT: v_mul_lo_u32 v8, v11, v4
; GFX9-NEXT: v_mul_hi_u32 v9, v11, v3
; GFX9-NEXT: v_mul_hi_u32 v3, v12, v3
; GFX9-NEXT: v_xor_b32_e32 v6, s2, v6
; GFX9-NEXT: v_xor_b32_e32 v6, s4, v6
; GFX9-NEXT: v_add_co_u32_e32 v7, vcc, v7, v8
; GFX9-NEXT: v_cndmask_b32_e64 v8, 0, 1, vcc
; GFX9-NEXT: v_add_co_u32_e32 v7, vcc, v7, v9
@@ -1813,55 +1812,55 @@ define amdgpu_kernel void @sdivrem_v2i64(ptr addrspace(1) %out0, ptr addrspace(1
; GFX9-NEXT: v_add3_u32 v4, v8, v7, v4
; GFX9-NEXT: v_add_co_u32_e32 v3, vcc, v11, v3
; GFX9-NEXT: v_addc_co_u32_e32 v4, vcc, v12, v4, vcc
; GFX9-NEXT: v_mul_lo_u32 v7, s9, v3
; GFX9-NEXT: v_mul_lo_u32 v8, s8, v4
; GFX9-NEXT: v_mul_hi_u32 v10, s8, v3
; GFX9-NEXT: v_mul_hi_u32 v3, s9, v3
; GFX9-NEXT: v_mul_hi_u32 v12, s9, v4
; GFX9-NEXT: v_mul_lo_u32 v7, s7, v3
; GFX9-NEXT: v_mul_lo_u32 v8, s6, v4
; GFX9-NEXT: v_mul_hi_u32 v10, s6, v3
; GFX9-NEXT: v_mul_hi_u32 v3, s7, v3
; GFX9-NEXT: v_mul_hi_u32 v12, s7, v4
; GFX9-NEXT: v_add_co_u32_e32 v7, vcc, v7, v8
; GFX9-NEXT: v_cndmask_b32_e64 v8, 0, 1, vcc
; GFX9-NEXT: v_add_co_u32_e32 v7, vcc, v7, v10
; GFX9-NEXT: v_cndmask_b32_e64 v7, 0, 1, vcc
; GFX9-NEXT: v_mul_lo_u32 v10, s9, v4
; GFX9-NEXT: v_mul_lo_u32 v10, s7, v4
; GFX9-NEXT: v_add_u32_e32 v7, v8, v7
; GFX9-NEXT: v_mul_hi_u32 v8, s8, v4
; GFX9-NEXT: v_mov_b32_e32 v9, s2
; GFX9-NEXT: v_mul_hi_u32 v8, s6, v4
; GFX9-NEXT: v_mov_b32_e32 v9, s4
; GFX9-NEXT: v_add_co_u32_e32 v3, vcc, v10, v3
; GFX9-NEXT: v_cndmask_b32_e64 v10, 0, 1, vcc
; GFX9-NEXT: v_add_co_u32_e32 v3, vcc, v3, v8
; GFX9-NEXT: v_cndmask_b32_e64 v8, 0, 1, vcc
; GFX9-NEXT: v_add_co_u32_e32 v11, vcc, v3, v7
; GFX9-NEXT: v_mad_u64_u32 v[3:4], s[0:1], s10, v11, 0
; GFX9-NEXT: v_mad_u64_u32 v[3:4], s[0:1], s2, v11, 0
; GFX9-NEXT: v_cndmask_b32_e64 v7, 0, 1, vcc
; GFX9-NEXT: v_subrev_co_u32_e32 v5, vcc, s2, v5
; GFX9-NEXT: v_subrev_co_u32_e32 v5, vcc, s4, v5
; GFX9-NEXT: v_add_u32_e32 v8, v10, v8
; GFX9-NEXT: v_subb_co_u32_e32 v6, vcc, v6, v9, vcc
; GFX9-NEXT: v_add3_u32 v9, v8, v7, v12
; GFX9-NEXT: v_mad_u64_u32 v[7:8], s[0:1], s10, v9, v[4:5]
; GFX9-NEXT: v_mov_b32_e32 v10, s9
; GFX9-NEXT: v_sub_co_u32_e32 v3, vcc, s8, v3
; GFX9-NEXT: v_mad_u64_u32 v[7:8], s[0:1], s11, v11, v[7:8]
; GFX9-NEXT: v_mov_b32_e32 v4, s11
; GFX9-NEXT: v_mad_u64_u32 v[7:8], s[0:1], s2, v9, v[4:5]
; GFX9-NEXT: v_mov_b32_e32 v10, s7
; GFX9-NEXT: v_sub_co_u32_e32 v3, vcc, s6, v3
; GFX9-NEXT: v_mad_u64_u32 v[7:8], s[0:1], s3, v11, v[7:8]
; GFX9-NEXT: v_mov_b32_e32 v4, s3
; GFX9-NEXT: v_subb_co_u32_e64 v8, s[0:1], v10, v7, vcc
; GFX9-NEXT: v_cmp_le_u32_e64 s[0:1], s11, v8
; GFX9-NEXT: v_sub_u32_e32 v7, s9, v7
; GFX9-NEXT: v_cmp_le_u32_e64 s[0:1], s3, v8
; GFX9-NEXT: v_sub_u32_e32 v7, s7, v7
; GFX9-NEXT: v_cndmask_b32_e64 v10, 0, -1, s[0:1]
; GFX9-NEXT: v_cmp_le_u32_e64 s[0:1], s10, v3
; GFX9-NEXT: v_cmp_le_u32_e64 s[0:1], s2, v3
; GFX9-NEXT: v_cndmask_b32_e64 v12, 0, -1, s[0:1]
; GFX9-NEXT: v_cmp_eq_u32_e64 s[0:1], s11, v8
; GFX9-NEXT: v_cmp_eq_u32_e64 s[0:1], s3, v8
; GFX9-NEXT: v_subb_co_u32_e32 v7, vcc, v7, v4, vcc
; GFX9-NEXT: v_cndmask_b32_e64 v10, v10, v12, s[0:1]
; GFX9-NEXT: v_subrev_co_u32_e32 v12, vcc, s10, v3
; GFX9-NEXT: v_subrev_co_u32_e32 v12, vcc, s2, v3
; GFX9-NEXT: v_subbrev_co_u32_e64 v13, s[0:1], 0, v7, vcc
; GFX9-NEXT: v_add_co_u32_e64 v14, s[0:1], 1, v11
; GFX9-NEXT: v_addc_co_u32_e64 v15, s[0:1], 0, v9, s[0:1]
; GFX9-NEXT: v_cmp_le_u32_e64 s[0:1], s11, v13
; GFX9-NEXT: v_cmp_le_u32_e64 s[0:1], s3, v13
; GFX9-NEXT: v_cndmask_b32_e64 v16, 0, -1, s[0:1]
; GFX9-NEXT: v_cmp_le_u32_e64 s[0:1], s10, v12
; GFX9-NEXT: v_cmp_le_u32_e64 s[0:1], s2, v12
; GFX9-NEXT: v_subb_co_u32_e32 v4, vcc, v7, v4, vcc
; GFX9-NEXT: v_cndmask_b32_e64 v17, 0, -1, s[0:1]
; GFX9-NEXT: v_cmp_eq_u32_e64 s[0:1], s11, v13
; GFX9-NEXT: v_subrev_co_u32_e32 v7, vcc, s10, v12
; GFX9-NEXT: v_cmp_eq_u32_e64 s[0:1], s3, v13
; GFX9-NEXT: v_subrev_co_u32_e32 v7, vcc, s2, v12
; GFX9-NEXT: v_cndmask_b32_e64 v16, v16, v17, s[0:1]
; GFX9-NEXT: v_add_co_u32_e64 v17, s[0:1], 1, v14
; GFX9-NEXT: v_subbrev_co_u32_e32 v4, vcc, 0, v4, vcc
@@ -1887,45 +1886,46 @@ define amdgpu_kernel void @sdivrem_v2i64(ptr addrspace(1) %out0, ptr addrspace(1
; GFX9-NEXT: v_mov_b32_e32 v9, s12
; GFX9-NEXT: v_subrev_co_u32_e32 v7, vcc, s12, v7
; GFX9-NEXT: v_subb_co_u32_e32 v8, vcc, v8, v9, vcc
; GFX9-NEXT: s_waitcnt lgkmcnt(0)
; GFX9-NEXT: global_store_dwordx4 v0, v[1:4], s[4:5]
; GFX9-NEXT: global_store_dwordx4 v0, v[5:8], s[6:7]
; GFX9-NEXT: global_store_dwordx4 v0, v[1:4], s[8:9]
; GFX9-NEXT: global_store_dwordx4 v0, v[5:8], s[10:11]
; GFX9-NEXT: s_endpgm
;
; GFX10-LABEL: sdivrem_v2i64:
; GFX10: ; %bb.0:
; GFX10-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x10
; GFX10-NEXT: s_clause 0x1
; GFX10-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x20
; GFX10-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x0
; GFX10-NEXT: s_waitcnt lgkmcnt(0)
; GFX10-NEXT: s_ashr_i32 s2, s9, 31
; GFX10-NEXT: s_ashr_i32 s6, s13, 31
; GFX10-NEXT: s_add_u32 s0, s8, s2
; GFX10-NEXT: s_addc_u32 s1, s9, s2
; GFX10-NEXT: s_add_u32 s8, s12, s6
; GFX10-NEXT: s_mov_b32 s7, s6
; GFX10-NEXT: s_addc_u32 s9, s13, s6
; GFX10-NEXT: s_mov_b32 s3, s2
; GFX10-NEXT: s_xor_b64 s[8:9], s[8:9], s[6:7]
; GFX10-NEXT: s_xor_b64 s[0:1], s[0:1], s[2:3]
; GFX10-NEXT: v_cvt_f32_u32_e32 v1, s9
; GFX10-NEXT: s_sub_u32 s20, 0, s8
; GFX10-NEXT: s_subb_u32 s21, 0, s9
; GFX10-NEXT: s_ashr_i32 s12, s11, 31
; GFX10-NEXT: v_cvt_f32_u32_e32 v0, s8
; GFX10-NEXT: s_xor_b64 s[18:19], s[2:3], s[6:7]
; GFX10-NEXT: s_ashr_i32 s16, s15, 31
; GFX10-NEXT: v_mul_f32_e32 v1, 0x4f800000, v1
; GFX10-NEXT: s_add_u32 s6, s10, s12
; GFX10-NEXT: s_addc_u32 s7, s11, s12
; GFX10-NEXT: s_add_u32 s10, s14, s16
; GFX10-NEXT: s_ashr_i32 s16, s1, 31
; GFX10-NEXT: s_ashr_i32 s4, s13, 31
; GFX10-NEXT: s_mov_b32 s17, s16
; GFX10-NEXT: s_addc_u32 s11, s15, s16
; GFX10-NEXT: s_add_u32 s12, s12, s4
; GFX10-NEXT: s_addc_u32 s13, s13, s4
; GFX10-NEXT: s_add_u32 s0, s0, s16
; GFX10-NEXT: s_addc_u32 s1, s1, s16
; GFX10-NEXT: s_mov_b32 s5, s4
; GFX10-NEXT: s_xor_b64 s[6:7], s[0:1], s[16:17]
; GFX10-NEXT: s_xor_b64 s[0:1], s[12:13], s[4:5]
; GFX10-NEXT: v_cvt_f32_u32_e32 v1, s7
; GFX10-NEXT: s_sub_u32 s20, 0, s6
; GFX10-NEXT: s_subb_u32 s21, 0, s7
; GFX10-NEXT: s_ashr_i32 s12, s15, 31
; GFX10-NEXT: v_cvt_f32_u32_e32 v0, s6
; GFX10-NEXT: s_xor_b64 s[18:19], s[4:5], s[16:17]
; GFX10-NEXT: s_ashr_i32 s16, s3, 31
; GFX10-NEXT: v_mul_f32_e32 v1, 0x4f800000, v1
; GFX10-NEXT: s_add_u32 s14, s14, s12
; GFX10-NEXT: s_addc_u32 s15, s15, s12
; GFX10-NEXT: s_add_u32 s2, s2, s16
; GFX10-NEXT: s_mov_b32 s17, s16
; GFX10-NEXT: s_addc_u32 s3, s3, s16
; GFX10-NEXT: v_add_f32_e32 v0, v1, v0
; GFX10-NEXT: s_xor_b64 s[10:11], s[10:11], s[16:17]
; GFX10-NEXT: s_xor_b64 s[2:3], s[2:3], s[16:17]
; GFX10-NEXT: s_mov_b32 s13, s12
; GFX10-NEXT: v_cvt_f32_u32_e32 v1, s11
; GFX10-NEXT: v_cvt_f32_u32_e32 v2, s10
; GFX10-NEXT: v_cvt_f32_u32_e32 v1, s3
; GFX10-NEXT: v_cvt_f32_u32_e32 v2, s2
; GFX10-NEXT: v_rcp_iflag_f32_e32 v0, v0
; GFX10-NEXT: s_xor_b64 s[14:15], s[6:7], s[12:13]
; GFX10-NEXT: s_xor_b64 s[14:15], s[14:15], s[12:13]
; GFX10-NEXT: v_mul_f32_e32 v1, 0x4f800000, v1
; GFX10-NEXT: v_add_f32_e32 v1, v1, v2
; GFX10-NEXT: v_mul_f32_e32 v0, 0x5f7ffffc, v0
@@ -1941,62 +1941,62 @@ define amdgpu_kernel void @sdivrem_v2i64(ptr addrspace(1) %out0, ptr addrspace(1
; GFX10-NEXT: v_trunc_f32_e32 v4, v4
; GFX10-NEXT: v_cvt_u32_f32_e32 v6, v0
; GFX10-NEXT: v_mul_f32_e32 v2, 0xcf800000, v4
; GFX10-NEXT: v_mad_u64_u32 v[0:1], s3, s20, v6, 0
; GFX10-NEXT: v_mad_u64_u32 v[0:1], s5, s20, v6, 0
; GFX10-NEXT: v_mul_lo_u32 v8, s21, v6
; GFX10-NEXT: v_add_f32_e32 v2, v2, v3
; GFX10-NEXT: v_cvt_u32_f32_e32 v3, v4
; GFX10-NEXT: s_sub_u32 s3, 0, s10
; GFX10-NEXT: s_subb_u32 s6, 0, s11
; GFX10-NEXT: s_sub_u32 s5, 0, s2
; GFX10-NEXT: s_subb_u32 s22, 0, s3
; GFX10-NEXT: v_cvt_u32_f32_e32 v4, v2
; GFX10-NEXT: v_mul_lo_u32 v9, s3, v3
; GFX10-NEXT: v_mul_lo_u32 v9, s5, v3
; GFX10-NEXT: v_add3_u32 v7, v1, v7, v8
; GFX10-NEXT: v_mul_lo_u32 v10, v5, v0
; GFX10-NEXT: v_mul_hi_u32 v11, v6, v0
; GFX10-NEXT: v_mad_u64_u32 v[1:2], s7, s3, v4, 0
; GFX10-NEXT: v_mul_lo_u32 v8, s6, v4
; GFX10-NEXT: v_mad_u64_u32 v[1:2], s23, s5, v4, 0
; GFX10-NEXT: v_mul_lo_u32 v8, s22, v4
; GFX10-NEXT: v_mul_lo_u32 v12, v6, v7
; GFX10-NEXT: v_mul_hi_u32 v0, v5, v0
; GFX10-NEXT: v_mul_lo_u32 v13, v5, v7
; GFX10-NEXT: v_mul_hi_u32 v14, v6, v7
; GFX10-NEXT: v_mul_hi_u32 v7, v5, v7
; GFX10-NEXT: v_add3_u32 v2, v2, v9, v8
; GFX10-NEXT: v_add_co_u32 v10, s7, v10, v12
; GFX10-NEXT: v_cndmask_b32_e64 v12, 0, 1, s7
; GFX10-NEXT: v_add_co_u32 v0, s7, v13, v0
; GFX10-NEXT: v_add_co_u32 v10, s23, v10, v12
; GFX10-NEXT: v_cndmask_b32_e64 v12, 0, 1, s23
; GFX10-NEXT: v_add_co_u32 v0, s23, v13, v0
; GFX10-NEXT: v_mul_lo_u32 v8, v3, v1
; GFX10-NEXT: v_cndmask_b32_e64 v13, 0, 1, s7
; GFX10-NEXT: v_cndmask_b32_e64 v13, 0, 1, s23
; GFX10-NEXT: v_mul_lo_u32 v15, v4, v2
; GFX10-NEXT: v_add_co_u32 v10, s7, v10, v11
; GFX10-NEXT: v_add_co_u32 v10, s23, v10, v11
; GFX10-NEXT: v_mul_hi_u32 v9, v4, v1
; GFX10-NEXT: v_mul_hi_u32 v1, v3, v1
; GFX10-NEXT: v_cndmask_b32_e64 v10, 0, 1, s7
; GFX10-NEXT: v_add_co_u32 v0, s7, v0, v14
; GFX10-NEXT: v_cndmask_b32_e64 v10, 0, 1, s23
; GFX10-NEXT: v_add_co_u32 v0, s23, v0, v14
; GFX10-NEXT: v_mul_lo_u32 v14, v3, v2
; GFX10-NEXT: v_cndmask_b32_e64 v11, 0, 1, s7
; GFX10-NEXT: v_cndmask_b32_e64 v11, 0, 1, s23
; GFX10-NEXT: v_add_nc_u32_e32 v10, v12, v10
; GFX10-NEXT: v_add_co_u32 v8, s7, v8, v15
; GFX10-NEXT: v_cndmask_b32_e64 v12, 0, 1, s7
; GFX10-NEXT: v_add_co_u32 v8, s23, v8, v15
; GFX10-NEXT: v_cndmask_b32_e64 v12, 0, 1, s23
; GFX10-NEXT: v_mul_hi_u32 v16, v4, v2
; GFX10-NEXT: v_add_nc_u32_e32 v11, v13, v11
; GFX10-NEXT: v_add_co_u32 v1, s7, v14, v1
; GFX10-NEXT: v_cndmask_b32_e64 v13, 0, 1, s7
; GFX10-NEXT: v_add_co_u32 v0, s7, v0, v10
; GFX10-NEXT: v_cndmask_b32_e64 v10, 0, 1, s7
; GFX10-NEXT: v_add_co_u32 v8, s7, v8, v9
; GFX10-NEXT: v_cndmask_b32_e64 v8, 0, 1, s7
; GFX10-NEXT: v_add_co_u32 v9, s7, v1, v16
; GFX10-NEXT: v_add_co_u32 v1, s23, v14, v1
; GFX10-NEXT: v_cndmask_b32_e64 v13, 0, 1, s23
; GFX10-NEXT: v_add_co_u32 v0, s23, v0, v10
; GFX10-NEXT: v_cndmask_b32_e64 v10, 0, 1, s23
; GFX10-NEXT: v_add_co_u32 v8, s23, v8, v9
; GFX10-NEXT: v_cndmask_b32_e64 v8, 0, 1, s23
; GFX10-NEXT: v_add_co_u32 v9, s23, v1, v16
; GFX10-NEXT: v_add3_u32 v7, v11, v10, v7
; GFX10-NEXT: v_cndmask_b32_e64 v1, 0, 1, s7
; GFX10-NEXT: v_cndmask_b32_e64 v1, 0, 1, s23
; GFX10-NEXT: v_add_co_u32 v6, vcc_lo, v6, v0
; GFX10-NEXT: v_add_nc_u32_e32 v8, v12, v8
; GFX10-NEXT: v_add_co_ci_u32_e32 v5, vcc_lo, v5, v7, vcc_lo
; GFX10-NEXT: v_mul_hi_u32 v2, v3, v2
; GFX10-NEXT: v_add_nc_u32_e32 v10, v13, v1
; GFX10-NEXT: v_mad_u64_u32 v[0:1], s7, s20, v6, 0
; GFX10-NEXT: v_add_co_u32 v7, s7, v9, v8
; GFX10-NEXT: v_mad_u64_u32 v[0:1], s23, s20, v6, 0
; GFX10-NEXT: v_add_co_u32 v7, s23, v9, v8
; GFX10-NEXT: v_mul_lo_u32 v9, s21, v6
; GFX10-NEXT: v_mul_lo_u32 v11, s20, v5
; GFX10-NEXT: v_cndmask_b32_e64 v8, 0, 1, s7
; GFX10-NEXT: v_cndmask_b32_e64 v8, 0, 1, s23
; GFX10-NEXT: v_add_co_u32 v4, vcc_lo, v4, v7
; GFX10-NEXT: v_add3_u32 v2, v10, v8, v2
; GFX10-NEXT: v_mul_lo_u32 v8, v5, v0
@@ -2005,74 +2005,73 @@ define amdgpu_kernel void @sdivrem_v2i64(ptr addrspace(1) %out0, ptr addrspace(1
; GFX10-NEXT: v_mul_hi_u32 v0, v5, v0
; GFX10-NEXT: v_add_co_ci_u32_e32 v3, vcc_lo, v3, v2, vcc_lo
; GFX10-NEXT: v_mul_lo_u32 v12, v6, v7
; GFX10-NEXT: v_mad_u64_u32 v[1:2], s7, s3, v4, 0
; GFX10-NEXT: v_mul_lo_u32 v9, s6, v4
; GFX10-NEXT: v_mul_lo_u32 v11, s3, v3
; GFX10-NEXT: v_mad_u64_u32 v[1:2], s20, s5, v4, 0
; GFX10-NEXT: v_mul_lo_u32 v9, s22, v4
; GFX10-NEXT: v_mul_lo_u32 v11, s5, v3
; GFX10-NEXT: v_mul_lo_u32 v13, v5, v7
; GFX10-NEXT: v_mul_hi_u32 v14, v6, v7
; GFX10-NEXT: v_mul_hi_u32 v7, v5, v7
; GFX10-NEXT: v_add_co_u32 v8, s3, v8, v12
; GFX10-NEXT: v_add_co_u32 v8, s5, v8, v12
; GFX10-NEXT: v_mul_lo_u32 v15, v3, v1
; GFX10-NEXT: v_mul_hi_u32 v16, v4, v1
; GFX10-NEXT: v_add3_u32 v2, v2, v11, v9
; GFX10-NEXT: v_cndmask_b32_e64 v9, 0, 1, s3
; GFX10-NEXT: v_add_co_u32 v0, s3, v13, v0
; GFX10-NEXT: v_cndmask_b32_e64 v11, 0, 1, s3
; GFX10-NEXT: v_add_co_u32 v8, s3, v8, v10
; GFX10-NEXT: v_cndmask_b32_e64 v8, 0, 1, s3
; GFX10-NEXT: v_add_co_u32 v0, s3, v0, v14
; GFX10-NEXT: v_cndmask_b32_e64 v10, 0, 1, s3
; GFX10-NEXT: v_cndmask_b32_e64 v9, 0, 1, s5
; GFX10-NEXT: v_add_co_u32 v0, s5, v13, v0
; GFX10-NEXT: v_cndmask_b32_e64 v11, 0, 1, s5
; GFX10-NEXT: v_add_co_u32 v8, s5, v8, v10
; GFX10-NEXT: v_cndmask_b32_e64 v8, 0, 1, s5
; GFX10-NEXT: v_add_co_u32 v0, s5, v0, v14
; GFX10-NEXT: v_cndmask_b32_e64 v10, 0, 1, s5
; GFX10-NEXT: v_mul_lo_u32 v12, v4, v2
; GFX10-NEXT: v_add_nc_u32_e32 v8, v9, v8
; GFX10-NEXT: v_mul_hi_u32 v1, v3, v1
; GFX10-NEXT: v_mul_lo_u32 v13, v3, v2
; GFX10-NEXT: v_add_nc_u32_e32 v10, v11, v10
; GFX10-NEXT: v_mul_hi_u32 v9, v4, v2
; GFX10-NEXT: v_add_co_u32 v0, s3, v0, v8
; GFX10-NEXT: v_cndmask_b32_e64 v8, 0, 1, s3
; GFX10-NEXT: v_add_co_u32 v11, s3, v15, v12
; GFX10-NEXT: v_add_co_u32 v0, s5, v0, v8
; GFX10-NEXT: v_cndmask_b32_e64 v8, 0, 1, s5
; GFX10-NEXT: v_add_co_u32 v11, s5, v15, v12
; GFX10-NEXT: v_add_co_u32 v0, vcc_lo, v6, v0
; GFX10-NEXT: v_add3_u32 v7, v10, v8, v7
; GFX10-NEXT: v_cndmask_b32_e64 v12, 0, 1, s3
; GFX10-NEXT: v_add_co_u32 v1, s3, v13, v1
; GFX10-NEXT: v_cndmask_b32_e64 v13, 0, 1, s3
; GFX10-NEXT: v_cndmask_b32_e64 v12, 0, 1, s5
; GFX10-NEXT: v_add_co_u32 v1, s5, v13, v1
; GFX10-NEXT: v_cndmask_b32_e64 v13, 0, 1, s5
; GFX10-NEXT: v_add_co_ci_u32_e32 v5, vcc_lo, v5, v7, vcc_lo
; GFX10-NEXT: v_add_co_u32 v8, s3, v11, v16
; GFX10-NEXT: v_cndmask_b32_e64 v8, 0, 1, s3
; GFX10-NEXT: v_add_co_u32 v1, s3, v1, v9
; GFX10-NEXT: v_add_co_u32 v8, s5, v11, v16
; GFX10-NEXT: v_cndmask_b32_e64 v8, 0, 1, s5
; GFX10-NEXT: v_add_co_u32 v1, s5, v1, v9
; GFX10-NEXT: v_mul_lo_u32 v7, s1, v0
; GFX10-NEXT: v_mul_lo_u32 v9, s0, v5
; GFX10-NEXT: v_mul_hi_u32 v10, s1, v0
; GFX10-NEXT: v_mul_hi_u32 v0, s0, v0
; GFX10-NEXT: v_mul_lo_u32 v11, s1, v5
; GFX10-NEXT: v_cndmask_b32_e64 v6, 0, 1, s3
; GFX10-NEXT: v_cndmask_b32_e64 v6, 0, 1, s5
; GFX10-NEXT: v_add_nc_u32_e32 v8, v12, v8
; GFX10-NEXT: v_mul_hi_u32 v12, s0, v5
; GFX10-NEXT: v_mul_hi_u32 v5, s1, v5
; GFX10-NEXT: v_add_co_u32 v7, s3, v7, v9
; GFX10-NEXT: v_cndmask_b32_e64 v9, 0, 1, s3
; GFX10-NEXT: v_add_co_u32 v10, s3, v11, v10
; GFX10-NEXT: v_add_co_u32 v0, s6, v7, v0
; GFX10-NEXT: v_cndmask_b32_e64 v0, 0, 1, s6
; GFX10-NEXT: v_cndmask_b32_e64 v7, 0, 1, s3
; GFX10-NEXT: v_add_co_u32 v10, s3, v10, v12
; GFX10-NEXT: v_cndmask_b32_e64 v11, 0, 1, s3
; GFX10-NEXT: v_add_co_u32 v7, s5, v7, v9
; GFX10-NEXT: v_cndmask_b32_e64 v9, 0, 1, s5
; GFX10-NEXT: v_add_co_u32 v10, s5, v11, v10
; GFX10-NEXT: v_add_co_u32 v0, s20, v7, v0
; GFX10-NEXT: v_cndmask_b32_e64 v0, 0, 1, s20
; GFX10-NEXT: v_cndmask_b32_e64 v7, 0, 1, s5
; GFX10-NEXT: v_add_co_u32 v10, s5, v10, v12
; GFX10-NEXT: v_cndmask_b32_e64 v11, 0, 1, s5
; GFX10-NEXT: v_add_nc_u32_e32 v0, v9, v0
; GFX10-NEXT: v_add_co_u32 v8, s3, v1, v8
; GFX10-NEXT: v_cndmask_b32_e64 v1, 0, 1, s3
; GFX10-NEXT: v_add_co_u32 v8, s5, v1, v8
; GFX10-NEXT: v_cndmask_b32_e64 v1, 0, 1, s5
; GFX10-NEXT: v_add_nc_u32_e32 v7, v7, v11
; GFX10-NEXT: v_add_co_u32 v9, s3, v10, v0
; GFX10-NEXT: v_cndmask_b32_e64 v0, 0, 1, s3
; GFX10-NEXT: v_add_co_u32 v9, s5, v10, v0
; GFX10-NEXT: v_cndmask_b32_e64 v0, 0, 1, s5
; GFX10-NEXT: v_mul_hi_u32 v2, v3, v2
; GFX10-NEXT: v_add_nc_u32_e32 v6, v13, v6
; GFX10-NEXT: v_add_co_u32 v4, vcc_lo, v4, v8
; GFX10-NEXT: v_add3_u32 v5, v7, v0, v5
; GFX10-NEXT: s_load_dwordx4 s[4:7], s[4:5], 0x0
; GFX10-NEXT: v_mul_hi_u32 v8, s14, v4
; GFX10-NEXT: v_add3_u32 v2, v6, v1, v2
; GFX10-NEXT: v_mad_u64_u32 v[0:1], s3, s8, v9, 0
; GFX10-NEXT: v_mul_lo_u32 v6, s9, v9
; GFX10-NEXT: v_mul_lo_u32 v7, s8, v5
; GFX10-NEXT: v_mad_u64_u32 v[0:1], s5, s6, v9, 0
; GFX10-NEXT: v_mul_lo_u32 v6, s7, v9
; GFX10-NEXT: v_mul_lo_u32 v7, s6, v5
; GFX10-NEXT: v_add_co_ci_u32_e32 v2, vcc_lo, v3, v2, vcc_lo
; GFX10-NEXT: v_mul_lo_u32 v3, s15, v4
; GFX10-NEXT: v_mul_hi_u32 v4, s15, v4
@@ -2084,23 +2083,23 @@ define amdgpu_kernel void @sdivrem_v2i64(ptr addrspace(1) %out0, ptr addrspace(1
; GFX10-NEXT: v_sub_nc_u32_e32 v12, s1, v1
; GFX10-NEXT: v_sub_co_u32 v13, vcc_lo, s0, v0
; GFX10-NEXT: v_sub_co_ci_u32_e64 v14, s0, s1, v1, vcc_lo
; GFX10-NEXT: v_subrev_co_ci_u32_e32 v0, vcc_lo, s9, v12, vcc_lo
; GFX10-NEXT: v_cmp_le_u32_e32 vcc_lo, s8, v13
; GFX10-NEXT: v_subrev_co_ci_u32_e32 v0, vcc_lo, s7, v12, vcc_lo
; GFX10-NEXT: v_cmp_le_u32_e32 vcc_lo, s6, v13
; GFX10-NEXT: v_cndmask_b32_e64 v1, 0, -1, vcc_lo
; GFX10-NEXT: v_sub_co_u32 v12, vcc_lo, v13, s8
; GFX10-NEXT: v_sub_co_u32 v12, vcc_lo, v13, s6
; GFX10-NEXT: v_subrev_co_ci_u32_e64 v15, s0, 0, v0, vcc_lo
; GFX10-NEXT: v_cmp_le_u32_e64 s0, s9, v14
; GFX10-NEXT: v_subrev_co_ci_u32_e32 v0, vcc_lo, s9, v0, vcc_lo
; GFX10-NEXT: v_cmp_le_u32_e64 s0, s7, v14
; GFX10-NEXT: v_subrev_co_ci_u32_e32 v0, vcc_lo, s7, v0, vcc_lo
; GFX10-NEXT: v_cndmask_b32_e64 v16, 0, -1, s0
; GFX10-NEXT: v_cmp_le_u32_e64 s0, s8, v12
; GFX10-NEXT: v_cmp_le_u32_e64 s0, s6, v12
; GFX10-NEXT: v_cndmask_b32_e64 v17, 0, -1, s0
; GFX10-NEXT: v_cmp_le_u32_e64 s0, s9, v15
; GFX10-NEXT: v_cmp_le_u32_e64 s0, s7, v15
; GFX10-NEXT: v_cndmask_b32_e64 v18, 0, -1, s0
; GFX10-NEXT: v_add_co_u32 v19, s0, v6, 1
; GFX10-NEXT: v_add_co_ci_u32_e64 v20, s0, 0, v7, s0
; GFX10-NEXT: v_cmp_eq_u32_e64 s0, s9, v14
; GFX10-NEXT: v_cmp_eq_u32_e64 s0, s7, v14
; GFX10-NEXT: v_cndmask_b32_e64 v16, v16, v1, s0
; GFX10-NEXT: v_cmp_eq_u32_e64 s0, s9, v15
; GFX10-NEXT: v_cmp_eq_u32_e64 s0, s7, v15
; GFX10-NEXT: v_cndmask_b32_e64 v17, v18, v17, s0
; GFX10-NEXT: v_add_co_u32 v1, s0, v3, v10
; GFX10-NEXT: v_mul_hi_u32 v10, s14, v2
@@ -2117,14 +2116,14 @@ define amdgpu_kernel void @sdivrem_v2i64(ptr addrspace(1) %out0, ptr addrspace(1
; GFX10-NEXT: v_add_nc_u32_e32 v3, v8, v10
; GFX10-NEXT: v_add_co_u32 v4, s0, v4, v1
; GFX10-NEXT: v_cndmask_b32_e64 v1, 0, 1, s0
; GFX10-NEXT: v_sub_co_u32 v8, s0, v12, s8
; GFX10-NEXT: v_sub_co_u32 v8, s0, v12, s6
; GFX10-NEXT: v_subrev_co_ci_u32_e64 v10, s0, 0, v0, s0
; GFX10-NEXT: v_add3_u32 v2, v3, v1, v2
; GFX10-NEXT: v_cndmask_b32_e32 v3, v6, v19, vcc_lo
; GFX10-NEXT: v_cndmask_b32_e32 v6, v7, v20, vcc_lo
; GFX10-NEXT: v_mad_u64_u32 v[0:1], s0, s10, v4, 0
; GFX10-NEXT: v_mul_lo_u32 v7, s10, v2
; GFX10-NEXT: v_mul_lo_u32 v11, s11, v4
; GFX10-NEXT: v_mad_u64_u32 v[0:1], s0, s2, v4, 0
; GFX10-NEXT: v_mul_lo_u32 v7, s2, v2
; GFX10-NEXT: v_mul_lo_u32 v11, s3, v4
; GFX10-NEXT: v_cmp_ne_u32_e64 s0, 0, v17
; GFX10-NEXT: v_cmp_ne_u32_e32 vcc_lo, 0, v16
; GFX10-NEXT: v_mov_b32_e32 v16, 0
@@ -2139,33 +2138,33 @@ define amdgpu_kernel void @sdivrem_v2i64(ptr addrspace(1) %out0, ptr addrspace(1
; GFX10-NEXT: v_cndmask_b32_e32 v6, v14, v6, vcc_lo
; GFX10-NEXT: v_sub_nc_u32_e32 v1, s15, v1
; GFX10-NEXT: v_xor_b32_e32 v0, s18, v3
; GFX10-NEXT: v_cmp_le_u32_e32 vcc_lo, s11, v9
; GFX10-NEXT: v_cmp_le_u32_e32 vcc_lo, s3, v9
; GFX10-NEXT: v_xor_b32_e32 v3, s19, v5
; GFX10-NEXT: v_xor_b32_e32 v6, s2, v6
; GFX10-NEXT: v_xor_b32_e32 v6, s4, v6
; GFX10-NEXT: v_cndmask_b32_e64 v5, 0, -1, vcc_lo
; GFX10-NEXT: v_subrev_co_ci_u32_e64 v10, vcc_lo, s11, v1, s0
; GFX10-NEXT: v_cmp_le_u32_e32 vcc_lo, s10, v8
; GFX10-NEXT: v_subrev_co_ci_u32_e64 v10, vcc_lo, s3, v1, s0
; GFX10-NEXT: v_cmp_le_u32_e32 vcc_lo, s2, v8
; GFX10-NEXT: v_cndmask_b32_e64 v11, 0, -1, vcc_lo
; GFX10-NEXT: v_sub_co_u32 v12, vcc_lo, v8, s10
; GFX10-NEXT: v_sub_co_u32 v12, vcc_lo, v8, s2
; GFX10-NEXT: v_subrev_co_ci_u32_e64 v13, s0, 0, v10, vcc_lo
; GFX10-NEXT: v_sub_co_u32 v0, s0, v0, s18
; GFX10-NEXT: v_subrev_co_ci_u32_e64 v1, s0, s19, v3, s0
; GFX10-NEXT: v_cmp_eq_u32_e64 s0, s11, v9
; GFX10-NEXT: v_xor_b32_e32 v3, s2, v7
; GFX10-NEXT: v_subrev_co_ci_u32_e32 v10, vcc_lo, s11, v10, vcc_lo
; GFX10-NEXT: v_cmp_eq_u32_e64 s0, s3, v9
; GFX10-NEXT: v_xor_b32_e32 v3, s4, v7
; GFX10-NEXT: v_subrev_co_ci_u32_e32 v10, vcc_lo, s3, v10, vcc_lo
; GFX10-NEXT: v_cndmask_b32_e64 v5, v5, v11, s0
; GFX10-NEXT: v_cmp_le_u32_e64 s0, s11, v13
; GFX10-NEXT: v_cmp_le_u32_e64 s0, s3, v13
; GFX10-NEXT: v_cndmask_b32_e64 v7, 0, -1, s0
; GFX10-NEXT: v_cmp_le_u32_e64 s0, s10, v12
; GFX10-NEXT: v_cmp_le_u32_e64 s0, s2, v12
; GFX10-NEXT: v_cndmask_b32_e64 v11, 0, -1, s0
; GFX10-NEXT: v_add_co_u32 v14, s0, v4, 1
; GFX10-NEXT: v_add_co_ci_u32_e64 v15, s0, 0, v2, s0
; GFX10-NEXT: v_cmp_eq_u32_e64 s0, s11, v13
; GFX10-NEXT: v_cmp_eq_u32_e64 s0, s3, v13
; GFX10-NEXT: v_cndmask_b32_e64 v7, v7, v11, s0
; GFX10-NEXT: v_add_co_u32 v11, s0, v14, 1
; GFX10-NEXT: v_add_co_ci_u32_e64 v17, s0, 0, v15, s0
; GFX10-NEXT: v_cmp_ne_u32_e32 vcc_lo, 0, v7
; GFX10-NEXT: v_sub_co_u32 v7, s0, v12, s10
; GFX10-NEXT: v_sub_co_u32 v7, s0, v12, s2
; GFX10-NEXT: v_subrev_co_ci_u32_e64 v10, s0, 0, v10, s0
; GFX10-NEXT: v_cndmask_b32_e32 v11, v14, v11, vcc_lo
; GFX10-NEXT: v_cmp_ne_u32_e64 s0, 0, v5
@@ -2177,9 +2176,9 @@ define amdgpu_kernel void @sdivrem_v2i64(ptr addrspace(1) %out0, ptr addrspace(1
; GFX10-NEXT: v_cndmask_b32_e64 v8, v8, v5, s0
; GFX10-NEXT: v_cndmask_b32_e64 v7, v9, v7, s0
; GFX10-NEXT: s_xor_b64 s[0:1], s[12:13], s[16:17]
; GFX10-NEXT: v_sub_co_u32 v4, vcc_lo, v3, s2
; GFX10-NEXT: v_sub_co_u32 v4, vcc_lo, v3, s4
; GFX10-NEXT: v_xor_b32_e32 v3, s0, v10
; GFX10-NEXT: v_subrev_co_ci_u32_e32 v5, vcc_lo, s2, v6, vcc_lo
; GFX10-NEXT: v_subrev_co_ci_u32_e32 v5, vcc_lo, s4, v6, vcc_lo
; GFX10-NEXT: v_xor_b32_e32 v6, s1, v2
; GFX10-NEXT: v_xor_b32_e32 v8, s12, v8
; GFX10-NEXT: v_xor_b32_e32 v7, s12, v7
@@ -2187,9 +2186,8 @@ define amdgpu_kernel void @sdivrem_v2i64(ptr addrspace(1) %out0, ptr addrspace(1
; GFX10-NEXT: v_subrev_co_ci_u32_e32 v3, vcc_lo, s1, v6, vcc_lo
; GFX10-NEXT: v_sub_co_u32 v6, vcc_lo, v8, s12
; GFX10-NEXT: v_subrev_co_ci_u32_e32 v7, vcc_lo, s12, v7, vcc_lo
; GFX10-NEXT: s_waitcnt lgkmcnt(0)
; GFX10-NEXT: global_store_dwordx4 v16, v[0:3], s[4:5]
; GFX10-NEXT: global_store_dwordx4 v16, v[4:7], s[6:7]
; GFX10-NEXT: global_store_dwordx4 v16, v[0:3], s[8:9]
; GFX10-NEXT: global_store_dwordx4 v16, v[4:7], s[10:11]
; GFX10-NEXT: s_endpgm
%div = sdiv <2 x i64> %x, %y
store <2 x i64> %div, ptr addrspace(1) %out0


@@ -985,8 +985,8 @@ define amdgpu_kernel void @udivrem_v4i32(ptr addrspace(1) %out0, ptr addrspace(1
define amdgpu_kernel void @udivrem_v2i64(ptr addrspace(1) %out0, ptr addrspace(1) %out1, <2 x i64> %x, <2 x i64> %y) {
; GFX8-LABEL: udivrem_v2i64:
; GFX8: ; %bb.0:
; GFX8-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x10
; GFX8-NEXT: s_load_dwordx4 s[4:7], s[4:5], 0x0
; GFX8-NEXT: s_load_dwordx4 s[12:15], s[4:5], 0x20
; GFX8-NEXT: s_load_dwordx8 s[4:11], s[4:5], 0x0
; GFX8-NEXT: s_waitcnt lgkmcnt(0)
; GFX8-NEXT: v_cvt_f32_u32_e32 v0, s13
; GFX8-NEXT: v_cvt_f32_u32_e32 v1, s12
@@ -1255,7 +1255,7 @@ define amdgpu_kernel void @udivrem_v2i64(ptr addrspace(1) %out0, ptr addrspace(1
;
; GFX9-LABEL: udivrem_v2i64:
; GFX9: ; %bb.0:
; GFX9-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x10
; GFX9-NEXT: s_load_dwordx4 s[12:15], s[4:5], 0x20
; GFX9-NEXT: s_waitcnt lgkmcnt(0)
; GFX9-NEXT: v_cvt_f32_u32_e32 v0, s13
; GFX9-NEXT: v_cvt_f32_u32_e32 v1, s12
@@ -1264,7 +1264,7 @@ define amdgpu_kernel void @udivrem_v2i64(ptr addrspace(1) %out0, ptr addrspace(1
; GFX9-NEXT: v_mul_f32_e32 v0, 0x4f800000, v0
; GFX9-NEXT: v_add_f32_e32 v0, v0, v1
; GFX9-NEXT: v_rcp_iflag_f32_e32 v0, v0
; GFX9-NEXT: s_load_dwordx4 s[4:7], s[4:5], 0x0
; GFX9-NEXT: s_load_dwordx8 s[4:11], s[4:5], 0x0
; GFX9-NEXT: v_mul_f32_e32 v0, 0x5f7ffffc, v0
; GFX9-NEXT: v_mul_f32_e32 v1, 0x2f800000, v0
; GFX9-NEXT: v_trunc_f32_e32 v2, v1
@@ -1325,6 +1325,7 @@ define amdgpu_kernel void @udivrem_v2i64(ptr addrspace(1) %out0, ptr addrspace(1
; GFX9-NEXT: v_add3_u32 v1, v5, v2, v1
; GFX9-NEXT: v_add_co_u32_e32 v0, vcc, v3, v0
; GFX9-NEXT: v_addc_co_u32_e32 v1, vcc, v4, v1, vcc
; GFX9-NEXT: s_waitcnt lgkmcnt(0)
; GFX9-NEXT: v_mul_lo_u32 v2, s9, v0
; GFX9-NEXT: v_mul_lo_u32 v3, s8, v1
; GFX9-NEXT: v_mul_hi_u32 v4, s8, v0
@@ -1510,14 +1511,13 @@ define amdgpu_kernel void @udivrem_v2i64(ptr addrspace(1) %out0, ptr addrspace(1
; GFX9-NEXT: v_cndmask_b32_e32 v9, v13, v20, vcc
; GFX9-NEXT: v_cndmask_b32_e64 v7, v8, v7, s[0:1]
; GFX9-NEXT: v_cndmask_b32_e64 v8, v11, v9, s[0:1]
; GFX9-NEXT: s_waitcnt lgkmcnt(0)
; GFX9-NEXT: global_store_dwordx4 v0, v[1:4], s[4:5]
; GFX9-NEXT: global_store_dwordx4 v0, v[5:8], s[6:7]
; GFX9-NEXT: s_endpgm
;
; GFX10-LABEL: udivrem_v2i64:
; GFX10: ; %bb.0:
; GFX10-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x10
; GFX10-NEXT: s_load_dwordx4 s[12:15], s[4:5], 0x20
; GFX10-NEXT: s_waitcnt lgkmcnt(0)
; GFX10-NEXT: v_cvt_f32_u32_e32 v0, s13
; GFX10-NEXT: v_cvt_f32_u32_e32 v1, s15
@@ -1616,11 +1616,11 @@ define amdgpu_kernel void @udivrem_v2i64(ptr addrspace(1) %out0, ptr addrspace(1
; GFX10-NEXT: v_mul_lo_u32 v10, v5, v1
; GFX10-NEXT: v_mul_lo_u32 v11, v4, v1
; GFX10-NEXT: v_mul_hi_u32 v14, v5, v1
; GFX10-NEXT: v_mul_hi_u32 v1, v4, v1
; GFX10-NEXT: s_load_dwordx8 s[4:11], s[4:5], 0x0
; GFX10-NEXT: v_mul_lo_u32 v15, v8, v3
; GFX10-NEXT: v_mul_lo_u32 v16, v6, v3
; GFX10-NEXT: v_mul_hi_u32 v17, v8, v3
; GFX10-NEXT: v_mul_hi_u32 v3, v6, v3
; GFX10-NEXT: v_mul_hi_u32 v1, v4, v1
; GFX10-NEXT: v_add_co_u32 v10, s0, v12, v10
; GFX10-NEXT: v_cndmask_b32_e64 v12, 0, 1, s0
; GFX10-NEXT: v_add_co_u32 v0, s0, v11, v0
@@ -1642,65 +1642,66 @@ define amdgpu_kernel void @udivrem_v2i64(ptr addrspace(1) %out0, ptr addrspace(1
; GFX10-NEXT: v_add_nc_u32_e32 v11, v11, v13
; GFX10-NEXT: v_cndmask_b32_e64 v9, 0, 1, s0
; GFX10-NEXT: v_add_nc_u32_e32 v7, v15, v7
; GFX10-NEXT: v_mul_hi_u32 v3, v6, v3
; GFX10-NEXT: v_add_co_u32 v0, vcc_lo, v5, v0
; GFX10-NEXT: v_add_nc_u32_e32 v10, v16, v10
; GFX10-NEXT: v_add3_u32 v1, v11, v9, v1
; GFX10-NEXT: v_add_co_u32 v2, s0, v2, v7
; GFX10-NEXT: v_add_nc_u32_e32 v10, v16, v10
; GFX10-NEXT: v_cndmask_b32_e64 v7, 0, 1, s0
; GFX10-NEXT: v_mul_hi_u32 v5, s8, v0
; GFX10-NEXT: v_add_co_ci_u32_e32 v1, vcc_lo, v4, v1, vcc_lo
; GFX10-NEXT: s_waitcnt lgkmcnt(0)
; GFX10-NEXT: v_mul_lo_u32 v4, s9, v0
; GFX10-NEXT: v_mul_hi_u32 v5, s8, v0
; GFX10-NEXT: v_add3_u32 v3, v10, v7, v3
; GFX10-NEXT: v_mul_hi_u32 v0, s9, v0
; GFX10-NEXT: v_mul_lo_u32 v7, s8, v1
; GFX10-NEXT: v_mul_lo_u32 v10, s9, v1
; GFX10-NEXT: v_mul_hi_u32 v0, s9, v0
; GFX10-NEXT: v_mul_lo_u32 v9, s9, v1
; GFX10-NEXT: v_add_co_u32 v2, vcc_lo, v8, v2
; GFX10-NEXT: v_add_co_ci_u32_e32 v3, vcc_lo, v6, v3, vcc_lo
; GFX10-NEXT: v_mul_hi_u32 v6, s8, v1
; GFX10-NEXT: v_mul_hi_u32 v1, s9, v1
; GFX10-NEXT: v_add_co_u32 v4, s0, v4, v7
; GFX10-NEXT: v_cndmask_b32_e64 v7, 0, 1, s0
; GFX10-NEXT: v_add_co_u32 v0, s0, v10, v0
; GFX10-NEXT: v_add_co_u32 v0, s0, v9, v0
; GFX10-NEXT: v_cndmask_b32_e64 v8, 0, 1, s0
; GFX10-NEXT: v_add_co_u32 v4, s0, v4, v5
; GFX10-NEXT: v_cndmask_b32_e64 v4, 0, 1, s0
; GFX10-NEXT: v_add_co_u32 v0, s0, v0, v6
; GFX10-NEXT: v_cndmask_b32_e64 v5, 0, 1, s0
; GFX10-NEXT: v_mul_lo_u32 v6, s11, v2
; GFX10-NEXT: v_mul_hi_u32 v1, s9, v1
; GFX10-NEXT: v_add_nc_u32_e32 v4, v7, v4
; GFX10-NEXT: v_mul_lo_u32 v6, s11, v2
; GFX10-NEXT: v_mul_lo_u32 v7, s10, v3
; GFX10-NEXT: v_mul_lo_u32 v10, s11, v3
; GFX10-NEXT: v_add_nc_u32_e32 v5, v8, v5
; GFX10-NEXT: v_mul_hi_u32 v8, s10, v2
; GFX10-NEXT: v_add_co_u32 v4, s0, v0, v4
; GFX10-NEXT: v_cndmask_b32_e64 v0, 0, 1, s0
; GFX10-NEXT: v_mul_hi_u32 v2, s11, v2
; GFX10-NEXT: v_mul_hi_u32 v11, s10, v3
; GFX10-NEXT: v_mul_lo_u32 v9, s11, v3
; GFX10-NEXT: v_mul_hi_u32 v10, s10, v3
; GFX10-NEXT: v_add_co_u32 v6, s0, v6, v7
; GFX10-NEXT: v_add3_u32 v5, v5, v0, v1
; GFX10-NEXT: v_cndmask_b32_e64 v7, 0, 1, s0
; GFX10-NEXT: v_mad_u64_u32 v[0:1], s0, s12, v4, 0
; GFX10-NEXT: v_mul_lo_u32 v12, s13, v4
; GFX10-NEXT: v_mul_lo_u32 v13, s12, v5
; GFX10-NEXT: v_add_co_u32 v2, s0, v10, v2
; GFX10-NEXT: v_cndmask_b32_e64 v10, 0, 1, s0
; GFX10-NEXT: v_mul_lo_u32 v11, s13, v4
; GFX10-NEXT: v_mul_lo_u32 v12, s12, v5
; GFX10-NEXT: v_add_co_u32 v2, s0, v9, v2
; GFX10-NEXT: v_cndmask_b32_e64 v9, 0, 1, s0
; GFX10-NEXT: v_add_co_u32 v6, s0, v6, v8
; GFX10-NEXT: v_cndmask_b32_e64 v6, 0, 1, s0
; GFX10-NEXT: v_add_co_u32 v2, s0, v2, v11
; GFX10-NEXT: v_add_co_u32 v2, s0, v2, v10
; GFX10-NEXT: v_cndmask_b32_e64 v8, 0, 1, s0
; GFX10-NEXT: v_add3_u32 v1, v1, v13, v12
; GFX10-NEXT: v_add3_u32 v1, v1, v12, v11
; GFX10-NEXT: v_add_nc_u32_e32 v6, v7, v6
; GFX10-NEXT: v_mul_hi_u32 v3, s11, v3
; GFX10-NEXT: s_load_dwordx4 s[4:7], s[4:5], 0x0
; GFX10-NEXT: v_add_nc_u32_e32 v7, v10, v8
; GFX10-NEXT: v_mov_b32_e32 v10, 0
; GFX10-NEXT: v_add_nc_u32_e32 v7, v9, v8
; GFX10-NEXT: v_sub_nc_u32_e32 v8, s9, v1
; GFX10-NEXT: v_sub_co_u32 v10, vcc_lo, s8, v0
; GFX10-NEXT: v_sub_co_u32 v9, vcc_lo, s8, v0
; GFX10-NEXT: v_sub_co_ci_u32_e64 v11, s0, s9, v1, vcc_lo
; GFX10-NEXT: v_subrev_co_ci_u32_e32 v0, vcc_lo, s13, v8, vcc_lo
; GFX10-NEXT: v_cmp_le_u32_e32 vcc_lo, s12, v10
; GFX10-NEXT: v_mov_b32_e32 v9, 0
; GFX10-NEXT: v_cmp_le_u32_e32 vcc_lo, s12, v9
; GFX10-NEXT: v_cndmask_b32_e64 v1, 0, -1, vcc_lo
; GFX10-NEXT: v_sub_co_u32 v8, vcc_lo, v10, s12
; GFX10-NEXT: v_sub_co_u32 v8, vcc_lo, v9, s12
; GFX10-NEXT: v_subrev_co_ci_u32_e64 v12, s0, 0, v0, vcc_lo
; GFX10-NEXT: v_cmp_le_u32_e64 s0, s13, v11
; GFX10-NEXT: v_subrev_co_ci_u32_e32 v0, vcc_lo, s13, v0, vcc_lo
@@ -1747,34 +1748,33 @@ define amdgpu_kernel void @udivrem_v2i64(ptr addrspace(1) %out0, ptr addrspace(1
; GFX10-NEXT: v_sub_co_u32 v14, s0, v7, s14
; GFX10-NEXT: v_subrev_co_ci_u32_e64 v15, s2, 0, v2, s0
; GFX10-NEXT: v_cndmask_b32_e64 v5, v5, v8, s1
; GFX10-NEXT: v_cndmask_b32_e32 v4, v10, v4, vcc_lo
; GFX10-NEXT: v_cndmask_b32_e32 v4, v9, v4, vcc_lo
; GFX10-NEXT: v_subrev_co_ci_u32_e64 v2, s0, s15, v2, s0
; GFX10-NEXT: v_cmp_le_u32_e64 s1, s15, v15
; GFX10-NEXT: v_cndmask_b32_e64 v8, 0, -1, s1
; GFX10-NEXT: v_cmp_le_u32_e64 s1, s14, v14
; GFX10-NEXT: v_cndmask_b32_e64 v10, 0, -1, s1
; GFX10-NEXT: v_cndmask_b32_e64 v9, 0, -1, s1
; GFX10-NEXT: v_add_co_u32 v16, s1, v6, 1
; GFX10-NEXT: v_add_co_ci_u32_e64 v17, s1, 0, v3, s1
; GFX10-NEXT: v_cmp_eq_u32_e64 s1, s15, v15
; GFX10-NEXT: v_cndmask_b32_e64 v8, v8, v10, s1
; GFX10-NEXT: v_add_co_u32 v10, s1, v16, 1
; GFX10-NEXT: v_cndmask_b32_e64 v8, v8, v9, s1
; GFX10-NEXT: v_add_co_u32 v9, s1, v16, 1
; GFX10-NEXT: v_add_co_ci_u32_e64 v18, s1, 0, v17, s1
; GFX10-NEXT: v_cmp_ne_u32_e64 s0, 0, v8
; GFX10-NEXT: v_sub_co_u32 v8, s1, v14, s14
; GFX10-NEXT: v_subrev_co_ci_u32_e64 v2, s1, 0, v2, s1
; GFX10-NEXT: v_cndmask_b32_e64 v10, v16, v10, s0
; GFX10-NEXT: v_cndmask_b32_e64 v9, v16, v9, s0
; GFX10-NEXT: v_cndmask_b32_e64 v16, v17, v18, s0
; GFX10-NEXT: v_cmp_ne_u32_e64 s1, 0, v5
; GFX10-NEXT: v_cndmask_b32_e64 v8, v14, v8, s0
; GFX10-NEXT: v_cndmask_b32_e64 v14, v15, v2, s0
; GFX10-NEXT: v_cndmask_b32_e32 v5, v11, v12, vcc_lo
; GFX10-NEXT: v_cndmask_b32_e64 v2, v6, v10, s1
; GFX10-NEXT: v_cndmask_b32_e64 v2, v6, v9, s1
; GFX10-NEXT: v_cndmask_b32_e64 v3, v3, v16, s1
; GFX10-NEXT: v_cndmask_b32_e64 v6, v7, v8, s1
; GFX10-NEXT: v_cndmask_b32_e64 v7, v13, v14, s1
; GFX10-NEXT: s_waitcnt lgkmcnt(0)
; GFX10-NEXT: global_store_dwordx4 v9, v[0:3], s[4:5]
; GFX10-NEXT: global_store_dwordx4 v9, v[4:7], s[6:7]
; GFX10-NEXT: global_store_dwordx4 v10, v[0:3], s[4:5]
; GFX10-NEXT: global_store_dwordx4 v10, v[4:7], s[6:7]
; GFX10-NEXT: s_endpgm
%div = udiv <2 x i64> %x, %y
store <2 x i64> %div, ptr addrspace(1) %out0

@@ -13,16 +13,17 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: $sgpr0 = S_ADD_U32 $sgpr0, $sgpr17, implicit-def $scc, implicit-def $sgpr0_sgpr1_sgpr2_sgpr3
; GFX90A-NEXT: $sgpr1 = S_ADDC_U32 $sgpr1, 0, implicit-def dead $scc, implicit $scc, implicit-def $sgpr0_sgpr1_sgpr2_sgpr3
; GFX90A-NEXT: renamable $vgpr31 = COPY $vgpr0, implicit $exec
; GFX90A-NEXT: renamable $sgpr20_sgpr21_sgpr22_sgpr23 = S_LOAD_DWORDX4_IMM renamable $sgpr8_sgpr9, 24, 0 :: (dereferenceable invariant load (s128) from %ir.arg3.kernarg.offset.align.down, align 8, addrspace 4)
; GFX90A-NEXT: renamable $sgpr33 = S_LOAD_DWORD_IMM renamable $sgpr8_sgpr9, 24, 0 :: (dereferenceable invariant load (s32) from %ir.arg4.kernarg.offset.align.down, align 8, addrspace 4)
; GFX90A-NEXT: renamable $sgpr20_sgpr21_sgpr22_sgpr23 = S_LOAD_DWORDX4_IMM renamable $sgpr8_sgpr9, 24, 0 :: (dereferenceable invariant load (s128) from %ir.arg6.kernarg.offset.align.down, align 8, addrspace 4)
; GFX90A-NEXT: renamable $sgpr17 = S_LOAD_DWORD_IMM renamable $sgpr8_sgpr9, 40, 0 :: (dereferenceable invariant load (s32) from %ir.arg6.kernarg.offset.align.down + 16, align 8, addrspace 4)
; GFX90A-NEXT: renamable $sgpr24_sgpr25_sgpr26_sgpr27 = S_LOAD_DWORDX4_IMM renamable $sgpr8_sgpr9, 0, 0 :: (dereferenceable invariant load (s128) from %ir.arg.kernarg.offset1, addrspace 4)
; GFX90A-NEXT: renamable $sgpr58_sgpr59 = S_LOAD_DWORDX2_IMM renamable $sgpr8_sgpr9, 16, 0 :: (dereferenceable invariant load (s64) from %ir.arg.kernarg.offset1 + 16, align 16, addrspace 4)
; GFX90A-NEXT: renamable $sgpr33 = S_LOAD_DWORD_IMM renamable $sgpr8_sgpr9, 24, 0 :: (dereferenceable invariant load (s32) from %ir.arg4.kernarg.offset.align.down, align 8, addrspace 4)
; GFX90A-NEXT: S_BITCMP1_B32 renamable $sgpr20, 0, implicit-def $scc
; GFX90A-NEXT: renamable $sgpr12_sgpr13 = S_CSELECT_B64 -1, 0, implicit $scc
; GFX90A-NEXT: S_BITCMP1_B32 renamable $sgpr33, 0, implicit-def $scc
; GFX90A-NEXT: renamable $sgpr12_sgpr13 = S_CSELECT_B64 -1, 0, implicit killed $scc
; GFX90A-NEXT: renamable $sgpr34_sgpr35 = S_MOV_B64 -1
; GFX90A-NEXT: renamable $sgpr28_sgpr29 = S_XOR_B64 renamable $sgpr12_sgpr13, -1, implicit-def dead $scc
; GFX90A-NEXT: S_BITCMP1_B32 renamable $sgpr33, 8, implicit-def $scc
; GFX90A-NEXT: renamable $sgpr18_sgpr19 = S_CSELECT_B64 -1, 0, implicit $scc
; GFX90A-NEXT: renamable $sgpr18_sgpr19 = S_CSELECT_B64 -1, 0, implicit killed $scc
; GFX90A-NEXT: renamable $sgpr30_sgpr31 = S_XOR_B64 killed renamable $sgpr18_sgpr19, -1, implicit-def dead $scc
; GFX90A-NEXT: renamable $vgpr3 = V_MOV_B32_e32 0, implicit $exec
; GFX90A-NEXT: renamable $vgpr2 = DS_READ_B32_gfx9 renamable $vgpr3, 0, 0, implicit $exec :: (load (s32) from `ptr addrspace(3) null`, align 8, addrspace 3)
@@ -32,7 +33,7 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.1.bb103:
; GFX90A-NEXT: successors: %bb.58(0x40000000), %bb.2(0x40000000)
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $sgpr33, $vgpr31, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr18_sgpr19, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr58_sgpr59:0x000000000000000F, $sgpr20_sgpr21_sgpr22_sgpr23:0x00000000000000FC, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000FF, $vgpr2_vgpr3:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $sgpr17, $sgpr33, $vgpr31, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr18_sgpr19, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr58_sgpr59:0x000000000000000F, $sgpr20_sgpr21_sgpr22_sgpr23:0x00000000000000FF, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000FF, $vgpr2_vgpr3:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: renamable $sgpr34_sgpr35 = S_MOV_B64 0
; GFX90A-NEXT: renamable $vcc = S_AND_B64 $exec, renamable $sgpr30_sgpr31, implicit-def dead $scc
@@ -45,10 +46,10 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.2:
; GFX90A-NEXT: successors: %bb.3(0x80000000)
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $vgpr24, $sgpr33, $vgpr31, $agpr0, $vgpr26, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8, $sgpr9, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr18_sgpr19, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr34_sgpr35, $sgpr58, $sgpr59, $sgpr21, $sgpr22_sgpr23, $sgpr24_sgpr25_sgpr26, $sgpr26_sgpr27, $vgpr2, $vgpr3, $vgpr20, $vgpr22
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $vgpr24, $sgpr33, $vgpr31, $agpr0, $vgpr26, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8, $sgpr9, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr18_sgpr19, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr34_sgpr35, $sgpr58, $sgpr59, $sgpr22, $sgpr20_sgpr21, $sgpr24_sgpr25_sgpr26, $sgpr26_sgpr27, $vgpr2, $vgpr3, $vgpr20, $vgpr22
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: renamable $sgpr17 = IMPLICIT_DEF
; GFX90A-NEXT: renamable $sgpr20 = IMPLICIT_DEF
; GFX90A-NEXT: renamable $sgpr23 = IMPLICIT_DEF
; GFX90A-NEXT: renamable $agpr1 = IMPLICIT_DEF
; GFX90A-NEXT: renamable $vgpr21 = IMPLICIT_DEF
; GFX90A-NEXT: renamable $vgpr23 = IMPLICIT_DEF
@@ -58,7 +59,7 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.3.Flow17:
; GFX90A-NEXT: successors: %bb.4(0x40000000), %bb.57(0x40000000)
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $sgpr17, $sgpr20, $sgpr33, $vgpr31, $agpr0_agpr1:0x000000000000000F, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr18_sgpr19, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr34_sgpr35, $sgpr36_sgpr37, $sgpr58_sgpr59:0x000000000000000F, $sgpr20_sgpr21_sgpr22_sgpr23:0x000000000000003C, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000FF, $vgpr2_vgpr3:0x000000000000000F, $vgpr20_vgpr21:0x000000000000000F, $vgpr22_vgpr23:0x000000000000000F, $vgpr24_vgpr25:0x000000000000000F, $vgpr26_vgpr27:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $sgpr17, $sgpr23, $sgpr33, $vgpr31, $agpr0_agpr1:0x000000000000000F, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr18_sgpr19, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr34_sgpr35, $sgpr36_sgpr37, $sgpr58_sgpr59:0x000000000000000F, $sgpr20_sgpr21_sgpr22_sgpr23:0x000000000000003F, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000FF, $vgpr2_vgpr3:0x000000000000000F, $vgpr20_vgpr21:0x000000000000000F, $vgpr22_vgpr23:0x000000000000000F, $vgpr24_vgpr25:0x000000000000000F, $vgpr26_vgpr27:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: renamable $vgpr4 = V_AND_B32_e32 1023, $vgpr31, implicit $exec
; GFX90A-NEXT: renamable $vcc = S_AND_B64 $exec, killed renamable $sgpr34_sgpr35, implicit-def dead $scc
@@ -66,7 +67,7 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.4.bb15:
; GFX90A-NEXT: successors: %bb.35(0x40000000), %bb.5(0x40000000)
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $sgpr33, $vgpr31, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr36_sgpr37, $sgpr58_sgpr59:0x000000000000000F, $sgpr20_sgpr21_sgpr22_sgpr23:0x000000000000003C, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000FF, $vgpr2_vgpr3:0x000000000000000F, $vgpr4_vgpr5:0x0000000000000003, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr18_sgpr19
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $sgpr33, $vgpr31, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr36_sgpr37, $sgpr58_sgpr59:0x000000000000000F, $sgpr20_sgpr21_sgpr22_sgpr23:0x000000000000003F, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000FF, $vgpr2_vgpr3:0x000000000000000F, $vgpr4_vgpr5:0x0000000000000003, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr18_sgpr19
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: renamable $vgpr0_vgpr1 = V_LSHLREV_B64_e64 2, $vgpr2_vgpr3, implicit $exec
; GFX90A-NEXT: renamable $vgpr5 = COPY renamable $sgpr25, implicit $exec
@@ -200,7 +201,7 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $vgpr31, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr34_sgpr35, $sgpr36_sgpr37, $sgpr38_sgpr39, $sgpr40_sgpr41, $sgpr42_sgpr43, $sgpr44_sgpr45, $sgpr46_sgpr47, $sgpr48_sgpr49, $sgpr50_sgpr51, $sgpr58_sgpr59, $vgpr0_vgpr1:0x000000000000000F, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr44_vgpr45:0x000000000000000F, $vgpr46_vgpr47:0x000000000000000F, $vgpr56_vgpr57:0x000000000000000F, $vgpr58_vgpr59:0x000000000000000F, $vgpr60_vgpr61:0x000000000000000F, $vgpr62_vgpr63:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: renamable $sgpr8 = S_ADD_U32 renamable $sgpr8, 48, implicit-def $scc
; GFX90A-NEXT: renamable $sgpr9 = S_ADDC_U32 killed renamable $sgpr9, 0, implicit-def dead $scc, implicit $scc
; GFX90A-NEXT: renamable $sgpr9 = S_ADDC_U32 killed renamable $sgpr9, 0, implicit-def dead $scc, implicit killed $scc
; GFX90A-NEXT: renamable $sgpr12_sgpr13 = SI_PC_ADD_REL_OFFSET target-flags(amdgpu-gotprel32-lo) @f2 + 4, target-flags(amdgpu-gotprel32-hi) @f2 + 12, implicit-def dead $scc
; GFX90A-NEXT: renamable $sgpr18_sgpr19 = S_LOAD_DWORDX2_IMM killed renamable $sgpr12_sgpr13, 0, 0 :: (dereferenceable invariant load (s64) from got, addrspace 4)
; GFX90A-NEXT: $sgpr12 = COPY killed renamable $sgpr14
@@ -365,7 +366,7 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.35.bb20:
; GFX90A-NEXT: successors: %bb.37(0x40000000), %bb.36(0x40000000)
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $sgpr33, $vgpr31, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr36_sgpr37, $sgpr58_sgpr59:0x000000000000000F, $sgpr20_sgpr21_sgpr22_sgpr23:0x000000000000003C, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000F0, $vgpr2_vgpr3:0x000000000000000F, $vgpr4_vgpr5:0x000000000000000F, $vgpr40_vgpr41:0x000000000000000F, $vgpr46_vgpr47:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr18_sgpr19
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $sgpr33, $vgpr31, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr36_sgpr37, $sgpr58_sgpr59:0x000000000000000F, $sgpr20_sgpr21_sgpr22_sgpr23:0x000000000000003F, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000F0, $vgpr2_vgpr3:0x000000000000000F, $vgpr4_vgpr5:0x000000000000000F, $vgpr40_vgpr41:0x000000000000000F, $vgpr46_vgpr47:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr18_sgpr19
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: renamable $vgpr0 = GLOBAL_LOAD_SBYTE renamable $vgpr40_vgpr41, 1024, 0, implicit $exec :: (load (s8) from %ir.i21, addrspace 1)
; GFX90A-NEXT: renamable $vgpr42 = V_ADD_CO_U32_e32 1024, $vgpr40, implicit-def $vcc, implicit $exec
@@ -412,7 +413,7 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.37.bb27:
; GFX90A-NEXT: successors: %bb.39(0x40000000), %bb.38(0x40000000)
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $sgpr33, $vgpr31, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr24_sgpr25, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr34_sgpr35, $sgpr36_sgpr37, $sgpr58_sgpr59:0x000000000000000F, $sgpr20_sgpr21_sgpr22_sgpr23:0x000000000000003C, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000F0, $vgpr2_vgpr3:0x000000000000000F, $vgpr4_vgpr5:0x000000000000000F, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr46_vgpr47:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr18_sgpr19, $sgpr56_sgpr57, $sgpr54_sgpr55, $sgpr52_sgpr53, $sgpr50_sgpr51, $sgpr48_sgpr49, $sgpr46_sgpr47, $sgpr44_sgpr45, $sgpr42_sgpr43
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $sgpr33, $vgpr31, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr24_sgpr25, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr34_sgpr35, $sgpr36_sgpr37, $sgpr58_sgpr59:0x000000000000000F, $sgpr20_sgpr21_sgpr22_sgpr23:0x000000000000003F, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000F0, $vgpr2_vgpr3:0x000000000000000F, $vgpr4_vgpr5:0x000000000000000F, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr46_vgpr47:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr18_sgpr19, $sgpr56_sgpr57, $sgpr54_sgpr55, $sgpr52_sgpr53, $sgpr50_sgpr51, $sgpr48_sgpr49, $sgpr46_sgpr47, $sgpr44_sgpr45, $sgpr42_sgpr43
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: renamable $vgpr0 = GLOBAL_LOAD_UBYTE renamable $vgpr40_vgpr41, 2048, 0, implicit $exec :: (load (s8) from %ir.i28, addrspace 1)
; GFX90A-NEXT: renamable $vgpr44 = V_ADD_CO_U32_e32 2048, $vgpr40, implicit-def $vcc, implicit $exec
@@ -463,7 +464,7 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.39.bb34:
; GFX90A-NEXT: successors: %bb.41(0x40000000), %bb.40(0x40000000)
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $sgpr33, $vgpr31, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr24_sgpr25, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr34_sgpr35, $sgpr36_sgpr37, $sgpr38_sgpr39, $sgpr58_sgpr59:0x000000000000000F, $sgpr20_sgpr21_sgpr22_sgpr23:0x000000000000003C, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000F0, $vgpr2_vgpr3:0x000000000000000F, $vgpr4_vgpr5:0x000000000000000F, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr44_vgpr45:0x000000000000000F, $vgpr46_vgpr47:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr18_sgpr19, $sgpr56_sgpr57, $sgpr54_sgpr55, $sgpr52_sgpr53, $sgpr50_sgpr51, $sgpr48_sgpr49, $sgpr46_sgpr47, $sgpr44_sgpr45
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $sgpr33, $vgpr31, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr24_sgpr25, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr34_sgpr35, $sgpr36_sgpr37, $sgpr38_sgpr39, $sgpr58_sgpr59:0x000000000000000F, $sgpr20_sgpr21_sgpr22_sgpr23:0x000000000000003F, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000F0, $vgpr2_vgpr3:0x000000000000000F, $vgpr4_vgpr5:0x000000000000000F, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr44_vgpr45:0x000000000000000F, $vgpr46_vgpr47:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr18_sgpr19, $sgpr56_sgpr57, $sgpr54_sgpr55, $sgpr52_sgpr53, $sgpr50_sgpr51, $sgpr48_sgpr49, $sgpr46_sgpr47, $sgpr44_sgpr45
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: renamable $vgpr0 = GLOBAL_LOAD_UBYTE renamable $vgpr40_vgpr41, 3072, 0, implicit $exec :: (load (s8) from %ir.i35, addrspace 1)
; GFX90A-NEXT: renamable $vgpr56 = V_ADD_CO_U32_e32 3072, $vgpr40, implicit-def $vcc, implicit $exec
@@ -512,7 +513,7 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.41.bb41:
; GFX90A-NEXT: successors: %bb.46(0x40000000), %bb.42(0x40000000)
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $sgpr33, $vgpr31, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr24_sgpr25, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr34_sgpr35, $sgpr36_sgpr37, $sgpr38_sgpr39, $sgpr40_sgpr41, $sgpr58_sgpr59:0x000000000000000F, $sgpr20_sgpr21_sgpr22_sgpr23:0x000000000000003C, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000F0, $vgpr2_vgpr3:0x000000000000000F, $vgpr4_vgpr5:0x000000000000000F, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr44_vgpr45:0x000000000000000F, $vgpr46_vgpr47:0x000000000000000F, $vgpr56_vgpr57:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr56_sgpr57, $sgpr54_sgpr55, $sgpr52_sgpr53, $sgpr50_sgpr51, $sgpr48_sgpr49, $sgpr46_sgpr47
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $sgpr33, $vgpr31, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr24_sgpr25, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr34_sgpr35, $sgpr36_sgpr37, $sgpr38_sgpr39, $sgpr40_sgpr41, $sgpr58_sgpr59:0x000000000000000F, $sgpr20_sgpr21_sgpr22_sgpr23:0x000000000000003F, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000F0, $vgpr2_vgpr3:0x000000000000000F, $vgpr4_vgpr5:0x000000000000000F, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr44_vgpr45:0x000000000000000F, $vgpr46_vgpr47:0x000000000000000F, $vgpr56_vgpr57:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr56_sgpr57, $sgpr54_sgpr55, $sgpr52_sgpr53, $sgpr50_sgpr51, $sgpr48_sgpr49, $sgpr46_sgpr47
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: renamable $vgpr58 = V_ADD_CO_U32_e32 4096, $vgpr40, implicit-def $vcc, implicit $exec
; GFX90A-NEXT: renamable $sgpr18_sgpr19 = COPY $vcc
@@ -564,10 +565,10 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.43.bb55:
; GFX90A-NEXT: successors: %bb.48(0x40000000), %bb.44(0x40000000)
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $sgpr33, $vgpr20, $vgpr31, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr18_sgpr19, $sgpr24_sgpr25, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr34_sgpr35, $sgpr36_sgpr37, $sgpr38_sgpr39, $sgpr40_sgpr41, $sgpr42_sgpr43, $sgpr58_sgpr59:0x000000000000000F, $sgpr20_sgpr21_sgpr22_sgpr23:0x000000000000003C, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000F0, $vgpr2_vgpr3:0x000000000000000F, $vgpr4_vgpr5:0x000000000000000F, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr44_vgpr45:0x000000000000000F, $vgpr46_vgpr47:0x000000000000000F, $vgpr56_vgpr57:0x000000000000000F, $vgpr58_vgpr59:0x0000000000000003, $vgpr60_vgpr61:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr44_sgpr45, $sgpr52_sgpr53, $sgpr56_sgpr57, $sgpr54_sgpr55, $sgpr46_sgpr47
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $sgpr33, $vgpr20, $vgpr31, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr18_sgpr19, $sgpr24_sgpr25, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr34_sgpr35, $sgpr36_sgpr37, $sgpr38_sgpr39, $sgpr40_sgpr41, $sgpr42_sgpr43, $sgpr58_sgpr59:0x000000000000000F, $sgpr20_sgpr21_sgpr22_sgpr23:0x000000000000003F, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000F0, $vgpr2_vgpr3:0x000000000000000F, $vgpr4_vgpr5:0x000000000000000F, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr44_vgpr45:0x000000000000000F, $vgpr46_vgpr47:0x000000000000000F, $vgpr56_vgpr57:0x000000000000000F, $vgpr58_vgpr59:0x0000000000000003, $vgpr60_vgpr61:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr44_sgpr45, $sgpr52_sgpr53, $sgpr56_sgpr57, $sgpr54_sgpr55, $sgpr46_sgpr47
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: S_BITCMP1_B32 renamable $sgpr33, 16, implicit-def $scc
; GFX90A-NEXT: renamable $sgpr64_sgpr65 = S_CSELECT_B64 -1, 0, implicit $scc
; GFX90A-NEXT: S_BITCMP1_B32 killed renamable $sgpr33, 16, implicit-def $scc
; GFX90A-NEXT: renamable $sgpr64_sgpr65 = S_CSELECT_B64 -1, 0, implicit killed $scc
; GFX90A-NEXT: renamable $sgpr48_sgpr49 = S_XOR_B64 renamable $sgpr64_sgpr65, -1, implicit-def dead $scc
; GFX90A-NEXT: renamable $vgpr62 = V_ADD_CO_U32_e32 6144, $vgpr40, implicit-def $vcc, implicit $exec
; GFX90A-NEXT: renamable $vgpr63, dead renamable $vcc = V_ADDC_U32_e64 0, $vgpr41, killed $vcc, 0, implicit $exec
@@ -614,7 +615,7 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.46.bb48:
; GFX90A-NEXT: successors: %bb.43(0x40000000), %bb.47(0x40000000)
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $sgpr33, $vgpr20, $vgpr31, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr24_sgpr25, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr34_sgpr35, $sgpr36_sgpr37, $sgpr38_sgpr39, $sgpr40_sgpr41, $sgpr42_sgpr43, $sgpr58_sgpr59:0x000000000000000F, $sgpr20_sgpr21_sgpr22_sgpr23:0x000000000000003C, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000F0, $vgpr2_vgpr3:0x000000000000000F, $vgpr4_vgpr5:0x000000000000000F, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr44_vgpr45:0x000000000000000F, $vgpr46_vgpr47:0x000000000000000F, $vgpr56_vgpr57:0x000000000000000F, $vgpr58_vgpr59:0x0000000000000003, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr46_sgpr47, $sgpr56_sgpr57, $sgpr54_sgpr55, $sgpr44_sgpr45, $sgpr52_sgpr53
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $sgpr33, $vgpr20, $vgpr31, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr24_sgpr25, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr34_sgpr35, $sgpr36_sgpr37, $sgpr38_sgpr39, $sgpr40_sgpr41, $sgpr42_sgpr43, $sgpr58_sgpr59:0x000000000000000F, $sgpr20_sgpr21_sgpr22_sgpr23:0x000000000000003F, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000F0, $vgpr2_vgpr3:0x000000000000000F, $vgpr4_vgpr5:0x000000000000000F, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr44_vgpr45:0x000000000000000F, $vgpr46_vgpr47:0x000000000000000F, $vgpr56_vgpr57:0x000000000000000F, $vgpr58_vgpr59:0x0000000000000003, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr46_sgpr47, $sgpr56_sgpr57, $sgpr54_sgpr55, $sgpr44_sgpr45, $sgpr52_sgpr53
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: renamable $vgpr60 = V_ADD_CO_U32_e32 5120, $vgpr40, implicit-def $vcc, implicit $exec
; GFX90A-NEXT: renamable $sgpr18_sgpr19 = COPY $vcc
@@ -665,7 +666,7 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.48.bb63:
; GFX90A-NEXT: successors: %bb.50(0x40000000), %bb.49(0x40000000)
; GFX90A-NEXT: liveins: $vcc, $sgpr14, $sgpr15, $sgpr16, $sgpr33, $vgpr20, $vgpr31, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr18_sgpr19, $sgpr24_sgpr25, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr34_sgpr35, $sgpr36_sgpr37, $sgpr38_sgpr39, $sgpr40_sgpr41, $sgpr42_sgpr43, $sgpr48_sgpr49, $sgpr58_sgpr59:0x000000000000000F, $sgpr64_sgpr65, $sgpr20_sgpr21_sgpr22_sgpr23:0x000000000000003C, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000F0, $vgpr2_vgpr3:0x000000000000000F, $vgpr4_vgpr5:0x000000000000000F, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr44_vgpr45:0x000000000000000F, $vgpr46_vgpr47:0x000000000000000F, $vgpr56_vgpr57:0x000000000000000F, $vgpr58_vgpr59:0x0000000000000003, $vgpr60_vgpr61:0x000000000000000F, $vgpr62_vgpr63:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr52_sgpr53, $sgpr56_sgpr57, $sgpr54_sgpr55, $sgpr46_sgpr47
; GFX90A-NEXT: liveins: $vcc, $sgpr14, $sgpr15, $sgpr16, $vgpr20, $vgpr31, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr18_sgpr19, $sgpr24_sgpr25, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr34_sgpr35, $sgpr36_sgpr37, $sgpr38_sgpr39, $sgpr40_sgpr41, $sgpr42_sgpr43, $sgpr48_sgpr49, $sgpr58_sgpr59:0x000000000000000F, $sgpr64_sgpr65, $sgpr20_sgpr21_sgpr22_sgpr23:0x000000000000003F, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000F0, $vgpr2_vgpr3:0x000000000000000F, $vgpr4_vgpr5:0x000000000000000F, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr44_vgpr45:0x000000000000000F, $vgpr46_vgpr47:0x000000000000000F, $vgpr56_vgpr57:0x000000000000000F, $vgpr58_vgpr59:0x0000000000000003, $vgpr60_vgpr61:0x000000000000000F, $vgpr62_vgpr63:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr52_sgpr53, $sgpr56_sgpr57, $sgpr54_sgpr55, $sgpr46_sgpr47
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: renamable $sgpr44_sgpr45 = S_MOV_B64 0
; GFX90A-NEXT: S_CBRANCH_VCCNZ %bb.50, implicit $vcc
@@ -679,7 +680,7 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.50.bb68:
; GFX90A-NEXT: successors: %bb.54(0x40000000), %bb.51(0x40000000)
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $sgpr33, $vgpr20, $vgpr31, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr18_sgpr19, $sgpr24_sgpr25, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr34_sgpr35, $sgpr36_sgpr37, $sgpr38_sgpr39, $sgpr40_sgpr41, $sgpr42_sgpr43, $sgpr44_sgpr45, $sgpr48_sgpr49, $sgpr58_sgpr59:0x000000000000000F, $sgpr64_sgpr65, $sgpr20_sgpr21_sgpr22_sgpr23:0x000000000000003C, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000F0, $vgpr2_vgpr3:0x000000000000000F, $vgpr4_vgpr5:0x000000000000000F, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr44_vgpr45:0x000000000000000F, $vgpr46_vgpr47:0x000000000000000F, $vgpr56_vgpr57:0x000000000000000F, $vgpr58_vgpr59:0x0000000000000003, $vgpr60_vgpr61:0x000000000000000F, $vgpr62_vgpr63:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr46_sgpr47, $sgpr52_sgpr53, $sgpr56_sgpr57, $sgpr54_sgpr55
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $vgpr20, $vgpr31, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr18_sgpr19, $sgpr24_sgpr25, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr34_sgpr35, $sgpr36_sgpr37, $sgpr38_sgpr39, $sgpr40_sgpr41, $sgpr42_sgpr43, $sgpr44_sgpr45, $sgpr48_sgpr49, $sgpr58_sgpr59:0x000000000000000F, $sgpr64_sgpr65, $sgpr20_sgpr21_sgpr22_sgpr23:0x000000000000003F, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000F0, $vgpr2_vgpr3:0x000000000000000F, $vgpr4_vgpr5:0x000000000000000F, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr44_vgpr45:0x000000000000000F, $vgpr46_vgpr47:0x000000000000000F, $vgpr56_vgpr57:0x000000000000000F, $vgpr58_vgpr59:0x0000000000000003, $vgpr60_vgpr61:0x000000000000000F, $vgpr62_vgpr63:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr46_sgpr47, $sgpr52_sgpr53, $sgpr56_sgpr57, $sgpr54_sgpr55
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: renamable $vgpr0_vgpr1 = V_LSHLREV_B64_e64 3, $vgpr4_vgpr5, implicit $exec
; GFX90A-NEXT: renamable $vcc = S_AND_B64 $exec, killed renamable $sgpr48_sgpr49, implicit-def dead $scc
@@ -707,13 +708,13 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.52.bb80:
; GFX90A-NEXT: successors: %bb.59(0x40000000), %bb.53(0x40000000)
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $sgpr33, $vgpr20, $vgpr31, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr18_sgpr19, $sgpr24_sgpr25, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr34_sgpr35, $sgpr36_sgpr37, $sgpr38_sgpr39, $sgpr40_sgpr41, $sgpr42_sgpr43, $sgpr44_sgpr45, $sgpr46_sgpr47, $sgpr48_sgpr49, $sgpr58_sgpr59:0x000000000000000F, $sgpr60_sgpr61, $sgpr64_sgpr65, $sgpr20_sgpr21_sgpr22_sgpr23:0x000000000000003C, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000F0, $vgpr0_vgpr1:0x000000000000000F, $vgpr2_vgpr3:0x000000000000000F, $vgpr4_vgpr5:0x0000000000000003, $vgpr6_vgpr7:0x000000000000000F, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr44_vgpr45:0x000000000000000F, $vgpr46_vgpr47:0x000000000000000F, $vgpr56_vgpr57:0x000000000000000F, $vgpr58_vgpr59:0x0000000000000003, $vgpr60_vgpr61:0x000000000000000F, $vgpr62_vgpr63:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $vgpr20, $vgpr31, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr18_sgpr19, $sgpr24_sgpr25, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr34_sgpr35, $sgpr36_sgpr37, $sgpr38_sgpr39, $sgpr40_sgpr41, $sgpr42_sgpr43, $sgpr44_sgpr45, $sgpr46_sgpr47, $sgpr48_sgpr49, $sgpr58_sgpr59:0x000000000000000F, $sgpr60_sgpr61, $sgpr64_sgpr65, $sgpr20_sgpr21_sgpr22_sgpr23:0x000000000000003F, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000F0, $vgpr0_vgpr1:0x000000000000000F, $vgpr2_vgpr3:0x000000000000000F, $vgpr4_vgpr5:0x0000000000000003, $vgpr6_vgpr7:0x000000000000000F, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr44_vgpr45:0x000000000000000F, $vgpr46_vgpr47:0x000000000000000F, $vgpr56_vgpr57:0x000000000000000F, $vgpr58_vgpr59:0x0000000000000003, $vgpr60_vgpr61:0x000000000000000F, $vgpr62_vgpr63:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: renamable $sgpr17 = S_BFE_U32 killed renamable $sgpr33, 65560, implicit-def dead $scc
; GFX90A-NEXT: renamable $sgpr17 = S_BFE_U32 renamable $sgpr20, 65560, implicit-def dead $scc
; GFX90A-NEXT: S_CMP_EQ_U32 killed renamable $sgpr17, 0, implicit-def $scc
; GFX90A-NEXT: renamable $vgpr8 = V_ADD_CO_U32_e32 4096, $vgpr0, implicit-def $vcc, implicit $exec
; GFX90A-NEXT: renamable $vgpr9, dead renamable $vcc = V_ADDC_U32_e64 0, $vgpr1, killed $vcc, 0, implicit $exec
; GFX90A-NEXT: S_CBRANCH_SCC1 %bb.59, implicit $scc
; GFX90A-NEXT: S_CBRANCH_SCC1 %bb.59, implicit killed $scc
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.53:
; GFX90A-NEXT: successors: %bb.61(0x80000000)
@@ -736,7 +737,7 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.54.bb73:
; GFX90A-NEXT: successors: %bb.52(0x40000000), %bb.55(0x40000000)
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $sgpr33, $vgpr20, $vgpr31, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr18_sgpr19, $sgpr24_sgpr25, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr34_sgpr35, $sgpr36_sgpr37, $sgpr38_sgpr39, $sgpr40_sgpr41, $sgpr42_sgpr43, $sgpr44_sgpr45, $sgpr46_sgpr47, $sgpr58_sgpr59:0x000000000000000F, $sgpr64_sgpr65, $sgpr20_sgpr21_sgpr22_sgpr23:0x000000000000003C, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000F0, $vgpr0_vgpr1:0x000000000000000F, $vgpr2_vgpr3:0x000000000000000F, $vgpr4_vgpr5:0x0000000000000003, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr44_vgpr45:0x000000000000000F, $vgpr46_vgpr47:0x000000000000000F, $vgpr56_vgpr57:0x000000000000000F, $vgpr58_vgpr59:0x0000000000000003, $vgpr60_vgpr61:0x000000000000000F, $vgpr62_vgpr63:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr52_sgpr53, $sgpr56_sgpr57
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $vgpr20, $vgpr31, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr18_sgpr19, $sgpr24_sgpr25, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr34_sgpr35, $sgpr36_sgpr37, $sgpr38_sgpr39, $sgpr40_sgpr41, $sgpr42_sgpr43, $sgpr44_sgpr45, $sgpr46_sgpr47, $sgpr58_sgpr59:0x000000000000000F, $sgpr64_sgpr65, $sgpr20_sgpr21_sgpr22_sgpr23:0x000000000000003F, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000F0, $vgpr0_vgpr1:0x000000000000000F, $vgpr2_vgpr3:0x000000000000000F, $vgpr4_vgpr5:0x0000000000000003, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr44_vgpr45:0x000000000000000F, $vgpr46_vgpr47:0x000000000000000F, $vgpr56_vgpr57:0x000000000000000F, $vgpr58_vgpr59:0x0000000000000003, $vgpr60_vgpr61:0x000000000000000F, $vgpr62_vgpr63:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr52_sgpr53, $sgpr56_sgpr57
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: renamable $vgpr5 = GLOBAL_LOAD_UBYTE renamable $vgpr0_vgpr1, 2048, 0, implicit $exec :: (load (s8) from %ir.i74, addrspace 1)
; GFX90A-NEXT: renamable $vgpr6 = V_ADD_CO_U32_e32 2048, $vgpr0, implicit-def $vcc, implicit $exec
@@ -774,9 +775,9 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: renamable $vgpr5 = V_MOV_B32_e32 0, implicit $exec
; GFX90A-NEXT: renamable $vgpr16_vgpr17 = DS_READ_B64_gfx9 killed renamable $vgpr5, 0, 0, implicit $exec :: (load (s64) from `ptr addrspace(3) null`, addrspace 3)
; GFX90A-NEXT: renamable $vgpr5 = COPY renamable $sgpr21, implicit $exec
; GFX90A-NEXT: renamable $vgpr18_vgpr19 = DS_READ_B64_gfx9 killed renamable $vgpr5, 0, 0, implicit $exec :: (load (s64) from %ir.3, addrspace 3)
; GFX90A-NEXT: renamable $vgpr18_vgpr19 = DS_READ_B64_gfx9 killed renamable $vgpr5, 0, 0, implicit $exec :: (load (s64) from %ir.7, addrspace 3)
; GFX90A-NEXT: renamable $vgpr5 = COPY renamable $sgpr22, implicit $exec
; GFX90A-NEXT: renamable $vgpr14_vgpr15 = DS_READ_B64_gfx9 killed renamable $vgpr5, 0, 0, implicit $exec :: (load (s64) from %ir.4, addrspace 3)
; GFX90A-NEXT: renamable $vgpr14_vgpr15 = DS_READ_B64_gfx9 killed renamable $vgpr5, 0, 0, implicit $exec :: (load (s64) from %ir.8, addrspace 3)
; GFX90A-NEXT: renamable $vgpr5 = COPY renamable $sgpr58, implicit $exec
; GFX90A-NEXT: renamable $vgpr13 = V_ALIGNBIT_B32_e64 killed $sgpr59, killed $vgpr5, 1, implicit $exec
; GFX90A-NEXT: renamable $vgpr30 = V_ALIGNBIT_B32_e64 $vgpr19, $vgpr18, 1, implicit $exec
@@ -788,9 +789,9 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.57:
; GFX90A-NEXT: successors: %bb.7(0x80000000)
; GFX90A-NEXT: liveins: $exec:0x000000000000000F, $sgpr14, $sgpr15, $sgpr16, $sgpr17:0x0000000000000003, $sgpr20:0x0000000000000003, $vgpr31, $agpr0_agpr1:0x000000000000000F, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr18_sgpr19, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr36_sgpr37, $sgpr20_sgpr21_sgpr22_sgpr23:0x000000000000003C, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000F0, $vgpr2_vgpr3:0x000000000000000F, $vgpr4_vgpr5:0x0000000000000003, $vgpr20_vgpr21:0x000000000000000F, $vgpr22_vgpr23:0x000000000000000F, $vgpr24_vgpr25:0x000000000000000F, $vgpr26_vgpr27:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3
; GFX90A-NEXT: liveins: $exec:0x000000000000000F, $sgpr14, $sgpr15, $sgpr16, $sgpr17:0x0000000000000003, $sgpr23:0x0000000000000003, $vgpr31, $agpr0_agpr1:0x000000000000000F, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr18_sgpr19, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr36_sgpr37, $sgpr20_sgpr21_sgpr22_sgpr23:0x000000000000003C, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000F0, $vgpr2_vgpr3:0x000000000000000F, $vgpr4_vgpr5:0x0000000000000003, $vgpr20_vgpr21:0x000000000000000F, $vgpr22_vgpr23:0x000000000000000F, $vgpr24_vgpr25:0x000000000000000F, $vgpr26_vgpr27:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: renamable $vgpr17 = COPY killed renamable $sgpr20, implicit $exec
; GFX90A-NEXT: renamable $vgpr17 = COPY killed renamable $sgpr23, implicit $exec
; GFX90A-NEXT: renamable $vgpr19 = COPY killed renamable $sgpr17, implicit $exec
; GFX90A-NEXT: renamable $sgpr56_sgpr57 = S_MOV_B64 0
; GFX90A-NEXT: renamable $sgpr54_sgpr55 = S_MOV_B64 0
@@ -825,21 +826,20 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.58.bb105:
; GFX90A-NEXT: successors: %bb.3(0x80000000)
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $sgpr33, $vgpr31, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr18_sgpr19, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr34_sgpr35, $sgpr58_sgpr59:0x000000000000000F, $sgpr20_sgpr21_sgpr22_sgpr23:0x00000000000000FC, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000FF, $vgpr2_vgpr3:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $sgpr17, $sgpr33, $vgpr31, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr18_sgpr19, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr34_sgpr35, $sgpr58_sgpr59:0x000000000000000F, $sgpr20_sgpr21_sgpr22_sgpr23:0x00000000000000FF, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000FF, $vgpr2_vgpr3:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: renamable $sgpr17 = S_LOAD_DWORD_IMM renamable $sgpr8_sgpr9, 40, 0 :: (dereferenceable invariant load (s32) from %ir.arg3.kernarg.offset.align.down + 16, align 8, addrspace 4)
; GFX90A-NEXT: renamable $vgpr0 = V_MOV_B32_e32 0, implicit $exec
; GFX90A-NEXT: renamable $vgpr24_vgpr25 = DS_READ_B64_gfx9 killed renamable $vgpr0, 0, 0, implicit $exec :: (load (s64) from `ptr addrspace(3) null`, addrspace 3)
; GFX90A-NEXT: renamable $vgpr0 = COPY renamable $sgpr23, implicit $exec
; GFX90A-NEXT: renamable $vgpr22_vgpr23 = DS_READ_B64_gfx9 killed renamable $vgpr0, 0, 0, implicit $exec :: (load (s64) from %ir.434, addrspace 3)
; GFX90A-NEXT: renamable $vgpr0 = COPY renamable $sgpr21, implicit $exec
; GFX90A-NEXT: renamable $vgpr20_vgpr21 = DS_READ_B64_gfx9 killed renamable $vgpr0, 0, 0, implicit $exec :: (load (s64) from %ir.3, addrspace 3)
; GFX90A-NEXT: renamable $vgpr20_vgpr21 = DS_READ_B64_gfx9 killed renamable $vgpr0, 0, 0, implicit $exec :: (load (s64) from %ir.7, addrspace 3)
; GFX90A-NEXT: renamable $vgpr0 = COPY killed renamable $sgpr17, implicit $exec
; GFX90A-NEXT: renamable $agpr0_agpr1 = DS_READ_B64_gfx9 killed renamable $vgpr0, 0, 0, implicit $exec :: (load (s64) from %ir.435, addrspace 3)
; GFX90A-NEXT: renamable $vgpr0 = COPY renamable $sgpr22, implicit $exec
; GFX90A-NEXT: renamable $vgpr26_vgpr27 = DS_READ_B64_gfx9 killed renamable $vgpr0, 0, 0, implicit $exec :: (load (s64) from %ir.4, addrspace 3)
; GFX90A-NEXT: renamable $vgpr26_vgpr27 = DS_READ_B64_gfx9 killed renamable $vgpr0, 0, 0, implicit $exec :: (load (s64) from %ir.8, addrspace 3)
; GFX90A-NEXT: renamable $sgpr36_sgpr37 = S_MOV_B64 -1
; GFX90A-NEXT: renamable $sgpr20 = S_MOV_B32 0
; GFX90A-NEXT: renamable $sgpr23 = S_MOV_B32 0
; GFX90A-NEXT: renamable $sgpr17 = S_MOV_B32 0
; GFX90A-NEXT: S_BRANCH %bb.3
; GFX90A-NEXT: {{ $}}
@@ -986,13 +986,13 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: renamable $vgpr35 = COPY renamable $vgpr29, implicit $exec
; GFX90A-NEXT: DS_WRITE_B64_gfx9 renamable $vgpr29, renamable $vgpr28_vgpr29, 0, 0, implicit $exec :: (store (s64) into `ptr addrspace(3) null`, addrspace 3)
; GFX90A-NEXT: renamable $vgpr5 = COPY renamable $sgpr21, implicit $exec
; GFX90A-NEXT: DS_WRITE_B64_gfx9 renamable $vgpr5, killed renamable $vgpr38_vgpr39, 0, 0, implicit $exec :: (store (s64) into %ir.3, addrspace 3)
; GFX90A-NEXT: DS_WRITE_B64_gfx9 renamable $vgpr5, killed renamable $vgpr38_vgpr39, 0, 0, implicit $exec :: (store (s64) into %ir.7, addrspace 3)
; GFX90A-NEXT: renamable $vgpr12 = COPY killed renamable $sgpr22, implicit $exec
; GFX90A-NEXT: DS_WRITE_B64_gfx9 killed renamable $vgpr12, killed renamable $vgpr36_vgpr37, 0, 0, implicit $exec :: (store (s64) into %ir.4, addrspace 3)
; GFX90A-NEXT: DS_WRITE_B64_gfx9 killed renamable $vgpr12, killed renamable $vgpr36_vgpr37, 0, 0, implicit $exec :: (store (s64) into %ir.8, addrspace 3)
; GFX90A-NEXT: DS_WRITE_B64_gfx9 renamable $vgpr29, killed renamable $vgpr50_vgpr51, 0, 0, implicit $exec :: (store (s64) into `ptr addrspace(3) null`, addrspace 3)
; GFX90A-NEXT: DS_WRITE_B64_gfx9 renamable $vgpr5, killed renamable $vgpr48_vgpr49, 0, 0, implicit $exec :: (store (s64) into %ir.3, addrspace 3)
; GFX90A-NEXT: DS_WRITE_B64_gfx9 renamable $vgpr5, killed renamable $vgpr48_vgpr49, 0, 0, implicit $exec :: (store (s64) into %ir.7, addrspace 3)
; GFX90A-NEXT: DS_WRITE_B64_gfx9 renamable $vgpr29, killed renamable $vgpr32_vgpr33, 0, 0, implicit $exec :: (store (s64) into `ptr addrspace(3) null`, addrspace 3)
; GFX90A-NEXT: DS_WRITE_B64_gfx9 killed renamable $vgpr5, killed renamable $vgpr52_vgpr53, 0, 0, implicit $exec :: (store (s64) into %ir.3, addrspace 3)
; GFX90A-NEXT: DS_WRITE_B64_gfx9 killed renamable $vgpr5, killed renamable $vgpr52_vgpr53, 0, 0, implicit $exec :: (store (s64) into %ir.7, addrspace 3)
; GFX90A-NEXT: DS_WRITE_B64_gfx9 killed renamable $vgpr29, killed renamable $vgpr34_vgpr35, 0, 0, implicit $exec :: (store (s64) into `ptr addrspace(3) null`, addrspace 3)
; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET killed renamable $vgpr3, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 4, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null` + 4, basealign 8, addrspace 5)
; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET killed renamable $vgpr2, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 0, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null`, align 8, addrspace 5)

@@ -6,24 +6,26 @@ define amdgpu_kernel void @cannot_create_empty_or_backwards_segment(i1 %arg, i1
; CHECK: ; %bb.0: ; %bb
; CHECK-NEXT: s_mov_b64 s[26:27], s[2:3]
; CHECK-NEXT: s_mov_b64 s[24:25], s[0:1]
; CHECK-NEXT: s_load_dword s2, s[4:5], 0x0
; CHECK-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0x0
; CHECK-NEXT: s_load_dword s6, s[4:5], 0x4
; CHECK-NEXT: s_add_u32 s24, s24, s7
; CHECK-NEXT: s_addc_u32 s25, s25, 0
; CHECK-NEXT: s_waitcnt lgkmcnt(0)
; CHECK-NEXT: s_bitcmp1_b32 s0, 0
; CHECK-NEXT: s_cselect_b64 s[2:3], -1, 0
; CHECK-NEXT: s_bitcmp1_b32 s0, 8
; CHECK-NEXT: s_bitcmp1_b32 s2, 0
; CHECK-NEXT: s_cselect_b64 s[16:17], -1, 0
; CHECK-NEXT: s_bitcmp1_b32 s2, 8
; CHECK-NEXT: s_cselect_b64 s[10:11], -1, 0
; CHECK-NEXT: s_bitcmp1_b32 s0, 16
; CHECK-NEXT: v_cndmask_b32_e64 v1, 0, 1, s[2:3]
; CHECK-NEXT: s_bitcmp1_b32 s2, 16
; CHECK-NEXT: s_cselect_b64 s[2:3], -1, 0
; CHECK-NEXT: s_bitcmp1_b32 s0, 24
; CHECK-NEXT: s_cselect_b64 s[8:9], -1, 0
; CHECK-NEXT: s_xor_b64 s[4:5], s[8:9], -1
; CHECK-NEXT: s_bitcmp1_b32 s1, 0
; CHECK-NEXT: v_cndmask_b32_e64 v0, 0, 1, s[2:3]
; CHECK-NEXT: s_cselect_b64 s[12:13], -1, 0
; CHECK-NEXT: s_bitcmp1_b32 s1, 8
; CHECK-NEXT: s_bitcmp1_b32 s6, 8
; CHECK-NEXT: v_cndmask_b32_e64 v0, 0, 1, s[2:3]
; CHECK-NEXT: v_cndmask_b32_e64 v1, 0, 1, s[16:17]
; CHECK-NEXT: s_cselect_b64 s[14:15], -1, 0
; CHECK-NEXT: v_cmp_ne_u32_e64 s[2:3], 1, v0
; CHECK-NEXT: s_and_b64 s[4:5], exec, s[4:5]

@@ -25,79 +25,70 @@ define i32 @private_load_2xi16_align2(ptr addrspace(5) %p) #0 {
; GFX7-UNALIGNED-LABEL: private_load_2xi16_align2:
; GFX7-UNALIGNED: ; %bb.0:
; GFX7-UNALIGNED-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; GFX7-UNALIGNED-NEXT: v_add_i32_e32 v1, vcc, 2, v0
; GFX7-UNALIGNED-NEXT: buffer_load_ushort v1, v1, s[0:3], 0 offen
; GFX7-UNALIGNED-NEXT: buffer_load_ushort v0, v0, s[0:3], 0 offen
; GFX7-UNALIGNED-NEXT: s_waitcnt vmcnt(1)
; GFX7-UNALIGNED-NEXT: v_lshlrev_b32_e32 v1, 16, v1
; GFX7-UNALIGNED-NEXT: buffer_load_dword v0, v0, s[0:3], 0 offen
; GFX7-UNALIGNED-NEXT: s_waitcnt vmcnt(0)
; GFX7-UNALIGNED-NEXT: v_or_b32_e32 v0, v0, v1
; GFX7-UNALIGNED-NEXT: s_setpc_b64 s[30:31]
;
; GFX9-LABEL: private_load_2xi16_align2:
; GFX9: ; %bb.0:
; GFX9-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; GFX9-NEXT: buffer_load_ushort v1, v0, s[0:3], 0 offen
; GFX9-NEXT: buffer_load_ushort v2, v0, s[0:3], 0 offen offset:2
; GFX9-NEXT: buffer_load_dword v0, v0, s[0:3], 0 offen
; GFX9-NEXT: s_mov_b32 s4, 0xffff
; GFX9-NEXT: s_waitcnt vmcnt(0)
; GFX9-NEXT: v_lshl_or_b32 v0, v2, 16, v1
; GFX9-NEXT: v_and_b32_e32 v1, 0xffff0000, v0
; GFX9-NEXT: v_and_or_b32 v0, v0, s4, v1
; GFX9-NEXT: s_setpc_b64 s[30:31]
;
; GFX9-FLASTSCR-LABEL: private_load_2xi16_align2:
; GFX9-FLASTSCR: ; %bb.0:
; GFX9-FLASTSCR-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; GFX9-FLASTSCR-NEXT: v_add_u32_e32 v1, 2, v0
; GFX9-FLASTSCR-NEXT: scratch_load_ushort v2, v0, off
; GFX9-FLASTSCR-NEXT: scratch_load_ushort v3, v1, off
; GFX9-FLASTSCR-NEXT: scratch_load_dword v0, v0, off
; GFX9-FLASTSCR-NEXT: s_mov_b32 s0, 0xffff
; GFX9-FLASTSCR-NEXT: s_waitcnt vmcnt(0)
; GFX9-FLASTSCR-NEXT: v_lshl_or_b32 v0, v3, 16, v2
; GFX9-FLASTSCR-NEXT: v_and_b32_e32 v1, 0xffff0000, v0
; GFX9-FLASTSCR-NEXT: v_and_or_b32 v0, v0, s0, v1
; GFX9-FLASTSCR-NEXT: s_setpc_b64 s[30:31]
;
; GFX10-LABEL: private_load_2xi16_align2:
; GFX10: ; %bb.0:
; GFX10-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; GFX10-NEXT: s_waitcnt_vscnt null, 0x0
; GFX10-NEXT: s_clause 0x1
; GFX10-NEXT: buffer_load_ushort v1, v0, s[0:3], 0 offen
; GFX10-NEXT: buffer_load_ushort v2, v0, s[0:3], 0 offen offset:2
; GFX10-NEXT: buffer_load_dword v0, v0, s[0:3], 0 offen
; GFX10-NEXT: s_waitcnt vmcnt(0)
; GFX10-NEXT: v_lshl_or_b32 v0, v2, 16, v1
; GFX10-NEXT: v_and_b32_e32 v1, 0xffff0000, v0
; GFX10-NEXT: v_and_or_b32 v0, 0xffff, v0, v1
; GFX10-NEXT: s_setpc_b64 s[30:31]
;
; GFX10-FLASTSCR-LABEL: private_load_2xi16_align2:
; GFX10-FLASTSCR: ; %bb.0:
; GFX10-FLASTSCR-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; GFX10-FLASTSCR-NEXT: s_waitcnt_vscnt null, 0x0
; GFX10-FLASTSCR-NEXT: v_add_nc_u32_e32 v1, 2, v0
; GFX10-FLASTSCR-NEXT: s_clause 0x1
; GFX10-FLASTSCR-NEXT: scratch_load_ushort v2, v0, off
; GFX10-FLASTSCR-NEXT: scratch_load_ushort v3, v1, off
; GFX10-FLASTSCR-NEXT: scratch_load_dword v0, v0, off
; GFX10-FLASTSCR-NEXT: s_waitcnt vmcnt(0)
; GFX10-FLASTSCR-NEXT: v_lshl_or_b32 v0, v3, 16, v2
; GFX10-FLASTSCR-NEXT: v_and_b32_e32 v1, 0xffff0000, v0
; GFX10-FLASTSCR-NEXT: v_and_or_b32 v0, 0xffff, v0, v1
; GFX10-FLASTSCR-NEXT: s_setpc_b64 s[30:31]
;
; GFX11-LABEL: private_load_2xi16_align2:
; GFX11: ; %bb.0:
; GFX11-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; GFX11-NEXT: s_waitcnt_vscnt null, 0x0
; GFX11-NEXT: v_add_nc_u32_e32 v1, 2, v0
; GFX11-NEXT: s_clause 0x1
; GFX11-NEXT: scratch_load_u16 v0, v0, off
; GFX11-NEXT: scratch_load_u16 v1, v1, off
; GFX11-NEXT: scratch_load_b32 v0, v0, off
; GFX11-NEXT: s_waitcnt vmcnt(0)
; GFX11-NEXT: v_lshl_or_b32 v0, v1, 16, v0
; GFX11-NEXT: v_and_b32_e32 v1, 0xffff0000, v0
; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_1)
; GFX11-NEXT: v_and_or_b32 v0, 0xffff, v0, v1
; GFX11-NEXT: s_setpc_b64 s[30:31]
;
; GFX11-FLASTSCR-LABEL: private_load_2xi16_align2:
; GFX11-FLASTSCR: ; %bb.0:
; GFX11-FLASTSCR-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; GFX11-FLASTSCR-NEXT: s_waitcnt_vscnt null, 0x0
; GFX11-FLASTSCR-NEXT: v_add_nc_u32_e32 v1, 2, v0
; GFX11-FLASTSCR-NEXT: s_clause 0x1
; GFX11-FLASTSCR-NEXT: scratch_load_u16 v0, v0, off
; GFX11-FLASTSCR-NEXT: scratch_load_u16 v1, v1, off
; GFX11-FLASTSCR-NEXT: scratch_load_b32 v0, v0, off
; GFX11-FLASTSCR-NEXT: s_waitcnt vmcnt(0)
; GFX11-FLASTSCR-NEXT: v_lshl_or_b32 v0, v1, 16, v0
; GFX11-FLASTSCR-NEXT: v_and_b32_e32 v1, 0xffff0000, v0
; GFX11-FLASTSCR-NEXT: s_delay_alu instid0(VALU_DEP_1)
; GFX11-FLASTSCR-NEXT: v_and_or_b32 v0, 0xffff, v0, v1
; GFX11-FLASTSCR-NEXT: s_setpc_b64 s[30:31]
%gep.p = getelementptr i16, ptr addrspace(5) %p, i64 1
%p.0 = load i16, ptr addrspace(5) %p, align 2
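At the ISA level, the diff above collapses two 16-bit loads into a single 32-bit load. A sketch of the corresponding IR rewrite on this test's input, assuming TTI reports the misaligned 32-bit access as fast on these subtargets (output value names hypothetical):

  ; Input (from the test): two adjacent align-2 i16 loads of %p and %p+2.
  ; Output after the pass: one wide load plus per-element extracts.
  %vec = load <2 x i16>, ptr addrspace(5) %p, align 2
  %p.0 = extractelement <2 x i16> %vec, i32 0
  %p.1 = extractelement <2 x i16> %vec, i32 1
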
@@ -125,32 +116,24 @@ define void @private_store_2xi16_align2(ptr addrspace(5) %p, ptr addrspace(5) %r
; GFX7-UNALIGNED-LABEL: private_store_2xi16_align2:
; GFX7-UNALIGNED: ; %bb.0:
; GFX7-UNALIGNED-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; GFX7-UNALIGNED-NEXT: v_mov_b32_e32 v3, 1
; GFX7-UNALIGNED-NEXT: v_mov_b32_e32 v0, 2
; GFX7-UNALIGNED-NEXT: v_add_i32_e32 v2, vcc, 2, v1
; GFX7-UNALIGNED-NEXT: buffer_store_short v3, v1, s[0:3], 0 offen
; GFX7-UNALIGNED-NEXT: buffer_store_short v0, v2, s[0:3], 0 offen
; GFX7-UNALIGNED-NEXT: v_mov_b32_e32 v0, 0x20001
; GFX7-UNALIGNED-NEXT: buffer_store_dword v0, v1, s[0:3], 0 offen
; GFX7-UNALIGNED-NEXT: s_waitcnt vmcnt(0)
; GFX7-UNALIGNED-NEXT: s_setpc_b64 s[30:31]
;
; GFX9-LABEL: private_store_2xi16_align2:
; GFX9: ; %bb.0:
; GFX9-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; GFX9-NEXT: v_mov_b32_e32 v0, 1
; GFX9-NEXT: buffer_store_short v0, v1, s[0:3], 0 offen
; GFX9-NEXT: v_mov_b32_e32 v0, 2
; GFX9-NEXT: buffer_store_short v0, v1, s[0:3], 0 offen offset:2
; GFX9-NEXT: v_mov_b32_e32 v0, 0x20001
; GFX9-NEXT: buffer_store_dword v0, v1, s[0:3], 0 offen
; GFX9-NEXT: s_waitcnt vmcnt(0)
; GFX9-NEXT: s_setpc_b64 s[30:31]
;
; GFX9-FLASTSCR-LABEL: private_store_2xi16_align2:
; GFX9-FLASTSCR: ; %bb.0:
; GFX9-FLASTSCR-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; GFX9-FLASTSCR-NEXT: v_mov_b32_e32 v2, 1
; GFX9-FLASTSCR-NEXT: v_add_u32_e32 v0, 2, v1
; GFX9-FLASTSCR-NEXT: scratch_store_short v1, v2, off
; GFX9-FLASTSCR-NEXT: v_mov_b32_e32 v1, 2
; GFX9-FLASTSCR-NEXT: scratch_store_short v0, v1, off
; GFX9-FLASTSCR-NEXT: v_mov_b32_e32 v0, 0x20001
; GFX9-FLASTSCR-NEXT: scratch_store_dword v1, v0, off
; GFX9-FLASTSCR-NEXT: s_waitcnt vmcnt(0)
; GFX9-FLASTSCR-NEXT: s_setpc_b64 s[30:31]
;
@@ -158,10 +141,8 @@ define void @private_store_2xi16_align2(ptr addrspace(5) %p, ptr addrspace(5) %r
; GFX10: ; %bb.0:
; GFX10-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; GFX10-NEXT: s_waitcnt_vscnt null, 0x0
; GFX10-NEXT: v_mov_b32_e32 v0, 1
; GFX10-NEXT: v_mov_b32_e32 v2, 2
; GFX10-NEXT: buffer_store_short v0, v1, s[0:3], 0 offen
; GFX10-NEXT: buffer_store_short v2, v1, s[0:3], 0 offen offset:2
; GFX10-NEXT: v_mov_b32_e32 v0, 0x20001
; GFX10-NEXT: buffer_store_dword v0, v1, s[0:3], 0 offen
; GFX10-NEXT: s_waitcnt_vscnt null, 0x0
; GFX10-NEXT: s_setpc_b64 s[30:31]
;
@@ -169,11 +150,8 @@ define void @private_store_2xi16_align2(ptr addrspace(5) %p, ptr addrspace(5) %r
; GFX10-FLASTSCR: ; %bb.0:
; GFX10-FLASTSCR-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; GFX10-FLASTSCR-NEXT: s_waitcnt_vscnt null, 0x0
; GFX10-FLASTSCR-NEXT: v_mov_b32_e32 v0, 1
; GFX10-FLASTSCR-NEXT: v_add_nc_u32_e32 v2, 2, v1
; GFX10-FLASTSCR-NEXT: v_mov_b32_e32 v3, 2
; GFX10-FLASTSCR-NEXT: scratch_store_short v1, v0, off
; GFX10-FLASTSCR-NEXT: scratch_store_short v2, v3, off
; GFX10-FLASTSCR-NEXT: v_mov_b32_e32 v0, 0x20001
; GFX10-FLASTSCR-NEXT: scratch_store_dword v1, v0, off
; GFX10-FLASTSCR-NEXT: s_waitcnt_vscnt null, 0x0
; GFX10-FLASTSCR-NEXT: s_setpc_b64 s[30:31]
;
@@ -181,11 +159,8 @@ define void @private_store_2xi16_align2(ptr addrspace(5) %p, ptr addrspace(5) %r
; GFX11: ; %bb.0:
; GFX11-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; GFX11-NEXT: s_waitcnt_vscnt null, 0x0
; GFX11-NEXT: v_dual_mov_b32 v0, 1 :: v_dual_mov_b32 v3, 2
; GFX11-NEXT: v_add_nc_u32_e32 v2, 2, v1
; GFX11-NEXT: s_clause 0x1
; GFX11-NEXT: scratch_store_b16 v1, v0, off
; GFX11-NEXT: scratch_store_b16 v2, v3, off
; GFX11-NEXT: v_mov_b32_e32 v0, 0x20001
; GFX11-NEXT: scratch_store_b32 v1, v0, off
; GFX11-NEXT: s_waitcnt_vscnt null, 0x0
; GFX11-NEXT: s_setpc_b64 s[30:31]
;
@@ -193,11 +168,8 @@ define void @private_store_2xi16_align2(ptr addrspace(5) %p, ptr addrspace(5) %r
; GFX11-FLASTSCR: ; %bb.0:
; GFX11-FLASTSCR-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; GFX11-FLASTSCR-NEXT: s_waitcnt_vscnt null, 0x0
; GFX11-FLASTSCR-NEXT: v_dual_mov_b32 v0, 1 :: v_dual_mov_b32 v3, 2
; GFX11-FLASTSCR-NEXT: v_add_nc_u32_e32 v2, 2, v1
; GFX11-FLASTSCR-NEXT: s_clause 0x1
; GFX11-FLASTSCR-NEXT: scratch_store_b16 v1, v0, off
; GFX11-FLASTSCR-NEXT: scratch_store_b16 v2, v3, off
; GFX11-FLASTSCR-NEXT: v_mov_b32_e32 v0, 0x20001
; GFX11-FLASTSCR-NEXT: scratch_store_b32 v1, v0, off
; GFX11-FLASTSCR-NEXT: s_waitcnt_vscnt null, 0x0
; GFX11-FLASTSCR-NEXT: s_setpc_b64 s[30:31]
%gep.r = getelementptr i16, ptr addrspace(5) %r, i64 1
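The store-side counterpart is visible in the new assembly above: the two 16-bit stores of the constants 1 and 2 become one 32-bit store of the packed immediate 0x20001. A one-line sketch of the merged form the pass plausibly emits here, under the same assumption that the misaligned-access query answers fast:

  ; Replaces the test's two align-2 i16 stores of 1 and 2 into %r and %r+2;
  ; little-endian packing gives the 0x20001 dword seen in the assembly.
  store <2 x i16> <i16 1, i16 2>, ptr addrspace(5) %r, align 2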

@@ -3220,19 +3220,18 @@ define <2 x half> @v_fneg_select_infloop_regression_v2f16(<2 x half> %arg, i1 %a
define amdgpu_kernel void @s_fneg_select_infloop_regression_v2f32(<2 x float> %arg, i1 %arg1, ptr addrspace(1) %ptr) {
; SI-LABEL: s_fneg_select_infloop_regression_v2f32:
; SI: ; %bb.0:
; SI-NEXT: s_load_dword s4, s[0:1], 0xb
; SI-NEXT: s_load_dwordx2 s[2:3], s[0:1], 0x9
; SI-NEXT: s_load_dwordx4 s[4:7], s[0:1], 0x9
; SI-NEXT: s_load_dwordx2 s[0:1], s[0:1], 0xd
; SI-NEXT: v_bfrev_b32_e32 v0, 1
; SI-NEXT: s_waitcnt lgkmcnt(0)
; SI-NEXT: s_bitcmp1_b32 s4, 0
; SI-NEXT: v_mov_b32_e32 v1, s2
; SI-NEXT: s_cselect_b64 s[4:5], -1, 0
; SI-NEXT: v_cndmask_b32_e64 v2, -v1, v0, s[4:5]
; SI-NEXT: v_mov_b32_e32 v1, s3
; SI-NEXT: v_cndmask_b32_e64 v0, -v1, v0, s[4:5]
; SI-NEXT: v_cndmask_b32_e64 v1, v0, 0, s[4:5]
; SI-NEXT: v_cndmask_b32_e64 v0, v2, 0, s[4:5]
; SI-NEXT: s_bitcmp1_b32 s6, 0
; SI-NEXT: v_mov_b32_e32 v1, s4
; SI-NEXT: s_cselect_b64 s[2:3], -1, 0
; SI-NEXT: v_cndmask_b32_e64 v2, -v1, v0, s[2:3]
; SI-NEXT: v_mov_b32_e32 v1, s5
; SI-NEXT: v_cndmask_b32_e64 v0, -v1, v0, s[2:3]
; SI-NEXT: v_cndmask_b32_e64 v1, v0, 0, s[2:3]
; SI-NEXT: v_cndmask_b32_e64 v0, v2, 0, s[2:3]
; SI-NEXT: v_mov_b32_e32 v3, s1
; SI-NEXT: v_mov_b32_e32 v2, s0
; SI-NEXT: flat_store_dwordx2 v[2:3], v[0:1]
@@ -3240,19 +3239,18 @@ define amdgpu_kernel void @s_fneg_select_infloop_regression_v2f32(<2 x float> %a
;
; VI-LABEL: s_fneg_select_infloop_regression_v2f32:
; VI: ; %bb.0:
; VI-NEXT: s_load_dword s4, s[0:1], 0x2c
; VI-NEXT: s_load_dwordx2 s[2:3], s[0:1], 0x24
; VI-NEXT: s_load_dwordx4 s[4:7], s[0:1], 0x24
; VI-NEXT: s_load_dwordx2 s[0:1], s[0:1], 0x34
; VI-NEXT: v_bfrev_b32_e32 v0, 1
; VI-NEXT: s_waitcnt lgkmcnt(0)
; VI-NEXT: s_bitcmp1_b32 s4, 0
; VI-NEXT: v_mov_b32_e32 v1, s2
; VI-NEXT: s_cselect_b64 s[4:5], -1, 0
; VI-NEXT: v_cndmask_b32_e64 v2, -v1, v0, s[4:5]
; VI-NEXT: v_mov_b32_e32 v1, s3
; VI-NEXT: v_cndmask_b32_e64 v0, -v1, v0, s[4:5]
; VI-NEXT: v_cndmask_b32_e64 v1, v0, 0, s[4:5]
; VI-NEXT: v_cndmask_b32_e64 v0, v2, 0, s[4:5]
; VI-NEXT: s_bitcmp1_b32 s6, 0
; VI-NEXT: v_mov_b32_e32 v1, s4
; VI-NEXT: s_cselect_b64 s[2:3], -1, 0
; VI-NEXT: v_cndmask_b32_e64 v2, -v1, v0, s[2:3]
; VI-NEXT: v_mov_b32_e32 v1, s5
; VI-NEXT: v_cndmask_b32_e64 v0, -v1, v0, s[2:3]
; VI-NEXT: v_cndmask_b32_e64 v1, v0, 0, s[2:3]
; VI-NEXT: v_cndmask_b32_e64 v0, v2, 0, s[2:3]
; VI-NEXT: v_mov_b32_e32 v3, s1
; VI-NEXT: v_mov_b32_e32 v2, s0
; VI-NEXT: flat_store_dwordx2 v[2:3], v[0:1]

@@ -110,13 +110,13 @@ define amdgpu_kernel void @float8_inselt(ptr addrspace(1) %out, <8 x float> %vec
; GCN-LABEL: float8_inselt:
; GCN: ; %bb.0: ; %entry
; GCN-NEXT: s_load_dwordx8 s[4:11], s[0:1], 0x44
; GCN-NEXT: s_load_dwordx2 s[2:3], s[0:1], 0x24
; GCN-NEXT: s_load_dword s1, s[0:1], 0x64
; GCN-NEXT: s_load_dword s2, s[0:1], 0x64
; GCN-NEXT: s_load_dwordx2 s[0:1], s[0:1], 0x24
; GCN-NEXT: s_waitcnt lgkmcnt(0)
; GCN-NEXT: v_mov_b32_e32 v0, s4
; GCN-NEXT: s_add_u32 s0, s2, 16
; GCN-NEXT: s_mov_b32 m0, s1
; GCN-NEXT: s_addc_u32 s1, s3, 0
; GCN-NEXT: s_mov_b32 m0, s2
; GCN-NEXT: s_add_u32 s2, s0, 16
; GCN-NEXT: s_addc_u32 s3, s1, 0
; GCN-NEXT: v_mov_b32_e32 v1, s5
; GCN-NEXT: v_mov_b32_e32 v2, s6
; GCN-NEXT: v_mov_b32_e32 v3, s7
@@ -124,13 +124,13 @@ define amdgpu_kernel void @float8_inselt(ptr addrspace(1) %out, <8 x float> %vec
; GCN-NEXT: v_mov_b32_e32 v5, s9
; GCN-NEXT: v_mov_b32_e32 v6, s10
; GCN-NEXT: v_mov_b32_e32 v7, s11
; GCN-NEXT: v_mov_b32_e32 v9, s1
; GCN-NEXT: v_mov_b32_e32 v9, s3
; GCN-NEXT: v_movreld_b32_e32 v0, 1.0
; GCN-NEXT: v_mov_b32_e32 v8, s0
; GCN-NEXT: v_mov_b32_e32 v8, s2
; GCN-NEXT: flat_store_dwordx4 v[8:9], v[4:7]
; GCN-NEXT: s_nop 0
; GCN-NEXT: v_mov_b32_e32 v5, s3
; GCN-NEXT: v_mov_b32_e32 v4, s2
; GCN-NEXT: v_mov_b32_e32 v5, s1
; GCN-NEXT: v_mov_b32_e32 v4, s0
; GCN-NEXT: flat_store_dwordx4 v[4:5], v[0:3]
; GCN-NEXT: s_endpgm
entry:

@@ -497,42 +497,38 @@ define <12 x float> @insertelement_to_v12f32_undef() nounwind {
define amdgpu_kernel void @dynamic_insertelement_v2f32(ptr addrspace(1) %out, <2 x float> %a, i32 %b) nounwind {
; SI-LABEL: dynamic_insertelement_v2f32:
; SI: ; %bb.0:
; SI-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x0
; SI-NEXT: s_load_dword s8, s[4:5], 0x4
; SI-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x2
; SI-NEXT: s_load_dwordx2 s[4:5], s[4:5], 0x0
; SI-NEXT: v_mov_b32_e32 v0, 0x40a00000
; SI-NEXT: s_mov_b32 s7, 0x100f000
; SI-NEXT: s_mov_b32 s6, -1
; SI-NEXT: s_waitcnt lgkmcnt(0)
; SI-NEXT: v_mov_b32_e32 v1, s3
; SI-NEXT: s_cmp_lg_u32 s8, 1
; SI-NEXT: s_cmp_lg_u32 s2, 1
; SI-NEXT: v_mov_b32_e32 v1, s1
; SI-NEXT: s_cselect_b64 vcc, -1, 0
; SI-NEXT: s_cmp_lg_u32 s8, 0
; SI-NEXT: s_cmp_lg_u32 s2, 0
; SI-NEXT: v_cndmask_b32_e32 v1, v0, v1, vcc
; SI-NEXT: v_mov_b32_e32 v2, s2
; SI-NEXT: v_mov_b32_e32 v2, s0
; SI-NEXT: s_cselect_b64 vcc, -1, 0
; SI-NEXT: s_mov_b32 s4, s0
; SI-NEXT: s_mov_b32 s5, s1
; SI-NEXT: v_cndmask_b32_e32 v0, v0, v2, vcc
; SI-NEXT: buffer_store_dwordx2 v[0:1], off, s[4:7], 0
; SI-NEXT: s_endpgm
;
; VI-LABEL: dynamic_insertelement_v2f32:
; VI: ; %bb.0:
; VI-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x0
; VI-NEXT: s_load_dword s8, s[4:5], 0x10
; VI-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x8
; VI-NEXT: s_load_dwordx2 s[4:5], s[4:5], 0x0
; VI-NEXT: v_mov_b32_e32 v0, 0x40a00000
; VI-NEXT: s_mov_b32 s7, 0x1100f000
; VI-NEXT: s_mov_b32 s6, -1
; VI-NEXT: s_waitcnt lgkmcnt(0)
; VI-NEXT: v_mov_b32_e32 v1, s3
; VI-NEXT: s_cmp_lg_u32 s8, 1
; VI-NEXT: s_cmp_lg_u32 s2, 1
; VI-NEXT: v_mov_b32_e32 v1, s1
; VI-NEXT: s_cselect_b64 vcc, -1, 0
; VI-NEXT: s_cmp_lg_u32 s8, 0
; VI-NEXT: s_cmp_lg_u32 s2, 0
; VI-NEXT: v_cndmask_b32_e32 v1, v0, v1, vcc
; VI-NEXT: v_mov_b32_e32 v2, s2
; VI-NEXT: v_mov_b32_e32 v2, s0
; VI-NEXT: s_cselect_b64 vcc, -1, 0
; VI-NEXT: s_mov_b32 s4, s0
; VI-NEXT: s_mov_b32 s5, s1
; VI-NEXT: v_cndmask_b32_e32 v0, v0, v2, vcc
; VI-NEXT: buffer_store_dwordx2 v[0:1], off, s[4:7], 0
; VI-NEXT: s_endpgm
@@ -658,8 +654,8 @@ define amdgpu_kernel void @dynamic_insertelement_v4f32(ptr addrspace(1) %out, <4
define amdgpu_kernel void @dynamic_insertelement_v8f32(ptr addrspace(1) %out, <8 x float> %a, i32 %b) nounwind {
; SI-LABEL: dynamic_insertelement_v8f32:
; SI: ; %bb.0:
; SI-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0x0
; SI-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x8
; SI-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0x0
; SI-NEXT: s_load_dword s4, s[4:5], 0x10
; SI-NEXT: v_mov_b32_e32 v8, 0x40a00000
; SI-NEXT: s_mov_b32 s3, 0x100f000
@@ -681,8 +677,8 @@ define amdgpu_kernel void @dynamic_insertelement_v8f32(ptr addrspace(1) %out, <8
;
; VI-LABEL: dynamic_insertelement_v8f32:
; VI: ; %bb.0:
; VI-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0x0
; VI-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x20
; VI-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0x0
; VI-NEXT: s_load_dword s4, s[4:5], 0x40
; VI-NEXT: v_mov_b32_e32 v8, 0x40a00000
; VI-NEXT: s_mov_b32 s3, 0x1100f000
@@ -1022,37 +1018,33 @@ define amdgpu_kernel void @dynamic_insertelement_v16f32(ptr addrspace(1) %out, <
define amdgpu_kernel void @dynamic_insertelement_v2i32(ptr addrspace(1) %out, <2 x i32> %a, i32 %b) nounwind {
; SI-LABEL: dynamic_insertelement_v2i32:
; SI: ; %bb.0:
; SI-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x0
; SI-NEXT: s_load_dword s8, s[4:5], 0x4
; SI-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x2
; SI-NEXT: s_load_dwordx2 s[4:5], s[4:5], 0x0
; SI-NEXT: s_mov_b32 s7, 0x100f000
; SI-NEXT: s_mov_b32 s6, -1
; SI-NEXT: s_waitcnt lgkmcnt(0)
; SI-NEXT: s_mov_b32 s4, s0
; SI-NEXT: s_cmp_lg_u32 s8, 1
; SI-NEXT: s_cselect_b32 s0, s3, 5
; SI-NEXT: s_cmp_lg_u32 s8, 0
; SI-NEXT: s_mov_b32 s5, s1
; SI-NEXT: s_cselect_b32 s1, s2, 5
; SI-NEXT: v_mov_b32_e32 v0, s1
; SI-NEXT: v_mov_b32_e32 v1, s0
; SI-NEXT: s_cmp_lg_u32 s2, 1
; SI-NEXT: s_cselect_b32 s1, s1, 5
; SI-NEXT: s_cmp_lg_u32 s2, 0
; SI-NEXT: s_cselect_b32 s0, s0, 5
; SI-NEXT: v_mov_b32_e32 v0, s0
; SI-NEXT: v_mov_b32_e32 v1, s1
; SI-NEXT: buffer_store_dwordx2 v[0:1], off, s[4:7], 0
; SI-NEXT: s_endpgm
;
; VI-LABEL: dynamic_insertelement_v2i32:
; VI: ; %bb.0:
; VI-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x0
; VI-NEXT: s_load_dword s8, s[4:5], 0x10
; VI-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x8
; VI-NEXT: s_load_dwordx2 s[4:5], s[4:5], 0x0
; VI-NEXT: s_mov_b32 s7, 0x1100f000
; VI-NEXT: s_mov_b32 s6, -1
; VI-NEXT: s_waitcnt lgkmcnt(0)
; VI-NEXT: s_mov_b32 s4, s0
; VI-NEXT: s_cmp_lg_u32 s8, 1
; VI-NEXT: s_cselect_b32 s0, s3, 5
; VI-NEXT: s_cmp_lg_u32 s8, 0
; VI-NEXT: s_mov_b32 s5, s1
; VI-NEXT: s_cselect_b32 s1, s2, 5
; VI-NEXT: v_mov_b32_e32 v0, s1
; VI-NEXT: v_mov_b32_e32 v1, s0
; VI-NEXT: s_cmp_lg_u32 s2, 1
; VI-NEXT: s_cselect_b32 s1, s1, 5
; VI-NEXT: s_cmp_lg_u32 s2, 0
; VI-NEXT: s_cselect_b32 s0, s0, 5
; VI-NEXT: v_mov_b32_e32 v0, s0
; VI-NEXT: v_mov_b32_e32 v1, s1
; VI-NEXT: buffer_store_dwordx2 v[0:1], off, s[4:7], 0
; VI-NEXT: s_endpgm
%vecins = insertelement <2 x i32> %a, i32 5, i32 %b
@@ -1162,8 +1154,8 @@ define amdgpu_kernel void @dynamic_insertelement_v8i32(ptr addrspace(1) %out, <8
; SI-LABEL: dynamic_insertelement_v8i32:
; SI: ; %bb.0:
; SI-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x8
; SI-NEXT: s_load_dword s6, s[4:5], 0x10
; SI-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0x0
; SI-NEXT: s_load_dword s4, s[4:5], 0x10
; SI-NEXT: s_mov_b32 s3, 0x100f000
; SI-NEXT: s_mov_b32 s2, -1
; SI-NEXT: s_waitcnt lgkmcnt(0)
@@ -1175,7 +1167,7 @@ define amdgpu_kernel void @dynamic_insertelement_v8i32(ptr addrspace(1) %out, <8
; SI-NEXT: v_mov_b32_e32 v5, s13
; SI-NEXT: v_mov_b32_e32 v6, s14
; SI-NEXT: v_mov_b32_e32 v7, s15
; SI-NEXT: s_mov_b32 m0, s6
; SI-NEXT: s_mov_b32 m0, s4
; SI-NEXT: v_movreld_b32_e32 v0, 5
; SI-NEXT: buffer_store_dwordx4 v[4:7], off, s[0:3], 0 offset:16
; SI-NEXT: buffer_store_dwordx4 v[0:3], off, s[0:3], 0
@@ -1184,8 +1176,8 @@ define amdgpu_kernel void @dynamic_insertelement_v8i32(ptr addrspace(1) %out, <8
; VI-LABEL: dynamic_insertelement_v8i32:
; VI: ; %bb.0:
; VI-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x20
; VI-NEXT: s_load_dword s6, s[4:5], 0x40
; VI-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0x0
; VI-NEXT: s_load_dword s4, s[4:5], 0x40
; VI-NEXT: s_mov_b32 s3, 0x1100f000
; VI-NEXT: s_mov_b32 s2, -1
; VI-NEXT: s_waitcnt lgkmcnt(0)
@@ -1197,7 +1189,7 @@ define amdgpu_kernel void @dynamic_insertelement_v8i32(ptr addrspace(1) %out, <8
; VI-NEXT: v_mov_b32_e32 v5, s13
; VI-NEXT: v_mov_b32_e32 v6, s14
; VI-NEXT: v_mov_b32_e32 v7, s15
; VI-NEXT: s_mov_b32 m0, s6
; VI-NEXT: s_mov_b32 m0, s4
; VI-NEXT: v_movreld_b32_e32 v0, 5
; VI-NEXT: buffer_store_dwordx4 v[4:7], off, s[0:3], 0 offset:16
; VI-NEXT: buffer_store_dwordx4 v[0:3], off, s[0:3], 0

@@ -279,9 +279,10 @@ define amdgpu_kernel void @merge_private_load_4_vector_elts_loads_v4i8() {
; Make sure we don't think the alignment will increase if the base address isn't an alloca
define void @private_store_2xi16_align2_not_alloca(ptr addrspace(5) %p, ptr addrspace(5) %r) #0 {
; CHECK-LABEL: @private_store_2xi16_align2_not_alloca(
; CHECK-NEXT: [[GEP_R:%.*]] = getelementptr i16, ptr addrspace(5) [[R:%.*]], i32 1
; CHECK-NEXT: store i16 1, ptr addrspace(5) [[R]], align 2
; CHECK-NEXT: store i16 2, ptr addrspace(5) [[GEP_R]], align 2
; ALIGNED-NEXT: [[GEP_R:%.*]] = getelementptr i16, ptr addrspace(5) [[R:%.*]], i32 1
; ALIGNED-NEXT: store i16 1, ptr addrspace(5) [[R]], align 2
; ALIGNED-NEXT: store i16 2, ptr addrspace(5) [[GEP_R]], align 2
; UNALIGNED-NEXT: store <2 x i16>
; CHECK-NEXT: ret void
;
%gep.r = getelementptr i16, ptr addrspace(5) %r, i32 1
@@ -309,11 +310,12 @@ define void @private_store_2xi16_align1_not_alloca(ptr addrspace(5) %p, ptr addr
define i32 @private_load_2xi16_align2_not_alloca(ptr addrspace(5) %p) #0 {
; CHECK-LABEL: @private_load_2xi16_align2_not_alloca(
; CHECK-NEXT: [[GEP_P:%.*]] = getelementptr i16, ptr addrspace(5) [[P:%.*]], i64 1
; CHECK-NEXT: [[P_0:%.*]] = load i16, ptr addrspace(5) [[P]], align 2
; CHECK-NEXT: [[P_1:%.*]] = load i16, ptr addrspace(5) [[GEP_P]], align 2
; CHECK-NEXT: [[ZEXT_0:%.*]] = zext i16 [[P_0]] to i32
; CHECK-NEXT: [[ZEXT_1:%.*]] = zext i16 [[P_1]] to i32
; ALIGNED-NEXT: [[GEP_P:%.*]] = getelementptr i16, ptr addrspace(5) [[P:%.*]], i64 1
; ALIGNED-NEXT: [[P_0:%.*]] = load i16, ptr addrspace(5) [[P]], align 2
; ALIGNED-NEXT: [[P_1:%.*]] = load i16, ptr addrspace(5) [[GEP_P]], align 2
; UNALIGNED-NEXT: load <2 x i16>
; CHECK: [[ZEXT_0:%.*]] = zext i16
; CHECK-NEXT: [[ZEXT_1:%.*]] = zext i16
; CHECK-NEXT: [[SHL_1:%.*]] = shl i32 [[ZEXT_1]], 16
; CHECK-NEXT: [[OR:%.*]] = or i32 [[ZEXT_0]], [[SHL_1]]
; CHECK-NEXT: ret i32 [[OR]]

@@ -85,21 +85,14 @@ define float @insert_store_point_alias(ptr addrspace(1) nocapture %a, i64 %idx)
ret float %x
}
; Here we have four stores, with an aliasing load before the last one. We
; could vectorize two of the stores before the load (although we currently
; don't), but the important thing is that we *don't* sink the store to
; a[idx + 1] below the load.
; Here we have four stores, with an aliasing load before the last one. We can
; vectorize three of the stores before the load, but the important thing is that
; we *don't* sink the store to a[idx + 1] below the load.
;
; CHECK-LABEL: @insert_store_point_alias_ooo
; CHECK: store float
; CHECK-SAME: %a.idx.3
; CHECK: store float
; CHECK-SAME: %a.idx.1
; CHECK: store float
; CHECK-SAME: %a.idx.2
; CHECK: store <3 x float>{{.*}} %a.idx.1
; CHECK: load float, ptr addrspace(1) %a.idx.2
; CHECK: store float
; CHECK-SAME: %a.idx
; CHECK: store float{{.*}} %a.idx
define float @insert_store_point_alias_ooo(ptr addrspace(1) nocapture %a, i64 %idx) {
%a.idx = getelementptr inbounds float, ptr addrspace(1) %a, i64 %idx
%a.idx.1 = getelementptr inbounds float, ptr addrspace(1) %a.idx, i64 1

@@ -57,10 +57,17 @@ define amdgpu_kernel void @merge_private_store_4_vector_elts_loads_v4i32_align1(
}
; ALL-LABEL: @merge_private_store_4_vector_elts_loads_v4i32_align2(
; ALL: store i32
; ALL: store i32
; ALL: store i32
; ALL: store i32
; ALIGNED: store i32
; ALIGNED: store i32
; ALIGNED: store i32
; ALIGNED: store i32
; ELT4-UNALIGNED: store i32
; ELT4-UNALIGNED: store i32
; ELT4-UNALIGNED: store i32
; ELT4-UNALIGNED: store i32
; ELT8-UNALIGNED: store <2 x i32>
; ELT8-UNALIGNED: store <2 x i32>
; ELT16-UNALIGNED: store <4 x i32>
define amdgpu_kernel void @merge_private_store_4_vector_elts_loads_v4i32_align2(ptr addrspace(5) %out) #0 {
%out.gep.1 = getelementptr i32, ptr addrspace(5) %out, i32 1
%out.gep.2 = getelementptr i32, ptr addrspace(5) %out, i32 2
@@ -117,8 +124,9 @@ define amdgpu_kernel void @merge_private_store_4_vector_elts_loads_v2i16(ptr add
}
; ALL-LABEL: @merge_private_store_4_vector_elts_loads_v2i16_align2(
; ALL: store i16
; ALL: store i16
; ALIGNED: store i16
; ALIGNED: store i16
; UNALIGNED: store <2 x i16>
define amdgpu_kernel void @merge_private_store_4_vector_elts_loads_v2i16_align2(ptr addrspace(5) %out) #0 {
%out.gep.1 = getelementptr i16, ptr addrspace(5) %out, i32 1

@@ -1,5 +1,5 @@
; RUN: opt -mtriple=amdgcn-amd-amdhsa -mcpu=hawaii -passes=load-store-vectorizer -S -o - %s | FileCheck -check-prefixes=GCN,GFX7 %s
; RUN: opt -mtriple=amdgcn-amd-amdhsa -mcpu=gfx900 -passes=load-store-vectorizer -S -o - %s | FileCheck -check-prefixes=GCN,GFX9 %s
; RUN: opt -mtriple=amdgcn-amd-amdhsa -mcpu=hawaii -passes=load-store-vectorizer -S -o - %s | FileCheck -check-prefixes=GCN %s
; RUN: opt -mtriple=amdgcn-amd-amdhsa -mcpu=gfx900 -passes=load-store-vectorizer -S -o - %s | FileCheck -check-prefixes=GCN %s
target datalayout = "e-p:64:64-p1:64:64-p2:32:32-p3:32:32-p4:64:64-p5:32:32-p6:32:32-p7:160:256:256:32-p8:128:128-i64:64-v16:16-v24:32-v32:32-v48:64-v96:128-v192:256-v256:256-v512:512-v1024:1024-v2048:2048-n32:64-S32-A5"
@@ -26,25 +26,17 @@ define amdgpu_kernel void @no_crash(i32 %arg) {
ret void
}
; Check adjiacent memory locations are properly matched and the
; Check adjacent memory locations are properly matched and the
; longest chain vectorized
; GCN-LABEL: @interleave_get_longest
; GFX7: load <2 x i32>
; GFX7: load i32
; GFX7: store <2 x i32> zeroinitializer
; GFX7: load i32
; GFX7: load <2 x i32>
; GFX7: load i32
; GFX7: load i32
; GFX9: load <4 x i32>
; GFX9: load i32
; GFX9: store <2 x i32> zeroinitializer
; GFX9: load i32
; GFX9: load i32
; GFX9: load i32
; GCN: load <2 x i32>{{.*}} %tmp1
; GCN: store <2 x i32> zeroinitializer{{.*}} %tmp1
; GCN: load <2 x i32>{{.*}} %tmp2
; GCN: load <2 x i32>{{.*}} %tmp4
; GCN: load i32{{.*}} %tmp5
; GCN: load i32{{.*}} %tmp5
define amdgpu_kernel void @interleave_get_longest(i32 %arg) {
%a1 = add i32 %arg, 1

@@ -42,6 +42,54 @@ entry:
ret void
}
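; In this datalayout ptr addrspace(3) is 32 bits, so the i32, pointer, and
; <2 x i32> accesses below are mutually compatible and merge into <4 x i32>.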
; CHECK-LABEL: @merge_ptr_i32(
; CHECK: load <4 x i32>
; CHECK: store <4 x i32>
define amdgpu_kernel void @merge_ptr_i32(ptr addrspace(3) nocapture %a, ptr addrspace(3) nocapture readonly %b) #0 {
entry:
%a.0 = getelementptr inbounds ptr addrspace(3), ptr addrspace(3) %a, i64 0
%a.1 = getelementptr inbounds ptr addrspace(3), ptr addrspace(3) %a, i64 1
%a.2 = getelementptr inbounds ptr addrspace(3), ptr addrspace(3) %a, i64 2
%b.0 = getelementptr inbounds ptr addrspace(3), ptr addrspace(3) %b, i64 0
%b.1 = getelementptr inbounds ptr addrspace(3), ptr addrspace(3) %b, i64 1
%b.2 = getelementptr inbounds ptr addrspace(3), ptr addrspace(3) %b, i64 2
%ld.0 = load i32, ptr addrspace(3) %b.0, align 16
%ld.1 = load ptr addrspace(3), ptr addrspace(3) %b.1, align 4
%ld.2 = load <2 x i32>, ptr addrspace(3) %b.2, align 8
store i32 0, ptr addrspace(3) %a.0, align 16
store ptr addrspace(3) null, ptr addrspace(3) %a.1, align 4
store <2 x i32> <i32 0, i32 0>, ptr addrspace(3) %a.2, align 8
ret void
}
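; Same as @merge_ptr_i32, but with the <2 x i32> element first in the chain.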
; CHECK-LABEL: @merge_ptr_i32_vec_first(
; CHECK: load <4 x i32>
; CHECK: store <4 x i32>
define amdgpu_kernel void @merge_ptr_i32_vec_first(ptr addrspace(3) nocapture %a, ptr addrspace(3) nocapture readonly %b) #0 {
entry:
%a.0 = getelementptr inbounds ptr addrspace(3), ptr addrspace(3) %a, i64 0
%a.1 = getelementptr inbounds ptr addrspace(3), ptr addrspace(3) %a, i64 2
%a.2 = getelementptr inbounds ptr addrspace(3), ptr addrspace(3) %a, i64 3
%b.0 = getelementptr inbounds ptr addrspace(3), ptr addrspace(3) %b, i64 0
%b.1 = getelementptr inbounds ptr addrspace(3), ptr addrspace(3) %b, i64 2
%b.2 = getelementptr inbounds ptr addrspace(3), ptr addrspace(3) %b, i64 3
%ld.0 = load <2 x i32>, ptr addrspace(3) %b.0, align 16
%ld.1 = load ptr addrspace(3), ptr addrspace(3) %b.1, align 8
%ld.2 = load i32, ptr addrspace(3) %b.2, align 4
store <2 x i32> <i32 0, i32 0>, ptr addrspace(3) %a.0, align 16
store ptr addrspace(3) null, ptr addrspace(3) %a.1, align 8
store i32 0, ptr addrspace(3) %a.2, align 4
ret void
}
; CHECK-LABEL: @merge_load_i64_ptr64(
; CHECK: load <2 x i64>
; CHECK: [[ELT1:%[^ ]+]] = extractelement <2 x i64> %{{[^ ]+}}, i32 1

@@ -82,7 +82,7 @@ entry:
%a.ascast = addrspacecast ptr addrspace(5) %p to ptr
%b.ascast = addrspacecast ptr addrspace(5) %gep2 to ptr
%tmp1 = load i8, ptr %a.ascast, align 1
%tmp2 = load i8, ptr %b.ascast, align 1
%tmp2 = load i8, ptr %b.ascast, align 2
unreachable
}

@@ -1,10 +1,10 @@
; RUN: opt -mtriple=nvptx64-nvidia-cuda -passes=load-store-vectorizer -S -o - %s | FileCheck %s
define void @ldg_f16(ptr nocapture align 16 %rd0) {
%load1 = load <2 x half>, ptr %rd0, align 4
%load1 = load <2 x half>, ptr %rd0, align 16
%p1 = fcmp ogt <2 x half> %load1, zeroinitializer
%s1 = select <2 x i1> %p1, <2 x half> %load1, <2 x half> zeroinitializer
store <2 x half> %s1, ptr %rd0, align 4
store <2 x half> %s1, ptr %rd0, align 16
%in2 = getelementptr half, ptr %rd0, i64 2
%load2 = load <2 x half>, ptr %in2, align 4
%p2 = fcmp ogt <2 x half> %load2, zeroinitializer

File diff suppressed because it is too large.

@@ -0,0 +1,17 @@
; RUN: opt -mtriple=nvptx64-nvidia-cuda -passes=load-store-vectorizer -S -o - %s | FileCheck %s
; CHECK-LABEL: @overlapping_stores
; CHECK: store i16
; CHECK: store i16
; CHECK: store i16
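; The middle store (an i16 at byte offset 1) overlaps both of its neighbors,
; so none of the three stores may be merged.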
define void @overlapping_stores(ptr nocapture align 2 %ptr) {
%ptr0 = getelementptr i16, ptr %ptr, i64 0
%ptr1 = getelementptr i8, ptr %ptr, i64 1
%ptr2 = getelementptr i16, ptr %ptr, i64 1
store i16 0, ptr %ptr0, align 2
store i16 0, ptr %ptr1, align 1
store i16 0, ptr %ptr2, align 2
ret void
}

@@ -0,0 +1,33 @@
; RUN: opt -mtriple=nvptx64-nvidia-cuda -passes=load-store-vectorizer -S -o - %s | FileCheck %s
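; An <8 x i1> load covers one byte in memory here, so the four adjacent loads
; below span four contiguous bytes and can merge into a single <32 x i1> load.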
define void @i1x8(ptr nocapture align 4 %ptr) {
%ptr0 = getelementptr i8, ptr %ptr, i64 0
%ptr1 = getelementptr i8, ptr %ptr, i64 1
%ptr2 = getelementptr i8, ptr %ptr, i64 2
%ptr3 = getelementptr i8, ptr %ptr, i64 3
%l0 = load <8 x i1>, ptr %ptr0, align 4
%l1 = load <8 x i1>, ptr %ptr1, align 1
%l2 = load <8 x i1>, ptr %ptr2, align 2
%l3 = load <8 x i1>, ptr %ptr3, align 1
ret void
; CHECK-LABEL: @i1x8
; CHECK-DAG: load <32 x i1>
}
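; Same idea with mixed widths: one, two, and one byte of i1 elements, again
; merging into a single <32 x i1> load.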
define void @i1x8x16x8(ptr nocapture align 4 %ptr) {
%ptr0 = getelementptr i8, ptr %ptr, i64 0
%ptr1 = getelementptr i8, ptr %ptr, i64 1
%ptr2 = getelementptr i8, ptr %ptr, i64 3
%l0 = load <8 x i1>, ptr %ptr0, align 4
%l2 = load <16 x i1>, ptr %ptr1, align 1
%l3 = load <8 x i1>, ptr %ptr2, align 1
ret void
; CHECK-LABEL: @i1x8x16x8
; CHECK-DAG: load <32 x i1>
}

@@ -0,0 +1,17 @@
; RUN: opt -mtriple=nvptx64-nvidia-cuda -passes=load-store-vectorizer -S -o - %s | FileCheck %s
; CHECK-LABEL: @int16x2
; CHECK: load <2 x i16>
; CHECK: store <2 x i16>
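; The two adjacent i16 loads merge into one <2 x i16> load, and the two
; (lane-swapped) i16 stores merge into one <2 x i16> store.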
define void @int16x2(ptr nocapture align 4 %ptr) {
%ptr0 = getelementptr i16, ptr %ptr, i64 0
%ptr1 = getelementptr i16, ptr %ptr, i64 1
%l0 = load i16, ptr %ptr0, align 4
%l1 = load i16, ptr %ptr1, align 2
store i16 %l1, ptr %ptr0, align 4
store i16 %l0, ptr %ptr1, align 2
ret void
}

@@ -0,0 +1,21 @@
; RUN: opt -mtriple=nvptx64-nvidia-cuda -passes=load-store-vectorizer -S -o - %s | FileCheck %s
; We don't need to vectorize this. Just make sure it doesn't crash.
; CHECK-LABEL: @int24x2
; CHECK: load i24
; CHECK: load i24
; CHECK: store i24
; CHECK: store i24
define void @int24x2(ptr nocapture align 4 %ptr) {
%ptr0 = getelementptr i24, ptr %ptr, i64 0
%ptr1 = getelementptr i24, ptr %ptr, i64 1
%l0 = load i24, ptr %ptr0, align 4
%l1 = load i24, ptr %ptr1, align 1
store i24 %l1, ptr %ptr0, align 4
store i24 %l0, ptr %ptr1, align 1
ret void
}

@@ -1,4 +1,3 @@
; NOTE: Assertions have been autogenerated by utils/update_test_checks.py
; RUN: opt -mtriple=nvptx64-nvidia-cuda -passes=load-store-vectorizer -S -o - %s | FileCheck %s
; Vectorize and emit valid code (Issue #54896).
@@ -41,8 +40,10 @@ define void @int8x3a4(ptr nocapture align 4 %ptr) {
ret void
; CHECK-LABEL: @int8x3a4
; CHECK: load <3 x i8>
; CHECK: store <3 x i8>
; CHECK: load <2 x i8>
; CHECK: load i8
; CHECK: store <2 x i8>
; CHECK: store i8
}
define void @int8x12a4(ptr nocapture align 4 %ptr) {

@@ -0,0 +1,17 @@
; RUN: opt -mtriple=nvptx64-nvidia-cuda -passes=load-store-vectorizer -S -o - %s | FileCheck %s
; CHECK-LABEL: @int8x3Plus1
; CHECK: load <4 x i8>
; CHECK: store <4 x i8>
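; The <3 x i8> access plus the adjacent i8 covers four contiguous bytes, so
; the pair merges into a single <4 x i8> load and store.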
define void @int8x3Plus1(ptr nocapture align 4 %ptr) {
%ptr0 = getelementptr i8, ptr %ptr, i64 0
%ptr3 = getelementptr i8, ptr %ptr, i64 3
%l0 = load <3 x i8>, ptr %ptr0, align 4
%l1 = load i8, ptr %ptr3, align 1
store <3 x i8> <i8 0, i8 0, i8 0>, ptr %ptr0, align 4
store i8 0, ptr %ptr3, align 1
ret void
}

@@ -7,12 +7,13 @@ target datalayout = "e-m:e-i64:64-i128:128-n32:64-S128"
define void @correct_order(ptr noalias %ptr) {
; CHECK-LABEL: @correct_order(
; CHECK-NEXT: [[NEXT_GEP1:%.*]] = getelementptr i32, ptr [[PTR:%.*]], i64 1
; CHECK-NEXT: [[TMP2:%.*]] = load <2 x i32>, ptr [[NEXT_GEP1]], align 4
; CHECK-NEXT: [[L11:%.*]] = extractelement <2 x i32> [[TMP2]], i32 0
; CHECK-NEXT: [[L42:%.*]] = extractelement <2 x i32> [[TMP2]], i32 1
; CHECK-NEXT: [[L2:%.*]] = load i32, ptr [[PTR]], align 4
; CHECK-NEXT: [[TMP1:%.*]] = load <2 x i32>, ptr [[PTR]], align 4
; CHECK-NEXT: [[L21:%.*]] = extractelement <2 x i32> [[TMP1]], i32 0
; CHECK-NEXT: [[L12:%.*]] = extractelement <2 x i32> [[TMP1]], i32 1
; CHECK-NEXT: store <2 x i32> zeroinitializer, ptr [[PTR]], align 4
; CHECK-NEXT: [[L3:%.*]] = load i32, ptr [[NEXT_GEP1]], align 4
; CHECK-NEXT: [[TMP2:%.*]] = load <2 x i32>, ptr [[NEXT_GEP1]], align 4
; CHECK-NEXT: [[L33:%.*]] = extractelement <2 x i32> [[TMP2]], i32 0
; CHECK-NEXT: [[L44:%.*]] = extractelement <2 x i32> [[TMP2]], i32 1
; CHECK-NEXT: ret void
;
%next.gep1 = getelementptr i32, ptr %ptr, i64 1

@@ -8,9 +8,8 @@ target datalayout = "e-m:e-i64:64-i128:128-n32:64-S128"
; CHECK-LABEL: @interleave_2L_2S(
; CHECK: load <2 x i32>
; CHECK: load i32
; CHECK: store <2 x i32>
; CHECK: load i32
; CHECK: load <2 x i32>
define void @interleave_2L_2S(ptr noalias %ptr) {
%next.gep1 = getelementptr i32, ptr %ptr, i64 1
%next.gep2 = getelementptr i32, ptr %ptr, i64 2
@@ -26,9 +25,9 @@ define void @interleave_2L_2S(ptr noalias %ptr) {
}
; CHECK-LABEL: @interleave_3L_2S_1L(
; CHECK: load <3 x i32>
; CHECK: load <2 x i32>
; CHECK: store <2 x i32>
; CHECK: load i32
; CHECK: load <2 x i32>
define void @interleave_3L_2S_1L(ptr noalias %ptr) {
%next.gep1 = getelementptr i32, ptr %ptr, i64 1
@@ -82,15 +81,10 @@ define void @chain_prefix_suffix(ptr noalias %ptr) {
ret void
}
; FIXME: If the chain is too long and TLI says misaligned is not fast,
; then LSV fails to vectorize anything in that chain.
; To reproduce below, add a tmp5 (ptr+4) and load tmp5 into l6 and l7.
; CHECK-LABEL: @interleave_get_longest
; CHECK: load <3 x i32>
; CHECK: load i32
; CHECK: load <2 x i32>
; CHECK: store <2 x i32> zeroinitializer
; CHECK: load i32
; CHECK: load <3 x i32>
; CHECK: load i32
; CHECK: load i32
@@ -98,6 +92,7 @@ define void @interleave_get_longest(ptr noalias %ptr) {
%tmp2 = getelementptr i32, ptr %ptr, i64 1
%tmp3 = getelementptr i32, ptr %ptr, i64 2
%tmp4 = getelementptr i32, ptr %ptr, i64 3
%tmp5 = getelementptr i32, ptr %ptr, i64 4
%l1 = load i32, ptr %tmp2, align 4
%l2 = load i32, ptr %ptr, align 4
@@ -106,8 +101,32 @@ define void @interleave_get_longest(ptr noalias %ptr) {
%l3 = load i32, ptr %tmp2, align 4
%l4 = load i32, ptr %tmp3, align 4
%l5 = load i32, ptr %tmp4, align 4
%l6 = load i32, ptr %tmp4, align 4
%l7 = load i32, ptr %tmp4, align 4
%l6 = load i32, ptr %tmp5, align 4
%l7 = load i32, ptr %tmp5, align 4
ret void
}
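; Same chain as @interleave_get_longest, but the loads after the store are
; better aligned, so the whole four-load tail merges into a <4 x i32>.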
; CHECK-LABEL: @interleave_get_longest_aligned
; CHECK: load <2 x i32>
; CHECK: store <2 x i32> zeroinitializer
; CHECK: load <4 x i32>
define void @interleave_get_longest_aligned(ptr noalias %ptr) {
%tmp2 = getelementptr i32, ptr %ptr, i64 1
%tmp3 = getelementptr i32, ptr %ptr, i64 2
%tmp4 = getelementptr i32, ptr %ptr, i64 3
%tmp5 = getelementptr i32, ptr %ptr, i64 4
%l1 = load i32, ptr %tmp2, align 4
%l2 = load i32, ptr %ptr, align 4
store i32 0, ptr %tmp2, align 4
store i32 0, ptr %ptr, align 4
%l3 = load i32, ptr %tmp2, align 16
%l4 = load i32, ptr %tmp3, align 4
%l5 = load i32, ptr %tmp4, align 8
%l6 = load i32, ptr %tmp5, align 4
%l7 = load i32, ptr %tmp5, align 4
ret void
}

@@ -4,8 +4,7 @@
; Check that the LoadStoreVectorizer does not crash due to not differentiating <1 x T> and T.
; CHECK-LABEL: @vector_scalar(
; CHECK: store double
; CHECK: store <1 x double>
; CHECK: store <2 x double>
define void @vector_scalar(ptr %ptr, double %a, <1 x double> %b) {
%1 = getelementptr <1 x double>, ptr %ptr, i32 1
store double %a, ptr %ptr, align 8

@@ -55,53 +55,6 @@ bb:
ret void
}
define void @ld_v4i8_add_nuw(i32 %v0, i32 %v1, ptr %src, ptr %dst) {
; CHECK-LABEL: @ld_v4i8_add_nuw(
; CHECK-NEXT: bb:
; CHECK-NEXT: [[TMP:%.*]] = add nuw i32 [[V0:%.*]], -1
; CHECK-NEXT: [[TMP1:%.*]] = add nuw i32 [[V1:%.*]], [[TMP]]
; CHECK-NEXT: [[TMP2:%.*]] = zext i32 [[TMP1]] to i64
; CHECK-NEXT: [[TMP3:%.*]] = getelementptr inbounds i8, ptr [[SRC:%.*]], i64 [[TMP2]]
; CHECK-NEXT: [[TMP1:%.*]] = load <4 x i8>, ptr [[TMP3]], align 1
; CHECK-NEXT: [[TMP41:%.*]] = extractelement <4 x i8> [[TMP1]], i32 0
; CHECK-NEXT: [[TMP82:%.*]] = extractelement <4 x i8> [[TMP1]], i32 1
; CHECK-NEXT: [[TMP133:%.*]] = extractelement <4 x i8> [[TMP1]], i32 2
; CHECK-NEXT: [[TMP184:%.*]] = extractelement <4 x i8> [[TMP1]], i32 3
; CHECK-NEXT: [[TMP19:%.*]] = insertelement <4 x i8> poison, i8 [[TMP41]], i32 0
; CHECK-NEXT: [[TMP20:%.*]] = insertelement <4 x i8> [[TMP19]], i8 [[TMP82]], i32 1
; CHECK-NEXT: [[TMP21:%.*]] = insertelement <4 x i8> [[TMP20]], i8 [[TMP133]], i32 2
; CHECK-NEXT: [[TMP22:%.*]] = insertelement <4 x i8> [[TMP21]], i8 [[TMP184]], i32 3
; CHECK-NEXT: store <4 x i8> [[TMP22]], ptr [[DST:%.*]], align 4
; CHECK-NEXT: ret void
;
bb:
%tmp = add nuw i32 %v0, -1
%tmp1 = add nuw i32 %v1, %tmp
%tmp2 = zext i32 %tmp1 to i64
%tmp3 = getelementptr inbounds i8, ptr %src, i64 %tmp2
%tmp4 = load i8, ptr %tmp3, align 1
%tmp5 = add nuw i32 %v1, %v0
%tmp6 = zext i32 %tmp5 to i64
%tmp7 = getelementptr inbounds i8, ptr %src, i64 %tmp6
%tmp8 = load i8, ptr %tmp7, align 1
%tmp9 = add nuw i32 %v0, 1
%tmp10 = add nuw i32 %v1, %tmp9
%tmp11 = zext i32 %tmp10 to i64
%tmp12 = getelementptr inbounds i8, ptr %src, i64 %tmp11
%tmp13 = load i8, ptr %tmp12, align 1
%tmp14 = add nuw i32 %v0, 2
%tmp15 = add nuw i32 %v1, %tmp14
%tmp16 = zext i32 %tmp15 to i64
%tmp17 = getelementptr inbounds i8, ptr %src, i64 %tmp16
%tmp18 = load i8, ptr %tmp17, align 1
%tmp19 = insertelement <4 x i8> poison, i8 %tmp4, i32 0
%tmp20 = insertelement <4 x i8> %tmp19, i8 %tmp8, i32 1
%tmp21 = insertelement <4 x i8> %tmp20, i8 %tmp13, i32 2
%tmp22 = insertelement <4 x i8> %tmp21, i8 %tmp18, i32 3
store <4 x i8> %tmp22, ptr %dst
ret void
}
; Make sure we don't vectorize the loads below because the source of
; sext instructions doesn't have the nsw flag.

@@ -55,53 +55,6 @@ bb:
ret void
}
define void @ld_v4i8_add_nuw(i32 %v0, i32 %v1, ptr %src, ptr %dst) {
; CHECK-LABEL: @ld_v4i8_add_nuw(
; CHECK-NEXT: bb:
; CHECK-NEXT: [[TMP:%.*]] = add nuw i32 [[V0:%.*]], -1
; CHECK-NEXT: [[TMP1:%.*]] = add nuw i32 [[V1:%.*]], [[TMP]]
; CHECK-NEXT: [[TMP2:%.*]] = zext i32 [[TMP1]] to i64
; CHECK-NEXT: [[TMP3:%.*]] = getelementptr inbounds i8, ptr [[SRC:%.*]], i64 [[TMP2]]
; CHECK-NEXT: [[TMP1:%.*]] = load <4 x i8>, ptr [[TMP3]], align 1
; CHECK-NEXT: [[TMP41:%.*]] = extractelement <4 x i8> [[TMP1]], i32 0
; CHECK-NEXT: [[TMP82:%.*]] = extractelement <4 x i8> [[TMP1]], i32 1
; CHECK-NEXT: [[TMP133:%.*]] = extractelement <4 x i8> [[TMP1]], i32 2
; CHECK-NEXT: [[TMP184:%.*]] = extractelement <4 x i8> [[TMP1]], i32 3
; CHECK-NEXT: [[TMP19:%.*]] = insertelement <4 x i8> undef, i8 [[TMP41]], i32 0
; CHECK-NEXT: [[TMP20:%.*]] = insertelement <4 x i8> [[TMP19]], i8 [[TMP82]], i32 1
; CHECK-NEXT: [[TMP21:%.*]] = insertelement <4 x i8> [[TMP20]], i8 [[TMP133]], i32 2
; CHECK-NEXT: [[TMP22:%.*]] = insertelement <4 x i8> [[TMP21]], i8 [[TMP184]], i32 3
; CHECK-NEXT: store <4 x i8> [[TMP22]], ptr [[DST:%.*]]
; CHECK-NEXT: ret void
;
bb:
%tmp = add nuw i32 %v0, -1
%tmp1 = add nuw i32 %v1, %tmp
%tmp2 = zext i32 %tmp1 to i64
%tmp3 = getelementptr inbounds i8, ptr %src, i64 %tmp2
%tmp4 = load i8, ptr %tmp3, align 1
%tmp5 = add nuw i32 %v1, %v0
%tmp6 = zext i32 %tmp5 to i64
%tmp7 = getelementptr inbounds i8, ptr %src, i64 %tmp6
%tmp8 = load i8, ptr %tmp7, align 1
%tmp9 = add nuw i32 %v0, 1
%tmp10 = add nuw i32 %v1, %tmp9
%tmp11 = zext i32 %tmp10 to i64
%tmp12 = getelementptr inbounds i8, ptr %src, i64 %tmp11
%tmp13 = load i8, ptr %tmp12, align 1
%tmp14 = add nuw i32 %v0, 2
%tmp15 = add nuw i32 %v1, %tmp14
%tmp16 = zext i32 %tmp15 to i64
%tmp17 = getelementptr inbounds i8, ptr %src, i64 %tmp16
%tmp18 = load i8, ptr %tmp17, align 1
%tmp19 = insertelement <4 x i8> undef, i8 %tmp4, i32 0
%tmp20 = insertelement <4 x i8> %tmp19, i8 %tmp8, i32 1
%tmp21 = insertelement <4 x i8> %tmp20, i8 %tmp13, i32 2
%tmp22 = insertelement <4 x i8> %tmp21, i8 %tmp18, i32 3
store <4 x i8> %tmp22, ptr %dst
ret void
}
; Apply different operand orders for the nested add sequences
define void @ld_v4i8_add_nsw_operand_orders(i32 %v0, i32 %v1, ptr %src, ptr %dst) {
; CHECK-LABEL: @ld_v4i8_add_nsw_operand_orders(
@@ -150,54 +103,6 @@ bb:
ret void
}
; Apply different operand orders for the nested add sequences
define void @ld_v4i8_add_nuw_operand_orders(i32 %v0, i32 %v1, ptr %src, ptr %dst) {
; CHECK-LABEL: @ld_v4i8_add_nuw_operand_orders(
; CHECK-NEXT: bb:
; CHECK-NEXT: [[TMP:%.*]] = add nuw i32 [[V0:%.*]], -1
; CHECK-NEXT: [[TMP1:%.*]] = add nuw i32 [[V1:%.*]], [[TMP]]
; CHECK-NEXT: [[TMP2:%.*]] = zext i32 [[TMP1]] to i64
; CHECK-NEXT: [[TMP3:%.*]] = getelementptr inbounds i8, ptr [[SRC:%.*]], i64 [[TMP2]]
; CHECK-NEXT: [[TMP1:%.*]] = load <4 x i8>, ptr [[TMP3]], align 1
; CHECK-NEXT: [[TMP41:%.*]] = extractelement <4 x i8> [[TMP1]], i32 0
; CHECK-NEXT: [[TMP82:%.*]] = extractelement <4 x i8> [[TMP1]], i32 1
; CHECK-NEXT: [[TMP133:%.*]] = extractelement <4 x i8> [[TMP1]], i32 2
; CHECK-NEXT: [[TMP184:%.*]] = extractelement <4 x i8> [[TMP1]], i32 3
; CHECK-NEXT: [[TMP19:%.*]] = insertelement <4 x i8> undef, i8 [[TMP41]], i32 0
; CHECK-NEXT: [[TMP20:%.*]] = insertelement <4 x i8> [[TMP19]], i8 [[TMP82]], i32 1
; CHECK-NEXT: [[TMP21:%.*]] = insertelement <4 x i8> [[TMP20]], i8 [[TMP133]], i32 2
; CHECK-NEXT: [[TMP22:%.*]] = insertelement <4 x i8> [[TMP21]], i8 [[TMP184]], i32 3
; CHECK-NEXT: store <4 x i8> [[TMP22]], ptr [[DST:%.*]]
; CHECK-NEXT: ret void
;
bb:
%tmp = add nuw i32 %v0, -1
%tmp1 = add nuw i32 %v1, %tmp
%tmp2 = zext i32 %tmp1 to i64
%tmp3 = getelementptr inbounds i8, ptr %src, i64 %tmp2
%tmp4 = load i8, ptr %tmp3, align 1
%tmp5 = add nuw i32 %v0, %v1
%tmp6 = zext i32 %tmp5 to i64
%tmp7 = getelementptr inbounds i8, ptr %src, i64 %tmp6
%tmp8 = load i8, ptr %tmp7, align 1
%tmp9 = add nuw i32 %v0, 1
%tmp10 = add nuw i32 %tmp9, %v1
%tmp11 = zext i32 %tmp10 to i64
%tmp12 = getelementptr inbounds i8, ptr %src, i64 %tmp11
%tmp13 = load i8, ptr %tmp12, align 1
%tmp14 = add nuw i32 %v0, 2
%tmp15 = add nuw i32 %v1, %tmp14
%tmp16 = zext i32 %tmp15 to i64
%tmp17 = getelementptr inbounds i8, ptr %src, i64 %tmp16
%tmp18 = load i8, ptr %tmp17, align 1
%tmp19 = insertelement <4 x i8> undef, i8 %tmp4, i32 0
%tmp20 = insertelement <4 x i8> %tmp19, i8 %tmp8, i32 1
%tmp21 = insertelement <4 x i8> %tmp20, i8 %tmp13, i32 2
%tmp22 = insertelement <4 x i8> %tmp21, i8 %tmp18, i32 3
store <4 x i8> %tmp22, ptr %dst
ret void
}
define void @ld_v4i8_add_known_bits(i32 %ind0, i32 %ind1, ptr %src, ptr %dst) {
; CHECK-LABEL: @ld_v4i8_add_known_bits(
; CHECK-NEXT: bb:

@@ -78,9 +78,9 @@ define void @test_inaccessiblememonly_not_willreturn(ptr %p) {
; CHECK-NEXT: [[P2:%.*]] = getelementptr float, ptr [[P]], i64 2
; CHECK-NEXT: [[P3:%.*]] = getelementptr float, ptr [[P]], i64 3
; CHECK-NEXT: [[L0:%.*]] = load float, ptr [[P]], align 16
; CHECK-NEXT: call void @foo() #[[ATTR2:[0-9]+]]
; CHECK-NEXT: [[L1:%.*]] = load float, ptr [[P1]], align 4
; CHECK-NEXT: [[L2:%.*]] = load float, ptr [[P2]], align 4
; CHECK-NEXT: call void @foo() #[[ATTR2:[0-9]+]]
; CHECK-NEXT: [[L3:%.*]] = load float, ptr [[P3]], align 4
; CHECK-NEXT: store float [[L0]], ptr [[P]], align 16
; CHECK-NEXT: call void @foo() #[[ATTR2]]
@@ -93,9 +93,9 @@ define void @test_inaccessiblememonly_not_willreturn(ptr %p) {
%p2 = getelementptr float, ptr %p, i64 2
%p3 = getelementptr float, ptr %p, i64 3
%l0 = load float, ptr %p, align 16
call void @foo() inaccessiblememonly nounwind
%l1 = load float, ptr %p1
%l2 = load float, ptr %p2
call void @foo() inaccessiblememonly nounwind
%l3 = load float, ptr %p3
store float %l0, ptr %p, align 16
call void @foo() inaccessiblememonly nounwind
@@ -111,9 +111,9 @@ define void @test_inaccessiblememonly_not_nounwind(ptr %p) {
; CHECK-NEXT: [[P2:%.*]] = getelementptr float, ptr [[P]], i64 2
; CHECK-NEXT: [[P3:%.*]] = getelementptr float, ptr [[P]], i64 3
; CHECK-NEXT: [[L0:%.*]] = load float, ptr [[P]], align 16
; CHECK-NEXT: call void @foo() #[[ATTR3:[0-9]+]]
; CHECK-NEXT: [[L1:%.*]] = load float, ptr [[P1]], align 4
; CHECK-NEXT: [[L2:%.*]] = load float, ptr [[P2]], align 4
; CHECK-NEXT: call void @foo() #[[ATTR3:[0-9]+]]
; CHECK-NEXT: [[L3:%.*]] = load float, ptr [[P3]], align 4
; CHECK-NEXT: store float [[L0]], ptr [[P]], align 16
; CHECK-NEXT: call void @foo() #[[ATTR3]]
@@ -126,9 +126,9 @@ define void @test_inaccessiblememonly_not_nounwind(ptr %p) {
%p2 = getelementptr float, ptr %p, i64 2
%p3 = getelementptr float, ptr %p, i64 3
%l0 = load float, ptr %p, align 16
call void @foo() inaccessiblememonly willreturn
%l1 = load float, ptr %p1
%l2 = load float, ptr %p2
call void @foo() inaccessiblememonly willreturn
%l3 = load float, ptr %p3
store float %l0, ptr %p, align 16
call void @foo() inaccessiblememonly willreturn