clang-p2996/llvm/test/Transforms/SLPVectorizer/X86/consecutive-access.ll
Roman Tereshin ed047b0184 [SCEV] Add [zs]ext{C,+,x} -> (D + [zs]ext{C-D,+,x})<nuw><nsw> transform
as well as sext(C + x + ...) -> (D + sext(C-D + x + ...))<nuw><nsw>,
similar to the equivalent transformation for zext's,

if the top-level addition in (D + (C-D + x * n)) can be proven not to
wrap, where the choice of D also maximizes the number of trailing
zeroes of (C-D + x * n), ensuring homogeneous behaviour of the
transformation and better canonicalization of such AddRec's

(indeed, there are 2^(2w) different expressions in `B1 + ext(B2 + Y)` form for
the same Y, but only 2^(2w - k) different expressions in the resulting `B3 +
ext((B4 * 2^k) + Y)` form, where w is the bit width of the integral type)
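
To make the headline AddRec rewrite concrete, here is a minimal
brute-force check in i8, modeled with plain C++ (the constants 21 and
12, and the split D = 1, are chosen for illustration; this is not the
patch's code):

  #include <cassert>
  #include <cstdint>

  int main() {
    // Walk the i8 AddRecs {21,+,12} and {20,+,12} side by side, using
    // uint8_t for wrapping i8 arithmetic and uint32_t casts for zext.
    // 20 + 12m is always a multiple of 4, so its i8 value is at most
    // 252 and adding D = 1 after the zext can never wrap:
    //   zext({21,+,12}) == (1 + zext({20,+,12}))<nuw><nsw>
    uint8_t Iter = 21, IterSplit = 20;
    for (int M = 0; M < 1000; ++M) {
      assert(uint32_t(Iter) == 1u + uint32_t(IterSplit));
      Iter += 12;      // wrapping i8 step
      IterSplit += 12; // same step in the split form
    }
    return 0;
  }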

This patch generalizes the sext(C1 + C2*X) --> sext(C1) + sext(C2*X) and
sext{C1,+,C2} --> sext(C1) + sext{0,+,C2} transformations added in
r209568, relaxing the requirements in the following ways:

1. C2 doesn't have to be a power of 2; it's enough if it's divisible
   by 2 a sufficient number of times;
2. C1 doesn't have to be less than C2; instead of extracting the entire
   C1 we can split it into 2 terms: (00...0XXX + YY...Y000), keep the
   second one, which may cause wrapping within the extension operator,
   and move the first one, which doesn't affect wrapping, out of the
   extension operator, enabling further simplifications;
3. C1 and C2 don't have to be positive; splitting C1 as shown above
   produces a sum that is guaranteed not to wrap, signed or unsigned;
4. in the AddExpr case there could be more than 2 terms, and the 2nd
   and following terms of an AddExpr, or the Step component of an
   AddRecExpr, don't have to be in the C2*X form or constant
   (respectively); they just need to have enough trailing zeros,
   which in turn could be guaranteed by means other than arithmetic,
   e.g. by pointer alignment;
5. the extension operator doesn't have to be a sext; the same
   transformation works and is profitable for zext's as well.

Apparently, optimizations like SLPVectorizer currently fail to
vectorize even rather trivial cases like the following:

 double bar(double *a, unsigned n) {
   double x = 0.0;
   double y = 0.0;
   for (unsigned i = 0; i < n; i += 2) {
     x += a[i];
     y += a[i + 1];
   }
   return x * y;
 }

If compiled with `clang -std=c11 -Wpedantic -Wall -O3 main.c -S -o - -emit-llvm`
(!{!"clang version 7.0.0 (trunk 337339) (llvm/trunk 337344)"})

it produces scalar code, with the loop not unrolled, for the unsigned `n`
and `i` (as shown above), but a vectorized and unrolled loop for signed
`n` and `i`. With the changes made in this commit the unsigned version
is also vectorized (though not unrolled, for unclear reasons).

How it all works:

Let's say we have an AddExpr that looks like (C + x + y + ...), where C
is a constant and x, y, ... are arbitrary SCEVs. Let's compute the
guaranteed minimum number of trailing zeroes of that sum without the
constant term: (x + y + ...). If, for example, those terms look as
follows:

        i
XXXX...X000
YYYY...YY00
   ...
ZZZZ...0000

then the rightmost non-guaranteed-zero bit (a potential one at the
i-th position above) can change the bits of the sum to the left of
(and at) the i-th position, but it cannot possibly change the bits to
the right. So we can compute the number of trailing zeroes of the sum
by taking the minimum of the numbers of trailing zeroes of the terms.
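
As a quick sanity check of that bound on concrete values (a hedged
illustration only; SCEV derives the same bound symbolically, via its
GetMinTrailingZeros helper at the time of this commit):

  #include <algorithm>
  #include <cassert>
  #include <cstdint>

  static unsigned tz(uint32_t V) { return V ? __builtin_ctz(V) : 32; }

  int main() {
    // ctz(x + y) >= min(ctz(x), ctz(y)), even for wrapping adds:
    // a carry out of bit i only travels left, so the guaranteed low
    // zero bits of both terms survive in the sum.
    uint32_t Samples[] = {8, 12, 48, 1024, 0xFFFFFFF0u};
    for (uint32_t X : Samples)
      for (uint32_t Y : Samples)
        assert(tz(X + Y) >= std::min(tz(X), tz(Y)));
    return 0;
  }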

Now let's say that our original sum with the constant is effectively
just C + X, where X = x + y + .... Let's also say that we've got 2
guaranteed trailing zeros for X:

         j
CCCC...CCCC
XXXX...XX00  // this is X = (x + y + ...)

Any bit of C to the left of j may in the end cause the C + X sum to
wrap, but the rightmost 2 bits of C (at positions j and j - 1) do not
affect wrapping in any way. If the upper bits cause a wrap, it will be
a wrap regardless of the values of the 2 least significant bits of C.
If the upper bits do not cause a wrap, it won't be a wrap regardless
of the values of the 2 bits on the right (again).

So let's split C into 2 constants as follows:

0000...00CC  = D
CCCC...CC00  = (C - D)

and represent the whole sum as D + (C - D + X). The second term of
this new sum looks like this:

CCCC...CC00
XXXX...XX00
-----------  // let's add them up
YYYY...YY00

The sum above (let's call it Y) may or may not wrap, we don't know,
so we need to keep it under a sext/zext. Adding D to that sum, though,
will never wrap, signed or unsigned, whether performed on the original
bit width or the extended one, because all that final add does is
set the 2 least significant bits of Y to the bits of D:

YYYY...YY00 = Y
0000...00CC = D
-----------  <nuw><nsw>
YYYY...YYCC

Which means we can safely move that D out of the sext or zext and
claim that the top-level sum neither sign wraps nor unsigned wraps.
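
A minimal sketch of this split in plain C++, assuming an i8-to-i32
zext and an X with at least K guaranteed trailing zero bits (the
function and its shape are illustrative, not the patch's actual SCEV
code):

  #include <cstdint>

  // Split C = D + (C - D), where D is the low K bits of C. The inner
  // add may wrap and stays under the zext; the outer add of D only
  // fills in the K guaranteed-zero low bits of (C - D) + X, so it can
  // neither carry out nor flip the sign bit: it is <nuw><nsw>.
  uint32_t zextOfAdd(uint8_t C, uint8_t X /* >= K trailing zeros */,
                     unsigned K) {
    uint8_t D = C & uint8_t((1u << K) - 1);
    uint8_t Inner = uint8_t((C - D) + X); // may wrap; keep under the zext
    return uint32_t(D) + uint32_t(Inner); // == zext(C + X)
  }

With C = 21 and K = 2 this yields D = 1 and returns 1 + zext(20 + X),
matching the worked example below.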

Let's run through an example. Say we're working in i8's and the original
expression (the zext's or sext's operand) is 21 + 12x + 8y. It goes
like this:

0001 0101  // 21
XXXX XX00  // 12x
YYYY Y000  // 8y

0001 0101  // 21
ZZZZ ZZ00  // 12x + 8y

0000 0001  // D
0001 0100  // 21 - D = 20
ZZZZ ZZ00  // 12x + 8y

0000 0001  // D
WWWW WW00  // 21 - D + 12x + 8y = 20 + 12x + 8y

therefore zext(21 + 12x + 8y) = (1 + zext(20 + 12x + 8y))<nuw><nsw>
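
That identity is easy to verify exhaustively; a small brute-force
check (modeling i8 with uint8_t and zext with uint32_t casts;
illustration only):

  #include <cassert>
  #include <cstdint>

  int main() {
    // For every i8 value of x and y, 20 + 12x + 8y is a multiple of 4,
    // so its i8 value is at most 252 and adding 1 after the zext never
    // wraps: zext(21 + 12x + 8y) == 1 + zext(20 + 12x + 8y).
    for (unsigned x = 0; x < 256; ++x)
      for (unsigned y = 0; y < 256; ++y) {
        uint8_t LHS = uint8_t(21 + 12 * x + 8 * y);
        uint8_t RHS = uint8_t(20 + 12 * x + 8 * y);
        assert(uint32_t(LHS) == 1u + uint32_t(RHS));
      }
    return 0;
  }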

This approach could be improved if we move away from using trailing
zeroes and use KnownBits instead. For instance, with KnownBits we could
have the following picture:

    i
10 1110...0011  // this is C
XX X1XX...XX00  // this is X = (x + y + ...)

Notice that some of the bits of X are known ones, and that the known
bits of X are interspersed with unknown bits rather than grouped on
the right or left.

We can see at position i that C(i) and X(i) are both known ones,
therefore the (i + 1)-th carry bit is guaranteed to be 1 regardless of
the bits of C to the right of i. For instance, the C(i - 1) bit only
affects the bits of the sum at positions i - 1 and i, and does not
influence whether the sum is going to wrap. Therefore we could split
the constant C as follows:

    i
00 0010...0011  = D
10 1100...0000  = (C - D)

Let's compute the KnownBits of (C - D) + X:

XX1 1            = carry bits, blanks stand for known zeroes
 10 1100...0000  = (C - D)
 XX X1XX...XX00  = X
--- -----------
 XX X0XX...XX00

Whether this add wraps essentially depends on the bits of X. Adding D
to this sum, however, is guaranteed not to wrap:

0    X
 00 0010...0011  = D
 sX X0XX...XX00  = (C - D) + X
--- -----------
 sX XXXX...XX11

As can be seen above, adding D preserves the sign bit of (C - D) +
X, if any, and has a guaranteed 0 carry out, as expected.

The more bits of (C - D) we constrain, the better the transformations
introduced here canonicalize expressions, as it leaves less freedom in
what values the constant part of ((C - D) + x + y + ...) can take.
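
A very rough sketch of that suggested refinement, modeling X's
known-one bits as a plain mask (this is the future work described
above, not something the patch implements; the helper name is made
up):

  #include <cstdint>

  // Pick D as the bits of C strictly below the highest position i at
  // which both C and X are known to be one: the carry out of position
  // i is then guaranteed, so those low bits of C cannot influence
  // whether C + X wraps and can be moved out of the extension.
  uint32_t pickD(uint32_t C, uint32_t XKnownOne) {
    uint32_t Common = C & XKnownOne;
    if (Common == 0)
      return 0; // no guaranteed carry position; nothing to split off
    unsigned I = 31 - unsigned(__builtin_clz(Common));
    return C & ((1u << I) - 1); // D = bits of C below position i
  }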

Reviewed By: mzolotukhin, efriedma

Differential Revision: https://reviews.llvm.org/D48853

llvm-svn: 337943
2018-07-25 18:01:41 +00:00


; RUN: opt < %s -basicaa -slp-vectorizer -S | FileCheck %s
target datalayout = "e-m:o-i64:64-f80:128-n8:16:32:64-S128"
target triple = "x86_64-apple-macosx10.9.0"
@A = common global [2000 x double] zeroinitializer, align 16
@B = common global [2000 x double] zeroinitializer, align 16
@C = common global [2000 x float] zeroinitializer, align 16
@D = common global [2000 x float] zeroinitializer, align 16
; Currently SCEV isn't smart enough to figure out that accesses
; A[3*i], A[3*i+1] and A[3*i+2] are consecutive, but hopefully that
; will be fixed in the future. For now, check that this isn't
; vectorized.
; CHECK-LABEL: foo_3double
; CHECK-NOT: x double>
; Function Attrs: nounwind ssp uwtable
define void @foo_3double(i32 %u) #0 {
entry:
%u.addr = alloca i32, align 4
store i32 %u, i32* %u.addr, align 4
%mul = mul nsw i32 %u, 3
%idxprom = sext i32 %mul to i64
%arrayidx = getelementptr inbounds [2000 x double], [2000 x double]* @A, i32 0, i64 %idxprom
%0 = load double, double* %arrayidx, align 8
%arrayidx4 = getelementptr inbounds [2000 x double], [2000 x double]* @B, i32 0, i64 %idxprom
%1 = load double, double* %arrayidx4, align 8
%add5 = fadd double %0, %1
store double %add5, double* %arrayidx, align 8
%add11 = add nsw i32 %mul, 1
%idxprom12 = sext i32 %add11 to i64
%arrayidx13 = getelementptr inbounds [2000 x double], [2000 x double]* @A, i32 0, i64 %idxprom12
%2 = load double, double* %arrayidx13, align 8
%arrayidx17 = getelementptr inbounds [2000 x double], [2000 x double]* @B, i32 0, i64 %idxprom12
%3 = load double, double* %arrayidx17, align 8
%add18 = fadd double %2, %3
store double %add18, double* %arrayidx13, align 8
%add24 = add nsw i32 %mul, 2
%idxprom25 = sext i32 %add24 to i64
%arrayidx26 = getelementptr inbounds [2000 x double], [2000 x double]* @A, i32 0, i64 %idxprom25
%4 = load double, double* %arrayidx26, align 8
%arrayidx30 = getelementptr inbounds [2000 x double], [2000 x double]* @B, i32 0, i64 %idxprom25
%5 = load double, double* %arrayidx30, align 8
%add31 = fadd double %4, %5
store double %add31, double* %arrayidx26, align 8
ret void
}
; SCEV should be able to tell that accesses A[C1 + C2*i], A[C1 + C2*i + 1],
; A[C1 + C2*i + 2], ... are consecutive, if C2 is a power of 2, and C2 > C1 > 0.
; Thus, the following code should be vectorized.
; CHECK-LABEL: foo_2double
; CHECK: x double>
; Function Attrs: nounwind ssp uwtable
define void @foo_2double(i32 %u) #0 {
entry:
%u.addr = alloca i32, align 4
store i32 %u, i32* %u.addr, align 4
%mul = mul nsw i32 %u, 2
%idxprom = sext i32 %mul to i64
%arrayidx = getelementptr inbounds [2000 x double], [2000 x double]* @A, i32 0, i64 %idxprom
%0 = load double, double* %arrayidx, align 8
%arrayidx4 = getelementptr inbounds [2000 x double], [2000 x double]* @B, i32 0, i64 %idxprom
%1 = load double, double* %arrayidx4, align 8
%add5 = fadd double %0, %1
store double %add5, double* %arrayidx, align 8
%add11 = add nsw i32 %mul, 1
%idxprom12 = sext i32 %add11 to i64
%arrayidx13 = getelementptr inbounds [2000 x double], [2000 x double]* @A, i32 0, i64 %idxprom12
%2 = load double, double* %arrayidx13, align 8
%arrayidx17 = getelementptr inbounds [2000 x double], [2000 x double]* @B, i32 0, i64 %idxprom12
%3 = load double, double* %arrayidx17, align 8
%add18 = fadd double %2, %3
store double %add18, double* %arrayidx13, align 8
ret void
}
; Similar to the previous test, but with different datatype.
; CHECK-LABEL: foo_4float
; CHECK: x float>
; Function Attrs: nounwind ssp uwtable
define void @foo_4float(i32 %u) #0 {
entry:
%u.addr = alloca i32, align 4
store i32 %u, i32* %u.addr, align 4
%mul = mul nsw i32 %u, 4
%idxprom = sext i32 %mul to i64
%arrayidx = getelementptr inbounds [2000 x float], [2000 x float]* @C, i32 0, i64 %idxprom
%0 = load float, float* %arrayidx, align 4
%arrayidx4 = getelementptr inbounds [2000 x float], [2000 x float]* @D, i32 0, i64 %idxprom
%1 = load float, float* %arrayidx4, align 4
%add5 = fadd float %0, %1
store float %add5, float* %arrayidx, align 4
%add11 = add nsw i32 %mul, 1
%idxprom12 = sext i32 %add11 to i64
%arrayidx13 = getelementptr inbounds [2000 x float], [2000 x float]* @C, i32 0, i64 %idxprom12
%2 = load float, float* %arrayidx13, align 4
%arrayidx17 = getelementptr inbounds [2000 x float], [2000 x float]* @D, i32 0, i64 %idxprom12
%3 = load float, float* %arrayidx17, align 4
%add18 = fadd float %2, %3
store float %add18, float* %arrayidx13, align 4
%add24 = add nsw i32 %mul, 2
%idxprom25 = sext i32 %add24 to i64
%arrayidx26 = getelementptr inbounds [2000 x float], [2000 x float]* @C, i32 0, i64 %idxprom25
%4 = load float, float* %arrayidx26, align 4
%arrayidx30 = getelementptr inbounds [2000 x float], [2000 x float]* @D, i32 0, i64 %idxprom25
%5 = load float, float* %arrayidx30, align 4
%add31 = fadd float %4, %5
store float %add31, float* %arrayidx26, align 4
%add37 = add nsw i32 %mul, 3
%idxprom38 = sext i32 %add37 to i64
%arrayidx39 = getelementptr inbounds [2000 x float], [2000 x float]* @C, i32 0, i64 %idxprom38
%6 = load float, float* %arrayidx39, align 4
%arrayidx43 = getelementptr inbounds [2000 x float], [2000 x float]* @D, i32 0, i64 %idxprom38
%7 = load float, float* %arrayidx43, align 4
%add44 = fadd float %6, %7
store float %add44, float* %arrayidx39, align 4
ret void
}
; Similar to the previous tests, but now we are dealing with an AddRec SCEV.
; CHECK-LABEL: foo_loop
; CHECK: x double>
; Function Attrs: nounwind ssp uwtable
define i32 @foo_loop(double* %A, i32 %n) #0 {
entry:
%A.addr = alloca double*, align 8
%n.addr = alloca i32, align 4
%sum = alloca double, align 8
%i = alloca i32, align 4
store double* %A, double** %A.addr, align 8
store i32 %n, i32* %n.addr, align 4
store double 0.000000e+00, double* %sum, align 8
store i32 0, i32* %i, align 4
%cmp1 = icmp slt i32 0, %n
br i1 %cmp1, label %for.body.lr.ph, label %for.end
for.body.lr.ph: ; preds = %entry
br label %for.body
for.body: ; preds = %for.body.lr.ph, %for.body
%0 = phi i32 [ 0, %for.body.lr.ph ], [ %inc, %for.body ]
%1 = phi double [ 0.000000e+00, %for.body.lr.ph ], [ %add7, %for.body ]
%mul = mul nsw i32 %0, 2
%idxprom = sext i32 %mul to i64
%arrayidx = getelementptr inbounds double, double* %A, i64 %idxprom
%2 = load double, double* %arrayidx, align 8
%mul1 = fmul double 7.000000e+00, %2
%add = add nsw i32 %mul, 1
%idxprom3 = sext i32 %add to i64
%arrayidx4 = getelementptr inbounds double, double* %A, i64 %idxprom3
%3 = load double, double* %arrayidx4, align 8
%mul5 = fmul double 7.000000e+00, %3
%add6 = fadd double %mul1, %mul5
%add7 = fadd double %1, %add6
store double %add7, double* %sum, align 8
%inc = add nsw i32 %0, 1
store i32 %inc, i32* %i, align 4
%cmp = icmp slt i32 %inc, %n
br i1 %cmp, label %for.body, label %for.cond.for.end_crit_edge
for.cond.for.end_crit_edge: ; preds = %for.body
%split = phi double [ %add7, %for.body ]
br label %for.end
for.end: ; preds = %for.cond.for.end_crit_edge, %entry
%.lcssa = phi double [ %split, %for.cond.for.end_crit_edge ], [ 0.000000e+00, %entry ]
%conv = fptosi double %.lcssa to i32
ret i32 %conv
}
; Similar to foo_2double, but with a non-power-of-2 factor and potential
; wrapping (both indices wrap or both don't at the same time)
; CHECK-LABEL: foo_2double_non_power_of_2
; CHECK: load <2 x double>
; CHECK: load <2 x double>
; Function Attrs: nounwind ssp uwtable
define void @foo_2double_non_power_of_2(i32 %u) #0 {
entry:
%u.addr = alloca i32, align 4
store i32 %u, i32* %u.addr, align 4
%mul = mul i32 %u, 6
%add6 = add i32 %mul, 6
%idxprom = sext i32 %add6 to i64
%arrayidx = getelementptr inbounds [2000 x double], [2000 x double]* @A, i32 0, i64 %idxprom
%0 = load double, double* %arrayidx, align 8
%arrayidx4 = getelementptr inbounds [2000 x double], [2000 x double]* @B, i32 0, i64 %idxprom
%1 = load double, double* %arrayidx4, align 8
%add5 = fadd double %0, %1
store double %add5, double* %arrayidx, align 8
%add7 = add i32 %mul, 7
%idxprom12 = sext i32 %add7 to i64
%arrayidx13 = getelementptr inbounds [2000 x double], [2000 x double]* @A, i32 0, i64 %idxprom12
%2 = load double, double* %arrayidx13, align 8
%arrayidx17 = getelementptr inbounds [2000 x double], [2000 x double]* @B, i32 0, i64 %idxprom12
%3 = load double, double* %arrayidx17, align 8
%add18 = fadd double %2, %3
store double %add18, double* %arrayidx13, align 8
ret void
}
; Similar to foo_2double_non_power_of_2 but with zext's instead of sext's
; CHECK-LABEL: foo_2double_non_power_of_2_zext
; CHECK: load <2 x double>
; CHECK: load <2 x double>
; Function Attrs: nounwind ssp uwtable
define void @foo_2double_non_power_of_2_zext(i32 %u) #0 {
entry:
%u.addr = alloca i32, align 4
store i32 %u, i32* %u.addr, align 4
%mul = mul i32 %u, 6
%add6 = add i32 %mul, 6
%idxprom = zext i32 %add6 to i64
%arrayidx = getelementptr inbounds [2000 x double], [2000 x double]* @A, i32 0, i64 %idxprom
%0 = load double, double* %arrayidx, align 8
%arrayidx4 = getelementptr inbounds [2000 x double], [2000 x double]* @B, i32 0, i64 %idxprom
%1 = load double, double* %arrayidx4, align 8
%add5 = fadd double %0, %1
store double %add5, double* %arrayidx, align 8
%add7 = add i32 %mul, 7
%idxprom12 = zext i32 %add7 to i64
%arrayidx13 = getelementptr inbounds [2000 x double], [2000 x double]* @A, i32 0, i64 %idxprom12
%2 = load double, double* %arrayidx13, align 8
%arrayidx17 = getelementptr inbounds [2000 x double], [2000 x double]* @B, i32 0, i64 %idxprom12
%3 = load double, double* %arrayidx17, align 8
%add18 = fadd double %2, %3
store double %add18, double* %arrayidx13, align 8
ret void
}
; Similar to foo_2double_non_power_of_2, but now we are dealing with an
; AddRec SCEV. Alternatively, this is like foo_loop, but with a
; non-power-of-2 factor and potential wrapping (both indices wrap or
; both don't at the same time)
; CHECK-LABEL: foo_loop_non_power_of_2
; CHECK: <2 x double>
; Function Attrs: nounwind ssp uwtable
define i32 @foo_loop_non_power_of_2(double* %A, i32 %n) #0 {
entry:
%A.addr = alloca double*, align 8
%n.addr = alloca i32, align 4
%sum = alloca double, align 8
%i = alloca i32, align 4
store double* %A, double** %A.addr, align 8
store i32 %n, i32* %n.addr, align 4
store double 0.000000e+00, double* %sum, align 8
store i32 0, i32* %i, align 4
%cmp1 = icmp slt i32 0, %n
br i1 %cmp1, label %for.body.lr.ph, label %for.end
for.body.lr.ph: ; preds = %entry
br label %for.body
for.body: ; preds = %for.body.lr.ph, %for.body
%0 = phi i32 [ 0, %for.body.lr.ph ], [ %inc, %for.body ]
%1 = phi double [ 0.000000e+00, %for.body.lr.ph ], [ %add7, %for.body ]
%mul = mul i32 %0, 12
%add.5 = add i32 %mul, 5
%idxprom = sext i32 %add.5 to i64
%arrayidx = getelementptr inbounds double, double* %A, i64 %idxprom
%2 = load double, double* %arrayidx, align 8
%mul1 = fmul double 7.000000e+00, %2
%add.6 = add i32 %mul, 6
%idxprom3 = sext i32 %add.6 to i64
%arrayidx4 = getelementptr inbounds double, double* %A, i64 %idxprom3
%3 = load double, double* %arrayidx4, align 8
%mul5 = fmul double 7.000000e+00, %3
%add6 = fadd double %mul1, %mul5
%add7 = fadd double %1, %add6
store double %add7, double* %sum, align 8
%inc = add i32 %0, 1
store i32 %inc, i32* %i, align 4
%cmp = icmp slt i32 %inc, %n
br i1 %cmp, label %for.body, label %for.cond.for.end_crit_edge
for.cond.for.end_crit_edge: ; preds = %for.body
%split = phi double [ %add7, %for.body ]
br label %for.end
for.end: ; preds = %for.cond.for.end_crit_edge, %entry
%.lcssa = phi double [ %split, %for.cond.for.end_crit_edge ], [ 0.000000e+00, %entry ]
%conv = fptosi double %.lcssa to i32
ret i32 %conv
}
; This is generated by `clang -std=c11 -Wpedantic -Wall -O3 main.c -S -o - -emit-llvm`
; with !{!"clang version 7.0.0 (trunk 337339) (llvm/trunk 337344)"} and stripping off
; the !tbaa metadata nodes to fit the rest of the test file, where `cat main.c` is:
;
; double bar(double *a, unsigned n) {
; double x = 0.0;
; double y = 0.0;
; for (unsigned i = 0; i < n; i += 2) {
; x += a[i];
; y += a[i + 1];
; }
; return x * y;
; }
;
; The resulting IR is similar to @foo_loop, but with zext's instead of sext's.
;
; Make sure we are able to vectorize this from now on:
;
; CHECK-LABEL: @bar
; CHECK: load <2 x double>
define double @bar(double* nocapture readonly %a, i32 %n) local_unnamed_addr #0 {
entry:
%cmp15 = icmp eq i32 %n, 0
br i1 %cmp15, label %for.cond.cleanup, label %for.body
for.cond.cleanup: ; preds = %for.body, %entry
%x.0.lcssa = phi double [ 0.000000e+00, %entry ], [ %add, %for.body ]
%y.0.lcssa = phi double [ 0.000000e+00, %entry ], [ %add4, %for.body ]
%mul = fmul double %x.0.lcssa, %y.0.lcssa
ret double %mul
for.body: ; preds = %entry, %for.body
%i.018 = phi i32 [ %add5, %for.body ], [ 0, %entry ]
%y.017 = phi double [ %add4, %for.body ], [ 0.000000e+00, %entry ]
%x.016 = phi double [ %add, %for.body ], [ 0.000000e+00, %entry ]
%idxprom = zext i32 %i.018 to i64
%arrayidx = getelementptr inbounds double, double* %a, i64 %idxprom
%0 = load double, double* %arrayidx, align 8
%add = fadd double %x.016, %0
%add1 = or i32 %i.018, 1
%idxprom2 = zext i32 %add1 to i64
%arrayidx3 = getelementptr inbounds double, double* %a, i64 %idxprom2
%1 = load double, double* %arrayidx3, align 8
%add4 = fadd double %y.017, %1
%add5 = add i32 %i.018, 2
%cmp = icmp ult i32 %add5, %n
br i1 %cmp, label %for.body, label %for.cond.cleanup
}
attributes #0 = { nounwind ssp uwtable "less-precise-fpmad"="false" "no-frame-pointer-elim"="true" "no-frame-pointer-elim-non-leaf" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" }
!llvm.ident = !{!0}
!0 = !{!"clang version 3.5.0 "}