Chris Lattner
c4f67e67d2
Do not sink any instruction with side effects, including vaarg. This fixes
...
PR640
llvm-svn: 24046
2005-10-27 17:13:11 +00:00
Chris Lattner
c6372cca78
Fix typo
...
llvm-svn: 24033
2005-10-27 06:26:26 +00:00
Chris Lattner
0fe7551bc0
Teach instcombine to promote stuff like (cast (malloc sbyte, 8*X) to int*)
...
into: malloc int, (2*X)
llvm-svn: 24032
2005-10-27 06:24:46 +00:00
Chris Lattner
b3ecf96900
Promote cases like cast (malloc sbyte, 100) to int* into
...
(malloc [25 x int]) directly without having to convert to
(malloc [100 x sbyte]) first.
llvm-svn: 24031
2005-10-27 06:12:00 +00:00
Chris Lattner
bb17180a23
Minor change to this file to support obscure cases with constant array amounts
...
llvm-svn: 24030
2005-10-27 05:53:56 +00:00
Chris Lattner
38a1b00a0f
fold nested and's early to avoid inefficiencies in MaskedValueIsZero. This
...
fixes a very slow compile in PR639.
llvm-svn: 24011
2005-10-26 17:18:16 +00:00
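A minimal C sketch of the nested-'and' pattern in question (an illustrative example, not taken from PR639):
/* Two nested masks collapse to a single 'and' with the intersection of the
   constants, so MaskedValueIsZero only ever sees one mask. */
unsigned fold_nested_and(unsigned x) {
  return (x & 0xFFF0u) & 0x0FF0u;   /* folds to x & 0x0FF0u */
}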
Chris Lattner
46705b2f2d
Handle allocations that, even after removing dead uses, still have more than
...
one use (but one is a cast). This handles the very common case of:
X = alloc [n x byte]
Y = cast X to somethingbetter
seteq X, null
In order to avoid infinite looping when there are multiple casts, we only
allow this if the xform is strictly increasing the alignment of the
allocation.
llvm-svn: 23961
2005-10-24 06:35:18 +00:00
Chris Lattner
355ecc09f8
Fix a bug where we would 'promote' an allocation from one type to another
...
where the second type requires less alignment. If the IR had explicit alignment
support we could handle this case, but until it does, we cannot.
llvm-svn: 23960
2005-10-24 06:26:18 +00:00
Chris Lattner
ac87beb03a
Before promoting a malloc type, remove dead uses. This makes instcombine
...
more effective at promoting these allocations, catching them earlier in the
compile process.
llvm-svn: 23959
2005-10-24 06:22:12 +00:00
Chris Lattner
216be91817
Pull some code out into a function, no functionality change
...
llvm-svn: 23958
2005-10-24 06:03:58 +00:00
Chris Lattner
da1b152c43
Make this work for FP constantexprs
...
llvm-svn: 23773
2005-10-17 20:18:38 +00:00
Chris Lattner
7fde91e365
Oops, X+0.0 isn't foldable, but X+-0.0 is.
...
llvm-svn: 23772
2005-10-17 17:56:38 +00:00
Chris Lattner
32979336a7
relax this a bit, as we only support the default rounding mode
...
llvm-svn: 23771
2005-10-17 17:49:32 +00:00
Chris Lattner
03b9eb506c
Make MaskedValueIsZero a bit more aggressive
...
llvm-svn: 23677
2005-10-09 22:08:50 +00:00
Chris Lattner
62010c450f
Fix funky xcode indentation
...
llvm-svn: 23674
2005-10-09 06:36:35 +00:00
Jeff Cohen
572910c9a2
Remove useless variable.
...
llvm-svn: 23656
2005-10-07 05:28:29 +00:00
Chris Lattner
0b011ec8e2
Factor the GetGEPGlobalInitializer out of this pass and into Transforms/Utils
...
as ConstantFoldLoadThroughGEPConstantExpr.
llvm-svn: 23445
2005-09-26 05:28:06 +00:00
Chris Lattner
0b3557f54a
Move MaskedValueIsZero up.
...
Match a bunch of idioms for sign extensions, implementing InstCombine/signext.ll
llvm-svn: 23428
2005-09-24 23:43:33 +00:00
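One classic sign-extension idiom of the sort this refers to (a hedged illustration; the exact cases live in InstCombine/signext.ll):
/* On the usual two's-complement targets, shifting the low byte up and
   arithmetically back down sign-extends it; instcombine recognizes the
   shift pair as a sign extension. */
int sext_low_byte(unsigned x) {
  return (int)(x << 24) >> 24;
}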
Chris Lattner
b4b2530a1a
Refactor this code a bit and make it more general. This now compiles:
...
struct S { unsigned int i : 6, j : 11, k : 15; } b;
void plus2 (unsigned int x) { b.j += x; }
To:
_plus2:
lis r2, ha16(L_b$non_lazy_ptr)
lwz r2, lo16(L_b$non_lazy_ptr)(r2)
lwz r4, 0(r2)
slwi r3, r3, 6
add r3, r4, r3
rlwimi r3, r4, 0, 26, 14
stw r3, 0(r2)
blr
instead of:
_plus2:
lis r2, ha16(L_b$non_lazy_ptr)
lwz r2, lo16(L_b$non_lazy_ptr)(r2)
lwz r4, 0(r2)
rlwinm r5, r4, 26, 21, 31
add r3, r5, r3
rlwimi r4, r3, 6, 15, 25
stw r4, 0(r2)
blr
by eliminating an 'and'.
I'm pretty sure this is as small as we can go :)
llvm-svn: 23386
2005-09-18 07:22:02 +00:00
Chris Lattner
797dee7705
Compile
...
struct S { unsigned int i : 6, j : 11, k : 15; } b;
void plus2 (unsigned int x) {
b.j += x;
}
to:
plus2:
mov %EAX, DWORD PTR [b]
mov %ECX, %EAX
and %ECX, 131008
mov %EDX, DWORD PTR [%ESP + 4]
shl %EDX, 6
add %EDX, %ECX
and %EDX, 131008
and %EAX, -131009
or %EDX, %EAX
mov DWORD PTR [b], %EDX
ret
instead of:
plus2:
mov %EAX, DWORD PTR [b]
mov %ECX, %EAX
shr %ECX, 6
and %ECX, 2047
add %ECX, DWORD PTR [%ESP + 4]
shl %ECX, 6
and %ECX, 131008
and %EAX, -131009
or %ECX, %EAX
mov DWORD PTR [b], %ECX
ret
llvm-svn: 23385
2005-09-18 06:30:59 +00:00
Chris Lattner
01f56c68e9
Generalize this transform, using MaskedValueIsZero, allowing us to compile:
...
struct S { unsigned int i : 6, j : 11, k : 15; } b;
void plus3 (unsigned int x) { b.k += x; }
To:
plus3:
mov %EAX, DWORD PTR [%ESP + 4]
shl %EAX, 17
add DWORD PTR [b], %EAX
ret
instead of:
plus3:
mov %EAX, DWORD PTR [%ESP + 4]
shl %EAX, 17
mov %ECX, DWORD PTR [b]
add %EAX, %ECX
and %EAX, -131072
and %ECX, 131071
or %ECX, %EAX
mov DWORD PTR [b], %ECX
ret
llvm-svn: 23384
2005-09-18 06:02:59 +00:00
Chris Lattner
4ebc8ab4e0
fix typo
...
llvm-svn: 23383
2005-09-18 05:25:20 +00:00
Chris Lattner
e5b23a6d67
Remove unintentionally committed code
...
llvm-svn: 23382
2005-09-18 05:12:51 +00:00
Chris Lattner
27cb9dbd35
implement shift.ll:test25. This compiles:
...
struct S { unsigned int i : 6, j : 11, k : 15; } b;
void plus3 (unsigned int x) {
b.k += x;
}
to:
_plus3:
lis r2, ha16(L_b$non_lazy_ptr)
lwz r2, lo16(L_b$non_lazy_ptr)(r2)
lwz r3, 0(r2)
rlwinm r4, r3, 0, 0, 14
add r4, r4, r3
rlwimi r4, r3, 0, 15, 31
stw r4, 0(r2)
blr
instead of:
_plus3:
lis r2, ha16(L_b$non_lazy_ptr)
lwz r2, lo16(L_b$non_lazy_ptr)(r2)
lwz r4, 0(r2)
srwi r5, r4, 17
add r3, r5, r3
slwi r3, r3, 17
rlwimi r3, r4, 0, 15, 31
stw r3, 0(r2)
blr
llvm-svn: 23381
2005-09-18 05:12:10 +00:00
Chris Lattner
af517574ce
Implement add.ll:test29. Codegening:
...
struct S { unsigned int i : 6, j : 11, k : 15; } b;
void plus1 (unsigned int x) {
b.i += x;
}
as:
_plus1:
lis r2, ha16(L_b$non_lazy_ptr)
lwz r2, lo16(L_b$non_lazy_ptr)(r2)
lwz r4, 0(r2)
add r3, r4, r3
rlwimi r3, r4, 0, 0, 25
stw r3, 0(r2)
blr
instead of:
_plus1:
lis r2, ha16(L_b$non_lazy_ptr)
lwz r2, lo16(L_b$non_lazy_ptr)(r2)
lwz r4, 0(r2)
rlwinm r5, r4, 0, 26, 31
add r3, r5, r3
rlwimi r3, r4, 0, 0, 25
stw r3, 0(r2)
blr
llvm-svn: 23379
2005-09-18 04:24:45 +00:00
Chris Lattner
027eaf01cf
remove debug output
...
llvm-svn: 23377
2005-09-18 03:50:25 +00:00
Chris Lattner
1521298993
Implement or.ll:test21. This teaches instcombine to turn this:
...
struct {
unsigned int bit0:1;
unsigned int ubyte:31;
} sdata;
void foo() {
sdata.ubyte++;
}
into this:
foo:
add DWORD PTR [sdata], 2
ret
instead of this:
foo:
mov %EAX, DWORD PTR [sdata]
mov %ECX, %EAX
add %ECX, 2
and %ECX, -2
and %EAX, 1
or %EAX, %ECX
mov DWORD PTR [sdata], %EAX
ret
llvm-svn: 23376
2005-09-18 03:42:07 +00:00
Chris Lattner
a393e4d4b3
Fix the regression last night compiling povray
...
llvm-svn: 23348
2005-09-14 17:32:56 +00:00
Chris Lattner
2a8932960d
Add a simple xform to simplify array accesses with casts in the way.
...
This is useful for 178.galgel where resolution of dope vectors (by the
optimizer) causes the scales to become apparent.
llvm-svn: 23328
2005-09-13 18:36:04 +00:00
Chris Lattner
567b81f0d2
Add a helper function, allowing us to simplify some code a bit, changing
...
indentation, no functionality change
llvm-svn: 23325
2005-09-13 00:40:14 +00:00
Chris Lattner
219175c84d
Implement a simple xform to turn code like this:
...
if () { store A -> P; } else { store B -> P; }
into a PHI node with one store, in the most trivial case. This implements
load.ll:test10.
llvm-svn: 23324
2005-09-12 23:23:25 +00:00
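In source terms, the trivial case looks roughly like this hypothetical C function (the real test is InstCombine/load.ll:test10):
/* Both arms store to the same pointer; the two stores can be replaced by a
   single store fed from a PHI of A and B after the branch. */
void set(int *P, int A, int B, int cond) {
  if (cond)
    *P = A;
  else
    *P = B;
}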
Chris Lattner
e0bfdf1485
Another load-peephole optimization: do gcse when two loads are next to
...
each other. This implements InstCombine/load.ll:test9
llvm-svn: 23322
2005-09-12 22:21:03 +00:00
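A sketch of the adjacent-load case (illustrative C, not the actual InstCombine/load.ll:test9):
/* The second load of *P is redundant; it can simply reuse the first. */
int load_twice(int *P) {
  int a = *P;
  int b = *P;   /* becomes: int b = a; */
  return a + b;
}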
Chris Lattner
b990f7d8ed
Implement a trivial form of store->load forwarding where the store and the
...
load are exactly consecutive. This is picked up by other passes, but this
triggers thousands of times in Fortran programs that use static locals
(and is thus a compile-time speedup).
llvm-svn: 23320
2005-09-12 22:00:15 +00:00
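The shape of code this catches, as a hedged C illustration:
/* The load immediately follows the store to the same location, so the
   stored value is forwarded and the load disappears. */
int store_then_load(int *P, int X) {
  *P = X;
  return *P;   /* becomes: return X; */
}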
Chris Lattner
9f269e40c9
Use the new 'moveBefore' method to simplify some code. Really, which is
...
easier to understand? :)
llvm-svn: 22706
2005-08-08 19:11:57 +00:00
Chris Lattner
2c14cf7b74
Add some simple folds that occur in bitfield cases. Fix a minor bug in
...
isHighOnes, where it would consider 0 to have high ones.
llvm-svn: 22693
2005-08-07 07:03:10 +00:00
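A plausible sketch (not the actual LLVM source) of an "is high ones" predicate, including the zero check whose absence was the bug:
/* True for values such as 0xFF000000: a run of ones in the high bits
   followed only by zeros.  Without the explicit V != 0 test, zero would
   be misclassified as having high ones -- the bug described above. */
int isHighOnes32(unsigned V) {
  unsigned Low = ~V;                        /* must be a low-bit mask */
  return V != 0 && (Low & (Low + 1)) == 0;
}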
Chris Lattner
9f9c260b8c
now that hasConstantValue defaults to only returning values that dominate
...
the PHI node, this ugly code can vanish.
llvm-svn: 22672
2005-08-05 01:04:30 +00:00
Nate Begeman
b392321cae
Fix a fixme in CondPropagate.cpp by moving a PhiNode optimization into
...
BasicBlock's removePredecessor routine. This requires shuffling around
the definition and implementation of hasConstantValue from Utils.{h,cpp} into
Instructions.{h,cpp}.
llvm-svn: 22664
2005-08-04 23:24:19 +00:00
Chris Lattner
22d00a8e90
Update to use the new MathExtras.h support for log2 computation.
...
Patch contributed by Jim Laskey!
llvm-svn: 22592
2005-08-02 19:16:58 +00:00
Jeff Cohen
5f4ef3c5a8
Eliminate all remaining tabs and trailing spaces.
...
llvm-svn: 22523
2005-07-27 06:12:32 +00:00
Chris Lattner
18aa4d8196
Do not let MaskedValueIsZero consider undef to be zero, for reasons
...
explained in the comment.
This fixes UnitTests/2003-09-18-BitFieldTest on darwin
llvm-svn: 22483
2005-07-20 18:49:28 +00:00
Chris Lattner
247aef884c
When transforming &A[i] < &A[j] -> i < j, make sure to perform the comparison
...
as a signed compare. This patch may fix PR597, but is correct in any case.
llvm-svn: 22465
2005-07-18 23:07:33 +00:00
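Why the replacement compare must be signed, in a small assumed example:
/* With i = -1 and j = 1 (A pointing into the middle of a buffer),
   &A[i] < &A[j] is true.  An unsigned compare of the indices would see
   (unsigned)-1 < 1u, which is false, so the indices must be compared as
   signed values. */
int ptr_cmp(int *A, int i, int j) {
  return &A[i] < &A[j];   /* may become: i < j (signed compare) */
}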
Chris Lattner
4ed40f7c6f
Fix a problem that instcombine would hit when dealing with unreachable code.
...
Because instcombine has to scan the entire function when it starts up anyway,
we might as well do the scan in depth-first order (DFO), which lets us remove unreachable code as we go.
This fixes: Transforms/InstCombine/2005-07-07-DeadPHILoop.ll
llvm-svn: 22348
2005-07-07 20:40:38 +00:00
Reid Spencer
4fdd96c4e0
Clean up some uninitialized variables and missing return statements that
...
the GCC 4.0.0 compiler (sometimes incorrectly) warns about in release builds.
llvm-svn: 22249
2005-06-18 17:37:34 +00:00
Chris Lattner
2ceb6ee576
This is not true: (X != 13 | X < 15) -> X < 15
...
It is actually always true: if X >= 15 the first clause holds, and otherwise the second does. This fixes PR586 and
Transforms/InstCombine/2005-06-16-SetCCOrSetCCMiscompile.ll
llvm-svn: 22236
2005-06-17 03:59:17 +00:00
Chris Lattner
73bcba5f61
Don't crash when dealing with INTMIN. This fixes PR585 and
...
Transforms/InstCombine/2005-06-16-RangeCrash.ll
llvm-svn: 22234
2005-06-17 02:05:55 +00:00
Chris Lattner
c53cb9d3ff
avoid constructing out of range shift amounts.
...
llvm-svn: 22230
2005-06-17 01:29:28 +00:00
Chris Lattner
89dc4f16f5
Fix PR583 and testcase Transforms/InstCombine/2005-06-15-DivSelectCrash.ll
...
llvm-svn: 22227
2005-06-16 04:55:52 +00:00
Chris Lattner
252a845e30
Fix PR571, removing code that does just the WRONG thing :)
...
llvm-svn: 22225
2005-06-16 03:00:08 +00:00
Chris Lattner
104002bee3
Fix a bug in my previous patch. Do not get the shift amount type (which
...
is always ubyte); get the type of the value being shifted instead. This unbreaks espresso.
llvm-svn: 22224
2005-06-16 01:52:07 +00:00
Chris Lattner
19b57f55aa
Fix PR577 and testcase InstCombine/2005-06-15-ShiftSetCCCrash.ll.
...
Do not perform undefined out of range shifts.
llvm-svn: 22217
2005-06-15 20:53:31 +00:00
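For reference, the kind of out-of-range shift the combiner must not introduce (a hedged C illustration):
/* A shift count greater than or equal to the bit width is undefined, both in
   C and for the IR shifts of this era, so a transform must never build such a
   shift amount. */
unsigned shift_by(unsigned x, unsigned n) {
  return x << n;   /* defined only when n < 32 (assuming 32-bit unsigned) */
}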