Commit Graph

11656 Commits

Author SHA1 Message Date
Chris Lattner
6d277517d1 Recommit the fix for rdar://9289512 with a couple tweaks to
fix bugs exposed by the gcc dejagnu testsuite:
1. The load may actually be used by a dead instruction, which
   would cause an assert.
2. The load may not be used by the current chain of instructions,
   and we could move it past a side-effecting instruction. Change
   how we process uses to define the problem away.
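
For illustration, a hypothetical C++ sketch of the second scenario (not part
of the commit): folding the load into the later compare would effectively
move the load past a call with side effects.

int G;
void g() { G = 1; }   // side-effecting call

int f(int *p) {
  int v = *p;         // the load
  g();                // side effects between the load and its use
  return v == 0;      // the compare that isel would like to fold the load into
}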

llvm-svn: 130018
2011-04-22 21:59:37 +00:00
Benjamin Kramer
341c11da3b DAGCombine: fold "(zext x) == C" into "x == (trunc C)" if the trunc is lossless.
On x86 this allows folding a load into the cmp, greatly reducing register pressure.
  movzbl	(%rdi), %eax
  cmpl	$47, %eax
->
  cmpb	$47, (%rdi)

This shaves 8k off gcc.o on i386. I'll leave applying the patch in README.txt to Chris :)
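
A hypothetical C source of the shape that benefits (the constant 47 simply
mirrors the asm above): the byte load is zero-extended before a 32-bit
compare, and after the fold the compare can be narrowed and use the memory
operand directly.

int is_slash(const unsigned char *p) {
  return *p == 47;   // 47 == '/'
}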

llvm-svn: 130005
2011-04-22 18:47:44 +00:00
Devang Patel
ad45d911bb Do not leak argument's DbgVariables.
llvm-svn: 130004
2011-04-22 18:09:57 +00:00
Evan Cheng
8ea3af47bd Typo
llvm-svn: 129970
2011-04-22 01:40:20 +00:00
Bill Wendling
c14d7322ee Branch folding is folding a landing pad into a regular BB.
An exception is thrown via a call to __cxa_throw, which we don't expect to
return. Therefore, the "true" part of the invoke goes to a BB that has
'unreachable' as its only instruction. This is lowered into an empty MachineBB.
The landing pad for this invoke, however, is directly after the "true" MBB.
When the empty MBB is removed, the landing pad is directly below the BB with the
invoke call. The unconditional branch is removed and then the two blocks are
merged together.

The testcase is too big for a regression test.
<rdar://problem/9305728>
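
An illustrative C++ shape of the problem (not the original testcase): the
throw never returns, so the invoke's normal destination holds only
'unreachable' and ends up as an empty block right before the landing pad.

struct Err {};

int g() {
  try {
    throw Err();        // becomes an invoke of __cxa_throw; its normal
                        // destination contains only 'unreachable'
  } catch (const Err &) {
    return 1;           // landing pad, laid out right after the empty block
  }
  return 0;
}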

llvm-svn: 129965
2011-04-22 01:07:09 +00:00
Devang Patel
2266aa84a1 Refactor.
llvm-svn: 129938
2011-04-21 21:07:35 +00:00
Matt Beaumont-Gay
70597d4e50 Don't recycle loop variables.
llvm-svn: 129928
2011-04-21 19:46:23 +00:00
Jakob Stoklund Olesen
6a663b8dc8 Allow allocatable ranges from global live range splitting to be split again.
These intervals are allocatable immediately after splitting, but they may be
evicted because of later splitting. This is rare, but when it happens they
should be split again.

The remainder intervals that cannot be allocated after splitting still move
directly to spilling.

SplitEditor::finish can optionally provide a mapping from new live intervals
back to the original interval indexes returned by openIntv().

Each original interval index can map to multiple new intervals after connected
components have been separated. Dead code elimination may also add existing
intervals to the list.

The reverse mapping allows the SplitEditor client to treat the new intervals
differently depending on the split region they came from.

llvm-svn: 129925
2011-04-21 18:38:15 +00:00
Devang Patel
28f2719d83 Add comment in output stream.
llvm-svn: 129921
2011-04-21 17:50:24 +00:00
Daniel Dunbar
6309828206 Revert r129656, "Fix rdar://9289512 - not folding load into compare at -O0...",
which broke a couple GCC test suite tests at -O0.

llvm-svn: 129914
2011-04-21 16:14:46 +00:00
Jakob Stoklund Olesen
86e53ced08 Add debug output for rematerializable instructions.
llvm-svn: 129883
2011-04-20 22:14:20 +00:00
Jakob Stoklund Olesen
90d79bdcd2 Permit remat when a virtual register has multiple defs.
TII::isTriviallyReMaterializable() shouldn't depend on any properties of the
register being defined by the instruction. Rematerialization is going to create
a new virtual register anyway.

llvm-svn: 129882
2011-04-20 22:14:17 +00:00
Jakob Stoklund Olesen
0e34c1dfac Prefer cheap registers for busy live ranges.
On the x86-64 and thumb2 targets, some registers are more expensive to encode
than others in the same register class.

Add a CostPerUse field to the TableGen register description, and make it
available from TRI->getCostPerUse. This represents the cost of a REX prefix or a
32-bit instruction encoding required by choosing a high register.

Teach the greedy register allocator to prefer cheap registers for busy live
ranges (as indicated by spill weight).

llvm-svn: 129864
2011-04-20 18:19:48 +00:00
Stuart Hastings
45fe3c38c5 ARM byval support. Will be enabled by another patch to the FE. <rdar://problem/7662569>
llvm-svn: 129858
2011-04-20 16:47:52 +00:00
Rafael Espindola
e473aaf540 Remove unused arguments.
llvm-svn: 129844
2011-04-20 03:08:09 +00:00
Eric Christopher
bcaedb5ce0 Rewrite the expander for umulo/smulo to remember to sign extend the input
manually and pass all (now) 4 arguments to the mul libcall. Add a new
ExpandLibCall for just this (copied gratuitously from type legalization).

Fixes rdar://9292577
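
For illustration only (using a present-day Clang/GCC builtin, not code from
the commit), source like this produces the signed multiply-with-overflow
node that the expander lowers to the mul libcall:

bool checked_mul(long a, long b, long *out) {
  // Signed multiply with overflow check; when expanded as a libcall, the
  // inputs are sign-extended and passed to the wider mul routine.
  return __builtin_smull_overflow(a, b, out);
}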

llvm-svn: 129842
2011-04-20 01:19:45 +00:00
Daniel Dunbar
cd01ed5bd6 ADT/Triple: Rename isOSX... methods to isMacOSX for consistency with the OS
triple component.

llvm-svn: 129838
2011-04-20 00:14:25 +00:00
Daniel Dunbar
4a7783b0c2 CodeGen: Eliminate a use of getDarwinMajorNumber().
- There is a minor semantic change here (evidenced by the test change) for
   Darwin triples that have no version component. I debated changing the default
   behavior of isOSVersionLT, but decided it made more sense for triples to be
   explicit.
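
A rough sketch of the distinction (hypothetical usage; the header location is
as of this era, and how a version-less triple answers isOSVersionLT is
exactly the semantic point above):

#include "llvm/ADT/Triple.h"

bool versionChecks() {
  llvm::Triple Versioned("x86_64-apple-darwin10");   // explicit version
  llvm::Triple Bare("x86_64-apple-darwin");          // no version component
  return Versioned.isOSVersionLT(11, 0, 0) || Bare.isOSVersionLT(11, 0, 0);
}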

llvm-svn: 129802
2011-04-19 20:32:39 +00:00
Stuart Hastings
468086d5e1 Delete unnecessary variable. <rdar://problem/7662569>
llvm-svn: 129796
2011-04-19 20:09:38 +00:00
Bob Wilson
df612ba006 Avoid write-after-write issue hazards for Cortex-A9.
Add an avoidWriteAfterWrite() target hook to identify register classes that
suffer from write-after-write hazards. For those register classes, try to avoid
writing the same register in two consecutive instructions.

This is currently disabled by default.  We should not spill to avoid hazards!
The command line flag -avoid-waw-hazard can be used to enable waw avoidance.

llvm-svn: 129772
2011-04-19 18:11:45 +00:00
Jakob Stoklund Olesen
af12138d10 Force the greedy register allocator to be linked alongside linear scan.
This means that the new register allocator can be used with 'clang -mllvm -regalloc=greedy'.

llvm-svn: 129764
2011-04-19 17:17:58 +00:00
Eli Friedman
bcd09b3a7f SelectBasicBlock is rather slow even when it doesn't do anything; skip the
unnecessary work where possible.

llvm-svn: 129763
2011-04-19 17:01:08 +00:00
Stuart Hastings
0b68c1219f Support nested CALLSEQ_BEGIN/END; necessary for ARM byval support. <rdar://problem/7662569>
llvm-svn: 129761
2011-04-19 16:16:58 +00:00
Chris Lattner
91328b317b Implement support for x86 fastisel of small fixed-sized memcpys, which are generated
en masse for C++ PODs.  On my C++ test file, this cuts the fast isel rejects by 10x
and shrinks the generated .s file by 5%.
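
A hypothetical example of the pattern: assigning a small fixed-size POD, for
which the front end emits a constant-length memcpy that fast isel can now
select instead of rejecting.

struct Point { int x, y, z; };

void copy_point(Point &dst, const Point &src) {
  dst = src;   // lowered to a small, fixed-size memcpy
}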

llvm-svn: 129755
2011-04-19 05:52:03 +00:00
Eli Friedman
ec138b4b27 Simplify declarations slightly by using typedefs.
llvm-svn: 129720
2011-04-18 21:21:37 +00:00
Devang Patel
17740e70d5 Reduce clutter in asm output. Do not emit source location as comment for each instruction.
llvm-svn: 129715
2011-04-18 20:26:49 +00:00
Jakob Stoklund Olesen
9f294a9e52 Handle spilling around an instruction that has an early-clobber re-definition of
the spilled register.

This is quite common on ARM now that some stores have early-clobber defines.

llvm-svn: 129714
2011-04-18 20:23:27 +00:00
Eric Christopher
c37aa0b26a Fix, in a different way, a bug where we were counting the alias sets
as completely used registers for fast allocation. This has us updating
used registers only when we're using that exact register.

Fixes rdar://9207598

llvm-svn: 129711
2011-04-18 19:26:25 +00:00
Chris Lattner
48f75ad678 while we're at it, handle 'sdiv exact' by a power of 2 as well;
this fixes a few rejects on C++ iterator loops.
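
For illustration (not from the commit), pointer subtraction is the usual
source of an exact signed division by a power of two in C++ iterator code:

long range_length(const int *begin, const int *end) {
  // The byte distance is divided by sizeof(int), emitted as 'sdiv exact'.
  return end - begin;
}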

llvm-svn: 129694
2011-04-18 07:00:40 +00:00
Chris Lattner
562d6e82bd fix rdar://9297011 - udiv by power of two causing fast-isel rejects
llvm-svn: 129693
2011-04-18 06:55:51 +00:00
Chris Lattner
b53ccb8e36 1. merge fast-isel-shift-imm.ll into fast-isel-x86-64.ll
2. implement rdar://9289501 - fast isel should fold trivial multiplies to shifts
3. teach tblgen to handle shift immediates that are different sizes than the 
   shifted operands, eliminating some code from the X86 fast isel backend.
4. Have FastISel::SelectBinaryOp use (the poorly named) FastEmit_ri_ function
   instead of FastEmit_ri to simplify code.

llvm-svn: 129666
2011-04-17 20:23:29 +00:00
Chris Lattner
4832660b4d fix an oversight which caused us to compile the testcase (and other
less trivial things) into a dummy lea.  Before we generated:

_test:                                  ## @test
	movq	_G@GOTPCREL(%rip), %rax
	leaq	(%rax), %rax
	ret

now we produce:

_test:                                  ## @test
	movq	_G@GOTPCREL(%rip), %rax
	ret

This is part of rdar://9289558

llvm-svn: 129662
2011-04-17 17:12:08 +00:00
Chris Lattner
045c43855c Fix rdar://9289512 - not folding load into compare at -O0
The basic issue here is that bottom-up isel was matching the branch
and compare, but failing to fold the load into the branch/compare
combo.  Fixing this (by allowing folding into any instruction of a
sequence that is selected) allows us to produce things like:

cmpb    $0, 52(%rax)
je      LBB4_2

instead of:

movb    52(%rax), %cl
cmpb    $0, %cl
je      LBB4_2

This makes the generated -O0 code run a bit faster, but also speeds up
compile time by putting less pressure on the register allocator and 
generating less code.

This was one of the biggest classes of missing load folding.  Implementing
this shrinks 176.gcc's c-decl.s (as a random example) by about 4% in (verbose-asm)
line count.
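
A hypothetical source shape that yields the asm above (the 52-byte offset is
only illustrative): a byte field compared against zero, where the load now
folds into the cmpb even at -O0.

struct S {
  char pad[52];
  char flag;   // at offset 52, mirroring 52(%rax) above
};

int is_clear(S *s) {
  return s->flag == 0;   // the load of s->flag folds into the compare
}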

llvm-svn: 129656
2011-04-17 06:35:44 +00:00
Chris Lattner
d70ff0d807 split a complex predicate out to a helper function. Simplify two for loops,
which don't need to check for falling off the end of a block *and* end of phi
nodes, since terminators are never phis.

llvm-svn: 129655
2011-04-17 06:03:19 +00:00
Chris Lattner
fba7ca63cc fix rdar://9289583 - fast isel should handle non-canonical commutative binops
allowing us to fold the immediate into the 'and' in this case:

int test1(int i) {
  return 8&i;
}

llvm-svn: 129653
2011-04-17 01:16:47 +00:00
Eli Friedman
55b0acd624 PR9055: extend the fix to PR4050 (r70179) to apply to zext and anyext.
Returning a new node makes the code try to replace the old node, which
in the included testcase is killed by CSE.

llvm-svn: 129650
2011-04-16 23:25:34 +00:00
Francois Pichet
beb17d9359 Unbreak the MSVC 2010 build.
For further information on this particular issue see: http://connect.microsoft.com/VisualStudio/feedback/details/520043/error-converting-from-null-to-a-pointer-type-in-std-pair

llvm-svn: 129642
2011-04-16 14:20:39 +00:00
Benjamin Kramer
659bfb34ff Remove unused variable.
llvm-svn: 129639
2011-04-16 10:30:47 +00:00
Rafael Espindola
a83b177035 Put each personality function in a section. This fixes the gnu ld warning:
error in foo.o; no .eh_frame_hdr table will be created.

llvm-svn: 129635
2011-04-16 03:51:21 +00:00
Evan Cheng
b14ce09fca Fix divmod libcall lowering. Convert to {S|U}DIVREM first and then expand the node to a libcall. rdar://9280991
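
For illustration (assumed source, not from the commit): when both the
quotient and the remainder are needed, the two operations become a single
SDIVREM node and then one divmod libcall (e.g. __aeabi_idivmod on ARM EABI)
instead of two separate division calls.

void divmod(int a, int b, int *quot, int *rem) {
  *quot = a / b;   // combined with the line below into one SDIVREM
  *rem = a % b;
}
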
llvm-svn: 129633
2011-04-16 03:08:26 +00:00
Devang Patel
514b4006c2 Introduce support to encode Objective-C property information in debugging information generated for an interface.
llvm-svn: 129624
2011-04-16 00:11:51 +00:00
Rafael Espindola
beb74c3f00 Some refactoring suggested by Anton Korobeynikov.
llvm-svn: 129600
2011-04-15 20:32:03 +00:00
Jakob Stoklund Olesen
1af8b4dc92 Teach the SplitKit blitter to handle multiply defined values as well.
The transferValues() function can now handle both singly and multiply defined
values, as long as the resulting live range is known. Only rematerialized values
have their live range recomputed by extendRange().

The updateSSA() function can now insert PHI values in bulk across multiple
values in multiple target registers in one pass. The list of blocks received
from transferValues() is in layout order which seems to work well for the
iterative algorithm. Blocks from extendRange() are still in reverse BFS order,
but this function is used so rarely now that it doesn't matter.

llvm-svn: 129580
2011-04-15 17:24:49 +00:00
Jakob Stoklund Olesen
871f70609a Remember to set flag.
llvm-svn: 129579
2011-04-15 17:24:46 +00:00
Rafael Espindola
a01cdb0e37 Add r129518 back with a fix for when we are producing EH just because of debug info.
Change ELF systems to use CFI for producing the EH tables. This reduces the
size of the clang binary in Debug builds from 690MB to 679MB.

llvm-svn: 129571
2011-04-15 15:11:06 +00:00
Chris Lattner
0ab5e2cded Fix a ton of comment typos found by codespell. Patch by
Luis Felipe Strano Moraes!

llvm-svn: 129558
2011-04-15 05:18:47 +00:00
NAKAMURA Takumi
b5e3e9dd27 Revert r129518, "Change ELF systems to use CFI for producing the EH tables. This reduces the"
It broke several builds.

llvm-svn: 129557
2011-04-15 03:35:57 +00:00
Owen Anderson
a519284fec Fix another instance of the DAG combiner not using the correct type for the RHS of a shift.
llvm-svn: 129522
2011-04-14 17:30:49 +00:00
Rafael Espindola
aa2a7cd828 Change ELF systems to use CFI for producing the EH tables. This reduces the
size of the clang binary in Debug builds from 690MB to 679MB.

llvm-svn: 129518
2011-04-14 15:18:53 +00:00
Andrew Trick
bfbd972b1f In the pre-RA scheduler, maintain cmp+br proximity.
This is done by pushing physical register definitions close to their
use, which happens to handle flag definitions if they're not glued to
the branch. This seems to be generally a good thing though, so I
didn't need to add a target hook yet.

The primary motivation is to generate code closer to what people
expect and rule out missed opportunity from enabling macro-op
fusion. As a side benefit, we get several 2-5% gains on x86
benchmarks. There is one regression:
SingleSource/Benchmarks/Shootout/lists slows down by -10%. But this is
an independent scheduler bug that will be tracked separately.
See rdar://problem/9283108.

Incidentally, pre-RA scheduling is only half the solution. Fixing the
later passes is tracked by:
<rdar://problem/8932804> [pre-RA-sched] on x86, attempt to schedule CMP/TEST adjacent with condition jump

Fixes:
<rdar://problem/9262453> Scheduler unnecessary break of cmp/jump fusion
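
A small illustrative function (not from the commit): the compare feeds the
conditional branch, and keeping the cmp and the jump adjacent in the final
schedule is what enables macro-op fusion on x86.

int pick(int a, int b, int x, int y) {
  if (a < b)     // the cmp defining the flags...
    return x;    // ...should stay next to the conditional jump that uses them
  return y;
}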

llvm-svn: 129508
2011-04-14 05:15:06 +00:00