Commit Graph

157 Commits

Author SHA1 Message Date
Nicolas Geoffray
fbfc451ba9 The ELF ABI specifies registers F1-F8, not F1-F10, as the argument registers
for double. This affects only ELF, not MachO.

llvm-svn: 35622
2007-04-03 10:27:07 +00:00
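
For illustration only (this function and its names are hypothetical, not code
from the tree): under the ELF rules as fixed here, only the first eight double
arguments get FP argument registers, so a ninth or tenth double is no longer
assumed to arrive in F9/F10.

// Hypothetical example: ten double parameters.
// With the ELF ABI as described above, d1..d8 go in F1-F8; d9 and d10
// are no longer treated as register arguments.  (MachO is unaffected.)
double takeTen(double d1, double d2, double d3, double d4, double d5,
               double d6, double d7, double d8, double d9, double d10) {
  return d1 + d10;
}
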
Nicolas Geoffray
89d81878d2 Differentiate the CALL instruction between the MachO and the ELF ABI.
llvm-svn: 34667
2007-02-27 13:01:19 +00:00
Chris Lattner
535bd6d3ba always lower to RETFLAG, never leave it as just ret.
llvm-svn: 34639
2007-02-26 19:44:02 +00:00
Chris Lattner
84ab9a556c One important bugfix: PPC32 didn't have both ELF and MachO support for
external symbols and global addresses.  Add the missing ones.

One important workaround: PPCISD::CALL is matched by both PPCcall_ELF
and PPCcall_Macho; disable the _ELF patterns for now.

llvm-svn: 34601
2007-02-25 19:20:53 +00:00
Chris Lattner
43df5b335c implement support for the linux/ppc function call ABI. Patch by
Nicolas Geoffray!

llvm-svn: 34574
2007-02-25 05:34:32 +00:00
Jim Laskey
f9e5445ed4 Make LABEL a builtin opcode.
llvm-svn: 33537
2007-01-26 14:34:52 +00:00
Chris Lattner
542dfd5510 Rewrite the branch selector to be correct in the face of large functions.
The algorithm it used before wasn't 100% correct; we now use an iterative
expansion model.  This fixes assembler errors when compiling 403.gcc with
tail merging enabled.

Change the way the branch selector works overall: Now, the isel generates
PPC::BCC instructions (as it used to) directly, and these BCC instructions
are emitted to the output or jitted directly if branches don't need
expansion.  Only if branches need expansion are instructions rewritten
and created.  This should make branch selection faster, and it eliminates the
Bxx instructions from the .td file.

llvm-svn: 31837
2006-11-18 00:32:03 +00:00
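
A minimal, self-contained sketch of the iterative expansion idea described
above (a toy model with made-up types, not the actual PPCBranchSelector code):
conditional branches start out assumed short; any branch whose displacement no
longer fits is rewritten as a longer sequence, and the pass repeats until
offsets stop changing.

#include <vector>

// Toy model: each "instruction" has a size in bytes and, if it is a
// conditional branch, the index of its target instruction.
struct Inst {
  unsigned Size = 4;     // PPC instructions are 4 bytes
  int BranchTarget = -1; // -1 means "not a conditional branch"
  bool Expanded = false; // already rewritten as "inverted bcc over b"?
};

// Conditional branches reach only a limited signed displacement (~ +/-32KB).
static bool fits(long Disp) { return Disp >= -32768 && Disp <= 32767; }

// Iteratively expand out-of-range conditional branches until stable.
void relaxBranches(std::vector<Inst> &Insts) {
  bool Changed = true;
  while (Changed) {
    Changed = false;
    // Recompute the byte offset of every instruction.
    std::vector<long> Offset(Insts.size() + 1, 0);
    for (size_t i = 0; i != Insts.size(); ++i)
      Offset[i + 1] = Offset[i] + Insts[i].Size;
    // Expand any conditional branch whose target is now out of range.
    for (size_t i = 0; i != Insts.size(); ++i) {
      Inst &I = Insts[i];
      if (I.BranchTarget < 0 || I.Expanded)
        continue;
      if (!fits(Offset[I.BranchTarget] - Offset[i])) {
        I.Size = 8;       // becomes: inverted bcc over an unconditional b
        I.Expanded = true;
        Changed = true;   // sizes moved; another pass may expand more
      }
    }
  }
}
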
Chris Lattner
33fc1d45e5 add encoding for BCC, after finally wrestling strange ppc/tblgen endianness
issues to the ground.

llvm-svn: 31836
2006-11-17 23:53:28 +00:00
Chris Lattner
be9377a1e3 convert PPC::BCC to use the 'pred' operand instead of a separate predicate
value and CR reg #.  This requires swapping the order of these everywhere
that touches BCC and requires us to write custom matching logic for
PPCcondbranch :(

llvm-svn: 31835
2006-11-17 22:37:34 +00:00
Chris Lattner
e0263794f4 rename PPC::COND_BRANCH to PPC::BCC
llvm-svn: 31834
2006-11-17 22:14:47 +00:00
Chris Lattner
8c6a41ea12 start using PPC predicates more consistently.
llvm-svn: 31833
2006-11-17 22:10:59 +00:00
Jim Laskey
48850c10c0 This is a general cleanup of the PowerPC ABI. It addresses several problems
and bugs, including making sure that the TOS links back to the previous
frame, that the maximum call frame size is not counted twice when using
frame pointers, no longer growing the frame on calls, eliminating the
double store of SP, and a cleaner/faster dynamic alloca.

llvm-svn: 31792
2006-11-16 22:43:37 +00:00
Chris Lattner
a7ff5162b0 fix broken encoding
llvm-svn: 31778
2006-11-16 01:01:28 +00:00
Chris Lattner
6f5840c409 add patterns for ppc32 preinc stores. ppc64 next.
llvm-svn: 31775
2006-11-16 00:41:37 +00:00
Chris Lattner
3a494989a6 switch these back to the 'bad old way'
llvm-svn: 31774
2006-11-16 00:33:34 +00:00
Chris Lattner
5771156be0 Stop using isTwoAddress, switching to operand constraints instead.
Tell the codegen emitter that specific operands are not to be encoded, fixing
JIT regressions w.r.t. pre-inc loads and stores (e.g. lwzu, which we generate
even when general preinc loads are not enabled).

llvm-svn: 31770
2006-11-15 23:24:18 +00:00
Chris Lattner
474b5b7c95 fix ldu/stu jit encoding. Switch 64-bit preinc load instrs to use memri
addrmodes.

llvm-svn: 31757
2006-11-15 19:55:13 +00:00
Chris Lattner
1396961e85 Switch loads over to use memri as the operand instead of a reg/imm operand
pair for cleanliness.  Add instructions for PPC32 preinc-stores with commented
out patterns.  More improvement is needed to enable the patterns, but we're
getting close.

llvm-svn: 31749
2006-11-15 02:43:19 +00:00
Chris Lattner
e79a451475 group load and store instructions together. No functionality change.
llvm-svn: 31736
2006-11-14 19:19:53 +00:00
Chris Lattner
44dbdbe5cf Rework PPC64 calls. Now we have an LR8/CTR8 register which the PPC64 calls
clobber.  This allows LR8 to be saved/restored correctly as a 64-bit quantity,
instead of handling it as a 32-bit quantity.  This unbreaks ppc64 codegen when
the code is actually located above the 4G boundary.

llvm-svn: 31734
2006-11-14 18:44:47 +00:00
Chris Lattner
2ff632c54b Mark operands as symbol lo instead of imm32 so that they print lo(x) around
globals.

llvm-svn: 31672
2006-11-11 04:51:36 +00:00
Chris Lattner
6c8656a6b1 dform 8/9 are identical to dform 1
llvm-svn: 31637
2006-11-10 17:51:02 +00:00
Chris Lattner
ce6455489a add an initial cut at preinc loads for ppc32. This is broken for ppc64
(because the 64-bit reg target versions aren't implemented yet), doesn't
support r+r addr modes, and doesn't handle stores, but it works otherwise. :)

This is disabled unless -enable-ppc-preinc is passed to llc for now.

llvm-svn: 31621
2006-11-10 02:08:47 +00:00
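
For context, a small hypothetical loop of the shape pre-increment addressing is
aimed at (illustrative source only; whether this exact code triggers the new
path is not something this log shows):

// Walking a word array: a pre-increment load (lwzu) can bump the pointer
// and load from the new address in one instruction.
unsigned sumWords(const unsigned *P, unsigned N) {
  unsigned Sum = 0;
  while (N--)
    Sum += *++P; // candidate for lwzu: advance P by 4, then load
  return Sum;
}
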
Chris Lattner
6a5a4f85d3 correct the (currently unused) pattern for lwzu.
llvm-svn: 31535
2006-11-08 02:13:12 +00:00
Chris Lattner
2959789c92 encode BLR predicate info for the JIT
llvm-svn: 31450
2006-11-04 05:42:48 +00:00
Chris Lattner
6be726048e Go through all kinds of trouble to mark 'blr' as having a predicate operand
that takes a register and condition code.  Print these pieces of BLR the
right way, even though it is currently set to 'always'.

Next up: get the JIT encoding right, then enhance branch folding to produce
predicated blr for simple examples.

llvm-svn: 31449
2006-11-04 05:27:39 +00:00
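
A rough, hypothetical illustration of the "predicated blr" goal mentioned
above (the folding itself is only planned at this point in the log): an early
return guarded by a simple test is the kind of shape that could become a
single conditional return instead of a branch around the blr.

// Hypothetical shape for a predicated return: if branch folding learns to
// predicate blr, the early exit below can become one conditional return
// rather than "branch over the blr".
void maybeUpdate(int *Counter, bool Skip) {
  if (Skip)
    return; // candidate for a conditional blr
  ++*Counter;
}
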
Chris Lattner
c8a68d08c3 Describe PPC predicates, which are a pair of CR# and condition.
llvm-svn: 31438
2006-11-03 23:53:25 +00:00
Chris Lattner
895d199348 remove dead vars
llvm-svn: 31433
2006-11-03 23:46:45 +00:00
Chris Lattner
d43e8a7429 Add intrinsics for the rest of the DCB* instructions.
llvm-svn: 31148
2006-10-24 01:08:42 +00:00
Evan Cheng
ab51cf2e78 Merge ISD::TRUNCSTORE to ISD::STORE. Switch to using StoreSDNode.
llvm-svn: 30945
2006-10-13 21:14:26 +00:00
Chris Lattner
cf56917053 set isBarrier correctly
llvm-svn: 30936
2006-10-13 19:10:34 +00:00
Chris Lattner
7374bc0577 mark adjcallstack up/down as clobbering and using the SP
llvm-svn: 30908
2006-10-12 17:56:34 +00:00
Evan Cheng
577ef7694e Add properties to ComplexPattern.
llvm-svn: 30891
2006-10-11 21:03:53 +00:00
Evan Cheng
e71fe34d75 Reflect ISD::LOAD / ISD::LOADX / LoadSDNode changes.
llvm-svn: 30844
2006-10-09 20:57:25 +00:00
Chris Lattner
67f8cc51f4 Use abstract private/comment directives, to increase portability to ppc/linux
llvm-svn: 30621
2006-09-27 02:55:21 +00:00
Nate Begeman
d31efd190f Fold AND and ROTL more often
llvm-svn: 30577
2006-09-22 05:01:56 +00:00
Evan Cheng
81b645a76b CALLSEQ_* produces a chain even if it's not needed.
llvm-svn: 29603
2006-08-11 09:03:33 +00:00
Chris Lattner
4f8eb5ccaf bswapped load/store instructions are only available in indexed addressing form.
As such, use xoaddr (indexed only), not xaddr for address selection.

This fixes CodeGen/PowerPC/2006-07-19-stwbrx-crash.ll, a crash compiling lencod.

llvm-svn: 29208
2006-07-19 17:15:36 +00:00
Chris Lattner
b00b6c2e86 Make the implicit def instructions look like other instrs.
llvm-svn: 29174
2006-07-18 16:33:26 +00:00
Chris Lattner
a7976d329e Implement Regression/CodeGen/PowerPC/bswap-load-store.ll by folding bswaps
into i16/i32 load/stores.

llvm-svn: 29089
2006-07-10 20:56:58 +00:00
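
A hedged sketch of source that exercises this folding (names and the helper
below are made up; the point is that a byte swap fed directly by a load, or
feeding directly into a store, is what can map onto byte-reversed load/store
instructions such as lwbrx/stwbrx, assuming the swap pattern is recognized as
a bswap in the first place):

#include <cstdint>

// Portable 32-bit byte swap (illustrative helper, not LLVM's own).
static uint32_t bswap32(uint32_t V) {
  return (V >> 24) | ((V >> 8) & 0xFF00u) |
         ((V << 8) & 0xFF0000u) | (V << 24);
}

// bswap of a loaded value: candidate for a byte-reversed load (lwbrx).
uint32_t loadSwapped(const uint32_t *P) { return bswap32(*P); }

// bswap feeding a store: candidate for a byte-reversed store (stwbrx).
void storeSwapped(uint32_t *P, uint32_t V) { *P = bswap32(V); }
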
Chris Lattner
3b5873456e Add 64-bit MTCTR so that indirect calls work.
llvm-svn: 28931
2006-06-27 18:36:44 +00:00
Chris Lattner
d48ce27532 Implement 64-bit undef, sub, shl/shr, srem/urem
llvm-svn: 28929
2006-06-27 18:18:41 +00:00
Chris Lattner
97b3da1519 Implement a bunch of 64-bit cleanliness work. With this, treeadd builds (but
doesn't work right).

llvm-svn: 28921
2006-06-27 00:04:13 +00:00
Chris Lattner
b6a65f4661 Remove two more definitions
llvm-svn: 28918
2006-06-26 22:47:37 +00:00
Chris Lattner
86e6046515 remove two unused instructions.
llvm-svn: 28917
2006-06-26 22:44:13 +00:00
Chris Lattner
1f1b096142 Make these predicates correct in 64-bit mode too.
llvm-svn: 28890
2006-06-20 23:21:20 +00:00
Chris Lattner
52a956da52 Rename OR4 -> OR. Move some PPC64-specific stuff to the 64-bit file
llvm-svn: 28889
2006-06-20 23:18:58 +00:00
Chris Lattner
5705d4d519 remove unused flag
llvm-svn: 28888
2006-06-20 23:15:07 +00:00
Chris Lattner
7a856a6d88 remove some unused patterns
llvm-svn: 28886
2006-06-20 23:11:36 +00:00
Chris Lattner
7e742e46ac Add some 64-bit logical ops.
Split imm16Shifted into a sext/zext form for 64-bit support.
Add some patterns for immediate formation.  For example, we now compile this:

static unsigned long long Y;
void test3() {
  Y = 0xF0F00F00;
}

into:

_test3:
        li r2, 3840
        lis r3, ha16(_Y)
        xoris r2, r2, 61680
        std r2, lo16(_Y)(r3)
        blr

GCC produces:

_test3:
        li r0,0
        lis r2,ha16(_Y)
        ori r0,r0,61680
        sldi r0,r0,16
        ori r0,r0,3840
        std r0,lo16(_Y)(r2)
        blr

llvm-svn: 28883
2006-06-20 22:34:10 +00:00