Commit Graph

1733 Commits

Author SHA1 Message Date
Evan Cheng df9ac47e5e Make use of getStore().
llvm-svn: 30759
2006-10-05 23:01:46 +00:00
Evan Cheng af309d29b1 Add getStore() helper function to create ISD::STORE nodes.
llvm-svn: 30758
2006-10-05 22:57:11 +00:00
Jim Laskey 6549d22ef9 Alias analysis code clean ups.
llvm-svn: 30753
2006-10-05 15:07:25 +00:00
Evan Cheng f80dfa83a0 Fix some typos that can cause a flag value to have more than one use.
llvm-svn: 30727
2006-10-04 22:23:53 +00:00
Jim Laskey 708d0db2d8 More extensive alias analysis.
llvm-svn: 30721
2006-10-04 16:53:27 +00:00
Evan Cheng 5d9fd977d3 Combine ISD::EXTLOAD, ISD::SEXTLOAD, ISD::ZEXTLOAD into ISD::LOADX. Add an
extra operand to LOADX to specify the exact value extension type.

llvm-svn: 30714
2006-10-04 00:56:09 +00:00
Evan Cheng 91d76cb27f Fix an obvious typo.
llvm-svn: 30711
2006-10-03 23:08:27 +00:00
Jim Laskey e73a22514d Debugging kruft
llvm-svn: 30688
2006-10-02 13:01:17 +00:00
Jim Laskey 1368c265da Add ability to annotate (color) nodes in a viewGraph.
llvm-svn: 30686
2006-10-02 12:26:53 +00:00
Chris Lattner a9caf95591 refactor critical edge breaking out into the SplitCritEdgesForPHIConstants method.
This is a baby step towards fixing PR925.

llvm-svn: 30643
2006-09-28 06:17:10 +00:00
Andrew Lenharth c19ef92403 Comments on JumpTableness
llvm-svn: 30615
2006-09-26 20:02:30 +00:00
Jim Laskey 60832693a7 Load chain check is not needed
llvm-svn: 30613
2006-09-26 17:44:58 +00:00
Jim Laskey dde51671e5 Chain can be any operand
llvm-svn: 30611
2006-09-26 09:32:41 +00:00
Jim Laskey 5f3e0af9d0 Wrong size for load
llvm-svn: 30610
2006-09-26 08:14:06 +00:00
Jim Laskey b4a864d533 Can't move a load node if its chain is not used.
llvm-svn: 30609
2006-09-26 07:37:42 +00:00
Jim Laskey 7aa0638aa9 Accidental enable of bad code
llvm-svn: 30601
2006-09-25 21:11:32 +00:00
Jim Laskey b5534e5c28 Fix chain dropping in load and drop unused stores in ret blocks.
llvm-svn: 30600
2006-09-25 19:32:58 +00:00
Jim Laskey d07be232ba Core antialiasing for load and store.
llvm-svn: 30597
2006-09-25 16:29:54 +00:00
Andrew Lenharth 783a4a9d86 Add support for other relocation bases to jump tables, as well as custom asm directives
llvm-svn: 30593
2006-09-24 19:45:58 +00:00
Evan Cheng 77c0757f8b PIC jump table entries are always 32-bit. This fixes PIC jump table support on X86-64.
llvm-svn: 30590
2006-09-24 05:22:38 +00:00
Evan Cheng 449a0c7e33 Make it work for DAG combine of multi-value nodes.
llvm-svn: 30573
2006-09-21 19:04:05 +00:00
Jim Laskey 35f7eebb49 core corrections
llvm-svn: 30570
2006-09-21 17:35:47 +00:00
Jim Laskey 5d19d59017 Basic "in frame" alias analysis.
llvm-svn: 30568
2006-09-21 16:28:59 +00:00
Chris Lattner 082db3f9aa fold (aext (and (trunc x), cst)) -> (and x, cst).
llvm-svn: 30561
2006-09-21 06:40:43 +00:00
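
A worked bit-level example of why this fold is safe (values and code are illustrative, not from the commit):

#include <cassert>
#include <cstdint>

int main() {
  uint32_t x = 0xAABBCCDDu;
  uint8_t  narrow = (uint8_t)x & 0x0F;   // and(trunc x, cst)  -> 0x0D
  uint32_t folded = x & 0x0000000Fu;     // and(x, cst)        -> 0x0000000D
  // The low bits agree, and the high bits of an any_extend are unspecified,
  // so zeroing them (as the folded form does) is a legal choice.
  assert((uint8_t)folded == narrow);
}
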
Chris Lattner fa9f92cf65 Check the right value type. This fixes 186.crafty on x86
llvm-svn: 30560
2006-09-21 06:17:39 +00:00
Chris Lattner 8d8a3bf9c9 Compile:
int %test(ulong *%tmp) {
        %tmp = load ulong* %tmp         ; <ulong> [#uses=1]
        %tmp.mask = shr ulong %tmp, ubyte 50            ; <ulong> [#uses=1]
        %tmp.mask = cast ulong %tmp.mask to ubyte
        %tmp2 = and ubyte %tmp.mask, 3          ; <ubyte> [#uses=1]
        %tmp2 = cast ubyte %tmp2 to int         ; <int> [#uses=1]
        ret int %tmp2
}

to:

_test:
        movl 4(%esp), %eax
        movl 4(%eax), %eax
        shrl $18, %eax
        andl $3, %eax
        ret

instead of:

_test:
        movl 4(%esp), %eax
        movl 4(%eax), %eax
        shrl $18, %eax
        # TRUNCATE movb %al, %al
        andb $3, %al
        movzbl %al, %eax
        ret

llvm-svn: 30558
2006-09-21 06:14:31 +00:00
Chris Lattner a31f0a622b Generalize (zext (truncate x)) and (sext (truncate x)) folding to work when
the src/dst are not the same size.  This catches things like "truncate
32-bit X to 8 bits, then zext to 16", which happens a bit on X86.

llvm-svn: 30557
2006-09-21 06:00:20 +00:00
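
A minimal source-level illustration of the shape this catches (the function name is invented, not from the commit):

// Truncate a 32-bit value to 8 bits, then zero-extend the result to 16 bits.
unsigned short low_byte_as_u16(unsigned int x) {
  return (unsigned short)(unsigned char)x;
}
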
Chris Lattner c8cd62d381 Compile:
int test3(int a, int b) { return (a < 0) ? a : 0; }

to:

_test3:
        srawi r2, r3, 31
        and r3, r2, r3
        blr

instead of:

_test3:
        cmpwi cr0, r3, 1
        li r2, 0
        blt cr0, LBB2_2 ;entry
LBB2_1: ;entry
        mr r3, r2
LBB2_2: ;entry
        blr


This implements: PowerPC/select_lt0.ll:seli32_a_a

llvm-svn: 30517
2006-09-20 06:41:35 +00:00
Chris Lattner 8746e2cd57 Fold the full generality of (any_extend (truncate x))
llvm-svn: 30514
2006-09-20 06:29:17 +00:00
Chris Lattner 8b68decb27 Two things:
1. teach SimplifySetCC that '(srl (ctlz x), 5) == 0' is really x != 0.
2. Teach visitSELECT_CC to use SimplifySetCC instead of calling it and
   ignoring the result.  This allows us to compile:

bool %test(ulong %x) {
  %tmp = setlt ulong %x, 4294967296
  ret bool %tmp
}

to:

_test:
        cntlzw r2, r3
        cmplwi cr0, r3, 1
        srwi r2, r2, 5
        li r3, 0
        beq cr0, LBB1_2 ;
LBB1_1: ;
        mr r3, r2
LBB1_2: ;
        blr

instead of:

_test:
        addi r2, r3, -1
        cntlzw r2, r2
        cntlzw r3, r3
        srwi r2, r2, 5
        cmplwi cr0, r2, 0
        srwi r2, r3, 5
        li r3, 0
        bne cr0, LBB1_2 ;
LBB1_1: ;
        mr r3, r2
LBB1_2: ;
        blr

This isn't wonderful, but it's an improvement.

llvm-svn: 30513
2006-09-20 06:19:26 +00:00
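
A quick check of why fold 1 holds for a 32-bit x (illustration only, not code from the commit): ctlz returns a value in [0, 32] and returns 32 exactly when x == 0, so shifting the count right by 5 yields 1 only for x == 0.

#include <cassert>

unsigned ctlz32(unsigned x) {
  unsigned n = 0;
  while (n < 32 && !(x & (1u << (31 - n))))
    ++n;
  return n;
}

int main() {
  unsigned tests[] = {0u, 1u, 7u, 0x80000000u, 0xFFFFFFFFu};
  for (unsigned x : tests)
    assert(((ctlz32(x) >> 5) == 0) == (x != 0));   // '== 0' really tests x != 0
}
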
Chris Lattner 875ea0cdbd Expand 64-bit shifts more optimally if we know that the high bit of the
shift amount is one or zero.  For example, for:

long long foo1(long long X, int C) {
  return X << (C|32);
}

long long foo2(long long X, int C) {
  return X << (C&~32);
}

we get:

_foo1:
        movb $31, %cl
        movl 4(%esp), %edx
        andb 12(%esp), %cl
        shll %cl, %edx
        xorl %eax, %eax
        ret
_foo2:
        movb $223, %cl
        movl 4(%esp), %eax
        movl 8(%esp), %edx
        andb 12(%esp), %cl
        shldl %cl, %eax, %edx
        shll %cl, %eax
        ret

instead of:

_foo1:
        subl $4, %esp
        movl %ebx, (%esp)
        movb $32, %bl
        movl 8(%esp), %eax
        movl 12(%esp), %edx
        movb %bl, %cl
        orb 16(%esp), %cl
        shldl %cl, %eax, %edx
        shll %cl, %eax
        xorl %ecx, %ecx
        testb %bl, %bl
        cmovne %eax, %edx
        cmovne %ecx, %eax
        movl (%esp), %ebx
        addl $4, %esp
        ret
_foo2:
        subl $4, %esp
        movl %ebx, (%esp)
        movb $223, %cl
        movl 8(%esp), %eax
        movl 12(%esp), %edx
        andb 16(%esp), %cl
        shldl %cl, %eax, %edx
        shll %cl, %eax
        xorl %ecx, %ecx
        xorb %bl, %bl
        testb %bl, %bl
        cmovne %eax, %edx
        cmovne %ecx, %eax
        movl (%esp), %ebx
        addl $4, %esp
        ret

llvm-svn: 30506
2006-09-20 03:38:48 +00:00
Chris Lattner 5a42ebcff3 Fold extract_element(cst) to cst
llvm-svn: 30478
2006-09-19 05:02:39 +00:00
Chris Lattner 4c059f4962 Minor speedup for legalize by avoiding some malloc traffic
llvm-svn: 30477
2006-09-19 04:51:23 +00:00
Evan Cheng 1fc7c363e6 Fix a typo.
llvm-svn: 30474
2006-09-18 23:28:33 +00:00
Evan Cheng 4bfaf0bd2c Allow i32 UDIV, SDIV, UREM, SREM to be expanded into libcalls.
llvm-svn: 30470
2006-09-18 21:49:04 +00:00
Andrew Lenharth c50458fb90 absolute addresses must match pointer size
llvm-svn: 30461
2006-09-18 17:59:35 +00:00
Chris Lattner e50f5d1fb1 Oh yeah, this is needed too
llvm-svn: 30407
2006-09-16 05:08:34 +00:00
Chris Lattner 1b63391fdf simplify control flow, no functionality change
llvm-svn: 30403
2006-09-16 00:21:44 +00:00
Chris Lattner fbadbda6ba Allow custom expand of mul
llvm-svn: 30402
2006-09-16 00:09:24 +00:00
Chris Lattner 46d710e6ea Fold (X & C1) | (Y & C2) -> (X|Y) & C3 when possible.
This implements CodeGen/X86/and-or-fold.ll

llvm-svn: 30379
2006-09-14 21:11:37 +00:00
Chris Lattner 97614c86ce Split rotate matching code out to its own function. Make it stronger, by
matching things like ((x >> c1) & c2) | ((x << c3) & c4) to (rot x, c5) & c6

llvm-svn: 30376
2006-09-14 20:50:57 +00:00
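
A hypothetical source-level example of the shape being matched (with c1 = 24, c3 = 8, and all-ones masks this is simply a rotate left by 8 of a 32-bit value):

unsigned rotate_left_8(unsigned x) {
  return (x >> 24) | (x << 8);   // ((x >> c1) & c2) | ((x << c3) & c4)
}
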
Chris Lattner 84cc1f7cb8 If LSR went through a lot of trouble to put constants (e.g. the addr of a global)
in a specific BB, don't undo this!  This allows us to compile
CodeGen/X86/loop-hoist.ll into:

_foo:
        xorl %eax, %eax
***     movl L_Arr$non_lazy_ptr, %ecx
        movl 4(%esp), %edx
LBB1_1: #cond_true
        movl %eax, (%ecx,%eax,4)
        incl %eax
        cmpl %edx, %eax
        jne LBB1_1      #cond_true
LBB1_2: #return
        ret

instead of:

_foo:
        xorl %eax, %eax
        movl 4(%esp), %ecx
LBB1_1: #cond_true
***     movl L_Arr$non_lazy_ptr, %edx
        movl %eax, (%edx,%eax,4)
        incl %eax
        cmpl %ecx, %eax
        jne LBB1_1      #cond_true
LBB1_2: #return
        ret

This was noticed in 464.h264ref.  This doesn't usually affect PPC,
but strikes X86 all the time.

llvm-svn: 30290
2006-09-13 06:02:42 +00:00
Chris Lattner 72b503bcad Compile X << 1 (where X is a long-long) to:
        addl %ecx, %ecx
        adcl %eax, %eax

instead of:

        movl %ecx, %edx
        addl %edx, %edx
        shrl $31, %ecx
        addl %eax, %eax
        orl %ecx, %eax

and to:

        addc r5, r5, r5
        adde r4, r4, r4

instead of:

        slwi r2,r9,1
        srwi r0,r11,31
        slwi r3,r11,1
        or r2,r0,r2

on PPC.

llvm-svn: 30284
2006-09-13 03:50:39 +00:00
Evan Cheng 45fe3bc72c Added support for machine specific constantpool values. These are useful for
representing expressions that can only be resolved at link time, etc.

llvm-svn: 30278
2006-09-12 21:00:35 +00:00
Chris Lattner 2e0dfb0b16 This code was trying too hard. By eliminating redundant edges in the CFG
due to switch cases going to the same place, it made #pred != #phi entries,
breaking live interval analysis.

This fixes 458.sjeng on x86 with llc.

llvm-svn: 30236
2006-09-10 06:36:57 +00:00
Chris Lattner f0359b343a Implement the fpowi now by lowering to a libcall
llvm-svn: 30225
2006-09-09 06:03:30 +00:00
Chris Lattner e4bbb6c341 Allow targets to custom lower expanded BIT_CONVERT's
llvm-svn: 30217
2006-09-09 00:20:27 +00:00
Chris Lattner 707339a57b Fix CodeGen/Generic/2006-09-06-SwitchLowering.ll, a bug where SDIsel inserted
too many phi operands when lowering a switch to branches in some cases.

llvm-svn: 30142
2006-09-07 01:59:34 +00:00
Chris Lattner 0dce3311c4 Change the default to 0, which means 'default'.
llvm-svn: 30114
2006-09-05 17:39:15 +00:00
Chris Lattner af23f9b5f6 Completely eliminate def&use operands. Now a register operand is EITHER a
def operand or a use operand.

llvm-svn: 30109
2006-09-05 02:31:13 +00:00
Duraid Madina 373be1d1a2 forgot this
llvm-svn: 30097
2006-09-04 07:44:11 +00:00
Evan Cheng e93762d36e Allow legalizer to expand ISD::MUL using only MULHS in the rare case that this
is possible and the target only supports MULHS.

llvm-svn: 30022
2006-09-01 18:17:58 +00:00
Evan Cheng 31305c45da DAG combiner fix for rotates. Previously the outermost condition checked
for ROTL availability, which prevented it from forming ROTR for targets that
have only ROTR.

llvm-svn: 29997
2006-08-31 07:41:12 +00:00
Evan Cheng e5570a4c3f Move isCommutativeBinOp out of SelectionDAG.cpp and DAGCombiner.cpp. Make it a static method of SelectionDAG.
llvm-svn: 29951
2006-08-29 06:42:35 +00:00
Chris Lattner 3d27be1333 s|llvm/Support/Visibility.h|llvm/Support/Compiler.h|
llvm-svn: 29911
2006-08-27 12:54:02 +00:00
Evan Cheng 849f4bf8dd Eliminate SelectNodeTo() and getTargetNode() variants which take more than
3 SDOperand operands. They are replaced by versions which take an array
of SDOperand and the number of operands.

llvm-svn: 29905
2006-08-27 08:08:54 +00:00
Evan Cheng 34b70eea5c SelectNodeTo now returns a SDNode*.
llvm-svn: 29901
2006-08-26 08:00:10 +00:00
Chris Lattner 451b099113 Fix PR861
llvm-svn: 29796
2006-08-21 20:24:53 +00:00
Chris Lattner d86418ab20 switch the SUnit pred/succ sets from being std::sets to being smallvectors.
This reduces selectiondag time on kc++ from 5.43s to 4.98s (9%).  More
significantly, this speeds up the default ppc scheduler from ~1571ms to 1063ms,
a 33% speedup.

llvm-svn: 29743
2006-08-17 00:09:56 +00:00
Chris Lattner 65879caf07 minor changes.
llvm-svn: 29740
2006-08-16 22:57:46 +00:00
Chris Lattner a4f3625c23 Use the appropriate typedef
llvm-svn: 29730
2006-08-16 20:59:32 +00:00
Chris Lattner a5a3eafbd0 Start using SDVTList more consistently
llvm-svn: 29711
2006-08-15 19:11:05 +00:00
Chris Lattner f98411a220 add a new SDVTList type and new SelectionDAG::getVTList methods to streamline
the creation of canonical VTLists.

llvm-svn: 29709
2006-08-15 17:46:01 +00:00
Chris Lattner bd8877744b eliminate use of getNode that takes vector of valuetypes.
llvm-svn: 29687
2006-08-14 23:53:35 +00:00
Chris Lattner 3bf4be453f Add a new getNode() method that takes a pointer to an already-intern'd list
of value-type nodes.  This avoids having to do mallocs for std::vectors of
valuetypes when a node returns more than one type.

llvm-svn: 29685
2006-08-14 23:31:51 +00:00
Chris Lattner e93a39f2d7 remove SelectionDAG::InsertISelMapEntry, it is dead
llvm-svn: 29677
2006-08-14 22:24:39 +00:00
Chris Lattner 63268f0672 Add code to resize the CSEMap hash table. This doesn't speed up codegen of
kimwitu, but seems like a good idea from an "avoid performance cliffs" standpoint :)

llvm-svn: 29675
2006-08-14 22:19:25 +00:00
Chris Lattner 8e37283d8b Add the actual constant to the hash for ConstantPool nodes. Thanks to
Rafael Espindola for pointing this out.

llvm-svn: 29669
2006-08-14 20:12:44 +00:00
Chris Lattner 0a60294fa0 Switch to using SuperFastHash instead of adding all elements together. This
doesn't significantly improve performance but it helps a small amount.

llvm-svn: 29642
2006-08-12 01:07:10 +00:00
Chris Lattner 04aa034f38 Switch NodeID to track 32-bit chunks instead of 8-bit chunks, for a 2.5%
speedup in isel time.

llvm-svn: 29640
2006-08-11 23:55:53 +00:00
Chris Lattner 0c2e5412bb Remove 8 more std::map's.
llvm-svn: 29631
2006-08-11 21:55:30 +00:00
Chris Lattner 3f16b201e2 Move the BBNodes, GlobalValues, TargetGlobalValues, Constants, TargetConstants,
RegNodes, and ValueNodes maps into the CSEMap.

llvm-svn: 29626
2006-08-11 21:01:22 +00:00
Chris Lattner fcb16470ec eliminate the NullaryOps map, use CSEMap instead.
llvm-svn: 29621
2006-08-11 18:38:11 +00:00
Chris Lattner 6f22ebd8be change internal impl of dag combiner so that calls to CombineTo never have to
make a temporary vector.

llvm-svn: 29618
2006-08-11 17:56:38 +00:00
Chris Lattner a2f4086828 Change one ReplaceAllUsesWith method to take an array of operands to replace
instead of a vector of operands.

llvm-svn: 29616
2006-08-11 17:46:28 +00:00
Chris Lattner c24a1d3093 Start eliminating temporary vectors used to create DAG nodes. Instead, pass
in the start of an array and a count of operands where applicable.  In many
cases, the number of operands is known, so this static array can be allocated
on the stack, avoiding the heap.  In many other cases, a SmallVector can be
used, which has the same benefit in the common cases.

I updated a lot of code calling getNode that takes a vector, but ran out of
time.  The rest of the code should be updated, and these methods should be
removed.

We should also do the same thing to eliminate the methods that take a
vector of MVT::ValueTypes.

It would be extra nice to convert the dagiselemitter to avoid creating vectors
for operands when calling getTargetNode.

llvm-svn: 29566
2006-08-08 02:23:42 +00:00
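
A generic sketch of the calling convention being moved to (names such as Node and makeNode are invented for illustration and are not the SelectionDAG API): callers that know the operand count up front can pass a stack array plus a count instead of building a std::vector.

#include <cstdio>

struct Node { unsigned Opcode; unsigned NumOps; };

Node makeNode(unsigned Opcode, Node *const *Ops, unsigned NumOps) {
  (void)Ops;                                // a real factory would copy these
  return Node{Opcode, NumOps};
}

int main() {
  Node A{1, 0}, B{2, 0}, C{3, 0};
  Node *Ops[] = { &A, &B, &C };             // stack array, no heap traffic
  Node N = makeNode(42, Ops, 3);
  std::printf("opcode %u with %u operands\n", N.Opcode, N.NumOps);
}
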
Chris Lattner 97af9d5d3a Eliminate some malloc traffic by allocating vectors on the stack. Change some
method that took std::vector<SDOperand> to take a pointer to a first operand
and #operands.

This speeds up isel on kc++ by about 3%.

llvm-svn: 29561
2006-08-08 01:09:31 +00:00
Chris Lattner 1ee75ce65d Revamp the "CSEMap" datastructure used in the SelectionDAG class. This
eliminates a bunch of std::map's in the SelectionDAG, replacing them with a
home-grown hashtable.

This is still a work in progress: not all the maps have been moved over and the
hashtable never resizes.  That said, this still speeds up llc 20% on kimwitu++
with -fast -regalloc=local using a release build.

llvm-svn: 29550
2006-08-07 23:03:03 +00:00
Evan Cheng 445b91a041 Clear TopOrder before assigning topological order. Some clean ups.
llvm-svn: 29546
2006-08-07 22:13:29 +00:00
Evan Cheng 1640ae5a84 Reverse the FlaggedNodes after scanning up for flagged preds or else the order would be reversed.
llvm-svn: 29545
2006-08-07 22:12:12 +00:00
Chris Lattner 8927c875bb Make SelectionDAG::RemoveDeadNodes iterative instead of recursive, which
also make it simpler.

llvm-svn: 29524
2006-08-04 17:45:20 +00:00
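
A generic worklist sketch of the iterative pattern described (not the actual SelectionDAG code; the Node type here is invented): deep operand chains are walked with an explicit stack instead of recursion.

#include <vector>

struct Node {
  std::vector<Node*> Operands;
  unsigned NumUses = 0;
  bool Dead = false;
};

void removeDeadNodes(std::vector<Node*> &Candidates) {
  std::vector<Node*> Worklist(Candidates.begin(), Candidates.end());
  while (!Worklist.empty()) {
    Node *N = Worklist.back();
    Worklist.pop_back();
    if (N->NumUses != 0 || N->Dead)
      continue;                              // still live or already handled
    N->Dead = true;                          // "delete" the node
    for (Node *Op : N->Operands)
      if (--Op->NumUses == 0)                // operand may now be dead too
        Worklist.push_back(Op);
  }
}
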
Jim Laskey a5b707e3ad Copy the liveins for the first block. PR859
llvm-svn: 29511
2006-08-03 20:51:06 +00:00
Chris Lattner 524c1a21f2 Work around a GCC 3.3.5 bug noticed by a user.
llvm-svn: 29490
2006-08-03 00:18:59 +00:00
Evan Cheng bba1ebda32 - Change AssignTopologicalOrder to return vector of SDNode* by reference.
- Tweak implementation to avoid using std::map.

llvm-svn: 29479
2006-08-02 22:00:34 +00:00
Jim Laskey 29e635d3c9 Final polish on machine pass registries.
llvm-svn: 29471
2006-08-02 12:30:23 +00:00
Jim Laskey 17c67efe8a Now that the ISel is available, it's possible to create a default instruction
scheduler creator.

llvm-svn: 29452
2006-08-01 19:14:14 +00:00
Jim Laskey 03593f72db 1. Change use of "Cache" to "Default".
2. Added argument to instruction scheduler creators so the creators can do
special things.
3. Repaired target hazard code.
4. Misc.

More to follow.

llvm-svn: 29450
2006-08-01 18:29:48 +00:00
Jim Laskey 95eda5b1f3 Introducing plugable register allocators and instruction schedulers.
llvm-svn: 29434
2006-08-01 14:21:23 +00:00
Evan Cheng 9631a60020 Added AssignTopologicalOrder() to assign each node a unique id based on its topological order.
llvm-svn: 29431
2006-08-01 08:20:41 +00:00
Evan Cheng 6ae6ac1216 PIC jump table entries are always 32-bit even in 64-bit mode.
llvm-svn: 29422
2006-08-01 01:03:13 +00:00
Evan Cheng b572401bea Remove InFlightSet hack. No longer needed.
llvm-svn: 29373
2006-07-28 00:47:19 +00:00
Nate Begeman efc312a5c7 Code cleanups, per review
llvm-svn: 29347
2006-07-27 16:46:58 +00:00
Evan Cheng acb606ff33 AssignNodeIds should return unsigned.
llvm-svn: 29343
2006-07-27 07:36:47 +00:00
Evan Cheng 29eefc164c AssignNodeIds assigns each node in the DAG a unique id.
llvm-svn: 29337
2006-07-27 06:39:06 +00:00
Chris Lattner 85ea83e821 Add some advice
llvm-svn: 29324
2006-07-27 04:24:14 +00:00
Nate Begeman 787565024a Support jump tables when in PIC relocation model
llvm-svn: 29318
2006-07-27 01:13:04 +00:00
Chris Lattner 4488f0c303 Fix a case where LegalizeAllNodesNotLeadingTo could take exponential time.
This manifested itself as really long time to compile
Regression/CodeGen/Generic/2003-05-28-ManyArgs.ll on ppc.
This is PR847.

llvm-svn: 29313
2006-07-26 23:55:56 +00:00
Reid Spencer 421475cd3b For PR780:
1. Move IncludeFile.h to System library
2. Move IncludeFile.cpp to System library
3. #1 and #2 required to prevent cyclic library dependencies for libSystem
4. Convert all existing uses of Support/IncludeFile.h to System/IncludeFile.h
5. Add IncludeFile support to various lib/System classes.
6. Add new lib/System classes to LinkAllVMCore.h
All this in an attempt to pull in lib/System to what's required for VMCore

llvm-svn: 29287
2006-07-26 16:18:00 +00:00
Reid Spencer 658b9476f0 Initialize some variables the compiler warns about.
llvm-svn: 29277
2006-07-25 20:44:41 +00:00
Jim Laskey 4e153f1b91 Use an enumeration to eliminate data relocations.
llvm-svn: 29249
2006-07-21 20:57:35 +00:00
Evan Cheng 7c970b98d0 If a shuffle is a splat, check if the argument is a build_vector with all elements being the same. If so, return the argument.
llvm-svn: 29242
2006-07-21 08:25:53 +00:00
Chris Lattner 55782c6c41 Build more debugger/selectiondag libraries as archives instead of .o files.
This works around bugs in some versions of the cygwin linker.

Patch contributed by Anton Korobeynikov.

llvm-svn: 29239
2006-07-21 00:10:47 +00:00
Evan Cheng 8472e0c4af If a shuffle is unary, i.e. one of the vector arguments is not needed, turn the
operand into an undef and adjust the mask accordingly.

llvm-svn: 29232
2006-07-20 22:44:41 +00:00
Chris Lattner b030532910 Mems can be in the output list also. This is the second half of a fix for
PR833

llvm-svn: 29224
2006-07-20 19:02:21 +00:00
Andrew Lenharth ec104a2b41 80 cols
llvm-svn: 29221
2006-07-20 17:43:27 +00:00
Andrew Lenharth c496b418b5 Reduce number of exported symbols
llvm-svn: 29220
2006-07-20 17:28:38 +00:00
Chris Lattner c0973edc69 Add an out-of-line virtual method for the sdnode class to give it a home.
llvm-svn: 29192
2006-07-19 00:00:37 +00:00
Jim Laskey f7300b2706 It was pointed out that DEBUG() is only available with -debug.
llvm-svn: 29106
2006-07-11 18:25:13 +00:00
Jim Laskey c3d341ea98 Ensure that dump calls that are associated with asserts are removed from
non-debug build.

llvm-svn: 29105
2006-07-11 17:58:07 +00:00
Chris Lattner 1b8ea1f5ba Fix CodeGen/Alpha/2006-07-03-ASMFormalLowering.ll and PR818.
llvm-svn: 29099
2006-07-11 01:40:09 +00:00
Evan Cheng d19938834b Ugly hack! Add helper functions InsertInFlightSetEntry and
RemoveInFlightSetEntry. They are used in place of direct set operators to
reduce instruction selection function stack size.

llvm-svn: 28987
2006-06-29 23:57:05 +00:00
Chris Lattner 996795b0dd Use hidden visibility to make symbols in an anonymous namespace get
dropped.  This shrinks libllvmgcc.dylib another 67K

llvm-svn: 28975
2006-06-28 23:17:24 +00:00
Chris Lattner e097e6f7c7 Shave another 27K off libllvmgcc.dylib with visibility hidden
llvm-svn: 28973
2006-06-28 22:17:39 +00:00
Chris Lattner 54a34cd20b Mark these two classes as hidden, shrinking libllbmgcc.dylib by 25K
llvm-svn: 28970
2006-06-28 21:58:30 +00:00
Chris Lattner 710b3d5ea1 Fix CodeGen/Generic/2006-06-28-SimplifySetCCCrash.ll
llvm-svn: 28965
2006-06-28 18:29:47 +00:00
Reid Spencer ee7eaa25cf For PR801:
Refactor the Graph writing code to use a common implementation which is
now in lib/Support/GraphWriter.cpp. This completes the PR.

Patch by Anton Korobeynikov. Thanks, Anton!

llvm-svn: 28925
2006-06-27 16:49:46 +00:00
Evan Cheng ef9e07d3f0 Consistency. EXTRACT_ELEMENT index operand should have ptr type.
llvm-svn: 28795
2006-06-15 08:11:54 +00:00
Evan Cheng 55772ccfd6 Instructions with variable operands (variable_ops) can have a number of required
operands, e.g.
def CALL32r : I<0xFF, MRM2r, (ops GR32:$dst, variable_ops),
                "call {*}$dst", [(X86call GR32:$dst)]>;
TableGen should emit operand information for the "required" operands.

Added a target instruction info flag M_VARIABLE_OPS to indicate the target
instruction may have more operands in addition to the minimum required
operands.

llvm-svn: 28791
2006-06-15 07:22:16 +00:00
Chris Lattner 32d92e004d Make sure to update the CFG correctly if a switch only has a default dest.
This fixes CodeGen/Generic/2006-06-12-LowerSwitchCrash.ll

llvm-svn: 28755
2006-06-12 18:25:29 +00:00
Andrew Lenharth 0e57b2cb92 Start on my todo list
llvm-svn: 28752
2006-06-12 16:07:18 +00:00
Chris Lattner c03a9259c0 Fix X86/inline-asm.ll:test2, a case where an input value was implicitly
truncated.

llvm-svn: 28733
2006-06-08 18:27:11 +00:00
Chris Lattner 705948d742 Fix Regression/CodeGen/X86/inline-asm.ll, a case where inline asm causes
implicit extension of a register.

llvm-svn: 28731
2006-06-08 18:22:48 +00:00
Reid Spencer 614cb2ff82 For PR798:
Provide GraphViz support for MingW32. Patch provided by Anton Korobeynikov

llvm-svn: 28688
2006-06-05 16:26:06 +00:00
Reid Spencer a647c7ff42 Use archive libraries instead of object files for VMCore, BCReader,
BCWriter, and bzip2 libraries. Adjust the various makefiles to accommodate
these changes. This was done to speed up link times.

llvm-svn: 28610
2006-06-01 01:30:27 +00:00
Evan Cheng 0c0996a97b commuteInstruction() does not always create a new MI!
llvm-svn: 28592
2006-05-31 18:03:39 +00:00
Evan Cheng 9d91caa053 Eliminate a memory leak.
llvm-svn: 28585
2006-05-31 07:13:03 +00:00
Evan Cheng 64d2846017 visitVBinOp: Can't fold divide by zero!
llvm-svn: 28584
2006-05-31 06:08:35 +00:00
Evan Cheng d12c97d23a Make sure the register pressure reduction schedulers work for non-uniform
latency targets, e.g. PPC32.

llvm-svn: 28561
2006-05-30 18:05:39 +00:00
Evan Cheng 61e9f0d680 When a priority_queue is empty, the behavior of top() operator is
non-deterministic. Returns NULL when it's empty!

llvm-svn: 28560
2006-05-30 18:04:34 +00:00
Chris Lattner 8f872d2091 Fix a nasty dag combiner bug that caused nondeterministic crashes (MY FAVORITE!):
SimplifySelectOps would eliminate a Select, delete it, then return true.

The clients would see that it did something and return null.

The top level would see a null return, and decide that nothing happened,
proceeding to process the node in other ways: boom.

The fix is simple: clients of SimplifySelectOps should return the select
node itself.

In order to catch really obnoxious boogs like this in the future, add an
assert that nodes are not deleted.  We do this by checking for a sentry node
type that the SDNode dtor sets when a node is destroyed.

llvm-svn: 28514
2006-05-27 00:43:02 +00:00
Evan Cheng 21dee4e0b2 Make CALL node consistent with RET node. Signness of value has type MVT::i32
instead of MVT::i1. Either is fine except MVT::i32 is probably a legal type
for most (if not all) platforms while MVT::i1 is not.

llvm-svn: 28511
2006-05-26 23:13:20 +00:00
Evan Cheng a2e9953c54 Change RET node to include signness information of the return values. e.g.
RET chain, value1, sign1, value2, sign2

llvm-svn: 28509
2006-05-26 23:09:09 +00:00
Evan Cheng 009f5f55f7 Turn on -sched-commute-nodes by default.
llvm-svn: 28465
2006-05-25 08:37:31 +00:00
Evan Cheng 4582771f3f CALL node change: now including signness of every argument.
llvm-svn: 28461
2006-05-25 00:55:32 +00:00
Chris Lattner aa2372562e Patches to make the LLVM sources more -pedantic clean. Patch provided
by Anton Korobeynikov!  This is a step towards closing PR786.

llvm-svn: 28447
2006-05-24 17:04:05 +00:00
Evan Cheng ac4f66ff24 -enable-unsafe-fp-math implies -enable-finite-only-fp-math
llvm-svn: 28437
2006-05-23 18:18:46 +00:00
Vladimir Prus df1d439849 Fix missing include
llvm-svn: 28435
2006-05-23 13:43:15 +00:00
Evan Cheng 1c5b7d12df Incorrect SETCC CondCode used for FP comparisons.
llvm-svn: 28433
2006-05-23 06:40:47 +00:00
Evan Cheng d8e2f6ebc1 lib/Target/Target.td
llvm-svn: 28386
2006-05-18 20:42:07 +00:00
Chris Lattner 7949c2e8b2 Fix the result of the call to use a correct vbitconvert. There is no need to
use getPackedTypeBreakdown at all here.

llvm-svn: 28365
2006-05-17 20:49:36 +00:00
Chris Lattner 938155ca57 Correct a previous patch which broke CodeGen/PowerPC/vec_call.ll
llvm-svn: 28364
2006-05-17 20:43:21 +00:00
Evan Cheng 751cd7653d Fixed a LowerCallTo and LowerArguments bug. They were introducing illegal
VBIT_VECTOR nodes. There was some confusion about the semantics of
getPackedTypeBreakdown(). e.g. for <4 x f32> it returns 1 and v4f32, not 4,
and f32.

llvm-svn: 28352
2006-05-17 18:16:39 +00:00
Chris Lattner 62f1b83c0e When we legalize target nodes, do not use getNode to create a new node,
use UpdateNodeOperands to just update the operands!  This is important because
getNode will allocate a new node if the node returns a flag and this breaks
assumptions in the legalizer that you can legalize some things multiple times
and get exactly the same results.

This latent bug was exposed by my ppc patch last night, and this fixes
gsm/toast.

llvm-svn: 28348
2006-05-17 18:00:08 +00:00
Chris Lattner a1cec0106a Add an assertion, avoid some unneeded work for each call. No functionality
change.

llvm-svn: 28347
2006-05-17 17:55:45 +00:00
Chris Lattner b77ba73a29 Add support for calls that pass and return legal vectors.
llvm-svn: 28340
2006-05-16 23:39:44 +00:00
Chris Lattner aaa23d953f Add a new ISD::CALL node, make the default impl of TargetLowering::LowerCallTo
produce it.

llvm-svn: 28338
2006-05-16 22:53:20 +00:00
Andrew Lenharth 1dc9ec5874 Move this code to a common place
llvm-svn: 28329
2006-05-16 17:42:15 +00:00
Chris Lattner 3d82699605 Add a chain to FORMAL_ARGUMENTS. This is a minimal port of the X86 backend,
it doesn't currently use/maintain the chain properly.  Also, make the
X86ISelLowering.cpp file 80-col clean.

llvm-svn: 28320
2006-05-16 06:45:34 +00:00
Chris Lattner 957cb6733a Move function-live-in-handling code from the sdisel code to the scheduler.
This code should be emitted after legalize, so it can't be in sdisel.

Note that the EmitFunctionEntryCode hook should be updated to operate on the
DAG.  The X86 backend is the only one currently using this hook.

llvm-svn: 28315
2006-05-16 06:10:58 +00:00
Chris Lattner 5f0edfb849 Legalize FORMAL_ARGUMENTS nodes correctly; we don't want to legalize them once
for each argument.

llvm-svn: 28313
2006-05-16 05:49:56 +00:00
Evan Cheng 99f2f79e2f Fixing 2006-05-01-SchedCausingSpills.ll; some clean up
llvm-svn: 28279
2006-05-13 08:22:24 +00:00
Evan Cheng d1915cfa6f Revert an un-intended change
llvm-svn: 28278
2006-05-13 05:53:47 +00:00
Chris Lattner 69a0ce6261 Merge identical code.
llvm-svn: 28274
2006-05-13 02:11:14 +00:00
Chris Lattner 53cdb2f2b0 Remove dead vars
llvm-svn: 28255
2006-05-12 18:06:45 +00:00
Chris Lattner da076e41ab remove dead vars
llvm-svn: 28254
2006-05-12 18:04:28 +00:00
Chris Lattner afe72481f6 Comment out dead variables
llvm-svn: 28252
2006-05-12 17:57:54 +00:00
Chris Lattner 8c02c3f41a Compile:
%tmp152 = setgt uint %tmp144, %tmp149           ; <bool> [#uses=1]
        %tmp159 = setlt uint %tmp144, %tmp149           ; <bool> [#uses=1]
        %bothcond2 = or bool %tmp152, %tmp159           ; <bool> [#uses=1]

To setne, not setune, which causes an assertion fault.

llvm-svn: 28244
2006-05-12 17:03:46 +00:00
Owen Anderson 8c2c1e90c4 Refactor a bunch of includes so that TargetMachine.h doesn't have to include
TargetData.h.  This should make recompiles a bit faster with my current
TargetData tinkering.

llvm-svn: 28238
2006-05-12 06:33:49 +00:00
Evan Cheng 095c9d9b7f Duh. That could take a long time.
llvm-svn: 28235
2006-05-12 06:05:18 +00:00
Chris Lattner 66adee93aa Two simplifications for token factor nodes: simplify tf(x,x) -> x.
simplify tf(x,y,y,z) -> tf(x,y,z).

llvm-svn: 28233
2006-05-12 05:01:37 +00:00
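
A rough sketch of the second simplification (illustrative only, using plain integers as stand-ins for operands; token factor operand order does not matter): duplicate operands are dropped, and if one distinct operand remains the node collapses to it.

#include <algorithm>
#include <vector>

std::vector<int> simplifyTokenFactorOps(std::vector<int> Ops) {
  std::sort(Ops.begin(), Ops.end());
  Ops.erase(std::unique(Ops.begin(), Ops.end()), Ops.end());
  return Ops;   // caller replaces tf(...) with Ops[0] when Ops.size() == 1
}
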
Evan Cheng afed73eebe Add capability to scheduler to commute nodes for profit.
If a two-address instruction's first operand has uses below it, the instruction
should be commuted when possible.

llvm-svn: 28230
2006-05-12 01:58:24 +00:00
Evan Cheng d38c22bdd3 Refactor scheduler code. Move register-reduction list scheduler to a
separate file. Added an initial implementation of top-down register pressure
reduction list scheduler.

llvm-svn: 28226
2006-05-11 23:55:42 +00:00
Evan Cheng 9665ba053f Templatify RegReductionPriorityQueue
llvm-svn: 28212
2006-05-10 06:16:44 +00:00
Nate Begeman 1a225d23ae Fix PR773
llvm-svn: 28207
2006-05-09 18:20:51 +00:00
Evan Cheng 7d693898ee Add pseudo dependency to force a def&use operand to be scheduled last (unless
the distance between the def and another use is much longer). This is under
option control for now "-sched-lower-defnuse".

llvm-svn: 28201
2006-05-09 07:13:34 +00:00
Evan Cheng 2c74848af1 Debugging info
llvm-svn: 28200
2006-05-09 06:55:15 +00:00
Chris Lattner 446e1ef26a Make the case I just checked in stronger. Now we compile this:
short test2(short X, short x) {
  int Y = (short)(X+x);
  return Y >> 1;
}

to:

_test2:
        add r2, r3, r4
        extsh r2, r2
        srawi r3, r2, 1
        blr

instead of:

_test2:
        add r2, r3, r4
        extsh r2, r2
        srwi r2, r2, 1
        extsh r3, r2
        blr

llvm-svn: 28175
2006-05-08 21:18:59 +00:00
Chris Lattner 29062da0ac Implement and_sext.ll:test3, generating:
_test4:
        srawi r3, r3, 16
        blr

instead of:

_test4:
        srwi r2, r3, 16
        extsh r3, r2
        blr

for:

short test4(unsigned X) {
  return (X >> 16);
}

llvm-svn: 28174
2006-05-08 20:59:41 +00:00
Chris Lattner 2935d8190c Compile this:
short test4(unsigned X) {
  return (X >> 16);
}

to:

_test4:
        movl 4(%esp), %eax
        sarl $16, %eax
        ret

instead of:

_test4:
        movl $-65536, %eax
        andl 4(%esp), %eax
        sarl $16, %eax
        ret

llvm-svn: 28171
2006-05-08 20:51:54 +00:00
Chris Lattner 78da6792e7 Fold shifts with undef operands.
llvm-svn: 28167
2006-05-08 17:29:49 +00:00
Nate Begeman d7a19102d1 Make emission of jump tables a bit less conservative; they are now required
to be only 31.25% dense, rather than 75% dense.

llvm-svn: 28165
2006-05-08 16:51:36 +00:00
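For scale, at 31.25% density a switch whose case labels span a range of 32 values needs only 10 real cases (10/32 = 31.25%) to get a jump table, where the old 75% threshold would have required 24 of the 32 slots to be populated.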
Nate Begeman e5ce5bb6da Fix PR772
llvm-svn: 28161
2006-05-08 01:35:01 +00:00
Chris Lattner 7e7bcf3a54 Simplify some code, add a couple minor missed folds
llvm-svn: 28152
2006-05-06 23:06:26 +00:00
Chris Lattner 751817c54f constant fold sign_extend_inreg
llvm-svn: 28151
2006-05-06 23:05:41 +00:00
Chris Lattner 2a4d7b845b remove cases handled elsewhere
llvm-svn: 28150
2006-05-06 22:43:44 +00:00
Chris Lattner 1ecb2a2dac Use the new TargetLowering::ComputeNumSignBits method to eliminate
sign_extend_inreg operations.  Though ComputeNumSignBits is still rudimentary,
this is enough to compile this:

short test(short X, short x) {
  int Y = X+x;
  return (Y >> 1);
}
short test2(short X, short x) {
  int Y = (short)(X+x);
  return Y >> 1;
}

into:

_test:
        add r2, r3, r4
        srawi r3, r2, 1
        blr
_test2:
        add r2, r3, r4
        extsh r2, r2
        srawi r3, r2, 1
        blr

instead of:

_test:
        add r2, r3, r4
        srawi r2, r2, 1
        extsh r3, r2
        blr
_test2:
        add r2, r3, r4
        extsh r2, r2
        srawi r2, r2, 1
        extsh r3, r2
        blr

llvm-svn: 28146
2006-05-06 09:30:03 +00:00
Chris Lattner 21cd99024a When inserting casts, be careful of where we put them. We cannot insert
a cast immediately before a PHI node.

This fixes Regression/CodeGen/Generic/2006-05-06-GEP-Cast-Sink-Crash.ll

llvm-svn: 28143
2006-05-06 09:10:37 +00:00
Chris Lattner 907e392dba Fold trunc(any_ext). This gives stuff like:
27,28c27
<       movzwl %di, %edi
<       movl %edi, %ebx
---
>       movw %di, %bx

llvm-svn: 28137
2006-05-05 22:56:26 +00:00
Chris Lattner 57f8c5a387 Shrink shifts when possible.
llvm-svn: 28136
2006-05-05 22:53:17 +00:00
Chris Lattner 3d26577396 Fold (fpext (load x)) -> (extload x)
llvm-svn: 28130
2006-05-05 21:34:35 +00:00
Chris Lattner 3e3f2c63c3 More aggressively sink GEP offsets into loops. For example, before we
generated:

        movl 8(%esp), %eax
        movl %eax, %edx
        addl $4316, %edx
        cmpb $1, %cl
        ja LBB1_2       #cond_false
LBB1_1: #cond_true
        movl L_QuantizationTables720$non_lazy_ptr, %ecx
        movl %ecx, (%edx)
        movl L_QNOtoQuantTableShift720$non_lazy_ptr, %edx
        movl %edx, 4460(%eax)
        ret
...

Now we generate:

        movl 8(%esp), %eax
        cmpb $1, %cl
        ja LBB1_2       #cond_false
LBB1_1: #cond_true
        movl L_QuantizationTables720$non_lazy_ptr, %ecx
        movl %ecx, 4316(%eax)
        movl L_QNOtoQuantTableShift720$non_lazy_ptr, %ecx
        movl %ecx, 4460(%eax)
        ret

... which uses one fewer register.

llvm-svn: 28129
2006-05-05 21:17:49 +00:00
Chris Lattner 25a5283a86 Fold some common code.
llvm-svn: 28124
2006-05-05 06:32:04 +00:00
Chris Lattner 002ee91457 Implement:
// fold (and (sext x), (sext y)) -> (sext (and x, y))
  // fold (or  (sext x), (sext y)) -> (sext (or  x, y))
  // fold (xor (sext x), (sext y)) -> (sext (xor x, y))
  // fold (and (aext x), (aext y)) -> (aext (and x, y))
  // fold (or  (aext x), (aext y)) -> (aext (or  x, y))
  // fold (xor (aext x), (aext y)) -> (aext (xor x, y))

llvm-svn: 28123
2006-05-05 06:31:05 +00:00
Chris Lattner 5ac4293606 Pull and through and/or/xor. This compiles some bitfield code to:
mov EAX, DWORD PTR [ESP + 4]
        mov ECX, DWORD PTR [EAX]
        mov EDX, ECX
        add EDX, EDX
        or EDX, ECX
        and EDX, -2147483648
        and ECX, 2147483647
        or EDX, ECX
        mov DWORD PTR [EAX], EDX
        ret

instead of:

        sub ESP, 4
        mov DWORD PTR [ESP], ESI
        mov EAX, DWORD PTR [ESP + 8]
        mov ECX, DWORD PTR [EAX]
        mov EDX, ECX
        add EDX, EDX
        mov ESI, ECX
        and ESI, -2147483648
        and EDX, -2147483648
        or EDX, ESI
        and ECX, 2147483647
        or EDX, ECX
        mov DWORD PTR [EAX], EDX
        mov ESI, DWORD PTR [ESP]
        add ESP, 4
        ret

llvm-svn: 28122
2006-05-05 06:10:43 +00:00
Chris Lattner 812646aa0c Implement a variety of simplifications for ANY_EXTEND.
llvm-svn: 28121
2006-05-05 05:58:59 +00:00
Chris Lattner 8d6fc20181 Factor some code, add these transformations:
// fold (and (trunc x), (trunc y)) -> (trunc (and x, y))
  // fold (or  (trunc x), (trunc y)) -> (trunc (or  x, y))
  // fold (xor (trunc x), (trunc y)) -> (trunc (xor x, y))

llvm-svn: 28120
2006-05-05 05:51:50 +00:00
Jeff Cohen 78a7f0e05e Fix VC++ compilation error.
llvm-svn: 28117
2006-05-05 01:47:05 +00:00
Chris Lattner 7a3ecf7993 Sink noop copies into the basic block that uses them. This reduces the number
of cross-block live ranges, and allows the bb-at-a-time selector to always
coallesce these away, at isel time.

This reduces the load on the coallescer and register allocator.  For example
on a codec on X86, we went from:

   1643 asm-printer           - Number of machine instrs printed
    419 liveintervals         - Number of loads/stores folded into instructions
   1144 liveintervals         - Number of identity moves eliminated after coalescing
   1022 liveintervals         - Number of interval joins performed
    282 liveintervals         - Number of intervals after coalescing
   1304 liveintervals         - Number of original intervals
     86 regalloc              - Number of times we had to backtrack
1.90232 regalloc              - Ratio of intervals processed over total intervals
     40 spiller               - Number of values reused
    182 spiller               - Number of loads added
    121 spiller               - Number of stores added
    132 spiller               - Number of register spills
      6 twoaddressinstruction - Number of instructions commuted to coalesce
    360 twoaddressinstruction - Number of two-address instructions

to:

   1636 asm-printer           - Number of machine instrs printed
    403 liveintervals         - Number of loads/stores folded into instructions
   1155 liveintervals         - Number of identity moves eliminated after coalescing
   1033 liveintervals         - Number of interval joins performed
    279 liveintervals         - Number of intervals after coalescing
   1312 liveintervals         - Number of original intervals
     76 regalloc              - Number of times we had to backtrack
1.88998 regalloc              - Ratio of intervals processed over total intervals
      1 spiller               - Number of copies elided
     41 spiller               - Number of values reused
    191 spiller               - Number of loads added
    114 spiller               - Number of stores added
    128 spiller               - Number of register spills
      4 twoaddressinstruction - Number of instructions commuted to coalesce
    356 twoaddressinstruction - Number of two-address instructions

On this testcase, this change provides a modest reduction in spill code,
regalloc iterations, and total instructions emitted.  It increases the number
of register coallesces.

llvm-svn: 28115
2006-05-05 01:04:50 +00:00
Evan Cheng 9add880566 Initial support for register pressure aware scheduling. The register reduction
scheduler can go into a "vertical mode" (i.e. traversing up the two-address
chain, etc.) when the register pressure is low.
This does seem to reduce the number of spills in the cases I've looked at. But
with x86, there is no guarantee that the performance of the code improves.
It can be turned on with -sched-vertically option.

llvm-svn: 28108
2006-05-04 19:16:39 +00:00
Chris Lattner 469647bf38 Remove and simplify some more machineinstr/machineoperand stuff.
llvm-svn: 28105
2006-05-04 18:16:01 +00:00
Chris Lattner 10b71c0d08 Rename MO_VirtualRegister -> MO_Register. Clean up immediate handling.
llvm-svn: 28104
2006-05-04 18:05:43 +00:00
Chris Lattner 940cc978ef Remove a bunch more SparcV9 specific stuff
llvm-svn: 28093
2006-05-04 01:15:02 +00:00
Nate Begeman df4883971e Finish up the initial jump table implementation by allowing jump tables to
not be 100% dense.  Increase the minimum threshold for the number of cases
in a switch statement from 4 to 6 in order to create a jump table.

llvm-svn: 28079
2006-05-03 03:48:02 +00:00
Evan Cheng ffef8b9412 Bottom up register pressure reduction work: cleaned up some hacks and enhanced
the heuristic to further reduce spills for several test cases. (Note, it may
not necessarily translate to runtime win!)

llvm-svn: 28076
2006-05-03 02:10:45 +00:00
Owen Anderson 20a631fde7 Refactor TargetMachine, pushing handling of TargetData into the target-specific subclasses. This has one caller-visible change: getTargetData() now returns a pointer instead of a reference.
This fixes PR 759.

llvm-svn: 28074
2006-05-03 01:29:57 +00:00
Evan Cheng 0d084fb9ca Dis-favor stores more
llvm-svn: 28035
2006-05-01 09:20:44 +00:00
Evan Cheng 24e795496d Bottom up register-pressure reduction scheduler now pushes store operations
up the schedule. This helps code that looks like this:

loads ...
computations (first set) ...
stores (first set) ...
loads
computations (second set) ...
stores (second set) ...

Without this change, the stores and computations are more likely to
interleave:

loads ...
loads ...
computations (first set) ...
computations (second set) ...
computations (first set) ...
stores (first set) ...
computations (second set) ...
stores (second set) ...

This can increase the number of spills if we are unlucky.

llvm-svn: 28033
2006-05-01 09:14:40 +00:00
Evan Cheng 10ff7b27ce Didn't mean ScheduleDAGList.cpp to make the last checkin.
llvm-svn: 28030
2006-05-01 08:56:34 +00:00
Evan Cheng a656242690 Remove temp. option -spiller-check-liveout, it didn't cause any failure nor performance regressions.
llvm-svn: 28029
2006-05-01 08:54:57 +00:00
Chris Lattner 2b48a94413 Remove a bogus transformation. This fixes SingleSource/UnitTests/2006-01-23-InitializedBitField.c
with some changes I have to the new CFE.

llvm-svn: 28022
2006-04-28 23:33:20 +00:00
Evan Cheng c5e8ce8b8c Remove the temporary option: -no-isel-fold-inflight
llvm-svn: 28012
2006-04-28 18:54:11 +00:00
Evan Cheng d43c5c6046 TargetLowering::LowerArguments should return a VBIT_CONVERT of
FORMAL_ARGUMENTS SDOperand in the return result vector.

llvm-svn: 28009
2006-04-28 05:25:15 +00:00
Evan Cheng 51ab4498e7 Added a temporary option -no-isel-fold-inflight to control whether an "inflight"
node can be folded.

llvm-svn: 28003
2006-04-28 02:09:19 +00:00
Evan Cheng 3784f3c57c Insert a VBIT_CONVERT between a FORMAL_ARGUMENT node and its vector uses
(VAND, VADD, etc.). Legalizer will assert otherwise.

llvm-svn: 27991
2006-04-27 08:29:42 +00:00
Chris Lattner 393d96a56c Fix Regression/CodeGen/Generic/2006-04-26-SetCCAnd.ll and
PR748.

llvm-svn: 27987
2006-04-27 05:01:07 +00:00
Evan Cheng 9618df1190 Don't forget return void.
llvm-svn: 27974
2006-04-25 23:03:35 +00:00
Nate Begeman 866b4b4d45 Fix the updating of the machine CFG when a PHI node was in a successor of
the jump table's range check block.  This re-enables 100% dense jump tables
by default on PPC & x86

llvm-svn: 27952
2006-04-23 06:26:20 +00:00
Nate Begeman ecb1dafd3d Turn off jump tables for a bit; there are still some issues to work out with
updating the machine CFG.

llvm-svn: 27949
2006-04-22 23:51:56 +00:00
Nate Begeman 4ca2ea5b43 JumpTable support! What this represents is working asm and jit support for
x86 and ppc for 100% dense switch statements when relocations are non-PIC.
This support will be extended and enhanced in the coming days to support
PIC, and less dense forms of jump tables.

llvm-svn: 27947
2006-04-22 18:53:45 +00:00
Chris Lattner b21d3bfd1f The BFS scheduler is apparently nondeterministic (causes many llvmgcc bootstrap
miscompares).  Switch RISC targets to use the list-td scheduler, which isn't.

llvm-svn: 27933
2006-04-21 17:16:16 +00:00
Chris Lattner 662e940f73 Fix a couple more memory issues
llvm-svn: 27930
2006-04-21 15:32:26 +00:00
Chris Lattner cc47ab3305 Fix a really subtle and obnoxious memory bug that caused issues with an
llvm-gcc4 bootstrap.  Whenever a node is deleted by the dag combiner, it
*must* be returned by the visit function, or the dag combiner will not
know that the node has been processed (and will, e.g., send it to the
target dag combine xforms).

llvm-svn: 27922
2006-04-20 23:55:59 +00:00
Evan Cheng a320abc494 Turn a VAND into a VECTOR_SHUFFLE if applicable.
DAG combiner can turn a VAND V, <-1, 0, -1, -1>, i.e. vector clear elements,
into a vector shuffle with a zero vector. It only does so when TLI tells it
the xform is profitable.

llvm-svn: 27874
2006-04-20 08:56:16 +00:00
Chris Lattner bc1b262725 Implement folding of a bunch of binops with undef
llvm-svn: 27863
2006-04-20 05:39:12 +00:00
Chris Lattner 73eb58e1a2 Simplify some code
llvm-svn: 27846
2006-04-19 23:17:50 +00:00
Chris Lattner 916ae0775e Fix handling of calls in functions that use vectors. This fixes a crash on
the code in GCC PR26546.

llvm-svn: 27780
2006-04-17 22:10:08 +00:00
Chris Lattner 326870b40b Codegen insertelement with constant insertion points as scalar_to_vector
and a shuffle.  For this:

void %test2(<4 x float>* %F, float %f) {
        %tmp = load <4 x float>* %F             ; <<4 x float>> [#uses=2]
        %tmp3 = add <4 x float> %tmp, %tmp              ; <<4 x float>> [#uses=1]
        %tmp2 = insertelement <4 x float> %tmp3, float %f, uint 2               ; <<4 x float>> [#uses=2]
        %tmp6 = add <4 x float> %tmp2, %tmp2            ; <<4 x float>> [#uses=1]
        store <4 x float> %tmp6, <4 x float>* %F
        ret void
}

we now get this on X86 (which will get better):

_test2:
        movl 4(%esp), %eax
        movaps (%eax), %xmm0
        addps %xmm0, %xmm0
        movaps %xmm0, %xmm1
        shufps $3, %xmm1, %xmm1
        movaps %xmm0, %xmm2
        shufps $1, %xmm2, %xmm2
        unpcklps %xmm1, %xmm2
        movss 8(%esp), %xmm1
        unpcklps %xmm1, %xmm0
        unpcklps %xmm2, %xmm0
        addps %xmm0, %xmm0
        movaps %xmm0, (%eax)
        ret

instead of:

_test2:
        subl $28, %esp
        movl 32(%esp), %eax
        movaps (%eax), %xmm0
        addps %xmm0, %xmm0
        movaps %xmm0, (%esp)
        movss 36(%esp), %xmm0
        movss %xmm0, 8(%esp)
        movaps (%esp), %xmm0
        addps %xmm0, %xmm0
        movaps %xmm0, (%eax)
        addl $28, %esp
        ret

llvm-svn: 27765
2006-04-17 19:21:01 +00:00
Chris Lattner 91226e5799 Add support for promoting stores from one legal type to another, allowing us
to write one pattern for vector stores instead of 4.

llvm-svn: 27730
2006-04-16 01:36:45 +00:00
Chris Lattner 7e7ad593cc Make these predicates return true for bit_convert(buildvector)'s as well as
buildvectors.

llvm-svn: 27723
2006-04-15 23:38:00 +00:00
Chris Lattner 086e986e94 Make this assertion better
llvm-svn: 27695
2006-04-14 06:08:35 +00:00
Evan Cheng 119266ea92 Promote vector AND, OR, and XOR
llvm-svn: 27632
2006-04-12 21:20:24 +00:00
Evan Cheng be8a8933e6 Vector type promotion for ISD::LOAD and ISD::SELECT
llvm-svn: 27606
2006-04-12 16:33:18 +00:00
Chris Lattner d3b504ae10 Implement support for the formal_arguments node. To get this, targets should custom legalize it and remove their XXXTargetLowering::LowerArguments overload.
llvm-svn: 27604
2006-04-12 16:20:43 +00:00
Chris Lattner 417b96b6dd Don't memoize vloads in the load map! Don't memoize them anywhere here, let
getNode do it.  This fixes CodeGen/Generic/2006-04-11-vecload.ll

llvm-svn: 27602
2006-04-12 03:25:41 +00:00
Evan Cheng 7256b0ae05 Only get Tmp2 for cases where the number of operands is > 1. Fixed return void.
llvm-svn: 27586
2006-04-11 06:33:39 +00:00
Chris Lattner 6cf3bbbe17 add some todos
llvm-svn: 27580
2006-04-11 02:00:08 +00:00
Chris Lattner 2eb22eef7d Add basic support for legalizing returns of vectors
llvm-svn: 27578
2006-04-11 01:31:51 +00:00
Evan Cheng cb73b8d419 Missing break
llvm-svn: 27559
2006-04-10 18:54:36 +00:00
Chris Lattner 02274a5265 Add code generator support for VSELECT
llvm-svn: 27542
2006-04-08 22:22:57 +00:00
Chris Lattner e1401e3610 Canonicalize vvector_shuffle(x,x) -> vvector_shuffle(x,undef) to enable patterns
to match again :)

llvm-svn: 27533
2006-04-08 05:34:25 +00:00
Chris Lattner 098c01e94e Codegen shufflevector as VVECTOR_SHUFFLE
llvm-svn: 27529
2006-04-08 04:15:24 +00:00
Chris Lattner 101ea66813 add a sanity check: LegalizeOp should return a value that is the same type
as its input.

llvm-svn: 27528
2006-04-08 04:13:17 +00:00
Evan Cheng 78e3d565af INSERT_VECTOR_ELT lowering bug:
store vector to $esp
  store element to $esp + sizeof(VT) * index
  load  vector from $esp
The bug is that VT is the type of the vector element, not the type of the vector!

llvm-svn: 27517
2006-04-08 01:46:37 +00:00
Chris Lattner aa3185f12e Stub out shufflevector
llvm-svn: 27514
2006-04-08 01:19:25 +00:00
Evan Cheng 613996c55e 1. If both vector operands of a vector_shuffle are undef, turn it into an undef.
2. A shuffle mask element can also be an undef.

llvm-svn: 27472
2006-04-06 23:20:43 +00:00
Chris Lattner 4a2413a590 Make a vector live across blocks have the correct Vec type. This fixes
CodeGen/X86/2006-04-04-CrossBlockCrash.ll

llvm-svn: 27436
2006-04-05 06:54:42 +00:00
Evan Cheng 9fa8959dce Expand a VECTOR_SHUFFLE to a BUILD_VECTOR if the target asks for it to be expanded
or custom lowering fails.

llvm-svn: 27432
2006-04-05 06:07:11 +00:00
Chris Lattner 4ea52cac01 Do not create ZEXTLOAD's unless we are before legalize or the operation is
legal.

llvm-svn: 27402
2006-04-04 17:39:18 +00:00
Chris Lattner 6be79823e7 * Add support for SCALAR_TO_VECTOR operations where the input needs to be
promoted/expanded (e.g. SCALAR_TO_VECTOR from i8/i16 on PPC).
* Add support for targets to request that VECTOR_SHUFFLE nodes be promoted
  to a canonical type, for example, we only want v16i8 shuffles on PPC.
* Move isShuffleLegal out of TLI into Legalize.
* Teach isShuffleLegal to allow shuffles that need to be promoted.

llvm-svn: 27399
2006-04-04 17:23:26 +00:00
Chris Lattner a9e77d14c7 Constant fold bitconvert(undef)
llvm-svn: 27391
2006-04-04 01:02:22 +00:00
Chris Lattner e1e3adf802 Add a missing check, this fixes UnitTests/Vector/sumarray.c
llvm-svn: 27375
2006-04-03 17:29:28 +00:00
Chris Lattner 04c00fc844 Add a missing check, which broke a bunch of vector tests.
llvm-svn: 27374
2006-04-03 17:21:50 +00:00
Andrew Lenharth 94f012f606 back this out
llvm-svn: 27367
2006-04-03 03:16:50 +00:00
Andrew Lenharth 015eaf5f33 This should be a win on every arch
llvm-svn: 27364
2006-04-02 21:42:45 +00:00
Chris Lattner 4993249a04 Add a little dag combine to compile this:
int %AreSecondAndThirdElementsBothNegative(<4 x float>* %in) {
entry:
        %tmp1 = load <4 x float>* %in           ; <<4 x float>> [#uses=1]
        %tmp = tail call int %llvm.ppc.altivec.vcmpgefp.p( int 1, <4 x float> < float 0x7FF8000000000000, float 0.000000e+00, float 0.000000e+00, float 0x7FF8000000000000 >, <4 x float> %tmp1 )           ; <int> [#uses=1]
        %tmp = seteq int %tmp, 0                ; <bool> [#uses=1]
        %tmp3 = cast bool %tmp to int           ; <int> [#uses=1]
        ret int %tmp3
}

into this:

_AreSecondAndThirdElementsBothNegative:
        mfspr r2, 256
        oris r4, r2, 49152
        mtspr 256, r4
        li r4, lo16(LCPI1_0)
        lis r5, ha16(LCPI1_0)
        lvx v0, 0, r3
        lvx v1, r5, r4
        vcmpgefp. v0, v1, v0
        mfcr r3, 2
        rlwinm r3, r3, 27, 31, 31
        mtspr 256, r2
        blr

instead of this:

_AreSecondAndThirdElementsBothNegative:
        mfspr r2, 256
        oris r4, r2, 49152
        mtspr 256, r4
        li r4, lo16(LCPI1_0)
        lis r5, ha16(LCPI1_0)
        lvx v0, 0, r3
        lvx v1, r5, r4
        vcmpgefp. v0, v1, v0
        mfcr r3, 2
        rlwinm r3, r3, 27, 31, 31
        xori r3, r3, 1
        cntlzw r3, r3
        srwi r3, r3, 5
        mtspr 256, r2
        blr

llvm-svn: 27356
2006-04-02 06:11:11 +00:00
Chris Lattner 42a5fca47e Implement promotion for EXTRACT_VECTOR_ELT, allowing v16i8 multiplies to work with PowerPC.
llvm-svn: 27349
2006-04-02 05:06:04 +00:00
Chris Lattner 87f080949b Implement the Expand action for binary vector operations to break the binop
into elements and operate on each piece.  This allows generic vector integer
multiplies to work on PPC, though the generated code is horrible.

llvm-svn: 27347
2006-04-02 03:57:31 +00:00
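
A sketch of what the Expand action amounts to for a vector multiply (illustrative code, not the legalizer's): the vector operands are split into scalar elements, the scalar operation is applied per element, and the results are reassembled.

#include <cstddef>

void expandVectorMul(const int *A, const int *B, int *Result, std::size_t N) {
  for (std::size_t i = 0; i != N; ++i)
    Result[i] = A[i] * B[i];   // one scalar multiply per element
}
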
Chris Lattner a9c59156be Intrinsics that just load from memory can be treated like loads: they don't
have to serialize against each other.  This allows us to schedule lvx's
across each other, for example.

llvm-svn: 27346
2006-04-02 03:41:14 +00:00
Chris Lattner 0442a18758 Constant fold all of the vector binops. This allows us to compile this:
"vector unsigned char mergeLowHigh = (vector unsigned char)
( 8, 9, 10, 11, 16, 17, 18, 19, 12, 13, 14, 15, 20, 21, 22, 23 );
vector unsigned char mergeHighLow = vec_xor( mergeLowHigh, vec_splat_u8(8));"

aka:

void %test2(<16 x sbyte>* %P) {
  store <16 x sbyte> cast (<4 x int> xor (<4 x int> cast (<16 x ubyte> < ubyte 8, ubyte 9, ubyte 10, ubyte 11, ubyte 16, ubyte 17, ubyte 18, ubyte 19, ubyte 12, ubyte 13, ubyte 14, ubyte 15, ubyte 20, ubyte 21, ubyte 22, ubyte 23 > to <4 x int>), <4 x int> cast (<16 x sbyte> < sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8 > to <4 x int>)) to <16 x sbyte>), <16 x sbyte> * %P
  ret void
}

into this:

_test2:
        mfspr r2, 256
        oris r4, r2, 32768
        mtspr 256, r4
        li r4, lo16(LCPI2_0)
        lis r5, ha16(LCPI2_0)
        lvx v0, r5, r4
        stvx v0, 0, r3
        mtspr 256, r2
        blr

instead of this:

_test2:
        mfspr r2, 256
        oris r4, r2, 49152
        mtspr 256, r4
        li r4, lo16(LCPI2_0)
        lis r5, ha16(LCPI2_0)
        vspltisb v0, 8
        lvx v1, r5, r4
        vxor v0, v1, v0
        stvx v0, 0, r3
        mtspr 256, r2
        blr

... which occurs here:
http://developer.apple.com/hardware/ve/calcspeed.html

llvm-svn: 27343
2006-04-02 03:25:57 +00:00
Chris Lattner ef598059f2 Add a new -view-legalize-dags command line option
llvm-svn: 27342
2006-04-02 03:07:27 +00:00
Chris Lattner e4e64b6b85 Implement constant folding of bit_convert of arbitrary constant vbuild_vector nodes.
llvm-svn: 27341
2006-04-02 02:53:43 +00:00
Chris Lattner 1c22728787 These entries already exist
llvm-svn: 27340
2006-04-02 02:51:27 +00:00
Chris Lattner 1985e1cbb8 Add some missing node names
llvm-svn: 27339
2006-04-02 02:41:18 +00:00
Chris Lattner bec582f4cd Prefer larger register classes over smaller ones when a register occurs in
multiple register classes.  This fixes PowerPC/2006-04-01-FloatDoubleExtend.ll

llvm-svn: 27334
2006-04-02 00:24:45 +00:00
Chris Lattner 39dcf1a9e2 Delete identity shuffles, implementing CodeGen/Generic/vector-identity-shuffle.ll
llvm-svn: 27317
2006-03-31 22:16:43 +00:00
Chris Lattner d9e4daabd2 Do not endian swap split vector loads. This fixes UnitTests/Vector/sumarray-dbl on PPC.
Now all UnitTests/Vector/* tests pass on PPC.

llvm-svn: 27299
2006-03-31 18:22:37 +00:00
Chris Lattner 8d90f526d7 Do not endian swap the operands to a store if the operands came from a vector.
This fixes UnitTests/Vector/simple.c with altivec.

llvm-svn: 27298
2006-03-31 18:20:46 +00:00
Chris Lattner 7e30af3887 Remove dead *extloads. This allows us to codegen vector.ll:test_extract_elt
to:

test_extract_elt:
        alloc r3 = ar.pfs,0,1,0,0
        adds r8 = 12, r32
        ;;
        ldfs f8 = [r8]
        mov ar.pfs = r3
        br.ret.sptk.many rp

instead of:

test_extract_elt:
        alloc r3 = ar.pfs,0,1,0,0
        adds r8 = 28, r32
        adds r9 = 24, r32
        adds r10 = 20, r32
        adds r11 = 16, r32
        ;;
        ldfs f6 = [r8]
        ;;
        ldfs f6 = [r9]
        adds r8 = 12, r32
        adds r9 = 8, r32
        adds r14 = 4, r32
        ;;
        ldfs f6 = [r10]
        ;;
        ldfs f6 = [r11]
        ldfs f8 = [r8]
        ;;
        ldfs f6 = [r9]
        ;;
        ldfs f6 = [r14]
        ;;
        ldfs f6 = [r32]
        mov ar.pfs = r3
        br.ret.sptk.many rp

llvm-svn: 27297
2006-03-31 18:10:41 +00:00
Chris Lattner 2d8551c85b Delete dead loads in the dag. This allows us to compile
vector.ll:test_extract_elt2 into:

_test_extract_elt2:
        lfd f1, 32(r3)
        blr

instead of:

_test_extract_elt2:
        lfd f0, 56(r3)
        lfd f0, 48(r3)
        lfd f0, 40(r3)
        lfd f1, 32(r3)
        lfd f0, 24(r3)
        lfd f0, 16(r3)
        lfd f0, 8(r3)
        lfd f0, 0(r3)
        blr

llvm-svn: 27296
2006-03-31 18:06:18 +00:00
Chris Lattner 6f42325dca Implement PromoteOp for VEXTRACT_VECTOR_ELT. This fixes
Generic/vector.ll:test_extract_elt on non-SSE X86 systems.

llvm-svn: 27294
2006-03-31 17:55:51 +00:00
Chris Lattner 8e1fcab2bc Scalarized vector stores need not be legal, e.g. if the vector element type
needs to be promoted or expanded.  Relegalize the scalar store once created.
This fixes CodeGen/Generic/vector.ll:test1 on non-SSE x86 targets.

llvm-svn: 27293
2006-03-31 17:37:22 +00:00
Chris Lattner ba38035e21 Make sure to pass enough values to phi nodes when we are dealing with
decimated vectors.  This fixes UnitTests/Vector/sumarray-dbl.c

llvm-svn: 27280
2006-03-31 02:12:18 +00:00
Chris Lattner 5fe1f54c17 Significantly improve handling of vectors that are live across basic blocks,
handling cases where the vector elements need promotion, expansion, and when
the vector type itself needs to be decimated.

llvm-svn: 27278
2006-03-31 02:06:56 +00:00
Evan Cheng 168e45b0b3 Expand INSERT_VECTOR_ELT to store vec, sp; store elt, sp+k; vec = load sp;
llvm-svn: 27274
2006-03-31 01:27:51 +00:00
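
A rough C-level picture of the expansion described in the commit above: the vector is spilled to a stack slot, the element is stored at the right offset, and the whole vector is reloaded. This is only an illustrative sketch (the type and function names are made up), not the legalizer code itself.

#include <string.h>

typedef struct { float e[4]; } v4f32;   /* stand-in for a legal vector type */

/* insert 'elt' at index 'k' the way the expansion does it */
static v4f32 insert_elt_via_stack(v4f32 vec, float elt, unsigned k) {
    float slot[4];
    memcpy(slot, &vec, sizeof slot);     /* store vec, sp         */
    slot[k] = elt;                       /* store elt, sp + k*4   */
    memcpy(&vec, slot, sizeof slot);     /* vec = load sp         */
    return vec;
}
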
Chris Lattner 67271869a8 Bug fixes: handle constantexpr insert/extract element operations
Handle constantpacked vectors with constantexpr elements.

This fixes CodeGen/Generic/vector-constantexpr.ll

llvm-svn: 27241
2006-03-29 00:11:43 +00:00
Chris Lattner 20e619fba3 When building a VVECTOR_SHUFFLE node from extract_element operations, make
sure to build it as SHUFFLE(X, undef, mask), not SHUFFLE(X, X, mask).

The later is not canonical form, and prevents the PPC splat pattern from
matching.  For a particular splat, we go from generating this:

	li r10, lo16(LCPI1_0)
	lis r11, ha16(LCPI1_0)
	lvx v3, r11, r10
	vperm v3, v2, v2, v3

to generating:

	vspltw v3, v2, 3

llvm-svn: 27236
2006-03-28 22:19:47 +00:00
Chris Lattner a46dfe80c8 Canonicalize VECTOR_SHUFFLE(X, X, Y) -> VECTOR_SHUFFLE(X,undef,Y')
llvm-svn: 27235
2006-03-28 22:11:53 +00:00
Chris Lattner c9992548fc Turn a series of extract_element's feeding a build_vector into a
vector_shuffle node.  For this:

void test(__m128 *res, __m128 *A, __m128 *B) {
  *res = _mm_unpacklo_ps(*A, *B);
}

we now produce this code:

_test:
        movl 8(%esp), %eax
        movaps (%eax), %xmm0
        movl 12(%esp), %eax
        unpcklps (%eax), %xmm0
        movl 4(%esp), %eax
        movaps %xmm0, (%eax)
        ret

instead of this:

_test:
        subl $76, %esp
        movl 88(%esp), %eax
        movaps (%eax), %xmm0
        movaps %xmm0, (%esp)
        movaps %xmm0, 32(%esp)
        movss 4(%esp), %xmm0
        movss 32(%esp), %xmm1
        unpcklps %xmm0, %xmm1
        movl 84(%esp), %eax
        movaps (%eax), %xmm0
        movaps %xmm0, 16(%esp)
        movaps %xmm0, 48(%esp)
        movss 20(%esp), %xmm0
        movss 48(%esp), %xmm2
        unpcklps %xmm0, %xmm2
        unpcklps %xmm1, %xmm2
        movl 80(%esp), %eax
        movaps %xmm2, (%eax)
        addl $76, %esp
        ret

GCC produces this (with -fomit-frame-pointer):

_test:
        subl    $12, %esp
        movl    20(%esp), %eax
        movaps  (%eax), %xmm0
        movl    24(%esp), %eax
        unpcklps        (%eax), %xmm0
        movl    16(%esp), %eax
        movaps  %xmm0, (%eax)
        addl    $12, %esp
        ret

llvm-svn: 27233
2006-03-28 20:28:38 +00:00
Chris Lattner f6f94d3bce Teach Legalize how to pack VVECTOR_SHUFFLE nodes into VECTOR_SHUFFLE nodes.
llvm-svn: 27232
2006-03-28 20:24:43 +00:00
Chris Lattner 8d57da2ffc new node
llvm-svn: 27231
2006-03-28 19:54:42 +00:00
Chris Lattner b7163598f9 Don't crash on X^X if X is a vector. Instead, produce a vector of zeros.
llvm-svn: 27229
2006-03-28 19:11:05 +00:00
Chris Lattner ffec47ebff Add an assertion
llvm-svn: 27228
2006-03-28 19:04:49 +00:00
Jim Laskey 67a636c587 More bulletproofing of llvm.dbg.declare.
llvm-svn: 27224
2006-03-28 13:45:20 +00:00
Chris Lattner e55d171ccd Tblgen doesn't like multiple SDNode<> definitions that map to the same enum value. Split them into separate enums.
llvm-svn: 27201
2006-03-28 00:40:33 +00:00
Jim Laskey d387cc5cde Reactivate llvm.dbg.declare.
llvm-svn: 27192
2006-03-27 23:31:10 +00:00
Chris Lattner 5bb1d90afd Disable dbg_declare, it currently breaks the CFE build
llvm-svn: 27182
2006-03-27 21:36:03 +00:00
Chris Lattner d5f94c9574 Fix legalization of intrinsics with chain and result values
llvm-svn: 27181
2006-03-27 20:28:29 +00:00
Chris Lattner 0e84f1e532 Unbreak the build on non-apple compilers :-(
llvm-svn: 27173
2006-03-27 16:10:59 +00:00
Evan Cheng d09b05b0bc Try again
llvm-svn: 27171
2006-03-27 08:10:26 +00:00
Evan Cheng 64efb35c32 Incorrect check for FP all-ones
llvm-svn: 27169
2006-03-27 07:26:17 +00:00
Evan Cheng c70e33cd6e Change isBuildVectorAllOnesInteger to isBuildVectorAllOnes. Also check for
floating point cases.

llvm-svn: 27165
2006-03-27 06:58:47 +00:00
Chris Lattner 52fcad3a37 Instead of printing "INTRINSIC" on intrinsic node, print the intrinsic name.
llvm-svn: 27164
2006-03-27 06:45:25 +00:00
Nate Begeman ed728c1291 SelectionDAGISel can now natively handle Switch instructions, in the same
manner that the LowerSwitch LLVM to LLVM pass does: emitting a binary
search tree of basic blocks.  The new approach has several advantages:
it is faster, it generates significantly smaller code in many cases, and
it paves the way for implementing dense switch tables as a jump table by
handling switches directly in the instruction selector.

This functionality is currently only enabled on x86, but should be safe for
every target.  In anticipation of making it the default, the cfg is now
properly updated in the x86, ppc, and sparc select lowering code.

llvm-svn: 27156
2006-03-27 01:32:24 +00:00
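
The binary-search lowering described in the commit above can be pictured in plain C: instead of a linear chain of compares, the sorted case values are probed by range, which is what emitting a binary search tree of basic blocks amounts to. The function below is a hypothetical sketch of the control-flow shape, not the SelectionDAGISel code.

/* switch (x) { case 10: ...; case 20: ...; case 40: ...; default: ... }
   lowered as a binary search over the sorted case values */
int switch_as_binary_search(int x) {
    if (x < 20) {                  /* left subtree  */
        if (x == 10) return 1;     /* case 10       */
    } else {                       /* right subtree */
        if (x == 20) return 2;     /* case 20       */
        if (x == 40) return 3;     /* case 40       */
    }
    return 0;                      /* default       */
}
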
Jim Laskey 7092888bcc Bulletproof against undefined args produced by upgrading old-style debug info.
llvm-svn: 27155
2006-03-26 22:46:27 +00:00
Evan Cheng a67899195f Add ISD::isBuildVectorAllZeros predicate
llvm-svn: 27147
2006-03-26 09:50:58 +00:00
Chris Lattner 30ee72586d Allow targets to custom lower their own intrinsics if desired.
llvm-svn: 27146
2006-03-26 09:12:51 +00:00
Chris Lattner f6e3b957b8 Fix a bug in ISD::isBuildVectorAllOnesInteger that caused it to always return
false

llvm-svn: 27131
2006-03-25 22:59:28 +00:00
Chris Lattner c2d2811a07 Implement the ISD::isBuildVectorAllOnesInteger predicate
llvm-svn: 27130
2006-03-25 22:57:01 +00:00
Chris Lattner dc1eab5886 Don't call SimplifyDemandedBits on vectors
llvm-svn: 27128
2006-03-25 22:19:00 +00:00
Chris Lattner 313229c74b fix inverted conditional
llvm-svn: 27089
2006-03-24 22:49:42 +00:00
Evan Cheng 68d9bf26c8 Only do the vector shuffle for {x,x,y,y} cases when SCALAR_TO_VECTOR is free.
llvm-svn: 27071
2006-03-24 18:45:20 +00:00
Jim Laskey 53f1ecc560 Rename for truth in advertising.
llvm-svn: 27063
2006-03-24 09:50:27 +00:00
Chris Lattner 77e271cb4e prefer to generate constant pool loads over splats. This prevents us from
using a splat for {1.0,1.0,1.0,1.0}

llvm-svn: 27055
2006-03-24 07:29:17 +00:00
Chris Lattner 87b1dddb1c fix spello
llvm-svn: 27053
2006-03-24 07:15:07 +00:00
Chris Lattner a4f6805a86 legalize vbit_convert nodes whose result is a legal type.
Legalize intrinsic nodes.

llvm-svn: 27036
2006-03-24 02:26:29 +00:00
Chris Lattner d96b09a7b9 Lower target intrinsics into an INTRINSIC node
llvm-svn: 27035
2006-03-24 02:22:33 +00:00
Chris Lattner 6b05290922 fix some bogus assertions: noop bitconverts are legal
llvm-svn: 27032
2006-03-24 02:20:47 +00:00
Evan Cheng 1d2e995fc1 Lower BUILD_VECTOR to VECTOR_SHUFFLE if there are two distinct nodes (and if
the target can handle it). Issue two SCALAR_TO_VECTOR ops followed by a
VECTOR_SHUFFLE to select from the two vectors.

llvm-svn: 27023
2006-03-24 01:17:21 +00:00
Chris Lattner ebac9a4adf Identify the INTRINSIC node
llvm-svn: 27020
2006-03-24 01:04:30 +00:00
Chris Lattner d7c4e7d255 add support for splitting casts. This implements
CodeGen/Generic/vector.ll:test_cast_2.

llvm-svn: 26999
2006-03-23 21:16:34 +00:00
Jim Laskey a8bdac875d Handle new forms of llvm.dbg intrinsics.
llvm-svn: 26988
2006-03-23 18:06:46 +00:00
Chris Lattner 9ea1b3f9fd simplify some code
llvm-svn: 26972
2006-03-23 05:29:04 +00:00
Chris Lattner b893d04a67 Fix a typo
llvm-svn: 26965
2006-03-22 22:20:49 +00:00
Chris Lattner 2f4119a608 Implement simple support for vector casting. This can currently only handle
casts between legal vector types.

llvm-svn: 26961
2006-03-22 20:09:35 +00:00
Chris Lattner 8fa445a89d Endianness does not affect the order of vector fields. This fixes
SingleSource/UnitTests/Vector/build.c

llvm-svn: 26936
2006-03-22 01:46:54 +00:00
Chris Lattner 5be4352124 Enclose some variables in a scope to avoid error with some gcc versions
llvm-svn: 26934
2006-03-22 00:12:37 +00:00
Chris Lattner 340a6b5c26 add expand support for extractelement
llvm-svn: 26931
2006-03-21 21:02:03 +00:00
Chris Lattner 7c0cd8cafc add some trivial support for extractelement.
llvm-svn: 26928
2006-03-21 20:44:12 +00:00
Chris Lattner 672a42d731 Add a hacky workaround for crashes due to vectors live across blocks.
Note that this code won't work for vectors that aren't legal on the
target.  Improvements coming.

llvm-svn: 26925
2006-03-21 19:20:37 +00:00
Chris Lattner 21e68c8001 If a target supports splatting with SHUFFLE_VECTOR, lower to it from BUILD_VECTOR(x,x,x,x)
llvm-svn: 26885
2006-03-20 01:52:29 +00:00
Chris Lattner 6b20104410 TargetData doesn't know the alignment of vectors :(
llvm-svn: 26884
2006-03-20 01:51:46 +00:00
Chris Lattner 00f0589bc0 Add very basic support for VECTOR_SHUFFLE
llvm-svn: 26880
2006-03-19 23:56:04 +00:00
Chris Lattner 79fb91cc69 Allow SCALAR_TO_VECTOR to be custom lowered.
llvm-svn: 26867
2006-03-19 06:47:21 +00:00
Chris Lattner 9cdc5a0ce7 Add SCALAR_TO_VECTOR support
llvm-svn: 26866
2006-03-19 06:31:19 +00:00
Chris Lattner eb5b2e705c Don't bother storing undef elements of BUILD_VECTOR's
llvm-svn: 26858
2006-03-19 05:46:04 +00:00
Chris Lattner 5d3ff12c8f Implement expand of BUILD_VECTOR containing variable elements.
This implements CodeGen/Generic/vector.ll:test_variable_buildvector

llvm-svn: 26852
2006-03-19 04:18:56 +00:00
Chris Lattner 5336a59e4b fold insertelement(buildvector) -> buildvector if the inserted element # is
a constant.  This implements test_constant_insert in CodeGen/Generic/vector.ll

llvm-svn: 26851
2006-03-19 01:27:56 +00:00
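
In C terms, the fold above absorbs a constant-index insert into the element list of the build itself. A small illustrative sketch (hypothetical names, not the dag combiner source):

typedef struct { int e[4]; } v4i32;

/* before the fold: build the vector, then insert at constant index 2 */
static v4i32 before_fold(int a, int b, int c, int d, int x) {
    v4i32 v = { { a, b, c, d } };    /* buildvector           */
    v.e[2] = x;                      /* insertelement, idx 2  */
    return v;
}

/* after the fold: the insert is folded into the buildvector */
static v4i32 after_fold(int a, int b, int c, int d, int x) {
    v4i32 v = { { a, b, x, d } };    /* element 2 replaced directly */
    return v;
}
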
Chris Lattner 29b2301460 implement basic support for INSERT_VECTOR_ELT.
llvm-svn: 26849
2006-03-19 01:17:20 +00:00
Chris Lattner f4e1a53647 Rename ConstantVec -> BUILD_VECTOR and VConstant -> VBUILD_VECTOR. Allow *BUILD_VECTOR to take variable inputs.
llvm-svn: 26847
2006-03-19 00:52:58 +00:00
Chris Lattner c16b05e67d implement vector.ll:test_undef
llvm-svn: 26845
2006-03-19 00:20:20 +00:00
Chris Lattner 93640543a9 Fix the remaining bugs in the vector expansion rework I committed yesterday.
This fixes CodeGen/Generic/vector.ll

llvm-svn: 26843
2006-03-19 00:07:49 +00:00
Chris Lattner 32206f54c6 Change the structure of lowering vector stuff. Note: This breaks some
things.

llvm-svn: 26840
2006-03-18 01:44:44 +00:00
Chris Lattner 98931bc381 add a couple enum values
llvm-svn: 26830
2006-03-17 19:53:59 +00:00
Nate Begeman bb01d4f272 Remove BRTWOWAY*
Make the PPC backend not dependent on BRTWOWAY_CC and make the branch
selector smarter about the code it generates, fixing a case in the
readme.

llvm-svn: 26814
2006-03-17 01:40:33 +00:00
Chris Lattner 7ececaad83 Fix a problem fully scalarizing values.
llvm-svn: 26811
2006-03-16 23:05:19 +00:00
Chris Lattner 8471b15706 Add support for CopyFromReg from vector values. Note: this doesn't support
illegal vector types yet!

llvm-svn: 26799
2006-03-16 19:57:50 +00:00
Chris Lattner 49409cb925 Teach CreateRegForValue how to handle vector types.
llvm-svn: 26798
2006-03-16 19:51:18 +00:00
Chris Lattner 4024c00ce7 add support for vector->vector casts
llvm-svn: 26788
2006-03-15 22:19:46 +00:00
Chris Lattner cad70c3e46 Add a note, this code should be moved to the dag combiner.
llvm-svn: 26787
2006-03-15 22:19:18 +00:00
Chris Lattner 68ac09d5cb make sure dead token factor nodes are removed by the dag combiner.
llvm-svn: 26731
2006-03-13 18:37:30 +00:00
Jim Laskey acb6e34277 Handle the removal of the debug chain.
llvm-svn: 26729
2006-03-13 13:07:37 +00:00
Chris Lattner d8c2a48d58 Fold X+Y -> X|Y when safe. This implements:
Regression/CodeGen/PowerPC/and_add.ll

a case that occurs with dynamic allocas of constant size.

llvm-svn: 26727
2006-03-13 06:51:27 +00:00
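
"When safe" here means the two operands can never carry into each other: every bit position is known to be zero in at least one of them, which is exactly the situation with an aligned alloca base plus a small constant offset. A minimal sketch of that condition (an assumed helper, not the dag combiner source):

#include <stdint.h>

/* X + Y can be rewritten as X | Y only if no bit can be set in both operands,
   i.e. the addition can never produce a carry. */
static int add_becomes_or(uint32_t known_zero_x, uint32_t known_zero_y) {
    /* every bit must be known-zero in at least one operand */
    return (~known_zero_x & ~known_zero_y) == 0;
}

For an alloca base aligned to 16 bytes the low four bits are known zero, so adding a small constant offset is carry-free and the add folds to an or.
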
Chris Lattner 8bb6cb7d7b add a couple of missing folds
llvm-svn: 26724
2006-03-13 06:26:26 +00:00
Chris Lattner 994d8e6bd4 For targets with FABS/FNEG support, lower copysign to an integer load,
a select and FABS/FNEG.

This speeds up a trivial (aka stupid) copysign benchmark I wrote from 6.73s
to 2.64s, woo.

llvm-svn: 26723
2006-03-13 06:08:38 +00:00
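
The lowering above can be pictured in C as: read the sign bit of the second operand as an integer, then select between fabs(x) and -fabs(x). A hedged sketch under those assumptions (not the legalizer code):

#include <math.h>
#include <stdint.h>
#include <string.h>

/* copysign(x, y): |x| with the sign of y, built from an integer load of the
   sign bit, a select, and FABS/FNEG. */
static double copysign_sketch(double x, double y) {
    uint64_t ybits;
    memcpy(&ybits, &y, sizeof ybits);    /* integer load of y's bits  */
    double ax = fabs(x);                 /* FABS                      */
    return (ybits >> 63) ? -ax : ax;     /* select between ax and -ax */
}
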
Chris Lattner a767dbf197 Don't advance the hazard recognizer when there are no hazards and no instructions
to be emitted.

Don't add one to the latency of a completed instruction if the latency of the
op is 0.

llvm-svn: 26718
2006-03-12 09:01:41 +00:00
Chris Lattner 86a9b60a25 Chain operands aren't real uses: they don't require the full latency of the
predecessor to finish before they can start.

llvm-svn: 26717
2006-03-12 03:52:09 +00:00
Chris Lattner 572003ca15 Add a pending queue data structure to keep track of instructions whose
operands have all issued, but whose results are not yet available.  This
allows us to compile:

int G;
int test(int A, int B, int* P) {
   return (G+A)*(B+1);
}

to:

_test:
        lis r2, ha16(L_G$non_lazy_ptr)
        addi r4, r4, 1
        lwz r2, lo16(L_G$non_lazy_ptr)(r2)
        lwz r2, 0(r2)
        add r2, r2, r3
        mullw r3, r2, r4
        blr

instead of this, which has a stall between the lis/lwz:

_test:
        lis r2, ha16(L_G$non_lazy_ptr)
        lwz r2, lo16(L_G$non_lazy_ptr)(r2)
        addi r4, r4, 1
        lwz r2, 0(r2)
        add r2, r2, r3
        mullw r3, r2, r4
        blr

llvm-svn: 26716
2006-03-12 00:38:57 +00:00
Chris Lattner 356183d91e rename priorityqueue -> availablequeue. When a node is scheduled, remember
which cycle it lands on.

llvm-svn: 26714
2006-03-11 22:44:37 +00:00
Chris Lattner 063086b0f4 Make CurrCycle a local var instead of an instance var
llvm-svn: 26713
2006-03-11 22:34:41 +00:00
Chris Lattner 9995a0c019 Move some methods around so that BU specific code is together, TD specific code
is together, and direction independent code is together.

llvm-svn: 26712
2006-03-11 22:28:35 +00:00
Chris Lattner 578d8fcb59 merge preds/chainpreds -> preds set
merge succs/chainsuccs -> succs set

This has no functionality change, simplifies the code, and reduces the size
of sunits.

llvm-svn: 26711
2006-03-11 22:24:20 +00:00
Evan Cheng 38280c0020 Added a parameter to control whether Constant::getStringValue() would chop
off the result string at the first null terminator.

llvm-svn: 26704
2006-03-10 23:52:03 +00:00
Chris Lattner d3ef6c290a scrape out bits of llvm-db
llvm-svn: 26701
2006-03-10 22:48:19 +00:00
Chris Lattner f918e15362 Move simple-selector-specific types to the simple selector.
llvm-svn: 26693
2006-03-10 07:51:18 +00:00
Chris Lattner 5255d04357 Simplify the interface to the schedulers, to not pass the selected heuristic in.
llvm-svn: 26692
2006-03-10 07:49:12 +00:00
Chris Lattner a5b93b8c6d Move some simple-sched-specific instance vars to the simple scheduler.
llvm-svn: 26690
2006-03-10 07:42:02 +00:00
Chris Lattner e015178de1 prune #includes
llvm-svn: 26689
2006-03-10 07:37:35 +00:00
Chris Lattner 4b70ff7876 move some simple scheduler methods into the simple scheduler
llvm-svn: 26688
2006-03-10 07:35:21 +00:00
Chris Lattner dc2f135f5c Make EmitNode take a SDNode instead of a NodeInfo*
llvm-svn: 26687
2006-03-10 07:28:36 +00:00
Chris Lattner b9d8fa0342 Move the VRBase field from NodeInfo to being a separate, explicit, map.
llvm-svn: 26686
2006-03-10 07:25:12 +00:00
Chris Lattner c48cfba44b no need to build groups anymore
llvm-svn: 26684
2006-03-10 07:15:58 +00:00
Chris Lattner 6f82fe8106 Create SUnits directly from the SelectionDAG.
llvm-svn: 26683
2006-03-10 07:13:32 +00:00
Chris Lattner 2f8c7c3d55 Push PrepareNodeInfo/IdentifyGroups down the inheritance hierarchy
llvm-svn: 26682
2006-03-10 06:34:51 +00:00
Chris Lattner 349e9ddccc Teach the latency scheduler some new tricks. In particular, to break ties,
keep track of a sense of "mobility", i.e. how many other nodes scheduling one
node will free up.  For something like this:

float testadd(float *X, float *Y, float *Z, float *W, float *V) {
  return (*X+*Y)*(*Z+*W)+*V;
}

For example, this makes us schedule *X then *Y, not *X then *Z.  The former
allows us to issue the add; the latter only lets us issue other loads.

This turns the above code from this:

_testadd:
        lfs f0, 0(r3)
        lfs f1, 0(r6)
        lfs f2, 0(r4)
        lfs f3, 0(r5)
        fadds f0, f0, f2
        fadds f1, f3, f1
        lfs f2, 0(r7)
        fmadds f1, f0, f1, f2
        blr

into this:

_testadd:
        lfs f0, 0(r6)
        lfs f1, 0(r5)
        fadds f0, f1, f0
        lfs f1, 0(r4)
        lfs f2, 0(r3)
        fadds f1, f2, f1
        lfs f2, 0(r7)
        fmadds f1, f1, f0, f2
        blr

llvm-svn: 26680
2006-03-10 05:51:05 +00:00
Chris Lattner 25e2556b71 add an aggregate method for reinserting scheduled nodes, add a callback for
priority impls that want to be notified when a node is scheduled

llvm-svn: 26678
2006-03-10 04:32:49 +00:00
Jeff Cohen 6ce97687f7 Fix VC++ build breakage.
llvm-svn: 26676
2006-03-10 03:57:45 +00:00
Chris Lattner 213209a248 remove dbg_declare, it's not used yet.
llvm-svn: 26659
2006-03-09 20:02:42 +00:00
Chris Lattner c6c9e65301 remove temporary option
llvm-svn: 26646
2006-03-09 17:31:22 +00:00
Chris Lattner d17d77aa1d yes yes, enabled debug output is bad
llvm-svn: 26637
2006-03-09 07:39:25 +00:00
Chris Lattner 6398c13128 switch the t-d scheduler to use a really dumb and trivial critical path
latency priority function.

llvm-svn: 26636
2006-03-09 07:38:27 +00:00
Chris Lattner d4130375c0 Pull latency information for target instructions out of the latency tables. :)
Only enable this with -use-sched-latencies, I'll enable it by default with a
clean nightly tester run tonight.

PPC is the only target that provides latency info currently.

llvm-svn: 26634
2006-03-09 07:15:18 +00:00
Chris Lattner da6aafeef4 don't copy all itinerary data
llvm-svn: 26633
2006-03-09 07:13:00 +00:00
Chris Lattner 399bee27f0 PriorityQueue is an instance var, use it.
llvm-svn: 26632
2006-03-09 06:48:37 +00:00
Chris Lattner 9e95accf4e add some comments
llvm-svn: 26631
2006-03-09 06:37:29 +00:00
Chris Lattner 9df647539d Refactor the priority mechanism one step further: now that it is a separate
class, sever its implementation from the interface.  Now we can provide new
implementations of the same interface (priority computation) without touching
the scheduler itself.

llvm-svn: 26630
2006-03-09 06:35:14 +00:00
Jim Laskey 2698f0de7a Get rid of the multiple copies of getStringValue. Now a Constant:: method.
llvm-svn: 26616
2006-03-08 18:11:07 +00:00
Chris Lattner fd22d42945 Split the priority function computation and priority queue management out
of the ScheduleDAGList class into a new SchedulingPriorityQueue class.

llvm-svn: 26613
2006-03-08 05:18:27 +00:00
Chris Lattner 42e2026cb0 switch from an explicitly managed list of SUnits to a simple vector of sunits
llvm-svn: 26612
2006-03-08 04:54:34 +00:00
Chris Lattner 12c6d89204 Shrinkify some fields, fit to 80 columns
llvm-svn: 26611
2006-03-08 04:41:06 +00:00
Chris Lattner 3fe975b846 revert the previous patch, didn't mean to check it in yet
llvm-svn: 26610
2006-03-08 04:39:05 +00:00
Chris Lattner af5e26c980 remove "Slot", it is dead
llvm-svn: 26609
2006-03-08 04:37:58 +00:00
Chris Lattner 543832d39d Change the interface for getting a target HazardRecognizer to be more clean.
llvm-svn: 26608
2006-03-08 04:25:59 +00:00
Chris Lattner 0c801bd1cf Fix some formatting; when looking for hazards, prefer target nodes over
things like copyfromreg.

llvm-svn: 26586
2006-03-07 05:40:43 +00:00
Chris Lattner 01aa752a36 update file comment
llvm-svn: 26573
2006-03-06 17:58:04 +00:00
Evan Cheng a00c61932d Remove some code that doesn't make sense
llvm-svn: 26572
2006-03-06 07:31:44 +00:00
Evan Cheng c5c0658aa6 Remove SUnit::Priority1: it is re-calculated on demand as the number of live
ranges to be generated.

llvm-svn: 26570
2006-03-06 06:08:54 +00:00
Chris Lattner 47639dbb93 Hoist the HazardRecognizer out of the ScheduleDAGList.cpp file to where
targets can implement them.  Make the top-down scheduler non-g5-specific.

Remove the old testing hazard recognizer.

llvm-svn: 26569
2006-03-06 00:22:00 +00:00
Chris Lattner 00b52ea8f9 Comment fixes
llvm-svn: 26567
2006-03-05 23:59:20 +00:00
Chris Lattner 80268aaeed Don't depend on the C99 copysign function, implement it ourselves.
llvm-svn: 26566
2006-03-05 23:57:58 +00:00
Chris Lattner 2d945ba4c7 When a hazard recognizer needs noops to be inserted, do so. This represents
noops as null pointers in the instruction sequence.

llvm-svn: 26564
2006-03-05 23:51:47 +00:00
Chris Lattner fa5e1c9c26 Implement G5HazardRecognizer as a trivial thing that wants 5 cycles between
copyfromreg nodes.  Clearly useful!

llvm-svn: 26559
2006-03-05 23:13:56 +00:00
Chris Lattner e50c092b7c Add basic hazard recognizer support. noop insertion isn't complete yet though.
llvm-svn: 26558
2006-03-05 22:45:01 +00:00
Jeff Cohen 55e2aac24b Fix VC++ compilation error.
llvm-svn: 26554
2006-03-05 21:43:37 +00:00
Chris Lattner 98ecb8ec61 Split the list scheduler into top-down and bottom-up pieces. The priority
function of the top-down scheduler is completely bogus currently, and
having (future) PPC-specific code in this file is also wrong, but this is a
small incremental step.

llvm-svn: 26552
2006-03-05 21:10:33 +00:00
Chris Lattner 7a36d97518 Move the available queue to being inside the ListSchedule method, since it
bounds its lifetime.

llvm-svn: 26550
2006-03-05 20:21:55 +00:00
Chris Lattner bdaf4f38b5 Reinstate this now that the offending opposite xform has been removed.
llvm-svn: 26548
2006-03-05 19:53:55 +00:00
Chris Lattner c610e62e46 print arbitrary constant pool entries
llvm-svn: 26545
2006-03-05 09:38:03 +00:00
Evan Cheng d428e22c07 Back out fold (shl (add x, c1), c2) -> (add (shl x, c2), c1<<c2) for now.
It's causing an infinite loop compiling ldecod on x86 / Darwin.

llvm-svn: 26544
2006-03-05 07:30:16 +00:00
Chris Lattner 3bc4050217 Add some simple copysign folds
llvm-svn: 26543
2006-03-05 05:30:57 +00:00
Chris Lattner 5c1ba2ac08 Codegen copysign[f] into a FCOPYSIGN node
llvm-svn: 26542
2006-03-05 05:09:38 +00:00
Chris Lattner f29f5204cc fold (mul (add x, c1), c2) -> (add (mul x, c2), c1*c2)
fold (shl (add x, c1), c2) -> (add (shl x, c2), c1<<c2)

This allows us to compile CodeGen/PowerPC/addi-reassoc.ll into:

_test1:
        slwi r2, r4, 4
        add r2, r2, r3
        lwz r3, 36(r2)
        blr
_test2:
        mulli r2, r4, 5
        add r2, r2, r3
        lbz r2, 11(r2)
        extsb r3, r2
        blr

instead of:

_test1:
        addi r2, r4, 2
        slwi r2, r2, 4
        add r2, r3, r2
        lwz r3, 4(r2)
        blr
_test2:
        addi r2, r4, 2
        mulli r2, r2, 5
        add r2, r3, r2
        lbz r2, 1(r2)
        extsb r3, r2
        blr

llvm-svn: 26535
2006-03-04 23:33:26 +00:00
Evan Cheng 3bf916ddd9 Add more vector NodeTypes: VSDIV, VUDIV, VAND, VOR, and VXOR.
llvm-svn: 26504
2006-03-03 07:01:07 +00:00
Evan Cheng 23e75f5b49 SDOperand::isOperand should not just be a forwarding call. It must check *this against N's operands.
llvm-svn: 26502
2006-03-03 06:42:32 +00:00
Evan Cheng 6b08ae8497 Added isOperand(N): true if this is an operand of N
llvm-svn: 26501
2006-03-03 06:24:54 +00:00
Evan Cheng 5e9a695026 A bit more tweaking
llvm-svn: 26500
2006-03-03 06:23:43 +00:00
Jeff Cohen 55c1173a6c Fix VC++ compilation errors.
llvm-svn: 26498
2006-03-03 03:25:07 +00:00
Chris Lattner ad3c974a77 remove the read/write port/io intrinsics.
llvm-svn: 26479
2006-03-03 00:19:58 +00:00
Chris Lattner 093c159efb Split memcpy/memset/memmove intrinsics into i32/i64 versions, resolving
PR709, and paving the way for future progress.

llvm-svn: 26476
2006-03-03 00:00:25 +00:00
Evan Cheng 4e3904f637 - Fixed some priority calculation bugs that were causing bug 478. Among them:
a predecessor appearing more than once in the operand list was counted as
  multiple predecessors; priority1 should be updated during scheduling;
  CycleBound was updated after the node was inserted into the priority queue; and
  one of the tie-breaking conditions was flipped.
- Take two-address opcodes into consideration. If a predecessor is a def&use
  operand, it should have a higher priority.
- The scheduler should also favor floaters, i.e. nodes that do not have real
  predecessors such as MOV32ri.
- The scheduling fixes / tweaks fixed bug 478:
        .text
        .align  4
        .globl  _f
_f:
        movl 4(%esp), %eax
        movl 8(%esp), %ecx
        movl %eax, %edx
        imull %ecx, %edx
        imull %eax, %eax
        imull %ecx, %ecx
        addl %eax, %ecx
        leal (%ecx,%edx,2), %eax
        ret

  It is also a slight performance win (1% - 3%) for most tests.

llvm-svn: 26470
2006-03-02 21:38:29 +00:00
Chris Lattner 0db2f2c689 Fix CodeGen/Generic/2006-03-01-dagcombineinfloop.ll, an infinite loop
in the dag combiner on 176.gcc on x86.

llvm-svn: 26459
2006-03-01 21:47:21 +00:00
Chris Lattner 232024edb8 Fix a typo evan noticed
llvm-svn: 26454
2006-03-01 19:55:35 +00:00
Chris Lattner bc1c85beea Add support for target-specific dag combines
llvm-svn: 26443
2006-03-01 04:53:38 +00:00
Chris Lattner fbcd62d3bb Add a new AddToWorkList method, start using it
llvm-svn: 26441
2006-03-01 04:03:14 +00:00
Chris Lattner 324871ef1a Pull shifts by a constant through multiplies (a form of reassociation),
implementing Regression/CodeGen/X86/mul-shift-reassoc.ll

llvm-svn: 26440
2006-03-01 03:44:24 +00:00
Evan Cheng b97aab4371 Vector ops lowering.
llvm-svn: 26436
2006-03-01 01:09:54 +00:00
Evan Cheng be85e89ec4 - Added VConstant as an abstract version of ConstantVec.
- All abstract vector nodes must have # of elements and element type as their
first two operands.

llvm-svn: 26432
2006-03-01 00:51:13 +00:00
Chris Lattner f0032b350c Compile:
unsigned foo4(unsigned short *P) { return *P & 255; }
unsigned foo5(short *P) { return *P & 255; }

to:

_foo4:
        lbz r3,1(r3)
        blr
_foo5:
        lbz r3,1(r3)
        blr

not:

_foo4:
        lhz r2, 0(r3)
        rlwinm r3, r2, 0, 24, 31
        blr
_foo5:
        lhz r2, 0(r3)
        rlwinm r3, r2, 0, 24, 31
        blr

llvm-svn: 26419
2006-02-28 06:49:37 +00:00
Chris Lattner bdbc4476d9 Fold "and (LOAD P), 255" -> zextload. This allows us to compile:
unsigned foo3(unsigned *P) { return *P & 255; }
as:
_foo3:
        lbz r3, 3(r3)
        blr

instead of:

_foo3:
        lwz r2, 0(r3)
        rlwinm r3, r2, 0, 24, 31
        blr

and:

unsigned short foo2(float a) { return a; }

as:
_foo2:
        fctiwz f0, f1
        stfd f0, -8(r1)
        lhz r3, -2(r1)
        blr

instead of:

_foo2:
        fctiwz f0, f1
        stfd f0, -8(r1)
        lwz r2, -4(r1)
        rlwinm r3, r2, 0, 16, 31
        blr

llvm-svn: 26417
2006-02-28 06:35:35 +00:00
Chris Lattner 0f8a727c49 fold (sra (sra x, c1), c2) -> (sra x, c1+c2)
llvm-svn: 26416
2006-02-28 06:23:04 +00:00
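
The fold is the usual shift identity: two arithmetic right shifts by constants combine into a single shift by their sum (the real combiner also clamps the sum to the bit width). A tiny C illustration, assuming arithmetic shift for signed values:

#include <stdint.h>

static int32_t sra_sra(int32_t x) { return (x >> 3) >> 4; }  /* (sra (sra x, 3), 4) */
static int32_t sra_sum(int32_t x) { return x >> (3 + 4); }   /* (sra x, 7)          */
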
Chris Lattner 9fed5b6122 Add support for output memory constraints.
llvm-svn: 26410
2006-02-27 23:45:39 +00:00
Chris Lattner 47ee42829d remove some completed notes
llvm-svn: 26390
2006-02-27 00:39:31 +00:00
Evan Cheng 9f9662b86e Print ConstantPoolSDNode offset field.
llvm-svn: 26381
2006-02-26 08:36:57 +00:00
Evan Cheng ed169db8a5 Added an offset field to ConstantPoolSDNode.
llvm-svn: 26371
2006-02-25 09:54:52 +00:00
Chris Lattner 5af3fdec12 Pass all the flags to the asm printer, not just the # operands.
llvm-svn: 26362
2006-02-24 19:50:58 +00:00
Chris Lattner 2f8a794b13 rename NumOps -> NumVals to avoid shadowing a NumOps var in an outer scope.
Add support for addressing modes.

llvm-svn: 26361
2006-02-24 19:18:20 +00:00
Chris Lattner 86c51000db Refactor operand adding out to a new AddOperand method
llvm-svn: 26358
2006-02-24 18:54:03 +00:00
Jeff Cohen 83c22e0d75 Get VC++ building again.
llvm-svn: 26351
2006-02-24 02:52:40 +00:00
Chris Lattner dcf785bf46 Implement (most of) selection of inline asm memory operands.
llvm-svn: 26350
2006-02-24 02:13:54 +00:00
Chris Lattner 7ef7a64ebb Lower C_Memory operands.
llvm-svn: 26346
2006-02-24 01:11:24 +00:00
Chris Lattner e7c0ffb3a0 Fix an endianness problem on big-endian targets with expanded operands
to inline asms.  Mark some methods const.

llvm-svn: 26334
2006-02-23 20:06:57 +00:00
Chris Lattner 571d9647c6 Record all of the expanded registers in the DAG and machine instr, fixing
several bugs in inline asm expanded operands.

llvm-svn: 26332
2006-02-23 19:21:04 +00:00
Chris Lattner b1124f3c76 This fixes a couple of problems with expansion
llvm-svn: 26318
2006-02-22 23:09:03 +00:00
Chris Lattner 6f87d18be9 Change a whole bunch of code to be built around RegsForValue instead of
a single register number.  This fully implements promotion for inline asms,
expand is close but not quite right yet.

llvm-svn: 26316
2006-02-22 22:37:12 +00:00
Chris Lattner 7ad77dfc2a split register class handling from explicit physreg handling.
llvm-svn: 26308
2006-02-22 00:56:39 +00:00
Chris Lattner 5c79f98f15 Adjust to changes in getRegForInlineAsmConstraint prototype
llvm-svn: 26306
2006-02-21 23:12:12 +00:00
Chris Lattner 301f45cf6f Fix a problem Nate and Duraid reported where simplifying nodes can cause
them to get resurrected, in which case deleting the undead nodes is
unfriendly.

llvm-svn: 26291
2006-02-20 06:51:04 +00:00
Chris Lattner 486d1bc5ed Fix a problem on itanium with memset. The value to set has been promoted to
i64 before this code, so zero_ext doesn't work.

llvm-svn: 26290
2006-02-20 06:38:35 +00:00
Nate Begeman abac61603f Add checks to make sure we don't create bogus extend nodes, and fix a bug
where we were doing exactly that which was causing failures on x86 and
alpha.

llvm-svn: 26284
2006-02-18 02:40:58 +00:00
Chris Lattner 375e1a71cc Fix a tricky issue in the SimplifyDemandedBits code where CombineTo wasn't
exactly the API we wanted to call into.  This fixes the crash on crafty last
night.

llvm-svn: 26269
2006-02-17 21:58:01 +00:00
Nate Begeman fb5dbadf15 Clean up DemandedBitsAreZero interface
Make more use of the new mask helpers in valuetypes.h
Combine (sra (srl x, c1), c1) -> sext_inreg if legal

llvm-svn: 26263
2006-02-17 19:54:08 +00:00
Nate Begeman 57b3567552 Don't expand sdiv by power of two before legalize, since it will likely
generate illegal nodes.

llvm-svn: 26261
2006-02-17 07:26:20 +00:00
Nate Begeman 5965bd19f8 kill ADD_PARTS & SUB_PARTS and replace them with fancy new ADDC, ADDE, SUBC
and SUBE nodes that actually expose what's going on and allow for
significant simplifications in the targets.

llvm-svn: 26255
2006-02-17 05:43:56 +00:00
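
What ADDC/ADDE expose is plain carry propagation when a wide add is split into legal halves. A C sketch of a 64-bit add done as two 32-bit pieces, the shape the new nodes describe (illustrative, not target code):

#include <stdint.h>

static uint64_t add64_via_parts(uint64_t a, uint64_t b) {
    uint32_t alo = (uint32_t)a, ahi = (uint32_t)(a >> 32);
    uint32_t blo = (uint32_t)b, bhi = (uint32_t)(b >> 32);

    uint32_t lo    = alo + blo;          /* ADDC: add the low halves      */
    uint32_t carry = lo < alo;           /* carry out of the low add      */
    uint32_t hi    = ahi + bhi + carry;  /* ADDE: high halves plus carry  */

    return ((uint64_t)hi << 32) | lo;
}
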
Chris Lattner 9ec392b2aa Fix another miscompilation exposed by lencode, where we lowered i64->f32
conversions to __floatdidf instead of __floatdisf on targets that support
f32 but not i64 (e.g. sparc).

llvm-svn: 26254
2006-02-17 04:32:33 +00:00
Evan Cheng c3dcf5a4d7 Dumb bug. The code sees a memcpy from X+c, so it increments the src offset. But it
turns out not to point to a constant string, and the code forgot to change the offset
back.

llvm-svn: 26242
2006-02-16 23:11:42 +00:00
Nate Begeman 8a77efe4f7 Rework the SelectionDAG-based implementations of SimplifyDemandedBits
and ComputeMaskedBits to match the new improved versions in instcombine.
Tested against all of multisource/benchmarks on ppc.

llvm-svn: 26238
2006-02-16 21:11:51 +00:00
Evan Cheng 42c01c8d39 If the false case is the current basic block, then this is a self loop.
We do not want to emit "Loop: ... brcond Out; br Loop", as it adds an extra
instruction in the loop.  Instead, invert the condition and emit
"Loop: ... br!cond Loop; br Out.

Generalize the fix by moving it from PPCDAGToDAGISel to SelectionDAGLowering.

llvm-svn: 26231
2006-02-16 08:27:56 +00:00
Chris Lattner 471627c49d Lowering of sdiv X, pow2 was broken, this fixes it. This patch is written
by Nate, I'm just committing it for him.

llvm-svn: 26230
2006-02-16 08:02:36 +00:00
Evan Cheng 93e4865d4b Remove an unused function parameter.
llvm-svn: 26221
2006-02-15 22:12:35 +00:00
Evan Cheng 6781b6e62e Turn a memcpy from string constant into a series of stores of constant values.
llvm-svn: 26219
2006-02-15 21:59:04 +00:00
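
A hedged before/after picture in C of the change above: a short memcpy whose source is a string literal becomes stores of the constant bytes (shown as byte stores here; how the bytes are grouped into wider stores is a target decision):

#include <string.h>

static void copy_before(char *dst) {
    memcpy(dst, "abc", 4);               /* copy from a constant string */
}

static void copy_after(char *dst) {
    /* the four constant bytes stored directly */
    dst[0] = 'a'; dst[1] = 'b'; dst[2] = 'c'; dst[3] = '\0';
}
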
Jim Laskey 2eea436192 Should not combine ISD::LOCATIONs until we have a scheme to remove them from
MachineDebugInfo tables.

llvm-svn: 26216
2006-02-15 19:34:44 +00:00
Evan Cheng e2038bdeee Lower memcpy with small constant size operand into a series of load / store
ops.

llvm-svn: 26195
2006-02-15 01:54:51 +00:00
Evan Cheng 0451499b3c Doh again!
llvm-svn: 26188
2006-02-14 23:05:54 +00:00
Evan Cheng db2a7a736a Keep to < 80 cols
llvm-svn: 26177
2006-02-14 20:12:38 +00:00
Evan Cheng 038521ef76 Missed a break so memcpy cases fell through to memset. Doh.
llvm-svn: 26176
2006-02-14 19:45:56 +00:00
Evan Cheng d502610604 Fixed a build breakage.
llvm-svn: 26175
2006-02-14 09:11:59 +00:00
Evan Cheng 4b40a42653 Rename maxStoresPerMemSet to maxStoresPerMemset, etc.
llvm-svn: 26174
2006-02-14 08:38:30 +00:00
Evan Cheng 81fcea8aa2 Expand memset dst, c, size to a series of stores if size falls below the
target-specific threshold, e.g. 16 for x86.

llvm-svn: 26171
2006-02-14 08:22:34 +00:00
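
Below the target-specific threshold (16 bytes on x86 in this change), the memset becomes a handful of wide stores of the splatted byte. A rough C sketch assuming a 4-byte store width (illustrative only):

#include <stdint.h>
#include <string.h>

/* equivalent of memset(dst, c, 16) expanded into four 32-bit stores */
static void memset16_expanded(void *dst, unsigned char c) {
    uint32_t splat = 0x01010101u * c;    /* splat the byte into a word */
    unsigned char *p = dst;
    memcpy(p +  0, &splat, 4);
    memcpy(p +  4, &splat, 4);
    memcpy(p +  8, &splat, 4);
    memcpy(p + 12, &splat, 4);
}
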
Chris Lattner 1784a9d267 now that libcalls don't suck, we can remove this hack
llvm-svn: 26164
2006-02-14 05:39:35 +00:00
Chris Lattner 8e2ee7358f Fix a latent bug in the call sequence handling stuff. Some targets (e.g. x86)
create these nodes with flag results.  Remember that we legalized them.

llvm-svn: 26156
2006-02-14 00:55:02 +00:00
Jim Laskey 390c63e9d9 Rename to better reflect usage (current and planned.)
llvm-svn: 26145
2006-02-13 12:50:39 +00:00
Chris Lattner 462505fc5f Completely rewrite libcall insertion by the legalizer, providing the
following handy-dandy properties:

1. it is always correct now
2. it is much faster than before
3. it is easier to understand

This implementation builds off of the recent simplifications of the
legalizer that made it single-pass instead of iterative.

This fixes JM/lencod, JM/ldecod, and
CodeGen/Generic/2006-02-12-InsertLibcall.ll (at least on PPC).

llvm-svn: 26144
2006-02-13 09:18:02 +00:00
Jim Laskey 5995d0160c Reorg for integration with gcc4. Old-style debug info will not be passed through
to SelIDAG.

llvm-svn: 26115
2006-02-11 01:01:30 +00:00
Evan Cheng a1ef3ec5b5 Added SelectionDAG::InsertISelMapEntry(). This is used to work around the gcc
problem where it inlines the map insertion call too aggressively. Before this
change it was producing a frame size of 24k for Select_store(), now it's down
to 10k (by calling this method rather than calling the map insertion operator).

llvm-svn: 26094
2006-02-09 22:11:03 +00:00
Evan Cheng d3f1db93c1 More changes to reduce frame size.
Move all getTargetNode() out of SelectionDAG.h into SelectionDAG.cpp. This
prevents them from being inlined.
Change getTargetNode() so they return SDNode * instead of SDOperand to prevent
copying. It should also help compilation speed.

llvm-svn: 26083
2006-02-09 07:15:23 +00:00
Chris Lattner 4576bb74d5 Make MachineConstantPool entry alignments explicit
llvm-svn: 26071
2006-02-09 02:23:13 +00:00
Chris Lattner a10e23c19f Compile this:
xori r6, r2, 1
        rlwinm r6, r6, 0, 31, 31
        cmpwi cr0, r6, 0
        bne cr0, LBB1_3 ; endif

to this:

        rlwinm r6, r2, 0, 31, 31
        cmpwi cr0, r6, 0
        beq cr0, LBB1_3 ; endif

llvm-svn: 26047
2006-02-08 02:13:15 +00:00
Nate Begeman 8c9cd461df Back out previous commit, it isn't safe.
llvm-svn: 26006
2006-02-05 08:23:00 +00:00
Nate Begeman 3dc8b89493 fold c1 << (x + c2) into (c1 << c2) << x. fix a warning.
llvm-svn: 26005
2006-02-05 08:07:24 +00:00
Nate Begeman c89fdf1eb3 Handle urem by shifted powers of 2.
llvm-svn: 26001
2006-02-05 07:36:48 +00:00
Nate Begeman 25d178bece handle combining A / (B << N) into A >>u (log2(B)+N) when B is a power of 2
llvm-svn: 26000
2006-02-05 07:20:23 +00:00
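
The combine above is a power-of-two identity for unsigned division: dividing by B << N with B = 2^k is the same as shifting right by k + N. A minimal C check for B = 8 and small N (not the combiner source):

#include <stdint.h>

static uint32_t div_before(uint32_t a, uint32_t n) { return a / (8u << n); }
static uint32_t div_after (uint32_t a, uint32_t n) { return a >> (3 + n);  }  /* log2(8) == 3 */
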
Evan Cheng d37645c07d * Added SDNode::isOnlyUse().
* Fix hasNUsesOfValue(), it should be const.

llvm-svn: 25990
2006-02-05 06:29:23 +00:00
Jeff Cohen 95ae171d5b Fix VC++ warning.
llvm-svn: 25975
2006-02-04 16:20:31 +00:00
Evan Cheng f9adce90bf Get rid of some memory leaks identified by Valgrind
llvm-svn: 25960
2006-02-04 06:49:00 +00:00
Chris Lattner 3b48431333 Add initial support for immediates. This allows us to compile this:
int %rlwnm(int %A, int %B) {
  %C = call int asm "rlwnm $0, $1, $2, $3, $4", "=r,r,r,n,n"(int %A, int %B, int 4, int 17)
  ret int %C
}

into:

_rlwnm:
        or r2, r3, r3
        or r3, r4, r4
        rlwnm r2, r2, r3, 4, 17    ;; note the immediates :)
        or r3, r2, r2
        blr

llvm-svn: 25955
2006-02-04 02:26:14 +00:00
Chris Lattner 65ad53feb3 Initial early support for non-register operands, like immediates
llvm-svn: 25952
2006-02-04 02:16:44 +00:00
Nate Begeman dc7bba9ffe Add a framework for eliminating instructions that produce undemanded bits.
llvm-svn: 25945
2006-02-03 22:24:05 +00:00
Chris Lattner f68fd20286 remove some #ifdef'd out code, which should properly be in the dag combiner anyway.
llvm-svn: 25941
2006-02-03 20:13:59 +00:00
Chris Lattner 6091407783 remove dead fn
llvm-svn: 25935
2006-02-03 06:51:34 +00:00
Nate Begeman 22e251abf1 Add common code for reassociating ops in the dag combiner
llvm-svn: 25934
2006-02-03 06:46:56 +00:00
Evan Cheng 02b5b9cdd6 Added case HANDLENODE to getOperationName().
llvm-svn: 25920
2006-02-03 01:33:01 +00:00
Chris Lattner 49beaf40fc Turn any_extend nodes into zero_extend nodes when it allows us to remove an
and instruction.  This allows us to compile stuff like this:

bool %X(int %X) {
        %Y = add int %X, 14
        %Z = setne int %Y, 12345
        ret bool %Z
}

to this:

_X:
        cmpl $12331, 4(%esp)
        setne %al
        movzbl %al, %eax
        ret

instead of this:

_X:
        cmpl $12331, 4(%esp)
        setne %al
        movzbl %al, %eax
        andl $1, %eax
        ret

This occurs quite a bit with the X86 backend.  For example, 25 times in
lambda, 30 times in 177.mesa, 14 times in galgel,  70 times in fma3d,
25 times in vpr, several hundred times in gcc, ~45 times in crafty,
~60 times in parser, ~140 times in eon, 110 times in perlbmk, 55 on gap,
16 times on bzip2, 14 times on twolf, and 1-2 times in many other SPEC2K
programs.

llvm-svn: 25901
2006-02-02 07:17:31 +00:00
Chris Lattner 49ce35542f add two dag combines:
(C1-X) == C2 --> X == C1-C2
(X+C1) == C2 --> X == C2-C1

This allows us to compile this:

bool %X(int %X) {
        %Y = add int %X, 14
        %Z = setne int %Y, 12345
        ret bool %Z
}

into this:

_X:
        cmpl $12331, 4(%esp)
        setne %al
        movzbl %al, %eax
        andl $1, %eax
        ret

not this:

_X:
        movl $14, %eax
        addl 4(%esp), %eax
        cmpl $12345, %eax
        setne %al
        movzbl %al, %eax
        andl $1, %eax
        ret

Testcase here: Regression/CodeGen/X86/compare-add.ll

nukage of the and coming up next.

llvm-svn: 25898
2006-02-02 06:36:13 +00:00
Chris Lattner 0bd74558ae make -debug output less newliney
llvm-svn: 25895
2006-02-02 00:38:08 +00:00
Chris Lattner 7f5880b1c7 Implement matching constraints. We can now say things like this:
%C = call int asm "xyz $0, $1, $2, $3", "=r,r,r,0"(int %A, int %B, int 4)

and get:

xyz r2, r3, r4, r2

note that the r2's are pinned together.  Yaay for 2-address instructions.

llvm-svn: 25893
2006-02-02 00:25:23 +00:00
Nate Begeman 01bd9d9911 *** empty log message ***
llvm-svn: 25879
2006-02-01 19:05:15 +00:00
Chris Lattner 1558fc64f9 Implement simple register assignment for inline asms. This allows us to compile:
int %test(int %A, int %B) {
  %C = call int asm "xyz $0, $1, $2", "=r,r,r"(int %A, int %B)
  ret int %C
}

into:

 (0x8906130, LLVM BB @0x8902220):
        %r2 = OR4 %r3, %r3
        %r3 = OR4 %r4, %r4
        INLINEASM <es:xyz $0, $1, $2>, %r2<def>, %r2, %r3
        %r3 = OR4 %r2, %r2
        BLR

which asmprints as:

_test:
        or r2, r3, r3
        or r3, r4, r4
        xyz $0, $1, $2      ;; need to print the operands now :)
        or r3, r2, r2
        blr

llvm-svn: 25878
2006-02-01 18:59:47 +00:00
Nate Begeman 7e7f439f85 Fix some of the stuff in the PPC README file, and clean up legalization
of the SELECT_CC, BR_CC, and BRTWOWAY_CC nodes.

llvm-svn: 25875
2006-02-01 07:19:44 +00:00
Chris Lattner 3a5ed55187 adjust to changes in InlineAsm interface. Fix a few minor bugs.
llvm-svn: 25865
2006-02-01 01:28:23 +00:00
Evan Cheng 32be2dc0af Allow the specification of explicit alignments for constant pool entries.
llvm-svn: 25855
2006-01-31 22:23:14 +00:00
Evan Cheng 2443ab932d Allow custom lowering of fabs. I forgot to check in this change which
caused several test failures.

llvm-svn: 25852
2006-01-31 18:14:25 +00:00
Chris Lattner e9721b2984 Only insert an AND when converting from BR_COND to BRCC if needed.
llvm-svn: 25832
2006-01-31 05:04:52 +00:00
Chris Lattner 2e56e89452 Handle physreg input/outputs. We now compile this:
int %test_cpuid(int %op) {
        %B = alloca int
        %C = alloca int
        %D = alloca int
        %A = call int asm "cpuid", "=eax,==ebx,==ecx,==edx,eax"(int* %B, int* %C, int* %D, int %op)
        %Bv = load int* %B
        %Cv = load int* %C
        %Dv = load int* %D
        %x = add int %A, %Bv
        %y = add int %x, %Cv
        %z = add int %y, %Dv
        ret int %z
}

to this:

_test_cpuid:
        sub %ESP, 16
        mov DWORD PTR [%ESP], %EBX
        mov %EAX, DWORD PTR [%ESP + 20]
        cpuid
        mov DWORD PTR [%ESP + 8], %ECX
        mov DWORD PTR [%ESP + 12], %EBX
        mov DWORD PTR [%ESP + 4], %EDX
        mov %ECX, DWORD PTR [%ESP + 12]
        add %EAX, %ECX
        mov %ECX, DWORD PTR [%ESP + 8]
        add %EAX, %ECX
        mov %ECX, DWORD PTR [%ESP + 4]
        add %EAX, %ECX
        mov %EBX, DWORD PTR [%ESP]
        add %ESP, 16
        ret

... note the proper register allocation.  :)

it is unclear to me why the loads aren't folded into the adds.

llvm-svn: 25827
2006-01-31 02:03:41 +00:00
Chris Lattner f263a23735 Fix a bug in my legalizer reworking that caused the X86 backend to not get
a chance to custom legalize setcc, which broke a bunch of C++ Codes.
Testcase here: CodeGen/X86/2006-01-30-LongSetcc.ll

llvm-svn: 25821
2006-01-30 22:43:50 +00:00
Chris Lattner d6f5ae4455 don't insert an and node if it isn't needed here, this can prevent folding
of lowered target nodes.

llvm-svn: 25804
2006-01-30 04:22:28 +00:00
Chris Lattner f0b24d2dc0 Move MaskedValueIsZero from the DAGCombiner to the TargetLowering interface, making isMaskedValueZeroForTargetNode simpler, and useable from other parts of the compiler.
llvm-svn: 25803
2006-01-30 04:09:27 +00:00
Chris Lattner 3b40e64aa3 pass the address of MaskedValueIsZero into isMaskedValueZeroForTargetNode,
to permit recursion

llvm-svn: 25799
2006-01-30 03:49:37 +00:00
Chris Lattner 4d1ea71a31 Fix RET of promoted values on targets that custom expand RET to a target node.
llvm-svn: 25794
2006-01-29 21:02:23 +00:00
Chris Lattner 2c748afd6c cleanups to the ValueTypeActions interface
llvm-svn: 25785
2006-01-29 08:42:06 +00:00
Chris Lattner ccb4476c87 Remove some special case hacks for CALLSEQ_*, using UpdateNodeOperands
instead.

llvm-svn: 25780
2006-01-29 07:58:15 +00:00
Chris Lattner 2f292789dc Allow custom expansion of ConstantVec nodes. PPC will use this in the future.
llvm-svn: 25774
2006-01-29 06:34:16 +00:00
Chris Lattner 758b0ac54b Legalize ConstantFP into TargetConstantFP when the target allows. Implement
custom expansion of ConstantFP nodes.

llvm-svn: 25772
2006-01-29 06:26:56 +00:00
Chris Lattner 678da98835 eliminate uses of SelectionDAG::getBR2Way_CC
llvm-svn: 25767
2006-01-29 06:00:45 +00:00
Chris Lattner d02b05473c Use the new "UpdateNodeOperands" method to simplify LegalizeDAG and make it
faster.  This cuts about 120 lines of code out of the legalizer (mostly code
checking to see if operands have changed).

It also fixes an ugly performance issue, where the legalizer cloned the entire
graph after any change.  Now the "UpdateNodeOperands" method gives it a chance
to reuse nodes if the operands of a node change but not its opcode or valuetypes.

This speeds up instruction selection time on kimwitu++ by about 8.2% with a
release build.

llvm-svn: 25746
2006-01-28 10:58:55 +00:00
Chris Lattner 580b12ad34 add another method variant
llvm-svn: 25744
2006-01-28 10:09:25 +00:00
Chris Lattner f34156e8cb add some methods for updating nodes
llvm-svn: 25742
2006-01-28 09:32:45 +00:00
Chris Lattner eb63751499 minor tweaks
llvm-svn: 25740
2006-01-28 08:31:04 +00:00
Chris Lattner 689bdcc9cf move a bunch of code, no other change.
llvm-svn: 25739
2006-01-28 08:25:58 +00:00
Chris Lattner fcfda5a174 remove a couple more now-extraneous legalizeop's
llvm-svn: 25738
2006-01-28 08:22:56 +00:00
Chris Lattner 364b89a784 fix a bug
llvm-svn: 25737
2006-01-28 07:42:08 +00:00
Chris Lattner 9dcce6da8e Several major changes:
1. Pull out the expand cases for BSWAP and CT* into a separate function,
   reducing the size of LegalizeOp.
2. Fix a bug where expand(bswap i64) was wrong when i64 is legal.
3. Changed LegalizeOp/PromoteOp so that the legalizer never needs to be
   iterative.  It now operates in a single pass over the nodes.
4. Simplify a LOT of code, with a net reduction of ~280 lines.

llvm-svn: 25736
2006-01-28 07:39:30 +00:00
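
Item 1 above moves the BSWAP/CT* expansions out of LegalizeOp; the BSWAP expansion itself is just shifts and masks. A hedged C version of the 32-bit case (the i64 case fixed in item 2 is the same pattern over eight bytes):

#include <stdint.h>

/* byte-swap a 32-bit value with shifts and masks, as the expansion does when
   the target has no native bswap instruction */
static uint32_t bswap32_expanded(uint32_t x) {
    return ((x & 0x000000FFu) << 24) |
           ((x & 0x0000FF00u) <<  8) |
           ((x & 0x00FF0000u) >>  8) |
           ((x & 0xFF000000u) >> 24);
}
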
Chris Lattner fd4a7f76a9 Eliminate the need for ExpandOp to set 'needsanotheriteration', as it already
relegalizes the stuff it returns.

Add the ability to custom expand ADD/SUB, so that targets don't need to deal
with ADD_PARTS/SUB_PARTS if they don't want.

Fix some obscure potential bugs and simplify code.

llvm-svn: 25732
2006-01-28 05:07:51 +00:00
Chris Lattner 10f677508f Instead of making callers of ExpandLibCall legalize the result, make
ExpandLibCall do it itself.

llvm-svn: 25731
2006-01-28 04:28:26 +00:00
Chris Lattner a593acfe66 Eliminate the need to do another iteration of the legalizer after inserting
a libcall.

llvm-svn: 25730
2006-01-28 04:23:12 +00:00
Chris Lattner 98ed05c81d remove method I just added
llvm-svn: 25728
2006-01-28 03:43:09 +00:00
Chris Lattner 43b867dd3b add a new callback
llvm-svn: 25727
2006-01-28 03:37:03 +00:00
Nate Begeman 595ec734fc Implement Promote for VAARG, and allow it to be custom promoted for people
who don't want the default behavior (Alpha).

llvm-svn: 25726
2006-01-28 03:14:31 +00:00
Nate Begeman af397cec0b Add a missing case to the dag combiner.
llvm-svn: 25723
2006-01-28 01:06:30 +00:00
Chris Lattner fb16a62fba Remove the ISD::CALL and ISD::TAILCALL nodes
llvm-svn: 25721
2006-01-28 00:18:58 +00:00
Nate Begeman 8c47c3a3b1 Remove TLI.LowerReturnTo, and just let targets custom lower ISD::RET for
the same functionality.  This addresses another piece of bug 680.  Next,
on to fixing Alpha VAARG, which I broke last time.

llvm-svn: 25696
2006-01-27 21:09:22 +00:00
Chris Lattner 4df279cfda Teach the scheduler to emit the appropriate INLINEASM MachineInstr for an
ISD::INLINEASM node.

llvm-svn: 25668
2006-01-26 23:28:04 +00:00
Chris Lattner 476e67be14 initial selectiondag support for new INLINEASM node. Note that inline asms
with outputs or inputs are not supported yet. :)

llvm-svn: 25664
2006-01-26 22:24:51 +00:00
Evan Cheng c4c339c3d0 Clean up some code, improve efficiency, and fix a potential bug involving
chain successors.

llvm-svn: 25630
2006-01-26 00:30:29 +00:00
Reid Spencer 5edde66863 Don't break the optimized build (by incorrect placement of #endif)
llvm-svn: 25613
2006-01-25 21:49:13 +00:00
Evan Cheng 1880f8db02 No need to keep track of top and bottom nodes in a group since the vector is
already in order. Thanks Jim for pointing it out.

llvm-svn: 25608
2006-01-25 18:54:24 +00:00
Nate Begeman e74795cd70 First part of bug 680:
Remove TLI.LowerVA* and replace it with SDNodes that are lowered the same
way as everything else.

llvm-svn: 25606
2006-01-25 18:21:52 +00:00
Jeff Cohen fb20616aa6 Fix VC++ compilation error.
llvm-svn: 25604
2006-01-25 17:17:49 +00:00
Evan Cheng ab49556cf4 Bottom up register usage reducing list scheduler.
llvm-svn: 25601
2006-01-25 09:14:32 +00:00
Evan Cheng fbc88a624a Keep track of bottom / top element of a set of flagged nodes.
llvm-svn: 25600
2006-01-25 09:13:41 +00:00
Evan Cheng a6eff8a432 If scheduler choice is the default (-sched=default), use target scheduling
preference to determine which scheduler to use. SchedulingForLatency ==
Breadth first; SchedulingForRegPressure == bottom up register reduction list
scheduler.

llvm-svn: 25599
2006-01-25 09:12:57 +00:00
Chris Lattner f9a1e3aadc Fix an infinite loop I caused by making sure to legalize the flag operand
of CALLSEQ_* nodes

llvm-svn: 25582
2006-01-24 05:48:21 +00:00
Jeff Cohen 12f8441c03 Fix VC++ compilation error.
llvm-svn: 25577
2006-01-24 04:43:17 +00:00
Andrew Lenharth 683352382e another couple selects
llvm-svn: 25551
2006-01-23 21:51:14 +00:00
Andrew Lenharth c28563874c another selectto
llvm-svn: 25548
2006-01-23 20:59:12 +00:00
Jim Laskey b8566fa10a Typo.
llvm-svn: 25545
2006-01-23 13:34:04 +00:00
Evan Cheng 31272347d4 Skeleton of the list scheduler.
llvm-svn: 25544
2006-01-23 08:26:10 +00:00
Evan Cheng 421cfe8006 Minor clean up.
llvm-svn: 25543
2006-01-23 08:25:34 +00:00
Chris Lattner 763dfd7723 Fix Regression/CodeGen/SparcV8/2006-01-22-BitConvertLegalize.ll by making
sure that the result of expanding a BIT_CONVERT node is itself legalized.

llvm-svn: 25538
2006-01-23 07:30:46 +00:00
Evan Cheng 87063b9986 Remove a couple of unnecessary #include's
llvm-svn: 25535
2006-01-23 07:21:01 +00:00
Evan Cheng c1e1d9724d Factor out more instruction scheduler code to the base class.
llvm-svn: 25532
2006-01-23 07:01:07 +00:00
Chris Lattner deda32a786 Fix bugs lowering stackrestore, fixing 2004-08-12-InlinerAndAllocas.c on
PPC.

llvm-svn: 25522
2006-01-23 05:22:07 +00:00
Chris Lattner de02d7727f Add explicit #includes of <iostream>
llvm-svn: 25515
2006-01-22 23:41:00 +00:00
Chris Lattner e23928c67f Fix a bug in a recent refactor that caused a bunch of programs to miscompile
or the compiler to crash.

llvm-svn: 25503
2006-01-21 19:12:11 +00:00
Chris Lattner 44cab00045 Fix CodeGen/PowerPC/2006-01-20-ShiftPartsCrash.ll
llvm-svn: 25496
2006-01-21 04:27:00 +00:00
Evan Cheng 739a6a456e Do some code refactoring on Jim's scheduler in preparation for the new list
scheduler.

llvm-svn: 25493
2006-01-21 02:32:06 +00:00
Chris Lattner 15afe462a8 remove some unintentionally committed code
llvm-svn: 25483
2006-01-20 18:40:10 +00:00
Chris Lattner 222ceabbee If the target doesn't support f32 natively, insert the FP_EXTEND in target-indep
code, so that the LowerReturn code doesn't have to handle it.

llvm-svn: 25482
2006-01-20 18:38:32 +00:00
Evan Cheng 13e8c9d6de Another typo
llvm-svn: 25440
2006-01-19 04:54:52 +00:00
Andrew Lenharth 7599b6e4af was ignoring the legalized chain in this case, fixed SPASS on alpha
llvm-svn: 25428
2006-01-18 23:19:08 +00:00
Nate Begeman 569c439567 Get rid of code in the DAGCombiner that is duplicated in SelectionDAG.cpp
Now all constant folding in the code generator is in one place.

llvm-svn: 25426
2006-01-18 22:35:16 +00:00
Chris Lattner e2ee190821 Temporary workaround for a libcall insertion bug: If a target doesn't
support FSIN/FCOS nodes, do not lower sin/cos to them.

llvm-svn: 25425
2006-01-18 21:50:14 +00:00
Chris Lattner 5fee908be5 Fix a backwards conditional that caused an inf loop in some cases. This
fixes: test/Regression/CodeGen/Generic/2005-01-18-SetUO-InfLoop.ll

llvm-svn: 25419
2006-01-18 19:13:41 +00:00
Robert Bocchino 03e95af9f7 Support for the insertelement operation.
llvm-svn: 25405
2006-01-17 20:06:42 +00:00
Evan Cheng 6f86a7db07 Bug fix: missing LegalizeOp() on newly created nodes.
llvm-svn: 25401
2006-01-17 19:47:13 +00:00
Jim Laskey b9966029fe Adding basic support for Dwarf line number debug information.
I promise to keep future commits smaller.

llvm-svn: 25396
2006-01-17 17:31:53 +00:00
Reid Spencer b4f9a6f110 For PR411:
This patch is an incremental step towards supporting a flat symbol table.
It de-overloads the intrinsic functions by providing type-specific intrinsics
and arranging for automatically upgrading from the old overloaded name to
the new non-overloaded name. Specifically:
  llvm.isunordered -> llvm.isunordered.f32, llvm.isunordered.f64
  llvm.sqrt -> llvm.sqrt.f32, llvm.sqrt.f64
  llvm.ctpop -> llvm.ctpop.i8, llvm.ctpop.i16, llvm.ctpop.i32, llvm.ctpop.i64
  llvm.ctlz -> llvm.ctlz.i8, llvm.ctlz.i16, llvm.ctlz.i32, llvm.ctlz.i64
  llvm.cttz -> llvm.cttz.i8, llvm.cttz.i16, llvm.cttz.i32, llvm.cttz.i64
New code should not use the overloaded intrinsic names. Warnings will be
emitted if they are used.

llvm-svn: 25366
2006-01-16 21:12:35 +00:00
Nate Begeman 1e1eb5ee6c Constant fold ctpop/ctlz/cttz, and a couple other small cleanups
llvm-svn: 25357
2006-01-16 08:07:10 +00:00
Nate Begeman 2642a35f4c Expand case for 64b Legalize, even though no one should end up using this
(itanium supports bswap natively, alpha should custom lower it using the
VAX floating point swapload, ha ha).

llvm-svn: 25356
2006-01-16 07:59:13 +00:00
Chris Lattner fcdb420baf Disable two transformations that contribute to bus errors on SparcV8.
llvm-svn: 25339
2006-01-15 18:58:59 +00:00
Chris Lattner 59b82f9848 Allow the target to specify 'expand' if they just require the amount to
be subtracted from the stack pointer.

llvm-svn: 25331
2006-01-15 08:54:32 +00:00
Chris Lattner 2d59142613 Fix custom lowering of dynamic_stackalloc
llvm-svn: 25329
2006-01-15 08:43:08 +00:00