Commit Graph

275 Commits

Author SHA1 Message Date
Chris Lattner 8ba9ec9bbb Turn things with obviously undefined semantics into 'store -> null'
llvm-svn: 17110
2004-10-18 02:59:09 +00:00
Chris Lattner 3b92f17165 My friend the invoke instruction does not dominate all basic blocks if it
occurs in the entry node of a function

llvm-svn: 17109
2004-10-18 01:48:31 +00:00
Chris Lattner 107c15c33d Remove printout, realize that instructions in the entry block dominate all
other blocks.

llvm-svn: 17099
2004-10-17 21:31:34 +00:00
Chris Lattner e29d634a94 hasConstantValue will soon return instructions that don't dominate the PHI node,
so prepare for this.

llvm-svn: 17095
2004-10-17 21:22:38 +00:00
Chris Lattner 67f0545daf Fix a type violation
llvm-svn: 17069
2004-10-16 23:28:04 +00:00
Chris Lattner 684c5c6587 Kill the bogon that slipped into my buffer before I committed.
llvm-svn: 17067
2004-10-16 19:46:33 +00:00
Chris Lattner 6580e09fef Implement InstCombine/getelementptr.ll:test9, which is the source of many
ugly and giant constant exprs in some programs.

llvm-svn: 17066
2004-10-16 19:44:59 +00:00
Chris Lattner 81a7a23494 Optimize instructions involving undef values. For example X+undef == undef.
llvm-svn: 17047
2004-10-16 18:11:37 +00:00
Chris Lattner 00648e1f86 Transform memmove -> memcpy when the source is obviously constant memory.
llvm-svn: 16932
2004-10-12 04:52:52 +00:00
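A rough C++ illustration of the situation this fold targets; the names kTable and copy_table are made up for the example, not taken from the commit:

#include <cstring>

// The source is constant (read-only) memory, so a valid program's destination
// can never overlap it; the overlap-safe memmove can therefore be lowered to
// the cheaper memcpy.
static const char kTable[16] = "0123456789abcde";

void copy_table(char *dst) {
    std::memmove(dst, kTable, sizeof(kTable));  // may become memcpy
}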
Chris Lattner a92af96c56 Reenable the transform, turning X/-10 < 1 into X > -10
llvm-svn: 16918
2004-10-11 19:40:04 +00:00
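A small brute-force spot check of the reenabled fold (a sanity check over a limited range, not a proof), in C++:

#include <cassert>

// With truncating signed division, X / -10 < 1 holds exactly when X > -10.
int main() {
    for (int x = -1000; x <= 1000; ++x)
        assert((x / -10 < 1) == (x > -10));
    return 0;
}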
Chris Lattner 4ad08352b4 Implement sub.ll:test17, -X/C -> X/-C
llvm-svn: 16863
2004-10-09 02:50:40 +00:00
Chris Lattner 0b41e861b6 Temporarily disable a buggy transformation until it can be fixed. This fixes
254.gap.

llvm-svn: 16853
2004-10-08 19:15:44 +00:00
Chris Lattner bff91d9a2e Instcombine (X & FF00) + xx00 -> (X+xx00) & FF00, implementing and.ll:test27
This comes up when doing adds to bitfield elements.

llvm-svn: 16836
2004-10-08 05:07:56 +00:00
Chris Lattner 44bd392cbf Little patch to turn (shl (add X, 123), 4) -> (add (shl X, 4), 123 << 4)
This triggers in cases of bitfield additions, opening opportunities for
future improvements.

llvm-svn: 16834
2004-10-08 03:46:20 +00:00
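A brute-force spot check of the identity behind this shl/add reassociation; it relies only on wrapping (modular) arithmetic, here checked with uint32_t over a limited range:

#include <cassert>
#include <cstdint>

int main() {
    for (uint32_t x = 0; x < 100000; ++x)
        assert(((x + 123u) << 4) == ((x << 4) + (123u << 4)));
    return 0;
}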
Chris Lattner 0aee4b7947 Instcombine: -(X sdiv C) -> (X sdiv -C), tested by sub.ll:test16
llvm-svn: 16769
2004-10-06 15:08:25 +00:00
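A brute-force spot check of the identity behind this and the -X/C fold above: because signed division truncates toward zero, negation commutes with the divide (barring the INT_MIN overflow case). The constant 7 is arbitrary:

#include <cassert>

int main() {
    const int C = 7;
    for (int x = -1000; x <= 1000; ++x)
        assert(-(x / C) == x / -C);
    return 0;
}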
Chris Lattner abae776b18 Hrm, debugging printouts do not need to be in here
llvm-svn: 16598
2004-09-29 21:21:14 +00:00
Chris Lattner 6862fbd2cf * Pull range optimization code out into new InsertRangeTest function.
* SubOne/AddOne functions always return ConstantInt, declare them as such
* Pull code for handling setcc X, cst, where cst is at the end of the range,
  or cc is LE or GE up earlier in visitSetCondInst.  This reduces #iterations
  in some cases.
* Fold: (div X, C1) op C2 -> range check, implementing div.ll:test6 - test9.

llvm-svn: 16588
2004-09-29 17:40:11 +00:00
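A brute-force spot check of the div-to-range-check idea from the last bullet, using X / 10 == 2 as a concrete instance (a sanity check over a limited range, not a proof):

#include <cassert>

// With truncating signed division, X / 10 == 2 holds exactly when 20 <= X <= 29,
// so the compare of a divide becomes a single range test.
int main() {
    for (int x = -1000; x <= 1000; ++x)
        assert((x / 10 == 2) == (x >= 20 && x <= 29));
    return 0;
}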
Chris Lattner 6a4adcda4c Fold binary expressions and casts into PHI nodes that have all constant inputs.
This takes something like this:

%A = phi int [ 3, %cond_false.0 ], [ 2, %endif.0.i ], [ 2, %endif.1.i ]
%B = div int %tmp.243, 4

and turns it into:

%A = phi int [ 3/4, %cond_false.0 ], [ 2/4, %endif.0.i ], [ 2/4, %endif.1.i ]

which is later simplified (in this case) into %A = 0.

This triggers thousands of times in spec, for example, 269 times in 176.gcc.

This is tested by InstCombine/add.ll:test23 and set.ll:test18.

llvm-svn: 16582
2004-09-29 05:07:12 +00:00
Chris Lattner c949128b2f Hrm, really, all tests passed without this, but it is scary to think how...
llvm-svn: 16568
2004-09-29 03:16:24 +00:00
Chris Lattner be7a69ebd8 Remove debugging printout
Instcombine (setcc (truncate X), C1).

This occurs THOUSANDS of times in many benchmarks.  Particularly common
are things like (seteq (cast bool X to int), int 0)

This turns it into (seteq bool %X, false), which then becomes (not %X).

llvm-svn: 16567
2004-09-29 03:09:18 +00:00
Chris Lattner dcf756ec22 Fold (X setcc C1) | (X setcc C2)
This implements or.ll:test1[89]

llvm-svn: 16561
2004-09-28 22:33:08 +00:00
Chris Lattner 623826c888 Fold (and (setcc X, C1), (setcc X, C2))
This is important for several reasons:

1. Benchmarks have lots of code that looks like this (perlbmk in particular):

  %tmp.2.i = setne int %tmp.0.i, 128              ; <bool> [#uses=1]
  %tmp.6343 = seteq int %tmp.0.i, 1               ; <bool> [#uses=1]
  %tmp.63 = and bool %tmp.2.i, %tmp.6343          ; <bool> [#uses=1]

   we now fold away the setne, a clear improvement.

2. In the more important cases, such as (X >= 10) & (X < 20), we now produce
   smaller code: (X-10) < 10.

3. Perhaps the nicest effect of this patch is that it really helps out the
   code generators.  In particular, for a 'range test' like the above,
   instead of generating this on X86 (the difference on PPC is even more
   pronounced):

        cmp %EAX, 50
        setge %CL
        cmp %EAX, 100
        setl %AL
        and %CL, %AL
        cmp %CL, 0

   we now generate this:

        add %EAX, -50
        cmp %EAX, 50

   Furthermore, this causes setcc's to be folded into branches more often.

These combinations trigger dozens of times in the spec benchmarks, particularly
in 176.gcc, 186.crafty, 253.perlbmk, 254.gap, & 099.go.

llvm-svn: 16559
2004-09-28 21:48:02 +00:00
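A brute-force spot check of the range-test rewrite described above, expressed in C++ with an explicit unsigned subtract-and-compare (a sanity check over a limited range, not a proof):

#include <cassert>
#include <cstdint>

int main() {
    for (int x = -1000; x <= 1000; ++x) {
        bool two_compares = (x >= 10) && (x < 20);
        bool range_test   = ((uint32_t)x - 10u) < 10u;  // one unsigned compare
        assert(two_compares == range_test);
    }
    return 0;
}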
Chris Lattner 272d5ca9e0 Implement X / C1 / C2 folding
Implement (setcc (shl X, C1), C2) folding.

The second one occurs several dozen times in spec.  The first was added
just in case.  :)

These are tested by shift.ll:test2[12], and div.ll:test5

llvm-svn: 16549
2004-09-28 18:22:15 +00:00
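A brute-force spot check of the identity presumably behind the X / C1 / C2 fold: for unsigned values, two divides combine into one divide by the product, provided C1*C2 does not overflow. C1 = 6 and C2 = 7 are arbitrary:

#include <cassert>
#include <cstdint>

int main() {
    for (uint32_t x = 0; x < 100000; ++x)
        assert((x / 6u) / 7u == x / 42u);
    return 0;
}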
Chris Lattner 6afc02f816 shl is always zero extending, so always use a zero extending shift right.
This latent bug was exposed by recent changes, and is tested as:
llvm/test/Regression/Transforms/InstCombine/2004-09-28-BadShiftAndSetCC.llx

llvm-svn: 16546
2004-09-28 17:54:07 +00:00
Chris Lattner bfff18a869 Fix two bugs: one where a condition was mistakenly swapped, and another
where we folded (X & 254) -> X < 1 instead of X < 2.  These problems were
latent problems exposed by the latest patch.

llvm-svn: 16528
2004-09-27 19:29:18 +00:00
Chris Lattner 1023b8726e Fold: (setcc (shr X, ShAmt), CI), where 'cc' is eq or ne. This xform
triggers often, for example:

6x in povray, 1x in gzip, 279x in gcc, 1x in crafty, 8x in eon, 11x in perlbmk,
362x in gap, 4x in vortex, 14x in m88ksim, 211x in 126.gcc, 1x in compress,
11x in ijpeg, and 4x in 147.vortex.

llvm-svn: 16521
2004-09-27 16:18:50 +00:00
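A brute-force spot check of the eq case of this shr-then-compare fold for unsigned values (a sanity check over a limited range, not a proof):

#include <cassert>
#include <cstdint>

// (X >> 3) == 5 holds exactly when X is in [40, 47], i.e. a simple range test.
int main() {
    for (uint32_t x = 0; x < 100000; ++x)
        assert(((x >> 3) == 5u) == (x >= 40u && x <= 47u));
    return 0;
}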
Chris Lattner 7e794273f5 Implement shift-and combinations, implementing InstCombine/and.ll:test19-21
These combinations trigger 4 times in povray, 7x in gcc, 4x in gap, and 2x in bzip2.

llvm-svn: 16508
2004-09-24 15:21:34 +00:00
Chris Lattner e1b4d2a470 Move LHSI->hasOneUse() into the arms of the conditional, reindenting code.
No functionality changes here.

llvm-svn: 16505
2004-09-23 21:52:49 +00:00
Chris Lattner 8fc5af4da9 Implement Transforms/InstCombine/and.ll:test18, a case that occurs 20 times
in perlbmk

llvm-svn: 16504
2004-09-23 21:46:38 +00:00
Chris Lattner bdcf41a8a2 Implement select.ll:test16: fold load (select C, X, null) -> load X
llvm-svn: 16499
2004-09-23 15:46:00 +00:00
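A rough C++ analogue of the select.ll:test16 fold (the function names are made up for the example): because a load from a null pointer is undefined, a load whose address is (cond ? p : null) may be treated as a plain load from p.

int before(bool cond, int *p) {
    return *(cond ? p : nullptr);  // address selected, then loaded
}

int after(bool cond, int *p) {
    (void)cond;
    return *p;  // select folded away
}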
Chris Lattner b121ae1cec Do not fold (X + C1 != C2) if there are other users of the add. Doing
this transformation used to take a loop like this:

int Array[1000];
void test(int X) {
  int i;
  for (i = 0; i < 1000; ++i)
    Array[i] += X;
}

Compiled to LLVM is:

no_exit:                ; preds = %entry, %no_exit
        %indvar = phi uint [ 0, %entry ], [ %indvar.next, %no_exit ]            ; <uint> [#uses=2]
        %tmp.4 = getelementptr [1000 x int]* %Array, int 0, uint %indvar                ; <int*> [#uses=2]
        %tmp.7 = load int* %tmp.4               ; <int> [#uses=1]
        %tmp.9 = add int %tmp.7, %X             ; <int> [#uses=1]
        store int %tmp.9, int* %tmp.4
***     %indvar.next = add uint %indvar, 1              ; <uint> [#uses=2]
***     %exitcond = seteq uint %indvar.next, 1000               ; <bool> [#uses=1]
        br bool %exitcond, label %return, label %no_exit

and turn it into a loop like this:

no_exit:                ; preds = %entry, %no_exit
        %indvar = phi uint [ 0, %entry ], [ %indvar.next, %no_exit ]            ; <uint> [#uses=3]
        %tmp.4 = getelementptr [1000 x int]* %Array, int 0, uint %indvar                ; <int*> [#uses=2]
        %tmp.7 = load int* %tmp.4               ; <int> [#uses=1]
        %tmp.9 = add int %tmp.7, %X             ; <int> [#uses=1]
        store int %tmp.9, int* %tmp.4
***     %indvar.next = add uint %indvar, 1              ; <uint> [#uses=1]
***     %exitcond = seteq uint %indvar, 999             ; <bool> [#uses=1]
        br bool %exitcond, label %return, label %no_exit

Note that indvar.next and indvar can no longer be coalesced.  In machine
code terms, this patch changes this code:

.LBBtest_1:     # no_exit
        mov %EDX, OFFSET Array
        mov %ESI, %EAX
        add %ESI, DWORD PTR [%EDX + 4*%ECX]
        mov %EDX, OFFSET Array
        mov DWORD PTR [%EDX + 4*%ECX], %ESI
        mov %EDX, %ECX
        inc %EDX
        cmp %ECX, 999
        mov %ECX, %EDX
        jne .LBBtest_1  # no_exit

into this:

.LBBtest_1:     # no_exit
        mov %EDX, OFFSET Array
        mov %ESI, %EAX
        add %ESI, DWORD PTR [%EDX + 4*%ECX]
        mov %EDX, OFFSET Array
        mov DWORD PTR [%EDX + 4*%ECX], %ESI
        inc %ECX
        cmp %ECX, 1000
        jne .LBBtest_1  # no_exit

We need better instruction selection to get this:

.LBBtest_1:     # no_exit
        add DWORD PTR [Array + 4*%ECX], EAX
        inc %ECX
        cmp %ECX, 1000
        jne .LBBtest_1  # no_exit

... but at least there is less register juggling

llvm-svn: 16473
2004-09-21 21:35:23 +00:00
Chris Lattner 42618551d5 Fix potential miscompilations: InstCombine/2004-09-20-BadLoadCombine*.llx
llvm-svn: 16447
2004-09-20 10:15:10 +00:00
Alkis Evlogimenos d59cebf87a Fix loop condition so that we don't decrement off the beginning of the
list.

llvm-svn: 16440
2004-09-20 06:42:58 +00:00
Chris Lattner e6f13093e6 Make isSafeToLoadUnconditionally a bit smarter, implementing PR362 and
Regression/Transforms/InstCombine/CPP_min_max.llx

llvm-svn: 16409
2004-09-19 19:18:10 +00:00
Chris Lattner f62ea8ef4b Make instruction combining a bit more aggressive in the face of volatile
loads, and implement two new transforms: InstCombine/load.ll:test[56].

llvm-svn: 16404
2004-09-19 18:43:46 +00:00
Reid Spencer 7c16caa336 Changes For Bug 352
Move include/Config and include/Support into include/llvm/Config,
include/llvm/ADT and include/llvm/Support. From here on out, all LLVM
public header files must be under include/llvm/.

llvm-svn: 16137
2004-09-01 22:55:40 +00:00
Chris Lattner 4456da6a4c Fix InstCombine/2004-08-10-BoolSetCC.ll, a bug that is miscompiling
176.gcc.  Note that this is apparently not the only bug miscompiling gcc
though. :(

llvm-svn: 15639
2004-08-11 00:50:51 +00:00
Chris Lattner 8e7260652b Fix InstCombine/2004-08-09-RemInfLoop.llx
This should go into the 1.3 branch

llvm-svn: 15593
2004-08-09 21:05:48 +00:00
Alkis Evlogimenos 832437255d Stop using getValues().
llvm-svn: 15487
2004-08-04 08:44:43 +00:00
Chris Lattner 7aa2d4747a Fix a regression in InstCombine/xor.ll
llvm-svn: 15410
2004-08-01 19:42:59 +00:00
Misha Brukman 9c003d8f65 Fix De Morgan's name.
llvm-svn: 15343
2004-07-30 12:50:08 +00:00
Chris Lattner d4252a7c64 Start using the PatternMatcher a bit.
llvm-svn: 15342
2004-07-30 07:50:03 +00:00
Robert Bocchino 7b5b86cd0f This change fixed a bug in the function visitMul. The prior version
assumed that a constant on the RHS of a multiplication was either an
IntConstant or an FPConstant.  It checked for an IntConstant and then,
if it did not find one, did a hard cast to an FPConstant.  That code
would crash if the RHS were a ConstantExpr that was neither an
IntConstant nor an FPConstant.  This version replaces the hard cast
with a dyn_cast.  It performs the same way for IntConstants and
FPConstants but does nothing, instead of crashing, for constant
expressions.

The regression test for this change is 2004-07-27-ConstantExprMul.ll.

llvm-svn: 15291
2004-07-27 21:02:21 +00:00
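An illustrative C++ sketch of the difference between the buggy unchecked cast and the checked one; plain dynamic_cast stands in for LLVM's cast<>/dyn_cast<>, and the small class hierarchy is made up for the example, not the 2004 constant classes:

#include <cstdio>

struct Constant        { virtual ~Constant() = default; };
struct IntConstant     : Constant { long long val = 0; };
struct FPConstant      : Constant { double val = 0.0; };
struct ConstantExprRHS : Constant { };  // neither int nor fp

void visit_rhs(Constant *rhs) {
    // Buggy pattern: assume "not an IntConstant" means FPConstant and cast blindly:
    //   FPConstant *fp = static_cast<FPConstant *>(rhs);  // wrong type => undefined behavior
    // Fixed pattern: a checked cast that yields null when the type does not match.
    if (auto *fp = dynamic_cast<FPConstant *>(rhs))
        std::printf("fp constant: %f\n", fp->val);
    else
        std::printf("not an fp constant; do nothing\n");
}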
Brian Gaeke 38b79e8fbc Make the create...() functions for some of these passes return a FunctionPass *.
llvm-svn: 15276
2004-07-27 17:43:21 +00:00
Chris Lattner d8f5e2ccac * Further cleanup.
* Test for whether bits are shifted out during the optimization.

If so, the fold is illegal, though it can be handled explicitly for setne/seteq

This fixes the miscompilation of 254.gap last night, which was a latent bug
exposed by other optimizer improvements.

llvm-svn: 15085
2004-07-21 20:14:10 +00:00
Chris Lattner 1638de4499 Make cast-cast code a bit more defensive
"simplify" a bit of code for comparison/and folding

llvm-svn: 15082
2004-07-21 19:50:44 +00:00
Chris Lattner 4fbad968f8 Remove special casing of pointers and treat them generically as integers of
the appropriate size.  This gives us the ability to eliminate int -> ptr -> int

llvm-svn: 15063
2004-07-21 04:27:24 +00:00
Chris Lattner 11ffd59e37 Implement Transforms/InstCombine/IntPtrCast.ll
llvm-svn: 15029
2004-07-20 05:21:00 +00:00
Chris Lattner 44d0b9502a Implement InstCombine/GEPIdxCanon.ll
llvm-svn: 15024
2004-07-20 01:48:15 +00:00
Chris Lattner 4e2dbc6b4a Rewrite cast->cast elimination code completely based on the information we
actually care about.  Someday when the cast instruction is gone, we can do
better here, but this will do for now.  This implements
instcombine/cast.ll:test17/18 as well.

llvm-svn: 15018
2004-07-20 00:59:32 +00:00