Chris Lattner
36dd7c98d1
Turn x86 unaligned load/store intrinsics into aligned load/store instructions
if the pointer is known aligned.
llvm-svn: 27781
2006-04-17 22:26:56 +00:00
Chris Lattner
9095186deb
Fix a bug in the 'shuffle(undef,x,mask) -> shuffle(x, undef,mask')' xform
Make the insert/extract elt -> shuffle code more aggressive.
This fixes CodeGen/PowerPC/vec_shuffle.ll
llvm-svn: 27728
2006-04-16 00:51:47 +00:00
Chris Lattner
34cebe785d
Canonicalize shuffle(undef,x,mask) -> shuffle(x, undef,mask').
llvm-svn: 27727
2006-04-16 00:03:56 +00:00
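
The mask rewrite behind this canonicalization is mechanical: indices below the vector width select from the first operand and the rest from the second, so swapping operands means moving every index across that boundary. A minimal standalone sketch in C++ (the function name and the width of 4 are illustrative, not LLVM's API):

#include <cassert>
#include <vector>

// Remap a shuffle mask after swapping the two source operands.
// Lanes 0..n-1 pick from the first operand, n..2n-1 from the second,
// so each index moves across the n boundary. -1 marks an undef lane.
std::vector<int> swapShuffleMask(const std::vector<int>& mask, int n) {
  std::vector<int> out;
  for (int idx : mask)
    out.push_back(idx < 0 ? -1 : (idx < n ? idx + n : idx - n));
  return out;
}

int main() {
  // shuffle(undef, x, <4,5,6,7>) selects all of x; after swapping the
  // operands the same selection is shuffle(x, undef, <0,1,2,3>).
  assert((swapShuffleMask({4, 5, 6, 7}, 4) == std::vector<int>{0, 1, 2, 3}));
}
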
Chris Lattner
39fac448d6
Significant cleanups to code that uses insert/extractelt heavily. This builds
maximal shuffles out of them where possible.
llvm-svn: 27717
2006-04-15 01:39:45 +00:00
Chris Lattner
b19a5c661b
Turn casts into getelementptr's when possible. This enables SROA to be more
aggressive in some cases where LLVMGCC 4 is inserting casts for no reason.
This implements InstCombine/cast.ll:test27/28.
llvm-svn: 27620
2006-04-12 18:09:35 +00:00
Chris Lattner
2d37f920ad
Implement vec_shuffle.ll:test3
llvm-svn: 27573
2006-04-10 23:06:36 +00:00
Chris Lattner
fbb77a408b
Implement InstCombine/vec_shuffle.ll:test[12]
llvm-svn: 27571
2006-04-10 22:45:52 +00:00
Chris Lattner
e79d249c29
Lower vperm(x,y, mask) -> shuffle(x,y,mask) if mask is constant. This allows
us to compile oh-so-realistic stuff like this:
vec_vperm(A, B, (vector unsigned char){14});
to:
vspltb v0, v0, 14
instead of:
vspltisb v0, 14
vperm v0, v2, v1, v0
llvm-svn: 27452
2006-04-06 19:19:17 +00:00
Chris Lattner
caba72b6ff
Vector casts of casts are eliminable. Transform this:
%tmp = cast <4 x uint> %tmp to <4 x int> ; <<4 x int>> [#uses=1]
%tmp = cast <4 x int> %tmp to <4 x float> ; <<4 x float>> [#uses=1]
into:
%tmp = cast <4 x uint> %tmp to <4 x float> ; <<4 x float>> [#uses=1]
llvm-svn: 27355
2006-04-02 05:43:13 +00:00
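
Both casts only reinterpret bits, so composing them is the same as one reinterpretation. A scalar sanity check of that reasoning, using C++20's std::bit_cast as a stand-in for the vector bitcasts (the scalar types are illustrative):

#include <bit>
#include <cassert>
#include <cstdint>

int main() {
  uint32_t u = 0x40490FDBu; // bit pattern of 3.14159f
  // Reinterpreting uint -> int -> float yields the same bits as uint -> float.
  float twoStep = std::bit_cast<float>(std::bit_cast<int32_t>(u));
  float oneStep = std::bit_cast<float>(u);
  assert(std::bit_cast<uint32_t>(twoStep) == std::bit_cast<uint32_t>(oneStep));
}
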
Chris Lattner
ebca476b27
Allow transforming this:
%tmp = cast <4 x uint>* %testData to <4 x int>* ; <<4 x int>*> [#uses=1]
%tmp = load <4 x int>* %tmp ; <<4 x int>> [#uses=1]
to this:
%tmp = load <4 x uint>* %testData ; <<4 x uint>> [#uses=1]
%tmp = cast <4 x uint> %tmp to <4 x int> ; <<4 x int>> [#uses=1]
llvm-svn: 27353
2006-04-02 05:37:12 +00:00
Chris Lattner
f42d0aeda1
Turn altivec lvx/stvx intrinsics into loads and stores. This allows the
elimination of one load from this:
int AreSecondAndThirdElementsBothNegative( vector float *in ) {
#define QNaN 0x7FC00000
const vector unsigned int testData = (vector unsigned int)( QNaN, 0, 0, QNaN );
vector float test = vec_ld( 0, (float*) &testData );
return ! vec_any_ge( test, *in );
}
Now generating:
_AreSecondAndThirdElementsBothNegative:
mfspr r2, 256
oris r4, r2, 49152
mtspr 256, r4
li r4, lo16(LCPI1_0)
lis r5, ha16(LCPI1_0)
addi r6, r1, -16
lvx v0, r5, r4
stvx v0, 0, r6
lvx v1, 0, r3
vcmpgefp. v0, v0, v1
mfcr r3, 2
rlwinm r3, r3, 27, 31, 31
xori r3, r3, 1
cntlzw r3, r3
srwi r3, r3, 5
mtspr 256, r2
blr
llvm-svn: 27352
2006-04-02 05:30:25 +00:00
Chris Lattner
6cf4914fd4
Fix InstCombine/2006-04-01-InfLoop.ll
llvm-svn: 27330
2006-04-01 22:05:01 +00:00
Chris Lattner
dcd0792622
Fold A^(B&A) -> (B&A)^A
Fold (B&A)^A -> ~B & A
This implements InstCombine/xor.ll:test2[56]
llvm-svn: 27328
2006-04-01 08:03:55 +00:00
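
Both folds are unconditional: the first is just commutativity of xor, and for the second, wherever a bit of A is 0 both sides are 0, and wherever it is 1 both sides give the complement of B's bit. An exhaustive 8-bit check (a standalone sketch, not the InstCombine code):

#include <cassert>
#include <cstdint>

int main() {
  for (unsigned a = 0; a < 256; ++a)
    for (unsigned b = 0; b < 256; ++b) {
      uint8_t A = a, B = b;
      assert((uint8_t)(A ^ (B & A)) == (uint8_t)((B & A) ^ A)); // xor commutes
      assert((uint8_t)((B & A) ^ A) == (uint8_t)(~B & A));      // the new fold
    }
}
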
Chris Lattner
8d1d8d364c
If we can look through vector operations to find the scalar version of an
extract_element'd value, do so.
llvm-svn: 27323
2006-03-31 23:01:56 +00:00
Chris Lattner
92346c315e
extractelement(undef,x) -> undef
llvm-svn: 27300
2006-03-31 18:25:14 +00:00
Chris Lattner
612fa8e6f3
Fix Transforms/InstCombine/2006-03-30-ExtractElement.ll
llvm-svn: 27261
2006-03-30 22:02:40 +00:00
Chris Lattner
d70d9f5b24
Don't crash on packed logical ops
llvm-svn: 27125
2006-03-25 21:58:26 +00:00
Jim Laskey
83f99115db
Can't combine anymore - we don't have a chain through llvm.dbg intrinsics.
llvm-svn: 26992
2006-03-23 18:10:42 +00:00
Chris Lattner
53ef5a032c
Teach the alignment handling code to look through constant expr casts and GEPs
llvm-svn: 26580
2006-03-07 01:28:57 +00:00
Chris Lattner
82f2ef20b6
Teach instcombine to increase the alignment of memset/memcpy/memmove when
the pointer is known to come from a global variable, an alloca, or
malloc. This allows us to compile this:
P = malloc(28);
memset(P, 0, 28);
into explicit stores on PPC instead of a memset call.
llvm-svn: 26577
2006-03-06 20:18:44 +00:00
Chris Lattner
6bc98653c2
Make vector narrowing more effective, implementing
Transforms/InstCombine/vec_narrow.ll. This adds support for narrowing
extract_element(insertelement) as well.
llvm-svn: 26538
2006-03-05 00:22:33 +00:00
Chris Lattner
32c01df299
Canonicalize (X+C1)*C2 -> X*C2+C1*C2
This implements Transforms/InstCombine/add.ll:test31
llvm-svn: 26519
2006-03-04 06:04:02 +00:00
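
The canonicalization rests on distributivity, which survives wraparound because addition and multiplication agree modulo 2^n. A quick standalone check over uint32_t, with arbitrary illustrative constants:

#include <cassert>
#include <cstdint>

int main() {
  uint32_t c1 = 12345, c2 = 98765;
  for (uint64_t i = 0; i < (1u << 20); i += 4099) {
    uint32_t X = (uint32_t)i * 2654435761u; // scatter the sample points
    // (X + C1) * C2 == X * C2 + C1 * C2, modulo 2^32.
    assert((X + c1) * c2 == X * c2 + c1 * c2);
  }
}
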
Chris Lattner
681ef2f083
Change this to work with renamed intrinsics.
llvm-svn: 26484
2006-03-03 01:34:17 +00:00
Chris Lattner
85dda9a2bd
Generalize the REM folding code to handle another case Nick Lewycky
pointed out: realize that the AND can provide factors, and look through casts.
llvm-svn: 26469
2006-03-02 06:50:58 +00:00
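
"The AND can provide factors" refers to the fact that masking off low bits leaves a multiple of a power of two, so a remainder by that power (or any divisor of it) is zero. A standalone illustration of the underlying fact:

#include <cassert>
#include <cstdint>

int main() {
  for (uint32_t x = 0; x < (1u << 16); ++x) {
    // x & ~7 clears the low three bits, so the result is a multiple of 8,
    // and the remainder by 8 (or by 2 or 4, which divide 8) must be zero.
    assert((x & ~7u) % 8 == 0);
    assert((x & ~7u) % 4 == 0);
  }
}
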
Chris Lattner
c5b6c9a12a
Fix a regression in a patch from a couple of days ago. This fixes
Transforms/InstCombine/2006-02-28-Crash.ll
llvm-svn: 26427
2006-02-28 19:47:20 +00:00
Chris Lattner
b70f141893
Implement rem.ll:test[7-9] and PR712
llvm-svn: 26415
2006-02-28 05:49:21 +00:00
Chris Lattner
2a7c7b8bab
Simplify some code now that the RHS of a rem can't be 0
llvm-svn: 26413
2006-02-28 05:40:55 +00:00
Chris Lattner
0de4a8d7b7
Rearrange some code, fold "rem X, 0", implementing rem.ll:test6
llvm-svn: 26411
2006-02-28 05:30:45 +00:00
Chris Lattner
c7bfed0f7b
Merge two almost-identical pieces of code.
Make this code more powerful by using ComputeMaskedBits instead of looking
for an AND operand. This lets us fold this:
int %test23(int %a) {
%tmp.1 = and int %a, 1
%tmp.2 = seteq int %tmp.1, 0
%tmp.3 = cast bool %tmp.2 to int ;; xor tmp1, 1
ret int %tmp.3
}
into: xor (and a, 1), 1
llvm-svn: 26396
2006-02-27 02:38:23 +00:00
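
The fold works because ComputeMaskedBits proves %tmp.1 is 0 or 1, and for such a value comparing against 0 then widening the bool is the same as flipping the low bit. The equivalence, checked exhaustively in a standalone sketch:

#include <cassert>

int main() {
  for (int a = -128; a < 128; ++a) {
    int t = a & 1;                   // known to be 0 or 1
    int viaCompare = (t == 0) ? 1 : 0;
    int viaXor = t ^ 1;              // the folded form: xor (and a, 1), 1
    assert(viaCompare == viaXor);
  }
}
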
Chris Lattner
f5c8a0b83f
Fold (A^B) == A -> B == 0
and (A-B) == A -> B == 0
llvm-svn: 26394
2006-02-27 01:44:11 +00:00
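
Both follow from cancellation: xor-by-B and subtract-B are invertible, so the result equals A only when B acts as the identity, i.e. B == 0. A standalone 8-bit check:

#include <cassert>
#include <cstdint>

int main() {
  for (unsigned a = 0; a < 256; ++a)
    for (unsigned b = 0; b < 256; ++b) {
      uint8_t A = a, B = b;
      assert(((uint8_t)(A ^ B) == A) == (B == 0));
      assert(((uint8_t)(A - B) == A) == (B == 0)); // subtraction modulo 2^8
    }
}
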
Chris Lattner
f78df7c14d
Fold (X|C1)^C2 -> X^(C1|C2) when possible. This implements
InstCombine/or.ll:test23.
llvm-svn: 26385
2006-02-26 19:57:54 +00:00
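
The log leaves the "when possible" condition implicit. Per-bit reasoning suggests one: where C1 is 0 both sides reduce to X^C2, and where C1 is 1 the left side is ~C2 while the right is ~X, so the rewrite is exact precisely when the bits of X under C1 are known to equal the corresponding bits of C2. A standalone exhaustive check of that derived condition (an inference from the algebra, not a statement of what the patch tests):

#include <cassert>
#include <cstdint>

int main() {
  for (unsigned x = 0; x < 256; ++x)
    for (unsigned c1 = 0; c1 < 256; ++c1)
      for (unsigned c2 = 0; c2 < 256; ++c2)
        // Only when X's bits under C1 match C2's bits is the fold exact.
        if ((x & c1) == (c2 & c1))
          assert((uint8_t)((x | c1) ^ c2) == (uint8_t)(x ^ (c1 | c2)));
}
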
Chris Lattner
b580d26e7d
Fix a problem that Nate noticed that boils down to an over-conservative check
in the code that does "select C, (X+Y), (X-Y) --> (X+(select C, Y, (-Y)))".
We now compile this loop:
LBB1_1: ; no_exit
add r6, r2, r3
subf r3, r2, r3
cmpwi cr0, r2, 0
addi r7, r5, 4
lwz r2, 0(r5)
addi r4, r4, 1
blt cr0, LBB1_4 ; no_exit
LBB1_3: ; no_exit
mr r3, r6
LBB1_4: ; no_exit
cmpwi cr0, r4, 16
mr r5, r7
bne cr0, LBB1_1 ; no_exit
into this instead:
LBB1_1: ; no_exit
srawi r6, r2, 31
add r2, r2, r6
xor r6, r2, r6
addi r7, r5, 4
lwz r2, 0(r5)
addi r4, r4, 1
add r3, r3, r6
cmpwi cr0, r4, 16
mr r5, r7
bne cr0, LBB1_1 ; no_exit
llvm-svn: 26356
2006-02-24 18:05:58 +00:00
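
The tighter loop body is the classic branchless absolute value: with s = x >> 31 (an arithmetic shift, all copies of the sign bit), (x + s) ^ s yields |x|, which is what the srawi/add/xor sequence computes. A standalone check of the identity (valid for every int32 except INT_MIN, whose magnitude is unrepresentable):

#include <cassert>
#include <cstdint>
#include <cstdlib>

int main() {
  int32_t samples[] = {0, 1, -1, 42, -42, 2147483647, -2147483647};
  for (int32_t x : samples) {
    int32_t s = x >> 31;        // 0 for non-negative, -1 for negative
    assert(((x + s) ^ s) == std::abs(x));
  }
}
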
Jeff Cohen
0add83e969
Fix bugs identified by VC++.
llvm-svn: 26287
2006-02-18 03:20:33 +00:00
Nate Begeman
8a77efe4f7
Rework the SelectionDAG-based implementations of SimplifyDemandedBits
and ComputeMaskedBits to match the new improved versions in instcombine.
Tested against all of multisource/benchmarks on ppc.
llvm-svn: 26238
2006-02-16 21:11:51 +00:00
Chris Lattner
8b10ab3002
Implement Instcombine/and.ll:test34
llvm-svn: 26155
2006-02-13 23:07:23 +00:00
Chris Lattner
7d8522884b
If any of the sign-extended bits are demanded, the input sign bit is demanded
for a sign extension.
This fixes InstCombine/2006-02-13-DemandedMiscompile.ll and Ptrdist/bc.
llvm-svn: 26152
2006-02-13 22:41:07 +00:00
Chris Lattner
68e7475777
Be careful not to request or look at bits shifted in from outside the size
of the input. This fixes the mediabench/gsm/toast failure last night.
llvm-svn: 26138
2006-02-13 06:09:08 +00:00
Chris Lattner
f5b4ef7f58
Remove some more dead special-case code.
llvm-svn: 26135
2006-02-12 08:07:37 +00:00
Chris Lattner
5b2edb1fca
Eliminate special-case hacks that are superseded by general-purpose hacks.
llvm-svn: 26134
2006-02-12 08:02:11 +00:00
Chris Lattner
ee0f280743
Three changes:
1. Teach GetConstantInType to handle boolean constants.
2. Teach instcombine to fold (compare X, CST) when X has known 0/1 bits.
Testcase here: set.ll:test22
3. Improve the "(X >> c1) & C2 == 0" folding code to allow a no-op cast
between the shift and the and; more aggressive bit-folding for other reasons
was turning signed shr's into unsigned shr's, leaving the no-op cast in
the way (see the sketch after this entry).
llvm-svn: 26131
2006-02-12 02:07:56 +00:00
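
Point 3 rests on the equivalence (X >> c1) & C2 == 0 <=> X & (C2 << c1) == 0, valid whenever C2 << c1 loses no mask bits. A standalone 8-bit check (the guard states the applicability condition):

#include <cassert>
#include <cstdint>

int main() {
  for (unsigned x = 0; x < 256; ++x)
    for (unsigned c1 = 0; c1 < 8; ++c1)
      for (unsigned c2 = 0; c2 < 256; ++c2) {
        uint8_t shifted = (uint8_t)(c2 << c1);
        if ((unsigned)(shifted >> c1) != c2)
          continue; // C2 << c1 lost mask bits; the fold does not apply
        bool viaShift = ((uint8_t)(x >> c1) & c2) == 0;
        bool viaMask = (x & shifted) == 0;
        assert(viaShift == viaMask);
      }
}
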
Chris Lattner
0157e7f55b
Port the recent innovations in ComputeMaskedBits to SimplifyDemandedBits.
This allows us to simplify on conditions where bits are not known, but they
are not demanded either! This also fixes a couple of bugs in
ComputeMaskedBits that were exposed during this work.
In the future, swaths of instcombine should be removed, as this code
subsumes a bunch of ad-hockery.
llvm-svn: 26122
2006-02-11 09:31:47 +00:00
Chris Lattner
24cd2fa269
Fix 80-column violations
llvm-svn: 26088
2006-02-09 07:41:14 +00:00
Chris Lattner
4534dd59a3
Enhance MVIZ (MaskedValueIsZero) in three ways:
1. Teach it new tricks: in particular, how to propagate through signed shr and sexts.
2. Teach it to return a bitset of known-1 and known-0 bits, instead of just zero (sketched after this entry).
3. Teach instcombine (AND X, C) to fold when we know all C bits of X.
This implements Regression/Transforms/InstCombine/bittest.ll, and allows
future things to be simplified.
llvm-svn: 26087
2006-02-09 07:38:58 +00:00
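
Point 2 is the heart of the change: rather than answering only "are these bits zero?", the analysis tracks two bitsets, known-0 and known-1, and each operator gets a transfer rule. A minimal standalone sketch of that idea for a few bitwise operators (the struct and names are illustrative, not LLVM's interface):

#include <cassert>
#include <cstdint>

// Bits proven 0 and bits proven 1; a bit set in neither is unknown.
struct KnownBits {
  uint64_t Zero = 0, One = 0;
};

KnownBits knownAnd(KnownBits a, KnownBits b) {
  // A result bit is 0 if either input bit is 0, and 1 only if both are 1.
  return {a.Zero | b.Zero, a.One & b.One};
}

KnownBits knownOr(KnownBits a, KnownBits b) {
  return {a.Zero & b.Zero, a.One | b.One};
}

KnownBits knownXor(KnownBits a, KnownBits b) {
  // Known only where both inputs are known: equal bits give 0, unequal give 1.
  uint64_t one = (a.One & b.Zero) | (a.Zero & b.One);
  uint64_t zero = (a.Zero & b.Zero) | (a.One & b.One);
  return {zero, one};
}

int main() {
  KnownBits x{0x0F, 0x30};       // low nibble known 0, bits 4-5 known 1
  KnownBits m{~0x01ull, 0x01};   // the constant 1: every bit known
  KnownBits r = knownAnd(x, m);
  assert(r.Zero == ~0ull);       // (x & 1) is provably 0 here
}
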
Chris Lattner
ab2dc4d70d
Simplify some code, reducing calls to MaskedValueIsZero. Implement a minor
optimization where we reduce the number of bits in AND masks when possible.
llvm-svn: 26056
2006-02-08 07:34:50 +00:00
Chris Lattner
5997cf9381
Use EraseInstFromFunction in a few cases to put the uses of the removed
instruction onto the worklist (in case they are now dead).
Add a really trivial local DSE implementation to help out bitfield code.
We now fold this:
struct S {
unsigned char a : 1, b : 1, c : 1, d : 2, e : 3;
S();
};
S::S() : a(0), b(0), c(1), d(0), e(6) {}
to this:
void %_ZN1SC1Ev(%struct.S* %this) {
entry:
%tmp.1 = getelementptr %struct.S* %this, int 0, uint 0
store ubyte 38, ubyte* %tmp.1
ret void
}
much earlier (in gccas instead of only in gccld after DSE runs).
llvm-svn: 26050
2006-02-08 03:25:32 +00:00
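
The packed constant follows from big-endian PPC's MSB-first bit-field layout (a target-ABI assumption consistent with the byte stored above): a, b, c, d, e occupy bits 7, 6, 5, 4-3, and 2-0 of the byte, so c=1 contributes 32 and e=6 contributes 6. The arithmetic, as a one-line check:

static_assert((1 << 5) + 6 == 38, "c at bit 5 plus e in bits 2..0");
int main() {}
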
Chris Lattner
ddba3289b5
Fix a problem in my patch yesterday, causing a miscompilation of 176.gcc
llvm-svn: 26045
2006-02-08 01:20:23 +00:00
Chris Lattner
44314827d6
Fix Transforms/InstCombine/2006-02-07-SextZextCrash.ll
llvm-svn: 26040
2006-02-07 19:07:40 +00:00
Chris Lattner
92a6865321
Generalize MaskedValueIsZero into a ComputeMaskedNonZeroBits function, which
is just as efficient as MVIZ and is also more general.
Fix a few minor bugs introduced in recent patches.
llvm-svn: 26036
2006-02-07 08:05:22 +00:00
Chris Lattner
c3ebf40031
Make MaskedValueIsZero take a uint64_t instead of a ConstantIntegral as a
mask. This allows the code to be simpler and more efficient.
Also, generalize some of the cases in MVIZ a bit, making it slightly more aggressive.
llvm-svn: 26035
2006-02-07 07:27:52 +00:00
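
With the mask as a plain uint64_t, the query collapses to one bitwise test: every bit selected by the mask must lie in the known-zero set. A standalone sketch (the known-zero set is assumed to come from some known-bits analysis):

#include <cassert>
#include <cstdint>

// True iff every bit selected by Mask is proven zero, given the set of
// bits already known to be zero for the value in question.
bool maskedValueIsZero(uint64_t KnownZero, uint64_t Mask) {
  return (KnownZero & Mask) == Mask;
}

int main() {
  // After x &= 0xF0, every bit outside 4-7 is known zero, so a query
  // about bit 0 succeeds while a query about bit 4 does not.
  assert(maskedValueIsZero(~0xF0ull, 0x01));
  assert(!maskedValueIsZero(~0xF0ull, 0x10));
}
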
Chris Lattner
77defbae0a
Use Type::getIntegralTypeMask() to simplify some code
llvm-svn: 26034
2006-02-07 07:00:41 +00:00