Commit Graph

1194 Commits

Author SHA1 Message Date
Chris Lattner 94cfb281c3 make sure to set Changed=true when instcombine hacks on the code;
not doing so prevents it from properly iterating and prevents it
from deleting the entire body of dce-iterate.ll

llvm-svn: 63476
2009-01-31 07:04:22 +00:00
Mon P Wang 3537a62704 Fixed the optimization that combines two shuffles when the first shuffle's inputs
have a different number of elements than the output.

llvm-svn: 62998
2009-01-26 04:39:00 +00:00
Torok Edwin f4395ea97a testcase for PR3381.
Also it was an empty struct, not a void after all.

llvm-svn: 62920
2009-01-24 17:16:04 +00:00
Torok Edwin 73ff92272f void* is represented as a pointer to an empty struct {}.
Thus we need to check whether the struct is empty before trying to index into
it. This fixes PR3381.

llvm-svn: 62918
2009-01-24 11:30:49 +00:00
Chris Lattner 72cd68fe64 Make InstCombineStoreToCast handle aggregates more aggressively,
handling the case in Transforms/InstCombine/cast-store-gep.ll, which
is a heavily reduced testcase from Clang on x86-64.

llvm-svn: 62904
2009-01-24 01:00:13 +00:00
Chris Lattner 77527f5812 Remove uses of uint32_t in favor of 'unsigned' for better
compatibility with cygwin.  Patch by Jay Foad!

llvm-svn: 62695
2009-01-21 18:09:24 +00:00
Dale Johannesen b5721632ee Make special cases (0 inf nan) work for frem.
Besides APFloat, this involved removing code
from two places that thought they knew the
result of frem(0., x) but were wrong.

llvm-svn: 62645
2009-01-21 00:35:19 +00:00
Chris Lattner db2d9613d2 Fix PR3335 by not turning a store to one address space into a store to another.
llvm-svn: 62351
2009-01-16 20:12:52 +00:00
Chris Lattner 733256fe31 reduce indentation by using early exits, no functionality change.
llvm-svn: 62350
2009-01-16 20:08:59 +00:00
Evan Cheng beac6f8b0c Clean up previous cast optimization a bit. Also make zext elimination a bit more aggressive: if it's not necessary to emit an AND (i.e. high bits are already zero), it's profitable to evaluate the operand at a different type.
llvm-svn: 62297
2009-01-16 02:11:43 +00:00
Evan Cheng ff716cb342 Eliminate a redundant check.
llvm-svn: 62264
2009-01-15 17:09:07 +00:00
Evan Cheng 60e19a46f2 - Teach CanEvaluateInDifferentType of this xform: sext (zext ty1), ty2 -> zext ty2
- Look at the number of sign bits of the sext instruction to determine whether a new trunc + sext pair should be added when its source is being evaluated in a different type.

llvm-svn: 62263
2009-01-15 17:01:23 +00:00
Dan Gohman 59af77376c Make instcombine ensure that all allocas are explicitly aligned to at
least their preferred alignment.

llvm-svn: 62176
2009-01-13 20:18:38 +00:00
Duncan Sands dc020f9c3c Rename getABITypeSize to getTypePaddedSize, as
suggested by Chris.

llvm-svn: 62099
2009-01-12 20:38:59 +00:00
Chris Lattner bd3c7c8b52 Duncan is nervous about undefinedness of % with negatives. I'm
not thrilled about 64-bit % in general, so rewrite to use * instead.

llvm-svn: 62047
2009-01-11 20:41:36 +00:00
Chris Lattner b19151686f do not generate GEPs into vectors where they don't already exist.
We should treat vectors as atomic types, not like arrays.

llvm-svn: 62046
2009-01-11 20:23:52 +00:00
Chris Lattner 171d2d474f Make a couple of cleanups to the instcombine bitcast/gep
canonicalization transform based on duncan's comments:

1) improve the comment about %.
2) within our index loop make sure the offset stays 
   within the *type size*, instead of within the *abi size*.
   This allows us to reason explicitly about landing in tail
   padding and means that issues like non-zero offsets into
   [0 x foo] types don't occur anymore.

llvm-svn: 62045
2009-01-11 20:15:20 +00:00
Chris Lattner 5f54d50917 fix typo Duncan noticed.
llvm-svn: 61997
2009-01-09 18:31:39 +00:00
Chris Lattner f50aa6ae5c Implement rdar://6480391, extending of equality icmp's to avoid a truncation.
I noticed this in the code compiled for a routine using std::map, which produced
this code:
	%25 = tail call i32 @memcmp(i8* %24, i8* %23, i32 6) nounwind readonly
	%.lobit.i = lshr i32 %25, 31		; <i32> [#uses=1]
	%tmp.i = trunc i32 %.lobit.i to i8		; <i8> [#uses=1]
	%toBool = icmp eq i8 %tmp.i, 0		; <i1> [#uses=1]
	br i1 %toBool, label %bb3, label %bb4
which compiled to:

	call	L_memcmp$stub
	shrl	$31, %eax
	testb	%al, %al
	jne	LBB1_11	## 

with this change, we compile it to:

	call	L_memcmp$stub
	testl	%eax, %eax
	js	LBB1_11

This triggers all the time in common code, with patterns like this:

	%169 = and i32 %ply, 1		; <i32> [#uses=1]
	%170 = trunc i32 %169 to i8		; <i8> [#uses=1]
	%toBool = icmp ne i8 %170, 0		; <i1> [#uses=1]

 	%7 = lshr i32 %6, 24		; <i32> [#uses=1]
	%9 = trunc i32 %7 to i8		; <i8> [#uses=1]
	%10 = icmp ne i8 %9, 0		; <i1> [#uses=1]

etc

llvm-svn: 61985
2009-01-09 07:47:06 +00:00
Chris Lattner 0f7cf1d7e1 Remove some old code that looks like a remnant from signed-types days.
llvm-svn: 61984
2009-01-09 07:10:58 +00:00
Chris Lattner fef138b140 Fix part 3/2 of PR3290, making instcombine zap (gep(bitcast)) when possible.
llvm-svn: 61980
2009-01-09 05:44:56 +00:00
Chris Lattner a784a2ce01 move some code, check to see if the input to the GEP is a bitcast
(which is constant time and cheap) before checking hasAllZeroIndices.

llvm-svn: 61976
2009-01-09 04:53:57 +00:00
Chris Lattner 2fdcc59bb6 Change m_ConstantInt and m_SelectCst to take their constant integers
as template arguments instead of as instance variables, exposing more
optimization opportunities to the compiler earlier.

llvm-svn: 61776
2009-01-05 23:53:12 +00:00
Bill Wendling 0c04f9fdc3 Revert this transform. It was causing some dramatic slowdowns in a few tests. See PR3266.
llvm-svn: 61623
2009-01-04 06:19:11 +00:00
Bill Wendling 0fcff2c203 Fix comment.
llvm-svn: 61538
2009-01-01 01:19:59 +00:00
Bill Wendling aedb54a947 Add transformation:
xor (or (icmp, icmp), true) -> and(icmp, icmp)

This is possible because of De Morgan's law.

llvm-svn: 61537
2009-01-01 01:18:23 +00:00
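An illustrative IR sketch of the transform in the commit above (not from the commit; names and predicates are hypothetical): xor'ing the 'or' of two icmps with true is a 'not', which De Morgan's law turns into an 'and' of the inverted predicates.

        define i1 @before(i32 %x, i32 %y) {
          %a = icmp eq i32 %x, 0
          %b = icmp eq i32 %y, 0
          %o = or i1 %a, %b
          %r = xor i1 %o, true          ; not (a | b)
          ret i1 %r
        }

        define i1 @after(i32 %x, i32 %y) {
          %a = icmp ne i32 %x, 0        ; inverted predicates
          %b = icmp ne i32 %y, 0
          %r = and i1 %a, %b
          ret i1 %r
        }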
Nick Lewycky 4bc10c9e77 Remove redundant test for vector-nature. Scan the vector first to see whether
our optz'n will apply to it, then build the replacement vector only if needed.

llvm-svn: 61279
2008-12-20 16:48:00 +00:00
Nick Lewycky c3a70ade66 Oops! Left out a line.
Simplifying the sdiv might allow further simplifications for our users.

llvm-svn: 61196
2008-12-18 06:42:28 +00:00
Nick Lewycky 0f0e63fe73 Make all the vector elements positive in an srem of constant vector.
llvm-svn: 61195
2008-12-18 06:31:11 +00:00
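An illustrative sketch of the commit above (hypothetical constants, not from the commit): srem's result takes the sign of the dividend, so negative divisor elements can be replaced with their absolute values without changing any result.

        define <2 x i32> @before(<2 x i32> %x) {
          %r = srem <2 x i32> %x, <i32 2, i32 -3>
          ret <2 x i32> %r
        }

        define <2 x i32> @after(<2 x i32> %x) {
          %r = srem <2 x i32> %x, <i32 2, i32 3>   ; same result for every %x
          ret <2 x i32> %r
        }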
Bill Wendling 87beb9b909 Remove some errors that crept in. No functionality change.
llvm-svn: 60403
2008-12-02 06:24:20 +00:00
Bill Wendling 790b4bf9a9 Merge two if-statements into one.
llvm-svn: 60402
2008-12-02 06:22:04 +00:00
Bill Wendling 5635295266 More stylistic changes. No functionality change.
llvm-svn: 60401
2008-12-02 06:18:11 +00:00
Bill Wendling 85de4b35ca - Remove the buggy -X/C -> X/-C transform. This isn't valid when X isn't a
constant. If X is a constant, then this is folded elsewhere.

- Added a note to Target/README.txt to indicate that we'd like to implement
  this when we're able.

llvm-svn: 60399
2008-12-02 05:12:47 +00:00
Bill Wendling 5369db5917 Improve comment.
llvm-svn: 60398
2008-12-02 05:09:00 +00:00
Bill Wendling 21716dff5e - Reduce nesting.
- No need to do a swap on a canonicalized pattern.

No functionality change.

llvm-svn: 60397
2008-12-02 05:06:43 +00:00
Bill Wendling 6f71bce4cf Don't rebuild RHSNeg. Just use the one that's already there.
llvm-svn: 60370
2008-12-01 21:06:30 +00:00
Bill Wendling 84f6f2539f Document what this check is doing. Also, no need to cast to ConstantInt.
llvm-svn: 60369
2008-12-01 21:03:43 +00:00
Bill Wendling e6c87a4952 Use a simple comparison. Overflow on integer negation can only occur when the
integer is "minint".

llvm-svn: 60366
2008-12-01 19:46:27 +00:00
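A minimal illustration of the point above (hypothetical function, not from the commit): for i32, negation overflows only for the minimum value, so the overflow test reduces to a single equality comparison.

        define i1 @negation_would_overflow(i32 %x) {
          ; -x is unrepresentable only when %x is "minint" (-2^31)
          %r = icmp eq i32 %x, -2147483648
          ret i1 %r
        }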
Bill Wendling 47f733e4ea Generalize the FoldOrWithConstant method to fold for any two constants which
don't have overlapping bits.

llvm-svn: 60344
2008-12-01 08:32:40 +00:00
Bill Wendling 22e761b302 Reduce copy-and-paste code by splitting out the code into its own function.
llvm-svn: 60343
2008-12-01 08:23:25 +00:00
Bill Wendling 582fe6b0ca Use m_Specific() instead of double matching.
llvm-svn: 60341
2008-12-01 08:09:47 +00:00
Bill Wendling 4eecfb655b Move pattern check outside of the if-then statement. This prevents us from fiddling with constants unless we have to.
llvm-svn: 60340
2008-12-01 07:47:02 +00:00
Chris Lattner 9e6b243428 simplify these patterns using m_Specific. No need to grep for
xor in testcase (or is a substring).

llvm-svn: 60328
2008-12-01 05:16:26 +00:00
Chris Lattner 084b3a47d3 Change instcombine to use FoldPHIArgGEPIntoPHI to fold two operand PHIs
instead of using FoldPHIArgBinOpIntoPHI.  In addition to being more
obvious, this also fixes a problem where instcombine wouldn't merge two
phis that had different variable indices.  This prevented instcombine
from factoring big chunks of code in 403.gcc.  For example:

 insn_cuid.exit:                
-       %tmp336 = load i32** @uid_cuid, align 4      
-       %tmp337 = getelementptr %struct.rtx_def* %insn_addr.0.ph.i, i32 0, i32 3    
-       %tmp338 = bitcast [1 x %struct.rtunion]* %tmp337 to i32*               
-       %tmp339 = load i32* %tmp338, align 4           
-       %tmp340 = getelementptr i32* %tmp336, i32 %tmp339     
        br label %bb62
 
 bb61:       
-       %tmp341 = load i32** @uid_cuid, align 4     
-       %tmp342 = getelementptr %struct.rtx_def* %insn, i32 0, i32 3        
-       %tmp343 = bitcast [1 x %struct.rtunion]* %tmp342 to i32*           
-       %tmp344 = load i32* %tmp343, align 4        
-       %tmp345 = getelementptr i32* %tmp341, i32 %tmp344          
        br label %bb62
 
 bb62:      
-       %iftmp.62.0.in = phi i32* [ %tmp345, %bb61 ], [ %tmp340, %insn_cuid.exit ]         
+       %insn.pn2 = phi %struct.rtx_def* [ %insn, %bb61 ], [ %insn_addr.0.ph.i, %insn_cuid.exit ]         
+       %tmp344.pn.in.in = getelementptr %struct.rtx_def* %insn.pn2, i32 0, i32 3     
+       %tmp344.pn.in = bitcast [1 x %struct.rtunion]* %tmp344.pn.in.in to i32*  
+       %tmp341.pn = load i32** @uid_cuid     
+       %tmp344.pn = load i32* %tmp344.pn.in 
+       %iftmp.62.0.in = getelementptr i32* %tmp341.pn, i32 %tmp344.pn   
        %iftmp.62.0 = load i32* %iftmp.62.0.in     

llvm-svn: 60325
2008-12-01 03:42:51 +00:00
Chris Lattner 9d02a70a7d Teach inst combine to merge GEPs through PHIs. This is really
important because it is sinking the loads using the GEPs, but
not the GEPs themselves.  This triggers 647 times on 403.gcc
and makes the .s file much much nicer.  For example before:

        je      LBB1_87 ## bb78
LBB1_62:        ## bb77
        leal    84(%esi), %eax
LBB1_63:        ## bb79
        movl    (%eax), %eax
...
LBB1_87:        ## bb78
        movl    $0, 4(%esp)
        movl    %esi, (%esp)
        call    L_make_decl_rtl$stub
        jmp     LBB1_62 ## bb77


after:

        jne     LBB1_63 ## bb79
LBB1_62:        ## bb78
        movl    $0, 4(%esp)
        movl    %esi, (%esp)
        call    L_make_decl_rtl$stub
LBB1_63:        ## bb79
        movl    84(%esi), %eax

The input code was (and the GEPs are merged and
the PHI is now eliminated by instcombine):

        br i1 %tmp233, label %bb78, label %bb77
bb77:           
        %tmp234 = getelementptr %struct.tree_node* %t_addr.3, i32 0, i32 0, i32 22              
        br label %bb79
bb78:           
        call void @make_decl_rtl(%struct.tree_node* %t_addr.3, i8* null) nounwind
        %tmp235 = getelementptr %struct.tree_node* %t_addr.3, i32 0, i32 0, i32 22              
        br label %bb79
bb79:           
        %iftmp.12.0.in = phi %struct.rtx_def** [ %tmp235, %bb78 ], [ %tmp234, %bb77 ]           
        %iftmp.12.0 = load %struct.rtx_def** %iftmp.12.0.in             

llvm-svn: 60322
2008-12-01 02:34:36 +00:00
Bill Wendling 5b902c5b1e Implement ((A|B)&1)|(B&-2) -> (A&1) | B transformation. This also takes care of
permutations of this pattern.

llvm-svn: 60312
2008-12-01 01:07:11 +00:00
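Illustrative before/after IR for the pattern in the commit above (names hypothetical): bit 0 of the result is A[0]|B[0] and the remaining bits come from B either way, so the pair of masks can be dropped.

        define i32 @before(i32 %A, i32 %B) {
          %o  = or i32 %A, %B
          %t1 = and i32 %o, 1           ; (A|B) & 1
          %t2 = and i32 %B, -2          ; B & ~1
          %r  = or i32 %t1, %t2
          ret i32 %r
        }

        define i32 @after(i32 %A, i32 %B) {
          %t = and i32 %A, 1
          %r = or i32 %t, %B            ; (A & 1) | B
          ret i32 %r
        }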
Eli Friedman 11c15a5de7 Minor cleanup: use getTrue and getFalse where appropriate. No
functional change.

llvm-svn: 60307
2008-11-30 22:48:49 +00:00
Eli Friedman 55e4becba9 Some minor cleanups to instcombine; no functionality change.
Note that the FoldOpIntoPhi call is dead because it's impossible for the 
first operand of a subtraction to be both a ConstantInt and a PHINode.

llvm-svn: 60306
2008-11-30 21:09:11 +00:00
Bill Wendling de89bc275c Add instruction combining for ((A&~B)|(~A&B)) -> A^B and all permutations.
llvm-svn: 60291
2008-11-30 13:52:49 +00:00
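Illustrative before/after IR for the commit above (names hypothetical); the pattern is just the expanded definition of exclusive-or.

        define i32 @before(i32 %A, i32 %B) {
          %nb = xor i32 %B, -1          ; ~B
          %na = xor i32 %A, -1          ; ~A
          %t1 = and i32 %A, %nb
          %t2 = and i32 %na, %B
          %r  = or i32 %t1, %t2
          ret i32 %r
        }

        define i32 @after(i32 %A, i32 %B) {
          %r = xor i32 %A, %B
          ret i32 %r
        }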
Bill Wendling 9eef421e12 Implement (A&((~A)|B)) -> A&B transformation in the instruction combiner. This
takes care of all permutations of this pattern.

llvm-svn: 60290
2008-11-30 13:08:13 +00:00
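Illustrative before/after IR for the commit above (names hypothetical); distributing the 'and' gives (A & ~A) | (A & B), and the first term is zero.

        define i32 @before(i32 %A, i32 %B) {
          %na = xor i32 %A, -1          ; ~A
          %o  = or i32 %na, %B
          %r  = and i32 %A, %o
          ret i32 %r
        }

        define i32 @after(i32 %A, i32 %B) {
          %r = and i32 %A, %B
          ret i32 %r
        }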
Bill Wendling 2fe3229824 Forgot one remaining call to getSExtValue().
llvm-svn: 60289
2008-11-30 12:41:09 +00:00
Bill Wendling 2d2e7861b5 getSExtValue() doesn't work for ConstantInts with bitwidth > 64 bits. Use all
APInt calls instead.

This fixes PR3144.

llvm-svn: 60288
2008-11-30 12:38:24 +00:00
Bill Wendling 7abf352f44 Don't make TwoToExp signed by default.
llvm-svn: 60279
2008-11-30 05:29:33 +00:00
Bill Wendling af200e9237 From Hacker's Delight:
"For signed integers, the determination of overflow of x*y is not so simple. If
x and y have the same sign, then overflow occurs iff xy > 2**31 - 1. If they
have opposite signs, then overflow occurs iff xy < -2**31."

In this case, x == -1.

llvm-svn: 60278
2008-11-30 05:01:05 +00:00
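Worked out for the case the commit above applies this to (x = -1, 32-bit operands, both negative, so the "same sign" rule holds):

        $(-1)\cdot y > 2^{31}-1 \iff y < -(2^{31}-1) \iff y = -2^{31}$

so multiplication by -1 overflows only when the other operand is the minimum 32-bit value.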
Bill Wendling 70635adea3 Instcombine was illegally transforming -X/C into X/-C when either X or C
overflowed on negation. This commit checks to make sure that neither C nor X
overflows. This requires that the RHS of X (a subtract instruction) be a
constant integer.

llvm-svn: 60275
2008-11-30 03:42:12 +00:00
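A minimal counterexample sketch for the commit above (hypothetical function, not from the commit): when X is the minimum i32 value the negation wraps, and the two forms disagree.

        define i32 @before(i32 %X) {
          ; for %X = -2147483648:  (0 - %X) wraps back to -2147483648,
          ; so this returns -1073741824 ...
          %neg = sub i32 0, %X
          %r   = sdiv i32 %neg, 2
          ret i32 %r
        }

        define i32 @transformed(i32 %X) {
          ; ... while this form returns 1073741824 for the same input
          %r = sdiv i32 %X, -2
          ret i32 %r
        }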
Nick Lewycky 4ab50b93c8 Chris prefers icmp/select over udiv!
llvm-svn: 60187
2008-11-27 22:41:10 +00:00
Nick Lewycky 69941fd0a0 Add a couple of missed optimizations on integer vectors. Multiply and divide
by 1, as well as multiply by -1.

llvm-svn: 60182
2008-11-27 20:21:08 +00:00
Chris Lattner e0d019def6 switch InstCombine::visitLoadInst to use
FindAvailableLoadedValue

llvm-svn: 60169
2008-11-27 08:56:30 +00:00
Chris Lattner dd7083452f reapply Sanjiv's patch to genericize memcpy/memset/memmove to take an
arbitrary integer width for the count.

llvm-svn: 59823
2008-11-21 16:42:48 +00:00
Bill Wendling 4bce2bff88 Revert r59802. It was breaking the build of llvm-gcc:
g++ -m32 -c -g -DIN_GCC -W -Wall -Wwrite-strings -Wmissing-format-attribute -fno-common -mdynamic-no-pic -DHAVE_CONFIG_H -Wno-unused -DTARGET_NAME=\"i386-apple-darwin9.5.0\" -I. -I. -I../../llvm-gcc.src/gcc -I../../llvm-gcc.src/gcc/. -I../../llvm-gcc.src/gcc/../include -I./../intl -I../../llvm-gcc.src/gcc/../libcpp/include  -I../../llvm-gcc.src/gcc/../libdecnumber -I../libdecnumber -I/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.obj/include -I/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.src/include -DENABLE_LLVM -I/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.obj/../llvm.src/include  -D_DEBUG  -D_GNU_SOURCE -D__STDC_LIMIT_MACROS -D__STDC_CONSTANT_MACROS   -I. -I. -I../../llvm-gcc.src/gcc -I../../llvm-gcc.src/gcc/. -I../../llvm-gcc.src/gcc/../include -I./../intl -I../../llvm-gcc.src/gcc/../libcpp/include  -I../../llvm-gcc.src/gcc/../libdecnumber -I../libdecnumber -I/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.obj/include -I/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.src/include ../../llvm-gcc.src/gcc/llvm-types.cpp -o llvm-types.o
../../llvm-gcc.src/gcc/llvm-convert.cpp: In member function 'void TreeToLLVM::EmitMemCpy(llvm::Value*, llvm::Value*, llvm::Value*, unsigned int)':
../../llvm-gcc.src/gcc/llvm-convert.cpp:1496: error: 'memcpy_i32' is not a member of 'llvm::Intrinsic'
../../llvm-gcc.src/gcc/llvm-convert.cpp:1496: error: 'memcpy_i64' is not a member of 'llvm::Intrinsic'
../../llvm-gcc.src/gcc/llvm-convert.cpp: In member function 'void TreeToLLVM::EmitMemMove(llvm::Value*, llvm::Value*, llvm::Value*, unsigned int)':
../../llvm-gcc.src/gcc/llvm-convert.cpp:1512: error: 'memmove_i32' is not a member of 'llvm::Intrinsic'
../../llvm-gcc.src/gcc/llvm-convert.cpp:1512: error: 'memmove_i64' is not a member of 'llvm::Intrinsic'
../../llvm-gcc.src/gcc/llvm-convert.cpp: In member function 'void TreeToLLVM::EmitMemSet(llvm::Value*, llvm::Value*, llvm::Value*, unsigned int)':
../../llvm-gcc.src/gcc/llvm-convert.cpp:1528: error: 'memset_i32' is not a member of 'llvm::Intrinsic'
../../llvm-gcc.src/gcc/llvm-convert.cpp:1528: error: 'memset_i64' is not a member of 'llvm::Intrinsic'
make[3]: *** [llvm-convert.o] Error 1
make[3]: *** Waiting for unfinished jobs....
rm fsf-funding.pod gcov.pod gfdl.pod cpp.pod gpl.pod gcc.pod
make[2]: *** [all-stage1-gcc] Error 2
make[1]: *** [stage1-bubble] Error 2
make: *** [all] Error 2

llvm-svn: 59809
2008-11-21 09:09:41 +00:00
Sanjiv Gupta 09a203765a Make mem[cpy,move,set] intrinsics overloaded.
llvm-svn: 59802
2008-11-21 07:49:09 +00:00
Nick Lewycky 07d726ec4d Optimize (x/y)*y into x-(x%y) in general. Div and rem are about the same, and
a subtract is cheaper than a multiply. This generalizes an existing transform.

llvm-svn: 59800
2008-11-21 07:33:58 +00:00
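Illustrative before/after IR for the commit above (names hypothetical), using the truncating-division identity x == (x/y)*y + x%y:

        define i32 @before(i32 %x, i32 %y) {
          %d = sdiv i32 %x, %y
          %r = mul i32 %d, %y           ; (x / y) * y
          ret i32 %r
        }

        define i32 @after(i32 %x, i32 %y) {
          %m = srem i32 %x, %y
          %r = sub i32 %x, %m           ; x - (x % y)
          ret i32 %r
        }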
Devang Patel 7ed6c5317c If there are two consecutive llvm.dbg.stoppoint calls then
it is likely that the optimizer deleted code in between these
two intrinsics. Keep only the last llvm.dbg.stoppoint in this case.

llvm-svn: 59657
2008-11-19 18:56:50 +00:00
Chris Lattner 44152742a0 simplify a bunch more instcombines to use m_Specific etc.
llvm-svn: 59403
2008-11-16 05:38:51 +00:00
Chris Lattner d397fef50d factor the code for simplifying (icmp)|(icmp) into its own function.
llvm-svn: 59402
2008-11-16 05:20:07 +00:00
Chris Lattner 909b969b18 do some computation with apints instead of ConstantInts.
llvm-svn: 59401
2008-11-16 05:14:43 +00:00
Chris Lattner feaea9bdf7 merge a check into a place where it is simpler.
llvm-svn: 59400
2008-11-16 05:10:52 +00:00
Chris Lattner 269cbd5770 factor a whole bunch of code out into a helper function.
llvm-svn: 59398
2008-11-16 05:06:21 +00:00
Chris Lattner b37b6e7e96 simplify the conditions on two gigantic if's, decreasing indentation
a bit.  Next step is to factor out into their own helper functions.

llvm-svn: 59397
2008-11-16 04:55:20 +00:00
Chris Lattner f1be285134 simplify some instcombine matches by using m_Specific
llvm-svn: 59395
2008-11-16 04:46:19 +00:00
Chris Lattner fae5e33111 Use new m_SelectCst template to eliminate macros.
llvm-svn: 59392
2008-11-16 04:33:38 +00:00
Chris Lattner 569d78cbb5 simplify code.
llvm-svn: 59390
2008-11-16 04:26:55 +00:00
Chris Lattner c3f3b059d0 Handle the case where there is no "not". It is possible it got
folded into the select.

llvm-svn: 59389
2008-11-16 04:25:26 +00:00
Chris Lattner 5f6d9a313b factor a bunch of copy/paste code out into a helper function.
Eliminate the cases checking for cond?0:-1, since that is already
handled by commutative checking.

llvm-svn: 59388
2008-11-16 04:24:12 +00:00
Chris Lattner 68d2da2a19 rearrange some code, no functionality change.
llvm-svn: 59381
2008-11-16 03:56:24 +00:00
Chris Lattner e02c7c7ad2 if we're going to use a macro, use it maximally. no functionality change.
llvm-svn: 59380
2008-11-16 03:54:57 +00:00
Bill Wendling 7ef7314d1a Third time's a charm.
The previous patches didn't match correctly. Also, we need to make sure that
the conditional is the same before doing the transformation.

llvm-svn: 58978
2008-11-10 06:59:06 +00:00
Mon P Wang 25f0106fd9 Added support for the following definition of shufflevector
<result> = shufflevector <n x <ty>> <v1>, <n x <ty>> <v2>, <m x i32> <mask> 

llvm-svn: 58964
2008-11-10 04:46:22 +00:00
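An illustrative use of the generalized form described above (hypothetical types and values): the mask length, not the input length, determines the result width.

        define <2 x float> @narrow(<4 x float> %v1, <4 x float> %v2) {
          ; picks element 1 of %v1 and element 2 of %v2 (index 6 = 4 + 2)
          %r = shufflevector <4 x float> %v1, <4 x float> %v2, <2 x i32> <i32 1, i32 6>
          ret <2 x float> %r
        }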
Bill Wendling 4fb13c051d Correction for the last patch. Should match the conditional in the first part
of the select match, not the select instruction itself.

llvm-svn: 58947
2008-11-09 23:37:53 +00:00
Bill Wendling 1579287550 The method of doing the matching with a 'select' instruction was wrong. The
original code was matching like this:

	if (match(A, m_Not(m_Value(B))))

B was already matched as a 'select' instruction. However, this isn't matching
what we think it's matching. It would match B as a 'Value', so basically
anything would match to it. In this case, a Constant matched. B was replaced
with a constant representation. And then the wrong value would be used in the
SelectInst::Create statement, causing a crash.

After thinking on this for a moment, and after Nick L. told me how the pattern
matching stuff was supposed to work, the solution was to match NOT an m_Value,
but an m_Select.

llvm-svn: 58946
2008-11-09 23:17:42 +00:00
Bill Wendling 3f547be28f If the LHS of the FCMP is coming from a UIToFP instruction, then we don't want
to generate signed ICMP instructions to replace the FCMP. This would violate
the following:

define i1 @test1(i32 %val) {
  %1 = uitofp i32 %val to double
  %2 = fcmp ole double %1, 0.000000e+00
  ret i1 %2
}

would be transformed into:

define i1 @test1(i32 %val) {
  %1 = icmp slt i33 %val, 1
  ret i1 %1
}

which is obviously wrong. This patch modifies InstCombiner::FoldFCmp_IntToFP_Cst
to handle when the LHS comes from UIToFP.

llvm-svn: 58929
2008-11-09 04:26:50 +00:00
Mon P Wang 5ca2ec65bd Fixed scalarizing an extract subvector and prevented an infinite loop
when simplifying a vector.

llvm-svn: 58820
2008-11-06 22:52:21 +00:00
Nick Lewycky 8d8acf327b Fix demanded bits analysis with srem by negative number. Based on a patch
by Richard Osborne.

llvm-svn: 58555
2008-11-02 02:41:50 +00:00
Dan Gohman 83eea0b17f Fix this recently moved code to use the correct type. CI is now a
ConstantInt, and SI is the original cast instruction. This fixes
PR2996.

llvm-svn: 58549
2008-11-02 00:17:33 +00:00
Dan Gohman 13cbcf1c18 Canonicalize sext(i1) to i1?-1:0, and update various instcombine
optimizations accordingly.

llvm-svn: 58457
2008-10-30 20:40:10 +00:00
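Illustrative before/after IR for the canonical form described above (names hypothetical):

        define i32 @before(i1 %c) {
          %r = sext i1 %c to i32        ; true -> -1, false -> 0
          ret i32 %r
        }

        define i32 @after(i1 %c) {
          %r = select i1 %c, i32 -1, i32 0
          ret i32 %r
        }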
Dan Gohman 2c34c130bf (A & sext(C)) | (B & ~sext(C)) -> C ? A : B
llvm-svn: 58351
2008-10-28 22:38:57 +00:00
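Illustrative before/after IR for the commit above (names hypothetical): sext of the i1 condition is an all-ones or all-zeros mask, so the masked 'or' is just a select.

        define i32 @before(i32 %A, i32 %B, i1 %C) {
          %s  = sext i1 %C to i32       ; -1 or 0
          %ns = xor i32 %s, -1          ; ~sext(C)
          %t1 = and i32 %A, %s
          %t2 = and i32 %B, %ns
          %r  = or i32 %t1, %t2
          ret i32 %r
        }

        define i32 @after(i32 %A, i32 %B, i1 %C) {
          %r = select i1 %C, i32 %A, i32 %B
          ret i32 %r
        }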
Dan Gohman bc0278400c Teach instcombine's visitLoad to scan back several instructions
to find opportunities for store-to-load forwarding or load CSE,
in the same way that visitStore scans back to do DSE. Also, define
a new helper function for testing whether the addresses of two
memory accesses are known to have the same value, and use it in
both visitStore and visitLoad.

These two changes allow instcombine to eliminate loads in code
produced by front-ends that frequently emit obviously redundant
addressing for memory references.

llvm-svn: 57608
2008-10-15 23:19:35 +00:00
Evan Cheng d885f6e139 Combine (fcmp cc0 x, y) | (fcmp cc1 x, y) into a single fcmp when possible.
llvm-svn: 57515
2008-10-14 18:44:08 +00:00
Evan Cheng ce70752b11 - Somehow I forgot about one / une.
- Renumber fcmp predicates to match their icmp counterparts.
- Try swapping operands to expose more optimization opportunities.

llvm-svn: 57513
2008-10-14 18:13:38 +00:00
Evan Cheng 67786cce66 Optimize anding of two fcmp into a single fcmp if the operands are the same, e.g.:
     uno && ueq -> ueq
     ord && olt -> olt
     ord && ueq -> oeq

llvm-svn: 57507
2008-10-14 17:15:11 +00:00
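Illustrative before/after IR for the ord && ueq case above (names hypothetical): 'ord' excludes NaNs and 'ueq' is unordered-or-equal, so together they are exactly 'oeq'.

        define i1 @before(double %x, double %y) {
          %a = fcmp ord double %x, %y
          %b = fcmp ueq double %x, %y
          %r = and i1 %a, %b
          ret i1 %r
        }

        define i1 @after(double %x, double %y) {
          %r = fcmp oeq double %x, %y
          ret i1 %r
        }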
Matthijs Kooijman f7d3cb5435 Make InstructionCombining::getBitCastOperand() recognize GEP instructions and
constant expressions with all zero indices as being the same as a bitcast.

llvm-svn: 57442
2008-10-13 15:17:01 +00:00
Chris Lattner da435910e8 Fix PR2697 by rewriting the '(X / pos) op neg' logic. This also changes
a couple other cases for clarity, but shouldn't affect correctness.

Patch by Eli Friedman!

llvm-svn: 57387
2008-10-11 22:55:00 +00:00
Dale Johannesen 4f0bd68cfe Add a "loses information" return value to APFloat::convert
and APFloat::convertToInteger.  Restore return value to
IEEE754.  Adjust all users accordingly.

llvm-svn: 57329
2008-10-09 23:00:39 +00:00
Chris Lattner 42d5785dbd Add parentheses to avoid warnings in GCC 4.4.0,
patch by Samuel Tardieu!

llvm-svn: 57288
2008-10-08 06:42:28 +00:00
Chris Lattner 917a6c1343 rewrite bswap matching to be more general, allowing arbitrary
shifting and masking inside a bswap expr.  This allows it to handle
the cases from PR2842, which involve the intermediate 'or' 
expressions being shifted, not just the input value.

llvm-svn: 57095
2008-10-05 02:13:19 +00:00
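For reference, a standard 32-bit byte-swap pattern of the kind the matcher above recognizes (illustrative only, names hypothetical; the PR2842 cases additionally shift the intermediate 'or' results):

        define i32 @bswap32_pattern(i32 %x) {
          %b3 = shl i32 %x, 24           ; byte 0 -> byte 3
          %t1 = and i32 %x, 65280        ; 0x0000FF00
          %b2 = shl i32 %t1, 8           ; byte 1 -> byte 2
          %t2 = lshr i32 %x, 8
          %b1 = and i32 %t2, 65280       ; byte 2 -> byte 1
          %b0 = lshr i32 %x, 24          ; byte 3 -> byte 0
          %o1 = or i32 %b3, %b2
          %o2 = or i32 %o1, %b1
          %r  = or i32 %o2, %b0          ; matched as llvm.bswap.i32(%x)
          ret i32 %r
        }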
Chris Lattner ca91f265c4 fix a bug where the bswap matcher could match a case involving
ashr.  It should only apply to lshr.

llvm-svn: 57089
2008-10-05 00:50:57 +00:00
Duncan Sands d65a4daeea Factorize code: remove variants of "strip off
pointer bitcasts and GEP's", and centralize the
logic in Value::getUnderlyingObject.  The
difference with stripPointerCasts is that
stripPointerCasts only strips GEPs if all
indices are zero, while getUnderlyingObject
strips GEPs no matter what the indices are.

llvm-svn: 56922
2008-10-01 15:25:41 +00:00
Nick Lewycky e8ced3ec19 Fix misoptimization of: xor i1 (icmp eq (X, C1), icmp s[lg]t (X, C2))
llvm-svn: 56834
2008-09-30 06:08:34 +00:00
Devang Patel a05633e105 Now Attributes are divided into three groups
- return attributes - inreg, zext and sext
- parameter attributes
- function attributes - nounwind, readonly, readnone, noreturn

Return attributes use 0 as the index.
Function attributes use ~0U as the index.

This patch requires corresponding changes in llvm-gcc and clang.

llvm-svn: 56704
2008-09-26 22:53:05 +00:00
Devang Patel 4c758ea3e0 Large mechanical patch.
s/ParamAttr/Attribute/g
s/PAList/AttrList/g
s/FnAttributeWithIndex/AttributeWithIndex/g
s/FnAttr/Attribute/g

This sets the stage 
- to implement function notes as function attributes and 
- to distinguish between function attributes and return value attributes.

This requires corresponding changes in llvm-gcc and clang.

llvm-svn: 56622
2008-09-25 21:00:45 +00:00