Commit Graph

392 Commits

Author SHA1 Message Date
Dan Gohman 360c86aed5 Add explicit keywords.
llvm-svn: 47382
2008-02-20 16:44:09 +00:00
Dan Gohman d0ff91dac5 Convert DAGCombiner to use the APInt form of ComputeMaskedBits.
llvm-svn: 47381
2008-02-20 16:33:30 +00:00
Anton Korobeynikov 035eaacd1f Update gcc 4.3 warnings fix patch with recent head changes
llvm-svn: 47368
2008-02-20 11:10:28 +00:00
Evan Cheng 6200c225e0 - When the DAG combiner is folding a bit convert into a BUILD_VECTOR, it should check whether it's essentially a SCALAR_TO_VECTOR. Avoid turning (v8i16) <10, u, u, u> to <10, 0, u, u, u, u, u, u>. Instead, simply convert it to a SCALAR_TO_VECTOR of the proper type.
- X86 now normalizes SCALAR_TO_VECTOR to (BIT_CONVERT (v4i32 SCALAR_TO_VECTOR)). Get rid of X86ISD::S2VEC.

llvm-svn: 47290
2008-02-18 23:04:32 +00:00
Chris Lattner ee322b44a4 teach dag combiner how to eliminate MERGE_VALUES nodes.
llvm-svn: 47052
2008-02-13 07:25:05 +00:00
Duncan Sands 7377f5fbe3 Add a isBigEndian method to complement isLittleEndian.
llvm-svn: 46954
2008-02-11 10:37:04 +00:00
Bill Wendling 9c2ce9a32d Return "(c1 + c2)" instead of yet another ADD node (which made this a
no-op).

llvm-svn: 46922
2008-02-10 08:10:24 +00:00
Chris Lattner d2d166ea9f the world doesn't need my debugging code.
llvm-svn: 46678
2008-02-03 07:01:05 +00:00
Chris Lattner b2b9d6f0fb Change the 'global modification' APIs in SelectionDAG to take a new
DAGUpdateListener object pointer instead of just returning a vector 
of deleted nodes.  This makes the interfaces more efficient (no more
allocating a vector [at least a malloc], filling it in, then walking
it) and cleaner.  This also allows the client to be notified of
nodes that are *changed* but not deleted.

llvm-svn: 46677
2008-02-03 06:49:24 +00:00
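
The DAGUpdateListener described above is essentially a pair of callbacks. A minimal sketch of the interface, with names approximated from the SelectionDAG headers of that era (treat signatures as illustrative, not the actual patch):

	// SDNode is the SelectionDAG node type; forward-declared for the sketch.
	class SDNode;

	// Clients implement these hooks to hear about nodes the DAG deletes
	// or mutates in place during a 'global modification' API call.
	class DAGUpdateListener {
	public:
	  virtual ~DAGUpdateListener() {}
	  // N was deleted; E, if non-null, is the node its uses moved to.
	  virtual void NodeDeleted(SDNode *N, SDNode *E) = 0;
	  // N was changed in place (e.g. operands updated) but not deleted.
	  virtual void NodeUpdated(SDNode *N) = 0;
	};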
Dan Gohman 47a7d6fafe Factor the addressing mode and the load/store VT out of LoadSDNode
and StoreSDNode into their common base class LSBaseSDNode. Member
functions getLoadedVT and getStoredVT are replaced with the common
getMemoryVT to simplify code that will handle both loads and stores.

llvm-svn: 46538
2008-01-30 00:15:11 +00:00
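
The refactoring above is a classic pull-up of shared state into a base class. A hedged sketch of its shape (stub types stand in for the real SelectionDAG classes; MVT::ValueType is simplified to an integer):

	class SDNode {};                       // stub for the real node class
	namespace ISD {
	  enum MemIndexedMode { UNINDEXED, PRE_INC, PRE_DEC, POST_INC, POST_DEC };
	}
	typedef unsigned ValueType;            // stand-in for MVT::ValueType

	// Addressing mode and memory VT move into the common base, so code
	// that handles both loads and stores needs only one accessor.
	class LSBaseSDNode : public SDNode {
	  ISD::MemIndexedMode AddrMode;
	  ValueType MemoryVT;                  // replaces getLoadedVT/getStoredVT
	public:
	  ValueType getMemoryVT() const { return MemoryVT; }
	  ISD::MemIndexedMode getAddressingMode() const { return AddrMode; }
	};
	class LoadSDNode  : public LSBaseSDNode { /* extension type, ... */ };
	class StoreSDNode : public LSBaseSDNode { /* truncating flag, ... */ };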
Dan Gohman 70de4cb1cd Use empty() instead of comparing size() with zero.
llvm-svn: 46514
2008-01-29 13:02:09 +00:00
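
The empty() idiom deserves a two-line illustration: empty() is O(1) for every container, whereas size() could be O(n) on pre-C++11 std::list implementations, so this was more than a style fix:

	#include <list>

	// empty() is constant time everywhere; size() may walk the whole list.
	bool hasWork(const std::list<int> &Worklist) {
	  return !Worklist.empty();            // not: Worklist.size() != 0
	}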
Chris Lattner 2ee91f4300 Fix PowerPC/./2007-10-18-PtrArithmetic.ll
llvm-svn: 46424
2008-01-27 23:32:17 +00:00
Chris Lattner d0496d0433 fix a crash on CodeGen/X86/vector-rem.ll
llvm-svn: 46422
2008-01-27 23:21:58 +00:00
Chris Lattner 888560d62c Implement some dag combines that allow doing fneg/fabs/fcopysign in integer
registers when the value is produced or consumed by a bitconvert.  This allows us to
avoid constant pool loads and use cheaper integer instructions when the
values come from or end up in integer regs anyway.  For example, we now 
compile CodeGen/X86/fp-in-intregs.ll to:

_test1:
	movl	$2147483648, %eax
	xorl	4(%esp), %eax
	ret
_test2:
	movl	$1065353216, %eax
	orl	4(%esp), %eax
	andl	$3212836864, %eax
	ret

Instead of:
_test1:
	movss	4(%esp), %xmm0
	xorps	LCPI2_0, %xmm0
	movd	%xmm0, %eax
	ret
_test2:
	movss	4(%esp), %xmm0
	andps	LCPI3_0, %xmm0
	movss	LCPI3_1, %xmm1
	andps	LCPI3_2, %xmm1
	orps	%xmm0, %xmm1
	movd	%xmm1, %eax
	ret

bitconverts can happen due to various calling conventions that require
fp values to be passed in integer regs in some cases, e.g. when returning
a complex value.

llvm-svn: 46414
2008-01-27 17:42:27 +00:00
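
The magic constants in the integer version are IEEE-754 single-precision bit masks: 2147483648 is 0x80000000 (the sign bit) and 1065353216 is 0x3F800000 (the bits of 1.0f). A self-contained sketch of the bit tricks on raw float bits:

	#include <cstdint>

	// fneg: flip the sign bit (the xorl in _test1 above).
	uint32_t fnegBits(uint32_t X) { return X ^ 0x80000000u; }

	// fabs: clear the sign bit.
	uint32_t fabsBits(uint32_t X) { return X & 0x7FFFFFFFu; }

	// fcopysign(X, Y): magnitude of X combined with the sign of Y.
	uint32_t copysignBits(uint32_t X, uint32_t Y) {
	  return (X & 0x7FFFFFFFu) | (Y & 0x80000000u);
	}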
Chris Lattner e30e33af4f Infer alignment of loads and increase their alignment when we can tell they are
from the stack.  This allows us to compile stack-align.ll to:

_test:
	movsd	LCPI1_0, %xmm0
	movapd	%xmm0, %xmm1
***	andpd	4(%esp), %xmm1
	andpd	_G, %xmm0
	addsd	%xmm1, %xmm0
	movl	20(%esp), %eax
	movsd	%xmm0, (%eax)
	ret

instead of:

_test:
	movsd	LCPI1_0, %xmm0
**	movsd	4(%esp), %xmm1
**	andpd	%xmm0, %xmm1
	andpd	_G, %xmm0
	addsd	%xmm1, %xmm0
	movl	20(%esp), %eax
	movsd	%xmm0, (%eax)
	ret

llvm-svn: 46401
2008-01-26 19:45:50 +00:00
Chris Lattner 31e9edce1c Fix some bugs in SimplifyNodeWithTwoResults where it would call DeleteNode to
delete a node even if it was not dead in some cases.  Instead, just add it to
the worklist.  Also, make sure to use the CombineTo methods, as it was doing
things that were unsafe: the top level combine loop could touch dangling memory.

This fixes CodeGen/Generic/2008-01-25-dag-combine-mul.ll

llvm-svn: 46384
2008-01-26 01:09:19 +00:00
Chris Lattner cb3cf546c3 reduce indentation
llvm-svn: 46377
2008-01-25 23:34:24 +00:00
Chris Lattner 2d7a830ff3 Add skeletal code to increase the alignment of loads and stores when
we can infer it.  This will eventually improve generated code, though it
doesn't do much right now because all fixed frame indices (FI's) have an
alignment of 1.

llvm-svn: 46349
2008-01-25 07:20:16 +00:00
Chris Lattner 34ed27c46d clarify a comment, thanks Duncan.
llvm-svn: 46313
2008-01-24 17:10:01 +00:00
Chris Lattner e97fa8cdf0 Fix this buggy transformation. Two observations:
1. we already know the value is dead, so don't bother replacing 
   it with undef.
2. The very case the comment describes actually makes the load
   live, which triggers an assertion in DeleteNode.  If we do the replacement
   and the node becomes live, just treat it as new.  This fixes
   a failure on X86/2008-01-16-InvalidDAGCombineXform.ll with
   some local changes in my tree.

llvm-svn: 46306
2008-01-24 07:57:06 +00:00
Chris Lattner d66eac62fd The dag combiner fails to revisit nodes that it really should, and thus leaves
dead nodes around.  These get fed into the isel pass and prevent certain foldings from
happening because nodes have extraneous uses floating around.  For example, if we turned
foo(bar(x)) -> baz(x), we sometimes left bar(x) around.

llvm-svn: 46305
2008-01-24 07:18:21 +00:00
Chris Lattner 0feb1b0f84 fold fp_round(fp_round(x)) -> fp_round(x).
llvm-svn: 46304
2008-01-24 06:45:35 +00:00
Chris Lattner 1ea55cf816 This commit changes:
1. Legalize now always promotes truncstore of i1 to i8. 
2. Remove patterns and gunk related to truncstore i1 from targets.
3. Rename the StoreXAction stuff to TruncStoreAction in TLI.
4. Make the TLI TruncStoreAction table a 2d table to handle from/to conversions.
5. Mark a wide variety of invalid truncstores as such in various targets, e.g.
   X86 currently doesn't support truncstore of any of its integer types.
6. Add legalize support for truncstores with invalid value input types.
7. Add a dag combine transform to turn store(truncate) into truncstore when
   safe.

The latter (see the sketch after this entry) allows us to compile CodeGen/X86/storetrunc-fp.ll to:

_foo:
	fldt	20(%esp)
	fldt	4(%esp)
	faddp	%st(1)
	movl	36(%esp), %eax
	fstps	(%eax)
	ret

instead of:

_foo:
	subl	$4, %esp
	fldt	24(%esp)
	fldt	8(%esp)
	faddp	%st(1)
	fstps	(%esp)
	movl	40(%esp), %eax
	movss	(%esp), %xmm0
	movss	%xmm0, (%eax)
	addl	$4, %esp
	ret

llvm-svn: 46140
2008-01-17 19:59:44 +00:00
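
Item 7 is the most mechanical of these changes; a rough sketch in the DAGCombiner style of the period (member names and the getTruncStore signature are approximations, not the actual patch):

	// fold store(truncate(x)) -> truncstore(x), but only when the target
	// declares a truncating store to the narrow memory type legal.
	SDOperand visitSTORE(StoreSDNode *ST) {
	  SDOperand Val = ST->getValue();
	  if (Val.getOpcode() == ISD::TRUNCATE &&
	      TLI.getTruncStoreAction(Val.getOperand(0).getValueType(),
	                              ST->getMemoryVT()) == TargetLowering::Legal)
	    return DAG.getTruncStore(ST->getChain(), Val.getOperand(0),
	                             ST->getBasePtr(), ST->getMemoryVT());
	  return SDOperand();   // no combine applies
	}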
Chris Lattner 7eabed3521 code cleanups, no functionality change.
llvm-svn: 46126
2008-01-17 07:20:38 +00:00
Chris Lattner 72733e573b * Introduce a new SelectionDAG::getIntPtrConstant method
and switch various codegen pieces and the X86 backend over
  to using it.

* Add some comments to SelectionDAGNodes.h

* Introduce a second argument to FP_ROUND, which indicates
  whether the FP_ROUND changes the value of its input. If
  not, it is safe to transform things like fp_extend(fp_round(x)) -> x.

llvm-svn: 46125
2008-01-17 07:00:52 +00:00
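
A small usage sketch of the two additions (API shapes are approximate for the period):

	// A constant of the target's pointer-sized integer type, so callers
	// stop hard-coding i32 vs i64:
	SDOperand Zero = DAG.getIntPtrConstant(0);

	// FP_ROUND's second operand: 1 asserts the rounding cannot change the
	// value, so fp_extend(fp_round(x, 1)) may fold to x; fp_round(x, 0)
	// makes no such promise and blocks that fold.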
Evan Cheng 7be1528004 Fixes a nasty dag combiner bug that causes a bunch of tests to fail at -O0.
It's not safe to use the two-value CombineTo variant to combine away a dead load.
e.g. 
v1, chain2 = load chain1, loc
v2, chain3 = load chain2, loc
v3         = add v2, c 
Now we replace use of v1 with undef, use of chain2 with chain1.
ReplaceAllUsesWith() will iterate through uses of the first load and update operands:
v1, chain2 = load chain1, loc
v2, chain3 = load chain1, loc
v3         = add v2, c 
Now the second load is the same as the first load, SelectionDAG cse will ensure
the use of second load is replaced with the first load.
v1, chain2 = load chain1, loc
v3         = add v1, c
Then v1 is replaced with undef and bad things happen.

llvm-svn: 46099
2008-01-16 23:11:54 +00:00
Chris Lattner 2e50a6f90f Factor the ReachesChainWithoutSideEffects out of dag combiner into
a public SDOperand::reachesChainWithoutSideEffects method.  No 
functionality change.

llvm-svn: 46050
2008-01-16 05:49:24 +00:00
Chris Lattner 51b01bf8a5 Make load->store deletion a bit smarter. This allows us to compile this:
void test(long long *P) { *P ^= 1; }

into just:

_test:
	movl	4(%esp), %eax
	xorl	$1, (%eax)
	ret

instead of code like this:

_test:
	movl	4(%esp), %ecx
	xorl	$1, (%ecx)
	movl	4(%ecx), %edx
	movl	%edx, 4(%ecx)
	ret

llvm-svn: 45762
2008-01-08 23:08:06 +00:00
Chris Lattner f3ebc3f3d2 Remove attribution from file headers, per discussion on llvmdev.
llvm-svn: 45418
2007-12-29 20:36:04 +00:00
Chris Lattner 2de9b85297 make sure not to zap volatile stores, thanks a lot to Dale for noticing this!
llvm-svn: 45402
2007-12-29 07:15:45 +00:00
Chris Lattner 5919b48fe9 don't fold fp_round(fp_extend(load)) -> fp_round(extload)
llvm-svn: 45400
2007-12-29 06:55:23 +00:00
Chris Lattner 3f9c6a7260 Delete a store whose input is a load from the same pointer:
x = load p
  store x -> p

llvm-svn: 45398
2007-12-29 06:26:16 +00:00
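
A simplified sketch of the check (era-style API; details approximate, and the real code must also rule out volatile, truncating, and indexed stores):

	// A store of a value just loaded from the same pointer is a no-op:
	// returning the store's input chain makes the store dead.
	SDOperand combineStoreOfLoad(StoreSDNode *ST) {
	  LoadSDNode *Ld = dyn_cast<LoadSDNode>(ST->getValue().Val);
	  if (Ld && Ld->getBasePtr() == ST->getBasePtr() &&
	      ST->getChain().Val == Ld)   // store chained directly to the load
	    return ST->getChain();
	  return SDOperand();             // no combine applies
	}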
Chris Lattner efd1cddb5a Tell TargetLoweringOpt whether it is running before
or after legalize.

llvm-svn: 45321
2007-12-22 20:56:36 +00:00
Evan Cheng 9f06e5e2df Don't leave newly created nodes around if it turns out they are not needed.
llvm-svn: 45186
2007-12-19 01:34:38 +00:00
Dale Johannesen 5eff4de9c8 Redo previous patch so the optimization is only done for i1.
Simpler and safer.

llvm-svn: 44663
2007-12-06 17:53:31 +00:00
Chris Lattner eedaf92fcf third time around: instead of disabling this completely,
only disable it if we don't know it will be obviously profitable.
Still a FIXME, but less so. :)

llvm-svn: 44658
2007-12-06 07:47:55 +00:00
Chris Lattner b5fdfb9612 Actually, disable this code for now. More analysis and improvements to
the X86 backend are needed before this should be enabled by default.

llvm-svn: 44657
2007-12-06 07:44:31 +00:00
Chris Lattner 7c709a5d08 implement a readme entry, compiling the code into:
_foo:
	movl	$12, %eax
	andl	4(%esp), %eax
	movl	_array(%eax), %eax
	ret

instead of:

_foo:
	movl	4(%esp), %eax
	shrl	$2, %eax
	andl	$3, %eax
	movl	_array(,%eax,4), %eax
	ret

As it turns out, this triggers all the time, in a wide variety of
situations; for example, I see diffs like this in various programs:

-       movl    8(%eax), %eax
-       shll    $2, %eax
-       andl    $1020, %eax
-       movl    (%esi,%eax), %eax
+       movzbl  8(%eax), %eax
+       movl    (%esi,%eax,4), %eax


-       shll    $2, %edx
-       andl    $1020, %edx
-       movl    (%edi,%edx), %edx
+       andl    $255, %edx
+       movl    (%edi,%edx,4), %edx

Unfortunately, I also see stuff like this, which can be fixed in the
X86 backend:

-       andl    $85, %ebx
-       addl    _bit_count(,%ebx,4), %ebp
+       shll    $2, %ebx
+       andl    $340, %ebx
+       addl    _bit_count(%ebx), %ebp

llvm-svn: 44656
2007-12-06 07:33:36 +00:00
Dale Johannesen 05bbbda78a Fix PR1842.
llvm-svn: 44649
2007-12-06 01:43:46 +00:00
Dan Gohman 9a69341725 Don't lower srem/urem X%C to X-X/C*C unless the division is actually
optimized. This avoids creating illegal divisions when the combiner is
running after legalize; this fixes PR1815. Also, it produces better
code in the included testcase by avoiding the subtract and multiply
when the division isn't optimized.

llvm-svn: 44341
2007-11-26 23:46:11 +00:00
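
The identity being guarded, as a self-contained sketch; the commit's point is that the expansion only pays off when X / C is itself optimized (e.g. into a multiply by a magic constant):

	#include <cstdint>

	// X % C == X - (X / C) * C.  E.g. 23 % 7: Q = 3, 23 - 3*7 = 2.
	// If the divide is not optimized away, we emit an expensive (or, after
	// legalize, illegal) division plus a multiply and subtract for nothing.
	uint32_t uremExpansion(uint32_t X, uint32_t C) {
	  uint32_t Q = X / C;   // assumed lowered to, e.g., (X * magic) >> shift
	  return X - Q * C;
	}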
Duncan Sands e795efea5b Move MinAlign to MathExtras.h.
llvm-svn: 43944
2007-11-09 13:41:39 +00:00
Duncan Sands e7a9ac929f Fix some load/store logic that would be wrong for
apints on big-endian machines if the bitwidth is
not a multiple of 8.  Introduce a new helper,
MVT::getStoreSizeInBits, and use it.

llvm-svn: 43934
2007-11-09 08:57:19 +00:00
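
A free-function restatement of the helper's arithmetic (the real method lives on MVT; this sketch only shows the rounding):

	// An integer that is not a multiple of 8 bits still occupies whole
	// bytes in memory: i1 -> 8 bits, i6 -> 8, i17 -> 24.
	unsigned getStoreSizeInBits(unsigned SizeInBits) {
	  return ((SizeInBits + 7) / 8) * 8;
	}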
Evan Cheng ece4c68b82 If both parts of smul_lohi, etc. are used, don't simplify. If only one part is used, try to simplify it.
llvm-svn: 43888
2007-11-08 09:25:29 +00:00
Evan Cheng 0747bc1df6 Typo.
llvm-svn: 43511
2007-10-30 20:11:21 +00:00
Dan Gohman ae95d72a52 Fix a DAGCombiner abort on a bitcast from a scalar to a vector.
llvm-svn: 43470
2007-10-29 20:44:42 +00:00
Evan Cheng e106e2f142 Enable the fold (sext (load x)) -> (sext (truncate (sextload x)))
in more cases. Previously, it was restricted by requiring that the load have
exactly one use. Now the restriction is loosened by allowing setcc uses to be
"extended" (e.g. setcc x, c, eq -> setcc sext(x), sext(c), eq).

llvm-svn: 43465
2007-10-29 19:58:20 +00:00
Duncan Sands 1826deda68 The guaranteed alignment of ptr+offset is only the minimum
of offset and the alignment of ptr if these are both powers of
2.  While the ptr alignment is guaranteed to be a power of 2,
there is no reason to think that offset is.  For example, if
offset is 12 (the size of a long double on x86-32 linux) and
the alignment of ptr is 8, then the alignment of ptr+offset
will in general be 4, not 8.  Introduce a function MinAlign,
lifted from gcc, for computing the minimum guaranteed alignment.
I've tried to fix up everywhere under lib/CodeGen/SelectionDAG/.
I also changed some places that weren't wrong (because both values
were a power of 2), as a defensive change against people copying
and pasting the code.
Hopefully someone who cares about alignment will review the rest
of LLVM and fix up the remaining places.  Since I'm on x86 I'm
not very motivated to do this myself...

llvm-svn: 43421
2007-10-28 12:59:45 +00:00
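
A sketch of MinAlign as described: the guaranteed alignment of ptr+offset is the lowest set bit of A | B, which reproduces the example above, MinAlign(8, 12) == 4:

	// Isolate the lowest set bit of A | B: the largest power of two
	// guaranteed to divide both A and B, hence to divide ptr + offset.
	unsigned MinAlign(unsigned A, unsigned B) {
	  return (A | B) & (~(A | B) + 1);
	}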
Dale Johannesen 6802d0c96f Redo "last ppc long double fix" as Chris wants.
llvm-svn: 43189
2007-10-19 20:29:00 +00:00
Dale Johannesen 10432e5a67 More ppcf128 issues (maybe the last?).
llvm-svn: 43160
2007-10-19 00:59:18 +00:00
Dale Johannesen e5facd51cb Disable attempts to constant fold PPC f128.
Remove the assumption that this will happen from
various places.

llvm-svn: 43053
2007-10-16 23:38:29 +00:00