Commit Graph

936 Commits

Author SHA1 Message Date
Chris Lattner 0fe6bce9ce fdiv/frem of undef can produce undef, because the undef operand
can be a SNaN.  We could be more aggressive and turn this into 
unreachable, but that is less nice, and not really worth it.

llvm-svn: 47313
2008-02-19 06:12:18 +00:00
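
An illustrative sketch of the rule above (hypothetical operand names, IR syntax
of the era): one operand of the division is undef, and undef may be chosen to
be an SNaN, so the result may be NaN and folding to undef is justified.

    %r = fdiv double %x, undef    ; undef may be an SNaN, so %r may be NaN
                                  ; -> fold %r to undef
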
Nick Lewycky fefd0202c9 Correctly fold divide-by-constant, even when faced with overflow.
llvm-svn: 47287
2008-02-18 22:48:05 +00:00
Chris Lattner 1e3c501cb8 Transforming -A + -B --> -(A + B) isn't safe for FP, thanks
to Dale for noticing this!

llvm-svn: 47276
2008-02-18 17:50:16 +00:00
Chris Lattner 024f8c8f09 optimize away stackrestore calls that have no intervening alloca or call.
llvm-svn: 47258
2008-02-18 06:12:38 +00:00
Chris Lattner cc22601bc3 Fold (-x + -y) -> -(x+y) which promotes better association, fixing
the second half of PR2047

llvm-svn: 47244
2008-02-17 21:03:36 +00:00
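
A sketch of the fold (hypothetical operands): two negations become one, and
the inner add is free to associate with other adds.

    %nx = sub i32 0, %x
    %ny = sub i32 0, %y
    %r  = add i32 %nx, %ny
  becomes
    %t = add i32 %x, %y
    %r = sub i32 0, %t

This is the integer version; as the entry above records, the FP analogue is
not safe.
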
Dan Gohman 1ee8dc97d9 Rename APInt's isPositive to isNonNegative, to reflect what it
actually does.

llvm-svn: 47090
2008-02-13 22:09:18 +00:00
Chris Lattner 682a7dc653 Fix a bug compiling PR1978 (perhaps not the only one though) which
was incorrectly simplifying "x == (gep x, 1, i)" into false, even 
though i could be negative.  As it turns out, all the code to 
handle this already existed, we just need to disable the incorrect
optimization case and let the general case handle it.

llvm-svn: 46739
2008-02-05 04:45:32 +00:00
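
Why the fold-to-false was wrong, sketched with hypothetical types: a negative
index can wrap the GEP back to its base pointer.

    %p = getelementptr [4 x i32]* %x, i32 1, i32 %i   ; address = %x + 16 + 4*%i
    ; with %i = -4 the address equals %x, so "x == (gep x, 1, i)" can be true
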
Nick Lewycky 3b59214320 There are some cases where icmp(add) can be folded into a new icmp. Handle them.
llvm-svn: 46687
2008-02-03 16:33:09 +00:00
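
One always-safe case, sketched with hypothetical constants: an equality
compare of an add against a constant moves onto the add's operand.

    %a = add i32 %x, 4
    %c = icmp eq i32 %a, 10
  becomes
    %c = icmp eq i32 %x, 6
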
Nick Lewycky c7a4ba044b Hack on vectors too.
llvm-svn: 46684
2008-02-03 08:19:11 +00:00
Nick Lewycky e6e3a7f6ea Fold away one multiply in instcombine. This would normally be caught in
reassociate anyways, but they could be generated during instcombine's run.

llvm-svn: 46683
2008-02-03 07:42:09 +00:00
Chris Lattner 17819d971e eliminate additions of 0.0 when they are obviously dead. This has to be careful to
avoid turning -0.0 + 0.0 -> -0.0 which is incorrect.

llvm-svn: 46499
2008-01-29 06:52:45 +00:00
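
A sketch of the distinction (hypothetical operands; era syntax, where add also
covers FP): adding -0.0 is a true no-op, adding +0.0 is not.

    %a = add double %x, -0.0   ; always equals %x: removable
    %b = add double %x, 0.0    ; not removable: %x = -0.0 would give +0.0
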
Nick Lewycky 8ea81e8ba4 Handle some more combinations of extend and icmp. Fixes PR1940.
llvm-svn: 46431
2008-01-28 03:48:02 +00:00
Chris Lattner 710b441174 Fix PR1932 by disabling an xform invalid for fdiv.
llvm-svn: 46429
2008-01-28 00:58:18 +00:00
Chris Lattner fa1e7eef30 Fold fptrunc(add (fpextend x), (fpextend y)) -> add(x,y), as GCC does.
llvm-svn: 46406
2008-01-27 05:29:54 +00:00
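
A sketch of the fold (hypothetical operands; as the message says, this matches
GCC's behavior rather than strict double-rounding semantics):

    %ex = fpext float %x to double
    %ey = fpext float %y to double
    %s  = add double %ex, %ey
    %r  = fptrunc double %s to float
  becomes
    %r = add float %x, %y
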
Nick Lewycky f069264164 Enable the fix I just checked in, silly me.
llvm-svn: 46247
2008-01-22 05:42:02 +00:00
Nick Lewycky 78712e5b59 Multiply can be evaluated in a different type, so long as the target type has
a smaller bitwidth.

llvm-svn: 46244
2008-01-22 05:08:48 +00:00
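
A sketch with hypothetical widths: when only the truncated bits are used, the
multiply can be done in the narrower type.

    %m = mul i64 %x, %y
    %t = trunc i64 %m to i32
  becomes
    %tx = trunc i64 %x to i32
    %ty = trunc i64 %y to i32
    %t  = mul i32 %tx, %ty
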
Duncan Sands b5ca2e9fcb I noticed that the trampoline straightening transformation could
drop attributes on varargs call arguments.  Also, it could generate
invalid IR if the transformed call already had the 'nest' attribute
somewhere (this can never happen for code coming from llvm-gcc,
but it's a theoretical possibility).  Fix both problems.

llvm-svn: 45973
2008-01-14 19:52:09 +00:00
Chris Lattner 92bd785323 Turn a memcpy from a double* into a load/store of double instead of
a load/store of i64.  The later prevents promotion/scalarrepl of the
source and dest in many cases.

This fixes the 300% performance regression of the byval stuff on 
stepanov_v1p2.

llvm-svn: 45945
2008-01-14 00:28:35 +00:00
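
A sketch of the rewrite (hypothetical names; the memcpy intrinsic's signature
here is from memory and may not match the era exactly): a typed load/store is
something scalarrepl can promote, whereas an opaque byte copy is not.

    %s8 = bitcast double* %src to i8*
    %d8 = bitcast double* %dst to i8*
    call void @llvm.memcpy.i32(i8* %d8, i8* %s8, i32 8, i32 8)
  becomes
    %v = load double* %src
    store double %v, double* %dst
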
Chris Lattner 57974c8d51 factor memcpy/memmove simplification out to its own SimplifyMemTransfer
method, no functionality change.

llvm-svn: 45944
2008-01-13 23:50:23 +00:00
Chris Lattner 8c5cdddfb9 simplify some code. If we can infer alignment for source and dest that are
greater than memcpy alignment, and if we lower to load/store, use the best 
alignment info we have.

llvm-svn: 45943
2008-01-13 22:30:28 +00:00
Chris Lattner 5a86612d3f simplify some code by adding a InsertBitCastBefore method,
make memmove->memcpy conversion a bit simpler.

llvm-svn: 45942
2008-01-13 22:23:22 +00:00
Chris Lattner 5bc253c8f2 Fix PR1907, a nasty miscompilation because instcombine didn't
realize that ne & sgt  was a signed comparison (it was only 
looking at whether the left compare was signed).

llvm-svn: 45937
2008-01-13 20:59:02 +00:00
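
A sketch of the kind of pair involved (hypothetical constants): the merged
compare must be treated as signed because of the sgt half, even though ne by
itself has no signedness.

    %a = icmp ne i32 %x, 5
    %b = icmp sgt i32 %x, 5
    %r = and i1 %a, %b        ; folds to: icmp sgt i32 %x, 5
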
Duncan Sands 781f6549db When turning a call to a bitcast function into a direct call,
if this becomes a varargs call then deal correctly with any
parameter attributes on the newly vararg call arguments.

llvm-svn: 45931
2008-01-13 08:02:44 +00:00
Chris Lattner 2940c5c56d Implement PR1795, an instcombine hack for forming GEPs with integer pointer arithmetic.
llvm-svn: 45745
2008-01-08 07:23:51 +00:00
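
A sketch of the hack (hypothetical operands): pointer arithmetic routed
through integers becomes a GEP that later passes can reason about.

    %pi = ptrtoint i8* %p to i32
    %ai = add i32 %pi, %ofs
    %q  = inttoptr i32 %ai to i8*
  becomes
    %q = getelementptr i8* %p, i32 %ofs
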
Duncan Sands b18c30acec Small cleanup for handling of type/parameter attribute
incompatibility.

llvm-svn: 45704
2008-01-07 17:16:06 +00:00
Duncan Sands 404eb05247 The transform that tries to turn calls to bitcast functions into
direct calls bails out unless caller and callee have essentially
equivalent parameter attributes.  This is illogical - the callee's
attributes should be of no relevance here.  Rework the logic, which
incidentally fixes a crash when removed arguments have attributes.

llvm-svn: 45658
2008-01-06 18:27:01 +00:00
Duncan Sands 55e5090fe8 When transforming a call to a bitcast function into
a direct call with cast parameters and cast return
value (if any), instcombine was prepared to cast any
non-void return value into any other, whether castable
or not.  Add a new predicate for testing whether casting
is valid, and check it both for the return value and
(as a cleanup) for the parameters.

llvm-svn: 45657
2008-01-06 10:12:28 +00:00
Chris Lattner e666bc272d remove a couple more unsafe xforms in the face of overflow.
llvm-svn: 45613
2008-01-05 01:22:42 +00:00
Chris Lattner db026d703b remove the (x-y) < 0 comparison xform, it miscompiles
things that are not equality comparisons, for example:
   (2147479553+4096)-2147479553 < 0    !=   (2147479553+4096) < 2147479553
   (the add wraps to a negative i32, so the right-hand compare is true while
   the subtract-and-test form evaluates 4096 < 0, which is false)

llvm-svn: 45612
2008-01-05 01:18:20 +00:00
Chris Lattner f3ebc3f3d2 Remove attribution from file headers, per discussion on llvmdev.
llvm-svn: 45418
2007-12-29 20:36:04 +00:00
Christopher Lamb b053b80b79 Disable null pointer folding transforms for non-generic address spaces. This should probably be a target-specific predicate based on address space. That way for targets where this isn't applicable the predicate can be optimized away.
llvm-svn: 45403
2007-12-29 07:56:53 +00:00
Owen Anderson 7363914ef7 Repair a transform that Chris noticed a bug in. Thanks to Nicholas for pointing out my stupid mistakes when writing this patch. :-)
llvm-svn: 45384
2007-12-28 07:42:12 +00:00
Chris Lattner 5179819beb disable this instcombine xform, it miscompiles:
define i32 @main() {
entry:
	%z = alloca i32		; <i32*> [#uses=2]
	store i32 0, i32* %z
	%tmp = load i32* %z		; <i32> [#uses=1]
	%sub = sub i32 %tmp, 1		; <i32> [#uses=1]
	%cmp = icmp ult i32 %sub, 0		; <i1> [#uses=1]
	%retval = select i1 %cmp, i32 1, i32 0		; <i32> [#uses=1]
	ret i32 %retval
}

into ret 1, instead of ret 0.

Christopher, please investigate.

llvm-svn: 45383
2007-12-28 06:24:31 +00:00
Chris Lattner 74b2ab59fd implement InstCombine/shift-trunc-shift.ll. This allows
us to compile:
#include <math.h>
int t1(double d) { return signbit(d); }

into:

_t1:
	movd	%xmm0, %rax
	shrq	$63, %rax
	ret

instead of:

_t1:
	movd	%xmm0, %rax
	shrq	$32, %rax
	shrl	$31, %eax
	ret

on x86-64.

llvm-svn: 45311
2007-12-22 09:07:47 +00:00
Christopher Lamb 7d82bc46b8 Implement review feedback, including additional transforms
(icmp slt (sub A B) 1) -> (icmp sle A B)
(icmp sgt (sub A B) -1) -> (icmp sge A B)

and add testcase.

llvm-svn: 45256
2007-12-20 07:21:11 +00:00
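
A sketch of the first transform (hypothetical operands; like any sub-to-icmp
rewrite it presumes the subtraction does not overflow):

    %d = sub i32 %a, %b
    %c = icmp slt i32 %d, 1
  becomes
    %c = icmp sle i32 %a, %b
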
Chris Lattner 16a51da0e2 simplify this code with the new m_Zero() pattern. Make sure the select only
has a single use, and generalize it to not require N to be a constant.

llvm-svn: 45250
2007-12-20 01:56:58 +00:00
Duncan Sands aa31b92508 When inlining through an 'nounwind' call, mark inlined
calls 'nounwind'.  It is important for correct C++
exception handling that nounwind markings do not get
lost, so this transformation is actually needed for
correctness.

llvm-svn: 45218
2007-12-19 21:13:37 +00:00
Christopher Lamb f00ac6dd93 Fold subtracts into integer compares vs. zero. This improves the generated code for this case on X86
from
_foo:
        movl    $99, %ecx
        movl    4(%esp), %eax
        subl    %eax, %ecx
        xorl    %edx, %edx
        testl   %ecx, %ecx
        cmovs   %edx, %eax
        ret

to
_foo:
        xorl    %ecx, %ecx
        movl    4(%esp), %eax
        cmpl    $99, %eax
        cmovg   %ecx, %eax
        ret

llvm-svn: 45173
2007-12-18 21:32:20 +00:00
Christopher Lamb b7016c53d1 Fix comments
llvm-svn: 45170
2007-12-18 20:33:11 +00:00
Christopher Lamb 74dbad9216 Remove an orthogonal transformation of the selection condition from my most recent submission.
llvm-svn: 45169
2007-12-18 20:30:28 +00:00
Duncan Sands 3353ed09ac Rename isNoReturn to doesNotReturn, and isNoUnwind to
doesNotThrow.

llvm-svn: 45160
2007-12-18 09:59:50 +00:00
Christopher Lamb 30291f4a30 Fix typos.
llvm-svn: 45159
2007-12-18 09:45:40 +00:00
Christopher Lamb 8b09a464b4 Fold certain additions through selects (and their compares) so as to eliminate subtractions. This code is often produced by the SMAX expansion in SCEV.
This implements test/Transforms/InstCombine/2007-12-18-AddSelCmpSub.ll

llvm-svn: 45158
2007-12-18 09:34:41 +00:00
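
A sketch of the shape involved (hypothetical operands), similar to what SCEV's
SMAX expansion emits: pushing the add through the select and its compare makes
the subtraction disappear.

    %c = icmp sgt i32 %x, %y
    %d = sub i32 %x, %y
    %s = select i1 %c, i32 %d, i32 0
    %r = add i32 %s, %y
  becomes
    %r = select i1 %c, i32 %x, i32 %y
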
Christopher Lamb edf0788758 Change the PointerType api for creating pointer types. The old functionality of PointerType::get() has become PointerType::getUnqual(), which returns a pointer in the generic address space. The new prototype of PointerType::get() requires both a type and an address space.
llvm-svn: 45082
2007-12-17 01:12:55 +00:00
Duncan Sands 8e4847ee95 Make instcombine promote inline asm calls to 'nounwind'
calls.  Remove special casing of inline asm from the
inliner.  There is a potential problem: the verifier
rejects invokes of inline asm (not sure why).  If an
asm call is not marked "nounwind" in some .ll, and
instcombine is not run, but the inliner is run, then
an illegal module will be created.  This is bad but
I'm not sure what the best approach is.  I'm tempted
to remove the check in the verifier...

llvm-svn: 45073
2007-12-16 15:51:49 +00:00
Wojciech Matyjewicz 309e5a723b 1. "Upgrade" comments.
2. Using the zero-extended value of Scale and unsigned division is safe provided
   that Scale doesn't have the sign bit set.
   Previously these 2 instructions:
        %p = bitcast [100 x {i8,i8,i8}]* %x to i8*
        %q = getelementptr i8* %p, i32 -4
   were combined into:
        %q = getelementptr [100 x { i8, i8, i8 }]* %x, i32 0,
               i32 1431655764, i32 0
   which was incorrect.

llvm-svn: 44936
2007-12-12 15:21:32 +00:00
Chris Lattner d2bbbabbfb simplify some code.
llvm-svn: 44655
2007-12-06 06:25:04 +00:00
Chris Lattner 0ccb663cca move some ashr-specific code out of commonShiftTransforms into visitAShr.
llvm-svn: 44650
2007-12-06 01:59:46 +00:00
Duncan Sands ad0ea2d430 Fix PR1146: parameter attributes are no longer part of
the function type, instead they belong to functions
and function calls.  This is an updated and slightly
corrected version of Reid Spencer's original patch.
The only known problem is that auto-upgrading of
bitcode files doesn't seem to work properly (see
test/Bitcode/AutoUpgradeIntrinsics.ll).  Hopefully
a bitcode guru (who might that be? :) ) will fix it.

llvm-svn: 44359
2007-11-27 13:23:08 +00:00
Chris Lattner c00e8adfe0 Implement PR1822
llvm-svn: 44318
2007-11-25 21:27:53 +00:00
Duncan Sands 185eeac0f8 Fix PR1816. If a bitcast of a function only exists because of a
trivial difference in function attributes, allow calls to it to
be converted to direct calls.  Based on a patch by Török Edwin.
While there, move the various lists of mutually incompatible
parameters etc out of the verifier and into ParameterAttributes.h.

llvm-svn: 44315
2007-11-25 14:10:56 +00:00
Chris Lattner 0cf083815a add a comment.
llvm-svn: 44293
2007-11-23 22:35:18 +00:00
Chris Lattner 1985d96dc9 Fix PR1817.
llvm-svn: 44284
2007-11-22 23:47:13 +00:00
Chris Lattner c53b18362a Fix PR1800 by correcting mistaken logic.
llvm-svn: 44188
2007-11-16 06:04:17 +00:00
Andrew Lenharth 19ca5c7021 Better check
llvm-svn: 43897
2007-11-08 18:45:15 +00:00
Andrew Lenharth 8cf11aa330 Fix PR1780
llvm-svn: 43893
2007-11-08 17:39:28 +00:00
Chris Lattner d8515f8e80 Implement PR1777 by detecting dependent phis that
all compute the same value.

llvm-svn: 43777
2007-11-06 21:52:06 +00:00
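
A sketch of the pattern (hypothetical loop): phis whose incoming values are
pairwise identical always hold the same value, so one replaces the other.

  loop:
    %a = phi i32 [ 0, %entry ], [ %a2, %loop ]
    %b = phi i32 [ 0, %entry ], [ %b2, %loop ]
    %a2 = add i32 %a, 1
    %b2 = add i32 %b, 1    ; %b always equals %a: replace %b with %a
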
Chris Lattner 362709dff1 wrap long lines
llvm-svn: 43745
2007-11-06 01:15:27 +00:00
Dan Gohman 4decbc5002 Fix an abort in instcombine when folding creates a vector rem instruction.
llvm-svn: 43743
2007-11-05 23:16:33 +00:00
Duncan Sands 44b8721de8 Executive summary: getTypeSize -> getTypeStoreSize / getABITypeSize.
The meaning of getTypeSize was not clear - clarifying it is important
now that we have x86 long double and arbitrary precision integers.
The issue with long double is that it requires 80 bits, and this is
not a multiple of its alignment.  This gives a primitive type for
which getTypeSize differed from getABITypeSize.  For arbitrary precision
integers it is even worse: there is the minimum number of bits needed to
hold the type (eg: 36 for an i36), the maximum number of bits that will
be overwritten when storing the type (40 bits for i36) and the ABI size
(i.e. the storage size rounded up to a multiple of the alignment; 64 bits
for i36).

This patch removes getTypeSize (not really - it is still there but
deprecated to allow for a gradual transition).  Instead there is:

(1) getTypeSizeInBits - a number of bits that suffices to hold all
values of the type.  For a primitive type, this is the minimum number
of bits.  For an i36 this is 36 bits.  For x86 long double it is 80.
This corresponds to gcc's TYPE_PRECISION.

(2) getTypeStoreSizeInBits - the maximum number of bits that is
written when storing the type (or read when reading it).  For an
i36 this is 40 bits, for an x86 long double it is 80 bits.  This
is the size alias analysis is interested in (getTypeStoreSize
returns the number of bytes).  There doesn't seem to be anything
corresponding to this in gcc.

(3) getABITypeSizeInBits - this is getTypeStoreSizeInBits rounded
up to a multiple of the alignment.  For an i36 this is 64, for an
x86 long double this is 96 or 128 depending on the OS.  This is the
spacing between consecutive elements when you form an array out of
this type (getABITypeSize returns the number of bytes).  This is
TYPE_SIZE in gcc.

Since successive elements in a SequentialType (arrays, pointers
and vectors) need to be aligned, the spacing between them will be
given by getABITypeSize.  This means that the size of an array
is the length times the getABITypeSize.  It also means that GEP
computations need to use getABITypeSize when computing offsets.
Furthermore, if an alloca allocates several elements at once then
these too need to be aligned, so the size of the alloca has to be
the number of elements multiplied by getABITypeSize.  Logically
speaking this doesn't have to be the case when allocating just
one element, but it is simpler to also use getABITypeSize in this
case.  So allocas and mallocs should use getABITypeSize.  Finally,
since gcc's only notion of size is that given by getABITypeSize, if
you want to output assembler etc the same as gcc then getABITypeSize
is the size you want.

Since a store will overwrite no more than getTypeStoreSize bytes,
and a read will read no more than that many bytes, this is the
notion of size appropriate for alias analysis calculations.

In this patch I have corrected all type size uses except some of
those in ScalarReplAggregates, lib/Codegen, lib/Target (the hard
cases).  I will get around to auditing these too at some point,
but I could do with some help.

Finally, I made one change which I think wise but others might
consider pointless and suboptimal: in an unpacked struct the
amount of space allocated for a field is now given by the ABI
size rather than getTypeStoreSize.  I did this because every
other place that reserves memory for a type (eg: alloca) now
uses getABITypeSize, and I didn't want to make an exception
for unpacked structs, i.e. I did it to make things more uniform.
This only affects structs containing long doubles and arbitrary
precision integers.  If someone wants to pack these types more
tightly they can always use a packed struct.

llvm-svn: 43620
2007-11-01 20:53:16 +00:00
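
A compact restatement of the worked examples above:

    Type              getTypeSizeInBits   getTypeStoreSizeInBits   getABITypeSizeInBits
    i36               36                  40                       64
    x86 long double   80                  80                       96 or 128 (per OS)
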
Chris Lattner 74709473ed Fix InstCombine/2007-10-31-RangeCrash.ll
llvm-svn: 43596
2007-11-01 02:18:41 +00:00
Chris Lattner 55b8302dfe simplify some code by using the new isNaN predicate
llvm-svn: 43305
2007-10-24 18:54:45 +00:00
Chris Lattner c62877e9da Implement a couple of foldings for ordered and unordered comparisons,
implementing cases related to PR1738.

llvm-svn: 43289
2007-10-24 05:38:08 +00:00
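
One such folding, sketched with hypothetical operands: an ordered compare
against a non-NaN constant only tests whether the variable operand is NaN, so
a pair of them merges into one.

    %a = fcmp ord double %x, 0.0    ; true iff %x is not NaN
    %b = fcmp ord double %y, 0.0    ; true iff %y is not NaN
    %r = and i1 %a, %b
  becomes
    %r = fcmp ord double %x, %y
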
Devang Patel df49cf52e2 Try again.
Instead of loading a small global string from memory, use an
integer constant.

llvm-svn: 43148
2007-10-18 19:52:32 +00:00
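
A sketch of the transformation (hypothetical global; assumes a little-endian
target for the byte order):

    @s = internal constant [4 x i8] c"abc\00"
    %p = bitcast [4 x i8]* @s to i32*
    %v = load i32* %p    ; folds to i32 6513249 (0x00636261)
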
Evan Cheng cdcc1d0444 Reverting r43070 for now. It's causing llc test failures.
llvm-svn: 43103
2007-10-17 23:51:13 +00:00
Devang Patel 91ff13edcc Apply "Instead of loading small c string constant, use integer constant directly" transformation while processing load instruction.
llvm-svn: 43070
2007-10-17 07:24:40 +00:00
Devang Patel 8d818f5e80 Use immediate stores.
llvm-svn: 43055
2007-10-16 23:44:18 +00:00
Devang Patel bff4aea328 Achieve same result but use fewer lines of code.
llvm-svn: 42985
2007-10-15 15:31:35 +00:00
Devang Patel 371e6ca690 Dest type is always i8 *. This allows some simplification.
Do not filter memmove.

llvm-svn: 42930
2007-10-12 20:10:21 +00:00
Chris Lattner ad618f66e6 Fix a bug in my patch last night that broke InstCombine/2007-10-12-Crash.ll
llvm-svn: 42920
2007-10-12 18:05:47 +00:00
Gabor Greif 5d8f7e0cc7 eliminate warning
llvm-svn: 42892
2007-10-12 07:44:54 +00:00
Chris Lattner d8675e4915 Fix some 80 column violations.
Fix DecomposeSimpleLinearExpr to handle simple constants better.
Don't nuke gep(bitcast(allocation)) if the bitcast(allocation) will
fold the allocation.  This fixes PR1728 and Instcombine/malloc3.ll

llvm-svn: 42891
2007-10-12 05:30:59 +00:00
Devang Patel 899cc56612 Lower memcpy if it makes sense.
llvm-svn: 42864
2007-10-11 17:21:57 +00:00
Dale Johannesen 9d559cfff5 Tone down an overzealous optimization.
llvm-svn: 42582
2007-10-03 17:45:27 +00:00
Duncan Sands d31649bc59 Improve comment.
llvm-svn: 42132
2007-09-19 10:25:38 +00:00
Duncan Sands 56df7dec2b A global variable with external weak linkage can be null, and
an alias may refer to such a global variable.

llvm-svn: 42130
2007-09-19 10:10:31 +00:00
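
A sketch of why the usual "globals are non-null" fold must be blocked here
(hypothetical global):

    @w = extern_weak global i32
    %c = icmp eq i32* @w, null    ; not foldable to false: @w may resolve to null
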
Dan Gohman 2ac2652779 Instcombine x-((x/y)*y) into a remainder operator.
llvm-svn: 42035
2007-09-17 17:31:57 +00:00
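
A sketch of the fold (hypothetical operands; the signed form uses sdiv/srem
analogously):

    %q = udiv i32 %x, %y
    %m = mul i32 %q, %y
    %r = sub i32 %x, %m
  becomes
    %r = urem i32 %x, %y
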
Duncan Sands 6d5da71288 Factor the trampoline transformation into a subroutine.
llvm-svn: 42021
2007-09-17 10:26:40 +00:00
Dale Johannesen 98d3a08d8f Remove the assumption that FP's are either float or
double from some of the many places in the optimizers
it appears, and do something reasonable with x86
long double.
Make APInt::dump() public, remove newline, use it to
dump ConstantSDNode's.
Allow APFloats in FoldingSet.
Expand X86 backend handling of long doubles (conversions
to/from int, mostly).

llvm-svn: 41967
2007-09-14 22:26:36 +00:00
Chris Lattner d9111b88d1 silence a bogus gcc warning.
llvm-svn: 41949
2007-09-14 03:07:24 +00:00
Duncan Sands 9204663bcb Turn calls to trampolines into calls to the underlying
nested function.

llvm-svn: 41844
2007-09-11 14:35:41 +00:00
Chris Lattner e804567cd8 remove some dead code, this is handled by constant folding.
llvm-svn: 41819
2007-09-10 23:46:29 +00:00
Chris Lattner 85a51e0060 Don't zap back to back volatile load/stores
llvm-svn: 41759
2007-09-07 05:33:03 +00:00
Dale Johannesen bed9dc423c Next round of APFloat changes.
Use APFloat in UpgradeParser and AsmParser.
Change all references to ConstantFP to use the
APFloat interface rather than double.  Remove
the ConstantFP double interfaces.
Use APFloat functions for constant folding arithmetic
and comparisons.
(There are still way too many places APFloat is
just a wrapper around host float/double, but we're
getting there.)

llvm-svn: 41747
2007-09-06 18:13:44 +00:00
Nick Lewycky 0c5c47944a Use isTrueWhenEqual. Thanks Chris!
llvm-svn: 41741
2007-09-06 02:40:25 +00:00
Nick Lewycky b0b066eaaa When the two operands of an icmp are equal, there are five possible predicates
that would make the icmp true. Fixes PR1637.

llvm-svn: 41740
2007-09-06 01:10:22 +00:00
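
Concretely, the five predicates satisfied by equal operands are eq, ule, uge,
sle, and sge; a sketch with a hypothetical operand:

    %c = icmp ule i32 %x, %x    ; folds to true (likewise eq, uge, sle, sge)
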
Chuck Rose III 2320323647 Forgot to obey 80 column rule. Fixing that.
llvm-svn: 41725
2007-09-05 20:36:41 +00:00
Chuck Rose III e58572233d Added default parameters to GetElementPtrInstr constructor call. Visual Studio 2k5 was getting confused and was unable to compile it. Suspected compiler error.
llvm-svn: 41721
2007-09-05 16:54:38 +00:00
David Greene c656cbb8c2 Update GEP constructors to use an iterator interface to fix
GLIBCXX_DEBUG issues.

llvm-svn: 41697
2007-09-04 15:46:09 +00:00
Chris Lattner 0e258b8518 Cut off crazy computation. This helps PR1622 slightly.
llvm-svn: 41522
2007-08-28 04:23:55 +00:00
David Greene 703623d571 Update InvokeInst to work like CallInst
llvm-svn: 41506
2007-08-27 19:04:21 +00:00
Chris Lattner 99c8ee2977 Transform a load from an undef/zero global into an undef/global even if we
have complex pointer manipulation going on.  This allows us to compile
stuff like this:

__m128i foo(__m128i x){
                static const unsigned int c_0[4] = { 0, 0, 0, 0 };
                __m128i v_Zero = _mm_loadu_si128((__m128i*)c_0);
                x  = _mm_unpacklo_epi8(x,  v_Zero);
                return x;
}

into:

_foo:
        xorps   %xmm1, %xmm1
        punpcklbw       %xmm1, %xmm0
        ret

llvm-svn: 41022
2007-08-11 18:48:48 +00:00
Chris Lattner a8e4b4bc7b when we see an unaligned load from an insufficiently aligned global or
alloca, increase the alignment of the load, turning it into an aligned load.

This allows us to compile:

#include <xmmintrin.h>
__m128i foo(__m128i x){
 static const unsigned int c_0[4] = { 0, 0, 0, 0 };
	  __m128i v_Zero = _mm_loadu_si128((__m128i*)c_0);
  x  = _mm_unpacklo_epi8(x,  v_Zero);
  return x;
}

into:

_foo:
	punpcklbw	_c_0.5944, %xmm0
	ret
	.data
	.lcomm	_c_0.5944,16,4		# c_0.5944

instead of:

_foo:
	movdqu	_c_0.5944, %xmm1
	punpcklbw	%xmm1, %xmm0
	ret
	.data
	.lcomm	_c_0.5944,16,2		# c_0.5944

llvm-svn: 40971
2007-08-09 19:05:49 +00:00
Nick Lewycky 8052019a20 It's safe to fold not of fcmp.
llvm-svn: 40870
2007-08-06 20:04:16 +00:00
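
A sketch (hypothetical operands): negating an fcmp inverts its predicate, with
ordered and unordered trading places.

    %c = fcmp olt double %x, %y
    %n = xor i1 %c, true
  becomes
    %n = fcmp uge double %x, %y
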
Chris Lattner f0da7975ea at the end of instcombine, explicitly clear WorklistMap.
This shrinks it down to something small.  On the testcase
from PR1432, this speeds up instcombine from 0.7959s to 0.5000s
(a 59% speedup).

llvm-svn: 40840
2007-08-05 08:47:58 +00:00
Chandler Carruth 7132e00de7 This is the patch to provide clean intrinsic function overloading support in LLVM. It cleans up the intrinsic definitions and generally smooths the process for more complicated intrinsic writing. It will be used by the upcoming atomic intrinsics as well as vector and float intrinsics in the future.
This also changes the syntax for llvm.bswap, llvm.part.set, llvm.part.select, and llvm.ct* intrinsics. They are automatically upgraded by both the LLVM ASM reader and the bitcode reader. The test cases have been updated, with special tests added to ensure the automatic upgrading is supported.

llvm-svn: 40807
2007-08-04 01:51:18 +00:00
Chris Lattner dc2cf228ce Replacing a cast with another one does not reduce the number of
casts in the input.

llvm-svn: 40741
2007-08-02 17:23:38 +00:00
Chris Lattner 222b214be7 Disable an xform that causes an infinite loop. This fixes PR1594
llvm-svn: 40739
2007-08-02 16:56:32 +00:00
Chris Lattner 2740694450 wrap some long lines. Major offenders that are left include
gvn, gvnpre, dse, and predsimplify.  To see these, use:

  make check-line-length

llvm-svn: 40738
2007-08-02 16:53:43 +00:00
Chris Lattner b0418fc607 Enhance instcombine to be more aggressive about folding casts of
operations of casts.  This implements InstCombine/zext-fold.ll

llvm-svn: 40726
2007-08-02 06:11:14 +00:00
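
One instance of the shape involved, sketched with hypothetical operands: doing
the logical op in the narrow type turns two casts into one.

    %zx = zext i8 %x to i32
    %zy = zext i8 %y to i32
    %o  = or i32 %zx, %zy
  becomes
    %t = or i8 %x, %y
    %o = zext i8 %t to i32
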