Commit Graph

3029 Commits

Author SHA1 Message Date
Andrew Trick f53622e129 indvars test case for r135558.
llvm-svn: 135559
2011-07-20 02:14:37 +00:00
Andrew Trick c5dd3e976a indvars -disable-iv-rewrite fix: derived GEP IVs
llvm-svn: 135558
2011-07-20 02:08:58 +00:00
Eli Friedman 55d6ccbb79 PR10386: Don't try to split an edge from an indirectbr.
llvm-svn: 135534
2011-07-19 22:59:41 +00:00
Nick Lewycky 56e99c7933 Remove bogus test: for all possible inputs of %X, the 'sub nsw' is guaranteed
to perform a signed wrap. Don't rely on any particular handling of that case.

llvm-svn: 135471
2011-07-19 08:22:57 +00:00
Andrew Trick 7da2417c8a indvars: LinearFunctionTestReplace for non-canonical IVs.
For -disable-iv-rewrite, perform LFTR without generating a new
"canonical" induction variable. Instead find the "best" existing
induction variable for use in the loop exit test and compute the final
value of that IV for use in the new loop exit test. In short,
convert to a simple eq/ne exit test as long as it's cheap to do so.

llvm-svn: 135420
2011-07-18 20:32:31 +00:00
Chad Rosier c1e40f8d26 A real testcase for r135286.
llvm-svn: 135299
2011-07-15 20:58:38 +00:00
Chad Rosier b45111556d Add testcase for r135286.
llvm-svn: 135291
2011-07-15 19:06:58 +00:00
Evan Cheng 4a40a747ba Change test case, one that actually failed before my commit.
llvm-svn: 135064
2011-07-13 19:19:44 +00:00
Evan Cheng b94674b325 It's not safe to fold (fptrunc (sqrt (fpext x))) to (sqrtf x) if there is another use of sqrt. rdar://9763193
llvm-svn: 135058
2011-07-13 19:08:16 +00:00
Rafael Espindola 403256763f Don't duplicate the work done by a gep into a "bitcast" if the gep has
more than one use.

Fixes PR10322.

llvm-svn: 134883
2011-07-11 03:43:47 +00:00
Chris Lattner b1ed91f397 Land the long talked about "type system rewrite" patch. This
patch brings numerous advantages to LLVM.  One way to look at it
is through diffstat:
 109 files changed, 3005 insertions(+), 5906 deletions(-)

Removing almost 3K lines of code is a good thing.  Other advantages
include:

1. Value::getType() is a simple load that can be CSE'd, not a mutating
   union-find operation.
2. Types are uniqued and never move once created, defining away PATypeHolder.
3. Structs can be "named" now, and their name is part of the identity that
   uniques them.  This means that the compiler doesn't merge them structurally
   which makes the IR much less confusing.
4. Now that there is no way to get a cycle in a type graph without a named
   struct type, "upreferences" go away.
5. Type refinement is completely gone, which should make LTO much MUCH faster
   in some common cases with C++ code.
6. Types are now generally immutable, so we can use "Type *" instead of
   "const Type *" everywhere.

Downsides of this patch are that it removes some functions from the C API,
so people using those will have to upgrade to the (not yet added) new API.
"LLVM 3.0" is the right time to do this.

There are still some cleanups pending after this, this patch is large enough
as-is.

llvm-svn: 134829
2011-07-09 17:41:24 +00:00
Chris Lattner 2522d2df06 more tests not making the jump into the brave new world.
llvm-svn: 134820
2011-07-09 16:57:10 +00:00
Lang Hames c5c191b0a4 Added test cases for GVN signed intrinsics recognition, r134777.
llvm-svn: 134778
2011-07-09 00:36:54 +00:00
Lang Hames 29cd98fd52 Make GVN look through extractvalues for recognised intrinsics. GVN can then CSE ops that match values produced by the intrinsics.
llvm-svn: 134677
2011-07-08 01:50:54 +00:00
Andrew Trick 3239055dee indvars -disable-iv-rewrite: Added SimplifyCongruentIVs.
llvm-svn: 134530
2011-07-06 20:50:43 +00:00
Tobias Grosser 4a5d9a9c20 LICM: Do not lose alignment on promotion
The promotion code lost any alignment information when hoisting loads and
stores out of the loop. This led to incorrectly aligned memory accesses. We now
use the largest alignment we can prove to be correct.

llvm-svn: 134520
2011-07-06 19:19:55 +00:00
Jakub Staszak 3f158fdf6e Introduce "expect" intrinsic instructions.
llvm-svn: 134516
2011-07-06 18:22:43 +00:00
Benjamin Kramer 9eca5feff1 PR10267: Don't combine an equality compare with an AND into an inequality compare when the AND has more than one use.
This can pessimize code, inequalities are generally more expensive.

llvm-svn: 134379
2011-07-04 20:16:36 +00:00
Andrew Trick 6d12309475 indvars -disable-iv-rewrite: bug fix involving weird geps and related cleanup.
llvm-svn: 134306
2011-07-02 02:34:25 +00:00
Dan Gohman 54664ed714 Improve constant folding of undef for cmp and select operators.
llvm-svn: 134223
2011-07-01 01:03:43 +00:00
Dan Gohman ca8d9e1341 Improve constant folding of undef for binary operators.
llvm-svn: 134221
2011-07-01 00:42:17 +00:00
Rafael Espindola b10a0f223a Add r134057 back, but splice the predecessor after the successors phi
nodes.

Original message:
Let simplify cfg simplify bb with only debug and lifetime intrinsics.

llvm-svn: 134182
2011-06-30 20:14:24 +00:00
Andrew Trick efe89ad414 indvars -disable-iv-rewrite: handle cloning binary operators that cannot overflow.
llvm-svn: 134177
2011-06-30 19:02:17 +00:00
Andrew Trick cc68605353 indvars -disable-iv-rewrite: handle an edge case involving identity phis.
llvm-svn: 134124
2011-06-30 01:27:23 +00:00
Andrew Trick ecdd6e4c67 indvars -disable-iv-rewrite: insert new trunc instructions carefully.
llvm-svn: 134112
2011-06-29 23:03:57 +00:00
Chad Rosier 96ed721d9b Temporarily revert r134057: "Let simplify cfg simplify bb with only debug and
lifetime intrinsics" due to buildbot failures.

llvm-svn: 134071
2011-06-29 16:22:11 +00:00
Rafael Espindola 4c0dfcec7e Let simplify cfg simplify bb with only debug and lifetime intrinsics.
llvm-svn: 134057
2011-06-29 05:25:47 +00:00
Andrew Trick efe2b1963d indvars -disable-iv-rewrite: just because SCEV ignores casts doesn't
mean they can be removed.

llvm-svn: 134054
2011-06-29 03:13:40 +00:00
Andrew Trick 9083ef1918 FileCheckify and prepare for -disable-iv-rewrite.
llvm-svn: 133998
2011-06-28 06:34:10 +00:00
Nick Lewycky a61df3f843 Teach one piece of scalarrepl to handle lifetime markers. When transforming an
alloca that only holds a copy of a global and we're going to replace the users
of the alloca with that global, just nuke the lifetime intrinsics. Part of
PR10121.

llvm-svn: 133905
2011-06-27 05:40:02 +00:00
Eli Friedman 2c980fafff PR10180: Fix a instcombine crash with FP vectors.
llvm-svn: 133756
2011-06-23 20:40:23 +00:00
Jay Foad 165910fa6d Add a reduced test case for the buildbot failure (clang self-hosted
build) caused by r133435.

llvm-svn: 133509
2011-06-21 08:33:49 +00:00
Andrew Trick 69d4452f2e indvars -disable-iv-rewrite: Adds support for eliminating identity
ops.

This is a rewrite of the IV simplification algorithm used by
-disable-iv-rewrite. To avoid perturbing the default mode, I
temporarily split the driver and created SimplifyIVUsersNoRewrite. The
idea is to avoid doing opcode/pattern matching inside
IndVarSimplify. SCEV already does it. We want to optimize with the
full generality of SCEV, but optimize def-use chains top down on-demand rather
than rewriting the entire expression bottom-up. This was easy to do
for operations that SCEV can prove are identity functions. So we're now
eliminating bitmasks and zero extends this way.

A result of this rewrite is that indvars -disable-iv-rewrite no longer
requires IVUsers.

llvm-svn: 133502
2011-06-21 03:22:38 +00:00
Jay Foad 29ed2e3bdc This is an automatically reduced test case that crashed in GVN, at some
point during the development of the phi operand changes.

llvm-svn: 133436
2011-06-20 14:46:47 +00:00
Chris Lattner 8936d2bfbc Remove support for parsing the "type i32" syntax for defining a numbered
top level type without a specified number.  This syntax isn't documented
and blocks forward progress.

llvm-svn: 133371
2011-06-19 00:03:46 +00:00
Hans Wennborg 4ab4a8e63a Fix PR10103: Less code for enum type translation.
In cases such as the attached test, where the case value for a switch
destination is used in a phi node that follows the destination, it
might be better to replace that value with the condition value of the
switch, so that more blocks can be folded away with
TryToSimplifyUncondBranchFromEmptyBlock because there are fewer
conflicts in the phi node.

llvm-svn: 133344
2011-06-18 10:28:47 +00:00
Nick Lewycky 4c94631505 Add test for r133251.
llvm-svn: 133339
2011-06-18 07:23:25 +00:00
Cameron Zwarich 9601ddb2f3 When scalar replacement returns a vector type, only accept it if the vector
type's bitwidth matches the (allocated) size of the alloca. This severely
pessimizes vector scalar replacement when the only vector type being used is
something like <3 x float> on x86 or ARM whose allocated size matches a
<4 x float>.

I hope to fix some of the flawed assumptions about allocated size throughout
scalar replacement and reenable this in most cases.

llvm-svn: 133338
2011-06-18 06:17:51 +00:00
Chris Lattner 80ed9dc9e5 rip out a ton of intrinsic modernization logic from AutoUpgrade.cpp, which is
for pre-2.9 bitcode files.  We keep x86 unaligned loads, movnt, crc32, and the
target indep prefetch change.

As usual, updating the testsuite is a PITA.

llvm-svn: 133337
2011-06-18 06:05:24 +00:00
Cameron Zwarich 2a26100c87 Fix an invalid bitcast crash that occurs when doing a partial memset of a vector
alloca. Fixes part of <rdar://problem/9580800>.

llvm-svn: 133336
2011-06-18 05:47:49 +00:00
Chris Lattner 6bc5c89093 Stop accepting and ignoring attributes in function types. Attributes are applied
to functions and call/invokes, not to types.

llvm-svn: 133266
2011-06-17 17:37:13 +00:00
Chris Lattner 5756c16cdf make the asmparser reject function and type redefinitions. 'Merging' hasn't been
needed since llvm-gcc 3.4 days.

llvm-svn: 133248
2011-06-17 07:06:44 +00:00
Chris Lattner 59345c8b65 remove asmparser support for the old getresult instruction, which has been subsumed by extractvalue.
llvm-svn: 133247
2011-06-17 06:57:15 +00:00
Chris Lattner 33de427cd6 remove parser support for the obsolete "multiple return values" syntax, which
was replaced with return of a "first class aggregate".

llvm-svn: 133245
2011-06-17 06:49:41 +00:00
Chris Lattner 4649a73cc3 stop accepting begin/end around function bodies in the .ll parser, this isn't pascal anymore.
llvm-svn: 133244
2011-06-17 06:42:57 +00:00
Chris Lattner def1949c00 Remove support for using "foo" as symbols instead of %"foo". This is ancient
syntax and has been long obsolete.  As usual, updating the tests is the nasty
part of this.

llvm-svn: 133242
2011-06-17 06:36:20 +00:00
Chris Lattner b90ed2233c manually upgrade a bunch of tests to modern syntax, and remove some that
are either unreduced or only test old syntax.

llvm-svn: 133228
2011-06-17 03:14:27 +00:00
Dan Gohman 00fa9634d5 Fix ARCOpt to insert releases on both successors of an invoke rather
than trying to insert them immediately after the invoke.

llvm-svn: 133188
2011-06-16 20:57:14 +00:00
John McCall d935e9c359 The ARC language-specific optimizer. Credit to Dan Gohman.
llvm-svn: 133108
2011-06-15 23:37:01 +00:00
Stuart Hastings 351a3f881f Avoid fusing bitcasts with dynamic allocas if the amount-to-allocate
might overflow.  Re-typing the alloca to a larger type (e.g. double)
hoists a shift into the alloca, potentially exposing overflow in the
expression.  rdar://problem/9265821

llvm-svn: 132926
2011-06-13 18:48:49 +00:00
Benjamin Kramer c970849ea0 InstCombine: Fold A-b == C --> b == A-C if A and C are constants.
The backend already knew this trick.
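As an editorial illustration (not part of the commit message; function names and constants are invented, here A = 10 and C = 3), a minimal IR sketch of the fold:

define i1 @before(i32 %b) {
  %sub = sub i32 10, %b        ; A - b, with A = 10
  %cmp = icmp eq i32 %sub, 3   ; == C, with C = 3
  ret i1 %cmp
}

define i1 @after(i32 %b) {
  %cmp = icmp eq i32 %b, 7     ; b == A - C = 7
  ret i1 %cmp
}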

llvm-svn: 132915
2011-06-13 15:24:24 +00:00
Benjamin Kramer 91f914ce21 InstCombine: Shrink ((zext X) & C1) == C2 to fold away the cast if the "zext" and the "and" have one use.
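Editorial illustration (not part of the commit message; the function names and constants are invented, with X an i8, C1 = 0xF0, C2 = 0x30, and the zext and and each having a single use):

define i1 @before(i8 %x) {
  %ext = zext i8 %x to i32
  %and = and i32 %ext, 240      ; (zext X) & C1
  %cmp = icmp eq i32 %and, 48   ; == C2
  ret i1 %cmp
}

define i1 @after(i8 %x) {
  %and = and i8 %x, -16         ; X & 0xF0, cast folded away
  %cmp = icmp eq i8 %and, 48
  ret i1 %cmp
}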
llvm-svn: 132897
2011-06-12 22:48:00 +00:00
Benjamin Kramer 35159c114c Simplify code. No functionality changes, name changes aside.
llvm-svn: 132896
2011-06-12 22:47:53 +00:00
John McCall fc1ca36866 SplitCriticalEdge can sometimes split the edge from an invoke to a landing
pad, separating the exception and selector calls from the new lpad.  Teaching
it not to do that, or to properly adjust the CFG afterwards, is out of
scope because it would require the other edges to the landing pad to be split
as well (effectively).  Instead, just recover from the most likely cases
during inlining.  The best long-term solution is to change the exception
representation and commit to either requiring or not requiring the more
complex edge-splitting logic;  this is just a shorter-term hack.

llvm-svn: 132799
2011-06-09 20:06:24 +00:00
Cameron Zwarich 77a699a829 Fix PR10104 by adding a bounds check on a vector element access check. It was
assuming that all offsets are legal vector accesses, and thus trying to access
the float member of { <2 x float>, float } as the 3rd element of the first
member.

llvm-svn: 132766
2011-06-09 01:45:33 +00:00
Cameron Zwarich c3b1cc9aca Fix an asymmetry between ConvertScalar_ExtractValue and ConvertScalar_InsertValue. The
former was using the size of the entire alloca, whereas the latter was correctly using
the allocated size of the immediate type being converted (which may differ from the size
of the alloca). This fixes PR10082.

llvm-svn: 132759
2011-06-08 22:08:31 +00:00
Nick Lewycky 40b4e80ce8 This directory was missing the dg.exp needed to make the tests run. Some time since
it was added, the test has regressed, so XFAIL it.

llvm-svn: 132686
2011-06-06 20:23:00 +00:00
Bill Wendling 4f163dfed1 If the block that we're threading through is jumped to by an indirect branch,
then we don't want to set the destination in the indirect branch to the
destination. This is because the indirect branch needs its destinations to have
had their block addresses taken. This isn't so of the new critical edge that's
split during this process. If it turns out that the destination block has only
one predecessor, and that predecessor is a BB with an indirect branch, then it won't be
marked as 'used' and may be removed.
PR10072

llvm-svn: 132638
2011-06-04 09:42:04 +00:00
Dan Gohman baf1afb289 Add a testcase to demonstrate the problem where phi translation is
ignored for clobbering partial-alias loads.

llvm-svn: 132633
2011-06-04 07:05:05 +00:00
Dan Gohman a471751c24 Disable the main feature of 130180, the elimination of loads that are
redundant with partially-aliasing loads.

When computing what portion of a clobbering load value is needed,
it doesn't consider phi-translation which may have occurred
between the clobbing load and the redundant load.

llvm-svn: 132631
2011-06-04 06:48:50 +00:00
Nick Lewycky 611582401f Bail on unswitching a switch statement for a case with a critical edge. We name
which edge to split by pred/succ pair, which means that we can end up splitting
the wrong edge (by case value) in the switch statement entirely. Fixes PR10031!

llvm-svn: 132535
2011-06-03 06:27:15 +00:00
Andrew Trick 443332deca Test case pasto (failed when run with IR verifier).
llvm-svn: 132516
2011-06-02 23:57:27 +00:00
Eli Friedman 5da0ff41d7 PR10067: Add missing safety check to call return transformation in MemCpyOpt::processStore. If something accesses the dest of the "copy" between the call and the copy, the performCallSlotOptzn transformation is not valid.
llvm-svn: 132485
2011-06-02 21:24:42 +00:00
Eli Friedman b576b1675c When marking a block as being unanalyzable, use "Clobber" on the terminator instead of the first instruction in the block. This is a bit of a hack; "Clobber" isn't really the right marking in the first place. memdep doesn't really have any way of properly expressing "unanalyzable" at the moment. Using it on the terminator is much less ambiguous than using it on an arbitrary instruction, though.
In the given testcase, the "Clobber" was pointing to a load, and GVN was incorrectly assuming that meant that the "Clobber" load overlapped the load being analyzed (when they are actually unrelated).

The included testcase tests both this commit and r132434.

Part two of rdar://9429882.  (r132434 was mislabeled.)

llvm-svn: 132442
2011-06-02 00:08:52 +00:00
Stuart Hastings 2380483355 Reapply 132348 with fixes. rdar://problem/6501862
llvm-svn: 132402
2011-06-01 16:42:47 +00:00
John McCall fca7786267 First, do no harm -- even if we can't find a selector for an enclosing
landing pad, forward llvm.eh.resume calls to it instead of turning them
invalidly into invokes.

llvm-svn: 132382
2011-06-01 02:17:11 +00:00
Andrew Trick 812276eed4 scev: Better sign-extend removal. Normalize postincrement recurrences
so that their sign extended forms are congruent when no overflow occurs.

llvm-svn: 132360
2011-05-31 21:17:47 +00:00
Stuart Hastings 9d6a06d536 Revert to pacify a buildbot. rdar://problem/6501862
llvm-svn: 132351
2011-05-31 19:56:35 +00:00
Stuart Hastings 780f723309 Followup to 132316; accept arbitrary constants, add with a constant,
sub with a non-constant.  Fix comments, enlarge test case.
rdar://problem/6501862

llvm-svn: 132348
2011-05-31 19:29:55 +00:00
Stuart Hastings 8284374b07 (1 - X) * (-2) -> (X - 1) * 2, for all positive nonzero powers of 2
rdar://problem/6501862

llvm-svn: 132316
2011-05-30 20:00:33 +00:00
John McCall f19cf99097 Add the test case for phis in the outer landing pad during the inliner's
forwarding of eh.resume that I promised yesterday.

llvm-svn: 132307
2011-05-30 01:08:04 +00:00
Nick Lewycky 63353933c6 Add testcase for r132290, to check for the crasher caught by the buildbots
doing llvm-gcc selfhost (or cross).

llvm-svn: 132292
2011-05-29 19:41:14 +00:00
Nick Lewycky a3bb03e400 Obey the isVolatile bit on memory intrinsics when analyzing uses of a global
variable. Noticed by inspection.

Simulate memset in EvaluateFunction where the target of the memset and the
value we're setting are both the null value. Fixes PR10047!

llvm-svn: 132288
2011-05-29 18:41:56 +00:00
Benjamin Kramer fd53a27f99 ConstantFoldInstOperands doesn't like compares, hand it off to instsimplify instead.
Fixes PR10040.

llvm-svn: 132254
2011-05-28 10:16:58 +00:00
John McCall 046c47e970 Implement and document the llvm.eh.resume intrinsic, which is
transformed by the inliner into a branch to the enclosing landing pad
(when inlined through an invoke).  If not so optimized, it is lowered
DWARF EH preparation into a call to _Unwind_Resume (or _Unwind_SjLj_Resume
as appropriate).  Its chief advantage is that it takes both the
exception value and the selector value as arguments, meaning that there
is zero effort in recovering these;  however, the frontend is required
to pass these down, which is not actually particularly difficult.

Also document the behavior of landing pads a bit better, and make it
clearer that it's okay that personality functions don't always land at
landing pads.  This is just a fact of life.  Don't write optimizations that
rely on pushing things over an unwind edge.

llvm-svn: 132253
2011-05-28 07:45:59 +00:00
John McCall bd04b74bb2 Fix the inliner to maintain the current de facto invoke semantics:
- the selector for the landing pad must provide all available information
    about the handlers, filters, and cleanups within that landing pad
  - calls to _Unwind_Resume must be converted to branches to the enclosing
    lpad so as to avoid re-entering the unwinder when the lpad claimed it
    was going to handle the exception in some way
This is quite specific to libUnwind-based unwinding.  In an effort to not
interfere too badly with other unwinders, and with existing hacks in frontends,
this only triggers on _Unwind_Resume (not _Unwind_Resume_or_Rethrow) and does
nothing with selectors if it cannot find a selector call for either lpad.

llvm-svn: 132200
2011-05-27 18:34:38 +00:00
Benjamin Kramer 749ef5f420 InstCombine: Make switch folding with equality compares more aggressive by trying instsimplify on the arm where we know the compared value.
Stuff like "x == y ? y : x&y" now folds into "x&y".

llvm-svn: 132185
2011-05-27 13:00:16 +00:00
Chad Rosier b362884ca9 Renamed llvm.x86.sse42.crc32 intrinsics; crc64 doesn't exist.
crc32.[8|16|32] have been renamed to .crc32.32.[8|16|32] and
crc64.[8|64] have been renamed to .crc32.64.[8|64].

llvm-svn: 132163
2011-05-26 23:13:19 +00:00
Andrew Trick 7fac79e255 indvars: incremental fixes for -disable-iv-rewrite and testcases.
Use a proper worklist for use-def traversal without holding onto an
iterator. Now that we process all IV uses, we need complete logic for
reusing existing derived IV defs. See HoistStep.

llvm-svn: 132103
2011-05-26 00:46:11 +00:00
Eli Friedman 865866e7fe PR9998: ashr exact %x, 31 is not equivalent to sdiv exact %x, -2147483648.
llvm-svn: 132097
2011-05-25 23:26:20 +00:00
Andrew Trick eb3c36e69c indvars: fixed IV cloning in -disable-iv-rewrite mode with associated
cleanup and overdue test cases.

llvm-svn: 132038
2011-05-25 04:42:22 +00:00
Cameron Zwarich d7707fc911 Fix "make check" in Release by removing debug-only options from an 'opt' invocation.
llvm-svn: 131972
2011-05-24 18:26:09 +00:00
Cameron Zwarich 843bc7d673 Make LoadAndStorePromoter preserve debug info and create llvm.dbg.values when
promoting allocas to SSA variables. Fixes <rdar://problem/9479036>.

llvm-svn: 131953
2011-05-24 03:10:43 +00:00
Andrew Trick 37f0082804 FileCheck-ize a couple of IV unit tests.
llvm-svn: 131946
2011-05-24 01:02:49 +00:00
Andrew Trick 1ea0243bd0 Test case for r130799 - indvars: Added canExpandBackEdgeTakenCount.
llvm-svn: 131939
2011-05-24 00:17:53 +00:00
Chris Lattner 026f5e61f0 fix a really nasty basicaa mod/ref calculation bug that was causing miscompilation of
UnitTests/ObjC/messages-2.m with the recent optimizer improvements.

llvm-svn: 131897
2011-05-23 05:15:43 +00:00
Chris Lattner 8aff4f8efc Transform any logical shift of a power of two into an exact/NUW shift when
in a known-non-zero context.

llvm-svn: 131887
2011-05-23 00:21:50 +00:00
Chris Lattner 83791ced7b Teach valuetracking that byval arguments with a specified alignment are
aligned.

Teach memcpyopt to not give up all hope when confronted with an underaligned
memcpy feeding an overaligned byval.  If the *source* of the memcpy can be
determined to be adequately aligned, or if it can be forced to be, we can
eliminate the memcpy.

This addresses PR9794.  We now compile the example into:

define i32 @f(%struct.p* nocapture byval align 8 %q) nounwind ssp {
entry:
  %call = call i32 @g(%struct.p* byval align 8 %q) nounwind
  ret i32 %call
}

in both x86-64 and x86-32 mode.  We still don't get a tailcall though,
because tailcalls apparently can't handle byval.

llvm-svn: 131884
2011-05-23 00:03:39 +00:00
Chris Lattner 713d52364f implement PR9315, constant folding exp2 in terms of pow (since hosts without
C99 runtimes don't have exp2).

llvm-svn: 131872
2011-05-22 22:22:35 +00:00
Chris Lattner 7c99f19d9f Carve out a place in instcombine to put transformations which work knowing that their
result is non-zero.  Implement an example optimization (PR9814), which allows us to
transform:
  A / ((1 << B) >>u 2)
into:
  A >>u (B-2)

which we compile into:

_divu3:                                 ## @divu3
	leal	-2(%rsi), %ecx
	shrl	%cl, %edi
	movl	%edi, %eax
	ret

instead of:

_divu3:                                 ## @divu3
	movb	%sil, %cl
	movl	$1, %esi
	shll	%cl, %esi
	shrl	$2, %esi
	movl	%edi, %eax
	xorl	%edx, %edx
	divl	%esi, %eax
	ret

llvm-svn: 131860
2011-05-22 18:18:41 +00:00
Chris Lattner c4ca7ab7e7 Fix PR9815: I was trying to get out of "generating code and then
failing to form a memset, then having to delete it" but my approximation
isn't safe for self-recurrent loops.  Instead of doing a hack, just
do it the right way.

llvm-svn: 131858
2011-05-22 17:39:56 +00:00
Frits van Bommel ad964559ef Add a parameter to ConstantFoldTerminator() that callers can use to ask it to also clean up the condition of any conditional terminator it folds to be unconditional, if that turns the condition into dead code. This just means it calls RecursivelyDeleteTriviallyDeadInstructions() in strategic spots. It defaults to the old behavior.
I also changed -simplifycfg, -jump-threading and -codegenprepare to use this to produce slightly better code without any extra cleanup passes (AFAICT this was the only place in -simplifycfg where now-dead conditions of replaced terminators weren't being cleaned up). The only other user of this function is -sccp, but I didn't read that thoroughly enough to figure out whether it might be holding pointers to instructions that could be deleted by this.

llvm-svn: 131855
2011-05-22 16:24:18 +00:00
Chris Lattner 1a1acc2191 fix PR9856, an incorrectly conservative assertion: a global can be
"stored once" even if its address is compared.

llvm-svn: 131849
2011-05-22 07:15:13 +00:00
Chris Lattner f0d59072de fix PR9841 by having GVN not process dead loads. This was
causing it to get into infinite loops when it would widen a 
load (which necessarily leaves around dead loads).

llvm-svn: 131847
2011-05-22 07:03:34 +00:00
Chris Lattner a10327f531 remove a trivial test, make some other tests less trivial.
llvm-svn: 131846
2011-05-22 07:02:43 +00:00
Chris Lattner cc87723178 make this test less trivial.
llvm-svn: 131845
2011-05-22 06:59:33 +00:00
Nick Lewycky d60e135cfe Commit test change, forgotten as part of r131838.
llvm-svn: 131839
2011-05-22 05:31:47 +00:00
Nick Lewycky a68ec83b36 Teach the inliner to emit llvm.lifetime.start/end, to scope the local variables
of the inlinee to the code representing the original function.

llvm-svn: 131838
2011-05-22 05:22:10 +00:00
Nick Lewycky 1c8af13719 Fix grammar in test.
llvm-svn: 131831
2011-05-22 01:16:00 +00:00
Benjamin Kramer fda5dc4968 Revert "InstCombine: Turn mul.with.overflow(X, 2) into the cheaper add.with.overflow(X, X)"
It's better to do this in codegen; mul.with.overflow(X, 2) is more canonical because it has only one use of "X".

llvm-svn: 131798
2011-05-21 18:31:42 +00:00
Benjamin Kramer 691731eb9c InstCombine: Turn mul.with.overflow(X, 2) into the cheaper add.with.overflow(X, X)
llvm-svn: 131789
2011-05-21 09:22:06 +00:00
Evan Cheng e8d2e9eb35 Revert r131664 and fix it in instcombine instead. rdar://9467055
llvm-svn: 131708
2011-05-20 00:54:37 +00:00
Stuart Hastings ae012a7525 Move test to Transforms/InstCombine.
llvm-svn: 131634
2011-05-19 05:53:22 +00:00
Rafael Espindola 3f60a0b411 Add test for PR9946.
llvm-svn: 131621
2011-05-19 02:35:26 +00:00
Eli Friedman 41e509a33d More instcombine cleanup, towards improving debug line info.
llvm-svn: 131604
2011-05-18 23:58:37 +00:00
Dan Gohman 3268e4d692 When forming an ICmpZero LSRUse, normalize the non-IV operand
of the comparison, so that the resulting expression is fully
normalized. This fixes PR9939.

llvm-svn: 131576
2011-05-18 21:02:18 +00:00
Eli Friedman 49346010f8 More instcombine cleanup aimed towards improving debug line info.
llvm-svn: 131559
2011-05-18 19:57:14 +00:00
Eli Friedman 96254a0d53 Start trying to make InstCombine preserve more debug info. The idea here is to set the debug location on the IRBuilder, which will then be the right location in most cases. This should magically give many transformations debug locations, and fixing places which are missing a debug location will usually just mean changing the code creating it to use the IRBuilder.
As an example, the change to InstCombineCalls catches a common case where a call to a bitcast of a function is rewritten.

Chris, does this approach look reasonable?

llvm-svn: 131516
2011-05-18 01:28:27 +00:00
Stuart Hastings a7ae4552af Drop lli, revise test.
llvm-svn: 131452
2011-05-17 02:38:59 +00:00
Rafael Espindola 2050af838d Don't do tail calls in a function that calls setjmp. The stack might be
corrupted when setjmp returns again.

llvm-svn: 131399
2011-05-16 03:05:33 +00:00
Benjamin Kramer cb7e56e592 Disable test harder.
llvm-svn: 131363
2011-05-14 19:30:39 +00:00
Stuart Hastings 3c2fd1cf62 Disable this test while I revise it. rdar://problem/9267970
llvm-svn: 131350
2011-05-14 18:39:05 +00:00
Benjamin Kramer d96205c4e5 SimplifyCFG: Use ComputeMaskedBits to prune dead cases from switch instructions.
llvm-svn: 131345
2011-05-14 15:57:25 +00:00
Stuart Hastings 66a82b966e Avoid combining GEPs that might overflow at runtime.
rdar://problem/9267970

Patch by Julien Lerouge!

llvm-svn: 131339
2011-05-14 05:55:10 +00:00
Duncan Sands af32728a57 The comparison "max(x,y)==x" is equivalent to "x>=y". Since the max is
often expressed as "x >= y ? x : y", there is a good chance we can extract
the existing "x >= y" from it and use that as a replacement for "max(x,y)==x".
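A minimal illustrative sketch (editorial addition, not from the commit; function names are invented) of reusing the existing compare when the max is written as a select:

define i1 @before(i32 %x, i32 %y) {
  %ge  = icmp sge i32 %x, %y
  %max = select i1 %ge, i32 %x, i32 %y   ; max(x, y)
  %eq  = icmp eq i32 %max, %x            ; max(x, y) == x
  ret i1 %eq
}

define i1 @after(i32 %x, i32 %y) {
  %ge = icmp sge i32 %x, %y              ; reuse the existing "x >= y"
  ret i1 %ge
}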

llvm-svn: 131049
2011-05-07 16:56:49 +00:00
Galina Kistanova a335f5aeeb Move a few target-dependent tests to appropriate directories.
llvm-svn: 131002
2011-05-06 18:24:46 +00:00
Duncan Sands a071c82900 Fix PR9820: a read-only call differs from a load in that a load doesn't
return the pointer being dereferenced (it returns the pointee), but a call
might return the pointer itself.

llvm-svn: 130979
2011-05-06 10:30:37 +00:00
Eli Friedman 8a20e66926 PR9838: Fix transform introduced in r127064 to not trigger when only one side of the icmp is an exact shift.
llvm-svn: 130954
2011-05-05 21:59:18 +00:00
Duncan Sands a228785526 Add variations on: max(x,y) >= min(x,z) folds to true. This isn't that common,
but according to my super-optimizer there are only two missed simplifications
of -instsimplify kind when compiling bzip2, and this is one of them.  It amuses
me to have bzip2 be perfectly optimized as far as instsimplify goes!

llvm-svn: 130840
2011-05-04 16:05:05 +00:00
Duncan Sands 0a9c1246d7 Implement some basic simplifications involving min/max, for example
max(a,b) >= a -> true.  According to my super-optimizer, these are
by far the most common simplifications (of the -instsimplify kind)
that occur in the testsuite and aren't caught by -std-compile-opts.

llvm-svn: 130780
2011-05-03 19:53:10 +00:00
Duncan Sands f91c5ab341 Fix PR9579: when simplifying a compare to "true" or "false", and it was
a vector compare, generate a vector result rather than i1 (and crashing).

llvm-svn: 130706
2011-05-02 18:51:41 +00:00
Duncan Sands a3e3699c88 Move some rem transforms out of instcombine and into instsimplify.
This automagically provides a transform noticed by my super-optimizer
as occurring quite often: "rem x, (select cond, x, 1)" -> 0.

llvm-svn: 130694
2011-05-02 16:27:02 +00:00
Benjamin Kramer 9aa91b1f4e InstCombine: Turn (zext A) udiv (zext B) into (zext (A udiv B)). Same for urem or constant B.
This obviously helps a lot if the division would be turned into a libcall
(think i64 udiv on i386), but div is also one of the few remaining instructions
on modern CPUs that become more expensive when the bitwidth gets bigger.

This also helps register pressure on i386 when dividing chars, divb needs
two 8-bit parts of a 16 bit register as input where divl uses two registers.

int foo(unsigned char a) { return a/10; }
int bar(unsigned char a, unsigned char b) { return a/b; }

compiles into (x86_64)
_foo:
  imull $205, %edi, %eax
  shrl  $11, %eax
  ret
_bar:
  movzbl        %dil, %eax
  divb  %sil, %al
  movzbl        %al, %eax
  ret

llvm-svn: 130615
2011-04-30 18:16:07 +00:00
Benjamin Kramer 57b3df59b9 Use SimplifyDemandedBits on div instructions.
This folds away silly stuff like (a&255)/1000 -> 0.

llvm-svn: 130614
2011-04-30 18:16:00 +00:00
Benjamin Kramer 6a50bbd284 FileCheckize.
llvm-svn: 130613
2011-04-30 18:15:53 +00:00
Peter Collingbourne 616044acd5 SimplifyCFG: Expose phi node folding cost threshold as command line parameter
llvm-svn: 130528
2011-04-29 18:47:38 +00:00
Peter Collingbourne e3511e15e0 SimplifyCFG: Add CostRemaining parameter to DominatesMergePoint
llvm-svn: 130527
2011-04-29 18:47:31 +00:00
Peter Collingbourne 61f6602acd SimplifyCFG: Add Trunc, ZExt and SExt to the list of cheap instructions for phi node folding
llvm-svn: 130526
2011-04-29 18:47:25 +00:00
Benjamin Kramer 16f18ed7b5 InstCombine: turn (C1 << A) << C2) into (C1 << C2) << A)
Fixes PR9809.

llvm-svn: 130485
2011-04-29 08:15:41 +00:00
Chris Lattner 1777601a74 final step needed to resolve PR6627, which allows us to flatten the code down to
a nice and tidy:
  %x1 = load i32* %0, align 4
  %1 = icmp eq i32 %x1, 1179403647
  br i1 %1, label %if.then, label %if.end

instead of doing lots of loads and branches.  May the FreeBSD bootloader
long fit in its allocated space.

llvm-svn: 130416
2011-04-28 18:15:47 +00:00
Benjamin Kramer 4145c0d3b1 InstCombine: Merge "(trunc x) == C1 & (and x, CA) == C2" into a single and+icmp.
This happens when GVN widens loads. Part of PR6627.

llvm-svn: 130405
2011-04-28 16:58:40 +00:00
Chris Lattner 827a270a2a teach GVN to widen integer loads when they are overaligned, when doing a
wider load would allow elimination of subsequent loads, and when the wider
load is still a native integer type.  This eliminates a ton of loads on 
various benchmarks involving struct fields, though it is somewhat hobbled
by clang not being very aggressive about field alignment.

This is yet another step along the way towards resolving PR6627.

llvm-svn: 130390
2011-04-28 07:29:08 +00:00
Andrew Trick 29ac7b8858 Fixes PR9730: indvars: An asserting value handle still pointed to this value
Modified LinearFunctionTestReplace to push the condition on the dead
list instead of eagerly deleting it. This can cause unnecessary
IV rewrites, which should have no effect on codegen and will not be an
issue once we stop generating canonical IVs.

llvm-svn: 130340
2011-04-27 23:00:03 +00:00
Devang Patel 12bf0ab4b5 Simplify cfg inserts a call to trap when unreachable code is detected. Assign DebugLoc to this new trap instruction.
llvm-svn: 130315
2011-04-27 17:59:27 +00:00
Chris Lattner 6b96621a8a remove support for llvm.invariant.end from memdep. It is a
work-in-progress that is not progressing, and it has issues.

llvm-svn: 130247
2011-04-26 21:50:51 +00:00
Chris Lattner 029afe4787 make a couple of changes to the standard pass pipeline:
1. Only run the early (in the module pass pipe) instcombine/simplifycfg
   if the "unit at a time" passes they are cleaning up after runs.

2. Move the "clean up after the unroller" pass to the very end of the
   function-level pass pipeline.  Loop unroll uses instsimplify now,
   so it doesn't create a ton of trash.  Moving instcombine later allows
   it to clean up after opportunities are exposed by GVN, DSE, etc.

3. Introduce some phase ordering tests for things that are specifically
   intended to be simplified by the full optimizer as a whole.

This resolves PR2338, and is progress towards PR6627, which will be 
generating code that looks similar to test2.

llvm-svn: 130241
2011-04-26 20:45:33 +00:00
Chris Lattner 1b06c71668 Transform: "icmp eq (trunc (lshr(X, cst1)), cst" to "icmp (and X, mask), cst"
when X has multiple uses.  This is useful for exposing secondary optimizations,
but the X86 backend isn't ready for this when X has a single use.  For example,
this can disable load folding.

This is inching towards resolving PR6627.
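Editorial illustration (not part of the commit message; constants are invented, cst1 = 16 and cst = 5, and the multiple-uses guard on X is omitted for brevity):

define i1 @before(i32 %x) {
  %shr = lshr i32 %x, 16
  %tr  = trunc i32 %shr to i8
  %cmp = icmp eq i8 %tr, 5
  ret i1 %cmp
}

define i1 @after(i32 %x) {
  %and = and i32 %x, 16711680      ; mask = 0xFF << 16
  %cmp = icmp eq i32 %and, 327680  ; 5 << 16
  ret i1 %cmp
}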

llvm-svn: 130238
2011-04-26 20:18:20 +00:00
Chris Lattner eb045f9c02 Improve the bail-out predicate to really only kick in when phi
translation fails.  We were bailing out in some cases that would
cause us to miss GVN'ing some non-local cases away.

llvm-svn: 130206
2011-04-26 17:41:02 +00:00
Chris Lattner 6f83d06ffa Enhance MemDep: When alias analysis returns a partial alias result,
return it as a clobber.  This allows GVN to do smart things.

Enhance GVN to be smart about the case when a small load is clobbered
by a larger overlapping load.  In this case, forward the value.  This
allows us to compile stuff like this:

int test(void *P) {
  int tmp = *(unsigned int*)P;
  return tmp+*((unsigned char*)P+1);
}

into:

_test:                                  ## @test
	movl	(%rdi), %ecx
	movzbl	%ch, %eax
	addl	%ecx, %eax
	ret

which has one load.  We already handled the case where the smaller
load was from a must-aliased base pointer.

llvm-svn: 130180
2011-04-26 01:21:15 +00:00
Cameron Zwarich ca4c633489 Fix another case of <rdar://problem/9184212> that only occurs with code
generated by llvm-gcc, since llvm-gcc uses 2 i64s for passing a 4 x float
vector on ARM rather than an i64 array like Clang.

llvm-svn: 129878
2011-04-20 21:48:38 +00:00
Frits van Bommel d097212a08 Add test cases for Jay's r129641 and fix a 32-bit-centric testcase in a file with a 64-bit datalayout.
llvm-svn: 129643
2011-04-16 14:31:50 +00:00
Chris Lattner 0ab5e2cded Fix a ton of comment typos found by codespell. Patch by
Luis Felipe Strano Moraes!

llvm-svn: 129558
2011-04-15 05:18:47 +00:00
Eli Friedman 2395626605 Add an instcombine for constructs like a | -(b != c); a select is more
canonical, and generally leads to better code.  Found while looking at
an article about saturating arithmetic.

llvm-svn: 129545
2011-04-14 22:41:27 +00:00
Owen Anderson 92651ec374 Fix an infinite alternation in JumpThreading where two transforms would repeatedly undo each other. The solution is to perform more aggressive constant folding to make one of the edges just folded away rather than trying to thread it.
Fixes <rdar://problem/9284786>.

Discovered with CSmith.

llvm-svn: 129538
2011-04-14 21:35:50 +00:00
Mon P Wang 2e5528f0b2 Vectors with different numbers of elements of the same element type can have
the same allocation size but different primitive sizes (e.g., <3xi32> and
<4xi32>).  When ScalarRepl promotes them, it can't use a bit cast but
should use a shuffle vector instead.

llvm-svn: 129472
2011-04-13 21:40:02 +00:00
Dan Gohman 1c6c34834b Fix reassociate to use a worklist instead of recursing when new
reassociation opportunities are exposed. This fixes a bug where
the nested reassociation expects the IR to be consistent,
but it isn't, because the outer reassociation has disconnected
some of the operands.  rdar://9167457

llvm-svn: 129324
2011-04-12 00:11:56 +00:00
Chris Lattner e81d045d94 remove the StructRetPromotion pass. It is unused, not maintained and
has some bugs.  If this is interesting functionality, it should be 
reimplemented in the argpromotion pass.

llvm-svn: 129314
2011-04-11 23:09:44 +00:00
Eli Friedman 9cca0715aa Add back a couple checks removed by r129128; the fact that an initializer
is an array of structures doesn't imply it's a ConstantArray of
ConstantStruct.

llvm-svn: 129207
2011-04-09 09:11:09 +00:00
Chris Lattner 88974f4625 fix PR9523, a crash in looprotate on a non-canonical loop made out of indirectbr.
llvm-svn: 129203
2011-04-09 07:25:58 +00:00
Eli Friedman 17822fcde9 PR9604; try to deal with RAUW updates correctly in the AST. I'm not convinced
it's completely safe to cache the AST across LICM runs even with this fix,
but this fix can't hurt.

llvm-svn: 129198
2011-04-09 06:55:46 +00:00
Eli Friedman 4db39cefdb Test for r129190.
llvm-svn: 129197
2011-04-09 06:39:43 +00:00
Devang Patel bc3d8b212f Do not let debug info interfere with branch folding.
llvm-svn: 129114
2011-04-07 23:11:25 +00:00
Devang Patel 197c35298a While hoisting common code from if/else, hoist debug info intrinsics if they match.
llvm-svn: 129078
2011-04-07 17:27:36 +00:00
Eli Friedman c5f22a7815 PR9634: Don't unconditionally tell the AliasSetTracker that the PreheaderLoad
is equivalent to any other relevant value; it isn't true in general.
If it is equivalent, the LoopPromoter will tell the AST the equivalence.
Also, delete the PreheaderLoad if it is unused.

Chris, since you were the last one to make major changes here, can you check
that this is sane?

llvm-svn: 129049
2011-04-07 01:35:06 +00:00
Nadav Rotem cc771acd77 This testcase passed even without the fix. Added the target info to make the
test fail (without the fix). Thanks Dan.

llvm-svn: 128999
2011-04-06 11:18:29 +00:00
Nadav Rotem a069c6ce05 InstCombine optimizes gep(bitcast(x)) even when the bitcast casts away address
space info. We crash with an assert in this case. This change checks that the
address space of the bitcasted pointer is the same as the gep ptr.

llvm-svn: 128884
2011-04-05 14:29:52 +00:00
Eli Friedman 17bf4922c9 PR9446: RecursivelyDeleteTriviallyDeadInstructions can delete the instruction
after the given instruction; make sure to handle that case correctly.
(It's difficult to trigger; the included testcase involves a dead 
block, but I don't think that's a requirement.) 

While I'm here, get rid of the unnecessary warning about
SimplifyInstructionsInBlock, since it should work correctly as far as I know.

llvm-svn: 128782
2011-04-02 22:45:17 +00:00
Benjamin Kramer d121765e64 InstCombine: Turn icmp + sext into bitwise/integer ops when the input has only one unknown bit.
int test1(unsigned x) { return (x&8) ? 0 : -1; }
int test3(unsigned x) { return (x&8) ? -1 : 0; }

before (x86_64):
_test1:
	andl	$8, %edi
	cmpl	$1, %edi
	sbbl	%eax, %eax
	ret
_test3:
	andl	$8, %edi
	cmpl	$1, %edi
	sbbl	%eax, %eax
	notl	%eax
	ret

after:
_test1:
	shrl	$3, %edi
	andl	$1, %edi
	leal	-1(%rdi), %eax
	ret
_test3:
	shll	$28, %edi
	movl	%edi, %eax
	sarl	$31, %eax
	ret

llvm-svn: 128732
2011-04-01 20:09:10 +00:00
Nadav Rotem d74b72b8a9 Instcombine optimization: extractelement(cast) -> cast(extractelement)
llvm-svn: 128683
2011-03-31 22:57:29 +00:00
Benjamin Kramer 5291054ef1 InstCombine: APFloat can't perform arithmetic on PPC double doubles, don't even try.
Thanks Eli!

llvm-svn: 128676
2011-03-31 21:35:49 +00:00
Benjamin Kramer be209ab8a2 InstCombine: Fix transform to use the swapped predicate.
Thanks Frits!

llvm-svn: 128628
2011-03-31 10:46:03 +00:00
Benjamin Kramer d159d94644 InstCombine: fold fcmp (fneg x), (fneg y) -> fcmp x, y
llvm-svn: 128627
2011-03-31 10:12:22 +00:00
Benjamin Kramer a8c5d0872d InstCombine: fold fcmp pred (fneg x), C -> fcmp swap(pred) x, -C
llvm-svn: 128626
2011-03-31 10:12:15 +00:00
Benjamin Kramer cbb18e91a8 InstCombine: Shrink "fcmp (fpext x), C" to "fcmp x, C" if C can be losslessly converted to the type of x.
Fixes PR9592.

llvm-svn: 128625
2011-03-31 10:12:07 +00:00
Benjamin Kramer 2ccfbc8b71 InstCombine: fold fcmp (fpext x), (fpext y) -> fcmp x, y.
llvm-svn: 128624
2011-03-31 10:11:58 +00:00
Bill Wendling 5034159c5f * The DSE code that tested for overlapping needed to take into account the fact
that one of the numbers is signed while the other is unsigned. This could lead
  to a wrong result when the signed value was promoted to an unsigned int.

* Add the data layout line to the testcase so that it will test the appropriate
  thing.

Patch by David Terei!

llvm-svn: 128577
2011-03-30 21:37:19 +00:00
Benjamin Kramer af0ed953c5 Avoid turning a floating point division with a constant power of two into a denormal multiplication.
Some platforms may treat denormals as zero, on other platforms multiplication
with a subnormal is slower than dividing by a normal.

llvm-svn: 128555
2011-03-30 17:02:54 +00:00
Benjamin Kramer 8564e0de96 InstCombine: If the divisor of an fdiv has an exact inverse, turn it into an fmul.
Fixes PR9587.
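Editorial illustration (not part of the commit message; function names are invented): dividing by 2.0, whose inverse 0.5 is exactly representable:

define double @before(double %x) {
  %div = fdiv double %x, 2.000000e+00
  ret double %div
}

define double @after(double %x) {
  %mul = fmul double %x, 5.000000e-01   ; 0.5 is the exact inverse of 2.0
  ret double %mul
}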

llvm-svn: 128546
2011-03-30 15:42:35 +00:00
Benjamin Kramer 272f2b0044 InstCombine: Add a few missing combines for ANDs and ORs of sign bit tests.
On x86 we now compile "if (a < 0 && b < 0)" into
	testl	%edi, %esi
	js	IF.THEN

llvm-svn: 128496
2011-03-29 22:06:41 +00:00
Cameron Zwarich ff811cc475 Do some simple copy propagation through integer loads and stores when promoting
vector types. This helps a lot with inlined functions when using the ARM soft
float ABI. Fixes <rdar://problem/9184212>.

llvm-svn: 128453
2011-03-29 05:19:52 +00:00
Nick Lewycky 8544228d5a Teach the transformation that moves binary operators around selects to preserve
the subclass optional data.

llvm-svn: 128388
2011-03-27 19:51:23 +00:00
Frits van Bommel 0bb2ad2cf7 Constant folding support for calls to umul.with.overflow(), basically identical to the smul.with.overflow() code.
llvm-svn: 128379
2011-03-27 14:26:13 +00:00
Nick Lewycky 83167df787 Add a small missed optimization: turn X == C ? X : Y into X == C ? C : Y. This
removes one use of X which helps it pass the many hasOneUse() checks.

In my analysis, this turns up very often where X = A >>exact B and that can't be
simplified unless X has one use (except by increasing the lifetime of A which is
generally a performance loss).
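Editorial illustration (not from the commit message; function names and the constant C = 42 are invented); the rewrite drops one use of X:

define i32 @before(i32 %x, i32 %y) {
  %cmp = icmp eq i32 %x, 42
  %sel = select i1 %cmp, i32 %x, i32 %y   ; X == C ? X : Y
  ret i32 %sel
}

define i32 @after(i32 %x, i32 %y) {
  %cmp = icmp eq i32 %x, 42
  %sel = select i1 %cmp, i32 42, i32 %y   ; X == C ? C : Y
  ret i32 %sel
}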

llvm-svn: 128373
2011-03-27 07:30:57 +00:00
Cameron Zwarich d4174ee43e Fix a typo and add a test.
llvm-svn: 128331
2011-03-26 04:58:50 +00:00
Bill Wendling db40b5c899 PR9561: A store with a negative offset (via GEP) could erroneously say that it
completely overlaps a previous store, thus mistakenly deleting that store. Check
for this condition.

llvm-svn: 128319
2011-03-26 01:20:37 +00:00
Cameron Zwarich 10ebc189ee Fix PR9464 by correcting some math that just happened to be right in most cases
that were hit in practice.

llvm-svn: 128146
2011-03-23 05:25:55 +00:00
Anders Carlsson ee6bc70d2f Add an optimization to GlobalOpt that eliminates calls to __cxa_atexit, if the function passed is empty.
llvm-svn: 127970
2011-03-20 17:59:11 +00:00
Andrew Trick 1c4b42d00f Avoid creating canonical induction variables for non-native types.
For example, on 32-bit architecture, don't promote all uses of the IV
to 64-bits just because one use is a 64-bit cast.
Alternate implementation of the patch by Arnaud de Grandmaison.

llvm-svn: 127884
2011-03-18 16:50:32 +00:00
Eli Friedman c17c9a78aa FileCheck-ize and update test.
llvm-svn: 127845
2011-03-18 01:10:31 +00:00
Devang Patel aad34d882d Try to not lose variable's debug info during instcombine.
This is done by lowering dbg.declare intrinsic into dbg.value intrinsic.
Radar 9143931.

llvm-svn: 127834
2011-03-17 22:18:16 +00:00
Cameron Zwarich 0454253d7a Only convert allocas to scalars if it is profitable. The profitability metric I
chose is having a non-memcpy/memset use and being larger than any native integer
type. Originally I chose having an access of a size smaller than the total size
of the alloca, but this caused some minor issues on the spirit benchmark where
SRoA runs again after some inlining.

This fixes <rdar://problem/8613163>.

llvm-svn: 127718
2011-03-16 00:13:44 +00:00
Cameron Zwarich 7b0f3c6a1a Add native integer type TargetData to some existing tests.
llvm-svn: 127717
2011-03-16 00:13:40 +00:00
Cameron Zwarich 0b8cdfb6ec Do not add PHIs with no users when creating LCSSA form. Patch by Andrew Clinton.
llvm-svn: 127674
2011-03-15 07:41:25 +00:00
Eli Friedman c4414c6e92 PR9450: Make switch optimization in SimplifyCFG not dependent on the ordering
of pointers in an std::map.

llvm-svn: 127650
2011-03-15 02:23:35 +00:00
Eric Christopher 2139d3148f If we don't know how long a string is, we can't fold an _chk version to the
normal version.

Fixes rdar://9123638

llvm-svn: 127636
2011-03-15 00:25:41 +00:00
Benjamin Kramer 5acc751b6f Teach ComputeMaskedBits about sub nsw.
llvm-svn: 127548
2011-03-12 17:18:11 +00:00
Cameron Zwarich 338d362200 Roll r127459 back in:
Optimize trivial branches in CodeGenPrepare, which often get created from the
lowering of objectsize intrinsics. Unfortunately, a number of tests were relying
on llc not optimizing trivial branches, so I had to add an option to allow them
to continue to test what they originally tested.

This fixes <rdar://problem/8785296> and <rdar://problem/9112893>.

llvm-svn: 127498
2011-03-11 21:52:04 +00:00
Daniel Dunbar 94ccb27b43 Revert r127459, "Optimize trivial branches in CodeGenPrepare, which often get
created from the", it broke some GCC test suite tests.

llvm-svn: 127477
2011-03-11 19:30:30 +00:00
Benjamin Kramer 391a946fa9 ComputeMaskedBits: sub falls through to add, and sub doesn't have the same overflow semantics as add.
Should fix the selfhost failures that started with r127463.

llvm-svn: 127465
2011-03-11 14:46:49 +00:00
Benjamin Kramer 51897bcd3e InstCombine: Fix a thinko where we transformed an icmp under the assumption that it's a zero comparison when it's not.
Fixes PR9454.

llvm-svn: 127464
2011-03-11 11:37:40 +00:00
Nick Lewycky cc79973856 Teach ComputeMaskedBits about nsw on add. I don't think there's anything we can
do with nuw here, but sub and mul should be given similar treatment.
Fixes PR9343 #15!

llvm-svn: 127463
2011-03-11 09:00:19 +00:00
Cameron Zwarich cc27b3acc4 Optimize trivial branches in CodeGenPrepare, which often get created from the
lowering of objectsize intrinsics. Unfortunately, a number of tests were relying
on llc not optimizing trivial branches, so I had to add an option to allow them
to continue to test what they originally tested.

This fixes <rdar://problem/8785296> and <rdar://problem/9112893>.

llvm-svn: 127459
2011-03-11 04:54:27 +00:00
Dan Gohman 154ed49784 Fix reassociate to postpone certain instruction deletions until
after it has finished all of its reassociations, because its
habit of unlinking operands and holding them in a datastructure
while working means that it's not easy to determine when an
instruction is really dead until after all its regular work is
done. rdar://9096268.

llvm-svn: 127424
2011-03-10 19:51:54 +00:00
Benjamin Kramer b49b964b98 InstCombine: Turn umul_with_overflow into mul nuw if we can prove that it cannot overflow.
This happens a lot in clang-compiled C++ code because it adds overflow checks to operator new[]:
  unsigned *foo(unsigned n) { return new unsigned[n]; }
We can optimize away the overflow check on 64 bit targets because (uint64_t)n*4 cannot overflow.

llvm-svn: 127418
2011-03-10 18:40:14 +00:00
Benjamin Kramer 1885d21700 Fix mistyped CHECK lines.
llvm-svn: 127366
2011-03-09 22:07:31 +00:00
Devang Patel 13f8c7d48e Preserve line number information while simplifying libcalls.
llvm-svn: 127362
2011-03-09 21:27:52 +00:00
Cameron Zwarich 718918b07a Add a test case for r127320.
llvm-svn: 127321
2011-03-09 08:11:02 +00:00
Nick Lewycky 980104d1d6 Add another micro-optimization. Apologies for the lack of refactoring, but I
gave up when I realized I couldn't come up with a good name for what the
refactored function would be, to describe what it does.

This is PR9343 test12, which is test3 with arguments reordered. Whoops!

llvm-svn: 127318
2011-03-09 06:26:03 +00:00
Cameron Zwarich 3b649f4d01 Add support to scalar replacement for partial vector accesses of an alloca, e.g.
a union of a float, <2 x float>, and <4 x float>. This mostly comes up with the
use of vector intrinsics, especially in NEON when programmers know the layout of
the register file. This enables codegen to eliminate a lot of the subregister
traffic it would otherwise generate.

This commit only enables this for a small number of floating-point cases, but a
lot more integer cases. I assume this is okay for all ports, but I did not do
extensive testing of the quality of code involving i512 vectors and the like. If
there is a use case where this generates worse code than before, let me know and
we can scale it back.

This fixes <rdar://problem/9036264>.

llvm-svn: 127317
2011-03-09 05:43:05 +00:00
Eli Friedman a81a82dcaf PR9346: Prevent SimplifyDemandedBits from incorrectly introducing
INT_MIN % -1.

llvm-svn: 127306
2011-03-09 01:28:35 +00:00
Eli Friedman aac35b3fbb PR9420; an instruction before an unreachable is guaranteed not to have any
reachable uses, but there still might be uses in dead blocks.  Use the
standard solution of replacing all the uses with undef.  This is
a rare case because it's very sensitive to phase ordering in SimplifyCFG.

llvm-svn: 127299
2011-03-09 00:48:33 +00:00
Duncan Sands 7dc3d47c34 Fix PR9331. Simplified version of a patch by Jakub Staszak.
llvm-svn: 127243
2011-03-08 12:39:03 +00:00
Devang Patel 97d0be8ee1 While sinking an instruction, do not lose llvm.dbg.value intrinsic.
llvm-svn: 127214
2011-03-08 03:06:19 +00:00
Devang Patel d00c628f8f Preserve line no. info.
Radar 9097659

llvm-svn: 127182
2011-03-07 22:43:45 +00:00
Rafael Espindola 15a29867ed Add test for r127138.
llvm-svn: 127172
2011-03-07 21:28:14 +00:00
Nick Lewycky ac55c79dd6 Tweak this test. We can analyze what happens and show that we still do the
right thing, instead of merely being unable to analyze it, in which case the transform
doesn't occur.

llvm-svn: 127149
2011-03-07 02:10:18 +00:00
Nick Lewycky e467979d0a Add more analysis of the sign bit of an srem instruction. If the LHS is negative
then the result could go either way. If it's provably positive then so is the
srem. Fixes PR9343 #7!

llvm-svn: 127146
2011-03-07 01:50:10 +00:00
Nick Lewycky 92db8e8e39 ConstantInt has some getters which return ConstantInt's or ConstantVector's of
the value splatted into every element. Extend this to getTrue and getFalse
by providing new overloads that take Types that are either i1 or <N x i1>. Use
it in InstCombine to add vector support to some code, fixing PR8469!

llvm-svn: 127116
2011-03-06 03:36:19 +00:00
Nick Lewycky 9719a719c7 Thread comparisons over udiv/sdiv/ashr/lshr exact and lshr nuw/nsw whenever
possible. This goes into instcombine and instsimplify because instsimplify
doesn't need to check hasOneUse since it returns (almost exclusively) constants.

This fixes PR9343 #4 #5 and #8!

llvm-svn: 127064
2011-03-05 05:19:11 +00:00
Nick Lewycky 25cc338d88 Try once again to optimize "icmp (srem X, Y), Y" by turning the comparison into
true/false or "icmp slt/sge Y, 0".

llvm-svn: 127063
2011-03-05 04:28:48 +00:00
Nick Lewycky 41c529bd09 Revert broken srem logic from r126991.
llvm-svn: 127021
2011-03-04 19:26:08 +00:00
Nick Lewycky 8e3a79da9f Fold "icmp pred (srem X, Y), Y" like we do for urem. Handle signed comparisons
in the urem case, though not the other way around. This is enough to get #3 from
PR9343!

llvm-svn: 126991
2011-03-04 10:06:52 +00:00
Nick Lewycky 3cec6f5563 Teach instruction simplify to use constant ranges to solve problems of the form
"icmp pred %X, CI" and a number of examples where "%X = binop %Y, CI2".

Some of these cases (div and rem) used to make it through opt -O2, but the
others are probably now making code elsewhere redundant (probably instcombine).

llvm-svn: 126988
2011-03-04 07:00:57 +00:00
Richard Osborne af52c52569 Optimize fprintf -> iprintf if there are no floating point arguments
and siprintf is available on the target.

llvm-svn: 126940
2011-03-03 14:20:22 +00:00
Richard Osborne 2dfb888392 Optimize sprintf -> siprintf if there are no floating point arguments
and siprintf is available on the target.

llvm-svn: 126937
2011-03-03 14:09:28 +00:00
Richard Osborne 815de536e5 Optimize printf -> iprintf if there are no floating point arguments
and iprintf is available on the target. Currently iprintf is only
marked as being available on the XCore.

llvm-svn: 126935
2011-03-03 13:17:51 +00:00
Anders Carlsson da80afef99 Make InstCombiner::FoldAndOfICmps create a ConstantRange that's the
intersection of the LHS and RHS ConstantRanges and return "false" when
the range is empty.

This simplifies some code and catches some extra cases.

llvm-svn: 126744
2011-03-01 15:05:01 +00:00
Nick Lewycky c9d20067cd Optimize "icmp pred (urem X, Y), Y" --> true/false depending on pred. There's
more work to do here, "icmp ult (urem X, 10), 11" doesn't optimize away yet.
Fixes example 3 from PR9343!
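Editorial illustration (not part of the commit message; function names are invented): with pred = ult the compare is always true, since urem by zero is undefined and otherwise the remainder is strictly less than Y:

define i1 @before(i32 %x, i32 %y) {
  %rem = urem i32 %x, %y
  %cmp = icmp ult i32 %rem, %y   ; (X urem Y) <u Y
  ret i1 %cmp
}

define i1 @after(i32 %x, i32 %y) {
  ret i1 true
}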

llvm-svn: 126741
2011-03-01 08:15:50 +00:00
Eli Friedman 683bbc16c4 Add an obvious missing safety check to DAE::RemoveDeadArgumentsFromCallers.
llvm-svn: 126720
2011-03-01 00:33:47 +00:00
Dan Gohman 6564ca0c23 Delete obsolete test.
llvm-svn: 126680
2011-02-28 19:58:14 +00:00
Frits van Bommel 8ae07996c9 Teach SimplifyCFG that (switch (select cond, X, Y)) is better expressed as a branch.
Based on a patch by Alistair Lynn.

llvm-svn: 126647
2011-02-28 09:44:07 +00:00
Nick Lewycky 66f4f22f7b srem doesn't actually have the same resulting sign as its numerator, you could
also have a zero when numerator = denominator. Reverts parts of r126635 and
r126637.

llvm-svn: 126644
2011-02-28 09:17:39 +00:00
Nick Lewycky 174a705497 Teach InstCombine to fold "(shr exact X, Y) == 0" --> X == 0, fixing #1 from
PR9343.
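Editorial illustration (not part of the commit message; function names are invented): an exact shift shifts out no set bits, so the result is zero only when X is zero:

define i1 @before(i32 %x, i32 %y) {
  %shr = lshr exact i32 %x, %y
  %cmp = icmp eq i32 %shr, 0
  ret i1 %cmp
}

define i1 @after(i32 %x, i32 %y) {
  %cmp = icmp eq i32 %x, 0
  ret i1 %cmp
}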

llvm-svn: 126643
2011-02-28 08:31:40 +00:00
Nick Lewycky 6b445419b0 The sign of an srem instruction is the sign of its dividend (the first
argument), regardless of the divisor. Teach instcombine about this and fix
test7 in PR9343!

llvm-svn: 126635
2011-02-28 06:20:05 +00:00
Benjamin Kramer ceb5daa567 Revert "SimplifyCFG: GEPs with just one non-constant index are also cheap."
Yes, there are other types than i8* and GEPs on them can produce an add+multiply.
We don't consider that cheap enough to be speculatively executed.

llvm-svn: 126481
2011-02-25 10:33:33 +00:00
Benjamin Kramer dfdca1a14d SimplifyCFG: GEPs with just one non-constant index are also cheap.
llvm-svn: 126452
2011-02-24 23:26:09 +00:00
Benjamin Kramer 27361a7124 SimplifyCFG: GEPs with constant indices are cheap enough to be executed unconditionally.
llvm-svn: 126445
2011-02-24 22:46:11 +00:00
Chris Lattner adf38b3e09 change instcombine to not turn a call to a non-varargs bitcast of a
function prototype into a call to a varargs prototype.  We do
allow the xform if we have a definition, but otherwise we don't
want to risk that we're changing the ABI in a subtle way.  On
X86-64, for example, varargs require passing stuff in %al.

llvm-svn: 126363
2011-02-24 05:10:56 +00:00
Cameron Zwarich 826308586c Make LoopDeletion work on loops with multiple edges, as long as the incoming
values from all of the loop's exiting blocks are equal. Patch by Andrew Clinton.

llvm-svn: 126253
2011-02-22 22:25:39 +00:00
Benjamin Kramer d5d7f37beb InstCombine: Add a bunch of combines of the form x | (y ^ z).
We usually catch this kind of optimization through InstSimplify's distributive
magic, but or doesn't distribute over xor in general.

"A | ~(A | B) -> A | ~B" hits 24 times on gcc.c.

llvm-svn: 126081
2011-02-20 13:23:43 +00:00
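A minimal sketch of the new combine (not the committed test; names invented). The identity holds because A | (~A & ~B) distributes to A | ~B:

define i32 @or_not_or(i32 %A, i32 %B) {
  %or  = or i32 %A, %B
  %not = xor i32 %or, -1        ; ~(A | B)
  %res = or i32 %A, %not        ; A | ~(A | B)
  ret i32 %res                  ; becomes A | ~B
}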
Nick Lewycky c8a1569950 Teach RecursivelyDeleteDeadPHINodes to handle multiple self-references. Patch
by Andrew Clinton!

llvm-svn: 126077
2011-02-20 08:38:20 +00:00
Eli Friedman ef200db4fd PR9218: SimplifyDemandedVectorElts can return a non-null value that is not
the instruction passed in.  Make sure to account for this correctly, instead
of looping infinitely.

llvm-svn: 126058
2011-02-19 22:42:40 +00:00
Chris Lattner 72a35fb974 rewrite the memset_pattern pattern generation stuff to accept any 2/4/8/16-byte
constant, including globals.  This makes us generate much more "pretty" pattern
globals as well because it doesn't break it down to an array of bytes all the
time.

This enables us to handle stores of relocatable globals.  This kicks in about
48 times in 254.gap, giving us stuff like this:

@.memset_pattern40 = internal constant [2 x %struct.TypHeader* (%struct.TypHeader*, %struct.TypHeader*)*] [%struct.TypHeader* (%struct.TypHeader*, %struct.TypHeader*)* @IsFalse, %struct.TypHeader* (%struct.TypHeader*, %struct.TypHeader*)* @IsFalse], align 16

...
  call void @memset_pattern16(i8* %scevgep5859, i8* bitcast ([2 x %struct.TypHeader* (%struct.TypHeader*, %struct.TypHeader*)*]* @.memset_pattern40 to i8*), i64 %tmp75) nounwind

llvm-svn: 126044
2011-02-19 19:56:44 +00:00
Chris Lattner acf6b0776a Stores of null pointers should turn into memset, we weren't recognizing
them as splat values.

llvm-svn: 126041
2011-02-19 19:35:49 +00:00
Chris Lattner 0f4a64011e Implement rdar://9009151, transforming strided loop stores of
unsplatable values into memset_pattern16 when it is available
(recent darwins).  This transforms lots of strided loop stores
of ints for example, like 5 in vpr:

  Formed memset:   call void @memset_pattern16(i8* %4, i8* getelementptr inbounds ([16 x i8]* @.memset_pattern9, i32 0, i32 0), i64 %tmp25)
    from store to: {%3,+,4}<%11> at:   store i32 3, i32* %scevgep, align 4, !tbaa !4

llvm-svn: 126040
2011-02-19 19:31:39 +00:00
Duncan Sands 84653b3674 Add some transforms of the kind X-Y>X -> 0>Y which are valid when there is no
overflow.  These subsume some existing equality transforms, so zap those.

llvm-svn: 125843
2011-02-18 16:25:37 +00:00
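Sketch of one of these transforms, assuming an nsw subtraction (names hypothetical):

define i1 @sub_cmp(i32 %X, i32 %Y) {
  %d = sub nsw i32 %X, %Y       ; no signed overflow
  %c = icmp sgt i32 %d, %X      ; X - Y > X
  ret i1 %c                     ; becomes 0 > Y, i.e. icmp slt i32 %Y, 0
}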
Chris Lattner 6b88c76f13 add a testcase for r125827
llvm-svn: 125831
2011-02-18 05:05:01 +00:00
Chris Lattner 1a924e770a prevent jump threading from merging blocks when their address is
taken (and used!).  This prevents merging the blocks (invalidating
the block addresses) in a case like this:

#define _THIS_IP_  ({ __label__ __here; __here: (unsigned long)&&__here; })

void foo() {
  printf("%p\n", _THIS_IP_);
  printf("%p\n", _THIS_IP_);
  printf("%p\n", _THIS_IP_);
}

which fixes PR4151.

llvm-svn: 125829
2011-02-18 04:43:06 +00:00
Chris Lattner a8fed47eed have instcombine preserve nsw/nuw/exact when sinking
common operations through a phi. 

llvm-svn: 125790
2011-02-17 23:01:49 +00:00
Chris Lattner abb8eb2c63 fix instcombine merging GEPs through a PHI to only make the
result inbounds if all of the inputs are inbounds.

llvm-svn: 125785
2011-02-17 22:21:26 +00:00
Nadav Rotem 7cc6d12ad0 Enhance constant folding of bitcast operations on vectors of floats.
Add getAllOnesValue of FP numbers to Constants and APFloat.
Add more tests.

llvm-svn: 125776
2011-02-17 21:22:27 +00:00
Duncan Sands e522001171 Transform "A + B >= A + C" into "B >= C" if the adds do not wrap. Likewise for some
variations (some of these were already present so I unified the code).  Spotted by my
auto-simplifier as occurring a lot.

llvm-svn: 125734
2011-02-17 07:46:37 +00:00
Chris Lattner 5592071768 preserve NUW/NSW when transforming add x,x
llvm-svn: 125711
2011-02-17 02:23:02 +00:00
Chris Lattner 0ad64291d8 filecheckize
llvm-svn: 125710
2011-02-17 02:21:03 +00:00
Chris Lattner 3eb0af94c4 fix PR9215, preventing -reassociate from clearing nsw/nuw when
it swaps the LHS/RHS of a single binop.

llvm-svn: 125700
2011-02-17 01:29:24 +00:00
Nick Lewycky 038124b671 Teach PatternMatch that splat vectors could be floating point as well as
integer. Fixes PR9228!

llvm-svn: 125613
2011-02-15 23:13:23 +00:00
Nadav Rotem 67d67a0385 Fix 9216 - Endless loop in InstCombine pass.
The pattern "A&(A^B) -> A & ~B" recreated itself because ~B is
actually a xor -1.

llvm-svn: 125557
2011-02-15 07:13:48 +00:00
Devang Patel 3058398655 Do not hoist @llvm.dbg.value. Here, @llvm.dbg.value refers to a value that is modified inside the loop.
llvm-svn: 125529
2011-02-14 23:03:23 +00:00
Duncan Sands d114ab331c Teach instsimplify that X+Y>=X+Z is the same as Y>=Z if neither side overflows,
plus some variations of this.  According to my auto-simplifier this occurs a lot
but usually in combination with max/min idioms.  Because max/min aren't handled
yet this unfortunately doesn't have much effect in the testsuite.

llvm-svn: 125462
2011-02-13 17:15:40 +00:00
Nadav Rotem 0e162c57f8 Fix test
llvm-svn: 125460
2011-02-13 16:13:16 +00:00
Nadav Rotem 27b848afb0 Fix a regression from r125393;
It caused a crash in MultiSource/Benchmarks/Bullet.
Opt hit an assertion with "opt -std-compile-opts" because
Constant::getAllOnesValue doesn't know how to handle floats.

This patch added a test to reproduce the problem and a check that the
destination vector is of integer type.

Thank you Benjamin!

llvm-svn: 125459
2011-02-13 15:45:34 +00:00
Chris Lattner 333e27d74b add PR#
llvm-svn: 125455
2011-02-13 08:27:31 +00:00
Chris Lattner 43273affb9 implement instcombine folding for things like (x >> c) < 42.
We were previously simplifying divisions, but not right shifts!

llvm-svn: 125454
2011-02-13 08:07:21 +00:00
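For illustration (hypothetical names; the exact output form may differ), the shifted compare is equivalent to a compare of the unshifted value:

define i1 @shr_cmp(i32 %x) {
  %s = lshr i32 %x, 4
  %c = icmp ult i32 %s, 42      ; (x >>u 4) <u 42
  ret i1 %c                     ; equivalent to icmp ult i32 %x, 672
}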
Daniel Dunbar 210ce0feb5 SimplifyLibCalls: Add missing legalize check on various printf to puts and
putchar transforms, their return values are not compatible.

llvm-svn: 125442
2011-02-12 18:19:57 +00:00
Daniel Dunbar 76c95562bc tests: FileCheckize
llvm-svn: 125441
2011-02-12 18:19:53 +00:00
Benjamin Kramer 1800d823de Also fold (A+B) == A -> B == 0 when the add is commuted.
llvm-svn: 125411
2011-02-11 21:46:48 +00:00
Nadav Rotem 10134c33f2 Fix 9173.
Add more folding patterns to constant expressions of vector selects and vector
bitcasts.

llvm-svn: 125393
2011-02-11 19:37:55 +00:00
Cameron Zwarich 4c898c239e Add a test for the LSR issue exposed by r125254.
llvm-svn: 125325
2011-02-11 00:49:27 +00:00
Nick Lewycky ac0b62c277 Tolerate degenerate phi nodes that can occur in the middle of optimization
passes. Fixes PR9112. Patch by Jakub Staszak!

llvm-svn: 125319
2011-02-10 23:54:10 +00:00
Cameron Zwarich d8e66038f4 Rename 'loopsimplify' to 'loop-simplify'.
llvm-svn: 125317
2011-02-10 23:38:10 +00:00
Chris Lattner d86ded17ad implement the first part of PR8882: when lowering an inbounds
gep to explicit addressing, we know that none of the intermediate
computation overflows.

This could use review: it seems that the shifts certainly wouldn't
overflow, but could the intermediate adds overflow if there is a 
negative index?

Previously the testcase would instcombine to:

define i1 @test(i64 %i) {
  %p1.idx.mask = and i64 %i, 4611686018427387903
  %cmp = icmp eq i64 %p1.idx.mask, 1000
  ret i1 %cmp
}

now we get:

define i1 @test(i64 %i) {
  %cmp = icmp eq i64 %i, 1000
  ret i1 %cmp
}

llvm-svn: 125271
2011-02-10 07:11:16 +00:00
Chris Lattner 6b657aed33 Enhance a bunch of transformations in instcombine to start generating
exact/nsw/nuw shifts and have instcombine infer them when it can prove
that the relevant properties are true for a given shift without them.

Also, a variety of refactoring to use the new patternmatch logic thrown
in for good luck.  I believe that this takes care of a bunch of related
code quality issues attached to PR8862.

llvm-svn: 125267
2011-02-10 05:36:31 +00:00
Chris Lattner 98457101fc Enhance the "compare with shift" and "compare with div"
optimizations to be much more aggressive in the face of
exact/nsw/nuw div and shifts.  For example, these (which
are the same except that the first is an 'exact' sdiv):

define i1 @sdiv_icmp4_exact(i64 %X) nounwind {
  %A = sdiv exact i64 %X, -5   ; X/-5 == 0 --> x == 0
  %B = icmp eq i64 %A, 0
  ret i1 %B
}

define i1 @sdiv_icmp4(i64 %X) nounwind {
  %A = sdiv i64 %X, -5   ; X/-5 == 0 --> x == 0
  %B = icmp eq i64 %A, 0
  ret i1 %B
}

compile down to:

define i1 @sdiv_icmp4_exact(i64 %X) nounwind {
  %1 = icmp eq i64 %X, 0
  ret i1 %1
}

define i1 @sdiv_icmp4(i64 %X) nounwind {
  %X.off = add i64 %X, 4
  %1 = icmp ult i64 %X.off, 9
  ret i1 %1
}

This happens when you do something like:
  (ptr1-ptr2) == 42

where the pointers are pointers to non-unit types.

llvm-svn: 125266
2011-02-10 05:23:05 +00:00
Chris Lattner dcef03fba2 more cleanups, notably bitcast isn't used for "signed to unsigned type
conversions". :)

llvm-svn: 125265
2011-02-10 05:17:27 +00:00
Chris Lattner 9e4aa0259f Teach instsimplify some tricks about exact/nuw/nsw shifts.
improve interfaces to instsimplify to take this info.

llvm-svn: 125196
2011-02-09 17:15:04 +00:00
Chris Lattner 206b065afb merge two tests.
llvm-svn: 125195
2011-02-09 17:06:41 +00:00
Nick Lewycky 292e78c3cd When removing a function from the function set and adding it to deferred, we
could end up removing a different function than we intended because it was
functionally equivalent, then end up with a comparison of a function against
itself in the next round of comparisons (the one in the function set and the
one on the deferred list). To fix this, I introduce a choice in the form of
comparison for ComparableFunctions, either normal or "pointer only" used to
find exact Function*'s in lookups.

Also add some debugging statements.

llvm-svn: 125180
2011-02-09 06:32:02 +00:00
Benjamin Kramer 8d6a8c130b SimplifyCFG: Track the number of used icmps when turning an icmp chain into a switch. If we used only one icmp, don't turn it into a switch.
Also prevent the switch-to-icmp transform from creating identity adds, noticed by Marius Wachtler.

llvm-svn: 125056
2011-02-07 22:37:28 +00:00
Chris Lattner 6e57b15228 teach instsimplify to transform (X / Y) * Y to X
when the div is an exact udiv.

llvm-svn: 124994
2011-02-06 22:05:31 +00:00
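Minimal sketch (names invented): with an exact division the multiply just reconstructs the dividend:

define i32 @udiv_exact_mul(i32 %X, i32 %Y) {
  %d = udiv exact i32 %X, %Y    ; "exact" means %Y divides %X with no remainder
  %m = mul i32 %d, %Y
  ret i32 %m                    ; simplifies to %X
}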
Chris Lattner 9c70414551 rename test.
llvm-svn: 124993
2011-02-06 21:59:10 +00:00
Chris Lattner 35315d065b enhance vmcore to know that udiv's can be exact, and add a trivial
instcombine xform to exercise this.

Nothing forms exact udivs yet though.  This is progress on PR8862

llvm-svn: 124992
2011-02-06 21:44:57 +00:00
Anders Carlsson d21b06a0db When loading from a constant, fold inttoptr if the integer type and the resulting pointer type both have the same size.
llvm-svn: 124987
2011-02-06 20:11:56 +00:00
Benjamin Kramer 62aa46b852 SimplifyCFG: Also transform switches that represent a range comparison but are not sorted into sub+icmp.
This transforms another 1000 switches in gcc.c.

llvm-svn: 124826
2011-02-03 22:51:41 +00:00
Duncan Sands 06504025d2 Improve threading of comparisons over select instructions (spotted by my
auto-simplifier).  This has a big impact on Ada code, but not much else.
Unfortunately the impact is mostly negative!  This is due to PR9004 (aka
SCCP failing to resolve conditional branch conditions in the destination
blocks of the branch), in which simple correlated expressions are not
resolved but complicated ones are, so simplifying has a bad effect!

llvm-svn: 124788
2011-02-03 09:37:39 +00:00
Duncan Sands 5747abab10 Reenable the transform "(X*Y)/Y->X" when the multiplication is known not to
overflow (nsw flag), which was disabled because it breaks 254.gap.  I have
informed the GAP authors of the mistake in their code, and arranged for the
testsuite to use -fwrapv when compiling this benchmark.

llvm-svn: 124746
2011-02-02 20:52:00 +00:00
Benjamin Kramer f4ea1d5f79 SimplifyCFG: Turn switches into sub+icmp+branch if possible.
This makes the job of the later optzn passes easier, allowing the vast amount of
icmp transforms to chew on it.

We transform 840 switches in gcc.c, leading to a 16k byte shrink of the resulting
binary on i386-linux.

The testcase from README.txt now compiles into
  decl  %edi
  cmpl  $3, %edi
  sbbl  %eax, %eax
  andl  $1, %eax
  ret

llvm-svn: 124724
2011-02-02 15:56:22 +00:00
Dan Gohman 08d2c98c23 Fix reassociate to clear optional flags, such as nsw.
llvm-svn: 124712
2011-02-02 02:02:34 +00:00
Duncan Sands cf0ff030a8 Have m_One also match constant vectors for which every element is 1.
llvm-svn: 124655
2011-02-01 08:39:12 +00:00
Anders Carlsson f23a6da271 Recognize and simplify
(A+B) == A  ->  B == 0
A == (A+B)  ->  B == 0

llvm-svn: 124567
2011-01-30 22:01:13 +00:00
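Sketch of the first form (illustrative, hypothetical names); the fold is valid even if the add wraps, since A + B == A exactly when B == 0 in modular arithmetic:

define i1 @add_eq(i32 %A, i32 %B) {
  %s = add i32 %A, %B
  %c = icmp eq i32 %s, %A       ; (A + B) == A
  ret i1 %c                     ; becomes icmp eq i32 %B, 0
}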
Duncan Sands 2e5a58da8f Commit 124487 broke 254.gap. See if disabling the part that might be triggered
by PR9088 fixes things.

llvm-svn: 124561
2011-01-30 18:24:20 +00:00
Duncan Sands b67edc6a29 Transform (X/Y)*Y into X if the division is exact. Instcombine already knows how
to do this and more, but would only do it if X/Y had only one use.  Spotted as the
most common missed simplification in SPEC by my auto-simplifier, now that it knows
about nuw/nsw/exact flags.  This removes a bunch of multiplications from 447.dealII
and 483.xalancbmk.  It also removes a lot from tramp3d-v4, which results in much
more inlining.

llvm-svn: 124560
2011-01-30 18:03:50 +00:00
Nick Lewycky 97a2895e73 Add the select optimization recently added to instcombine to constant folding.
This is the one where one of the branches of the select is another select on
the same condition.

llvm-svn: 124547
2011-01-29 20:35:06 +00:00
Frits van Bommel c2549661af Move InstCombine's knowledge of fdiv to SimplifyInstruction().
llvm-svn: 124534
2011-01-29 15:26:31 +00:00
Duncan Sands 2e9e4f1be3 Fix typo: should have been testing that X was odd, not V.
llvm-svn: 124533
2011-01-29 13:27:00 +00:00
Evan Cheng 73c29178ac Add a test for TCE return duplication.
llvm-svn: 124527
2011-01-29 04:53:35 +00:00
Evan Cheng d983eba7dc Re-apply r124518 with fix. Watch out for invalidated iterator.
llvm-svn: 124526
2011-01-29 04:46:23 +00:00
Evan Cheng 65b8ccf6ac Revert r124518. It broke Linux self-host.
llvm-svn: 124522
2011-01-29 02:43:04 +00:00
Evan Cheng d4eff31476 Re-commit r124462 with fixes. Tail recursion elim will now dup ret into unconditional predecessor to enable TCE on demand.
llvm-svn: 124518
2011-01-29 01:29:26 +00:00
Duncan Sands 771e82a863 My auto-simplifier noticed that ((X/Y)*Y)/Y occurs several times in SPEC
benchmarks, and that it can be simplified to X/Y.  (In general you can only
simplify (Z*Y)/Y to Z if the multiplication did not overflow; if Z has the
form "X/Y" then this is the case).  This patch implements that transform and
moves some Div logic out of instcombine and into InstructionSimplify.
Unfortunately instcombine gets in the way somewhat, since it likes to change
(X/Y)*Y into X-(X rem Y), so I had to teach instcombine about this too.
Finally, thanks to the NSW/NUW flags, sometimes we know directly that "Z*Y"
does not overflow, because the flag says so, so I added that logic too.  This
eliminates a bunch of divisions and subtractions in 447.dealII, and has good
effects on some other benchmarks too.  It seems to have quite an effect on
tramp3d-v4 but it's hard to say if it's good or bad because inlining decisions
changed, resulting in massive changes all over.

llvm-svn: 124487
2011-01-28 16:51:11 +00:00
Evan Cheng aaa9606b2f Revert r124462. There are a few big regressions that I need to fix first.
llvm-svn: 124478
2011-01-28 07:12:38 +00:00
Nick Lewycky db34be0e31 Clean up the tests a little, make sure we match an instruction in the right
test.

llvm-svn: 124473
2011-01-28 05:13:17 +00:00
Nick Lewycky b074e32641 Fold select + select where both selects are on the same condition.
llvm-svn: 124469
2011-01-28 03:28:10 +00:00
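Illustrative sketch (names invented): both selects test the same condition, so the inner one contributes only its true arm:

define i32 @select_of_select(i1 %c, i32 %a, i32 %b, i32 %d) {
  %inner = select i1 %c, i32 %a, i32 %b
  %outer = select i1 %c, i32 %inner, i32 %d   ; same condition as %inner
  ret i32 %outer                              ; becomes select i1 %c, i32 %a, i32 %d
}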
Evan Cheng 417fca86c4 - Stop simplifycfg from duplicating "ret" instructions into unconditional
branches. PR8575, rdar://5134905, rdar://8911460.
- Allow codegen tail duplication to dup small return blocks after register
  allocation is done.

llvm-svn: 124462
2011-01-28 02:19:21 +00:00
Nick Lewycky 13e04aef2a Fix surprising missed optimization in mergefunc where we forgot to consider
that "i8* null" is equivalent to "i32* null".

llvm-svn: 124368
2011-01-27 08:38:19 +00:00
Duncan Sands 69bdb585b2 Fix PR9039, a use-after-free in reassociate. The issue was that the
operand being factorized (and erased) could occur several times in Ops,
resulting in freed memory being used when the next occurrence in Ops was
analyzed.

llvm-svn: 124287
2011-01-26 10:08:38 +00:00
Duncan Sands 9e9d5b25e2 In which I discover that zero+zero is zero, d'oh!
llvm-svn: 124188
2011-01-25 15:14:15 +00:00
Duncan Sands c78548d791 Turn off this test - the corresponding instsimplify logic has been
disabled.

llvm-svn: 124185
2011-01-25 12:31:43 +00:00
Duncan Sands d395108394 According to my auto-simplifier the most common missed simplifications in
optimized code are:
  (non-negative number)+(power-of-two) != 0 -> true
and
  (x | 1) != 0 -> true
Instcombine knows about the second one of course, but only does it if X|1
has only one use.  These fire thousands of times in the testsuite.

llvm-svn: 124183
2011-01-25 09:38:29 +00:00
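Sketch of the second pattern (hypothetical names): the or pins the low bit, so the value can never be zero:

define i1 @or1_ne0(i32 %x) {
  %o = or i32 %x, 1             ; low bit is known set
  %c = icmp ne i32 %o, 0
  ret i1 %c                     ; simplifies to true
}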
Nick Lewycky f1cec164ce Teach mergefunc how to emit aliases safely again -- but keep it turned off
for now. It's controlled by the HasGlobalAliases variable which is not attached
to any flag yet.

llvm-svn: 124182
2011-01-25 08:56:50 +00:00
Chris Lattner 2bcec1297e merge all the "crash tests" into crash.ll
llvm-svn: 124101
2011-01-24 03:37:34 +00:00
Chris Lattner b4017769ae fix PR9017, a bug where we'd assert when promoting in unreachable
code.

llvm-svn: 124100
2011-01-24 03:29:07 +00:00
Chris Lattner d83e7b0ff6 enhance SRoA to promote allocas that are used by PHI nodes. This often
occurs because instcombine sinks loads and inserts phis.  This kicks in 
on such apps as 175.vpr, eon, 403.gcc, xalancbmk and a bunch of times in
spec2006 in some app that uses std::deque.

This resolves the last of rdar://7339113.

llvm-svn: 124090
2011-01-24 01:07:11 +00:00
Chris Lattner a960725d18 Enhance SRoA to promote allocas that are used by selects in some
common cases.  This triggers a surprising number of times in SPEC2K6
because min/max idioms end up doing this.  For example, code from the
STL ends up looking like this to SRoA:

  %202 = load i64* %__old_size, align 8, !tbaa !3
  %203 = load i64* %__old_size, align 8, !tbaa !3
  %204 = load i64* %__n, align 8, !tbaa !3
  %205 = icmp ult i64 %203, %204
  %storemerge.i = select i1 %205, i64* %__n, i64* %__old_size
  %206 = load i64* %storemerge.i, align 8, !tbaa !3

We can now promote both the __n and the __old_size allocas.

This addresses another chunk of rdar://7339113, poor codegen on
stringswitch.

llvm-svn: 124088
2011-01-23 22:04:55 +00:00
Chris Lattner 9491dee24e Enhance SRoA to be more aggressive about scalarization of aggregate allocas
that have PHI or select uses of their element pointers.  This can often happen
when instcombine sinks two loads into a successor, inserting a phi or select.

With this patch, we can scalarize the alloca, but the pinned elements are not
yet promoted.  This is still a win for large aggregates where only one element
is used.  This fixes rdar://8904039 and part of rdar://7339113 (poor codegen
on stringswitch).

llvm-svn: 124070
2011-01-23 08:27:54 +00:00
Chris Lattner a587ab7b94 remove an old hack that avoided creating MMX datatypes. The
X86 backend has been fixed.

llvm-svn: 124064
2011-01-23 06:40:33 +00:00
Dan Gohman 19e30d5a7d Actually check memcpy lengths, instead of just commenting about
how they should be checked.

llvm-svn: 123999
2011-01-21 22:07:57 +00:00
Owen Anderson a834200dbe Just because we have determined that an (fcmp | fcmp) is true for A < B,
A == B, and A > B, does not mean we can fold it to true.  We still need to
check for A ? B (A unordered B).

llvm-svn: 123993
2011-01-21 19:39:42 +00:00
Chris Lattner b5e15d1907 fix PR9013, an infinite loop in instcombine.
llvm-svn: 123968
2011-01-21 05:29:50 +00:00
Nick Lewycky 6a083cf820 Don't try to pull vector bitcasts that change the number of elements through
a select. A vector select is pairwise on each element so we'd need a new
condition with the right number of elements to select on. Fixes PR8994.

llvm-svn: 123963
2011-01-21 02:30:43 +00:00
Nick Lewycky 39b12c059d Add a constant folding of casts from zero to zero. Fixes PR9011!
While here, I'd like to complain about how vector is not an aggregate type
according to llvm::Type::isAggregateType(), but they're listed under aggregate
types in the LangRef and zero vectors are stored as ConstantAggregateZero.

llvm-svn: 123956
2011-01-21 01:12:09 +00:00
Duncan Sands 8fb2c3827c At -O123 the early-cse pass is run before instcombine has run. According to my
auto-simplifier the transform most missed by early-cse is (zext X) != 0 -> X != 0.
This patch adds this transform and some related logic to InstructionSimplify
and removes some of the logic from instcombine (unfortunately not all because
there are several situations in which instcombine can improve things by making
new instructions, whereas instsimplify is not allowed to do this).  At -O2 this
often results in more than 15% more simplifications by early-cse, and results in
hundreds of lines of bitcode being eliminated from the testsuite.  I did see some
small negative effects in the testsuite, for example a few additional instructions
in three programs.  One program, 483.xalancbmk, got an additional 35 instructions,
which seems to be due to a function getting an additional instruction and then
being inlined all over the place.

llvm-svn: 123911
2011-01-20 13:21:55 +00:00
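The headline transform, sketched (illustrative only, names made up): zero extension preserves zero-ness, so the compare can be done on the narrow value:

define i1 @zext_ne0(i8 %X) {
  %z = zext i8 %X to i32
  %c = icmp ne i32 %z, 0        ; (zext X) != 0
  ret i1 %c                     ; becomes icmp ne i8 %X, 0
}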
Rafael Espindola fc355bc070 Add unnamed_addr when we can show that address of a global is not used.
llvm-svn: 123834
2011-01-19 16:32:21 +00:00
Duncan Sands 99589d07e9 For completeness, generalize the (X + Y) - Y -> X transform and add X - (X + 1) -> -1.
These were not recommended by my auto-simplifier since they don't fire often enough.
However they do fire from time to time, for example they remove one subtraction from
the final bitcode for 483.xalancbmk.

llvm-svn: 123755
2011-01-18 11:50:19 +00:00
Duncan Sands 9b8e2bd8ef Simplify (X<<1)-X into X. According to my auto-simplifier this is the most common missed
simplification in fully optimized code.  It occurs sporadically in the testsuite, and
many times in 403.gcc: the final bitcode has 131 fewer subtractions after this change.
The reason that the multiplies are not eliminated is the same reason that instcombine
did not catch this: they are used by other instructions (instcombine catches this with
a more general transform which in general is only profitable if the operands have only
one use).

llvm-svn: 123754
2011-01-18 09:24:58 +00:00
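A minimal sketch (hypothetical names); the identity 2*X - X == X holds in modular arithmetic, so no wrap flags are needed:

define i32 @shl1_sub(i32 %X) {
  %twoX = shl i32 %X, 1         ; X * 2
  %r = sub i32 %twoX, %X        ; 2*X - X
  ret i32 %r                    ; simplifies to %X
}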
Nick Lewycky 872a453ada Test for lazy value info's ability to prove the absence of NULLs in pointers.
llvm-svn: 123601
2011-01-16 21:57:20 +00:00
Anders Carlsson d3db83349e Teach DAE to look for functions whose arguments are unused, and change all callers to pass in an undefvalue instead.
llvm-svn: 123596
2011-01-16 21:25:33 +00:00
Rafael Espindola 751677a040 Don't merge two constants if we care about the address of both.
This fixes the original testcase in PR8927. It also causes a clang
binary built with a patched clang to increase in size by 0.21%.

We can probably get some of the size back by writing a pass that
detects that a global never has its pointer compared and adds
unnamed_addr to it (maybe extend global opt). It is also possible that
there are some other cases clang could add unnamed_addr to.

I will investigate extending globalopt next.

llvm-svn: 123584
2011-01-16 17:05:09 +00:00
Owen Anderson ec3b10fc56 Reduce and merge testcases.
llvm-svn: 123579
2011-01-16 09:13:31 +00:00
Chris Lattner e5f8de8639 fix PR8932, a case where arg promotion could infinitely promote.
llvm-svn: 123574
2011-01-16 08:09:24 +00:00
Chris Lattner 6fab2e9418 if an alloca is only ever accessed as a unit, and is accessed with load/store instructions,
then don't try to decimate it into its individual pieces.  This will just make a mess of the
IR and is pointless if none of the elements are individually accessed.  This was generating
really terrible code for std::bitset (PR8980) because it happens to be lowered by clang
as an {[8 x i8]} structure instead of {i64}.

The testcase now is optimized to:

define i64 @test2(i64 %X) {
  br label %L2

L2:                                               ; preds = %0
  ret i64 %X
}

before we generated:

define i64 @test2(i64 %X) {
  %sroa.store.elt = lshr i64 %X, 56
  %1 = trunc i64 %sroa.store.elt to i8
  %sroa.store.elt8 = lshr i64 %X, 48
  %2 = trunc i64 %sroa.store.elt8 to i8
  %sroa.store.elt9 = lshr i64 %X, 40
  %3 = trunc i64 %sroa.store.elt9 to i8
  %sroa.store.elt10 = lshr i64 %X, 32
  %4 = trunc i64 %sroa.store.elt10 to i8
  %sroa.store.elt11 = lshr i64 %X, 24
  %5 = trunc i64 %sroa.store.elt11 to i8
  %sroa.store.elt12 = lshr i64 %X, 16
  %6 = trunc i64 %sroa.store.elt12 to i8
  %sroa.store.elt13 = lshr i64 %X, 8
  %7 = trunc i64 %sroa.store.elt13 to i8
  %8 = trunc i64 %X to i8
  br label %L2

L2:                                               ; preds = %0
  %9 = zext i8 %1 to i64
  %10 = shl i64 %9, 56
  %11 = zext i8 %2 to i64
  %12 = shl i64 %11, 48
  %13 = or i64 %12, %10
  %14 = zext i8 %3 to i64
  %15 = shl i64 %14, 40
  %16 = or i64 %15, %13
  %17 = zext i8 %4 to i64
  %18 = shl i64 %17, 32
  %19 = or i64 %18, %16
  %20 = zext i8 %5 to i64
  %21 = shl i64 %20, 24
  %22 = or i64 %21, %19
  %23 = zext i8 %6 to i64
  %24 = shl i64 %23, 16
  %25 = or i64 %24, %22
  %26 = zext i8 %7 to i64
  %27 = shl i64 %26, 8
  %28 = or i64 %27, %25
  %29 = zext i8 %8 to i64
  %30 = or i64 %29, %28
  ret i64 %30
}

In this case, instcombine was able to eliminate the nonsense, but in PR8980 enough
PHIs are in play that instcombine backs off.  It's better to not generate this stuff
in the first place.

llvm-svn: 123571
2011-01-16 06:18:28 +00:00
Chris Lattner d55581ded8 enhance FoldOpIntoPhi in instcombine to try harder when a phi has
multiple uses.  In some cases, all the uses are the same operation,
so instcombine can go ahead and promote the phi.  In the testcase
this pushes an add out of the loop.

llvm-svn: 123568
2011-01-16 05:28:59 +00:00
Owen Anderson 4e54efd625 Improve the safety of my globalopt enhancement by ensuring that the bitcast
of the stored value to the new store type is always.  Also, add a testcase.

llvm-svn: 123563
2011-01-16 04:33:33 +00:00
Chris Lattner 08f43456c9 fix PR8983, a broken assertion.
llvm-svn: 123562
2011-01-16 03:43:53 +00:00
Chris Lattner 1e209b87ad remove the partial specialization pass. It is unmaintained and has bugs.
llvm-svn: 123554
2011-01-16 00:27:10 +00:00
Nick Lewycky 0296a481f9 Make constmerge a two-pass algorithm so that it won't miss merging
opportunities. Fixes PR8978.

llvm-svn: 123541
2011-01-15 18:14:21 +00:00
Chris Lattner af26390790 temporarily revert r123526. While working on a follow-on patch I
realize that ConstantFoldTerminator doesn't preserve dominfo.

llvm-svn: 123527
2011-01-15 07:51:19 +00:00
Chris Lattner 8df83c4a24 fix rdar://8785296 - -fcatch-undefined-behavior generates inefficient code
The basic issue is that isel (very reasonably!) expects conditional branches
to be folded, so CGP leaving around a bunch of dead computation feeding
conditional branches isn't such a good idea.  Just fold branches on constants
into unconditional branches.

llvm-svn: 123526
2011-01-15 07:36:13 +00:00
Chris Lattner 1b93be501d Now that instruction optzns can update the iterator as they go, we can
have objectsize folding recursively simplify away their result when it
folds.  It is important to catch this here, because otherwise we won't
eliminate the cross-block values at isel and other times.

llvm-svn: 123524
2011-01-15 07:25:29 +00:00
Chris Lattner 9c10d587f6 implement an instcombine xform that canonicalizes casts outside of and-with-constant operations.
This fixes rdar://8808586 which observed that we used to compile:


union xy {
        struct x { _Bool b[15]; } x;
        __attribute__((packed))
        struct y {
                __attribute__((packed)) unsigned long b0to7;
                __attribute__((packed)) unsigned int b8to11;
                __attribute__((packed)) unsigned short b12to13;
                __attribute__((packed)) unsigned char b14;
        } y;
};

struct x
foo(union xy *xy)
{
        return xy->x;
}

into:

_foo:                                   ## @foo
	movq	(%rdi), %rax
	movabsq	$1095216660480, %rcx    ## imm = 0xFF00000000
	andq	%rax, %rcx
	movabsq	$-72057594037927936, %rdx ## imm = 0xFF00000000000000
	andq	%rax, %rdx
	movzbl	%al, %esi
	orq	%rdx, %rsi
	movq	%rax, %rdx
	andq	$65280, %rdx            ## imm = 0xFF00
	orq	%rsi, %rdx
	movq	%rax, %rsi
	andq	$16711680, %rsi         ## imm = 0xFF0000
	orq	%rdx, %rsi
	movl	%eax, %edx
	andl	$-16777216, %edx        ## imm = 0xFFFFFFFFFF000000
	orq	%rsi, %rdx
	orq	%rcx, %rdx
	movabsq	$280375465082880, %rcx  ## imm = 0xFF0000000000
	movq	%rax, %rsi
	andq	%rcx, %rsi
	orq	%rdx, %rsi
	movabsq	$71776119061217280, %r8 ## imm = 0xFF000000000000
	andq	%r8, %rax
	orq	%rsi, %rax
	movzwl	12(%rdi), %edx
	movzbl	14(%rdi), %esi
	shlq	$16, %rsi
	orl	%edx, %esi
	movq	%rsi, %r9
	shlq	$32, %r9
	movl	8(%rdi), %edx
	orq	%r9, %rdx
	andq	%rdx, %rcx
	movzbl	%sil, %esi
	shlq	$32, %rsi
	orq	%rcx, %rsi
	movl	%edx, %ecx
	andl	$-16777216, %ecx        ## imm = 0xFFFFFFFFFF000000
	orq	%rsi, %rcx
	movq	%rdx, %rsi
	andq	$16711680, %rsi         ## imm = 0xFF0000
	orq	%rcx, %rsi
	movq	%rdx, %rcx
	andq	$65280, %rcx            ## imm = 0xFF00
	orq	%rsi, %rcx
	movzbl	%dl, %esi
	orq	%rcx, %rsi
	andq	%r8, %rdx
	orq	%rsi, %rdx
	ret

We now compile this into:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	movzwl	12(%rdi), %eax
	movzbl	14(%rdi), %ecx
	shlq	$16, %rcx
	orl	%eax, %ecx
	shlq	$32, %rcx
	movl	8(%rdi), %edx
	orq	%rcx, %rdx
	movq	(%rdi), %rax
	ret

A small improvement :-)

llvm-svn: 123520
2011-01-15 06:32:33 +00:00
Duncan Sands d6f1a9584d Turn X-(X-Y) into Y. According to my auto-simplifier this is the most common
simplification present in fully optimized code (I think instcombine fails to
transform some of these when "X-Y" has more than one use).  Fires here and
there all over the test-suite, for example it eliminates 8 subtractions in
the final IR for 445.gobmk, 2 subs in 447.dealII, 2 in paq8p etc.

llvm-svn: 123442
2011-01-14 15:26:10 +00:00
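Sketch of the pattern (names invented); again the identity holds modulo 2^n, with no flags required:

define i32 @sub_of_sub(i32 %X, i32 %Y) {
  %d = sub i32 %X, %Y
  %r = sub i32 %X, %d           ; X - (X - Y)
  ret i32 %r                    ; simplifies to %Y
}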
Duncan Sands 571fd9a606 Factorize common code out of the InstructionSimplify shift logic. Add in
threading of shifts over selects and phis while there.  This fires here and
there in the testsuite, to not much effect.  For example when compiling spirit
it fires 5 times, during early-cse, resulting in 6 more cse simplifications,
and 3 more terminators being folded by jump threading, but the final bitcode
doesn't change in any interesting way: other optimizations would have caught
the opportunity anyway, only later.

llvm-svn: 123441
2011-01-14 14:44:12 +00:00
Duncan Sands c3eb0f4b2e Rename this test.
llvm-svn: 123440
2011-01-14 14:16:33 +00:00
Chris Lattner 5e0fef8531 relax testcase a bit.
llvm-svn: 123433
2011-01-14 07:46:33 +00:00
Duncan Sands 7f60dc1eb0 Move some shift transforms out of instcombine and into InstructionSimplify.
While there, I noticed that the transform "undef >>a X -> undef" was wrong.
For example, if X is 2 then the top two bits of the result must be equal, so the
result cannot be just any value.  I fixed this in the constant folder as well.  Also, I made
the transform for "X << undef" stronger: it now folds to undef always, even
though X might be zero.  This is in accordance with the LangRef, but I must
admit that it is fairly aggressive.  Also, I added "i32 X << 32 -> undef"
following the LangRef and the constant folder, likewise fairly aggressive.

llvm-svn: 123417
2011-01-14 00:37:45 +00:00
Bob Wilson 08713d3c5f Extend SROA to handle arrays accessed as homogeneous structs and vice versa.
This is a minor extension of SROA to handle a special case that is
important for some ARM NEON operations.  Some of the NEON intrinsics
return multiple values, which are handled as struct types containing
multiple elements of the same vector type.  The corresponding return
types declared in the arm_neon.h header have equivalent arrays.  We
need SROA to recognize that it can split up those arrays and structs
into separate vectors, even though they are not always accessed with
the same type.  SROA already handles loads and stores of an entire
alloca by using insertvalue/extractvalue to access the individual
pieces, and that code works the same regardless of whether the type
is a struct or an array.  So, all that needs to be done is to check
for compatible arrays and homogeneous structs.

llvm-svn: 123381
2011-01-13 17:45:11 +00:00
Bob Wilson 12eec40c83 Make SROA more aggressive with allocas containing padding.
SROA only split up structs and arrays one level at a time, so padding can
only cause trouble if it is located in between the struct or array elements.

llvm-svn: 123380
2011-01-13 17:45:08 +00:00
Duncan Sands 8d25a7c3a0 The most common simplification missed by instsimplify in unoptimized bitcode
is "X != 0 -> X" when X is a boolean.  This occurs a lot because of the way
llvm-gcc converts gcc's conditional expressions.  Add this, and a few other
similar transforms for completeness.

llvm-svn: 123372
2011-01-13 08:56:29 +00:00
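Sketch of the headline transform (hypothetical names): comparing an i1 against false is the identity:

define i1 @bool_ne_false(i1 %X) {
  %c = icmp ne i1 %X, false     ; X != 0 where X is already a boolean
  ret i1 %c                     ; simplifies to %X
}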
Chris Lattner dd5f60b7a7 revert 123144, reenabling the rest of memset formation.
llvm-svn: 123302
2011-01-12 03:25:15 +00:00
Chris Lattner 654098f411 revert r123146 which disabled code that wasn't the root cause
of the bootstrap miscompare issue.

llvm-svn: 123299
2011-01-12 01:52:23 +00:00
Chris Lattner 054d2a8525 merge tests into one crash.ll test.
llvm-svn: 123220
2011-01-11 07:50:07 +00:00
Chris Lattner 63fe78de68 remove a bogus assertion: the latch block of a loop is not
necessarily an uncond branch to the header.  This fixes
PR8955 (the assertion tripping).

llvm-svn: 123219
2011-01-11 07:47:59 +00:00
Chandler Carruth b1e7f557b7 Teach constant folding to perform conversions from constant floating
point values to their integer representation through the SSE intrinsic
calls. This is the last part of a README.txt entry for which I have real
world examples.

llvm-svn: 123206
2011-01-11 01:07:24 +00:00
Chandler Carruth fdf4969149 FileCheck-ize a test, and move a no-longer calling test case to another
file and make it actually test something...

llvm-svn: 123205
2011-01-11 01:07:20 +00:00
Owen Anderson d490c2d2ae Fix a random missed optimization by making InstCombine more aggressive when determining which bits are demanded by
a comparison against a constant.

llvm-svn: 123203
2011-01-11 00:36:45 +00:00
Chandler Carruth cf414cf0a6 Teach instcombine about the rest of the SSE and SSE2 conversion
intrinsics element dependencies. Reviewed by Nick.

llvm-svn: 123161
2011-01-10 07:19:37 +00:00
Chandler Carruth 7bb282ebb1 Fold two related tests into the newly FileCheck-ized test, migrating
them to FileCheck as well.

llvm-svn: 123154
2011-01-10 02:53:58 +00:00
Chandler Carruth ef7aac5961 Clean up and FileCheck-ize a test.
llvm-svn: 123153
2011-01-10 02:53:54 +00:00
Chris Lattner ec1387cf4b fix typo
llvm-svn: 123148
2011-01-10 02:33:34 +00:00
Chris Lattner 4662bd4b13 another (more) aggressive attempt to bring llvm-gcc-i386-linux-selfhost
back to life.

llvm-svn: 123146
2011-01-10 00:47:34 +00:00
Chris Lattner 1017fa6746 temporarily disable memset formation from memsets in an effort to restore buildbot stability.
llvm-svn: 123144
2011-01-09 23:52:48 +00:00
Tobias Grosser cc21c4aa98 Instcombine: Fix pattern where the sext did not dominate the icmp using it
llvm-svn: 123121
2011-01-09 16:00:11 +00:00
Chris Lattner 9a1d63ba9f Merge memsets followed by neighboring memsets and other stores into
larger memsets.  Among other things, this fixes rdar://8760394 and
allows us to handle "Example 2" from http://blog.regehr.org/archives/320,
compiling it into a single 4096-byte memset:

_mad_synth_mute:                        ## @mad_synth_mute
## BB#0:                                ## %entry
	pushq	%rax
	movl	$4096, %esi             ## imm = 0x1000
	callq	___bzero
	popq	%rax
	ret

llvm-svn: 123089
2011-01-08 21:19:19 +00:00
Chris Lattner 5120ebf184 fix an issue in IsPointerOffset that prevented us from recognizing that
P and P+1 are relative to the same base pointer.

llvm-svn: 123087
2011-01-08 21:07:56 +00:00
Chris Lattner 4dc1fd938f enhance memcpyopt to merge a store and a subsequent
memset into a single larger memset.

llvm-svn: 123086
2011-01-08 20:54:51 +00:00
Chris Lattner 9dbbc49f74 merge two tests and filecheckify
llvm-svn: 123082
2011-01-08 20:27:22 +00:00
Chris Lattner 59c82f850d When loop rotation happens, it is *very* common for the duplicated condbr
to be foldable into an uncond branch.  When this happens, we can make a
much simpler CFG for the loop, which is important for nested loop cases
where we want the outer loop to be aggressively optimized.

Handle this case more aggressively.  For example, previously on
phi-duplicate.ll we would get this:


define void @test(i32 %N, double* %G) nounwind ssp {
entry:
  %cmp1 = icmp slt i64 1, 1000
  br i1 %cmp1, label %bb.nph, label %for.end

bb.nph:                                           ; preds = %entry
  br label %for.body

for.body:                                         ; preds = %bb.nph, %for.cond
  %j.02 = phi i64 [ 1, %bb.nph ], [ %inc, %for.cond ]
  %arrayidx = getelementptr inbounds double* %G, i64 %j.02
  %tmp3 = load double* %arrayidx
  %sub = sub i64 %j.02, 1
  %arrayidx6 = getelementptr inbounds double* %G, i64 %sub
  %tmp7 = load double* %arrayidx6
  %add = fadd double %tmp3, %tmp7
  %arrayidx10 = getelementptr inbounds double* %G, i64 %j.02
  store double %add, double* %arrayidx10
  %inc = add nsw i64 %j.02, 1
  br label %for.cond

for.cond:                                         ; preds = %for.body
  %cmp = icmp slt i64 %inc, 1000
  br i1 %cmp, label %for.body, label %for.cond.for.end_crit_edge

for.cond.for.end_crit_edge:                       ; preds = %for.cond
  br label %for.end

for.end:                                          ; preds = %for.cond.for.end_crit_edge, %entry
  ret void
}

Now we get the much nicer:

define void @test(i32 %N, double* %G) nounwind ssp {
entry:
  br label %for.body

for.body:                                         ; preds = %entry, %for.body
  %j.01 = phi i64 [ 1, %entry ], [ %inc, %for.body ]
  %arrayidx = getelementptr inbounds double* %G, i64 %j.01
  %tmp3 = load double* %arrayidx
  %sub = sub i64 %j.01, 1
  %arrayidx6 = getelementptr inbounds double* %G, i64 %sub
  %tmp7 = load double* %arrayidx6
  %add = fadd double %tmp3, %tmp7
  %arrayidx10 = getelementptr inbounds double* %G, i64 %j.01
  store double %add, double* %arrayidx10
  %inc = add nsw i64 %j.01, 1
  %cmp = icmp slt i64 %inc, 1000
  br i1 %cmp, label %for.body, label %for.end

for.end:                                          ; preds = %for.body
  ret void
}

With all of these recent changes, we are now able to compile:

void foo(char *X) {
 for (int i = 0; i != 100; ++i) 
   for (int j = 0; j != 100; ++j)
     X[j+i*100] = 0;
}

into a single memset of 10000 bytes.  This series of changes
should also be helpful for other nested loop scenarios as well.

llvm-svn: 123079
2011-01-08 19:59:06 +00:00
Chris Lattner 063dca0f6a Three major changes:
1. Rip out LoopRotate's domfrontier updating code.  It isn't
   needed now that LICM doesn't use DF and it is super complex
   and gross.
2. Make DomTree updating code a lot simpler and faster.  The 
   old loop over all the blocks was just to find a block??
3. Change the code that inserts the new preheader to just use
   SplitCriticalEdge instead of doing an overcomplex 
   reimplementation of it.

No behavior change, except for the name of the inserted preheader.

llvm-svn: 123072
2011-01-08 18:52:51 +00:00
Frits van Bommel 6a1fb8f235 Fix a bug in r123034 (trying to sext/zext non-integers) and clean up a little.
llvm-svn: 123061
2011-01-08 10:51:36 +00:00
Chris Lattner 8c5defd0b0 Have loop-rotate simplify instructions (yay instsimplify!) as it clones
them into the loop preheader, eliminating silly instructions like
"icmp i32 0, 100" in fixed tripcount loops.  This also better exposes the 
bigger problem with loop rotate that I'd like to fix: once this has been
folded, the duplicated conditional branch *often* turns into an uncond branch.

Not aggressively handling this is pessimizing later loop optimizations 
somethin' fierce by making "dominates all exit blocks" checks fail.

llvm-svn: 123060
2011-01-08 08:24:46 +00:00
Tobias Grosser fc3d7f664b InstCombine: Match min/max hidden by sext/zext
X = sext x; x >s c ? X : C+1 --> X = sext x; X <s C+1 ? C+1 : X
X = sext x; x <s c ? X : C-1 --> X = sext x; X >s C-1 ? C-1 : X
X = zext x; x >u c ? X : C+1 --> X = zext x; X <u C+1 ? C+1 : X
X = zext x; x <u c ? X : C-1 --> X = zext x; X >u C-1 ? C-1 : X
X = sext x; x >u c ? X : C+1 --> X = sext x; X <u C+1 ? C+1 : X
X = sext x; x <u c ? X : C-1 --> X = sext x; X >u C-1 ? C-1 : X

Instead of calculating this with mixed types promote all to the
larger type. This enables scalar evolution to analyze this
expression. PR8866

llvm-svn: 123034
2011-01-07 21:33:14 +00:00
Benjamin Kramer 134cde912a Revert 122959, it needs more thought. Add it back to README.txt with additional notes.
llvm-svn: 123030
2011-01-07 20:42:20 +00:00
Benjamin Kramer ae67cc13a9 InstCombine: Turn _chk functions into the "unsafe" variant if length and max length are equal.
This happens when we take the (non-constant) length from a malloc.

llvm-svn: 122961
2011-01-06 14:22:52 +00:00
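Illustrative sketch only (not the committed test; it assumes the usual __memcpy_chk signature of dst, src, length, object size): when the length and the known object size are the same value, the bounds check can never fail:

declare i8* @__memcpy_chk(i8*, i8*, i64, i64)
declare noalias i8* @malloc(i64)

define i8* @chk_to_plain(i8* %src, i64 %len) {
  %dst = call i8* @malloc(i64 %len)
  ; copy length and object size are the same (non-constant) value taken from malloc,
  ; so the runtime check can never fail and the plain memcpy can be used instead
  %r = call i8* @__memcpy_chk(i8* %dst, i8* %src, i64 %len, i64 %len)
  ret i8* %r
}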
Benjamin Kramer 799b011276 InstCombine: If we call llvm.objectsize on a malloc call we can replace it with the size passed to malloc.
llvm-svn: 122959
2011-01-06 13:11:05 +00:00
Benjamin Kramer a76cc117e0 InstCombine: Teach llvm.objectsize folding to look through GEPs.
llvm-svn: 122958
2011-01-06 13:07:49 +00:00
Chris Lattner 5858e091a6 implement constant folding support for an exotic constant expr:
ret i64 ptrtoint (i8* getelementptr ([1000 x i8]* @X, i64 1, i64 sub (i64 0, i64 ptrtoint ([1000 x i8]* @X to i64))) to i64)

to "ret i64 1000".  This allows us to correctly compute the trip count
on a loop in PR8883, which occurs with std::fill on a char array.  This
allows us to transform it into a memset with a constant size.

llvm-svn: 122950
2011-01-06 06:19:46 +00:00
Chris Lattner c86e67e110 fix an off-by-one bug that caused a crash analyzing
ashr's with huge shift amounts, PR8896

llvm-svn: 122814
2011-01-04 18:19:15 +00:00
Chris Lattner 8643810ede Teach loop-idiom to turn a loop containing a memset into a larger memset
when safe.

The testcase is basically this nested loop:
void foo(char *X) {
  for (int i = 0; i != 100; ++i) 
    for (int j = 0; j != 100; ++j)
      X[j+i*100] = 0;
}

which gets turned into a single memset now.  clang -O3 doesn't optimize
this yet though due to a phase ordering issue I haven't analyzed yet.

llvm-svn: 122806
2011-01-04 07:46:33 +00:00
Chris Lattner bde6ec1db6 Duncan deftly points out that readnone functions aren't
invalidated by stores, so they can be handled as 'simple'
operations.

llvm-svn: 122785
2011-01-03 23:38:13 +00:00
Chris Lattner 9e5e9ed79a earlycse can do trivial within-a-block dead store
elimination as well.  This deletes 60 stores in 176.gcc
that largely come from bitfield code.

llvm-svn: 122736
2011-01-03 04:17:24 +00:00
Chris Lattner e0e32a9ef0 now that loads are in their own table, we can implement
store->load forwarding.  This allows EarlyCSE to zap 600 more
loads from 176.gcc.

llvm-svn: 122732
2011-01-03 03:46:34 +00:00
Chris Lattner 0446bb23f8 add a testcase for readonly call CSE
llvm-svn: 122730
2011-01-03 03:33:47 +00:00
Chris Lattner b9a8efc960 Teach EarlyCSE to do trivial CSE of loads and read-only calls.
On 176.gcc, this catches 13090 loads and calls, and increases the
number of simple instructions CSE'd from 29658 to 36208.

llvm-svn: 122727
2011-01-03 03:18:43 +00:00
Chris Lattner 8fac5db251 add DEBUG and -stats output to earlycse.
Teach it to CSE the rest of the non-side-effecting instructions.

llvm-svn: 122716
2011-01-02 23:19:45 +00:00
Chris Lattner 18ae5436b1 Enhance earlycse to do CSE of casts, instsimplify and die.
Add a testcase.

llvm-svn: 122715
2011-01-02 23:04:14 +00:00
Chris Lattner 9c69406f2b fix a miscompilation of tramp3d-v4: when forming a memcpy, we have to make
sure that the loop we're promoting into a memcpy doesn't mutate the input
of the memcpy.  Before we were just checking that the dest of the memcpy
wasn't mod/ref'd by the loop.

llvm-svn: 122712
2011-01-02 21:14:18 +00:00
Chris Lattner 5702a43c09 If a loop iterates exactly once (has backedge count = 0) then don't
mess with it.  We'd rather peel/unroll it than convert all of its 
stores into memsets.

llvm-svn: 122711
2011-01-02 20:24:21 +00:00
Chris Lattner 8455b6e45e enhance loop idiom recognition to scan *all* unconditionally executed
blocks in a loop, instead of just the header block.  This makes it more
aggressive, able to handle Duncan's Ada examples.

llvm-svn: 122704
2011-01-02 19:01:03 +00:00
Duncan Sands 64f1c0dcda Fix PR8702 by not having LoopSimplify claim to preserve LCSSA form. As described
in the PR, the pass could break LCSSA form when inserting preheaders.  It probably
would be easy enough to fix this, but since currently we always go into LCSSA form
after running this pass, doing so is not urgent.

llvm-svn: 122695
2011-01-02 13:38:21 +00:00
Chris Lattner ddf58010bd Allow loop-idiom to run on multiple BB loops, but still only scan the loop
header for now for memset/memcpy opportunities.  It turns out that loop-rotate
is successfully rotating loops, but *DOESN'T MERGE THE BLOCKS*, turning "for 
loops" into 2 basic block loops that loop-idiom was ignoring.

With this fix, we form many *many* more memcpy and memsets than before, including
on the "history" loops in the viterbi benchmark, which look like this:

        for (j=0; j<MAX_history; ++j) {
          history_new[i][j+1] = history[2*i][j];
        }

Transforming these loops into memcpy's speeds up the viterbi benchmark from
11.98s to 3.55s on my machine.  Woo.

llvm-svn: 122685
2011-01-02 07:58:36 +00:00
Chris Lattner 85b6d81d41 teach loop idiom recognition to form memcpy's from simple loops.
llvm-svn: 122678
2011-01-02 03:37:56 +00:00
Chris Lattner 1903c42b97 fix a globalopt crash on two Adobe-C++ testcases that the recent
loop idiom pass exposed.

llvm-svn: 122674
2011-01-01 22:31:46 +00:00
Chris Lattner a3514441e0 add a validity check that was missed, fixing a crash on the
new testcase.

llvm-svn: 122662
2011-01-01 20:12:04 +00:00
Duncan Sands 772749aea1 Revert commit 122654 at the request of Chris, who reckons that instsimplify
is the wrong hammer for this nail, and is probably right.

llvm-svn: 122661
2011-01-01 20:08:02 +00:00
Chris Lattner 91a4435875 improve validity check to handle constant-trip-count loops more
aggressively.  In practice, this doesn't help anything though,
see the todo.

llvm-svn: 122660
2011-01-01 19:54:22 +00:00
Chris Lattner 8b3baf6d75 implement the "no aliasing accesses in loop" safety check. This pass
should be correct now.

llvm-svn: 122659
2011-01-01 19:39:01 +00:00
Duncan Sands e3c539581c Fix a README item by having InstructionSimplify do a mild form of value
numbering, in which it considers (for example) "%a = add i32 %x, %y" and
"%b = add i32 %x, %y" to be equal because the operands are equal and the
result of the instructions only depends on the values of the operands.
This has almost no effect (it removes 4 instructions from gcc-as-one-file),
and perhaps slows down compilation: I measured a 0.4% slowdown on the large
gcc-as-one-file testcase, but it wasn't statistically significant.

llvm-svn: 122654
2011-01-01 16:12:09 +00:00
NAKAMURA Takumi a834008959 test/Transforms/ConstProp/logicaltest.ll: FileCheck-ize.
llvm-svn: 122620
2010-12-29 03:58:56 +00:00
Chris Lattner 29e14edc8d implement enough of the memset inference algorithm to recognize and insert
memsets.  This is still missing one important validity check, but this is enough
to compile stuff like this:

void test0(std::vector<char> &X) {
  for (std::vector<char>::iterator I = X.begin(), E = X.end(); I != E; ++I)
    *I = 0;
}

void test1(std::vector<int> &X) {
  for (long i = 0, e = X.size(); i != e; ++i)
    X[i] = 0x01010101;
}

With:
 $ clang t.cpp -S -o - -O2 -emit-llvm | opt -loop-idiom | opt -O3 | llc 

to:

__Z5test0RSt6vectorIcSaIcEE:            ## @_Z5test0RSt6vectorIcSaIcEE
## BB#0:                                ## %entry
	subq	$8, %rsp
	movq	(%rdi), %rax
	movq	8(%rdi), %rsi
	cmpq	%rsi, %rax
	je	LBB0_2
## BB#1:                                ## %bb.nph
	subq	%rax, %rsi
	movq	%rax, %rdi
	callq	___bzero
LBB0_2:                                 ## %for.end
	addq	$8, %rsp
	ret
...
__Z5test1RSt6vectorIiSaIiEE:            ## @_Z5test1RSt6vectorIiSaIiEE
## BB#0:                                ## %entry
	subq	$8, %rsp
	movq	(%rdi), %rax
	movq	8(%rdi), %rdx
	subq	%rax, %rdx
	cmpq	$4, %rdx
	jb	LBB1_2
## BB#1:                                ## %for.body.preheader
	andq	$-4, %rdx
	movl	$1, %esi
	movq	%rax, %rdi
	callq	_memset
LBB1_2:                                 ## %for.end
	addq	$8, %rsp
	ret

llvm-svn: 122573
2010-12-26 23:42:51 +00:00
Chris Lattner 6cf8d6cc6e start using irbuilder to make mem intrinsics in a few passes.
llvm-svn: 122572
2010-12-26 22:57:41 +00:00
Benjamin Kramer ea9152e551 MemCpyOpt: Turn memcpys from a constant into a memset if possible.
This allows us to compile "int cst[] = {-1, -1, -1};" into
  movl  $-1, 16(%rsp)
  movq  $-1, 8(%rsp)
instead of
  movl  _cst+8(%rip), %eax
  movl  %eax, 16(%rsp)
  movq  _cst(%rip), %rax
  movq  %rax, 8(%rsp)

llvm-svn: 122548
2010-12-24 21:17:12 +00:00
Owen Anderson 226ac14afb When determining if we can fold (x >> C1) << C2, the bits that we need to verify are zero
are not the low bits of x, but the bits that WILL be the low bits after the operation completes.

llvm-svn: 122529
2010-12-23 23:56:24 +00:00
Benjamin Kramer 8ef5001b27 InstCombine: creating selects from -1 and 0 is fine, they combine into a sext from i1.
llvm-svn: 122453
2010-12-22 23:12:15 +00:00
Duncan Sands a45cfbd405 When determining whether the new instruction was already present in
the original instruction, half the cases were missed (making it not
wrong but suboptimal).  Also correct a typo (A <-> B) in the second
chunk. 

llvm-svn: 122414
2010-12-22 17:15:25 +00:00
Duncan Sands bab9456f81 Make this test not depend on how the variable is named.
llvm-svn: 122413
2010-12-22 17:08:04 +00:00
Duncan Sands fbb9ac3cca Add a generic expansion transform: A op (B op' C) -> (A op B) op' (A op C)
if both A op B and A op C simplify.  This fires fairly often but doesn't
make that much difference.  On gcc-as-one-file it removes two "and"s and
turns one branch into a select.

llvm-svn: 122399
2010-12-22 13:36:08 +00:00
Owen Anderson 5ab8d4b5e5 Give GVN back the ability to perform simple conditional propagation on conditional branch values.
I still think that LVI should be handling this, but that capability is some ways off in the future,
and this matters for some significant benchmarks.

llvm-svn: 122378
2010-12-21 23:54:34 +00:00
Duncan Sands 76befde93a Add an additional InstructionSimplify factorization test.
llvm-svn: 122333
2010-12-21 15:12:22 +00:00
Duncan Sands fecc642224 While I don't think any later transforms can fire, it seems cleaner to
not assume this (for example in case more transforms get added below
it).  Suggested by Frits van Bommel.

llvm-svn: 122332
2010-12-21 15:03:43 +00:00
Duncan Sands 07c17132d7 Fix typo in comment, spotted by Deewiant.
llvm-svn: 122329
2010-12-21 13:39:20 +00:00
Duncan Sands ee3ec6eb94 Teach InstructionSimplify about distributive laws. These transforms fire
quite often, but don't make much difference in practice presumably because
instcombine also knows them and more.

llvm-svn: 122328
2010-12-21 13:32:22 +00:00
Duncan Sands 6c7a52cf80 Add generic simplification of associative operations, generalizing
a couple of existing transforms.  This fires surprisingly often, for
example when compiling gcc "(X+(-1))+1->X" fires quite a lot as well
as various "and" simplifications (usually with a phi node operand).
Most of the time this doesn't make a real difference since the same
thing would have been done elsewhere anyway, eg: by instcombine, but
there are a few places where this results in simplifications that we
were not doing before.

llvm-svn: 122326
2010-12-21 08:49:00 +00:00
Benjamin Kramer 68531baea9 Teach InstCombine to merge (icmp ult (X + CA), C1) | (icmp eq X, C2) into (icmp ult (X + CA), C1 + 1) if C2 + CA == C1.
InstCombine creates these so now we compile x == 23 || x == 24 || x == 25 to
  %x.off = add i32 %x, -23
  %1 = icmp ult i32 %x.off, 3
instead of
  %x.off = add i32 %x, -23
  %1 = icmp ult i32 %x.off, 2
  %cmp3 = icmp eq i32 %x, 25
  %ret2 = or i1 %1, %cmp3

llvm-svn: 122248
2010-12-20 16:18:51 +00:00
Duncan Sands ed6d6c33dd Have SimplifyBinOp dispatch Xor, Add and Sub to the corresponding methods
(they had just been forgotten before).  Adding Xor causes "main" in the
existing testcase 2010-11-01-lshr-mask.ll to be hugely more simplified.

llvm-svn: 122245
2010-12-20 14:47:04 +00:00
Chris Lattner 27ca8ebd4b fix PR8807 by making transformConstExprCastCall aware of byval arguments.
llvm-svn: 122238
2010-12-20 08:36:38 +00:00
Chris Lattner 0f11495289 when eliding a byval copy due to inlining a readonly function, we have
to make sure that the reused alloca has sufficient alignment.

llvm-svn: 122236
2010-12-20 08:10:40 +00:00
Chris Lattner 0099744506 pull byval processing out to its own helper function.
llvm-svn: 122235
2010-12-20 07:57:41 +00:00
Chris Lattner 7394680a00 fix PR8769, a miscompilation by inliner when inlining a function with a byval
argument.  The generated alloca has to have at least the alignment of the
byval, if not, the client may be making assumptions that the new alloca won't
satisfy.

llvm-svn: 122234
2010-12-20 07:45:28 +00:00
Chris Lattner a9a5c59dd1 merge two tests.
llvm-svn: 122233
2010-12-20 07:39:57 +00:00
Chris Lattner 6f3ddbd5bc filecheckize
llvm-svn: 122232
2010-12-20 07:38:24 +00:00
Mon P Wang 7bcead020d Test case for r122215 when InstCombine optimizes memset
llvm-svn: 122216
2010-12-20 01:06:23 +00:00
Chris Lattner 1e8c032a6e X86 supports i8/i16 overflow ops (except i8 multiplies), we should
generate them.  

Now we compile:

define zeroext i8 @X(i8 signext %a, i8 signext %b) nounwind ssp {
entry:
  %0 = tail call %0 @llvm.sadd.with.overflow.i8(i8 %a, i8 %b)
  %cmp = extractvalue %0 %0, 1
  br i1 %cmp, label %if.then, label %if.end

into:

_X:                                     ## @X
## BB#0:                                ## %entry
	subl	$12, %esp
	movb	16(%esp), %al
	addb	20(%esp), %al
	jo	LBB0_2

Before we were generating:

_X:                                     ## @X
## BB#0:                                ## %entry
	pushl	%ebp
	movl	%esp, %ebp
	subl	$8, %esp
	movb	12(%ebp), %al
	testb	%al, %al
	setge	%cl
	movb	8(%ebp), %dl
	testb	%dl, %dl
	setge	%ah
	cmpb	%cl, %ah
	sete	%cl
	addb	%al, %dl
	testb	%dl, %dl
	setge	%al
	cmpb	%al, %ah
	setne	%al
	andb	%cl, %al
	testb	%al, %al
	jne	LBB0_2

llvm-svn: 122186
2010-12-19 20:03:11 +00:00
Chris Lattner 5e0c0c72e9 recognize an unsigned add with overflow idiom into uadd.
This resolves a README entry and technically resolves PR4916,
but we still get poor code for the testcase in that PR because
GVN isn't CSE'ing uadd with add, filed as PR8817.

Previously we got:

_test7:                                 ## @test7
	addq	%rsi, %rdi
	cmpq	%rdi, %rsi
	movl	$42, %eax
	cmovaq	%rsi, %rax
	ret

Now we get:

_test7:                                 ## @test7
	addq	%rsi, %rdi
	movl	$42, %eax
	cmovbq	%rsi, %rax
	ret

llvm-svn: 122182
2010-12-19 19:37:52 +00:00
Chris Lattner 33dc3f0cfa optimize uadd(x, cst) into a comparison when the normal
result is dead.  This is required for my next patch to not
regress the testsuite.

llvm-svn: 122181
2010-12-19 19:35:32 +00:00
Chris Lattner 79874566ce generalize the sadd creation code to not require that the
sadd formed is half the size of the original type. We can
now compile this into a sadd.i8:

unsigned char X(char a, char b) {
  int res = a+b;
  if ((unsigned )(res+128) > 255U)
    abort();
  return res;
}

llvm-svn: 122178
2010-12-19 18:35:09 +00:00
Chris Lattner c56c845377 fix another miscompile in the llvm.sadd formation logic: it wasn't
checking to see if the high bits of the original add result were dead.
Inserting a smaller add and zexting back to that size is not good enough.

This is likely to be the fix for PR8816.

llvm-svn: 122177
2010-12-19 18:22:06 +00:00
Chris Lattner f29562db25 fix a bug (possibly 8816) in the sadd forming xform: it isn't
profitable (or safe) to promote code when the add-with-constant
has other uses.

llvm-svn: 122175
2010-12-19 17:59:02 +00:00
Chris Lattner 408a684d29 Enhance LICM to promote alias sets whose pointers themselves are stored,
which doesn't affect the memory address being promoted.

llvm-svn: 122172
2010-12-19 05:57:25 +00:00
Chris Lattner 3337a81450 fix PR8602, a bug in an assertion: a volatile store *of* a pointer
does not make the alias set for that pointer volatile, just stores
*to* the pointer.

llvm-svn: 122171
2010-12-19 05:51:54 +00:00
Chris Lattner fb888622c3 revert r122164, I'm going to go with a different approach.
llvm-svn: 122168
2010-12-19 04:23:03 +00:00
Chris Lattner 583ec6fa44 first step to fixing PR8642: don't fold away empty basic blocks
which have trapping constant exprs in them due to PHI nodes.
Eliminating them can cause the constant expr to be evaluated
on new paths if the input edges are critical.

llvm-svn: 122164
2010-12-19 03:02:34 +00:00
Chris Lattner 2b43f2df2b move this test into the ARM test so that it is only run when the arm backend
is enabled.

llvm-svn: 122163
2010-12-19 02:58:14 +00:00
Nate Begeman 7aa18bf46a Add vector versions of some existing scalar transforms to aid codegen in matching psign & pblend operations to the IR produced by clang/gcc for their C idioms.
llvm-svn: 122105
2010-12-17 23:12:19 +00:00
Owen Anderson 1294ea7d53 Reapply r121905 (automatic synthesis of @llvm.sadd.with.overflow) with a fix for a bug that manifested itself
on the DragonEgg self-host bot.  Unfortunately, the testcase is pretty messy and doesn't reduce well due to
interactions with other parts of InstCombine.

llvm-svn: 122072
2010-12-17 18:08:00 +00:00
Benjamin Kramer e5f49c4ff2 SimplifyCFG: Ranges can be larger than 64 bits. Fixes Release-selfhost build.
llvm-svn: 122054
2010-12-17 10:48:14 +00:00
Chris Lattner d14b0f1db7 improve switch formation to handle small range
comparisons formed by comparisons.  For example,
this:

void foo(unsigned x) {
  if (x == 0 || x == 1 || x == 3 || x == 4 || x == 6) 
    bar();
}

compiles into:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	cmpl	$6, %edi
	ja	LBB0_2
## BB#1:                                ## %entry
	movl	%edi, %eax
	movl	$91, %ecx
	btq	%rax, %rcx
	jb	LBB0_3

instead of:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	cmpl	$2, %edi
	jb	LBB0_4
## BB#1:                                ## %switch.early.test
	cmpl	$6, %edi
	ja	LBB0_3
## BB#2:                                ## %switch.early.test
	movl	%edi, %eax
	movl	$88, %ecx
	btq	%rax, %rcx
	jb	LBB0_4

This catches a bunch of cases in GCC, which look like this:

 %804 = load i32* @which_alternative, align 4, !tbaa !0
 %805 = icmp ult i32 %804, 2
 %806 = icmp eq i32 %804, 3
 %or.cond121 = or i1 %805, %806
 %807 = icmp eq i32 %804, 4
 %or.cond124 = or i1 %or.cond121, %807
 br i1 %or.cond124, label %.thread, label %808

turning this into a range comparison.

llvm-svn: 122045
2010-12-17 06:20:15 +00:00
Dan Gohman 93dc2b808f Revert r64460. strtol and friends cannot be marked readonly, even with
a null endptr argument, because they may write to errno.

This fixes a selfhost miscompile observed on Linux targets when TBAA
was enabled.

llvm-svn: 122014
2010-12-17 01:09:43 +00:00
Duncan Sands 8d1ab6f6e1 Speculatively revert commit 121905 since it looks like it might have broken the
dragonegg self-host buildbot.  Original commit message:

Add an InstCombine transform to recognize instances of manual overflow-safe addition
(performing the addition in a wider type and explicitly checking for overflow), and
fold them down to intrinsics.  This currently only supports signed-addition, but could
be generalized if someone works out the magic constant formulas for other operations.

llvm-svn: 121965
2010-12-16 09:40:54 +00:00
Dan Gohman 4467aa5294 Preserve TBAA tags when doing load PRE.
llvm-svn: 121921
2010-12-15 23:53:55 +00:00
Owen Anderson 1cf8881299 Add an InstCombine transform to recognize instances of manual overflow-safe addition
(performing the addition in a wider type and explicitly checking for overflow), and
fold them down to intrinsics.  This currently only supports signed-addition, but could
be generalized if someone works out the magic constant formulas for other operations.

Fixes <rdar://problem/8558713>.

llvm-svn: 121905
2010-12-15 22:32:38 +00:00
Frits van Bommel 3d1803495e Teach jump threading to "look through" a select when the branch direction of a terminator depends on it.
When it sees a promising select it now tries to figure out whether the condition of the select is known in any of the predecessors and if so it maps the operands appropriately.

llvm-svn: 121859
2010-12-15 09:51:20 +00:00
Owen Anderson 35609d97ae Fix PR8790, another instance where unreachable code can cause instruction simplification to fail,
this case involve a select that simplifies to itself.

llvm-svn: 121817
2010-12-15 00:55:35 +00:00
Chris Lattner 7499b452c1 - Insert new instructions before DomBlock's terminator,
which is simpler than finding a place to insert in BB.
 - Don't perform the 'if condition hoisting' xform on certain
   i1 PHIs, as it interferes with switch formation.

This re-fixes "example 7", without breaking the world hopefully.

llvm-svn: 121764
2010-12-14 08:46:09 +00:00
Chris Lattner 335f0e4ad4 fix two significant issues with FoldTwoEntryPHINode:
first, it can kick in on blocks whose conditions have been
folded to a constant, even though one of the edges will be
trivially folded.

second, it doesn't clean up the "if diamond" that it just 
eliminated away.  This is a problem because other simplifycfg
xforms kick in depending on the order of block visitation,
causing pointless work.

llvm-svn: 121762
2010-12-14 08:01:53 +00:00
Chris Lattner f130661688 fix yet another broken line
llvm-svn: 121750
2010-12-14 06:09:07 +00:00
Chris Lattner 5a9d59d918 reapply my recent change that disables a piece of the switch formation
work, but fixes 400.perlbmk.

llvm-svn: 121749
2010-12-14 05:57:30 +00:00
Owen Anderson 3e5648896e Fix recent buildbot breakage by pulling SimplifyCFG back to its state as of r121694, the most recent state
where I'm confident there were no crashes or miscompilations.  XFAIL the test added since then for now.

llvm-svn: 121733
2010-12-13 23:49:28 +00:00
Chris Lattner a6e5d5694a temporarily disable part of my previous patch, which causes an iterator invalidation issue, causing a crash on some versions of perlbmk.
llvm-svn: 121728
2010-12-13 23:02:19 +00:00
Benjamin Kramer 1e155ab7e1 Fix sort predicate. qsort(3)'s predicate semantics differ from std::sort's. Fixes PR 8780.
llvm-svn: 121705
2010-12-13 18:20:38 +00:00
Chris Lattner fb836f8c1a reinstate my patch: the miscompile was caused by an inverted branch in the
'and' case.

llvm-svn: 121695
2010-12-13 08:12:19 +00:00
Chris Lattner 79db357d80 Completely disable the optimization I added in r121680 until
I can track down a miscompile.  This should bring the buildbots
back to life.

llvm-svn: 121693
2010-12-13 07:41:29 +00:00
Chris Lattner fbeb55844b Make simplifycfg reprocess newly formed "br (cond1 | cond2)" conditions
when simplifying, allowing them to be eagerly turned into switches.  This
is the last step required to get "Example 7" from this blog post:
http://blog.regehr.org/archives/320

On X86, we now generate this machine code, which (to my eye) seems better
than the ICC generated code:

_crud:                                  ## @crud
## BB#0:                                ## %entry
	cmpb	$33, %dil
	jb	LBB0_4
## BB#1:                                ## %switch.early.test
	addb	$-34, %dil
	cmpb	$58, %dil
	ja	LBB0_3
## BB#2:                                ## %switch.early.test
	movzbl	%dil, %eax
	movabsq	$288230376537592865, %rcx ## imm = 0x400000017001421
	btq	%rax, %rcx
	jb	LBB0_4
LBB0_3:                                 ## %lor.rhs
	xorl	%eax, %eax
	ret
LBB0_4:                                 ## %lor.end
	movl	$1, %eax
	ret

llvm-svn: 121690
2010-12-13 07:00:06 +00:00
Chris Lattner cb570f87e5 fix a bug in r121680 that upset the various buildbots.
llvm-svn: 121687
2010-12-13 05:34:18 +00:00
Chris Lattner bc9e6d9dbe make these tests a bit less fragile
llvm-svn: 121682
2010-12-13 05:10:30 +00:00
Chris Lattner a442f24a36 enhance the "change or icmp's into switch" xform to handle one value in an
'or sequence' that it doesn't understand.  This allows us to optimize
something insane like this:

int crud (unsigned char c, unsigned x)
 {
   if(((((((((( (int) c <= 32 ||
                    (int) c == 46) || (int) c == 44)
                  || (int) c == 58) || (int) c == 59) || (int) c == 60)
               || (int) c == 62) || (int) c == 34) || (int) c == 92)
            || (int) c == 39) != 0)
     foo();
 }

into:

define i32 @crud(i8 zeroext %c, i32 %x) nounwind ssp noredzone {
entry:
  %cmp = icmp ult i8 %c, 33
  br i1 %cmp, label %if.then, label %switch.early.test

switch.early.test:                                ; preds = %entry
  switch i8 %c, label %if.end [
    i8 39, label %if.then
    i8 44, label %if.then
    i8 58, label %if.then
    i8 59, label %if.then
    i8 60, label %if.then
    i8 62, label %if.then
    i8 46, label %if.then
    i8 92, label %if.then
    i8 34, label %if.then
  ]

by pulling the < comparison out ahead of the newly formed switch.

llvm-svn: 121680
2010-12-13 04:50:38 +00:00
Chris Lattner a737721d14 merge two tests
llvm-svn: 121679
2010-12-13 04:45:56 +00:00
Chris Lattner 62cc76e9cc Fix my previous patch to handle a degenerate case that the llvm-gcc
bootstrap buildbot tripped over.

llvm-svn: 121674
2010-12-13 03:43:57 +00:00
Chris Lattner d9bacc088a fix a fairly serious oversight with switch formation from
or'd conditions.  Previously we'd compile something like this:

int crud (unsigned char c) {
   return c == 62 || c == 34 || c == 92;
}

into:

  switch i8 %c, label %lor.rhs [
    i8 62, label %lor.end
    i8 34, label %lor.end
  ]

lor.rhs:                                          ; preds = %entry
  %cmp8 = icmp eq i8 %c, 92
  br label %lor.end

lor.end:                                          ; preds = %entry, %entry, %lor.rhs
  %0 = phi i1 [ true, %entry ], [ %cmp8, %lor.rhs ], [ true, %entry ]
  %lor.ext = zext i1 %0 to i32
  ret i32 %lor.ext

which failed to merge the compare-with-92 into the switch.  With this patch
we simplify this all the way to:

  switch i8 %c, label %lor.rhs [
    i8 62, label %lor.end
    i8 34, label %lor.end
    i8 92, label %lor.end
  ]

lor.rhs:                                          ; preds = %entry
  br label %lor.end

lor.end:                                          ; preds = %entry, %entry, %entry, %lor.rhs
  %0 = phi i1 [ true, %entry ], [ false, %lor.rhs ], [ true, %entry ], [ true, %entry ]
  %lor.ext = zext i1 %0 to i32
  ret i32 %lor.ext

which is much better for codegen's switch lowering stuff.  This kicks in 33 times
on 176.gcc (for example) cutting 103 instructions off the generated code.

llvm-svn: 121671
2010-12-13 03:18:54 +00:00
Benjamin Kramer c4169cebe3 Generalize the and-icmp-select instcombine further by allowing selects of the form
(x & 2^n) ? 2^m+C : C

we can offset both arms by C to get the "(x & 2^n) ? 2^m : 0" form, optimize the
select to a shift and apply the offset afterwards.

llvm-svn: 121609
2010-12-11 10:49:22 +00:00
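For illustration only (values invented, not taken from the commit's tests; here n = 2, m = 3 and C = 1), the select form described above looks like:

  %and = and i32 %x, 4
  %tobool = icmp ne i32 %and, 0
  %sel = select i1 %tobool, i32 9, i32 1

and, after offsetting both arms by C and lowering the remaining select to a shift, it becomes roughly:

  %and = and i32 %x, 4
  %shl = shl i32 %and, 1
  %sel = add i32 %shl, 1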
Benjamin Kramer c8b035d006 Factor the (x & 2^n) ? 2^m : 0 instcombine into its own method and generalize it
to catch cases where n != m with a shift.

llvm-svn: 121608
2010-12-11 09:42:59 +00:00
Chris Lattner bc4457e317 enhance memcpyopt to zap memcpy's that have the same src/dst.
llvm-svn: 121362
2010-12-09 07:45:45 +00:00
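A hypothetical instance of the pattern being zapped (size and names invented): a non-volatile copy of a buffer onto itself does nothing, so the call can simply be deleted.

  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %p, i8* %p, i64 64, i32 4, i1 false)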
Chris Lattner fd51c52ef6 fix PR8753, eliminating a case where we'd infinitely make a
substitution because it doesn't actually change the IR.  Patch by
Jakub Staszak!

llvm-svn: 121361
2010-12-09 07:39:50 +00:00
Dan Gohman a32986e899 Really check that the bits that will become zero are actually already zero
before eliminating the operation that zeros them. This fixes rdar://8739316.

llvm-svn: 121353
2010-12-09 02:52:17 +00:00
Chris Lattner 0d71c4f564 reapply r121100 with a tweak to constant fold ConstExprs with TargetData
(if available) as we go so that we get simple constantexprs not insane ones.
This fixes the failure of clang/test/CodeGenCXX/virtual-base-ctor.cpp
that the previous iteration of this patch had.

llvm-svn: 121111
2010-12-07 04:33:29 +00:00
Eric Christopher f10dcfb9fb Temporarily revert r121100 as it's causing clang to fail
CodeGenCXX/virtual-base-ctor.cpp.

llvm-svn: 121102
2010-12-07 02:41:11 +00:00
Chris Lattner 287f4366c1 fix PR8710 - teach global opt that some constantexprs are too complex to
put in a global variable's initializer.

llvm-svn: 121100
2010-12-07 01:59:32 +00:00
Frits van Bommel d9df6eaa9c Implement jump threading of 'indirectbr' by keeping track of whether we're looking for ConstantInt*s or BlockAddress*s.
llvm-svn: 121066
2010-12-06 23:36:56 +00:00
Chris Lattner 94fbdf3814 Fix PR8728, a miscompilation I recently introduced. When optimizing
memcpy's like:
  memcpy(A, B)
  memcpy(A, C)

we cannot delete the first memcpy as dead if A and C might be aliases.
If so, we actually get:

  memcpy(A, B)
  memcpy(A, A)

which is not correct to transform into:

  memcpy(A, A)

This patch was heavily influenced by Jakub Staszak's patch in PR8728, thanks
Jakub!

llvm-svn: 120974
2010-12-06 01:48:06 +00:00
Frits van Bommel 8fb69ee805 Teach SimplifyCFG to turn
(indirectbr (select cond, blockaddress(@fn, BlockA),
                            blockaddress(@fn, BlockB)))
into
  (br cond, BlockA, BlockB).

llvm-svn: 120943
2010-12-05 18:29:03 +00:00
Chris Lattner 1c577b54b0 fix a bozo bug I introduced in r119930, causing a miscompile of
20040709-1.c from the gcc testsuite.  I was using the size of a
pointer instead of the pointee.  This fixes rdar://8713376

llvm-svn: 120519
2010-12-01 01:24:55 +00:00
Chris Lattner 903add84d9 Enhance DSE to handle the variable index case in PR8657.
llvm-svn: 120498
2010-11-30 23:43:23 +00:00
Chris Lattner c0f3379ae0 teach DSE to use GetPointerBaseWithConstantOffset to analyze
may-aliasing stores that partially overlap with different base
pointers.  This implements PR6043 and the non-variable part of
PR8657

llvm-svn: 120485
2010-11-30 23:05:20 +00:00
Chris Lattner b63ba73b1b enhance isRemovable to refuse to delete volatile mem transfers
now that DSE hacks on them.  This fixes a regression I introduced,
by generalizing DSE to hack on transfers.

llvm-svn: 120445
2010-11-30 19:12:10 +00:00
Chris Lattner 58b779e9c2 Rewrite the main DSE loop to be written in terms of reasoning
about pairs of AA::Location's instead of looking for MemDep's
"Def" predicate.  This is more powerful and general, handling
memset/memcpy/store all uniformly, and implementing PR8701 and
probably obsoleting parts of memcpyoptimizer.

This also fixes an obscure bug with init.trampoline and i8
stores, but I'm not surprised it hasn't been hit yet.  Enhancing
init.trampoline to carry the size that it stores would allow
DSE to be much more aggressive about optimizing them.

llvm-svn: 120406
2010-11-30 07:23:21 +00:00
Anders Carlsson e3ea1cba79 Add a puts optimization that converts puts() to putchar('\n').
llvm-svn: 120398
2010-11-30 06:19:18 +00:00
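Presumably this covers the empty-string case, where all puts does is print the terminating newline; a minimal sketch, with assumed names and without modelling the return value:

  @.str = private constant [1 x i8] zeroinitializer

  %empty = getelementptr [1 x i8]* @.str, i64 0, i64 0
  call i32 @puts(i8* %empty)

which would become:

  call i32 @putchar(i32 10)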
Anders Carlsson 77e9892afd Fix a typo.
llvm-svn: 120394
2010-11-30 06:03:55 +00:00
Anders Carlsson 631d06bbce Rename this test to FPuts.ll since it actually tests fputs.
llvm-svn: 120393
2010-11-30 05:59:26 +00:00
Chris Lattner 6c7f64e0bc remove a use of llvm-dis
llvm-svn: 120383
2010-11-30 02:04:15 +00:00
Chris Lattner c2e3445273 merge one more away
llvm-svn: 120375
2010-11-30 01:06:43 +00:00
Chris Lattner 7578d0df51 I already merged partial-overwrite.ll -> PartialStore.ll
Merge context-sensitive.ll -> simple.ll and upgrade it.

llvm-svn: 120374
2010-11-30 01:05:07 +00:00
Chris Lattner 43e3a98675 clean up DSE tests, removing some poorly reduced and useless old test,
merging more into other larger .ll files, filecheckizing along the way.

llvm-svn: 120373
2010-11-30 01:00:34 +00:00
Chris Lattner 90c4947df7 enhance basicaa to return "Mod" for a memcpy call when the
queried location doesn't overlap the source, and add a testcase.

llvm-svn: 120370
2010-11-30 00:43:16 +00:00
Chris Lattner 9a146372b5 Teach basicaa that memset's modref set is at worst "mod" and never
contains "ref".

Enhance DSE to use a modref query instead of a store-specific hack
to generalize the "ignore may-alias stores" optimization to handle
memset and memcpy.

llvm-svn: 120368
2010-11-30 00:28:45 +00:00
Chris Lattner c3c754f750 my previous patch would cause us to start deleting some volatile
stores, fix and add a testcase.

llvm-svn: 120363
2010-11-30 00:12:39 +00:00
Benjamin Kramer e6840ef4b3 Fix some broken CHECK lines.
llvm-svn: 120332
2010-11-29 22:34:55 +00:00
Chris Lattner 2e8793482c fix PR8677, patch by Jakub Staszak!
llvm-svn: 120325
2010-11-29 21:59:31 +00:00
Frits van Bommel 28218aa8f1 Transform (extractvalue (load P), ...) to (load (gep P, 0, ...)) if the load has no other uses, shrinking the load.
llvm-svn: 120323
2010-11-29 21:56:20 +00:00
Frits van Bommel 40a80ac963 Update this test to keep testing the -instcombine transform it's supposed to be testing instead of triggering the improved constant folding for insertvalue and extractvalue.
llvm-svn: 120319
2010-11-29 20:55:40 +00:00
Frits van Bommel a98214de10 Teach ConstantFoldInstruction() how to fold insertvalue and extractvalue.
llvm-svn: 120316
2010-11-29 20:36:52 +00:00
Nick Lewycky b8de00ee07 Treat a call of function pointer like a load of the pointer when considering
whether the pointer can be replaced with the global variable it is a copy of.
Fixes PR8680.

llvm-svn: 120126
2010-11-24 22:04:20 +00:00
Benjamin Kramer 94a622af4c The srem -> urem transform is not safe for any divisor that's not a power of two.
E.g. -5 % 5 is 0 with srem and 1 with urem.

Also addresses Frits van Bommel's comments.

llvm-svn: 120049
2010-11-23 20:33:57 +00:00
Benjamin Kramer b5afa65b0a InstCombine: Reduce "X shift (A srem B)" to "X shift (A urem B)" iff B is positive.
This allows to transform the rem in "1 << ((int)x % 8);" to an and.

llvm-svn: 120028
2010-11-23 18:52:42 +00:00
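A sketch of the quoted C expression in IR (register names assumed): a shift by a negative amount is undefined, so only the non-negative remainders matter and the srem can be treated as a urem, which folds to a mask.

  %rem = srem i32 %x, 8
  %shl = shl i32 1, %rem

becomes:

  %rem = and i32 %x, 7
  %shl = shl i32 1, %rem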
Duncan Sands adc7771f18 Exploit distributive laws (eg: And distributes over Or, Mul over Add, etc) in a
fairly systematic way in instcombine.  Some of these cases were already dealt
with, in which case I removed the existing code.  The case of Add has a bunch of
funky logic which covers some of this plus a few variants (considers shifts to be
a form of multiplication), which I didn't touch.  The simplification performed is:
A*B+A*C -> A*(B+C).  The improvement is to do this in cases that were not already
handled [such as A*B-A*C -> A*(B-C), which was reported on the mailing list], and
also to do it more often by not checking for "only one use" if "B+C" simplifies.

llvm-svn: 120024
2010-11-23 14:23:47 +00:00
Chris Lattner e5afa15b77 duncan's spider sense was right, I completely reversed the condition
on this instcombine xform.  This fixes a miscompilation of 403.gcc.

llvm-svn: 119988
2010-11-23 02:42:04 +00:00
Benjamin Kramer f1ebb63161 InstCombine: Implement X - A*-B -> X + A*B.
llvm-svn: 119984
2010-11-22 20:31:27 +00:00
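A minimal sketch of the fold (names invented):

  %negb = sub i32 0, %b
  %mul = mul i32 %a, %negb
  %res = sub i32 %x, %mul

becomes:

  %mul = mul i32 %a, %b
  %res = add i32 %x, %mul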
Duncan Sands c133c54426 If a GEP index simply advances by multiples of a type of zero size,
then replace the index with zero.

llvm-svn: 119974
2010-11-22 16:32:50 +00:00
Duncan Sands cf4bceba49 Add a rather pointless InstructionSimplify transform, inspired by recent constant
folding improvements: if P points to a type of size zero, turn "gep P, N" into "P".
More generally, if a gep index type has size zero, instcombine could replace the
index with zero, but that is not done here.

llvm-svn: 119942
2010-11-21 13:53:09 +00:00
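Both of the two entries above deal with indexing over a zero-sized element type; a sketch of the kind of gep involved (type and names chosen for illustration):

  %q = getelementptr {}* %p, i64 %n

Since each step advances by zero bytes, the variable index can be replaced with zero, and InstructionSimplify can fold %q down to %p.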
Chris Lattner e48c31ce33 implement PR8576, deleting dead stores with intervening may-alias stores.
llvm-svn: 119927
2010-11-21 07:34:32 +00:00
Chris Lattner 6e22221b37 file checkize
llvm-svn: 119926
2010-11-21 07:32:40 +00:00
Chris Lattner f7e896138e optimize:
void a(int x) { if (((1<<x)&8)==0) b(); }

into "x != 3", which occurs over 100 times in 403.gcc but in no
other program in llvm-test.

llvm-svn: 119922
2010-11-21 06:44:42 +00:00
Chris Lattner 58f9f58716 Implement PR8644: forwarding a memcpy value to a byval,
allowing the memcpy to be eliminated.

Unfortunately, the requirements on byval's without explicit 
alignment are really weak and impossible to predict in the 
mid-level optimizer, so this doesn't kick in much with current
frontends.  The fix is to change clang to set alignment on all
byval arguments.

llvm-svn: 119916
2010-11-21 00:28:59 +00:00
Owen Anderson c0177aeb71 Add a test for CodeGenPrepare's ability to look through PHI nodes when performing
addressing mode folding, introduced in r119853.

llvm-svn: 119857
2010-11-19 22:34:53 +00:00
Duncan Sands aef146b890 Factor code for testing whether replacing one value with another
preserves LCSSA form out of ScalarEvolution and into the LoopInfo
class.  Use it to check that SimplifyInstruction simplifications
are not breaking LCSSA form.  Fixes PR8622.

llvm-svn: 119727
2010-11-18 19:59:41 +00:00
Owen Anderson c21c100f3d Completely rework the datastructure GVN uses to represent the value number to leader mapping. Previously,
this was a tree of hashtables, and a query recursed into the table for the immediate dominator ad infinitum
if the initial lookup failed.  This led to really bad performance on tall, narrow CFGs.

We can instead replace it with what is conceptually a multimap of value numbers to leaders (actually
represented by a hashtable with a list of Value*'s as the value type), and then
determine which leader from that set to use very cheaply thanks to the DFS numberings maintained by
DominatorTree.  Because there are typically few duplicates of a given value, this scan tends to be
quite fast.  Additionally, we use a custom linked list and BumpPtr allocation to avoid any unnecessary
allocation in representing the value-side of the multimap.

This change brings with it a 15% (!) improvement in the total running time of GVN on 403.gcc, which I
think is pretty good considering that includes all the "real work" being done by MemDep as well.

The one downside to this approach is that we can no longer use GVN to perform simple conditional propagation,
but that seems like an acceptable loss since we now have LVI and CorrelatedValuePropagation to pick up
the slack.  If you see conditional propagation that's not happening, please file bugs against LVI or CVP.

llvm-svn: 119714
2010-11-18 18:32:40 +00:00
Dan Gohman 2e1fc849b2 Add support for PHI-translating sext, zext, and trunc instructions,
enabling more PRE. PR8586.

llvm-svn: 119704
2010-11-18 17:05:13 +00:00
Chris Lattner 731caac7c6 remove a pointless restriction from memcpyopt. It was
refusing to optimize two memcpy's like this:

copy A <- B
copy C <- A

if it couldn't prove that noalias(B,C).  We can eliminate
the copy by producing a memmove instead of memcpy.

llvm-svn: 119694
2010-11-18 08:00:57 +00:00
Chris Lattner bbb0f9661d filecheckize, this is still not optimal, see PR8643
llvm-svn: 119693
2010-11-18 07:49:32 +00:00
Chris Lattner ac5701319b allow eliminating an alloca that is just copied from an constant global
if it is passed as a byval argument.  The byval argument will just be a
read, so it is safe to read from the original global instead.  This allows
us to promote away the %agg.tmp alloca in PR8582

llvm-svn: 119686
2010-11-18 06:41:51 +00:00
Chris Lattner f183d5c4be enhance the "alloca is just a memcpy from constant global"
to ignore calls that obviously can't modify the alloca
because they are readonly/readnone.

llvm-svn: 119683
2010-11-18 06:26:49 +00:00
Chris Lattner 7aeae25c78 fix a small oversight in the "eliminate memcpy from constant global"
optimization.  If the alloca that is "memcpy'd from constant" also has
a memcpy from *it*, ignore it: it is a load.  We now optimize the testcase to:

define void @test2() {
  %B = alloca %T
  %a = bitcast %T* @G to i8*
  %b = bitcast %T* %B to i8*
  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %b, i8* %a, i64 124, i32 4, i1 false)
  call void @bar(i8* %b)
  ret void
}

previously we would generate:

define void @test() {
  %B = alloca %T
  %b = bitcast %T* %B to i8*
  %G.0 = getelementptr inbounds %T* @G, i32 0, i32 0
  %tmp3 = load i8* %G.0, align 4
  %G.1 = getelementptr inbounds %T* @G, i32 0, i32 1
  %G.15 = bitcast [123 x i8]* %G.1 to i8*
  %1 = bitcast [123 x i8]* %G.1 to i984*
  %srcval = load i984* %1, align 1
  %B.0 = getelementptr inbounds %T* %B, i32 0, i32 0
  store i8 %tmp3, i8* %B.0, align 4
  %B.1 = getelementptr inbounds %T* %B, i32 0, i32 1
  %B.12 = bitcast [123 x i8]* %B.1 to i8*
  %2 = bitcast [123 x i8]* %B.1 to i984*
  store i984 %srcval, i984* %2, align 1
  call void @bar(i8* %b)
  ret void
}

llvm-svn: 119682
2010-11-18 06:20:47 +00:00
Chris Lattner 9434184142 filecheckize
llvm-svn: 119681
2010-11-18 06:16:43 +00:00
Benjamin Kramer 07726c7d52 InstCombine: Add a missing irem identity (X % X -> 0).
llvm-svn: 119538
2010-11-17 19:11:46 +00:00
Duncan Sands 5ffc298bc7 In which I discover the existence of loops. Threading an operation
over a phi node by applying it to each operand may be wrong if the
operation and the phi node are mutually interdependent (the testcase
has a simple example of this).  So only do this transform if it would
be correct to perform the operation in each predecessor of the block
containing the phi, i.e. if the other operands all dominate the phi.
This should fix the FFMPEG snow.c regression reported by İsmail Dönmez.

llvm-svn: 119347
2010-11-16 12:16:38 +00:00
Duncan Sands f12ba1dfe1 Teach InstructionSimplify the trick of skipping incoming phi
values that are equal to the phi itself.

llvm-svn: 119161
2010-11-15 17:52:45 +00:00
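A minimal sketch of the trick (block and value names assumed): when the only incoming values of a phi are the phi itself and one other value, the phi simplifies to that other value.

loop:
  %p = phi i32 [ %x, %entry ], [ %p, %loop ]

Here %p simplifies to %x.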
Duncan Sands 01157b027b Move PHI tests to phi.ll, out of select.ll.
llvm-svn: 119153
2010-11-15 16:43:28 +00:00
Duncan Sands 4581ddc123 Teach InstructionSimplify about phi nodes. I chose to have it simply
offload the work to hasConstantValue rather than do something more
complicated (such as handling mutually recursive phis) because (1) it is
not clear it is worth it; and (2) if it is worth it, maybe such logic
would be better placed in hasConstantValue.  Adjust some GVN tests
which are now cleaned up much further (eg: all phi nodes are removed).

llvm-svn: 119043
2010-11-14 13:30:18 +00:00
Chris Lattner 8d22ff90a3 rename test.
llvm-svn: 119033
2010-11-14 07:03:49 +00:00
Chris Lattner befbf975e8 filecheckize, remove an old and useless test
llvm-svn: 119032
2010-11-14 07:03:38 +00:00
Chris Lattner 4e44cb8bcb this test is pretty pointless and "propogation" isn't a word (or so Misha claims).
llvm-svn: 119031
2010-11-14 07:02:03 +00:00
Duncan Sands 8c58ba4199 Testcase to go along with commit 118923 ("Have GVN simplify instructions
as it goes").  Before -std-compile-opts only got it down to
  %a = tail call i32 @foo(i32 0) readnone
  %x = tail call i32 @foo(i32 %a) readnone
  %y = tail call i32 @foo(i32 %a) readnone
  %z = icmp eq i32 %x, %y
  ret i1 %z
while now -basicaa -gvn alone reduce it to
  %a = call i32 @foo(i32 0) readnone
  %x = call i32 @foo(i32 %a) readnone
  ret i1 true

llvm-svn: 119009
2010-11-13 21:33:19 +00:00
Duncan Sands 641baf1646 Generalize the reassociation transform in SimplifyCommutative (now renamed to
SimplifyAssociativeOrCommutative) "(A op C1) op C2" -> "A op (C1 op C2)",
which previously was only done if C1 and C2 were constants, to occur whenever
"C1 op C2" simplifies (a la InstructionSimplify).  Since the simplifying operand
combination can no longer be assumed to be the right-hand terms, consider all of
the possible permutations.  When compiling "gcc as one big file", transform 2
(i.e. using right-hand operands) fires about 4000 times but it has to be said
that most of the time the simplifying operands are both constants.  Transforms
3, 4 and 5 each fired once.  Transform 6, which is an existing transform that
I didn't change, never fired.  With this change, the testcase is now optimized
perfectly with one run of instcombine (previously it required instcombine +
reassociate + instcombine, and it may just have been luck that this worked).

llvm-svn: 119002
2010-11-13 15:10:37 +00:00
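One shape this generalization catches, sketched with invented names, is a pair of right-hand operands that simplify even though neither is a constant:

  %t = xor i32 %a, %b
  %r = xor i32 %t, %b

Reassociating to %a ^ (%b ^ %b) gives %a ^ 0, so %r simplifies to %a in a single instcombine run.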
Dan Gohman d4b7fff2e8 Enhance DSE to handle the case where a free call makes more than
one store dead. This is especially noticeable in
SingleSource/Benchmarks/Shootout/objinst.

llvm-svn: 118875
2010-11-12 02:19:17 +00:00
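A minimal sketch of the situation now handled (invented function, not the benchmark above): both stores are dead because the memory is freed immediately afterwards.

define void @test(i8* %p) {
  store i8 0, i8* %p
  %p1 = getelementptr i8* %p, i64 1
  store i8 1, i8* %p1
  call void @free(i8* %p)
  ret void
}

declare void @free(i8*)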
Dan Gohman 620c38f030 Filecheckize.
llvm-svn: 118874
2010-11-12 02:02:39 +00:00
Dan Gohman a826a88755 Factor out Instruction::isSafeToSpeculativelyExecute's code for
testing for dereferenceable pointers into a helper function,
isDereferenceablePointer.  Teach it how to reason about GEPs
with simple non-zero indices.

Also eliminate ArgumentPromotion's IsAlwaysValidPointer,
which didn't check for weak externals or out of range gep
indices.

llvm-svn: 118840
2010-11-11 21:23:25 +00:00
Dan Gohman 0a6021a54d Enhance GVN to do more precise alias queries for non-local memory
references. For example, this allows gvn to eliminate the load in
this example:

  void foo(int n, int* p, int *q) {
    p[0] = 0;
    p[1] = 1;
    if (n) {
      *q = p[0];
    }
  }

llvm-svn: 118714
2010-11-10 20:37:15 +00:00
Duncan Sands f3b1bf1606 Teach InstructionSimplify how to look through PHI nodes. Since PHI
nodes can be used in loops, this could result in infinite looping
if there is no recursion limit, so add such a limit.  It is also
used for the SelectInst case because in theory there could be an
infinite loop there too if the basic block is unreachable.

llvm-svn: 118694
2010-11-10 18:23:01 +00:00
Dale Johannesen 0171dc30ff When checking that the necessary bits are zero in
order to reduce ((x<<30)>>24) to x<<6, check the
correct bits.  PR 8547.

llvm-svn: 118665
2010-11-10 01:30:56 +00:00
Dan Gohman 2694e14087 Make ModRefBehavior a lattice. Use this to clean up AliasAnalysis
chaining and simplify FunctionAttrs' GetModRefBehavior logic.

llvm-svn: 118660
2010-11-10 01:02:18 +00:00
Duncan Sands 52be6b41d3 Add an additional test for icmp of select folding.
llvm-svn: 118441
2010-11-08 20:56:28 +00:00
Dan Gohman 9130bad71f Extend the AliasAnalysis::pointsToConstantMemory interface to allow it
to optionally look for constant or local (alloca) memory.

Teach BasicAliasAnalysis::pointsToConstantMemory to look through Select
and Phi nodes, and to support looking for local memory.

Remove FunctionAttrs' PointsToLocalOrConstantMemory function, now that
AliasAnalysis knows all the tricks that it knew.

llvm-svn: 118412
2010-11-08 16:45:26 +00:00
Dan Gohman 86449d705a Make FunctionAttrs use AliasAnalysis::getModRefBehavior, now that it
knows about intrinsic functions.

llvm-svn: 118410
2010-11-08 16:10:15 +00:00
Duncan Sands a620bd1fa3 Add simplification of floating point comparisons with the result
of a select instruction, the same as already exists for integer
comparisons.

llvm-svn: 118379
2010-11-07 16:46:25 +00:00
Duncan Sands f532d31198 Fix a README item: when doing a comparison with the result
of a select instruction, see if doing the compare with the
true and false values of the select gives the same result.
If so, that can be used as the value of the comparison.

llvm-svn: 118378
2010-11-07 16:12:23 +00:00
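A minimal sketch of the integer case (constants chosen for illustration): comparing both arms of the select gives the same answer, so the compare folds.

  %s = select i1 %c, i32 4, i32 8
  %cmp = icmp ult i32 %s, 16

Both 4 and 8 are less than 16, so %cmp simplifies to true.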
Owen Anderson 6186c96765 When folding away a (shl (shr)) pair, we need to check that the bits that will BECOME the low
bits are zero, not that the current low bits are zero.  Fixes <rdar://problem/8606771>.

llvm-svn: 117953
2010-11-01 21:08:20 +00:00
Duncan Sands b8f3b14dfb If a function does a volatile load from a global constant, do not
consider it to be readonly.  In fact, don't even consider it to be
readonly if it does a volatile load from an AllocaInst either (it
is debatable as to whether readonly would be correct or not in this
case; play safe for the moment).  This fixes PR8279.

llvm-svn: 117783
2010-10-30 12:59:44 +00:00
Bob Wilson 11ee456e23 Change instcombine's getShuffleMask to represent undef with negative values.
This code had previously used 2*N, where N is the mask length, to represent
undef.  That is not safe because the shufflevector operands may have more
than N elements -- they don't have to match the result type.

llvm-svn: 117721
2010-10-29 22:03:05 +00:00
Bob Wilson cb11b48e7a Make instcombine a little more aggressive in combining vector shuffles.
Allow splats even if they don't match either of the original shuffles,
possibly due to undef entries in the shuffles masks.  Radar 8597790.
Also fix some 80-column violations.

llvm-svn: 117719
2010-10-29 22:02:50 +00:00
Owen Anderson 8e7b73743d Update testcase since we're no longer doing the constant forwarding inline with correlated value propagation.
llvm-svn: 117712
2010-10-29 21:18:23 +00:00
NAKAMURA Takumi 959807fa37 test/Transforms/SimplifyLibCalls/floor.ll: Mark as XFAIL:win32 due to lack of nearbyintf on MSVC. [PR8466]
llvm-svn: 117529
2010-10-28 06:46:04 +00:00
Dale Johannesen 16bb87a90e Teach InstCombine not to use Add and Neg on FP. PR 8490.
llvm-svn: 117510
2010-10-27 23:45:18 +00:00
Dan Gohman 2e20dfb0f2 Fix a case where instcombine was stripping metadata (and alignment)
from stores when folding in bitcasts.

llvm-svn: 117265
2010-10-25 16:16:27 +00:00
Duncan Sands 31c803b2ba Fix PR8445: a block with no predecessors may be the entry block, in which case
it isn't unreachable and should not be zapped.  The check for the entry block
was missing in one case: a block containing an unwind instruction.  While there,
do some small cleanups: "M" is not a great name for a Function* (it would be
more appropriate for a Module*), change it to "Fn"; use Fn in more places.

llvm-svn: 117224
2010-10-24 12:23:30 +00:00
Bob Wilson a4e231c880 Teach instcombine to set the alignment arguments for NEON load/store intrinsics.
llvm-svn: 117154
2010-10-22 21:41:48 +00:00
Mikhail Glushenkov 2072db24ed GlobalOpt: EvaluateFunction() must not evaluate stores to weak_odr globals.
Fixes PR8389.

llvm-svn: 116812
2010-10-19 16:47:23 +00:00
Dan Gohman 02538ac4d3 Make BasicAliasAnalysis a normal AliasAnalysis implementation which
does normal initialization and normal chaining. Change the default
AliasAnalysis implementation to NoAlias.

Update StandardCompileOpts.h and friends to explicitly request
BasicAliasAnalysis.

Update tests to explicitly request -basicaa.

llvm-svn: 116720
2010-10-18 18:04:47 +00:00
Owen Anderson 18e4fed3fa Generalize MemCpyOpt's handling of call slot forwarding to function properly when the call slot
forwarding is implemented with a load/store pair rather than a memcpy.

llvm-svn: 116637
2010-10-15 22:52:12 +00:00
Chris Lattner b9681ad442 fix a bug I introduced, no idea how this didn't repro right.
llvm-svn: 116462
2010-10-14 00:30:00 +00:00
Chris Lattner c7bd5740eb hack to unbreak buildbots
llvm-svn: 116461
2010-10-14 00:26:10 +00:00
Chris Lattner 698661c741 add uadd_ov/usub_ov to apint, consolidate constant folding
logic to use the new APInt methods.  Among other things this
implements rdar://8501501 - llvm.smul.with.overflow.i32 should constant fold

which comes from "clang -ftrapv", originally brought to my attention from PR8221.

llvm-svn: 116457
2010-10-14 00:05:07 +00:00
Kenneth Uildriks b8d7efe785 Now using a variant of the existing inlining heuristics to decide whether to create a given specialization of a function in PartialSpecialization. If the total performance bonus across all callsites passing the same constant exceeds the specialization cost, we create the specialization.
llvm-svn: 116158
2010-10-09 22:06:36 +00:00
Devang Patel 57da4caa85 Remove LoopIndexSplit pass. It is neither maintained nor used by anyone.
llvm-svn: 116004
2010-10-07 23:29:37 +00:00
Owen Anderson 13a642da0b Now that the profitable bits of EnableFullLoadPRE have been enabled by default, rip out the remainder.
Anyone interested in more general PRE would be better served by implementing it separately, to get real
anticipation calculation, etc.

llvm-svn: 115337
2010-10-01 20:02:55 +00:00
Chris Lattner c663a67384 fix PR8267 - Instcombine shouldn't optimizer away volatile memcpy's.
llvm-svn: 115296
2010-10-01 05:51:02 +00:00
Chris Lattner 98e82678bd upgrade this test.
llvm-svn: 115295
2010-10-01 05:47:16 +00:00
Owen Anderson 3170a25a84 We do want to allow LoadPRE to perform LICM-like transformations: we already consider PHI nodes to be negligible for
code size (making this transform code size neutral), and it allows us to hoist values out of loops, which is always
a good thing.

llvm-svn: 115205
2010-09-30 20:53:04 +00:00
Benjamin Kramer 2b76c66fd6 Add constant folding for strspn and strcspn to SimplifyLibCalls.
llvm-svn: 115116
2010-09-30 00:58:35 +00:00
Benjamin Kramer 38d22f69fc Add strpbrk folding to SimplifyLibCalls.
llvm-svn: 115111
2010-09-29 23:52:12 +00:00
Benjamin Kramer 8e861d7eee Simplify the loop in StrChrOptimizer. FileCheckize test.
llvm-svn: 115095
2010-09-29 22:29:12 +00:00
Benjamin Kramer 824645abc9 Teach SimplifyLibCalls how to optimize strrchr.
llvm-svn: 115091
2010-09-29 21:50:51 +00:00
Owen Anderson 99c985c37d Fix PR8247: JumpThreading can cause a block to become unreachable while still having a predecessor, if it is part of a self-loop.
Because of this, we cannot use the Simplify* APIs, as they can assert-fail on unreachable code.  Since it's not easy to determine
if a given threading will cause a block to become unreachable, simply defer simplification to later InstCombine and/or
DCE passes.

llvm-svn: 115082
2010-09-29 20:34:41 +00:00
Jakob Stoklund Olesen 1083573796 Don't try to constant fold libm functions with non-finite arguments.
Usually we wouldn't do this anyway because llvm_fenv_testexcept would return an
exception, but we have seen some cases where neither errno nor fenv detect an
exception on arm-linux.

llvm-svn: 114893
2010-09-27 21:29:20 +00:00
Owen Anderson b590a927cd LoadPRE was not properly checking that the load it was PRE'ing post-dominated the block it was being hoisted to.
Splitting critical edges at the merge point only addressed part of the issue; it is also possible for non-post-domination
to occur when the path from the load to the merge has branches in it.  Unfortunately, full anticipation analysis is
time-consuming, so for now approximate it.  This is strictly more conservative than real anticipation, so we will miss
some cases that real PRE would allow, but we also no longer insert loads into paths where they didn't exist before. :-)

This is a very slight net positive on SPEC for me (0.5% on average).  Most of the benchmarks are largely unaffected, but
when it pays off it pays off decently: 181.mcf improves by 4.5% on my machine.

llvm-svn: 114785
2010-09-25 05:26:18 +00:00
Jakob Stoklund Olesen 9d06adcf88 Be more precise when trying to XFAIL this tester: http://google1.osuosl.org:8011/builders/llvm-arm-linux
llvm-svn: 114755
2010-09-24 20:34:49 +00:00
Dan Gohman 49c15c0f9f Attempt to XFAIL this test on arm-linux, which is inexplicably failing.
llvm-svn: 114241
2010-09-18 00:04:37 +00:00
Dan Gohman f3a9c464b4 Fix this test to avoid an "inexact" fold.
llvm-svn: 114202
2010-09-17 20:25:43 +00:00
Dan Gohman 695312637c Fix this test so that folding doesn't depend on a potentially
"inexact" result.

llvm-svn: 114198
2010-09-17 20:15:53 +00:00
Dan Gohman 18fa17cf3d Fix the folding of floating-point math library calls, like sin(infinity),
so that it detects errors on platforms where libm doesn't set errno.
It's still subject to host libm details though.

llvm-svn: 114148
2010-09-17 01:38:06 +00:00
Owen Anderson 20154b3ed4 Add missing RUN line to this test.
llvm-svn: 114106
2010-09-16 18:46:23 +00:00
Owen Anderson 140296f5c0 It is possible, under specific circumstances involving ptrtoint ConstantExpr's, for LVI to end up trying to merge
a Constant into a ConstantRange.  Handle this conservatively for now, rather than asserting.  The testcase is
more complex than I would like, but the manifestation of the problem is sensitive to iteration orders and the state of the
LVI cache, and I have not been able to reproduce it with manually constructed or simplified cases.

Fixes PR8162.

llvm-svn: 114103
2010-09-16 18:28:33 +00:00
Owen Anderson 94532cb297 Fix PR8161, in which an unreachable loop causes recursive instruction simplification to try
to replace an instruction with itself.  Add a predicate to the simplifier to prevent this case.

llvm-svn: 114097
2010-09-16 17:42:36 +00:00
Chris Lattner 67e534505d fix PR8144, a bug where constant merge would merge globals marked
attribute(used).

llvm-svn: 113911
2010-09-15 00:30:11 +00:00
Owen Anderson f6fe577e88 Remove dead option from tests.
llvm-svn: 113855
2010-09-14 21:03:40 +00:00
Chris Lattner f1144f0929 fix PR8102, a case where we'd copyValue from a value that we already
deleted.  Fix this by doing the copyValue's before we delete stuff!

The testcase only repros the problem on my system with valgrind.

llvm-svn: 113820
2010-09-14 00:19:00 +00:00
Owen Anderson fe12f23024 Add a reduced testcase for the infinite loop fixed in r113763.
llvm-svn: 113770
2010-09-13 18:28:40 +00:00
Owen Anderson c237a849e3 Re-apply r113679, which was reverted in r113720, which added a pair of new instcombine transforms
to expose greater opportunities for store narrowing in codegen.  This patch fixes a potential
infinite loop in instcombine caused by one of the introduced transforms being overly aggressive.

llvm-svn: 113763
2010-09-13 17:59:27 +00:00
Eric Christopher 26abd3e0c2 Revert 113679, it was causing an infinite loop in a testcase that I've sent
on to Owen.

llvm-svn: 113720
2010-09-12 06:09:23 +00:00
Owen Anderson 70f4524427 Invert and-of-or into or-of-and when doing so would allow us to clear bits of the and's mask.
This can result in increased opportunities for store narrowing in code generation.  Update a number of
tests for this change.  This fixes <rdar://problem/8285027>.

Additionally, because this inverts the order of ors and ands, some patterns for optimizing or-of-and-of-or
no longer fire in instances where they did originally.  Add a simple transform which recaptures most of these
opportunities: if we have an or-of-constant-or and have failed to fold away the inner or, commute the order 
of the two ors, to give the non-constant or a chance for simplification instead.

llvm-svn: 113679
2010-09-11 05:48:06 +00:00
Benjamin Kramer 8c35fb0739 Teach InstructionSimplify to fold (A & B) & A -> A & B and (A | B) | A -> A | B.
Reassociate does this but it doesn't catch all cases (e.g. if the operands are i1).

llvm-svn: 113651
2010-09-10 22:39:55 +00:00
Owen Anderson 6270515918 Revert r113439, which relaxed the requirement that loops containing calls cannot be unrolled. After some discussion,
there seems to be a better way to achieve the same effect.

llvm-svn: 113528
2010-09-09 20:02:23 +00:00
Owen Anderson 8084dbaf8e Relax the "don't unroll loops containing calls" rule. Instead, when a loop contains a call, lower the
unrolling threshold to the optimize-for-size threshold.  Basically, for loops containing calls, unrolling
can still be profitable as long as the loop is REALLY small.

llvm-svn: 113439
2010-09-08 23:10:07 +00:00
Owen Anderson 3fe002dfb5 Generalize instcombine's support for combining multiple bit checks into a single test. Patch by Dirk Steinke!
llvm-svn: 113423
2010-09-08 22:16:17 +00:00
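A classic instance of this kind of combining, sketched with invented masks: two single-bit tests of the same value merge into one test against the union of the masks.

  %a = and i32 %x, 1
  %c1 = icmp eq i32 %a, 0
  %b = and i32 %x, 4
  %c2 = icmp eq i32 %b, 0
  %both = and i1 %c1, %c2

becomes:

  %m = and i32 %x, 5
  %both = icmp eq i32 %m, 0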
Chris Lattner 6e27b3e004 Fix a serious performance regression introduced by r108687 on linux:
turning (fptrunc (sqrt (fpext x))) -> (sqrtf x)  is great, but we have
to delete the original sqrt as well.  Not doing so causes us to do 
two sqrt's when building with -fmath-errno (the default on linux).

llvm-svn: 113260
2010-09-07 20:01:38 +00:00
Chris Lattner 29570bd695 rename test.
llvm-svn: 113257
2010-09-07 19:57:06 +00:00
Chris Lattner be9019090e fix PR8067, an over-aggressive assertion in LICM.
llvm-svn: 113146
2010-09-06 05:11:24 +00:00
Chris Lattner b01c24a945 Teach loop rotate to hoist trivially invariant instructions
in the duplicated block instead of duplicating them.  

Duplicating them into the end of the loop and the preheader 
means that we got a phi node in the header of the loop, 
which prevented LICM from hoisting them.  GVN would
usually come around later and merge the duplicated 
instructions so we'd get reasonable output... except that
anything dependent on the shoulda-been-hoisted value can't
be hoisted.  In PR5319 (which this fixes), a memory value
didn't get promoted.

llvm-svn: 113134
2010-09-06 01:10:22 +00:00
Chris Lattner 72d283c826 fix PR8063, a crash in globalopt in the malloc analysis code.
llvm-svn: 113109
2010-09-05 17:20:46 +00:00
Dan Gohman 487e250109 Fix LoopSimplify to notify ScalarEvolution when splitting a loop backedge
into an inner loop, as the new loop iteration may differ substantially.
This fixes PR8078.

llvm-svn: 113057
2010-09-04 02:42:48 +00:00
Chris Lattner 50506787d1 fix a bug in my licm rewrite when a load from the promoted memory
location is being re-stored to the memory location.  We would get
a dangling pointer from the SSAUpdate data structure and miss a 
use.  This fixes PR8068

llvm-svn: 113042
2010-09-04 00:12:30 +00:00
Owen Anderson c91c1a205a Propagate non-local comparisons. Fixes PR1757.
llvm-svn: 113025
2010-09-03 22:47:08 +00:00
Owen Anderson c725462245 Add support for simplifying a load from a computed value to a load from a global when it
is provable that they're equivalent.  This fixes PR4855.

llvm-svn: 112994
2010-09-03 19:08:37 +00:00
Owen Anderson 064cb4c807 Add a test for PR4413, which was apparently fixed at some point in the past.
llvm-svn: 112987
2010-09-03 18:33:08 +00:00
Owen Anderson 50d8c8888c Add PR number to test.
llvm-svn: 112971
2010-09-03 16:58:25 +00:00
Chris Lattner 65fb25a257 more test cleanup
llvm-svn: 112892
2010-09-02 22:38:56 +00:00
Chris Lattner bb451461ec remove some noise from tests.
llvm-svn: 112889
2010-09-02 22:35:33 +00:00
Chris Lattner affc0e42f0 fix more AST updating bugs, correcting miscompilation in PR8041
llvm-svn: 112878
2010-09-02 22:19:10 +00:00
Owen Anderson 67dee4dcac Fix typo. I accidentally edited the wrong file before my last commit.
llvm-svn: 112851
2010-09-02 19:52:06 +00:00
Owen Anderson a8c896b704 Fix a bug in LazyValueInfo that CorrelatedValuePropagation exposed: In the LVI lattice, undef and the full set ConstantRange should not
be treated as equivalent.

llvm-svn: 112843
2010-09-02 18:23:58 +00:00
Duncan Sands 8dda07428a Print the number of uses of a function in the .ll since it can be informative
and there seems to be no reason not to.

llvm-svn: 112812
2010-09-02 08:52:23 +00:00
Chris Lattner 8af45a889d deepen my MMX/SRoA hack to avoid hurting non-x86 codegen.
llvm-svn: 112763
2010-09-01 23:09:27 +00:00
Dan Gohman 0ad7d9c24e Fix loop unswitching's assumption that a code path which either
loops forever or exits will eventually exit. This fixes PR5373.

llvm-svn: 112745
2010-09-01 21:46:45 +00:00
Bill Wendling 6456efaffd The output of opt -stats must be sent to stderr. Patch by NAKAMURA Takumi!
llvm-svn: 112724
2010-09-01 18:32:56 +00:00
Chris Lattner 34e5361eb5 add a gross hack to work around a problem that Argiris reported
on llvmdev: SRoA is introducing MMX datatypes like <1 x i64>,
which then cause random problems because the X86 backend is
producing mmx stuff without inserting proper emms calls.

In the short term, force off MMX datatypes.  In the long term,
the X86 backend should not select generic vector types to MMX
registers.  This is being worked on, but won't be done in time
for 2.8.  rdar://8380055

llvm-svn: 112696
2010-09-01 05:14:33 +00:00
Chris Lattner b9ed4f252f filecheckize
llvm-svn: 112695
2010-09-01 05:10:14 +00:00
Chris Lattner 030f02021b licm is wasting time hoisting constant foldable operations;
instead of hoisting them, just fold them away.  This occurs in the
testcase for PR8041, for example.

llvm-svn: 112669
2010-08-31 23:00:16 +00:00
Owen Anderson a5e6b3eca4 Merge 2010-08-31-InfiniteRecursion.ll into crash.ll.
llvm-svn: 112635
2010-08-31 20:27:17 +00:00
Owen Anderson 799a08ae48 Add a test for the duplicated-conditional situation illutrated by PR5652.
llvm-svn: 112621
2010-08-31 18:49:12 +00:00
Chris Lattner e2295f1c80 merge two tests.
llvm-svn: 112617
2010-08-31 18:44:03 +00:00
Owen Anderson 3931c85956 Manually reduce this testcase.
llvm-svn: 112615
2010-08-31 18:16:29 +00:00
Chris Lattner fbcd165b59 merge two tests and convert to filecheck.
llvm-svn: 112613
2010-08-31 18:05:08 +00:00
Owen Anderson ada0623725 Add a micro-test for the transforms I added to JumpThreading.
I have not been able to find a way to test each in isolation, for a few reasons:
1) The ability to look-through non-i1 BinaryOperator's requires the ability to look through non-constant
   ICmps in order for it to ever trigger.
2) The ability to do LVI-powered PHI value determination only matters in cases that ProcessBranchOnPHI
   can't handle.  Since it already handles all the cases without other instructions in the def-use chain
   between the PHI and the branch, it requires the ability to look through ICmps and/or BinaryOperators
   as well.

llvm-svn: 112611
2010-08-31 17:59:07 +00:00
Owen Anderson 064b139c8d Rename test directory to reflect new pass name.
llvm-svn: 112592
2010-08-31 07:50:31 +00:00
Owen Anderson 48d58ad64c Rename ValuePropagation to a more descriptive CorrelatedValuePropagation.
llvm-svn: 112591
2010-08-31 07:48:34 +00:00
Owen Anderson 3997a07fb9 More Chris-inspired JumpThreading fixes: use ConstantExpr to correctly constant-fold undef, and be more careful with its return value.
This actually exposed an infinite recursion bug in ComputeValueKnownInPredecessors which theoretically already existed (in JumpThreading's
handling of and/or of i1's), but never manifested before.  This patch adds a tracking set to prevent this case.

llvm-svn: 112589
2010-08-31 07:36:34 +00:00
Owen Anderson 376597c13e Remove r111665, which implemented store-narrowing in InstCombine. Chris discovered a miscompilation in it, and it's not easily
fixable at the optimizer level. I'll investigate reimplementing it in DAGCombine.

llvm-svn: 112575
2010-08-31 04:41:06 +00:00
Owen Anderson 70b17c50e2 Combine these two tests, and make sure there's a newline at the end of the file.
llvm-svn: 112554
2010-08-30 23:37:41 +00:00
Duncan Sands 68c30907cc Correct bogus module triple specifications.
llvm-svn: 112469
2010-08-30 10:48:29 +00:00
Chris Lattner 263f804699 LICM does get dead instructions input to it. Instead of sinking them
out of loops, just delete them.

llvm-svn: 112451
2010-08-29 18:22:25 +00:00
Chris Lattner 504e5100d3 remove the ABCD and SSI passes. They don't have any clients that
I'm aware of, aren't maintained, and LVI will be replacing their value.
nlewycky approved this on irc.

llvm-svn: 112355
2010-08-28 03:51:24 +00:00
Chris Lattner d0214f3efe handle the constant case of vector insertion. For something
like this:

struct S { float A, B, C, D; };

struct S g;
struct S bar() { 
  struct S A = g;
  ++A.B;
  A.A = 42;
  return A;
}

we now generate:

_bar:                                   ## @bar
## BB#0:                                ## %entry
	movq	_g@GOTPCREL(%rip), %rax
	movss	12(%rax), %xmm0
	pshufd	$16, %xmm0, %xmm0
	movss	4(%rax), %xmm2
	movss	8(%rax), %xmm1
	pshufd	$16, %xmm1, %xmm1
	unpcklps	%xmm0, %xmm1
	addss	LCPI1_0(%rip), %xmm2
	pshufd	$16, %xmm2, %xmm2
	movss	LCPI1_1(%rip), %xmm0
	pshufd	$16, %xmm0, %xmm0
	unpcklps	%xmm2, %xmm0
	ret

instead of:

_bar:                                   ## @bar
## BB#0:                                ## %entry
	movq	_g@GOTPCREL(%rip), %rax
	movss	12(%rax), %xmm0
	pshufd	$16, %xmm0, %xmm0
	movss	4(%rax), %xmm2
	movss	8(%rax), %xmm1
	pshufd	$16, %xmm1, %xmm1
	unpcklps	%xmm0, %xmm1
	addss	LCPI1_0(%rip), %xmm2
	movd	%xmm2, %eax
	shlq	$32, %rax
	addq	$1109917696, %rax       ## imm = 0x42280000
	movd	%rax, %xmm0
	ret

llvm-svn: 112345
2010-08-28 01:50:57 +00:00
Chris Lattner dd6601048e optimize bitcasts from large integers to vector into vector
element insertion from the pieces that feed into the vector.
This handles a pattern that occurs frequently due to code
generated for the x86-64 abi.  We now compile something like
this:

struct S { float A, B, C, D; };
struct S g;
struct S bar() { 
  struct S A = g;
  ++A.A;
  ++A.C;
  return A;
}

into all nice vector operations:

_bar:                                   ## @bar
## BB#0:                                ## %entry
	movq	_g@GOTPCREL(%rip), %rax
	movss	LCPI1_0(%rip), %xmm1
	movss	(%rax), %xmm0
	addss	%xmm1, %xmm0
	pshufd	$16, %xmm0, %xmm0
	movss	4(%rax), %xmm2
	movss	12(%rax), %xmm3
	pshufd	$16, %xmm2, %xmm2
	unpcklps	%xmm2, %xmm0
	addss	8(%rax), %xmm1
	pshufd	$16, %xmm1, %xmm1
	pshufd	$16, %xmm3, %xmm2
	unpcklps	%xmm2, %xmm1
	ret

instead of icky integer operations:

_bar:                                   ## @bar
	movq	_g@GOTPCREL(%rip), %rax
	movss	LCPI1_0(%rip), %xmm1
	movss	(%rax), %xmm0
	addss	%xmm1, %xmm0
	movd	%xmm0, %ecx
	movl	4(%rax), %edx
	movl	12(%rax), %esi
	shlq	$32, %rdx
	addq	%rcx, %rdx
	movd	%rdx, %xmm0
	addss	8(%rax), %xmm1
	movd	%xmm1, %eax
	shlq	$32, %rsi
	addq	%rax, %rsi
	movd	%rsi, %xmm1
	ret

This resolves rdar://8360454

llvm-svn: 112343
2010-08-28 01:20:38 +00:00
Owen Anderson cf7f941121 Add a prototype of a new peephole optimizing pass that uses LazyValue info to simplify PHIs and select's.
This pass addresses the missed optimizations from PR2581 and PR4420.

llvm-svn: 112325
2010-08-27 23:31:36 +00:00
Chris Lattner 954e9557e3 tidy up test.
llvm-svn: 112321
2010-08-27 23:15:21 +00:00
Chris Lattner 6c1395f62a Enhance the shift propagator to handle the case when you have:
A = shl x, 42
...
B = lshr ..., 38

which can be transformed into:
A = shl x, 4
...

iff we can prove that the would-be-shifted-in bits
are already zero.  This eliminates two shifts in the testcase
and allows eliminate of the whole i128 chain in the real example.

llvm-svn: 112314
2010-08-27 22:53:44 +00:00
Chris Lattner 18d7fc8fc6 Implement a pretty general logical shift propagation
framework, which is good at ripping through bitfield
operations.  This generalize a bunch of the existing
xforms that instcombine does, such as 
  (x << c) >> c -> and
to handle intermediate logical nodes.  This is useful for
ripping up the "promote to large integer" code produced by
SRoA.

llvm-svn: 112304
2010-08-27 22:24:38 +00:00
Chris Lattner 606b76eba6 merge and filecheckize test
llvm-svn: 112289
2010-08-27 20:44:45 +00:00
Chris Lattner c665156e9f merge two tests
llvm-svn: 112288
2010-08-27 20:42:10 +00:00
Chris Lattner 7398434675 teach the truncation optimization that an entire chain of
computation can be truncated if it is fed by a sext/zext that doesn't
have to be exactly equal to the truncation result type.

llvm-svn: 112285
2010-08-27 20:32:06 +00:00
Chris Lattner 90cd746e63 Add an instcombine to clean up a common pattern produced
by the SRoA "promote to large integer" code, eliminating
some type conversions like this:

   %94 = zext i16 %93 to i32                       ; <i32> [#uses=2]
   %96 = lshr i32 %94, 8                           ; <i32> [#uses=1]
   %101 = trunc i32 %96 to i8                      ; <i8> [#uses=1]

This also unblocks other xforms from happening, now clang is able to compile:

struct S { float A, B, C, D; };
float foo(struct S A) { return A.A + A.B+A.C+A.D; }

into:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	pshufd	$1, %xmm0, %xmm2
	addss	%xmm0, %xmm2
	movdqa	%xmm1, %xmm3
	addss	%xmm2, %xmm3
	pshufd	$1, %xmm1, %xmm0
	addss	%xmm3, %xmm0
	ret

on x86-64, instead of:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	movd	%xmm0, %rax
	shrq	$32, %rax
	movd	%eax, %xmm2
	addss	%xmm0, %xmm2
	movapd	%xmm1, %xmm3
	addss	%xmm2, %xmm3
	movd	%xmm1, %rax
	shrq	$32, %rax
	movd	%eax, %xmm0
	addss	%xmm3, %xmm0
	ret

This seems pretty close to optimal to me, at least without
using horizontal adds.  This also triggers in lots of other
code, including SPEC.

llvm-svn: 112278
2010-08-27 18:31:05 +00:00
Owen Anderson 6ebbd92380 Use LVI to eliminate conditional branches where we've tested a related condition previously. Update tests for this change.
This fixes PR5652.
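
A minimal sketch of the kind of case this covers (hypothetical values;
the exact conditions handled may differ):

   %c1 = icmp sgt i32 %x, 10
   br i1 %c1, label %then, label %else
 then:
   ; on this path %x > 10 is known, so the related test below is implied
   %c2 = icmp sgt i32 %x, 0
   br i1 %c2, label %a, label %b    ; ==> folds to: br label %a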

llvm-svn: 112270
2010-08-27 17:12:29 +00:00
Chris Lattner c188b96bbe filecheckize
llvm-svn: 112235
2010-08-26 22:23:39 +00:00
Chris Lattner 387d6bcdcb rename test.
llvm-svn: 112234
2010-08-26 22:20:47 +00:00
Chris Lattner bfd2228182 optimize "integer extraction out of the middle of a vector" as produced
by SRoA.  This is part of rdar://7892780, but needs another xform to
expose it.
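
A hedged sketch of the kind of pattern meant here (hypothetical types,
little-endian layout assumed; the exact form handled may differ):

   ; element 1 of a <4 x float> pulled out through an i128
   %i = bitcast <4 x float> %v to i128
   %s = lshr i128 %i, 32
   %e = trunc i128 %s to i32
   ; ==> extract the element directly
   %vi = bitcast <4 x float> %v to <4 x i32>
   %e  = extractelement <4 x i32> %vi, i32 1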

llvm-svn: 112232
2010-08-26 22:14:59 +00:00
Chris Lattner d4ebd6df5a optimize bitcast(trunc(bitcast(x))), where the result is a float and 'x'
is a vector, into a vector element extraction.  This allows clang to
compile:

struct S { float A, B, C, D; };
float foo(struct S A) { return A.A + A.B+A.C+A.D; }

into:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	movd	%xmm0, %rax
	shrq	$32, %rax
	movd	%eax, %xmm2
	addss	%xmm0, %xmm2
	movapd	%xmm1, %xmm3
	addss	%xmm2, %xmm3
	movd	%xmm1, %rax
	shrq	$32, %rax
	movd	%eax, %xmm0
	addss	%xmm3, %xmm0
	ret

instead of:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	movd	%xmm0, %rax
	movd	%eax, %xmm0
	shrq	$32, %rax
	movd	%eax, %xmm2
	addss	%xmm0, %xmm2
	movd	%xmm1, %rax
	movd	%eax, %xmm1
	addss	%xmm2, %xmm1
	shrq	$32, %rax
	movd	%eax, %xmm0
	addss	%xmm1, %xmm0
	ret

... eliminating half of the horribleness.
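
In IR terms, the pattern named above looks roughly like this
(illustrative sketch, little-endian element layout assumed):

   ; before: the low element recovered through an i128
   %i = bitcast <4 x float> %v to i128
   %t = trunc i128 %i to i32
   %f = bitcast i32 %t to float
   ; after: a plain vector element extraction
   %f = extractelement <4 x float> %v, i32 0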

llvm-svn: 112227
2010-08-26 21:55:42 +00:00
Chris Lattner 3c19d3d5c3 filecheckize
llvm-svn: 112225
2010-08-26 21:51:41 +00:00
Chris Lattner 7717c616bd rename test
llvm-svn: 112224
2010-08-26 21:50:56 +00:00
Owen Anderson bd2ecc7e68 Make JumpThreading smart enough to properly thread StrSwitch when it's compiled with clang++.
llvm-svn: 112198
2010-08-26 17:40:24 +00:00
Devang Patel 01262e129e DIGlobalVariable can be used to encode debug info for globals that are directly folded into a constant by FE.
llvm-svn: 112072
2010-08-25 18:52:02 +00:00
Owen Anderson 4afea9e3c6 In the default address space, any GEP off of null results in a trap value if you try to load it. Thus,
any load in the default address space that completes implies that the base value that it GEP'd from
was not null.
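
A small sketch of the deduction (hypothetical code, address space 0
assumed):

   %gep = getelementptr i32* %base, i64 1
   %v   = load i32* %gep              ; if this load completes...
   %c   = icmp eq i32* %base, null    ; ...this can be folded to false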

llvm-svn: 112015
2010-08-25 01:16:47 +00:00
Owen Anderson 84c29a096b Re-apply r111568 with a fix for the clang self-host.
llvm-svn: 111665
2010-08-20 18:24:43 +00:00
Owen Anderson 3323651ec7 Previous revert failed to remove this file.
llvm-svn: 111582
2010-08-19 23:45:15 +00:00
Owen Anderson 43057cd56a Revert r111568 to unbreak clang self-host.
llvm-svn: 111571
2010-08-19 23:25:16 +00:00
Owen Anderson bb723b228a When a set of bitmask operations, typically from a bitfield initialization, only modifies the low bytes of a value,
we can narrow the store to overwrite only the affected bytes.
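
A rough sketch of the narrowing, under the assumption of a little-endian
target where the rewrite is legal (hypothetical bitfield values):

   ; before: only the low byte changes, but all four bytes are stored
   %old = load i32* %p
   %clr = and i32 %old, -256          ; 0xFFFFFF00
   %new = or i32 %clr, 5
   store i32 %new, i32* %p
   ; after: store just the affected byte
   %p8 = bitcast i32* %p to i8*
   store i8 5, i8* %p8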

llvm-svn: 111568
2010-08-19 22:15:40 +00:00
Kenneth Uildriks d4b6ab9888 Fixed and reactivated a partial specialization test
llvm-svn: 111516
2010-08-19 12:42:38 +00:00
Chris Lattner 3c603024bb Fix PR7755: knowing something about an incoming value for a predecessor
from the LHS should disable reconsidering that predecessor on the
RHS.  However, knowing something about the predecessor on the RHS
shouldn't prevent subsequent additions on the RHS from
happening.

llvm-svn: 111349
2010-08-18 03:14:36 +00:00
Eric Christopher 51edc7b7e1 Temporarily revert r110987 as it's causing some miscompares in
vector-heavy code.  I'll re-enable it when we've tracked down the problem.

llvm-svn: 111318
2010-08-17 22:55:27 +00:00
Dan Gohman 5047ca0c02 When rotating loops, put the original header at the bottom of the
loop, making the resulting loop significantly less ugly.  Also, zap
its trivial PHI nodes, since it's easy.

llvm-svn: 111255
2010-08-17 17:39:21 +00:00
Dan Gohman 250b754428 Instead, teach SimplifyCFG to trim non-address-taken blocks from
indirectbr destination lists.

llvm-svn: 111122
2010-08-16 14:41:14 +00:00
Dan Gohman aa445c0751 LoopSimplify shouldn't split loop backedges that use indirectbr. PR7867.
llvm-svn: 111061
2010-08-14 00:43:09 +00:00
Dan Gohman 4a63fad976 Teach SimplifyCFG how to simplify indirectbr instructions.
 - Eliminate redundant successors.
 - Convert an indirectbr with one successor into a direct branch.
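
For illustration, a hedged sketch of both simplifications on
hypothetical IR:

   ; duplicate successors are dropped
   indirectbr i8* %addr, [label %bb, label %bb, label %other]
   ; ==> indirectbr i8* %addr, [label %bb, label %other]

   ; a single-successor indirectbr becomes an unconditional branch
   indirectbr i8* %addr, [label %only]
   ; ==> br label %only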

Also, generalize SimplifyCFG so that it can be run on a function's entry block.
It knows quite a few simplifications that are applicable to the entry
block, and it needs only a few checks to avoid trouble there.

llvm-svn: 111060
2010-08-14 00:29:42 +00:00
Nate Begeman 2a0ca3e937 Reapply this transformation now that it is passing the external test which it previously failed.
llvm-svn: 110987
2010-08-13 00:17:53 +00:00
Chris Lattner 363226dfe8 fix PR7876: If ipsccp decides that a function's address is taken
before it rewrites the code, we need to use that fact in the post-rewrite pass.

llvm-svn: 110962
2010-08-12 22:25:23 +00:00
Eric Christopher ac40d49c70 Temporarily revert 110737 and 110734, they were causing failures
in an external testsuite.

llvm-svn: 110905
2010-08-12 07:01:22 +00:00
Nate Begeman 3ec892c167 Add test for recent instcombine vector shuffle enhancement
llvm-svn: 110737
2010-08-10 21:58:00 +00:00
Eli Friedman f99e7e6643 PR7853: fix a silly mistake introduced in r101899, and add a test to make sure
it doesn't regress again.

llvm-svn: 110597
2010-08-09 20:49:43 +00:00
Dan Gohman c53ee449a5 Move x86-specific tests out of test/Transforms/LoopStrengthReduce and
into test/CodeGen/X86, so that they aren't run when the x86 target is
not enabled.

Fix uglygep.ll to not be x86-specific.

llvm-svn: 110343
2010-08-05 17:04:15 +00:00
Dan Gohman 3619660529 Make instcombine set explicit alignments on load or store
instructions with alignment 0, so that subsequent passes don't
need to bother checking the TargetData ABI size manually.
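
A small sketch of the effect (illustrative only; the actual alignment
chosen comes from TargetData):

   ; before: alignment 0, so every pass must query TargetData itself
   %v = load i32* %p
   store i32 %v, i32* %q
   ; after: the ABI alignment is recorded explicitly (4 for i32 here)
   %v = load i32* %p, align 4
   store i32 %v, i32* %q, align 4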

llvm-svn: 110128
2010-08-03 18:20:32 +00:00
Peter Collingbourne ddaaf40d24 Add an atomic lowering pass
llvm-svn: 110113
2010-08-03 16:19:16 +00:00
Owen Anderson 8f306a779b Re-apply the infamous r108614, with a fix pointed out by Dirk Steinke.
llvm-svn: 110036
2010-08-02 09:32:13 +00:00
Daniel Dunbar 0b636a24c7 Speculatively revert r108614, "Another attempt at getting the clang self-host to
like my instcombine patch.", in an attempt to fix Clang i386 bootstrap.
 - Also PR7719.

llvm-svn: 109953
2010-07-31 19:51:11 +00:00
Owen Anderson bb4c4b59a4 Fix a test with malformed IR. Not sure why this didn't fail before.
llvm-svn: 109422
2010-07-26 18:44:56 +00:00
Dan Gohman cd83870faf Fix SCEVExpander::visitAddRecExpr so that it remembers the induction variable
it inserted rather than using LoopInfo::getCanonicalInductionVariable to
rediscover it, since that doesn't work on non-canonical loops. This fixes
infinite recursion on such loops; PR7562.

llvm-svn: 109419
2010-07-26 18:28:14 +00:00