Commit Graph

7976 Commits

Author SHA1 Message Date
Devang Patel 3ac171d49a Remove dead code.
llvm-svn: 127923
2011-03-18 23:33:58 +00:00
Devang Patel c1431e6e84 Consider debug info intrinsics pointing to null value as dead instructions.
llvm-svn: 127922
2011-03-18 23:28:02 +00:00
Andrew Trick f8f67f0188 Remove TargetData and ValueTracking includes. I didn't mean for them to sneak into my last checkin.
llvm-svn: 127842
2011-03-18 00:36:39 +00:00
Andrew Trick 87716c93c2 Added isValidRewrite() to check the result of ScalarEvolutionExpander.
SCEV may generate expressions composed of multiple pointers, which can
lead to invalid GEP expansion. Until we can teach SCEV to follow strict
pointer rules, make sure no bad GEPs creep into IR.
Fixes rdar://problem/9038671.

llvm-svn: 127839
2011-03-17 23:51:11 +00:00
Andrew Trick e44f0d94f6 whitespace
llvm-svn: 127837
2011-03-17 23:46:48 +00:00
Devang Patel aad34d882d Try to not lose a variable's debug info during instcombine.
This is done by lowering dbg.declare intrinsic into dbg.value intrinsic.
Radar 9143931.

llvm-svn: 127834
2011-03-17 22:18:16 +00:00
Devang Patel 8c0b16b0aa Refactor into a separate utility function.
llvm-svn: 127832
2011-03-17 21:58:19 +00:00
Cameron Zwarich 7599b106b7 Fix a comment.
llvm-svn: 127728
2011-03-16 08:13:42 +00:00
Cameron Zwarich 0454253d7a Only convert allocas to scalars if it is profitable. The profitability metric I
chose is having a non-memcpy/memset use and being larger than any native integer
type. Originally I chose having an access of a size smaller than the total size
of the alloca, but this caused some minor issues on the spirit benchmark where
SRoA runs again after some inlining.

This fixes <rdar://problem/8613163>.

llvm-svn: 127718
2011-03-16 00:13:44 +00:00
Cameron Zwarich b51c830f7c Better use initializer lists.
llvm-svn: 127716
2011-03-16 00:13:37 +00:00
Cameron Zwarich 63062ccf85 Add a clarifying comment.
llvm-svn: 127715
2011-03-16 00:13:35 +00:00
Cameron Zwarich dbb27393cc Clean up something noticed by Fritz.
llvm-svn: 127684
2011-03-15 18:42:33 +00:00
Cameron Zwarich 0b8cdfb6ec Do not add PHIs with no users when creating LCSSA form. Patch by Andrew Clinton.
llvm-svn: 127674
2011-03-15 07:41:25 +00:00
Eli Friedman c4414c6e92 PR9450: Make switch optimization in SimplifyCFG not dependent on the ordering
of pointers in an std::map.

llvm-svn: 127650
2011-03-15 02:23:35 +00:00
Eric Christopher 2139d3148f If we don't know how long a string is, we can't fold an _chk version to the
normal version.

Fixes rdar://9123638

llvm-svn: 127636
2011-03-15 00:25:41 +00:00
Andrew Trick 8b55b736b1 Added SCEV::NoWrapFlags to manage unsigned, signed, and self wrap
properties.
Added the self-wrap flag for SCEV::AddRecExpr.
A slew of temporary FIXMEs indicate the intention of the no-self-wrap flag
without changing behavior in this revision.

llvm-svn: 127590
2011-03-14 16:50:06 +00:00
Andrew Trick 328b223bb1 whitespace
llvm-svn: 127589
2011-03-14 16:48:10 +00:00
Jin-Gu Kang b452db02f0 This case is solved by the Scalar Replacement of Aggregates (DT) and
Early CSE passes, so this patch reverts it to the original source code.

llvm-svn: 127574
2011-03-14 01:21:00 +00:00
Jin-Gu Kang b7538c71e1 Add a comment describing the following: when a load and a store reference
the same memory location, the memory location is represented by a
getelementptr with two uses (the load and the store), and the
getelementptr's base is an alloca with a single use. At this point, the
instructions from the alloca to the store can be removed.
(This pattern is generated when a bitfield is accessed.)
For example,
%u = alloca %struct.test, align 4               ; [#uses=1]
%0 = getelementptr inbounds %struct.test* %u, i32 0, i32 0;[#uses=2]
%1 = load i8* %0, align 4                       ; [#uses=1]
%2 = and i8 %1, -16                             ; [#uses=1]
%3 = or i8 %2, 5                                ; [#uses=1]
store i8 %3, i8* %0, align 4

llvm-svn: 127565
2011-03-13 14:05:51 +00:00
Jin-Gu Kang 2e939f7c3c This patch removes some of the useless instructions generated by bitfield access.
llvm-svn: 127539
2011-03-12 12:18:44 +00:00
Cameron Zwarich 338d362200 Roll r127459 back in:
Optimize trivial branches in CodeGenPrepare, which often get created from the
lowering of objectsize intrinsics. Unfortunately, a number of tests were relying
on llc not optimizing trivial branches, so I had to add an option to allow them
to continue to test what they originally tested.

This fixes <rdar://problem/8785296> and <rdar://problem/9112893>.

llvm-svn: 127498
2011-03-11 21:52:04 +00:00
Daniel Dunbar 94ccb27b43 Revert r127459, "Optimize trivial branches in CodeGenPrepare, which often get
created from the", it broke some GCC test suite tests.

llvm-svn: 127477
2011-03-11 19:30:30 +00:00
Benjamin Kramer 51897bcd3e InstCombine: Fix a thinko where we transformed an icmp under the assumption that it's a zero comparison when it's not.
Fixes PR9454.

llvm-svn: 127464
2011-03-11 11:37:40 +00:00
Cameron Zwarich cc27b3acc4 Optimize trivial branches in CodeGenPrepare, which often get created from the
lowering of objectsize intrinsics. Unfortunately, a number of tests were relying
on llc not optimizing trivial branches, so I had to add an option to allow them
to continue to test what they originally tested.

This fixes <rdar://problem/8785296> and <rdar://problem/9112893>.

llvm-svn: 127459
2011-03-11 04:54:27 +00:00
Dan Gohman affbc66f60 RecursivelyDeleteTriviallyDeadInstructions only needs a
Value, not an Instruction, so casting is not necessary. Also,
it's theoretically possible that the Value is not an
Instruction, since WeakVH follows RAUWs.

llvm-svn: 127427
2011-03-10 20:57:44 +00:00
Dan Gohman 154ed49784 Fix reassociate to postpone certain instruction deletions until
after it has finished all of its reassociations, because its
habit of unlinking operands and holding them in a data structure
while working means that it's not easy to determine when an
instruction is really dead until after all its regular work is
done. rdar://9096268.

llvm-svn: 127424
2011-03-10 19:51:54 +00:00
Benjamin Kramer b49b964b98 InstCombine: Turn umul_with_overflow into mul nuw if we can prove that it cannot overflow.
This happens a lot in clang-compiled C++ code because it adds overflow checks to operator new[]:
  unsigned *foo(unsigned n) { return new unsigned[n]; }
We can optimize away the overflow check on 64 bit targets because (uint64_t)n*4 cannot overflow.

llvm-svn: 127418
2011-03-10 18:40:14 +00:00
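As a hypothetical IR sketch of the fold described in the commit above (value names invented, not taken from the commit), the overflow check emitted for operator new[] looks roughly like:
  %n64 = zext i32 %n to i64
  %res = call { i64, i1 } @llvm.umul.with.overflow.i64(i64 %n64, i64 4)
  %size = extractvalue { i64, i1 } %res, 0
  %ovfl = extractvalue { i64, i1 } %res, 1
Since a zero-extended 32-bit value times 4 cannot exceed 64 bits, the call can become a plain mul nuw i64 %n64, 4 and the %ovfl bit folds to false.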
Devang Patel 13f8c7d48e Preserve line number information while simplifying libcalls.
llvm-svn: 127362
2011-03-09 21:27:52 +00:00
Devang Patel a10794ab7b These llvm.dbg.* constants are not used anymore.
llvm-svn: 127352
2011-03-09 19:41:33 +00:00
Cameron Zwarich 19f2b3c652 Fix a crasher introduced by r127317 that is seen on the bots when using an
alloca as both integer and floating-point vectors of the same size. Bugpoint is
not cooperating with me, but I'll try to find a manual testcase tomorrow.

llvm-svn: 127320
2011-03-09 07:34:11 +00:00
Cameron Zwarich 3b649f4d01 Add support to scalar replacement for partial vector accesses of an alloca, e.g.
a union of a float, <2 x float>, and <4 x float>. This mostly comes up with the
use of vector intrinsics, especially in NEON when programmers know the layout of
the register file. This enables codegen to eliminate a lot of the subregister
traffic it would otherwise generate.

This commit only enables this for a small number of floating-point cases, but a
lot more integer cases. I assume this is okay for all ports, but I did not do
extensive testing of the quality of code involving i512 vectors and the like. If
there is a use case where this generates worse code than before, let me know and
we can scale it back.

This fixes <rdar://problem/9036264>.

llvm-svn: 127317
2011-03-09 05:43:05 +00:00
Cameron Zwarich 43a241fa06 Move vector type merging to a separate function in preparation for it getting
more complicated.

llvm-svn: 127316
2011-03-09 05:43:01 +00:00
Eli Friedman a81a82dcaf PR9346: Prevent SimplifyDemandedBits from incorrectly introducing
INT_MIN % -1.

llvm-svn: 127306
2011-03-09 01:28:35 +00:00
Eli Friedman aac35b3fbb PR9420; an instruction before an unreachable is guaranteed not to have any
reachable uses, but there still might be uses in dead blocks.  Use the
standard solution of replacing all the uses with undef.  This is
a rare case because it's very sensitive to phase ordering in SimplifyCFG.

llvm-svn: 127299
2011-03-09 00:48:33 +00:00
Devang Patel fbb482b314 llvm.dbg.declare intrinsic does not use any llvm::Values. It's magic!
llvm-svn: 127282
2011-03-08 22:12:11 +00:00
Nick Lewycky afc8098c9e Reorder comments to put them the right way around.
llvm-svn: 127220
2011-03-08 06:29:47 +00:00
Devang Patel 97d0be8ee1 While sinking an instruction, do not lose llvm.dbg.value intrinsic.
llvm-svn: 127214
2011-03-08 03:06:19 +00:00
Devang Patel d00c628f8f Preserve line no. info.
Radar 9097659

llvm-svn: 127182
2011-03-07 22:43:45 +00:00
Nick Lewycky e467979d0a Add more analysis of the sign bit of an srem instruction. If the LHS is negative
then the result could go either way. If it's provably positive then so is the
srem. Fixes PR9343 #7!

llvm-svn: 127146
2011-03-07 01:50:10 +00:00
Rafael Espindola 871cfde1c2 Don't internalize available_externally functions. We already did the right
thing for variables.

llvm-svn: 127138
2011-03-06 23:41:34 +00:00
Nick Lewycky 92db8e8e39 ConstantInt has some getters which return ConstantInt's or ConstantVector's of
the value splatted into every element. Extend this to getTrue and getFalse by
providing new overloads that take Types that are either i1 or <N x i1>. Use
it in InstCombine to add vector support to some code, fixing PR8469!

llvm-svn: 127116
2011-03-06 03:36:19 +00:00
Benjamin Kramer 08c913b6e6 InstCombine: We know the number of items initially added to the worklist map, so reserve space early to avoid rehashing.
llvm-svn: 127089
2011-03-05 16:43:46 +00:00
Cameron Zwarich 13c885d193 Fix PR9398 - 10% of llc compile time is spent in Value::getNumUses. This reduces
the percentage of time spent in CodeGenPrepare when llcing 403.gcc from 12.6% to
1.8% of total llc time.

llvm-svn: 127069
2011-03-05 08:12:26 +00:00
Nick Lewycky 9719a719c7 Thread comparisons over udiv/sdiv/ashr/lshr exact and lshr nuw/nsw whenever
possible. This goes into instcombine and instsimplify because instsimplify
doesn't need to check hasOneUse since it returns (almost exclusively) constants.

This fixes PR9343 #4 #5 and #8!

llvm-svn: 127064
2011-03-05 05:19:11 +00:00
Nick Lewycky 25cc338d88 Try once again to optimize "icmp (srem X, Y), Y" by turning the comparison into
true/false or "icmp slt/sge Y, 0".

llvm-svn: 127063
2011-03-05 04:28:48 +00:00
Jakob Stoklund Olesen e2017b6f2e DenseMap<uintptr_t,...> doesn't allow all values as keys.
Avoid colliding with the sentinels, hopefully unbreaking
llvm-gcc-x86_64-linux-selfhost.

llvm-svn: 126982
2011-03-04 02:48:56 +00:00
Richard Osborne 5003782293 Fix typo in comment.
llvm-svn: 126941
2011-03-03 14:21:22 +00:00
Richard Osborne af52c52569 Optimize fprintf -> fiprintf if there are no floating point arguments
and fiprintf is available on the target.

llvm-svn: 126940
2011-03-03 14:20:22 +00:00
Richard Osborne 2dfb888392 Optimize sprintf -> siprintf if there are no floating point arguments
and siprintf is available on the target.

llvm-svn: 126937
2011-03-03 14:09:28 +00:00
Richard Osborne 815de536e5 Optimize printf -> iprintf if there are no floating point arguments
and iprintf is available on the target. Currently iprintf is only
marked as being available on the XCore.

llvm-svn: 126935
2011-03-03 13:17:51 +00:00
Cameron Zwarich 86ade9510f Remove some more unused code that I missed.
llvm-svn: 126826
2011-03-02 03:48:29 +00:00
Cameron Zwarich 5dd2aa2615 Eliminate the unused CodeGenPrepare option to split critical edges.
llvm-svn: 126825
2011-03-02 03:31:46 +00:00
Cameron Zwarich b7f8eaafa3 Stop computing the number of uses twice per value in CodeGenPrepare's sinking of
addressing code. On 403.gcc this almost halves CodeGenPrepare time and reduces
total llc time by 9.5%. Unfortunately, getNumUses() is still the hottest function
in llc.

llvm-svn: 126782
2011-03-01 21:13:53 +00:00
Anders Carlsson da80afef99 Make InstCombiner::FoldAndOfICmps create a ConstantRange that's the
intersection of the LHS and RHS ConstantRanges and return "false" when
the range is empty.

This simplifies some code and catches some extra cases.

llvm-svn: 126744
2011-03-01 15:05:01 +00:00
Eli Friedman 683bbc16c4 Add an obvious missing safety check to DAE::RemoveDeadArgumentsFromCallers.
llvm-svn: 126720
2011-03-01 00:33:47 +00:00
Ted Kremenek 20164dcc68 Unbreak CMake build.
llvm-svn: 126715
2011-02-28 23:56:33 +00:00
Chris Lattner 1ac5e0c5c6 update cmake
llvm-svn: 126694
2011-02-28 22:45:25 +00:00
Dan Gohman 06d70015ce Delete the GEPSplitter experiment.
llvm-svn: 126671
2011-02-28 19:47:47 +00:00
Dan Gohman b8a25f49f3 Delete the SimplifyHalfPowrLibCalls pass, which was unused, and
only existed as the result of a misunderstanding.

llvm-svn: 126669
2011-02-28 19:41:14 +00:00
Frits van Bommel 8ae07996c9 Teach SimplifyCFG that (switch (select cond, X, Y)) is better expressed as a branch.
Based on a patch by Alistair Lynn.

llvm-svn: 126647
2011-02-28 09:44:07 +00:00
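As an illustrative sketch of the SimplifyCFG change above (labels and names invented): when both select arms are constants that appear as case values, the switch on the select can be rewritten as a conditional branch straight to the case destinations, e.g.
  %v = select i1 %cond, i32 1, i32 4
  switch i32 %v, label %unreach [ i32 1, label %bb.then
                                  i32 4, label %bb.else ]
becomes
  br i1 %cond, label %bb.then, label %bb.else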
Nick Lewycky 66f4f22f7b srem doesn't actually have the same resulting sign as its numerator; the
result can also be zero, for example when the numerator equals the denominator.
Reverts parts of r126635 and r126637.

llvm-svn: 126644
2011-02-28 09:17:39 +00:00
Nick Lewycky 174a705497 Teach InstCombine to fold "(shr exact X, Y) == 0" --> X == 0, fixing #1 from
PR9343.

llvm-svn: 126643
2011-02-28 08:31:40 +00:00
Nick Lewycky 6b445419b0 The sign of an srem instruction is the sign of its dividend (the first
argument), regardless of the divisor. Teach instcombine about this and fix
test7 in PR9343!

llvm-svn: 126635
2011-02-28 06:20:05 +00:00
Benjamin Kramer ceb5daa567 Revert "SimplifyCFG: GEPs with just one non-constant index are also cheap."
Yes, there are types other than i8*, and GEPs on them can produce an add+multiply.
We don't consider that cheap enough to be speculatively executed.

llvm-svn: 126481
2011-02-25 10:33:33 +00:00
Benjamin Kramer dfdca1a14d SimplifyCFG: GEPs with just one non-constant index are also cheap.
llvm-svn: 126452
2011-02-24 23:26:09 +00:00
Benjamin Kramer 27361a7124 SimplifyCFG: GEPs with constant indices are cheap enough to be executed unconditionally.
llvm-svn: 126445
2011-02-24 22:46:11 +00:00
Devang Patel cedf928743 Do not use DIFactory. Use DIBuilder.
llvm-svn: 126398
2011-02-24 18:49:55 +00:00
Chris Lattner eddb33ebd0 wire TargetLibraryInfo into simplify libcalls and use it in a couple of
trivial places.  This pass needs a lot of work.

llvm-svn: 126367
2011-02-24 07:16:14 +00:00
Chris Lattner 2e56e20662 move a massive amount of code out into its own helper function
to reduce nesting.  This needs to be turned into a table.

llvm-svn: 126366
2011-02-24 07:12:12 +00:00
Chris Lattner adf38b3e09 change instcombine to not turn a call through a bitcast of a non-varargs
function prototype into a call to a varargs prototype.  We do
allow the xform if we have a definition, but otherwise we don't
want to risk that we're changing the ABI in a subtle way.  On
X86-64, for example, varargs require passing stuff in %al.

llvm-svn: 126363
2011-02-24 05:10:56 +00:00
Cameron Zwarich 826308586c Make LoopDeletion work on loops with multiple edges, as long as the incoming
values from all of the loop's exiting blocks are equal. Patch by Andrew Clinton.

llvm-svn: 126253
2011-02-22 22:25:39 +00:00
Duncan Sands ecbbf0825b If the phi node was used by an unreachable instruction that ends up using
itself without going via a phi node then we could return false here in
spite of making a change.  Also, tweak the comment because this method
can (and always could) return true without deleting the original phi node.
For example, if the phi node was used by a read-only invoke instruction
which is used by another phi node phi2 which is only used by and only uses
the invoke, then phi2 would be deleted but not the invoke instruction and
not the original phi node.

llvm-svn: 126129
2011-02-21 17:32:05 +00:00
Chris Lattner 2333ac279f fix a crasher in disabled code (on variable stride loops)
llvm-svn: 126125
2011-02-21 17:02:55 +00:00
Duncan Sands 6dcd49bc2b Simplify RecursivelyDeleteDeadPHINode. The only functionality change
should be that if the phi is used by a side-effect free instruction with
no uses then the phi and the instruction now get zapped (checked by the
unittest).

llvm-svn: 126124
2011-02-21 16:27:36 +00:00
Chris Lattner bc661d6686 Add some (disabled code) to print out negative strides.
llvm-svn: 126102
2011-02-21 02:08:54 +00:00
Nick Lewycky 183c24c51b Make RecursivelyDeleteDeadPHINode delete a phi node that has no users and add a
test for that. With this change, test/CodeGen/X86/codegen-dce.ll no longer finds
any instructions to DCE, so delete the test.

Also renamed J and JP to I and IP in RecursivelyDeleteDeadPHINode.

llvm-svn: 126088
2011-02-20 18:05:56 +00:00
Benjamin Kramer 5b7a4e0195 Move "A | ~(A & ?) -> -1" from InstCombine to InstructionSimplify.
llvm-svn: 126082
2011-02-20 15:20:01 +00:00
Benjamin Kramer d5d7f37beb InstCombine: Add a bunch of combines of the form x | (y ^ z).
We usually catch this kind of optimization through InstSimplify's distributive
magic, but or doesn't distribute over xor in general.

"A | ~(A | B) -> A | ~B" hits 24 times on gcc.c.

llvm-svn: 126081
2011-02-20 13:23:43 +00:00
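For illustration only (names invented), the quoted fold in IR form:
  %or  = or i32 %a, %b
  %not = xor i32 %or, -1        ; ~(A | B)
  %res = or i32 %a, %not        ; A | ~(A | B)
simplifies to
  %notb = xor i32 %b, -1        ; ~B
  %res  = or i32 %a, %notb      ; A | ~B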
Nick Lewycky c8a1569950 Teach RecursivelyDeleteDeadPHINodes to handle multiple self-references. Patch
by Andrew Clinton!

llvm-svn: 126077
2011-02-20 08:38:20 +00:00
Nick Lewycky 080ea93779 Instead of keeping two Value*->id# mappings, keep one Value->Value mapping and
one Value set. This is faster because we only need to use the set when there
isn't already an entry in the map. No functionality change!

llvm-svn: 126076
2011-02-20 08:11:03 +00:00
Eli Friedman ef200db4fd PR9218: SimplifyDemandedVectorElts can return a non-null value that is not
the instruction passed in.  Make sure to account for this correctly, instead
of looping infinitely.

llvm-svn: 126058
2011-02-19 22:42:40 +00:00
Chris Lattner 72a35fb974 rewrite the memset_pattern pattern generation stuff to accept any 2/4/8/16-byte
constant, including globals.  This makes us generate much more "pretty" pattern
globals as well because it doesn't break it down to an array of bytes all the
time.

This enables us to handle stores of relocatable globals.  This kicks in about
48 times in 254.gap, giving us stuff like this:

@.memset_pattern40 = internal constant [2 x %struct.TypHeader* (%struct.TypHeader*, %struct.TypHeader*)*] [%struct.TypHeader* (%struct.TypHeader*, %struct.TypHeader*)* @IsFalse, %struct.TypHeader* (%struct.TypHeader*, %struct.TypHeader*)* @IsFalse], align 16

...
  call void @memset_pattern16(i8* %scevgep5859, i8* bitcast ([2 x %struct.TypHeader* (%struct.TypHeader*, %struct.TypHeader*)*]* @.memset_pattern40 to i8*), i64 %tmp75) nounwind

llvm-svn: 126044
2011-02-19 19:56:44 +00:00
Chris Lattner 0f4a64011e Implement rdar://9009151, transforming strided loop stores of
unsplatable values into memset_pattern16 when it is available
(recent darwins).  This transforms lots of strided loop stores
of ints for example, like 5 in vpr:

  Formed memset:   call void @memset_pattern16(i8* %4, i8* getelementptr inbounds ([16 x i8]* @.memset_pattern9, i32 0, i32 0), i64 %tmp25)
    from store to: {%3,+,4}<%11> at:   store i32 3, i32* %scevgep, align 4, !tbaa !4

llvm-svn: 126040
2011-02-19 19:31:39 +00:00
Chris Lattner e6b261fec5 Make loop-idiom use TargetLibraryInfo to determine whether it is allowed
to hack on memset, memcpy etc.

llvm-svn: 125974
2011-02-18 22:22:15 +00:00
Oscar Fuentes 5ed962656c Move library stuff out of the toplevel CMakeLists.txt file.
llvm-svn: 125968
2011-02-18 22:06:14 +00:00
Duncan Sands 84653b3674 Add some transforms of the kind X-Y>X -> 0>Y which are valid when there is no
overflow.  These subsume some existing equality transforms, so zap those.

llvm-svn: 125843
2011-02-18 16:25:37 +00:00
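A minimal sketch of one such transform (names invented), assuming the subtraction is marked nsw so it cannot overflow:
  %diff = sub nsw i32 %x, %y
  %cmp  = icmp sgt i32 %diff, %x     ; X-Y > X
simplifies to
  %cmp = icmp slt i32 %y, 0          ; 0 > Y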
Chris Lattner 1a924e770a prevent jump threading from merging blocks when their address is
taken (and used!).  This prevents merging the blocks (invalidating
the block addresses) in a case like this:

#define _THIS_IP_  ({ __label__ __here; __here: (unsigned long)&&__here; })

void foo() {
  printf("%p\n", _THIS_IP_);
  printf("%p\n", _THIS_IP_);
  printf("%p\n", _THIS_IP_);
}

which fixes PR4151.

llvm-svn: 125829
2011-02-18 04:43:06 +00:00
Chris Lattner 4a14fbc50c Don't unroll loops whose header block's address is taken.
This is part of a futile attempt to not "break" bizarro
code like this:

 l1:
  printf("l1: %p\n", &&l1);
  ++x;

  if( x < 3 ) goto l1;


Previously we'd fold &&l1 to 1, which is fine per our semantics
but not helpful to the user.

llvm-svn: 125827
2011-02-18 04:25:21 +00:00
Chris Lattner a8fed47eed have instcombine preserve nsw/nuw/exact when sinking
common operations through a phi. 

llvm-svn: 125790
2011-02-17 23:01:49 +00:00
Chris Lattner 75ae5a45ff fix typo
llvm-svn: 125787
2011-02-17 22:32:54 +00:00
Chris Lattner abb8eb2c63 fix instcombine merging GEPs through a PHI to only make the
result inbounds if all of the inputs are inbounds.

llvm-svn: 125785
2011-02-17 22:21:26 +00:00
Chris Lattner d406764d52 add is always integer, thanks to Frits for noticing this.
llvm-svn: 125774
2011-02-17 20:55:29 +00:00
Duncan Sands e522001171 Transform "A + B >= A + C" into "B >= C" if the adds do not wrap. Likewise for some
variations (some of these were already present so I unified the code).  Spotted by my
auto-simplifier as occurring a lot.

llvm-svn: 125734
2011-02-17 07:46:37 +00:00
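An illustrative sketch (names invented), assuming both adds carry the nsw flag so the signed comparison is safe to strip:
  %lhs = add nsw i32 %a, %b
  %rhs = add nsw i32 %a, %c
  %cmp = icmp sge i32 %lhs, %rhs     ; A + B >= A + C
simplifies to
  %cmp = icmp sge i32 %b, %c         ; B >= C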
Chris Lattner 5592071768 preserve NUW/NSW when transforming add x,x
llvm-svn: 125711
2011-02-17 02:23:02 +00:00
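As a hypothetical example of what is now preserved (names invented), the usual canonicalization of add x,x into a shift keeps the wrap flags:
  %r = add nsw i32 %x, %x
becomes
  %r = shl nsw i32 %x, 1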
Chris Lattner 3eb0af94c4 fix PR9215, preventing -reassociate from clearing nsw/nuw when
it swaps the LHS/RHS of a single binop.

llvm-svn: 125700
2011-02-17 01:29:24 +00:00
Duncan Sands 75b5d27b84 Spelling fix: consequtive -> consecutive.
llvm-svn: 125563
2011-02-15 09:23:02 +00:00
Nadav Rotem 67d67a0385 Fix PR9216 - Endless loop in InstCombine pass.
The pattern "A&(A^B) -> A & ~B" recreated itself because ~B is
actually a xor -1.

llvm-svn: 125557
2011-02-15 07:13:48 +00:00
Devang Patel 8d53ac81ec Do not forget DebugLoc!
llvm-svn: 125547
2011-02-15 02:02:30 +00:00
Chris Lattner 9f0ac0dd8b tidy up a bit.
llvm-svn: 125546
2011-02-15 01:56:08 +00:00
Chris Lattner 69229316aa convert ConstantVector::get to use ArrayRef.
llvm-svn: 125537
2011-02-15 00:14:00 +00:00
Devang Patel 3058398655 Do not hoist @llvm.dbg.value. Here, @llvm.dbg.value is "referring" to a value that is modified inside the loop.
llvm-svn: 125529
2011-02-14 23:03:23 +00:00
Chris Lattner 34442e6ebf revert my ConstantVector patch, it seems to have made the llvm-gcc
builders unhappy.

llvm-svn: 125504
2011-02-14 18:15:46 +00:00
Chris Lattner d9f5b88548 Switch ConstantVector::get to use ArrayRef instead of a pointer+size
idiom.  Change various clients to simplify their code.

llvm-svn: 125487
2011-02-14 07:55:32 +00:00
Chris Lattner 9bd7fdff58 remove a now-unnecessary cast.
llvm-svn: 125464
2011-02-13 18:30:09 +00:00
Chris Lattner 43273affb9 implement instcombine folding for things like (x >> c) < 42.
We were previously simplifying divisions, but not right shifts!

llvm-svn: 125454
2011-02-13 08:07:21 +00:00
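A hypothetical instance of the new fold (names invented): an unsigned comparison of a logical right shift against a constant can be rewritten as a single compare against the scaled constant,
  %s = lshr i32 %x, 2
  %c = icmp ult i32 %s, 42
simplifies to
  %c = icmp ult i32 %x, 168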
Chris Lattner d369f575d7 refactor some code out into a helper method.
llvm-svn: 125451
2011-02-13 07:43:07 +00:00
Daniel Dunbar 210ce0feb5 SimplifyLibCalls: Add missing legalize check on various printf to puts and
putchar transforms; their return values are not compatible.

llvm-svn: 125442
2011-02-12 18:19:57 +00:00
Benjamin Kramer 1800d823de Also fold (A+B) == A -> B == 0 when the add is commuted.
llvm-svn: 125411
2011-02-11 21:46:48 +00:00
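A sketch of the commuted form (names invented); no wrap flags are needed because addition is invertible under an equality comparison:
  %sum = add i32 %b, %a
  %cmp = icmp eq i32 %sum, %a
folds to
  %cmp = icmp eq i32 %b, 0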
Chris Lattner d3c0e05f51 When lowering an inbounds gep, the intermediate adds can have
unsigned overflow (e.g. due to a negative array index), but
the scales on array size multiplications are known to not
sign wrap.

llvm-svn: 125409
2011-02-11 21:37:43 +00:00
Cameron Zwarich 99de19b3cb Make LoopUnswitch preserve ScalarEvolution by just forgetting everything about
a loop when unswitching it. It only does this in the complex case, because
everything should be fine already in the simple case.

llvm-svn: 125369
2011-02-11 06:08:28 +00:00
Cameron Zwarich 25cb63c791 LoopInstSimplify preserves ScalarEvolution.
llvm-svn: 125368
2011-02-11 06:08:25 +00:00
Cameron Zwarich 97dae4d361 If we can't avoid running loop-simplify twice for now, at least avoid running
iv-users twice.

llvm-svn: 125318
2011-02-10 23:53:14 +00:00
Cameron Zwarich d8e66038f4 Rename 'loopsimplify' to 'loop-simplify'.
llvm-svn: 125317
2011-02-10 23:38:10 +00:00
Chris Lattner d86ded17ad implement the first part of PR8882: when lowering an inbounds
gep to explicit addressing, we know that none of the intermediate
computation overflows.

This could use review: it seems that the shifts certainly wouldn't
overflow, but could the intermediate adds overflow if there is a 
negative index?

Previously the testcase would instcombine to:

define i1 @test(i64 %i) {
  %p1.idx.mask = and i64 %i, 4611686018427387903
  %cmp = icmp eq i64 %p1.idx.mask, 1000
  ret i1 %cmp
}

now we get:

define i1 @test(i64 %i) {
  %cmp = icmp eq i64 %i, 1000
  ret i1 %cmp
}

llvm-svn: 125271
2011-02-10 07:11:16 +00:00
Chris Lattner 6b657aed33 Enhance a bunch of transformations in instcombine to start generating
exact/nsw/nuw shifts and have instcombine infer them when it can prove
that the relevant properties are true for a given shift without them.

Also, a variety of refactoring to use the new patternmatch logic thrown
in for good luck.  I believe that this takes care of a bunch of related
code quality issues attached to PR8862.

llvm-svn: 125267
2011-02-10 05:36:31 +00:00
Chris Lattner 98457101fc Enhance the "compare with shift" and "compare with div"
optimizations to be much more aggressive in the face of
exact/nsw/nuw div and shifts.  For example, these (which
are the same except the first is 'exact' sdiv):

define i1 @sdiv_icmp4_exact(i64 %X) nounwind {
  %A = sdiv exact i64 %X, -5   ; X/-5 == 0 --> x == 0
  %B = icmp eq i64 %A, 0
  ret i1 %B
}

define i1 @sdiv_icmp4(i64 %X) nounwind {
  %A = sdiv i64 %X, -5   ; X/-5 == 0 --> x == 0
  %B = icmp eq i64 %A, 0
  ret i1 %B
}

compile down to:

define i1 @sdiv_icmp4_exact(i64 %X) nounwind {
  %1 = icmp eq i64 %X, 0
  ret i1 %1
}

define i1 @sdiv_icmp4(i64 %X) nounwind {
  %X.off = add i64 %X, 4
  %1 = icmp ult i64 %X.off, 9
  ret i1 %1
}

This happens when you do something like:
  (ptr1-ptr2) == 42

where the pointers are pointers to non-unit types.

llvm-svn: 125266
2011-02-10 05:23:05 +00:00
Chris Lattner dcef03fba2 more cleanups, notably bitcast isn't used for "signed to unsigned type
conversions". :)

llvm-svn: 125265
2011-02-10 05:17:27 +00:00
Chris Lattner 7d0e43ff8b A bunch of cleanups and simplifications using the new PatternMatch predicates
and generally tidying things up.  Only very trivial functionality changes
like now doing (-1 - A) -> (~A) for vectors too.

 InstCombineAddSub.cpp |  296 +++++++++++++++++++++-----------------------------
 1 file changed, 126 insertions(+), 170 deletions(-)

llvm-svn: 125264
2011-02-10 05:14:58 +00:00
Chris Lattner 768003c59e teach SimplifyDemandedBits that exact shifts demand the bits they
are shifting out since they do require them to be zeros.  Similarly
for NUW/NSW bits of shl

llvm-svn: 125263
2011-02-10 05:09:34 +00:00
Eric Christopher da6bd45088 Revert this in an attempt to bring the builders back.
llvm-svn: 125257
2011-02-10 01:48:24 +00:00
Cameron Zwarich 58c8670ab2 Turn this pass ordering:
Natural Loop Information
 Loop Pass Manager
   Canonicalize natural loops
 Scalar Evolution Analysis
 Loop Pass Manager
   Induction Variable Users
   Canonicalize natural loops
   Induction Variable Users
   Loop Strength Reduction

into this:

Scalar Evolution Analysis
Loop Pass Manager
  Canonicalize natural loops
  Induction Variable Users
  Loop Strength Reduction

This fixes <rdar://problem/8869639>. I also filed PR9184 on doing this sort of
thing automatically, but it seems easier to just change the ordering of the
passes if this is the only case.

llvm-svn: 125254
2011-02-10 01:07:54 +00:00
Chris Lattner 9e4aa0259f Teach instsimplify some tricks about exact/nuw/nsw shifts.
improve interfaces to instsimplify to take this info.

llvm-svn: 125196
2011-02-09 17:15:04 +00:00
Chris Lattner b940091388 Rework InstrTypes.h so to reduce the repetition around the NSW/NUW/Exact
versions of creation functions.  Eventually, the "insertion point" versions
of these should just be removed, we do have IRBuilder afterall.

Do a massive rewrite of much of pattern match.  It is now shorter and less
redundant and has several other widgets I will be using in other patches.
Among other changes, m_Div is renamed to m_IDiv (since it only matches 
integer divides) and m_Shift is gone (it used to match all binops!!) and
we now have m_LogicalShift for the one client to use.

Enhance IRBuilder to have "isExact" arguments to things like CreateUDiv
and reduce redundancy within IRbuilder by having these methods chain to
each other more instead of duplicating code.

llvm-svn: 125194
2011-02-09 17:00:45 +00:00
Nick Lewycky 292e78c3cd When removing a function from the function set and adding it to deferred, we
could end up removing a different function than we intended because it was
functionally equivalent, then end up with a comparison of a function against
itself in the next round of comparisons (the one in the function set and the
one on the deferred list). To fix this, I introduce a choice in the form of
comparison for ComparableFunctions, either normal or "pointer only" used to
find exact Function*'s in lookups.

Also add some debugging statements.

llvm-svn: 125180
2011-02-09 06:32:02 +00:00
Dan Gohman de7f699754 Don't split any loop backedges, including backedges of loops other than
the active loop. This is generally desirable, and it avoids trouble
in situations such as the testcase in PR9123, though the failure
mode depends on use-list order, so it is infeasible to test.

llvm-svn: 125065
2011-02-08 00:55:13 +00:00
Benjamin Kramer 8d6a8c130b SimplifyCFG: Track the number of used icmps when turning an icmp chain into a switch. If we used only one icmp, don't turn it into a switch.
Also prevent the switch-to-icmp transform from creating identity adds, noticed by Marius Wachtler.

llvm-svn: 125056
2011-02-07 22:37:28 +00:00
Chris Lattner 35315d065b enhance vmcore to know that udiv's can be exact, and add a trivial
instcombine xform to exercise this.

Nothing forms exact udivs yet though.  This is progress on PR8862

llvm-svn: 124992
2011-02-06 21:44:57 +00:00
Nick Lewycky cb1a4c26ee Simplify away redundant test, and document what's going on.
llvm-svn: 124977
2011-02-06 05:04:00 +00:00
Nick Lewycky f8797fda44 Remove specialized comparison of InlineAsm objects. They're uniqued on creation
now, and this wasn't comparing some of their relevant bits anyhow.

llvm-svn: 124976
2011-02-06 04:33:50 +00:00
Benjamin Kramer 62aa46b852 SimplifyCFG: Also transform switches that represent a range comparison but are not sorted into sub+icmp.
This transforms another 1000 switches in gcc.c.

llvm-svn: 124826
2011-02-03 22:51:41 +00:00
Benjamin Kramer f4ea1d5f79 SimplifyCFG: Turn switches into sub+icmp+branch if possible.
This makes the job of the later optzn passes easier, allowing the vast amount of
icmp transforms to chew on it.

We transform 840 switches in gcc.c, leading to a 16k byte shrink of the resulting
binary on i386-linux.

The testcase from README.txt now compiles into
  decl  %edi
  cmpl  $3, %edi
  sbbl  %eax, %eax
  andl  $1, %eax
  ret

llvm-svn: 124724
2011-02-02 15:56:22 +00:00
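Roughly the shape of the transform above, as a hypothetical sketch (names invented): a switch whose cases form a contiguous range with a common destination,
  switch i32 %x, label %other [ i32 1, label %dest
                                i32 2, label %dest
                                i32 3, label %dest ]
becomes a subtract, an unsigned range check and a branch:
  %off = add i32 %x, -1
  %in  = icmp ult i32 %off, 3
  br i1 %in, label %dest, label %other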
Nick Lewycky a46c898314 Remove wasteful caching. This isn't needed for correctness because any function
that might have been affected by a merge elsewhere will have been
removed from the function set, and it isn't needed for performance because we
call grow() ahead of time to prevent reallocations.

llvm-svn: 124717
2011-02-02 05:31:01 +00:00
Dan Gohman c6f0bda839 Conservatively, clear optional flags, such as nsw, when performing
reassociation. No testcase, because I wasn't able to create a testcase
which actually demonstrates a problem.

llvm-svn: 124713
2011-02-02 02:05:46 +00:00
Dan Gohman 08d2c98c23 Fix reassociate to clear optional flags, such as nsw.
llvm-svn: 124712
2011-02-02 02:02:34 +00:00
Anders Carlsson f23a6da271 Recognize and simplify
(A+B) == A  ->  B == 0
A == (A+B)  ->  B == 0

llvm-svn: 124567
2011-01-30 22:01:13 +00:00
Francois Pichet 326e4a2966 Unbreak the MSVC build.
The DEBUG() call at line 606 demands to see raw_ostream's definition. I have no idea why this seems to only break MSVC.

llvm-svn: 124545
2011-01-29 20:06:16 +00:00
Frits van Bommel 2a55951d08 Call SimplifyFDivInst() in InstCombiner::visitFDiv().
llvm-svn: 124535
2011-01-29 17:50:27 +00:00
Frits van Bommel c2549661af Move InstCombine's knowledge of fdiv to SimplifyInstruction().
llvm-svn: 124534
2011-01-29 15:26:31 +00:00
Evan Cheng 73c29178ac Add a test for TCE return duplication.
llvm-svn: 124527
2011-01-29 04:53:35 +00:00
Evan Cheng d983eba7dc Re-apply r124518 with fix. Watch out for invalidated iterator.
llvm-svn: 124526
2011-01-29 04:46:23 +00:00
Evan Cheng 65b8ccf6ac Revert r124518. It broke Linux self-host.
llvm-svn: 124522
2011-01-29 02:43:04 +00:00
Evan Cheng d4eff31476 Re-commit r124462 with fixes. Tail recursion elim will now dup ret into unconditional predecessor to enable TCE on demand.
llvm-svn: 124518
2011-01-29 01:29:26 +00:00
Andrew Trick 24f5ff0f23 Implementation of path profiling.
Modified patch by Adam Preuss.

This builds on the existing framework for block tracing, edge profiling and optimal edge profiling.
See -help-hidden for new flags.
For documentation, see the technical report "Implementation of Path Profiling..." in llvm.org/pubs.

llvm-svn: 124515
2011-01-29 01:09:53 +00:00
Duncan Sands 771e82a863 My auto-simplifier noticed that ((X/Y)*Y)/Y occurs several times in SPEC
benchmarks, and that it can be simplified to X/Y.  (In general you can only
simplify (Z*Y)/Y to Z if the multiplication did not overflow; if Z has the
form "X/Y" then this is the case).  This patch implements that transform and
moves some Div logic out of instcombine and into InstructionSimplify.
Unfortunately instcombine gets in the way somewhat, since it likes to change
(X/Y)*Y into X-(X rem Y), so I had to teach instcombine about this too.
Finally, thanks to the NSW/NUW flags, sometimes we know directly that "Z*Y"
does not overflow, because the flag says so, so I added that logic too.  This
eliminates a bunch of divisions and subtractions in 447.dealII, and has good
effects on some other benchmarks too.  It seems to have quite an effect on
tramp3d-v4 but it's hard to say if it's good or bad because inlining decisions
changed, resulting in massive changes all over.

llvm-svn: 124487
2011-01-28 16:51:11 +00:00
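A small sketch of the pattern described above (names invented); the outer division folds away because the first operand of the multiply is itself X/Y, so the multiply cannot overflow:
  %q = sdiv i32 %x, %y
  %m = mul i32 %q, %y
  %r = sdiv i32 %m, %y       ; simplifies to %q, i.e. X/Y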
Nick Lewycky cfb284cf96 Rename functions to follow coding standard. Also rejiggers comments. No
functionality change.

llvm-svn: 124482
2011-01-28 08:43:14 +00:00
Nick Lewycky aaf401241a Add a doxygen comment for this class.
llvm-svn: 124480
2011-01-28 08:19:00 +00:00
Nick Lewycky 564fcca856 Reorder for readability. (Chris, is this what you meant?)
llvm-svn: 124479
2011-01-28 07:36:21 +00:00
Evan Cheng aaa9606b2f Revert r124462. There are a few big regressions that I need to fix first.
llvm-svn: 124478
2011-01-28 07:12:38 +00:00
Nick Lewycky c5eb3733f7 Reduce the number of functions we look at in the first pass, and preallocate
the function equality set.

llvm-svn: 124475
2011-01-28 05:48:15 +00:00
Nick Lewycky b074e32641 Fold select + select where both selects are on the same condition.
llvm-svn: 124469
2011-01-28 03:28:10 +00:00
Evan Cheng 417fca86c4 - Stop simplifycfg from duplicating "ret" instructions into unconditional
branches. PR8575, rdar://5134905, rdar://8911460.
- Allow codegen tail duplication to dup small return blocks after register
  allocation is done.

llvm-svn: 124462
2011-01-28 02:19:21 +00:00
Benjamin Kramer 57e3d65884 Unbreak the build.
llvm-svn: 124426
2011-01-27 20:30:54 +00:00
Nick Lewycky e2d46d30ae Expound upon this comparison!
llvm-svn: 124406
2011-01-27 19:51:31 +00:00
Nick Lewycky 5a37e950e1 Use dyn_cast instead of isa+cast.
llvm-svn: 124404
2011-01-27 19:42:43 +00:00
Nick Lewycky 13e04aef2a Fix surprising missed optimization in mergefunc where we forgot to consider
that "i8* null" is equivalent to "i32* null".

llvm-svn: 124368
2011-01-27 08:38:19 +00:00
Duncan Sands 69bdb585b2 Fix PR9039, a use-after-free in reassociate. The issue was that the
operand being factorized (and erased) could occur several times in Ops,
resulting in freed memory being used when the next occurrence in Ops was
analyzed.

llvm-svn: 124287
2011-01-26 10:08:38 +00:00
Nick Lewycky 91543447a6 AttrListPtr has an overloaded operator== which does this for us; we should use
it. No functionality change!

llvm-svn: 124286
2011-01-26 09:23:19 +00:00
Nick Lewycky 82d4db8662 Teach mergefunc that intptr_t is the same width as a pointer. We still can't
merge vector<intptr_t>::push_back() and vector<void*>::push_back() because
Enumerate() doesn't realize that "i64* null" and "i8** null" are equivalent.

llvm-svn: 124285
2011-01-26 09:13:58 +00:00
Nick Lewycky fb622f9920 There are no vectors of pointers or arrays, so we don't need to check vector
elements for type equivalence.

llvm-svn: 124284
2011-01-26 08:50:18 +00:00
Nick Lewycky f1cec164ce Teach mergefunc how to emit aliases safely again -- but keep it turned off
for now. It's controlled by the HasGlobalAliases variable which is not attached
to any flag yet.

llvm-svn: 124182
2011-01-25 08:56:50 +00:00
Dan Gohman 0f124e1987 Give GetUnderlyingObject a TargetData, to keep it in sync
with BasicAA's DecomposeGEPExpression, which recently began
using a TargetData. This fixes PR8968, though the testcase
is awkward to reduce.

Also, update several of GetUnderlyingObject's users
which happen to have a TargetData handy to pass it in.

llvm-svn: 124134
2011-01-24 18:53:32 +00:00
Chris Lattner b4017769ae fix PR9017, a bug where we'd assert when promoting in unreachable
code.

llvm-svn: 124100
2011-01-24 03:29:07 +00:00
Chris Lattner 23289c385a fix PR9015, a crash linking recursive metadata.
llvm-svn: 124099
2011-01-24 03:18:24 +00:00
Chris Lattner d83e7b0ff6 enhance SRoA to promote allocas that are used by PHI nodes. This often
occurs because instcombine sinks loads and inserts phis.  This kicks in 
on such apps as 175.vpr, eon, 403.gcc, xalancbmk and a bunch of times in
spec2006 in some app that uses std::deque.

This resolves the last of rdar://7339113.

llvm-svn: 124090
2011-01-24 01:07:11 +00:00
Chris Lattner a960725d18 Enhance SRoA to promote allocas that are used by selects in some
common cases.  This triggers a surprising number of times in SPEC2K6
because min/max idioms end up doing this.  For example, code from the
STL ends up looking like this to SRoA:

  %202 = load i64* %__old_size, align 8, !tbaa !3
  %203 = load i64* %__old_size, align 8, !tbaa !3
  %204 = load i64* %__n, align 8, !tbaa !3
  %205 = icmp ult i64 %203, %204
  %storemerge.i = select i1 %205, i64* %__n, i64* %__old_size
  %206 = load i64* %storemerge.i, align 8, !tbaa !3

We can now promote both the __n and the __old_size allocas.

This addresses another chunk of rdar://7339113, poor codegen on
stringswitch.

llvm-svn: 124088
2011-01-23 22:04:55 +00:00
Ted Kremenek 3c4408ceb6 Null initialize a few variables flagged by
clang's -Wuninitialized-experimental warning.
While these don't look like real bugs, clang's
-Wuninitialized-experimental analysis is stricter
than GCC's, and these fixes have the benefit
of being general nice cleanups.

llvm-svn: 124073
2011-01-23 17:05:06 +00:00
Chris Lattner 9491dee24e Enhance SRoA to be more aggressive about scalarization of aggregate allocas
that have PHI or select uses of their element pointers.  This can often happen
when instcombine sinks two loads into a successor, inserting a phi or select.

With this patch, we can scalarize the alloca, but the pinned elements are not
yet promoted.  This is still a win for large aggregates where only one element
is used.  This fixes rdar://8904039 and part of rdar://7339113 (poor codegen
on stringswitch).

llvm-svn: 124070
2011-01-23 08:27:54 +00:00
Cameron Zwarich 07d6fe34b3 Convert two std::vectors to SmallVectors for a 3.4% speedup running -scalarrepl
on test-suite + SPEC2000 & SPEC2006.

llvm-svn: 124068
2011-01-23 08:03:04 +00:00
Chris Lattner 8acbb79506 have AllocaInfo store the alloca being inspected, simplifying callers.
No functionality change.

llvm-svn: 124067
2011-01-23 07:29:29 +00:00
Chris Lattner 3e56c29068 Rearrange some code a bit. Change MarkUnsafe to
handle the "Transformation preventing inst" printing, 
so that -scalarrepl -debug will always print the rejected
instruction.  No functionality change.

llvm-svn: 124066
2011-01-23 07:05:44 +00:00
Chris Lattner a587ab7b94 remove an old hack that avoided creating MMX datatypes. The
X86 backend has been fixed.

llvm-svn: 124064
2011-01-23 06:40:33 +00:00
Dan Gohman 19e30d5a7d Actually check memcpy lengths, instead of just commenting about
how they should be checked.

llvm-svn: 123999
2011-01-21 22:07:57 +00:00
Owen Anderson a834200dbe Just because we have determined that an (fcmp | fcmp) is true for A < B,
A == B, and A > B, does not mean we can fold it to true.  We still need to
check for A ? B (A unordered B).

llvm-svn: 123993
2011-01-21 19:39:42 +00:00
Nick Lewycky ae0275e018 SCCP doesn't actually preserve the CFG. It will delete and insert terminator
instructions.

llvm-svn: 123973
2011-01-21 08:38:09 +00:00
Chris Lattner b5e15d1907 fix PR9013, an infinite loop in instcombine.
llvm-svn: 123968
2011-01-21 05:29:50 +00:00
Chris Lattner f4ca47bda8 update obsolete comment.
llvm-svn: 123965
2011-01-21 05:08:26 +00:00
Nick Lewycky 6a083cf820 Don't try to pull vector bitcasts that change the number of elements through
a select. A vector select is pairwise on each element so we'd need a new
condition with the right number of elements to select on. Fixes PR8994.

llvm-svn: 123963
2011-01-21 02:30:43 +00:00
Duncan Sands 8fb2c3827c At -O123 the early-cse pass is run before instcombine has run. According to my
auto-simplifier the transform most missed by early-cse is (zext X) != 0 -> X != 0.
This patch adds this transform and some related logic to InstructionSimplify
and removes some of the logic from instcombine (unfortunately not all because
there are several situations in which instcombine can improve things by making
new instructions, whereas instsimplify is not allowed to do this).  At -O2 this
often results in more than 15% more simplifications by early-cse, and results in
hundreds of lines of bitcode being eliminated from the testsuite.  I did see some
small negative effects in the testsuite, for example a few additional instructions
in three programs.  One program, 483.xalancbmk, got an additional 35 instructions,
which seems to be due to a function getting an additional instruction and then
being inlined all over the place.

llvm-svn: 123911
2011-01-20 13:21:55 +00:00
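A minimal sketch of the (zext X) != 0 transform mentioned above (names invented):
  %ext = zext i8 %x to i32
  %cmp = icmp ne i32 %ext, 0
simplifies to
  %cmp = icmp ne i8 %x, 0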
Rafael Espindola fc355bc070 Add unnamed_addr when we can show that address of a global is not used.
llvm-svn: 123834
2011-01-19 16:32:21 +00:00
Chris Lattner 86d56c651d fix rdar://8878965, a regression I introduced with the recent
llvm.objectsize changes.

llvm-svn: 123771
2011-01-18 20:53:04 +00:00
Cameron Zwarich fc210c79b7 Convert a std::map to a DenseMap for another 1.7% speedup on -scalarrepl.
llvm-svn: 123732
2011-01-18 04:50:38 +00:00
Cameron Zwarich 6968c41ac8 Make a std::vector a SmallVector<*, 32> like the other vectors in the same
function. This seems to be about a 1.5% speedup of -scalarrepl on test-suite
with SPEC2000 and SPEC2006.

llvm-svn: 123731
2011-01-18 04:41:32 +00:00
Rafael Espindola ecd5b9abe9 Reduce indentation and remove commented out code.
llvm-svn: 123729
2011-01-18 04:36:06 +00:00
Cameron Zwarich b703654edc Remove code for updating dominance frontiers and some outdated references to
dominance and post-dominance frontiers.

llvm-svn: 123725
2011-01-18 04:11:31 +00:00
Cameron Zwarich 4694e69540 Remove outdated references to dominance frontiers.
llvm-svn: 123724
2011-01-18 03:53:26 +00:00
Owen Anderson 459e079912 Remove dead code that I apparently wrote a while back. We seem to be doing well enough
without whatever this was trying to do.  When/if someone has the time to do some empirical
evaluations, it might be worth it to figure out what this code was trying to do and see if
it's worth resurrecting/fixing.

llvm-svn: 123684
2011-01-17 22:39:54 +00:00
Cameron Zwarich b410858a5f Roll r123609 back in with two changes that fix test failures with expensive
checks enabled:

1) Use '<' to compare integers in a comparison function rather than '<='.

2) Use the uniqued set DefBlocks rather than Info.DefiningBlocks to initialize
the priority queue.

The speedup of scalarrepl on test-suite + SPEC2000 + SPEC2006 is a bit less, at
just under 16% rather than 17%.

llvm-svn: 123662
2011-01-17 17:38:41 +00:00
Cameron Zwarich 67431d7943 Roll out r123609 due to failures on the llvm-x86_64-linux-checks bot.
llvm-svn: 123618
2011-01-17 07:26:51 +00:00
Cameron Zwarich 814cd9233e Eliminate the use of dominance frontiers in PromoteMemToReg. In addition to
eliminating a potentially quadratic data structure, this also gives a 17%
speedup when running -scalarrepl on test-suite + SPEC2000 + SPEC2006. My initial
experiment gave a greater speedup around 25%, but I moved the dominator tree
level computation from dominator tree construction to PromoteMemToReg.

Since this approach to computing IDFs has a much lower overhead than the old
code using precomputed DFs, it is worth looking at using this new code for the
second scalarrepl pass as well.

llvm-svn: 123609
2011-01-17 01:08:59 +00:00
Anders Carlsson d3db83349e Teach DAE to look for functions whose arguments are unused, and change all callers to pass in an undefvalue instead.
llvm-svn: 123596
2011-01-16 21:25:33 +00:00
Chris Lattner 7c9f4c9c2b tidy up a comment, as suggested by duncan
llvm-svn: 123590
2011-01-16 17:46:19 +00:00
Rafael Espindola 751677a040 Don't merge two constants if we care about the address of both.
This fixes the original testcase in PR8927. It also causes a clang
binary built with a patched clang to increase in size by 0.21%.

We can probably get some of the size back by writing a pass that
detects that a global never has its pointer compared and adds
unnamed_addr to it (maybe extend global opt). It is also possible that
there are some other cases clang could add unnamed_addr to.

I will investigate extending globalopt next.

llvm-svn: 123584
2011-01-16 17:05:09 +00:00
Chris Lattner e5f8de8639 fix PR8932, a case where arg promotion could infinitely promote.
llvm-svn: 123574
2011-01-16 08:09:24 +00:00
Chris Lattner ed1fb92cfe simplify a little
llvm-svn: 123573
2011-01-16 07:11:21 +00:00
Chris Lattner 6fab2e9418 if an alloca is only ever accessed as a unit, and is accessed with load/store instructions,
then don't try to decimate it into its individual pieces.  This will just make a mess of the
IR and is pointless if none of the elements are individually accessed.  This was generating
really terrible code for std::bitset (PR8980) because it happens to be lowered by clang
as an {[8 x i8]} structure instead of {i64}.

The testcase now is optimized to:

define i64 @test2(i64 %X) {
  br label %L2

L2:                                               ; preds = %0
  ret i64 %X
}

before we generated:

define i64 @test2(i64 %X) {
  %sroa.store.elt = lshr i64 %X, 56
  %1 = trunc i64 %sroa.store.elt to i8
  %sroa.store.elt8 = lshr i64 %X, 48
  %2 = trunc i64 %sroa.store.elt8 to i8
  %sroa.store.elt9 = lshr i64 %X, 40
  %3 = trunc i64 %sroa.store.elt9 to i8
  %sroa.store.elt10 = lshr i64 %X, 32
  %4 = trunc i64 %sroa.store.elt10 to i8
  %sroa.store.elt11 = lshr i64 %X, 24
  %5 = trunc i64 %sroa.store.elt11 to i8
  %sroa.store.elt12 = lshr i64 %X, 16
  %6 = trunc i64 %sroa.store.elt12 to i8
  %sroa.store.elt13 = lshr i64 %X, 8
  %7 = trunc i64 %sroa.store.elt13 to i8
  %8 = trunc i64 %X to i8
  br label %L2

L2:                                               ; preds = %0
  %9 = zext i8 %1 to i64
  %10 = shl i64 %9, 56
  %11 = zext i8 %2 to i64
  %12 = shl i64 %11, 48
  %13 = or i64 %12, %10
  %14 = zext i8 %3 to i64
  %15 = shl i64 %14, 40
  %16 = or i64 %15, %13
  %17 = zext i8 %4 to i64
  %18 = shl i64 %17, 32
  %19 = or i64 %18, %16
  %20 = zext i8 %5 to i64
  %21 = shl i64 %20, 24
  %22 = or i64 %21, %19
  %23 = zext i8 %6 to i64
  %24 = shl i64 %23, 16
  %25 = or i64 %24, %22
  %26 = zext i8 %7 to i64
  %27 = shl i64 %26, 8
  %28 = or i64 %27, %25
  %29 = zext i8 %8 to i64
  %30 = or i64 %29, %28
  ret i64 %30
}

In this case, instcombine was able to eliminate the nonsense, but in PR8980 enough
PHIs are in play that instcombine backs off.  It's better to not generate this stuff
in the first place.

llvm-svn: 123571
2011-01-16 06:18:28 +00:00
Chris Lattner 7cd8cf7d24 Use an irbuilder to get some trivial constant folding when doing a store
of a constant.

llvm-svn: 123570
2011-01-16 05:58:24 +00:00
Chris Lattner adb1a233b1 remove a dead check, this was needed before we had an explicit veto on uses of phis.
llvm-svn: 123569
2011-01-16 05:37:55 +00:00
Chris Lattner d55581ded8 enhance FoldOpIntoPhi in instcombine to try harder when a phi has
multiple uses.  In some cases, all the uses are the same operation,
so instcombine can go ahead and promote the phi.  In the testcase
this pushes an add out of the loop.

llvm-svn: 123568
2011-01-16 05:28:59 +00:00
Chris Lattner ea7131a062 remove the AllowAggressive argument to FoldOpIntoPhi. It is forced to false in the
first line of the function because it isn't a good idea, even for compares.

llvm-svn: 123566
2011-01-16 05:14:26 +00:00
Chris Lattner ff2e737714 more cleanups: use the IR builder.
llvm-svn: 123565
2011-01-16 05:08:00 +00:00
Chris Lattner 25ce280511 tidy up code.
llvm-svn: 123564
2011-01-16 04:37:29 +00:00
Owen Anderson 4e54efd625 Improve the safety of my globalopt enhancement by ensuring that the bitcast
of the stored value to the new store type is always.  Also, add a testcase.

llvm-svn: 123563
2011-01-16 04:33:33 +00:00
Chris Lattner 8b4952fcf7 simplify this code, it is still broken but will follow up on llvm-commits.
llvm-svn: 123558
2011-01-16 02:05:10 +00:00
Chris Lattner 1e209b87ad remove the partial specialization pass. It is unmaintained and has bugs.
llvm-svn: 123554
2011-01-16 00:27:10 +00:00
Nick Lewycky 4a1ff16b29 Add missing whitespace.
llvm-svn: 123543
2011-01-15 18:42:52 +00:00
Nick Lewycky 0296a481f9 Make constmerge a two-pass algorithm so that it won't miss merging
opportunities. Fixes PR8978.

llvm-svn: 123541
2011-01-15 18:14:21 +00:00
Benjamin Kramer ed5f2e504e Try to unbreak selfhost.
llvm-svn: 123537
2011-01-15 11:25:34 +00:00
Nick Lewycky 540f9536c8 Add a cache that protects mergefunc's internals from more surprises in DenseSet.
Also, replace tabs with spaces. Yes, it's 2011.

llvm-svn: 123535
2011-01-15 10:16:23 +00:00
Chris Lattner af26390790 temporarily revert r123526. While working on a follow-on patch I
realize that ConstantFoldTerminator doesn't preserve dominfo.

llvm-svn: 123527
2011-01-15 07:51:19 +00:00
Chris Lattner 8df83c4a24 fix rdar://8785296 - -fcatch-undefined-behavior generates inefficient code
The basic issue is that isel (very reasonably!) expects conditional branches
to be folded, so CGP leaving around a bunch of dead computation feeding
conditional branches isn't such a good idea.  Just fold branches on constants
into unconditional branches.

llvm-svn: 123526
2011-01-15 07:36:13 +00:00
Chris Lattner ee588defc6 simplify code, no functionality change.
llvm-svn: 123525
2011-01-15 07:29:01 +00:00
Chris Lattner 1b93be501d Now that instruction optzns can update the iterator as they go, we can
have objectsize folding recursively simplify away their result when it
folds.  It is important to catch this here, because otherwise we won't
eliminate the cross-block values at isel and other times.

llvm-svn: 123524
2011-01-15 07:25:29 +00:00
Chris Lattner 7a2771440f make the current instruction iterator an ivar, allowing xforms that
potentially invalidate it (like inline asm lowering) to be sunk into
their proper place, cleaning up a ton of code.

llvm-svn: 123523
2011-01-15 07:14:54 +00:00
Chris Lattner 9c10d587f6 implement an instcombine xform that canonicalizes casts outside of and-with-constant operations.
This fixes rdar://8808586 which observed that we used to compile:


union xy {
        struct x { _Bool b[15]; } x;
        __attribute__((packed))
        struct y {
                __attribute__((packed)) unsigned long b0to7;
                __attribute__((packed)) unsigned int b8to11;
                __attribute__((packed)) unsigned short b12to13;
                __attribute__((packed)) unsigned char b14;
        } y;
};

struct x
foo(union xy *xy)
{
        return xy->x;
}

into:

_foo:                                   ## @foo
	movq	(%rdi), %rax
	movabsq	$1095216660480, %rcx    ## imm = 0xFF00000000
	andq	%rax, %rcx
	movabsq	$-72057594037927936, %rdx ## imm = 0xFF00000000000000
	andq	%rax, %rdx
	movzbl	%al, %esi
	orq	%rdx, %rsi
	movq	%rax, %rdx
	andq	$65280, %rdx            ## imm = 0xFF00
	orq	%rsi, %rdx
	movq	%rax, %rsi
	andq	$16711680, %rsi         ## imm = 0xFF0000
	orq	%rdx, %rsi
	movl	%eax, %edx
	andl	$-16777216, %edx        ## imm = 0xFFFFFFFFFF000000
	orq	%rsi, %rdx
	orq	%rcx, %rdx
	movabsq	$280375465082880, %rcx  ## imm = 0xFF0000000000
	movq	%rax, %rsi
	andq	%rcx, %rsi
	orq	%rdx, %rsi
	movabsq	$71776119061217280, %r8 ## imm = 0xFF000000000000
	andq	%r8, %rax
	orq	%rsi, %rax
	movzwl	12(%rdi), %edx
	movzbl	14(%rdi), %esi
	shlq	$16, %rsi
	orl	%edx, %esi
	movq	%rsi, %r9
	shlq	$32, %r9
	movl	8(%rdi), %edx
	orq	%r9, %rdx
	andq	%rdx, %rcx
	movzbl	%sil, %esi
	shlq	$32, %rsi
	orq	%rcx, %rsi
	movl	%edx, %ecx
	andl	$-16777216, %ecx        ## imm = 0xFFFFFFFFFF000000
	orq	%rsi, %rcx
	movq	%rdx, %rsi
	andq	$16711680, %rsi         ## imm = 0xFF0000
	orq	%rcx, %rsi
	movq	%rdx, %rcx
	andq	$65280, %rcx            ## imm = 0xFF00
	orq	%rsi, %rcx
	movzbl	%dl, %esi
	orq	%rcx, %rsi
	andq	%r8, %rdx
	orq	%rsi, %rdx
	ret

We now compile this into:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	movzwl	12(%rdi), %eax
	movzbl	14(%rdi), %ecx
	shlq	$16, %rcx
	orl	%eax, %ecx
	shlq	$32, %rcx
	movl	8(%rdi), %edx
	orq	%rcx, %rdx
	movq	(%rdi), %rax
	ret

A small improvement :-)

llvm-svn: 123520
2011-01-15 06:32:33 +00:00
Chris Lattner e20dd530d0 one more instcombine variant that is needed to work with future changes,
no functionality change currently.

llvm-svn: 123517
2011-01-15 05:50:18 +00:00
Chris Lattner 497459d5fd fix typo
llvm-svn: 123516
2011-01-15 05:42:47 +00:00
Chris Lattner f3c4eefff8 Catch ~x < cst just like ~x < ~y; we currently handle this through
means that are about to disappear.

llvm-svn: 123515
2011-01-15 05:41:33 +00:00
Chris Lattner 311aa63c87 reduce indentation
llvm-svn: 123514
2011-01-15 05:40:29 +00:00
Chris Lattner b68ec5c339 Generalize LoadAndStorePromoter a bit and switch LICM
to use it.

llvm-svn: 123501
2011-01-15 00:12:35 +00:00
Owen Anderson 3e2f6cf7ae Fix a false-positive warning.
llvm-svn: 123480
2011-01-14 22:31:13 +00:00
Owen Anderson 9eb7cb48e4 Enhance GlobalOpt to be able to evaluate initializers that involve stores through
bitcasts, at least in simple cases.  This fixes clang's CodeGenCXX/virtual-base-dtor.cpp

llvm-svn: 123477
2011-01-14 22:19:20 +00:00
Chris Lattner b498f9aff3 switch SRoA to use LoadAndStorePromoter instead of its own copy of the code.
llvm-svn: 123457
2011-01-14 19:50:47 +00:00
Chris Lattner 95294b8796 Add a new LoadAndStorePromoter class, which implements the general
"promote a bunch of load and stores" logic, allowing the code to
be shared and reused.

llvm-svn: 123456
2011-01-14 19:36:13 +00:00
Chris Lattner 9987a6f49b split SROA into two passes: one that uses DomFrontiers (-scalarrepl)
and one that uses SSAUpdater (-scalarrepl-ssa)

llvm-svn: 123436
2011-01-14 08:13:00 +00:00
Chris Lattner 543384efb4 Implement full support for promoting allocas to registers using SSAUpdater
instead of DomTree/DomFrontier.  This may be interesting for reducing compile 
time.  This is currently disabled, but seems to work just fine.

When this is enabled, we eliminate two runs of dominator frontier, one in the
"early per-function" optimizations and one in the "interlaced with inliner"
function passes.

llvm-svn: 123434
2011-01-14 07:50:47 +00:00
Chris Lattner 90f3a9a1c7 indentation
llvm-svn: 123426
2011-01-14 04:23:53 +00:00
Duncan Sands 7f60dc1eb0 Move some shift transforms out of instcombine and into InstructionSimplify.
While there, I noticed that the transform "undef >>a X -> undef" was wrong.
For example if X is 2 then the top two bits must be equal, so the result can
not be anything.  I fixed this in the constant folder as well.  Also, I made
the transform for "X << undef" stronger: it now folds to undef always, even
though X might be zero.  This is in accordance with the LangRef, but I must
admit that it is fairly aggressive.  Also, I added "i32 X << 32 -> undef"
following the LangRef and the constant folder, likewise fairly aggressive.

llvm-svn: 123417
2011-01-14 00:37:45 +00:00
Bob Wilson 328e91bbe1 Fix whitespace.
llvm-svn: 123396
2011-01-13 20:59:44 +00:00
Bob Wilson c8056a952e Check for empty structs, and for consistency, zero-element arrays.
llvm-svn: 123383
2011-01-13 18:26:59 +00:00
Bob Wilson 08713d3c5f Extend SROA to handle arrays accessed as homogeneous structs and vice versa.
This is a minor extension of SROA to handle a special case that is
important for some ARM NEON operations.  Some of the NEON intrinsics
return multiple values, which are handled as struct types containing
multiple elements of the same vector type.  The corresponding return
types declared in the arm_neon.h header have equivalent arrays.  We
need SROA to recognize that it can split up those arrays and structs
into separate vectors, even though they are not always accessed with
the same type.  SROA already handles loads and stores of an entire
alloca by using insertvalue/extractvalue to access the individual
pieces, and that code works the same regardless of whether the type
is a struct or an array.  So, all that needs to be done is to check
for compatible arrays and homogeneous structs.

llvm-svn: 123381
2011-01-13 17:45:11 +00:00
Bob Wilson 12eec40c83 Make SROA more aggressive with allocas containing padding.
SROA only split up structs and arrays one level at a time, so padding can
only cause trouble if it is located in between the struct or array elements.

llvm-svn: 123380
2011-01-13 17:45:08 +00:00
Devang Patel 30f3ebbc1f Use SmallVector instead of SmallPtrSet and avoid non-deterministic behavior.
llvm-svn: 123318
2011-01-12 19:12:45 +00:00
Chris Lattner dd5f60b7a7 revert 123144, reenabling the rest of memset formation.
llvm-svn: 123302
2011-01-12 03:25:15 +00:00
Chris Lattner 654098f411 revert r123146 which disabled code that wasn't the root cause
of the bootstrap miscompare issue.

llvm-svn: 123299
2011-01-12 01:52:23 +00:00
Chris Lattner fa7c29d255 revert r123149, reenabling an improvement to memcpyopt that wasn't
the source of the bootstrap problem.

llvm-svn: 123298
2011-01-12 01:43:46 +00:00
Jakob Stoklund Olesen 12cc296bd4 Remove the PR8954 workaround.
llvm-svn: 123288
2011-01-11 22:56:41 +00:00
Jakob Stoklund Olesen f2407aa98b Fix a non-deterministic loop in llvm::MergeBlockIntoPredecessor.
DT->changeImmediateDominator() trivially ignores identity updates, so there is
really no need for the uniqueing provided by SmallPtrSet.

I expect this to fix PR8954.

llvm-svn: 123286
2011-01-11 22:54:38 +00:00
Cameron Zwarich cb9c4f85ec Dial back the speculative fix for PR8954 a bit, so that we only recompute dominators
once at the beginning of GVN instead of once per iteration.

llvm-svn: 123278
2011-01-11 22:14:42 +00:00
Cameron Zwarich 51eb403907 Attempt to fix the bootstrap buildbot. Rafael says this works for him on x86-64 Linux.
llvm-svn: 123270
2011-01-11 20:23:34 +00:00
Owen Anderson 0022a4b417 Remove dead variable, const-ref-ize an APInt.
llvm-svn: 123248
2011-01-11 18:26:37 +00:00
Chris Lattner d41db8f9cb this pass claims to preserve scev; make sure to tell it about deletions.
llvm-svn: 123247
2011-01-11 18:14:50 +00:00
Frits van Bommel 8e158495f1 Factor the actual simplification out of SimplifyIndirectBrOnSelect and into a new helper function so it can be reused in e.g. an upcoming SimplifySwitchOnSelect.
No functional change.

llvm-svn: 123234
2011-01-11 12:52:11 +00:00
Chris Lattner 193ce7c4d1 update memdep when an instruction is deleted. This code isn't
actually reached in the testcase in PR8954, but it's safe and good
practice.

llvm-svn: 123224
2011-01-11 08:19:16 +00:00
Chris Lattner e2523b287c when MergeBlockIntoPredecessor merges two blocks, update MemDep if it
is floating around in the ether.

llvm-svn: 123223
2011-01-11 08:16:49 +00:00
Chris Lattner f6ae904e34 Fix FoldSingleEntryPHINodes to update memdep and AA when it deletes
phi nodes.  It is called from MergeBlockIntoPredecessor which is 
called from GVN, which claims to preserve these.

I'm skeptical that this is the actual problem behind PR8954, but
this is a stab in the right direction.

llvm-svn: 123222
2011-01-11 08:13:40 +00:00
Chris Lattner dfcfcb49fa random cleanups
llvm-svn: 123221
2011-01-11 08:00:40 +00:00
Chris Lattner 63fe78de68 remove a bogus assertion: the latch block of a loop is not
necessarily an uncond branch to the header.  This fixes
PR8955 (the assertion tripping).

llvm-svn: 123219
2011-01-11 07:47:59 +00:00
Owen Anderson d490c2d2ae Fix a random missed optimization by making InstCombine more aggressive when determining which bits are demanded by
a comparison against a constant.

llvm-svn: 123203
2011-01-11 00:36:45 +00:00
Chandler Carruth cf414cf0a6 Teach instcombine about the rest of the SSE and SSE2 conversion
intrinsics element dependencies. Reviewed by Nick.

llvm-svn: 123161
2011-01-10 07:19:37 +00:00
Chris Lattner 88bc848ab6 another random stab in the dark trying to fix llvm-gcc-i386-linux-selfhost
llvm-svn: 123149
2011-01-10 02:34:11 +00:00