Commit Graph

8904 Commits

Author SHA1 Message Date
Owen Anderson a834200dbe Just because we have determined that an (fcmp | fcmp) is true for A < B,
A == B, and A > B, does not mean we can fold it to true.  We still need to
check for A ? B (A unordered B).

llvm-svn: 123993
2011-01-21 19:39:42 +00:00
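A minimal C sketch of why the fold is invalid (hypothetical example, not the
commit's testcase): with a NaN input all three ordered comparisons are false,
so their disjunction is not the constant true.

int ordered_trichotomy(double a, double b) {
  /* false when a or b is NaN: the unordered case is not covered */
  return (a < b) | (a == b) | (a > b);
}
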
Nick Lewycky ae0275e018 SCCP doesn't actually preserve the CFG. It will delete and insert terminator
instructions.

llvm-svn: 123973
2011-01-21 08:38:09 +00:00
Chris Lattner b5e15d1907 fix PR9013, an infinite loop in instcombine.
llvm-svn: 123968
2011-01-21 05:29:50 +00:00
Chris Lattner f4ca47bda8 update obsolete comment.
llvm-svn: 123965
2011-01-21 05:08:26 +00:00
Nick Lewycky 6a083cf820 Don't try to pull vector bitcasts that change the number of elements through
a select. A vector select is pairwise on each element so we'd need a new
condition with the right number of elements to select on. Fixes PR8994.

llvm-svn: 123963
2011-01-21 02:30:43 +00:00
Duncan Sands 8fb2c3827c At -O123 the early-cse pass is run before instcombine has run. According to my
auto-simplifier the transform most missed by early-cse is (zext X) != 0 -> X != 0.
This patch adds this transform and some related logic to InstructionSimplify
and removes some of the logic from instcombine (unfortunately not all because
there are several situations in which instcombine can improve things by making
new instructions, whereas instsimplify is not allowed to do this).  At -O2 this
often results in more than 15% more simplifications by early-cse, and results in
hundreds of lines of bitcode being eliminated from the testsuite.  I did see some
small negative effects in the testsuite, for example a few additional instructions
in three programs.  One program, 483.xalancbmk, got an additional 35 instructions,
which seems to be due to a function getting an additional instruction and then
being inlined all over the place.

llvm-svn: 123911
2011-01-20 13:21:55 +00:00
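A small C illustration of the (zext X) != 0 -> X != 0 simplification described
above (hypothetical example, assuming the usual integer-promotion lowering to a
zext):

int is_nonzero(unsigned char x) {
  unsigned int wide = x;   /* lowers to: %wide = zext i8 %x to i32 */
  return wide != 0;        /* simplifies to a compare of %x against 0 */
}
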
Rafael Espindola fc355bc070 Add unnamed_addr when we can show that address of a global is not used.
llvm-svn: 123834
2011-01-19 16:32:21 +00:00
Chris Lattner 86d56c651d fix rdar://8878965, a regression I introduced with the recent
llvm.objectsize changes.

llvm-svn: 123771
2011-01-18 20:53:04 +00:00
Cameron Zwarich fc210c79b7 Convert a std::map to a DenseMap for another 1.7% speedup on -scalarrepl.
llvm-svn: 123732
2011-01-18 04:50:38 +00:00
Cameron Zwarich 6968c41ac8 Make a std::vector a SmallVector<*, 32> like the other vectors in the same
function. This seems to be about a 1.5% speedup of -scalarrepl on test-suite
with SPEC2000 and SPEC2006.

llvm-svn: 123731
2011-01-18 04:41:32 +00:00
Rafael Espindola ecd5b9abe9 Reduce indentation and remove commented out code.
llvm-svn: 123729
2011-01-18 04:36:06 +00:00
Cameron Zwarich b703654edc Remove code for updating dominance frontiers and some outdated references to
dominance and post-dominance frontiers.

llvm-svn: 123725
2011-01-18 04:11:31 +00:00
Cameron Zwarich 4694e69540 Remove outdated references to dominance frontiers.
llvm-svn: 123724
2011-01-18 03:53:26 +00:00
Owen Anderson 459e079912 Remove dead code that I apparently wrote a while back. We seem to be doing well enough
without whatever this was trying to do.  When/if someone has the time to do some empirical
evaluations, it might be worth it to figure out what this code was trying to do and see if
it's worth resurrecting/fixing.

llvm-svn: 123684
2011-01-17 22:39:54 +00:00
Cameron Zwarich b410858a5f Roll r123609 back in with two changes that fix test failures with expensive
checks enabled:

1) Use '<' to compare integers in a comparison function rather than '<='.

2) Use the uniqued set DefBlocks rather than Info.DefiningBlocks to initialize
the priority queue.

The speedup of scalarrepl on test-suite + SPEC2000 + SPEC2006 is a bit less, at
just under 16% rather than 17%.

llvm-svn: 123662
2011-01-17 17:38:41 +00:00
Cameron Zwarich 67431d7943 Roll out r123609 due to failures on the llvm-x86_64-linux-checks bot.
llvm-svn: 123618
2011-01-17 07:26:51 +00:00
Cameron Zwarich 814cd9233e Eliminate the use of dominance frontiers in PromoteMemToReg. In addition to
eliminating a potentially quadratic data structure, this also gives a 17%
speedup when running -scalarrepl on test-suite + SPEC2000 + SPEC2006. My initial
experiment gave a greater speedup around 25%, but I moved the dominator tree
level computation from dominator tree construction to PromoteMemToReg.

Since this approach to computing IDFs has a much lower overhead than the old
code using precomputed DFs, it is worth looking at using this new code for the
second scalarrepl pass as well.

llvm-svn: 123609
2011-01-17 01:08:59 +00:00
Anders Carlsson d3db83349e Teach DAE to look for functions whose arguments are unused, and change all callers to pass in an undefvalue instead.
llvm-svn: 123596
2011-01-16 21:25:33 +00:00
Chris Lattner 7c9f4c9c2b tidy up a comment, as suggested by duncan
llvm-svn: 123590
2011-01-16 17:46:19 +00:00
Rafael Espindola 751677a040 Don't merge two constants if we care about the address of both.
This fixes the original testcase in PR8927. It also causes a clang
binary built with a patched clang to increase in size by 0.21%.

We can probably get some of the size back by writing a pass that
detects that a global never has its pointer compared and adds
unnamed_addr to it (maybe extend global opt). It is also possible that
there are some other cases clang could add unnamed_addr to.

I will investigate extending globalopt next.

llvm-svn: 123584
2011-01-16 17:05:09 +00:00
Chris Lattner e5f8de8639 fix PR8932, a case where arg promotion could infinitely promote.
llvm-svn: 123574
2011-01-16 08:09:24 +00:00
Chris Lattner ed1fb92cfe simplify a little
llvm-svn: 123573
2011-01-16 07:11:21 +00:00
Chris Lattner 6fab2e9418 if an alloca is only ever accessed as a unit, and is accessed with load/store instructions,
then don't try to decimate it into its individual pieces.  This will just make a mess of the
IR and is pointless if none of the elements are individually accessed.  This was generating
really terrible code for std::bitset (PR8980) because it happens to be lowered by clang
as an {[8 x i8]} structure instead of {i64}.

The testcase now is optimized to:

define i64 @test2(i64 %X) {
  br label %L2

L2:                                               ; preds = %0
  ret i64 %X
}

before we generated:

define i64 @test2(i64 %X) {
  %sroa.store.elt = lshr i64 %X, 56
  %1 = trunc i64 %sroa.store.elt to i8
  %sroa.store.elt8 = lshr i64 %X, 48
  %2 = trunc i64 %sroa.store.elt8 to i8
  %sroa.store.elt9 = lshr i64 %X, 40
  %3 = trunc i64 %sroa.store.elt9 to i8
  %sroa.store.elt10 = lshr i64 %X, 32
  %4 = trunc i64 %sroa.store.elt10 to i8
  %sroa.store.elt11 = lshr i64 %X, 24
  %5 = trunc i64 %sroa.store.elt11 to i8
  %sroa.store.elt12 = lshr i64 %X, 16
  %6 = trunc i64 %sroa.store.elt12 to i8
  %sroa.store.elt13 = lshr i64 %X, 8
  %7 = trunc i64 %sroa.store.elt13 to i8
  %8 = trunc i64 %X to i8
  br label %L2

L2:                                               ; preds = %0
  %9 = zext i8 %1 to i64
  %10 = shl i64 %9, 56
  %11 = zext i8 %2 to i64
  %12 = shl i64 %11, 48
  %13 = or i64 %12, %10
  %14 = zext i8 %3 to i64
  %15 = shl i64 %14, 40
  %16 = or i64 %15, %13
  %17 = zext i8 %4 to i64
  %18 = shl i64 %17, 32
  %19 = or i64 %18, %16
  %20 = zext i8 %5 to i64
  %21 = shl i64 %20, 24
  %22 = or i64 %21, %19
  %23 = zext i8 %6 to i64
  %24 = shl i64 %23, 16
  %25 = or i64 %24, %22
  %26 = zext i8 %7 to i64
  %27 = shl i64 %26, 8
  %28 = or i64 %27, %25
  %29 = zext i8 %8 to i64
  %30 = or i64 %29, %28
  ret i64 %30
}

In this case, instcombine was able to eliminate the nonsense, but in PR8980 enough
PHIs are in play that instcombine backs off.  It's better to not generate this stuff
in the first place.

llvm-svn: 123571
2011-01-16 06:18:28 +00:00
Chris Lattner 7cd8cf7d24 Use an irbuilder to get some trivial constant folding when doing a store
of a constant.

llvm-svn: 123570
2011-01-16 05:58:24 +00:00
Chris Lattner adb1a233b1 remove a dead check, this was needed before we had an explicit veto on uses of phis.
llvm-svn: 123569
2011-01-16 05:37:55 +00:00
Chris Lattner d55581ded8 enhance FoldOpIntoPhi in instcombine to try harder when a phi has
multiple uses.  In some cases, all the uses are the same operation,
so instcombine can go ahead and promote the phi.  In the testcase
this pushes an add out of the loop.

llvm-svn: 123568
2011-01-16 05:28:59 +00:00
Chris Lattner ea7131a062 remove the AllowAggressive argument to FoldOpIntoPhi. It is forced to false in the
first line of the function because it isn't a good idea, even for compares.

llvm-svn: 123566
2011-01-16 05:14:26 +00:00
Chris Lattner ff2e737714 more cleanups: use the IR builder.
llvm-svn: 123565
2011-01-16 05:08:00 +00:00
Chris Lattner 25ce280511 tidy up code.
llvm-svn: 123564
2011-01-16 04:37:29 +00:00
Owen Anderson 4e54efd625 Improve the safety of my globalopt enhancement by ensuring that the bitcast
of the stored value to the new store type is always valid.  Also, add a testcase.

llvm-svn: 123563
2011-01-16 04:33:33 +00:00
Chris Lattner 8b4952fcf7 simplify this code, it is still broken but will follow up on llvm-commits.
llvm-svn: 123558
2011-01-16 02:05:10 +00:00
Chris Lattner 1e209b87ad remove the partial specialization pass. It is unmaintained and has bugs.
llvm-svn: 123554
2011-01-16 00:27:10 +00:00
Nick Lewycky 4a1ff16b29 Add missing whitespace.
llvm-svn: 123543
2011-01-15 18:42:52 +00:00
Nick Lewycky 0296a481f9 Make constmerge a two-pass algorithm so that it won't miss merging
opportunities. Fixes PR8978.

llvm-svn: 123541
2011-01-15 18:14:21 +00:00
Benjamin Kramer ed5f2e504e Try to unbreak selfhost.
llvm-svn: 123537
2011-01-15 11:25:34 +00:00
Nick Lewycky 540f9536c8 Add a cache that protects mergefunc's internals from more surprises in DenseSet.
Also, replace tabs with spaces. Yes, it's 2011.

llvm-svn: 123535
2011-01-15 10:16:23 +00:00
Chris Lattner af26390790 temporarily revert r123526. While working on a follow-on patch I
realized that ConstantFoldTerminator doesn't preserve dominfo.

llvm-svn: 123527
2011-01-15 07:51:19 +00:00
Chris Lattner 8df83c4a24 fix rdar://8785296 - -fcatch-undefined-behavior generates inefficient code
The basic issue is that isel (very reasonably!) expects conditional branches
to be folded, so CGP leaving around a bunch of dead computation feeding
conditional branches isn't such a good idea.  Just fold branches on constants
into unconditional branches.

llvm-svn: 123526
2011-01-15 07:36:13 +00:00
Chris Lattner ee588defc6 simplify code, no functionality change.
llvm-svn: 123525
2011-01-15 07:29:01 +00:00
Chris Lattner 1b93be501d Now that instruction optzns can update the iterator as they go, we can
have objectsize folding recursively simplify away their result when it
folds.  It is important to catch this here, because otherwise we won't
eliminate the cross-block values at isel and other times.

llvm-svn: 123524
2011-01-15 07:25:29 +00:00
Chris Lattner 7a2771440f make the current instruction iterator an ivar, allowing xforms that
potentially invalidate it (like inline asm lowering) to be sunk into
their proper place, cleaning up a ton of code.

llvm-svn: 123523
2011-01-15 07:14:54 +00:00
Chris Lattner 9c10d587f6 implement an instcombine xform that canonicalizes casts outside of and-with-constant operations.
This fixes rdar://8808586 which observed that we used to compile:


union xy {
        struct x { _Bool b[15]; } x;
        __attribute__((packed))
        struct y {
                __attribute__((packed)) unsigned long b0to7;
                __attribute__((packed)) unsigned int b8to11;
                __attribute__((packed)) unsigned short b12to13;
                __attribute__((packed)) unsigned char b14;
        } y;
};

struct x
foo(union xy *xy)
{
        return xy->x;
}

into:

_foo:                                   ## @foo
	movq	(%rdi), %rax
	movabsq	$1095216660480, %rcx    ## imm = 0xFF00000000
	andq	%rax, %rcx
	movabsq	$-72057594037927936, %rdx ## imm = 0xFF00000000000000
	andq	%rax, %rdx
	movzbl	%al, %esi
	orq	%rdx, %rsi
	movq	%rax, %rdx
	andq	$65280, %rdx            ## imm = 0xFF00
	orq	%rsi, %rdx
	movq	%rax, %rsi
	andq	$16711680, %rsi         ## imm = 0xFF0000
	orq	%rdx, %rsi
	movl	%eax, %edx
	andl	$-16777216, %edx        ## imm = 0xFFFFFFFFFF000000
	orq	%rsi, %rdx
	orq	%rcx, %rdx
	movabsq	$280375465082880, %rcx  ## imm = 0xFF0000000000
	movq	%rax, %rsi
	andq	%rcx, %rsi
	orq	%rdx, %rsi
	movabsq	$71776119061217280, %r8 ## imm = 0xFF000000000000
	andq	%r8, %rax
	orq	%rsi, %rax
	movzwl	12(%rdi), %edx
	movzbl	14(%rdi), %esi
	shlq	$16, %rsi
	orl	%edx, %esi
	movq	%rsi, %r9
	shlq	$32, %r9
	movl	8(%rdi), %edx
	orq	%r9, %rdx
	andq	%rdx, %rcx
	movzbl	%sil, %esi
	shlq	$32, %rsi
	orq	%rcx, %rsi
	movl	%edx, %ecx
	andl	$-16777216, %ecx        ## imm = 0xFFFFFFFFFF000000
	orq	%rsi, %rcx
	movq	%rdx, %rsi
	andq	$16711680, %rsi         ## imm = 0xFF0000
	orq	%rcx, %rsi
	movq	%rdx, %rcx
	andq	$65280, %rcx            ## imm = 0xFF00
	orq	%rsi, %rcx
	movzbl	%dl, %esi
	orq	%rcx, %rsi
	andq	%r8, %rdx
	orq	%rsi, %rdx
	ret

We now compile this into:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	movzwl	12(%rdi), %eax
	movzbl	14(%rdi), %ecx
	shlq	$16, %rcx
	orl	%eax, %ecx
	shlq	$32, %rcx
	movl	8(%rdi), %edx
	orq	%rcx, %rdx
	movq	(%rdi), %rax
	ret

A small improvement :-)

llvm-svn: 123520
2011-01-15 06:32:33 +00:00
Chris Lattner e20dd530d0 one more instcombine variant that is needed to work with future changes,
no functionality change currently.

llvm-svn: 123517
2011-01-15 05:50:18 +00:00
Chris Lattner 497459d5fd fix typo
llvm-svn: 123516
2011-01-15 05:42:47 +00:00
Chris Lattner f3c4eefff8 Catch ~x < cst just like ~x < ~y, we currently handle this through
means that are about to disappear.

llvm-svn: 123515
2011-01-15 05:41:33 +00:00
Chris Lattner 311aa63c87 reduce indentation
llvm-svn: 123514
2011-01-15 05:40:29 +00:00
Chris Lattner b68ec5c339 Generalize LoadAndStorePromoter a bit and switch LICM
to use it.

llvm-svn: 123501
2011-01-15 00:12:35 +00:00
Owen Anderson 3e2f6cf7ae Fix a false-positive warning.
llvm-svn: 123480
2011-01-14 22:31:13 +00:00
Owen Anderson 9eb7cb48e4 Enhance GlobalOpt to be able to evaluate initializers that involve stores through
bitcasts, at least in simple cases.  This fixes clang's CodeGenCXX/virtual-base-dtor.cpp

llvm-svn: 123477
2011-01-14 22:19:20 +00:00
Chris Lattner b498f9aff3 switch SRoA to use LoadAndStorePromoter instead of its own copy of the code.
llvm-svn: 123457
2011-01-14 19:50:47 +00:00
Chris Lattner 95294b8796 Add a new LoadAndStorePromoter class, which implements the general
"promote a bunch of load and stores" logic, allowing the code to
be shared and reused.

llvm-svn: 123456
2011-01-14 19:36:13 +00:00
Chris Lattner 9987a6f49b split SROA into two passes: one that uses DomFrontiers (-scalarrepl)
and one that uses SSAUpdater (-scalarrepl-ssa)

llvm-svn: 123436
2011-01-14 08:13:00 +00:00
Chris Lattner 543384efb4 Implement full support for promoting allocas to registers using SSAUpdater
instead of DomTree/DomFrontier.  This may be interesting for reducing compile 
time.  This is currently disabled, but seems to work just fine.

When this is enabled, we eliminate two runs of dominator frontier, one in the
"early per-function" optimizations and one in the "interlaced with inliner"
function passes.

llvm-svn: 123434
2011-01-14 07:50:47 +00:00
Chris Lattner 90f3a9a1c7 indentation
llvm-svn: 123426
2011-01-14 04:23:53 +00:00
Duncan Sands 7f60dc1eb0 Move some shift transforms out of instcombine and into InstructionSimplify.
While there, I noticed that the transform "undef >>a X -> undef" was wrong.
For example if X is 2 then the top two bits must be equal, so the result cannot
be an arbitrary value.  I fixed this in the constant folder as well.  Also, I made
the transform for "X << undef" stronger: it now folds to undef always, even
though X might be zero.  This is in accordance with the LangRef, but I must
admit that it is fairly aggressive.  Also, I added "i32 X << 32 -> undef"
following the LangRef and the constant folder, likewise fairly aggressive.

llvm-svn: 123417
2011-01-14 00:37:45 +00:00
Bob Wilson 328e91bbe1 Fix whitespace.
llvm-svn: 123396
2011-01-13 20:59:44 +00:00
Bob Wilson c8056a952e Check for empty structs, and for consistency, zero-element arrays.
llvm-svn: 123383
2011-01-13 18:26:59 +00:00
Bob Wilson 08713d3c5f Extend SROA to handle arrays accessed as homogeneous structs and vice versa.
This is a minor extension of SROA to handle a special case that is
important for some ARM NEON operations.  Some of the NEON intrinsics
return multiple values, which are handled as struct types containing
multiple elements of the same vector type.  The corresponding return
types declared in the arm_neon.h header have equivalent arrays.  We
need SROA to recognize that it can split up those arrays and structs
into separate vectors, even though they are not always accessed with
the same type.  SROA already handles loads and stores of an entire
alloca by using insertvalue/extractvalue to access the individual
pieces, and that code works the same regardless of whether the type
is a struct or an array.  So, all that needs to be done is to check
for compatible arrays and homogeneous structs.

llvm-svn: 123381
2011-01-13 17:45:11 +00:00
Bob Wilson 12eec40c83 Make SROA more aggressive with allocas containing padding.
SROA only split up structs and arrays one level at a time, so padding can
only cause trouble if it is located in between the struct or array elements.

llvm-svn: 123380
2011-01-13 17:45:08 +00:00
Devang Patel 30f3ebbc1f Use SmallVector instead of SmallPtrSet and avoid non-deterministic behavior.
llvm-svn: 123318
2011-01-12 19:12:45 +00:00
Chris Lattner dd5f60b7a7 revert 123144, reenabling the rest of memset formation.
llvm-svn: 123302
2011-01-12 03:25:15 +00:00
Chris Lattner 654098f411 revert r123146 which disabled code that wasn't the root cause
of the bootstrap miscompare issue.

llvm-svn: 123299
2011-01-12 01:52:23 +00:00
Chris Lattner fa7c29d255 revert r123149, reenabling an improvement to memcpyopt that wasn't
the source of the bootstrap problem.

llvm-svn: 123298
2011-01-12 01:43:46 +00:00
Jakob Stoklund Olesen 12cc296bd4 Remove the PR8954 workaround.
llvm-svn: 123288
2011-01-11 22:56:41 +00:00
Jakob Stoklund Olesen f2407aa98b Fix a non-deterministic loop in llvm::MergeBlockIntoPredecessor.
DT->changeImmediateDominator() trivially ignores identity updates, so there is
really no need for the uniquing provided by SmallPtrSet.

I expect this to fix PR8954.

llvm-svn: 123286
2011-01-11 22:54:38 +00:00
Cameron Zwarich cb9c4f85ec Dial back the speculative fix for PR8954 a bit, so that we only recompute dominators
once at the beginning of GVN instead of once per iteration.

llvm-svn: 123278
2011-01-11 22:14:42 +00:00
Cameron Zwarich 51eb403907 Attempt to fix the bootstrap buildbot. Rafael says this works for him on x86-64 Linux.
llvm-svn: 123270
2011-01-11 20:23:34 +00:00
Owen Anderson 0022a4b417 Remove dead variable, const-ref-ize an APInt.
llvm-svn: 123248
2011-01-11 18:26:37 +00:00
Chris Lattner d41db8f9cb this pass claims to preserve scev, make sure to tell it about deletions.
llvm-svn: 123247
2011-01-11 18:14:50 +00:00
Frits van Bommel 8e158495f1 Factor the actual simplification out of SimplifyIndirectBrOnSelect and into a new helper function so it can be reused in e.g. an upcoming SimplifySwitchOnSelect.
No functional change.

llvm-svn: 123234
2011-01-11 12:52:11 +00:00
Chris Lattner 193ce7c4d1 update memdep when an instruction is deleted. This code isn't
actually reached in the testcase in PR8954, but it's safe and good
practice.

llvm-svn: 123224
2011-01-11 08:19:16 +00:00
Chris Lattner e2523b287c when MergeBlockIntoPredecessor merges two blocks, update MemDep if it
is floating around in the ether.

llvm-svn: 123223
2011-01-11 08:16:49 +00:00
Chris Lattner f6ae904e34 Fix FoldSingleEntryPHINodes to update memdep and AA when it deletes
phi nodes.  It is called from MergeBlockIntoPredecessor which is 
called from GVN, which claims to preserve these.

I'm skeptical that this is the actual problem behind PR8954, but
this is a stab in the right direction.

llvm-svn: 123222
2011-01-11 08:13:40 +00:00
Chris Lattner dfcfcb49fa random cleanups
llvm-svn: 123221
2011-01-11 08:00:40 +00:00
Chris Lattner 63fe78de68 remove a bogus assertion: the latch block of a loop is not
necessarily an uncond branch to the header.  This fixes
PR8955 (the assertion tripping).

llvm-svn: 123219
2011-01-11 07:47:59 +00:00
Owen Anderson d490c2d2ae Fix a random missed optimization by making InstCombine more aggressive when determining which bits are demanded by
a comparison against a constant.

llvm-svn: 123203
2011-01-11 00:36:45 +00:00
Chandler Carruth cf414cf0a6 Teach instcombine about the rest of the SSE and SSE2 conversion
intrinsics element dependencies. Reviewed by Nick.

llvm-svn: 123161
2011-01-10 07:19:37 +00:00
Chris Lattner 88bc848ab6 another random stab in the dark trying to fix llvm-gcc-i386-linux-selfhost
llvm-svn: 123149
2011-01-10 02:34:11 +00:00
Chris Lattner 4662bd4b13 another (more) aggressive attempt to bring llvm-gcc-i386-linux-selfhost
back to life.

llvm-svn: 123146
2011-01-10 00:47:34 +00:00
Chris Lattner 1017fa6746 temporarily disable memset formation from memsets in an effort to restore buildbot stability.
llvm-svn: 123144
2011-01-09 23:52:48 +00:00
Chris Lattner caf5c0d037 fix a few old bugs (found by inspection) where we would zap instructions
without informing memdep.  This could cause nondeterministic weirdness
based on where instructions happen to get allocated, and will hopefully
breathe some life into some broken testers.

llvm-svn: 123124
2011-01-09 19:26:10 +00:00
Tobias Grosser cc21c4aa98 Instcombine: Fix pattern where the sext did not dominate the icmp using it
llvm-svn: 123121
2011-01-09 16:00:11 +00:00
Cameron Zwarich a42e5915bf LoopInstSimplify preserves LoopSimplify.
llvm-svn: 123117
2011-01-09 12:35:16 +00:00
Chris Lattner a337f5ec5c reduce indentation. Print <nuw> and <nsw> when dumping SCEV AddRec's
that have the bit set.

llvm-svn: 123104
2011-01-09 02:16:18 +00:00
Chris Lattner 7d6433ae76 fix a latent bug in memcpyoptimizer that my recent patches exposed: it wasn't
updating memdep when fusing stores together.  This fixes the crash optimizing
the bullet benchmark.

llvm-svn: 123091
2011-01-08 22:19:21 +00:00
Chris Lattner ff6ed2ac5f tryMergingIntoMemset can only handle constant length memsets.
llvm-svn: 123090
2011-01-08 22:11:56 +00:00
Chris Lattner 9a1d63ba9f Merge memsets followed by neighboring memsets and other stores into
larger memsets.  Among other things, this fixes rdar://8760394 and
allows us to handle "Example 2" from http://blog.regehr.org/archives/320,
compiling it into a single 4096-byte memset:

_mad_synth_mute:                        ## @mad_synth_mute
## BB#0:                                ## %entry
	pushq	%rax
	movl	$4096, %esi             ## imm = 0x1000
	callq	___bzero
	popq	%rax
	ret

llvm-svn: 123089
2011-01-08 21:19:19 +00:00
Chris Lattner 5120ebf184 fix an issue in IsPointerOffset that prevented us from recognizing that
P and P+1 are relative to the same base pointer.

llvm-svn: 123087
2011-01-08 21:07:56 +00:00
Chris Lattner 4dc1fd938f enhance memcpyopt to merge a store and a subsequent
memset into a single larger memset.

llvm-svn: 123086
2011-01-08 20:54:51 +00:00
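A hedged C sketch of the kind of pattern this targets (hypothetical example;
the offsets and sizes are illustrative only):

#include <string.h>

void zero_header(char *buf) {
  buf[0] = 0;              /* a single store ...          */
  memset(buf + 1, 0, 15);  /* ... plus an adjacent memset */
  /* the two together can become one memset(buf, 0, 16)   */
}
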
Chris Lattner c638147e9f constify TargetData references.
Split memset formation logic out into its own
"tryMergingIntoMemset" helper function.

llvm-svn: 123081
2011-01-08 20:24:01 +00:00
Chris Lattner 59c82f850d When loop rotation happens, it is *very* common for the duplicated condbr
to be foldable into an uncond branch.  When this happens, we can make a
much simpler CFG for the loop, which is important for nested loop cases
where we want the outer loop to be aggressively optimized.

Handle this case more aggressively.  For example, previously on
phi-duplicate.ll we would get this:


define void @test(i32 %N, double* %G) nounwind ssp {
entry:
  %cmp1 = icmp slt i64 1, 1000
  br i1 %cmp1, label %bb.nph, label %for.end

bb.nph:                                           ; preds = %entry
  br label %for.body

for.body:                                         ; preds = %bb.nph, %for.cond
  %j.02 = phi i64 [ 1, %bb.nph ], [ %inc, %for.cond ]
  %arrayidx = getelementptr inbounds double* %G, i64 %j.02
  %tmp3 = load double* %arrayidx
  %sub = sub i64 %j.02, 1
  %arrayidx6 = getelementptr inbounds double* %G, i64 %sub
  %tmp7 = load double* %arrayidx6
  %add = fadd double %tmp3, %tmp7
  %arrayidx10 = getelementptr inbounds double* %G, i64 %j.02
  store double %add, double* %arrayidx10
  %inc = add nsw i64 %j.02, 1
  br label %for.cond

for.cond:                                         ; preds = %for.body
  %cmp = icmp slt i64 %inc, 1000
  br i1 %cmp, label %for.body, label %for.cond.for.end_crit_edge

for.cond.for.end_crit_edge:                       ; preds = %for.cond
  br label %for.end

for.end:                                          ; preds = %for.cond.for.end_crit_edge, %entry
  ret void
}

Now we get the much nicer:

define void @test(i32 %N, double* %G) nounwind ssp {
entry:
  br label %for.body

for.body:                                         ; preds = %entry, %for.body
  %j.01 = phi i64 [ 1, %entry ], [ %inc, %for.body ]
  %arrayidx = getelementptr inbounds double* %G, i64 %j.01
  %tmp3 = load double* %arrayidx
  %sub = sub i64 %j.01, 1
  %arrayidx6 = getelementptr inbounds double* %G, i64 %sub
  %tmp7 = load double* %arrayidx6
  %add = fadd double %tmp3, %tmp7
  %arrayidx10 = getelementptr inbounds double* %G, i64 %j.01
  store double %add, double* %arrayidx10
  %inc = add nsw i64 %j.01, 1
  %cmp = icmp slt i64 %inc, 1000
  br i1 %cmp, label %for.body, label %for.end

for.end:                                          ; preds = %for.body
  ret void
}

With all of these recent changes, we are now able to compile:

void foo(char *X) {
 for (int i = 0; i != 100; ++i) 
   for (int j = 0; j != 100; ++j)
     X[j+i*100] = 0;
}

into a single memset of 10000 bytes.  This series of changes
should also be helpful for other nested loop scenarios as well.

llvm-svn: 123079
2011-01-08 19:59:06 +00:00
Chris Lattner 30f318e5d1 split ssa updating code out to its own helper function. Don't bother
moving the OrigHeader block anymore: we just merge it away anyway so
its code layout doesn't matter.

llvm-svn: 123077
2011-01-08 19:26:33 +00:00
Chris Lattner 2615130e1d Implement a TODO: Enhance loopinfo to merge away the unconditional branch
that it was leaving in loops after rotation (between the original latch
block and the original header).

With this change, it is possible for rotated loops to have just a single
basic block, which is useful.

llvm-svn: 123075
2011-01-08 19:10:28 +00:00
Chris Lattner 930b716e1b various code cleanups, enhance MergeBlockIntoPredecessor to preserve
loop info.

llvm-svn: 123074
2011-01-08 19:08:40 +00:00
Chris Lattner fee37c5fa3 inline preserveCanonicalLoopForm now that it is simple.
llvm-svn: 123073
2011-01-08 18:55:50 +00:00
Chris Lattner 063dca0f6a Three major changes:
1. Rip out LoopRotate's domfrontier updating code.  It isn't
   needed now that LICM doesn't use DF and it is super complex
   and gross.
2. Make DomTree updating code a lot simpler and faster.  The 
   old loop over all the blocks was just to find a block??
3. Change the code that inserts the new preheader to just use
   SplitCriticalEdge instead of doing an overcomplex 
   reimplementation of it.

No behavior change, except for the name of the inserted preheader.

llvm-svn: 123072
2011-01-08 18:52:51 +00:00
Chris Lattner 30d95f9f87 reduce nesting.
llvm-svn: 123071
2011-01-08 18:47:43 +00:00
Chris Lattner 7fab23bc1d LoopRotate requires canonical loop form, so it always has preheaders
and latch blocks.  Reorder entry conditions to make the pass faster
and more logical.

llvm-svn: 123069
2011-01-08 18:06:22 +00:00
Chris Lattner d62691f4e8 use the LI ivar.
llvm-svn: 123068
2011-01-08 17:49:51 +00:00
Chris Lattner 385f2ec6d8 some cleanups: remove dead arguments and eliminate ivars
that are just passed to one function.

llvm-svn: 123067
2011-01-08 17:48:33 +00:00
Chris Lattner 25ba40a0cc fix an issue duncan pointed out, which could cause loop rotate
to violate LCSSA form

llvm-svn: 123066
2011-01-08 17:38:45 +00:00
Cameron Zwarich b4ab257bcc Fix coding style issues.
llvm-svn: 123065
2011-01-08 17:07:11 +00:00
Cameron Zwarich 84986b298a Make more passes preserve dominators (or state that they preserve dominators if
they already do). This removes two dominator recomputations prior to isel,
which is a 1% improvement in total llc time for 403.gcc.

The only potentially suspect thing is making GCStrategy recompute dominators if
it used a custom lowering strategy.

llvm-svn: 123064
2011-01-08 17:01:52 +00:00
Cameron Zwarich 80bd9af7c5 Contract subloop bodies. However, it is still important to visit the phis at the
top of subloop headers, as the phi uses logically occur outside of the subloop.

llvm-svn: 123062
2011-01-08 15:52:22 +00:00
Frits van Bommel 6a1fb8f235 Fix a bug in r123034 (trying to sext/zext non-integers) and clean up a little.
llvm-svn: 123061
2011-01-08 10:51:36 +00:00
Chris Lattner 8c5defd0b0 Have loop-rotate simplify instructions (yay instsimplify!) as it clones
them into the loop preheader, eliminating silly instructions like
"icmp i32 0, 100" in fixed tripcount loops.  This also better exposes the 
bigger problem with loop rotate that I'd like to fix: once this has been
folded, the duplicated conditional branch *often* turns into an uncond branch.

Not aggressively handling this is pessimizing later loop optimizations 
somethin' fierce by making "dominates all exit blocks" checks fail.

llvm-svn: 123060
2011-01-08 08:24:46 +00:00
Chris Lattner 43f8d16482 Revamp the ValueMapper interfaces in a couple ways:
1. Take a flags argument instead of a bool.  This makes
   it more clear to the reader what it is used for.
2. Add a flag that says that "remapping a value not in the
   map is ok".
3. Reimplement MapValue to share a bunch of code and be a lot
   more efficient.  For lookup failures, don't drop null values
   into the map.
4. Using the new flag a bunch of code can vaporize in LinkModules
   and LoopUnswitch, kill it.

No functionality change.

llvm-svn: 123058
2011-01-08 08:15:20 +00:00
Chris Lattner 2b3f20e6ec two minor changes: switch to the standard ValueToValueMapTy
map from ValueMapper.h (giving us access to its utilities)
and add a fastpath in the loop rotation code, avoiding expensive
ssa updater manipulation for values with nothing to update.

llvm-svn: 123057
2011-01-08 07:21:31 +00:00
Tobias Grosser fc3d7f664b InstCombine: Match min/max hidden by sext/zext
X = sext x; x >s c ? X : C+1 --> X = sext x; X <s C+1 ? C+1 : X
X = sext x; x <s c ? X : C-1 --> X = sext x; X >s C-1 ? C-1 : X
X = zext x; x >u c ? X : C+1 --> X = zext x; X <u C+1 ? C+1 : X
X = zext x; x <u c ? X : C-1 --> X = zext x; X >u C-1 ? C-1 : X
X = sext x; x >u c ? X : C+1 --> X = sext x; X <u C+1 ? C+1 : X
X = sext x; x <u c ? X : C-1 --> X = sext x; X >u C-1 ? C-1 : X

Instead of calculating this with mixed types promote all to the
larger type. This enables scalar evolution to analyze this
expression. PR8866

llvm-svn: 123034
2011-01-07 21:33:14 +00:00
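A possible C rendering of the first pattern above (illustrative only; the
constant 10 and the names are made up). The compare on the narrow value plus
the select of the wide value become a compare and select on the wide value
alone, a plain min/max that scalar evolution can analyze:

long clamped(int x) {
  long wide = x;              /* X = sext x                        */
  return x > 10 ? wide : 11;  /* x >s c ? X : C+1                  */
  /* becomes: wide < 11 ? 11 : wide   (X <s C+1 ? C+1 : X)         */
}
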
Tobias Grosser 411e6eedff Some whitespace fixes
llvm-svn: 123033
2011-01-07 21:33:13 +00:00
Benjamin Kramer 134cde912a Revert 122959, it needs more thought. Add it back to README.txt with additional notes.
llvm-svn: 123030
2011-01-07 20:42:20 +00:00
Jay Foad 89afb43b1e Remove all uses of the "ugly" method BranchInst::setUnconditionalDest().
llvm-svn: 123025
2011-01-07 20:25:56 +00:00
Benjamin Kramer ae67cc13a9 InstCombine: Turn _chk functions into the "unsafe" variant if length and max length are equal.
This happens when we take the (non-constant) length from a malloc.

llvm-svn: 122961
2011-01-06 14:22:52 +00:00
Benjamin Kramer 799b011276 InstCombine: If we call llvm.objectsize on a malloc call we can replace it with the size passed to malloc.
llvm-svn: 122959
2011-01-06 13:11:05 +00:00
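A C sketch of how the two entries above combine (hypothetical example using the
fortify builtins): once llvm.objectsize on the malloc'd pointer folds to n, the
checked copy's length and maximum length are the same value, so the _chk call
can be replaced by a plain memcpy.

#include <stdlib.h>

void *duplicate(const void *src, size_t n) {
  void *dst = malloc(n);
  if (dst)   /* object size folds to n; length == max length -> memcpy */
    __builtin___memcpy_chk(dst, src, n, __builtin_object_size(dst, 0));
  return dst;
}
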
Benjamin Kramer a76cc117e0 InstCombine: Teach llvm.objectsize folding to look through GEPs.
llvm-svn: 122958
2011-01-06 13:07:49 +00:00
Cameron Zwarich 9ec19ea06a Add the CallInst optimizations that don't involve expanding inline assembly to
OptimizeInst() so that they can be used on a worklist instruction.

llvm-svn: 122945
2011-01-06 02:56:42 +00:00
Cameron Zwarich d28c78eb4f Move the GEP handling in CodeGenPrepare to OptimizeInst().
llvm-svn: 122944
2011-01-06 02:44:52 +00:00
Cameron Zwarich 14ac865ca9 Split the optimizations in CodeGenPrepare that don't manipulate the iterators
into a separate function, so that it can be called from a loop using a worklist
rather than a loop traversing a whole basic block.

llvm-svn: 122943
2011-01-06 02:37:26 +00:00
Jakob Stoklund Olesen 70be93a200 Zap the last two -Wself-assign warnings in llvm.
Simplify RALinScan::DowngradeRegister with TRI::getOverlaps while we are there.

llvm-svn: 122940
2011-01-06 01:33:22 +00:00
Cameron Zwarich ce3b930a98 Stop reallocating SunkAddrs for each basic block. When we move to an instruction
worklist, the key will need to become std::pair<BasicBlock*, Value*>.

llvm-svn: 122932
2011-01-06 00:42:50 +00:00
Cameron Zwarich b62ccb241b Add some more statistics to CodeGenPrepare.
llvm-svn: 122891
2011-01-05 17:47:38 +00:00
Cameron Zwarich ced753fadf Add some stats to CodeGenPrepare to make it easier to speed it up without
regressing code quality.

llvm-svn: 122887
2011-01-05 17:27:27 +00:00
Cameron Zwarich 6a78995369 Use pop_back_val instead of back followed by pop_back.
llvm-svn: 122876
2011-01-05 16:08:47 +00:00
Cameron Zwarich 5a2bb998ac Use a worklist for later iterations just like ordinary instsimplify. The next
step is to only process instructions in subloops if they have been modified by
an earlier simplification.

llvm-svn: 122869
2011-01-05 05:47:47 +00:00
Cameron Zwarich 4c51d122d5 Change LoopInstSimplify back to a LoopPass. It revisits subloops rather than
skipping them, but it should probably use a worklist and only revisit those
instructions in subloops that have actually changed. It should probably also
use a worklist after the first iteration like instsimplify now does. Regardless,
it's only 0.3% of opt -O2 time on 403.gcc if it replaces the instcombine placed
in the middle of the loop passes.

llvm-svn: 122868
2011-01-05 05:15:53 +00:00
Owen Anderson 7b25ff04bd Don't bother value numbering instructions with void types in GVN. In theory this should allow us to insert
fewer things into the value numbering maps, but any speedup is beneath the noise threshold on my machine
on 403.gcc.

llvm-svn: 122844
2011-01-04 22:15:21 +00:00
Owen Anderson e39cb57b09 Complete the NumberTable --> LeaderTable rename.
llvm-svn: 122828
2011-01-04 19:29:46 +00:00
Owen Anderson d7d06d3aaf Fix typo in a comment.
llvm-svn: 122827
2011-01-04 19:25:18 +00:00
Owen Anderson 51489b3b28 Prune #include's.
llvm-svn: 122826
2011-01-04 19:24:57 +00:00
Owen Anderson c7c3bc63f7 Clarify terminology, settling on referring to what was the "number table" as the "leader table", and
rename methods to make it much more clear what they're doing.

llvm-svn: 122823
2011-01-04 19:13:25 +00:00
Owen Anderson 83546f2fe0 When removing a value from GVN's leaders list, don't drop the Next pointer in a corner case.
llvm-svn: 122822
2011-01-04 19:10:54 +00:00
Dale Johannesen a71d2cc88d Improve the accuracy of the inlining heuristic looking for the
case where a static caller is itself inlined everywhere else, and
thus may go away if it doesn't get too big due to inlining other
things into it.  If there are references to the caller other than
calls, it will not be removed; account for this.
This results in same-day completion of the case in PR8853.

llvm-svn: 122821
2011-01-04 19:01:54 +00:00
Owen Anderson 41a1550ef5 Branch instructions don't produce values, so there's no need to generate a value number for them. This
avoids adding them to the various value numbering tables, resulting in a minor (~3%) speedup for GVN
on 403.gcc.

llvm-svn: 122819
2011-01-04 18:54:18 +00:00
Owen Anderson 22c53e277a Remove commented out code.
llvm-svn: 122817
2011-01-04 18:22:08 +00:00
Cameron Zwarich b2a41e9388 Switch to the new style of asterisk placement.
llvm-svn: 122815
2011-01-04 18:19:19 +00:00
Chris Lattner 8643810ede Teach loop-idiom to turn a loop containing a memset into a larger memset
when safe.

The testcase is basically this nested loop:
void foo(char *X) {
  for (int i = 0; i != 100; ++i) 
    for (int j = 0; j != 100; ++j)
      X[j+i*100] = 0;
}

which gets turned into a single memset now.  clang -O3 doesn't optimize
this yet though due to a phase ordering issue I haven't analyzed yet.

llvm-svn: 122806
2011-01-04 07:46:33 +00:00
Chris Lattner a62b01dc37 restructure this a bit. Initialize the WeakVH with "I", the
instruction *after* the store.  The store will always be deleted
if the transformation kicks in, so we'd do an N^2 scan of every
loop block.  Whoops.

llvm-svn: 122805
2011-01-04 07:27:30 +00:00
Cameron Zwarich f4e13699e7 Avoid finding loop back edges when we are not splitting critical edges in
CodeGenPrepare (which is the default behavior).

llvm-svn: 122801
2011-01-04 04:43:31 +00:00
Cameron Zwarich e924969380 Address most of Duncan's review comments. Also, make LoopInstSimplify a simple
FunctionPass. It probably doesn't have a reason to be a LoopPass, as it will
probably drop the simple fixed point and either use RPO iteration or Duncan's
approach in instsimplify of only revisiting instructions that have changed.

The next step is to preserve LoopSimplify. This looks like it won't be too hard,
although the pass manager doesn't actually seem to respect when non-loop passes
claim to preserve LCSSA or LoopSimplify. This will have to be fixed.

llvm-svn: 122791
2011-01-04 00:12:46 +00:00
Chris Lattner 0ba473c218 use the very-handy getTruncateOrZeroExtend helper function, and
stop setting NSW: signed overflow is possible.  Thanks to Dan
for pointing these out.

llvm-svn: 122790
2011-01-04 00:06:55 +00:00
Owen Anderson 0839d3930a Fix comment.
llvm-svn: 122788
2011-01-03 23:51:56 +00:00
Owen Anderson d62d37225a Use the new addEscapingValue callback to update GlobalsModRef when GVN adds PHIs of GEPs. For the moment,
have GlobalsModRef handle this conservatively by simply removing the value from its maps.

llvm-svn: 122787
2011-01-03 23:51:43 +00:00
Chris Lattner bde6ec1db6 Duncan deftly points out that readnone functions aren't
invalidated by stores, so they can be handled as 'simple'
operations.

llvm-svn: 122785
2011-01-03 23:38:13 +00:00
Owen Anderson 3a33d0cc4a Simplify GVN's value expression structure, allowing the elimination of a lot of
almost-but-not-quite-identical code.  No intended functionality change.

llvm-svn: 122760
2011-01-03 19:00:11 +00:00
Chris Lattner 16ca19ffc5 strength reduce my previous patch a bit.  The only instructions
that are allowed to have metadata operands are intrinsic calls,
and the only ones that take metadata currently return void.
Just reject all void instructions, which should not be value
numbered anyway.  To future proof things, add an assert to the
getHashValue impl for calls to check that metadata operands 
aren't present.

llvm-svn: 122759
2011-01-03 18:43:03 +00:00
Chris Lattner 142f1cd251 fix PR8895: metadata operands don't have a strong use of their
nested values, so they can change and drop to null, which can
change the hash and cause havoc.

It turns out that it isn't a good idea to value number stuff
with metadata operands anyway, so... don't.

llvm-svn: 122758
2011-01-03 18:28:15 +00:00
Duncan Sands 697de77339 Speed up instsimplify by about 10-15% by not bothering to retry
InstructionSimplify on instructions that didn't change since the
last time round the loop.

llvm-svn: 122745
2011-01-03 10:50:04 +00:00
Cameron Zwarich 43cecb1200 Switch a worklist in CodeGenPrepare to SmallVector and increase the inline
capacity on the Visited SmallPtrSet. On 403.gcc, this is about a 4.5% speedup of
CodeGenPrepare time (which itself is 10% of time spent in the backend).

This is progress towards PR8889.

llvm-svn: 122741
2011-01-03 06:33:01 +00:00
Chris Lattner 9e5e9ed79a earlycse can do trivial within-a-block dead store
elimination as well.  This deletes 60 stores in 176.gcc
that largely come from bitfield code.

llvm-svn: 122736
2011-01-03 04:17:24 +00:00
Chris Lattner 4b9a525742 switch the load table to use a recycling bump pointer allocator,
speeding earlycse up by 6%.

llvm-svn: 122733
2011-01-03 03:53:50 +00:00
Chris Lattner e0e32a9ef0 now that loads are in their own table, we can implement
store->load forwarding.  This allows EarlyCSE to zap 600 more
loads from 176.gcc.

llvm-svn: 122732
2011-01-03 03:46:34 +00:00
Chris Lattner 92bb0f9f9d split loads and calls into separate tables. Loads are now just indexed
by their pointer instead of using MemoryValue to wrap it.

llvm-svn: 122731
2011-01-03 03:41:27 +00:00
Chris Lattner 4cb365414f various cleanups, no functionality change.
llvm-svn: 122729
2011-01-03 03:28:23 +00:00
Chris Lattner b9a8efc960 Teach EarlyCSE to do trivial CSE of loads and read-only calls.
On 176.gcc, this catches 13090 loads and calls, and increases the
number of simple instructions CSE'd from 29658 to 36208.

llvm-svn: 122727
2011-01-03 03:18:43 +00:00
Chris Lattner 79d83067ee rename InstValue to SimpleValue, add some comments.
llvm-svn: 122725
2011-01-03 02:20:48 +00:00
Michael J. Spencer edb5bcdde5 CMake: Add missing source file.
llvm-svn: 122724
2011-01-03 02:13:05 +00:00
Chris Lattner d815f69b30 Allocate nodes for the scoped hash table from a recycling bump pointer
allocator.  This speeds up early cse by about 20%

llvm-svn: 122723
2011-01-03 01:42:46 +00:00
Chris Lattner 02a9776b64 reduce redundancy in the hashing code and other misc cleanups.
llvm-svn: 122720
2011-01-03 01:10:08 +00:00
Cameron Zwarich cab9a0abab Add a new loop-instsimplify pass, with the intention of replacing the instance
of instcombine that is currently in the middle of the loop pass pipeline. This
commit only checks in the pass; it will hopefully be enabled by default later.

llvm-svn: 122719
2011-01-03 00:25:16 +00:00
Chris Lattner 0844c76f9a fix some pastos
llvm-svn: 122718
2011-01-02 23:29:58 +00:00
Chris Lattner 8fac5db251 add DEBUG and -stats output to earlycse.
Teach it to CSE the rest of the non-side-effecting instructions.

llvm-svn: 122716
2011-01-02 23:19:45 +00:00
Chris Lattner 18ae5436b1 Enhance earlycse to do CSE of casts, instsimplify and die.
Add a testcase.

llvm-svn: 122715
2011-01-02 23:04:14 +00:00
Chris Lattner bf0aa927cc split dom frontier handling stuff out to its own DominanceFrontier header,
so that Dominators.h is *just* domtree.  Also prune #includes a bit.

llvm-svn: 122714
2011-01-02 22:09:33 +00:00
Chris Lattner 704541bb23 sketch out a new early cse pass. No functionality yet.
llvm-svn: 122713
2011-01-02 21:47:05 +00:00
Chris Lattner 9c69406f2b fix a miscompilation of tramp3d-v4: when forming a memcpy, we have to make
sure that the loop we're promoting into a memcpy doesn't mutate the input
of the memcpy.  Before we were just checking that the dest of the memcpy
wasn't mod/ref'd by the loop.

llvm-svn: 122712
2011-01-02 21:14:18 +00:00
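A hedged C sketch of the unsafe case being guarded against (hypothetical
example): the loop's own stores feed later loads from the would-be memcpy
source, so checking only the destination is not enough.

void smear(char *p, unsigned n) {
  /* looks like memcpy(p + 1, p, n), but the store to p[i + 1] feeds the
     load of p[i] in the next iteration, so a memcpy would change behavior */
  for (unsigned i = 0; i != n; ++i)
    p[i + 1] = p[i];
}
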
Chris Lattner 5702a43c09 If a loop iterates exactly once (has backedge count = 0) then don't
mess with it.  We'd rather peel/unroll it than convert all of its 
stores into memsets.

llvm-svn: 122711
2011-01-02 20:24:21 +00:00
Nick Lewycky 5361b84184 Also remove functions that use complex constant expressions in terms of
another function.

llvm-svn: 122705
2011-01-02 19:16:44 +00:00
Chris Lattner 8455b6e45e enhance loop idiom recognition to scan *all* unconditionally executed
blocks in a loop, instead of just the header block.  This makes it more
aggressive, able to handle Duncan's Ada examples.

llvm-svn: 122704
2011-01-02 19:01:03 +00:00
Chris Lattner 0cdc6f62a5 make inSubLoop much more efficient.
llvm-svn: 122703
2011-01-02 18:53:08 +00:00
Chris Lattner 27497ece96 rip out isExitBlockDominatedByBlockInLoop, calling DomTree::dominates instead.
isExitBlockDominatedByBlockInLoop is a relic of the days when domtree was 
*just* a tree and didn't have DFS numbers.  Checking DFS numbers is faster
and easier than "limiting the search of the tree".

llvm-svn: 122702
2011-01-02 18:45:39 +00:00
Chris Lattner 0469e01c02 add a list of opportunities for future improvement.
llvm-svn: 122701
2011-01-02 18:32:09 +00:00
Duncan Sands 64f1c0dcda Fix PR8702 by not having LoopSimplify claim to preserve LCSSA form. As described
in the PR, the pass could break LCSSA form when inserting preheaders.  It probably
would be easy enough to fix this, but since currently we always go into LCSSA form
after running this pass, doing so is not urgent.

llvm-svn: 122695
2011-01-02 13:38:21 +00:00
Chris Lattner ddf58010bd Allow loop-idiom to run on multiple BB loops, but still only scan the loop
header for now for memset/memcpy opportunities.  It turns out that loop-rotate
is successfully rotating loops, but *DOESN'T MERGE THE BLOCKS*, turning "for 
loops" into 2 basic block loops that loop-idiom was ignoring.

With this fix, we form many *many* more memcpy and memsets than before, including
on the "history" loops in the viterbi benchmark, which look like this:

        for (j=0; j<MAX_history; ++j) {
          history_new[i][j+1] = history[2*i][j];
        }

Transforming these loops into memcpy's speeds up the viterbi benchmark from
11.98s to 3.55s on my machine.  Woo.

llvm-svn: 122685
2011-01-02 07:58:36 +00:00
Chris Lattner 5b5a043d82 remove debugging code.
llvm-svn: 122683
2011-01-02 07:37:13 +00:00
Chris Lattner 12f91befce add some -stats output.
llvm-svn: 122682
2011-01-02 07:36:44 +00:00
Chris Lattner 679572e584 improve loop rotation to use CodeMetrics to analyze the
size of a loop header instead of its own code size estimator.
This allows it to handle bitcasts etc more precisely.

llvm-svn: 122681
2011-01-02 07:35:53 +00:00
Chris Lattner 85b6d81d41 teach loop idiom recognition to form memcpy's from simple loops.
llvm-svn: 122678
2011-01-02 03:37:56 +00:00
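A minimal C example of the kind of loop this covers (illustrative; restrict is
used so alias analysis can prove the accesses don't overlap):

void copy_bytes(char *restrict dst, const char *restrict src, unsigned n) {
  for (unsigned i = 0; i != n; ++i)
    dst[i] = src[i];   /* recognized and turned into memcpy(dst, src, n) */
}
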
Nick Lewycky 4e250c8245 Remove functions from the FnSet when one of their callee's is being merged. This
maintains the guarantee that the DenseSet expects two elements it contains to
not go from unequal to equal under its nose.

As a side-effect, this also lets us switch from iterating to a fixed-point to
actually maintaining a work queue of functions to look at again, and we don't
add thunks to our work queue so we don't need to detect and ignore them.

llvm-svn: 122677
2011-01-02 02:46:33 +00:00
Chris Lattner 1903c42b97 fix a globalopt crash on two Adobe-C++ testcases that the recent
loop idiom pass exposed.

llvm-svn: 122674
2011-01-01 22:31:46 +00:00
Chris Lattner a3514441e0 add a validity check that was missed, fixing a crash on the
new testcase.

llvm-svn: 122662
2011-01-01 20:12:04 +00:00
Chris Lattner 91a4435875 improve validity check to handle constant-trip-count loops more
aggressively.  In practice, this doesn't help anything though,
see the todo.

llvm-svn: 122660
2011-01-01 19:54:22 +00:00
Chris Lattner 8b3baf6d75 implement the "no aliasing accesses in loop" safety check. This pass
should be correct now.

llvm-svn: 122659
2011-01-01 19:39:01 +00:00
Duncan Sands 2c440fa403 Simplify this pass by using a depth-first iterator to ensure that all
operands are visited before the instructions themselves.

llvm-svn: 122647
2010-12-31 17:49:05 +00:00
Duncan Sands 6cc7126ed9 Zap dead instructions harder.
llvm-svn: 122645
2010-12-31 16:17:54 +00:00
Benjamin Kramer 570dd787a6 Make a bunch of symbols internal.
llvm-svn: 122642
2010-12-30 22:34:44 +00:00
Chris Lattner 65a699d4d0 simplify this, isBytewiseValue handles the extra check. We still
check for "multiple of a byte" in size to make it clear that the
>> 3 below is safe.

llvm-svn: 122604
2010-12-28 18:53:48 +00:00
Duncan Sands 5cf10e691b Silence gcc warning about an unused variable when doing a release build.
llvm-svn: 122593
2010-12-28 09:41:15 +00:00
Chris Lattner cb18bfa3d2 fix some issues Frits noticed, add AliasAnalysis as a dependency
llvm-svn: 122585
2010-12-27 18:39:08 +00:00
Benjamin Kramer 84bd73c527 BuildLibCalls: Nuke EmitMemCpy, EmitMemMove and EmitMemSet. They are dead and superseded by IRBuilder.
llvm-svn: 122576
2010-12-27 00:25:32 +00:00
Benjamin Kramer 7cba269dfb SimplifyLibCalls: Use IRBuilder to simplify code.
llvm-svn: 122575
2010-12-27 00:16:46 +00:00
Chris Lattner b9fe685b9a have loop-idiom nuke instructions that feed stores that get removed.
llvm-svn: 122574
2010-12-27 00:03:23 +00:00
Chris Lattner 29e14edc8d implement enough of the memset inference algorithm to recognize and insert
memsets.  This is still missing one important validity check, but this is enough
to compile stuff like this:

void test0(std::vector<char> &X) {
  for (std::vector<char>::iterator I = X.begin(), E = X.end(); I != E; ++I)
    *I = 0;
}

void test1(std::vector<int> &X) {
  for (long i = 0, e = X.size(); i != e; ++i)
    X[i] = 0x01010101;
}

With:
 $ clang t.cpp -S -o - -O2 -emit-llvm | opt -loop-idiom | opt -O3 | llc 

to:

__Z5test0RSt6vectorIcSaIcEE:            ## @_Z5test0RSt6vectorIcSaIcEE
## BB#0:                                ## %entry
	subq	$8, %rsp
	movq	(%rdi), %rax
	movq	8(%rdi), %rsi
	cmpq	%rsi, %rax
	je	LBB0_2
## BB#1:                                ## %bb.nph
	subq	%rax, %rsi
	movq	%rax, %rdi
	callq	___bzero
LBB0_2:                                 ## %for.end
	addq	$8, %rsp
	ret
...
__Z5test1RSt6vectorIiSaIiEE:            ## @_Z5test1RSt6vectorIiSaIiEE
## BB#0:                                ## %entry
	subq	$8, %rsp
	movq	(%rdi), %rax
	movq	8(%rdi), %rdx
	subq	%rax, %rdx
	cmpq	$4, %rdx
	jb	LBB1_2
## BB#1:                                ## %for.body.preheader
	andq	$-4, %rdx
	movl	$1, %esi
	movq	%rax, %rdi
	callq	_memset
LBB1_2:                                 ## %for.end
	addq	$8, %rsp
	ret

llvm-svn: 122573
2010-12-26 23:42:51 +00:00
Chris Lattner 6cf8d6cc6e start using irbuilder to make mem intrinsics in a few passes.
llvm-svn: 122572
2010-12-26 22:57:41 +00:00
Chris Lattner 7c5f9c35d1 sketch more of this out.
llvm-svn: 122567
2010-12-26 20:45:45 +00:00
Chris Lattner 9cb1035f94 move isBytewiseValue out to ValueTracking.h/cpp
llvm-svn: 122565
2010-12-26 20:15:01 +00:00
Chris Lattner 81ae3f299a actually add the file...
llvm-svn: 122563
2010-12-26 19:39:38 +00:00
Chris Lattner 2ef535a4e4 Start of a pass for recognizing memset and memcpy idioms.
No functionality yet.

llvm-svn: 122562
2010-12-26 19:32:44 +00:00
Benjamin Kramer 30342fb1fd Simplify code.
llvm-svn: 122561
2010-12-26 15:23:45 +00:00
Chris Lattner d729d0dcdb don't lose TD info
llvm-svn: 122556
2010-12-25 20:52:04 +00:00
Chris Lattner 20fca48341 switch the inliner alignment enforcement stuff to use the
getOrEnforceKnownAlignment function, which simplifies the code
and makes it stronger.

llvm-svn: 122555
2010-12-25 20:42:38 +00:00
Chris Lattner 6fcd32e7d7 Move getOrEnforceKnownAlignment out of instcombine into Transforms/Utils.
llvm-svn: 122554
2010-12-25 20:37:57 +00:00
Benjamin Kramer b90b2f0635 Fix a thinko pointed out by Frits van Bommel: looking through global variables in isBytewiseValue is not safe.
llvm-svn: 122550
2010-12-24 22:23:59 +00:00
Benjamin Kramer ea9152e551 MemCpyOpt: Turn memcpys from a constant into a memset if possible.
This allows us to compile "int cst[] = {-1, -1, -1};" into
  movl  $-1, 16(%rsp)
  movq  $-1, 8(%rsp)
instead of
  movl  _cst+8(%rip), %eax
  movl  %eax, 16(%rsp)
  movq  _cst(%rip), %rax
  movq  %rax, 8(%rsp)

llvm-svn: 122548
2010-12-24 21:17:12 +00:00
Owen Anderson 226ac14afb When determining if we can fold (x >> C1) << C2, the bits that we need to verify are zero
are not the low bits of x, but the bits that WILL be the low bits after the operation completes.

llvm-svn: 122529
2010-12-23 23:56:24 +00:00
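A worked instance of the check described above (hypothetical numbers): folding
(x >> 5) << 3 to x >> 2 is only valid when bits 2..4 of x are known zero,
because those become the low bits of the result; the low bits 0..1 of x are
shifted away entirely.

unsigned shr_then_shl(unsigned x) {
  /* equals (x >> 2) only when (x & 0x1c) == 0, i.e. bits 2..4 are zero */
  return (x >> 5) << 3;
}
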
Owen Anderson 5d690d4168 It is possible for SimplifyCFG to cause PHI nodes to become redundant too late in the optimization
pipeline to be caught by instcombine, and it's not feasible to catch them in SimplifyCFG because the
use-lists are in an inconsistent state at the point where it could know that it needs to simplify them.
Instead, have CodeGenPrepare look for trivially redundant PHIs as part of its general cleanup effort.

llvm-svn: 122516
2010-12-23 20:57:35 +00:00
Mon P Wang 18b762a946 Preserve the address space when generating bitcasts for MemTransferInst in ConvertToScalarInfo
llvm-svn: 122462
2010-12-23 01:41:32 +00:00
Jeffrey Yasskin 9b43f33620 Change all self assignments X=X to (void)X, so that we can turn on a
new gcc warning that complains on self-assignments and
self-initializations.

llvm-svn: 122458
2010-12-23 00:58:24 +00:00
Benjamin Kramer 8ef5001b27 InstCombine: creating selects from -1 and 0 is fine, they combine into a sext from i1.
llvm-svn: 122453
2010-12-22 23:12:15 +00:00
Duncan Sands fbb9ac3cca Add a generic expansion transform: A op (B op' C) -> (A op B) op' (A op C)
if both A op B and A op C simplify.  This fires fairly often but doesn't
make that much difference.  On gcc-as-one-file it removes two "and"s and
turns one branch into a select.

llvm-svn: 122399
2010-12-22 13:36:08 +00:00
Duncan Sands 3547d2ebd8 Add some statistics, good for understanding how much more powerful
instcombine is compared to instsimplify.

llvm-svn: 122397
2010-12-22 09:40:51 +00:00
Owen Anderson 5ab8d4b5e5 Give GVN back the ability to perform simple conditional propagation on conditional branch values.
I still think that LVI should be handling this, but that capability is some ways off in the future,
and this matters for some significant benchmarks.

llvm-svn: 122378
2010-12-21 23:54:34 +00:00
Owen Anderson 12470778d7 Remove dead code.
llvm-svn: 122371
2010-12-21 22:31:24 +00:00
Benjamin Kramer 43493c089f GVN's Expression is not POD-like (it contains a SmallVector). Simplify code while at it.
llvm-svn: 122362
2010-12-21 21:30:19 +00:00
Duncan Sands 3b8af41a3e Visit instructions deterministically. Use a FIFO so as to approximately
visit instructions before their uses, since InstructionSimplify does a
better job in that case.  All this prompted by Frits van Bommel.

llvm-svn: 122343
2010-12-21 17:08:55 +00:00
Duncan Sands e7cbb64ec0 If an instruction simplifies, try again to simplify any uses of it. This is
not very important since the pass is only used for testing, but it does make
it more realistic.  Suggested by Frits van Bommel.

llvm-svn: 122336
2010-12-21 16:12:03 +00:00
Duncan Sands d0eb6d39f8 Pull a few more simplifications out of instcombine (there are still
plenty left though!), in particular for multiplication.

llvm-svn: 122330
2010-12-21 14:00:22 +00:00
Duncan Sands eaff500c7b Oops, forgot to add the pass itself!
llvm-svn: 122265
2010-12-20 21:07:42 +00:00
Duncan Sands a436cbe4bf Add a new convenience pass for testing InstructionSimplify. Previously
it could only be tested indirectly, via instcombine, gvn or some other
pass that makes use of InstructionSimplify, which means that testcases
had to be carefully contrived to dance around any other transformations
that that pass did.

llvm-svn: 122264
2010-12-20 20:54:37 +00:00
Benjamin Kramer f7957d0463 Add a check missing from my last commit and avoid a potential overflow situation.
llvm-svn: 122258
2010-12-20 20:00:31 +00:00
Benjamin Kramer 2bca3a67b3 Reduce indentation.
llvm-svn: 122249
2010-12-20 16:21:59 +00:00
Benjamin Kramer 68531baea9 Teach InstCombine to merge (icmp ult (X + CA), C1) | (icmp eq X, C2) into (icmp ult (X + CA), C1 + 1) if C2 + CA == C1.
InstCombine creates these so now we compile x == 23 || x == 24 || x == 25 to
  %x.off = add i32 %x, -23
  %1 = icmp ult i32 %x.off, 3
instead of
  %x.off = add i32 %x, -23
  %1 = icmp ult i32 %x.off, 2
  %cmp3 = icmp eq i32 %x, 25
  %ret2 = or i1 %1, %cmp3

llvm-svn: 122248
2010-12-20 16:18:51 +00:00
Chris Lattner 27ca8ebd4b fix PR8807 by making transformConstExprCastCall aware of byval arguments.
llvm-svn: 122238
2010-12-20 08:36:38 +00:00
Chris Lattner 7398965b67 various cleanups for transformConstExprCastCall
llvm-svn: 122237
2010-12-20 08:25:06 +00:00
Chris Lattner 0f11495289 when eliding a byval copy due to inlining a readonly function, we have
to make sure that the reused alloca has sufficient alignment.

llvm-svn: 122236
2010-12-20 08:10:40 +00:00
Chris Lattner 0099744506 pull byval processing out to its own helper function.
llvm-svn: 122235
2010-12-20 07:57:41 +00:00
Chris Lattner 7394680a00 fix PR8769, a miscompilation by the inliner when inlining a function with a byval
argument.  The generated alloca has to have at least the alignment of the
byval; if not, the client may be making assumptions that the new alloca won't
satisfy.

llvm-svn: 122234
2010-12-20 07:45:28 +00:00
Mon P Wang 1991c47ec1 Avoid dropping the address space when InstCombine optimizes memset
llvm-svn: 122215
2010-12-20 01:05:30 +00:00
Chris Lattner 4fb9dd4c74 fix an oversight caught by Frits!
llvm-svn: 122204
2010-12-19 23:24:04 +00:00
Chris Lattner b6252a376a tidy up
llvm-svn: 122190
2010-12-19 20:24:28 +00:00
Chris Lattner 3e635d2e99 move a transformation to a more logical place, simplifying it.
llvm-svn: 122183
2010-12-19 19:43:52 +00:00
Chris Lattner 5e0c0c72e9 recognize an unsigned add-with-overflow idiom and turn it into uadd.
This resolves a README entry and technically resolves PR4916,
but we still get poor code for the testcase in that PR because
GVN isn't CSE'ing uadd with add, filed as PR8817.

Previously we got:

_test7:                                 ## @test7
	addq	%rsi, %rdi
	cmpq	%rdi, %rsi
	movl	$42, %eax
	cmovaq	%rsi, %rax
	ret

Now we get:

_test7:                                 ## @test7
	addq	%rsi, %rdi
	movl	$42, %eax
	cmovbq	%rsi, %rax
	ret

llvm-svn: 122182
2010-12-19 19:37:52 +00:00
Chris Lattner 33dc3f0cfa optimize uadd(x, cst) into a comparison when the normal
result is dead.  This is required for my next patch to not
regress the testsuite.

llvm-svn: 122181
2010-12-19 19:35:32 +00:00
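A minimal IR sketch of the fold described above (illustrative only, with an assumed
constant of 100): when only the overflow flag of llvm.uadd.with.overflow is used, it
collapses to an unsigned compare.

  %res = call { i32, i1 } @llvm.uadd.with.overflow.i32(i32 %x, i32 100)
  %ov  = extractvalue { i32, i1 } %res, 1
  ; with the add result dead, %ov is equivalent to:
  %ov  = icmp ugt i32 %x, -101    ; overflows iff %x u> UINT_MAX - 100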
Chris Lattner ce2995ae58 use IC.ReplaceInstUsesWith instead of a raw RAUW so that uses of
the old thing end up on the instcombine worklist.  Not doing this
can cause an extra top-level iteration of instcombine, burning
compile time.

llvm-svn: 122179
2010-12-19 18:38:44 +00:00
Chris Lattner 79874566ce generalize the sadd creation code to not require that the
sadd formed is half the size of the original type. We can
now compile this into a sadd.i8:

unsigned char X(char a, char b) {
  int res = a+b;
  if ((unsigned )(res+128) > 255U)
    abort();
  return res;
}

llvm-svn: 122178
2010-12-19 18:35:09 +00:00
Chris Lattner c56c845377 fix another miscompile in the llvm.sadd formation logic: it wasn't
checking to see if the high bits of the original add result were dead.
Inserting a smaller add and zexting back to that size is not good enough.

This is likely to be the fix for 8816.

llvm-svn: 122177
2010-12-19 18:22:06 +00:00
Chris Lattner f29562db25 fix a bug (possibly 8816) in the sadd forming xform: it isn't
profitable (or safe) to promote code when the add-with-constant
has other uses.

llvm-svn: 122175
2010-12-19 17:59:02 +00:00
Chris Lattner ee61c1d820 rework the code added in r122072 to pull it out to its own
helper function, clean up comments, and reduce indentation.
No functionality change.

llvm-svn: 122174
2010-12-19 17:52:50 +00:00
Chris Lattner 408a684d29 Enhance LICM to promote alias sets whose pointers themselves are stored,
which doesn't affect the memory address being promoted.

llvm-svn: 122172
2010-12-19 05:57:25 +00:00
Chris Lattner 3337a81450 fix PR8602, a bug in an assertion: a volatile store *of* a pointer
does not make the alias set for that pointer volatile, just stores
*to* the pointer.

llvm-svn: 122171
2010-12-19 05:51:54 +00:00
Chris Lattner fb888622c3 revert r122164, I'm going to go with a different approach.
llvm-svn: 122168
2010-12-19 04:23:03 +00:00
Chris Lattner 583ec6fa44 first step to fixing PR8642: don't fold away empty basic blocks
which have trapping constant exprs in them due to PHI nodes.
Eliminating them can cause the constant expr to be evaluated
on new paths if the input edges are critical.

llvm-svn: 122164
2010-12-19 03:02:34 +00:00
Chris Lattner 6b8b4855ff simplify this a bit.
llvm-svn: 122156
2010-12-18 20:22:49 +00:00
Bill Wendling 5e3605552e Whitespace fixes. No functionality change.
llvm-svn: 122110
2010-12-17 23:27:41 +00:00
Nate Begeman 7aa18bf46a Add vector versions of some existing scalar transforms to aid codegen in matching psign & pblend operations to the IR produced by clang/gcc for their C idioms.
llvm-svn: 122105
2010-12-17 23:12:19 +00:00
Owen Anderson 1294ea7d53 Reapply r121905 (automatic synthesis of @llvm.sadd.with.overflow) with a fix for a bug that manifested itself
on the DragonEgg self-host bot.  Unfortunately, the testcase is pretty messy and doesn't reduce well due to
interactions with other parts of InstCombine.

llvm-svn: 122072
2010-12-17 18:08:00 +00:00
Benjamin Kramer e5f49c4ff2 SimplifyCFG: Ranges can be larger than 64 bits. Fixes Release-selfhost build.
llvm-svn: 122054
2010-12-17 10:48:14 +00:00
Chris Lattner d14b0f1db7 improve switch formation to handle small range
comparisons formed by comparisons.  For example,
this:

void foo(unsigned x) {
  if (x == 0 || x == 1 || x == 3 || x == 4 || x == 6) 
    bar();
}

compiles into:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	cmpl	$6, %edi
	ja	LBB0_2
## BB#1:                                ## %entry
	movl	%edi, %eax
	movl	$91, %ecx
	btq	%rax, %rcx
	jb	LBB0_3

instead of:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	cmpl	$2, %edi
	jb	LBB0_4
## BB#1:                                ## %switch.early.test
	cmpl	$6, %edi
	ja	LBB0_3
## BB#2:                                ## %switch.early.test
	movl	%edi, %eax
	movl	$88, %ecx
	btq	%rax, %rcx
	jb	LBB0_4

This catches a bunch of cases in GCC, which look like this:

 %804 = load i32* @which_alternative, align 4, !tbaa !0
 %805 = icmp ult i32 %804, 2
 %806 = icmp eq i32 %804, 3
 %or.cond121 = or i1 %805, %806
 %807 = icmp eq i32 %804, 4
 %or.cond124 = or i1 %or.cond121, %807
 br i1 %or.cond124, label %.thread, label %808

turning this into a range comparison.

llvm-svn: 122045
2010-12-17 06:20:15 +00:00
Dan Gohman 93dc2b808f Revert r64460. strtol and friends cannot be marked readonly, even with
a null endptr argument, because they may write to errno.

This fixes a selfhost miscompile observed on Linux targets when TBAA
was enabled.

llvm-svn: 122014
2010-12-17 01:09:43 +00:00
Frits van Bommel 9bbe849fc3 Fix a bug in the loop in JumpThreading::ProcessThreadableEdges() where it could falsely produce a MultipleDestSentinel value if the first predecessor ended with an 'indirectbr'. If that happened, it caused an unnecessary FindMostPopularDest() call.
This wasn't a correctness problem, but it broke the fast path for single-predecessor blocks.

llvm-svn: 121966
2010-12-16 12:16:00 +00:00
Duncan Sands 8d1ab6f6e1 Speculatively revert commit 121905 since it looks like it might have broken the
dragonegg self-host buildbot.  Original commit message:

Add an InstCombine transform to recognize instances of manual overflow-safe addition
(performing the addition in a wider type and explicitly checking for overflow), and
fold them down to intrinsics.  This currently only supports signed-addition, but could
be generalized if someone works out the magic constant formulas for other operations.

llvm-svn: 121965
2010-12-16 09:40:54 +00:00
Dan Gohman e1a17a3473 Make memcpyopt TBAA-aware.
llvm-svn: 121944
2010-12-16 02:51:19 +00:00
Dan Gohman 4467aa5294 Preserve TBAA tags when doing load PRE.
llvm-svn: 121921
2010-12-15 23:53:55 +00:00
Owen Anderson 1cf8881299 Add an InstCombine transform to recognize instances of manual overflow-safe addition
(performing the addition in a wider type and explicitly checking for overflow), and
fold them down to intrinsics.  This currently only supports signed-addition, but could
be generalized if someone works out the magic constant formulas for other operations.

Fixes <rdar://problem/8558713>.

llvm-svn: 121905
2010-12-15 22:32:38 +00:00
Dan Gohman a4fcd2418d Move Value::getUnderlyingObject to be a standalone
function so that it can live in Analysis instead of
VMCore.

llvm-svn: 121885
2010-12-15 20:02:24 +00:00
Duncan Sands 0a2c416894 Move Sub simplifications and additional Add simplifications out of
instcombine and into InstructionSimplify.

llvm-svn: 121861
2010-12-15 14:07:39 +00:00
Frits van Bommel 3d1803495e Teach jump threading to "look through" a select when the branch direction of a terminator depends on it.
When it sees a promising select it now tries to figure out whether the condition of the select is known in any of the predecessors and if so it maps the operands appropriately.

llvm-svn: 121859
2010-12-15 09:51:20 +00:00
Chris Lattner e893e2601e make qsort predicate more conformant by returning 0 for equal values.
llvm-svn: 121838
2010-12-15 04:52:41 +00:00
Owen Anderson 35609d97ae Fix PR8790, another instance where unreachable code can cause instruction simplification to fail,
this case involve a select that simplifies to itself.

llvm-svn: 121817
2010-12-15 00:55:35 +00:00
Owen Anderson 15c85c916f Cleanup trailing whitespace.
llvm-svn: 121816
2010-12-15 00:52:44 +00:00
Chris Lattner 7499b452c1 - Insert new instructions before DomBlock's terminator,
which is simpler than finding a place to insert in BB.
 - Don't perform the 'if condition hoisting' xform on certain
   i1 PHIs, as it interferes with switch formation.

This re-fixes "example 7", without breaking the world hopefully.

llvm-svn: 121764
2010-12-14 08:46:09 +00:00
Chris Lattner 335f0e4ad4 fix two significant issues with FoldTwoEntryPHINode:
first, it can kick in on blocks whose conditions have been
folded to a constant, even though one of the edges will be
trivially folded.

second, it doesn't clean up the "if diamond" that it just 
eliminated away.  This is a problem because other simplifycfg
xforms kick in depending on the order of block visitation,
causing pointless work.

llvm-svn: 121762
2010-12-14 08:01:53 +00:00
Chris Lattner dc20a7d38c remove the instsimplify logic I added in r121754. It is apparently
breaking the selfhost builds, though I can't fathom how.

llvm-svn: 121761
2010-12-14 07:53:03 +00:00
Chris Lattner 9ac168d0ab clean up logic, convert std::set to SmallPtrSet, handle the case
when all 2-entry phis are simplified away.

llvm-svn: 121760
2010-12-14 07:41:39 +00:00
Chris Lattner 9fd838d31b tidy up a bit, move DEBUG down to when we commit to doing the transform so we
don't print it unless the xform happens.

llvm-svn: 121758
2010-12-14 07:23:10 +00:00
Chris Lattner b42d293faa use SimplifyInstruction instead of reimplementing part of it.
llvm-svn: 121757
2010-12-14 07:20:29 +00:00
Chris Lattner fb73de482c simplify GetIfCondition by using getSinglePredecessor.
llvm-svn: 121756
2010-12-14 07:15:21 +00:00
Chris Lattner 0f4d67bd88 use AddPredecessorToBlock in 3 places instead of a manual loop.
llvm-svn: 121755
2010-12-14 07:09:42 +00:00
Chris Lattner a07cc6f4fd make FoldTwoEntryPHINode use instsimplify a bit, make
GetIfCondition faster by avoiding pred_iterator.  No
really interesting change.

llvm-svn: 121754
2010-12-14 07:00:00 +00:00
Chris Lattner afd2a8cfbb remove the dead (and terrible) llvm::RemoveSuccessor function.
llvm-svn: 121753
2010-12-14 06:51:55 +00:00
Chris Lattner d7beca3782 improve DEBUG's a bit, switch to eraseFromParent() to simplify
code a bit, switch from constant folding to instsimplify.

llvm-svn: 121751
2010-12-14 06:17:25 +00:00
Chris Lattner 5a9d59d918 reapply my recent change that disables a piece of the switch formation
work, but fixes 400.perlbmk.

llvm-svn: 121749
2010-12-14 05:57:30 +00:00
Owen Anderson 3e5648896e Fix recent buildbot breakage by pulling SimplifyCFG back to its state as of r121694, the most recent state
where I'm confident there were no crashes or miscompilations.  XFAIL the test added since then for now.

llvm-svn: 121733
2010-12-13 23:49:28 +00:00
Chris Lattner a6e5d5694a temporarily disable part of my previous patch, which causes an iterator invalidation issue, causing a crash on some versions of perlbmk.
llvm-svn: 121728
2010-12-13 23:02:19 +00:00
Chris Lattner 2d434e594e add some DEBUG's.
llvm-svn: 121711
2010-12-13 19:55:30 +00:00
Benjamin Kramer 1e155ab7e1 Fix sort predicate. qsort(3)'s predicate semantics differ from std::sort's. Fixes PR 8780.
llvm-svn: 121705
2010-12-13 18:20:38 +00:00
Chris Lattner fb836f8c1a reinstate my patch: the miscompile was caused by an inverted branch in the
'and' case.

llvm-svn: 121695
2010-12-13 08:12:19 +00:00
Chris Lattner 79db357d80 Completely disable the optimization I added in r121680 until
I can track down a miscompile.  This should bring the buildbots
back to life

llvm-svn: 121693
2010-12-13 07:41:29 +00:00
Chris Lattner fbeb55844b Make simplifycfg reprocess newly formed "br (cond1 | cond2)" conditions
when simplifying, allowing them to be eagerly turned into switches.  This
is the last step required to get "Example 7" from this blog post:
http://blog.regehr.org/archives/320

On X86, we now generate this machine code, which (to my eye) seems better
than the ICC generated code:

_crud:                                  ## @crud
## BB#0:                                ## %entry
	cmpb	$33, %dil
	jb	LBB0_4
## BB#1:                                ## %switch.early.test
	addb	$-34, %dil
	cmpb	$58, %dil
	ja	LBB0_3
## BB#2:                                ## %switch.early.test
	movzbl	%dil, %eax
	movabsq	$288230376537592865, %rcx ## imm = 0x400000017001421
	btq	%rax, %rcx
	jb	LBB0_4
LBB0_3:                                 ## %lor.rhs
	xorl	%eax, %eax
	ret
LBB0_4:                                 ## %lor.end
	movl	$1, %eax
	ret

llvm-svn: 121690
2010-12-13 07:00:06 +00:00
Chris Lattner 1d05761df4 make this logic a bit simpler.
llvm-svn: 121689
2010-12-13 06:36:51 +00:00
Chris Lattner 25c3af35d8 split all the guts of SimplifyCFGOpt::run out into one function
per terminator kind.

llvm-svn: 121688
2010-12-13 06:25:44 +00:00
Chris Lattner cb570f87e5 fix a bug in r121680 that upset the various buildbots.
llvm-svn: 121687
2010-12-13 05:34:18 +00:00
Chris Lattner a6db741f3d refactor the speculative execution logic into the cond branch code instead of
doing a cfg search for every block simplified.

llvm-svn: 121686
2010-12-13 05:26:52 +00:00
Chris Lattner 466f54ffcf simplify a bunch of code.
llvm-svn: 121685
2010-12-13 05:20:28 +00:00
Chris Lattner 6df7bdd810 move HoistThenElseCodeToIf up to a more logical and efficient-to-handle place.
llvm-svn: 121684
2010-12-13 05:15:29 +00:00
Chris Lattner 2e3832d9a0 move 'MergeBlocksIntoPredecessor' call earlier. Use
getSinglePredecessor to simplify code.

llvm-svn: 121683
2010-12-13 05:10:48 +00:00
Chris Lattner a69c443459 factor new code out to a SimplifyBranchOnICmpChain helper function.
llvm-svn: 121681
2010-12-13 05:03:41 +00:00
Chris Lattner a442f24a36 enhance the "change or icmp's into switch" xform to handle one value in an
'or sequence' that it doesn't understand.  This allows us to optimize
something insane like this:

int crud (unsigned char c, unsigned x)
 {
   if(((((((((( (int) c <= 32 ||
                    (int) c == 46) || (int) c == 44)
                  || (int) c == 58) || (int) c == 59) || (int) c == 60)
               || (int) c == 62) || (int) c == 34) || (int) c == 92)
            || (int) c == 39) != 0)
     foo();
 }

into:

define i32 @crud(i8 zeroext %c, i32 %x) nounwind ssp noredzone {
entry:
  %cmp = icmp ult i8 %c, 33
  br i1 %cmp, label %if.then, label %switch.early.test

switch.early.test:                                ; preds = %entry
  switch i8 %c, label %if.end [
    i8 39, label %if.then
    i8 44, label %if.then
    i8 58, label %if.then
    i8 59, label %if.then
    i8 60, label %if.then
    i8 62, label %if.then
    i8 46, label %if.then
    i8 92, label %if.then
    i8 34, label %if.then
  ]

by pulling the < comparison out ahead of the newly formed switch.

llvm-svn: 121680
2010-12-13 04:50:38 +00:00
Chris Lattner 5a177e681e merge two very similar functions into one that has a bool argument.
llvm-svn: 121678
2010-12-13 04:26:26 +00:00
Chris Lattner 9b1af510cb don't bother handling non-canonical icmp's
llvm-svn: 121676
2010-12-13 04:18:32 +00:00
Chris Lattner 395252d93e inline a function, making the result much simpler.
llvm-svn: 121675
2010-12-13 04:15:19 +00:00
Chris Lattner 62cc76e9cc Fix my previous patch to handle a degenerate case that the llvm-gcc
bootstrap buildbot tripped over.

llvm-svn: 121674
2010-12-13 03:43:57 +00:00
Chris Lattner 11dafaa3ec convert some methods to be static functions
llvm-svn: 121673
2010-12-13 03:30:12 +00:00
Chris Lattner 4642d79fb0 zap two more std::sorts.
llvm-svn: 121672
2010-12-13 03:24:30 +00:00
Chris Lattner d9bacc088a fix a fairly serious oversight with switch formation from
or'd conditions.  Previously we'd compile something like this:

int crud (unsigned char c) {
   return c == 62 || c == 34 || c == 92;
}

into:

  switch i8 %c, label %lor.rhs [
    i8 62, label %lor.end
    i8 34, label %lor.end
  ]

lor.rhs:                                          ; preds = %entry
  %cmp8 = icmp eq i8 %c, 92
  br label %lor.end

lor.end:                                          ; preds = %entry, %entry, %lor.rhs
  %0 = phi i1 [ true, %entry ], [ %cmp8, %lor.rhs ], [ true, %entry ]
  %lor.ext = zext i1 %0 to i32
  ret i32 %lor.ext

which failed to merge the compare-with-92 into the switch.  With this patch
we simplify this all the way to:

  switch i8 %c, label %lor.rhs [
    i8 62, label %lor.end
    i8 34, label %lor.end
    i8 92, label %lor.end
  ]

lor.rhs:                                          ; preds = %entry
  br label %lor.end

lor.end:                                          ; preds = %entry, %entry, %entry, %lor.rhs
  %0 = phi i1 [ true, %entry ], [ false, %lor.rhs ], [ true, %entry ], [ true, %entry ]
  %lor.ext = zext i1 %0 to i32
  ret i32 %lor.ext

which is much better for codegen's switch lowering stuff.  This kicks in 33 times
on 176.gcc (for example) cutting 103 instructions off the generated code.

llvm-svn: 121671
2010-12-13 03:18:54 +00:00
Chris Lattner 73a58627c3 simplify code and reduce indentation
llvm-svn: 121670
2010-12-13 02:38:13 +00:00
Chris Lattner 7c8e6047d6 convert an std::sort to array_pod_sort.
llvm-svn: 121669
2010-12-13 02:00:58 +00:00
Chris Lattner 1475987634 move the "br (X == 0 | X == 1), T, F" -> switch optimization to a new
location in simplifycfg.  In the old days, SimplifyCFG was never run on
the entry block, so we had to scan over all preds of the BB passed into
simplifycfg to do this xform, now we can just check blocks ending with
a condbranch.  This avoids a scan over all preds of every simplified 
block, which should be a significant compile-time perf win on functions
with lots of edges.  No functionality change.

llvm-svn: 121668
2010-12-13 01:57:34 +00:00
Chris Lattner 4088e2b8e4 reduce indentation and generally simplify code, no functionality change.
llvm-svn: 121667
2010-12-13 01:47:07 +00:00
Chris Lattner 7cb7867d7a use getFirstNonPHIOrDbg to simplify this code.
llvm-svn: 121664
2010-12-13 01:28:06 +00:00
Benjamin Kramer c4169cebe3 Generalize the and-icmp-select instcombine further by allowing selects of the form
(x & 2^n) ? 2^m+C : C

we can offset both arms by C to get the "(x & 2^n) ? 2^m : 0" form, optimize the
select to a shift and apply the offset afterwards.

llvm-svn: 121609
2010-12-11 10:49:22 +00:00
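An illustrative sketch of the generalized pattern, assuming n = 2, m = 3 and C = 32
(these constants are mine, not from the commit):

  %and = and i32 %x, 4                       ; x & 2^2
  %cmp = icmp ne i32 %and, 0
  %sel = select i1 %cmp, i32 40, i32 32      ; (x & 4) ? 2^3 + 32 : 32
  ; offsetting both arms by 32 leaves (x & 4) ? 8 : 0, i.e. a shift plus the offset:
  %shl = shl i32 %and, 1
  %sel = add i32 %shl, 32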
Benjamin Kramer c8b035d006 Factor the (x & 2^n) ? 2^m : 0 instcombine into its own method and generalize it
to catch cases where n != m with a shift.

llvm-svn: 121608
2010-12-11 09:42:59 +00:00
Chris Lattner bc4457e317 enhance memcpyopt to zap memcpy's that have the same src/dst.
llvm-svn: 121362
2010-12-09 07:45:45 +00:00
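For example (an illustrative call, not from the commit), a self-copy like the
following is a no-op and can simply be deleted:

  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %p, i8* %p, i64 64, i32 4, i1 false)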
Chris Lattner fd51c52ef6 fix PR8753, eliminating a case where we'd infinitely make a
substitution because it doesn't actually change the IR.  Patch by
Jakub Staszak!

llvm-svn: 121361
2010-12-09 07:39:50 +00:00
Dan Gohman a32986e899 Really check that the bits that will become zero are actually already zero
before eliminating the operation that zeros them. This fixes rdar://8739316.

llvm-svn: 121353
2010-12-09 02:52:17 +00:00
Frits van Bommel d2f4b09e10 Remove some dead code from the jump threading pass.
The last uses of these functions were removed in r113852 when LazyValueInfo was permanently enabled and removed the need for them.

llvm-svn: 121133
2010-12-07 13:08:07 +00:00
Jay Foad 583abbc4df PR5207: Change APInt methods trunc(), sext(), zext(), sextOrTrunc() and
zextOrTrunc(), and APSInt methods extend(), extOrTrunc() and new method
trunc(), to be const and to return a new value instead of modifying the
object in place.

llvm-svn: 121120
2010-12-07 08:25:19 +00:00
Chris Lattner 0d71c4f564 reapply r121100 with a tweak to constant fold ConstExprs with TargetData
(if available) as we go so that we get simple constantexprs not insane ones.
This fixes the failure of clang/test/CodeGenCXX/virtual-base-ctor.cpp
that the previous iteration of this patch had.

llvm-svn: 121111
2010-12-07 04:33:29 +00:00
Eric Christopher f10dcfb9fb Temporarily revert r121100 as it's causing clang to fail
CodeGenCXX/virtual-base-ctor.cpp.

llvm-svn: 121102
2010-12-07 02:41:11 +00:00
Chris Lattner 287f4366c1 fix PR8710 - teach global opt that some constantexprs are too complex to
put in a global variable's initializer.

llvm-svn: 121100
2010-12-07 01:59:32 +00:00
Frits van Bommel d9df6eaa9c Implement jump threading of 'indirectbr' by keeping track of whether we're looking for ConstantInt*s or BlockAddress*s.
llvm-svn: 121066
2010-12-06 23:36:56 +00:00
Chris Lattner 7ff0ba41bd replace a linear scan with a symtab lookup, reduce indentation.
No functionality change.

llvm-svn: 121042
2010-12-06 21:53:07 +00:00
Chris Lattner 4dc53e37d9 Use a stronger predicate here, pointed out by Duncan
llvm-svn: 121040
2010-12-06 21:48:10 +00:00
Chris Lattner ca335e38cf add some DEBUG statements.
llvm-svn: 121038
2010-12-06 21:13:51 +00:00
Chris Lattner fb212de06d Fix PR8735, a really terrible problem in the inliner's "alloca merging"
optimization.

Consider:
static void foo() {
  A = alloca
  ...
}

static void bar() {
  B = alloca
  ...
  call foo();
}

void main() {
  bar()
}

The inliner proceeds bottom up, but let's pretend it decides not to inline foo
into bar.  When it gets to main, it inlines bar into main(), and says "hey, I
just inlined an alloca "B" into main, let's remember that".  Then it keeps going
and finds that it now contains a call to foo.  It decides to inline foo into
main, and says "hey, foo has an alloca A, and I have an alloca B from another
inlined call site, let's reuse it".  The problem with this, of course, is that
the lifetimes of A and B are nested, not disjoint.

Unfortunately I can't create a reasonable testcase for this: the one in the
PR is both huge and extremely sensitive, because minor tweaks end up
causing foo to get inlined into bar too early.  We already have tests for the
basic alloca merging optimization and this does not break them.

llvm-svn: 120995
2010-12-06 07:52:42 +00:00
Chris Lattner cd3af96a8f improve comment
llvm-svn: 120994
2010-12-06 07:43:04 +00:00
Chris Lattner 5b6a865f2e improve -debug output and comments a little.
llvm-svn: 120993
2010-12-06 07:38:40 +00:00
Chris Lattner 94fbdf3814 Fix PR8728, a miscompilation I recently introduced. When optimizing
memcpy's like:
  memcpy(A, B)
  memcpy(A, C)

we cannot delete the first memcpy as dead if A and C might be aliases.
If so, we actually get:

  memcpy(A, B)
  memcpy(A, A)

which is not correct to transform into:

  memcpy(A, A)

This patch was heavily influenced by Jakub Staszak's patch in PR8728, thanks
Jakub!

llvm-svn: 120974
2010-12-06 01:48:06 +00:00
Frits van Bommel 76244867cf Refactor jump threading.
Should have no functional change other than the order of two transformations that are mutually-exclusive and the exact formatting of debug output.
Internally, it now stores the ConstantInt*s as Constant*s, and actual undef values instead of nulls.

llvm-svn: 120946
2010-12-05 19:06:41 +00:00
Frits van Bommel 5e75ef4a8e Remove trailing whitespace.
llvm-svn: 120945
2010-12-05 19:02:47 +00:00
Frits van Bommel 8fb69ee805 Teach SimplifyCFG to turn
(indirectbr (select cond, blockaddress(@fn, BlockA),
                            blockaddress(@fn, BlockB)))
into
  (br cond, BlockA, BlockB).

llvm-svn: 120943
2010-12-05 18:29:03 +00:00
Jay Foad 25a5e4ca1f PR5207: Rename overloaded APInt methods set(), clear(), flip() to
setAllBits(), setBit(unsigned), etc.

llvm-svn: 120564
2010-12-01 08:53:58 +00:00
Chris Lattner 1c577b54b0 fix a bozo bug I introduced in r119930, causing a miscompile of
20040709-1.c from the gcc testsuite.  I was using the size of a
pointer instead of the pointee.  This fixes rdar://8713376

llvm-svn: 120519
2010-12-01 01:24:55 +00:00
Chris Lattner 903add84d9 Enhance DSE to handle the variable index case in PR8657.
llvm-svn: 120498
2010-11-30 23:43:23 +00:00
Chris Lattner c0f3379ae0 teach DSE to use GetPointerBaseWithConstantOffset to analyze
may-aliasing stores that partially overlap with different base
pointers.  This implements PR6043 and the non-variable part of
PR8657

llvm-svn: 120485
2010-11-30 23:05:20 +00:00
Chris Lattner e28618de59 move GetPointerBaseWithConstantOffset out of GVN into ValueTracking.h
llvm-svn: 120476
2010-11-30 22:25:26 +00:00
Chris Lattner 50162e3c2a remove a fixed fixme
llvm-svn: 120474
2010-11-30 22:18:11 +00:00
Chris Lattner 6712251f41 Make DeleteDeadInstruction be a static function, move some code around.
llvm-svn: 120471
2010-11-30 21:58:14 +00:00
Chris Lattner 51d67ce2ff switch RemoveAccessedObjects to use AliasAnalysis::Location to simplify
the code.  We now get accurate sizes on Loads, though it surely doesn't
matter in practice.

llvm-svn: 120469
2010-11-30 21:47:58 +00:00
Chris Lattner f80b39986f two improvements to RemoveAccessedObjects:
1. if the underlying pointer passed in can be resolved
   to any argument or alloca, then we don't need to scan.
   Previously we would only avoid the scan if the alloca
   or byval was actually considered dead.
2. The dead store processing code is itself completely
   dead and didn't handle volatile stores right anyway,
   so delete it.  This allows simplifying the interface
   to RemoveAccessedObjects.

llvm-svn: 120467
2010-11-30 21:38:30 +00:00
Chris Lattner 7fe08b67fa remove the "undead" terminology, which is nonstandard and never
made sense to me.  We now have a set of dead stack objects, and
they become live when loaded.  Fix a theoretical problem where
we'd pass in the wrong pointer to the alias query.

llvm-svn: 120465
2010-11-30 21:32:12 +00:00
Chris Lattner 127818d746 move call handling in handleEndBlock up a bit, and simplify it.
If the call might read all the allocas, stop scanning early.
Convert a vector to smallvector, shrink SmallPtrSet to 16 instead
of 64 to avoid crazy linear scans.

llvm-svn: 120463
2010-11-30 21:18:46 +00:00
Dale Johannesen d3a58c8fa1 Avoid exponential growth of a table. It feels like
there should be a better way to do this.  PR 8679.

llvm-svn: 120457
2010-11-30 20:23:21 +00:00
Chris Lattner 60a8b3dab8 various cleanups and code simplification
llvm-svn: 120454
2010-11-30 19:48:15 +00:00
Chris Lattner 51c28a93cc make getPointerSize a static function. Add ivars to DSE for
AA and MD pass info instead of using getAnalysis<> all over.

llvm-svn: 120453
2010-11-30 19:34:42 +00:00
Chris Lattner 77d79fa25f reduce indentation, clean up TD use a bit.
llvm-svn: 120452
2010-11-30 19:28:23 +00:00
Chris Lattner b63ba73b1b enhance isRemovable to refuse to delete volatile mem transfers
now that DSE hacks on them.  This fixes a regression I introduced,
by generalizing DSE to hack on transfers.

llvm-svn: 120445
2010-11-30 19:12:10 +00:00
Chris Lattner 58b779e9c2 Rewrite the main DSE loop to be written in terms of reasoning
about pairs of AA::Location's instead of looking for MemDep's
"Def" predicate.  This is more powerful and general, handling
memset/memcpy/store all uniformly, and implementing PR8701 and
probably obsoleting parts of memcpyoptimizer.

This also fixes an obscure bug with init.trampoline and i8
stores, but I'm not surprised it hasn't been hit yet.  Enhancing
init.trampoline to carry the size that it stores would allow
DSE to be much more aggressive about optimizing them.

llvm-svn: 120406
2010-11-30 07:23:21 +00:00
Anders Carlsson e3ea1cba79 Add a puts optimization that converts puts() to putchar('\n').
llvm-svn: 120398
2010-11-30 06:19:18 +00:00
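A sketch of the shape of this simplification, assuming @.str is the empty string
(the names are illustrative, not from the commit):

  @.str = private constant [1 x i8] c"\00"
  ...
  %call = call i32 @puts(i8* getelementptr ([1 x i8]* @.str, i32 0, i32 0))
  ; puts("") prints only the trailing newline, so this becomes:
  %call = call i32 @putchar(i32 10)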
Chris Lattner 3590ef817c rename a function and reduce some indentation, no functionality change.
llvm-svn: 120391
2010-11-30 05:30:45 +00:00
Chris Lattner b438ef236c remove the pointless check for MemoryUseIntrinsic from the
'is trivially dead' logic, since these have side effects.  This makes the
(misnamed) MemoryUseIntrinsic class dead, so remove it.

llvm-svn: 120382
2010-11-30 02:03:47 +00:00
Chris Lattner 2227a8a192 rename doesClobberMemory -> hasMemoryWrite to be more specific, and
remove an actively-wrong comment.

llvm-svn: 120378
2010-11-30 01:37:52 +00:00
Chris Lattner 9d179d911d clean up handling of 'free', detangling it from everything else.
It can be seriously improved, but at least now it isn't intertwined
with the other logic.

llvm-svn: 120377
2010-11-30 01:28:33 +00:00
Chris Lattner 9a146372b5 Teach basicaa that memset's modref set is at worst "mod" and never
contains "ref".

Enhance DSE to use a modref query instead of a store-specific hack
to generalize the "ignore may-alias stores" optimization to handle
memset and memcpy.

llvm-svn: 120368
2010-11-30 00:28:45 +00:00
Chris Lattner c3c754f750 my previous patch would cause us to start deleting some volatile
stores; fix that and add a testcase.

llvm-svn: 120363
2010-11-30 00:12:39 +00:00
Chris Lattner d4f1090948 two changes to DSE that shouldn't affect anything:
1. Don't bother trying to optimize:

lifetime.end(ptr)
store(ptr)

as it is undefined, and therefore shouldn't exist.

2. Move the 'storing a loaded pointer' xform up, simplifying
  the may-aliased store code.

llvm-svn: 120359
2010-11-30 00:01:19 +00:00
Chris Lattner b4df1d5a3e prune an llvmcontext include and simplify some code.
llvm-svn: 120347
2010-11-29 23:35:33 +00:00
Chris Lattner 2e8793482c fix PR8677, patch by Jakub Staszak!
llvm-svn: 120325
2010-11-29 21:59:31 +00:00
Frits van Bommel 28218aa8f1 Transform (extractvalue (load P), ...) to (load (gep P, 0, ...)) if the load has no other uses, shrinking the load.
llvm-svn: 120323
2010-11-29 21:56:20 +00:00
Owen Anderson 8ba5f39f70 Second attempt at fixing the performance regressions introduced
by my recent GVN improvement.  Rather than looking through a single layer of
PHI nodes when attempting to sink GEPs, we need to iteratively
look through arbitrary PHI nests.

llvm-svn: 120202
2010-11-27 08:15:55 +00:00
Nick Lewycky b8de00ee07 Treat a call through a function pointer like a load of the pointer when considering
whether the pointer can be replaced with the global variable it is a copy of.
Fixes PR8680.

llvm-svn: 120126
2010-11-24 22:04:20 +00:00
Duncan Sands 0488d564e1 Rename SimplifyDistributed to the more meaningful name SimplifyByFactorizing.
llvm-svn: 120051
2010-11-23 20:42:39 +00:00
Benjamin Kramer 94a622af4c The srem -> urem transform is not safe for any divisor that's not a power of two.
E.g. -5 % 5 is 0 with srem and 1 with urem.

Also addresses Frits van Bommel's comments.

llvm-svn: 120049
2010-11-23 20:33:57 +00:00
Duncan Sands 433c1679cf Replace calls to ConstantFoldInstruction with calls to SimplifyInstruction
in two places that are really interested in simplified instructions, not
constants.

llvm-svn: 120044
2010-11-23 20:26:33 +00:00
Duncan Sands bb2cd025a9 Constant folding here is pointless, because InstructionSimplify
(which does constant folding and more) is called a few lines
later.

llvm-svn: 120042
2010-11-23 20:24:21 +00:00
Benjamin Kramer b5afa65b0a InstCombine: Reduce "X shift (A srem B)" to "X shift (A urem B)" iff B is positive.
This allows to transform the rem in "1 << ((int)x % 8);" to an and.

llvm-svn: 120028
2010-11-23 18:52:42 +00:00
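A sketch of why this is safe (constants are illustrative): srem and urem only
disagree when the remainder would be negative, and a negative shift amount is
undefined anyway, so for the shift-amount use the two are interchangeable.

  %rem = srem i32 %x, 8          ; from C: 1 << (x % 8)
  %shl = shl i32 1, %rem
  ; becomes
  %rem = urem i32 %x, 8          ; which later folds to: and i32 %x, 7
  %shl = shl i32 1, %rem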
Duncan Sands 60813f96e0 Propagate LeftDistributes and RightDistributes into their only uses.
Stylistic improvement suggested by Frits van Bommel.

llvm-svn: 120026
2010-11-23 15:28:14 +00:00
Duncan Sands 22df741687 Fix typo pointed out by Frits van Bommel and Marius Wachtler.
llvm-svn: 120025
2010-11-23 15:25:34 +00:00
Duncan Sands adc7771f18 Exploit distributive laws (eg: And distributes over Or, Mul over Add, etc) in a
fairly systematic way in instcombine.  Some of these cases were already dealt
with, in which case I removed the existing code.  The case of Add has a bunch of
funky logic which covers some of this plus a few variants (considers shifts to be
a form of multiplication), which I didn't touch.  The simplification performed is:
A*B+A*C -> A*(B+C).  The improvement is to do this in cases that were not already
handled [such as A*B-A*C -> A*(B-C), which was reported on the mailing list], and
also to do it more often by not checking for "only one use" if "B+C" simplifies.

llvm-svn: 120024
2010-11-23 14:23:47 +00:00
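The basic shape of the factorization, as a small illustrative IR example:

  %ab  = mul i32 %a, %b
  %ac  = mul i32 %a, %c
  %sum = add i32 %ab, %ac
  ; A*B + A*C -> A*(B+C)
  %bc  = add i32 %b, %c
  %sum = mul i32 %a, %bc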
Chris Lattner e5afa15b77 duncan's spider sense was right, I completely reversed the condition
on this instcombine xform.  This fixes a miscompilation of 403.gcc.

llvm-svn: 119988
2010-11-23 02:42:04 +00:00
Benjamin Kramer f1ebb63161 InstCombine: Implement X - A*-B -> X + A*B.
llvm-svn: 119984
2010-11-22 20:31:27 +00:00
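An illustrative IR sketch of the new fold:

  %negb = sub i32 0, %b
  %mul  = mul i32 %a, %negb
  %res  = sub i32 %x, %mul
  ; X - A*-B -> X + A*B
  %mul  = mul i32 %a, %b
  %res  = add i32 %x, %mul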
Duncan Sands c133c54426 If a GEP index simply advances by multiples of a type of zero size,
then replace the index with zero.

llvm-svn: 119974
2010-11-22 16:32:50 +00:00
Duncan Sands 8a0f486e36 Move the "gep undef" -> "undef" transform from instcombine to
InstructionSimplify.

llvm-svn: 119970
2010-11-22 13:42:49 +00:00
Duncan Sands c6648eb4c3 Don't keep track of inserted phis in PromoteMemoryToRegister: the information
is never used.  Patch by Cameron Zwarich.

llvm-svn: 119963
2010-11-22 09:41:24 +00:00
Chris Lattner fc9aead6fd fix comment
llvm-svn: 119948
2010-11-21 19:05:34 +00:00
Chris Lattner 5957229659 rework some DSE paths to use the newly-public "getPointerDependencyFrom"
method in MemDep instead of inserting an instruction, doing a query,
then removing it.  Neither operation is effectively cached.

llvm-svn: 119930
2010-11-21 08:06:10 +00:00
Chris Lattner e48c31ce33 implement PR8576, deleting dead stores with intervening may-alias stores.
llvm-svn: 119927
2010-11-21 07:34:32 +00:00
Chris Lattner f7e896138e optimize:
void a(int x) { if (((1<<x)&8)==0) b(); }

into "x != 3", which occurs over 100 times in 403.gcc but in no
other program in llvm-test.

llvm-svn: 119922
2010-11-21 06:44:42 +00:00
Chris Lattner 58f9f58716 Implement PR8644: forwarding a memcpy value to a byval,
allowing the memcpy to be eliminated.

Unfortunately, the requirements on byval's without explicit 
alignment are really weak and impossible to predict in the 
mid-level optimizer, so this doesn't kick in much with current
frontends.  The fix is to change clang to set alignment on all
byval arguments.

llvm-svn: 119916
2010-11-21 00:28:59 +00:00
Benjamin Kramer ddd1b7b801 Simplify code. No change in functionality.
llvm-svn: 119908
2010-11-20 18:43:35 +00:00
Owen Anderson ea326db47b Document the new GVN number table structure.
llvm-svn: 119865
2010-11-19 22:48:40 +00:00
Owen Anderson dfb8c3bbfc When folding addressing modes in CodeGenPrepare, attempt to look through PHI nodes
if all the operands of the PHI are equivalent.  This allows CodeGenPrepare to undo
unprofitable PRE transforms.

llvm-svn: 119853
2010-11-19 22:15:03 +00:00
Duncan Sands aef146b890 Factor code for testing whether replacing one value with another
preserves LCSSA form out of ScalarEvolution and into the LoopInfo
class.  Use it to check that SimplifyInstruction simplifications
are not breaking LCSSA form.  Fixes PR8622.

llvm-svn: 119727
2010-11-18 19:59:41 +00:00
Owen Anderson c21c100f3d Completely rework the datastructure GVN uses to represent the value number to leader mapping. Previously,
this was a tree of hashtables, and a query recursed into the table for the immediate dominator ad infinitum
if the initial lookup failed.  This led to really bad performance on tall, narrow CFGs.

We can instead replace it with what is conceptually a multimap of value numbers to leaders (actually
represented by a hashtable with a list of Value*'s as the value type), and then
determine which leader from that set to use very cheaply thanks to the DFS numberings maintained by
DominatorTree.  Because there are typically few duplicates of a given value, this scan tends to be
quite fast.  Additionally, we use a custom linked list and BumpPtr allocation to avoid any unnecessary
allocation in representing the value-side of the multimap.

This change brings with it a 15% (!) improvement in the total running time of GVN on 403.gcc, which I
think is pretty good considering that includes all the "real work" being done by MemDep as well.

The one downside to this approach is that we can no longer use GVN to perform simple conditional propagation,
but that seems like an acceptable loss since we now have LVI and CorrelatedValuePropagation to pick up
the slack.  If you see conditional propagation that's not happening, please file bugs against LVI or CVP.

llvm-svn: 119714
2010-11-18 18:32:40 +00:00
Chris Lattner 1385dff8c0 slightly simplify code and substantially improve comment. Instead of
saying "it would be bad", give an example of what is going on.

llvm-svn: 119695
2010-11-18 08:07:09 +00:00
Chris Lattner 731caac7c6 remove a pointless restriction from memcpyopt. It was
refusing to optimize two memcpy's like this:

copy A <- B
copy C <- A

if it couldn't prove that noalias(B,C).  We can eliminate
the copy by producing a memmove instead of memcpy.

llvm-svn: 119694
2010-11-18 08:00:57 +00:00
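A sketch of the rewrite, with illustrative operands: the second copy can read
from %B directly, but because %B and %C may overlap it has to become a memmove.

  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %A, i8* %B, i64 %n, i32 1, i1 false)
  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %C, i8* %A, i64 %n, i32 1, i1 false)
  ; the second copy becomes
  call void @llvm.memmove.p0i8.p0i8.i64(i8* %C, i8* %B, i64 %n, i32 1, i1 false)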
Chris Lattner c274a83442 remove another pointless noalias check: M is a memcpy, so the
source and dest are known to not overlap.

llvm-svn: 119692
2010-11-18 07:39:57 +00:00
Chris Lattner 75cfe98534 use AA::isNoAlias instead of open coding it. Remove an extraneous noalias check:
there is no need to check to see if the source and dest of a memcpy are noalias;
behavior is undefined if not.

llvm-svn: 119691
2010-11-18 07:38:43 +00:00
Chris Lattner 1e37bbafbb finish a thought.
llvm-svn: 119690
2010-11-18 07:32:33 +00:00
Chris Lattner 7e9b2ea3bf rearrange some code, splitting memcpy/memcpy optimization
out of processMemCpy into its own function.

llvm-svn: 119687
2010-11-18 07:02:37 +00:00
Chris Lattner ac5701319b allow eliminating an alloca that is just copied from a constant global
if it is passed as a byval argument.  The byval argument will just be a
read, so it is safe to read from the original global instead.  This allows
us to promote away the %agg.tmp alloca in PR8582

llvm-svn: 119686
2010-11-18 06:41:51 +00:00
Chris Lattner f183d5c4be enhance the "alloca is just a memcpy from constant global"
to ignore calls that obviously can't modify the alloca
because they are readonly/readnone.

llvm-svn: 119683
2010-11-18 06:26:49 +00:00
Chris Lattner 7aeae25c78 fix a small oversight in the "eliminate memcpy from constant global"
optimization.  If the alloca that is "memcpy'd from constant" also has
a memcpy from *it*, ignore it: it is a load.  We now optimize the testcase to:

define void @test2() {
  %B = alloca %T
  %a = bitcast %T* @G to i8*
  %b = bitcast %T* %B to i8*
  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %b, i8* %a, i64 124, i32 4, i1 false)
  call void @bar(i8* %b)
  ret void
}

previously we would generate:

define void @test() {
  %B = alloca %T
  %b = bitcast %T* %B to i8*
  %G.0 = getelementptr inbounds %T* @G, i32 0, i32 0
  %tmp3 = load i8* %G.0, align 4
  %G.1 = getelementptr inbounds %T* @G, i32 0, i32 1
  %G.15 = bitcast [123 x i8]* %G.1 to i8*
  %1 = bitcast [123 x i8]* %G.1 to i984*
  %srcval = load i984* %1, align 1
  %B.0 = getelementptr inbounds %T* %B, i32 0, i32 0
  store i8 %tmp3, i8* %B.0, align 4
  %B.1 = getelementptr inbounds %T* %B, i32 0, i32 1
  %B.12 = bitcast [123 x i8]* %B.1 to i8*
  %2 = bitcast [123 x i8]* %B.1 to i984*
  store i984 %srcval, i984* %2, align 1
  call void @bar(i8* %b)
  ret void
}

llvm-svn: 119682
2010-11-18 06:20:47 +00:00
Dan Gohman 20d9ce21ef Move SCEV::dominates and properlyDominates to ScalarEvolution.
llvm-svn: 119570
2010-11-17 21:41:58 +00:00
Dan Gohman afd6db9932 Move SCEV::isLoopInvariant and hasComputableLoopEvolution to be member
functions of ScalarEvolution, in preparation for memoization and
other optimizations.

llvm-svn: 119562
2010-11-17 21:23:15 +00:00
Dan Gohman 1ee6d24072 Reference ScalarEvolution by name rather than directly in LICM,
to avoid an unneeded dependence.

llvm-svn: 119557
2010-11-17 20:50:07 +00:00
Benjamin Kramer 07726c7d52 InstCombine: Add a missing irem identity (X % X -> 0).
llvm-svn: 119538
2010-11-17 19:11:46 +00:00
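For example (illustrative):

  %r = urem i32 %x, %x     ; simplifies to 0 (srem likewise)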
Duncan Sands c89ac07e7a Move some those Xor simplifications which don't require creating new
instructions out of InstCombine and into InstructionSimplify.  While
there, introduce an m_AllOnes pattern to simplify matching with integers
and vectors with all bits equal to one.

llvm-svn: 119536
2010-11-17 18:52:15 +00:00
Duncan Sands 9d9a4e2ca2 Have InlineFunction use SimplifyInstruction rather than
hasConstantValue.  I was leery of using SimplifyInstruction
while the IR was still in a half-baked state, which is the
reason for delaying the simplification until the IR is fully
cooked.

llvm-svn: 119494
2010-11-17 11:16:23 +00:00
Duncan Sands ba0b22c785 Have RemovePredecessorAndSimplify use SimplifyInstruction
rather than hasConstantValue.

llvm-svn: 119457
2010-11-17 04:12:05 +00:00
Duncan Sands 72313843d5 Remove dead code in GVN: now that SimplifyInstruction is called
systematically, CollapsePhi will always return null here.  Note
that CollapsePhi did an extra check, isSafeReplacement, which
the SimplifyInstruction logic does not do.  I think that check
was bogus - I guess we will soon find out!  (It was originally
added in commit 41998 without a testcase).

llvm-svn: 119456
2010-11-17 04:05:21 +00:00
Duncan Sands 637049515f Have a few places that want to simplify phi nodes use SimplifyInstruction
rather than calling hasConstantValue.  No intended functionality change.

llvm-svn: 119352
2010-11-16 17:41:24 +00:00
Duncan Sands b99f39b9f6 If dom tree information is available, make it possible to pass
it to get better phi node simplification.

llvm-svn: 119055
2010-11-14 18:36:10 +00:00
Duncan Sands 4581ddc123 Teach InstructionSimplify about phi nodes. I chose to have it simply
offload the work to hasConstantValue rather than do something more
complicated (such as handling mutually recursive phis) because (1) it is
not clear it is worth it; and (2) if it is worth it, maybe such logic
would be better placed in hasConstantValue.  Adjust some GVN tests
which are now cleaned up much further (eg: all phi nodes are removed).

llvm-svn: 119043
2010-11-14 13:30:18 +00:00
Duncan Sands 641baf1646 Generalize the reassociation transform in SimplifyCommutative (now renamed to
SimplifyAssociativeOrCommutative) "(A op C1) op C2" -> "A op (C1 op C2)",
which previously was only done if C1 and C2 were constants, to occur whenever
"C1 op C2" simplifies (a la InstructionSimplify).  Since the simplifying operand
combination can no longer be assumed to be the right-hand terms, consider all of
the possible permutations.  When compiling "gcc as one big file", transform 2
(i.e. using right-hand operands) fires about 4000 times but it has to be said
that most of the time the simplifying operands are both constants.  Transforms
3, 4 and 5 each fired once.  Transform 6, which is an existing transform that
I didn't change, never fired.  With this change, the testcase is now optimized
perfectly with one run of instcombine (previously it required instcombine +
reassociate + instcombine, and it may just have been luck that this worked).

llvm-svn: 119002
2010-11-13 15:10:37 +00:00
Duncan Sands 246b71c596 Have GVN simplify instructions as it goes. For example, consider
"%z = %x and %y".  If GVN can prove that %y equals %x, then it turns
this into "%z = %x and %x".  With the new code, %z will be replaced
with %x everywhere (and then deleted).  Previously %z would be value
numbered too, which is a waste of time.  Also, while a clever value
numbering algorithm would give %z the same value number as %x, our
current one doesn't do so (at least I don't think it does).  The new
logic has an essentially equivalent effect to what you would get if
%z was given the same value number as %x, i.e. it should make value
numbering smarter.  While there, get hold of target data once at the
start rather than a gazillion times all over the place.

llvm-svn: 118923
2010-11-12 21:10:24 +00:00
Dan Gohman d4b7fff2e8 Enhance DSE to handle the case where a free call makes more than
one store dead. This is especially noticeable in
SingleSource/Benchmarks/Shootout/objinst.

llvm-svn: 118875
2010-11-12 02:19:17 +00:00
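A minimal illustrative case (the pointers and types are mine, not from the
commit): a store into a block whose only remaining use is the free itself can
never be observed, so DSE deletes it.

  store i32 0, i32* %p            ; %p points into the block being freed
  %raw = bitcast i32* %p to i8*
  call void @free(i8* %raw)
  ; becomes
  %raw = bitcast i32* %p to i8*
  call void @free(i8* %raw)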
Dan Gohman 65316d6749 Add helper functions for computing the Location of load, store,
and vaarg instructions.

llvm-svn: 118845
2010-11-11 21:50:19 +00:00
Dan Gohman a826a88755 Factor out Instruction::isSafeToSpeculativelyExecute's code for
testing for dereferenceable pointers into a helper function,
isDereferenceablePointer.  Teach it how to reason about GEPs
with simple non-zero indices.

Also eliminate ArgumentPromotion's IsAlwaysValidPointer,
which didn't check for weak externals or out of range gep
indices.

llvm-svn: 118840
2010-11-11 21:23:25 +00:00
Dan Gohman dcdfd8dd24 TBAA-enable ArgumentPromotion.
llvm-svn: 118804
2010-11-11 18:09:32 +00:00
Dan Gohman 0cc4c7516e Make Sink tbaa-aware.
llvm-svn: 118788
2010-11-11 16:21:47 +00:00
Dan Gohman c3b4ea7b7d It's safe to sink some instructions which are not safe to speculatively
execute. Make Sink's predicate more precise.

llvm-svn: 118787
2010-11-11 16:20:28 +00:00
Dan Gohman 0a6021a54d Enhance GVN to do more precise alias queries for non-local memory
references. For example, this allows gvn to eliminate the load in
this example:

  void foo(int n, int* p, int *q) {
    p[0] = 0;
    p[1] = 1;
    if (n) {
      *q = p[0];
    }
  }

llvm-svn: 118714
2010-11-10 20:37:15 +00:00
Dan Gohman d209911642 Use getValueOperand() and getPointerOperand() on load and store
instructions instead of hard-coding operand numbers.

llvm-svn: 118698
2010-11-10 19:03:33 +00:00
Dan Gohman 066c1bb1e9 Add a doesAccessArgPointees helper function, and update code to use
it, and to be consistent.

llvm-svn: 118692
2010-11-10 18:17:28 +00:00
Dan Gohman 2577580967 Factor out the code for testing whether a function accesses
arbitrary memory into a helper function, and adjust some comments.

llvm-svn: 118687
2010-11-10 17:34:04 +00:00
Dale Johannesen 0171dc30ff When checking that the necessary bits are zero in
order to reduce ((x<<30)>>24) to x<<6, check the
correct bits.  PR 8547.

llvm-svn: 118665
2010-11-10 01:30:56 +00:00
Dan Gohman 2694e14087 Make ModRefBehavior a lattice. Use this to clean up AliasAnalysis
chaining and simplify FunctionAttrs' GetModRefBehavior logic.

llvm-svn: 118660
2010-11-10 01:02:18 +00:00
Dan Gohman e3467a7687 Teach FunctionAttrs about the VAArg instruction.
llvm-svn: 118627
2010-11-09 20:17:38 +00:00
Dan Gohman 35814e6128 Use the AliasAnalysis interface to determine how a Function accesses
memory. This isn't a real improvement with present day AliasAnalysis
implementations; it's mainly for consistency.

llvm-svn: 118624
2010-11-09 20:13:27 +00:00
Dan Gohman 0f17507478 Teach LICM and AliasSetTracker about AccessesArgumentsReadonly.
llvm-svn: 118618
2010-11-09 19:58:21 +00:00
Dan Gohman de52155685 Teach FunctionAttrs about AccessesArgumentsReadonly.
llvm-svn: 118617
2010-11-09 19:56:27 +00:00
Dan Gohman 470ade12e0 Fix a thinko that Duncan spotted.
llvm-svn: 118430
2010-11-08 19:24:47 +00:00
Dan Gohman 2cd1fd4a82 Make FunctionAttrs TBAA-aware.
llvm-svn: 118417
2010-11-08 17:12:04 +00:00
Dan Gohman 9130bad71f Extend the AliasAnalysis::pointsToConstantMemory interface to allow it
to optionally look for constant or local (alloca) memory.

Teach BasicAliasAnalysis::pointsToConstantMemory to look through Select
and Phi nodes, and to support looking for local memory.

Remove FunctionAttrs' PointsToLocalOrConstantMemory function, now that
AliasAnalysis knows all the tricks that it knew.

llvm-svn: 118412
2010-11-08 16:45:26 +00:00
Dan Gohman 86449d705a Make FunctionAttrs use AliasAnalysis::getModRefBehavior, now that it
knows about intrinsic functions.

llvm-svn: 118410
2010-11-08 16:10:15 +00:00
Duncan Sands 9d1fe4c40d Rename PointsToLocalMemory to PointsToLocalOrConstantMemory to make
the code more self-documenting.

llvm-svn: 118171
2010-11-03 14:45:05 +00:00
Jakob Stoklund Olesen 31a7eb40c1 Let the -inline-threshold command line argument take precedence over the
threshold given to createFunctionInliningPass().

Both opt -O3 and clang would silently ignore the -inline-threshold option.

llvm-svn: 118117
2010-11-02 23:40:26 +00:00
Owen Anderson 6186c96765 When folding away a (shl (shr)) pair, we need to check that the bits that will BECOME the low
bits are zero, not that the current low bits are zero.  Fixes <rdar://problem/8606771>.

llvm-svn: 117953
2010-11-01 21:08:20 +00:00
Duncan Sands e659aba516 Now that the MallocInst no longer exists, this workaround for
it claiming not to have side-effects is no longer needed.

llvm-svn: 117789
2010-10-30 16:12:16 +00:00
Duncan Sands b8f3b14dfb If a function does a volatile load from a global constant, do not
consider it to be readonly.  In fact, don't even consider it to be
readonly if it does a volatile load from an AllocaInst either (it
is debatable as to whether readonly would be correct or not in this
case; play safe for the moment).  This fixes PR8279.

llvm-svn: 117783
2010-10-30 12:59:44 +00:00
Bob Wilson 67a6f32c59 Clean up indentation and other whitespace.
llvm-svn: 117728
2010-10-29 22:20:45 +00:00
Bob Wilson 8ecf98b04f Remove trailing whitespace.
llvm-svn: 117727
2010-10-29 22:20:43 +00:00
Bob Wilson 9d07f39ace Fix 80-column violation.
llvm-svn: 117722
2010-10-29 22:03:07 +00:00
Bob Wilson 11ee456e23 Change instcombine's getShuffleMask to represent undef with negative values.
This code had previously used 2*N, where N is the mask length, to represent
undef.  That is not safe because the shufflevector operands may have more
than N elements -- they don't have to match the result type.

llvm-svn: 117721
2010-10-29 22:03:05 +00:00
Bob Wilson cb11b48e7a Make instcombine a little more aggressive in combining vector shuffles.
Allow splats even if they don't match either of the original shuffles,
possibly due to undef entries in the shuffles masks.  Radar 8597790.
Also fix some 80-column violations.

llvm-svn: 117719
2010-10-29 22:02:50 +00:00
Owen Anderson 374e1464ae Give up on doing in-line instruction simplification during correlated value propagation. Instruction simplification
needs to be guaranteed never to be run on an unreachable block.  However, earlier block simplifications may have
changed the CFG to make blocks that were reachable when we began our iteration unreachable by the time we try to
simplify them. (Note that this also means that our depth-first iterators were potentially being invalidated).

This should not have a large impact on code quality, since later runs of instcombine should pick up these simplifications.
Fixes PR8506.

llvm-svn: 117709
2010-10-29 21:05:17 +00:00
John Thompson e8360b7182 Inline asm multiple alternative constraints development phase 2 - improved basic logic, added initial platform support.
llvm-svn: 117667
2010-10-29 17:29:13 +00:00
Dale Johannesen 16bb87a90e Teach InstCombine not to use Add and Neg on FP. PR 8490.
llvm-svn: 117510
2010-10-27 23:45:18 +00:00
Dan Gohman 2e20dfb0f2 Fix a case where instcombine was stripping metadata (and alignment)
from stores when folding in bitcasts.

llvm-svn: 117265
2010-10-25 16:16:27 +00:00
Duncan Sands 31c803b2ba Fix PR8445: a block with no predecessors may be the entry block, in which case
it isn't unreachable and should not be zapped.  The check for the entry block
was missing in one case: a block containing a unwind instruction.  While there,
do some small cleanups: "M" is not a great name for a Function* (it would be
more appropriate for a Module*), change it to "Fn"; use Fn in more places.

llvm-svn: 117224
2010-10-24 12:23:30 +00:00
Benjamin Kramer 76229bc128 SmallVectorize.
llvm-svn: 117213
2010-10-23 17:10:24 +00:00
Chandler Carruth 88c54b82c1 Switch attribute macros to use 'LLVM_' as a prefix. We retain the old names
until other LLVM projects using these are cleaned up.

llvm-svn: 117200
2010-10-23 08:10:43 +00:00
Bob Wilson a4e231c880 Teach instcombine to set the alignment arguments for NEON load/store intrinsics.
llvm-svn: 117154
2010-10-22 21:41:48 +00:00
Duncan Sands 94da154558 RetOp is not actually used for anything useful (though
it looks like maybe it was supposed to be used in the
test...), so zap it (gcc-4.6 warning).

llvm-svn: 117023
2010-10-21 16:05:44 +00:00
Dan Gohman f372cf869b Reapply r116831 and r116839, converting AliasAnalysis to use
uint64_t, plus fixes for places I missed before.

llvm-svn: 116875
2010-10-19 22:54:46 +00:00
Dan Gohman b4aa503501 Revert r116831 and r116839, which are breaking selfhost builds.
llvm-svn: 116858
2010-10-19 21:06:16 +00:00
Owen Anderson a4fefc1949 Passes do not need to recursively initialize passes that they preserve, if
they do not also require them.  This allows us to reduce inter-pass linkage
dependencies.

llvm-svn: 116854
2010-10-19 20:08:44 +00:00
Dan Gohman 896ac62346 Oops, check in all the files for converting AliasAnalysis to
use uint64_t.

llvm-svn: 116839
2010-10-19 18:08:27 +00:00
Owen Anderson 6c18d1aac0 Get rid of static constructors for pass registration. Instead, every pass exposes an initializeMyPassFunction(), which
must be called in the pass's constructor.  This function uses static dependency declarations to recursively initialize
the pass's dependencies.

Clients that only create passes through the createFooPass() APIs will require no changes.  Clients that want to use the
CommandLine options for passes will need to manually call the appropriate initialization functions in PassInitialization.h
before parsing commandline arguments.

I have tested this with all standard configurations of clang and llvm-gcc on Darwin.  It is possible that there are problems
with the static dependencies that will only be visible with non-standard options.  If you encounter any crash in pass
registration/creation, please send the testcase to me directly.

llvm-svn: 116820
2010-10-19 17:21:58 +00:00
Dan Gohman 14fe8cf238 Consistently use AliasAnalysis::UnknownSize instead of hardcoding ~0u.
llvm-svn: 116815
2010-10-19 17:06:23 +00:00
Mikhail Glushenkov 2072db24ed GlobalOpt: EvaluateFunction() must not evaluate stores to weak_odr globals.
Fixes PR8389.

llvm-svn: 116812
2010-10-19 16:47:23 +00:00
Mikhail Glushenkov cf2afe008d Trailing whitespace.
llvm-svn: 116749
2010-10-18 21:16:00 +00:00
Dan Gohman 71af9db0e8 Make AliasSetTracker TBAA-aware, enabling TBAA-enabled LICM.
llvm-svn: 116743
2010-10-18 20:44:50 +00:00
Devang Patel 218f3206fa Transfer debug loc to lowered call.
Patch by Alexander Herz!

llvm-svn: 116733
2010-10-18 18:53:44 +00:00
Benjamin Kramer 1dc34b48dd Eliminate some calls to Value::getNameStr.
llvm-svn: 116670
2010-10-16 11:28:23 +00:00
Owen Anderson 18e4fed3fa Generalize MemCpyOpt's handling of call slot forwarding to function properly when the call slot
forwarding is implemented with a load/store pair rather than a memcpy.

llvm-svn: 116637
2010-10-15 22:52:12 +00:00
Owen Anderson 071cee0c81 CallGraphSCC passes implicity require CallGraph analysis.
llvm-svn: 116443
2010-10-13 22:00:45 +00:00
Rafael Espindola c2240adcc7 Fix PR8313 by changing ValueToValueMap use a TrackingVH.
llvm-svn: 116390
2010-10-13 02:08:17 +00:00
Rafael Espindola 229e38f0fe Be more consistent in using ValueToValueMapTy.
llvm-svn: 116387
2010-10-13 01:36:30 +00:00
Owen Anderson 8ac477ffb5 Begin adding static dependence information to passes, which will allow us to
perform initialization without static constructors AND without explicit initialization
by the client.  For the moment, passes are required to initialize both their
(potential) dependencies and any passes they preserve.  I hope to be able to relax
the latter requirement in the future.

llvm-svn: 116334
2010-10-12 19:48:12 +00:00
Kenneth Uildriks b8d7efe785 Now using a variant of the existing inlining heuristics to decide whether to create a given specialization of a function in PartialSpecialization. If the total performance bonus across all callsites passing the same constant exceeds the specialization cost, we create the specialization.
llvm-svn: 116158
2010-10-09 22:06:36 +00:00
Dan Gohman 2fd85d7cd2 Filter out illegal formulae after updating offsets, not before, so that
formulae which become illegal as a result of the offset updating don't
escape.

This is for rdar://8529692. No testcase yet, because the given cases
hit use-list ordering differences.

llvm-svn: 116093
2010-10-08 19:33:26 +00:00
Daniel Dunbar d4e9c3b43a Update CMake.
llvm-svn: 116034
2010-10-08 02:30:03 +00:00
Dan Gohman 5947e1626a Delete the FormulaSorter class and inline its one method into its
one user. This code will be restructured soon and FormulaSorter
is getting in the way.

llvm-svn: 116012
2010-10-07 23:52:18 +00:00
Dan Gohman 1b61fd9bff Fix a spello.
llvm-svn: 116011
2010-10-07 23:43:09 +00:00
Dan Gohman 34f37e0d04 Charge a formula for explicit multiplies on scaled registers too,
not just base registers.

llvm-svn: 116010
2010-10-07 23:41:58 +00:00
Dan Gohman 49d638b45a Use size_t for consistency.
llvm-svn: 116009
2010-10-07 23:37:58 +00:00
Dan Gohman 8e72611058 When merging one use into another, transfer the offsets from
the old use to the new one.

llvm-svn: 116008
2010-10-07 23:36:45 +00:00
Dan Gohman a7b68d6d95 Fix LSR to keep the RegUseTracker up to date when combining users.
This doesn't usually matter, because the other heuristics usually
succeed regardless, but it's good to keep the register use
bookkeeping consistent.

llvm-svn: 116005
2010-10-07 23:33:43 +00:00
Devang Patel 57da4caa85 Remove LoopIndexSplit pass. It is neither maintained nor used by anyone.
llvm-svn: 116004
2010-10-07 23:29:37 +00:00
Owen Anderson df7a4f2515 Now with fewer extraneous semicolons!
llvm-svn: 115996
2010-10-07 22:25:06 +00:00
Owen Anderson 9786868939 Add initialization routines for Instrumentation.
llvm-svn: 115971
2010-10-07 20:17:24 +00:00
Owen Anderson f7ef5dfccc Add initialization routines to InstCombine.
llvm-svn: 115965
2010-10-07 20:04:55 +00:00
Owen Anderson bf70a035f0 Add an initialization routine for libLLVMipo.a
llvm-svn: 115933
2010-10-07 18:09:59 +00:00
Owen Anderson 4698c5d7f7 Next step on the getting-rid-of-static-ctors train: begin adding per-library
initialization functions that initialize the set of passes implemented in
that library.  Add C bindings for these functions as well.

llvm-svn: 115927
2010-10-07 17:55:47 +00:00
Owen Anderson 5e19bfcde3 Move the pass initialization helper functions into the llvm namespace, and add
a header declaring them all.  This is also where we will declare per-library pass-set
initializer functions down the road.

llvm-svn: 115900
2010-10-07 04:13:08 +00:00
Owen Anderson 6da4d820fa Since the Hello pass is built as a loadable dynamic library, don't try to convert it to new-style registration yet.
llvm-svn: 115881
2010-10-07 00:31:16 +00:00
Owen Anderson 13a642da0b Now that the profitable bits of EnableFullLoadPRE have been enabled by default, rip out the remainder.
Anyone interested in more general PRE would be better served by implementing it separately, to get real
anticipation calculation, etc.

llvm-svn: 115337
2010-10-01 20:02:55 +00:00
Eric Christopher 3ad2f3a2f2 Fix the other half of the alignment changing issue by making sure that the
memcpy alignment is the minimum of the incoming alignments.

Fixes PR 8266.

llvm-svn: 115305
2010-10-01 09:02:05 +00:00
Chris Lattner c663a67384 fix PR8267 - Instcombine shouldn't optimizer away volatile memcpy's.
llvm-svn: 115296
2010-10-01 05:51:02 +00:00
Dale Johannesen dd224d2333 Massive rewrite of MMX:
The x86_mmx type is used for MMX intrinsics, parameters and
return values where these use MMX registers, and is also
supported in load, store, and bitcast.

Only the above operations generate MMX instructions, and optimizations
do not operate on or produce MMX intrinsics. 

MMX-sized vectors <2 x i32> etc. are lowered to XMM or split into
smaller pieces.  Optimizations may occur on these forms and the
result casted back to x86_mmx, provided the result feeds into a
previous existing x86_mmx operation.

The point of all this is prevent optimizations from introducing
MMX operations, which is unsafe due to the EMMS problem.

llvm-svn: 115243
2010-09-30 23:57:10 +00:00
Owen Anderson 3170a25a84 We do want to allow LoadPRE to perform LICM-like transformations: we already consider PHI nodes to be negligible for
code size (making this transform code size neutral), and it allows us to hoist values out of loops, which is always
a good thing.

llvm-svn: 115205
2010-09-30 20:53:04 +00:00
Jakob Stoklund Olesen eb12f49fb7 Try again to disable critical edge splitting in CodeGenPrepare.
The bug that broke i386 linux has been fixed in r115191.

llvm-svn: 115204
2010-09-30 20:51:52 +00:00
Benjamin Kramer 5d66e5feb8 Tighten up prototype verification of strchr and strrchr to avoid a crash in the very unlikely case that someone passes an integer > i64 to strchr.
llvm-svn: 115144
2010-09-30 11:21:59 +00:00
Benjamin Kramer 2b76c66fd6 Add constant folding for strspn and strcspn to SimplifyLibCalls.
llvm-svn: 115116
2010-09-30 00:58:35 +00:00
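A minimal C sketch of what the new folds buy (the call sites and constants below are illustrative, not taken from the commit): when both arguments are string literals, SimplifyLibCalls can evaluate the call at compile time.

    #include <assert.h>
    #include <string.h>

    int main(void) {
      /* Both operands are constant, so the calls can fold to 4 and 3 respectively. */
      assert(strspn("abcad", "abc") == 4);   /* initial run drawn from "abc" */
      assert(strcspn("xyzabc", "abc") == 3); /* initial run avoiding "abc" */
      return 0;
    }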
Benjamin Kramer 38d22f69fc Add strpbrk folding to SimplifyLibCalls.
llvm-svn: 115111
2010-09-29 23:52:12 +00:00
Benjamin Kramer 8e861d7eee Simplify the loop in StrChrOptimizer. FileCheckize test.
llvm-svn: 115095
2010-09-29 22:29:12 +00:00
Benjamin Kramer 824645abc9 Teach SimplifyLibCalls how to optimize strrchr.
llvm-svn: 115091
2010-09-29 21:50:51 +00:00
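The commit doesn't list the individual strrchr rewrites; a plausible sketch in C of the classic ones (a nul needle becomes a strlen, a constant haystack folds outright), shown only as source-level equivalences:

    #include <assert.h>
    #include <string.h>

    int main(void) {
      const char *s = "hello";
      /* strrchr(s, '\0') always points at the terminator, i.e. s + strlen(s). */
      assert(strrchr(s, '\0') == s + strlen(s));
      /* With a constant haystack, the result is known at compile time. */
      assert(*strrchr("abcabc", 'b') == 'b');
      return 0;
    }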
Owen Anderson 99c985c37d Fix PR8247: JumpThreading can cause a block to become unreachable while still having a predecessor, if it is part of a self-loop.
Because of this, we cannot use the Simplify* APIs, as they can assert-fail on unreachable code.  Since it's not easy to determine
if a given threading will cause a block to become unreachable, simply defer simplification to later InstCombine and/or
DCE passes.

llvm-svn: 115082
2010-09-29 20:34:41 +00:00
Owen Anderson d67ca0ed4c Revert r114919, which caused some serious regressions on ARM.
llvm-svn: 115053
2010-09-29 18:05:19 +00:00
Oscar Fuentes b4b12535e8 Removed a bunch of unnecessary target_link_libraries.
llvm-svn: 114999
2010-09-28 22:39:14 +00:00
Owen Anderson 9c93fd5598 Weight loop unrolling counts by nesting depth. Unrolling deeply nested loops tends to cause high
register pressure and thus excess spills, which we don't currently recover from well.  This should
be re-evaluated in the future if our ability to generate good spills/splits improves.

Partial fix for <rdar://problem/7635585>.

llvm-svn: 114919
2010-09-27 22:58:54 +00:00
Jakob Stoklund Olesen 415a7a6fec Revert "Disable codegen prepare critical edge splitting. Machine instruction passes now"
This reverts revision 114633. It was breaking llvm-gcc-i386-linux-selfhost.

It seems there is a downstream bug that is exposed by
-cgp-critical-edge-splitting=0. When that bug is fixed, this patch can go back
in.

Note that the changes to tailcallfp2.ll are not reverted. They were good and are
required.

llvm-svn: 114859
2010-09-27 18:43:48 +00:00
Dan Gohman 16ef49686c Delete an unused function.
llvm-svn: 114841
2010-09-27 16:58:21 +00:00
Owen Anderson b590a927cd LoadPRE was not properly checking that the load it was PRE'ing post-dominated the block it was being hoisted to.
Splitting critical edges at the merge point only addressed part of the issue; it is also possible for non-post-domination
to occur when the path from the load to the merge has branches in it.  Unfortunately, full anticipation analysis is
time-consuming, so for now approximate it.  This is strictly more conservative than real anticipation, so we will miss
some cases that real PRE would allow, but we also no longer insert loads into paths where they didn't exist before. :-)

This is a very slight net positive on SPEC for me (0.5% on average).  Most of the benchmarks are largely unaffected, but
when it pays off it pays off decently: 181.mcf improves by 4.5% on my machine.

llvm-svn: 114785
2010-09-25 05:26:18 +00:00
Eric Christopher ebacd2b023 If we're changing the source of a memcpy we need to use the alignment
of the source, not the original alignment since it may no longer
be valid.

Fixes rdar://8400094

llvm-svn: 114781
2010-09-25 00:57:26 +00:00
Michael J. Spencer ded5f66813 Get rid of pop_macro warnings on MSVC.
llvm-svn: 114750
2010-09-24 19:48:47 +00:00
Bob Wilson 3aecb15f0a Fix llvm-extract so that it changes the linkage of all GlobalValues to
"external" even when doing lazy bitcode loading.  This was broken because
a function that is not materialized fails the !isDeclaration() test.

llvm-svn: 114666
2010-09-23 17:25:06 +00:00
Evan Cheng 794aaa79e2 Disable codegen prepare critical edge splitting. Machine instruction passes now
break critical edges on demand.

llvm-svn: 114633
2010-09-23 06:55:34 +00:00
Bob Wilson b6832a4372 When moving zext/sext to be folded with a load, ignore the issue of whether
truncates are free only in the case where the extended type is legal but the
load type is not.  If both types are illegal, such as when they are too big,
the load may not be legalized into an extended load.

llvm-svn: 114568
2010-09-22 18:44:56 +00:00
Bob Wilson 4ddcb6a6b4 Move a sign-extend or a zero-extend of a load to the same basic block as the
load when the type of the load is not legal, even if truncates are not free.
The load is going to be legalized to an extending load anyway.

llvm-svn: 114488
2010-09-21 21:54:27 +00:00
Bob Wilson ff714f9992 Clarify a comment.
llvm-svn: 114487
2010-09-21 21:44:14 +00:00
Gabor Greif a06741b356 do not rely on the implicit-dereference semantics of dyn_cast_or_null
llvm-svn: 114278
2010-09-18 11:55:34 +00:00
Gabor Greif aaa22cf1b6 do not rely on the implicit-dereference semantics of dyn_cast_or_null
llvm-svn: 114277
2010-09-18 11:53:39 +00:00
Owen Anderson d104806575 Use a depth-first iteration in CorrelatedValuePropagation to avoid wasting time trying
to optimize unreachable blocks.

llvm-svn: 114105
2010-09-16 18:35:07 +00:00
Dale Johannesen f95f59a0c2 When substituting sunkaddrs into indirect arguments of an asm, we were
walking the asm arguments once and stashing their Values.  This is
wrong because the same memory location can be in the list twice, and
if the first one has a sunkaddr substituted, the stashed value for the
second one will be wrong (use-after-free).  PR 8154.

llvm-svn: 114104
2010-09-16 18:30:55 +00:00
Chris Lattner 67e534505d fix PR8144, a bug where constant merge would merge globals marked
attribute(used).

llvm-svn: 113911
2010-09-15 00:30:11 +00:00
Owen Anderson d361aac3d0 Remove the option to disable LazyValueInfo in JumpThreading, as it is now
on by default and has received significant testing.

llvm-svn: 113852
2010-09-14 20:57:41 +00:00
Chris Lattner f1144f0929 fix PR8102, a case where we'd copyValue from a value that we already
deleted.  Fix this by doing the copyValue's before we delete stuff!

The testcase only repros the problem on my system with valgrind.

llvm-svn: 113820
2010-09-14 00:19:00 +00:00
Michael J. Spencer 93c9b2ea93 Revert "CMake: Get rid of LLVMLibDeps.cmake and export the libraries normally."
This reverts commit r113632

Conflicts:

	cmake/modules/AddLLVM.cmake

llvm-svn: 113819
2010-09-13 23:59:48 +00:00
Eric Christopher e3a89f9f9c Remove unused variable.
llvm-svn: 113769
2010-09-13 18:27:59 +00:00
John Thompson 1094c80281 Added skeleton for inline asm multiple alternative constraint support.
llvm-svn: 113766
2010-09-13 18:15:37 +00:00
Owen Anderson c237a849e3 Re-apply r113679, which was reverted in r113720, which added a pair of new instcombine transforms
to expose greater opportunities for store narrowing in codegen.  This patch fixes a potential
infinite loop in instcombine caused by one of the introduced transforms being overly aggressive.

llvm-svn: 113763
2010-09-13 17:59:27 +00:00
Eric Christopher 26abd3e0c2 Revert 113679, it was causing an infinite loop in a testcase that I've sent
on to Owen.

llvm-svn: 113720
2010-09-12 06:09:23 +00:00
Owen Anderson 70f4524427 Invert and-of-or into or-of-and when doing so would allow us to clear bits of the and's mask.
This can result in increased opportunities for store narrowing in code generation.  Update a number of
tests for this change.  This fixes <rdar://problem/8285027>.

Additionally, because this inverts the order of ors and ands, some patterns for optimizing or-of-and-of-or
no longer fire in instances where they did originally.  Add a simple transform which recaptures most of these
opportunities: if we have an or-of-constant-or and have failed to fold away the inner or, commute the order 
of the two ors, to give the non-constant or a chance for simplification instead.

llvm-svn: 113679
2010-09-11 05:48:06 +00:00
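A hedged C illustration of the inversion (the masks are made up for the example): rewriting the and-of-or as an or-of-and shrinks the and's mask from 0xFF to 0x0F, since the bits the or forces to 1 no longer need to pass through the and.

    #include <assert.h>

    unsigned and_of_or(unsigned x) { return (x | 0xF0u) & 0xFFu; }
    unsigned or_of_and(unsigned x) { return (x & 0x0Fu) | 0xF0u; }

    int main(void) {
      for (unsigned x = 0; x < 1000000u; ++x)
        assert(and_of_or(x) == or_of_and(x));
      assert(and_of_or(0xFFFFFFFFu) == or_of_and(0xFFFFFFFFu));
      return 0;
    }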
Gabor Greif 2f5f696b66 typos
llvm-svn: 113647
2010-09-10 22:25:58 +00:00
Michael J. Spencer dc38d36ccb CMake: Get rid of LLVMLibDeps.cmake and export the libraries normally.
llvm-svn: 113632
2010-09-10 21:14:25 +00:00
Benjamin Kramer 77ab138f84 This transform is also performed by InstructionSimplify, remove the duplicate.
llvm-svn: 113608
2010-09-10 19:52:35 +00:00
Owen Anderson d85c9ccdba Lower the unrolling threshold to 150. Empirical tests indicate that this is a sweet spot in the performance per
code size increase curve.

llvm-svn: 113595
2010-09-10 17:57:00 +00:00
Owen Anderson 04cf3fd761 What the loop unroller cares about, rather than just not unrolling loops with calls, is
not unrolling loops that contain calls that would be better off getting inlined.  This mostly
comes up when an interleaved devirtualization pass has devirtualized a call which the inliner
will inline on a future pass.  Thus, rather than blocking all loops containing calls, add
a metric for "inline candidate calls" and block loops containing those instead.

llvm-svn: 113535
2010-09-09 20:32:23 +00:00
Owen Anderson 6270515918 Revert r113439, which relaxed the requirement that loops containing calls cannot be unrolled. After some discussion,
there seems to be a better way to achieve the same effect.

llvm-svn: 113528
2010-09-09 20:02:23 +00:00
Owen Anderson 11ab204fdc r113526 introduced an unintended change to the loop unrolling threshold. Revert it.
llvm-svn: 113527
2010-09-09 19:11:57 +00:00
Owen Anderson b61b1647e2 Fix typo in code to cap the loop code size reduction calculation.
llvm-svn: 113526
2010-09-09 19:08:59 +00:00
Owen Anderson 62ea1b718c Use code-size reduction metrics to estimate the amount of savings we'll get when we unroll a loop.
Next step is to recalculate the threshold values given this new heuristic.

llvm-svn: 113525
2010-09-09 19:07:31 +00:00
Owen Anderson 8084dbaf8e Relax the "don't unroll loops containing calls" rule. Instead, when a loop contains a call, lower the
unrolling threshold to the optimize-for-size threshold.  Basically, for loops containing calls, unrolling
can still be profitable as long as the loop is REALLY small.

llvm-svn: 113439
2010-09-08 23:10:07 +00:00
Owen Anderson 3fe002dfb5 Generalize instcombine's support for combining multiple bit checks into a single test. Patch by Dirk Steinke!
llvm-svn: 113423
2010-09-08 22:16:17 +00:00
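A source-level sketch (illustrative masks, not from the patch) of the kind of combining this enables: two single-bit tests of the same value collapse into one masked compare.

    #include <assert.h>

    int two_tests(unsigned f) { return (f & 8u) != 0 && (f & 4u) != 0; }
    int one_test(unsigned f)  { return (f & 12u) == 12u; }

    int main(void) {
      for (unsigned f = 0; f < 64u; ++f)
        assert(two_tests(f) == one_test(f));
      return 0;
    }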
Owen Anderson a4d9c78aa1 Add a separate unrolling threshold when the current function is being optimized for size.
The threshold value of 50 is arbitrary, and I chose it simply by analogy to the inlining thresholds, where
the baseline unrolling threshold is slightly smaller than the baseline inlining threshold.  This could
undoubtedly use some tuning.

llvm-svn: 113306
2010-09-07 23:15:30 +00:00
Chris Lattner 6e27b3e004 Fix a serious performance regression introduced by r108687 on linux:
turning (fptrunc (sqrt (fpext x))) -> (sqrtf x)  is great, but we have
to delete the original sqrt as well.  Not doing so causes us to do 
two sqrt's when building with -fmath-errno (the default on linux).

llvm-svn: 113260
2010-09-07 20:01:38 +00:00
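The shape of the pattern in C, as a sketch (assuming the usual conditions under which the fold fires at all): the point of the fix is that the double-precision call must actually be deleted, otherwise -fmath-errno builds compute the square root twice.

    #include <assert.h>
    #include <math.h>

    float promoted(float x) { return (float)sqrt((double)x); } /* fptrunc(sqrt(fpext x)) */
    float narrowed(float x) { return sqrtf(x); }               /* what the fold produces */

    int main(void) {
      /* Exact perfect squares, so both forms agree bit-for-bit. */
      assert(promoted(4.0f) == narrowed(4.0f));
      assert(promoted(9.0f) == 3.0f);
      return 0;
    }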
Nick Lewycky 71972d45dc Fix major bug in thunk detection. Also verify the calling convention.
Switch from isWeakForLinker to mayBeOverridden which is more accurate.

Add more statistics and debugging info. Add comments. Move static function
outside anonymous namespace.

llvm-svn: 113190
2010-09-07 01:42:10 +00:00
Chris Lattner be9019090e fix PR8067, an over-aggressive assertion in LICM.
llvm-svn: 113146
2010-09-06 05:11:24 +00:00
Chris Lattner b01c24a945 Teach loop rotate to hoist trivially invariant instructions
in the duplicated block instead of duplicating them.  

Duplicating them into the end of the loop and the preheader 
means that we got a phi node in the header of the loop, 
which prevented LICM from hoisting them.  GVN would
usually come around later and merge the duplicated 
instructions so we'd get reasonable output... except that
anything dependent on the shoulda-been-hoisted value can't
be hoisted.  In PR5319 (which this fixes), a memory value
didn't get promoted.

llvm-svn: 113134
2010-09-06 01:10:22 +00:00
Chris Lattner da24b9a49a pull a simple method out of LICM into a new
Loop::hasLoopInvariantOperands method. Remove
a useless and confusing Loop::isLoopInvariant(Instruction)
method, which didn't do what you thought it did.

No functionality change.

llvm-svn: 113133
2010-09-06 01:05:37 +00:00
Chris Lattner 1edf7434cf more cleanups
llvm-svn: 113115
2010-09-05 20:13:07 +00:00
Chris Lattner e6214557e7 Change lower atomic pass to use IntrinsicInst to simplify it a bit.
llvm-svn: 113114
2010-09-05 20:10:47 +00:00
Chris Lattner 05ef361b5e eliminate some non-obvious casts. UndefValue isa Constant.
llvm-svn: 113113
2010-09-05 20:03:09 +00:00
Nick Lewycky e3ac69eca3 Fix warning reported by MSVC++ builder.
llvm-svn: 113106
2010-09-05 09:11:38 +00:00
Nick Lewycky f3a07ec394 Switch FnSet to containing the ComparableFunction instead of a pointer to one.
This reduces malloc traffic (yay!) and removes MergeFunctionsEqualityInfo.

llvm-svn: 113105
2010-09-05 09:00:32 +00:00
Nick Lewycky 0095937b13 Fix many bugs when merging weak-strong and weak-weak pairs. We now merge all
strong functions first to make sure they're the canonical definitions and then
do a second pass looking only for weak functions.

llvm-svn: 113104
2010-09-05 08:22:49 +00:00
Chris Lattner 65b48b5dfc zap dead code.
llvm-svn: 113073
2010-09-04 18:12:00 +00:00
Dan Gohman 487e250109 Fix LoopSimplify to notify ScalarEvolution when splitting a loop backedge
into an inner loop, as the new loop iteration may differ substantially.
This fixes PR8078.

llvm-svn: 113057
2010-09-04 02:42:48 +00:00
Chris Lattner 50506787d1 fix a bug in my licm rewrite when a load from the promoted memory
location is being re-stored to the memory location.  We would get
a dangling pointer from the SSAUpdate data structure and miss a 
use.  This fixes PR8068

llvm-svn: 113042
2010-09-04 00:12:30 +00:00
Owen Anderson c91c1a205a Propagate non-local comparisons. Fixes PR1757.
llvm-svn: 113025
2010-09-03 22:47:08 +00:00
Owen Anderson c725462245 Add support for simplifying a load from a computed value to a load from a global when it
is provable that they're equivalent.  This fixes PR4855.

llvm-svn: 112994
2010-09-03 19:08:37 +00:00
Chris Lattner affc0e42f0 fix more AST updating bugs, correcting miscompilation in PR8041
llvm-svn: 112878
2010-09-02 22:19:10 +00:00
Duncan Sands 6778149f7e Reapply commit 112699, speculatively reverted by echristo, since
I'm sure it is harmless.  Original commit message:
If PrototypeValue is erased in the middle of using the SSAUpdator
then the SSAUpdator may access freed memory.  Instead, simply pass
in the type and name explicitly, which is all that was used anyway.

llvm-svn: 112810
2010-09-02 08:14:03 +00:00
Chris Lattner 8af45a889d deepen my MMX/SRoA hack to avoid hurting non-x86 codegen.
llvm-svn: 112763
2010-09-01 23:09:27 +00:00
Dan Gohman 0ad7d9c24e Fix loop unswitching's assumption that a code path which either
infinite loops or exits will eventually exit. This fixes PR5373.

llvm-svn: 112745
2010-09-01 21:46:45 +00:00
Owen Anderson 73f988cafa JumpThreading keeps LazyValueInfo up to date, so we don't need to rerun it
if we schedule another LVI-using pass afterwards.

llvm-svn: 112722
2010-09-01 18:27:22 +00:00
Eric Christopher a5d315c665 Speculatively revert 112699 and 112702, they seem to be causing
self host errors on clang-x86-64.

llvm-svn: 112719
2010-09-01 17:29:10 +00:00
Duncan Sands f7b18437b5 If PrototypeValue is erased in the middle of using the SSAUpdator
then the SSAUpdator may access freed memory.  Instead, simply pass
in the type and name explicitly, which is all that was used anyway.

llvm-svn: 112699
2010-09-01 10:29:33 +00:00
Chris Lattner 34e5361eb5 add a gross hack to work around a problem that Argiris reported
on llvmdev: SRoA is introducing MMX datatypes like <1 x i64>,
which then cause random problems because the X86 backend is
producing mmx stuff without inserting proper emms calls.

In the short term, force off MMX datatypes.  In the long term,
the X86 backend should not select generic vector types to MMX
registers.  This is being worked on, but won't be done in time
for 2.8.  rdar://8380055

llvm-svn: 112696
2010-09-01 05:14:33 +00:00
Dan Gohman 110ed64fbb Revert 112442 and 112440 until the compile time problems introduced
by 112440 are resolved.

llvm-svn: 112692
2010-09-01 01:45:53 +00:00
Chris Lattner 030f02021b licm is wasting time hoisting constant foldable operations,
instead of hoisting them, just fold them away.  This occurs in the
testcase for PR8041, for example.

llvm-svn: 112669
2010-08-31 23:00:16 +00:00
Chris Lattner daca6f3483 tidy up
llvm-svn: 112643
2010-08-31 21:21:25 +00:00
Owen Anderson 3c84ecb067 More cleanups of my JumpThreading transforms, including extracting some duplicated code into a helper function.
llvm-svn: 112634
2010-08-31 20:26:04 +00:00
Owen Anderson 6fdcb172a9 Add an RAII helper to make cleanup of the RecursionSet more fool-proof.
llvm-svn: 112628
2010-08-31 19:24:27 +00:00
Owen Anderson 048efbe225 Only try to clean up the current block if we changed that block already.
llvm-svn: 112625
2010-08-31 18:55:52 +00:00
Owen Anderson cd4de7f399 Refactor my fix for PR5652 to terminate the predecessor lookups after the first failure.
llvm-svn: 112620
2010-08-31 18:48:48 +00:00
Nick Lewycky 68984ede5c Fix an infinite loop; merging two functions will create a new function (if the
two are weak, we make them thunks to a new strong function) so don't iterate
through the function list as we're modifying it.

Also add back the outermost loop which got removed during the cleanups.

llvm-svn: 112595
2010-08-31 08:29:37 +00:00
Owen Anderson ce401be792 Don't perform an extra traversal of the function just to do cleanup. We can safely simplify instructions after each block has been processed without worrying about iterator invalidation.
llvm-svn: 112594
2010-08-31 07:55:56 +00:00
Owen Anderson 48d58ad64c Rename ValuePropagation to a more descriptive CorrelatedValuePropagation.
llvm-svn: 112591
2010-08-31 07:48:34 +00:00
Owen Anderson d2918a07bd Rename file to something more descriptive.
llvm-svn: 112590
2010-08-31 07:41:39 +00:00
Owen Anderson 3997a07fb9 More Chris-inspired JumpThreading fixes: use ConstantExpr to correctly constant-fold undef, and be more careful with its return value.
This actually exposed an infinite recursion bug in ComputeValueKnownInPredecessors which theoretically already existed (in JumpThreading's
handling of and/or of i1's), but never manifested before.  This patch adds a tracking set to prevent this case.

llvm-svn: 112589
2010-08-31 07:36:34 +00:00
Nick Lewycky 0464d1d7ec Switch to DenseSet, simplifying much more code. We now have a single iteration
where we hash, compare and fold, instead of one iteration where we build up
the hash buckets and a second one to fold.

llvm-svn: 112582
2010-08-31 05:53:05 +00:00
Owen Anderson 376597c13e Remove r111665, which implemented store-narrowing in InstCombine. Chris discovered a miscompilation in it, and it's not easily
fixable at the optimizer level. I'll investigate reimplementing it in DAGCombine.

llvm-svn: 112575
2010-08-31 04:41:06 +00:00
Owen Anderson b58b3c0dda Fix a typo.
llvm-svn: 112560
2010-08-30 23:59:30 +00:00
Owen Anderson b974dbbdd7 Cleanups suggested by Chris.
llvm-svn: 112553
2010-08-30 23:34:17 +00:00
Owen Anderson c910acb54a Re-apply r112539, being more careful to respect the return values of the constant folding methods. Additionally,
use the ConstantExpr::get*() methods to simplify some constant folding.

llvm-svn: 112550
2010-08-30 23:22:36 +00:00
Owen Anderson 30bacbdfdf Add statistics to evaluate this pass.
llvm-svn: 112545
2010-08-30 22:45:55 +00:00
Owen Anderson 1ddcbbe49c Revert r112539. It accidentally introduced a miscompilation.
llvm-svn: 112543
2010-08-30 22:33:41 +00:00
Owen Anderson 75f6037c7c Fixes and cleanups pointed out by Chris. In general, be careful to handle 0 results from ComputeValueKnownInPredecessors
(indicating undef), and re-use existing constant folding APIs.

llvm-svn: 112539
2010-08-30 22:07:52 +00:00
Chris Lattner c843fca2fd rewrite DwarfEHPrepare to use SSAUpdater to promote its allocas
instead of PromoteMemToReg.  This allows it to stop using DF and DT,
eliminating a computation of DT and DF from clang -O3.  Clang is now
down to 2 runs of DomFrontier.

llvm-svn: 112457
2010-08-29 19:54:28 +00:00
Chris Lattner f58382ed87 two changes: 1) make AliasSet hold the list of call sites with an
assertingvh so we get a violent explosion if the pointer dangles.

2) Fix AliasSetTracker::deleteValue to remove call sites with
   by-pointer comparisons instead of by-alias queries.  Using
   findAliasSetForCallSite can cause alias sets to get merged
   when they shouldn't, and can also miss alias sets when the
   call is readonly.

#2 fixes PR6889, which only repros with a .c file :(

llvm-svn: 112452
2010-08-29 18:42:23 +00:00
Chris Lattner 263f804699 LICM does get dead instructions input to it. Instead of sinking them
out of loops, just delete them.

llvm-svn: 112451
2010-08-29 18:22:25 +00:00
Chris Lattner 6ac0659a1c use moveBefore instead of remove+insert, it avoids some
symtab manipulation, so its faster (in addition to being
more elegant)

llvm-svn: 112450
2010-08-29 18:18:40 +00:00
Chris Lattner f03b4eac48 revert 112448 for now.
llvm-svn: 112449
2010-08-29 18:11:16 +00:00
Chris Lattner 11f8ad8211 optimize LICM::hoist to use moveBefore. Correct its updating
of AST to remove the hoisted instruction from the AST, since it
is no longer in the loop.

llvm-svn: 112448
2010-08-29 18:03:33 +00:00
Chris Lattner 1a1ed69435 fix some bugs (found by inspection) where LICM would not update
the AST correctly.  When sinking an instruction, it should not add
entries for the sunk instruction to the AST, it should remove
the entry for the sunk instruction.  The blocks being sunk to
are not in the loop, so their instructions shouldn't be in the
AST (yet)!

llvm-svn: 112447
2010-08-29 18:00:00 +00:00
Chris Lattner cc9cbc66a3 rework the ownership of subloop alias information: instead of
keeping them around until the pass is destroyed, keep them
around a) just when useful (not for outer loops) and b) destroy
them right after we use them.  This should reduce memory use
and fixes potential bugs where a loop is deleted and another
loop gets allocated to the same address.

llvm-svn: 112446
2010-08-29 17:46:00 +00:00
Chris Lattner bc1a65ac6c apparently unswitch had the same "Feature". Stop its
claims that it preserves domfrontier if it doesn't really.

llvm-svn: 112445
2010-08-29 17:23:19 +00:00
Chris Lattner d6f46b8af8 now that loop passes don't use DomFrontier, there is no reason
for the unroller to pretend it supports updating it.  It still
has a horrible hack for DomTree.

llvm-svn: 112444
2010-08-29 17:21:35 +00:00
Dan Gohman 002ff89cbd Optionally rerun dedicated-register filtering after applying
other filtering techniques, as those may allow it to filter
out more obviously unprofitable candidates.

llvm-svn: 112441
2010-08-29 16:39:22 +00:00
Dan Gohman f031792cc6 Fix several areas in LSR to do a better job keeping the main
LSRInstance data structures up to date. This fixes some
pessimizations caused by stale data which will be exposed
in an upcoming change.

llvm-svn: 112440
2010-08-29 16:32:54 +00:00
Dan Gohman e9e0873b08 Refactor the three main groups of code out of
NarrowSearchSpaceUsingHeuristics into separate functions.

llvm-svn: 112439
2010-08-29 16:09:42 +00:00
Dan Gohman 37a0f68036 Delete a bogus check.
llvm-svn: 112438
2010-08-29 15:30:29 +00:00
Dan Gohman b6a520d63c Add some comments.
llvm-svn: 112437
2010-08-29 15:27:08 +00:00
Dan Gohman bf673e0652 Move this debug output into GenerateAllReuseFormula, to declutter
the high-level logic.

llvm-svn: 112436
2010-08-29 15:21:38 +00:00
Dan Gohman d366b6d5c8 Delete an unused declaration.
llvm-svn: 112435
2010-08-29 15:19:11 +00:00
Dan Gohman 4f13bbfefc Do one lookup instead of two.
llvm-svn: 112434
2010-08-29 15:18:49 +00:00
Chris Lattner f94f6bb0ba licm preserves the cfg, it doesn't have to explicitly say it
preserves domfrontier.  It does preserve AA though.

llvm-svn: 112419
2010-08-29 07:02:56 +00:00
Chris Lattner abe61ef3b4 now that it doesn't use the PromoteMemToReg function, LICM doesn't
require DomFrontier.  Dropping this doesn't actually save any runs
of the pass though.

llvm-svn: 112418
2010-08-29 06:49:44 +00:00
Chris Lattner 1dc98b47b5 completely rewrite the memory promotion algorithm in LICM.
Among other things, this uses SSAUpdater instead of 
PromoteMemToReg.

llvm-svn: 112417
2010-08-29 06:43:52 +00:00
Chris Lattner 9c3931a544 use getUniqueExitBlocks instead of a manual set.
llvm-svn: 112412
2010-08-29 05:12:21 +00:00
Chris Lattner 85bf5421e1 reimplement LICM::sink to use SSAUpdater instead of PromoteMemToReg.
This leads to much simpler code.

llvm-svn: 112410
2010-08-29 04:55:06 +00:00
Chris Lattner c3fb03e289 implement SSAUpdater::RewriteUseAfterInsertions, a helpful form of RewriteUse.
llvm-svn: 112409
2010-08-29 04:54:06 +00:00
Chris Lattner b50407f104 remove dead proto
llvm-svn: 112408
2010-08-29 04:53:24 +00:00
Chris Lattner cd96b4df56 reduce indentation in LICM::sink by using early exits, use
getUniqueExitBlocks instead of getExitBlocks and a manual
set to eliminate dupes.

llvm-svn: 112405
2010-08-29 04:28:20 +00:00
Chris Lattner 188cc5a0fc modernize this pass a bit: use efficient set/map and reduce indentation.
llvm-svn: 112404
2010-08-29 04:23:04 +00:00
Chris Lattner 13ee795c42 remove unions from LLVM IR. They are severely buggy and not
being actively maintained, improved, or extended.

llvm-svn: 112356
2010-08-28 04:09:24 +00:00
Chris Lattner 504e5100d3 remove the ABCD and SSI passes. They don't have any clients that
I'm aware of, aren't maintained, and LVI will be replacing their value.
nlewycky approved this on irc.

llvm-svn: 112355
2010-08-28 03:51:24 +00:00
Chris Lattner 50df36ac0a for completeness, allow undef also.
llvm-svn: 112351
2010-08-28 03:36:51 +00:00
Chris Lattner 95bb297c26 squish dead code.
llvm-svn: 112350
2010-08-28 03:21:03 +00:00
Chris Lattner d0214f3efe handle the constant case of vector insertion. For something
like this:

struct S { float A, B, C, D; };

struct S g;
struct S bar() { 
  struct S A = g;
  ++A.B;
  A.A = 42;
  return A;
}

we now generate:

_bar:                                   ## @bar
## BB#0:                                ## %entry
	movq	_g@GOTPCREL(%rip), %rax
	movss	12(%rax), %xmm0
	pshufd	$16, %xmm0, %xmm0
	movss	4(%rax), %xmm2
	movss	8(%rax), %xmm1
	pshufd	$16, %xmm1, %xmm1
	unpcklps	%xmm0, %xmm1
	addss	LCPI1_0(%rip), %xmm2
	pshufd	$16, %xmm2, %xmm2
	movss	LCPI1_1(%rip), %xmm0
	pshufd	$16, %xmm0, %xmm0
	unpcklps	%xmm2, %xmm0
	ret

instead of:

_bar:                                   ## @bar
## BB#0:                                ## %entry
	movq	_g@GOTPCREL(%rip), %rax
	movss	12(%rax), %xmm0
	pshufd	$16, %xmm0, %xmm0
	movss	4(%rax), %xmm2
	movss	8(%rax), %xmm1
	pshufd	$16, %xmm1, %xmm1
	unpcklps	%xmm0, %xmm1
	addss	LCPI1_0(%rip), %xmm2
	movd	%xmm2, %eax
	shlq	$32, %rax
	addq	$1109917696, %rax       ## imm = 0x42280000
	movd	%rax, %xmm0
	ret

llvm-svn: 112345
2010-08-28 01:50:57 +00:00
Chris Lattner dd6601048e optimize bitcasts from large integers to vector into vector
element insertion from the pieces that feed into the vector.
This handles a pattern that occurs frequently due to code
generated for the x86-64 abi.  We now compile something like
this:

struct S { float A, B, C, D; };
struct S g;
struct S bar() { 
  struct S A = g;
  ++A.A;
  ++A.C;
  return A;
}

into all nice vector operations:

_bar:                                   ## @bar
## BB#0:                                ## %entry
	movq	_g@GOTPCREL(%rip), %rax
	movss	LCPI1_0(%rip), %xmm1
	movss	(%rax), %xmm0
	addss	%xmm1, %xmm0
	pshufd	$16, %xmm0, %xmm0
	movss	4(%rax), %xmm2
	movss	12(%rax), %xmm3
	pshufd	$16, %xmm2, %xmm2
	unpcklps	%xmm2, %xmm0
	addss	8(%rax), %xmm1
	pshufd	$16, %xmm1, %xmm1
	pshufd	$16, %xmm3, %xmm2
	unpcklps	%xmm2, %xmm1
	ret

instead of icky integer operations:

_bar:                                   ## @bar
	movq	_g@GOTPCREL(%rip), %rax
	movss	LCPI1_0(%rip), %xmm1
	movss	(%rax), %xmm0
	addss	%xmm1, %xmm0
	movd	%xmm0, %ecx
	movl	4(%rax), %edx
	movl	12(%rax), %esi
	shlq	$32, %rdx
	addq	%rcx, %rdx
	movd	%rdx, %xmm0
	addss	8(%rax), %xmm1
	movd	%xmm1, %eax
	shlq	$32, %rsi
	addq	%rax, %rsi
	movd	%rsi, %xmm1
	ret

This resolves rdar://8360454

llvm-svn: 112343
2010-08-28 01:20:38 +00:00
Benjamin Kramer 83f9ff0452 Update CMake build. Add newline at end of file.
llvm-svn: 112332
2010-08-28 00:11:12 +00:00
Owen Anderson cf7f941121 Add a prototype of a new peephole optimizing pass that uses LazyValue info to simplify PHIs and select's.
This pass addresses the missed optimizations from PR2581 and PR4420.

llvm-svn: 112325
2010-08-27 23:31:36 +00:00
Chris Lattner 6c1395f62a Enhance the shift propagator to handle the case when you have:
A = shl x, 42
...
B = lshr ..., 38

which can be transformed into:
A = shl x, 4
...

iff we can prove that the would-be-shifted-in bits
are already zero.  This eliminates two shifts in the testcase
and allows eliminate of the whole i128 chain in the real example.

llvm-svn: 112314
2010-08-27 22:53:44 +00:00
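A C sanity check of the arithmetic behind the fold, under one concrete way of satisfying the "would-be-shifted-in bits are already zero" condition (x known to fit in 22 bits, my assumption for the example):

    #include <assert.h>
    #include <stdint.h>

    uint64_t two_shifts(uint64_t x) { return (x << 42) >> 38; }
    uint64_t one_shift(uint64_t x)  { return x << 4; }

    int main(void) {
      /* With x < 2^22 the left shift loses no bits, so the pair collapses. */
      for (uint64_t x = 0; x < (1u << 22); x += 4099)
        assert(two_shifts(x) == one_shift(x));
      assert(two_shifts((1u << 22) - 1) == one_shift((1u << 22) - 1));
      return 0;
    }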
Chris Lattner 18d7fc8fc6 Implement a pretty general logical shift propagation
framework, which is good at ripping through bitfield
operations.  This generalizes a bunch of the existing
xforms that instcombine does, such as 
  (x << c) >> c -> and
to handle intermediate logical nodes.  This is useful for
ripping up the "promote to large integer" code produced by
SRoA.

llvm-svn: 112304
2010-08-27 22:24:38 +00:00
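The (x << c) >> c -> and special case mentioned above, spelled out in C for unsigned (logical) shifts with an arbitrary example width of c = 9:

    #include <assert.h>
    #include <stdint.h>

    uint32_t shifted(uint32_t x) { return (x << 9) >> 9; }
    uint32_t masked(uint32_t x)  { return x & ((1u << (32 - 9)) - 1u); }

    int main(void) {
      uint32_t samples[] = { 0u, 1u, 0x00FFFFFFu, 0x12345678u, 0xFFFFFFFFu };
      for (int i = 0; i < 5; ++i)
        assert(shifted(samples[i]) == masked(samples[i]));
      return 0;
    }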
Chris Lattner 25a198e72b remove some special shift cases that have been subsumed into the
more general simplify demanded bits logic.

llvm-svn: 112291
2010-08-27 21:04:34 +00:00
Owen Anderson 99d4cb861b Fix typos in comments.
llvm-svn: 112286
2010-08-27 20:32:56 +00:00
Chris Lattner 7398434675 teach the truncation optimization that an entire chain of
computation can be truncated if it is fed by a sext/zext that doesn't
have to be exactly equal to the truncation result type.

llvm-svn: 112285
2010-08-27 20:32:06 +00:00
Chris Lattner 90cd746e63 Add an instcombine to clean up a common pattern produced
by the SRoA "promote to large integer" code, eliminating
some type conversions like this:

   %94 = zext i16 %93 to i32                       ; <i32> [#uses=2]
   %96 = lshr i32 %94, 8                           ; <i32> [#uses=1]
   %101 = trunc i32 %96 to i8                      ; <i8> [#uses=1]

This also unblocks other xforms from happening, now clang is able to compile:

struct S { float A, B, C, D; };
float foo(struct S A) { return A.A + A.B+A.C+A.D; }

into:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	pshufd	$1, %xmm0, %xmm2
	addss	%xmm0, %xmm2
	movdqa	%xmm1, %xmm3
	addss	%xmm2, %xmm3
	pshufd	$1, %xmm1, %xmm0
	addss	%xmm3, %xmm0
	ret

on x86-64, instead of:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	movd	%xmm0, %rax
	shrq	$32, %rax
	movd	%eax, %xmm2
	addss	%xmm0, %xmm2
	movapd	%xmm1, %xmm3
	addss	%xmm2, %xmm3
	movd	%xmm1, %rax
	shrq	$32, %rax
	movd	%eax, %xmm0
	addss	%xmm3, %xmm0
	ret

This seems pretty close to optimal to me, at least without
using horizontal adds.  This also triggers in lots of other
code, including SPEC.

llvm-svn: 112278
2010-08-27 18:31:05 +00:00
Owen Anderson 6ebbd92380 Use LVI to eliminate conditional branches where we've tested a related condition previously. Update tests for this change.
This fixes PR5652.

llvm-svn: 112270
2010-08-27 17:12:29 +00:00
Chris Lattner bfd2228182 optimize "integer extraction out of the middle of a vector" as produced
by SRoA.  This is part of rdar://7892780, but needs another xform to
expose this.

llvm-svn: 112232
2010-08-26 22:14:59 +00:00
Chris Lattner d4ebd6df5a optimize bitcast(trunc(bitcast(x))) where the result is a float and 'x'
is a vector to be a vector element extraction.  This allows clang to
compile:

struct S { float A, B, C, D; };
float foo(struct S A) { return A.A + A.B+A.C+A.D; }

into:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	movd	%xmm0, %rax
	shrq	$32, %rax
	movd	%eax, %xmm2
	addss	%xmm0, %xmm2
	movapd	%xmm1, %xmm3
	addss	%xmm2, %xmm3
	movd	%xmm1, %rax
	shrq	$32, %rax
	movd	%eax, %xmm0
	addss	%xmm3, %xmm0
	ret

instead of:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	movd	%xmm0, %rax
	movd	%eax, %xmm0
	shrq	$32, %rax
	movd	%eax, %xmm2
	addss	%xmm0, %xmm2
	movd	%xmm1, %rax
	movd	%eax, %xmm1
	addss	%xmm2, %xmm1
	shrq	$32, %rax
	movd	%eax, %xmm0
	addss	%xmm1, %xmm0
	ret

... eliminating half of the horribleness.

llvm-svn: 112227
2010-08-26 21:55:42 +00:00
Owen Anderson bd2ecc7e68 Make JumpThreading smart enough to properly thread StrSwitch when it's compiled with clang++.
llvm-svn: 112198
2010-08-26 17:40:24 +00:00
Dan Gohman ca26f79051 Reapply r112091 and r111922, support for metadata linking, with a
fix: add a flag to MapValue and friends which indicates whether
any module-level mappings are being made. In the common case of
inlining, no module-level mappings are needed, so MapValue doesn't
need to examine non-function-local metadata, which can be very
expensive in the case of a large module with really deep metadata
(e.g. a large C++ program compiled with -g).

This flag is a little awkward; perhaps eventually it can be moved
into the ClonedCodeInfo class.

llvm-svn: 112190
2010-08-26 15:41:53 +00:00
Daniel Dunbar ce45863f0d Revert r111922, "MapValue support for MDNodes. This is similar to r109117,
except ...", it is causing *massive* performance regressions when building Clang
with itself (-O3 -g).

llvm-svn: 112158
2010-08-26 03:48:11 +00:00
Daniel Dunbar 95fe13c720 Revert r112091, "Remap metadata attached to instructions when remapping
individual ...", which depends on r111922, which I am reverting.

llvm-svn: 112157
2010-08-26 03:48:08 +00:00
Chris Lattner 07afbd5a08 zap dead code.
llvm-svn: 112130
2010-08-26 01:13:54 +00:00
Dan Gohman 8f292e7a6d Rewrite ExtractGV, removing a bunch of stuff that didn't fully work,
and was over-complicated, and replacing it with a simple implementation.

llvm-svn: 112120
2010-08-26 00:22:55 +00:00
Chris Lattner 8df99b523e remove some llvmcontext arguments that are now dead post-refactoring.
llvm-svn: 112104
2010-08-25 23:00:45 +00:00
Dan Gohman fd824487a3 Remap metadata attached to instructions when remapping individual
instructions, not when remapping modules.

llvm-svn: 112091
2010-08-25 21:36:50 +00:00
Devang Patel 01262e129e DIGlobalVariable can be used to encode debug info for globals that are directly folded into a constant by FE.
llvm-svn: 112072
2010-08-25 18:52:02 +00:00
Dan Gohman a209503467 Use MapValue in the Linker instead of having a private function
which does the same thing. This eliminates redundant code and
handles MDNodes better. MDNode linking still doesn't fully
work yet though.

llvm-svn: 111941
2010-08-24 18:50:07 +00:00
Owen Anderson 7c853e877e Turn LVI on, previously detected failures should be fixed now.
llvm-svn: 111923
2010-08-24 17:21:18 +00:00
Dan Gohman 6901283544 MapValue support for MDNodes. This is similar to r109117, except
that it avoids a lot of unnecessary cloning by avoiding remapping
MDNode cycles when none of the nodes in the cycle actually need to
be remapped. Also it uses the new temporary MDNode mechanism.

llvm-svn: 111922
2010-08-24 17:10:10 +00:00
Owen Anderson 6ffa3f2aea Turn LVI back off, I have a testcase now.
llvm-svn: 111834
2010-08-23 19:59:27 +00:00
Owen Anderson 630add39a6 Re-enable LazyValueInfo. Monitoring for failures.
llvm-svn: 111816
2010-08-23 18:12:23 +00:00
Owen Anderson d31d82d75c Now that PassInfo and Pass::ID have been separated, move the rest of the passes over to the new registration API.
llvm-svn: 111815
2010-08-23 17:52:01 +00:00
Owen Anderson 84c29a096b Re-apply r111568 with a fix for the clang self-host.
llvm-svn: 111665
2010-08-20 18:24:43 +00:00
Owen Anderson 43057cd56a Revert r111568 to unbreak clang self-host.
llvm-svn: 111571
2010-08-19 23:25:16 +00:00
Owen Anderson bb723b228a When a set of bitmask operations, typically from a bitfield initialization, only modifies the low bytes of a value,
we can narrow the store to only over-write the affected bytes.

llvm-svn: 111568
2010-08-19 22:15:40 +00:00
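A sketch in C of the kind of bitfield-style update the narrowing targets (the byte index assumes a little-endian layout; big-endian targets would touch the other end of the word):

    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    void wide_store(uint32_t *p, uint8_t v)   { *p = (*p & ~0xFFu) | v; }
    void narrow_store(uint32_t *p, uint8_t v) { memcpy((uint8_t *)p, &v, 1); }

    int main(void) {
      uint32_t a = 0xAABBCCDDu, b = 0xAABBCCDDu, one = 1;
      wide_store(&a, 0x11);
      narrow_store(&b, 0x11);
      if (*(uint8_t *)&one == 1) /* little-endian host */
        assert(a == b);
      return 0;
    }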
Owen Anderson aac8cbb261 Disable LVI while I evaluate a failure.
llvm-svn: 111551
2010-08-19 19:47:08 +00:00
Owen Anderson 5c87dd55d3 Tentatively enabled LVI by default. I'll be monitoring for any failures.
llvm-svn: 111543
2010-08-19 19:04:40 +00:00
Dan Gohman 129a816ee6 Process the step before the start, because it's usually the simpler
of the two.

llvm-svn: 111495
2010-08-19 01:02:31 +00:00
Owen Anderson 208636fa33 Inform LazyValueInfo whenever a block is deleted, to avoid dangling pointer issues.
llvm-svn: 111382
2010-08-18 18:39:01 +00:00
Chris Lattner 3c603024bb Fix PR7755: knowing something about an inval for a pred
from the LHS should disable reconsidering that pred on the
RHS.  However, knowing something about the pred on the RHS
shouldn't disable subsequent additions on the RHS from
happening.

llvm-svn: 111349
2010-08-18 03:14:36 +00:00
Chris Lattner f0b5b67ba5 fit in 80 cols
llvm-svn: 111348
2010-08-18 03:13:35 +00:00
Chris Lattner b45de95345 remove some dead code.
llvm-svn: 111344
2010-08-18 02:41:56 +00:00
Chris Lattner 6aabb66139 remove dead prototype.
llvm-svn: 111342
2010-08-18 02:37:06 +00:00
Eric Christopher 51edc7b7e1 Temporarily revert r110987 as it's causing some miscompares in
vector heavy code.  I'll re-enable when we've tracked down the problem.

llvm-svn: 111318
2010-08-17 22:55:27 +00:00
Dan Gohman 5047ca0c02 When rotating loops, put the original header at the bottom of the
loop, making the resulting loop significantly less ugly.  Also, zap
its trivial PHI nodes, since it's easy.

llvm-svn: 111255
2010-08-17 17:39:21 +00:00
Dan Gohman 941020ed72 Use the getUniquePredecessor() utility function, instead of doing
what it does manually.

llvm-svn: 111248
2010-08-17 17:07:02 +00:00
Evan Cheng 8b637b177c Add an option to disable codegen prepare critical edge splitting. In theory, PHI elimination is already doing all (most?) of the splitting needed. But machine-licm and machine-sink seem to miss some important optimizations when splitting is disabled.
llvm-svn: 111224
2010-08-17 01:34:49 +00:00
Dan Gohman 89fdbaf99a Instead of having CollectSubexpr's categorize operands as interesting or
uninteresting, just put all the operands on one list and make
GenerateReassociations make the decision about what's interesting.
This is simpler, and it avoids an extra ScalarEvolution::getAddExpr call.

llvm-svn: 111133
2010-08-16 15:50:00 +00:00
Dan Gohman 9b7632df26 Put add operands in ScalarEvolution-canonical order, when convenient.
This isn't necessary, because ScalarEvolution sorts them anyway,
but it's tidier this way.

llvm-svn: 111132
2010-08-16 15:39:27 +00:00
Dan Gohman 6e964c7fb4 Avoid #include <ScalarEvolution.h> in LoopSimplify.cpp, which doesn't
actually use ScalarEvolution.

llvm-svn: 111124
2010-08-16 14:44:03 +00:00
Dan Gohman 250b754428 Instead, teach SimplifyCFG to trim non-address-taken blocks from
indirectbr destination lists.

llvm-svn: 111122
2010-08-16 14:41:14 +00:00
Dan Gohman aa445c0751 LoopSimplify shouldn't split loop backedges that use indirectbr. PR7867.
llvm-svn: 111061
2010-08-14 00:43:09 +00:00
Dan Gohman 4a63fad976 Teach SimplifyCFG how to simplify indirectbr instructions.
- Eliminate redundant successors.
 - Convert an indirectbr with one successor into a direct branch.

Also, generalize SimplifyCFG to be able to be run on a function entry block.
It knows quite a few simplifications which are applicable to the entry
block, and it only needs a few checks to avoid trouble with the entry block.

llvm-svn: 111060
2010-08-14 00:29:42 +00:00
Dan Gohman 081ffcd00b Fix LSR's ExtractImmediate and ExtractSymbol to avoid calling
ScalarEvolution::getAddExpr, which can be pretty expensive, when nothing
has changed, which is pretty common.

llvm-svn: 111042
2010-08-13 21:17:19 +00:00
Nate Begeman 2a0ca3e937 Reapply this transformation now that it is passing the external test which it previously failed.
llvm-svn: 110987
2010-08-13 00:17:53 +00:00
Chris Lattner 363226dfe8 fix PR7876: If ipsccp decides that a function's address is taken
before it rewrites the code, we need to use that in the post-rewrite pass.

llvm-svn: 110962
2010-08-12 22:25:23 +00:00
Eric Christopher ac40d49c70 Temporarily revert 110737 and 110734, they were causing failures
in an external testsuite.

llvm-svn: 110905
2010-08-12 07:01:22 +00:00
Nate Begeman 265363061e Add the minimal amount of smarts necessary to instcombine of shufflevectors to recognize
patterns generated by clang for transpose of a matrix in generic vectors.  This is made
of two parts:

1) Propagating vector extracts of hi/lo half into their users
2) Recognizing an insertion of even elements followed by the odd elements as an unpack.

Testcase to come, but this shrinks the # of shuffle instructions generated on x86 from ~40 to the minimal 8.

llvm-svn: 110734
2010-08-10 21:38:12 +00:00
Nick Lewycky f0067b668c Fix a use after free error caught by the valgrind builders.
llvm-svn: 110601
2010-08-09 21:03:28 +00:00
Eli Friedman f99e7e6643 PR7853: fix a silly mistake introduced in r101899, and add a test to make sure
it doesn't regress again.

llvm-svn: 110597
2010-08-09 20:49:43 +00:00
Nick Lewycky fbd2757cde Do more to modernize MergeFunctions. Refactor in response to Chris' code review.
llvm-svn: 110538
2010-08-08 05:04:23 +00:00
Owen Anderson 0398607714 Don't attempt the PRE inline asm calls, since we don't value number them yet. Fixes PR7835.
llvm-svn: 110489
2010-08-07 00:20:35 +00:00
Dan Gohman 0f7892b8ae Eliminate PromoteMemoryToRegisterID; just use addPreserved("mem2reg")
instead, as an example of what this looks like.

llvm-svn: 110478
2010-08-06 21:48:06 +00:00
Owen Anderson a7aed18624 Reapply r110396, with fixes to appease the Linux buildbot gods.
llvm-svn: 110460
2010-08-06 18:33:48 +00:00
Nick Lewycky 5a2849e166 Fix uninitialized variable warning.
Also move 'default' case next to a real case to help compiler optimize in
non-Debug builds.
No functionality change.

llvm-svn: 110435
2010-08-06 07:43:46 +00:00
Nick Lewycky f216f69ad9 Work in progress, cleaning up MergeFuncs.
Further clean up the comparison function by removing overly generalized
"domains".
Remove all understanding of ELF aliases and simplify folding code and comments.

llvm-svn: 110434
2010-08-06 07:21:30 +00:00
Owen Anderson bda59bd247 Revert r110396 to fix buildbots.
llvm-svn: 110410
2010-08-06 00:23:35 +00:00
Owen Anderson 755aceb5d0 Don't use PassInfo* as a type identifier for passes. Instead, use the address of the static
ID member as the sole unique type identifier.  Clean up APIs related to this change.

llvm-svn: 110396
2010-08-05 23:42:04 +00:00
Owen Anderson 4674dd6cf5 Give JumpThreading+LVI a long-form cl::opt so that it's easier to toggle the default.
llvm-svn: 110384
2010-08-05 22:11:31 +00:00
Owen Anderson 9f2bca02d7 Experiments show that we can safely increase our unrolling threshold without unduly impacting code size, particularly
since unrolling is not enabled at -Os.

llvm-svn: 110233
2010-08-04 18:32:46 +00:00
Dan Gohman ba81fc16a5 Fix whitespace.
llvm-svn: 110223
2010-08-04 17:43:57 +00:00
Dan Gohman 839c972102 Fix a comment.
llvm-svn: 110181
2010-08-04 01:16:35 +00:00
Dan Gohman 5442c71f2e Thread const correctness through a bunch of AliasAnalysis interfaces and
eliminate several const_casts.

Make CallSite implicitly convertible to ImmutableCallSite.

Rename the getModRefBehavior for intrinsic IDs to
getIntrinsicModRefBehavior to avoid overload ambiguity with CallSite,
which happens to be implicitly convertible to bool.

llvm-svn: 110155
2010-08-03 21:48:53 +00:00
Dan Gohman 3619660529 Make instcombine set explicit alignments on load or store
instructions with alignment 0, so that subsequent passes don't
need to bother checking the TargetData ABI size manually.

llvm-svn: 110128
2010-08-03 18:20:32 +00:00
Peter Collingbourne ddaaf40d24 Add an atomic lowering pass
llvm-svn: 110113
2010-08-03 16:19:16 +00:00
Dan Gohman 35e8a6209d Use unary + instead of a separate local variable for working
around std::min vs static const friction.

llvm-svn: 110112
2010-08-03 16:15:50 +00:00
Owen Anderson 8f306a779b Re-apply the infamous r108614, with a fix pointed out by Dirk Steinke.
llvm-svn: 110036
2010-08-02 09:32:13 +00:00
Oscar Fuentes 40b31ad3ee Prefix `next' iterator operation with `llvm::'.
Fixes potential ambiguity problems on VS 2010.

Patch by nobled!

llvm-svn: 110029
2010-08-02 06:00:15 +00:00
Daniel Dunbar c1b09c8644 Fix a -Wreorder warning.
llvm-svn: 110022
2010-08-02 05:43:46 +00:00
Nick Lewycky f52bd9cc33 Work in progress.
Start cleaning up MergeFunctions to look more like the rest of LLVM. The
primary change here is to move the methods responsible for comparison into the
new FunctionComparator object. Some comments added. There's more to do.

llvm-svn: 110021
2010-08-02 05:23:03 +00:00
Daniel Dunbar 0b636a24c7 Speculatively revert r108614, "Another attempt at getting the clang self-host to
like my instcombine patch.", in an attempt to fix Clang i386 bootstrap.
 - Also PR7719.

llvm-svn: 109953
2010-07-31 19:51:11 +00:00
Rafael Espindola 40f18838b7 The BlockExtractorPass() constructor was not reading the BlockFile and that was
exactly what bugpoint expected it to do.

There was also only one user of
BlockExtractorPass(const std::vector<BasicBlock*> &B), so just remove it and
make BlockExtractorPass read BlockFile.

This fixes bugpoint's block extraction.

Nick, please review.

llvm-svn: 109936
2010-07-31 00:32:17 +00:00
Dan Gohman d566d2c7b5 Move MaximumAlignment to be a member of the Value class.
llvm-svn: 109891
2010-07-30 21:07:05 +00:00
Nick Lewycky 299c6dfcbf Add missing newline to debug statement.
llvm-svn: 109886
2010-07-30 20:27:01 +00:00
Eli Friedman 0428a61e45 PR7750: !CExpr->isNullValue() only properly computes whether CExpr is nonnull
if CExpr is a ConstantInt.

llvm-svn: 109773
2010-07-29 18:03:33 +00:00
Gabor Greif 62f0aac99d simplify by using CallSite constructors; virtually eliminates CallSite::get from the tree
llvm-svn: 109687
2010-07-28 22:50:26 +00:00
Dan Gohman a7e5a24093 Define a maximum supported alignment value for load, store, and
alloca instructions (constrained by their internal encoding),
and add error checking for it. Fix an instcombine bug which
generated huge alignment values (null is infinitely aligned).
This fixes undefined behavior noticed by John Regehr.

llvm-svn: 109643
2010-07-28 20:12:04 +00:00
Dan Gohman 9cd20bf792 When user code intentionally dereferences null, the alignment of the
dereference is theoretically infinite. Put a cap on the computed
alignment to avoid overflow, noticed by John Regehr.

llvm-svn: 109596
2010-07-28 17:14:23 +00:00
Gabor Greif f0084e1333 simplify
llvm-svn: 109589
2010-07-28 15:52:43 +00:00
Gabor Greif 0a970698da use Value* constructor of CallSite to create potentially improper site, and test that
llvm-svn: 109581
2010-07-28 14:28:18 +00:00
Gabor Greif f159085414 recommit simplification (r109502, backed out r109509); seems to be innocent
llvm-svn: 109510
2010-07-27 16:44:23 +00:00
Gabor Greif 5f91b7cf3e back out this too to restore the bots
llvm-svn: 109509
2010-07-27 15:56:07 +00:00
Gabor Greif 7b0a5fd2a5 simplify: CallSite::get --> CallSite constructor
llvm-svn: 109506
2010-07-27 15:02:37 +00:00
Gabor Greif 7527b2ed5c simplify
llvm-svn: 109502
2010-07-27 13:31:22 +00:00
Owen Anderson aa7f66ba67 Add an initial implementation of LazyValueInfo updating for JumpThreading. Disabled for now.
llvm-svn: 109424
2010-07-26 18:48:03 +00:00
Dan Gohman 0141c13b22 Remove LCSSA's bogus dependence on LoopSimplify and LoopSimplify's bogus
dependence on DominanceFrontier. Instead, add an explicit DominanceFrontier
pass in StandardPasses.h to ensure that it gets scheduled at the right
time.

Declare that loop unrolling preserves ScalarEvolution, and shuffle some
getAnalysisUsages.

This eliminates one LoopSimplify and one LCCSA run in the standard
compile opts sequence.

llvm-svn: 109413
2010-07-26 18:11:16 +00:00
Dan Gohman a7908ae369 Preserve ScalarEvolution in the loop unroller.
llvm-svn: 109412
2010-07-26 18:02:06 +00:00
Dan Gohman 65b257c9d2 Use DominatorTree::properlyDominates instead of dominates with an
explicit inequality check.

llvm-svn: 109401
2010-07-26 17:37:36 +00:00
Dan Gohman 31f73ef210 A block dominates itself, by definition.
llvm-svn: 109400
2010-07-26 17:35:32 +00:00
Nick Lewycky 7bc0443f2b Revert this because we can't clone cyclic MDNodes which are created during a
build of llvm-gcc.

llvm-svn: 109355
2010-07-24 20:54:02 +00:00
Nick Lewycky 14b69d59dd Whether function-local or not, a MDNode may reference a Function in which case
it needs to be mapped to refer to the function in the new module, not the old
one. Fixes PR7700.

llvm-svn: 109353
2010-07-24 19:43:25 +00:00
Devang Patel 5fa3813329 Speculatively revert 109117
llvm-svn: 109132
2010-07-22 18:44:00 +00:00
Gabor Greif 59f9970ba5 keep in 80 cols
llvm-svn: 109122
2010-07-22 17:18:03 +00:00
Devang Patel fac440cfb6 Map MDNode correctly.
A non function local MDNode can have an operand which is cloned by MapValue(). 

llvm-svn: 109117
2010-07-22 16:35:00 +00:00
Gabor Greif dde79d8f1a mass elimination of reliance on automatic iterator dereferencing
llvm-svn: 109103
2010-07-22 13:36:47 +00:00
Gabor Greif 84012a93ef simplify
llvm-svn: 109101
2010-07-22 13:07:39 +00:00
Gabor Greif b8686360a1 do not access arguments via low-level interface, do not multiply dereference use_iterators
llvm-svn: 109100
2010-07-22 13:04:32 +00:00
Gabor Greif 10bb1f5462 pass dereferenced iterator to dyn_cast
llvm-svn: 109099
2010-07-22 11:48:35 +00:00
Gabor Greif 36f25dfd33 pass dereferenced iterator to dyn_cast
llvm-svn: 109098
2010-07-22 11:43:44 +00:00
Gabor Greif 3e44ea1917 undo 80 column trespassing I caused
llvm-svn: 109092
2010-07-22 10:37:47 +00:00
Dan Gohman 2637cc1a38 Make NamedMDNode not be a subclass of Value, and simplify the interface
for creating and populating NamedMDNodes.

llvm-svn: 109061
2010-07-21 23:38:33 +00:00
Owen Anderson a57b97e7e7 Fix batch of converting RegisterPass<> to INITIALIZE_PASS().
llvm-svn: 109045
2010-07-21 22:09:45 +00:00
Dan Gohman afbe4a7a10 Make this code a little more readable.
llvm-svn: 108968
2010-07-20 23:49:44 +00:00
Dan Gohman 7373bd9973 Use DebugLocs instead of MDNodes.
llvm-svn: 108967
2010-07-20 23:49:05 +00:00
Dan Gohman b22dd85bb3 Fix a typo.
llvm-svn: 108962
2010-07-20 23:10:36 +00:00
Dan Gohman 5c2e65b7bf Don't look up the "dbg" metadata kind by name.
llvm-svn: 108961
2010-07-20 23:09:34 +00:00
Dan Gohman d2c7e52d05 Use getDebugLoc and setDebugLoc instead of getDbgMetadata and setDbgMetadata,
avoiding MDNode overhead.

llvm-svn: 108909
2010-07-20 20:09:07 +00:00
Dan Gohman 12725c7d46 Remember that the induction variable is always a PHINode and
use getIncomingValueForBlock instead of
LoopInfo::getCanonicalInductionVariableIncrement.

llvm-svn: 108865
2010-07-20 17:18:52 +00:00
Owen Anderson 84774eda4b Tweak per Chris' comments.
llvm-svn: 108736
2010-07-19 19:23:32 +00:00
Owen Anderson 32a58342ed Reimplement r108639 in InstCombine rather than DAGCombine.
llvm-svn: 108687
2010-07-19 08:09:34 +00:00
Owen Anderson 7d2818b073 Another attempt at getting the clang self-host to like my instcombine patch.
llvm-svn: 108614
2010-07-17 06:56:35 +00:00
Chris Lattner 27e997a168 eliminate unlockedRefineAbstractTypeTo, types are all per-llvmcontext,
so there is no locking involved in type refinement.

llvm-svn: 108553
2010-07-16 20:50:13 +00:00
Dan Gohman efd7f9c360 Reorder the contents of various getAnalysisUsage functions, eliminating
a redundant loopsimplify run from the default -O2 sequence.

llvm-svn: 108539
2010-07-16 17:58:45 +00:00
Owen Anderson 8a39c807e2 Remove the rest of my instcombine changes. Back to the drawing board on this one.
llvm-svn: 108530
2010-07-16 16:39:00 +00:00
Gabor Greif 6d673953e3 eliminate CallInst::ArgOffset
llvm-svn: 108522
2010-07-16 09:38:02 +00:00
Nick Lewycky 375efe3157 Arrays and vectors with different numbers of elements are not equivalent.
llvm-svn: 108517
2010-07-16 06:31:12 +00:00
Eric Christopher 15a81cddb4 Also revert 108422, it's causing some test failures.
Working on testcases for Owen.

llvm-svn: 108494
2010-07-16 01:36:12 +00:00
Dan Gohman 1415208292 Don't merge uses when they are targeting fixup sites with
different widths. In a use with a narrower fixup, formulae
may be wider than the fixup, in which case the high bits
aren't necessarily meaningful, so it isn't safe to reuse
them for uses with wider fixups.

This fixes PR7618, though the testcase is too large for a
reasonable regression test, since it heavily depends on
hitting LSR's heuristics in a certain way.

llvm-svn: 108455
2010-07-15 20:24:58 +00:00
Dan Gohman a1501b9c50 Use dbgs() instead of errs() in a DEBUG.
llvm-svn: 108453
2010-07-15 20:12:42 +00:00
Owen Anderson eaf64d5c1e Speculatively revert r108429 to fix the clang self-host.
llvm-svn: 108436
2010-07-15 18:18:57 +00:00
Owen Anderson eb08d01061 Per Chris' suggestion, get rid of the select canonicalization and just add
the corresponding or-icmp-and pattern.  This has the added benefit of doing
the matching earlier, and thus being less susceptible to being confused by
earlier transforms.

llvm-svn: 108429
2010-07-15 17:24:23 +00:00
Owen Anderson 13700ebb02 Remove unneeded check, and correct style.
llvm-svn: 108427
2010-07-15 16:38:22 +00:00
Dan Gohman 4afd412d6b Watch out for a constant offset cancelling out a base register, forming
a zero. This situation arises in Fortran code with induction variables
that start at 1 instead of 0. This fixes PR7651.

llvm-svn: 108424
2010-07-15 15:14:45 +00:00
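A plausible C analogue of the Fortran pattern described above (purely illustrative; the function and variable names are not from the testcase): the induction variable starts at 1, so the subscript pairs it with a constant offset that sums to zero on the first iteration.

  /* i runs from 1, as Fortran loops typically do; a[i - 1] combines the
     induction variable with a constant offset of -1, the kind of
     cancellation the fix guards against in LSR. */
  void fill(double *a, int n) {
    int i;
    for (i = 1; i <= n; ++i)
      a[i - 1] = 0.0;
  }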
Owen Anderson 7151dfd48a Reapply r108378, with bugfixes, testcase, and improved comment formatting.
This now passes LIT, the nightly test, and llvm-gcc bootstrap on my machine.

llvm-svn: 108422
2010-07-15 15:00:23 +00:00
Nick Lewycky 485ce5a49c This is a full sentence.
llvm-svn: 108418
2010-07-15 06:51:22 +00:00
Nick Lewycky e6f3287cbb Disable aliases on all platforms.
llvm-svn: 108417
2010-07-15 06:48:56 +00:00
Chris Lattner e41ab07c61 make various clients of ReplaceAndSimplifyAllUses tolerate
it *changing* the things it replaces, not just causing them
to drop to null.  There is no functionality change yet, but 
this is required for a subsequent patch.

llvm-svn: 108414
2010-07-15 06:06:04 +00:00
Eli Friedman a8b4e3732b Speculatively revert r108378; may be causing bootstrap failures.
llvm-svn: 108389
2010-07-15 00:33:00 +00:00
Owen Anderson 37d91d84af Add instcombine transforms to optimize tests of multiple bits of the same value into a single larger comparison.
llvm-svn: 108378
2010-07-14 23:33:51 +00:00
Owen Anderson 2cfe91379b Extend SimplifyCFG's common-destination folding heuristic to allow a single
"bonus" instruction to be speculatively executed.  Add a heuristic to
ensure we're not tripping up out-of-order execution by checking that this bonus
instruction only uses values that were already guaranteed to be available.

This allows us to eliminate the short circuit in (x&1)&&(x&2).

llvm-svn: 108351
2010-07-14 19:52:16 +00:00
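A minimal C sketch of the short-circuit pattern mentioned above (the function name is illustrative): the second 'and' is the single bonus instruction that may now be speculated, letting both tests fold into one branch.

  /* With the bonus-instruction heuristic, (x & 2) can be computed
     speculatively, so the && short circuit collapses into a single
     conditional branch. */
  int both_low_bits_set(int x) {
    return (x & 1) && (x & 2);
  }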
Chris Lattner ec0e7b1643 revert r108320, I see the failures now...
llvm-svn: 108322
2010-07-14 06:16:35 +00:00
Chris Lattner 658680b2f5 reapply benjamin's instcombine patch, I don't see anything wrong with it and can't repro any problems with a manual self-host.
llvm-svn: 108320
2010-07-14 05:59:13 +00:00
Eric Christopher ea282034b6 Grammar.
llvm-svn: 108252
2010-07-13 18:27:13 +00:00
Duncan Sands f88a284579 Handle the case of a tail recursion in which the tail call is followed
by a return that returns a constant, while elsewhere in the function
another return instruction returns a different constant.  This is a
special case of accumulator recursion, so just generalize the existing
logic a bit.

llvm-svn: 108241
2010-07-13 15:41:41 +00:00
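One possible shape of the case described, sketched in C (illustrative only, not the actual testcase): the recursive tail call's result is ignored and is followed by a return of one constant, while the base case returns a different constant.

  /* The tail call is followed by 'return 1', while the base case returns 0;
     generalizing the accumulator-recursion logic lets this become a loop. */
  int countdown(int n) {
    if (n <= 0)
      return 0;
    countdown(n - 1);
    return 1;
  }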
Benjamin Kramer 8f36402ac2 Nope, still breaks the release selfhost bots :(
llvm-svn: 108153
2010-07-12 16:38:48 +00:00
Benjamin Kramer 07b695e052 Reapply the "or" half of r108136, which seems to be less problematic.
llvm-svn: 108152
2010-07-12 16:15:48 +00:00
Gabor Greif 1b787df129 cache result of operator*
llvm-svn: 108150
2010-07-12 15:48:26 +00:00
Benjamin Kramer c719e8ae9e Revert r108141 again, sigh.
llvm-svn: 108148
2010-07-12 14:42:04 +00:00
Gabor Greif 96fedcb136 cache result of operator*
llvm-svn: 108147
2010-07-12 14:15:58 +00:00
Gabor Greif f9c38b5a45 cache result of operator*
llvm-svn: 108146
2010-07-12 14:15:10 +00:00
Gabor Greif 88dd73b75e cache result of operator*
llvm-svn: 108145
2010-07-12 14:14:03 +00:00
Gabor Greif a75ed761a9 cache result of operator*
llvm-svn: 108144
2010-07-12 14:13:15 +00:00
Gabor Greif 15445db11b cache results of operator*
llvm-svn: 108143
2010-07-12 14:12:11 +00:00
Gabor Greif a5fa885d47 cache results of operator*
llvm-svn: 108142
2010-07-12 14:10:24 +00:00
Benjamin Kramer f578c36035 Reapply 108136 with an ugly pasto fixed.
llvm-svn: 108141
2010-07-12 13:44:00 +00:00
Benjamin Kramer 11743249e6 Move optimization to avoid redundant matching.
llvm-svn: 108140
2010-07-12 13:34:22 +00:00
Benjamin Kramer 9675e759cf Revert r108136 until I figure out why it broke selfhost.
llvm-svn: 108139
2010-07-12 12:35:49 +00:00
Gabor Greif 782f62412f cache dereferenced iterators
llvm-svn: 108138
2010-07-12 12:03:02 +00:00
Gabor Greif 433b975fe2 recommit r108131 (which has been backed out in r108135) with a fix
llvm-svn: 108137
2010-07-12 12:02:10 +00:00
Benjamin Kramer 35473faa50 instcombine: fold (x & y) | (~x & z) and (x & y) ^ (~x & z) into ((y ^ z) & x) ^ z which is one instruction shorter. (PR6773)
before:
  %and = and i32 %y, %x
  %neg = xor i32 %x, -1
  %and4 = and i32 %z, %neg
  %xor = xor i32 %and4, %and

after:
  %xor1 = xor i32 %z, %y
  %and2 = and i32 %xor1, %x
  %xor = xor i32 %and2, %z

llvm-svn: 108136
2010-07-12 11:54:45 +00:00
Gabor Greif f9610827ce back out r108131 (of TailDuplication.cpp) for now, it causes a buildbot failure
llvm-svn: 108135
2010-07-12 11:32:39 +00:00
Gabor Greif 6143704ac5 cache dereferenced iterators
llvm-svn: 108134
2010-07-12 11:19:24 +00:00
Gabor Greif 8629f12bb8 cache dereferenced iterators
llvm-svn: 108133
2010-07-12 10:59:23 +00:00
Gabor Greif d993402df3 cache dereferenced iterators
llvm-svn: 108132
2010-07-12 10:49:54 +00:00
Gabor Greif 2a464d7308 cache dereferenced iterators
llvm-svn: 108131
2010-07-12 10:36:48 +00:00
Duncan Sands 41b4a6b36a Convert some tab stops into spaces.
llvm-svn: 108130
2010-07-12 08:16:59 +00:00
Chris Lattner 601e390a3b make the prototypes for CreateMalloc and CreateFree more consistent. Patch
by Hans Vandierendonck from PR7605

llvm-svn: 108116
2010-07-12 00:57:28 +00:00
Chris Lattner bbc25ff5cc if jump threading is able to infer interesting values on both
the LHS and RHS of an and/or instruction, don't add known
predecessor values multiple times.  This fixes the crash on the
testcase from PR7498

llvm-svn: 108114
2010-07-12 00:47:34 +00:00
Duncan Sands 82b21c086e The accumulator tail recursion transform claims to work for any associative
operation, but the way it's implemented requires the operation to also be
commutative.  So add a check for commutativity (and tweak the corresponding
comments).  This makes no difference in practice since every associative
LLVM instruction is also commutative!  Here's an example to show the need
for commutativity: the accum_recursion.ll testcase calculates the factorial
function.  Before the transformation the result of a call is
  ((((1*1)*2)*3)...)*x
while afterwards it is
  (((1*x)*(x-1))...*2)*1
which requires both associativity and commutativity of * for the result
to be equal to the original.

llvm-svn: 108056
2010-07-10 20:31:42 +00:00
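For reference, a C rendering of the factorial pattern that accum_recursion.ll exercises (a sketch, not the testcase itself): turning the recursion into a loop reorders the products exactly as shown above, which is only safe because '*' is commutative as well as associative.

  /* Accumulator recursion on '*': the loop form reassociates and reorders
     the multiplications, so the operation must be commutative for the
     result to match. */
  unsigned factorial(unsigned x) {
    if (x == 0)
      return 1;
    return x * factorial(x - 1);
  }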
Gabor Greif 9d5ae03404 cache result of operator*
llvm-svn: 107990
2010-07-09 16:51:20 +00:00
Gabor Greif fd8e7d4a0f cache result of operator*
llvm-svn: 107984
2010-07-09 16:31:08 +00:00
Gabor Greif e7650c7c29 cache result of operator*
llvm-svn: 107983
2010-07-09 16:26:41 +00:00
Gabor Greif 04af1e4f65 cache result of operator*
llvm-svn: 107981
2010-07-09 16:17:52 +00:00
Gabor Greif e82532a1c5 cache result of operator*
llvm-svn: 107976
2010-07-09 15:40:10 +00:00
Gabor Greif 6d8870fc35 cache result of operator*
llvm-svn: 107975
2010-07-09 15:25:42 +00:00
Gabor Greif 329c4d8ed9 cache result of operator*
llvm-svn: 107974
2010-07-09 15:25:09 +00:00
Gabor Greif 0028cc6730 cache result of operator*
llvm-svn: 107972
2010-07-09 15:01:36 +00:00
Gabor Greif d323f5e161 cache result of operator* (found by inspection)
llvm-svn: 107971
2010-07-09 14:48:08 +00:00
Gabor Greif b0d56ffc85 cache result of operator*
llvm-svn: 107969
2010-07-09 14:36:49 +00:00
Gabor Greif 4247949ce9 cache result of operator*
llvm-svn: 107968
2010-07-09 14:29:14 +00:00
Gabor Greif a02f232c1b cache result of operator*
llvm-svn: 107966
2010-07-09 14:18:23 +00:00
Gabor Greif f0821f39ee cache operator*'s result (in multiple functions)
llvm-svn: 107965
2010-07-09 14:02:13 +00:00
Gabor Greif 60a346d0f1 do not repeatedly dereference use_iterator
llvm-svn: 107962
2010-07-09 12:23:50 +00:00
Benjamin Kramer 2321e6a4d4 Teach instcombine to transform
(X >s -1) ? C1 : C2 and (X <s  0) ? C2 : C1
into ((X >>s 31) & (C2 - C1)) + C1, avoiding the conditional.

This optimization could be extended to take non-constant C1 and C2, but we'd
better stay conservative to avoid code size bloat for now.

for
int sel(int n) {
     return n >= 0 ? 60 : 100;
}

we now generate
  sarl  $31, %edi
  andl  $40, %edi
  leal  60(%rdi), %eax

instead of
  testl %edi, %edi
  movl  $60, %ecx
  movl  $100, %eax
  cmovnsl %ecx, %eax

llvm-svn: 107866
2010-07-08 11:39:10 +00:00
Chris Lattner efa3c824cc Fix the second half of PR7437: scalarrepl wasn't preserving
address spaces when SRoA'ing memcpy's.

llvm-svn: 107846
2010-07-08 00:27:05 +00:00
Duncan Sands 408bb192de Rename "Release" builds as "Release+Asserts"; rename "Release-Asserts"
builds to "Release".  The default build is unchanged (optimization on,
assertions on), however it is now called Release+Asserts.  The intent
is that future LLVM releases released via llvm.org will be Release builds
in the new sense, i.e. will have assertions disabled (currently they have
assertions enabled, for a more than 20% slowdown).  This will bring them
in line with MacOS releases, which ship with assertions disabled.  It also
means that "Release" now means the same things in make and cmake builds:
cmake already disables assertions for "Release" builds AFAICS.

llvm-svn: 107758
2010-07-07 07:48:00 +00:00
Nick Lewycky dace239949 Detabify this file.
llvm-svn: 107637
2010-07-06 03:53:43 +00:00
Devang Patel cefe3831b7 MDString is already checked earlier.
llvm-svn: 107516
2010-07-02 21:13:23 +00:00
Dan Gohman 832282e061 Don't claim to preserve AliasAnalysis. First, this is doesn't actually
have any effect, and second, deleting stores can potentially invalidate
an AliasAnalysis, and there's currently no notification for this.

llvm-svn: 107496
2010-07-02 18:43:05 +00:00
Bill Wendling 03bcd6ecc8 Implement the "linker_private_weak" linkage type. This will be used for
Objective-C metadata types which should be marked as "weak", but which the
linker will remove upon final linkage. However, this linkage isn't specific to
Objective-C.

For example, the "objc_msgSend_fixup_alloc" symbol is defined like this:

      .globl l_objc_msgSend_fixup_alloc
      .weak_definition l_objc_msgSend_fixup_alloc
      .section __DATA, __objc_msgrefs, coalesced
      .align 3
l_objc_msgSend_fixup_alloc:
       .quad   _objc_msgSend_fixup
       .quad   L_OBJC_METH_VAR_NAME_1

This is different from the "linker_private" linkage type, because it can't have
the metadata defined with ".weak_definition".

Currently only supported on Darwin platforms.

llvm-svn: 107433
2010-07-01 21:55:59 +00:00
Devang Patel 2b434e12cd Debugging information is encoded in LLVM IR using metadata. This is designed
in such a way that debug info for symbols is preserved even if the symbols are
optimized away by the optimizer.

Add new special pass to remove debug info for such symbols.

llvm-svn: 107416
2010-07-01 19:49:20 +00:00
Devang Patel b9e2e4b762 If a named mdnode is removed then mark module as changed.
llvm-svn: 107412
2010-07-01 18:27:46 +00:00
Jim Grosbach e74c78d539 lowerinvoke needs to handle aggregate function args like sjlj eh does.
llvm-svn: 107335
2010-06-30 22:22:59 +00:00
Devang Patel db735cbbab Remove all debug info related named mdnodes.
llvm-svn: 107323
2010-06-30 21:29:00 +00:00
Gabor Greif 74470192d7 use ArgOperand API
llvm-svn: 107278
2010-06-30 12:42:43 +00:00
Gabor Greif d50572802e use ArgOperand API
llvm-svn: 107277
2010-06-30 12:40:35 +00:00
Gabor Greif 3abd881bea use getArgOperand (corrected by CallInst::ArgOffset) instead of getOperand
llvm-svn: 107275
2010-06-30 12:38:26 +00:00
Gabor Greif 743b3fd196 use getArgOperand (corrected by CallInst::ArgOffset) instead of getOperand
llvm-svn: 107273
2010-06-30 09:19:23 +00:00
Gabor Greif f628ecd15f use getNumArgOperands instead of getNumOperands
llvm-svn: 107272
2010-06-30 09:17:53 +00:00
Gabor Greif fe252e6fa0 use getArgOperand instead of getOperand
llvm-svn: 107271
2010-06-30 09:16:16 +00:00
Gabor Greif 8ae3095286 use getArgOperand instead of getOperand
llvm-svn: 107270
2010-06-30 09:15:28 +00:00
Gabor Greif e9acc46f65 use getArgOperand instead of getOperand
llvm-svn: 107269
2010-06-30 09:14:26 +00:00
Bill Wendling 3632171750 Revert r107205 and r107207.
llvm-svn: 107215
2010-06-29 22:34:52 +00:00
Bill Wendling 1767723dbe Introducing the "linker_weak" linkage type. This will be used for Objective-C
metadata types which should be marked as "weak", but which the linker will
remove upon final linkage. For example, the "objc_msgSend_fixup_alloc" symbol is
defined like this:

       .globl l_objc_msgSend_fixup_alloc
       .weak_definition l_objc_msgSend_fixup_alloc
       .section __DATA, __objc_msgrefs, coalesced
       .align 3
l_objc_msgSend_fixup_alloc:
        .quad   _objc_msgSend_fixup
        .quad   L_OBJC_METH_VAR_NAME_1

This is different from the "linker_private" linkage type, because it can't have
the metadata defined with ".weak_definition".

llvm-svn: 107205
2010-06-29 21:24:00 +00:00
Duncan Sands 17f1ca8793 Return Changed. This required setting Changed if dbg metadata
is stripped off.  Currently set unconditionally, since the API
does not provide a way of working out if anything was actually
stripped off.

llvm-svn: 107142
2010-06-29 14:52:10 +00:00
Gabor Greif 5b1370ee80 use ArgOperand API
llvm-svn: 107017
2010-06-28 16:50:57 +00:00
Gabor Greif e23efeef10 use ArgOperand API
llvm-svn: 107016
2010-06-28 16:45:00 +00:00
Gabor Greif 18c5bae727 employ CallInst::ArgOffset (for now)
llvm-svn: 107015
2010-06-28 16:43:57 +00:00
Gabor Greif 2dd4307e45 use setArgOperand
llvm-svn: 107004
2010-06-28 12:31:35 +00:00
Gabor Greif ec60adf161 use CallInst::ArgOffset
llvm-svn: 107003
2010-06-28 12:30:07 +00:00
Gabor Greif 2de43a7c5c use ArgOperand API and CallInst::ArgOffset
llvm-svn: 107002
2010-06-28 12:29:20 +00:00
Gabor Greif 4300fc77ae use cached value
llvm-svn: 107000
2010-06-28 11:20:42 +00:00
Chris Lattner 25a843fcd2 minor cleanup to SROA: when lowering type unsafe accesses to
large integers, the first inserted value would always create
an 'or X, 0'.  Even though this is trivially zapped by
instcombine, don't bother creating this pointless instruction.

llvm-svn: 106979
2010-06-27 07:58:26 +00:00
Duncan Sands 3a5cb69cb8 Fix PR7328: when turning a tail recursion into a loop, need to preserve
the returned value after the tail call if it differs from other return
values.  The optimal thing to do would be to introduce a phi node for
the return value, but for the moment just fix the miscompile.

llvm-svn: 106947
2010-06-26 12:53:31 +00:00
Dan Gohman fb9712bdae In GenerateReassociations, don't bother thinking about individual
SCEVUnknown values which are loop-variant, as LSR can't do anything
interesting with these values in any case. This fixes very slow compile
times on loops which have large numbers of such values.

llvm-svn: 106897
2010-06-25 22:32:18 +00:00
Dale Johannesen ce97d55ad9 The hasMemory argument is irrelevant to how the argument
for an "i" constraint should get lowered; PR 6309.  While
this argument was passed around a lot, this is the only
place it was used, so it goes away from a lot of other
places.

llvm-svn: 106893
2010-06-25 21:55:36 +00:00
Gabor Greif e3ba486c9f use ArgOperand API (one more hunk I could split)
llvm-svn: 106825
2010-06-25 07:58:41 +00:00
Gabor Greif 5f3e656a1b use ArgOperand API (some hunks I could split)
llvm-svn: 106824
2010-06-25 07:57:14 +00:00
Gabor Greif 07e9284c75 use ArgOperand API; tighten type of handleFreeWithNonTrivialDependency to be able to use isFreeCall without a cast or new overload
llvm-svn: 106823
2010-06-25 07:40:32 +00:00
Dan Gohman 4143e9deeb Add an exports file for the Hello example plugin.
llvm-svn: 106768
2010-06-24 17:36:51 +00:00
Dan Gohman 963b1c142e A few minor micro-optimizations.
llvm-svn: 106764
2010-06-24 16:57:52 +00:00
Dan Gohman 47ddf76d89 Teach getExactSDiv to evaluate x/1 to x up front, as it's a common
enough special case, and it theoretically allows more folding because
it works even when x is unanalyzable.

llvm-svn: 106763
2010-06-24 16:51:25 +00:00
Dan Gohman ab5422200b Fix copy+pasto issues in isMulSExtable.
llvm-svn: 106759
2010-06-24 16:45:11 +00:00
Gabor Greif 7ccec09252 use ArgOperand API
llvm-svn: 106752
2010-06-24 16:11:44 +00:00
Gabor Greif a6d75e2cf7 use (even more, still) ArgOperand API
llvm-svn: 106750
2010-06-24 15:51:11 +00:00
Gabor Greif 218f5541b2 use ArgOperand API and CallSite for arg range; add necessary casts and perform some cosmetics
llvm-svn: 106747
2010-06-24 14:42:01 +00:00
Gabor Greif 5aafdf1e43 use ArgOperand API and CallSite for arg range
llvm-svn: 106745
2010-06-24 14:13:36 +00:00
Gabor Greif 0a136c9b53 use (even more) ArgOperand API
llvm-svn: 106744
2010-06-24 13:54:33 +00:00
Gabor Greif 590d95ed18 use ArgOperand API
llvm-svn: 106743
2010-06-24 13:42:49 +00:00
Gabor Greif 589a0b950a use ArgOperand API
llvm-svn: 106740
2010-06-24 12:58:35 +00:00
Gabor Greif 7943017490 use ArgOperand API
llvm-svn: 106737
2010-06-24 12:35:13 +00:00
Gabor Greif 75f6943c95 use ArgOperand API, also tighten the type of visitFree to make this work out smoothly
llvm-svn: 106736
2010-06-24 12:21:15 +00:00
Gabor Greif 91f9589057 use ArgOperand API; introduce downcasted pointers into scope to facilitate this
llvm-svn: 106734
2010-06-24 12:03:56 +00:00
Gabor Greif e2f482ca0b use ArgOperand API
llvm-svn: 106731
2010-06-24 10:42:46 +00:00
Gabor Greif 2d958d4db5 use ArgOperand API
llvm-svn: 106730
2010-06-24 10:17:17 +00:00
Gabor Greif 5bcaa55761 use callsite to obtain all arguments
llvm-svn: 106729
2010-06-24 10:04:07 +00:00
Gabor Greif 42f620cc55 use callsite to obtain all arguments
llvm-svn: 106728
2010-06-24 09:56:43 +00:00
Gabor Greif 0f60709f0e use getNumArgOperands
llvm-svn: 106709
2010-06-24 00:48:48 +00:00
Gabor Greif 4a39b84a9d use ArgOperand API
llvm-svn: 106707
2010-06-24 00:44:01 +00:00
Devang Patel 0dc3c2d37e Use ValueMap instead of DenseMap.
The ValueMapper used by various cloning utility maps MDNodes also.

llvm-svn: 106706
2010-06-24 00:33:28 +00:00
Devang Patel d8dedee96d Use the available typedef for "DenseMap<const Value*, Value*>".
llvm-svn: 106699
2010-06-24 00:00:42 +00:00
Devang Patel b8f11de105 Cosmetic change.
Do not use "ValueMap" as a name for a local variable or an argument.

llvm-svn: 106698
2010-06-23 23:55:51 +00:00
Devang Patel 9ad629367d Revert 106592 for now. It causes clang-selfhost build failure.
llvm-svn: 106598
2010-06-22 23:29:55 +00:00
Dan Gohman 1081f1a0f5 Fix OptimizeMax to handle an odd case where one of the max operands
is another max which folds. This fixes PR7454.

llvm-svn: 106594
2010-06-22 23:07:13 +00:00
Devang Patel 87f75f75be If a metadata operand is seeded in the value map, then the metadata should also be seeded in the value map. This is not limited to function-local metadata.
Failure to seed metadata in such cases causes trouble when, in a cloned module, metadata from the new module refers to values in the old module. Usually this results in mysterious bugpoint crashes. For example,

 Checking to see if we can delete global inits: Unknown constant!
 UNREACHABLE executed at /d/g/llvm/lib/Bitcode/Writer/BitcodeWriter.cpp:904!

llvm-svn: 106592
2010-06-22 22:53:21 +00:00
Devang Patel e43c6487da While cloning a module, clone the metadata attached to instructions.
llvm-svn: 106591
2010-06-22 22:50:42 +00:00
Devang Patel e3fbbd19ed Clone named metadata while cloning a module.
Reapply Bob's patch.

llvm-svn: 106560
2010-06-22 18:52:38 +00:00
Dan Gohman d2d1ae105d Use pre-increment instead of post-increment when the result is not used.
llvm-svn: 106542
2010-06-22 15:08:57 +00:00
Devang Patel f040dec68a Revert 106528. It is causing self host failures.
llvm-svn: 106529
2010-06-22 06:14:09 +00:00
Devang Patel b195eb4acf Do not rely on a DenseMap slot, which can easily be invalidated when the DenseMap grows.
llvm-svn: 106528
2010-06-22 05:16:56 +00:00
Bob Wilson 6c1fc79cab Revert my change to clone named metadata. Buildbots are complaining.
--- Reverse-merging r106508 into '.':
U    lib/Transforms/Utils/CloneModule.cpp

llvm-svn: 106521
2010-06-22 02:08:51 +00:00
Bob Wilson 5f9575c1cd Include named metadata when cloning a module.
llvm-svn: 106508
2010-06-22 00:11:03 +00:00
Dan Gohman dd41bba517 Use A.append(...) instead of A.insert(A.end(), ...) when A is a
SmallVector, and other SmallVector simplifications.

llvm-svn: 106452
2010-06-21 19:47:52 +00:00
Dan Gohman 32655906e4 Add a TODO comment.
llvm-svn: 106397
2010-06-19 21:30:18 +00:00
Dan Gohman 51d00092b6 Include the use kind along with the expression in the key of the
use sharing map. The reconcileNewOffset logic already forces a
separate use if the kinds differ, so incorporating the kind in the
key means we can track more sharing opportunities.

More sharing means fewer total uses to track, which means smaller
problem sizes, which means the conservative throttles don't kick
in as often.

llvm-svn: 106396
2010-06-19 21:29:59 +00:00
Dan Gohman 297fb8b9fc Don't include things in anonymous namespaces that don't need it.
llvm-svn: 106395
2010-06-19 21:21:39 +00:00
Dan Gohman f3aea7aecf Disable indvars on loops when LoopSimplify form is not available.
This fixes PR7333.

llvm-svn: 106267
2010-06-18 01:35:11 +00:00
Jim Grosbach e94f1ded24 remove trailing whitespace
llvm-svn: 106164
2010-06-16 22:41:09 +00:00
Rafael Espindola a20e2dfe86 Make sure that simplify libcalls does not replace a call with one calling
convention with a new call with a different calling convention.

llvm-svn: 106134
2010-06-16 19:34:01 +00:00
Benjamin Kramer a13bd20396 simplify-libcalls: fold strncmp(x, y, 1) -> memcmp(x, y, 1)
The memcmp will be optimized further and even the pathological case
'strstr(x, "x") == x' generates optimal code now.

llvm-svn: 106097
2010-06-16 10:30:29 +00:00
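A small C example of the fold (the function name is illustrative): a length-1 strncmp becomes a length-1 memcmp, which then reduces to a single byte comparison.

  #include <string.h>

  /* strncmp(s, "x", 1) is folded to memcmp(s, "x", 1), i.e. a one-byte
     compare, which later passes can simplify further. */
  int starts_with_x(const char *s) {
    return strncmp(s, "x", 1) == 0;
  }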
Benjamin Kramer 1118860e3a simplify-libcalls: fold strstr(a, b) == a -> strncmp(a, b, strlen(b)) == 0
llvm-svn: 106047
2010-06-15 21:34:25 +00:00
Chris Lattner 329ea064ed jump threading can't split a critical edge from an indirectbr. This
fixes PR7356.

llvm-svn: 105950
2010-06-14 19:45:43 +00:00
Benjamin Kramer b82de426de SimplifyCFG: don't turn volatile stores to null/undef into unreachable. Fixes PR7369.
llvm-svn: 105914
2010-06-13 14:35:54 +00:00
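A minimal C illustration of the kind of store that must be preserved (the function name is made up): a volatile store through a null pointer is still a real memory access on targets where address 0 is mapped, so it may no longer be rewritten to unreachable.

  /* The volatile qualifier means this store must actually be emitted;
     SimplifyCFG no longer turns it into 'unreachable'. */
  void poke_address_zero(void) {
    *(volatile int *)0 = 1;
  }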
Kenneth Uildriks 9b21208bfb Pulled CodeMetrics out of InlineCost.h and made it a bit more general, so it can be reused from PartialSpecializationCost
llvm-svn: 105725
2010-06-09 15:11:37 +00:00
Dan Gohman fb8ed43349 Make bugpoint dead-argument-hacking actually work, and actually test it.
llvm-svn: 105551
2010-06-07 20:20:33 +00:00
Kenneth Uildriks 1850444000 Partial specialization was not checking the callsite to make sure it was using the same constants as the specialization, leading to calls to the wrong specialization. Patch by Takumi Nakamura!
llvm-svn: 105528
2010-06-05 14:50:21 +00:00
Dan Gohman 67b4403101 Don't track users of undef values; they aren't interesting for
register pressure.

llvm-svn: 105501
2010-06-04 23:16:05 +00:00
Devang Patel 36da24b546 Copy location info for the current function argument from dbg.declare if the respective store instruction does not have any location info.
llvm-svn: 105490
2010-06-04 22:27:30 +00:00
Jim Grosbach 5ba76b94f8 Remove unused code
llvm-svn: 105293
2010-06-01 21:56:30 +00:00
Jim Grosbach 0e20dc5cd6 fix think-o
llvm-svn: 105291
2010-06-01 21:35:50 +00:00
Jim Grosbach b69c68742a Simplify things a bit more. Fix prototype to use SmallVectorImpl and
change a few SmallVectors to vanilla C arrays.

llvm-svn: 105289
2010-06-01 21:06:46 +00:00
Jim Grosbach a37af16221 mirror of r105280 changes for LowerInvoke, which uses the same basic logic here
llvm-svn: 105281
2010-06-01 18:04:56 +00:00
Jim Grosbach 7352167560 Use SmallVector instead of std::vector.
llvm-svn: 105279
2010-06-01 17:56:41 +00:00
Duncan Sands 4c904fa797 Fix PR7272: when inlining through a callsite with byval arguments,
the newly created allocas may be used by inlined calls, so these
need to have their tail call flags cleared.  Fixes PR7272.

llvm-svn: 105255
2010-05-31 21:00:26 +00:00
Benjamin Kramer 5ac57e3440 Avoid swap when a copy suffices.
llvm-svn: 105220
2010-05-31 12:50:41 +00:00
Nick Lewycky aee2632be3 The memcpy intrinsic only takes i8* for %src and %dst, so cast them to that
first. Fixes PR7265.

llvm-svn: 105206
2010-05-31 06:16:35 +00:00
Dan Gohman 826bdf8c10 Move FindAvailableLoadedValue and isSafeToLoadUnconditionally out of
lib/Transforms/Utils and into lib/Analysis so that Analysis passes
can use them.

llvm-svn: 104949
2010-05-28 16:19:17 +00:00
Dan Gohman df5d7dcef1 Teach instcombine to promote alloca array sizes.
llvm-svn: 104945
2010-05-28 15:09:00 +00:00
Dan Gohman 05a6555acb Fix instcombine's handling of alloca to accept non-i32 types.
llvm-svn: 104935
2010-05-28 04:33:04 +00:00
Devang Patel 3e0fbafab2 Fix typo.
llvm-svn: 104914
2010-05-28 01:29:50 +00:00
Devang Patel e2099e8088 Fix typo.
llvm-svn: 104913
2010-05-28 01:17:51 +00:00
Devang Patel 7a9dedf0ab Do not drop location info for inlined function args.
llvm-svn: 104884
2010-05-27 20:25:04 +00:00
Duncan Sands f162eace49 Teach instCombine to remove malloc+free if malloc's only uses are comparisons
to null.  Patch by Matti Niemenmaa.

llvm-svn: 104871
2010-05-27 19:09:06 +00:00
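A C sketch of the pattern this enables removing (names are illustrative, and the exact folding of the null check is assumed rather than quoted from the patch): the allocation's only uses are a null comparison and the matching free, so both calls can be deleted.

  #include <stdlib.h>

  /* The pointer is only tested against NULL and then freed; with the
     allocation removable, malloc and free both disappear and the null
     check is resolved accordingly. */
  int probe_allocation(size_t n) {
    void *p = malloc(n);
    if (p == NULL)
      return 0;
    free(p);
    return 1;
  }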
Benjamin Kramer 6877119ef3 Kill unneeded SExt.
llvm-svn: 104692
2010-05-26 09:45:04 +00:00
Benjamin Kramer 9439084cea Properly promote operands when optimizing a single-character memcmp.
llvm-svn: 104648
2010-05-25 22:53:43 +00:00
Dan Gohman a4abd035ea Fix a missing newline in debug output.
llvm-svn: 104644
2010-05-25 21:50:35 +00:00
Dan Gohman 9b48b856ea DominatorTree.getNode can return null for unreachable blocks.
llvm-svn: 104290
2010-05-20 22:46:54 +00:00
Dan Gohman 86110fa2bb Minor code cleanups.
llvm-svn: 104287
2010-05-20 22:25:20 +00:00
Dan Gohman 6295f2ebb8 Make Solve check its own post-condition, to reduce clutter in the
top-level LSRInstance logic.

llvm-svn: 104278
2010-05-20 20:59:23 +00:00
Dan Gohman a4ca28a3ae Add comments.
llvm-svn: 104276
2010-05-20 20:52:00 +00:00
Dan Gohman 927bcaadda More code cleanups. Use iterators instead of indices when indices
aren't needed.

llvm-svn: 104273
2010-05-20 20:33:18 +00:00
Dan Gohman 4c4043cf34 Fix OptimizeShadowIV to set Changed. Change OptimizeLoopTermCond to set
Changed directly instead of using a return value.

Rename FilterOutUndesirableDedicatedRegisters's Changed variable to
distinguish it from LSRInstance's Changed member.

llvm-svn: 104269
2010-05-20 20:05:31 +00:00
Dan Gohman 8ec018cedf Add some comments.
llvm-svn: 104268
2010-05-20 20:00:41 +00:00
Dan Gohman 8ce95cc3c5 Simplify this code. Don't do a DomTreeNode lookup for each visited block.
llvm-svn: 104267
2010-05-20 20:00:25 +00:00
Dan Gohman ab5fb7f559 Minor code cleanups.
llvm-svn: 104263
2010-05-20 19:44:23 +00:00
Dan Gohman ee2fea3cd7 When canonicalizing icmp operand order to put the loop invariant
operand on the left, the interesting operand is on the right. This
fixes a bug where LSR was failing to recognize ICmpZero uses,
which led it to be unable to reverse the induction variable in the
attached testcase.

Delete test/CodeGen/X86/stack-color-with-reg-2.ll, because its test
is extremely fragile and hard to meaningfully update.

llvm-svn: 104262
2010-05-20 19:26:52 +00:00
Dan Gohman fdf9874ba7 Set Changed to true when canonicalizing ICmp operand order; even though
it isn't a very interesting change, it's a change nonetheless.

llvm-svn: 104260
2010-05-20 19:16:03 +00:00
Devang Patel e2ff7f3a7d Strip llvm.dbg.lv also.
llvm-svn: 104236
2010-05-20 16:49:22 +00:00
Dan Gohman 981563d0ba Rename a variable to avoid shadowing.
llvm-svn: 104234
2010-05-20 16:41:11 +00:00
Dan Gohman 6b733fc189 Minor code simplification.
llvm-svn: 104232
2010-05-20 16:23:28 +00:00
Dan Gohman 80a9608442 Move the code for deleting BaseRegs and LSRUses into helper functions,
and fix a bug that valgrind noticed where the code would std::swap an
element with itself.

llvm-svn: 104225
2010-05-20 15:17:54 +00:00
Dan Gohman 20fab456da Teach LSR how to cope better with unrolled loops on targets where
the addressing modes don't make this trivially easy. This allows
it to avoid falling into the less precise heuristics in more
cases.

llvm-svn: 104186
2010-05-19 23:43:12 +00:00
Dan Gohman beebef4137 Add a comment.
llvm-svn: 104089
2010-05-18 23:55:57 +00:00
Dan Gohman 50f8f2c23d Fix the predicate which checks for non-sensical formulae which have
constants in registers which partially cancel out their immediate fields.

llvm-svn: 104088
2010-05-18 23:48:08 +00:00
Dan Gohman 4cf99b5303 Factor out the code for recomputing an LSRUse's Regs set after some
of its formulae have been removed into a helper function, and also
teach it how to update the RegUseTracker.

llvm-svn: 104087
2010-05-18 23:42:37 +00:00
Dan Gohman a4eca05174 Factor out code for estimating search space complexity into a helper
function.

llvm-svn: 104082
2010-05-18 22:51:59 +00:00
Dan Gohman 63e9015248 Add some more debug output.
llvm-svn: 104080
2010-05-18 22:41:32 +00:00
Dan Gohman f1c7b1b42f Factor out the code for deleting a formula from an LSRUse into
a helper function.

llvm-svn: 104079
2010-05-18 22:39:15 +00:00
Dan Gohman 8aca7ef903 Make some debug output more informative.
llvm-svn: 104078
2010-05-18 22:37:37 +00:00
Dan Gohman 06ab08f795 Print an error message in Formula::print if the HasBaseReg flag
is inconsistent with the BaseRegs field. It's not print's job to
assert on an invalid condition, but it can make one more obvious.

llvm-svn: 104077
2010-05-18 22:35:55 +00:00
Dan Gohman 248c41d108 Rename RegUseTracker's RegUses member to RegUsesMap to avoid
confusion with LSRInstance's RegUses member.

llvm-svn: 104076
2010-05-18 22:33:00 +00:00
Nick Lewycky b35818eb25 Teach the always inliner to release its inline cost estimates, like the basic
inliner did in r103653. Why does the always inliner even bother with cost
estimates anyways?

llvm-svn: 103858
2010-05-15 04:26:25 +00:00
Nick Lewycky 002a45eb64 Clean up, no functional change.
llvm-svn: 103857
2010-05-15 03:41:58 +00:00
Nick Lewycky 2b3cbac0ee Remove heinous tabs.
llvm-svn: 103700
2010-05-13 06:45:13 +00:00
Nick Lewycky d3c6dfe853 Replace the core comparison logic in merge functions. We can now merge
vector<>::push_back() in:

  int foo(vector<int> &a, vector<unsigned> &b) {
    a.push_back(10);
    b.push_back(11);
  }

to two calls to the same push_back function, or fold away the two copies of
push_back() in:

  struct T { int x; };
  struct S { char c; };
  vector<T*> t;
  vector<S*> s;
  void f(T *x) { t.push_back(x); }
  void g(S *x) { s.push_back(x); }

but leave f() and g() separate, since they refer to two different global
variables.

llvm-svn: 103698
2010-05-13 05:48:45 +00:00
Nick Lewycky c63aa1e8ab Clear CachedFunctionInfo upon Pass::releaseMemory. Because ValueMap will abort
on RAUW of functions, this is a correctness issue instead of a mere memory
usage problem.


No testcase until the new MergeFunctions can land.

llvm-svn: 103653
2010-05-12 21:48:15 +00:00
Duncan Sands 6c5e4355bb I got tired of VISIBILITY_HIDDEN colliding with the gcc enum. Rename it
to LLVM_LIBRARY_VISIBILITY and introduce LLVM_GLOBAL_VISIBILITY, which is
the opposite, for future use by dragonegg.

llvm-svn: 103495
2010-05-11 20:16:09 +00:00
Douglas Gregor 6739a89117 Fixes for Microsoft Visual Studio 2010, from Steven Watanabe!
llvm-svn: 103457
2010-05-11 06:17:44 +00:00
Chris Lattner 84d4618659 make simplifycfg insert an llvm.trap before the 'unreachable' it introduces
when it detects undefined behavior.  llvm.trap generally codegens into
something really small (e.g. a 2 byte ud2 instruction on x86) and debugging this
sort of thing is "nontrivial".  For example, we now compile:

void foo() { *(int*)0 = 42; }

into:

_foo:
	pushl	%ebp
	movl	%esp, %ebp
	ud2

Some may even claim that this is a security hole, though that seems dubious
to me.  This addresses rdar://7958343 - Optimizing away null dereference 
potentially allows arbitrary code execution

llvm-svn: 103356
2010-05-08 22:15:59 +00:00
Chris Lattner 02b0df5338 Teach instcombine to transform a bitcast/(zext|trunc)/bitcast sequence
with a vector input and output into a shuffle vector.  This sort of 
sequence happens when the input code stores with one type and reloads
with another type and then SROA promotes to i96 integers, which make
everyone sad.

This fixes rdar://7896024

llvm-svn: 103354
2010-05-08 21:50:26 +00:00
Chris Lattner 5a62d6e578 Fix PR7052, patch by Jakub Staszak!
llvm-svn: 103347
2010-05-08 20:01:44 +00:00
Dan Gohman d0800241d2 When pruning candidate formulae out of an LSRUse, update the
LSRUse's Regs set after all pruning is done, rather than trying
to do it on the fly, which can produce an incomplete result.

This fixes a case where heuristic pruning was stripping all
formulae from a use, which led the solver to enter an infinite
loop.

Also, add a few asserts to diagnose this kind of situation.

llvm-svn: 103328
2010-05-07 23:36:59 +00:00
Devang Patel 32cc43c242 Wrap const MDNode * inside DIDescriptor.
llvm-svn: 103295
2010-05-07 20:54:48 +00:00
Devang Patel 4423abd734 Use overloaded operators instead of DIDescriptor::getNode()
llvm-svn: 103276
2010-05-07 18:19:32 +00:00
Ted Kremenek d90773ebe0 Update CMake build.
llvm-svn: 103266
2010-05-07 17:13:20 +00:00
Dan Gohman 5d5b8b1b8c Add an LLVM IR version of code sinking. This uses the same simple algorithm
as MachineSink, but it isn't constrained by MachineInstr-level details.

llvm-svn: 103257
2010-05-07 15:40:13 +00:00
Bob Wilson 0c8b29bcdb Use the right version of "append" to combine two SmallVectors.
This fixes the compile-time regressions seen in last night's tests.

llvm-svn: 103118
2010-05-05 20:44:15 +00:00
Bob Wilson d1b38e317d Combine the implementations of the core part of the SSAUpdater and
MachineSSAUpdater to avoid duplicating all the code.

llvm-svn: 103060
2010-05-04 23:18:19 +00:00
Bob Wilson a2fda8b648 Defer adding critical edges to the "toSplit" list until after checking for
indirect branches in all the predecessors.  This avoids unnecessarily
splitting edges in cases where load PRE is not possible anyway.
Thanks to Jakub Staszak for pointing this out.

llvm-svn: 103034
2010-05-04 20:03:21 +00:00
Dan Gohman 1d2ded75e2 Use getConstant instead of getIntegerSCEV. The two are basically the
same, now that getConstant has overloads consistent with ConstantInt::get.

llvm-svn: 102965
2010-05-03 22:09:21 +00:00
Devang Patel 9f5200a122 Check for side effects before splitting loop.
Patch by Jakub Staszak!

llvm-svn: 102928
2010-05-03 18:06:58 +00:00
Chris Lattner b49a622fe9 revert r102831. We already delete dead readonly calls in
other places; killing a valid transformation is not the right
answer.

llvm-svn: 102850
2010-05-01 17:19:38 +00:00
Owen Anderson 550986ea90 Disable the call-deletion transformation introduced in r86975. Without
halting analysis, it is illegal to delete a call to a read-only function.
The correct solution is almost certainly to add a "must halt" attribute and
only allow deletions in its presence.

XFAIL the relevant testcase for now.

llvm-svn: 102831
2010-05-01 08:34:28 +00:00
Chris Lattner c2432b9d44 rename InlineInfo.DevirtualizedCalls -> InlinedCalls to
reflect that it includes all inlined calls now, not just
devirtualized ones.

llvm-svn: 102824
2010-05-01 01:26:13 +00:00
Chris Lattner fc8d9ee6c3 Implement rdar://6295824 and PR6724 with two tiny changes
that can have a big effect :).  The first is to enable the
iterative SCC passmanager juice that kicks in when the
scc passmgr detects that a function pass has devirtualized
a call.  In this case, it will rerun all the passes it 
manages on the SCC, up to the iteration count limit (4). This
is useful because a function pass may devirtualize a call, and
we want the inliner to inline it, or pruneeh to infer stuff
about it, etc.

The second patch is to add *all* call sites to the 
DevirtualizedCalls list the inliner uses.  This list is
about to get renamed, but the jist of this is that the 
inliner now reconsiders *all* inlined call sites as candidates
for further inlining.  The intuition is this that in cases 
like this:

f() { g(1); }     g(int x) { h(x); }

We analyze this bottom up, and may decide that it isn't 
profitable to inline H into G.  Next step, we decide that it is
profitable to inline G into F, and do so, which means that F 
now calls H.  Even though the call from G -> H may not have been
profitable to inline, the call from F -> H may be (in this case
because a constant allows folding etc).

In my spot checks, this doesn't have a big impact on code.  For
example, the LLC output for 252.eon grew by 0.02% (from
317252 to 317308) and 176.gcc actually shrank by 0.3% (from 1525612
to 1520964 bytes).  252.eon never iterated in the SCC Passmgr,
176.gcc iterated at most 1 time.

llvm-svn: 102823
2010-05-01 01:15:56 +00:00
Chris Lattner e8262675a3 The inliner has traditionally not considered call sites
that appear due to inlining a callee as candidates for
further inlining, but a recent patch made it do this if
those call sites were indirect and became direct.

Unfortunately, in bizarre cases (see testcase) doing this
can cause us to infinitely inline mutually recursive
functions into callers not in the cycle.  Fix this by
keeping track of the inline history from which callsite
inline candidates got inlined from.

This shouldn't affect any "real world" code, but is required
for a follow on patch that is coming up next.

llvm-svn: 102822
2010-05-01 01:05:10 +00:00
Devang Patel 3ca9a9b59c Preserve debug info attached to a call instruction while eliminating a dead argument.
Radar 7927803

llvm-svn: 102760
2010-04-30 20:23:54 +00:00
Chris Lattner 4bd85e47bf further clarify alignment of globals, fix instcombine
to not increase the alignment of globals with an assigned
alignment and section.

llvm-svn: 102476
2010-04-28 00:31:12 +00:00
Chris Lattner 44a27efdf9 Fix a problem that lower invoke has with allocas (PR6694), and
add a version of createLowerInvokePass that allows the client
to specify whether it wants "expensive" or "cheap" lowering.

Patch by Alex Mac!

llvm-svn: 102402
2010-04-26 23:49:32 +00:00
Chris Lattner 87aa2243e2 fix PR6940: sitofp(undef) folds to 0.0, not undef.
llvm-svn: 102358
2010-04-26 18:21:23 +00:00
Chris Lattner b34ffe36ae remove #if 1's.
llvm-svn: 102296
2010-04-25 04:43:02 +00:00
Dan Gohman 534ba376f6 Generalize LSR's OptimizeMax to handle the new kinds of max expressions
that indvars may use, now that indvars is recognizing le and ge loops.

llvm-svn: 102235
2010-04-24 03:13:44 +00:00
Chris Lattner d3b361d1b6 enable my inliner change: add newly devirtualized call sites to
the worklist, making them inline candidates.

llvm-svn: 102213
2010-04-23 21:16:07 +00:00
Chris Lattner c691de3b4e switch InlineInfo.DevirtualizedCalls's list to be of WeakVH.
This fixes a bug where calls inlined into an invoke would get
changed into an invoke but the array would keep pointing to
the (now dead) call.  The improved inliner behavior is still
disabled for now.

llvm-svn: 102196
2010-04-23 18:37:01 +00:00
Dan Gohman 997bbc54d6 Fix LSR to tolerate cases where ScalarEvolution initially
misses an opportunity to fold add operands, but folds them
after LSR has separated them out. This fixes rdar://7886751.

llvm-svn: 102157
2010-04-23 01:55:05 +00:00
Chris Lattner d8d898dbd3 disable my previous inliner patch; it appears to be busting self-host.
llvm-svn: 102153
2010-04-23 00:41:03 +00:00
Chris Lattner 2eee5d3467 The inliner was choosing to not consider call sites
that appear in the SCC as a result of inlining as candidates
for inlining.  Change this so that it *does* consider call 
sites that change from being indirect to being direct as a
result of inlining.  This allows it to completely 
"devirtualize" the testcase.

llvm-svn: 102146
2010-04-22 23:37:35 +00:00
Chris Lattner 4ba01ec869 refactor the interface to InlineFunction so that most of the in/out
arguments are handled with a new InlineFunctionInfo class.  This 
makes it easier to extend InlineFunction to return more info in the
future.

llvm-svn: 102137
2010-04-22 23:07:58 +00:00
Chris Lattner 016c00a311 when inlining something like this:
define void @f3(void (i8*)* %__f) ssp {
entry:
  call void %__f(i8* undef)
  unreachable
}

define void @f4(i8* %this) ssp align 2 {
entry:
  call void @f3(void (i8*)* @f2) ssp
  ret void
}

The inliner is turning the indirect call to %__f into a direct
call to F2.  Make the call graph more precise when this happens.

The inliner doesn't revisit call sites introduced by inlining,
so there isn't an easy way to test for this, but a more precise
callgraph is a good thing.

llvm-svn: 102131
2010-04-22 21:31:00 +00:00
Chris Lattner 0a3b5b4e39 eliminate dead #include.
llvm-svn: 102119
2010-04-22 20:41:10 +00:00
Bob Wilson 4c7f50afb8 Fix a performance problem with the new SSAUpdater. This showed up in the
GCCAS time for MultiSource/Benchmarks/ASCI_Purple/SMG2000.

llvm-svn: 102009
2010-04-21 18:39:03 +00:00
Devang Patel 2176643241 Rename ValueMapTy as ValueToValueMapTy to clearly indicate that this has no relationship with ADT/ValueMap.
llvm-svn: 101950
2010-04-20 22:24:18 +00:00
Devang Patel 382b969647 There is no need to install ValueMapper.h header.
llvm-svn: 101949
2010-04-20 22:18:31 +00:00
Gabor Greif 27b3d55194 use abstract accessors to CallInst
llvm-svn: 101899
2010-04-20 13:13:04 +00:00
Chris Lattner 66e809acc0 remove a bunch of ad-hoc code to simplify instructions from
loop unswitch, and use inst simplify instead.  It is more
powerful and avoids duplication.

llvm-svn: 101874
2010-04-20 05:33:18 +00:00
Chris Lattner c707fa9651 move some select simplifications out of instcombine into
inst simplify.  No functionality change.

llvm-svn: 101873
2010-04-20 05:32:14 +00:00
Chris Lattner 5814d9d9da RewriteLoopBodyWithConditionConstant can end up rewriting the
condition we're unswitching on.  In this case, don't try to
simplify the second copy of the loop which may be dead or not,
but is probably a constant now.  This fixes PR6879

llvm-svn: 101870
2010-04-20 05:09:16 +00:00
Chris Lattner a5cdd5e6a2 make the inliner do less work for leaf functions.
llvm-svn: 101846
2010-04-20 00:47:08 +00:00
Chris Lattner e93846762a Fix rdar://7879828 - crash in CallGraph, a self host issue.
Arg promotion was deleting call graph nodes that still had references
from the 'indirect' CGN.  Like the inliner, it should only delete the
function if all references are gone.

llvm-svn: 101845
2010-04-20 00:46:50 +00:00
Dan Gohman e637ff5e9a Remove the Expr member from IVUsers. Instead of remembering the expression,
just ask ScalarEvolution for it on demand. This helps IVUsers be more robust
in the case of expressions changing underneath it. This fixes PR6862.

llvm-svn: 101819
2010-04-19 21:48:58 +00:00
Bob Wilson ca51425d94 Re-commit my previous SSAUpdater changes. The previous version naively tried
to determine where to place PHIs by iteratively comparing reaching definitions
at each block.  That was just plain wrong.  This version now computes the
dominator tree within the subset of the CFG where PHIs may need to be placed,
and then places the PHIs in the iterated dominance frontier of each definition.
The rest of the patch is mostly the same, with a few more performance
improvements added in.

llvm-svn: 101612
2010-04-17 03:08:24 +00:00
Eric Christopher 7258dcd77f Revert 101465, it broke internal OpenGL testing.
Probably the best way to know that all getOperand() calls have been handled
is to replace that API instead of updating.

llvm-svn: 101579
2010-04-16 23:37:20 +00:00
Chris Lattner 4422d31b84 introduce a new CallGraphSCC class, and pass it around
to CallGraphSCCPass's instead of passing around a
std::vector<CallGraphNode*>.  No functionality change,
but now we have a much tidier interface.

llvm-svn: 101558
2010-04-16 22:42:17 +00:00
Dan Gohman 99e5327bfd Refine the detection of seemingly infinitely recursive calls where the
callee is expected to be expanded to something else by codegen, so that
normal infinitely recursive calls are still transformed.

llvm-svn: 101468
2010-04-16 15:57:50 +00:00
Gabor Greif f375520f7b reapply r101434
with a fix for self-hosting

rotate CallInst operands, i.e. move callee to the back
of the operand array

the motivation for this patch are laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary

llvm-svn: 101465
2010-04-16 15:33:14 +00:00
Chris Lattner bd2d9430d6 fix comment noticed by Bob
llvm-svn: 101437
2010-04-16 02:32:17 +00:00
Gabor Greif 403e9694f9 back out r101423 and r101397, they break llvm-gcc self-host on darwin10
llvm-svn: 101434
2010-04-16 01:16:20 +00:00
Chris Lattner 1146d326a7 fix PR6832: we were using the alignment of a pointer when we
wanted the alignment of the pointee.

llvm-svn: 101432
2010-04-16 01:05:38 +00:00
Chris Lattner b73552908e improve comments.
llvm-svn: 101429
2010-04-16 00:38:19 +00:00
Chris Lattner 78d7dbbc30 pull all the ConvertToScalarInfo code together into one
place.

llvm-svn: 101427
2010-04-16 00:24:57 +00:00
Chris Lattner d69c3ee958 more refactoring: suck some stuff out of SRoA into
ConvertToScalarInfo.

llvm-svn: 101425
2010-04-16 00:20:00 +00:00
Gabor Greif 6af0ad846e shift intrinsic operand
llvm-svn: 101423
2010-04-16 00:06:45 +00:00
Chris Lattner 9ef4eae6e6 introduce a new ConvertToScalarInfo struct to simplify
CanConvertToScalar/MergeInType.  Eliminate a pointless
LLVMContext argument to MergeInType.

llvm-svn: 101422
2010-04-15 23:50:26 +00:00
Chris Lattner 9c1172d848 tidy interface to isOnlyCopiedFromConstantGlobal
llvm-svn: 101405
2010-04-15 21:59:20 +00:00
Gabor Greif 33ae80bff7 reapply r101364, which has been backed out in r101368
with a fix

rotate CallInst operands, i.e. move callee to the back
of the operand array

the motivation for this patch are laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary

llvm-svn: 101397
2010-04-15 20:51:13 +00:00
Anton Korobeynikov 839cdaa70a Revert r100896 and around - this breaks the only mingw32 buildbot we have.
llvm-svn: 101387
2010-04-15 19:51:42 +00:00
Dan Gohman b29cda9b3c Fix a bunch of namespace polution.
llvm-svn: 101376
2010-04-15 17:08:50 +00:00
Gabor Greif 9fd00c7d25 back out r101364, as it trips the linux nightlybot on some clang C++ tests
llvm-svn: 101368
2010-04-15 12:46:56 +00:00
Gabor Greif aafd209632 rotate CallInst operands, i.e. move callee to the back
of the operand array

the motivation for this patch are laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary

llvm-svn: 101364
2010-04-15 10:49:53 +00:00
Tobias Grosser de1a37b872 IPO needs ScalarOpts and InstCombine in its libs
The commit "Adding IPSCCP and Internalize passes to the C-bindings" introduced
new dependencies for IPO. Add these to the CMAKE build as otherwise the
BUILD_SHARED_LIBS=1 build fails.

llvm-svn: 101313
2010-04-14 23:42:23 +00:00
Evan Cheng 21b588b678 - Code clean up to reduce indentation.
- TryToOptimizeStoreOfMallocToGlobal should check if TargetData is available and bail out if it is not. The transformations being done require TD.

llvm-svn: 101285
2010-04-14 20:52:55 +00:00
Gabor Greif c08e5df836 performance: cache the dereferenced use_iterator
llvm-svn: 101253
2010-04-14 16:48:56 +00:00
Gabor Greif a49686fa3e performance: cache the dereferenced use_iterator
llvm-svn: 101250
2010-04-14 16:13:56 +00:00
Nick Lewycky 163a743b51 I don't know how, but I managed to goof the revert. Remove the function that should
have been removed in r101231.

llvm-svn: 101232
2010-04-14 05:03:50 +00:00
Nick Lewycky ca615eb0d6 Revert r101213.
llvm-svn: 101231
2010-04-14 04:51:58 +00:00
Nick Lewycky 087d59cf25 Remove tab.
llvm-svn: 101223
2010-04-14 04:19:05 +00:00
Nick Lewycky 3cdae269f0 While DAE can't modify the function signature of an externally visible function,
it can check whether the visible direct callers are passing in parameters to
dead arguments and replace those with undef.

This reinstates r94322 with bugs fixed.

llvm-svn: 101213
2010-04-14 03:38:11 +00:00
Eric Christopher 4016dcd625 Actually... return after the check for invalid input.
llvm-svn: 101139
2010-04-13 16:41:29 +00:00
Owen Anderson b516f1c6cc Remove SCCVN from the CMake build system.
llvm-svn: 101125
2010-04-13 08:33:09 +00:00
Owen Anderson 9ed6abfe0b SCCVN, we hardly knew ye!
llvm-svn: 101117
2010-04-13 05:24:08 +00:00
Dan Gohman 5867a56db8 Teach IndVarSimplify how to eliminate remainder operators where the
numerator is an induction variable. For example, with code like this:

  for (i=0;i<n;++i)
    x[i%n] = 0;

IndVarSimplify will now recognize that i is always less than n inside
the loop, and eliminate the remainder.

llvm-svn: 101113
2010-04-13 01:46:36 +00:00
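In effect, the loop in the message is rewritten as follows (an illustrative C rendering of the result, not actual compiler output): since 0 <= i < n inside the loop, i % n is simply i.

  /* After the transform, the remainder on the induction variable is gone. */
  void clear(int *x, int n) {
    int i;
    for (i = 0; i < n; ++i)
      x[i] = 0;
  }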
Dan Gohman 4a645b88ef Suppress LinearFunctionTestReplace when the computed backedge-taken
expression is a UDiv and it doesn't appear that the UDiv came from
the user's source.

ScalarEvolution has recently figured out how to compute a tripcount
expression for the inner loop in
SingleSource/Benchmarks/Shootout/sieve.c, using a udiv. Emitting a
udiv instruction dramatically slows down the enclosing loop.

llvm-svn: 101068
2010-04-12 21:13:43 +00:00
Dan Gohman 27c8e79839 Delete this code, which is no longer needed.
llvm-svn: 101033
2010-04-12 08:00:22 +00:00
Dan Gohman 07f6563e81 Move the EliminateIVUsers call back out to its original location. Now that
a ScalarEvolution bug with overflow handling is fixed, the normal analysis
code will automatically decline to operate on the icmp instructions which
are responsible for the loop exit.

llvm-svn: 101032
2010-04-12 07:56:56 +00:00
Dan Gohman 15f90c294c Use RecursivelyDeleteTriviallyDeadInstructions in EliminateIVComparisons,
instead of deleting just the user. This makes it more consistent with
other code in IndVarSimplify, and theoretically can eliminate more users
earlier.

llvm-svn: 101027
2010-04-12 07:29:15 +00:00
Eric Christopher 1f272f7fd8 Verify function prototypes before trying to optimize functions. We also
need TargetData, just return false if we don't have it.

Update testcases accordingly.

Fixes PR6807.

llvm-svn: 101011
2010-04-12 04:48:00 +00:00
Dan Gohman fa5ad797e3 Re-apply r101000, with a fix: Don't eliminate an icmp which is part of
the loop exit test. This usually doesn't come up for a variety of
reasons, but it isn't impossible, so make IndVarSimplify handle it
conservatively.

llvm-svn: 101008
2010-04-12 02:21:50 +00:00
Dan Gohman c0f1efaf8d Revert 101000, which is breaking self-host builds.
llvm-svn: 101002
2010-04-12 00:17:10 +00:00
Dan Gohman af4ab1b681 Teach IndVarSimplify how to eliminate comparisons involving induction
variables. For example, with code like this:

  for (i=0;i<n;++i)
    if (i<n)
      x[i] = 0;

IndVarSimplify will now recognize that i is always less than n inside
the loop, and eliminate the if.

llvm-svn: 101000
2010-04-11 23:10:12 +00:00
Dan Gohman b50349a979 Rename isLoopGuardedByCond to isLoopEntryGuardedByCond, to emphasise
that it's only testing for the entry condition, not full loop-invariant
conditions.

llvm-svn: 100979
2010-04-11 19:27:13 +00:00
Chris Lattner 4568ed7893 Implement support for varargs functions without any fixed
parameters in the CBE by implicitly adding a fixed argument.
This allows eliminating a work-around from DAE.  Patch by
Sylvere Teissier!

llvm-svn: 100944
2010-04-10 19:12:44 +00:00
Chris Lattner 9ae28b141f fix PR6743, a case where we'd delete an instruction before using it
in some cases.

llvm-svn: 100937
2010-04-10 18:26:57 +00:00
Chris Lattner b9801ffcb5 fix PR6760, a missing check in heap SRoA.
llvm-svn: 100936
2010-04-10 18:19:22 +00:00
Dan Gohman 607e02b33a When determining a canonical insert position, don't climb deeper
into adjacent loops. Also, ensure that the insert position is
dominated by the loop latch of any loop in the post-inc set which
has a latch.

llvm-svn: 100906
2010-04-09 22:07:05 +00:00
Chris Lattner 74e2ef68b9 suck the propagating "has dynamic libs" check into a single makefile
variable TARGET_HAS_DYNAMIC_LIBS

llvm-svn: 100896
2010-04-09 20:51:47 +00:00
Chris Lattner c86cdc7d47 add minix support, patch by Kees van Reeuwijk! PR6797
llvm-svn: 100895
2010-04-09 20:45:04 +00:00
Wesley Peck a2ca3fa781 Adding IPSCCP and Internalize passes to the C-bindings
llvm-svn: 100893
2010-04-09 20:43:20 +00:00
Dan Gohman 42ec4eb351 When looking for loop-invariant users, look through no-op instructions,
so that an unfortunately placed bitcast doesn't pin a value in a
register.

llvm-svn: 100883
2010-04-09 19:12:34 +00:00
Gabor Greif ef60190a00 performance: cache result of looking up user
llvm-svn: 100862
2010-04-09 15:18:34 +00:00
Dan Gohman 0a8175d1db Minor code simplification.
llvm-svn: 100859
2010-04-09 14:53:59 +00:00
Gabor Greif ce6dd889ec const-ize a predicate
llvm-svn: 100856
2010-04-09 10:57:00 +00:00
Dan Gohman d2df643ddb Refactor the code for computing the insertion point for an expression into
a separate function.

llvm-svn: 100845
2010-04-09 02:00:38 +00:00
Chris Lattner c6c153be45 fix an SCCP miscompilation that could happen when a
forced constant is changed to a constant: we would end
up adding the instruction to the wrong worklist,
preventing it from being properly revisited.  This fixes
rdar://7832370

llvm-svn: 100837
2010-04-09 01:14:31 +00:00
Dan Gohman 9b5d0bb774 Avoid allocating a value of zero in a register if the initial formula
inputs happen to negate each other.

llvm-svn: 100828
2010-04-08 23:36:27 +00:00
Dan Gohman 4ce1fb1448 Add variants of ult, ule, etc. which take a uint64_t RHS, for convenience.
llvm-svn: 100824
2010-04-08 23:03:40 +00:00
Dan Gohman 4506539d84 When expanding expressions which are using post-inc mode for multiple loops,
ensure that the expansion is dominated by the increments of those loops.

llvm-svn: 100748
2010-04-08 05:57:57 +00:00
Dan Gohman eb7111b98f Say bitcast instead of bitconvert.
llvm-svn: 100720
2010-04-07 23:22:42 +00:00
Eric Christopher e8b281c3c3 Add support for stpncpy_chk.
llvm-svn: 100710
2010-04-07 23:00:07 +00:00
Chris Lattner 2104b8d36e rename llvm::llvm_report_error -> llvm::report_fatal_error
llvm-svn: 100709
2010-04-07 22:58:41 +00:00
Dan Gohman d006ab90dd Generalize IVUsers to track arbitrary expressions rather than expressions
explicitly split into stride-and-offset pairs. Also, add the
ability to track multiple post-increment loops on the same expression.

This refines the concept of "normalizing" SCEV expressions used for
post-increment uses, and introduces a dedicated utility routine for
normalizing and denormalizing expressions.

This fixes the expansion of expressions which are post-increment users
of more than one loop at a time. More broadly, this takes LSR another
step closer to being able to reason about more than one loop at a time.

llvm-svn: 100699
2010-04-07 22:27:08 +00:00
Gabor Greif 08d85da6cc fix 80-col violations
llvm-svn: 100677
2010-04-07 18:59:26 +00:00
Gabor Greif df323a51f5 performance: get rid of repeated dereferencing of use_iterator by caching its result
llvm-svn: 100550
2010-04-06 19:32:30 +00:00
Gabor Greif 679728790b make two more predicates constant
llvm-svn: 100549
2010-04-06 19:24:18 +00:00
Gabor Greif 08355d6cda performance: get rid of repeated dereferencing of use_iterator by caching its result
llvm-svn: 100547
2010-04-06 19:14:05 +00:00
Gabor Greif a21bc0fbd5 const-ize predicate ValueIsOnlyUsedLocallyOrStoredToOneGlobal
llvm-svn: 100546
2010-04-06 18:58:22 +00:00
Gabor Greif 0439789023 use CallSite to access calls vs. invokes uniformly
and remove assumptions about operand order

llvm-svn: 100544
2010-04-06 18:45:08 +00:00
Chris Lattner adca608281 fix a really nasty bug that Evan was tracking in SCCP. When resolving
undefs in branches/switches, we have two cases: a branch on a literal
undef or a branch on a symbolic value which is undef.  If we have a
literal undef, the code was correct: forcing it to a constant is the
right thing to do.

If we have a branch on a symbolic value that is undef, we should force
the symbolic value to a constant, which then makes the successor block
live.  Forcing the condition of the branch to being a constant isn't 
safe if later paths become live and the value becomes overdefined.  This
is the case that 'forcedconstant' is designed to handle, so just use it.

This fixes rdar://7765019 but there is no good testcase for this, the
one I have is too insane to be useful in the future.

llvm-svn: 100478
2010-04-05 22:14:48 +00:00
Chris Lattner c832c1bf69 some code cleanups, use SwitchInst::findCaseValue, reduce indentation
llvm-svn: 100468
2010-04-05 21:18:32 +00:00
Evan Cheng ba930449a9 Code clean up.
llvm-svn: 100467
2010-04-05 21:16:25 +00:00
Mon P Wang c576ee9040 Reapply address space patch after fixing an issue in MemCopyOptimizer.
Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)

llvm-svn: 100304
2010-04-04 03:10:48 +00:00
Chris Lattner ecb536313f require that the branch being controlled by the IV
exits the loop.  With this information we can guarantee 
the iteration count of the loop is bounded by the 
compare.  I think this xforms is finally safe now.

llvm-svn: 100285
2010-04-03 07:21:39 +00:00
Chris Lattner 40060d33f6 add integer overflow check for the fp induction variable
checker.  Amusingly, we already had tests in the testsuite
that we should have rejected because they would have been
miscompiled.

The remaining issue with this is that we don't check that
the branch causes us to exit the loop if it fails, so we
don't actually know if we remain in bounds.

llvm-svn: 100284
2010-04-03 07:18:48 +00:00
Chris Lattner 69913466cb add a comment and fix some consistency issues, converting
to a signed vs unsigned value depending on the sign of the
constant fp means that we can't distinguish between a 
truly negative number and a positive number so large that the
32nd bit is set.  So, don't do this!

llvm-svn: 100283
2010-04-03 06:41:49 +00:00
Chris Lattner 40ea690f39 fix PR6761, a miscompilation due to the fp->int IV conversion
stuff.  More bugs remain though.

llvm-svn: 100282
2010-04-03 06:30:03 +00:00
Chris Lattner 42202868c3 just eliminate the uitofp checks. This code isn't doing
the required validity checks in the first place, and supporting
a condition large enough to require the 32nd bit isn't worth it.

llvm-svn: 100280
2010-04-03 06:25:21 +00:00
Chris Lattner ca25b60f4e rename PH -> PN to be consistent with WeakPN and the rest
of llvm.

llvm-svn: 100276
2010-04-03 06:17:08 +00:00
Chris Lattner 774858fc38 improve comment and drop a dead check. If PH had
no uses, it would have been deleted by 
RecursivelyDeleteTriviallyDeadInstructions

llvm-svn: 100275
2010-04-03 06:16:22 +00:00
Chris Lattner 915322bc4a strength reduce a ridiculous use of APInt.
llvm-svn: 100274
2010-04-03 06:13:12 +00:00
Chris Lattner 0b941347f9 rename stuff, improve comment grammar.
llvm-svn: 100273
2010-04-03 06:11:07 +00:00
Chris Lattner d77bde5f94 simplify some code and resolve a fixme.
llvm-svn: 100272
2010-04-03 06:06:59 +00:00
Chris Lattner 2ff33f91d5 There is no guarantee that the increment and the branch
are in the same block.  Insert the new increment in the
correct location.

Also, more cleanups.

llvm-svn: 100271
2010-04-03 06:05:10 +00:00
Chris Lattner c558b49f14 first half of a pass through IndVarSimplify::HandleFloatingPointIV,
this cleans up a bunch of code and also fixes several crashes and
miscompiles.  More to come unfortunately, this optimization
is quite broken.

llvm-svn: 100270
2010-04-03 05:54:59 +00:00
Chris Lattner 2e23e5284c don't internalize available_externally functions, they are
really just declarations.  This is related to PR6524

llvm-svn: 100269
2010-04-03 05:24:50 +00:00
Bob Wilson f1aa4743d9 Revert all my SSAUpdater patches. The PHI placement algorithm is not correct
(what was I thinking?) and there's also a problem with LCSSA.  I'll try again
later with fixes.

--- Reverse-merging r100263 into '.':
U    lib/Transforms/Utils/SSAUpdater.cpp
--- Reverse-merging r100177 into '.':
G    lib/Transforms/Utils/SSAUpdater.cpp
--- Reverse-merging r100148 into '.':
G    lib/Transforms/Utils/SSAUpdater.cpp
--- Reverse-merging r100147 into '.':
U    include/llvm/Transforms/Utils/SSAUpdater.h
G    lib/Transforms/Utils/SSAUpdater.cpp
--- Reverse-merging r100131 into '.':
G    include/llvm/Transforms/Utils/SSAUpdater.h
G    lib/Transforms/Utils/SSAUpdater.cpp
--- Reverse-merging r100130 into '.':
G    lib/Transforms/Utils/SSAUpdater.cpp
--- Reverse-merging r100126 into '.':
G    include/llvm/Transforms/Utils/SSAUpdater.h
G    lib/Transforms/Utils/SSAUpdater.cpp
--- Reverse-merging r100050 into '.':
D    test/Transforms/GVN/2010-03-31-RedundantPHIs.ll
--- Reverse-merging r100047 into '.':
G    include/llvm/Transforms/Utils/SSAUpdater.h
G    lib/Transforms/Utils/SSAUpdater.cpp

llvm-svn: 100264
2010-04-03 03:50:38 +00:00
Bob Wilson 25f1aefd5b Add a DEBUG_TYPE for the SSAUpdater.
llvm-svn: 100263
2010-04-03 03:28:44 +00:00
Evan Cheng ed66db3f9b Code refactoring.
llvm-svn: 100262
2010-04-03 02:23:43 +00:00
Mon P Wang 999c1b927b Revert r100191 since it breaks objc in clang
llvm-svn: 100199
2010-04-02 18:43:02 +00:00
Mon P Wang a972ab8564 Reapply address space patch after fixing an issue in MemCopyOptimizer.
Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)

llvm-svn: 100191
2010-04-02 18:04:15 +00:00
Dan Gohman f7239102fe Manually notify ScalarEvolution before making an operand replacement, since
it can't currently observe such changes automatically.

llvm-svn: 100186
2010-04-02 14:48:31 +00:00
Bob Wilson 3c54edf9b3 Recommit 100158 now that the buildbots are happy again.
llvm-svn: 100177
2010-04-02 05:09:46 +00:00
Dan Gohman 4bd755419f Revert the recent alignment changes. They're broken for -Os because,
in particular, they end up aligning strings at 16-byte boundaries, and
there's no way for GlobalOpt to check OptForSize.

llvm-svn: 100172
2010-04-02 03:04:37 +00:00
Bob Wilson 0389adcd73 Revert 100158 in case it is causing some of the buildbot problems.
llvm-svn: 100164
2010-04-02 01:22:49 +00:00
Dan Gohman c671347fcb Make globalopt refine global variable alignment.
llvm-svn: 100160
2010-04-02 00:14:16 +00:00
Bob Wilson 9af4e118c6 Check for terminating conditions before adding PHIs to the worklists.
This is more efficient than adding them to the worklist and then ignoring them.

llvm-svn: 100158
2010-04-02 00:10:41 +00:00
Bob Wilson 737195069a Remove trailing whitespace.
llvm-svn: 100148
2010-04-01 23:06:38 +00:00
Bob Wilson 37b73d9d3e Rewrite another SSAUpdater function to avoid recursion.
llvm-svn: 100147
2010-04-01 23:05:58 +00:00
Bob Wilson 8409feadf0 Change another SSAUpdater function to avoid recursion.
llvm-svn: 100131
2010-04-01 20:04:30 +00:00
Bob Wilson 043c0406f7 Simplify the code to check for existing PHIs, now that it is only used in
one place.  This removes the template function added in svn 94690.

llvm-svn: 100130
2010-04-01 19:53:48 +00:00
Bob Wilson 38fc88ee5d The SSAUpdater should avoid recursive traversals of the CFG, since that may
blow out the stack for really big functions.  Start by fixing an easy case.

llvm-svn: 100126
2010-04-01 18:46:59 +00:00
Gabor Greif 5d5db5342b Introduce ImmutableCallSite, useful for contexts where no mutation
is necessary. Inherits from new templated baseclass CallSiteBase<>
which is highly customizable. Base CallSite on it too, in a configuration
that allows full mutation.
Adapt some call sites in analyses to employ ImmutableCallSite.

llvm-svn: 100100
2010-04-01 08:21:08 +00:00
Nick Lewycky bfb50a0d43 Clean up this file a little, no functionality change. This is a subset of my
patch back in r94322.

llvm-svn: 100097
2010-04-01 07:34:00 +00:00
Bob Wilson ac229124f4 Rewrite part of the SSAUpdater to be more careful about inserting redundant
PHIs.  The previous algorithm was unable to reliably detect when existing
PHIs in a cycle can be reused.  I'm still working on reducing a testcase.
Radar 7711900.

llvm-svn: 100047
2010-03-31 20:51:00 +00:00
Dale Johannesen b67a6e6620 Fix a nasty dangling-pointer heisenbug that could
generate wrong code pretty much anywhere AFAICT.
Constructing a case that hits the bug reproducibly proved impossible,
but the situation was like this:
Addr = ...
Store -> Addr
Addr2 = GEP , 0, 0
Store -> Addr2
Handling the first store, the code replaced Addr
with a sunkaddr and deleted Addr, but not its table
entry.  Code in OptimizedBlock replaced Addr2 with a
bitcast; if that happened to reuse the memory of Addr,
the old table entry was erroneously found when handling
the second store.

llvm-svn: 100044
2010-03-31 20:37:15 +00:00
Bob Wilson 6f7fd28824 Revert Mon Ping's change 99928, since it broke all the llvm-gcc buildbots.
llvm-svn: 99948
2010-03-30 22:27:04 +00:00
Mon P Wang 7460571381 Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
A update of langref will occur in a subsequent checkin.

llvm-svn: 99928
2010-03-30 20:55:56 +00:00
Dan Gohman 39027c403c Fix a grammaro.
llvm-svn: 99917
2010-03-30 20:04:57 +00:00
Gabor Greif b469818279 fix two cases where the arguments were extracted from the wrong range out of the InvokeInst
spotted by baldrick -- thanks!

llvm-svn: 99914
2010-03-30 19:20:53 +00:00
Jeffrey Yasskin 12fd516e51 Remove another memory leak from ABCD by using Edges by value instead of
pointer.  There was also a SmallPtrSet whose settiness wasn't being used, so I
changed it to a SmallVector.

llvm-svn: 99713
2010-03-27 09:09:17 +00:00
Jeffrey Yasskin 97e613b6da In ABCD, change the non-null Bound*s to Bound&s.
llvm-svn: 99711
2010-03-27 08:15:46 +00:00
Jeffrey Yasskin 33bc7e4cb5 Fix a memory leak in ABCD by giving ownership of Bound objects to the
MemoizedResultChart.

llvm-svn: 99710
2010-03-27 08:09:24 +00:00
Eric Christopher 81c03447fc When we promote a load of an argument make sure to take the alignment
of the previous load - it's usually important.  For example, we don't want
to blindly turn an unaligned load into an aligned one.

llvm-svn: 99699
2010-03-27 01:54:00 +00:00
Dan Gohman d42e09d91e Ignore debug intrinsics in yet more places.
llvm-svn: 99580
2010-03-26 00:33:27 +00:00
Gabor Greif 6c6b2fd2b2 rename pred_const_iterator to const_pred_iterator for consistency's sake
llvm-svn: 99567
2010-03-25 23:25:28 +00:00
Gabor Greif c78d720f02 rename use_const_iterator to const_use_iterator for consistency's sake
llvm-svn: 99564
2010-03-25 23:06:16 +00:00
Chris Lattner 0563804982 fix PR6642, GVN forwarding from memset to load of the base of the memset.
llvm-svn: 99488
2010-03-25 05:58:19 +00:00
Eric Christopher 1d38538fb6 Temporarily revert this, it's causing an issue with an internal project.
llvm-svn: 99451
2010-03-24 23:35:21 +00:00
Evan Cheng c12c2d9bb4 Move OptChkCall off LibCallOptimization into StrCpyOpt.
llvm-svn: 99418
2010-03-24 20:19:04 +00:00
Gabor Greif a2fbc0ae1b Finally land the InvokeInst operand reordering.
I have audited all getOperandNo calls now, fixing
hidden assumptions. CallSite related uglyness will
be eliminated successively.

Note this patch has a long and grievous history,
for all the back-and-forths have a look at
CallSite.h's log.

llvm-svn: 99399
2010-03-24 13:21:49 +00:00
Gabor Greif be18ae6781 tighten a type and remove trailing whitespace, no functional changes
llvm-svn: 99398
2010-03-24 11:58:07 +00:00
Gabor Greif 9027ffb918 increase const goodness and remove pointless getUser() calls
llvm-svn: 99395
2010-03-24 10:29:52 +00:00
Gabor Greif 11ff53146f cache result of UI.getOperandNo() instead of calling it twice, it is cheaper this way
llvm-svn: 99394
2010-03-24 10:12:54 +00:00
Chris Lattner 00eeac4179 add some accessors to callsite/callinst/invokeinst to check
for the noinline attribute, and make the inliner refuse to
inline a call site when the call site is marked noinline even
if the callee isn't.  This fixes PR6682.

llvm-svn: 99341
2010-03-23 22:59:07 +00:00
Bill Wendling 04803e8ef6 Skip debugging intrinsics when sinking unused invariants.
llvm-svn: 99324
2010-03-23 21:15:59 +00:00
Evan Cheng d9e822345c Teach simplify libcall to transform __strcpy_chk to __memcpy_chk to enable optimizations downstream.
llvm-svn: 99282
2010-03-23 15:48:04 +00:00
Gabor Greif 161cb044f3 add assert in argpromotion, which cannot trigger
if Function::hasAddressTaken works as advertised

also included some cosmetic cleanups

llvm-svn: 99276
2010-03-23 14:40:20 +00:00
Evan Cheng 3f7842232e Fix incorrect logic causing instcombine to miss some _chk -> non-chk transformations.
llvm-svn: 99263
2010-03-23 06:06:09 +00:00
Evan Cheng 9a7b270825 Fix 80 col violation.
llvm-svn: 99224
2010-03-22 22:44:31 +00:00
Gabor Greif e1517a084f backing out r99170 because it still fails on clang-x86_64-darwin10-fnt
llvm-svn: 99171
2010-03-22 09:11:00 +00:00
Gabor Greif 7a743e15e3 Now that hopefully all direct accesses to InvokeInst operands are fixed
we can reapply the InvokeInst operand reordering patch. (see r98957).

llvm-svn: 99170
2010-03-22 08:28:00 +00:00
Gabor Greif febf6ab718 Add a setCalledFunction member to InvokeInst (like in CallInst)
and use this (as well as getCalledValue) to access the callee,
instead of {g|s}etOperand(0).

llvm-svn: 99084
2010-03-20 21:00:25 +00:00
Dan Gohman 1a2abe5580 Clear the SCEVExpander's insertion point after making deletions,
so that the SCEVExpander doesn't retain a dangling pointer as its
insert position. The dangling pointer in this case wasn't ever used
to insert new instructions, but it was causing trouble with
SCEVExpander's code for automatically advancing its insert position
past debug intrinsics.

This fixes use-after-free errors that valgrind noticed in
test/Transforms/IndVarSimplify/2007-06-06-DeleteDanglesPtr.ll and
test/Transforms/IndVarSimplify/exit_value_tests.ll.

llvm-svn: 99036
2010-03-20 03:53:53 +00:00
Gabor Greif 6c56ed847e back out r98957, it broke http://smooshlab.apple.com:8010/builders/clang-x86_64-darwin10-fnt/builds/703 in the nightly test suite
llvm-svn: 98958
2010-03-19 13:50:02 +00:00
Gabor Greif 8335f9c0bf Recommit r80858 again (which has been backed out in r80871).
This time I did a self-hosted bootstrap on Linux x86-64,
with no problems. Let's see how darwin 64-bit self-hosting
goes. At the first sign of failure I'll back this out.

Maybe the valgrind bots give me a hint of what may be wrong
(if at all).

llvm-svn: 98957
2010-03-19 11:55:53 +00:00
Benjamin Kramer f2e4b5dd7f str[r]chr returns its pointer argument so we cannot mark it as nocapture. Thanks to Duncan for spotting my mistake.
llvm-svn: 98671
2010-03-16 20:33:15 +00:00
Benjamin Kramer 5cf5fd2ffa Mark str[r]chr readonly.
llvm-svn: 98663
2010-03-16 19:36:43 +00:00
Devang Patel 45c1505bf6 Skip debug info intrinsics.
llvm-svn: 98584
2010-03-15 22:23:03 +00:00
Devang Patel b21991c4f5 Skip debug info intrinsics.
llvm-svn: 98581
2010-03-15 21:25:29 +00:00
Devang Patel d3f41e8939 In "empty" bb, the return instruction may not be first instruction, if dbg value intrinsics are present in this bb. Use terminator to find return instructions.
llvm-svn: 98565
2010-03-15 19:05:46 +00:00
Bill Wendling 55e69d179b Skip over debug info when trying to merge two return BBs.
llvm-svn: 98491
2010-03-14 10:40:55 +00:00
Bill Wendling ee84f27536 Make returns more consistent with others.
llvm-svn: 98490
2010-03-14 10:40:28 +00:00
Benjamin Kramer a956527c92 Add a virtual destructor and give vtable a home.
llvm-svn: 98376
2010-03-12 20:41:29 +00:00
Benjamin Kramer 7b88a49f3e Factor checked library call optimization into a common helper class and use it
to unify the almost identical code in CodeGenPrepare and InstCombineCalls.

llvm-svn: 98338
2010-03-12 09:27:41 +00:00
Nate Begeman 2e41605d4f Whoops this already existed.
llvm-svn: 98297
2010-03-11 23:21:19 +00:00
Nate Begeman 5daa235c91 Add a handful of additional useful pass manager things to the C API
llvm-svn: 98296
2010-03-11 23:06:07 +00:00
Benjamin Kramer 2fc395659c stpcpy is so similar to strcpy, it doesn't deserve a complete copy of the __strcpy_chk -> strcpy code.
llvm-svn: 98284
2010-03-11 20:45:13 +00:00
Eric Christopher 607de1de53 Lower stpcpy_chk when possible.
llvm-svn: 98274
2010-03-11 19:24:34 +00:00
Eric Christopher 103e3ef893 Fix typo.
llvm-svn: 98260
2010-03-11 17:45:38 +00:00
Eric Christopher 4b7948e09e Do some final lowering in CodeGenPrepare of _chk calls similar to
that in InstCombineCalls.

More call lowering needed.

llvm-svn: 98228
2010-03-11 02:41:03 +00:00
Eric Christopher 43dc11c525 Add strncpy libcall creator. Use it when it should be used.
llvm-svn: 98219
2010-03-11 01:25:07 +00:00
Dan Gohman 2734ebd37f Add a DominatorTree argument to isLCSSA so that it doesn't have to
compute a set of reachable blocks for itself each time it is called, which
is fairly frequently.

llvm-svn: 98179
2010-03-10 19:38:49 +00:00
Dan Gohman b7e0b87441 Fix a comment.
llvm-svn: 98122
2010-03-10 02:18:48 +00:00
Jakob Stoklund Olesen b495cad7ca Try to keep the cached inliner costs around for a bit longer for big functions.
The Caller cost info would be reset every time a callee was inlined. If the
caller has lots of calls and there is some mutual recursion going on, the
caller cost info could be calculated many times.

This patch reduces inliner runtime from 240s to 0.5s for a function with 20000
small function calls.

This is a more conservative version of r98089 that doesn't break the clang
test CodeGenCXX/temp-order.cpp. That test relies on rather extreme inlining
for constant folding.

llvm-svn: 98099
2010-03-09 23:02:17 +00:00
Jakob Stoklund Olesen 4497475905 Revert r98089, it was breaking a clang test.
llvm-svn: 98094
2010-03-09 22:43:37 +00:00
Jakob Stoklund Olesen 741dec43e4 Try to keep the cached inliner costs around for a bit longer for big functions.
The Caller cost info would be reset every time a callee was inlined. If the
caller has lots of calls and there is some mutual recursion going on, the
caller cost info could be calculated many times.

This patch reduces inliner runtime from 240s to 0.5s for a function with 20000
small function calls.

llvm-svn: 98089
2010-03-09 22:17:11 +00:00
Jakob Stoklund Olesen d62c2f554c Add inlining threshold to log output.
llvm-svn: 98024
2010-03-09 00:59:53 +00:00
Evan Cheng 4f2fd2d2be Re-commit 97860 with fix. getMallocAllocatedType may return null.
llvm-svn: 98000
2010-03-08 22:54:36 +00:00
Devang Patel 3b548aa8e2 Avoid using DIDescriptor.isNull().
This is a first step towards eliminating checks in Descriptor constructors.

llvm-svn: 97975
2010-03-08 20:52:55 +00:00
Devang Patel bc97f6b757 Revert r97947.
llvm-svn: 97963
2010-03-08 19:20:38 +00:00
Devang Patel fe28599f6f Avoid using DIDescriptor.isNull().
This is a first step towards eliminating unnecessary constructor checks in lightweight DIDescriptor wrappers.

llvm-svn: 97947
2010-03-08 18:25:48 +00:00
Eric Christopher 1810d77cb4 Let the fallthrough handle whether or not we've changed anything
before we try to optimize.

llvm-svn: 97876
2010-03-06 10:59:25 +00:00
Eric Christopher a7fb58f5f5 Migrate _chk call lowering from SimplifyLibCalls to InstCombine. Stub
out the remainder of the calls that we should lower in some way and
move the tests to the new correct directory. Fix up tests that are now
optimized more than they were before by -instcombine.

llvm-svn: 97875
2010-03-06 10:50:38 +00:00
Eric Christopher d8b43d0e59 Temporarily revert:
Log:
Transform @llvm.objectsize to integer if the argument is a result of malloc of known size.

Modified:
   llvm/trunk/lib/Transforms/InstCombine/InstCombineCalls.cpp
   llvm/trunk/test/Transforms/InstCombine/objsize.ll

It appears to be causing swb and nightly test failures.

llvm-svn: 97866
2010-03-06 03:11:35 +00:00
Evan Cheng afdc7d3aab Transform @llvm.objectsize to integer if the argument is a result of malloc of known size.
llvm-svn: 97860
2010-03-06 01:01:42 +00:00
Ted Kremenek 65bb311629 Update CMake build.
llvm-svn: 97846
2010-03-05 22:34:16 +00:00
Eric Christopher 87abfc506f Move SimplifyLibCalls's LibCall builders to a separate file so they
can be used in more places.  Add an argument for the TargetData that
most of them need. Update for the getInt8PtrTy() change.  Should be
no functionality change.

llvm-svn: 97844
2010-03-05 22:25:30 +00:00
Evan Cheng d214ed0e75 Safely turn memset_chk etc. to non-chk variant if the known object size is >= memset / memcpy / memmove size.
llvm-svn: 97828
2010-03-05 20:59:47 +00:00
Evan Cheng fffdad58ac Instcombine should turn llvm.objectsize of a alloca with static size to an integer.
llvm-svn: 97827
2010-03-05 20:47:23 +00:00
Chris Lattner f6befffbb2 fix PR6512, a case where instcombine would incorrectly merge loads
from different addr spaces.

llvm-svn: 97813
2010-03-05 18:53:28 +00:00
Chris Lattner 067459c62b Fix PR6503. This turned into a much more interesting and nasty bug. Various
parts of the cmp|cmp and cmp&cmp folding logic weren't prepared for vectors
(unrelated to the bug but noticed while in the code) and the code was 
*definitely* not safe to use by the (cast icmp)|(cast icmp) handling logic
that I added in r95855.  Fix all this up by changing the various routines
to more consistently use IRBuilder and not pass in the I which had the wrong 
type.

llvm-svn: 97801
2010-03-05 08:46:26 +00:00
Chris Lattner 343d2e48b2 simplify some functions and make them work with vector
compares, noticed by inspection.

llvm-svn: 97795
2010-03-05 07:47:57 +00:00
Chris Lattner c6c1523f59 fix a nice subtle reassociate bug which would only occur
in a very specific use pattern embodied in the carefully
reduced testcase.

llvm-svn: 97794
2010-03-05 07:18:54 +00:00
Eric Christopher 4899cbc77d Move GetStringLength and helper from SimplifyLibCalls to ValueTracking.
No functionality change.

llvm-svn: 97793
2010-03-05 06:58:57 +00:00
Evan Cheng 43d6ff7701 Add missing break for Intrinsic::objectsize case. It was falling through to the following Intrinsic::bswap code. I have no idea why it wasn't breaking stuff.
llvm-svn: 97774
2010-03-05 01:22:47 +00:00
Dan Gohman 29707de4fe Make SCEVExpander and LSR more aggressive about hoisting expressions out
of loops.

llvm-svn: 97642
2010-03-03 05:29:13 +00:00
Bill Wendling af13d82945 This test case:
long test(long x) { return (x & 123124) | 3; }

Currently compiles to:

_test:
        orl     $3, %edi
        movq    %rdi, %rax
        andq    $123127, %rax
        ret

This is because instruction and DAG combiners canonicalize

  (or (and x, C), D) -> (and (or x, D), (C | D))

However, this is only profitable if (C & D) != 0. It gets in the way of the
3-addressification because the input bits are known to be zero.

llvm-svn: 97616
2010-03-03 00:35:56 +00:00
Dan Gohman 52f5563973 Non-affine post-inc SCEV expansions have more code which must be
emitted after the increment. Make sure the insert position
reflects this. This fixes PR6453.

llvm-svn: 97537
2010-03-02 01:59:21 +00:00
Dan Gohman 6f34abd092 Floating-point add, sub, and mul are now spelled fadd, fsub, and fmul,
respectively.

llvm-svn: 97531
2010-03-02 01:11:08 +00:00
Bob Wilson 0fd415820b Don't attempt load PRE when there is no real redundancy (i.e., the load is in
a loop and is itself the only dependency).

llvm-svn: 97526
2010-03-02 00:09:29 +00:00
Bob Wilson 892432b7ef When GVN needs to split critical edges for load PRE, check all of the
predecessors before returning.  Otherwise, if multiple predecessor edges need
splitting, we only get one of them per iteration.  This makes a small but
measurable compile time improvement with -enable-full-load-pre.

llvm-svn: 97521
2010-03-01 23:37:32 +00:00
Evan Cheng 7263cf8431 MemoryDepAnalysis is not used if redundant load processing is disabled.
llvm-svn: 97512
2010-03-01 22:23:12 +00:00
Dan Gohman 39917c7c81 Add some debug output to LoopSimplify.
llvm-svn: 97458
2010-03-01 17:55:27 +00:00
Dan Gohman 8b0a419eb1 Spelling fixes.
llvm-svn: 97453
2010-03-01 17:49:51 +00:00
Dan Gohman 0c39a35457 Prune #includes.
llvm-svn: 97448
2010-03-01 17:42:17 +00:00
Bob Wilson 1136166ee9 Revert r97245 which seems to be causing performance problems.
llvm-svn: 97366
2010-02-28 05:34:05 +00:00
Chris Lattner 2af7e3dceb fix grammaro's pointed out by daniel
llvm-svn: 97313
2010-02-27 07:50:40 +00:00
Chris Lattner d887f1da73 fix PR6414, a nondeterminism issue in IPSCCP which was because
of a subtle interaction in a loop operating in densemap order.

llvm-svn: 97288
2010-02-27 00:07:42 +00:00
Chris Lattner 65d3a0a5f8 Fix rdar://7694996 a miscompile of 183.equake from my patch yesterday,
confusing the old MAT variable with the new GlobalType one.  This caused
us to promote the @disp global pointer into:

@disp.body = internal global double*** undef

instead of:

@disp.body = internal global [3 x double**] undef

llvm-svn: 97285
2010-02-26 23:42:13 +00:00
Chris Lattner da5fcdace0 remove dead code, by this point all uses of CI are gone.
llvm-svn: 97283
2010-02-26 23:35:25 +00:00
Bob Wilson ed1b0c31a7 Move the EnableFullLoadPRE flag from a separate command-line option to an
argument of createGVNPass and set it automatically for -O3.

llvm-svn: 97245
2010-02-26 19:09:47 +00:00
Bob Wilson d4655991c3 Remove unused "NoPRE" parameter in GVN and createGVNPass().
llvm-svn: 97235
2010-02-26 18:35:19 +00:00
Chris Lattner 0521c09d97 fix PR6435 another bug from the MallocInst elimination work.
llvm-svn: 97231
2010-02-26 18:23:13 +00:00
Chris Lattner 7939f795f5 rewrite OptimizeGlobalAddressOfMalloc to fix PR6422, some bugs
introduced when mallocinst was eliminated. 

llvm-svn: 97178
2010-02-25 22:33:52 +00:00
Dan Gohman a9c205cc88 Make LoopSimplify change conditional branches in loop exiting blocks
which branch on undef to branch on a boolean constant for the edge
exiting the loop. This helps ScalarEvolution compute trip counts for
loops.

Teach ScalarEvolution to recognize single-value PHIs, when safe, and
ForgetSymbolicName to forget such single-value PHI nodes as appropriate.

llvm-svn: 97126
2010-02-25 06:57:05 +00:00
Nick Lewycky 614fb949b9 Modernize comment.
llvm-svn: 97121
2010-02-25 06:39:10 +00:00
Nick Lewycky dc835c4361 Correct whitespace.
llvm-svn: 97120
2010-02-25 06:38:51 +00:00
Daniel Dunbar 693ea89214 Reapply r97010, the speculative revert failed.
llvm-svn: 97036
2010-02-24 08:48:04 +00:00
Daniel Dunbar 0a2031e5b6 Speculatively revert r97010, "Add an argument to PHITranslateValue to specify
the DominatorTree. ...", in hopes of restoring poor old PPC bootstrap.

llvm-svn: 97027
2010-02-24 06:55:22 +00:00
Dan Gohman 94732024eb Fix indentation.
llvm-svn: 97024
2010-02-24 06:46:09 +00:00
Bob Wilson 66e58ac742 Add an argument to PHITranslateValue to specify the DominatorTree. If this
argument is non-null, pass it along to PHITranslateSubExpr so that it can
prefer using existing values that dominate the PredBB, instead of just
blindly picking the first equivalent value that it finds on a uselist.
Also when the DominatorTree is specified, have PHITranslateValue filter
out any result that does not dominate the PredBB.  This is basically just
refactoring the check that used to be in GetAvailablePHITranslatedSubExpr
and also in GVN.

Despite my initial expectations, this change does not affect the results
of GVN for any testcases that I could find, but it should help compile time.
Before this change, if PHITranslateSubExpr picked a value that does not
dominate, PHITranslateWithInsertion would then insert a new value, which GVN
would later determine to be redundant and would replace.  By picking a good
value to begin with, we save GVN the extra work of inserting and then
replacing a new value.

llvm-svn: 97010
2010-02-24 01:39:00 +00:00
Dan Gohman cd4c03e886 Don't do (X != Y) ? X : Y -> X for floating-point values; it doesn't
handle NaN properly.

Do (X une Y) ? X : Y  -> X if one of X and Y is not zero.

llvm-svn: 96955
2010-02-23 17:17:57 +00:00
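An illustrative C sketch of why the fold above (r96955) needs its guards; this is not from the commit, just a worked example. With x = -0.0 and y = +0.0, "x != y" is false because signed zeros compare equal, so the select yields +0.0, while blindly folding to x would yield -0.0; for ordered compares a NaN operand causes a similar flip in which operand is picked.

#include <stdio.h>

int main(void) {
    double x = -0.0, y = 0.0;
    double selected = (x != y) ? x : y;  /* zeros compare equal, so this is y = +0.0 */
    double folded = x;                   /* what the naive fold would return: -0.0 */
    /* The difference is observable: prints "inf" vs "-inf". */
    printf("1/selected = %g, 1/folded = %g\n", 1.0 / selected, 1.0 / folded);
    return 0;
}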
Bob Wilson 923261bbe9 Update memdep when load PRE inserts a new load, and add some debug output.
I don't have a small testcase for this.

llvm-svn: 96890
2010-02-23 05:55:00 +00:00
Evan Cheng 3688b8fa68 Instcombine constant folding can normalize a gep with a negative index into an index with a large offset. When the instcombine objsize checking transformation sees these geps where the offset seemingly points out of bounds, it should just return "I don't know" rather than asserting.
llvm-svn: 96825
2010-02-22 23:34:00 +00:00
Bob Wilson 1da9041913 Erase deleted instructions from GVN's ValueTable. This fixes assertion
failures from ValueTable::verifyRemoved() when using -debug.

llvm-svn: 96805
2010-02-22 21:39:41 +00:00
Dan Gohman 8c16b38262 Remove unused variables and parameters.
llvm-svn: 96780
2010-02-22 04:11:59 +00:00
Dan Gohman 4506fcb3c2 When emitting an instruction which depends on both a post-incremented
induction variable value and a loop-variant value, don't force the
insert position to be at the post-increment position, because it may
not be dominated by the loop-variant value. This fixes a
use-before-def problem noticed on PPC.

llvm-svn: 96774
2010-02-22 03:59:54 +00:00
Dan Gohman 740909be2d This cast<Instruction> is unnecessary.
llvm-svn: 96771
2010-02-22 02:07:36 +00:00
Dan Gohman 4eebb94094 Rename getSDiv to getExactSDiv to reflect its behavior in cases where
the division would have a remainder.

llvm-svn: 96693
2010-02-19 19:35:48 +00:00
Dan Gohman 85af256779 Check for overflow when scaling up an add or an addrec for
scaled reuse.

llvm-svn: 96692
2010-02-19 19:32:49 +00:00
Dale Johannesen 1d6827adef recommit 96626, evidence that it broke things appears
to be spurious

llvm-svn: 96662
2010-02-19 07:14:22 +00:00
Dale Johannesen 1f790c28d0 Revert 96626, which causes build failure on ppc Darwin.
llvm-svn: 96653
2010-02-19 01:54:37 +00:00
Dan Gohman 2446f57503 When determining the set of interesting reuse factors, consider
strides in foreign loops. This helps locate reuse opportunities
with existing induction variables in foreign loops and reduces
the need for inserting new ones. This fixes rdar://7657764.

llvm-svn: 96629
2010-02-19 00:05:23 +00:00
Dan Gohman 60b3326435 Indvars needs to explicitly notify ScalarEvolution when it is replacing
a loop exit value, so that if a loop gets deleted, ScalarEvolution
isn't stuck holding on to dangling SCEVAddRecExprs for that loop. This
fixes PR6339.

llvm-svn: 96626
2010-02-18 23:26:33 +00:00
Dan Gohman c43d264cc0 Hoist this loop-invariant logic out of the loop.
llvm-svn: 96614
2010-02-18 21:34:02 +00:00
Dan Gohman 13ac3b2139 Delete some unneeded casts.
llvm-svn: 96429
2010-02-17 00:42:19 +00:00
Dan Gohman 5f10d6c52c Don't attempt to divide INT_MIN by -1; consider such cases to
have overflowed.

llvm-svn: 96428
2010-02-17 00:41:53 +00:00
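A short C sketch of the overflow case described above (r96428); the helper name here is made up for illustration. In two's complement, INT_MIN / -1 would be +2^31, which is not representable, so the division is reported as overflowing instead of being evaluated.

#include <limits.h>
#include <stdio.h>

/* Illustrative helper: signed division that reports overflow for the two
   cases that cannot be evaluated: division by zero and INT_MIN / -1. */
static int checked_sdiv(int a, int b, int *overflow) {
    if (b == 0 || (a == INT_MIN && b == -1)) {
        *overflow = 1;
        return 0;
    }
    *overflow = 0;
    return a / b;
}

int main(void) {
    int overflow;
    int q = checked_sdiv(INT_MIN, -1, &overflow);
    printf("overflow=%d q=%d\n", overflow, q);  /* prints overflow=1 q=0 */
    return 0;
}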
Bob Wilson aff96b2132 Rename SuccessorNumber to GetSuccessorNumber.
llvm-svn: 96387
2010-02-16 21:06:42 +00:00
Dan Gohman 6deab96c81 Refactor rewriting for PHI nodes into a separate function.
llvm-svn: 96382
2010-02-16 20:25:07 +00:00
Bob Wilson 92cdb6eec5 Split critical edges as needed for load PRE.
llvm-svn: 96378
2010-02-16 19:51:59 +00:00
Bob Wilson 3de492ec35 Refactor to share code to find the position of a basic block successor in the
terminator's list of successors.

llvm-svn: 96377
2010-02-16 19:49:17 +00:00
Dan Gohman 0849ed5e26 Fix whitespace.
llvm-svn: 96372
2010-02-16 19:42:34 +00:00
Duncan Sands 19d0b47b1f There are two ways of checking for a given type, for example isa<PointerType>(T)
and T->isPointerTy().  Convert most instances of the first form to the second form.
Requested by Chris.

llvm-svn: 96344
2010-02-16 11:11:14 +00:00
Dan Gohman 521efe68ab Split the main for-each-use loop again, this time for GenerateTruncates,
as it also peeks at which registers are being used by other uses. This
makes LSR less sensitive to use-list order.

llvm-svn: 96308
2010-02-16 01:42:53 +00:00
Chris Lattner 6fbfe5897c fix PR6305 by handling BlockAddress in a helper function
called by jump threading.

llvm-svn: 96263
2010-02-15 20:47:49 +00:00
Duncan Sands 9dff9bec31 Uniformize the names of type predicates: rather than having isFloatTy and
isInteger, we now have isFloatTy and isIntegerTy.  Requested by Chris!

llvm-svn: 96223
2010-02-15 16:12:20 +00:00
Dan Gohman e4e51a63da Fix whitespace.
llvm-svn: 96179
2010-02-14 18:51:39 +00:00
Dan Gohman e7f74bb16c Fix a comment.
llvm-svn: 96178
2010-02-14 18:51:20 +00:00
Dan Gohman bb7d52213c When complicated expressions are broken down into subexpressions
with multiplication by constants distributed through, occasionally
those subexpressions can include both x and -x. For now, if this
condition is discovered within LSR, just prune such cases away,
as they won't be profitable. This fixes a "zero allocated in a
base register" assertion failure.

llvm-svn: 96177
2010-02-14 18:50:49 +00:00
Dan Gohman 2d0f96d49a Actually, this code doesn't have to be quite so conservative in
the no-TLI case. But it should still default to declining the
transformation.

llvm-svn: 96152
2010-02-14 03:21:49 +00:00
Dan Gohman cb76a806f0 Don't attempt aggressive post-inc uses if TargetLowering is not available,
because profitability can't be sufficiently approximated.

llvm-svn: 96148
2010-02-14 02:45:21 +00:00
John McCall 0daaf13b97 Make LSR not crash if invoked without target lowering info, e.g. if invoked
from opt.

llvm-svn: 96135
2010-02-13 23:40:16 +00:00
Eric Christopher 843a4cc43c Fix a problem where we had bitcasted operands that gave us
odd offsets since the bitcasted pointer size and the offset pointer
size are going to be different types for the GEP vs base object.

llvm-svn: 96134
2010-02-13 23:38:01 +00:00
Chris Lattner b8639bc2d1 remove dead code.
llvm-svn: 96109
2010-02-13 19:07:06 +00:00
Chris Lattner 42c66b7270 Split some code out to a helper function (FindReusablePredBB)
and add a doxygen comment.

Cache the phi entry to avoid doing tons of 
PHINode::getBasicBlockIndex calls in the common case.

On my insane testcase from re2c, this speeds up CGP from
617.4s to 7.9s (78x).

llvm-svn: 96083
2010-02-13 05:35:08 +00:00
Chris Lattner 5e7f705934 Speed up codegen prepare from 3.58s to 0.488s.
llvm-svn: 96081
2010-02-13 05:01:14 +00:00
Chris Lattner 72c4dce884 PHINode::getBasicBlockIndex is O(n) in the number of inputs
to a PHI, avoid it in the common case where the BB occurs
in the same index for multiple phis.  This speeds up CGP on
an insane testcase from 8.35 to 3.58s.

llvm-svn: 96080
2010-02-13 04:24:19 +00:00
Chris Lattner b0ebb65ab0 iterate over preds using PHI information when available instead of
using pred_begin/end.  It is much faster.

llvm-svn: 96079
2010-02-13 04:15:26 +00:00
Chris Lattner 96b8826542 speed up CGP a bit by scanning predecessors through phi operands
instead of with pred_begin/end.

llvm-svn: 96078
2010-02-13 04:04:42 +00:00
Dan Gohman 5b18f039eb Fix a pruning heuristic which implicitly assumed that SmallPtrSet is
deterministically sorted.

llvm-svn: 96071
2010-02-13 02:06:02 +00:00
Jakob Stoklund Olesen 492b8b42cd Enable the inlinehint attribute in the Inliner.
Functions explicitly marked inline will get an inlining threshold slightly
more aggressive than the default for -O3. This means that -O3 builds are
mostly unaffected while -Os builds will be a bit bigger and faster.

The difference depends entirely on how many 'inline's are sprinkled on the
source.

In the CINT2006 suite, only these tests are significantly affected under -Os:

               Size   Time
471.omnetpp   +1.63% -1.85%
473.astar     +4.01% -6.02%
483.xalancbmk +4.60%  0.00%

Note that 483.xalancbmk runs too quickly to give useful timing results.

llvm-svn: 96066
2010-02-13 01:51:53 +00:00
Dan Gohman 2b75de97c0 Reapply 95979, a compile-time speedup, now that the bug it exposed is fixed.
llvm-svn: 96005
2010-02-12 19:35:25 +00:00
Dan Gohman 363f847ec6 Fix this code to avoid dereferencing an end() iterator in
offset distributions it doesn't expect.

llvm-svn: 96002
2010-02-12 19:20:37 +00:00
Chris Lattner 75879be9d8 1. modernize the constantmerge pass, using densemap/smallvector.
2. don't bother trying to merge globals in non-default sections,
   doing so is quite dubious at best anyway.
3. fix a bug reported by Arnaud de Grandmaison where we'd try to
   merge two globals in different address spaces.

llvm-svn: 95995
2010-02-12 18:17:23 +00:00
Daniel Dunbar e0b2c69d3c Revert "Reverse the order for collecting the parts of an addrec. The order", it
is breaking llvm-gcc bootstrap.

llvm-svn: 95988
2010-02-12 17:27:08 +00:00
Dan Gohman 0194f58047 Reverse the order for collecting the parts of an addrec. The order
doesn't matter, except that ScalarEvolution tends to need less time
to fold the results this way.

llvm-svn: 95979
2010-02-12 11:08:26 +00:00
Dan Gohman 45774ce0ad Reapply the new LoopStrengthReduction code, with compile time and
bug fixes, and with improved heuristics for analyzing foreign-loop
addrecs.

This change also flattens IVUsers, eliminating the stride-oriented
groupings, which makes it easier to work with.

llvm-svn: 95975
2010-02-12 10:34:29 +00:00
Eric Christopher cccdc13662 Make sure that ConstantExpr offsets also aren't off of extern
symbols.

Thanks to Duncan Sands for the testcase!

llvm-svn: 95877
2010-02-11 17:44:04 +00:00
Chris Lattner 4e8137d678 Rename ValueRequiresCast to ShouldOptimizeCast, to better reflect
what it does.  Enhance it to return false for optimizing vector
sign extensions from vector comparisons, which is the idiom used
to get a splatted vector for a vector comparison.

Doing this breaks vector-casts.ll, add some compensating 
transformations to handle the important case they cover without
depending on this canonicalization.

This fixes rdar://7434900 a serious pessimization of vector compares.

llvm-svn: 95855
2010-02-11 06:26:33 +00:00
Chris Lattner c053cbbc4d Make DSE only scan blocks that are reachable from the entry
block.  Other blocks may have pointer cycles that will crash
basicaa and other alias analyses.  In any case, there is no
point wasting cycles optimizing dead blocks.  This fixes 
rdar://7635088

llvm-svn: 95852
2010-02-11 05:11:54 +00:00
Chris Lattner d924f63692 Make jump threading honor x|undef -> true and x&undef -> false,
instead of considering x|undef -> x, which may not be true.

llvm-svn: 95850
2010-02-11 04:40:44 +00:00
Eric Christopher 531ea566a6 Add ConstantExpr handling to Intrinsic::objectsize lowering.
Update testcase accordingly now that we can optimize another
section.

llvm-svn: 95846
2010-02-11 01:48:54 +00:00
Devang Patel 03936a1880 Ignore dbg info intrinsics.
llvm-svn: 95828
2010-02-11 00:20:49 +00:00
Devang Patel 211746a69a Strip new llvm.dbg.value intrinsic.
llvm-svn: 95807
2010-02-10 21:19:56 +00:00
Dan Gohman 4a618827de Fix "the the" and similar typos.
llvm-svn: 95781
2010-02-10 16:03:48 +00:00
Eric Christopher 7b7028fd24 Move Intrinsic::objectsize lowering back to InstCombineCalls and
enable constant 0 offset lowering.

llvm-svn: 95691
2010-02-09 21:24:27 +00:00
Eric Christopher ad1aa86276 Pull these back out, they're a little too aggressive and time
consuming for a simple optimization.

llvm-svn: 95671
2010-02-09 17:29:18 +00:00
Chris Lattner f4c8d3cea9 simplify this code, duh.
llvm-svn: 95643
2010-02-09 01:14:06 +00:00
Chris Lattner 9b6a1789e5 fix PR6193, only considering sign extensions *from i1* for this
xform.

llvm-svn: 95642
2010-02-09 01:12:41 +00:00
Eric Christopher be2f0b2b7b Add file in here too.
llvm-svn: 95641
2010-02-09 01:11:03 +00:00
Eric Christopher 9f85e7eb16 Add a new pass to do llvm.objsize lowering using SCEV.
Initial skeleton and SCEVUnknown lowering implemented,
the rest should come relatively quickly.  Move testcase
to new directory.

Move pass to right before SimplifyLibCalls - which is
moved down a bit so we can take advantage of a few opts.

llvm-svn: 95628
2010-02-09 00:35:38 +00:00
Chris Lattner b22423c89a fix some problems handling large vectors reported in PR6230
llvm-svn: 95616
2010-02-08 23:56:03 +00:00
Jakob Stoklund Olesen 74bb06c0f0 Reintroduce the InlineHint function attribute.
This time it's for real! I am going to hook this up in the frontends as well.

The inliner has some experimental heuristics for dealing with the inline hint.
When given a -respect-inlinehint option, functions marked with the inline
keyword are given a threshold just above the default for -O3.

We need some experiments to determine if that is the right thing to do.

llvm-svn: 95466
2010-02-06 01:16:28 +00:00
Jakob Stoklund Olesen 5f9ead2714 Don't unroll loops containing function calls.
llvm-svn: 95454
2010-02-05 23:21:31 +00:00
Jakob Stoklund Olesen 916f48a054 Teach SimplifyCFG about magic pointer constants.
Weird code sometimes uses pointer constants other than null. This patch
teaches SimplifyCFG to build switch instructions in those cases.

Code like this:

void f(const char *x) {
  if (!x)
    puts("null");
  else if ((uintptr_t)x == 1)
    puts("one");
  else if (x == (char*)2 || x == (char*)3)
    puts("two");
  else if ((intptr_t)x == 4)
    puts("four");
  else
    puts(x);
}

Now becomes a switch:

define void @f(i8* %x) nounwind ssp {
entry:
  %magicptr23 = ptrtoint i8* %x to i64            ; <i64> [#uses=1]
  switch i64 %magicptr23, label %if.else16 [
    i64 0, label %if.then
    i64 1, label %if.then2
    i64 2, label %if.then9
    i64 3, label %if.then9
    i64 4, label %if.then14
  ]

Note that LLVM's own DenseMap uses magic pointers.

llvm-svn: 95439
2010-02-05 22:03:18 +00:00
Chris Lattner 64ffd11d49 fix logical-select to invoke filecheck right, and fix the instcombine
xform it is checking to actually pass.  There is no need to match
m_SelectCst<0, -1> since instcombine canonicalizes that into not(sext).

Add matches for sext(not(x)) in addition to not(sext(x)).

llvm-svn: 95420
2010-02-05 19:53:02 +00:00
Dan Gohman 4739e41ce9 Implement releaseMemory in CodeGenPrepare and free the BackEdges
container data. This prevents it from holding onto dangling
pointers and potentially behaving unpredictably.

llvm-svn: 95409
2010-02-05 19:24:11 +00:00
Dan Gohman 8abb67df63 Use a SmallSetVector instead of a SetVector; this code showed up as a
malloc caller in a profile.

llvm-svn: 95407
2010-02-05 19:20:15 +00:00
Eric Christopher 04371b4f12 Remove this code for now. I have a better idea and will rewrite with
that in mind.

llvm-svn: 95402
2010-02-05 19:04:06 +00:00
Bob Wilson 27dfb1e1a4 Do not reassociate expressions with i1 type. SimplifyCFG converts some
short-circuited conditions to AND/OR expressions, and those expressions
are often converted back to a short-circuited form in code gen.  The
original source order may have been optimized to take advantage of the
expected values, and if we reassociate them, we change the order and
subvert that optimization.  Radar 7497329.

llvm-svn: 95333
2010-02-04 23:32:37 +00:00
Jakob Stoklund Olesen 113fb54bcb Increase inliner thresholds by 25.
This makes the inliner about as aggressive as it was before my changes to the
inliner cost calculations. These levels give the same performance and slightly
smaller code than before.

llvm-svn: 95320
2010-02-04 18:48:20 +00:00
Eric Christopher 107a1fbf61 Temporarily revert this since it appears to have caused a build
failure.

llvm-svn: 95294
2010-02-04 06:41:27 +00:00
Eric Christopher 42fa84a880 Rework constant expr and array handling for objectsize instcombining.
Fix bugs where we would compute out of bounds as in bounds, and where
we couldn't know that the linker could override the size of an array.

Add a few new testcases, change existing testcase to use a private
global array instead of extern.

llvm-svn: 95283
2010-02-04 02:55:34 +00:00
Eric Christopher f12e18db21 If we're dealing with a zero-length array, don't lower to any
particular size; we just don't know what the length is yet.

llvm-svn: 95266
2010-02-03 23:56:07 +00:00
Bob Wilson 04365c5f72 Adjust the heuristics used to decide when SROA is likely to be profitable.
The SRThreshold value makes perfect sense for checking if an entire aggregate
should be promoted to a scalar integer, but it is not so good for splitting
an aggregate into its separate elements.  A struct may contain a large embedded
array along with some scalar fields that would benefit from being split apart
by SROA.  Even if the total aggregate size is large, it may still be good to
perform SROA.  Thus, the most important piece of this patch is simply moving
the aggregate size comparison vs. SRThreshold so that it guards only the
aggregate promotion.

We have also been checking the number of elements to decide if an aggregate
should be split up.  The limit of "SRThreshold/4" seemed rather arbitrary,
and I don't think it's very useful to derive this limit from SRThreshold
anyway.  I've collected some data showing that the current default limit of
32 (since SRThreshold defaults to 128) is a reasonable cutoff for struct
types.  One thing suggested by the data is that distinguishing between structs
and arrays might be useful.  There are (obviously) a lot more large arrays
than large structs (as measured by the number of elements and not the total
size -- a large array inside a struct still counts as a single element given
the way we do SROA right now).  Out of 8377 arrays where we successfully
performed SROA while compiling a large set of benchmarks, only 16 of them had
more than 8 elements.  And, for those 16 arrays, it's not at all clear that
SROA was actually beneficial.  So, to offset the compile time cost of
investigating more large structs for SROA, the patch lowers the limit on array
elements to 8.

This fixes Apple Radar 7563690.

llvm-svn: 95224
2010-02-03 17:23:56 +00:00
Evan Cheng 27a41d5473 Revert 94937 and move the noreturn check to codegen.
llvm-svn: 95198
2010-02-03 03:55:59 +00:00
Bob Wilson 76e8c59509 Fix some comment typos.
llvm-svn: 95170
2010-02-03 00:33:21 +00:00
Eric Christopher d86233c118 Recommit this, looks like it wasn't the cause.
llvm-svn: 95165
2010-02-03 00:21:58 +00:00
Eric Christopher e67d01a9a8 Hopefully temporarily revert this.
llvm-svn: 95154
2010-02-02 23:01:31 +00:00
Eric Christopher f9553572b7 Reformat my last patch slightly.
llvm-svn: 95147
2010-02-02 22:29:26 +00:00
Eric Christopher 4264e7e46f Re-add strcmp and known size object size checking optimization.
Passed bootstrap and nightly test run here.

llvm-svn: 95145
2010-02-02 22:10:43 +00:00
Chris Lattner 8e2c471614 don't turn (A & (C0?-1:0)) | (B & ~(C0?-1:0)) -> C0 ? A : B
for vectors.  Codegen is generating awful code or segfaulting
in various cases (e.g. PR6204).

llvm-svn: 95058
2010-02-02 02:43:51 +00:00
Chris Lattner 302240d73e fix a crash in loop unswitch on a loop invariant vector condition.
llvm-svn: 95055
2010-02-02 02:26:54 +00:00
Dan Gohman 949458d014 LangRef.html says that inttoptr and ptrtoint always use zero-extension
when the cast is extending.

llvm-svn: 95046
2010-02-02 01:44:02 +00:00
Eric Christopher 14dfc3f6df Don't need to check the last argument since it'll always be bool. We also
don't use TargetData here.

llvm-svn: 95040
2010-02-02 00:51:45 +00:00
Eric Christopher 9afa973203 More indentation/tabification fixes.
llvm-svn: 95036
2010-02-02 00:13:06 +00:00
Eric Christopher 1408234753 Untabify previous commit.
llvm-svn: 95035
2010-02-02 00:06:55 +00:00
Eric Christopher 56e4182c49 Formatting.
llvm-svn: 95027
2010-02-01 23:25:03 +00:00
Bob Wilson d517b52012 Add an option to GVN to remove all partially redundant loads. This is currently
disabled by default.  This divides the existing load PRE code into 2 phases:
first it checks that it is safe to move the load to each of the predecessors
where it is unavailable, and then if it is safe, the code is changed to move
the load.  Radar 7571861.

llvm-svn: 95007
2010-02-01 21:17:14 +00:00
Chris Lattner 9306ffa05a cleanups.
llvm-svn: 94995
2010-02-01 19:54:45 +00:00
Chris Lattner 846a52e228 fix rdar://7590304, a miscompilation of objc apps on arm. The caller
of objc message send was getting marked arm_apcscc, but the prototype
wasn't.  This is fine at runtime because objc_msgSend is implemented in
assembly.  Only turn a mismatched caller and callee into 'unreachable'
if the callee is a definition.

llvm-svn: 94986
2010-02-01 18:11:34 +00:00
Chris Lattner 2cecedf081 fix rdar://7590304, an infinite loop in instcombine. In the invoke
case, instcombine can't zap the invoke for fear of changing the CFG.
However, we have to do something to prevent the next iteration of
instcombine from inserting another store -> undef before the invoke
thereby getting into infinite iteration between dead store elim and
store insertion.

Just zap the callee to null, which will prevent the next iteration
from doing anything.

llvm-svn: 94985
2010-02-01 18:04:58 +00:00
Bob Wilson f65ba356e1 Fix pr6198 by moving the isSized() check to an outer conditional.
The testcase from pr6198 does not crash for me -- I don't know what's up with
that -- so I'm not adding it to the tests.

llvm-svn: 94984
2010-02-01 17:41:44 +00:00
Eli Friedman a2cc2875fc Simplify/generalize the xor+add->sign-extend instcombine.
llvm-svn: 94943
2010-01-31 04:29:12 +00:00
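For reference, a C sketch (not taken from r94943) of the xor+add idiom that this combine turns into a sign extension: xor with the narrow type's sign-bit mask followed by adding the negated mask sign-extends the narrow value, assuming the usual two's-complement conversion back to a signed type.

#include <assert.h>
#include <stdint.h>

/* Sign-extend the low byte of x via the xor + add-of-negated-mask idiom
   (written here as a subtraction of the mask). */
static int32_t sext8_xor_add(uint32_t x) {
    return (int32_t)(((x & 0xFFu) ^ 0x80u) - 0x80u);
}

/* The same thing written directly, for comparison. */
static int32_t sext8_direct(uint32_t x) {
    return (int32_t)(int8_t)(x & 0xFFu);
}

int main(void) {
    for (uint32_t v = 0; v < 256; ++v)
        assert(sext8_xor_add(v) == sext8_direct(v));
    return 0;
}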
Eli Friedman 37a8197b61 Add a small transform: transform -(X<<Y) to (-X<<Y) when the shift has a single
use and X is free to negate.

llvm-svn: 94941
2010-01-31 02:30:23 +00:00
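A quick C check (mine, not part of r94941) that the rewrite above is sound under wrapping arithmetic: negation distributes over multiplication modulo 2^n, so -(X<<Y) and (-X)<<Y produce the same bits; unsigned types are used to keep the example free of signed-overflow issues.

#include <assert.h>
#include <stdint.h>

static uint32_t neg_of_shift(uint32_t x, unsigned y) { return 0u - (x << y); }
static uint32_t shift_of_neg(uint32_t x, unsigned y) { return (0u - x) << y; }

int main(void) {
    /* Spot checks; any x and any shift amount below 32 agree. */
    assert(neg_of_shift(123u, 5) == shift_of_neg(123u, 5));
    assert(neg_of_shift(0xDEADBEEFu, 17) == shift_of_neg(0xDEADBEEFu, 17));
    assert(neg_of_shift(0x80000000u, 1) == shift_of_neg(0x80000000u, 1));
    return 0;
}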
Evan Cheng d86d3fe0c3 Do not mark no-return calls tail calls. It'll screw up special calls like longjmp and it doesn't make much sense for performance reasons. If my logic is faulty, please let me know.
llvm-svn: 94937
2010-01-31 00:59:31 +00:00
Bob Wilson 56600a15ad Check alignment of loads when deciding whether it is safe to execute them
unconditionally.  Besides checking the offset, also check that the underlying
object is aligned as much as the load itself.

llvm-svn: 94875
2010-01-30 04:42:39 +00:00
Bob Wilson 4b71b6c179 Use more specific types to avoid casts. No functionality change.
llvm-svn: 94863
2010-01-30 00:41:10 +00:00
Jakob Stoklund Olesen e27dc727e2 Keep iterating over all uses when meeting a phi node in AllUsesOfValueWillTrapIfNull().
This bug was exposed by my inliner cost changes in r94615, and caused failures
of lencod on most architectures when building with LTO.

This patch fixes lencod and 464.h264ref on x86-64 (and likely others).

llvm-svn: 94858
2010-01-29 23:54:14 +00:00
Bob Wilson 1b8453067b Preserve load alignment in instcombine transformations. I've been unable to
create a testcase where this matters.  The select+load transformation only
occurs when isSafeToLoadUnconditionally is true, and in those situations,
instcombine also changes the underlying objects to be aligned.  This seems
like a good idea regardless, and I've verified that it doesn't pessimize
the subsequent realignment.

llvm-svn: 94850
2010-01-29 22:39:21 +00:00
Eric Christopher 5a0e174863 Revert my last couple of patches. They appear to have broken bison.
llvm-svn: 94841
2010-01-29 21:16:24 +00:00
Bob Wilson 34e10c2218 Use uint64_t instead of unsigned for offsets and sizes.
llvm-svn: 94835
2010-01-29 20:34:28 +00:00
Bob Wilson 7c42b9d51e Improve isSafeToLoadUnconditionally to recognize that GEPs with constant
indices are safe if the result is known to be within the bounds of the
underlying object.

llvm-svn: 94829
2010-01-29 19:19:08 +00:00
Duncan Sands c8a3e56870 Having RHSKnownZero and RHSKnownOne be alternative names for KnownZero and KnownOne
(via APInt &RHSKnownZero = KnownZero, etc) seems dangerous and confusing to me: it
is easy not to notice this, and then wonder why KnownZero/RHSKnownZero changed
underneath you when you modified RHSKnownZero/KnownZero etc.  So get rid of this.
No intended functionality change (tested with "make check" + llvm-gcc bootstrap).

llvm-svn: 94802
2010-01-29 06:18:46 +00:00
Eric Christopher 9b3c02b7da Make strcpy_chk lower to strcpy if we have a safe size.
llvm-svn: 94783
2010-01-29 01:37:11 +00:00
Eric Christopher 997f7ca8c5 Add constant support to object size handling and remove default
lowering. Either we'll figure it out here, or we won't and it will be lowered by
SelectionDAGBuild.

Add test.

llvm-svn: 94775
2010-01-29 01:09:57 +00:00
Bill Wendling 48816a0b3f Generic reformatting and comment fixing. No functionality change.
llvm-svn: 94771
2010-01-29 00:52:43 +00:00
Bill Wendling 8277838cf8 Add newline to debugging output, and fix some grammar-os in comment.
llvm-svn: 94765
2010-01-29 00:27:39 +00:00
Victor Hernandez 006b53f199 mem2reg erases the dbg.declare intrinsics that it converts to dbg.val intrinsics
llvm-svn: 94763
2010-01-29 00:01:35 +00:00
Duncan Sands 3a48b87c54 Fix PR6165. The bug was that LHSKnownZero was being and'd with DemandedMask
when it should have been and'd with LowBits.  Fix that and while there beef
up the logic in the case of a negative LHS.

llvm-svn: 94745
2010-01-28 17:22:42 +00:00
Bob Wilson 7577e948e4 Avoid creating redundant PHIs in SSAUpdater::GetValueInMiddleOfBlock.
This was already being done in SSAUpdater::GetValueAtEndOfBlock so I've
just changed SSAUpdater to check for existing PHIs in both places.

llvm-svn: 94690
2010-01-27 22:01:02 +00:00
Jeffrey Yasskin 091217be6f Kill ModuleProvider and ghost linkage by inverting the relationship between
Modules and ModuleProviders. Because the "ModuleProvider" simply materializes
GlobalValues now, and doesn't provide modules, it's renamed to
"GVMaterializer". Code that used to need a ModuleProvider to materialize
Functions can now materialize the Functions directly. Functions no longer use a
magic linkage to record that they're materializable; they simply ask the
GVMaterializer.

Because the C ABI must never change, we can't remove LLVMModuleProviderRef or
the functions that refer to it. Instead, because Module now exposes the same
functionality ModuleProvider used to, we store a Module* in any
LLVMModuleProviderRef and translate in the wrapper methods.  The bindings to
other languages still use the ModuleProvider concept.  It would probably be
worth some time to update them to follow the C++ more closely, but I don't
intend to do it.

Fixes http://llvm.org/PR5737 and http://llvm.org/PR5735.

llvm-svn: 94686
2010-01-27 20:34:15 +00:00
Benjamin Kramer 1266d46d32 Don't bother with sprintf, just pass the Twine through.
llvm-svn: 94684
2010-01-27 19:58:47 +00:00
Benjamin Kramer 40582a891c Use the less expensive getName function instead of getNameStr.
llvm-svn: 94683
2010-01-27 19:46:52 +00:00
Chris Lattner 65f4733b77 some cleanups.
llvm-svn: 94649
2010-01-27 02:12:20 +00:00
Chris Lattner 711e701f1c no need to check for null
llvm-svn: 94648
2010-01-27 02:04:20 +00:00
Victor Hernandez 477d9274bb When converting dbg.declare to dbg.value, attach promoted store's debug metadata to dbg.value
llvm-svn: 94634
2010-01-27 00:44:36 +00:00
Victor Hernandez 2b17e2a452 Avoid extra calls to MD->getNumOperands()
llvm-svn: 94618
2010-01-26 23:29:09 +00:00
Victor Hernandez 9ecd2f039f Switch AllocaDbgDeclares to SmallVector and don't leak DIFactory
llvm-svn: 94567
2010-01-26 18:57:53 +00:00
Victor Hernandez cd94410152 In mem2reg, for all alloca/stores that get promoted where the alloca has an associated llvm.dbg.declare intrinsic, insert an llvm.dbg.var intrinsic before each store.
llvm-svn: 94493
2010-01-26 02:42:15 +00:00
Bob Wilson 70c8fe5e4e Remove check for an impossible condition: the condition of the while loop has
already checked that TmpBB->getSinglePredecessor() is non-null.

llvm-svn: 94451
2010-01-25 21:28:05 +00:00
Bob Wilson fc060e4337 Change Value::getUnderlyingObject to have the MaxLookup value specified as a
parameter with a default value, instead of just hardcoding it in the
implementation.  The limit of MaxLookup = 6 was introduced in r69151 to fix
a performance problem with O(n^2) behavior in instcombine, but the scalarrepl
pass is relying on getUnderlyingObject to go all the way back to an AllocaInst.
Making the limit part of the method signature makes it clear that by default
the result is limited and should help avoid similar problems in the future.
This fixes pr6126.

llvm-svn: 94433
2010-01-25 18:26:54 +00:00
Victor Hernandez 8a588e1444 Revert r94260 until findDbgDeclare() is made more efficient
llvm-svn: 94432
2010-01-25 17:52:13 +00:00
Chris Lattner 823aed16f9 make -fno-rtti the default unless a directory builds with REQUIRES_RTTI.
llvm-svn: 94378
2010-01-24 20:43:08 +00:00
Chris Lattner 1b35bbe813 change the canonical form of "cond ? -1 : 0" to be
"sext cond" instead of a select.  This simplifies some instcombine
code, matches the policy for zext (cond ? 1 : 0 -> zext), and allows
us to generate better code for a testcase on ppc.

llvm-svn: 94339
2010-01-24 00:09:49 +00:00
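As a hedged sketch of the two canonical forms (hypothetical function names, current IR syntax), sext of an i1 condition produces the same all-ones/all-zeros value the select did:

  define i32 @old_form(i1 %cond) {
    %v = select i1 %cond, i32 -1, i32 0    ; old canonical form
    ret i32 %v
  }

  define i32 @new_form(i1 %cond) {
    %v = sext i1 %cond to i32              ; new canonical form: -1 when %cond is true, 0 otherwise
    ret i32 %v
  }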
Chris Lattner e112ff64c5 fix a potential overflow issue Eli pointed out.
llvm-svn: 94336
2010-01-23 23:31:46 +00:00
Nick Lewycky 7e7ed8b9e5 Speculatively revert r94322 to see if it fixes darwin selfhost buildbot.
llvm-svn: 94331
2010-01-23 20:32:12 +00:00
Chris Lattner 29b15c5cfd third bug from PR6119: the xor duplication extension allows
for arbitrary terminators in predecessors, so don't assume
the terminator is a conditional or unconditional branch.  The testcase shows
an example where this can happen with switches.

llvm-svn: 94323
2010-01-23 19:21:31 +00:00
Nick Lewycky 32966aed9d Teach DAE that even though it can't modify the function signature of an
externally visible function, it can still find all callers of it and replace
the call operands passed for a dead argument with undef.

llvm-svn: 94322
2010-01-23 19:19:34 +00:00
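A minimal sketch of the effect in LLVM IR (hypothetical functions, current syntax): the externally visible signature of @callee stays the same, but the operand feeding its dead parameter is rewritten to undef at the call site.

  declare void @use(i32)

  define void @callee(i32 %used, i32 %dead) {
    call void @use(i32 %used)
    ret void                                ; %dead is never referenced
  }

  define void @caller(i32 %x, i32 %y) {
    call void @callee(i32 %x, i32 undef)    ; %y was passed here before DAE ran
    ret void
  }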
Chris Lattner ba2d0b89ff add an early out to ProcessBranchOnXOR to speed it up,
and handle the case where we can infer an input to the xor
from all inputs that agree, instead of going into an
infinite loop.  Another part of PR6119.

llvm-svn: 94321
2010-01-23 19:16:25 +00:00
Chris Lattner de5ab4860f fix a crash in jump threading, PR6119
llvm-svn: 94319
2010-01-23 18:56:07 +00:00
Chris Lattner 249da5cb73 implement a simple instcombine xform that has been in the
readme forever.

llvm-svn: 94318
2010-01-23 18:49:30 +00:00
Eric Christopher ba7cd4c393 Reapply 94059 while fixing the calling convention setup
for strcpy.

llvm-svn: 94287
2010-01-23 05:29:06 +00:00
Victor Hernandez 5006e43faf In mem2reg, for all alloca/stores that get promoted where the alloca has an associated llvm.dbg.declare intrinsic, insert an llvm.dbg.var intrinsic before each store.
llvm-svn: 94260
2010-01-23 00:17:34 +00:00
Benjamin Kramer 3838dfbaea Another strncmp -> StringRef.startswith simplification.
llvm-svn: 94203
2010-01-22 20:00:21 +00:00
Bob Wilson 6c0c8d41b4 Revert 94059. It is breaking the MultiSource/Benchmarks/Prolangs-C/bison
test on ARM.

llvm-svn: 94198
2010-01-22 19:16:40 +00:00
Victor Hernandez 5f8c8c034a Keep ignoring pointer-to-pointer bitcasts
llvm-svn: 94194
2010-01-22 19:05:05 +00:00
Chris Lattner 7ba0661f27 Stop building RTTI information for *most* llvm libraries. Notable
missing ones are libsupport, libsystem and libvmcore.  libvmcore is
currently blocked on bugpoint, which uses EH.  Once it stops using
EH, we can switch it off.

This #if 0's out 3 unit tests, because gtest requires RTTI information.
Suggestions welcome on how to fix this.

llvm-svn: 94164
2010-01-22 06:49:46 +00:00
Dan Gohman 045f81981a Revert LoopStrengthReduce.cpp to pre-r94061 for now.
llvm-svn: 94123
2010-01-22 00:46:49 +00:00
Victor Hernandez 7b151e9f06 No need to look through bitcasts for DbgInfoIntrinsic
llvm-svn: 94114
2010-01-21 23:09:12 +00:00
Victor Hernandez ae4d949721 DbgInfoIntrinsics no longer appear in an instruction's use list
llvm-svn: 94113
2010-01-21 23:08:36 +00:00
Victor Hernandez 5f5abd598c No need to look through bitcasts for DbgInfoIntrinsic
llvm-svn: 94112
2010-01-21 23:07:15 +00:00
Victor Hernandez 1df65186d1 DbgInfoIntrinsics no longer appear in an instruction's use list, so clean up code that looked for them when iterating over uses, and remove OnlyUsedByDbgInfoIntrinsics().
llvm-svn: 94111
2010-01-21 23:05:53 +00:00
Dan Gohman b1ee154b6b When inserting expressions for post-increment users which contain
loop-variant components, adds must be inserted after the increment.
Keep track of the increment position for this case, and insert
these adds in the correct location.

llvm-svn: 94110
2010-01-21 23:01:22 +00:00
Dan Gohman cb8d577eb2 Include IVUsers information in LSR's debug output.
llvm-svn: 94108
2010-01-21 22:46:32 +00:00
Dan Gohman 29916e023d Prune the search for candidate formulae if the number of register
operands exceeds the number of registers used in the initial
solution, as that wouldn't lead to a profitable solution anyway.

llvm-svn: 94107
2010-01-21 22:42:49 +00:00
Dan Gohman c903499ff8 Add a comment.
llvm-svn: 94104
2010-01-21 21:31:09 +00:00
Chris Lattner 24716b6c63 It turns out that this #include is needed because otherwise
ValueMapper.cpp ends up calling an out-of-line
__ZNK4llvm12PATypeHolder3getEv, which is a template, and llvm-config
arbitrarily decides to use the one in libipo.  This sucks, but
keeping the #include is a reasonable workaround.

llvm-svn: 94103
2010-01-21 21:29:25 +00:00
Chris Lattner 9889b4be04 unbreak the build; apparently without this, transformutils starts depending on libipa?
llvm-svn: 94102
2010-01-21 21:20:51 +00:00
Chris Lattner e39837d5ee tidy up
llvm-svn: 94101
2010-01-21 21:05:54 +00:00
Victor Hernandez a9ad174b49 Don't need to include IntrinsicInst.h any more
llvm-svn: 94092
2010-01-21 19:33:59 +00:00
Victor Hernandez d089f4e10b No need to map NULL operands of metadata
llvm-svn: 94091
2010-01-21 19:26:20 +00:00
Dan Gohman 51ad99d2c5 Re-implement the main strength-reduction portion of LoopStrengthReduction.
This new version is much more aggressive about doing "full" reduction in
cases where it reduces register pressure, and also more aggressive about
rewriting induction variables to count down (or up) to zero when doing so
reduces register pressure.

It currently uses fairly simplistic algorithms for finding reuse
opportunities, but it introduces a new framework that allows it to combine
multiple strategies at once to form hybrid solutions, instead of doing
all full-reduction or all base+index.

llvm-svn: 94061
2010-01-21 02:09:26 +00:00
Eric Christopher fa863258d0 Add strcpy_chk -> strcpy support for "don't know" object size
answers.  This will update as object size checking gets better information.

llvm-svn: 94059
2010-01-21 01:04:38 +00:00
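Roughly, the simplification fires when the object-size operand is the "don't know" sentinel (all-ones); a hedged LLVM IR sketch with a hypothetical function name, in current syntax:

  declare ptr @__strcpy_chk(ptr, ptr, i64)

  define ptr @copy(ptr %dst, ptr %src) {
    ; a size operand of -1 means the destination object size is unknown,
    ; so the checked call can be replaced with a plain call to strcpy
    %r = call ptr @__strcpy_chk(ptr %dst, ptr %src, i64 -1)
    ret ptr %r
  }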
Chris Lattner 3c5bf71353 simplify this code.
llvm-svn: 94048
2010-01-20 23:30:28 +00:00
Jakob Stoklund Olesen 8a19d3c96c Move per-function inline threshold calculation to a method.
No functional change except the forgotten test for
InlineLimit.getNumOccurrences() == 0 in the CurrentThreshold2 calculation.

llvm-svn: 94007
2010-01-20 17:51:28 +00:00
Victor Hernandez f2462407ee Switch Elts from vector to SmallVector
llvm-svn: 93989
2010-01-20 06:56:16 +00:00
Victor Hernandez 5fa88d4e30 Map operands of all function-local metadata, not just metadata passed to llvm.dbg.declare intrinsics
llvm-svn: 93979
2010-01-20 05:49:59 +00:00
Dan Gohman ca19445d08 When doing address-mode sinking, expand the base register first, rather
than the scaled register. This makes it more likely that subsequent
AddrModeMatcher queries will match the new address the same way as the
old, instead of accidentally matching what had been the base register
as the new scaled register, and then failing to match the scaled register.
This fixes some problems with address-mode sinking multiple muls into a
block, which will be a lot more common with some upcoming
LoopStrengthReduction changes.

llvm-svn: 93935
2010-01-19 22:45:06 +00:00
Chris Lattner 18f49ce2d3 optimize ~(~X >>s Y) --> (X >>s Y), patch by Edmund Grimley
Evans!

llvm-svn: 93884
2010-01-19 18:16:19 +00:00
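In IR terms (hypothetical function name, current syntax), the fold works because complementing before and after an arithmetic shift right cancels out: ashr fills with the sign bit, and the outer xor flips it back.

  define i32 @not_ashr_not(i32 %x, i32 %y) {
    %notx = xor i32 %x, -1         ; ~X
    %shr  = ashr i32 %notx, %y     ; ~X >>s Y
    %res  = xor i32 %shr, -1       ; ~(~X >>s Y)
    ret i32 %res                   ; simplifies to: ashr i32 %x, %y
  }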
Bob Wilson 58d59fe394 Fix a crash in scalarrepl for memcpy/memmove where the source and destination
are the same.  I had already fixed a similar problem where the source and
destination were different bitcasts derived from the same alloca, but the
previous fix still did not handle the case where both operands are exactly
the same value.  Radar 7552893.

llvm-svn: 93848
2010-01-19 04:32:48 +00:00
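The degenerate input looks roughly like this (hypothetical function, shown with the current memcpy intrinsic signature, which postdates this commit): both pointer operands are literally the same SSA value.

  declare void @llvm.memcpy.p0.p0.i64(ptr, ptr, i64, i1)

  define void @copy_to_self() {
    %a = alloca [16 x i8]
    call void @llvm.memcpy.p0.p0.i64(ptr %a, ptr %a, i64 16, i1 false)
    ret void
  }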
Eric Christopher 84bd316bd6 Fix comment.
llvm-svn: 93831
2010-01-19 01:20:15 +00:00
Chris Lattner 43f2fa6201 my instcombine transformations to make extension elimination more
aggressive changed the canonical form from sext(trunc(x)) to ashr(lshr(x));
make sure to transform a couple more things into that canonical form,
and catch a case where we missed turning zext/shl/ashr into a single sext.

llvm-svn: 93787
2010-01-18 22:19:16 +00:00
Devang Patel 696cb8d410 While mapping the llvm.dbg.declare intrinsic, manually map its operand, if possible,
because it points to an alloca instruction through metadata.

llvm-svn: 93757
2010-01-18 19:52:14 +00:00
Owen Anderson cdea3572fa Convert some of the dynamic opcode lookups into static ones.
llvm-svn: 93693
2010-01-17 19:33:27 +00:00
Owen Anderson fa1edea9ce Fix comment.
llvm-svn: 93679
2010-01-17 06:49:03 +00:00
Bob Wilson e0da4b6cff Fix a comment typo.
llvm-svn: 93560
2010-01-15 21:55:02 +00:00
Bill Wendling ad7a5b07a7 When the visitSub method was split into visitSub and visitFSub, this xform was
added to the FSub version. However, the original version of this xform guarded
against doing this for floating point (!Op0->getType()->isFPOrFPVector()).

This is causing LLVM to perform incorrect xforms for code like:

void func(double *rhi, double *rlo, double xh, double xl, double yh, double yl){
  double mh, ml;
  double c = 134217729.0;
  double up, u1, u2, vp, v1, v2;
        
  up = xh*c;
  u1 = (xh - up) + up;
  u2 = xh - u1;
        
  vp = yh*c;
  v1 = (yh - vp) + vp;
  v2 = yh - v1;
        
  mh = xh*yh;
  ml = (((u1*v1 - mh) + (u1*v2)) + (u2*v1)) + (u2*v2);
  ml += xh*yl + xl*yh;
        
  *rhi = mh + ml;
  *rlo = (mh - (*rhi)) + ml;
}

The last line was optimized away, but *rlo is intended to be the difference
between the infinitely precise result of mh + ml and that result after it has
been rounded to double precision.

llvm-svn: 93369
2010-01-13 23:23:17 +00:00
Chris Lattner 573da8ac90 1) Use the new SimplifyInstructionsInBlock routine instead of the copy
in JT.

2) When cloning blocks for PHI or xor conditions, use
instsimplify to simplify the code as we go.  This allows us to 
squish common cases early in JT, which opens up opportunities for
subsequent iterations, and allows it to completely simplify the
testcase.

llvm-svn: 93253
2010-01-12 20:41:47 +00:00
Chris Lattner 7c743f2c74 add a helper function.
llvm-svn: 93251
2010-01-12 19:40:54 +00:00
Chris Lattner af7855d571 tidy up
llvm-svn: 93222
2010-01-12 02:07:50 +00:00
Chris Lattner eb73bdb2e1 Teach jump threading to duplicate small blocks when the branch
condition is a xor with a phi node.  This eliminates nonsense
like this from 176.gcc in several places:

 LBB166_84:
        testl   %eax, %eax
-       setne   %al
-       xorb    %cl, %al
-       notb    %al
-       testb   $1, %al
-       je      LBB166_85
+       je      LBB166_69
+       jmp     LBB166_85

This is rdar://7391699

llvm-svn: 93221
2010-01-12 02:07:17 +00:00
Chris Lattner 6a19ed0b86 some cleanup, and make it obvious that ProcessJumpOnPHI only works
on branches by renaming it and checking for a branch at the call site.

llvm-svn: 93208
2010-01-11 23:41:09 +00:00
Chris Lattner d1a3efedd8 reenable the piece that turns trunc(zext(x)) -> x even if zext has multiple uses;
codegen has no apparent problem with the trunc version of this, because it turns
into a simple subreg idiom.

llvm-svn: 93202
2010-01-11 22:49:40 +00:00
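The fold in question, as a minimal IR sketch (hypothetical function name): when the trunc goes back to the original type, the zext/trunc round trip is a no-op even though the zext has another use.

  define i16 @roundtrip(i16 %x, ptr %sink) {
    %z = zext i16 %x to i32
    store i32 %z, ptr %sink        ; the zext has a second use
    %t = trunc i32 %z to i16
    ret i16 %t                     ; %t still folds to %x
  }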
Chris Lattner a6b1356cf9 Disable folding sext(trunc(x)) -> x (and other similar cast/cast cases) when the
trunc has multiple uses.  Codegen is not able to coalesce the subreg case 
correctly and so this leads to higher register pressure and spilling (see PR5997).

This speeds up 256.bzip2 from 8.60 -> 8.04s on my machine, ~7%.

llvm-svn: 93200
2010-01-11 22:45:25 +00:00
Chris Lattner 9518869423 add one more bitfield optimization, allowing clang to generate
good code on PR4216:

_test_bitfield:                                             ## @test_bitfield
	orl	$32962, %edi
	movl	$4294941946, %eax
	andq	%rdi, %rax
	ret

instead of:

_test_bitfield:
        movl    $4294941696, %ecx
        movl    %edi, %eax
        orl     $194, %edi
        orl     $32768, %eax
        andq    $250, %rdi
        andq    %rax, %rcx
        movq    %rdi, %rax
        orq     %rcx, %rax
        ret

Evan is looking into the remaining andq+imm -> andl optimization.

llvm-svn: 93147
2010-01-11 06:55:24 +00:00
Chris Lattner 0a85420409 Extend CanEvaluateZExtd to handle and/or/xor more aggressively in the
BitsToClear case.  This allows it to promote expressions which have an
and/or/xor after the lshr, promoting cases like test2 (from PR4216) 
and test3 (a random example extracted from a SPEC benchmark).

clang now compiles the code in PR4216 into:

_test_bitfield:                                             ## @test_bitfield
	movl	%edi, %eax
	orl	$194, %eax
	movl	$4294902010, %ecx
	andq	%rax, %rcx
	orl	$32768, %edi
	andq	$39936, %rdi
	movq	%rdi, %rax
	orq	%rcx, %rax
	ret

instead of:

_test_bitfield:                                             ## @test_bitfield
	movl	%edi, %eax
	orl	$194, %eax
	movl	$4294902010, %ecx
	andq	%rax, %rcx
	shrl	$8, %edi
	orl	$128, %edi
	shlq	$8, %rdi
	andq	$39936, %rdi
	movq	%rdi, %rax
	orq	%rcx, %rax
	ret

which is still not great, but is progress.

llvm-svn: 93145
2010-01-11 04:05:13 +00:00
Chris Lattner 12bd8992b3 Remove the dead TD argument to CanEvaluateZExtd, and add a
new BitsToClear result which allows us to start promoting
expressions that end with a lshr-by-constant.  This is
conservatively correct and better than what we had before
(see testcases) but still needs to be extended further.

llvm-svn: 93144
2010-01-11 03:32:00 +00:00
Chris Lattner 172630abd2 improve comments, remove dead TD argument to CanEvaluateSExtd.
llvm-svn: 93143
2010-01-11 02:43:35 +00:00
Chris Lattner 7dd540ee24 teach sext optimization to handle truncs from types that are not
the dest of the sext.

llvm-svn: 93128
2010-01-10 20:30:41 +00:00
Chris Lattner 39d2daa94c teach zext optimization how to deal with truncs that don't come from
the zext dest type.  This allows us to handle test52/53 in cast.ll,
and allows llvm-gcc to generate much better code for PR4216 in -m64
mode:

_test_bitfield:                                             ## @test_bitfield
	orl	$32962, %edi
	movl	%edi, %eax
	andl	$-25350, %eax
	ret

This also fixes a bug handling vector extends, ensuring that the
mask produced is a vector constant, not an integer constant.

llvm-svn: 93127
2010-01-10 20:25:54 +00:00
Chris Lattner 1a05fddcdc simplify CanEvaluateSExtd to return a bool now that we have a
simpler profitability predicate.

llvm-svn: 93111
2010-01-10 07:57:20 +00:00
Chris Lattner d7816780e2 the NumCastsRemoved argument to CanEvaluateSExtd is dead, remove it.
llvm-svn: 93110
2010-01-10 07:42:21 +00:00
Chris Lattner 2fff10c424 now that the cost model has changed, we can always consider
elimination of a sign extend to be a win, which simplifies 
the client of CanEvaluateSExtd, and allows us to eliminate
more casts (examples taken from real code).

llvm-svn: 93109
2010-01-10 07:40:50 +00:00
Chris Lattner d8509424a4 change the preferred canonical form for a sign extension to be
lshr+ashr instead of trunc+sext.  We want to avoid type 
conversions whenever possible; it is easier to codegen expressions
without truncates and extensions.

llvm-svn: 93107
2010-01-10 07:08:30 +00:00
Chris Lattner 2b459fe7e1 fix indentation of switch statements, no functionality change.
llvm-svn: 93106
2010-01-10 06:59:55 +00:00
Chris Lattner 127bbc715e fix pasto that broke bootstrap.
llvm-svn: 93105
2010-01-10 06:50:04 +00:00
Chris Lattner b7be7cc486 simplify CanEvaluateZExtd now that we don't care about the number of
bits known clear in the result and don't care about the # casts 
eliminated.  TD is also dead but keeping it for now.

llvm-svn: 93098
2010-01-10 02:50:04 +00:00
Chris Lattner 49d2c9764d two changes:
1) don't try to optimize a sext or zext that is only used by a trunc, let
   the trunc get optimized first.  This avoids some pointless effort in
   some common cases since instcombine scans down a block in the first pass.
2) Change the cost model for zext elimination to consider an 'and' cheaper
   than a zext.  This allows us to do it more aggressively, and for the next
   patch to simplify the code quite a bit.

llvm-svn: 93097
2010-01-10 02:39:31 +00:00
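The cost-model change in (2) treats a residual 'and' as cheaper than keeping the zext; the classic instance, as a hedged IR sketch (hypothetical function name):

  define i32 @keep_low_byte(i32 %x) {
    %t = trunc i32 %x to i8
    %z = zext i8 %t to i32
    ret i32 %z                     ; becomes: and i32 %x, 255
  }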
Chris Lattner f0af17dab3 enhance CanEvaluateZExtd to handle shift left and sext, allowing
more expressions to be promoted and casts eliminated.

llvm-svn: 93096
2010-01-10 02:22:12 +00:00
Chris Lattner 7723e2b10f remove an xform subsumed by EvaluateInDifferentType.
llvm-svn: 93095
2010-01-10 01:35:55 +00:00
Julien Lerouge 321098ebec Fix nondeterministic behavior.
llvm-svn: 93093
2010-01-10 01:07:22 +00:00
Chris Lattner c95a7a21b7 clean up this xform by using m_Trunc.
llvm-svn: 93092
2010-01-10 01:04:31 +00:00
Chris Lattner 883550afe8 inline and remove the rest of commonIntCastTransforms.
llvm-svn: 93091
2010-01-10 01:00:46 +00:00
Chris Lattner c3aca38468 Inline the expression type promotion/demotion stuff out of
commonIntCastTransforms into the callers, eliminating a switch,
and allowing the static predicate methods to be moved down to
live next to the corresponding function.  No functionality 
change.

llvm-svn: 93089
2010-01-10 00:58:42 +00:00
Chris Lattner ab7087ad66 only factor from expressions whose uses are empty and whose
base is the right expression type.  This fixes PR5981.

llvm-svn: 93045
2010-01-09 06:01:36 +00:00
Julien Lerouge f50a3f19da Fix nondeterministic behavior.
llvm-svn: 93038
2010-01-09 01:06:49 +00:00
Eric Christopher 4a1d7e1506 Remove unnecessary dyn_cast and add a comment. Part of a WIP.
llvm-svn: 93026
2010-01-08 21:37:11 +00:00
Chris Lattner 9242ae047c Implement a theoretical fixme.
llvm-svn: 93024
2010-01-08 19:28:47 +00:00
Chris Lattner 10840e9e13 rename CanEvaluateInDifferentType -> CanEvaluateTruncated and
simplify it now that it is only used for truncates.

llvm-svn: 93021
2010-01-08 19:19:23 +00:00
Chris Lattner a1e223ea10 teach instcombine to delete sign extending shift pairs (sra(shl X, C), C) when
the input is already sign extended.

llvm-svn: 93019
2010-01-08 19:04:21 +00:00
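A minimal IR sketch of the case (hypothetical function name): the shl/ashr pair re-sign-extends the low 16 bits, but the input already came from a sext, so the pair is redundant.

  define i32 @already_extended(i16 %x) {
    %ext = sext i16 %x to i32      ; bits 16..31 already equal bit 15
    %shl = shl i32 %ext, 16
    %sra = ashr i32 %shl, 16
    ret i32 %sra                   ; folds to: ret i32 %ext
  }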
Duncan Sands 4a8b15dc74 Suppress an unused variable warning when assertions are off;
remove some trailing whitespace while there.

llvm-svn: 93008
2010-01-08 17:51:48 +00:00
Chris Lattner 8c92b57df9 tidy up some stuff Duncan pointed out.
llvm-svn: 93007
2010-01-08 17:48:19 +00:00
Chris Lattner 35d3b9dcd0 teach ComputeNumSignBits to look through PHI nodes.
llvm-svn: 92964
2010-01-07 23:44:37 +00:00