Commit Graph

16690 Commits

Sam Parker 214f7bf5cc Enable simplify libcalls for ARM PCS
Teach SimplifyLibcalls that it can treat functions annotated with
apcs, aapcs or aapcs_vfp like normal C functions if they only take
and return integer or pointer values, and the target is not iOS.
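
As a rough illustration (not part of the original commit message; the function, string, and constant result are made up), a libcall declared with an AAPCS prototype is now eligible for the usual folding, assuming a 32-bit non-iOS ARM target where size_t is i32:

    @hello = private constant [6 x i8] c"hello\00"

    declare arm_aapcscc i32 @strlen(i8*)

    define arm_aapcscc i32 @caller() {
      ; SimplifyLibCalls can now treat this aapcs-annotated strlen like the
      ; plain C-convention libcall and fold the call to "ret i32 5".
      %s = getelementptr inbounds [6 x i8], [6 x i8]* @hello, i32 0, i32 0
      %len = call arm_aapcscc i32 @strlen(i8* %s)
      ret i32 %len
    }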

Differential Revision: https://reviews.llvm.org/D24453

llvm-svn: 281322
2016-09-13 12:10:14 +00:00
Peter Collingbourne d4135bbc30 DebugInfo: New metadata representation for global variables.
This patch reverses the edge from DIGlobalVariable to GlobalVariable.
This will allow us to more easily preserve debug info metadata when
manipulating global variables.

Fixes PR30362. A program for upgrading test cases is attached to that
bug.

Differential Revision: http://reviews.llvm.org/D20147

llvm-svn: 281284
2016-09-13 01:12:59 +00:00
Sanjay Patel f5887f1fbd [InstCombine] use m_APInt to allow icmp X, C folds for splat constant vectors
isSignBitCheck could be changed to take a pointer param to avoid the 'UnusedBit' ugliness.

llvm-svn: 281231
2016-09-12 16:25:41 +00:00
David Majnemer c83044d9bb [FunctionAttrs] Don't try to infer returned if it is already on an argument
Trying to infer the 'returned' attribute if an argument is already
'returned' can lead to verification failure: inference might determine
that a different argument is passed through which would result in two
different arguments marked as 'returned'.
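
A hypothetical sketch of that situation (names invented, not taken from the commit): %a already carries 'returned', but the body actually passes %b through, so inferring 'returned' for %b as well would leave two 'returned' arguments and trip the verifier:

    define i32* @pick(i32* returned %a, i32* %b) {
    entry:
      ; FunctionAttrs must not add 'returned' to %b here, because %a is
      ; already marked 'returned' and only one argument may carry it.
      ret i32* %b
    }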

This fixes PR30350.

llvm-svn: 281221
2016-09-12 16:04:59 +00:00
Sanjay Patel 0531f0a5bb fix formatting; NFC
llvm-svn: 281220
2016-09-12 15:52:28 +00:00
Sanjay Patel 3151dec7f1 [InstCombine] add helper function for foldICmpUsingKnownBits; NFCI
llvm-svn: 281217
2016-09-12 15:24:31 +00:00
Sanjay Patel 5352331716 fix formatting/typos; NFC
llvm-svn: 281214
2016-09-12 14:25:46 +00:00
Chad Rosier a4c424654e [LoopInterchange] Improve debug output. NFC.
llvm-svn: 281212
2016-09-12 13:24:47 +00:00
Sanjay Patel 60312bc45f [InstCombine] add helper function for folding {and,or,xor} (cast X), C ; NFCI
llvm-svn: 281187
2016-09-12 00:16:23 +00:00
Duncan P. N. Exon Smith 8b4e4af5ed ScalarOpts: Use std::list for Candidates, NFC
There is nothing intrusive about the Candidate list; use std::list over
llvm::ilist for simplicity.

llvm-svn: 281177
2016-09-11 21:29:34 +00:00
Duncan P. N. Exon Smith 077f5b41e4 ScalarOpts: Sort includes, NFC
llvm-svn: 281176
2016-09-11 21:04:36 +00:00
James Molloy 104370ab37 [SimplifyCFG] Be even more conservative in SinkThenElseCodeToEnd
This should *actually* fix PR30244. This cranks up the workaround for PR30188 so that we never sink loads or stores of allocas.

The idea is that these should be removed by SROA/Mem2Reg, and any movement of them may well confuse SROA or just cause unwanted code churn. It's not ideal that the midend should be crippled like this, but that unwanted churn can really cause significant regressions in important workloads (tsan).
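
A minimal made-up example of the pattern being avoided: sinking the two stores would merge them behind a select of the alloca addresses, which Mem2Reg/SROA cannot look through:

    define void @f(i1 %c) {
    entry:
      %a = alloca i32
      %b = alloca i32
      br i1 %c, label %then, label %else
    then:
      store i32 1, i32* %a
      br label %end
    else:
      store i32 1, i32* %b
      br label %end
    end:
      ret void
    }

    ; sinking would produce, roughly:
    ;   %addr = select i1 %c, i32* %a, i32* %b
    ;   store i32 1, i32* %addr
    ; which blocks SROA/Mem2Reg, so such loads/stores are no longer sunk.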

llvm-svn: 281162
2016-09-11 09:00:03 +00:00
James Molloy 18d96e8fa5 [SimplifyCFG] Harden up the profitability heuristic for block splitting during sinking
Exposed by PR30244, we will split a block currently if we think we can sink at least one instruction. However this isn't right - the reason we split predecessors is so that we can sink instructions that otherwise couldn't be sunk because it isn't safe to do so - stores, for example.

So, change the heuristic to only split if it thinks it can sink at least one non-speculatable instruction.

Should fix PR30244.

llvm-svn: 281160
2016-09-11 08:07:30 +00:00
Arnold Schwaighofer 5d335559b9 InstCombine: Don't combine loads/stores from swifterror to a new type
This generates invalid IR: the only users of swifterror can be call
arguments, loads, and stores.

rdar://28242257

llvm-svn: 281144
2016-09-10 18:14:57 +00:00
Sanjay Patel 0a3d72bb93 [InstCombine] clean up foldICmpBinOpEqualityWithConstant / foldICmpIntrinsicWithConstant ; NFC
1. Rename variables to be consistent with related/preceding code (may want to reorganize).
2. Fix comments/formatting.

llvm-svn: 281140
2016-09-10 15:33:39 +00:00
Sanjay Patel f58f68c891 [InstCombine] rename and reorganize some icmp folding functions; NFC
Everything under foldICmpInstWithConstant() should now be working for
splat vectors via m_APInt matchers. Ie, I've removed all of the FIXMEs
that I added while cleaning that section up. Note that not all of the
associated FIXMEs in the regression tests are gone though, because some
of the tests require earlier folds that are still scalar-only. 

llvm-svn: 281139
2016-09-10 15:03:44 +00:00
Vitaly Buka 3ac3aa50f6 [asan] Add flag to allow lifetime analysis of problematic allocas
Summary:
Could be useful for comparison when we suspect that alloca was skipped
because of this.

Reviewers: eugenis

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D24437

llvm-svn: 281126
2016-09-10 01:06:11 +00:00
Arnold Schwaighofer c9277f40fd Inliner: Don't mark swifterror allocas with lifetime markers
This would create a bitcast use which fails the verifier: swifterror values may
only be used by loads, stores, and as function arguments.

rdar://28233244

llvm-svn: 281114
2016-09-09 22:40:27 +00:00
Matt Arsenault 950a82047b LSV: Fix incorrectly increasing alignment
If the unaligned access has a dynamic offset, it may be odd, which
would make the adjusted alignment incorrect to use.
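
A hedged sketch of the shape of the problem (names invented): the base pointer is known to be 4-byte aligned, but the runtime offset %i may be odd, so a merged access cannot simply inherit the base alignment:

    define void @store_pair(i8* align 4 %base, i64 %i) {
      ; %p = %base + %i may be odd at runtime; a vectorized <2 x i8> store
      ; must not claim an alignment derived from %base alone.
      %p = getelementptr inbounds i8, i8* %base, i64 %i
      %q = getelementptr inbounds i8, i8* %p, i64 1
      store i8 0, i8* %p, align 1
      store i8 0, i8* %q, align 1
      ret void
    }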

llvm-svn: 281110
2016-09-09 22:20:14 +00:00
Sanjay Patel 58109abe91 [InstCombine] use m_APInt to allow icmp ult X, C folds for splat constant vectors
llvm-svn: 281107
2016-09-09 21:59:37 +00:00
Gor Nishanov faf36c2e0b [Coroutines] Part13: Handle single edge PHINodes across suspends
Summary:
If one of the uses of the value is a single edge PHINode, handle it.

Original:

    %val = something
    <suspend>
    %p = PHINode [%val]

After Spill + Part13:

    %val = something
    %slot = gep val.spill.slot
    store %val, %slot
    <suspend>
    %p = load %slot

Plus tiny fixes/changes:
   * use correct index for coro.free in CoroCleanup
   * fixup id parameter in coro.free to allow authoring coroutine in plain C with __builtins

Reviewers: majnemer

Subscribers: mehdi_amini, llvm-commits

Differential Revision: https://reviews.llvm.org/D24242

llvm-svn: 281020
2016-09-09 05:39:00 +00:00
Dehao Chen 87823f8e4d Remove debug info when hoisting instruction from then/else branch.
Summary: The hoisted instruction is executed speculatively. It could affect the debugging experience, as the user would see gdb step into code that may not be expected to execute. It will also affect sample profile accuracy by assigning incorrect frequency to source within the then/else branch.

Reviewers: davidxl, dblaikie, chandlerc, kcc, echristo

Subscribers: mehdi_amini, probinson, eric_niebler, andreadb, llvm-commits

Differential Revision: https://reviews.llvm.org/D24164

llvm-svn: 280995
2016-09-08 21:53:33 +00:00
Matthew Simpson bfe5e1817b [LV] Ensure proper handling of multi-use case when collecting uniforms
The test case included in r280979 wasn't checking what it was supposed to be
checking for the predicated store case. Fixing the test revealed that the
multi-use case (when a pointer is used by both vectorized and scalarized memory
accesses) wasn't being handled properly. We can't skip over
non-consecutive-like pointers since they may have looked consecutive-like with
a different memory access.

llvm-svn: 280992
2016-09-08 21:38:26 +00:00
Matthew Simpson 408a3abcfe [LV] Don't mark pointers used by scalarized memory accesses uniform
Previously, all consecutive pointers were marked uniform after vectorization.
However, if a consecutive pointer is used by a memory access that is eventually
scalarized, the pointer won't remain uniform after all. An example is
predicated stores. Even though a predicated store may be consecutive, it will
still be scalarized, making its pointer operand non-uniform.

This patch updates the logic in collectLoopUniforms to consider the cases where
a memory access may be scalarized. If a memory access may be scalarized, its
pointer operand is not marked uniform. The determination of whether a given
memory instruction will be scalarized or not has been moved into a common
function that is used by the vectorizer, cost model, and legality analysis.
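
A simplified, hypothetical loop showing the case: %p is a consecutive pointer that also feeds a conditional store, so after vectorization the store is predicated and scalarized, and %p must not be marked uniform:

    define void @cond_store(i32* %a, i32 %n) {
    entry:
      br label %loop
    loop:
      %i = phi i32 [ 0, %entry ], [ %i.next, %latch ]
      ; consecutive pointer used by a vectorized load *and* a predicated
      ; (scalarized) store, so it must not be treated as uniform
      %p = getelementptr inbounds i32, i32* %a, i32 %i
      %v = load i32, i32* %p
      %cond = icmp sgt i32 %v, 0
      br i1 %cond, label %then, label %latch
    then:
      store i32 0, i32* %p
      br label %latch
    latch:
      %i.next = add nuw i32 %i, 1
      %done = icmp eq i32 %i.next, %n
      br i1 %done, label %exit, label %loop
    exit:
      ret void
    }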

Differential Revision: https://reviews.llvm.org/D24271

llvm-svn: 280979
2016-09-08 19:11:07 +00:00
Balaram Makam c6cebf727c [LoopDataPrefetch] Use range based for loop; NFCI
Switch to range based for loop.
No functional change, but more readable code.

llvm-svn: 280966
2016-09-08 17:08:20 +00:00
Sanjay Patel 1c608f4323 [InstCombine] return a vector-safe true/false constant
I introduced this potential bug by missing this diff in:
https://reviews.llvm.org/rL280873

...however, I'm not sure how to reach this code path with a regression test.
We may be able to remove this code and assume that the transform to a constant
is always handled by InstSimplify?

llvm-svn: 280964
2016-09-08 16:54:02 +00:00
Dehao Chen db3810771e revert r280427
Refactor replaceDominatedUsesWith to have a flag to control whether to replace uses in BB itself.

Summary: This is in preparation for LoopSink pass which calls replaceDominatedUsesWith to update after sinking.
llvm-svn: 280949
2016-09-08 15:25:12 +00:00
Vitaly Buka 58a81c6540 [asan] Avoid lifetime analysis for allocas which can be in an ambiguous state
Summary:
C allows jumping over variable declarations, so lifetime.start can be
skipped before the variable is used. To avoid false positives in such rare
cases, we detect these allocas and remove them from lifetime analysis.
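
A made-up IR sketch of the ambiguous case (using the unsuffixed lifetime intrinsic of that era): a branch corresponding to a C goto can reach the use of %x without ever passing its lifetime.start:

    declare void @llvm.lifetime.start(i64, i8* nocapture)

    define void @jump_over(i1 %skip) {
    entry:
      %x = alloca i32
      ; the %skip path bypasses the block that starts %x's lifetime, so the
      ; alloca is excluded from lifetime-based analysis
      br i1 %skip, label %use, label %decl
    decl:
      %p = bitcast i32* %x to i8*
      call void @llvm.lifetime.start(i64 4, i8* %p)
      br label %use
    use:
      store i32 1, i32* %x
      ret void
    }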

PR27453
PR28267

Reviewers: eugenis

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D24321

llvm-svn: 280907
2016-09-08 06:27:58 +00:00
Michael Zolotukhin e72997a524 Revert "[LoopUnroll] Properly update loop-info when cloning prologues and epilogues."
This reverts commit r280901.

This caused a bunch of failures, reverting it until I investigate them.

llvm-svn: 280905
2016-09-08 03:51:30 +00:00
Michael Zolotukhin 5e0a20697e [LoopUnroll] Properly update loop-info when cloning prologues and epilogues.
Summary:
When cloning blocks for prologue/epilogue we need to replicate the loop
structure from the original loop. It wasn't a problem for the innermost
loops, but it led to an incorrect loop info when we unrolled a loop with
a child loop - in this case created prologue-loop had a child loop, but
loop info didn't reflect that.

This fixes PR28888.

Reviewers: chandlerc, sanjoy, hfinkel

Subscribers: llvm-commits, silvas

Differential Revision: https://reviews.llvm.org/D24203

llvm-svn: 280901
2016-09-08 01:52:26 +00:00
Peter Collingbourne 8f1dd5c41e IR: Remove Value::intersectOptionalDataWith, replace all calls with calls to Instruction::andIRFlags.
The two functions are functionally equivalent.

Differential Revision: https://reviews.llvm.org/D22830

llvm-svn: 280884
2016-09-07 23:39:04 +00:00
Vitaly Buka c5e53b2a53 Revert "[asan] Avoid lifetime analysis for allocas which can be in an ambiguous state"
Fails on Windows.

This reverts commit r280880.

llvm-svn: 280883
2016-09-07 23:37:15 +00:00
Vitaly Buka 2ca05b07d6 [asan] Avoid lifetime analysis for allocas which can be in an ambiguous state
Summary:
C allows jumping over variable declarations, so lifetime.start can be
skipped before the variable is used. To avoid false positives in such rare
cases, we detect these allocas and remove them from lifetime analysis.

PR27453
PR28267

Reviewers: eugenis

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D24321

llvm-svn: 280880
2016-09-07 23:18:23 +00:00
Sanjay Patel 9b40f98357 [InstCombine] use m_APInt to allow icmp (and (sh X, Y), C2), C1 folds for splat constant vectors
llvm-svn: 280873
2016-09-07 22:33:03 +00:00
Hal Finkel ac5803ba91 [SimplifyCFG] Don't try to create metadata-valued PHIs
We can't create metadata-valued PHIs; don't try to do so when sinking.

I created a test case for this using the @llvm.type.test intrinsic, because it
takes a metadata parameter and does not have severe side effects (thus
SimplifyCFG is willing to otherwise sink it).

Previously, running the test case would crash with:

  Invalid use of metadata!
    %.sink = select i1 %flag, metadata <...>, metadata <0x4e45dc0>
  LLVM ERROR: Broken function found, compilation aborted!

llvm-svn: 280866
2016-09-07 21:38:22 +00:00
Haicheng Wu 109f4f3509 [LoopUnroll] Correct a debug message. NFC.
Differential Revision: https://reviews.llvm.org/D24299

llvm-svn: 280865
2016-09-07 21:30:16 +00:00
Sanjay Patel def931e76a [InstCombine] allow icmp (and X, C2), C1 folds for splat constant vectors
This is a revert of r280676 which was a revert of r280637;
ie, this is r280637 again. It was speculatively reverted to
help debug buildbot failures.

llvm-svn: 280861
2016-09-07 20:50:44 +00:00
Chad Rosier 13bc0d19a8 Typo. NFC.
llvm-svn: 280834
2016-09-07 18:15:12 +00:00
Chad Rosier 90bcb9176e [LoopInterchange] Improve debug output. NFC.
llvm-svn: 280820
2016-09-07 16:07:17 +00:00
Chad Rosier f5814f56b8 [LoopInterchange] Improve debug output. NFC.
llvm-svn: 280819
2016-09-07 15:56:59 +00:00
Justin Lebar 3a5f40c191 [LSV] Use the original loads' names for the extractelement instructions.
Summary:
LSV replaces multiple adjacent loads with one vectorized load and a
bunch of extractelement instructions.  This patch makes the
extractelement instructions' names match those of the original loads,
for (hopefully) improved readability.

Reviewers: asbirlea, tstellarAMD

Subscribers: arsenm, mzolotukhin

Differential Revision: https://reviews.llvm.org/D23748

llvm-svn: 280818
2016-09-07 15:49:48 +00:00
Andrea Di Biagio f3fd316223 [InstCombine][SSE4a] Fix assertion failure in the insertq/insertqi combining logic.
This fixes a similar issue to the one already fixed by r280804
(reviewed in D24256). Revision 280804 fixed the problem with unsafe dyn_casts
in the extrq/extrqi combining logic. However, it turns out that even the
insertq/insertqi logic was affected by the same problem.

llvm-svn: 280807
2016-09-07 12:47:53 +00:00
Andrea Di Biagio 8df5b9cf48 [InstCombine][SSE4a] Fix assertion failure caused by unsafe dyn_casts on the operands of extrq/extrqi intrinsic calls.
This patch fixes an assertion failure caused by unsafe dynamic casts on the
constant operands of sse4a intrinsic calls to extrq/extrqi.

The combine logic that simplifies sse4a extrq/extrqi intrinsic calls currently
checks if the input operands are constants. Internally, that logic relies on
dyn_casts of values returned by calls to method Constant::getAggregateElement.
However, method getAggregateElement may return nullptr if the constant element
cannot be retrieved, so all of the dyn_casts can potentially fail. This is what
happens, for example, if a constexpr value is passed as input to an extrq/extrqi
intrinsic call.

This patch fixes the problem by using a dyn_cast_or_null (instead of a simple
dyn_cast) on the result of each call to Constant::getAggregateElement.

Added reproducible test cases to x86-sse4a.ll.

Differential Revision: https://reviews.llvm.org/D24256

llvm-svn: 280804
2016-09-07 12:03:03 +00:00
Renato Golin c69e0818e0 Revert "[EfficiencySanitizer] Adds shadow memory parameters for 40-bit virtual memory address."
This reverts commit r280796, as it broke the AArch64 bots for no reason.

The tests were passing and we should try to keep them passing, so a proper
review should make that happen.

llvm-svn: 280802
2016-09-07 10:54:42 +00:00
Sagar Thakur 69c78d8db7 [EfficiencySanitizer] Adds shadow memory parameters for 40-bit virtual memory address.
Adding 40-bit shadow memory parameters because MIPS64 uses 40-bit virtual memory addresses.

Reviewed by bruening
Differential: D23801

llvm-svn: 280796
2016-09-07 09:45:37 +00:00
James Molloy 6c009c1c85 [SimplifyCFG] Followup fix to r280790
In failure cases it's not guaranteed that the PHI we're inspecting is actually in the successor block! In this case we need to bail out early, and never query getIncomingValueForBlock() as that will cause an assert.

llvm-svn: 280794
2016-09-07 09:01:22 +00:00
James Molloy ec905a62ae [SimplifyCFG] Update workaround for PR30188 to also include loads
I should have realised this the first time around, but if we're avoiding sinking stores where the operands come from allocas so they don't create selects, we also have to do the same for loads because SROA will be just as defective looking at loads of selected addresses as stores.

Fixes PR30188 (again).

llvm-svn: 280792
2016-09-07 08:40:20 +00:00
James Molloy bf1837d9c9 [SimplifyCFG] Check PHI uses more accurately
PR30292 showed a case where our PHI checking wasn't correct. We were checking that all values were used by the same PHI before deciding to sink, but we weren't checking that the incoming values for that PHI were what we expected. As a result, we had to bail out after block splitting which caused us to never reach a steady state in SimplifyCFG.

Fixes PR30292.

llvm-svn: 280790
2016-09-07 08:15:54 +00:00
Nick Lewycky edd0a7023f Fix typo in comment, NFC
llvm-svn: 280774
2016-09-07 01:49:41 +00:00
Dehao Chen 3857f8f0ac Explicitly require DominatorTreeAnalysis pass for instsimplify pass.
Summary: DominatorTreeAnalysis is always required by instsimplify.

Reviewers: danielcdh, davidxl

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D24173

llvm-svn: 280760
2016-09-06 22:17:16 +00:00
Sanjay Patel 4e463b4a2c fix formatting; NFC
llvm-svn: 280727
2016-09-06 18:16:31 +00:00
Adam Nemet c520822dbf [JumpThreading] Only write back branch-weight MDs for blocks that originally had PGO info
Currently the pass updates branch weights in the IR if the function has
any PGO info (entry frequency is set).  However we could still have
regions of the CFG that does not have branch weights collected (e.g. a
cold region).  In this case we'd use static estimates.  Since static
estimates for branches are determined independently, they are
inconsistent.  Updating them can "randomly" inflate block frequencies.

I've run into this in a completely cold loop of h264ref from
SPEC.  -Rpass-with-hotness showed the loop to be completely cold during
inlining (before JT) but completely hot during vectorization (after JT).

The new testcase demonstrate the problem.  We check array elements
against 1, 2 and 3 in a loop.  The check against 3 is the loop-exiting
check.  The block names should be self-explanatory.

In this example, jump threading incorrectly updates the weight of the
loop-exiting branch to 0, drastically inflating the frequency of the
loop (in the range of billions).

There is no run-time profile info for edges inside the loop, so branch
probabilities are estimated.  These are the resulting branch and block
frequencies for the loop body:

                check_1 (16)
            (8) /  |
            eq_1   | (8)
                \  |
                check_2 (16)
            (8) /  |
            eq_2   | (8)
                \  |
                check_3 (16)
            (1) /  |
       (loop exit) | (15)
                   |
              (back edge)

First we thread eq_1 -> check_2 to check_3.  Frequencies are updated to
remove the frequency of eq_1 from check_2 and then from the false edge
leaving check_2.  Changed frequencies are highlighted with * *:

                check_1 (16)
            (8) /  |
           eq_1~   | (8)
           /       |
          /     check_2 (*8*)
         /  (8) /  |
         \  eq_2   | (*0*)
          \     \  |
           ` --- check_3 (16)
            (1) /  |
       (loop exit) | (15)
                   |
              (back edge)

Next we thread eq_1 -> check_3 and eq_2 -> check_3 to check_1 as new
back edges.  Frequencies are updated to remove the frequency of eq_1 and
eq_2 from check_3 and then the false edge leaving check_3 (changed
frequencies are highlighted with * *):

                  check_1 (16)
              (8) /  |
             eq_1~   | (8)
             /       |
            /     check_2 (*8*)
           /  (8) /  |
          /-- eq_2~  | (*0*)
  (back edge)        |
                  check_3 (*0*)
            (*0*) /  |
         (loop exit) | (*0*)
                     |
                (back edge)

As a result, the loop exit edge ends up with 0 frequency which in turn makes
the loop header to have maximum frequency.

There are a few potential problems here:

1. The profile data seems odd.  There is a single profile sample of the
loop being entered.  On the other hand, there are no weights inside the
loop.

2. Based on static estimation we shouldn't set edges to "extreme"
values, i.e. extremely likely or unlikely.

3. We shouldn't create profile metadata that is calculated from static
estimation.  I am not sure what the policy is, but it seems to make sense to
treat profile metadata as something that is known to originate from
profiling.  Estimated probabilities should only be reflected in BPI/BFI.

Any one of these would probably fix the immediate problem.  I went for 3
because I think it's a good policy to have and added a FIXME about 2.

Differential Revision: https://reviews.llvm.org/D24118

llvm-svn: 280713
2016-09-06 16:08:33 +00:00
Gor Nishanov ccabaca273 [Coroutines] Part12: Handle alloca address-taken
Summary:
Move early uses of spilled variables after CoroBegin.

For example, if a parameter had address taken, we may end up with the code
like:
        define @f(i32 %n) {
          %n.addr = alloca i32
          store %n, %n.addr
          ...
          call @coro.begin

This patch fixes the problem by moving uses of spilled variables after CoroBegin.

Reviewers: majnemer

Subscribers: mehdi_amini, llvm-commits

Differential Revision: https://reviews.llvm.org/D24234

llvm-svn: 280678
2016-09-05 23:45:45 +00:00
Sanjay Patel eea2ef7862 [InstCombine] don't assert that division-by-constant has been folded (PR30281)
This is effectively a revert of:
https://reviews.llvm.org/rL280115

And this should fix
https://llvm.org/bugs/show_bug.cgi?id=30281

llvm-svn: 280677
2016-09-05 23:38:22 +00:00
Sanjay Patel 46f9df5b71 [InstCombine] revert r280637 because it causes test failures on an ARM bot
http://lab.llvm.org:8011/builders/clang-cmake-armv7-a15/builds/14952/steps/ninja%20check%201/logs/FAIL%3A%20LLVM%3A%3Aicmp.ll

llvm-svn: 280676
2016-09-05 22:36:32 +00:00
Gor Nishanov 0e18f75a92 [Coroutines] Part11: Add final suspend handling.
Summary:
A frontend may designate a particular suspend to be final, by setting the second argument of the coro.suspend intrinsic to true. Such a suspend point has two properties:

* it is possible to check whether a suspended coroutine is at the final suspend point via coro.done intrinsic;
* a resumption of a coroutine stopped at the final suspend point leads to undefined behavior. The only possible action for a coroutine at a final suspend point is destroying it via coro.destroy intrinsic.

This patch adds final suspend handling logic to CoroEarly and CoroSplit passes.
Now, the final suspend point example from docs\Coroutines.rst compiles and produces expected result (see test/Transform/Coroutines/ex5.ll).

Reviewers: majnemer

Subscribers: mehdi_amini, llvm-commits

Differential Revision: https://reviews.llvm.org/D24068

llvm-svn: 280646
2016-09-05 04:44:30 +00:00
Sanjay Patel c641e9d6ff [InstCombine] allow icmp (and X, C2), C1 folds for splat constant vectors
The code to calculate 'UsesRemoved' could be simplified.
As-is, that code is a victim of PR30273:
https://llvm.org/bugs/show_bug.cgi?id=30273

llvm-svn: 280637
2016-09-04 20:58:27 +00:00
Sanjay Patel 6b4909749b [InstCombine] recode icmp fold in a vector-friendly way; NFC
The transform in question:
icmp (and (trunc W), C2), C1 -> icmp (and W, C2'), C1'

...is still not enabled for vectors, thus no functional change intended.
It's not clear to me if this is a good transform for vectors or even
scalars in general. Changing that behavior may be a follow-on patch.
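
For reference, a scalar instance of the transform (example constants chosen here, not from the commit):

    define i1 @cmp_narrow(i64 %w) {
      %t = trunc i64 %w to i32
      %a = and i32 %t, 255
      %c = icmp eq i32 %a, 7
      ret i1 %c
    }

    ; can be rewritten to work on the wide value directly:
    ;   %a2 = and i64 %w, 255
    ;   %c2 = icmp eq i64 %a2, 7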

llvm-svn: 280627
2016-09-04 14:32:15 +00:00
Dorit Nuzman abd15f69b2 [InstCombine] Preserve llvm.mem.parallel_loop_access metadata when replacing
memcpy with ld/st.

When InstCombine replaces a memcpy with loads+stores it does not copy over the
llvm.mem.parallel_loop_access from the memcpy instruction. This patch fixes
that.
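
A hedged sketch (invented names, using the five-operand memcpy signature of the time and a made-up loop-id node): the metadata attached to the memcpy is now also attached to the load/store pair InstCombine emits:

    declare void @llvm.memcpy.p0i8.p0i8.i64(i8*, i8*, i64, i32, i1)

    define void @copy_elem(i8* %dst, i8* %src) {
      ; after this patch, the i32 load/store that replace this memcpy also
      ; carry !llvm.mem.parallel_loop_access !0
      call void @llvm.memcpy.p0i8.p0i8.i64(i8* %dst, i8* %src, i64 4, i32 4, i1 false), !llvm.mem.parallel_loop_access !0
      ret void
    }

    !0 = distinct !{!0}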

Differential Revision: https://reviews.llvm.org/D23499

llvm-svn: 280617
2016-09-04 07:49:39 +00:00
Dorit Nuzman 7673ba7ac2 Test commit.
llvm-svn: 280615
2016-09-04 07:06:00 +00:00
Joseph Tremoulet e92e0a9042 Fix inliner funclet unwind memoization
Summary:
The inliner may need to determine where a given funclet unwinds to,
and this determination may depend on other funclets throughout the
funclet tree.  The code that performs this walk in getUnwindDestToken
memoizes results to avoid redundant computations.  In the case that
a funclet's unwind destination is derived from its ancestor, there's
code to walk back down the tree from the ancestor updating the memo
map of its descendants to record the unwind destination.  This change
fixes that code to account for the case that some descendant has a
different unwind destination, which can happen if that unwind dest
is a descendant of the EHPad being queried and thus didn't determine
its unwind destination.

Also update test inline-funclets.ll, which is supposed to cover such
scenarios, to include a case that fails an assertion without this fix
but passes with it.

Fixes PR29151.


Reviewers: majnemer

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D24117

llvm-svn: 280610
2016-09-04 01:23:20 +00:00
Xinliang David Li 7a28a7fbd8 Cleanup : Use metadata preserving API for branch creation
Use the wrapper API in IRBuilder that does meta data copy
to create new branch in LoopUnswitch.

llvm-svn: 280602
2016-09-03 22:26:11 +00:00
Matt Arsenault 46a0382ab2 AMDGPU: Do basic folding of class intrinsic
This allows more of the OCML builtin library to be
constant folded.

llvm-svn: 280586
2016-09-03 07:06:58 +00:00
Duncan P. N. Exon Smith 4e229a7c0a ADT: Do not inherit from std::iterator in ilist_iterator
Inheriting from std::iterator uses more boiler-plate than manual
typedefs.  Avoid that in both ilist_iterator and
MachineInstrBundleIterator.

This has the side effect of removing ilist_iterator from certain ADL
lookups in namespace std; calls to std::next need to be qualified by
"std::" that didn't have to before.  The one case of this in-tree was
operating on a temporary, so I used the more compact operator++.

llvm-svn: 280570
2016-09-03 02:27:35 +00:00
Xinliang David Li 40afd5c9e4 [Profile] handle select instruction in 'expect' lowering
Builtin expect lowering currently ignores select. This patch
fixes the issue.

Differential Revision: http://reviews.llvm.org/D24166

llvm-svn: 280547
2016-09-02 22:03:40 +00:00
Chad Rosier c50dfe38ac [SLP] Don't pass a global CL option as an argument. NFC.
Differential Revision: https://reviews.llvm.org/D24199

llvm-svn: 280527
2016-09-02 19:09:50 +00:00
Sanjay Patel 521f19f249 [InstCombine] fold insertelement of constant into shuffle with constant operand (PR29126)
The motivating case occurs with SSE/AVX scalar intrinsics, so this is a first step towards
shrinking that to a single shufflevector.

Note that the transform is intentionally limited to shuffles that are equivalent to vector
selects to avoid creating arbitrary shuffle masks that may not lower well.

This should solve PR29126:
https://llvm.org/bugs/show_bug.cgi?id=29126

Differential Revision: https://reviews.llvm.org/D23886

llvm-svn: 280504
2016-09-02 17:05:43 +00:00
Matthew Simpson b65c230eab [LV] Ensure reverse interleaved group GEPs remain uniform
For uniform instructions, we're only required to generate a scalar value for
the first vector lane of each unroll iteration. Thus, if we have a reverse
interleaved group, computing the member index off the scalar GEP corresponding
to the last vector lane of its pointer operand technically makes the GEP
non-uniform. We should compute the member index off the first scalar GEP
instead.

I've added the updated member index computation to the existing reverse
interleaved group test.

llvm-svn: 280497
2016-09-02 16:19:22 +00:00
James Molloy f3cf2a494b [SimplifyCFG] Add a workaround to fix PR30188
We're sinking stores, which is a good thing, but in the process creating selects for the store address operand, which SROA/Mem2Reg can't look through, which caused serious regressions.

The real fix is in SROA, which I'll be looking into.

llvm-svn: 280470
2016-09-02 07:29:00 +00:00
Dehao Chen 4b5e7f750c revert r280429 and r280425:
r280425 | dehao | 2016-09-01 16:15:50 -0700 (Thu, 01 Sep 2016) | 9 lines

Refactor LICM pass in preparation for LoopSink pass.

Summary: LoopSink pass uses some common functions in LICM. This patch refactors the LICM code to make it usable by LoopSink pass (https://reviews.llvm.org/D22778).

r280429 | dehao | 2016-09-01 16:31:25 -0700 (Thu, 01 Sep 2016) | 9 lines

Refactor LICM to expose canSinkOrHoistInst to LoopSink pass.

Summary: LoopSink pass shares the same canSinkOrHoistInst functionality with LICM pass. This patch exposes this function in preparation for https://reviews.llvm.org/D22778
llvm-svn: 280453
2016-09-02 01:59:27 +00:00
Dehao Chen 820372c0ed revert r280432:
r280432 | dehao | 2016-09-01 16:51:37 -0700 (Thu, 01 Sep 2016) | 9 lines

Explicitly require DominatorTreeAnalysis pass for instsimplify pass.

Summary: DominatorTreeAnalysis is always required by instsimplify.
llvm-svn: 280452
2016-09-02 01:47:13 +00:00
Dehao Chen e573b77772 Explicitly require DominatorTreeAnalysis pass for instsimplify pass.
Summary: DominatorTreeAnalysis is always required by instsimplify.

Reviewers: davidxl, danielcdh

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D24173

llvm-svn: 280432
2016-09-01 23:51:37 +00:00
Dehao Chen e81d50b3b9 Refactor LICM to expose canSinkOrHoistInst to LoopSink pass.
Summary: LoopSink pass shares the same canSinkOrHoistInst functionality with LICM pass. This patch exposes this function in preparation for https://reviews.llvm.org/D22778

Reviewers: chandlerc, davidxl, danielcdh

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D24171

llvm-svn: 280429
2016-09-01 23:31:25 +00:00
Dehao Chen ddd0c125e3 Refactor replaceDominatedUsesWith to have a flag to control whether to replace uses in BB itself.
Summary: This is in preparation for LoopSink pass which calls replaceDominatedUsesWith to update after sinking.

Reviewers: chandlerc, davidxl, danielcdh

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D24170

llvm-svn: 280427
2016-09-01 23:26:48 +00:00
Dehao Chen bc4e5bba0e Refactor LICM pass in preparation for LoopSink pass.
Summary: LoopSink pass uses some common functions in LICM. This patch refactors the LICM code to make it usable by LoopSink pass (https://reviews.llvm.org/D22778).

Reviewers: chandlerc, davidxl, danielcdh

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D24168

llvm-svn: 280425
2016-09-01 23:15:50 +00:00
Matthew Simpson 9e7851ed31 [LV] Use ScalarParts for ad-hoc pointer IV scalarization (NFCI)
We can now maintain scalar values in VectorLoopValueMap. Thus, we no longer
have to create temporary vectors with insertelement instructions when handling
pointer induction variables. This case was mistakenly missed from r279649 when
refactoring the other scalarization code.

llvm-svn: 280405
2016-09-01 19:40:19 +00:00
Matthew Simpson 922af076c7 [LV] Move VectorParts allocation and mapping into PHI widening (NFC)
This patch moves the allocation of VectorParts for PHI nodes into the actual
PHI widening code. Previously, we allocated these VectorParts in
vectorizeBlockInLoop, and passed them by reference to widenPHIInstruction. Upon
returning, we would then map the VectorParts in VectorLoopValueMap. This
behavior is problematic for the cases where we only want to generate a scalar
version of a PHI node. For example, if in the future we only generate a scalar
version of an induction variable, we would end up inserting an empty vector
entry into the map once we return to vectorizeBlockInLoop. We now no longer
need to pass VectorParts to the various PHI widening functions, and we can keep
VectorParts allocation as close as possible to the point at which they are
actually mapped in VectorLoopValueMap.

llvm-svn: 280390
2016-09-01 18:14:27 +00:00
Geoff Berry fcb186ca9d [EarlyCSE] Change C API pass interface for EarlyCSE w/ MemorySSA
Previous change broke the C API for creating an EarlyCSE pass w/
MemorySSA by adding a bool parameter to control whether MemorySSA was
used or not.  This broke the OCaml bindings.  Instead, change the old C
API entry point back and add a new one to request an EarlyCSE pass with
MemorySSA.

llvm-svn: 280379
2016-09-01 15:07:46 +00:00
Sanjay Patel dd861964d1 [InstCombine] remove fold of an icmp pattern that should never happen
While removing a scalar shackle from an icmp fold, I noticed that I couldn't find any tests to trigger
this code path.

The 'and' shrinking transform should be handled by InstCombiner::foldCastedBitwiseLogic()
or eliminated with InstSimplify. The icmp narrowing is part of InstCombiner::foldICmpWithCastAndCast().

Differential Revision: https://reviews.llvm.org/D24031 

llvm-svn: 280370
2016-09-01 14:20:43 +00:00
James Molloy 88cad7e5cf [SimplifyCFG] Handle tail-sinking of more than 2 incoming branches
This was a real restriction in the original version of SinkIfThenCodeToEnd. Now it's been rewritten, the restriction can be lifted.

As part of this, we handle a very common and useful case where one of the incoming branches is actually conditional. Consider:

   if (a)
     x(1);
   else if (b)
     x(2);

This produces the following CFG:

         [if]
        /    \
      [x(1)] [if]
        |     | \
        |     |  \
        |  [x(2)] |
         \    |  /
          [ end ]

[end] has two unconditional predecessor arcs and one conditional. The conditional refers to the implicit empty 'else' arc. This same pattern can also be caused by an empty default block in a switch.

We can't sink the call to x() down to end because no call to x() happens on the third incoming arc (assume that x() has sideeffects for the sake of argument; if something is safe to speculate we could indeed sink nevertheless but this cannot happen in the general case and causes many extra selects).

We are now able to detect this case and split off the unconditional arcs to a common successor:

         [if]
        /    \
      [x(1)] [if]
        |     | \
        |     |  \
        |  [x(2)] |
         \   /    |
     [sink.split] |
           \     /
           [ end ]

Now we can sink the call to x() into %sink.split. This can cause significant code simplification in many testcases.

llvm-svn: 280364
2016-09-01 12:58:13 +00:00
James Molloy eec6df3193 [SimplifyCFG] Change the algorithm in SinkThenElseCodeToEnd
r279460 rewrote this function to be able to handle more than two incoming edges and took pains to ensure this didn't regress anything.

This time we change the logic for determining if an instruction should be sunk. Previously we used a single pass greedy algorithm - sink instructions until one requires more than one PHI node or we run out of instructions to sink.

This had the problem that sinking instructions that had non-identical but trivially the same operands needed extra logic so we sunk them aggressively. For example:

    %a = load i32* %b          %d = load i32* %b
    %c = gep i32* %a, i32 0    %e = gep i32* %d, i32 1

Sinking %c and %e would naively require two PHI merges as %a != %d. But the loads are obviously equivalent (and maybe can't be hoisted because there is no common predecessor).

This is why we implemented the fairly complex function areValuesTriviallySame(), to look through trivial differences like this. However it's just not clever enough.

Instead, throw areValuesTriviallySame away, use pointer equality to check equivalence of operands and switch to a two-stage algorithm.

In the "scan" stage, we look at every sinkable instruction in isolation from end of block to front. If it's sinkable, we keep track of all operands that required PHI merging.

In the "sink" stage, we iteratively sink the last non-terminator in the source blocks. But when calculating how many PHIs are actually required to be inserted (to work out if we should stop or not) we remove any values that have already been sunk from the set of PHI-merges required, which allows us to be more aggressive.

This turns an algorithm with potentially recursive lookahead (looking through GEPs, casts, loads and any other instruction potentially not CSE'd) to two linear scans.

llvm-svn: 280351
2016-09-01 10:44:35 +00:00
James Molloy 21744689b9 [SimplifyCFG] Fix nondeterministic iteration order
We iterate over the result from SafeToMergeTerminators, so make it a SmallSetVector instead of a SmallPtrSet.

Should fix stage3 convergence builds.

llvm-svn: 280342
2016-09-01 09:01:34 +00:00
James Molloy e656642295 [SimplifyCFG] Improve FoldValueComparisonIntoPredecessors to handle more cases
A very important case is not handled here: multiple arcs to a single block with a PHI. Consider:

    a:
      %1 = icmp %b, 1
      br %1, label %c, label %e
    c:
      %2 = icmp %b, 2
      br %2, label %d, label %e
    d:
      br %e
    e:
      phi [0, %a], [1, %c], [2, %d]

FoldValueComparisonIntoPredecessors will refuse to fold this, as it doesn't know how to deal with two arcs to a common destination with different PHI values. The answer is obvious - just split all conflicting arcs.

llvm-svn: 280338
2016-09-01 07:45:25 +00:00
Nick Lewycky 8dd4dad08b Add cast to appease windows builder. Fixes build break introduced in r280306.
llvm-svn: 280311
2016-08-31 23:24:43 +00:00
Nick Lewycky 97e49ac59e Add -fprofile-dir= to clang.
-fprofile-dir=path allows the user to specify where .gcda files should be
emitted when the program is run. In particular, this is the first flag that
causes the .gcno and .o files to have different paths; LLVM is extended to
support this. -fprofile-dir= does not change the file name in the .gcno (and
thus where lcov looks for the source) but it does change the name in the .gcda
(and thus where the runtime library writes the .gcda file). It's different from
a GCOV_PREFIX because a user can observe that the GCOV_PREFIX_STRIP will strip
paths off of -fprofile-dir= but not off of a supplied GCOV_PREFIX.

To implement this we split -coverage-file into -coverage-data-file and
-coverage-notes-file to specify the two different names. The !llvm.gcov
metadata node grows from a 2-element form {string coverage-file, node dbg.cu}
to 3-elements, {string coverage-notes-file, string coverage-data-file, node
dbg.cu}. In the 3-element form, the file name is already "mangled" with
.gcno/.gcda suffixes, while the 2-element form left that to the middle end
pass.
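
A hypothetical example of the new 3-element form (file names invented; !1 stands in for the compile unit node from !llvm.dbg.cu):

    !llvm.gcov = !{!0}
    ; { notes file, data file, compile unit }
    !0 = !{!"foo.gcno", !"/var/prof/foo.gcda", !1}
    !1 = !{}  ; placeholder for the DICompileUnit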

llvm-svn: 280306
2016-08-31 23:04:32 +00:00
Sanjay Patel 0d70831d73 [InstCombine] allow icmp (shr exact X, C2), C fold for splat constant vectors
The enhancement to foldICmpDivConstant ( http://llvm.org/viewvc/llvm-project?view=revision&revision=280299 )
allows us to remove the ConstantInt check; no other changes needed.
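
For illustration (constants invented), the kind of splat-vector case that now folds like its scalar counterpart:

    define <2 x i1> @cmp_shr(<2 x i32> %x) {
      %s = ashr exact <2 x i32> %x, <i32 2, i32 2>
      %c = icmp eq <2 x i32> %s, <i32 4, i32 4>
      ; now folds to: icmp eq <2 x i32> %x, <i32 16, i32 16>
      ret <2 x i1> %c
    }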

llvm-svn: 280300
2016-08-31 22:18:43 +00:00
Sanjay Patel 541aef4661 [InstCombine] allow icmp (div X, Y), C folds for splat constant vectors
Converting all of the overflow ops to APInt looked risky, so I've left that as a TODO.

llvm-svn: 280299
2016-08-31 21:57:21 +00:00
Sanjay Patel 85d79744df [InstCombine] change insertRangeTest() to use APInt instead of Constant; NFCI
This is prep work before changing the callers to also use APInt which will
allow folds for splat vectors. Currently, the callers have ConstantInt
guards in place, so no functional change intended with this commit.

llvm-svn: 280282
2016-08-31 19:49:56 +00:00
Michael Zolotukhin e0b2d97b52 [LoopInfo] Add verification by recomputation.
Summary:
Current implementation of LI verifier isn't ideal and fails to detect
some cases when LI is incorrect. For instance, it checks that all
recorded loops are in a correct form, but it has no way to check if
there are no more other (unrecorded in LI) loops in the function. This
patch adds a way to detect such bugs.

Reviewers: chandlerc, sanjoy, hfinkel

Subscribers: llvm-commits, silvas, mzolotukhin

Differential Revision: https://reviews.llvm.org/D23437

llvm-svn: 280280
2016-08-31 19:26:19 +00:00
Geoff Berry 8d84605f25 [EarlyCSE] Optionally use MemorySSA. NFC.
Summary:
Use MemorySSA, if requested, to do less conservative memory dependency
checking.

This change doesn't enable the MemorySSA enhanced EarlyCSE in the
default pipelines, so should be NFC.

Reviewers: dberlin, sanjoy, reames, majnemer

Subscribers: mcrosier, llvm-commits

Differential Revision: http://reviews.llvm.org/D19821

llvm-svn: 280279
2016-08-31 19:24:10 +00:00
Geoff Berry 64f5ed172a [EarlyCSE] Allow forwarding a non-invariant load into an invariant load.
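
A small made-up example of the forwarding now permitted: the second load is marked invariant while the first is not, and EarlyCSE may still replace %b with %a:

    define i32 @forward(i32* %p) {
      %a = load i32, i32* %p
      ; %b can now reuse %a instead of issuing a second load
      %b = load i32, i32* %p, !invariant.load !0
      %s = add i32 %a, %b
      ret i32 %s
    }

    !0 = !{}
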
Reviewers: sanjoy

Subscribers: mcrosier, llvm-commits

Differential Revision: https://reviews.llvm.org/D23935

llvm-svn: 280265
2016-08-31 17:45:31 +00:00
Chad Rosier 0de580aaab [SLP] Update the debug based on Michael's suggestion.
Passing the types/opcode check still doesn't guarantee we'll actually vectorize.
Therefore, just make it clear we're attempting to vectorize.

llvm-svn: 280263
2016-08-31 17:41:12 +00:00
Chad Rosier 54807a9b9d [SLP] Sink debug after checking for matching types/opcode.
Differential Revision: https://reviews.llvm.org/D24090

llvm-svn: 280260
2016-08-31 17:31:09 +00:00
Tim Shen 48f814e8a3 s/static inline/static/ for headers I have changed in r279475. NFC.
llvm-svn: 280257
2016-08-31 16:48:13 +00:00
Philip Reames 2b1084ac93 [statepoints][experimental] Add support for live-in semantics of values in deopt bundles
This is a first step towards supporting deopt value lowering and reporting entirely with the register allocator. I hope to build on this in the near future to support live-on-return semantics, but I have a use case which allows me to test and investigate code quality with just the live-in semantics so I've chosen to start there. For those curious, my use case is our implementation of the "__llvm_deoptimize" function we bind to @llvm.deoptimize. I'm choosing not to hard code that fact in the patch and instead make it configurable via function attributes.

The basic approach here is modelled on what is done for the "Live In" values on stackmaps and patchpoints. (A secondary goal here is to remove one of the last barriers to merging the pseudo instructions.) We start by adding the operands directly to the STATEPOINT SDNode. Once we've lowered to MI, we extend the remat logic used by the register allocator to fold virtual register uses into StackMap::Indirect entries as needed. This does rely on the fact that the register allocator rematerializes. If it didn't along some code path, we could end up with more vregs than physical registers and fail to allocate.

Today, we *only* fold in the register allocator. This can create some weird effects when combined with arguments passed on the stack because we don't fold them appropriately. I have an idea how to fix that, but it needs this patch in place to work on that effectively. (There's some weird interaction with the scheduler as well, more investigation needed.)

My near term plan is to land this patch off-by-default, experiment in my local tree to identify any correctness issues and then start fixing codegen problems one by one as I find them. Once I have the live-in lowering fully working (both correctness and code quality), I'm hoping to move on to the live-on-return semantics. Note: I don't have any *known* miscompiles with this patch enabled, but I'm pretty sure I'll find at least a couple. Thus, the "experimental" tag and the fact it's off by default.

Differential Revision: https://reviews.llvm.org/D24000

llvm-svn: 280250
2016-08-31 15:12:17 +00:00
Chad Rosier 669130ffdf [SLP] Arguments should be camel case, and start with an upper case letter. NFC.
llvm-svn: 280248
2016-08-31 15:06:58 +00:00
James Molloy 3c1137c639 Revert "[SimplifyCFG] Improve FoldValueComparisonIntoPredecessors to handle more cases"
This reverts commit r280218. This *also* causes buildbot errors. Sigh. Not a successful day all around!

llvm-svn: 280239
2016-08-31 13:32:28 +00:00
James Molloy cacfc16109 Revert "[SimplifyCFG] Change the algorithm in SinkThenElseCodeToEnd"
This reverts commit r280216 - it caused buildbot failures.

llvm-svn: 280234
2016-08-31 13:16:52 +00:00
James Molloy 76c9d423a7 Revert "[SimplifyCFG] Handle tail-sinking of more than 2 incoming branches"
This reverts commit r280217. r280216 caused buildbot failures - backing out the entire chain.

llvm-svn: 280233
2016-08-31 13:16:45 +00:00
James Molloy 06a45483a1 Revert "[SimplifyCFG] Add a workaround to fix PR30188"
This reverts commit r280219. r280216 caused buildbot failures - backing out the entire chain.

llvm-svn: 280232
2016-08-31 13:16:36 +00:00
James Molloy 8a66a39cbf Revert "[SimplifyCFG] Fix bootstrap failure after r280220"
This reverts commit r280228. r280216 caused buildbot failures - backing out the entire sequence.

llvm-svn: 280231
2016-08-31 13:16:30 +00:00
James Molloy b7efa6c227 [SimplifyCFG] Fix bootstrap failure after r280220
We check that a sinking candidate is used by only one PHI node during our legality checks. However for instructions that are used by other sinking candidates our heuristic is less conservative. This can result in a candidate actually being illegal when we come to sink it because of how we sunk a predecessor. Do the used-by-only-one-PHI checks again during sinking to ensure we don't crash.

llvm-svn: 280228
2016-08-31 12:33:48 +00:00
James Molloy 171fdac7ce [SimplifyCFG] Add a workaround to fix PR30188
We're sinking stores, which is a good thing, but in the process creating selects for the store address operand, which SROA/Mem2Reg can't look through, which caused serious regressions.

The real fix is in SROA, which I'll be looking into.

llvm-svn: 280219
2016-08-31 10:46:45 +00:00
James Molloy 8e69b032e5 [SimplifyCFG] Improve FoldValueComparisonIntoPredecessors to handle more cases
A very important case is not handled here: multiple arcs to a single block with a PHI. Consider:

    a:
      %1 = icmp %b, 1
      br %1, label %c, label %e
    c:
      %2 = icmp %b, 2
      br %2, label %d, label %e
    d:
      br %e
    e:
      phi [0, %a], [1, %c], [2, %d]

FoldValueComparisonIntoPredecessors will refuse to fold this, as it doesn't know how to deal with two arcs to a common destination with different PHI values. The answer is obvious - just split all conflicting arcs.

llvm-svn: 280218
2016-08-31 10:46:39 +00:00
James Molloy c53b40b509 [SimplifyCFG] Handle tail-sinking of more than 2 incoming branches
This was a real restriction in the original version of SinkIfThenCodeToEnd. Now it's been rewritten, the restriction can be lifted.

As part of this, we handle a very common and useful case where one of the incoming branches is actually conditional. Consider:

   if (a)
     x(1);
   else if (b)
     x(2);

This produces the following CFG:

         [if]
        /    \
      [x(1)] [if]
        |     | \
        |     |  \
        |  [x(2)] |
         \    |  /
          [ end ]

[end] has two unconditional predecessor arcs and one conditional. The conditional refers to the implicit empty 'else' arc. This same pattern can also be caused by an empty default block in a switch.

We can't sink the call to x() down to end because no call to x() happens on the third incoming arc (assume that x() has sideeffects for the sake of argument; if something is safe to speculate we could indeed sink nevertheless but this cannot happen in the general case and causes many extra selects).

We are now able to detect this case and split off the unconditional arcs to a common successor:

         [if]
        /    \
      [x(1)] [if]
        |     | \
        |     |  \
        |  [x(2)] |
         \   /    |
     [sink.split] |
           \     /
           [ end ]

Now we can sink the call to x() into %sink.split. This can cause significant code simplification in many testcases.

llvm-svn: 280217
2016-08-31 10:46:33 +00:00
James Molloy 55bd04cd20 [SimplifyCFG] Change the algorithm in SinkThenElseCodeToEnd
r279460 rewrote this function to be able to handle more than two incoming edges and took pains to ensure this didn't regress anything.

This time we change the logic for determining if an instruction should be sunk. Previously we used a single pass greedy algorithm - sink instructions until one requires more than one PHI node or we run out of instructions to sink.

This had the problem that sinking instructions that had non-identical but trivially the same operands needed extra logic so we sunk them aggressively. For example:

    %a = load i32* %b          %d = load i32* %b
    %c = gep i32* %a, i32 0    %e = gep i32* %d, i32 1

Sinking %c and %e would naively require two PHI merges as %a != %d. But the loads are obviously equivalent (and maybe can't be hoisted because there is no common predecessor).

This is why we implemented the fairly complex function areValuesTriviallySame(), to look through trivial differences like this. However it's just not clever enough.

Instead, throw areValuesTriviallySame away, use pointer equality to check equivalence of operands and switch to a two-stage algorithm.

In the "scan" stage, we look at every sinkable instruction in isolation from end of block to front. If it's sinkable, we keep track of all operands that required PHI merging.

In the "sink" stage, we iteratively sink the last non-terminator in the source blocks. But when calculating how many PHIs are actually required to be inserted (to work out if we should stop or not) we remove any values that have already been sunk from the set of PHI-merges required, which allows us to be more aggressive.

This turns an algorithm with potentially recursive lookahead (looking through GEPs, casts, loads and any other instruction potentially not CSE'd) to two linear scans.

llvm-svn: 280216
2016-08-31 10:46:23 +00:00
James Molloy 923e98c232 [SimplifyCFG] Tail-merge calls with sideeffects
This was deliberately disabled during my rewrite of SinkIfThenToEnd to keep behaviour
at least vaguely consistent with the previous version and keep it as close to NFC as
I could.

There's no real reason not to merge sideeffect calls though, so let's do it! Small fixup
along the way to ensure we don't create indirect calls.

Should fix PR28964.
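
A minimal hypothetical example of what can now be merged: both arms call the same side-effecting function with different arguments, so the call sinks into the common successor with a PHI for the argument, and the callee stays a direct call:

    declare void @g(i32)

    define void @merge_calls(i1 %c) {
    entry:
      br i1 %c, label %then, label %else
    then:
      call void @g(i32 1)
      br label %end
    else:
      call void @g(i32 2)
      br label %end
    end:
      ret void
    }

    ; after tail-merging, roughly:
    ;   %arg = phi i32 [ 1, %then ], [ 2, %else ]
    ;   call void @g(i32 %arg)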

llvm-svn: 280215
2016-08-31 10:46:16 +00:00
Gor Nishanov 50d7fb974f [Coroutines] Part 10: Add coroutine promise support.
Summary:
1) CoroEarly now lowers the llvm.coro.promise intrinsic, which allows obtaining
a coroutine promise pointer from a coroutine frame and vice versa.

2) CoroFrame now interprets the Promise argument of llvm.coro.begin to
place the coroutine promise alloca at a deterministic offset from the coroutine frame.

Now, the coroutine promise example from docs\Coroutines.rst compiles and produces expected result (see test/Transform/Coroutines/ex4.ll).

Reviewers: majnemer

Subscribers: llvm-commits, mehdi_amini

Differential Revision: https://reviews.llvm.org/D23993

llvm-svn: 280184
2016-08-31 00:35:41 +00:00
Sanjay Patel 7d9ebaf337 [InstCombine] clean up InsertRangeTest; NFCI
It's much less code and easier to read if we don't duplicate
everything between the 'Inside' and not 'Inside' cases.

As noted with the FIXME, the goal is to make this vector-friendly
in a follow-up patch.

llvm-svn: 280183
2016-08-31 00:19:35 +00:00
Alina Sbirlea 3f8f7840bf [LoadStoreVectorizer] Change VectorSet to Vector to match head and tail positions. Resolves PR29148.
Summary:
LSV was using two vector sets (heads and tails) to track pairs of adjacent positions to vectorize.
A recent optimization tries to obtain the longest chain to vectorize and assumes the positions
in heads (H) and tails (T) match, which is not the case if there are multiple tails for the same head.

e.g.:
i1: store a[0]
i2: store a[1]
i3: store a[1]
Leads to:
H: i1
T: i2 i3
Instead of:
H: i1 i1
T: i2 i3
So the positions for instructions that follow i3 will have different indexes in H/T.
This patch resolves PR29148.

This issue also surfaced the fact that if the chain is too long, and TLI
returns a "not-fast" answer, the whole chain will be abandoned for
vectorization, even though a smaller one would be beneficial.
Added a testcase and FIXME for this.

Reviewers: tstellarAMD, arsenm, jlebar

Subscribers: mzolotukhin, wdng, llvm-commits

Differential Revision: https://reviews.llvm.org/D24057

llvm-svn: 280179
2016-08-30 23:53:59 +00:00
Michael Kuperstein 2954d1db77 [LoopVectorizer] Predicate instructions in blocks with several incoming edges
We don't need to limit predication to blocks that have a single incoming
edge, we just need to use the right mask.
This fixes PR30172.

Differential Revision: https://reviews.llvm.org/D24009

llvm-svn: 280148
2016-08-30 20:22:21 +00:00
Sanjay Patel b37145712e [InstCombine] replace divide-by-constant checks with asserts; NFC
These folds already have tests for scalar and vector types, except 
for the vector div-by-0 case, so I'm adding tests for that.

llvm-svn: 280115
2016-08-30 17:31:34 +00:00
Sanjay Patel a7cb477277 [InstCombine] clean up foldICmpDivConstant; NFCI
1. Fix comments to match variable names
2. Remove redundant CmpRHS variable
3. Add FIXME to replace some checks with asserts

llvm-svn: 280112
2016-08-30 17:10:49 +00:00
Chad Rosier 27ac0d8670 [Reassociate] Add additional debug output. NFC.
llvm-svn: 280090
2016-08-30 13:58:35 +00:00
James Molloy d13b1239e4 [SimplifyCFG] Properly CSE metadata in SinkThenElseCodeToEnd
This was missing, meaning the metadata in sunk instructions was potentially bogus and could cause miscompiles.

llvm-svn: 280072
2016-08-30 10:56:08 +00:00
Anna Thomas 1aea6da564 [RewriteStatepointsForGC] Update comment for same PHI node check. NFC
llvm-svn: 280052
2016-08-30 02:36:48 +00:00
Kostya Serebryany 5ac427b8e4 [sanitizer-coverage] add two more modes of instrumentation: trace-div and trace-gep, mostly useful for value-profile-based fuzzing; llvm part
llvm-svn: 280043
2016-08-30 01:12:10 +00:00
Duncan P. N. Exon Smith 5c001c367f ADT: Give ilist<T>::reverse_iterator a handle to the current node
Reverse iterators to doubly-linked lists can be simpler (and cheaper)
than std::reverse_iterator.  Make it so.

In particular, change ilist<T>::reverse_iterator so that it is *never*
invalidated unless the node it references is deleted.  This matches the
guarantees of ilist<T>::iterator.

(Note: MachineBasicBlock::iterator is *not* an ilist iterator, but a
MachineInstrBundleIterator<MachineInstr>.  This commit does not change
MachineBasicBlock::reverse_iterator, but it does update
MachineBasicBlock::reverse_instr_iterator.  See note at end of commit
message for details on bundle iterators.)

Given the list (with the Sentinel showing twice for simplicity):

     [Sentinel] <-> A <-> B <-> [Sentinel]

the following is now true:
 1. begin() represents A.
 2. begin() holds the pointer for A.
 3. end() represents [Sentinel].
 4. end() holds the poitner for [Sentinel].
 5. rbegin() represents B.
 6. rbegin() holds the pointer for B.
 7. rend() represents [Sentinel].
 8. rend() holds the pointer for [Sentinel].

The changes are #6 and #8.  Here are some properties from the old
scheme (which used std::reverse_iterator):
- rbegin() held the pointer for [Sentinel] and rend() held the pointer
  for A;
- operator*() cost two dereferences instead of one;
- converting from a valid iterator to its valid reverse_iterator
  involved a confusing increment; and
- "RI++->erase()" left RI invalid.  The unintuitive replacement was
  "RI->erase(), RE = end()".

With vector-like data structures these properties are hard to avoid
(since past-the-beginning is not a valid pointer), and don't impose a
real cost (since there's still only one dereference, and all iterators
are invalidated on erase).  But with lists, this was a poor design.

Specifically, the following code (which obviously works with normal
iterators) now works with ilist::reverse_iterator as well:

    for (auto RI = L.rbegin(), RE = L.rend(); RI != RE;)
      fooThatMightRemoveArgFromList(*RI++);

Converting between iterator and reverse_iterator for the same node uses
the getReverse() function.

    reverse_iterator iterator::getReverse();
    iterator reverse_iterator::getReverse();

Why doesn't iterator <=> reverse_iterator conversion use constructors?

In order to catch and update old code, reverse_iterator does not even
have an explicit conversion from iterator.  It wouldn't be safe because
there would be no reasonable way to catch all the bugs from the changed
semantic (see the changes at call sites that are part of this patch).

Old code used this API:

    std::reverse_iterator::reverse_iterator(iterator);
    iterator std::reverse_iterator::base();

Here's how to update from old code to new (that incorporates the
semantic change), assuming I is an ilist<>::iterator and RI is an
ilist<>::reverse_iterator:

            [Old]         ==>          [New]
    reverse_iterator(I)       (--I).getReverse()
    reverse_iterator(I)         ++I.getReverse()
  --reverse_iterator(I)           I.getReverse()
    reverse_iterator(++I)         I.getReverse()
          RI.base()          (--RI).getReverse()
          RI.base()            ++RI.getReverse()
        --RI.base()              RI.getReverse()
      (++RI).base()              RI.getReverse()
  delete &*RI, RE = end()         delete &*RI++
  RI->erase(), RE = end()         RI++->erase()

=======================================
Note: bundle iterators are out of scope
=======================================

MachineBasicBlock::iterator, also known as
MachineInstrBundleIterator<MachineInstr>, is a wrapper to represent
MachineInstr bundles.  The idea is that each operator++ takes you to the
beginning of the next bundle.  Implementing a sane reverse iterator for
this is harder than ilist.  Here are the options:
- Use std::reverse_iterator<MBB::i>.  Store a handle to the beginning of
  the next bundle.  A call to operator*() runs a loop (usually
  operator--() will be called 1 time, for unbundled instructions).
  Increment/decrement just works.  This is the status quo.
- Store a handle to the final node in the bundle.  A call to operator*()
  still runs a loop, but it iterates one time fewer (usually
  operator--() will be called 0 times, for unbundled instructions).
  Increment/decrement just works.
- Make the ilist_sentinel<MachineInstr> *always* store that it's the
  sentinel (instead of just in asserts mode).  Then the bundle iterator
  can sniff the sentinel bit in operator++().

I initially tried implementing the end() option as part of this commit,
but updating iterator/reverse_iterator conversion call sites was
error-prone.  I have a WIP series of patches that implements the final
option.

llvm-svn: 280032
2016-08-30 00:13:12 +00:00
Teresa Johnson 8c1bc986b5 [ThinLTO] Indirect call promotion fixes for promoted local functions
Summary:
Fix a couple issues limiting the application of indirect call promotion
in ThinLTO mode:
- Invoke indirect call promotion before globalopt, since it may
  eliminate imported functions which appear unreferenced.
- Invoke indirect call promotion with InLTO=true so that the PGOFuncName
  metadata is used to get the name for locals which would have been
  renamed during promotion.

Reviewers: davidxl, mehdi_amini

Subscribers: Prazek, llvm-commits, mehdi_amini

Differential Revision: https://reviews.llvm.org/D24004

llvm-svn: 280024
2016-08-29 22:46:56 +00:00
Chad Rosier 6e1eaac62b [SLP] Return a boolean value for these static helpers. NFC.
Differential Revision: https://reviews.llvm.org/D24008

llvm-svn: 280020
2016-08-29 22:09:51 +00:00
Matthew Simpson df19502b16 [LV] Move insertelement sequence after scalar definitions
After r279649 when getting a vector value from VectorLoopValueMap, we create an
insertelement sequence on-demand if the value has been scalarized instead of
vectorized. We previously inserted this insertelement sequence before the
value's first vector user. However, this insert location is problematic if that
user is the phi node of a first-order recurrence. With this patch, we move the
insertelement sequence after the last scalar instruction we created when
scalarizing the value. Thus, the value's vector definition in the new loop will
immediately follow its scalar definitions. This should fix PR30183.

Reference: https://llvm.org/bugs/show_bug.cgi?id=30183
llvm-svn: 280001
2016-08-29 20:14:04 +00:00
Vitaly Buka 3c4f6bf654 [asan] Enable new stack poisoning with store instruction by default
Reviewers: eugenis

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23968

llvm-svn: 279993
2016-08-29 19:28:34 +00:00
Tim Northover c10c33444e ASan: remove variable only used in assertions build
llvm-svn: 279990
2016-08-29 19:12:20 +00:00
Vitaly Buka 793913c7eb Use store operation to poison allocas for lifetime analysis.
Summary:
Calling __asan_poison_stack_memory and __asan_unpoison_stack_memory for small
variables is too expensive.

Code is disabled by default and can be enabled by -asan-experimental-poisoning.

PR27453

Reviewers: eugenis

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23947

llvm-svn: 279984
2016-08-29 18:17:21 +00:00
Vitaly Buka db331d8be7 [asan] Separate calculation of ShadowBytes from calculating ASanStackFrameLayout
Summary: No functional changes, just refactoring to make D23947 simpler.

Reviewers: eugenis

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23954

llvm-svn: 279982
2016-08-29 17:41:29 +00:00
David Majnemer e8fd5f9ffd [SimplifyCFG] Hoisting invalidates metadata
We forgot to remove optimization metadata when performing hoisting during
FoldTwoEntryPHINode.

This fixes PR29163.

llvm-svn: 279980
2016-08-29 17:14:08 +00:00
Anna Thomas 2bc129c5fd [StatepointsForGC] Rematerialize in the presence of PHIs
Summary:
While walking the use chain for identifying rematerializable values in RS4GC,
add the case where the current value and base value are the same PHI nodes.

This will aid rematerialization of geps and casts instead of relocating.

Reviewers: sanjoy, reames, igor

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23920

llvm-svn: 279975
2016-08-29 15:41:59 +00:00
Gor Nishanov dce9b02677 [Coroutines] Part 9: Add cleanup subfunction.
Summary:
[Coroutines] Part 9: Add cleanup subfunction.

This patch completes coroutine heap allocation elision. Now, the heap elision example from docs\Coroutines.rst compiles and produces the expected result (see test/Transform/Coroutines/ex3.ll)

Intrinsic Changes:
* coro.free gets a token parameter tying it to coro.id to allow reliably discovering all coro.frees associated with a particular coroutine.
* coro.id gets an extra parameter that points back to a coroutine function. This allows checking whether a coro.id describes the enclosing function or belongs to a different function that was later inlined.

CoroSplit now creates three subfunctions:
# f$resume - resume logic
# f$destroy - cleanup logic, followed by a deallocation code
# f$cleanup - just the cleanup code

CoroElide pass during devirtualization replaces coro.destroy with either f$destroy or f$cleanup depending whether heap elision is performed or not.

Other fixes, improvements:
* Fixed buglet in Shape::buildFrame that was not creating coro.save properly if coroutine has more than one suspend point.

* Switched to using a variable-width suspend index field (no longer limited to 32 bits; the index field can be as small as i1 or as large as i<whatever-size_t-is>)

Reviewers: majnemer

Subscribers: llvm-commits, mehdi_amini

Differential Revision: https://reviews.llvm.org/D23844

llvm-svn: 279971
2016-08-29 14:34:12 +00:00
Sanjay Patel 5c5311f4e5 [InstCombine] use m_APInt to allow icmp (and X, Y), C folds for splat constant vectors
llvm-svn: 279937
2016-08-28 18:18:00 +00:00
Sebastian Pop 4660199a33 GVN-hoist: invalidate MD cache (PR29144)
Without invalidating the entries in the MD cache we would try to access instructions
that were removed in previous iterations of hoisting.

Differential Revision: https://reviews.llvm.org/D23927

llvm-svn: 279907
2016-08-27 02:48:41 +00:00
Adam Nemet cef3314156 [Inliner] Report when inlining fails because callee's def is unavailable
Summary:
This is obviously an interesting case because it may motivate code
restructuring or LTO.

Reporting this requires instantiation of ORE in the loop where the call
sites are first gathered.  I've checked compile-time
overhead *with* -Rpass-with-hotness and the worst slow-down was 6% in
mcf and quickly tailing off.  As before without -Rpass-with-hotness
there is no overhead.

Because this could be a pretty noisy diagnostic, it is currently
qualified as 'verbose'.  As of this patch, 'verbose' diagnostics are
only emitted with -Rpass-with-hotness, i.e. when the output is expected
to be filtered.

Reviewers: eraman, chandlerc, davidxl, hfinkel

Subscribers: tejohnson, Prazek, davide, llvm-commits

Differential Revision: https://reviews.llvm.org/D23415

llvm-svn: 279860
2016-08-26 20:21:05 +00:00
Sanjay Patel 14e0e18d76 [InstCombine] add helper function for icmp (and (sh X, Y), C2), C1 ; NFC
Like other recent changes near here, the goal is to allow vector types for
all of these folds. Splitting things up makes it easier to incrementally 
enhance the code and easier to read.

llvm-svn: 279851
2016-08-26 18:28:46 +00:00
Sanjay Patel da9c56299b [InstCombine] clean up foldICmpAndConstConst(); NFC
1. Early exit to reduce indent
2. Fix comments and variable names to match
3. Reformat comments / clang-format code

llvm-svn: 279837
2016-08-26 17:15:22 +00:00
Sanjay Patel d3c7bb28be [InstCombine] add helper function for folding of icmp (and X, C2), C; NFC
llvm-svn: 279834
2016-08-26 16:42:33 +00:00
Bob Haarman 3db176410a limit the number of instructions per block examined by dead store elimination
Summary: Dead store elimination gets very expensive when large numbers of instructions need to be analyzed. This patch limits the number of instructions analyzed per store to the value of the memdep-block-scan-limit parameter (which defaults to 100). This resulted in no observed difference in performance of the generated code, and no change in the statistics for the dead store elimination pass, but improved compilation time on some files by more than an order of magnitude.

Reviewers: dexonsmith, bruno, george.burgess.iv, dberlin, reames, davidxl

Subscribers: davide, chandlerc, dberlin, davidxl, eraman, tejohnson, mbodart, llvm-commits

Differential Revision: https://reviews.llvm.org/D15537

llvm-svn: 279833
2016-08-26 16:34:27 +00:00
Sanjay Patel 311e0fabb1 [InstCombine] rename variables in foldICmpAndConstant(); NFC
llvm-svn: 279831
2016-08-26 16:14:06 +00:00
Bob Haarman 244ed8b574 test commit
llvm-svn: 279830
2016-08-26 16:00:04 +00:00
Adam Nemet 4f155b6e91 [LoopUnroll] Use OptimizationRemarkEmitter directly not via the analysis pass
We can't mark ORE (a function pass) preserved as required by the loop
passes because that is how we ensure that the required passes like
LazyBFI are all available any time ORE is used.  See the new comments in
the patch.

Instead we use it directly just like the inliner does in D22694.

As expected there is some additional overhead after removing the caching
provided by analysis passes.  The worst case, I measured was
LNT/CINT2006_ref/401.bzip2 which regresses by 12%.  As before, this only
affects -Rpass-with-hotness and not default compilation.

llvm-svn: 279829
2016-08-26 15:58:34 +00:00
Sanjay Patel f7ba0891ce [InstCombine] rename variables in foldICmpDivConstant(); NFC
Removing the redundant 'CmpRHSV' local variable exposes a bug in the caller
foldICmpShrConstant() - it was sending in the div constant instead of the
cmp constant. But I have not been able to expose this in a regression test
yet - the affected folds all appear to be handled before we ever reach this
code. I'll keep trying to find a case as I make changes to allow vector folds
in both functions.

llvm-svn: 279828
2016-08-26 15:53:01 +00:00
Tim Shen 3ad8b43cc2 [MemCpy] Add comments for r279769
Differential Revision: https://reviews.llvm.org/D23846

llvm-svn: 279778
2016-08-25 21:03:46 +00:00
Tim Shen a3dbead2d6 [MemCpy] Check for alias in performMemCpyToMemSetOptzn, instead of the identity of two operands
Summary:
This fixes PR29105. The reason is that lifetime markers create new
pointers that alias the original ones, but before this patch aliasing
was not checked in performMemCpyToMemSetOptzn.

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23846

llvm-svn: 279769
2016-08-25 19:27:26 +00:00
Wei Mi 59ca96636d [UNROLL] Postpone ScalarEvolution::forgetLoop after TripCountSC is expanded
when unrolling the runtime iteration loop.

In llvm::UnrollRuntimeLoopRemainder, if the loop to be unrolled is the inner
loop inside a loop nest, the scalar evolution needs to be dropped for its
parent loop which is done by ScalarEvolution::forgetLoop. However, we can
postpone forgetLoop to the end of UnrollRuntimeLoopRemainder so TripCountSC
expansion can still reuse existing value.

Differential Revision: https://reviews.llvm.org/D23572

llvm-svn: 279748
2016-08-25 16:17:18 +00:00
Sebastian Pop 5f0d0e60d1 GVN-hoist: fix hoistingFromAllPaths for loops (PR29034)
It is invalid to hoist stores or loads if they are not executed on all paths
from the hoisting point to the exit of the function. In the testcase, there are
paths in the loop that do not execute the stores or the loads, and so hoisting
them within the loop is unsafe.

The problem is that the current implementation of hoistingFromAllPaths is
incomplete: it walks all blocks dominated by the hoisting point, and does not
return false when the loop contains a path on which the hoisted ld/st is
not executed.
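
For illustration (a made-up example, not the testcase from the bug): the
store below executes only when cond(i) holds, so it is not executed on all
paths from the loop header to the exit of the function and must not be
hoisted:

    bool cond(int i);
    void work(int i);

    void f(int *p, int n) {
      for (int i = 0; i < n; ++i) {
        if (cond(i))
          *p = i;    // runs on some iterations only; unsafe to hoist
        work(i);
      }
    }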

Differential Revision: https://reviews.llvm.org/D23843

llvm-svn: 279732
2016-08-25 11:55:47 +00:00
Xinliang David Li cad3a995a4 [Profile] Propagate branch metadata properly in instcombine
Differential Revision: http://reviews.llvm.org/D23590

llvm-svn: 279693
2016-08-25 00:26:32 +00:00
Sanjay Patel 1655414903 [InstCombine] move foldICmpDivConstConst() contents to foldICmpDivConstant(); NFCI
There was no logic in foldICmpDivConstant, so no need for a separate function.
The code is directly copy/pasted, so further cleanups to follow.

llvm-svn: 279685
2016-08-24 23:03:36 +00:00
Sanjay Patel d398d4a39e [InstCombine] use m_APInt to allow icmp eq/ne (shr X, C2), C folds for splat constant vectors
llvm-svn: 279677
2016-08-24 22:22:06 +00:00
Matthew Simpson abd2be1e2e [LV] Unify vector and scalar maps
This patch unifies the data structures we use for mapping instructions from the
original loop to their corresponding instructions in the new loop. Previously,
we maintained two distinct maps for this purpose: WidenMap and ScalarIVMap.
WidenMap maintained the vector values each instruction from the old loop was
represented with, and ScalarIVMap maintained the scalar values each scalarized
induction variable was represented with. With this patch, all values created
for the new loop are maintained in VectorLoopValueMap.

The change allows for several simplifications. Previously, when an instruction
was scalarized, we had to insert the scalar values into vectors in order to
maintain the mapping in WidenMap. Then, if a user of the scalarized value was
also scalar, we had to extract the scalar values from the temporary vector we
created. We now avoid these unnecessary scalar-to-vector-to-scalar conversions.
If a scalarized value is used by a scalar instruction, the scalar value is used
directly. However, if the scalarized value is needed by a vector instruction,
we generate the needed insertelement instructions on-demand.

A common idiom in several locations in the code (including the scalarization
code), is to first get the vector values an instruction from the original loop
maps to, and then extract a particular scalar value. This patch adds
getScalarValue for this purpose alongside getVectorValue as an interface into
VectorLoopValueMap. These functions work together to return the requested
values if they're available or to produce them if they're not.

The mapping has also been made less permissive. Entries can be added to
VectorLoopValueMap with the new initVector and initScalar functions.
getVectorValue has been modified to return a constant reference to the mapped
entries.
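
As a rough sketch of the interface described above (the function names are
taken from this summary, but the signatures are approximations rather than
the ones in the patch):

    class VectorLoopValueMap {
    public:
      // Add entries for a value from the original loop.
      void initVector(Value *Key, const VectorParts &Entry);
      void initScalar(Value *Key, const ScalarParts &Entry);
      // Return the mapped values, producing them on demand if needed.
      const VectorParts &getVectorValue(Value *Key);
      Value *getScalarValue(Value *Key, unsigned Part, unsigned Lane);
    };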

There's no real functional change with this patch; however, in some cases we
will generate slightly different code. For example, instead of an insertelement
sequence following the definition of an instruction, it will now precede the
first use of that instruction. This can be seen in the test case changes.

Differential Revision: https://reviews.llvm.org/D23169

llvm-svn: 279649
2016-08-24 18:23:17 +00:00
Sanjoy Das ff855b6020 [SCCP] Don't delete side-effecting instructions
I'm not sure if the `!isa<CallInst>(Inst) &&
!isa<TerminatorInst>(Inst))` bit is correct either, but this fixes the
case we know is broken.

llvm-svn: 279647
2016-08-24 18:10:21 +00:00
Sanjay Patel 8e297749c1 [InstCombine] add assert and explanatory comment for fold removed in r279568; NFC
I deleted a fold from InstCombine at:
https://reviews.llvm.org/rL279568

because it (like any InstCombine to a constant?) should always happen in InstSimplify;
however, it's not obvious what the assumptions are in the remaining code.

Add a comment and assert to make it clearer.

Differential Revision: https://reviews.llvm.org/D23819

llvm-svn: 279626
2016-08-24 13:55:55 +00:00
Gil Rapaport 550148b2f6 [Loop Vectorizer] Support predication of div/rem
div/rem instructions in basic blocks that require predication currently prevent
vectorization. This patch extends the existing mechanism for predicating stores
to handle other instructions and leverages it to predicate divs and rems.
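
A hypothetical example of the kind of loop this enables (the division may
only execute when the guard holds, so it has to be predicated):

    void scale(int *a, const int *b, int n) {
      for (int i = 0; i < n; ++i)
        if (b[i] != 0)
          a[i] = a[i] / b[i];  // div guarded against division by zero
    }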

Differential Revision: https://reviews.llvm.org/D22918

llvm-svn: 279620
2016-08-24 11:37:57 +00:00
Chandler Carruth 8882346842 [PM] Introduce basic update capabilities to the new PM's CGSCC pass
manager, including both plumbing and logic to handle function pass
updates.

There are three fundamentally tied changes here:
1) Plumbing *some* mechanism for updating the CGSCC pass manager as the
   CG changes while passes are running.
2) Changing the CGSCC pass manager infrastructure to have support for
   the underlying graph to mutate mid-pass run.
3) Actually updating the CG after function passes run.

I can separate them if necessary, but I think its really useful to have
them together as the needs of #3 drove #2, and that in turn drove #1.

The plumbing technique is to extend the "run" method signature with
extra arguments. We provide the call graph that intrinsically is
available as it is the basis of the pass manager's IR units, and an
output parameter that records the results of updating the call graph
during an SCC passes's run. Note that "...UpdateResult" isn't a *great*
name here... suggestions very welcome.
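
As a rough sketch, an SCC pass's run method now looks approximately like
this (MySCCPass is a placeholder name; exact parameter names may differ):

    PreservedAnalyses MySCCPass::run(LazyCallGraph::SCC &C,
                                     CGSCCAnalysisManager &AM,
                                     LazyCallGraph &CG,
                                     CGSCCUpdateResult &UR) {
      // Transform the functions in C; any mutation of the call graph made
      // while running is recorded in UR so the outer walk can adjust.
      return PreservedAnalyses::all();
    }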

I tried a pretty frustrating number of different data structures and such
for the innards of the update result. Every other one failed for one
reason or another. Sometimes I just couldn't keep the layers of
complexity right in my head. The thing that really worked was to just
directly provide access to the underlying structures used to walk the
call graph so that their updates could be informed by the *particular*
nature of the change to the graph.

The technique for how to make the pass management infrastructure cope
with mutating graphs was also something that took a really, really large
number of iterations to get to a place where I was happy. Here are some
of the considerations that drove the design:

- We operate at three levels within the infrastructure: RefSCC, SCC, and
  Node. In each case, we are working bottom up and so we want to
  continue to iterate on the "lowest" node as the graph changes. Look at
  how we iterate over nodes in an SCC running function passes as those
  function passes mutate the CG. We continue to iterate on the "lowest"
  SCC, which is the one that continues to contain the function just
  processed.

- The call graph structure re-uses SCCs (and RefSCCs) during mutation
  events for the *highest* entry in the resulting new subgraph, not the
  lowest. This means that it is necessary to continually update the
  current SCC or RefSCC as it shifts. This is really surprising and
  subtle, and took a long time for me to work out. I actually tried
  changing the call graph to provide the opposite behavior, and it
  breaks *EVERYTHING*. The graph update algorithms are really deeply
  tied to this particular pattern.

- When SCCs or RefSCCs are split apart and refined and we continually
  re-pin our processing to the bottom one in the subgraph, we need to
  enqueue the newly formed SCCs and RefSCCs for subsequent processing.
  Queuing them presents a few challenges:
  1) SCCs and RefSCCs use wildly different iteration strategies at
     a high level. We end up needing to converge them on worklist
     approaches that can be extended in order to be able to handle the
     mutations.
  2) The order of the enqueuing needs to remain bottom-up post-order so
     that we don't get surprising order of visitation for things like
     the inliner.
  3) We need the worklists to have set semantics so we don't duplicate
     things endlessly. We don't need a *persistent* set though because
     we always keep processing the bottom node!!!! This is super, super
     surprising to me and took a long time to convince myself this is
     correct, but I'm pretty sure it is... Once we sink down to the
     bottom node, we can't re-split out the same node in any way, and
     the postorder of the current queue is fixed and unchanging.
  4) We need to make sure that the "current" SCC or RefSCC actually gets
     enqueued here such that we re-visit it because we continue
     processing a *new*, *bottom* SCC/RefSCC.

- We also need the ability to *skip* SCCs and RefSCCs that get merged
  into a larger component. We even need the ability to skip *nodes* from
  an SCC that are no longer part of that SCC.

This led to the design you see in the patch which uses SetVector-based
worklists. The RefSCC worklist is always empty until an update occurs
and is just used to handle those RefSCCs created by updates as the
others don't even exist yet and are formed on-demand during the
bottom-up walk. The SCC worklist is pre-populated from the RefSCC, and
we push new SCCs onto it and blacklist existing SCCs on it to get the
desired processing.

We then *directly* update these when updating the call graph as I was
never able to find a satisfactory abstraction around the update
strategy.

Finally, we need to compute the updates for function passes. This is
mostly used as an initial customer of all the update mechanisms to drive
their design to at least cover some real set of use cases. There are
a bunch of interesting things that came out of doing this:

- It is really nice to do this a function at a time because that
  function is likely hot in the cache. This means we want even the
  function pass adaptor to support online updates to the call graph!

- To update the call graph after arbitrary function pass mutations is
  quite hard. We have to build a fairly comprehensive set of
  data structures and then process them. Fortunately, some of this code
  is related to the code for building the call graph in the first place.
  Unfortunately, very little of it makes any sense to share because the
  nature of what we're doing is so very different. I've factored out the
  one part that made sense at least.

- We need to transfer these updates into the various structures for the
  CGSCC pass manager. Once those were more sanely worked out, this
  became relatively easier. But some of those needs necessitated changes
  to the LazyCallGraph interface to make it significantly easier to
  extract the changed SCCs from an update operation.

- We also need to update the CGSCC analysis manager as the shape of the
  graph changes. When an SCC is merged away we need to clear analyses
  associated with it from the analysis manager which we didn't have
  support for in the analysis manager infrastructure. New SCCs are easy!
  But then we have the case that the original SCC has its shape changed
  but remains in the call graph. There we need to *invalidate* the
  analyses associated with it.

- We also need to invalidate analyses after we *finish* processing an
  SCC. But the analyses we need to invalidate here are *only those for
  the newly updated SCC*!!! Because we only continue processing the
  bottom SCC, if we split SCCs apart the original one gets invalidated
  once when its shape changes and is not processed farther so its
  analyses will be correct. It is the bottom SCC which continues being
  processed and needs to have the "normal" invalidation done based on
  the preserved analyses set.

All of this is mostly background and context for the changes here.

Many thanks to all the reviewers who helped here. Especially Sanjoy who
caught several interesting bugs in the graph algorithms, David, Sean,
and others who all helped with feedback.

Differential Revision: http://reviews.llvm.org/D21464

llvm-svn: 279618
2016-08-24 09:37:14 +00:00
Gor Nishanov 4570e26e68 [Coroutines] Fix unused var warning in release build
llvm-svn: 279610
2016-08-24 05:20:30 +00:00
Gor Nishanov 241b041fba [Coroutines] Part 8: Coroutine Frame Building algorithm
Summary:
This patch adds the coroutine frame building algorithm. Now, simple coroutines such as ex0.ll and ex1.ll (the first examples from docs\Coroutines.rst) can be compiled.

Documentation and overview is here: http://llvm.org/docs/Coroutines.html.

Upstreaming sequence (rough plan)
1.Add documentation. (https://reviews.llvm.org/D22603)
2.Add coroutine intrinsics. (https://reviews.llvm.org/D22659)
...

7. Split coroutine into subfunctions. (https://reviews.llvm.org/D23461)
8. Coroutine Frame Building algorithm  <= we are here
9. Add f.cleanup subfunction.
10+. The rest of the logic

Reviewers: majnemer

Subscribers: mehdi_amini, llvm-commits

Differential Revision: https://reviews.llvm.org/D23586

llvm-svn: 279609
2016-08-24 04:44:35 +00:00
David Callahan 012d1c0766 [ADCE] Add control dependence computation
Summary:
This is part of a series of patches to evolve ADCE.cpp to support
removal of unnecessary control flow.

This patch adds the ability to compute control dependences using
the iterated dominance frontier. We extend the liveness propagation
to alternate between data and control dependences until convergence.
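
Conceptually (a simplified sketch, not the actual ADCE code; PDT is assumed
to be the PostDominatorTree and B the block in question), the blocks B is
control-dependent on are the iterated dominance frontier of {B} over the
post-dominator tree:

    SmallPtrSet<BasicBlock *, 4> Defs;
    Defs.insert(B);
    ReverseIDFCalculator IDF(PDT);  // IDF over the post-dominator tree
    IDF.setDefiningBlocks(Defs);
    SmallVector<BasicBlock *, 8> CDBlocks;
    IDF.calculate(CDBlocks);        // B is control-dependent on these blocks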

Modify the pass manager integration to compute the post-dominator tree
needed for the iterated dominance frontier.

We still force all terminators live for now until we add code to
handle removing control flow in a later patch.

No changes to effective behavior with this patch

Previous patches:

D23225 [ADCE] Modify data structures to support removing control flow
D23065 [ADCE] Refactor anticipating new functionality (NFC)
D23102 [ADCE] Refactoring for new functionality (NFC)

Reviewers: nadav, majnemer, mehdi_amini

Subscribers: twoh, freik, llvm-commits

Differential Revision: https://reviews.llvm.org/D23559

llvm-svn: 279594
2016-08-24 00:10:06 +00:00
Michael Zolotukhin bd63d436c1 [LoopUnroll] By default disable unrolling when optimizing for size.
Summary:
In clang commit r268509 we started to invoke loop-unroll pass from the
driver even under -Os. However, we happen to not initialize optsize
thresholds properly, which is fixed with this change.

r268509 led to some big compile time regressions, because we started to
unroll some loops that we didn't unroll before. With this change I hope
to recover most of the regressions. We still are slightly slower than
before, because we do some checks here and there in loop-unrolling
before we bail out, but at least the slowdown is not that huge now.

Reviewers: hfinkel, chandlerc

Subscribers: mzolotukhin, llvm-commits

Differential Revision: https://reviews.llvm.org/D23388

llvm-svn: 279585
2016-08-23 23:13:15 +00:00
Sanjay Patel d64e988701 [InstCombine] use local variables for repeated values; NFCI
llvm-svn: 279578
2016-08-23 22:05:55 +00:00
Sanjay Patel dcac0dfca9 [InstCombine] move foldICmpShrConstConst() contents to foldICmpShrConst(); NFCI
There will only be 3 lines of code in foldICmpShrConst() when the cleanup is done,
so it doesn't make much sense to have a separate function for a single fold.

llvm-svn: 279575
2016-08-23 21:25:13 +00:00
Sanjay Patel 6ef22da9ec [InstCombine] remove icmp shr folds that are already handled by InstSimplify
AFAICT, these already worked in all cases for scalar types, and I enhanced
the code to work for vector types in:
https://reviews.llvm.org/rL279543

llvm-svn: 279568
2016-08-23 21:01:35 +00:00
Matthew Simpson df2ab917ad [SLP] Avoid signed integer overflow
The test case included with r279125 exposed an existing signed integer
overflow. Since getTreeCost can return INT_MAX, we can't sum this cost together
with other costs, such as getReductionCost.

This patch removes the possibility of assigning a cost of INT_MAX. Since we
were previously using INT_MAX as an indicator for "should not vectorize", we
now explicitly check this condition with "isTreeTinyAndNotFullyVectorizable"
before computing a cost.
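
To make the hazard concrete with made-up numbers: adding any positive cost
to the old INT_MAX sentinel is signed overflow, which is undefined behavior,
so the decision is now made before any costs are summed:

    int TreeCost = INT_MAX;  // old sentinel for "should not vectorize"
    int ReductionCost = 4;   // hypothetical extra cost
    // int Total = TreeCost + ReductionCost;  // signed overflow: UB
    // New scheme: bail out via isTreeTinyAndNotFullyVectorizable() instead
    // of ever forming such a sum.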

This patch adds a run-line to the test case used for r279125 that ensures we
don't vectorize. Previously, this line would vectorize the test case by chance
due to undefined behavior in the cost calculation.

Differential Revision: https://reviews.llvm.org/D23723

llvm-svn: 279562
2016-08-23 20:48:50 +00:00
Xinliang David Li 812288a5b4 Possible fix of test failures on win bots
llvm-svn: 279542
2016-08-23 18:00:41 +00:00
Xinliang David Li dc49140b44 [Profile] refactor meta data copying/swapping code
Differential Revision: http://reviews.llvm.org/D23619

llvm-svn: 279523
2016-08-23 15:39:03 +00:00
Daniel Berlin ea02eee18f GVNHoist: Use the pass version of MemorySSA and preserve it.
Summary: GVNHoist: Use the pass version of MemorySSA and preserve it.

Reviewers: sebpop, george.burgess.iv

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23782

llvm-svn: 279504
2016-08-23 05:42:41 +00:00
George Burgess IV 7f414b90ab [MemorySSA] Remove unused field. NFC.
Given that we're not currently using blocker info, and it is unclear whether
or not we will end up using it, don't waste 8 (or 4) bytes of memory
per path node.

llvm-svn: 279493
2016-08-22 23:40:01 +00:00
Sanjay Patel c9196c4488 [InstCombine] change param type from Instruction to BinaryOperator for icmp helpers; NFCI
This saves some casting in the helper functions and eases some further refactoring.

llvm-svn: 279478
2016-08-22 21:24:29 +00:00
Tim Shen f2187ed321 [GraphTraits] Replace all NodeType usage with NodeRef
This should finish the GraphTraits migration.

Differential Revision: http://reviews.llvm.org/D23730

llvm-svn: 279475
2016-08-22 21:09:30 +00:00
Sanjay Patel a392049419 [InstCombine] use m_APInt to allow icmp (shr exact X, Y), 0 folds for splat constant vectors
llvm-svn: 279472
2016-08-22 20:45:06 +00:00
Daniel Berlin 3d512a2dc2 MSSA: Factor out phi node placement
llvm-svn: 279462
2016-08-22 19:14:30 +00:00
Daniel Berlin 868381bff6 MSSA: Only rename accesses whose defining access is nullptr
llvm-svn: 279461
2016-08-22 19:14:16 +00:00
James Molloy 5bf2114265 [SimplifyCFG] Rewrite SinkThenElseCodeToEnd
[Recommitting now an unrelated assertion in SROA is sorted out]

The new version has several advantages:
  1) IMSHO it's more readable and neater
  2) It handles loads and stores properly
  3) It can handle any number of incoming blocks rather than just two. I'll be taking advantage of this in a followup patch.

With this change we can now finally sink load-modify-store idioms such as:

    if (a)
      return *b += 3;
    else
      return *b += 4;

    =>

    %z = load i32, i32* %y
    %.sink = select i1 %a, i32 3, i32 4
    %b = add i32 %z, %.sink
    store i32 %b, i32* %y
    ret i32 %b

When this works for switches it'll be even more powerful.

Round 4. This time we should handle all instructions correctly, and not replace any operands that need to be constant with variables.

This was really hard to determine safely, so the helper function should be put into the Instruction API. I'll do that as a followup.

llvm-svn: 279460
2016-08-22 19:07:15 +00:00
James Molloy 0fee97f8ba [SROA] Remove incorrect assertion
Confirmed with aprantl, this assertion is incorrect - code can get here (for example 80-bit FP types) and if it does it's benign. This is exposed by a completely unrelated patch of mine, so stop the compiler falling over.

Original differential: http://reviews.llvm.org/D16187
aprantl's advice to remove assertion: http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20160815/382129.html

llvm-svn: 279454
2016-08-22 18:49:42 +00:00
Jun Bum Lim ec8b8cc595 [InstCombine] Allow sinking from unique predecessor with multiple edges
Summary: We can allow sinking if the single user block has only one unique predecessor, regardless of the number of edges. Note that a switch statement with multiple cases can have the same destination.
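
For illustration, a made-up example: both case labels jump to the same
destination, so that block has one unique predecessor (the switch block)
reached along two edges:

    int f(int x, int a) {
      int v = a * 7;    // candidate for sinking into the shared case block
      switch (x) {
      case 0:
      case 1:
        return v + 1;   // only user of v; block has one unique predecessor
      default:
        return 0;
      }
    }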

Reviewers: mcrosier, majnemer, spatel, reames

Subscribers: reames, mcrosier, llvm-commits

Differential Revision: https://reviews.llvm.org/D23722

llvm-svn: 279448
2016-08-22 18:21:56 +00:00
James Molloy 475f4a763f Revert "[SimplifyCFG] Rewrite SinkThenElseCodeToEnd"
This reverts commit r279443. It caused buildbot failures.

llvm-svn: 279447
2016-08-22 18:13:12 +00:00
James Molloy 353052698a [SimplifyCFG] Rewrite SinkThenElseCodeToEnd
The new version has several advantages:
  1) IMSHO it's more readable and neater
  2) It handles loads and stores properly
  3) It can handle any number of incoming blocks rather than just two. I'll be taking advantage of this in a followup patch.

With this change we can now finally sink load-modify-store idioms such as:

    if (a)
      return *b += 3;
    else
      return *b += 4;

    =>

    %z = load i32, i32* %y
    %.sink = select i1 %a, i32 3, i32 4
    %b = add i32 %z, %.sink
    store i32 %b, i32* %y
    ret i32 %b

When this works for switches it'll be even more powerful.

Round 4. This time we should handle all instructions correctly, and not replace any operands that need to be constant with variables.

This was really hard to determine safely, so the helper function should be put into the Instruction API. I'll do that as a followup.

llvm-svn: 279443
2016-08-22 17:40:23 +00:00
Artur Pilipenko b78ad9d41f Revert -r278269 [IndVarSimplify] Eliminate zext of a signed IV when the IV is known to be non-negative
This change needs to be reverted in order to revert -r278267, which caused a performance regression on MultiSource/Benchmarks/TSVC/Symbolics-flt/Symbolics-flt from LNT and some other benchmarks.

See comments on https://reviews.llvm.org/D18777 for details.

llvm-svn: 279432
2016-08-22 13:12:07 +00:00
Vitaly Buka 0672a27bb5 [asan] Use 1 byte aligned stores to poison shadow memory
Summary: r279379 introduced a crash on the ARM 32-bit bot. I suspect this is an alignment issue.

Reviewers: eugenis

Subscribers: llvm-commits, aemerson

Differential Revision: https://reviews.llvm.org/D23762

llvm-svn: 279413
2016-08-22 04:16:14 +00:00
Sanjay Patel 643d21a62c [InstCombine] use m_APInt to allow icmp (shl X, Y), C folds for splat constant vectors, part 4
This concludes the fixes for icmp+shl in this series:
https://reviews.llvm.org/rL279339
https://reviews.llvm.org/rL279398
https://reviews.llvm.org/rL279399

llvm-svn: 279401
2016-08-21 17:10:07 +00:00
Sanjay Patel 7ffcde7422 [InstCombine] use m_APInt to allow icmp (shl X, Y), C folds for splat constant vectors, part 3
This is a partial enablement (move the ConstantInt guard down).

llvm-svn: 279399
2016-08-21 16:35:34 +00:00
Sanjay Patel 7e09f13fed [InstCombine] use m_APInt to allow icmp (shl X, Y), C folds for splat constant vectors, part 2
This is a partial enablement (move the ConstantInt guard down).

llvm-svn: 279398
2016-08-21 16:28:22 +00:00
Sanjay Patel 792636603f [InstCombine] use APInt instead of ConstantInt in isSignBitCheck(); NFCI
The callers still have ConstantInt guards, so there is no functional change
intended from this change. But relaxing the callers will allow more folds
for vector types.

llvm-svn: 279396
2016-08-21 15:07:45 +00:00
Vitaly Buka 1f9e135023 [asan] Minimize code size by using __asan_set_shadow_* for large blocks
Summary:
We can insert a function call instead of multiple store operations.
Current default is blocks larger than 64 bytes.
Changes are hidden behind -asan-experimental-poisoning flag.

PR27453

Differential Revision: https://reviews.llvm.org/D23711

llvm-svn: 279383
2016-08-20 20:23:50 +00:00
Vitaly Buka 3455b9b8bc [asan] Initialize __asan_set_shadow_* callbacks
Summary:
Callbacks are not being used yet.

PR27453

Reviewers: kcc, eugenis

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23634

llvm-svn: 279380
2016-08-20 18:34:39 +00:00
Vitaly Buka 186280daa5 [asan] Optimize store size in FunctionStackPoisoner::poisonRedZones
Summary: Reduce store size to avoid leading and trailing zeros.

Reviewers: kcc, eugenis

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23648

llvm-svn: 279379
2016-08-20 18:34:36 +00:00
Vitaly Buka 5b4f12176c [asan] Cleanup instrumentation of dynamic allocas
Summary:
Extract instrumenting dynamic allocas into separate method.
Rename asan-instrument-allocas -> asan-instrument-dynamic-allocas

Differential Revision: https://reviews.llvm.org/D23707

llvm-svn: 279376
2016-08-20 17:22:27 +00:00
Vitaly Buka f9fd63ad39 [asan] Add support of lifetime poisoning into ComputeASanStackFrameLayout
Summary:
We are going to combine poisoning of red zones and scope poisoning.

PR27453

Reviewers: kcc, eugenis

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23623

llvm-svn: 279373
2016-08-20 16:48:24 +00:00
Matthew Simpson 235e479984 Reapply "[SLP] Initialize VectorizedValue when gathering"
The test case included in r279125 exposed existing undefined behavior in the
SLP vectorizer that it did not introduce. This patch reapplies the original
patch, but modifies the test case to avoid hitting the undefined behavior. This
allows us to close PR28330 while keeping the UBSan bot happy. The undefined
behavior the original test uncovered will be addressed in a follow-on patch.

Reference: https://llvm.org/bugs/show_bug.cgi?id=28330
llvm-svn: 279370
2016-08-20 14:49:02 +00:00
Matthew Simpson 2429656aa9 [SLP] Add command line option for minimum tree size (NFC)
llvm-svn: 279369
2016-08-20 14:10:06 +00:00
Vitaly Buka cc7db13bf0 Revert "[SLP] Initialize VectorizedValue when gathering" to fix ubsan bot.
This reverts commit r279125.

https://reviews.llvm.org/D23410

llvm-svn: 279363
2016-08-20 07:09:39 +00:00
Sanjay Patel fa7de606c4 [InstCombine] use m_APInt to allow icmp (shl X, Y), C folds for splat constant vectors, part 1
This is a partial enablement (move the ConstantInt guard down) because there are many
different folds here and one of the later ones will require reworking 'isSignBitCheck'.

llvm-svn: 279339
2016-08-19 22:33:26 +00:00
Daniel Berlin 11da66fc10 Partially revert 279331, as we modify this instruction in the loop
llvm-svn: 279335
2016-08-19 22:18:38 +00:00
Vitaly Buka e149b392a8 Revert "[asan] Add support of lifetime poisoning into ComputeASanStackFrameLayout"
This reverts commit r279020.

Speculative revert in hope to fix asan test on arm.

llvm-svn: 279332
2016-08-19 22:12:58 +00:00
Daniel Berlin a36f46363f Convert some depth first traversals to depth_first
llvm-svn: 279331
2016-08-19 22:06:23 +00:00
Tim Shen b5e0f5ac95 [GraphTraits] Make nodes_iterator dereference to NodeType*/NodeRef
Currently nodes_iterator may dereference to a NodeType* or a NodeType&. Make them all dereference to NodeType*, which is NodeRef later.

Differential Revision: https://reviews.llvm.org/D23704
Differential Revision: https://reviews.llvm.org/D23705

llvm-svn: 279326
2016-08-19 21:20:13 +00:00
Reid Kleckner 98a48afa5d Revert "[SimplifyCFG] Rewrite SinkThenElseCodeToEnd"
This reverts commit r279229. It breaks intrinsic function calls in
diamonds.

llvm-svn: 279313
2016-08-19 20:22:39 +00:00
Sanjay Patel 7a104615c5 [InstCombine] remove an icmp fold that is already handled by InstSimplify
Specifically, this is done near the end of "SimplifyICmpInst" using 
computeKnownBits() as the broader solution. There are even vector
tests (yay!) for this in test/Transforms/InstSimplify/compare.ll.

I considered putting an assert here instead of just deleting, but
then we could assert every possible fold in InstSimplify in 
InstCombine, so...less is more?

llvm-svn: 279300
2016-08-19 19:03:07 +00:00
Sanjay Patel e38e79c3e6 [InstCombine] use local variables to reduce code in foldICmpShlConstant; NFC
llvm-svn: 279282
2016-08-19 17:34:05 +00:00
Sanjay Patel 38b7506f75 [InstCombine] rename variables in foldICmpShlConstant(); NFC
llvm-svn: 279279
2016-08-19 17:20:37 +00:00
Vitaly Buka 170dede75d Revert "[asan] Optimize store size in FunctionStackPoisoner::poisonRedZones"
This reverts commit r279178.

Speculative revert in hope to fix asan crash on arm.

llvm-svn: 279277
2016-08-19 17:15:38 +00:00
Vitaly Buka c8f4d69c82 Revert "[asan] Fix size of shadow incorrectly calculated in r279178"
This reverts commit r279222.

Speculative revert in hope to fix asan crash on arm.

llvm-svn: 279276
2016-08-19 17:15:33 +00:00
Reid Kleckner a871d3872a Fix regression in InstCombine introduced by r278944
The intended transform is:
  // Simplify icmp eq (or (ptrtoint P), (ptrtoint Q)), 0
  // -> and (icmp eq P, null), (icmp eq Q, null).

P and Q are both pointer types, but may have different types. We need
two calls to getNullValue() to make the icmps.
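
A minimal sketch of the corrected fold (assuming an IRBuilder<> named
Builder; since P and Q may be pointers of different types, each gets its own
null constant):

    Value *PNull = Builder.CreateICmpEQ(P, Constant::getNullValue(P->getType()));
    Value *QNull = Builder.CreateICmpEQ(Q, Constant::getNullValue(Q->getType()));
    Value *Folded = Builder.CreateAnd(PNull, QNull);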

llvm-svn: 279271
2016-08-19 16:53:18 +00:00
David Majnemer 5554edabef [CloneFunction] Don't remove unrelated nodes from the CGSCC
The CGSCC infrastructure uses a WeakVH to track call sites.  RAUWing a call
within a function can result in that WeakVH getting confused about whether
or not the call site is still around.

llvm-svn: 279268
2016-08-19 16:37:40 +00:00
Sanjay Patel a867afe094 [InstCombine] use m_APInt to allow icmp (shl 1, Y), C folds for splat constant vectors
llvm-svn: 279266
2016-08-19 16:12:16 +00:00
Sanjay Patel 57b12d3876 [InstCombine] use m_APInt to allow icmp X, C folds for splat constant vectors
Of course, we really need to refactor and fix all of the cmp predicates, 
but this one is interesting because without it, we later perform an 
information-losing transform of icmp (shl 1, Y), C, and we can't recover
the better fold.

llvm-svn: 279263
2016-08-19 15:40:44 +00:00
Benjamin Kramer 96fcf5df03 [LoopVectorize] Don't copy std::vector in for-range loop.
llvm-svn: 279233
2016-08-19 12:44:24 +00:00
James Molloy 11a1936b70 [SimplifyCFG] Rewrite SinkThenElseCodeToEnd
The new version has several advantages:
  1) IMSHO it's more readable and neater
  2) It handles loads and stores properly
  3) It can handle any number of incoming blocks rather than just two. I'll be taking advantage of this in a followup patch.

With this change we can now finally sink load-modify-store idioms such as:

    if (a)
      return *b += 3;
    else
      return *b += 4;

    =>

    %z = load i32, i32* %y
    %.sink = select i1 %a, i32 3, i32 4
    %b = add i32 %z, %.sink
    store i32 %b, i32* %y
    ret i32 %b

When this works for switches it'll be even more powerful.

llvm-svn: 279229
2016-08-19 10:10:27 +00:00
Vitaly Buka b81960a6c8 [asan] Fix size of shadow incorrectly calculated in r279178
Summary: r279178 generates 8 times more stores than necessary.

Reviewers: eugenis

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23708

llvm-svn: 279222
2016-08-19 08:33:53 +00:00
Xinliang David Li 63248ab888 [Profile] Fix edge count read bug
Use uint64_t to avoid value truncation before scaling.

llvm-svn: 279213
2016-08-19 06:31:45 +00:00
Xinliang David Li 2c9336823c [Profile] Simple code refactoring for reuse /NFC
llvm-svn: 279209
2016-08-19 05:31:33 +00:00
Vitaly Buka aa654292bd [asan] Optimize store size in FunctionStackPoisoner::poisonRedZones
Summary: Reduce store size to avoid leading and trailing zeros.

Reviewers: kcc, eugenis

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23648

llvm-svn: 279178
2016-08-18 23:51:15 +00:00
Sanjay Patel 98cd99dfc6 [InstCombine] add helper function for folds of icmp (shl 1, Y), C; NFCI
Clean up the existing code by:
1. Renaming variables
2. Adding local variables
3. Making it vector-safe

This is still guarded by a ConstantInt check, so no functional change is intended.
But this should be ready to go: if we move the ConstantInt check down, all of
these folds should do the right thing for vector types.

llvm-svn: 279150
2016-08-18 21:28:30 +00:00
Amaury Sechet 763c59dc9a Make ctlz and cttz zero undef when the operand cannot be zero in InstCombine
Summary: Also add the popcount(n) == bitsize(n) -> n == -1 transformation.

Reviewers: majnemer, spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23134

llvm-svn: 279141
2016-08-18 20:43:50 +00:00
Sanjay Patel 40e8ca46ad [InstCombine] use m_APInt to allow icmp (trunc X, Y), C folds for splat constant vectors
This is a sibling of:
https://reviews.llvm.org/rL278859
https://reviews.llvm.org/rL278935
https://reviews.llvm.org/rL278945
https://reviews.llvm.org/rL279066
https://reviews.llvm.org/rL279077
https://reviews.llvm.org/rL279101

llvm-svn: 279133
2016-08-18 20:28:54 +00:00
Sanjay Patel 5f4ce4e23d [InstCombine] clean up foldICmpTruncConstant(); NFCI
1. Fix variable names
2. Add local variables to reduce code

llvm-svn: 279132
2016-08-18 20:25:16 +00:00
Matthew Simpson 11db6b6b8c [SLP] Initialize VectorizedValue when gathering
We abort building vectorizable trees in some cases (e.g., if the maximum
recursion depth is reached, if the region size is too large, etc.). If this
happens for a reduction, we can be left with a root entry that needs to be
gathered. For these cases, we need make sure we actually set VectorizedValue to
the resulting vector.

This patch ensures we properly set VectorizedValue, and it also ensures the
insertelement sequence generated for the gathers is inserted at the correct
location.

Reference: https://llvm.org/bugs/show_bug.cgi?id=28330
Differential Revision: https://reviews.llvm.org/D23410

llvm-svn: 279125
2016-08-18 19:50:32 +00:00
Sanjay Patel fa5ca2bf46 [InstCombine] use m_APInt to allow icmp (udiv X, Y), C folds for splat constant vectors
This is a sibling of:
https://reviews.llvm.org/rL278859
https://reviews.llvm.org/rL278935
https://reviews.llvm.org/rL278945
https://reviews.llvm.org/rL279066
https://reviews.llvm.org/rL279077

llvm-svn: 279101
2016-08-18 17:55:59 +00:00
Sanjay Patel 12a4105647 [InstCombine] clean up foldICmpUDivConstant; NFC
1. Better variable names
2. Remove unnecessary check of ConstantInt

llvm-svn: 279094
2016-08-18 17:37:26 +00:00
Artur Pilipenko 615b820af6 CVP. Turn marking adds as no wrap (introduced by r278107) off by default
It causes a regression on our internal benchmark. Introduce cvp-dont-process flag and set it off by default while investigating the regression.

llvm-svn: 279082
2016-08-18 16:08:35 +00:00
Davide Italiano d1279df752 [IRCE] Switch over to LLVM_DUMP_METHOD. NFCI.
llvm-svn: 279079
2016-08-18 15:55:49 +00:00
Sanjay Patel 6347807f87 [InstCombine] use m_APInt to allow icmp (mul X, Y), C folds for splat constant vectors
This is a sibling of:
https://reviews.llvm.org/rL278859
https://reviews.llvm.org/rL278935
https://reviews.llvm.org/rL278945
https://reviews.llvm.org/rL279066

llvm-svn: 279077
2016-08-18 15:44:44 +00:00
Sanjay Patel 5b112845da [InstCombine] use APInt in isSignTest instead of ConstantInt; NFC
This will enable vector splat folding, but NFC until the callers
have their ConstantInt restrictions removed.

llvm-svn: 279072
2016-08-18 14:59:14 +00:00
Sanjay Patel 4c5e60d95c [InstCombine] use m_APInt to allow icmp (xor X, Y), C folds for splat constant vectors
This is a sibling of:
https://reviews.llvm.org/rL278859
https://reviews.llvm.org/rL278935
https://reviews.llvm.org/rL278945

llvm-svn: 279066
2016-08-18 14:10:48 +00:00
Kostya Serebryany 524c3f32e7 [sanitizer-coverage/libFuzzer] instrument comparisons with __sanitizer_cov_trace_cmp[1248] instead of __sanitizer_cov_trace_cmp, don't pass the comparison type to save a bit of performance. Use these new callbacks in libFuzzer
llvm-svn: 279027
2016-08-18 01:25:28 +00:00
Vitaly Buka d5ec14989d [asan] Add support of lifetime poisoning into ComputeASanStackFrameLayout
Summary:
We are going to combine poisoning of red zones and scope poisoning.

PR27453

Reviewers: kcc, eugenis

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23623

llvm-svn: 279020
2016-08-18 00:56:58 +00:00
Haicheng Wu e787763275 [LoopUnroll] Move a simple check earlier. NFC.
Move the check of CallInst earlier to skip expensive recursive operations.

Differential Revision: https://reviews.llvm.org/D23611

llvm-svn: 278998
2016-08-17 22:42:58 +00:00
Tim Shen 5c0c063ad5 [LV] Move LoopBodyTraits to a better place, and add comment for simplifying LoopBlocksTraversal. NFC.
Summary: I later (after r278573) found that LoopIterator.h has some overlapping with LoopBodyTraits. It's good to use LoopBodyTraits because a *Traits struct is algorithm independent.

Reviewers: anemet, nadav, mkuper

Subscribers: mzolotukhin, llvm-commits

Differential Revision: https://reviews.llvm.org/D23529

llvm-svn: 278996
2016-08-17 22:20:07 +00:00
Justin Bogner cd1d5aaf2e Replace a few more "fall through" comments with LLVM_FALLTHROUGH
Follow up to r278902. I had missed "fall through", with a space.

llvm-svn: 278970
2016-08-17 20:30:52 +00:00
Sanjay Patel daffec91ef [InstCombine] more clean up of foldICmpXorConstant(); NFCI
Use m_APInt for the xor constant, but this is all still guarded by the initial
ConstantInt check, so no vector types should make it in here.

llvm-svn: 278957
2016-08-17 19:45:18 +00:00
Sanjay Patel 6d5f448746 [InstCombine] clean up foldICmpXorConstant(); NFCI
1. Change variable names
2. Use local variables to reduce code
3. Early exit to reduce indent

llvm-svn: 278955
2016-08-17 19:23:42 +00:00
Sanjay Patel 63e14a07e8 [InstCombine] use m_APInt to allow icmp (or X, Y), C folds for splat constant vectors
This is a sibling of:
https://reviews.llvm.org/rL278859
https://reviews.llvm.org/rL278935

llvm-svn: 278945
2016-08-17 16:38:57 +00:00
Sanjay Patel 943e92efde [InstCombine] clean up foldICmpOrConstant(); NFCI
1. Change variable names
2. Use local variables to reduce code
3. Use ? instead of if/else
4. Use the APInt variable instead of 'RHS' so the removal of the FIXME code will be direct

llvm-svn: 278944
2016-08-17 16:30:43 +00:00
Chad Rosier ea7e4647db Revert "Reassociate: Reprocess RedoInsts after each inst".
This reverts commit r258830, which introduced a bug described in PR28367.

PR28367

llvm-svn: 278938
2016-08-17 15:54:39 +00:00
Sanjay Patel 4f7eb2aa95 [InstCombine] use m_APInt to allow icmp (add X, Y), C folds for splat constant vectors
This is a sibling of:
https://reviews.llvm.org/rL278859

llvm-svn: 278935
2016-08-17 15:24:30 +00:00
Chad Rosier a6822f64f3 Revert "[Reassociate] Avoid iterator invalidation when negating value."
This reverts commit r278928 due to lit test failures.

llvm-svn: 278929
2016-08-17 14:31:34 +00:00
Chad Rosier cf3e8121a6 [Reassociate] Avoid iterator invalidation when negating value.
Differential Revision: https://reviews.llvm.org/D23464
PR28367

llvm-svn: 278928
2016-08-17 14:16:45 +00:00
Jonas Paulsson 7a79422536 [LoopStrengthReduce] Refactoring and addition of a new target cost function.
Refactored so that an LSRUse owns its fixups, as opposed to letting the
LSRInstance own them. This makes it easier to rate formulas for
LSRUses, since the fixups are available directly. The Offsets vector
has been removed since it was no longer necessary.

New target hook isFoldableMemAccessOffset(), which is used during formula
rating.
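
The hook has roughly this shape (signature approximate):

    // Returns true if a memory access rooted at I can fold the given
    // constant address offset at no extra cost on this target.
    bool isFoldableMemAccessOffset(Instruction *I, int64_t Offset) const;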

For SystemZ, this is useful to express that loads and stores with
float or vector types with a big/negative offset should be avoided in
loops. Without this, LSR will generate a lot of negative offsets that
would require extra instructions for loading the address.

Updated tests:
test/CodeGen/SystemZ/loop-01.ll

Reviewed by: Quentin Colombet and Ulrich Weigand.
https://reviews.llvm.org/D19152

llvm-svn: 278927
2016-08-17 13:24:19 +00:00
Justin Bogner b03fd12cef Replace "fallthrough" comments with LLVM_FALLTHROUGH
This is a mechanical change of comments in switches like fallthrough,
fall-through, or fall-thru to use the LLVM_FALLTHROUGH macro instead.

llvm-svn: 278902
2016-08-17 05:10:15 +00:00
Chandler Carruth 67fc52f067 [PM] Port the always inliner to the new pass manager in a much more
minimal and boring form than the old pass manager's version.

This pass does the very minimal amount of work necessary to inline
functions declared as always-inline. It doesn't support a wide array of
things that the legacy pass manager did support, but is also ... about
20 lines of code. So it has that going for it. Notably things this
doesn't support:

- Array alloca merging
  - To support the above, bottom-up inlining with careful history
    tracking and call graph updates
- DCE of the functions that become dead after this inlining.
- Inlining through call instructions with the always_inline attribute.
  Instead, it focuses on inlining functions with that attribute.

The first I've omitted because I'm hoping to just turn it off for the
primary pass manager. If that doesn't pan out, I can add it here but it
will be reasonably expensive to do so.

The second should really be handled by running global-dce after the
inliner. I don't want to re-implement the non-trivial logic necessary to
do comdat-correct DCE of functions. This means the -O0 pipeline will
have to be at least 'always-inline,global-dce', but that seems
reasonable to me. If others are seriously worried about this I'd like to
hear about it and understand why. Again, this is all solvable by
factoring that logic into a utility and calling it here, but I'd like to
wait to do that until there is a clear reason why the existing
pass-based factoring won't work.

The final point is a serious one. I can fairly easily add support for
this, but it seems both costly and a confusing construct for the use
case of the always inliner running at -O0. This attribute can of course
still impact the normal inliner easily (although I find that
a questionable re-use of the same attribute). I've started a discussion
to sort out what semantics we want here and based on that can figure out
if it makes sense to have this complexity at O0 or not.

One other advantage of this design is that it should be quite a bit
faster due to checking for whether the function is a viable candidate
for inlining exactly once per function instead of doing it for each call
site.
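
Very roughly, the core loop of the pass described above looks like this
sketch (not the actual code; it assumes the CallSite-based inlining APIs of
the time and omits any cleanup of now-dead functions):

    for (Function &F : M) {
      if (F.isDeclaration() || !F.hasFnAttribute(Attribute::AlwaysInline) ||
          !isInlineViable(F))        // viability checked once per function
        continue;
      SmallVector<CallSite, 16> Calls;
      for (User *U : F.users())
        if (auto CS = CallSite(U))
          if (CS.getCalledFunction() == &F)  // only direct calls of F
            Calls.push_back(CS);
      InlineFunctionInfo IFI;
      for (CallSite CS : Calls)
        InlineFunction(CS, IFI);     // always_inline: inline unconditionally
    }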

Anyways, hopefully a reasonable starting point for this pass.

Differential Revision: https://reviews.llvm.org/D23299

llvm-svn: 278896
2016-08-17 02:56:20 +00:00
Chandler Carruth f702d8ecb6 [Inliner] Add a flag to disable manual alloca merging in the Inliner.
This is off for now while testing can take place to make sure that in
fact we do sufficient stack coloring to fully obviate the manual alloca
array merging.

Some context on why we should be using stack coloring rather than
merging allocas in this way:

LLVM relies very heavily on analyzing pointers as coming from different
allocas in order to make aliasing decisions. These are some of the most
powerful aliasing signals available in LLVM. So merging allocas is an
extremely destructive operation on the LLVM IR -- it takes away highly
valuable and hard to reconstruct information.

As a consequence, inlined functions which happen to have array allocas
that this pattern matches will fail to be properly interleaved unless
SROA manages to hoist everything to an SSA register. Instead, the
inliner will have added an unnecessary dependence that one inlined
function execute after the other because they will have been rewritten
to refer to the same memory.

All that said, folks will reasonably want some time to experiment here
and make sure there are no significant regressions. A flag should give
us an easy knob to test.

For more context, see the thread here:
http://lists.llvm.org/pipermail/llvm-dev/2016-July/103277.html
http://lists.llvm.org/pipermail/llvm-dev/2016-August/103285.html

Differential Revision: https://reviews.llvm.org/D23052

llvm-svn: 278892
2016-08-17 02:40:23 +00:00
Duncan P. N. Exon Smith 362d120488 Scalar: Avoid dereferencing end() in IndVarSimplify
IndVarSimplify::sinkUnusedInvariants calls
BasicBlock::getFirstInsertionPt on the ExitBlock and moves instructions
before it.  This can return end(), so it's not safe to dereference.  Add
an iterator-based overload to Instruction::moveBefore to avoid the UB.
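
The added overload is roughly of this shape (a sketch; see Instruction.h
for the exact declaration), which makes the pattern above safe:

    // In Instruction: move this instruction into BB directly before the
    // position I; I may be BB.end(), and nothing is dereferenced.
    void moveBefore(BasicBlock &BB, SymbolTableList<Instruction>::iterator I);

    // Approximate usage in sinkUnusedInvariants:
    BasicBlock::iterator InsertPt = ExitBlock->getFirstInsertionPt();
    ToMove->moveBefore(*ExitBlock, InsertPt);  // safe even if InsertPt == end()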

llvm-svn: 278886
2016-08-17 01:54:41 +00:00
Duncan P. N. Exon Smith 9e3edad932 IPO: Swap || operands to avoid dereferencing end()
IsOperandBundleUse conveniently indicates  whether
std::next(F->arg_begin(),UseIndex) will get to (or past) end().  Check
it first to avoid dereferencing end().
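
A hedged sketch of the short-circuit ordering (hypothetical helper, not the
actual IPO code):

```cpp
#include "llvm/IR/Function.h"
#include <iterator>
using namespace llvm;

bool argumentIsUnused(Function *F, unsigned UseIndex, bool IsOperandBundleUse) {
  // Operand-bundle uses have no matching formal argument; checking the flag
  // on the left of || keeps std::next(F->arg_begin(), UseIndex) from being
  // formed (and dereferenced) at or past arg_end() for such uses.
  return IsOperandBundleUse ||
         std::next(F->arg_begin(), UseIndex)->use_empty();
}
```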

llvm-svn: 278884
2016-08-17 01:23:58 +00:00
Duncan P. N. Exon Smith 3bcaa81204 Scalar: Avoid dereferencing end() in InductiveRangeCheckElimination
BasicBlock::Create isn't designed to take iterators (which might be
end()), but pointers (which might be nullptr).  Fix the UB that was
converting end() to a BasicBlock* by calling BasicBlock::getNextNode()
in the first place.

llvm-svn: 278883
2016-08-17 01:16:17 +00:00
Duncan P. N. Exon Smith 0a12729f99 SimplifyCFG: Avoid dereferencing end()
When comparing a User* to a BasicBlock::iterator in
passingValueIsAlwaysUndefined, don't dereference the iterator in case it
is end().

llvm-svn: 278872
2016-08-16 23:57:56 +00:00
Sanjay Patel 60ea1b43d6 [InstCombine] clean up foldICmpAddConstant(); NFCI
1. Fix variable names
2. Add local variables to reduce code
3. Fix code comments
4. Add early exit to reduce indentation
5. Remove 'else' after if -> return
6. Hoist common predicate

llvm-svn: 278864
2016-08-16 22:34:42 +00:00
David Majnemer 744a8753db Preserve the assumption cache more often
We were clearing it out in LoopUnswitch and InlineFunction instead of
attempting to preserve it.

llvm-svn: 278860
2016-08-16 22:07:32 +00:00
Sanjay Patel e47df1ac62 [InstCombine] use m_APInt to allow icmp (sub X, Y), C folds for splat constant vectors
llvm-svn: 278859
2016-08-16 21:53:19 +00:00
Sanjay Patel b9aa67bfcf [InstCombine] fix variable names to match formula comments; NFC
llvm-svn: 278855
2016-08-16 21:26:10 +00:00
David Majnemer 110522bc0f [LoopUnroll] Don't clear out the AssumptionCache on each loop
Clearing out the AssumptionCache can cause us to rescan the entire
function for assumes.  If there are many loops, then we are scanning
over the entire function many times.

Instead of clearing out the AssumptionCache, register all cloned
assumes.

llvm-svn: 278854
2016-08-16 21:09:46 +00:00
Gor Nishanov 74309fa014 [Coroutines] Part 7: Split coroutine into subfunctions
Summary:
This patch adds simple coroutine splitting logic to CoroSplit pass.

Documentation and overview is here: http://llvm.org/docs/Coroutines.html.

Upstreaming sequence (rough plan)
1.Add documentation. (https://reviews.llvm.org/D22603)
2.Add coroutine intrinsics. (https://reviews.llvm.org/D22659)
...
7. Split coroutine into subfunctions <= we are here
8. Coroutine Frame Building algorithm
9. Handle coroutine with unwinds
10+. The rest of the logic

Reviewers: majnemer

Subscribers: llvm-commits, mehdi_amini

Differential Revision: https://reviews.llvm.org/D23461

llvm-svn: 278830
2016-08-16 18:04:14 +00:00
Sanjay Patel a3f4f0828b [InstCombine] add helper functions for foldICmpWithConstant; NFCI
Besides breaking up a 700 line function to improve readability,
this sinks the 'FIXME: ConstantInt' check into each helper. So 
now we can independently break that restriction within any of the
helper functions.

As much as possible, the code was only {cut/paste/clang-format}'ed 
to minimize risk (no functional changes intended), so several more
readability improvements are still possible. 

llvm-svn: 278828
2016-08-16 17:54:36 +00:00
Vitaly Buka 1ce73ef11c [Asan] Unpoison red zones even if use-after-scope was disabled with runtime flag
Summary: PR27453

Reviewers: eugenis

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23481

llvm-svn: 278818
2016-08-16 16:24:10 +00:00
Sanjay Patel 1e5b2d1611 [InstCombine] use m_APInt in foldICmpWithConstant; NFCI
There's some formatting and pointer deref ugliness here that I intend to fix in
subsequent patches. The overall goal is to refactor the obnoxiously long switch
and incrementally remove the restriction to scalar types (allow folds for vector
splats). This patch introduces the use of m_APInt which means the RHSV reference
is now a pointer (and may have matched a vector splat), but the check of 'RHS' 
remains, so vector folds are disallowed and no functional change is intended.
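
For reference, a minimal sketch of the m_APInt idiom this series relies on
(simplified, assumed names; not the actual InstCombine code): matching a
const APInt* lets the same fold fire for scalar integer constants and for
splat constant vectors.

```cpp
#include "llvm/ADT/APInt.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/PatternMatch.h"
using namespace llvm;
using namespace llvm::PatternMatch;

// Returns true for 'icmp <pred> X, 0' whether the zero is a scalar
// ConstantInt or a splat vector of zeros.
bool isCompareWithZero(Value *V) {
  ICmpInst::Predicate Pred;
  const APInt *C;
  return match(V, m_ICmp(Pred, m_Value(), m_APInt(C))) &&
         C->isMinValue(); // C == 0
}
```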

llvm-svn: 278816
2016-08-16 16:08:11 +00:00
David Callahan 947be0fa66 [ADCE] Modify data structures to support removing control flow
Summary:
This is part of a series of patches to evolve ADCE.cpp to support
removing unnecessary control flow.

This patch changes the data structures to hold liveness information to
support the additional information we will eventually need. In
particular we now have a notion of basic blocks being live because
they contain live operations. This will eventually feed into control
dependence analysis of which branches are live. We cater to getting
from instructions to associated block information and from blocks to
information about their terminators.
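
A rough sketch of the kind of bookkeeping described above (illustrative
types, not the actual ADCE.cpp definitions):

```cpp
#include "llvm/ADT/DenseMap.h"
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Instruction.h"
using namespace llvm;

struct BlockInfoType {
  bool Live = false;                  // block holds at least one live op
  Instruction *Terminator = nullptr;  // quick access to the terminator's info
};

struct InstInfoType {
  bool Live = false;
  BlockInfoType *Block = nullptr;     // back-pointer from instruction to block
};

DenseMap<BasicBlock *, BlockInfoType> BlockInfo;
DenseMap<Instruction *, InstInfoType> InstInfo;
```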

This patch also changes the structure of the main loop of the
algorithm so that it alternates propagating liveness between
instructions and using control dependence information to mark branches
live.

We force all terminators live for now until we add code to handle
removing control flow in a later patch.

No changes to effective behavior with this patch

Previous patches:

D23065 [ADCE] Refactor anticipating new functionality (NFC)
D23102 [ADCE] Refactoring for new functionality (NFC)

Reviewers: nadav, majnemer, mehdi_amini

Subscribers: freik, twoh, llvm-commits

Differential Revision: https://reviews.llvm.org/D23225

llvm-svn: 278807
2016-08-16 14:31:51 +00:00
Sagar Thakur e311740bde [MemorySanitizer] [MIPS] Changed memory mapping to support pie executable.
Reviewed by eugenis
Differential: D22994

llvm-svn: 278795
2016-08-16 12:55:38 +00:00
Mehdi Amini 88c491ddec FunctionImport: missed one occurence of ImportListForModule to rename (NFC)
llvm-svn: 278778
2016-08-16 05:49:12 +00:00
Mehdi Amini 9b490f10e1 FunctionImport: rename ImportsForModule to ImportList for consistency (NFC)
llvm-svn: 278777
2016-08-16 05:47:12 +00:00
Mehdi Amini cdbcbf7477 [LTO] Simplify APIs and constify (NFC)
Summary:
Multiple APIs were taking a StringMap for the ImportLists containing
the entries for all the modules while operating on a single entry
for the current module. Instead we can pass the desired ModuleImport
directly. Also some of the APIs were not const, I believe just to be
able to use operator[] on the StringMap.

Reviewers: tejohnson

Subscribers: llvm-commits, mehdi_amini

Differential Revision: https://reviews.llvm.org/D23537

llvm-svn: 278776
2016-08-16 05:46:05 +00:00
Teresa Johnson 6107a4195d [ThinLTO] Remove functions resolved to available_externally from comdats
Summary:
thinLTOResolveWeakForLinkerModule needs to drop any preempted weak symbols
that were converted to available_externally from comdats, otherwise we
will get a verification failure (since available_externally is a
declaration for the linker, and no declarations can be in a comdat).

Reviewers: mehdi_amini

Subscribers: llvm-commits, mehdi_amini

Differential Revision: https://reviews.llvm.org/D23015

llvm-svn: 278739
2016-08-15 21:00:04 +00:00
Reid Kleckner 70a600b8bb Revert "[SimplifyCFG] Rewrite SinkThenElseCodeToEnd"
This reverts commit r278660.

It causes downstream assertion failure in InstCombine on shuffle
instructions. Comes up in __mm_swizzle_epi32.

llvm-svn: 278672
2016-08-15 15:42:31 +00:00
James Molloy 9a3c82f5cf [SimplifyCFG] Rewrite SinkThenElseCodeToEnd
The new version has several advantages:
  1) IMSHO it's more readable and neater
  2) It handles loads and stores properly
  3) It can handle any number of incoming blocks rather than just two. I'll be taking advantage of this in a followup patch.

With this change we can now finally sink load-modify-store idioms such as:

    if (a)
      return *b += 3;
    else
      return *b += 4;

    =>

    %z = load i32, i32* %y
    %.sink = select i1 %a, i32 5, i32 7
    %b = add i32 %z, %.sink
    store i32 %b, i32* %y
    ret i32 %b

When this works for switches it'll be even more powerful.

llvm-svn: 278660
2016-08-15 08:04:56 +00:00
James Molloy 196ad0823e [LSR] Don't try and create post-inc expressions on non-rotated loops
If a loop is not rotated (for example when optimizing for size), the latch is not the backedge. If we promote an expression to post-inc form, we increase register pressure and add a COPY not just for that IV expression but for all IVs!

Motivating testcase:

    void f(float *a, float *b, float *c, int n) {
      while (n-- > 0)
        *c++ = *a++ + *b++;
    }

It's imperative that the pointer increments be located in the latch block and not the header block; if not, we cannot use post-increment loads and stores and we have to keep both the post-inc and pre-inc values around until the end of the latch which bloats register usage.

llvm-svn: 278658
2016-08-15 07:53:03 +00:00
Sanjoy Das 35459f0e34 [IRCE] Change variable grouping; NFC
llvm-svn: 278619
2016-08-14 01:04:50 +00:00
Sanjoy Das 2143447c73 [IRCE] Create llvm::Loop instances for cloned out loops
llvm-svn: 278618
2016-08-14 01:04:46 +00:00
Sanjoy Das 7a18a238c6 [IRCE] Don't iterate on loops that were cloned out
IRCE has the ability to further version pre-loops and post-loops that it
created, but this isn't useful at all.  This change teaches IRCE to
leave behind some metadata in the loops it creates (by cloning the main
loop) so that these new loops are not re-processed by IRCE.

Today this bug is hidden by another bug -- IRCE does not update LoopInfo
properly so the loop pass manager does not re-invoke IRCE on the loops
it split out.  However, once the latter is fixed the bug addressed in
this change causes IRCE to infinite-loop in some cases (e.g. it splits
out a pre-loop, a pre-pre-loop from that, a pre-pre-pre-loop from that
and so on).

llvm-svn: 278617
2016-08-14 01:04:36 +00:00
Sanjoy Das 43fdc54303 [IRCE] Add better DEBUG diagnostic; NFC
NFC meaning IRCE should not _do_ anything different, but
-debug-only=irce will be a little friendlier.

llvm-svn: 278616
2016-08-14 01:04:31 +00:00
Sanjoy Das 2a2f14d7ab [IRCE] Be resilient in the face of non-simplified loops
Loops containing `indirectbr` may not be in simplified form, even after
running LoopSimplify.  Reject them gracefully, instead of tripping an
assert.

llvm-svn: 278611
2016-08-13 23:36:35 +00:00
Sanjoy Das f2b7bafae4 [IRCE] Use dyn_cast instead of explicit isa/cast; NFC
llvm-svn: 278607
2016-08-13 22:00:12 +00:00
Sanjoy Das d1d62a1354 [IRCE] Use range-for; NFC
llvm-svn: 278606
2016-08-13 22:00:09 +00:00
Aditya Kumar f24939b1f4 Test commit
llvm-svn: 278598
2016-08-13 11:56:50 +00:00
Teresa Johnson 1eca6bc6a7 [PM] Port LoopDataPrefetch to new pass manager
Summary:
Refactor the existing support into a LoopDataPrefetch implementation
class and a LoopDataPrefetchLegacyPass class that invokes it.
Add a new LoopDataPrefetchPass for the new pass manager that utilizes
the LoopDataPrefetch implementation class.
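
A hedged sketch of the porting pattern (names simplified, not the exact
LoopDataPrefetch code): the logic lives in a plain implementation class and
the new-PM pass is a thin wrapper around it.

```cpp
#include "llvm/IR/Function.h"
#include "llvm/IR/PassManager.h"
using namespace llvm;

class LoopDataPrefetchImpl {
public:
  // The real implementation also takes AC/SE/TTI etc.; elided here.
  bool run(Function &F) { return false; }
};

class ExampleLoopDataPrefetchPass
    : public PassInfoMixin<ExampleLoopDataPrefetchPass> {
public:
  PreservedAnalyses run(Function &F, FunctionAnalysisManager &AM) {
    LoopDataPrefetchImpl Impl;
    return Impl.run(F) ? PreservedAnalyses::none() : PreservedAnalyses::all();
  }
};
```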

Reviewers: mehdi_amini

Subscribers: sanjoy, mzolotukhin, nemanjai, llvm-commits

Differential Revision: https://reviews.llvm.org/D23483

llvm-svn: 278591
2016-08-13 04:11:27 +00:00
Sanjoy Das 3502511548 [IndVars] Ignore (s|z)exts that don't extend the induction variable
`IVVisitor::visitCast` used to have the invariant that if the
instruction it was passed was a sext or zext instruction, the result of
the instruction would be wider than the induction variable.  This is no
longer true after rL275037, so this change teaches `IndVarSimplify`'s
implementation of `IVVisitor::visitCast` to work with the relaxed
invariant.

A corresponding change to SimplifyIndVar to preserve the said invariant
after rL275037 would also work, but given how `IVVisitor::visitCast` is
spelled (no indication of said invariant), I figured the current fix is
cleaner.

Fixes PR28935.

llvm-svn: 278584
2016-08-13 00:58:31 +00:00
Justin Lebar d1675aadf6 [LSV] Use a set rather than an ArraySlice at the end of getVectorizablePrefix. NFC
Summary: This avoids a small O(n^2) loop.

Reviewers: asbirlea

Subscribers: mzolotukhin, llvm-commits, arsenm

Differential Revision: https://reviews.llvm.org/D23473

llvm-svn: 278581
2016-08-13 00:04:12 +00:00
Justin Lebar 222ceff289 [LSV] Use OrderedBasicBlock instead of rolling it ourselves. NFC
Summary:
In getVectorizablePrefix, this is less efficient (because we have to
iterate over the BB twice), but boy is it simpler.  Given how much
trouble we've had here, I think the simplicity gain is worthwhile.

In reorder(), this is actually more efficient, as
DominatorTree::dominates iterates over the BB from the beginning when
the two instructions are in the same BB.

Reviewers: asbirlea

Subscribers: arsenm, llvm-commits, mzolotukhin

Differential Revision: https://reviews.llvm.org/D23472

llvm-svn: 278580
2016-08-13 00:04:08 +00:00
Tim Shen c9c0d2dcb5 [LoopVectorize] Detect loops in the innermost loop before creating InnerLoopVectorizer
InnerLoopVectorizer shouldn't handle a loop with cycles inside the loop
body, even if that cycle isn't a natural loop.

Fixes PR28541.

Differential Revision: https://reviews.llvm.org/D22952

llvm-svn: 278573
2016-08-12 22:47:13 +00:00
Reid Kleckner 6ee00a2602 [Inliner] Don't treat inalloca allocas as static
They aren't static, and moving them to the entry block across something
else will only result in tears.

Root cause of http://crbug.com/636558.

llvm-svn: 278571
2016-08-12 22:23:04 +00:00
David L Kreitzer 9667417a1a Fixed typo.
llvm-svn: 278565
2016-08-12 21:06:53 +00:00
Michael Kuperstein 31b8399beb [PM] Port LowerInvoke to the new pass manager
llvm-svn: 278531
2016-08-12 17:28:27 +00:00
Pete Cooper 980a935e27 constify InstCombine::foldAllocaCmp. NFC.
This is part of an effort to constify ValueTracking.cpp.  This change is
to methods which need const Value* instead of Value* to go with the upcoming
changes to ValueTracking.

llvm-svn: 278528
2016-08-12 17:13:28 +00:00
Dehao Chen c0a1e432c7 Fine tuning of sample profile propagation algorithm.
Summary: The refined propagation algorithm is more accurate and robust.

Reviewers: davidxl, dnovillo

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23224

llvm-svn: 278522
2016-08-12 16:22:12 +00:00
Teresa Johnson 4223dd8559 [PM] Port NameAnonFunction pass to new pass manager
Summary:
Port the NameAnonFunction pass and add a test.

Depends on D23439.

Reviewers: mehdi_amini

Subscribers: llvm-commits, mehdi_amini

Differential Revision: https://reviews.llvm.org/D23440

llvm-svn: 278509
2016-08-12 14:03:36 +00:00
Benjamin Kramer bbff9c6130 [Coroutines] Move class into anonymous namespace.
Hopefully fixes visibility warnings from GCC. No functionality change.

llvm-svn: 278485
2016-08-12 08:47:13 +00:00
Gor Nishanov 0f303accde [Coroutines]: Part6b: Add coro.id intrinsic.
Summary:
1. Make coroutine representation more robust against optimizations that may duplicate instructions by introducing a coro.id intrinsic that returns a token that will get fed into coro.alloc and coro.begin. Due to coro.id returning a token, it won't get duplicated and can be used as a reliable indicator of coroutine identity when a particular coroutine call gets inlined.
2. Move last three arguments of coro.begin into coro.id as they will be shared if coro.begin will get duplicated.
3. doc + test + code updated to support the new intrinsic.

Reviewers: mehdi_amini, majnemer

Subscribers: mehdi_amini, llvm-commits

Differential Revision: https://reviews.llvm.org/D23412

llvm-svn: 278481
2016-08-12 05:45:49 +00:00
David Majnemer 2d006e7673 Use the range variant of transform instead of unpacking begin/end
No functionality change is intended.

llvm-svn: 278476
2016-08-12 04:32:42 +00:00
David Majnemer c700490f48 Use the range variant of remove_if instead of unpacking begin/end
No functionality change is intended.

llvm-svn: 278475
2016-08-12 04:32:37 +00:00
David Majnemer 42531260b3 Use the range variant of find/find_if instead of unpacking begin/end
If the result of the find is only used to compare against end(), just
use is_contained instead.

No functionality change is intended.
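
Minimal before/after illustration of the cleanup (hypothetical values;
is_contained comes from STLExtras.h):

```cpp
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SmallVector.h"
#include <algorithm>

bool hasValue(const llvm::SmallVectorImpl<int> &Vec, int V) {
  // Before: return std::find(Vec.begin(), Vec.end(), V) != Vec.end();
  return llvm::is_contained(Vec, V);
}
```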

llvm-svn: 278469
2016-08-12 03:55:06 +00:00
Ivan Krasin 89439a7939 WholeProgramDevirt: initialize WasDevirt in all constructors.
Summary: This is a follow up to r278389 and r278442.

Differential Revision: https://reviews.llvm.org/D23438

llvm-svn: 278455
2016-08-12 01:40:10 +00:00
Eli Friedman a6707f56b5 [DSE] Don't remove stores made live by a call which unwinds.
Issue exposed by noalias or more aggressive alias analysis.

Fixes http://llvm.org/PR25422.

Differential revision: https://reviews.llvm.org/D21007

llvm-svn: 278451
2016-08-12 01:09:53 +00:00
Xinliang David Li cbb5e02f4a Fix typos /NFC
llvm-svn: 278436
2016-08-11 22:34:00 +00:00
David Majnemer 0d955d0bf5 Use the range variant of find instead of unpacking begin/end
If the result of the find is only used to compare against end(), just
use is_contained instead.

No functionality change is intended.

llvm-svn: 278433
2016-08-11 22:21:41 +00:00
Piotr Padlewski 332b3b2210 Don't import variadic functions
Summary:
This patch adds an IsVariadicFunction bit to the summary in order
to not import variadic functions. The inliner doesn't inline
variadic functions because it is hard to reason about them.

This one small fix improves Importer by about 16%
(going from 86% to 100% of imported functions that are
inlined anywhere)
on some spec benchmarks like 'int' and others.

Reviewers: eraman, mehdi_amini, tejohnson

Subscribers: mehdi_amini, llvm-commits

Differential Revision: https://reviews.llvm.org/D23339

llvm-svn: 278432
2016-08-11 22:13:57 +00:00
Ehsan Amiri dbcfea9811 Extend trip count instead of truncating IV in LFTR, when legal
When legal, extending trip count in the loop control logic generates better code compared to truncating IV. This is because

(1) extending trip count is a loop invariant operation (see genLoopLimit where we prove trip count is loop invariant).
(2) Scalar Evolution seems to have problems understanding trunc when computing loop trip count. So removing them allows better analysis to be performed in Scalar Evolution. (In particular this fixes PR 28363 which is the motivation for this change).

I am not going to perform any performance test. Any degradation caused by this should be an indication of a bug elsewhere.

To prove legality, we rely on SCEV to prove zext(trunc(IV)) == IV (or similarly for sext). If this holds, we can prove equivalence of trunc(IV)==ExitCnt (1) and IV == zext(ExitCnt). Simply take zext of both sides of (1) and apply the proven equivalence.

This commit contains changes in a newly added testcase which was not included in the previous commit (which was reverted later on).

https://reviews.llvm.org/D23075

llvm-svn: 278421
2016-08-11 21:31:40 +00:00
Daniel Berlin da2f38e0f4 [MSSA] Use is_contained
llvm-svn: 278418
2016-08-11 21:26:50 +00:00
David Majnemer 0a16c22846 Use range algorithms instead of unpacking begin/end
No functionality change is intended.

llvm-svn: 278417
2016-08-11 21:15:00 +00:00
Geoff Berry d01828096f [SCEV] Update interface to handle SCEVExpander insert point motion.
Summary:
This is an extension of the fix in r271424.  That fix dealt with builder
insert points being moved by SCEV expansion, but only for the lifetime
of the expand call.  This change modifies the interface so that LSR can
safely call expand multiple times at the same insert point and do the
right thing if one of the expansions decides to move the original insert
point.

This is a fix for PR28719.

Reviewers: sanjoy

Subscribers: llvm-commits, mcrosier, mzolotukhin

Differential Revision: https://reviews.llvm.org/D23342

llvm-svn: 278413
2016-08-11 21:05:17 +00:00
Daniel Berlin f75fd1b58b Fix PR 28933
Summary:
This fixes PR 28933 by making sure GVNHoist does not try to recreate memory
accesses when it has not actually moved them.

Reviewers: sebpop

Subscribers: llvm-commits, george.burgess.iv

Differential Revision: https://reviews.llvm.org/D23411

llvm-svn: 278401
2016-08-11 20:32:43 +00:00
Ivan Krasin f3403fd2c8 WholeProgramDevirt: generate more detailed and accurate remarks.
Summary:
Keep track of all methods for which we have devirtualized at least
one call and then print them sorted alphabetically. That allows to
avoid duplicates and also makes the order deterministic.

Add optimization names into the remarks, so that it's easier to
understand how has each method been devirtualized.

Fix a bug when wrong methods could have been reported for
tryVirtualConstProp.

Reviewers: kcc, mehdi_amini

Differential Revision: https://reviews.llvm.org/D23297

llvm-svn: 278389
2016-08-11 19:09:02 +00:00
Andrew Kaylor 7cdf01ef58 Target independent codesize heuristics for Loop Idiom Recognition
Patch by Sunita Marathe

Differential Revision: https://reviews.llvm.org/D21449

llvm-svn: 278378
2016-08-11 18:28:33 +00:00
Easwaran Raman 61edc107bb Add a new method to create SimpleInliner instance and make pre-inliner use this.
This adds a createFunctionInliningPass pass that takes an InlineParams object and uses this to create the pre-inliner pass. This prevents the regular inliner's threshold flag from influencing the preinliner.

Differential revision: https://reviews.llvm.org/D23377

llvm-svn: 278377
2016-08-11 18:24:08 +00:00
Eugene Zelenko cdc7161281 Fix some Clang-tidy modernize and Include What You Use warnings.
Differential revision: https://reviews.llvm.org/D23291

llvm-svn: 278364
2016-08-11 17:20:18 +00:00
Matthew Simpson 3f69195b9e [SLP] Make RecursionMaxDepth a command line option (NFC)
llvm-svn: 278343
2016-08-11 15:28:45 +00:00
Sanjay Patel 38ae83de38 fix comment; NFC
llvm-svn: 278342
2016-08-11 15:23:56 +00:00
Sanjay Patel e3c335cbed use auto* with dyn_cast ; NFC
llvm-svn: 278340
2016-08-11 15:21:21 +00:00
Sanjay Patel 5a470950b9 getParent()->getParent() == getFunction() ; NFC
llvm-svn: 278339
2016-08-11 15:16:06 +00:00
Ehsan Amiri 3818f1b38a revert 278334
llvm-svn: 278337
2016-08-11 14:51:14 +00:00
Ehsan Amiri b9fcc2b171 Extend trip count instead of truncating IV in LFTR, when legal
When legal, extending trip count in the loop control logic generates better code compared to truncating IV. This is because

(1) extending trip count is a loop invariant operation (see genLoopLimit where we prove trip count is loop invariant).
(2) Scalar Evolution seems to have problems understanding trunc when computing loop trip count. So removing them allows better analysis to be performed in Scalar Evolution. (In particular this fixes PR 28363 which is the motivation for this change).

I am not going to perform any performance test. Any degradation caused by this should be an indication of a bug elsewhere.

To prove legality, we rely on SCEV to prove zext(trunc(IV)) == IV (or similarly for sext). If this holds, we can prove equivalence of trunc(IV)==ExitCnt (1) and IV == zext(ExitCnt). Simply take zext of both sides of (1) and apply the proven equivalence.

https://reviews.llvm.org/D23075

llvm-svn: 278334
2016-08-11 13:51:20 +00:00
Xinliang David Li 76a0108be4 [Profile] improve warning control option
Change --no-pgo-warn-missing to -pgo-warn-missing-function
and negate the default. /NFC

Add more test to make sure the warning is off by default

llvm-svn: 278314
2016-08-11 05:09:30 +00:00
Piotr Padlewski d89875ca39 Changed sign of LastCallToStaticBonus
Summary:
I think it is much better this way.
When I first saw the line:
  Cost += InlineConstants::LastCallToStaticBonus;
I thought that this was a bug, because everywhere else where the cost is being
reduced it is using -=.

Reviewers: eraman, tejohnson, mehdi_amini

Subscribers: llvm-commits, mehdi_amini

Differential Revision: https://reviews.llvm.org/D23222

llvm-svn: 278290
2016-08-10 21:15:22 +00:00
Andrew Kaylor 498d3113c3 [IndVarSimplify] Eliminate zext of a signed IV when the IV is known to be non-negative
Patch by Li Huang

Differential Revision: https://reviews.llvm.org/D18867

llvm-svn: 278269
2016-08-10 18:56:35 +00:00
Rong Xu 63f970ee24 Fix LCSSA increased compile time
We are seeing r276077 drastically increasing compiler time for our larger
benchmarks in PGO profile generation build (both clang based and IR based
mode) -- it can be 20x slower than without the patch (like from 30 secs to
780 secs)

The increased time is all in the LCSSA pass. The problematic code is about
PostProcessPHIs after use-rewrite. Note that the InsertedPhis from ssa_updater
accumulates (it is never cleared). Since the inserted PHIs are added to the
candidate list for each rewrite, the earlier ones will be repeatedly added.
Later, when adding the new PHIs to the work-list, we don't check for
duplication either. This can result in an extremely long work-list containing
tons of duplicated PHIs.

This patch fixes the issue by hoisting the code out of the loop.
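
A hedged sketch of the de-duplication idea (illustrative names, not the
actual LCSSA.cpp code): each inserted PHI is queued at most once.

```cpp
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

void enqueueOnce(ArrayRef<PHINode *> InsertedPhis,
                 SmallVectorImpl<PHINode *> &Worklist,
                 SmallPtrSetImpl<PHINode *> &Enqueued) {
  for (PHINode *PN : InsertedPhis)
    if (Enqueued.insert(PN).second) // skip PHIs already on the work-list
      Worklist.push_back(PN);
}
```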

Differential Revision: http://reviews.llvm.org/D23344

llvm-svn: 278250
2016-08-10 17:49:11 +00:00
Gor Nishanov b2a9c02521 [Coroutines] Part 6: Elide dynamic allocation of a coroutine frame when possible
Summary:
A particular coroutine usage pattern, where a coroutine is created, manipulated and
destroyed by the same calling function, is common for coroutines implementing
RAII idiom and is suitable for allocation elision optimization which avoid
dynamic allocation by storing the coroutine frame as a static `alloca` in its
caller.

coro.free and coro.alloc intrinsics are used to indicate which code needs to be suppressed
when dynamic allocation elision happens:
```
entry:
  %elide = call i8* @llvm.coro.alloc()
  %need.dyn.alloc = icmp ne i8* %elide, null
  br i1 %need.dyn.alloc, label %coro.begin, label %dyn.alloc
dyn.alloc:
  %alloc = call i8* @CustomAlloc(i32 4)
  br label %coro.begin
coro.begin:
  %phi = phi i8* [ %elide, %entry ], [ %alloc, %dyn.alloc ]
  %hdl = call i8* @llvm.coro.begin(i8* %phi, i32 0, i8* null,
                          i8* bitcast ([2 x void (%f.frame*)*]* @f.resumers to i8*))
```
and
```
  %mem = call i8* @llvm.coro.free(i8* %hdl)
  %need.dyn.free = icmp ne i8* %mem, null
  br i1 %need.dyn.free, label %dyn.free, label %if.end
dyn.free:
  call void @CustomFree(i8* %mem)
  br label %if.end
if.end:
  ...
```

If heap allocation elision is performed, we replace coro.alloc with a static alloca on the caller frame and coro.free with null constant.

Also, we need to make sure that if there are any tail calls referencing the coroutine frame, we remove the tail call attribute, since the coroutine frame now lives on the stack.

Documentation and overview is here: http://llvm.org/docs/Coroutines.html.

Upstreaming sequence (rough plan)
1.Add documentation. (https://reviews.llvm.org/D22603)
2.Add coroutine intrinsics. (https://reviews.llvm.org/D22659)
3.Add empty coroutine passes. (https://reviews.llvm.org/D22847)
4.Add coroutine devirtualization + tests.
ab) Lower coro.resume and coro.destroy (https://reviews.llvm.org/D22998)
c) Do devirtualization (https://reviews.llvm.org/D23229)
5.Add CGSCC restart trigger + tests. (https://reviews.llvm.org/D23234)
6.Add coroutine heap elision + tests.  <= we are here
7.Add the rest of the logic (split into more patches)

Reviewers: mehdi_amini, majnemer

Subscribers: mehdi_amini, llvm-commits

Differential Revision: https://reviews.llvm.org/D23245

llvm-svn: 278242
2016-08-10 16:40:39 +00:00
Artur Pilipenko e171ea8a33 Teach CorrelatedValuePropagation to mark adds as no wrap
This is a resubmission of the previously reverted r277592. It was hitting an overly strong assertion in getConstantRange which was relaxed in r278217.

Use LVI to prove that adds do not wrap. The change is motivated by https://llvm.org/bugs/show_bug.cgi?id=28620 bug and it's the first step to fix that problem.
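
A heavily hedged sketch of the approach (assumed, simplified; the real patch
lives in CorrelatedValuePropagation and the exact helper names/signatures
should be checked against the tree):

```cpp
#include "llvm/Analysis/LazyValueInfo.h"
#include "llvm/IR/ConstantRange.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/Operator.h"
using namespace llvm;

void maybeMarkAddNSW(BinaryOperator *Add, LazyValueInfo *LVI) {
  if (Add->getOpcode() != Instruction::Add)
    return;
  BasicBlock *BB = Add->getParent();
  ConstantRange LHS = LVI->getConstantRange(Add->getOperand(0), BB, Add);
  ConstantRange RHS = LVI->getConstantRange(Add->getOperand(1), BB, Add);
  // If every LHS value is inside the region where 'LHS + RHS' provably
  // cannot signed-wrap, the add can be marked nsw.
  ConstantRange NoWrap = ConstantRange::makeGuaranteedNoWrapRegion(
      Instruction::Add, RHS, OverflowingBinaryOperator::NoSignedWrap);
  if (NoWrap.contains(LHS))
    Add->setHasNoSignedWrap(true);
}
```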

Reviewed By: sanjoy

Differential Revision: http://reviews.llvm.org/D23059

llvm-svn: 278220
2016-08-10 13:08:34 +00:00
Davide Italiano 873219c406 [SimplifyLibCalls] Restore the old behaviour, emit a libcall.
Hal pointed out that the semantics of our intrinsic and the libc
call are slightly different. Add a comment while I'm here to
explain why we can't emit an intrinsic. Thanks Hal!

llvm-svn: 278200
2016-08-10 06:33:32 +00:00
Easwaran Raman 1c57cc2b68 Do not directly use inline threshold cl options in cost analysis.
This adds an InlineParams struct which is populated from the command line options by getInlineParams and passed to getInlineCost for the call analyzer to use.
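
A small hedged sketch of how the decoupling can be used (hypothetical helper;
parameter plumbing is assumed):

```cpp
#include "llvm/Analysis/InlineCost.h"
using namespace llvm;

// Build params for a pre-inliner with its own threshold so the regular
// inliner's -inline-threshold flag does not leak into it.
InlineParams makePreInlinerParams(int PreInlineThreshold) {
  return getInlineParams(PreInlineThreshold);
}
```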

Differential revision: https://reviews.llvm.org/D22120

llvm-svn: 278189
2016-08-10 00:48:04 +00:00
Adam Nemet 896c09bd10 [Inliner,OptDiag] Add hotness attribute to opt diagnostics
Summary:
The inliner not being a function pass requires the work-around of
generating the OptimizationRemarkEmitter and in turn BFI on demand.
This will go away after the new PM is ready.

BFI is only computed inside ORE if the user has requested hotness
information for optimization diagnostitics (-pass-remark-with-hotness at
the 'opt' level).  Thus there is no additional overhead without the
flag.

Reviewers: hfinkel, davidxl, eraman

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D22694

llvm-svn: 278185
2016-08-10 00:44:44 +00:00
Michael Zolotukhin aae168f993 [LoopSimplify] Rebuild LCSSA for the inner loop after separating nested loops.
Summary:
This hopefully fixes PR28825. The problem now was that a value from the
original loop was used in a subloop, which became a sibling after separation.
While a subloop doesn't need an lcssa phi node, a sibling does, and that's
where we broke LCSSA. The most natural way to fix this now is to simply call
formLCSSA on the original loop: it'll do what we've been doing before plus
it'll cover situations described above.

I think we don't need to run formLCSSARecursively here, and we have an assert
to verify this (I've tried testing it on LLVM testsuite + SPECs). I'd be happy
to be corrected here though.

I also changed a run line in the test from '-lcssa -loop-unroll' to
'-lcssa -loop-simplify -indvars', because it exercises LCSSA
preservation to the same extent, but also makes less unrelated
transformation on the CFG, which makes it easier to verify.

Reviewers: chandlerc, sanjoy, silvas

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23288

llvm-svn: 278173
2016-08-09 22:44:56 +00:00
Wei Mi 575435012c Fix the runtime error caused by "Use ValueOffsetPair to enhance value reuse during SCEV expansion".
The patch fixes the bug in PR28705. It was caused by setting the wrong return
value for SCEVExpander::findExistingExpansion. The return values of findExistingExpansion
have different meanings when the function is used in different ways, so it is easy to make
a mistake. The fix creates two new interfaces to replace SCEVExpander::findExistingExpansion,
and specifies where each interface is expected to be used.

Differential Revision: https://reviews.llvm.org/D22942

llvm-svn: 278161
2016-08-09 20:40:03 +00:00
Anna Thomas b2d12b81c3 [EarlyCSE] Teach about CSE'ing over invariant.start intrinsics
Summary:
Teach EarlyCSE about invariant.start intrinsic. Specifically, we can perform
store-load, load-load forwarding over this call.

Reviewers: majnemer, reames, dberlin, sanjoy

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23268

llvm-svn: 278153
2016-08-09 20:00:47 +00:00
Xinliang David Li 9035cfceef [Profile] turn off verbose warnings by default
The no-profile-data-for-function warning is turned off by default
due to its high verbosity and minimal usefulness.

Differential Revision: http://reviews.llvm.org/D23295

llvm-svn: 278127
2016-08-09 15:35:28 +00:00
Sean Silva 0746f3bfa4 Consistently use LoopAnalysisManager
One exception here is LoopInfo which must forward-declare it (because
the typedef is in LoopPassManager.h which depends on LoopInfo).

Also, some includes for LoopPassManager.h were needed since that file
provides the typedef.

Besides a general consistency benefit, the extra layer of indirection
allows the mechanical part of https://reviews.llvm.org/D23256 that
requires touching every transformation and analysis to be factored out
cleanly.

Thanks to David for the suggestion.

llvm-svn: 278079
2016-08-09 00:28:52 +00:00
Sean Silva fd03ac6a0c Consistently use ModuleAnalysisManager
Besides a general consistency benefit, the extra layer of indirection
allows the mechanical part of https://reviews.llvm.org/D23256 that
requires touching every transformation and analysis to be factored out
cleanly.

Thanks to David for the suggestion.

llvm-svn: 278078
2016-08-09 00:28:38 +00:00
Sean Silva 36e0d01e13 Consistently use FunctionAnalysisManager
Besides a general consistency benefit, the extra layer of indirection
allows the mechanical part of https://reviews.llvm.org/D23256 that
requires touching every transformation and analysis to be factored out
cleanly.

Thanks to David for the suggestion.
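
For reference, the indirection being standardized on is essentially the
existing typedef (a sketch of current usage, not new API):

```cpp
#include "llvm/IR/Function.h"
#include "llvm/IR/PassManager.h"
using namespace llvm;

// PassManager.h already provides, in essence:
//   typedef AnalysisManager<Function> FunctionAnalysisManager;
// Spelling the typedef in every pass lets a later mechanical change
// retarget it in one place.
PreservedAnalyses runExample(Function &F, FunctionAnalysisManager &AM) {
  (void)F;
  (void)AM;
  return PreservedAnalyses::all();
}
```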

llvm-svn: 278077
2016-08-09 00:28:15 +00:00
Derek Schuff b7d6d9e3cd [WebAssembly] Fix CFI index to account for padding nullptr function
The WebAssembly linker now creates a dummy function at index 0 to
prevent miscomparisons with the NULL pointer, see
https://github.com/WebAssembly/binaryen/pull/658. Thanks to pcc for
pointing out this problem!

Patch by Dominic Chen

Differential Revision: https://reviews.llvm.org/D23137

llvm-svn: 278073
2016-08-08 23:56:01 +00:00
Justin Bogner 6b4422e6fe InstCombine: Remove a redundant #ifdef NDEBUG. NFC
The DEBUG() macro already does this.

llvm-svn: 278049
2016-08-08 21:02:11 +00:00
Michael Zolotukhin 2f50725dbd [LoopUnroll] Simplify loops created by unrolling.
Summary:
Currently loop-unrolling doesn't preserve loop-simplified form. This patch
fixes it by resimplifying affected loops.

Reviewers: chandlerc, sanjoy, hfinkel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23148

llvm-svn: 278038
2016-08-08 19:02:15 +00:00
Geoff Berry 290a13e7c7 [MemorySSA] Fix windows build breakage caused by r278028
r278028: [MemorySSA] Ensure address stability of MemorySSA object.
llvm-svn: 278035
2016-08-08 18:27:22 +00:00
Geoff Berry cdf5333f6f [MemorySSA] Ensure address stability of MemorySSA object.
Summary:
Ensure that the MemorySSA object never changes address when using the
new pass manager since the walkers contained by MemorySSA cache pointers
to it at construction time.  This is achieved by wrapping the
MemorySSAAnalysis result in a unique_ptr.  Also add some asserts that
check for this bug.
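
A minimal sketch of the address-stability trick (illustrative types; the
actual MemorySSAAnalysis result is richer):

```cpp
#include <memory>

struct MemorySSAExample {
  // Walkers created at construction time cache 'this' internally.
};

struct MemorySSAAnalysisResultExample {
  // Moving the result (as the analysis manager may do) moves only the
  // pointer; the MemorySSAExample object itself never changes address.
  std::unique_ptr<MemorySSAExample> MSSA =
      std::make_unique<MemorySSAExample>();
};
```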

Reviewers: george.burgess.iv, dberlin

Subscribers: mcrosier, hfinkel, chandlerc, silvas, llvm-commits

Differential Revision: https://reviews.llvm.org/D23171

llvm-svn: 278028
2016-08-08 17:52:01 +00:00
Sebastian Pop bfb96c5bfd GVN-hoist: enable by default
llvm-svn: 278010
2016-08-08 14:46:15 +00:00
Sean Silva 0873e7d218 Add some comments linking back to PR28400.
Thanks to Mehdi for the suggestion!

llvm-svn: 277984
2016-08-08 07:03:49 +00:00
Sean Silva 7f21f4b264 [PM] More workaround for PR28400
llvm-svn: 277982
2016-08-08 05:38:06 +00:00
Sean Silva 744f7a843f [PM] Invalidate CallGraphAnalysis because it holds AssertingVH
This is essentially PR28400. The fix here is similar to that implemented
in r274656.

llvm-svn: 277980
2016-08-08 05:38:01 +00:00
Daniel Berlin 4b4c722e79 [MSSA] Fix PR28880 by fixing use optimizer's lower bound tracking behavior.
Summary:
In the use optimizer, we need to keep track of whether the lower bound still
dominates us, or else we may decide a lower bound is still valid when it
is not due to intervening pushes/pops.  Fixes PR28880 (and probably a
bunch of other things).

Reviewers: george.burgess.iv

Subscribers: MatzeB, llvm-commits, sebpop

Differential Revision: https://reviews.llvm.org/D23237

llvm-svn: 277978
2016-08-08 04:44:53 +00:00
Eli Friedman 02419a9849 [JumpThreading] Fix handling of aliasing metadata.
Summary:
The correctness fix here is that when we CSE a load with another load,
we need to combine the metadata on the two loads. This matches the
behavior of other passes, like instcombine and GVN.

There's also a minor optimization improvement here: for load PRE, the
aliasing metadata on the inserted load should be the same as the
metadata on the original load. Not sure why the old code was throwing
it away.

Issue found by inspection.

Differential Revision: http://reviews.llvm.org/D21460

llvm-svn: 277977
2016-08-08 04:10:22 +00:00
Davide Italiano e3b916d164 [SimplifyLibCalls] Emit sqrt intrinsic instead of a libcall.
llvm-svn: 277972
2016-08-08 03:23:01 +00:00
Eli Friedman 2a65dd1ba6 [SROA] Fix crash with lifetime intrinsic partially covering alloca.
Summary:
PromoteMemToReg looks specifically for the pattern
bitcast+lifetime.start (or a bitcast-equivalent GEP); any offset
will lead to an assertion failure.

Fixes https://llvm.org/bugs/show_bug.cgi?id=27999 .

Differential Revision: https://reviews.llvm.org/D22737

llvm-svn: 277969
2016-08-08 01:30:53 +00:00
Davide Italiano 27da131f32 [SLC] Emit an intrinsic instead of a libcall for pow.
Differential Revision:  https://reviews.llvm.org/D22104

llvm-svn: 277963
2016-08-07 20:27:03 +00:00
David Majnemer 4e4f4437c2 [InstCombine] Infer inbounds on geps of allocas
llvm-svn: 277950
2016-08-07 07:58:00 +00:00
Michael Zolotukhin 442b82f0eb Revert "Revert "[LoopSimplify] Fix updating LCSSA after separating nested loops.""
This reverts commit r277901. Reapply the commit as it looks like it has
nothing to do with the bot failures.

llvm-svn: 277946
2016-08-07 01:56:54 +00:00
Gor Nishanov 28c889593a CoroSplit: Squash unused variable FnTrigger warning in NDEBUG
llvm-svn: 277938
2016-08-06 21:11:10 +00:00
Gor Nishanov 2ed6e788a8 [Coroutines] Part 5: Add CGSCC restart trigger
Summary:
CoroSplit pass processes the coroutine twice. First, it lets it go through
complete IPO optimization pipeline as a single function. It forces restart
of the pipeline by inserting an indirect call to an empty function "coro.devirt.trigger"
which is devirtualized by CoroElide pass that triggers a restart of the pipeline by CGPassManager.
(In later patches, when CoroSplit pass sees the same coroutine the second time, it splits it up,
adds coroutine subfunctions to the SCC to be processed by IPO pipeline.)

Documentation and overview is here: http://llvm.org/docs/Coroutines.html.

Upstreaming sequence (rough plan)
1.Add documentation. (https://reviews.llvm.org/D22603)
2.Add coroutine intrinsics. (https://reviews.llvm.org/D22659)
3.Add empty coroutine passes. (https://reviews.llvm.org/D22847)
4.Add coroutine devirtualization + tests.
ab) Lower coro.resume and coro.destroy (https://reviews.llvm.org/D22998)
c) Do devirtualization (https://reviews.llvm.org/D23229)
5.Add CGSCC restart trigger + tests. <= we are here
6.Add coroutine heap elision + tests.
7.Add the rest of the logic (split into more patches)

Reviewers: mehdi_amini, majnemer

Subscribers: llvm-commits, mehdi_amini

Differential Revision: https://reviews.llvm.org/D23234

llvm-svn: 277936
2016-08-06 20:44:39 +00:00
Benjamin Kramer 41e66dade1 [Inliner] Use function_ref for functors which are never taken ownership of.
llvm-svn: 277922
2016-08-06 12:33:46 +00:00
Benjamin Kramer a3d4def878 [LoadCombine] Simplify code with a brace init. NFC.
llvm-svn: 277921
2016-08-06 12:11:11 +00:00
Benjamin Kramer b7d3311c77 Move helpers into anonymous namespaces. NFC.
llvm-svn: 277916
2016-08-06 11:13:10 +00:00
Sanjoy Das ba04d3a620 [InstCombine] Don't coerce non-integral pointers to integers
Reviewers: majnemer

Subscribers: mcrosier, llvm-commits

Differential Revision: https://reviews.llvm.org/D23231

llvm-svn: 277910
2016-08-06 02:58:48 +00:00
Matthias Braun 9a0035d8d2 Revert "(refs/bisect/bad) GVN-hoist: enable by default"
GVN-Hoist appears to miscompile llvm-testsuite
SingleSource/Benchmarks/Misc/fbench.c at the moment.

I filed http://llvm.org/PR28880

This reverts commit r277786.

llvm-svn: 277909
2016-08-06 02:23:15 +00:00
Gor Nishanov 31d8c9af89 Part 4c: Coroutine Devirtualization: Devirtualize coro.resume and coro.destroy.
Summary:
This is the 4c patch of the coroutine series. CoroElide pass now checks if PostSplit coro.begin
is referenced by coro.subfn.addr intrinsics. If so, it replaces coro.subfn.addrs with an appropriate coroutine
subfunction associated with that coro.begin.

Documentation and overview is here: http://llvm.org/docs/Coroutines.html.

Upstreaming sequence (rough plan)
1.Add documentation. (https://reviews.llvm.org/D22603)
2.Add coroutine intrinsics. (https://reviews.llvm.org/D22659)
3.Add empty coroutine passes. (https://reviews.llvm.org/D22847)
4.Add coroutine devirtualization + tests.
ab) Lower coro.resume and coro.destroy (https://reviews.llvm.org/D22998)
c) Do devirtualization <= we are here
5.Add CGSCC restart trigger + tests.
6.Add coroutine heap elision + tests.
7.Add the rest of the logic (split into more patches)

Reviewers: majnemer

Subscribers: mehdi_amini, llvm-commits

Differential Revision: https://reviews.llvm.org/D23229

llvm-svn: 277908
2016-08-06 02:16:35 +00:00
Michael Zolotukhin 09cf304ebc Revert "[LoopSimplify] Fix updating LCSSA after separating nested loops."
This reverts commit r277877.
Try to appease clang-x64-ninja-win7 buildbot.

llvm-svn: 277901
2016-08-06 01:48:51 +00:00
Sanjoy Das b8c2ebea08 [IRCE] Remove unused headers; NFC
llvm-svn: 277892
2016-08-06 00:02:01 +00:00
Sanjoy Das cf181867a6 [IRCE] Preserve loop-simplify form
Fixes PR28764.  Right now there is no way to test this, but (as
mentioned on the PR) with Michael Zolotukhin's yet to be checked in
LoopSimplify verfier, 8 of the llvm-lit tests for IRCE crash.

llvm-svn: 277891
2016-08-06 00:01:56 +00:00
Sanjay Patel 8e3ab17c44 [InstCombine] refactor ctlz/cttz folds (NFCI)
Note that this fold really belongs in InstSimplify.
Refactoring here anyway as an intermediate step because
there's a planned addition to this function in D23134.

Differential Revision: https://reviews.llvm.org/D23223

llvm-svn: 277883
2016-08-05 22:42:46 +00:00
Daniel Berlin 7ac3d74017 [MSSA] Use depth first iterator instead of custom version.
Summary:
Originally the plan was to use the custom worklist to do some block popping,
and because we don't actually need a visited set. The custom one we have
here is slightly broken, and it's not worth fixing vs. using
depth_first_iterator since we aren't going to go the route we originally
were going to.

Fixes PR28874
Reviewers: george.burgess.iv

Subscribers: llvm-commits, gberry

Differential Revision: https://reviews.llvm.org/D23187

llvm-svn: 277880
2016-08-05 22:09:14 +00:00
Michael Zolotukhin 4c65c3596a [LoopSimplify] Fix updating LCSSA after separating nested loops.
This fixes PR28825. The problem was that we only checked if a value from
a created inner loop is used in the outer loop, and fixed LCSSA for
them. But we missed to fixup LCSSA for values used in exits of the outer
loop.

llvm-svn: 277877
2016-08-05 21:52:58 +00:00
Daniel Berlin 7af95876cf [MSSA] Match assert vs llvm_unreachable style in verification functions.
llvm-svn: 277873
2016-08-05 21:47:20 +00:00
Daniel Berlin 2919b1c41b Rewrite domination verifier to handle local domination as well.
Summary:
Rewrite domination verifier to handle local domination as well.
This catches a bug Geoff Berry noticed.

Reviewers: george.burgess.iv

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23184

llvm-svn: 277872
2016-08-05 21:46:52 +00:00
Davide Italiano 500929df9c [FlattenCFG] Simplify + remove unused variable. NFCI.
llvm-svn: 277864
2016-08-05 20:53:35 +00:00
Ivan Krasin b05e06e4fd WholeProgramDevirt: print remarks with devirtualized method names.
Summary:
Chrome on Linux uses WholeProgramDevirt for speed ups, and it's
important to detect regressions on both sides: the toolchain,
if fewer methods get devirtualized after an update, and Chrome,
if an innocent-looking change caused many hot methods to become
virtual again.

The need to track devirtualized methods is not Chrome-specific,
but it's probably the only user of the pass at this time.

Reviewers: kcc

Differential Revision: https://reviews.llvm.org/D23219

llvm-svn: 277856
2016-08-05 19:45:16 +00:00
David Callahan 45e442ebaa [ADCE] Refactoring for new functionality (NFC)
Summary:
This is another refactoring to break up the one function into three logical components functions.
Another non-functional change before we start added in features.

Reviewers: nadav, mehdi_amini, majnemer

Subscribers: twoh, freik, llvm-commits

Differential Revision: https://reviews.llvm.org/D23102

llvm-svn: 277855
2016-08-05 19:38:11 +00:00
Dehao Chen 17c6afc35b Do not assign new discriminator for all intrinsics.
Summary: We do not care about intrinsic calls when assigning discriminators.

Reviewers: davidxl, dnovillo

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23212

llvm-svn: 277843
2016-08-05 17:56:49 +00:00
Benjamin Kramer aa160c22f7 [SimplifyCFG] Make range reduction code deterministic.
This generated IR based on the order of evaluation, which is different
between GCC and Clang. With that in mind you get bootstrap miscompares
if you compare a Clang built with GCC-built Clang vs. Clang built with
Clang-built Clang. Diagnosing that made my head hurt.

This also reverts commit r277337, which "fixed" the test case.

llvm-svn: 277820
2016-08-05 14:55:02 +00:00
Nicolai Haehnle 870bf1788c [InstCombine] try to fold (select C, (sext A), B) into logical ops
Summary:
Turn (select C, (sext A), B) into (sext (select C, A, B')) when A is i1 and
B is a compatible constant, also for zext instead of sext. This will then be
further folded into logical operations.

The transformation would be valid for non-i1 types as well, but other parts of
InstCombine prefer to have sext from non-i1 as an operand of select.

Motivated by the shader compiler frontend in Mesa for AMDGPU, which emits i32
for boolean operations. With this change, the boolean logic is fully
recovered.
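
A minimal sketch of recognizing the pattern (assumed, simplified; not the
actual InstCombine code):

```cpp
#include "llvm/IR/Instructions.h"
#include "llvm/IR/PatternMatch.h"
using namespace llvm;
using namespace llvm::PatternMatch;

// Matches 'select C, (sext A), B' where A is i1 and B is a constant, the
// shape that can be rewritten as 'sext (select C, A, B'))'.
bool isSextSelectCandidate(Value *V) {
  Value *Cond, *A;
  Constant *B;
  if (!match(V, m_Select(m_Value(Cond), m_SExt(m_Value(A)), m_Constant(B))))
    return false;
  return A->getType()->isIntegerTy(1);
}
```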

Reviewers: majnemer, spatel, tstellarAMD

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D22747

llvm-svn: 277801
2016-08-05 08:22:29 +00:00
Justin Bogner c7e4fbe11c InstCombine: Clean up some trailing whitespace. NFC
llvm-svn: 277793
2016-08-05 01:09:48 +00:00
Justin Bogner 9979840f59 InstCombine: Replace some never-null pointers with references. NFC
llvm-svn: 277792
2016-08-05 01:06:44 +00:00
Sebastian Pop c33f0e25c9 GVN-hoist: enable by default
llvm-svn: 277786
2016-08-04 23:49:07 +00:00
Sebastian Pop 429740a6c2 GVN-hoist: fix early exit logic
The patch splits a complex && if condition into easier to read and understand
logic.  The wrong early exit condition was letting some instructions whose
operands were not all available pass through when HoistingGeps was true.

Differential Revision: https://reviews.llvm.org/D23174

llvm-svn: 277785
2016-08-04 23:49:05 +00:00
Justin Bogner 19dd0da153 IR: Provide an IRBuilder Inserter that calls a callback after insertion
Add a generalized IRBuilderCallbackInserter, which is just given a
callback to execute after insertion. This can be used to get rid of
the custom inserter in InstCombine, which will in turn allow me to add
target specific InstCombineCalls API for intrinsics without horrible
layering violations.
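
A hedged usage sketch (simplified; exact template/constructor spelling may
differ between LLVM versions):

```cpp
#include "llvm/IR/IRBuilder.h"
using namespace llvm;

void buildWithCallback(BasicBlock *BB) {
  // The callback fires after every instruction the builder inserts, which is
  // what lets a client drop its custom inserter.
  IRBuilder<ConstantFolder, IRBuilderCallbackInserter> Builder(
      BB->getContext(), ConstantFolder(),
      IRBuilderCallbackInserter([](Instruction *I) {
        // e.g. add the new instruction to a worklist here
        (void)I;
      }));
  Builder.SetInsertPoint(BB);
}
```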

llvm-svn: 277784
2016-08-04 23:41:01 +00:00
Michael Kuperstein 3ceac2bbd5 [LV, X86] Be more optimistic about vectorizing shifts.
Shifts with a uniform but non-constant count were considered very expensive to
vectorize, because the splat of the uniform count and the shift would tend to
appear in different blocks. That made the splat invisible to ISel, and we'd
scalarize the shift at codegen time.

Since r201655, CodeGenPrepare sinks those splats to be next to their use, and we
are able to select the appropriate vector shifts. This updates the cost model
to take this into account by making shifts by a uniform value cheap again.

Differential Revision: https://reviews.llvm.org/D23049

llvm-svn: 277782
2016-08-04 22:48:03 +00:00
Sanjay Patel 3bade138b5 [InstCombine] use m_APInt to allow icmp eq (mul X, C1), C2 folds for splat constant vectors
This concludes the splat vector enhancements for foldICmpEqualityWithConstant().
Other commits in this series:
https://reviews.llvm.org/rL277762
https://reviews.llvm.org/rL277752
https://reviews.llvm.org/rL277738
https://reviews.llvm.org/rL277731
https://reviews.llvm.org/rL277659
https://reviews.llvm.org/rL277638
https://reviews.llvm.org/rL277629

llvm-svn: 277779
2016-08-04 22:19:27 +00:00
Matt Arsenault 6ad97732aa GVNHoist: Don't hoist convergent calls
llvm-svn: 277767
2016-08-04 20:52:57 +00:00
David Majnemer f93082e71a [coroutines] Part 4[ab]: Coroutine Devirtualization: Lower coro.resume and coro.destroy.
This is the forth patch in the coroutine series. CoroEaly pass now lowers coro.resume
and coro.destroy intrinsics by replacing them with an indirect call to an address
returned by coro.subfn.addr intrinsic. This is done so that CGPassManager recognizes
devirtualization when CoroElide replaces a call to coro.subfn.addr with an appropriate
function address.

Patch by Gor Nishanov!

Differential Revision: https://reviews.llvm.org/D22998

llvm-svn: 277765
2016-08-04 20:30:07 +00:00
Sanjay Patel d938e88e89 [InstCombine] use m_APInt to allow icmp eq (and X, C1), C2 folds for splat constant vectors
llvm-svn: 277762
2016-08-04 20:05:02 +00:00
Sanjay Patel b3de75d3a0 [InstCombine] use m_APInt to allow icmp eq (or X, C1), C2 folds for splat constant vectors
llvm-svn: 277752
2016-08-04 19:12:12 +00:00
Sanjay Patel bcaf6f39dd [InstCombine] use m_APInt to allow icmp eq (op X, Y), C folds for splat constant vectors
I'm removing a misplaced pair of more specific folds from InstCombine in this patch as well,
so we know where those folds are happening in InstSimplify.

llvm-svn: 277738
2016-08-04 17:48:04 +00:00
Alina Sbirlea 6f937b1144 LoadStoreVectorizer: Remove TargetBaseAlign. Keep alignment for stack adjustments.
Summary:
TargetBaseAlign is no longer required since LSV checks if target allows misaligned accesses.
A constant defining a base alignment is still needed for stack accesses where alignment can be adjusted.

Previous patch (D22936) was reverted because tests were failing. This patch also fixes the cause of those failures:
- x86 failing tests either did not have the right target, or the right alignment.
- NVPTX failing tests did not have the right alignment.
- AMDGPU failing test (merge-stores) should allow vectorization with the given alignment but the target info
  considers <3xi32> a non-standard type and gives up early. This patch removes the condition and only checks
  for a maximum size allowed and relies on the next condition checking for %4 for correctness.
  This should be revisited to include 3xi32 as a MVT type (on arsenm's non-immediate todo list).

Note that checking the sizeInBits for a MVT is undefined (leads to an assertion failure),
so we need to create an EVT, hence the interface change in allowsMisaligned to include the Context.

Reviewers: arsenm, jlebar, tstellarAMD

Subscribers: jholewinski, arsenm, mzolotukhin, llvm-commits

Differential Revision: https://reviews.llvm.org/D23068

llvm-svn: 277735
2016-08-04 16:38:44 +00:00
Sanjay Patel 9d591d15ec [InstCombine] use m_APInt to allow icmp eq (sub C1, X), C2 folds for splat constant vectors
llvm-svn: 277731
2016-08-04 15:19:25 +00:00
Amaury Sechet 6bea674c43 Add popcount(n) == bitsize(n) -> n == -1 transformation.
Summary: As per title.
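
For intuition, the identity behind the fold (hypothetical C++ check, not
InstCombine code): the population count of an i32 equals 32 exactly when all
bits are set, i.e. when the value is -1.

```cpp
#include <cstdint>

bool allBitsSet(uint32_t N) {
  // __builtin_popcount is the GCC/Clang builtin corresponding to ctpop.
  return __builtin_popcount(N) == 32; // same as: N == UINT32_MAX (i.e. -1)
}
```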

Reviewers: majnemer, spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23139

llvm-svn: 277694
2016-08-04 05:27:20 +00:00
David Majnemer 4eefd6bca4 Forgot the dyn_cast_or_null intended for r277691.
llvm-svn: 277693
2016-08-04 04:47:18 +00:00
David Majnemer 909793fa63 Reinstate "[CloneFunction] Don't remove side effecting calls"
This reinstates r277611 + r277614 and reverts r277642.  A cast_or_null
should have been a dyn_cast_or_null.

llvm-svn: 277691
2016-08-04 04:24:02 +00:00
Bruno Cardoso Lopes bd887581fc Revert "GVN-hoist: enable by default" & "Make GVN Hoisting obey optnone/bisect."
This reverts commits r277685 & r277688. r277685 broke compiler-rt
compilation http://lab.llvm.org:8080/green/job/clang-stage1-configure-RA_build/23335
and r277685 is a followup from it.

llvm-svn: 277690
2016-08-04 04:16:24 +00:00
Sebastian Pop 70ffe6523f GVN-hoist: enable by default
As we addressed all compilation time problems with GVN-hoist
https://llvm.org/bugs/show_bug.cgi?id=28670
this patch turns GVN-hoist back by default.

Differential Revision: https://reviews.llvm.org/D23136

llvm-svn: 277685
2016-08-04 01:59:42 +00:00
Sanjay Patel 00a324e893 [InstCombine] use m_APInt to allow icmp eq (add X, C1), C2 folds for splat constant vectors
llvm-svn: 277659
2016-08-03 22:08:44 +00:00
George Burgess IV 363da6f589 [MSSA] Fix a bug in MemorySSA's move ctor.
Not a correctness issue, but it would be nice if we didn't have to
recompute our block numbering (worst-case) every time we move MSSA.

llvm-svn: 277652
2016-08-03 21:07:52 +00:00
Sebastian Pop 2aadad7243 GVN-hoist: limit the length of dependent instructions
Limit the number of times the while(1) loop is executed. With this restriction
the number of hoisted instructions does not change in a significant way on the
test-suite.

Differential Revision: https://reviews.llvm.org/D23028

llvm-svn: 277651
2016-08-03 20:54:38 +00:00
Sebastian Pop 4ba7c88cc7 GVN-hoist: compute DFS numbers once
With this patch we compute the DFS numbers of instructions only once and update
them during the code generation when an instruction gets hoisted.

Differential Revision: https://reviews.llvm.org/D23021

llvm-svn: 277650
2016-08-03 20:54:36 +00:00
Sebastian Pop 5d3822fc12 GVN-hoist: compute MSSA once per function (PR28670)
With this patch we compute the MemorySSA once and update it in the code generator.

Differential Revision: https://reviews.llvm.org/D22966

llvm-svn: 277649
2016-08-03 20:54:33 +00:00
Reid Kleckner a6be60871f Revert "[CloneFunction] Don't remove side effecting calls"
This reverts commit r277611 and the followup r277614.

Bootstrap builds and chromium builds are crashing during inlining after
this change.

llvm-svn: 277642
2016-08-03 20:01:01 +00:00
George Burgess IV f7672854f0 [MSSA] clang-format. NFC.
Didn't want to fold this in with r277640, since it touches bits that
aren't entirely related to r277640.

llvm-svn: 277641
2016-08-03 19:59:11 +00:00
George Burgess IV 024f3d2683 [MSSA] Add special handling for invariant/constant loads.
This is a follow-up to r277637. It teaches MemorySSA that invariant
loads (and loads of provably constant memory) are always liveOnEntry.

llvm-svn: 277640
2016-08-03 19:57:02 +00:00
Sanjay Patel 2e9675ff52 [InstCombine] use m_APInt to allow icmp eq (srem X, C1), C2 folds for splat constant vectors
llvm-svn: 277638
2016-08-03 19:48:40 +00:00
George Burgess IV 82e355ce48 [MSSA] Add logic for special handling of atomics/volatiles.
This patch makes MemorySSA recognize atomic/volatile loads, and makes
MSSA treat said loads specially. This allows us to be a bit more
aggressive in some cases.

Administrative note: Revision was LGTM'ed by reames in person.
Additionally, this doesn't include the `invariant.load` recognition in
the differential revision, because I feel it's better to commit that
separately. Will commit soon.

Differential Revision: https://reviews.llvm.org/D16875

llvm-svn: 277637
2016-08-03 19:39:54 +00:00
Tobias Grosser 8757e387dd [InstCombine] Refactor optimization of zext(or(icmp, icmp)) to enable more aggressive cast-folding
Summary:
InstCombine unfolds expressions of the form `zext(or(icmp, icmp))` to `or(zext(icmp), zext(icmp))` such that in a later iteration of InstCombine the exposed `zext(icmp)` instructions can be optimized. We now combine this unfolding and the subsequent `zext(icmp)` optimization to be performed together. Since the unfolding doesn't happen separately anymore, we also again enable the folding of `logic(cast(icmp), cast(icmp))` expressions to `cast(logic(icmp, icmp))` which had been disabled due to its interference with the unfolding transformation.

Tested via `make check` and `lnt`.

Background
==========

For a better understanding on how it came to this change we subsequently summarize its history. In commit r275989 we've already tried to enable the folding of `logic(cast(icmp), cast(icmp))` to `cast(logic(icmp, icmp))` which had to be reverted in r276106 because it could lead to an endless loop in InstCombine (also see http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20160718/374347.html). The root of this problem is that in `visitZExt()` in InstCombineCasts.cpp there also exists a reverse of the above folding transformation, that unfolds `zext(or(icmp, icmp))` to `or(zext(icmp), zext(icmp))` in order to expose `zext(icmp)` operations which would then possibly be eliminated by subsequent iterations of InstCombine. However, before these `zext(icmp)` would be eliminated the folding from r275989 could kick in and cause InstCombine to endlessly switch back and forth between the folding and the unfolding transformation. This is the reason why we now combine the `zext`-unfolding and the elimination of the exposed `zext(icmp)` to happen at one go because this enables us to still allow the cast-folding in `logic(cast(icmp), cast(icmp))` without entering an endless loop again.

Details on the submitted changes
================================

- In `visitZExt()` we combine the unfolding and optimization of `zext` instructions.
- In `transformZExtICmp()` we have to use `Builder->CreateIntCast()` instead of `CastInst::CreateIntegerCast()` to make sure that the new `CastInst` is inserted in a `BasicBlock`. The new calls to `transformZExtICmp()` that we introduce in `visitZExt()` would otherwise cause the corresponding assertions to be triggered (in our case this happened, for example, with lnt for the MultiSource/Applications/sqlite3 and SingleSource/Regression/C++/EH/recursive-throw tests). The subsequent usage of `replaceInstUsesWith()` is necessary to ensure that the new `CastInst` replaces the `ZExtInst` accordingly.
- In InstCombineAndOrXor.cpp we again allow the folding of casts on `icmp` instructions.
- The instruction order in the optimized IR for the zext-or-icmp.ll test case is different with the introduced changes.
- The test cases in zext.ll have been adopted from the reverted commits r275989 and r276105.

Reviewers: grosser, majnemer, spatel

Subscribers: eli.friedman, majnemer, llvm-commits

Differential Revision: https://reviews.llvm.org/D22864

Contributed-by: Matthias Reisinger <d412vv1n@gmail.com>
llvm-svn: 277635
2016-08-03 19:30:35 +00:00
Sanjay Patel 43aeb001c9 [InstCombine] use m_APInt to allow icmp (binop X, Y), C folds with constant splat vectors
This removes the restriction for the icmp constant, but as noted by the FIXME comments, 
we still need to change individual checks for binop operand constants.
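
One hand-written member of this family (assumed, not from the commit) with a splat compare constant: `x ^ 5 == 12` is equivalent to `x == 9`, and the change lets that reasoning apply lane-wise.

```
define <2 x i1> @xor_icmp_splat(<2 x i32> %x) {
  %a = xor <2 x i32> %x, <i32 5, i32 5>
  ; candidate to fold to: icmp eq <2 x i32> %x, <i32 9, i32 9>  (5 xor 12 == 9)
  %c = icmp eq <2 x i32> %a, <i32 12, i32 12>
  ret <2 x i1> %c
}
```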

llvm-svn: 277629
2016-08-03 18:59:03 +00:00
David Majnemer fa8ef91748 [CloneFunction] Don't crash if the value map doesn't hold something
It is possible for the value map to not have an entry for some value
that has already been removed.

I don't have a testcase, this is fall-out from a buildbot.

llvm-svn: 277614
2016-08-03 17:37:10 +00:00
Sanjay Patel 51a767c6b8 use local variables; NFC
llvm-svn: 277612
2016-08-03 17:23:08 +00:00
David Majnemer fad0490869 [CloneFunction] Don't remove side effecting calls
We were able to figure out that the result of a call is some constant.
While propagating that fact, we added the constant to the value map.
This is problematic because it results in us losing the call site when
processing the value map.

This fixes PR28802.

llvm-svn: 277611
2016-08-03 17:12:47 +00:00
Renato Golin f583097434 Revert "Teach CorrelatedValuePropagation to mark adds as no wrap"
This reverts commit r277592, trying to fix the AArch64 42VMA buildbot.

llvm-svn: 277607
2016-08-03 16:20:48 +00:00
Gil Rapaport e7a8fab275 [Loop Vectorizer] Move store-predication into its own function, remove obsolete comment (NFC)
Differential Revision: https://reviews.llvm.org/D23013

llvm-svn: 277595
2016-08-03 13:23:43 +00:00
Artur Pilipenko 68cb947cc9 Teach CorrelatedValuePropagation to mark adds as no wrap
Use LVI to prove that adds do not wrap. The change is motivated by the https://llvm.org/bugs/show_bug.cgi?id=28620 bug and is the first step toward fixing that problem.
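
A minimal hand-written sketch (assumed, not from the patch) of the kind of add this targets: on the guarded path LVI knows `%x` is in [0, 100), so the increment cannot wrap and can be tagged with no-wrap flags.

```
define i32 @guarded_inc(i32 %x) {
entry:
  %in.range = icmp ult i32 %x, 100
  br i1 %in.range, label %then, label %exit
then:
  ; LVI's range for %x here is [0, 100), so this can become
  ; "add nuw nsw i32 %x, 1"
  %inc = add i32 %x, 1
  ret i32 %inc
exit:
  ret i32 0
}
```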

Reviewed By: sanjoy

Differential Revision: http://reviews.llvm.org/D23059

llvm-svn: 277592
2016-08-03 13:11:39 +00:00
David Callahan cc5cd4dc65 [ADCE] Refactor anticipating new functionality (NFC)
Summary:
This is the first refactoring before adding new functionality.
Add a class wrapper for the functions and container for
state associated with the transformation.

No functional change

Reviewers: majnemer, nadav, mehdi_amini

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23065

llvm-svn: 277565
2016-08-03 04:28:39 +00:00
George Burgess IV 14633b5cd3 [MSSA] Fix a caching bug.
This fixes a bug where we'd sometimes cache overly-conservative results
with our walker. This bug was made more obvious by r277480, which makes
our cache far more spotty than it was. Test case is llvm-unit, because
we're likely going to use CachingWalker only for def optimization in the
future.

The bug stems from the fact that there was a place where the walker assumed that
`DefNode.Last` was a valid target to cache to when failing to optimize
phis. This is sometimes incorrect if we have a cache hit. The fix is to
use the thing we *can* assume is a valid target to cache to. :)

llvm-svn: 277559
2016-08-03 01:22:19 +00:00
Chandler Carruth 8562d3a5e4 [Inliner] clang-format various parts of the inliner prior to changes
here. NFC.

llvm-svn: 277557
2016-08-03 01:02:31 +00:00
Ivan Krasin 3aade11252 Add -lowertypetests-bitsets-level to control bitsets generation.
Summary:
Sometimes, bitsets can get really large (>300k entries) and
we might want to drop a check, as it would cost too much.

Adding a flag to control how much penalty we are willing to pay
for bitsets.

Reviewers: kcc

Differential Revision: https://reviews.llvm.org/D23088

llvm-svn: 277556
2016-08-03 00:59:38 +00:00
Daniel Berlin df10119e4e Support for lifetime begin/end markers in the MemorySSA use optimizer
Summary: Depends on D23072

Reviewers: george.burgess.iv

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23076

llvm-svn: 277553
2016-08-03 00:01:46 +00:00
Sanjay Patel ab50a93888 [InstCombine] replace dyn_casts with matches; NFCI
Clean-up before changing this to allow folds for vectors.

llvm-svn: 277538
2016-08-02 22:38:33 +00:00
Piotr Padlewski 47509f6185 Imported statistics types changes
Reviewers: tejohnson, eraman

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D22980

llvm-svn: 277534
2016-08-02 22:18:47 +00:00
Daniel Berlin dff31deb1e Move to having a single real instructionClobbersQuery
Summary: We really want to move towards MemoryLocOrCall (or fix AA) everywhere, but for now, this lets us have a single instructionClobbersQuery.

Reviewers: george.burgess.iv

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23072

llvm-svn: 277530
2016-08-02 21:57:52 +00:00
Michael Zolotukhin b2738e41bf [LoopUnroll] Switch the default value of -unroll-runtime-epilog back to its original value.
As agreed in post-commit review of r265388, I'm switching the flag to
its original value until the 90% runtime performance regression on
SingleSource/Benchmarks/Stanford/Bubblesort is addressed.

llvm-svn: 277524
2016-08-02 21:24:14 +00:00
Wei Mi dc7001afb2 [LoopVectorize] Change comment for isOutOfScope in collectLoopUniforms, NFC
Update comment for isOutOfScope and add a testcase for uniform value being used
out of scope.

Differential Revision: https://reviews.llvm.org/D23073

llvm-svn: 277515
2016-08-02 20:27:49 +00:00
Daniel Berlin 26fcea91f6 Fixes for post-commit review comments on r277480
llvm-svn: 277510
2016-08-02 20:02:21 +00:00
Sanjoy Das 83a72850c7 [IRCE] Rename variable; NFC
There is nothing "Original" about "OriginalLoopInfo".

llvm-svn: 277506
2016-08-02 19:32:01 +00:00
Sanjoy Das f45e03e201 [IRCE] Preserve DomTree and LCSSA
This changes IRCE to "preserve" LCSSA and DomTree by recomputing them.
It still does not preserve LoopSimplify.

llvm-svn: 277505
2016-08-02 19:31:54 +00:00
Michael Zolotukhin d9b6ad3c01 [LoopUnroll] Ensure we create prolog loops in simplified form.
llvm-svn: 277502
2016-08-02 19:19:31 +00:00
Daniel Berlin de4be65313 MSVC 2013 does not implement C++11 unions properly, so remove the anonymous union for now,
and leave a FIXME.

llvm-svn: 277485
2016-08-02 16:59:51 +00:00
Daniel Berlin c43aa5a5b6 Rewrite the use optimizer to be less memory intensive and 50% faster.
Fixes PR28670

Summary:
Rewrite the use optimizer to be less memory intensive and 50% faster.
Fixes PR28670

The new use optimizer works like a standard SSA renaming pass, storing
all possible versions a MemorySSA use could get in a stack, and just
tracking indexes into the stack.
This uses much less memory than caching N^2 alias query results.
It's also a lot faster.

The current version defers phi node walking to the normal walker.

Reviewers: george.burgess.iv

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23032

llvm-svn: 277480
2016-08-02 16:24:03 +00:00
Matthew Simpson 18d8898317 [LV] Generate both scalar and vector integer induction variables
This patch enables the vectorizer to generate both scalar and vector versions
of an integer induction variable for a given loop. Previously, we only
generated a scalar induction variable if we knew all its users were going to be
scalar. Otherwise, we generated a vector induction variable. In the case of a
loop with both scalar and vector users of the induction variable, we would
generate the vector induction variable and extract scalar values from it for
the scalar users. With this patch, we now generate both versions of the
induction variable when there are both scalar and vector users and select which
version to use based on whether the user is scalar or vector.

Differential Revision: https://reviews.llvm.org/D22869

llvm-svn: 277474
2016-08-02 15:25:16 +00:00
Matthew Simpson 58f562887b [LV] Untangle the concepts of uniform and scalar
This patch refactors the logic in collectLoopUniforms and
collectValuesToIgnore, untangling the concepts of "uniform" and "scalar". It
adds isScalarAfterVectorization along side isUniformAfterVectorization to
distinguish the two. Known scalar values include those that are uniform,
getelementptr instructions that won't be vectorized, and induction variables
and induction variable update instructions whose users are all known to be
scalar.

This patch includes the following functional changes:

- In collectLoopUniforms, we mark uniform the pointer operands of interleaved
  accesses. Although non-consecutive, these pointers are treated like
  consecutive pointers during vectorization.

- In collectValuesToIgnore, we insert a value into VecValuesToIgnore if it
  isScalarAfterVectorization rather than isUniformAfterVectorization. This
  differs from the previous functionality in that we now add getelementptr
  instructions that will not be vectorized into VecValuesToIgnore.

This patch also removes the ValuesNotWidened set used for induction variable
scalarization since, after the above changes, it is now equivalent to
isScalarAfterVectorization.

Differential Revision: https://reviews.llvm.org/D22867

llvm-svn: 277460
2016-08-02 14:29:41 +00:00
Benjamin Kramer a0053cc0af [LoadStoreVectorizer] Don't use a linear walk for an existence check in a SmallPtrSet
No functionality change intended.

llvm-svn: 277436
2016-08-02 09:35:17 +00:00
Junmo Park db8f6eebee Minor code cleanups. NFC.
llvm-svn: 277415
2016-08-02 04:38:27 +00:00
Sean Silva f801575fd0 CodeExtractor : Add ability to preserve profile data.
Added ability to estimate the entry count of the extracted function and
the branch probabilities of the exit branches.

Patch by River Riddle!

Differential Revision: https://reviews.llvm.org/D22744

llvm-svn: 277411
2016-08-02 02:15:45 +00:00
Tim Shen b44909eccb [ADT] NFC: Generalize GraphTraits requirement of "NodeType *" in interfaces to "NodeRef", and migrate SCCIterator.h to use NodeRef
Summary: By generalize the interface, users are able to inject more flexible Node token into the algorithm, for example, a pair of vector<Node>* and index integer. Currently I only migrated SCCIterator to use NodeRef, but more is coming. It's a NFC.

Reviewers: dblaikie, chandlerc

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D22937

llvm-svn: 277399
2016-08-01 22:32:20 +00:00
Derek Schuff c64d7655b2 [WebAssembly] Support CFI for WebAssembly target
Summary: This patch implements CFI for WebAssembly. It modifies the
LowerTypeTest pass to pre-assign table indexes to functions that are
called indirectly, and lowers type checks to test against the
appropriate table indexes. It also modifies the WebAssembly backend to
support a special ".indidx" assembly directive that propagates the table
index assignments out to the linker.

Patch by Dominic Chen

Differential Revision: https://reviews.llvm.org/D21768

llvm-svn: 277398
2016-08-01 22:25:02 +00:00
Michael Kuperstein c40618610f [PM] Port SpeculativeExecution to the new PM
Differential Revision: https://reviews.llvm.org/D23033

llvm-svn: 277393
2016-08-01 21:48:33 +00:00
Xinliang David Li d119761bbe [Profile] IR profiling minor cleanup /nfc
Differential Revision: http://reviews.llvm.org/D22995

llvm-svn: 277379
2016-08-01 20:25:06 +00:00
Matthew Simpson 228f973189 [LV] Move isGatherOrScatterLegal into LoopVectorizationLegality (NFC)
llvm-svn: 277376
2016-08-01 20:11:25 +00:00
Matthew Simpson 1ce88ff6a7 [LV] Use getPointerOperand helper where appropriate (NFC)
llvm-svn: 277375
2016-08-01 20:08:09 +00:00
James Molloy bade86cedc [SimplifyCFG] Fix nasty RAUW bug from r277325
Using RAUW was wrong here; if we have a switch transform such as:
  18 -> 6 then
  6 -> 0

If we use RAUW, while performing the second transform the  *transformed* 6
from the first will be also replaced, so we end up with:
  18 -> 0
  6 -> 0

Found by clang stage2 bootstrap; testcase added.

llvm-svn: 277332
2016-08-01 09:34:48 +00:00
James Molloy b2e436de42 [SimplifyCFG] Range reduce switches
If a switch is sparse and all the cases (once sorted) are in arithmetic progression, we can extract the common factor out of the switch and create a dense switch. For example:

    switch (i) {
    case 5: ...
    case 9: ...
    case 13: ...
    case 17: ...
    }

can become:

    if ( (i - 5) % 4 ) goto default;
    switch ((i - 5) / 4) {
    case 0: ...
    case 1: ...
    case 2: ...
    case 3: ...
    }

or even better:

   switch (ROTR(i - 5, 2)) {
   case 0: ...
   case 1: ...
   case 2: ...
   case 3: ...
   }

The division and remainder operations could be costly so we only do this if the factor is a power of two, and emit a right-rotate instead of a divide/remainder sequence. Dense switches can be lowered significantly better than sparse switches and can even be transformed into lookup tables.

llvm-svn: 277325
2016-08-01 07:45:11 +00:00
Sean Silva 423c7149dc Revert r277313 and r277314.
They seem to trigger an LSan failure:
http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-fast/builds/15140/steps/check-llvm%20asan/logs/stdio

Revert "Add the tests for r277313"

This reverts commit r277314.

Revert "CodeExtractor : Add ability to preserve profile data."

This reverts commit r277313.

llvm-svn: 277317
2016-08-01 04:16:09 +00:00
Sean Silva a0a802abe3 Fix - CodeExtractor : Inherit Target Dependent Attributes from the parent function.
When extracting a set of blocks make sure to inherit all of the target
dependent attributes to make sure that the function will be valid for
lowering. One example is the "target-features" attribute for x86, if the
extracted region has functionality that relies on a specific feature it
will fail to be lowered.
This also allows for extracted functions to be valid for inlining, at
least back into the parent function, as the target attributes are tested
when inlining for compatibility.

Patch by River Riddle!

Differential Revision: https://reviews.llvm.org/D22713

llvm-svn: 277315
2016-08-01 03:15:32 +00:00
Sean Silva 6208924323 CodeExtractor : Add ability to preserve profile data.
Added ability to estimate the entry count of the extracted function and
the branch probabilities of the exit branches.

Patch by River Riddle!

Differential Revision: https://reviews.llvm.org/D22744

llvm-svn: 277313
2016-08-01 02:59:26 +00:00
Daniel Berlin 5130cc831a Fix the MemorySSA updating API to enable people to create memory accesses before removing old ones
llvm-svn: 277309
2016-07-31 21:08:20 +00:00
Adam Nemet 12937c361f [LoopUnroll] Include hotness of region in opt remark
LoopUnroll is a loop pass, so the analysis of OptimizationRemarkEmitter
is added to the common function analysis passes that loop passes
depend on.

The BFI and indirectly BPI used in this pass is computed lazily so no
overhead should be observed unless -pass-remarks-with-hotness is used.

This is how the patch affects the O3 pipeline:

         Dominator Tree Construction
         Natural Loop Information
         Canonicalize natural loops
         Loop-Closed SSA Form Pass
         Basic Alias Analysis (stateless AA impl)
         Function Alias Analysis Results
         Scalar Evolution Analysis
+        Lazy Branch Probability Analysis
+        Lazy Block Frequency Analysis
+        Optimization Remark Emitter
         Loop Pass Manager
           Rotate Loops
           Loop Invariant Code Motion
           Unswitch loops
         Simplify the CFG
         Dominator Tree Construction
         Basic Alias Analysis (stateless AA impl)
         Function Alias Analysis Results
         Combine redundant instructions
         Natural Loop Information
         Canonicalize natural loops
         Loop-Closed SSA Form Pass
         Scalar Evolution Analysis
+        Lazy Branch Probability Analysis
+        Lazy Block Frequency Analysis
+        Optimization Remark Emitter
         Loop Pass Manager
           Induction Variable Simplification
           Recognize loop idioms
           Delete dead loops
           Unroll loops
...

llvm-svn: 277203
2016-07-29 19:29:47 +00:00
Andrew Kaylor b99d1cc7ed Recommitting r275284: add support to inline __builtin_mempcpy
Patch by Sunita Marathe

Third try, now following fixes to MSan to handle mempcpy in such a way that this commit won't break the MSan buildbots. (Thanks, Evegenii!)

llvm-svn: 277189
2016-07-29 18:23:18 +00:00
David Majnemer 130b9f99d6 [EarlyCSE] Correctly handle simplified, but live, instructions
Some instructions may have their uses replaced with a symbolic constant.
However, the instruction may still have side effects which preclude it
from being removed from the function.  EarlyCSE treated such an
instruction as if it were removed, resulting in PR28763.

llvm-svn: 277114
2016-07-29 05:39:21 +00:00
David Majnemer d536f2328e [ConstantFolding] Teach the folder how to fold ConstantVector
A ConstantVector can have ConstantExpr operands and vice versa.
However, the folder had no ability to fold ConstantVectors which, in
some cases, was an optimization barrier.

Instead, rephrase the folder in terms of Constants instead of
ConstantExprs and teach callers how to deal with failure.

llvm-svn: 277099
2016-07-29 03:27:26 +00:00
Piotr Padlewski 84abc74f2c Added ThinLTO inlining statistics
Summary:
copypasta doc of ImportedFunctionsInliningStatistics class
 \brief Calculate and dump ThinLTO specific inliner stats.
 The main statistics are:
 (1) Number of inlined imported functions,
 (2) Number of imported functions inlined into importing module (indirect),
 (3) Number of non imported functions inlined into importing module
 (indirect).
 The difference between the first and the second is that the first stat counts
 all performed inlines on imported functions, but the second one counts only the
 functions that have eventually been inlined into a function in the importing
 module (by a chain of inlines). Because llvm uses a bottom-up inliner, it is
 possible to e.g. import functions `A` and `B` and then inline `B` into `A`,
 and after this `A` might be too big to be inlined into some other function
 that calls it. It calculates this statistic by building a graph, where
 the nodes are functions and the edges are performed inlines, and then by marking
 the edges starting from non-imported functions.

 If `Verbose` is set to true, then it also dumps statistics
 per each inlined function, sorted by the greatest inlines count like
 - number of performed inlines
 - number of performed inlines to importing module

Reviewers: eraman, tejohnson, mehdi_amini

Subscribers: mehdi_amini, llvm-commits

Differential Revision: https://reviews.llvm.org/D22491

llvm-svn: 277089
2016-07-29 00:27:16 +00:00
Evgeniy Stepanov d240a889ad [sanitizer] Simplify and future-proof maybeMarkSanitizerLibraryCallNoBuiltin().
Sanitizers set nobuiltin attribute on certain library functions to
avoid a situation where such function is neither instrumented nor
intercepted.

At the moment the list of interesting functions is hardcoded. This
change replaces it with logic based on
TargetLibraryInfo::hasOptimizedCodegen and the presence of the readnone
function attribute (sanitizers are generally interested in memory
behavior of library functions).

This is expected to be a no-op change: the new logic matches exactly
the same set of functions.

r276771 (currently reverted) added mempcpy() to the list, breaking
MSan tests. With this change, r276771 can be safely re-landed.

llvm-svn: 277086
2016-07-28 23:45:15 +00:00
Vitaly Buka 0ab23cf1c8 Do not remove empty lifetime.start/lifetime.end ranges
Summary:
Asan stack-use-after-scope check should poison alloca even if there is
no access between start and end.

This is possible for code like this:
for (int i = 0; i < 3; i++) {
  int x;
  p = &x;
}

"Loop Invariant Code Motion" will move "p = &x;" out of the loop, making
start/end range empty.
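
In IR terms the resulting shape is an adjacent marker pair with no access in between (hand-written sketch, assumed, not from the patch), which must still be kept so ASan poisons the slot outside its scope:

```
declare void @llvm.lifetime.start(i64, i8* nocapture)
declare void @llvm.lifetime.end(i64, i8* nocapture)

define void @empty_range() {
  %x = alloca i32, align 4
  %p = bitcast i32* %x to i8*
  ; no load or store between the markers, yet ASan relies on them
  call void @llvm.lifetime.start(i64 4, i8* %p)
  call void @llvm.lifetime.end(i64 4, i8* %p)
  ret void
}
```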

PR27453

Reviewers: eugenis

Differential Revision: https://reviews.llvm.org/D22842

llvm-svn: 277072
2016-07-28 22:59:03 +00:00
Vitaly Buka 2fae6a7702 Should be committed as one CL.
This reverts commits r277068 r277067 r277066.

llvm-svn: 277071
2016-07-28 22:59:01 +00:00
Vitaly Buka 21a9e573ed [asan] Add const into few methods
Summary: No functional changes

Reviewers: eugenis

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D22899

llvm-svn: 277069
2016-07-28 22:50:50 +00:00
Vitaly Buka f0500b6ae5 Do not remove empty lifetime.start/lifetime.end ranges
Summary:
Asan stack-use-after-scope check should poison alloca even if there is
no access between start and end.

This is possible for code like this:
for (int i = 0; i < 3; i++) {
  int x;
  p = &x;
}

"Loop Invariant Code Motion" will move "p = &x;" out of the loop, making
start/end range empty.

PR27453

Reviewers: eugenis

Differential Revision: https://reviews.llvm.org/D22842

llvm-svn: 277068
2016-07-28 22:50:48 +00:00
Vitaly Buka 3645793872 maned
llvm-svn: 277067
2016-07-28 22:50:45 +00:00
Vitaly Buka caca9da4ff range
llvm-svn: 277066
2016-07-28 22:50:43 +00:00
Michael Kuperstein e45d4d9b35 [PM] Port LowerGuardIntrinsic to the new PM.
llvm-svn: 277057
2016-07-28 22:08:41 +00:00
Alina Sbirlea 64acfb57bd Revert r277038 until clearing why tests fail.
llvm-svn: 277039
2016-07-28 21:35:20 +00:00
Alina Sbirlea 7116eb6e16 Remove TargetBaseAlign. Keep alignment for stack adjustments.
Summary:
TargetBaseAlign is no longer required since LSV checks if target allows misaligned accesses.
A constant defining a base alignment is still needed for stack accesses where alignment can be adjusted.

Reviewers: llvm-commits, jlebar

Subscribers: mzolotukhin, arsenm

Differential Revision: https://reviews.llvm.org/D22936

llvm-svn: 277038
2016-07-28 21:26:40 +00:00
David Majnemer 56fdf0d97e Really try to pacify the build bots :/
llvm-svn: 277037
2016-07-28 21:22:31 +00:00
David Majnemer 4919cb87e6 Try to pacify the builders
llvm-svn: 277036
2016-07-28 21:16:51 +00:00
David Majnemer 3d32b7ed0d [coroutines] Part 3 of N: Adding Boilerplate for Coroutine Passes
This adds boilerplate code for all coroutine passes,
the passes are no-ops for now.
Also, a small test has been added to verify that passes execute in
the expected order or not at all if coroutine support is disabled.

Patch by Gor Nishanov!

Differential Revision: https://reviews.llvm.org/D22847

llvm-svn: 277033
2016-07-28 21:04:31 +00:00
David Majnemer 6e9b47bc8a Add EP_CGSCCOptimizerLate extension point to PassManagerBuilder
The EP_CGSCCOptimizerLate extension point allows adding CallGraphSCC
passes at the end of the main CallGraphSCC passes and before any
function simplification passes run by CGPassManager.

Patch by Gor Nishanov!

Differential Revision: https://reviews.llvm.org/D22897

llvm-svn: 276953
2016-07-28 03:28:43 +00:00
David Majnemer 0be7155350 [InstCombine] Handle failures from ConstantFoldConstantExpression
ConstantFoldConstantExpression returns null when folding fails.

This fixes PR28745.

llvm-svn: 276952
2016-07-28 02:29:06 +00:00
Wei Mi 315bb33f27 Fix the assertion error in collectLoopUniforms caused by empty Worklist before expanding.
Contributed-by: David Callahan

Differential Revision: https://reviews.llvm.org/D22886

llvm-svn: 276943
2016-07-27 23:53:58 +00:00
Michael Zolotukhin ff5ce639de Add verifyAnalysis for LCSSA.
Summary:
LCSSAWrapperPass currently doesn't override the verifyAnalysis method, so the pass
manager doesn't verify LCSSA. This patch adds the method so that we start
verifying LCSSA between loop passes.

Reviewers: chandlerc, sanjoy, hfinkel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D22888

llvm-svn: 276941
2016-07-27 23:35:53 +00:00
Justin Lebar 37f4e0e096 [LSV] Use Instruction*s rather than Value*s where possible.
Summary:
Given the crash in D22878, this patch converts the load/store vectorizer
to use explicit Instruction*s wherever possible.  This is an overall
simplification and should be an improvement in safety, as we have fewer
naked cast<>s, and now where we use Value*, we really mean something
different from Instruction*.

This patch also gets rid of some cast<>s around Value*s returned by
Builder.  Given that Builder constant-folds everything, we can't assume
much about what we get out of it.

One downside of this patch is that we have to copy our chain before
calling propagateMetadata.  But I don't think this is a big deal, as our
chains are very small (usually 2 or 4 elems).

Reviewers: asbirlea

Subscribers: mzolotukhin, llvm-commits, arsenm

Differential Revision: https://reviews.llvm.org/D22887

llvm-svn: 276938
2016-07-27 23:06:00 +00:00
Justin Lebar 23a9686011 [LSV] Don't assume that bitcast ops are Instructions.
Summary:
When we ask the builder to create a bitcast on a constant, we get back a
constant, not an instruction.

Reviewers: asbirlea

Subscribers: jholewinski, mzolotukhin, llvm-commits, arsenm

Differential Revision: https://reviews.llvm.org/D22878

llvm-svn: 276922
2016-07-27 21:45:48 +00:00
Jun Bum Lim a033139cd4 [DSE] Fix bug in updating MadeChange flag
Summary: The MadeChange flag should be ORed to keep the previous result.

Reviewers: mcrosier

Subscribers: mcrosier, llvm-commits

Differential Revision: https://reviews.llvm.org/D22873

llvm-svn: 276894
2016-07-27 17:25:20 +00:00
Sean Silva 285e0974f0 Refactor - CodeExtractor : Move check for valid block to static utility
This lets you actually check to see if a block is valid before trying to
extract.

Patch by River Riddle!

Differential Revision: https://reviews.llvm.org/D22699

llvm-svn: 276846
2016-07-27 08:02:46 +00:00
George Burgess IV 9cf05464aa [GVNHoist] Fix typo in assert.
This fixes PR28730.

llvm-svn: 276844
2016-07-27 06:34:53 +00:00
Sebastian Pop 55c3007b88 GVN-hoist: improve code generation for recursive GEPs
When loading or storing in a field of a struct like "a.b.c", GVN is able to
detect the equivalent expressions, and GVN-hoist would fail in the code
generation.  This is because the GEPs are not hoisted as scalar operations to
avoid moving the GEPs too far from their ld/st instruction when the ld/st is not
movable.  So we end up having to generate code for the GEP of a ld/st when we
move the ld/st.  In the case of a GEP referring to another GEP as in "a.b.c" we
need to code generate all the GEPs necessary to make all the operands available
at the new location for the ld/st.  With this patch we recursively walk through
the GEP operands checking whether all operands are available, and in the case of
a GEP operand, it recursively makes all its operands available. Code generation
happens from the inner GEPs out until reaching the GEP that appears as an
operand of the ld/st.
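
A hand-written sketch (assumed struct types, not from the patch) of an "a.b.c" access whose address is a chain of GEPs; moving the load requires rematerializing every GEP in the chain at the new location:

```
%struct.C = type { i32 }
%struct.B = type { %struct.C }
%struct.A = type { %struct.B }

define i32 @load_abc(%struct.A* %a) {
  %b = getelementptr inbounds %struct.A, %struct.A* %a, i64 0, i32 0
  %c = getelementptr inbounds %struct.B, %struct.B* %b, i64 0, i32 0
  %f = getelementptr inbounds %struct.C, %struct.C* %c, i64 0, i32 0
  ; hoisting this load means making %f, %c and %b available at the target
  %v = load i32, i32* %f
  ret i32 %v
}
```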

Differential Revision: https://reviews.llvm.org/D22599

llvm-svn: 276841
2016-07-27 05:48:12 +00:00
Sebastian Pop 586d3eaeb5 GVN-hoist: use DFS numbers instead of walking the instruction stream
The patch replaces a function that walks the IR with a call to firstInBB() that
uses the DFS numbering.  NFC.

Differential Revision: https://reviews.llvm.org/D22809

llvm-svn: 276840
2016-07-27 05:13:52 +00:00
Andrew Kaylor f990fa5f7b Reverting r276771 due to MSan failures.
llvm-svn: 276824
2016-07-27 01:19:24 +00:00
Adam Nemet 2f2bd8caf4 [LoopUtils] Sort headers
llvm-svn: 276776
2016-07-26 17:52:02 +00:00
Andrew Kaylor 3104a6bad0 Re-committing r275284: add support to inline __builtin_mempcpy
Patch by Sunita Marathe

Differential Revision: http://reviews.llvm.org/D21920

llvm-svn: 276771
2016-07-26 17:23:13 +00:00
Sebastian Pop 91d4a30159 GVN-hoist: use a DFS numbering of instructions (PR28670)
Instead of DFS numbering basic blocks we now DFS number instructions, which avoids
the costly operation of determining which instruction comes first in a basic block.

Patch mostly written by Daniel Berlin.

Differential Revision: https://reviews.llvm.org/D22777

llvm-svn: 276714
2016-07-26 00:15:10 +00:00
Sebastian Pop 38422b1356 GVN-hoist: limit hoisting depth (PR28670)
This patch adds an option to specify the maximum depth in a BB at which to
consider hoisting instructions.  Hoisting instructions from a deeper level is
not profitable as it increases register pressure and compilation time.

Differential Revision: https://reviews.llvm.org/D22772

llvm-svn: 276713
2016-07-26 00:15:08 +00:00
Michael Kuperstein 39feb6290c [PM] Port SymbolRewriter to the new PM
Differential Revision: https://reviews.llvm.org/D22703

llvm-svn: 276687
2016-07-25 20:52:00 +00:00
Matt Arsenault 7cddfed7e8 Scalarizer: Support scalarizing intrinsics
llvm-svn: 276681
2016-07-25 20:02:54 +00:00
Rong Xu 705f7775bb [PGO] Fix profile mismatch in COMDAT function with pre-inliner
Pre-instrumentation inline (pre-inliner) greatly improves the IR
instrumentation code performance, among other benefits. One issue of the
pre-inliner is it can introduce CFG-mismatch for COMDAT functions. This
is due to the fact that the same COMDAT function may have different early
inline decisions across different modules -- that means different copies
of COMDAT functions will have different CFG checksum.

In this patch, we propose partially renaming the COMDAT group and its
member function/variable so we have a different profile counter for each
version. We suffix the COMDAT function and the group name with its
FunctionHash.

Differential Revision: http://reviews.llvm.org/D22600

llvm-svn: 276673
2016-07-25 18:45:37 +00:00
Michael Kuperstein 9a89b15aa2 Attempt to pacify windows bots.
llvm-svn: 276672
2016-07-25 18:39:08 +00:00
Daniel Berlin 40765a62ad Revert NewGVN N^2 behavior patch
llvm-svn: 276670
2016-07-25 18:19:49 +00:00
Michael Kuperstein 8f8e1d1bf6 Don't use iplist in SymbolRewriter. NFC.
There didn't appear to be a good reason to use iplist in this case, a regular
list of unique_ptr works just as well.
Change made in preparation to a new PM port (since iplist is not moveable).

llvm-svn: 276668
2016-07-25 18:10:54 +00:00
Daniel Berlin 14c000936e NFC: Make a few asserts in GVNHoist do the same thing, but cheaper.
llvm-svn: 276662
2016-07-25 17:36:14 +00:00
Daniel Berlin f107f3292f Fix N^2 instruction ordering comparisons in GVNHoist.
This fixes GVNHoist's portion of PR28670.

llvm-svn: 276658
2016-07-25 17:24:27 +00:00
Daniel Berlin 65af45de03 NFC: Refactor GVNHoist class so not everything is public
llvm-svn: 276657
2016-07-25 17:24:22 +00:00
Sean Silva 519323db58 Cleanup : Reformat PartialInliner.cpp to have current LLVM style conventions
Modify the variable names and code style to be that of modern LLVM.

Patch by River Riddle!

Differential Revision: https://reviews.llvm.org/D22743

llvm-svn: 276610
2016-07-25 05:57:59 +00:00
Sean Silva fe5abd5e0c Fix : Partial Inliner requires AssumptionCacheTracker
The public InlineFunction utility assumes that the passed in
InlineFunctionInfo has a valid AssumptionCacheTracker.

Patch by River Riddle!

Differential Revision: https://reviews.llvm.org/D22706

llvm-svn: 276609
2016-07-25 05:00:00 +00:00
David Majnemer 68623a0e9f [GVNHoist] Merge metadata on hoisted instructions less conservatively
We can combine metadata from multiple instructions intelligently for
certain metadata nodes.

llvm-svn: 276602
2016-07-25 02:21:25 +00:00
David Majnemer 4728569d0a [GVNHoist] Properly merge alignments when hoisting
If we two loads of two different alignments, we must use the minimum of
the two alignments when hoisting.  Same deal for stores.

For allocas, use the maximum of the two allocas.

llvm-svn: 276601
2016-07-25 02:21:23 +00:00
David Majnemer 6f014d37d5 [Utils] Simplify combineMetadata
Use a range-based for loop, no functional change is intended.

llvm-svn: 276600
2016-07-25 02:21:19 +00:00
Elena Demikhovsky 376a18bd92 [Loop Vectorizer] Handling loops with FP induction variables.
Allowed loop vectorization with secondary FP IVs. Like this:
float *A;
float x = init;
for (int i=0; i < N; ++i) {
  A[i] = x;
  x -= fp_inc;
}

The auto-vectorization is possible when the induction binary operator is "fast" or the function has the "unsafe" attribute.

Differential Revision: https://reviews.llvm.org/D21330

llvm-svn: 276554
2016-07-24 07:24:54 +00:00
George Burgess IV 93ea19b9a6 [MSSA] Make EXPENSIVE_CHECKS check more.
checkClobberSanity will now be run for all results of `ClobberWalk`,
instead of just the crazy phi-optimized ones. This can help us catch
cases where our cache is being wonky.

llvm-svn: 276553
2016-07-24 07:03:49 +00:00
George Burgess IV f23eb70e03 [MSSA] Remove useless assert. NFC.
liveOnEntry is always a MemoryDef; asserting that a MemoryPhi isn't
liveOnEntry, while correct, isn't very helpful. :)

llvm-svn: 276542
2016-07-24 01:50:07 +00:00
Sanjay Patel 1271bf9178 [InstCombine] allow icmp (bit-manipulation-intrinsic(), C) folds for vectors
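
A hand-written example (assumed, not from this commit's tests): a vector popcount is zero exactly when the input is zero, so the compare can be rewritten directly on `%x` once vector types are allowed.

```
declare <2 x i32> @llvm.ctpop.v2i32(<2 x i32>)

define <2 x i1> @ctpop_eq_zero(<2 x i32> %x) {
  %pop = call <2 x i32> @llvm.ctpop.v2i32(<2 x i32> %x)
  ; candidate to become: icmp eq <2 x i32> %x, zeroinitializer
  %c = icmp eq <2 x i32> %pop, zeroinitializer
  ret <2 x i1> %c
}
```
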
llvm-svn: 276523
2016-07-23 13:06:49 +00:00
Xinliang David Li 9239245401 [Profile] Use explicit flag to enable IR PGO
Patch by Jake VanAdrighem

Differential Revision: http://reviews.llvm.org/D22607

llvm-svn: 276516
2016-07-23 04:28:52 +00:00
Sean Silva ab6a683765 Avoid using a raw AssumptionCacheTracker in various inliner functions.
This unblocks the new PM part of River's patch in
https://reviews.llvm.org/D22706

Conveniently, this same change was needed for D21921 and so these
changes are just spun out from there.

llvm-svn: 276515
2016-07-23 04:22:50 +00:00
Sanjay Patel 6ebd5857c8 [InstCombine] move udiv+cmp fold over with other BinOp+cmp folds; NFCI
llvm-svn: 276502
2016-07-23 00:28:39 +00:00
Adam Nemet eea7c267b9 [LoopDataPrefetch] Fix unused variable in release build
llvm-svn: 276491
2016-07-22 23:08:10 +00:00
Adam Nemet 9e6e63fba2 [LoopDataPrefetch] Include hotness of region in opt remark
llvm-svn: 276488
2016-07-22 22:53:17 +00:00
Adam Nemet 885f1de490 [LoopDataPrefetch] Sort headers
llvm-svn: 276487
2016-07-22 22:53:12 +00:00
Vitaly Buka e3a032a740 Unpoison stack before resume instruction
Summary:
Clang inserts cleanup code before a resume the same way as before a return instruction.
This makes ASan poison local variables, causing false use-after-scope reports.

__asan_handle_no_return does not help here as it is executed before
the llvm.lifetime.end inserted into the resume block.

To avoid the false reports we need to unpoison the stack for resume the same way as for return.

PR27453

Reviewers: kcc, eugenis

Differential Revision: https://reviews.llvm.org/D22661

llvm-svn: 276480
2016-07-22 22:04:38 +00:00
Alina Sbirlea ba21ffebff Add flag to PassManagerBuilder to disable GVN Hoist Pass.
Summary:
Adding a flag to diable GVN Hoisting by default.
Note: The GVN Hoist Pass causes some Halide tests to hang. Halide will disable the pass while investigating.

Reviewers: llvm-commits, chandlerc, spop, dberlin

Subscribers: mehdi_amini

Differential Revision: https://reviews.llvm.org/D22639

llvm-svn: 276479
2016-07-22 22:02:19 +00:00
Michael Kuperstein 38e7298093 [SLPVectorizer] Vectorize reverse-order loads in horizontal reductions
When vectorizing a tree rooted at a store bundle, we currently try to sort the
stores before building the tree, so that the stores can be vectorized. For other
trees, the order of the root bundle - which determines the order of all other
bundles - is arbitrary. That is bad, since if a leaf bundle of consecutive loads
happens to appear in the wrong order, we will not vectorize it.

This is partially mitigated when the root is a binary operator, by trying to
build a "reversed" tree when that's considered profitable. This patch extends the
workaround we have for binops to trees rooted in a horizontal reduction.

This fixes PR28474.

Differential Revision: https://reviews.llvm.org/D22554

llvm-svn: 276477
2016-07-22 21:28:48 +00:00
Jun Bum Lim 6a7dc5c430 Recommit - [DSE] Enhance shortening of MemIntrinsics based on OverlapIntervals
Recommiting r275571 after fixing crash reported in PR28270.
Now we erase elements of IOL in deleteDeadInstruction().

Original Summary:
This change uses the overlap interval map built from partial overwrite tracking to shorten MemIntrinsics.
Add test cases for opportunities that were missed before.
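
A minimal hand-written shape of the opportunity (assumed, not one of the added tests): the later 8-byte store fully overwrites the first half of the memset, so the memset can be shortened to the remaining 8-byte tail.

```
declare void @llvm.memset.p0i8.i64(i8* nocapture, i8, i64, i32, i1)

define void @shorten_memset(i8* %p) {
  call void @llvm.memset.p0i8.i64(i8* %p, i8 0, i64 16, i32 1, i1 false)
  %q = bitcast i8* %p to i64*
  ; overwrites bytes [0, 8) of the memset
  store i64 -1, i64* %q, align 1
  ret void
}
```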

llvm-svn: 276452
2016-07-22 18:27:24 +00:00
Wei Mi e04d0eff29 [PM] Port BreakCriticalEdges to the new PM.
Differential Revision: https://reviews.llvm.org/D22688

llvm-svn: 276449
2016-07-22 18:04:25 +00:00
David Majnemer 522a91181a Don't remove side effecting instructions due to ConstantFoldInstruction
Just because we can constant fold the result of an instruction does not
imply that we can delete the instruction.  It may have side effects.

This fixes PR28655.

llvm-svn: 276389
2016-07-22 04:54:44 +00:00
Xinliang David Li d382e9d82b Sync up InstrProfData.inc with compiler-rt with fixes to references
llvm-svn: 276388
2016-07-22 04:46:56 +00:00
Vitaly Buka 53054a7024 Fix detection of stack-use-after scope for char arrays.
Summary:
Clang inserts a GetElementPtrInst, so findAllocaForValue was not
able to find the allocas.

PR27453

Reviewers: kcc, eugenis

Differential Revision: https://reviews.llvm.org/D22657

llvm-svn: 276374
2016-07-22 00:56:17 +00:00
Sanjoy Das bb969791b4 [IRCE] Add an option to skip profitability checks
If `-irce-skip-profitability-checks` is passed in, IRCE will kick in in
all cases where it is legal for it to kick in.  This flag is intended to
help diagnose and analyse performance issues.

llvm-svn: 276372
2016-07-22 00:40:56 +00:00
Sebastian Pop 0e2cec075c GVN-hoist: move check before mutating the IR
llvm-svn: 276368
2016-07-22 00:07:01 +00:00
Sebastian Pop c107a4875e GVN-hoist: add missing check for all GEP operands available
llvm-svn: 276364
2016-07-21 23:32:39 +00:00
Sanjay Patel 18fa9d3ca1 [InstCombine] break up foldICmpEqualityWithConstant(); NFCI
Almost all of these folds require changes to allow vector types. 
Splitting up the logic should make that easier to do incrementally.

llvm-svn: 276360
2016-07-21 23:27:36 +00:00
Sebastian Pop 31fd506623 GVN-hoist: only clone GEPs (PR28606)
Do not clone stored values unless they are GEPs that are special cased to avoid
hoisting them without hoisting their associated ld/st.

Differential revision: https://reviews.llvm.org/D22652

llvm-svn: 276358
2016-07-21 23:22:10 +00:00
Xinliang David Li 6f8c504f10 [Profile] deprecate __llvm_profile_override_default_filename
This eliminates unnecessary calls and init functions.

Differential Revision: http://reviews.llvm.org/D22613

llvm-svn: 276354
2016-07-21 23:19:10 +00:00
Wei Mi 1cf58f8996 [PM] Port NaryReassociate to the new PM
Differential Revision: https://reviews.llvm.org/D22648

llvm-svn: 276349
2016-07-21 22:28:52 +00:00
Adam Nemet 84a6425d61 [OptDiag,LDist] Convert remaining opt remarks to use the new API
llvm-svn: 276340
2016-07-21 21:21:34 +00:00
Matthew Simpson 102729cf1b [LV] Move vector int induction update to end of latch
This patch moves the update instruction for vectorized integer induction phi
nodes to the end of the latch block. This ensures consistent placement of all
induction updates across all the kinds of int inductions we create (scalar,
splat vector, or vector phi).

Differential Revision: https://reviews.llvm.org/D22416

llvm-svn: 276339
2016-07-21 21:20:15 +00:00
Rong Xu 97b68c5ebe [PGO] Make needsComdatForCounter() available (NFC)
Move needsComdatForCounter() to lib/ProfileData/InstrProf.cpp from
lib/Transforms/Instrumentation/InstrProfiling.cpp to make it available for
other files.

Differential Revision: https://reviews.llvm.org/D22643

llvm-svn: 276330
2016-07-21 20:50:02 +00:00
Sanjoy Das ff9eea2278 [IndVars] Reflow oddly formatted condition; NFC
llvm-svn: 276319
2016-07-21 18:58:01 +00:00
Sanjay Patel 43395060a1 make InstCombine compare helper functions private; NFC
Also, rename some of them for consistency and to follow current conventions.

llvm-svn: 276312
2016-07-21 18:07:40 +00:00
Vedant Kumar cd32eba67b Avoid a string copy, NFC
llvm-svn: 276310
2016-07-21 17:50:07 +00:00
Sanjay Patel 1710e7cfa7 [InstCombine] break up visitICmpInstWithInstAndIntCst(); NFCI
Making smaller pieces out of some of these ~1000 line functions should make
it easier to incrementally upgrade them to handle vector types.

llvm-svn: 276304
2016-07-21 17:15:49 +00:00
Benjamin Kramer eab3d36753 Rename StringMap::emplace_second to try_emplace.
Coincidentally this function maps to the C++17 try_emplace. Rename it
for consistency with C++17 std::map. NFC.

llvm-svn: 276276
2016-07-21 13:37:48 +00:00
Benjamin Kramer 2a185a2547 [GCOV] Remove a layer of indirection.
StringMap is designed to hold large values. No functionality change
intended.

llvm-svn: 276265
2016-07-21 12:06:31 +00:00
David Majnemer 825e4ab9e3 [GVNHoist] Preserve optimization hints which agree
If we have optimization hints which agree with each other along different
paths, preserve them.

llvm-svn: 276248
2016-07-21 07:16:26 +00:00
David Majnemer 4808f26422 [GVNHoist] Don't wrongly preserve TBAA
We hoisted loads/stores without taking their TBAA metadata into account, which can cause
miscompiles.

llvm-svn: 276240
2016-07-21 05:59:53 +00:00
David Majnemer 15cf7b83d1 [MergedLoadStoreMotion] Remove out of date comment
llvm-svn: 276239
2016-07-21 05:59:51 +00:00
Adam Nemet 7cfd5971ab [OptDiag,LV] Add hotness attribute to applied-optimization remarks
Test coverage is provided by modifying the function in the FP-math
testcase that we are allowed to vectorize.

llvm-svn: 276223
2016-07-21 01:07:13 +00:00
Sanjay Patel 0753c06d9c [InstCombine] LogicOpc (zext X), C --> zext (LogicOpc X, C) (PR28476)
The benefits of this change include:
1. Remove DeMorgan-matching code that was added specifically to work around
   the missing transform in http://reviews.llvm.org/rL248634.
2. Makes the DeMorgan transform work for vectors too.
3. Fix PR28476: https://llvm.org/bugs/show_bug.cgi?id=28476

Extending this transform to other casts and other associative operators may
be useful too. See https://reviews.llvm.org/D22421 for a prerequisite for
doing that though.

Differential Revision: https://reviews.llvm.org/D22271

llvm-svn: 276221
2016-07-21 00:24:18 +00:00
Adam Nemet 0e0e2d5d26 [OptDiag,LV] Add hotness attribute to the derived analysis remarks
This includes FPCompute and Aliasing.

Testcase is based on no_fpmath.ll.

llvm-svn: 276211
2016-07-20 23:50:32 +00:00
Sanjay Patel 5f3c70307d [InstSimplify][InstCombine] don't crash when folding vector selects of icmp
Differential Revision: https://reviews.llvm.org/D22602

llvm-svn: 276209
2016-07-20 23:40:01 +00:00
George Burgess IV f84bb6fa67 Make help text more consistent. NFC.
llvm-svn: 276205
2016-07-20 23:14:29 +00:00
Adam Nemet 5b3a5cf6b0 [OptDiag,LV] Add hotness attribute to analysis remarks
The earlier change added hotness attribute to missed-optimization
remarks.  This follows up with the analysis remarks (the ones explaining
the reason for the missed optimization).

llvm-svn: 276192
2016-07-20 21:44:26 +00:00
David Majnemer bd21012c6c [GVNHoist] Don't hoist PHI nodes
We hoisted PHIs without respecting their special insertion point in the
block, leading to verifier errors.

This fixes PR28626.

llvm-svn: 276181
2016-07-20 21:05:01 +00:00
Davide Italiano 15ff2d6d0c [SCCP] Zap multiple return values.
We can replace the return values with undef if we replaced all
the call uses with a constant/undef.

Differential Revision:  https://reviews.llvm.org/D22336

llvm-svn: 276174
2016-07-20 20:17:13 +00:00
Justin Lebar a272c12b73 [LSV] Don't move stores across may-load instrs, and loosen restrictions on moving loads.
Summary:
Previously we wouldn't move loads/stores across instructions that had
side-effects, where that was defined as may-write or may-throw.  But
this is not sufficiently restrictive: Stores can't safely be moved
across instructions that may load.
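
A small hand-written illustration (assumed, not from the patch): if `%p` may alias `%q`, the second store cannot be moved above the load to form a contiguous pair with the first store, because the load might read the bytes it writes.

```
define i32 @no_reorder(i32* %p, i32* %q) {
  store i32 1, i32* %q
  ; may read q[1]; hoisting the next store above this load would be unsound
  %v = load i32, i32* %p
  %q1 = getelementptr inbounds i32, i32* %q, i64 1
  store i32 2, i32* %q1
  ret i32 %v
}
```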

This patch also adds a DEBUG check that all instructions in our chain
are either loads or stores.

Reviewers: asbirlea

Subscribers: llvm-commits, jholewinski, arsenm, mzolotukhin

Differential Revision: https://reviews.llvm.org/D22547

llvm-svn: 276171
2016-07-20 20:07:37 +00:00
Justin Lebar 62b03e344e [LSV] Vectorize up to side-effecting instructions.
Summary:
Previously if we had a chain that contained a side-effecting
instruction, we wouldn't vectorize it at all.  Now we'll vectorize
everything that comes before the side-effecting instruction.

Reviewers: asbirlea

Subscribers: arsenm, jholewinski, llvm-commits, mzolotukhin

Differential Revision: https://reviews.llvm.org/D22536

llvm-svn: 276170
2016-07-20 20:07:34 +00:00
George Burgess IV 400ae40348 [MSSA] Add an overload for getClobberingMemoryAccess.
A seemingly common use for the walker's getClobberingMemoryAccess
function is:

```
MemoryAccess *getClobber(MemorySSAWalker *W, MemoryUseOrDef *MUD) {
  const Instruction *I = MUD->getMemoryInst();
  return W->getClobberingMemoryAccess(I);
}
```

Which is kind of redundant, since walkers will ultimately query MSSA to
find out which MemoryAccess `I` maps to (...which is always `MUD`).

So, this patch adds an overload of getClobberingMemoryAccess that
accepts MemoryAccesses directly. As a result, the Instruction overload
of getClobberingMemoryAccess becomes a lightweight wrapper around our
new overload.

Additionally, this patch un`virtual`izes the Instruction overload of
getClobberingMemoryAccess, since there doesn't seem to be a walker that
benefits from that being virtual, and I can't think of how else one
would implement it. Happy to make it virtual again if we would benefit
from doing so.

llvm-svn: 276169
2016-07-20 19:51:34 +00:00
Sanjay Patel 683170bf56 move decomposeBitTestICmp() to Transforms/Utils; NFC
As noted in https://reviews.llvm.org/D22537 , we can use this functionality in 
visitSelectInstWithICmp() and InstSimplify, but currently we have duplicated
code.

llvm-svn: 276140
2016-07-20 17:18:45 +00:00
Sanjay Patel be53c65fab fix documentation comments; NFC
llvm-svn: 276135
2016-07-20 16:30:55 +00:00
Benjamin Kramer b4d64cf27d Revert "[InstCombine] Enable cast-folding in logic(cast(icmp), cast(icmp))"
Makes InstCombine infloop when compiling v8.

This reverts commit r275989 and r276105.

llvm-svn: 276106
2016-07-20 11:40:16 +00:00
Adam Nemet 67c8929a2c [LV] Add hotness attribute to missed-optimization remarks
The new OptimizationRemarkEmitter analysis pass is hooked up to both new
and old PM passes.

llvm-svn: 276080
2016-07-20 04:03:43 +00:00
Michael Zolotukhin 6bc56d552a Revert "Revert r275883 and r275891. They seem to cause PR28608."
This reverts commit r276064, and thus reapplies r275891 and r275883 with
a fix for PR28608.

llvm-svn: 276077
2016-07-20 01:55:27 +00:00
Justin Lebar 6114b37838 [LSV] Don't assume that loads/stores appear in address order in the BB.
Summary:
getVectorizablePrefix previously didn't work properly in the face of
aliasing loads/stores.  It unwittingly assumed that the loads/stores
appeared in the BB in address order.  If they didn't, it would do the
wrong thing.

Reviewers: asbirlea, tstellarAMD

Subscribers: arsenm, llvm-commits, mzolotukhin

Differential Revision: https://reviews.llvm.org/D22535

llvm-svn: 276072
2016-07-20 00:55:12 +00:00
Sean Silva 554efb28d2 Revert r275883 and r275891. They seem to cause PR28608.
Revert "[LoopSimplify] Update LCSSA after separating nested loops."

This reverts commit r275891.

Revert "[LCSSA] Post-process PHI-nodes created by SSAUpdate when constructing LCSSA form."

This reverts commit r275883.

llvm-svn: 276064
2016-07-19 23:54:29 +00:00
Sean Silva e3c18a5ae8 [PM] Port LoopUnroll.
We just set PreserveLCSSA to always true since we don't have an
analogous method `mustPreserveAnalysisID(LCSSA)`.

Also port LoopInfo verifier pass to test LoopUnrollPass.

llvm-svn: 276063
2016-07-19 23:54:23 +00:00
Justin Lebar 8778c62629 [LSV] Insert stores at the right point.
Summary:
Previously, the insertion point for stores was the last instruction in
Chain *before calling getVectorizablePrefixEndIdx*.  Thus if
getVectorizablePrefixEndIdx didn't return Chain.size(), we still would
insert at the last instruction in Chain.

This patch changes our internal API a bit in an attempt to make it less
prone to this sort of error.  As a result, we end up recalculating the
Chain's boundary instructions, but I think worrying about the speed hit
of this is a premature optimization right now.

Reviewers: asbirlea, tstellarAMD

Subscribers: mzolotukhin, arsenm, llvm-commits

Differential Revision: https://reviews.llvm.org/D22534

llvm-svn: 276056
2016-07-19 23:19:20 +00:00
Justin Lebar 2cf2c22870 [LSV] Use make_range, and reformat a DEBUG message. NFC
Summary:
The DEBUG message was hard to read because two Values were being printed
on the same line with only the delimiter "aliases".  This change makes
us print each Value on its own line.

Reviewers: asbirlea

Subscribers: llvm-commits, arsenm, mzolotukhin

Differential Revision: https://reviews.llvm.org/D22533

llvm-svn: 276055
2016-07-19 23:19:18 +00:00
Justin Lebar 4ee8a2d024 [LSV] Nix two global (ish) variables in the LoadStoreVectorizer. NFC
Reviewers: asbirlea

Subscribers: mzolotukhin, llvm-commits, arsenm

Differential Revision: https://reviews.llvm.org/D22532

llvm-svn: 276054
2016-07-19 23:19:16 +00:00
Daniel Berlin 1986030b62 Fix unused variable
llvm-svn: 276050
2016-07-19 23:08:08 +00:00
Paul Robinson 2d23c029f7 Make GVN Hoisting obey optnone/bisect.
Differential Revision: http://reviews.llvm.org/D22545

llvm-svn: 276048
2016-07-19 22:57:14 +00:00
Daniel Berlin 5c46b943db Make MemorySSA::dominates/locallydominates constant time
Summary: Make MemorySSA::dominates/locallydominates constant time

Reviewers: george.burgess.iv, gberry

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D22527

llvm-svn: 276046
2016-07-19 22:49:43 +00:00
Sanjay Patel 2d477e59e8 [InstCombine] fold add(zext(xor X, C), C) --> sext X when C is INT_MIN in the source type
The pattern may look more obviously like a sext if written as:

  define i32 @g(i16 %x) {
    %zext = zext i16 %x to i32
    %xor = xor i32 %zext, 32768
    %add = add i32 %xor, -32768
    ret i32 %add
  }

We already have that fold in visitAdd().

Differential Revision: https://reviews.llvm.org/D22477

llvm-svn: 276035
2016-07-19 22:09:34 +00:00
Vedant Kumar 57faf2d208 [tsan] Don't instrument __llvm_gcov_global_state_pred or __llvm_gcda*
r274801 did not go far enough to allow gcov+tsan to cooperate. With this
commit it's possible to run the following code without false positives:

  std::thread T1(fib), T2(fib);
  T1.join(); T2.join();

llvm-svn: 276015
2016-07-19 20:16:08 +00:00
David Majnemer 5246e0b2c2 [FunctionAttrs] Correct the safety analysis for inference of 'returned'
We skipped over ReturnInsts which didn't return an argument, which would
lead us to incorrectly conclude that an argument returned by another
ReturnInst was 'returned'.

This reverts commit r275756.

This fixes PR28610.

llvm-svn: 276008
2016-07-19 18:50:26 +00:00
Davide Italiano 63266b6be5 [SCCP] Improve assert messages. NFCI.
I've been hitting those already while working on SCCP and I think
it's be useful to provide a more explanatory diagnostic.

llvm-svn: 276007
2016-07-19 18:31:07 +00:00
Chad Rosier 8b5fa7a2f2 [DSE] Add additional debug output. NFC.
llvm-svn: 276005
2016-07-19 18:11:11 +00:00
Chad Rosier 667b1ca0e6 [DSE] Add additional debug output. NFC.
llvm-svn: 275991
2016-07-19 16:50:57 +00:00
Tobias Grosser 1c38262279 [InstCombine] Enable cast-folding in logic(cast(icmp), cast(icmp))
Summary:
Currently, InstCombine is already able to fold expressions of the form `logic(cast(A), cast(B))` to the simpler form `cast(logic(A, B))`, where logic designates one of `and`/`or`/`xor`. This transformation is implemented in `foldCastedBitwiseLogic()` in InstCombineAndOrXor.cpp. However, this optimization will not be performed if both `A` and `B` are `icmp` instructions. The decision to preclude casts of `icmp` instructions originates in r48715 in combination with r261707, and can be best understood by the title of the former one:

> Transform (zext (or (icmp), (icmp))) to (or (zext (cimp), (zext icmp))) if at least one of the (zext icmp) can be transformed to eliminate an icmp.

Apparently, it introduced a transformation that is a reverse of the transformation that is done in `foldCastedBitwiseLogic()`. Its purpose is to expose pairs of `zext icmp` that would subsequently be optimized by `transformZExtICmp()` in InstCombineCasts.cpp. Therefore, in order to avoid an endless loop of switching back and forth between these two transformations, the one in `foldCastedBitwiseLogic()` has been restricted to exclude `icmp` instructions which is mirrored in the responsible check:

`if ((!isa<ICmpInst>(Cast0Src) || !isa<ICmpInst>(Cast1Src)) && ...`

This check seems to sort out more cases than necessary because:
- the reverse transformation is obviously done for `or` instructions only
- and also not every `zext icmp` pair is necessarily the result of this reverse transformation

Therefore we now remove this check and replace it with a more fine-grained one in `shouldOptimizeCast()` that rejects only those `logic(zext(icmp), zext(icmp))` expressions that could be optimized by `transformZExtICmp()`, which also avoids the mentioned endless loop. That means we are now also able to simplify expressions of the form `logic(cast(icmp), cast(icmp))` to `cast(logic(icmp, icmp))` (`cast` being an arbitrary `CastInst`).

As an example, consider the following IR snippet

```
%1 = icmp sgt i64 %a, %b
%2 = zext i1 %1 to i8
%3 = icmp slt i64 %a, %c
%4 = zext i1 %3 to i8
%5 = and i8 %2, %4
```

which would now be transformed to

```
%1 = icmp sgt i64 %a, %b
%2 = icmp slt i64 %a, %c
%3 = and i1 %1, %2
%4 = zext i1 %3 to i8
```

This issue became apparent when experimenting with the programming language Julia, which makes use of LLVM. Currently, Julia lowers its `Bool` datatype to LLVM's `i8` (also see https://github.com/JuliaLang/julia/pull/17225). In fact, the above IR example is the lowered form of the Julia snippet `(a > b) & (a < c)`. Like shown above, this may introduce `zext` operations, casting between `i1` and `i8`, which could for example hinder ScalarEvolution and Polly on certain code.

Reviewers: grosser, vtjnash, majnemer

Subscribers: majnemer, llvm-commits

Differential Revision: https://reviews.llvm.org/D22511

Contributed-by: Matthias Reisinger
llvm-svn: 275989
2016-07-19 16:39:17 +00:00
Tobias Grosser 8ef834c712 [InstCombine] Minor cleanup of cast simplification code [NFC]
Summary:
This patch cleans up parts of InstCombine to raise its compliance with the LLVM coding standards and to increase its readability. The changes and according rationale are summarized in the following:

- Rename `ShouldOptimizeCast()` to `shouldOptimizeCast()` since functions should start with a lower case letter.

- Move `shouldOptimizeCast()` from InstCombineCasts.cpp to InstCombineAndOrXor.cpp since it's only used there.

- Simplify interface of `shouldOptimizeCast()`.

- Minor code style adaptions in `shouldOptimizeCast()`.

- Remove the documentation on the function definition of `shouldOptimizeCast()` since it just repeats the documentation on its declaration. Also enhance the documentation on its declaration with more information describing its intended use and make it doxygen-compliant.

- Change a comment in `foldCastedBitwiseLogic()` from `fold (logic (cast A), (cast B)) -> (cast (logic A, B))` to `fold logic(cast(A), cast(B)) -> cast(logic(A, B))` since the surrounding comments use this format.

- Remove comment `Only do this if the casts both really cause code to be generated.` in `foldCastedBitwiseLogic()` since it just repeats parts of the documentation of `shouldOptimizeCast()` and does not help to improve readability.

- Simplify the interface of `isEliminableCastPair()`.

- Removed the documentation on the function definition of `isEliminableCastPair()` which only contained obvious statements about its implementation. Instead added more general doxygen-compliant documentation to its declaration.

- Renamed parameter `DoXform` of `transformZExtIcmp()` to `DoTransform` to make its intention clearer.

- Moved documentation of `transformZExtIcmp()` from its definition to its declaration and made it doxygen-compliant.

Reviewers: vtjnash, grosser

Subscribers: majnemer, llvm-commits

Differential Revision: https://reviews.llvm.org/D22449

Contributed-by: Matthias Reisinger
llvm-svn: 275964
2016-07-19 09:06:08 +00:00
George Burgess IV 5f30897b7b [MemorySSA] Update to the new shiny walker.
This patch updates MemorySSA's use-optimizing walker to be more
accurate and, in some cases, faster.

Essentially, this changed our core walking algorithm from a
cache-as-you-go DFS to an iteratively expanded DFS, with all of the
caching happening at the end. Said expansion happens when we hit a Phi,
P; we'll try to do the smallest amount of work possible to see if
optimizing above that Phi is legal in the first place. If so, we'll
expand the search to see if we can optimize to the next phi, etc.

An iteratively expanded DFS lets us potentially quit earlier (because we
don't assume that we can optimize above all phis) than our old walker.
Additionally, because we don't cache as we go, we can now optimize above
loops.

As an added bonus, this patch adds a ton of verification (if
EXPENSIVE_CHECKS are enabled), so finding bugs is easier.

Differential Revision: https://reviews.llvm.org/D21777

llvm-svn: 275940
2016-07-19 01:29:15 +00:00
Wei Mi 79997a24d7 Recommit the patch "Use uniforms set to populate VecValuesToIgnore".
Instructions in the uniform set will not have vector versions, so we add
them to VecValuesToIgnore.
Induction variables used only in uniform instructions or consecutive-pointer
instructions have already been added to VecValuesToIgnore above. For
induction variables used only in uniform instructions or
non-consecutive/non-gather-scatter pointer instructions, the related phi and
update instructions are also added to the VecValuesToIgnore set.

The change makes the vector register usage (RegUsages) estimation less conservative.

Differential Revision: https://reviews.llvm.org/D20474

The recommit fixed the testcase global_alias.ll.

llvm-svn: 275936
2016-07-19 00:50:43 +00:00
Sanjoy Das ab73c9d88e [LoopReroll] Reroll loops with unordered atomic memory accesses
Reviewers: hfinkel, jfb, reames

Subscribers: mcrosier, mzolotukhin, llvm-commits

Differential Revision: https://reviews.llvm.org/D22385

llvm-svn: 275932
2016-07-19 00:23:54 +00:00
Dehao Chen 6132ee8502 [PM] Convert Loop Strength Reduce pass to new PM
Summary: Convert the Loop Strength Reduce pass to the new PM

Reviewers: davidxl, silvas

Subscribers: junbuml, sanjoy, mzolotukhin, llvm-commits

Differential Revision: https://reviews.llvm.org/D22468

llvm-svn: 275919
2016-07-18 21:41:50 +00:00
Teresa Johnson 2124157102 [PM] Port FunctionImport Pass to new PM
Summary: Port FunctionImport Pass to new PM.

Reviewers: mehdi_amini, davide

Subscribers: davidxl, llvm-commits

Differential Revision: https://reviews.llvm.org/D22475

llvm-svn: 275916
2016-07-18 21:22:24 +00:00