Commit Graph

7871 Commits

David Majnemer ba6665d88a [Verifier] Resume instructions can only be in functions w/ a personality
This fixes PR28799.

llvm-svn: 277360
2016-08-01 18:06:34 +00:00
Simon Pilgrim 7fd4ad6849 Fixed test check ordering issue on windows buildbots
llvm-svn: 277337
2016-08-01 10:40:15 +00:00
James Molloy bade86cedc [SimplifyCFG] Fix nasty RAUW bug from r277325
Using RAUW was wrong here; if we have a switch transform such as:
  18 -> 6 then
  6 -> 0

If we use RAUW, while performing the second transform the  *transformed* 6
from the first will be also replaced, so we end up with:
  18 -> 0
  6 -> 0

Found by clang stage2 bootstrap; testcase added.
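
For illustration, a minimal standalone C++ sketch (not the SimplifyCFG code) of the hazard: applying the case remappings one at a time, RAUW-style, lets the second rule re-map a value that the first rule just produced.

```
// Minimal standalone sketch of the hazard described above: applying case
// remappings one at a time can re-map an already-transformed value.
#include <cassert>
#include <cstddef>
#include <vector>

int main() {
  std::vector<int> cases = {18, 6};

  // Sequential in-place replacement (the RAUW-style approach): 18 -> 6, then 6 -> 0.
  std::vector<int> seq = cases;
  for (int &c : seq) if (c == 18) c = 6;
  for (int &c : seq) if (c == 6)  c = 0;   // also clobbers the freshly produced 6
  assert(seq[0] == 0 && seq[1] == 0);      // 18 wrongly ended up at 0 instead of 6

  // Deciding each replacement against the *original* values avoids the problem.
  std::vector<int> fixed = cases;
  for (std::size_t i = 0; i < cases.size(); ++i) {
    if (cases[i] == 18) fixed[i] = 6;
    else if (cases[i] == 6) fixed[i] = 0;
  }
  assert(fixed[0] == 6 && fixed[1] == 0);
  return 0;
}
```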

llvm-svn: 277332
2016-08-01 09:34:48 +00:00
Craig Topper d2b2d745ff [AVX-512] Fix a test missed in r277327.
llvm-svn: 277330
2016-08-01 08:15:30 +00:00
James Molloy 91821bd0b4 [SimplifyCFG] Try and pacify buildbots after r277325
It looks like the two independent parts of the rotate operation (a lshr and shl) are being reordered on some bots. Add CHECK-DAGs to account for this.

llvm-svn: 277329
2016-08-01 08:09:55 +00:00
James Molloy b2e436de42 [SimplifyCFG] Range reduce switches
If a switch is sparse and all the cases (once sorted) are in arithmetic progression, we can extract the common factor out of the switch and create a dense switch. For example:

    switch (i) {
    case 5: ...
    case 9: ...
    case 13: ...
    case 17: ...
    }

can become:

    if ( (i - 5) % 4 ) goto default;
    switch ((i - 5) / 4) {
    case 0: ...
    case 1: ...
    case 2: ...
    case 3: ...
    }

or even better:

   switch (ROTR(i - 5, 2)) {
   case 0: ...
   case 1: ...
   case 2: ...
   case 3: ...
   }

The division and remainder operations could be costly so we only do this if the factor is a power of two, and emit a right-rotate instead of a divide/remainder sequence. Dense switches can be lowered significantly better than sparse switches and can even be transformed into lookup tables.
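
For illustration, a small standalone C++ sketch of the rotate trick under the power-of-two assumption above: rotating (i - 5) right by log2(stride) yields the dense index for members of the progression, while non-members rotate their low bits into the high bits and fall through to the default case.

```
// Standalone sketch of the rotate trick (stride 4 == 1 << 2 assumed).
#include <cassert>
#include <cstdint>

static uint32_t rotr32(uint32_t x, unsigned k) {  // k must be in 1..31 here
  return (x >> k) | (x << (32 - k));
}

// Dense index for a case value i drawn from {5, 9, 13, 17}.
static uint32_t denseIndex(uint32_t i) { return rotr32(i - 5, 2); }

int main() {
  // Members of the progression map to 0..3.
  assert(denseIndex(5) == 0 && denseIndex(9) == 1 &&
         denseIndex(13) == 2 && denseIndex(17) == 3);
  // Non-members produce huge values that fall outside the dense switch and
  // therefore go to the default case.
  assert(denseIndex(6) > 3 && denseIndex(16) > 3);
  return 0;
}
```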

llvm-svn: 277325
2016-08-01 07:45:11 +00:00
Sean Silva 423c7149dc Revert r277313 and r277314.
They seem to trigger an LSan failure:
http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-fast/builds/15140/steps/check-llvm%20asan/logs/stdio

Revert "Add the tests for r277313"

This reverts commit r277314.

Revert "CodeExtractor : Add ability to preserve profile data."

This reverts commit r277313.

llvm-svn: 277317
2016-08-01 04:16:09 +00:00
Sean Silva e5a5c966cd Move this test to x86-specific directory.
No bots have yelled yet, but this test references an x86 intrinsic.
Also, it invokes llc on x86 IR.

Fixup to r277315.

llvm-svn: 277316
2016-08-01 03:22:05 +00:00
Sean Silva a0a802abe3 Fix - CodeExtractor : Inherit Target Dependent Attributes from the parent function.
When extracting a set of blocks, make sure to inherit all of the
target-dependent attributes so that the resulting function will be valid
for lowering. One example is the "target-features" attribute for x86: if
the extracted region relies on a specific feature, it will fail to be
lowered without it.
This also allows for extracted functions to be valid for inlining, at
least back into the parent function, as the target attributes are tested
when inlining for compatibility.

Patch by River Riddle!

Differential Revision: https://reviews.llvm.org/D22713

llvm-svn: 277315
2016-08-01 03:15:32 +00:00
Sean Silva 72be9a6937 Add the tests for r277313
Forgot to `git add` them.

llvm-svn: 277314
2016-08-01 03:04:34 +00:00
Simon Pilgrim 9e201eac32 [SLPVectorizer][X86] Added vXi8/vXi16 sitofp/uitofp tests
Dropped useless 2i32-2f32 test

llvm-svn: 277281
2016-07-30 21:01:34 +00:00
Simon Pilgrim f5134a2867 [SLPVectorizer][X86] Added SITOFP/UITOFP vectorization tests
llvm-svn: 277275
2016-07-30 18:43:30 +00:00
Adam Nemet 12937c361f [LoopUnroll] Include hotness of region in opt remark
LoopUnroll is a loop pass, so the analysis of OptimizationRemarkEmitter
is added to the common function analysis passes that loop passes
depend on.

The BFI and, indirectly, the BPI used in this pass are computed lazily so no
overhead should be observed unless -pass-remarks-with-hotness is used.

This is how the patch affects the O3 pipeline:

         Dominator Tree Construction
         Natural Loop Information
         Canonicalize natural loops
         Loop-Closed SSA Form Pass
         Basic Alias Analysis (stateless AA impl)
         Function Alias Analysis Results
         Scalar Evolution Analysis
+        Lazy Branch Probability Analysis
+        Lazy Block Frequency Analysis
+        Optimization Remark Emitter
         Loop Pass Manager
           Rotate Loops
           Loop Invariant Code Motion
           Unswitch loops
         Simplify the CFG
         Dominator Tree Construction
         Basic Alias Analysis (stateless AA impl)
         Function Alias Analysis Results
         Combine redundant instructions
         Natural Loop Information
         Canonicalize natural loops
         Loop-Closed SSA Form Pass
         Scalar Evolution Analysis
+        Lazy Branch Probability Analysis
+        Lazy Block Frequency Analysis
+        Optimization Remark Emitter
         Loop Pass Manager
           Induction Variable Simplification
           Recognize loop idioms
           Delete dead loops
           Unroll loops
...

llvm-svn: 277203
2016-07-29 19:29:47 +00:00
David Majnemer 718da3d1f6 [ConstantFolding] Handle bitcasts of undef fp vector elements
We used the wrong type for constructing a zero vector element which led
to type mismatches.

This fixes PR28771.

llvm-svn: 277197
2016-07-29 18:48:27 +00:00
Andrew Kaylor b99d1cc7ed Recommitting r275284: add support to inline __builtin_mempcpy
Patch by Sunita Marathe

Third try, now following fixes to MSan to handle mempcpy in such a way that this commit won't break the MSan buildbots. (Thanks, Evgenii!)

llvm-svn: 277189
2016-07-29 18:23:18 +00:00
Matt Masten a6669a1e05 Initial support for vectorization using svml (short vector math library).
Differential Revision: https://reviews.llvm.org/D19544

llvm-svn: 277166
2016-07-29 16:42:44 +00:00
David Majnemer 130b9f99d6 [EarlyCSE] Correctly handle simplified, but live, instructions
Some instructions may have their uses replaced with a symbolic constant.
However, the instruction may still have side effects which preclude it
from being removed from the function.  EarlyCSE treated such an
instruction as if it were removed, resulting in PR28763.

llvm-svn: 277114
2016-07-29 05:39:21 +00:00
David Majnemer e4218cf11e [ConstantFolding] Fold bitcasts of vectors w/ undef elements
An undef vector element can be treated as if it had any value.  Folding
such a vector element to 0 in a bitcast can open up further folding
opportunities.

llvm-svn: 277104
2016-07-29 04:06:09 +00:00
David Majnemer 57b94c8d6a [ConstantFolding] Use ConstantExpr::getWithOperands
ConstantExpr::getWithOperands does much of the hard work that
ConstantFoldInstOperandsImpl tries to do but more completely.

This lets us fold ExtractValue/InsertValue expressions.

llvm-svn: 277100
2016-07-29 03:27:31 +00:00
David Majnemer d536f2328e [ConstantFolding] Teach the folder how to fold ConstantVector
A ConstantVector can have ConstantExpr operands and vice versa.
However, the folder had no ability to fold ConstantVectors which, in
some cases, was an optimization barrier.

Instead, rephrase the folder in terms of Constants instead of
ConstantExprs and teach callers how to deal with failure.

llvm-svn: 277099
2016-07-29 03:27:26 +00:00
Piotr Padlewski 84abc74f2c Added ThinLTO inlining statistics
Summary:
copypasta doc of ImportedFunctionsInliningStatistics class
 \brief Calculate and dump ThinLTO specific inliner stats.
 The main statistics are:
 (1) Number of inlined imported functions,
 (2) Number of imported functions inlined into importing module (indirect),
 (3) Number of non imported functions inlined into importing module
 (indirect).
 The difference between the first and the second is that the first stat counts
 all performed inlines on imported functions, while the second counts only the
 functions that have eventually been inlined into a function in the importing
 module (by a chain of inlines). Because LLVM uses a bottom-up inliner, it is
 possible to e.g. import functions `A` and `B` and then inline `B` into `A`,
 after which `A` might be too big to be inlined into some other function
 that calls it. The statistic is calculated by building a graph, where
 the nodes are functions and the edges are performed inlines, and then by
 marking the edges starting from non-imported functions (see the sketch below).

 If `Verbose` is set to true, it also dumps statistics for each inlined
 function, sorted by the greatest inline count, listing:
 - number of performed inlines
 - number of performed inlines to importing module
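
A rough standalone sketch of the graph marking (structure and names assumed here, not taken from ImportedFunctionsInliningStatistics): nodes are functions, an edge A -> B records "B was inlined into A", and imported functions reachable from a non-imported function were eventually inlined into the importing module.

```
#include <cassert>
#include <map>
#include <set>
#include <string>
#include <vector>

struct Node { bool Imported; std::vector<std::string> InlinedCallees; };

// Mark everything reachable through inline edges from the module's own
// (non-imported) functions.
static std::set<std::string> markInlinedIntoModule(const std::map<std::string, Node> &G) {
  std::set<std::string> Reached;
  std::vector<std::string> Work;
  for (const auto &KV : G)
    if (!KV.second.Imported)
      Work.push_back(KV.first);
  while (!Work.empty()) {
    std::string F = Work.back(); Work.pop_back();
    for (const std::string &Callee : G.at(F).InlinedCallees)
      if (Reached.insert(Callee).second)
        Work.push_back(Callee);
  }
  return Reached;
}

int main() {
  // main (not imported) inlined A; A had previously inlined B; C was inlined
  // only into another imported function that never reached the module.
  std::map<std::string, Node> G = {
      {"main", {false, {"A"}}}, {"A", {true, {"B"}}}, {"B", {true, {}}},
      {"D", {true, {"C"}}},     {"C", {true, {}}}};
  std::set<std::string> R = markInlinedIntoModule(G);
  assert(R.count("A") && R.count("B") && !R.count("C"));
  return 0;
}
```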

Reviewers: eraman, tejohnson, mehdi_amini

Subscribers: mehdi_amini, llvm-commits

Differential Revision: https://reviews.llvm.org/D22491

llvm-svn: 277089
2016-07-29 00:27:16 +00:00
Adam Nemet aa3506c5f0 [BPI] Add new LazyBPI analysis
Summary:
The motivation is the same as in D22141: In order to add the hotness
attribute to optimization remarks we need BFI to be available in all
passes that emit optimization remarks.  BFI depends on BPI so unless we
make this lazy as well we would still compute BPI unconditionally.

The solution is to use the new LazyBPI pass in LazyBFI and only compute
BPI when computation of BFI is requested by the client.

I extended the laziness test using a LoopDistribute test to also cover
BPI.

Reviewers: hfinkel, davidxl

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D22835

llvm-svn: 277083
2016-07-28 23:31:12 +00:00
Vitaly Buka 0ab23cf1c8 Do not remove empty lifetime.start/lifetime.end ranges
Summary:
Asan stack-use-after-scope check should poison alloca even if there is
no access between start and end.

This is possible for code like this:
for (int i = 0; i < 3; i++) {
  int x;
  p = &x;
}

"Loop Invariant Code Motion" will move "p = &x;" out of the loop, making
start/end range empty.

PR27453

Reviewers: eugenis

Differential Revision: https://reviews.llvm.org/D22842

llvm-svn: 277072
2016-07-28 22:59:03 +00:00
Vitaly Buka 2fae6a7702 Should be committed as one CL.
This reverts commits r277068 r277067 r277066.

llvm-svn: 277071
2016-07-28 22:59:01 +00:00
Vitaly Buka f0500b6ae5 Do not remove empty lifetime.start/lifetime.end ranges
Summary:
Asan stack-use-after-scope check should poison alloca even if there is
no access between start and end.

This is possible for code like this:
for (int i = 0; i < 3; i++) {
  int x;
  p = &x;
}

"Loop Invariant Code Motion" will move "p = &x;" out of the loop, making
start/end range empty.

PR27453

Reviewers: eugenis

Differential Revision: https://reviews.llvm.org/D22842

llvm-svn: 277068
2016-07-28 22:50:48 +00:00
Vitaly Buka 3645793872 maned
llvm-svn: 277067
2016-07-28 22:50:45 +00:00
Michael Kuperstein e45d4d9b35 [PM] Port LowerGuardIntrinsic to the new PM.
llvm-svn: 277057
2016-07-28 22:08:41 +00:00
David Majnemer 3d32b7ed0d [coroutines] Part 3 of N: Adding Boilerplate for Coroutine Passes
This adds boilerplate code for all coroutine passes,
the passes are no-ops for now.
Also, a small test has been added to verify that passes execute in
the expected order or not at all if coroutine support is disabled.

Patch by Gor Nishanov!

Differential Revision: https://reviews.llvm.org/D22847

llvm-svn: 277033
2016-07-28 21:04:31 +00:00
David Majnemer 19d024b2fd [ConstantFolding] Don't bail on folding if ConstantFoldConstantExpression fails
When folding an expression, we run ConstantFoldConstantExpression on
each operand of that expression.
However, ConstantFoldConstantExpression can fail and return nullptr.

Previously, we would bail on further refining the expression.
Instead, use the original operand and see if we can refine a later
operand.

llvm-svn: 276959
2016-07-28 06:39:48 +00:00
David Majnemer 0be7155350 [InstCombine] Handle failures from ConstantFoldConstantExpression
ConstantFoldConstantExpression returns null when folding fails.

This fixes PR28745.

llvm-svn: 276952
2016-07-28 02:29:06 +00:00
Wei Mi 315bb33f27 Fix the assertion error in collectLoopUniforms caused by empty Worklist before expanding.
Contributed-by: David Callahan

Differential Revision: https://reviews.llvm.org/D22886

llvm-svn: 276943
2016-07-27 23:53:58 +00:00
Justin Lebar 23a9686011 [LSV] Don't assume that bitcast ops are Instructions.
Summary:
When we ask the builder to create a bitcast on a constant, we get back a
constant, not an instruction.

Reviewers: asbirlea

Subscribers: jholewinski, mzolotukhin, llvm-commits, arsenm

Differential Revision: https://reviews.llvm.org/D22878

llvm-svn: 276922
2016-07-27 21:45:48 +00:00
Sebastian Pop 55c3007b88 GVN-hoist: improve code generation for recursive GEPs
When loading or storing in a field of a struct like "a.b.c", GVN is able to
detect the equivalent expressions, and GVN-hoist would fail in the code
generation.  This is because the GEPs are not hoisted as scalar operations to
avoid moving the GEPs too far from their ld/st instruction when the ld/st is not
movable.  So we end up having to generate code for the GEP of a ld/st when we
move the ld/st.  In the case of a GEP referring to another GEP as in "a.b.c" we
need to code generate all the GEPs necessary to make all the operands available
at the new location for the ld/st.  With this patch we recursively walk through
the GEP operands checking whether all operands are available, and in the case of
a GEP operand, it recursively makes all its operands available. Code generation
happens from the inner GEPs out until reaching the GEP that appears as an
operand of the ld/st.
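
A rough standalone sketch of the recursive availability walk described above (the Expr type and names are hypothetical, not the GVN-hoist implementation):

```
#include <vector>

struct Expr {
  bool IsGEP;
  bool AvailableAtDest;
  std::vector<Expr *> Operands;
};

// Before moving a load/store, make every operand available at the new
// location; a GEP operand may itself depend on another GEP ("a.b.c"), so
// recurse and "clone" (here: mark available) the inner GEPs first.
static bool makeOperandsAvailable(Expr &E) {
  for (Expr *Op : E.Operands) {
    if (Op->AvailableAtDest)
      continue;                        // already dominates the new location
    if (!Op->IsGEP)
      return false;                    // only GEPs get cloned to the new location
    if (!makeOperandsAvailable(*Op))   // make the inner GEP's operands available
      return false;
    Op->AvailableAtDest = true;        // clone inner GEPs before outer ones
  }
  return true;
}

int main() {
  Expr Base{false, true, {}};              // %a, already available at the hoist point
  Expr InnerGEP{true, false, {&Base}};     // gep %a       (".b")
  Expr OuterGEP{true, false, {&InnerGEP}}; // gep inner    (".c")
  Expr Load{false, false, {&OuterGEP}};    // load through the outer GEP
  return makeOperandsAvailable(Load) ? 0 : 1;
}
```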

Differential Revision: https://reviews.llvm.org/D22599

llvm-svn: 276841
2016-07-27 05:48:12 +00:00
David Majnemer bc36b15253 [ConstantFolding] Correctly handle failures in ConstantFoldConstantExpressionImpl
Failures in ConstantFoldConstantExpressionImpl were ignored causing
crashes down the line.

This fixes PR28725.

llvm-svn: 276827
2016-07-27 02:39:16 +00:00
Andrew Kaylor f990fa5f7b Reverting r276771 due to MSan failures.
llvm-svn: 276824
2016-07-27 01:19:24 +00:00
David Majnemer 6774d612d4 [InstSimplify] Cast folding can be made more generic
Use isEliminableCastPair to determine if a pair of casts are foldable.

llvm-svn: 276777
2016-07-26 17:58:05 +00:00
Andrew Kaylor 3104a6bad0 Re-committing r275284: add support to inline __builtin_mempcpy
Patch by Sunita Marathe

Differential Revision: http://reviews.llvm.org/D21920

llvm-svn: 276771
2016-07-26 17:23:13 +00:00
David Majnemer a90a621d1e Reapply: [InstSimplify] Add support for bitcasts
This reverts commit r276700 and reapplies r276698.
The relevant clang tests have been updated.

llvm-svn: 276727
2016-07-26 05:52:29 +00:00
Sebastian Pop 91d4a30159 GVN-hoist: use a DFS numbering of instructions (PR28670)
Instead of DFS numbering basic blocks, we now DFS number instructions, which
avoids the costly check of which instruction comes first in a basic block.

Patch mostly written by Daniel Berlin.

Differential Revision: https://reviews.llvm.org/D22777

llvm-svn: 276714
2016-07-26 00:15:10 +00:00
Evgeniy Stepanov 906f6fb565 [safestack] Fix stack guard live range.
Stack guard slot is live throughout the function.

llvm-svn: 276712
2016-07-26 00:05:14 +00:00
David Majnemer 6e06b577cc Revert "[InstSimplify] Add support for bitcasts"
This reverts commit r276698.  Clang has tests which rely on the
optimizer :(

llvm-svn: 276700
2016-07-25 22:24:59 +00:00
David Majnemer 62611fd3f7 [InstSimplify] Add support for bitcasts
BitCasts of BitCasts can be folded away as can BitCasts which don't
change the type of the operand.

llvm-svn: 276698
2016-07-25 22:04:58 +00:00
Matt Arsenault 7cddfed7e8 Scalarizer: Support scalarizing intrinsics
llvm-svn: 276681
2016-07-25 20:02:54 +00:00
Evgeniy Stepanov 8d78bd5041 Fix invalid iterator use in safestack coloring.
llvm-svn: 276676
2016-07-25 19:25:40 +00:00
Rong Xu 705f7775bb [PGO] Fix profile mismatch in COMDAT function with pre-inliner
Pre-instrumentation inlining (the pre-inliner) greatly improves the performance
of IR instrumentation code, among other benefits. One issue with the
pre-inliner is that it can introduce CFG mismatches for COMDAT functions. This
is due to the fact that the same COMDAT function may see different early
inline decisions across different modules -- that means different copies
of a COMDAT function will have different CFG checksums.

In this patch, we propose partially renaming the COMDAT group and its
member function/variable so we have a different profile counter for each
version. We post-fix the COMDAT function and the group name with its
FunctionHash.

Differential Revision: http://reviews.llvm.org/D22600

llvm-svn: 276673
2016-07-25 18:45:37 +00:00
Sean Silva fe5abd5e0c Fix : Partial Inliner requires AssumptionCacheTracker
The public InlineFunction utility assumes that the passed in
InlineFunctionInfo has a valid AssumptionCacheTracker.

Patch by River Riddle!

Differential Revision: https://reviews.llvm.org/D22706

llvm-svn: 276609
2016-07-25 05:00:00 +00:00
David Majnemer 68623a0e9f [GVNHoist] Merge metadata on hoisted instructions less conservatively
We can combine metadata from multiple instructions intelligently for
certain metadata nodes.

llvm-svn: 276602
2016-07-25 02:21:25 +00:00
David Majnemer 4728569d0a [GVNHoist] Properly merge alignments when hoisting
If we have two loads with two different alignments, we must use the minimum of
the two alignments when hoisting.  Same deal for stores.

For allocas, use the maximum alignment of the two allocas.

llvm-svn: 276601
2016-07-25 02:21:23 +00:00
Elena Demikhovsky 376a18bd92 [Loop Vectorizer] Handling loops with FP induction variables.
Allowed loop vectorization with secondary FP IVs. Like this:
float *A;
float x = init;
for (int i=0; i < N; ++i) {
  A[i] = x;
  x -= fp_inc;
}

The auto-vectorization is possible when the induction binary operator is "fast" or the function has the "unsafe" FP-math attribute.
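
For illustration, a standalone C++ sketch of what vectorizing such a secondary FP IV with VF = 4 can look like (an assumed shape, not the vectorizer's actual output); the reassociation it relies on is only legal under fast/unsafe FP math:

```
#include <cassert>
#include <cmath>

int main() {
  const int N = 16;
  const float init = 100.0f, fp_inc = 0.5f;
  float A[N], B[N];

  // Scalar loop from the message above.
  float x = init;
  for (int i = 0; i < N; ++i) { A[i] = x; x -= fp_inc; }

  // "Vectorized" form, modelled with a 4-element array as the vector FP IV:
  // start at {x, x-inc, x-2*inc, x-3*inc} and step every lane by 4*inc.
  float v[4] = {init, init - fp_inc, init - 2 * fp_inc, init - 3 * fp_inc};
  for (int i = 0; i < N; i += 4) {
    for (int lane = 0; lane < 4; ++lane) B[i + lane] = v[lane];
    for (int lane = 0; lane < 4; ++lane) v[lane] -= 4 * fp_inc;
  }

  for (int i = 0; i < N; ++i) assert(std::fabs(A[i] - B[i]) < 1e-4f);
  return 0;
}
```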

Differential Revision: https://reviews.llvm.org/D21330

llvm-svn: 276554
2016-07-24 07:24:54 +00:00
Sanjay Patel 1271bf9178 [InstCombine] allow icmp (bit-manipulation-intrinsic(), C) folds for vectors
llvm-svn: 276523
2016-07-23 13:06:49 +00:00
Xinliang David Li 9239245401 [Profile] Use explicit flag to enable IR PGO
Patch by Jake VanAdrighem

Differential Revision: http://reviews.llvm.org/D22607

llvm-svn: 276516
2016-07-23 04:28:52 +00:00
Sanjay Patel 8d8594acb9 auto-generate checks
llvm-svn: 276501
2016-07-23 00:09:54 +00:00
Adam Nemet 9e6e63fba2 [LoopDataPrefetch] Include hotness of region in opt remark
llvm-svn: 276488
2016-07-22 22:53:17 +00:00
Sanjay Patel e063ddb347 add tests for icmp vector folds
llvm-svn: 276482
2016-07-22 22:19:52 +00:00
Michael Kuperstein 38e7298093 [SLPVectorizer] Vectorize reverse-order loads in horizontal reductions
When vectorizing a tree rooted at a store bundle, we currently try to sort the
stores before building the tree, so that the stores can be vectorized. For other
trees, the order of the root bundle - which determines the order of all other
bundles - is arbitrary. That is bad, since if a leaf bundle of consecutive loads
happens to appear in the wrong order, we will not vectorize it.

This is partially mitigated when the root is a binary operator, by trying to
build a "reversed" tree when that's considered profitable. This patch extends the
workaround we have for binops to trees rooted in a horizontal reduction.

This fixes PR28474.

Differential Revision: https://reviews.llvm.org/D22554

llvm-svn: 276477
2016-07-22 21:28:48 +00:00
Sanjay Patel cbc4377af1 add tests for icmp vector folds
llvm-svn: 276476
2016-07-22 21:28:20 +00:00
Sanjay Patel 97e61dcc2d add tests for icmp vector folds
llvm-svn: 276475
2016-07-22 21:13:08 +00:00
Sanjay Patel b73d7aed71 add tests for icmp vector folds
llvm-svn: 276472
2016-07-22 21:02:33 +00:00
Sanjay Patel 859278005d update to use FileCheck and auto-generate checks
llvm-svn: 276466
2016-07-22 20:39:07 +00:00
Sanjay Patel 296a776a5b add tests for icmp vector folds
llvm-svn: 276464
2016-07-22 20:11:08 +00:00
Jun Bum Lim 6a7dc5c430 Recommit - [DSE] Enhance shortening MemIntrinsic based on OverlapIntervals
Recommiting r275571 after fixing crash reported in PR28270.
Now we erase elements of IOL in deleteDeadInstruction().

Original Summary:
This change uses the overlap interval map built from partial overwrite tracking to shorten MemIntrinsics.
Add test cases for opportunities which were missed before.

llvm-svn: 276452
2016-07-22 18:27:24 +00:00
Sanjay Patel beaea95a0d add tests for vector bit manipulation intrinsics
llvm-svn: 276451
2016-07-22 18:22:25 +00:00
Anna Thomas 0be4a0e6a4 Invariant start/end intrinsics overloaded for address space
Summary:
The llvm.invariant.start and llvm.invariant.end intrinsics currently
support specifying invariant memory objects only in the default address
space.

With this change, these intrinsics are overloaded for any address space
for memory objects
and we can use these llvm invariant intrinsics in non-default address
spaces.

Example: llvm.invariant.start.p1i8(i64 4, i8 addrspace(1)* %ptr)

This overloaded intrinsic is needed for representing final or invariant
memory in managed languages.

Reviewers: apilipenko, reames

Subscribers: llvm-commits
llvm-svn: 276447
2016-07-22 17:49:40 +00:00
David Majnemer 522a91181a Don't remove side effecting instructions due to ConstantFoldInstruction
Just because we can constant fold the result of an instruction does not
imply that we can delete the instruction.  It may have side effects.

This fixes PR28655.

llvm-svn: 276389
2016-07-22 04:54:44 +00:00
Sanjoy Das aae623f4c2 [IRCE] Don't misuse CHECK-LABEL; NFC
llvm-svn: 276373
2016-07-22 00:41:02 +00:00
Sanjoy Das bb969791b4 [IRCE] Add an option to skip profitability checks
If `-irce-skip-profitability-checks` is passed in, IRCE will kick in in
all cases where it is legal for it to kick in.  This flag is intended to
help diagnose and analyse performance issues.

llvm-svn: 276372
2016-07-22 00:40:56 +00:00
Sebastian Pop 31fd506623 GVN-hoist: only clone GEPs (PR28606)
Do not clone stored values unless they are GEPs that are special cased to avoid
hoisting them without hoisting their associated ld/st.

Differential revision: https://reviews.llvm.org/D22652

llvm-svn: 276358
2016-07-21 23:22:10 +00:00
Wei Mi 1cf58f8996 [PM] Port NaryReassociate to the new PM
Differential Revision: https://reviews.llvm.org/D22648

llvm-svn: 276349
2016-07-21 22:28:52 +00:00
Sanjay Patel e9fc79bb13 [InstSimplify] don't crash handling a pointer or aggregate type
llvm-svn: 276345
2016-07-21 21:56:00 +00:00
Sanjay Patel a3bfb4e313 [InstSimplify] recognize trunc + icmp sgt/slt variants of select simplifications (PR28466)
rL245171 exposed a hole in InstSimplify that manifested in a strange way in PR28466:
https://llvm.org/bugs/show_bug.cgi?id=28466

It's possible to use trunc + icmp sgt/slt in place of an and + icmp eq/ne, so we need to
recognize that pattern to eliminate selects that are choosing between some value and some
bitmasked version of that value.

Note that there is significant room for improvement (refactoring) and enhancement (more
patterns, possibly in InstCombine rather than here).

Differential Revision: https://reviews.llvm.org/D22537

llvm-svn: 276341
2016-07-21 21:26:45 +00:00
Adam Nemet 84a6425d61 [OptDiag,LDist] Convert remaining opt remarks to use the new API
llvm-svn: 276340
2016-07-21 21:21:34 +00:00
Matthew Simpson 102729cf1b [LV] Move vector int induction update to end of latch
This patch moves the update instruction for vectorized integer induction phi
nodes to the end of the latch block. This ensures consistent placement of all
induction updates across all the kinds of int inductions we create (scalar,
splat vector, or vector phi).

Differential Revision: https://reviews.llvm.org/D22416

llvm-svn: 276339
2016-07-21 21:20:15 +00:00
Sanjay Patel 9eec550a2b add vector tests and a simpler version of the negative tests
llvm-svn: 276328
2016-07-21 20:11:08 +00:00
Anna Thomas c858faa244 Revert "Invariant start/end intrinsics overloaded for address space"
This reverts commit r276316.

llvm-svn: 276320
2016-07-21 19:06:28 +00:00
Anna Thomas 29b24dfe44 Invariant start/end intrinsics overloaded for address space
Summary:
The llvm.invariant.start and llvm.invariant.end intrinsics currently
support specifying invariant memory objects only in the default address space.

With this change, these intrinsics are overloaded for any address space for memory objects
and we can use these llvm invariant intrinsics in non-default address spaces.

Example: llvm.invariant.start.p1i8(i64 4, i8 addrspace(1)* %ptr)

This overloaded intrinsic is needed for representing final or invariant memory in managed languages.

Reviewers: tstellarAMD, reames, apilipenko

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D22519

llvm-svn: 276316
2016-07-21 18:41:44 +00:00
David Majnemer 825e4ab9e3 [GVNHoist] Preserve optimization hints which agree
If we have optimization hints which agree with each other along different
paths, preserve them.

llvm-svn: 276248
2016-07-21 07:16:26 +00:00
David Majnemer 4808f26422 [GVNHoist] Don't wrongly preserve TBAA
We hoisted loads/stores without taking TBAA metadata into account, which can
cause miscompiles.

llvm-svn: 276240
2016-07-21 05:59:53 +00:00
Adam Nemet 7cfd5971ab [OptDiag,LV] Add hotness attribute to applied-optimization remarks
Test coverage is provided by modifying the function in the FP-math
testcase that we are allowed to vectorize.

llvm-svn: 276223
2016-07-21 01:07:13 +00:00
Sanjay Patel 0753c06d9c [InstCombine] LogicOpc (zext X), C --> zext (LogicOpc X, C) (PR28476)
The benefits of this change include:
1. Remove DeMorgan-matching code that was added specifically to work-around 
   the missing transform in http://reviews.llvm.org/rL248634.
2. Makes the DeMorgan transform work for vectors too.
3. Fix PR28476: https://llvm.org/bugs/show_bug.cgi?id=28476

Extending this transform to other casts and other associative operators may
be useful too. See https://reviews.llvm.org/D22421 for a prerequisite for
doing that though.

Differential Revision: https://reviews.llvm.org/D22271

llvm-svn: 276221
2016-07-21 00:24:18 +00:00
Adam Nemet 0e0e2d5d26 [OptDiag,LV] Add hotness attribute to the derived analysis remarks
This includes FPCompute and Aliasing.

Testcase is based on no_fpmath.ll.

llvm-svn: 276211
2016-07-20 23:50:32 +00:00
Sanjay Patel 5f3c70307d [InstSimplify][InstCombine] don't crash when folding vector selects of icmp
Differential Revision: https://reviews.llvm.org/D22602

llvm-svn: 276209
2016-07-20 23:40:01 +00:00
Justin Lebar cd564c6b46 [NVPTX] Enable the load-store vectorizer on nvptx.
Reviewers: tra

Subscribers: jholewinski, arsenm, asbirlea

Differential Revision: https://reviews.llvm.org/D22592

llvm-svn: 276196
2016-07-20 22:11:36 +00:00
Adam Nemet 5b3a5cf6b0 [OptDiag,LV] Add hotness attribute to analysis remarks
The earlier change added hotness attribute to missed-optimization
remarks.  This follows up with the analysis remarks (the ones explaining
the reason for the missed optimization).

llvm-svn: 276192
2016-07-20 21:44:26 +00:00
David Majnemer bd21012c6c [GVNHoist] Don't hoist PHI nodes
We hoisted PHIs without respecting their special insertion point in the
block, leading to verifier errors.

This fixes PR28626.

llvm-svn: 276181
2016-07-20 21:05:01 +00:00
Davide Italiano 15ff2d6d0c [SCCP] Zap multiple return values.
We can replace the return values with undef if we replaced all
the call uses with a constant/undef.

Differential Revision:  https://reviews.llvm.org/D22336

llvm-svn: 276174
2016-07-20 20:17:13 +00:00
Justin Lebar a272c12b73 [LSV] Don't move stores across may-load instrs, and loosen restrictions on moving loads.
Summary:
Previously we wouldn't move loads/stores across instructions that had
side-effects, where that was defined as may-write or may-throw.  But
this is not sufficiently restrictive: Stores can't safely be moved
across instructions that may load.

This patch also adds a DEBUG check that all instructions in our chain
are either loads or stores.

Reviewers: asbirlea

Subscribers: llvm-commits, jholewinski, arsenm, mzolotukhin

Differential Revision: https://reviews.llvm.org/D22547

llvm-svn: 276171
2016-07-20 20:07:37 +00:00
Justin Lebar 62b03e344e [LSV] Vectorize up to side-effecting instructions.
Summary:
Previously if we had a chain that contained a side-effecting
instruction, we wouldn't vectorize it at all.  Now we'll vectorize
everything that comes before the side-effecting instruction.

Reviewers: asbirlea

Subscribers: arsenm, jholewinski, llvm-commits, mzolotukhin

Differential Revision: https://reviews.llvm.org/D22536

llvm-svn: 276170
2016-07-20 20:07:34 +00:00
Sanjay Patel c0812702f8 minimize tests and auto-generate checks
llvm-svn: 276147
2016-07-20 17:58:20 +00:00
Benjamin Kramer b4d64cf27d Revert "[InstCombine] Enable cast-folding in logic(cast(icmp), cast(icmp))"
Makes InstCombine infloop when compiling v8.

This reverts commit r275989 and r276105.

llvm-svn: 276106
2016-07-20 11:40:16 +00:00
Tobias Grosser 8c6201b49f [InstCombine] Provide more test cases for cast-folding [NFC]
Summary: In r275989 we enabled the folding of `logic(cast(icmp), cast(icmp))` to `cast(logic(icmp, icmp))`. Here we add more test cases to assure this folding works for all logical operations `and`/`or`/`xor`.

Reviewers: grosser

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D22561

Contributed-by: Matthias Reisinger
llvm-svn: 276105
2016-07-20 11:24:27 +00:00
Simon Pilgrim 1b4f511aaa [X86][SSE] Add cost model values for CTPOP of vectors
This patch adds costs for the vectorized implementations of CTPOP; the default values were seriously underestimating the cost of these and were encouraging vectorization on targets where serialized use of POPCNT would be much better.

Differential Revision: https://reviews.llvm.org/D22456

llvm-svn: 276104
2016-07-20 10:41:28 +00:00
David Majnemer a75736087d Forgot to add a test for r276008.
llvm-svn: 276082
2016-07-20 04:13:05 +00:00
Adam Nemet 67c8929a2c [LV] Add hotness attribute to missed-optimization remarks
The new OptimizationRemarkEmitter analysis pass is hooked up to both new
and old PM passes.

llvm-svn: 276080
2016-07-20 04:03:43 +00:00
Michael Zolotukhin 6bc56d552a Revert "Revert r275883 and r275891. They seem to cause PR28608."
This reverts commit r276064, and thus reapplies r275891 and r275883 with
a fix for PR28608.

llvm-svn: 276077
2016-07-20 01:55:27 +00:00
Justin Lebar 6114b37838 [LSV] Don't assume that loads/stores appear in address order in the BB.
Summary:
getVectorizablePrefix previously didn't work properly in the face of
aliasing loads/stores.  It unwittingly assumed that the loads/stores
appeared in the BB in address order.  If they didn't, it would do the
wrong thing.

Reviewers: asbirlea, tstellarAMD

Subscribers: arsenm, llvm-commits, mzolotukhin

Differential Revision: https://reviews.llvm.org/D22535

llvm-svn: 276072
2016-07-20 00:55:12 +00:00
Sean Silva 554efb28d2 Revert r275883 and r275891. They seem to cause PR28608.
Revert "[LoopSimplify] Update LCSSA after separating nested loops."

This reverts commit r275891.

Revert "[LCSSA] Post-process PHI-nodes created by SSAUpdate when constructing LCSSA form."

This reverts commit r275883.

llvm-svn: 276064
2016-07-19 23:54:29 +00:00
Sean Silva e3c18a5ae8 [PM] Port LoopUnroll.
We just set PreserveLCSSA to always true since we don't have an
analogous method `mustPreserveAnalysisID(LCSSA)`.

Also port LoopInfo verifier pass to test LoopUnrollPass.

llvm-svn: 276063
2016-07-19 23:54:23 +00:00
Justin Lebar 8778c62629 [LSV] Insert stores at the right point.
Summary:
Previously, the insertion point for stores was the last instruction in
Chain *before calling getVectorizablePrefixEndIdx*.  Thus if
getVectorizablePrefixEndIdx didn't return Chain.size(), we still would
insert at the last instruction in Chain.

This patch changes our internal API a bit in an attempt to make it less
prone to this sort of error.  As a result, we end up recalculating the
Chain's boundary instructions, but I think worrying about the speed hit
of this is a premature optimization right now.

Reviewers: asbirlea, tstellarAMD

Subscribers: mzolotukhin, arsenm, llvm-commits

Differential Revision: https://reviews.llvm.org/D22534

llvm-svn: 276056
2016-07-19 23:19:20 +00:00
Justin Lebar d9446d3770 [LSV] Add detail to correct-order.ll test.
Summary:
This helps keep us honest -- there were a number of ways we could screw
up and still have passed this test.

Reviewers: asbirlea

Subscribers: llvm-commits, arsenm

Differential Revision: https://reviews.llvm.org/D22531

llvm-svn: 276053
2016-07-19 23:18:59 +00:00
Sanjay Patel d4ea94eb94 regenerate checks
llvm-svn: 276042
2016-07-19 22:32:15 +00:00
Sanjay Patel 2d477e59e8 [InstCombine] fold add(zext(xor X, C), C) --> sext X when C is INT_MIN in the source type
The pattern may look more obviously like a sext if written as:

  define i32 @g(i16 %x) {
    %zext = zext i16 %x to i32
    %xor = xor i32 %zext, 32768
    %add = add i32 %xor, -32768
    ret i32 %add
  }

We already have that fold in visitAdd().
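
For a concrete check, a quick standalone C++ loop (not InstCombine code) verifying that the fold is an identity for a 16-bit source, since 32768 is INT_MIN of the source type:

```
#include <cassert>
#include <cstdint>

int main() {
  for (uint32_t v = 0; v <= 0xFFFF; ++v) {
    uint16_t x = static_cast<uint16_t>(v);
    // zext to i32, xor with 0x8000, add -0x8000 ...
    int32_t viaXorAdd =
        static_cast<int32_t>(static_cast<uint32_t>(x) ^ 0x8000u) - 32768;
    // ... is exactly sext from i16 to i32.
    int32_t viaSext = static_cast<int32_t>(static_cast<int16_t>(x));
    assert(viaXorAdd == viaSext);
  }
  return 0;
}
```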

Differential Revision: https://reviews.llvm.org/D22477

llvm-svn: 276035
2016-07-19 22:09:34 +00:00
Sanjay Patel 47c04f9543 add even more missing tests for simplifySelectBitTest()
llvm-svn: 276024
2016-07-19 20:47:00 +00:00
David Majnemer 5246e0b2c2 [FunctionAttrs] Correct the safety analysis for inference of 'returned'
We skipped over ReturnInsts which didn't return an argument, which would
lead us to incorrectly conclude that an argument returned by another
ReturnInst was 'returned'.
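
A tiny illustrative C example (hypothetical, not from the testcase) of why every return site matters:

```
// 'x' is returned on one path but not the other, so 'x' must NOT be marked
// 'returned' even though one ReturnInst does return it.
int pick(int x, int c) {
  if (c)
    return x;  // returns the argument ...
  return 0;    // ... but this return does not, so the inference has to give up
}
```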

This reverts commit r275756.

This fixes PR28610.

llvm-svn: 276008
2016-07-19 18:50:26 +00:00
David Majnemer 07ea344222 Add a testcase for r275581
llvm-svn: 276002
2016-07-19 17:52:41 +00:00
Sanjay Patel 8b76ebe5b8 add tests related to PR28466
llvm-svn: 275995
2016-07-19 17:07:35 +00:00
Sanjay Patel d2ff6d727f add missing test for simplifySelectBitTest()
llvm-svn: 275990
2016-07-19 16:49:55 +00:00
Tobias Grosser 1c38262279 [InstCombine] Enable cast-folding in logic(cast(icmp), cast(icmp))
Summary:
Currently, InstCombine is already able to fold expressions of the form `logic(cast(A), cast(B))` to the simpler form `cast(logic(A, B))`, where logic designates one of `and`/`or`/`xor`. This transformation is implemented in `foldCastedBitwiseLogic()` in InstCombineAndOrXor.cpp. However, this optimization will not be performed if both `A` and `B` are `icmp` instructions. The decision to preclude casts of `icmp` instructions originates in r48715 in combination with r261707, and can be best understood by the title of the former one:

> Transform (zext (or (icmp), (icmp))) to (or (zext (cimp), (zext icmp))) if at least one of the (zext icmp) can be transformed to eliminate an icmp.

Apparently, it introduced a transformation that is a reverse of the transformation that is done in `foldCastedBitwiseLogic()`. Its purpose is to expose pairs of `zext icmp` that would subsequently be optimized by `transformZExtICmp()` in InstCombineCasts.cpp. Therefore, in order to avoid an endless loop of switching back and forth between these two transformations, the one in `foldCastedBitwiseLogic()` has been restricted to exclude `icmp` instructions which is mirrored in the responsible check:

`if ((!isa<ICmpInst>(Cast0Src) || !isa<ICmpInst>(Cast1Src)) && ...`

This check seems to sort out more cases than necessary because:
- the reverse transformation is obviously done for `or` instructions only
- and also not every `zext icmp` pair is necessarily the result of this reverse transformation

Therefore we now remove this check and replace it by a more finegrained one in `shouldOptimizeCast()` that now rejects only those `logic(zext(icmp), zext(icmp))` that would be able to be optimized by `transformZExtICmp()`, which also avoids the mentioned endless loop. That means we are now able to also simplify expressions of the form `logic(cast(icmp), cast(icmp))` to `cast(logic(icmp, icmp))` (`cast` being an arbitrary `CastInst`).

As an example, consider the following IR snippet

```
%1 = icmp sgt i64 %a, %b
%2 = zext i1 %1 to i8
%3 = icmp slt i64 %a, %c
%4 = zext i1 %3 to i8
%5 = and i8 %2, %4
```

which would now be transformed to

```
%1 = icmp sgt i64 %a, %b
%2 = icmp slt i64 %a, %c
%3 = and i1 %1, %2
%4 = zext i1 %3 to i8
```

This issue became apparent when experimenting with the programming language Julia, which makes use of LLVM. Currently, Julia lowers its `Bool` datatype to LLVM's `i8` (also see https://github.com/JuliaLang/julia/pull/17225). In fact, the above IR example is the lowered form of the Julia snippet `(a > b) & (a < c)`. Like shown above, this may introduce `zext` operations, casting between `i1` and `i8`, which could for example hinder ScalarEvolution and Polly on certain code.

Reviewers: grosser, vtjnash, majnemer

Subscribers: majnemer, llvm-commits

Differential Revision: https://reviews.llvm.org/D22511

Contributed-by: Matthias Reisinger
llvm-svn: 275989
2016-07-19 16:39:17 +00:00
Simon Pilgrim 0ea8d275cc [X86][SSE] Reimplement SSE fp2si conversion intrinsics instead of using generic IR
D20859 and D20860 attempted to replace the SSE (V)CVTTPS2DQ and VCVTTPD2DQ truncating conversions with generic IR instead.

It turns out that the behaviour of these intrinsics is different enough from generic IR that this will cause problems, INF/NAN/out of range values are guaranteed to result in a 0x80000000 value - which plays havoc with constant folding which converts them to either zero or UNDEF. This is also an issue with the scalar implementations (which were already generic IR and what I was trying to match).

This patch changes both scalar and packed versions back to using x86-specific builtins.

It also deals with the other scalar conversion cases that are runtime rounding mode dependent and can have similar issues with constant folding.

A companion clang patch is at D22105

Differential Revision: https://reviews.llvm.org/D22106

llvm-svn: 275981
2016-07-19 15:07:43 +00:00
George Burgess IV 5f30897b7b [MemorySSA] Update to the new shiny walker.
This patch updates MemorySSA's use-optimizing walker to be more
accurate and, in some cases, faster.

Essentially, this changed our core walking algorithm from a
cache-as-you-go DFS to an iteratively expanded DFS, with all of the
caching happening at the end. Said expansion happens when we hit a Phi,
P; we'll try to do the smallest amount of work possible to see if
optimizing above that Phi is legal in the first place. If so, we'll
expand the search to see if we can optimize to the next phi, etc.

An iteratively expanded DFS lets us potentially quit earlier (because we
don't assume that we can optimize above all phis) than our old walker.
Additionally, because we don't cache as we go, we can now optimize above
loops.

As an added bonus, this patch adds a ton of verification (if
EXPENSIVE_CHECKS are enabled), so finding bugs is easier.

Differential Revision: https://reviews.llvm.org/D21777

llvm-svn: 275940
2016-07-19 01:29:15 +00:00
Wei Mi 79997a24d7 Recommit the patch "Use uniforms set to populate VecValuesToIgnore".
For instructions in uniform set, they will not have vector versions so
add them to VecValuesToIgnore.
For induction vars, those only used in uniform instructions or consecutive
ptrs instructions have already been added to VecValuesToIgnore above. For
those induction vars which are only used in uniform instructions or
non-consecutive/non-gather scatter ptr instructions, the related phi and
update will also be added into VecValuesToIgnore set.

The change will make the vector RegUsages estimation less conservative.

Differential Revision: https://reviews.llvm.org/D20474

The recommit fixed the testcase global_alias.ll.

llvm-svn: 275936
2016-07-19 00:50:43 +00:00
Sanjoy Das ab73c9d88e [LoopReroll] Reroll loops with unordered atomic memory accesses
Reviewers: hfinkel, jfb, reames

Subscribers: mcrosier, mzolotukhin, llvm-commits

Differential Revision: https://reviews.llvm.org/D22385

llvm-svn: 275932
2016-07-19 00:23:54 +00:00
Dehao Chen 6132ee8502 [PM] Convert Loop Strength Reduce pass to new PM
Summary: Convert Loop Strength Reduce pass to new PM

Reviewers: davidxl, silvas

Subscribers: junbuml, sanjoy, mzolotukhin, llvm-commits

Differential Revision: https://reviews.llvm.org/D22468

llvm-svn: 275919
2016-07-18 21:41:50 +00:00
Teresa Johnson 2124157102 [PM] Port FunctionImport Pass to new PM
Summary: Port FunctionImport Pass to new PM.

Reviewers: mehdi_amini, davide

Subscribers: davidxl, llvm-commits

Differential Revision: https://reviews.llvm.org/D22475

llvm-svn: 275916
2016-07-18 21:22:24 +00:00
Wei Mi f9afff71a2 Revert rL275912.
llvm-svn: 275915
2016-07-18 21:14:43 +00:00
Wei Mi 1fd25726af Use uniforms set to populate VecValuesToIgnore.
For instructions in uniform set, they will not have vector versions so
add them to VecValuesToIgnore.
For induction vars, those only used in uniform instructions or consecutive
ptrs instructions have already been added to VecValuesToIgnore above. For
those induction vars which are only used in uniform instructions or
non-consecutive/non-gather scatter ptr instructions, the related phi and
update will also be added into VecValuesToIgnore set.

The change will make the vector RegUsages estimation less conservative.

Differential Revision: https://reviews.llvm.org/D20474

llvm-svn: 275912
2016-07-18 20:59:53 +00:00
Sanjay Patel dbf44f5016 add tests for missed sext transform
llvm-svn: 275908
2016-07-18 20:37:51 +00:00
Sanjay Patel 8a2bf3099f auto-generate checks
llvm-svn: 275899
2016-07-18 20:06:51 +00:00
Michael Zolotukhin ea5b72825b [LoopSimplify] Update LCSSA after separating nested loops.
Summary:
Usually LCSSA survives this transformation, but in some cases (see
attached test) it doesn't: values from the original loop after
separating might be used from the outer loop. Before the transformation
it was the same loop, so LCSSA phis were not required.

This fixes PR28272.

Reviewers: sanjoy, hfinkel, chandlerc

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D21665

llvm-svn: 275891
2016-07-18 19:44:19 +00:00
Michael Zolotukhin 7a3040dc83 [LCSSA] Post-process PHI-nodes created by SSAUpdate when constructing LCSSA form.
Summary:
SSAUpdate might insert PHI-nodes inside loops, which can break LCSSA
form unless we fix it up.

This fixes PR28424.

Reviewers: sanjoy, chandlerc, hfinkel

Subscribers: uabelho, llvm-commits

Differential Revision: http://reviews.llvm.org/D21997

llvm-svn: 275883
2016-07-18 19:05:08 +00:00
Adam Nemet d6ba0bf831 [LoopDist] This test does not require ASSERTS
Only its counterpart, diagnostics-with-hotness-lazy-BFI.ll, does, because it
invokes opt with -debug-only=.

llvm-svn: 275812
2016-07-18 16:37:32 +00:00
Adam Nemet b2593f78ca [LoopDist] Port to new PM
Summary:
The direct motivation for the port is to ensure that the OptRemarkEmitter
tests work with the new PM.

This remains a function pass because we not only create multiple loops
but could also version the original loop.

In the test I need to invoke opt
with -passes='require<aa>,loop-distribute'.  LoopDistribute does not
directly depend on AA; however, LAA does.  LAA uses getCachedResult, so
I *think* we need to manually pull in 'aa'.

Reviewers: davidxl, silvas

Subscribers: sanjoy, llvm-commits, mzolotukhin

Differential Revision: https://reviews.llvm.org/D22437

llvm-svn: 275811
2016-07-18 16:29:27 +00:00
Alexander Kornienko 63dd36faa5 Revert "r275571 [DSE] Enhance shortening MemIntrinsic based on OverlapIntervals"
Causes https://llvm.org/bugs/show_bug.cgi?id=28588

llvm-svn: 275801
2016-07-18 15:51:31 +00:00
Simon Pilgrim 1b2ab113fb [SLPVectorizer][X86] Added sqrt vectorization tests
llvm-svn: 275788
2016-07-18 13:20:54 +00:00
David Majnemer 04c7c225a1 [GVNHoist] Change the key for VNtoInsns to a pair
While debugging GVNHoist, I found it confusing that the entries in a
VNtoInsns were not always value numbers.  They _usually_ were except for
StoreInst in which case they were a hash of two different value numbers.

This leads to a few observations:
- It is more difficult to debug things when the semantic contents of
  VNtoInsns changes over time.
- Using a single value number is not much cheaper; the mapped value in
  VNtoInsns is a SmallVector.
- It is not immediately clear what the algorithm would do if there were
  hash collisions in the StoreInst case.

Using a DenseMap of std::pair sidesteps all of this.

N.B.  The changes in the test were due to their sensitivity to the
iteration order of VNtoInsns, which has changed.
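
A sketch of the new keying, with type names assumed rather than copied from GVNHoist:

```
// Key the table on the pair of value numbers directly instead of hashing the
// two numbers into a single unsigned key (assumes the LLVM ADT headers).
#include "llvm/ADT/DenseMap.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/Instruction.h"
#include <utility>

using VNType = std::pair<unsigned, unsigned>;  // (VN, second VN or 0)
using VNtoInsnsTy =
    llvm::DenseMap<VNType, llvm::SmallVector<llvm::Instruction *, 4>>;
```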

llvm-svn: 275761
2016-07-18 06:11:37 +00:00
NAKAMURA Takumi 966bde50c3 Revert r275678, "Revert "Revert r275027 - Let FuncAttrs infer the 'returned' argument attribute""
This reverts also r275029, "Update Clang tests after adding inference for the returned argument attribute"

It broke LTO build. Seems miscompilation.

llvm-svn: 275756
2016-07-18 03:23:25 +00:00
Davide Italiano 4edd54794b [GVN] Move other PRE tests to a subdirectory.
llvm-svn: 275742
2016-07-17 23:55:20 +00:00
Davide Italiano ed8e0881c1 [GVN] Move the PRE/LOADPRE test in a subdirectory.
llvm-svn: 275741
2016-07-17 23:48:18 +00:00
Davide Italiano 6a69f829bd [GVN] Use FileCheck instead of grep for tests.
llvm-svn: 275739
2016-07-17 23:21:26 +00:00
Teresa Johnson cd21a646f6 [ThinLTO] Perform profile-guided indirect call promotion
Summary:
To enable profile-guided indirect call promotion in ThinLTO mode, we
simply add call graph edges for each profitable target from the profile
to the summaries, then the summary-guided importing will consider the
callee for importing as usual.

Also we need to enable the indirect call promotion pass creation in the
PassManagerBuilder when PerformThinLTO=true (we are in the ThinLTO
backend), so that the newly imported functions are considered for
promotion in the backends.

The IC promotion profiles refer to callees by GUID, which required
adding GUIDs to the per-module VST in bitcode (and assigning them
valueIds similar to how they are assigned valueIds in the combined
index).

Reviewers: mehdi_amini, xur

Subscribers: mehdi_amini, davidxl, llvm-commits

Differential Revision: http://reviews.llvm.org/D21932

llvm-svn: 275707
2016-07-17 14:47:01 +00:00
Dehao Chen 1a44452b11 [PM] Convert IVUsers analysis to new pass manager.
Summary: Convert IVUsers analysis to new pass manager.

Reviewers: davidxl, silvas

Subscribers: junbuml, sanjoy, llvm-commits, mzolotukhin

Differential Revision: https://reviews.llvm.org/D22434

llvm-svn: 275698
2016-07-16 22:51:33 +00:00
Sanjay Patel 79acd2a96b [InstCombine] allow X + signbit --> X ^ signbit for vector splats
llvm-svn: 275691
2016-07-16 18:29:26 +00:00
Sanjay Patel 040bd16e56 add vector test to show missing transform
llvm-svn: 275690
2016-07-16 18:24:18 +00:00
Sanjay Patel eb50476f77 update tests to use FileCheck, consolidate tests, fix comments
llvm-svn: 275688
2016-07-16 18:08:22 +00:00
Sanjay Patel e540a31f59 update test to use FileCheck
llvm-svn: 275687
2016-07-16 16:31:58 +00:00
Sanjay Patel 972a53fb42 auto-generate checks
llvm-svn: 275686
2016-07-16 16:27:58 +00:00
Sanjay Patel 309f98ef3a auto-generate checks
llvm-svn: 275685
2016-07-16 16:24:06 +00:00
Sanjay Patel f9d2b20daf [InstCombine] reassociate logic ops with constants separated by a zext
This is a partial implementation of a general fold for associative+commutative operators:
(op (cast (op X, C2)), C1) --> (cast (op X, op (C1, C2)))
(op (cast (op X, C2)), C1) --> (op (cast X), op (C1, C2))

There are 7 associative operators and 13 cast types, so this could potentially go a lot further.

Differential Revision: https://reviews.llvm.org/D22421

llvm-svn: 275684
2016-07-16 15:20:19 +00:00
Hal Finkel 660096b260 Revert "Revert r275027 - Let FuncAttrs infer the 'returned' argument attribute"
This reverts commit r275042; the initial commit triggered self-hosting failures
on ARM/AArch64. James Molloy identified the problematic backend code, which has
been disabled in r275677. Trying again...

Original commit message:

Let FuncAttrs infer the 'returned' argument attribute

A function can have one argument with the 'returned' attribute, indicating that
the associated argument is always the return value of the function. Add
FuncAttrs inference logic.

llvm-svn: 275678
2016-07-16 07:21:28 +00:00
Matt Arsenault 93be6e8c0a StructurizeCFG: Fix inverting constantexpr conditions
llvm-svn: 275626
2016-07-15 22:13:16 +00:00
Jingyue Wu 2b353a9522 [ReassociateGEP] Update tests to allow missing "inbounds" on certain GEPs.
With r275532 fixing miscompilation of GVN, "inbounds" on certain GEPs in these
tests cannot be preserved any more. Left a TODO in the tests for future
reference.

llvm-svn: 275596
2016-07-15 18:47:17 +00:00
Sanjay Patel 27fefb2fcf add tests for associative ops blocked by a cast
These are more generalized versions of the cases added in
r275302 and r275297.

llvm-svn: 275594
2016-07-15 18:39:02 +00:00
Rong Xu 96a19d35ae [PGO] IRPGO pre-cleanup pass changes
This patch adds a selected set of cleanup passes including a pre-inline pass
before LLVM IR PGO instrumentation. The pre-inliner is only intended to apply
the obvious/trivial inlines before instrumentation so that much less instrumentation
is needed to get better profiling information. This will drastically improve
the instrumented code performance for large C++ applications. Another benefit
is the context sensitive counts that can potentially improve the PGO
optimization.

Differential Revision: http://reviews.llvm.org/D21405

llvm-svn: 275588
2016-07-15 18:10:49 +00:00
Adam Nemet aad816083e [OptRemark,LDist] RFC: Add hotness attribute
Summary:
This is the first set of changes implementing the RFC from
http://thread.gmane.org/gmane.comp.compilers.llvm.devel/98334

This is a cross-sectional patch; rather than implementing the hotness
attribute for all optimization remarks and all passes in a patch set, it
implements it for the 'missed-optimization' remark for Loop
Distribution.  My goal is to shake out the design issues before scaling
it up to other types and passes.

Hotness is computed as an integer as the multiplication of the block
frequency with the function entry count.  It's only printed in opt
currently since clang prints the diagnostic fields directly.  E.g.:

  remark: /tmp/t.c:3:3: loop not distributed: use -Rpass-analysis=loop-distribute for more info (hotness: 300)
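
As a back-of-the-envelope sketch (my reading of the description above, not the exact emitter code), the hotness value is roughly the function entry count scaled by how often the region's block runs relative to the entry block:

```
#include <cstdint>

uint64_t remarkHotness(uint64_t FunctionEntryCount, uint64_t BlockFreq,
                       uint64_t EntryBlockFreq) {
  // e.g. entry count 100 and a block 3x hotter than the entry block would
  // give the "(hotness: 300)" shown in the remark above.
  return FunctionEntryCount * BlockFreq / EntryBlockFreq;
}
```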

A new API added is similar to emitOptimizationRemarkMissed.  The
difference is that it additionally takes a code region that the
diagnostic corresponds to.  From this, hotness is computed using BFI.
The new API is exposed via an analysis pass so that it can be made
dependent on LazyBFI.  (Thanks to Hal for the analysis pass idea.)

This feature can all be enabled by setDiagnosticHotnessRequested in the
LLVM context.  If this is off, LazyBFI is not calculated (D22141) so
there should be no overhead.

A new command-line option is added to turn this on in opt.

My plan is to switch all user of emitOptimizationRemark* to use this
module instead.

Reviewers: hfinkel

Subscribers: rcox2, mzolotukhin, llvm-commits

Differential Revision: http://reviews.llvm.org/D21771

llvm-svn: 275583
2016-07-15 17:23:20 +00:00
Jun Bum Lim a5737d8eac [DSE] Enhance shortening MemIntrinsic based on OverlapIntervals
Summary:
This change uses the overlap interval map built from partial overwrite tracking to shorten MemIntrinsics.
Add test cases for opportunities which were missed before.
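
A standalone illustration (not the DSE code) of the shortening idea with a toy overlap interval map: if later stores completely overwrite the tail of an earlier 16-byte memset, the memset only needs to cover the bytes that are not overwritten.

```
#include <cassert>
#include <cstdint>
#include <map>

int main() {
  // Overlap intervals from partial-overwrite tracking: begin -> end of byte
  // ranges of the earlier memset that later stores overwrite.
  std::map<int64_t, int64_t> IOL = {{8, 12}, {12, 16}};

  int64_t MemsetBegin = 0, MemsetEnd = 16;
  // Trim the memset's end while the overwritten tail is contiguous.
  int64_t NewEnd = MemsetEnd;
  for (auto It = IOL.rbegin(); It != IOL.rend(); ++It) {
    if (It->second >= NewEnd && It->first < NewEnd)
      NewEnd = It->first;   // bytes [It->first, NewEnd) are overwritten later
    else
      break;
  }
  assert(MemsetBegin == 0 && NewEnd == 8);  // memset can be shortened to [0, 8)
  return 0;
}
```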

Reviewers: hfinkel, eeckstein, mcrosier

Subscribers: mcrosier, llvm-commits

Differential Revision: https://reviews.llvm.org/D21909

llvm-svn: 275571
2016-07-15 16:14:34 +00:00
Sebastian Pop 4177480aad code hoisting pass based on GVN
This pass hoists duplicated computations in the program. The primary goal of
gvn-hoist is to reduce the size of functions before inline heuristics to reduce
the total cost of function inlining.

Pass written by Sebastian Pop, Aditya Kumar, Xiaoyu Hu, and Brian Rzycki.
Important algorithmic contributions by Daniel Berlin under the form of reviews.

Differential Revision: http://reviews.llvm.org/D19338

llvm-svn: 275561
2016-07-15 13:45:20 +00:00
David Majnemer 959a6623b5 XFAIL two SeparateConstOffsetFromGEP tests
They appear to have relied on bugs hidden in copyIRFlags/andIRFlags.

This has been filed as PR28564.

llvm-svn: 275533
2016-07-15 05:37:22 +00:00
David Majnemer 92f84ccf0f [IR] andIRFlags and copyIRFlags needs to handle GEP
We didn't consider the inbounds flag on GEPs leading to downstream users
introducing UB.

This fixes PR28562.

llvm-svn: 275532
2016-07-15 05:02:31 +00:00
Adam Nemet 74730d9ab0 [LoopDist] Fix typo in diagnostic
llvm-svn: 275495
2016-07-14 22:33:46 +00:00
Ekaterina Romanova 7aea5906c0 [GVN] Fold constant expression in GVN.
Fix for PR 28418.

opt never finishes compiling a test when -gvn option is passed.
The problem is caused by the fact that GVN fails to fold a constant expression.

Differential Revision: https://reviews.llvm.org/D22185

llvm-svn: 275483
2016-07-14 22:02:25 +00:00
Matthew Simpson 65ca32b83c [LV] Allow interleaved accesses in loops with predicated blocks
This patch allows the formation of interleaved access groups in loops
containing predicated blocks. However, the predicated accesses are prevented
from forming groups.

Differential Revision: https://reviews.llvm.org/D19694

llvm-svn: 275471
2016-07-14 20:59:47 +00:00
Sanjoy Das 13623ad009 [JumpThreading] PRE unordered loads
Summary: Extend JumpThreading's PRE to unordered atomic loads.

Reviewers: hfinkel, reames

Subscribers: mcrosier, llvm-commits

Differential Revision: http://reviews.llvm.org/D22326

llvm-svn: 275456
2016-07-14 19:21:15 +00:00
Jun Bum Lim c837af306e [PM] Port Dead Loop Deletion Pass to the new PM
Summary: Port Dead Loop Deletion Pass to the new pass manager.

Reviewers: silvas, davide

Subscribers: llvm-commits, sanjoy, mcrosier

Differential Revision: https://reviews.llvm.org/D21483

llvm-svn: 275453
2016-07-14 18:28:29 +00:00
Nico Weber 755cd760cd Revert r275401, it caused PR28551.
llvm-svn: 275420
2016-07-14 14:41:25 +00:00
Matthew Simpson 3c3b4a257b [LV] Avoid unnecessary IV scalar-to-vector-to-scalar conversions
This patch prevents increases in the number of instructions, pre-instcombine,
due to induction variable scalarization. An increase in instructions can lead
to an increase in the compile-time required to simplify the induction
variables. We now maintain a new map for scalarized induction variables to
prevent us from converting between the scalar and vector forms.

This patch should resolve compile-time regressions seen after r274627.

llvm-svn: 275419
2016-07-14 14:36:06 +00:00
Sjoerd Meijer 716abbb2f5 This converts a signed remainder instruction to unsigned remainder, which
enables the code size optimisation to fold a rem and div into a single
aeabi_uidivmod call. This was not happening before because sdiv was converted
but srem was not, and instructions with different signedness are not combined.

Differential Revision: http://reviews.llvm.org/D22214

llvm-svn: 275403
2016-07-14 12:23:48 +00:00
Sebastian Pop 63847d04e7 code hoisting pass based on GVN
This pass hoists duplicated computations in the program. The primary goal of
gvn-hoist is to reduce the size of functions before inline heuristics to reduce
the total cost of function inlining.

Pass written by Sebastian Pop, Aditya Kumar, Xiaoyu Hu, and Brian Rzycki.
Important algorithmic contributions by Daniel Berlin under the form of reviews.

Differential Revision: http://reviews.llvm.org/D19338

llvm-svn: 275401
2016-07-14 12:18:53 +00:00
Sjoerd Meijer 38c2cd0c14 This implements a more optimal algorithm for selecting a base constant in
constant hoisting. It not only takes into account the number of uses and the
cost of expressions in which constants appear, but now also the resulting
integer range of the offsets. Thus, the algorithm maximizes the number of uses
within an integer range that will enable more efficient code generation. On
ARM, for example, this will enable code size optimisations because fewer
negative offsets will be created. Negative offsets/immediates are not supported
by Thumb1, which prevents more compact instruction encoding.

Differential Revision: http://reviews.llvm.org/D21183

llvm-svn: 275382
2016-07-14 07:44:20 +00:00
David Majnemer 666aa945a5 [InstCombine] Masked loads with undef masks can fold to normal loads
We were able to fold masked loads with an all-ones mask to a normal
load.  However, we couldn't turn a masked load with a mask with mixed
ones and undefs into a normal load.

llvm-svn: 275380
2016-07-14 06:58:42 +00:00
David Majnemer 17a95aaa7b Simplify llvm.masked.load w/ undef masks
We can always pick the passthru value if the mask is undef: we are
permitted to treat the mask as-if it were filled with zeros.

llvm-svn: 275379
2016-07-14 06:58:37 +00:00
Davide Italiano 7dac027ed7 [IPSCCP] Constant fold struct argument/instructions when all the lattice values are constant.
This now should also work with the interprocedural variant of the pass.
Slightly easier now that the yak is shaved.

Differential Revision:   http://reviews.llvm.org/D22329

llvm-svn: 275363
2016-07-14 02:51:41 +00:00
Mehdi Amini 8484f92f7f [Scalarizer] PR28108: Skip over nullptr rather than crashing on it.
Summary:
In Scalarizer::gather we see if we already have a scattered form of Op,
and in that case use the new form.

In the particular case of PR28108, the found ValueVector SV has size 2,
where the first Value is nullptr, and the second is indeed a proper Value.
The nullptr then caused an assert to blow when we tried to do
cast<Instruction>(SV[I]).

With this patch we check SV[I] before doing the cast, and if it's nullptr
we just skip over it.

I don't know the Scalarizer well enough to know if this is the best fix
or if something should be done else where to prevent the nullptr from
being in the ValueVector at all, but at least this avoids the crash
and looking at the test case output it looks reasonable.

Reviewers: hfinkel, frasercrmck, wala, mehdi_amini

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D21518

llvm-svn: 275359
2016-07-14 01:31:25 +00:00
David Majnemer 7f781aba97 [ConstantFolding] Fold masked loads
We can constant fold a masked load if the operands are appropriately
constant.

Differential Revision: http://reviews.llvm.org/D22324

llvm-svn: 275352
2016-07-14 00:29:50 +00:00
David Majnemer f89660aba7 [ConstantFolding] Extend FoldReinterpretLoadFromConstPtr to handle negative offsets
Treat loads which clip before the start of a global initializer the same
way we treat clipping beyond the end of the initializer: use zeros.

llvm-svn: 275345
2016-07-13 23:33:07 +00:00
Alina Sbirlea 640a61cd8b Extended LoadStoreVectorizer to vectorize subchains.
Summary:
LSV used to abort vectorizing a chain for interleaved load/store accesses that alias.
Allow a valid prefix of the chain to be vectorized, mark just the prefix and retry vectorizing the remaining chain.

Reviewers: llvm-commits, jlebar, arsenm

Subscribers: mzolotukhin

Differential Revision: http://reviews.llvm.org/D22119

llvm-svn: 275317
2016-07-13 21:20:01 +00:00
Andrew Kaylor 346dd7f1bd Reverting r275284 due to platform-specific test failures
llvm-svn: 275304
2016-07-13 19:09:16 +00:00
Sanjay Patel eff2aa70fc add more tests for zexty xor sandwiches
...mmm sandwiches

llvm-svn: 275302
2016-07-13 18:58:55 +00:00
Sanjay Patel 904a88025a add test for zexty xor sandwich
llvm-svn: 275297
2016-07-13 18:40:38 +00:00
Sanjay Patel c00e48a3db [InstCombine] extend vector select matching for non-splat constants
In D21740, we discussed trying to make this a more general matcher. However, I didn't see a clean
way to handle the regular m_Not cases and these non-splat vector patterns, so I've opted for the
direct approach here. If there are other potential uses of areInverseVectorBitmasks(), we could
move that helper function to a higher level.

There is an open question as to which is of these forms should be considered the canonical IR:
  %sel = select <4 x i1> <i1 true, i1 false, i1 false, i1 true>, <4 x i32> %a, <4 x i32> %b
  %shuf = shufflevector <4 x i32> %a, <4 x i32> %b, <4 x i32> <i32 0, i32 5, i32 6, i32 3>

Differential Revision: http://reviews.llvm.org/D22114

llvm-svn: 275289
2016-07-13 18:07:02 +00:00
Andrew Kaylor 12cccdd731 Fix for Bug 26903, adds support to inline __builtin_mempcpy
Patch by Sunita Marathe

Differential Revision: http://reviews.llvm.org/D21920

llvm-svn: 275284
2016-07-13 17:25:11 +00:00
David Majnemer 1b3db33e3d [ConstantFolding] Don't treat negative GEP offsets as positive
GEP offsets are signed, don't treat them as huge positive numbers.

llvm-svn: 275251
2016-07-13 05:16:16 +00:00
Dehao Chen 9cba1f4e7e New pass manager for LICM.
Summary: Port LICM to the new pass manager.

Reviewers: davidxl, silvas

Subscribers: krasin, vitalybuka, silvas, davide, sanjoy, llvm-commits, mehdi_amini

Differential Revision: http://reviews.llvm.org/D21772

llvm-svn: 275222
2016-07-12 22:37:48 +00:00
Michael Kuperstein a99c46cc73 [LV] Remove wrong assumption about LCSSA
The LCSSA pass itself will not generate several redundant PHI nodes in a single
exit block. However, such redundant PHI nodes don't violate LCSSA form, and may
be introduced by passes that preserve LCSSA, and/or preserved by the LCSSA pass
itself. So, assuming a single PHI node per exit block is not safe.

llvm-svn: 275217
2016-07-12 21:24:06 +00:00
Davide Italiano 0080269342 [SCCP] Constant fold structs if all the lattice value are constant.
Differential Revision:   http://reviews.llvm.org/D22269

llvm-svn: 275208
2016-07-12 19:54:19 +00:00
Dehao Chen b9f8e29290 [PM] Port LoopIdiomRecognize Pass to new PM
Summary: Port LoopIdiomRecognize Pass to new PM

Reviewers: davidxl

Subscribers: davide, sanjoy, mzolotukhin, llvm-commits

Differential Revision: http://reviews.llvm.org/D22250

llvm-svn: 275202
2016-07-12 18:45:51 +00:00
Xinliang David Li 9eb472ba4b [PGO] Don't include full file path in static function profile counter names
Patch by Jake VanAdrighem
Differential Revision: http://reviews.llvm.org/D22028

llvm-svn: 275193
2016-07-12 17:14:51 +00:00
Sanjay Patel 4a6a751dce add tests for missing DeMorgan's Law folds
llvm-svn: 275192
2016-07-12 17:05:04 +00:00
Sanjay Patel 3900191ecc auto-generate checks
llvm-svn: 275188
2016-07-12 16:21:55 +00:00
Sanjay Patel 93dffe629a auto-generate checks
llvm-svn: 275187
2016-07-12 16:17:30 +00:00
Sanjay Patel 6d1f227e6b auto-generate checks
llvm-svn: 275186
2016-07-12 16:13:04 +00:00
Vitaly Buka 204dc533c5 Revert "New pass manager for LICM."
Summary: This reverts commit r275118.

Subscribers: sanjoy, mehdi_amini

Differential Revision: http://reviews.llvm.org/D22259

llvm-svn: 275156
2016-07-12 06:25:32 +00:00
Ivan Krasin 5474645dc8 Print remarks from WholeProgramDevirt pass for each call site.
Summary:
It's useful to have some visibility about which call sites are devirtualized,
especially for debug purposes. Another use case is a regression test on the
application side (like Chromium).

Reviewers: pcc

Differential Revision: http://reviews.llvm.org/D22252

llvm-svn: 275145
2016-07-12 02:38:37 +00:00
Dehao Chen 7ef5820fa3 New pass manager for LICM.
Summary: Port LICM to the new pass manager.

Reviewers: davidxl, silvas

Subscribers: silvas, davide, sanjoy, llvm-commits, mehdi_amini

Differential Revision: http://reviews.llvm.org/D21772

llvm-svn: 275118
2016-07-11 22:45:24 +00:00
Alina Sbirlea cbc6ac2afd Correct ordering of loads/stores.
Summary:
Aiming to correct the ordering of loads/stores. This patch changes the
insert point for loads to the position of the first load.
It updates the ordering method for loads to insert before, rather than after.

Before this patch the following sequence:
"load a[1], store a[1], store a[0], load a[2]"
Would incorrectly vectorize to "store a[0,1], load a[1,2]".
The correctness check was assuming the insertion point for loads is at
the position of the first load, when in practice it was at the last
load. An alternative fix would have been to invert the correctness check.
The current fix changes insert position but also requires reordering of
instructions before the vectorized load.

Updated testcases to reflect the changes.

Reviewers: tstellarAMD, llvm-commits, jlebar, arsenm

Subscribers: mzolotukhin

Differential Revision: http://reviews.llvm.org/D22071

llvm-svn: 275117
2016-07-11 22:34:29 +00:00
Michael Kuperstein f0c59330e9 [X86] Make some cast costs more precise
Make some AVX and AVX512 cast costs more precise.
Based on part of a patch by Elena Demikhovsky (D15604).

Differential Revision: http://reviews.llvm.org/D22064

llvm-svn: 275106
2016-07-11 21:39:44 +00:00
Alina Sbirlea 327955e057 Add TLI.allowsMisalignedMemoryAccesses to LoadStoreVectorizer
Summary: Extend TTI to access TLI.allowsMisalignedMemoryAccesses(). Check condition when vectorizing load and store chains.
Add additional parameters: AddressSpace, Alignment, Fast.

Reviewers: llvm-commits, jlebar

Subscribers: arsenm, mzolotukhin

Differential Revision: http://reviews.llvm.org/D21935

llvm-svn: 275100
2016-07-11 20:46:17 +00:00
Jingyue Wu 641cfee976 [SLSR] Call getPointerSizeInBits with the correct address space.
llvm-svn: 275083
2016-07-11 18:13:28 +00:00
Davide Italiano e8ae0b5eb4 [PM/IPO] Port LowerTypeTests to the new PassManager.
There's a little bit of churn in this patch because the initialization
mechanism is now shared between the old and the new PM. Other than
that, it's just a pretty mechanical translation.

llvm-svn: 275082
2016-07-11 18:10:06 +00:00
Dehao Chen 9232f98279 Implement callsite-hotness based inline cost for Sample-based PGO
Summary:
For sample-based PGO, using BFI to calculate the callsite count is sometimes inaccurate. This is because with a sampling-based approach, if a callsite resides in a hot loop deeply nested in a bunch of cold branches, the callsite's BFI frequency would be inaccurately calculated due to a lack of samples in the cold branches.

E.g.

if (A1 && A2 && A3 && ..... && A10) {
  for (i=0; i < 100000000; i++) {
    callsite();
  }
}

Assume that A1 to A10 are all 100% taken, and the callsite has 1000 samples and thus is considered hot. Because the loop's trip count is huge, it's normal that all branches outside the loop have no samples at all. As a result, we can only use static branch probability to derive the frequency of the loop header. Assuming the static heuristic thinks each branch is 50% taken, the count calculated from BFI will be 1/(2^10) of the actual value.

In order to get more accurate callsite count, we directly annotate the weight on the call instruction, and directly use it when checking callsite hotness.

Note that this mechanism can also be shared by instrumentation based callsite hotness analysis. The side benefit is that it breaks the dependency from Inliner to BFI as call count is embedded in the IR.

Reviewers: davidxl, eraman, dnovillo

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D22118

llvm-svn: 275073
2016-07-11 16:48:54 +00:00
Dehao Chen 29d2641f52 Tune the weight propagation algorithm for sample profile.
Summary: Handle the case when there is only one incoming/outgoing edge for a visited basic block: use the block weight to adjust edge weight even when the edge has been visited before. This can help reduce inaccuracies introduced by incorrect basic block profile, as shown in the updated unittest.

Reviewers: davidxl, dnovillo

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D22180

llvm-svn: 275072
2016-07-11 16:40:17 +00:00
Nicolai Haehnle 889a20cf40 [Sink] Don't move calls to readonly functions across stores
Summary:

Reviewers: hfinkel, majnemer, tstellarAMD, sunfish

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D17279

llvm-svn: 275066
2016-07-11 14:11:51 +00:00
Hal Finkel 02012bcfee Revert r275027 - Let FuncAttrs infer the 'returned' argument attribute
Reverting r275027 and r275033. These seem to cause miscompiles on the AArch64 buildbot.

llvm-svn: 275042
2016-07-11 04:51:23 +00:00
Hal Finkel 2cac58f604 Pointer-comparison folding should look through returned-argument functions
For functions which are known to return a specific argument, pointer-comparison
folding can look through the function calls as part of its analysis.
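
For example (hypothetical sketch; tag() stands for any callee whose parameter
carries the 'returned' attribute):

  char *tag(char *p);      // assume: known to always return its argument
  int same(char *p) {
    return tag(p) == p;    // can now be folded to true
  }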

Differential Revision: http://reviews.llvm.org/D9387

llvm-svn: 275039
2016-07-11 03:37:59 +00:00
Hal Finkel 6fd5e1f02b Teach computeKnownBits to look through returned-argument functions
If a function is known to return one of its arguments, we can use that in order
to compute known bits of the return value.

Differential Revision: http://reviews.llvm.org/D9397

llvm-svn: 275036
2016-07-11 02:25:14 +00:00
Hal Finkel d66a7b05db Let FuncAttrs infer the 'returned' argument attribute
A function can have one argument with the 'returned' attribute, indicating that
the associated argument is always the return value of the function. Add
FuncAttrs inference logic.
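
A minimal sketch of a function the inference would apply to (hypothetical
example):

  void log_ptr(void *p);
  void *touch(void *p) {
    log_ptr(p);
    return p;    // every return path yields the argument -> 'returned'
  }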

Differential Revision: http://reviews.llvm.org/D22202

llvm-svn: 275027
2016-07-10 22:02:55 +00:00
Sean Silva db90d4d9c1 [PM] Port LoopVectorize to the new PM.
llvm-svn: 275000
2016-07-09 22:56:50 +00:00
Jingyue Wu debce55ac3 [SLSR] Fix crash on handling 128-bit integers.
ConstantInt::getSExtValue may fail on >64-bit integers. Add checks to call
getSExtValue only on narrow integers.

As a minor aside, simplify slsr-gep.ll to remove unnecessary load instructions.

llvm-svn: 274982
2016-07-09 19:13:18 +00:00
Davide Italiano 92b933a55c [PM] Port CrossDSOCFI to the new pass manager.
llvm-svn: 274962
2016-07-09 03:25:35 +00:00
Davide Italiano cd96cfd8df [PM] Port LoopSimplify to the new pass manager.
While here move simplifyLoop() function to the new header, as
suggested by Chandler in the review.

Differential Revision:  http://reviews.llvm.org/D21404

llvm-svn: 274959
2016-07-09 03:03:01 +00:00
Piotr Padlewski 3b77612839 Add 'thinlto_src_module' md with asserts or -enable-import-metadata
Summary:
This way the metadata will only be generated when asserts are enabled,
or when -enable-import-metadata is specified.

FIXED missing colon on requires.

Reviewers: tejohnson, eraman, mehdi_amini

Subscribers: mehdi_amini, llvm-commits

Differential Revision: http://reviews.llvm.org/D22167

llvm-svn: 274947
2016-07-08 23:01:49 +00:00
Piotr Padlewski d4b792346c Revert "Add 'thinlto_src_module' md with asserts or -enable-import-metadata"
Reverting because of buildbot failure 17463:
http://lab.llvm.org:8011/builders/clang-x86_64-linux-selfhost-modules/builds/17463

This reverts commit d20cb431bba2ba43b4c65a8556cff445bfefbb7c.

llvm-svn: 274946
2016-07-08 22:55:48 +00:00
Anna Thomas 9ad45adfd7 Revert "InstCombine rule to fold truncs whose value is available"
This reverts commit r274853.
Caused failure in ppcBE build

llvm-svn: 274943
2016-07-08 22:15:08 +00:00
Piotr Padlewski d6efefa2b8 Add 'thinlto_src_module' md with asserts or -enable-import-metadata
Summary:
This way the metadata will only be generated when asserts are enabled,
or when -enable-import-metadata is specified.

Reviewers: tejohnson, eraman, mehdi_amini

Subscribers: mehdi_amini, llvm-commits

Differential Revision: http://reviews.llvm.org/D22167

llvm-svn: 274938
2016-07-08 21:25:39 +00:00
Sanjay Patel 664514f7fe [InstCombine] don't form select from bitcasted logic ops if bitcasts have >1 use
This isn't a sure thing (are 2 extra bitcasts less expensive than a logic op?), 
but we'll try to err on the conservative side by going with the case that has
fewer IR instructions.

Note: This question came up in http://reviews.llvm.org/D22114 , but this part is
independent of that patch proposal, so I'm making this small change ahead of that
one. 

See also:
http://reviews.llvm.org/rL274926

llvm-svn: 274932
2016-07-08 21:17:51 +00:00
Sanjay Patel 5246482c7a add another multi-use test for logic->select transform
llvm-svn: 274929
2016-07-08 21:08:16 +00:00
Sanjay Patel f4a08ede03 [InstCombine] don't form select from logic ops if it's unlikely that we'll eliminate any ops
llvm-svn: 274926
2016-07-08 20:53:29 +00:00
Sanjay Patel 297a0e67b6 adjust test so it won't completely optimize away
llvm-svn: 274925
2016-07-08 20:35:53 +00:00
Sanjay Patel 0733e6b61c add tests for multi-use folding to select
llvm-svn: 274922
2016-07-08 20:22:27 +00:00
Dehao Chen 429f5c735f Remove inline hints computation from SampleProfile.cpp
Summary: As we will move to use a unified hotness check in the inliner, we do not need inline hints in the SampleProfile pass any more.

Reviewers: dnovillo, davidxl

Subscribers: eraman, llvm-commits

Differential Revision: http://reviews.llvm.org/D19287

llvm-svn: 274918
2016-07-08 20:12:44 +00:00
Davide Italiano d555bde59f [SCCP] Fold constants as we build them when visiting cast instructions.
This should be slightly more efficient and could avoid spurious overdefined
markings, as Eli pointed out.

Differential Revision:  http://reviews.llvm.org/D22122

llvm-svn: 274905
2016-07-08 19:13:40 +00:00
Sanjay Patel 1b6b824548 [InstCombine] check for one-use before turning simple logic op into a select
llvm-svn: 274891
2016-07-08 17:26:47 +00:00
Simon Pilgrim 4ca42e232d [SLPVectorizer][X86] Added fma vectorization tests
llvm-svn: 274889
2016-07-08 17:19:13 +00:00
Sanjay Patel 910ce0d511 add test to show multi-use output
llvm-svn: 274887
2016-07-08 17:12:27 +00:00
Sanjay Patel cbfca9e8ef [InstCombine] allow or(sext(A), B) --> A ? -1 : B transform for vectors
llvm-svn: 274883
2016-07-08 17:01:15 +00:00
Sanjay Patel 647174c8a4 add vector tests to show missing transform
llvm-svn: 274876
2016-07-08 16:39:53 +00:00
Sanjay Patel 46df968326 minimize tests
The cmp and load aren't required.

llvm-svn: 274864
2016-07-08 16:11:48 +00:00
Sanjay Patel e1acad9b61 regenerate checks
llvm-svn: 274860
2016-07-08 16:06:38 +00:00
Anna Thomas 3124f6273a InstCombine rule to fold truncs whose value is available
We can fold truncs whose operand feeds from a load, if the trunc value
is available through a prior load/store.

This change is from: http://reviews.llvm.org/D21246, which folded the
trunc but missed the bitcast or ptrtoint/inttoptr required in the RAUW
call, when the load type didn't match the prior load/store type.

Differential Revision: http://reviews.llvm.org/D21791

llvm-svn: 274853
2016-07-08 15:18:56 +00:00
Simon Pilgrim 4f1877fb57 [X86][SSE] Improve constant folding tests for CVTSD/CVTSS/CVTTSD/CVTTSS
As discussed on D22106, improve the testing for constant folding sse scalar conversion intrinsics to ensure we are correctly handling special/out of range cases

llvm-svn: 274846
2016-07-08 13:28:34 +00:00
Davide Italiano 16284df8ec [PM] Port InstSimplify to the new pass manager.
llvm-svn: 274796
2016-07-07 21:14:36 +00:00
Anna Thomas 6a78c78a03 [DSE] Remove dead stores in end blocks containing fence
We can remove dead stores in the presence of fence instructions. A fence
does not make an otherwise thread-local store visible.

reviewers: reames, dexonsmith, jfb
Differential Revision: http://reviews.llvm.org/D22001

llvm-svn: 274795
2016-07-07 20:51:42 +00:00
Sjoerd Meijer 17c08dc701 Code size optimisation: don't rewrite fputs to fwrite when optimising for size
because fwrite requires more arguments and thus extra MOVs are required.
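
For reference, the two equivalent forms (sketch; the fwrite form needs the
extra size/count arguments plus a strlen, which is what costs the extra MOVs):

  #include <stdio.h>
  #include <string.h>

  void emit_fputs(const char *s, FILE *f)  { fputs(s, f); }
  void emit_fwrite(const char *s, FILE *f) { fwrite(s, 1, strlen(s), f); }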

llvm-svn: 274753
2016-07-07 13:56:23 +00:00
David Majnemer 7afb46d3c8 [LoopAccessAnalysis] Fix an integer overflow
We were inappropriately using 32-bit types to account for quantities
that can be far larger.

Fixed in PR28443.

llvm-svn: 274737
2016-07-07 06:24:36 +00:00
Elena Demikhovsky fc1e969dfc Fixed a bug in vectorizing GEP before gather/scatter intrinsic.
Vectorizing GEP was incorrect and broke SSA in some cases.
 
The patch fixes PR27997 https://llvm.org/bugs/show_bug.cgi?id=27997.

Differential revision: http://reviews.llvm.org/D22035

llvm-svn: 274735
2016-07-07 06:06:46 +00:00
Sean Silva 59fe82f4ce [PM] Port TailCallElim
llvm-svn: 274708
2016-07-06 23:48:41 +00:00
Sean Silva b025d375a1 [PM] Port CorrelatedValuePropagation
llvm-svn: 274705
2016-07-06 23:26:29 +00:00
Sanjay Patel 65a51c25c1 [InstCombine] enhance (select X, C1, C2 --> ext X) to handle vectors
By replacing dyn_cast of ConstantInt with m_Zero/m_One/m_AllOnes, we
allow these transforms for splat vectors.

Differential Revision: http://reviews.llvm.org/D21899

llvm-svn: 274696
2016-07-06 22:23:01 +00:00
Chad Rosier 232e29ebea [MemorySSA] Reinstate the legacy printer and verifier.
Differential Revision: http://reviews.llvm.org/D22058

llvm-svn: 274679
2016-07-06 21:20:47 +00:00
Haicheng Wu a95cd1267f [LIR] Fix mis-compilation with unwinding.
To fix PR27859, bail out if there is an instruction that may throw.
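
A sketch of the problematic shape (hypothetical example; may_throw() stands for
any call that is not known to be nounwind): turning this loop into a memset
would be wrong, because an unwind mid-loop must leave the remaining bytes
untouched.

  void may_throw(void);
  void fill(char *p, int n) {
    for (int i = 0; i < n; ++i) {
      p[i] = 0;
      may_throw();   // if this unwinds, the later bytes must not be written
    }
  }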

Differential Revision: http://reviews.llvm.org/D20638

llvm-svn: 274673
2016-07-06 21:05:40 +00:00
Piotr Padlewski 6deaa6afae Add 'thinlto_src_module' metadata to imported function
Added metadata to be able to gather statistics on how many imported
functions have been removed. The module name might also be helpful
when debugging.

Reviewers: tejohnson, eraman

Subscribers: mehdi_amini, llvm-commits

Differential Revision: http://reviews.llvm.org/D21943

llvm-svn: 274668
2016-07-06 20:26:25 +00:00
Justin Bogner a463537a36 NVPTX: Replace uses of cuda.syncthreads with nvvm.barrier0
Everywhere where cuda.syncthreads or __syncthreads is used, use the
properly namespaced nvvm.barrier0 instead.

llvm-svn: 274664
2016-07-06 20:02:45 +00:00
Chad Rosier dcfce2d0ec [DSE] Avoid iterator invalidation bugs.
The dse_with_dbg_value.ll test committed with r273141 is removed because we
no longer perform any type of back tracking, which is what was causing the
codegen differences with and without debug information.

Differential Revision: http://reviews.llvm.org/D21613

llvm-svn: 274660
2016-07-06 19:48:52 +00:00
Sean Silva f50d4b6cdc Work around PR28400 a bit harder.
We were still crashing in the "no change" case because LVI was not
getting invalidated.

See the thread "Should analyses be able to hold AssertingVH to IR?
(related to PR28400)" for more discussion.

llvm-svn: 274656
2016-07-06 19:05:41 +00:00
Michael Kuperstein aa71bdd3af [TTI] The cost model should not assume vector casts get completely scalarized
The cost model should not assume vector casts get completely scalarized, since
on targets that have vector support, the common case is a partial split up to
the legal vector size. So, when a vector cast  gets split, the resulting casts
end up legal and cheap.

Instead of pessimistically assuming scalarization, base TTI can use the costs
the concrete TTI provides for the split vector, plus a fudge factor to account
for the cost of the split itself. This fudge factor is currently 1 by default,
except on AMDGPU where inserts and extracts are considered free.

Differential Revision: http://reviews.llvm.org/D21251

llvm-svn: 274642
2016-07-06 17:30:56 +00:00
Matthew Simpson 433cb1dfe3 [LV] Don't widen trivial induction variables
We currently always vectorize induction variables. However, if an induction
variable is only used for counting loop iterations or computing addresses with
getelementptr instructions, we don't need to do this. Vectorizing these trivial
induction variables can create vector code that is difficult to simplify later
on. This is especially true when the unroll factor is greater than one, and we
create vector arithmetic when computing step vectors. With this patch, we check
if an induction variable is only used for counting iterations or computing
addresses, and if so, scalarize the arithmetic when computing step vectors
instead. This allows for greater simplification.

This patch addresses the suboptimal pointer arithmetic sequence seen in
PR27881.
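
A sketch of a loop with such a trivial induction variable (hypothetical
example): i is used only for loop control and address computation, so it no
longer needs a vectorized counterpart.

  void scale(float *a, int n) {
    for (int i = 0; i < n; ++i)
      a[i] *= 2.0f;
  }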

Reference: https://llvm.org/bugs/show_bug.cgi?id=27881
Differential Revision: http://reviews.llvm.org/D21620

llvm-svn: 274627
2016-07-06 14:26:59 +00:00
Elena Demikhovsky 971fbfda1e Vector GEP test: renamed + some comments
Differential revision: http://reviews.llvm.org/D21957

llvm-svn: 274611
2016-07-06 08:11:23 +00:00
Daniel Berlin fc7e651bfd Fix handling of forward unreachable but reverse-reachable blocks in MemorySSA construction
llvm-svn: 274606
2016-07-06 05:32:05 +00:00
Sanjay Patel cbaac41856 [InstCombine] enable vector select of bools -> logic folds
llvm-svn: 274465
2016-07-03 14:34:39 +00:00
Sanjay Patel 42396ae0ea add vector bool select tests and regenerate checks for scalar bool select tests
llvm-svn: 274460
2016-07-03 13:26:02 +00:00
Sean Silva 45835e731d Remove dead TLI arg of isKnownNonNull and propagate deadness. NFC.
This actually uncovered a surprisingly large chain of ultimately unused
TLI args.
From what I can gather, this argument is a remnant of when
isKnownNonNull would look at the TLI directly.
The current approach seems to be that InferFunctionAttrs runs early in
the pipeline and uses TLI to annotate the TLI-dependent non-null
information as return attributes.

This also removes the dependence of functionattrs on TLI altogether.

llvm-svn: 274455
2016-07-02 23:47:27 +00:00
Michael Kuperstein 071d8306b0 [PM] Port ConstantHoisting to the new Pass Manager
Differential Revision: http://reviews.llvm.org/D21945

llvm-svn: 274411
2016-07-02 00:16:47 +00:00
Alina Sbirlea 8d8aa5dd6c Address two correctness issues in LoadStoreVectorizer
Summary:
GetBoundryInstruction now returns the instruction which follows the last instruction, or end(); otherwise the last instruction in the boundary set would not be tested by isVectorizable().
Partially solve reordering of instructions. More extensive solution to follow.

Reviewers: tstellarAMD, llvm-commits, jlebar

Subscribers: escha, arsenm, mzolotukhin

Differential Revision: http://reviews.llvm.org/D21934

llvm-svn: 274389
2016-07-01 21:44:12 +00:00
Duncan P. N. Exon Smith e60719b3fa Revert "add tests for bugs fixed by the GVN hoist pass"
This reverts commit r274327 since the tests fail.  E.g.:
  http://lab.llvm.org:8011/builders/clang-x86_64-linux-selfhost-modules/builds/17240

It looks like this commit is building on r274305, but that commit caused
a miscompile and was reverted in r274320.

llvm-svn: 274332
2016-07-01 04:55:13 +00:00
Sebastian Pop 196ba4f844 add tests for bugs fixed by the GVN hoist pass
https://llvm.org/bugs/show_bug.cgi?id=20242
https://llvm.org/bugs/show_bug.cgi?id=22005

llvm-svn: 274327
2016-07-01 03:03:19 +00:00
Matt Arsenault 0101ecade0 LoadStoreVectorizer: Don't increase alignment with no align set
If no alignment was set on the load/stores, it would vectorize
to the new type even though this increases the default alignment.

llvm-svn: 274323
2016-07-01 02:09:38 +00:00
Matt Arsenault 370e8226c7 LoadStoreVectorizer: Check TTI for vec reg bit width
llvm-svn: 274322
2016-07-01 02:07:22 +00:00
Matt Arsenault 42ad17059a LoadStoreVectorizer: Fix assert when merging pointer ops
This needs to use inttoptr/ptrtoint if combining an int and pointer
load. If a pointer is used, always do an integer load.

llvm-svn: 274321
2016-07-01 01:55:52 +00:00
Duncan P. N. Exon Smith 9d1f156418 Revert "code hoisting pass based on GVN"
This reverts commit r274305, since it breaks self-hosting:
  http://lab.llvm.org:8080/green/job/clang-stage1-configure-RA_build/22349/
  http://lab.llvm.org:8011/builders/clang-x86_64-linux-selfhost-modules/builds/17232

Note that the blamelist on lab.llvm.org:8011 is incorrect.  The previous
build was r274299, but somehow r274305 wasn't included in the blamelist:
  http://lab.llvm.org:8011/builders/clang-x86_64-linux-selfhost-modules

llvm-svn: 274320
2016-07-01 01:51:40 +00:00
Matt Arsenault 241f34cde8 LoadStoreVectorizer: Use AA metadata
This was not passing the full instruction with metadata
to the alias query.

llvm-svn: 274318
2016-07-01 01:47:46 +00:00
Matt Arsenault d7e8898bdd LoadStoreVectorizer: if one element of a vector is integer, default to
integer.

Fixes issues on some architectures where we use arithmetic ops to build
vectors, which can cause bad things to happen for loads/stores of mixed
types.

Patch by Fiona Glaser

llvm-svn: 274307
2016-07-01 00:37:01 +00:00
Matt Arsenault 8a4ab5e19f LoadStoreVectorizer: Fix crashes on sub-byte types
llvm-svn: 274306
2016-07-01 00:36:54 +00:00
Sebastian Pop 5c5798c57c code hoisting pass based on GVN
This pass hoists duplicated computations in the program. The primary goal of
gvn-hoist is to reduce the size of functions before inline heuristics to reduce
the total cost of function inlining.

Pass written by Sebastian Pop, Aditya Kumar, Xiaoyu Hu, and Brian Rzycki.
Important algorithmic contributions by Daniel Berlin in the form of reviews.

Differential Revision: http://reviews.llvm.org/D19338

llvm-svn: 274305
2016-07-01 00:24:31 +00:00
Matt Arsenault 079d0f19a2 LoadStoreVectorizer: Check skipFunction first.
Also add test I forgot to add to r274296.

llvm-svn: 274299
2016-06-30 23:50:18 +00:00
Matt Arsenault 2cbe52b990 LoadStoreVectorizer: Skip optnone functions
llvm-svn: 274296
2016-06-30 23:30:29 +00:00
Matt Arsenault 08debb0244 Add LoadStoreVectorizer pass
This was contributed by Apple, and I've been working on
minimal cleanups and generalizing it.

llvm-svn: 274293
2016-06-30 23:11:38 +00:00
Matt Arsenault 727e279ac4 SLPVectorizer: Move propagateMetadata to VectorUtils
This will be re-used by the LoadStoreVectorizer.

Fix handling of range metadata and testcase by Justin Lebar.

llvm-svn: 274281
2016-06-30 21:17:59 +00:00
Wei Mi 95685faeee Refine the set of UniformAfterVectorization instructions.
Except for the seed uniform instructions (conditional branch and consecutive ptr
instructions), dependencies to be added into the uniform set should only be used
by existing uniform instructions or instructions outside of the current loop.

Differential Revision: http://reviews.llvm.org/D21755

llvm-svn: 274262
2016-06-30 18:42:56 +00:00
Jun Bum Lim 596a3bd9ec [DSE] Fix bug in partial overwrite tracking
Summary:
Found cases where DSE incorrectly adds partially-overwritten intervals.
Please see the test case for details.

Reviewers: mcrosier, eeckstein, hfinkel

Subscribers: mcrosier, llvm-commits

Differential Revision: http://reviews.llvm.org/D21859

llvm-svn: 274237
2016-06-30 15:32:20 +00:00
Sanjay Patel 7c6eab5777 [InstCombine] shrink switch conditions better (PR24766)
https://llvm.org/bugs/show_bug.cgi?id=24766#c2

This removes a hack that was added for the benefit of x86 codegen. 
It prevented shrinking the switch condition even to smaller legal (DataLayout) types.
We have a safety mechanism in CGP after:
http://reviews.llvm.org/rL251857
...so we're free to use the optimal (smallest) IR type now.
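
A sketch of the kind of switch this affects (hypothetical example): the
condition only carries 8 meaningful bits, so the comparison can now be done in
the smallest legal type rather than being kept artificially wide.

  int classify(int x) {
    switch (x & 0xff) {
    case 1:  return 10;
    case 2:  return 20;
    default: return 0;
    }
  }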

Differential Revision: http://reviews.llvm.org/D12965

llvm-svn: 274233
2016-06-30 14:51:21 +00:00
Sanjay Patel 7ad98babfa [InstCombine] extend matchSelectFromAndOr() to work with i1 scalar types
If the incoming types are i1, then we don't have to pattern match any sext ops.

Differential Revision: http://reviews.llvm.org/D21740

llvm-svn: 274228
2016-06-30 14:18:18 +00:00
Sanjay Patel 348111f4b9 add vector tests to show missing transform
llvm-svn: 274192
2016-06-30 00:09:13 +00:00
Sanjay Patel c3701e8b92 regenerate checks
llvm-svn: 274188
2016-06-29 23:58:39 +00:00
Evgeniy Stepanov a5da256f92 StackColoring for SafeStack.
This is a fix for PR27842.

An IR-level implementation of stack coloring tailored to work with
SafeStack. It is a bit weaker than the MI implementation in that it
does not implement the "lifetime start at first access" logic. This can be
improved in the future.

This patch also replaces the naive implementation of stack frame
layout with a greedy algorithm that can split existing stack slots
and even fit small objects inside the alignment padding of other
objects.
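
A sketch of the effect (hypothetical example): a and b never exist at the same
time, so the new frame layout is free to give both the same unsafe-stack slot.

  void use(char *buf, int n);
  void f(void) {
    {
      char a[256];
      use(a, sizeof a);
    }
    {
      char b[256];
      use(b, sizeof b);
    }
  }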

llvm-svn: 274162
2016-06-29 20:37:43 +00:00
Tim Shen aec68b263d [InstCombine] Simplify and correct folding fcmps with the same children
Summary: Take advantage of FCmpInst::Predicate's bit pattern and handle (fcmp *, x, y) | (fcmp *, x, y) and (fcmp *, x, y) & (fcmp *, x, y) more consistently. Also fold more FCmpInst::FCMP_FALSE and FCmpInst::FCMP_TRUE to constants.

Currently InstCombine wrongly folds (fcmp ogt, x, y) | (fcmp ord, x, y) to (fcmp ogt, x, y); this patch also fixes that.

Reviewers: spatel

Subscribers: llvm-commits, iteratee, echristo

Differential Revision: http://reviews.llvm.org/D21775

llvm-svn: 274156
2016-06-29 20:10:17 +00:00
Tim Shen 860a67eb4c [InstCombine, NFC] Change the generated variable names by creating new instructions
This removes some noise for D21775's test changes.

llvm-svn: 274155
2016-06-29 20:10:13 +00:00
Tim Shen 4561e784f4 [InstCombine] Add full tests for FoldAndOfFCmps and FoldOrOfFCmps
Summary: This adds tests for covering all cases that FoldAndOfFCmps and FoldOrOfFCmps handle.

Reviewers: spatel

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D21844

llvm-svn: 274144
2016-06-29 17:55:11 +00:00
Elena Demikhovsky 5e21c94f25 Reverted patch 273864
llvm-svn: 274115
2016-06-29 10:01:06 +00:00
Craig Topper df7454f94b Revert "[ValueTracking] Teach computeKnownBits for PHI nodes to compute sign bit for a recurrence with a NSW addition."
This is breaking an optimizaton remark test in clang. I've identified a couple fixes for that, but want to understand it better before I commit to anything.

llvm-svn: 274102
2016-06-29 04:57:00 +00:00
Craig Topper 2cc199baff [ValueTracking] Teach computeKnownBits for PHI nodes to compute sign bit for a recurrence with a NSW addition.
If an operation for a recurrence is an addition with no signed wrap and both input sign bits are 0, then the result sign bit must also be 0. Similarly for the negative case.

I found this deficiency while playing around with a loop in the x86 backend that contained a signed division that could be optimized into an unsigned division if we could prove both inputs were positive. One of them being the loop induction variable. With this patch we can perform the conversion for this case. One of the test cases here is a contrived variation of the loop I was looking at.
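
A sketch of such a loop (hypothetical example, in the spirit of the contrived
test case mentioned above): the induction variable starts at 0 and only ever
grows with nsw additions, so its sign bit is known to be 0 and the signed
division by a positive constant can become an unsigned one.

  int sum_thirds(int n) {
    int s = 0;
    for (int i = 0; i < n; ++i)
      s += i / 3;    // i is provably non-negative, so sdiv -> udiv
    return s;
  }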

Differential revision: http://reviews.llvm.org/D21493

llvm-svn: 274098
2016-06-29 03:46:47 +00:00
Eric Christopher 0c58837b1f Revert "[InstCombine] Avoid combining the bitcast of a var that is used as both address and result of load instructions"
Revert "[InstCombine] Combine A->B->A BitCast"

as this appears to cause PR27996 and as discussed in http://reviews.llvm.org/D20847

This reverts commits r270135 and r263734.

llvm-svn: 274094
2016-06-29 03:05:58 +00:00
Weiming Zhao 5410edddb1 [ARM] Fix 28282: cost computation for constant hoisting
Summary:
This fixes bug: https://llvm.org/bugs/show_bug.cgi?id=28282

Currently the cost model of constant hoisting checks the bit width of the data type of the constants.
However, the actual immediate value may be small enough that it does not need to be hoisted.
This patch checks for the actual bit width needed for the constant.

Reviewers: t.p.northover, rengolin

Subscribers: aemerson, rengolin, llvm-commits

Differential Revision: http://reviews.llvm.org/D21668

llvm-svn: 274073
2016-06-28 22:30:45 +00:00
Sanjay Patel 3a0f2606ec minimize regression tests and update checks
llvm-svn: 274047
2016-06-28 18:40:08 +00:00
Sanjay Patel 8ce43c098b minimize regression tests and update checks
llvm-svn: 274046
2016-06-28 18:33:10 +00:00
Artur Pilipenko 7ad95ec22d Support arbitrary addrspace pointers in masked load/store intrinsics
This is a resubmission of the 263158 change after fixing the existing problem with intrinsics mangling (see the LTO and intrinsics mangling llvm-dev thread for details).

This patch fixes the problem which occurs when loop-vectorize tries to use @llvm.masked.load/store intrinsic for a non-default addrspace pointer. It fails with "Calling a function with a bad signature!" assertion in CallInst constructor because it tries to pass a non-default addrspace pointer to the pointer argument which has default addrspace.

The fix is to add pointer type as another overloaded type to @llvm.masked.load/store intrinsics.

Reviewed By: reames

Differential Revision: http://reviews.llvm.org/D17270

llvm-svn: 274043
2016-06-28 18:27:25 +00:00
Adam Nemet bd861acf29 [LLE] Don't hoist conditionally executed loads
If the load is conditional we can't hoist its 0-iteration instance to
the preheader because that would make it unconditional.  Thus we would
access a memory location that the original loop did not access.
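
A sketch of the hazard (hypothetical example): the load of a[i - 1] is
conditional, so hoisting its 0-iteration instance (the load of a[0]) into the
preheader could touch memory the original loop never reads.

  void f(int *a, const int *c, int n) {
    for (int i = 1; i < n; ++i) {
      a[i] = c[i];
      if (c[i] > 0)
        a[i] += a[i - 1];   // conditional load fed by the previous store
    }
  }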

llvm-svn: 273991
2016-06-28 04:02:47 +00:00
Easwaran Raman 22eb80a114 Fix size computation of array allocation in inline cost analysis
Differential revision: http://reviews.llvm.org/D21690

llvm-svn: 273952
2016-06-27 22:31:53 +00:00
Sanjay Patel 59ed2ffca3 [InstCombine] shrink type of sdiv if dividend is sexted and constant divisor is small enough (PR28153)
This should fix PR28153:
https://llvm.org/bugs/show_bug.cgi?id=28153
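
A sketch of the pattern (hypothetical example): the dividend is a sign-extended
16-bit value and the constant divisor fits easily in 16 bits, so the division
can be performed in the narrower type.

  int narrow_div(short x) {
    return (int)x / 3;    // safe to evaluate as a 16-bit sdiv
  }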

Differential Revision: http://reviews.llvm.org/D21769

llvm-svn: 273951
2016-06-27 22:27:11 +00:00
Sanjay Patel 5cdf699daa add tests for PR28153
llvm-svn: 273936
2016-06-27 20:28:59 +00:00
Elena Demikhovsky 6f2ec8104a Fixed crash of SLP Vectorizer on KNL
The bug is connected to vector GEPs.
https://llvm.org/bugs/show_bug.cgi?id=28313

llvm-svn: 273919
2016-06-27 20:07:00 +00:00
Sanjay Patel c6ada53be5 [InstCombine] use m_APInt for div --> ashr fold
The APInt matcher works with splat vectors, so we get this fold for vectors too.

llvm-svn: 273897
2016-06-27 17:25:57 +00:00
Artur Pilipenko 72f76b8805 Revert -r273892 "Support arbitrary addrspace pointers in masked load/store intrinsics" since some of the clang tests don't expect to see the updated signatures.
llvm-svn: 273895
2016-06-27 16:54:33 +00:00
Easwaran Raman 1832bf6aee [PM] Port PartialInlining to the new PM
Differential revision: http://reviews.llvm.org/D21699

llvm-svn: 273894
2016-06-27 16:50:18 +00:00
Artur Pilipenko a36aa41519 Support arbitrary addrspace pointers in masked load/store intrinsics
This is a resubmission of the 263158 change after fixing the existing problem with intrinsics mangling (see the LTO and intrinsics mangling llvm-dev thread for details).

This patch fixes the problem which occurs when loop-vectorize tries to use @llvm.masked.load/store intrinsic for a non-default addrspace pointer. It fails with "Calling a function with a bad signature!" assertion in CallInst constructor because it tries to pass a non-default addrspace pointer to the pointer argument which has default addrspace.

The fix is to add pointer type as another overloaded type to @llvm.masked.load/store intrinsics.

Reviewed By: reames

Differential Revision: http://reviews.llvm.org/D17270

llvm-svn: 273892
2016-06-27 16:29:26 +00:00
Elena Demikhovsky f65e865e33 Removed extra test from the prev commit.
llvm-svn: 273865
2016-06-27 11:40:49 +00:00
Elena Demikhovsky 4c58b2761a Fixed consecutive memory access detection in Loop Vectorizer.
It did not correctly handle cases without GEP.

The following loop wasn't vectorized:

for (int i=0; i<len; i++)
  *to++ = *from++;

I use getPtrStride() to find the Stride for the memory access and return 0 if the Stride is not 1 or -1.

Re-commit rL273257 - revision: http://reviews.llvm.org/D20789

llvm-svn: 273864
2016-06-27 11:19:23 +00:00
Igor Breger 7357849dca [ConstantFolding] Fix bitcast vector of i1.
Differential Revision: http://reviews.llvm.org/D21735

llvm-svn: 273845
2016-06-27 06:42:54 +00:00
Sanjay Patel 1d745384da add tests for potential select transforms
llvm-svn: 273833
2016-06-26 23:44:21 +00:00
Sanjoy Das a37bb4a65d [LoopUnswitch] Unswitch on conditions feeding into guards
Summary:
This is a straightforward extension of what LoopUnswitch does to
branches to guards.  That is, we unswitch

```
for (;;) {
  ...
  guard(loop_invariant_cond);
  ...
}
```

into

```
if (loop_invariant_cond) {
  for (;;) {
    ...
    // There is no need to emit guard(true)
    ...
  }
} else {
  for (;;) {
    ...
    guard(false);
    // SimplifyCFG will clean this up by adding an
    // unreachable after the guard(false)
    ...
  }
}
```

Reviewers: majnemer

Subscribers: mcrosier, llvm-commits, mzolotukhin

Differential Revision: http://reviews.llvm.org/D21725

llvm-svn: 273801
2016-06-26 05:10:45 +00:00
Sanjay Patel 51ff79fd82 update tests to use FileCheck
llvm-svn: 273784
2016-06-25 17:39:10 +00:00
David Majnemer e14e7bc4b8 Revert "[SimplifyCFG] Stop inserting calls to llvm.trap for UB"
This reverts commit r273778, it seems to break UBSan :/

llvm-svn: 273779
2016-06-25 08:19:55 +00:00
David Majnemer d346a37737 [SimplifyCFG] Stop inserting calls to llvm.trap for UB
SimplifyCFG had logic to insert calls to llvm.trap for two very
particular IR patterns: stores and invokes of undef/null.

While InstCombine canonicalizes certain undefined behavior IR patterns
to stores of undef, phase ordering means that this cannot be relied upon
in general.

There are much better tools than llvm.trap: UBSan and ASan.

N.B. I could be argued into reverting this change if a clear argument is made
as to why it is important that we synthesize llvm.trap for stores; I'd be
hard pressed to see why it'd be useful for invokes...

llvm-svn: 273778
2016-06-25 08:04:19 +00:00
David Majnemer bb53d23ef8 [InstSimplify] Replace calls to null with undef
Calling null is undefined behavior, we can simplify the resulting value
to undef.

llvm-svn: 273777
2016-06-25 07:37:30 +00:00
David Majnemer 1fea77c6fc [SimplifyCFG] Replace calls to null/undef with unreachable
Calling null is undefined behavior, a call to undef can be trivially
treated as a call to null.
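
A minimal sketch of the undefined behaviour in question (hypothetical example):

  int call_null(void) {
    int (*fp)(void) = 0;
    return fp();    // UB: the call can be replaced with unreachable
  }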

llvm-svn: 273776
2016-06-25 07:37:27 +00:00
Sanjoy Das f63768cbfc [PlaceSafepoints] Don't call undef in test case; NFC
llvm-svn: 273764
2016-06-25 01:40:54 +00:00
Sanjoy Das d850068282 [LoopUnswitch] Avoid exponential behavior
Summary: (No semantic change intended).

Reviewers: majnemer, bogner, mzolotukhin

Subscribers: mcrosier, llvm-commits, mzolotukhin

Differential Revision: http://reviews.llvm.org/D21707

llvm-svn: 273763
2016-06-25 01:14:19 +00:00
David Majnemer 0f45572761 The absence of noreturn doesn't ensure mayReturn
There are two separate issues:
- LLVM doesn't consider infinite loops to be side effects: we happily
  hoist/sink above/below loops whose bounds are unknown.
- The absence of the noreturn attribute is insufficient for us to know
  if a function will definitely return.  Relying on noreturn in the
  middle-end for any property is an accident waiting to happen.

llvm-svn: 273762
2016-06-25 00:55:12 +00:00
Peter Collingbourne 0312f614b1 IR: Introduce llvm.type.checked.load intrinsic.
This intrinsic safely loads a function pointer from a virtual table pointer
using type metadata. This intrinsic is used to implement control flow integrity
in conjunction with virtual call optimization. The virtual call optimization
pass will optimize away llvm.type.checked.load intrinsics associated with
devirtualized calls, thereby removing the type check in cases where it is
not needed to enforce the control flow integrity constraint.

This patch also introduces the capability to copy type metadata between
global variables, and teaches the virtual call optimization pass to do so.

Differential Revision: http://reviews.llvm.org/D21121

llvm-svn: 273756
2016-06-25 00:23:04 +00:00
David Majnemer b8da3a2bb2 Reinstate r273711
r273711 was reverted by r273743.  The inliner needs to know about any
call sites in the inlined function.  These were obscured if we replaced
a call to undef with an undef but kept the call around.

This fixes PR28298.

llvm-svn: 273753
2016-06-25 00:04:10 +00:00
Michael Kuperstein 83b753d430 [PM] Port float2int to the new pass manager
Differential Revision: http://reviews.llvm.org/D21704

llvm-svn: 273747
2016-06-24 23:32:02 +00:00
Dehao Chen c66a06ad0e Hookup ProfileSummary with SampleProfilerLoader
Summary: Set ProfileSummary in SampleProfilerLoader.

Reviewers: davidxl, eraman

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D21702

llvm-svn: 273745
2016-06-24 22:57:06 +00:00
Nico Weber ae2ef4ccd4 Revert r273711, it caused PR28298.
llvm-svn: 273743
2016-06-24 22:52:39 +00:00
Peter Collingbourne 7efd750607 IR: New representation for CFI and virtual call optimization pass metadata.
The bitset metadata currently used in LLVM has a few problems:

1. It has the wrong name. The name "bitset" refers to an implementation
   detail of one use of the metadata (i.e. its original use case, CFI).
   This makes it harder to understand, as the name makes no sense in the
   context of virtual call optimization.

2. It is represented using a global named metadata node, rather than
   being directly associated with a global. This makes it harder to
   manipulate the metadata when rebuilding global variables, summarise it
   as part of ThinLTO and drop unused metadata when associated globals are
   dropped. For this reason, CFI does not currently work correctly when
   both CFI and vcall opt are enabled, as vcall opt needs to rebuild vtable
   globals, and fails to associate metadata with the rebuilt globals. As I
   understand it, the same problem could also affect ASan, which rebuilds
   globals with a red zone.

This patch solves both of those problems in the following way:

1. Rename the metadata to "type metadata". This new name reflects how
   the metadata is currently being used (i.e. to represent type information
   for CFI and vtable opt). The new name is reflected in the name for the
   associated intrinsic (llvm.type.test) and pass (LowerTypeTests).

2. Attach metadata directly to the globals that it pertains to, rather
   than using the "llvm.bitsets" global metadata node as we are doing now.
   This is done using the newly introduced capability to attach
   metadata to global variables (r271348 and r271358).

See also: http://lists.llvm.org/pipermail/llvm-dev/2016-June/100462.html

Differential Revision: http://reviews.llvm.org/D21053

llvm-svn: 273729
2016-06-24 21:21:32 +00:00
Michael Kuperstein 82d5da5aac [PM] Port PreISelIntrinsicLowering to the new PM
llvm-svn: 273713
2016-06-24 20:13:42 +00:00
David Majnemer 3b3e954ea2 SimplifyInstruction does not imply DCE
We cannot remove an instruction with no uses just because
SimplifyInstruction succeeds.  It may have side effects.

llvm-svn: 273711
2016-06-24 19:34:46 +00:00
Reid Kleckner fbd5eef691 Revert "InstCombine rule to fold trunc when value available"
This reverts commit r273608.

Broke building code with sanitizers, where apparently these kinds of
loads, casts, and truncations are common:

http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux/builds/24502
http://crbug.com/623099

llvm-svn: 273703
2016-06-24 18:42:58 +00:00
Sanjay Patel f8b08f7179 [InstCombine] consolidate commutation variants of matchSelectFromAndOr() in one place; NFCI
By putting all the possible commutations together, we simplify the code.
Note that this is NFCI, but I'm adding tests that actually exercise each
commutation pattern because we don't have this anywhere else.

llvm-svn: 273702
2016-06-24 18:26:02 +00:00
Matthew Simpson e794678404 [LV] Preserve order of dependences in interleaved accesses analysis
The interleaved access analysis currently assumes that the inserted run-time
pointer aliasing checks ensure the absence of dependences that would prevent
its instruction reordering. However, this is not the case.

Issues can arise from how code generation is performed for interleaved groups.
For a load group, all loads in the group are essentially moved to the location
of the first load in program order, and for a store group, all stores in the
group are moved to the location of the last store. For groups having members
involved in a dependence relation with any other instruction in the loop, this
reordering can violate the dependence.

This patch teaches the interleaved access analysis how to avoid breaking such
dependences, and should fix PR27626.

An assumption of the original analysis was that the accesses had been collected
in "program order". The analysis was then simplified by visiting the accesses
bottom-up. However, this ordering was never guaranteed for anything other than
single basic block loops. Thus, this patch also enforces the desired ordering.
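
A sketch of a dependence that such reordering could break (hypothetical
example): the stores to a[i] and a[i + 1] form an interleaved store group, and
moving the store of a[i] down to the position of the group's last member would
sink it past the load that needs its value.

  void f(int *a, const int *b, int n) {
    for (int i = 0; i < n - 1; i += 2) {
      a[i] = b[i];           // member of the {a[i], a[i+1]} store group
      int t = a[i];          // must observe the store just above
      a[i + 1] = t + 1;      // last member of the store group
    }
  }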

Reference: https://llvm.org/bugs/show_bug.cgi?id=27626
Differential Revision: http://reviews.llvm.org/D19984

llvm-svn: 273687
2016-06-24 15:33:25 +00:00
Chuang-Yu Cheng 68f7f1cf00 Teaching SimplifyCFG to recognize the Or-Mask trick that InstCombine uses to
reduce the number of comparisons.

Specifically, InstCombine can turn:
  (i == 5334 || i == 5335)
into:
  ((i | 1) == 5335)

SimplifyCFG was already able to detect the pattern:
  (i == 5334 || i == 5335)
to:
  ((i & -2) == 5334)

This patch supersedes D21315 and resolves PR27555
(https://llvm.org/bugs/show_bug.cgi?id=27555).

Thanks to David and Chandler for the suggestions!

Author: Thomas Jablin (tjablin)
Reviewers: majnemer chandlerc halfdan cycheng

http://reviews.llvm.org/D21397

llvm-svn: 273639
2016-06-24 01:59:00 +00:00
Anna Thomas 31a0b2088f InstCombine rule to fold trunc when value available
Summary:
This instcombine rule folds away trunc operations that have value available from a prior load or store.
This kind of code can be generated as a result of GVN widening the load or from source code as well.

Reviewers: reames, majnemer, sanjoy

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D21246

llvm-svn: 273608
2016-06-23 20:22:22 +00:00
Artur Pilipenko 80771b9ad9 Upgrade other old memset/memcpy signatures in tests causing buildbot failures with rL273568.
llvm-svn: 273580
2016-06-23 16:34:52 +00:00
Michael Zolotukhin 2d3592d481 [LoopUnrollAnalyzer] Fix a bug in UnrolledInstAnalyzer::visitLoad.
When simplifying a load we need to make sure that the type of the
simplified value matches the type of the instruction we're processing.
In theory, we can handle casts here as we deal with constant data, but
since it's not implemented at the moment, we at least need to bail out.

This fixes PR28262.

llvm-svn: 273562
2016-06-23 14:31:31 +00:00
Hal Finkel a1271036c5 Allow DeadStoreElimination to track combinations of partial later writes
DeadStoreElimination can currently remove a small store rendered unnecessary by
a later larger one, but could not remove a larger store rendered unnecessary by
a series of later smaller ones. This adds that capability.

It works by keeping a map, which is used as an effective interval map, for each
store later overwritten only partially, and filling in that interval map as
more such stores are discovered. No additional walking or aliasing queries are
used. If the map forms an interval covering the entire earlier store, then
it is dead and can be removed. The map is used as an interval map by storing a
mapping between the ending offset and the beginning offset of each interval.

I discovered this problem when investigating a performance issue with code like
this on PowerPC:

  #include <complex>
  using namespace std;

  complex<float> bar(complex<float> C);
  complex<float> foo(complex<float> C) {
    return bar(C)*C;
  }

which produces this:

  define void @_Z4testSt7complexIfE(%"struct.std::complex"* noalias nocapture sret %agg.result, i64 %c.coerce) {
  entry:
    %ref.tmp = alloca i64, align 8
    %tmpcast = bitcast i64* %ref.tmp to %"struct.std::complex"*
    %c.sroa.0.0.extract.shift = lshr i64 %c.coerce, 32
    %c.sroa.0.0.extract.trunc = trunc i64 %c.sroa.0.0.extract.shift to i32
    %0 = bitcast i32 %c.sroa.0.0.extract.trunc to float
    %c.sroa.2.0.extract.trunc = trunc i64 %c.coerce to i32
    %1 = bitcast i32 %c.sroa.2.0.extract.trunc to float
    call void @_Z3barSt7complexIfE(%"struct.std::complex"* nonnull sret %tmpcast, i64 %c.coerce)
    %2 = bitcast %"struct.std::complex"* %agg.result to i64*
    %3 = load i64, i64* %ref.tmp, align 8
    store i64 %3, i64* %2, align 4 ; <--- ***** THIS SHOULD NOT BE HERE ****
    %_M_value.realp.i.i = getelementptr inbounds %"struct.std::complex", %"struct.std::complex"* %agg.result, i64 0, i32 0, i32 0
    %4 = lshr i64 %3, 32
    %5 = trunc i64 %4 to i32
    %6 = bitcast i32 %5 to float
    %_M_value.imagp.i.i = getelementptr inbounds %"struct.std::complex", %"struct.std::complex"* %agg.result, i64 0, i32 0, i32 1
    %7 = trunc i64 %3 to i32
    %8 = bitcast i32 %7 to float
    %mul_ad.i.i = fmul fast float %6, %1
    %mul_bc.i.i = fmul fast float %8, %0
    %mul_i.i.i = fadd fast float %mul_ad.i.i, %mul_bc.i.i
    %mul_ac.i.i = fmul fast float %6, %0
    %mul_bd.i.i = fmul fast float %8, %1
    %mul_r.i.i = fsub fast float %mul_ac.i.i, %mul_bd.i.i
    store float %mul_r.i.i, float* %_M_value.realp.i.i, align 4
    store float %mul_i.i.i, float* %_M_value.imagp.i.i, align 4
    ret void
  }

the problem here is not just that the i64 store is unnecessary, but also that
it blocks further backend optimizations of the other uses of that i64 value in
the backend.

In the future, we might want to add a special case for handling smaller
accesses (e.g. using a bit vector) if the map mechanism turns out to be
noticeably inefficient. A sorted vector is also a possible replacement for the
map for small numbers of tracked intervals.

Differential Revision: http://reviews.llvm.org/D18586

llvm-svn: 273559
2016-06-23 13:46:39 +00:00
David Majnemer d1fbf48566 [SCCP] Don't assume all Constants are ConstantInt
This fixes PR28269.

llvm-svn: 273521
2016-06-23 00:14:29 +00:00
Sanjay Patel a06d989552 [ValueTracking] improve ComputeNumSignBits for vector constants
This is similar to the computeKnownBits improvement in rL268479. 
There's probably more we can do for vector logic instructions, but 
this should let us see non-splat constant masking ops that can
become vector selects instead of and/andn/or sequences.

Differential Revision: http://reviews.llvm.org/D21610

llvm-svn: 273459
2016-06-22 19:20:59 +00:00
Artur Pilipenko 1cec4fdddf Upgrade old memset/memcpy signatures (without isVolatile argument) in tests
We no longer have corresponding code in autoupgrade and the vast majority of the tests were fixed a long time ago. Fix the remaining few. One of the verifier test cases is marked as XFAIL because it was passing only because the signature was incorrect.

llvm-svn: 273428
2016-06-22 15:16:06 +00:00
Sanjay Patel c6cacd6067 [InstSimplify] add ashr tests including vector types
llvm-svn: 273421
2016-06-22 14:18:04 +00:00
Simon Pilgrim bc35f9f702 [SLPVectorizer][X86] Added ceil/floor/nearbyint/rint/trunc vectorization tests
llvm-svn: 273420
2016-06-22 14:07:46 +00:00
Sanjay Patel 21579bb39a [InstSimplify] regenerate checks
llvm-svn: 273419
2016-06-22 14:00:16 +00:00
Haicheng Wu a783bac50b [Kryo] Enable loop prefetcher.
Differential Revision: http://reviews.llvm.org/D21535

llvm-svn: 273329
2016-06-21 22:47:56 +00:00
Easwaran Raman 8bceb9d210 Fix PR28219: Use profile summary from reader and not compute it
Differential revision: http://reviews.llvm.org/D21546

llvm-svn: 273301
2016-06-21 19:29:49 +00:00
Elena Demikhovsky a266cf0518 reverted the prev commit due to assertion failure
llvm-svn: 273258
2016-06-21 12:10:11 +00:00
Elena Demikhovsky 9823c995bc Fixed consecutive memory access detection in Loop Vectorizer.
It did not correctly handle cases without GEP.

The following loop wasn't vectorized:

for (int i=0; i<len; i++)
  *to++ = *from++;

I use getPtrStride() to find the Stride for the memory access and return 0 if the Stride is not 1 or -1.

Differential revision: http://reviews.llvm.org/D20789

llvm-svn: 273257
2016-06-21 11:32:01 +00:00
Simon Pilgrim 356e823b51 [X86][SSE] Add cost model for BSWAP of vectors
The BSWAP of vector types is implemented quite efficiently using vector shuffles on SSE/AVX targets; we should reflect the typical cost of this to encourage vectorization.

Differential Revision: http://reviews.llvm.org/D21521

llvm-svn: 273217
2016-06-20 23:08:21 +00:00
Sanjay Patel 9ad8fb68f7 [InstSimplify] analyze (optionally casted) icmps to eliminate obviously false logic (PR27869)
By moving this transform to InstSimplify from InstCombine, we sidestep the problem/question
raised by PR27869:
https://llvm.org/bugs/show_bug.cgi?id=27869
...where InstCombine turns an icmp+zext into a shift causing us to miss the fold.

Credit to David Majnemer for a draft patch of the changes to InstructionSimplify.cpp.

Differential Revision: http://reviews.llvm.org/D21512

llvm-svn: 273200
2016-06-20 20:59:59 +00:00
Dehao Chen 071bb9d7af Pass AssumptionCacheTracker from SampleProfileLoader to Inliner
Summary: Inliner needs ACT when calling InlineFunction. Instead of nullptr, we need to pass it in from SampleProfileLoader

Reviewers: davidxl

Subscribers: eraman, vsk, danielcdh, llvm-commits

Differential Revision: http://reviews.llvm.org/D21205

llvm-svn: 273199
2016-06-20 20:53:40 +00:00
Matt Arsenault 802ebcb4bb InstCombine: Don't strip convergent from intrinsic callsites
Specific instances of intrinsic calls may want to be convergent, such
as certain register reads, even though the intrinsic declaration is not.

llvm-svn: 273188
2016-06-20 19:04:44 +00:00
Sanjay Patel 445d7ecf89 [InstCombine] consolidate some icmp+logic tests and improve checks
llvm-svn: 273186
2016-06-20 18:40:37 +00:00
Sanjay Patel 14dcb042bc [InstCombine] update to use FileCheck with autogenerated exact checking
llvm-svn: 273180
2016-06-20 18:23:40 +00:00
Sanjay Patel 06918ad79e [InstCombine] update to use FileCheck with autogenerated exact checking
llvm-svn: 273173
2016-06-20 17:56:13 +00:00
Sanjay Patel a038240660 [InstCombine] regenerate checks
llvm-svn: 273170
2016-06-20 17:48:48 +00:00
David Majnemer c5601df9fd Reapply "[LoopIdiom] Don't remove dead operands manually"
This reverts commit r273160, reapplying r273132.
RecursivelyDeleteTriviallyDeadInstructions cannot be called on a
parentless Instruction.

llvm-svn: 273162
2016-06-20 16:03:25 +00:00
Cong Liu 1c28b6d733 Revert "[LoopIdiom] Don't remove dead operands manually"
This reverts commit r273132.
Breaks multiple tests under /llvm/test:Transforms (e.g.
llvm/test:Transforms/LoopIdiom/basic.ll.test) under asan.

llvm-svn: 273160
2016-06-20 15:22:15 +00:00
Patrik Hagglund 7205215591 Fix for PR27940
After a store has been eliminated, when making sure that the
instruction iterator points to a valid instruction, dbg intrinsics are
now skipped instead of being treated as the next instruction.

Patch by Henric Karlsson.

Reviewed by Daniel Berlin.

Differential Revision: http://reviews.llvm.org/D21076

llvm-svn: 273141
2016-06-20 09:10:10 +00:00
David Majnemer a705843f23 [LoopIdiom] Don't remove dead operands manually
Removing dead instructions requires remembering which operands have
already been removed.  RecursivelyDeleteTriviallyDeadInstructions has
this logic, don't partially reimplement it in LoopIdiomRecognize.

This fixes PR28196.

llvm-svn: 273132
2016-06-20 02:33:29 +00:00
Sanjay Patel a4b052c7d1 [InstSimplify] add tests for PR27689; regenerate checks
llvm-svn: 273128
2016-06-19 21:40:12 +00:00
David Majnemer 3119599475 [LoadCombine] Combine Loads formed from GEPS with negative indexes
Change the underlying offset and comparisons to use int64_t instead of
uint64_t.

Patch by River Riddle!

Differential Revision: http://reviews.llvm.org/D21499

llvm-svn: 273105
2016-06-19 06:14:56 +00:00
Matt Arsenault a466a7cf62 Add looping testcase that broke in r272987
llvm-svn: 273081
2016-06-18 05:15:58 +00:00
Sanjoy Das e8fd9561cb [SCEV] Fix incorrect trip count computation
The way we elide max expressions when computing trip counts is incorrect
-- it breaks cases like this:

```
static int wrapping_add(int a, int b) {
  return (int)((unsigned)a + (unsigned)b);
}

void test() {
  volatile int end_buf = 2147483548; // INT_MIN - 100
  int end = end_buf;

  unsigned counter = 0;
  for (int start = wrapping_add(end,  200); start < end; start++)
    counter++;

  print(counter);
}
```

Note: the `NoWrap` variable that was being tested has little to do with
the values flowing into the max expression; it is a property of the
induction variable.

test/Transforms/LoopUnroll/nsw-tripcount.ll was added to solely test
functionality I'm reverting in this change, so I've deleted the test
fully.

llvm-svn: 273079
2016-06-18 04:38:31 +00:00
Matt Arsenault 8fd5978811 Revert "Revert "Revert "InstCombine: Reduce trunc (shl x, K) width."""
This seems to be causing an infinite loop / crash in instcombine
on some bots.

llvm-svn: 273069
2016-06-17 23:36:38 +00:00
Adam Nemet a9f09c6245 [LAA] Enable symbolic stride speculation for all LAA clients
This is a functional change for LLE and LDist.  The other clients (LV,
LVerLICM) already had this explicitly enabled.

The temporary boolean parameter to LAA that allowed turning off
speculation of symbolic strides is removed.  This makes LAA's caching
interface LAA::getInfo only take the loop as the parameter.  This makes
the interface more friendly to the new Pass Manager.

The flag -enable-mem-access-versioning is moved from LV to LAA, which
now allows turning off speculation globally.

llvm-svn: 273064
2016-06-17 22:35:41 +00:00
Matt Arsenault d76efc14b9 Revert "Revert "InstCombine: Reduce trunc (shl x, K) width.""
Reapply r272987. Condition should be in terms of the destination type,
and the flags should not be copied.

llvm-svn: 273045
2016-06-17 20:33:53 +00:00
Davide Italiano b49aa5c0c4 [PM] Port MergedLoadStoreMotion to the new pass manager, take two.
This is indeed a much cleaner approach (thanks to Daniel Berlin
for pointing out), and also David/Sean for review.

Differential Revision:  http://reviews.llvm.org/D21454

llvm-svn: 273032
2016-06-17 19:10:09 +00:00
James Y Knight 148a6469dc Support expanding partial-word cmpxchg to full-word cmpxchg in AtomicExpandPass.
Many CPUs only have the ability to do a 4-byte cmpxchg (or ll/sc), not 1
or 2-byte. For those, you need to mask and shift the 1 or 2 byte values
appropriately to use the 4-byte instruction.
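
For illustration only (not taken from the patch), the input IR below uses a 1-byte cmpxchg; on a target that only provides a 4-byte compare-and-swap, AtomicExpandPass would now expand it into a loop around an aligned i32 cmpxchg, shifting the i8 operands into place and masking so the neighbouring bytes are preserved:

define i8 @cas_byte(i8* %p, i8 %cmp, i8 %new) {
  ; sub-word atomic that needs expansion on cmpxchg-based targets like SPARC
  %pair = cmpxchg i8* %p, i8 %cmp, i8 %new seq_cst seq_cst
  %old = extractvalue { i8, i1 } %pair, 0
  ret i8 %old
}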

This change adds support for cmpxchg-based instruction sets (only SPARC,
in LLVM). The support can be extended for LL/SC-based PPC and MIPS in
the future, supplanting the ISel expansions those architectures
currently use.

Tests added for the IR transform and SPARCv9.

Differential Revision: http://reviews.llvm.org/D21029

llvm-svn: 273025
2016-06-17 18:11:48 +00:00
Sanjay Patel 216d8cf720 [InstCombine] allow more than one use for vector bitcast folding with selects
The motivating example for this transform is similar to D20774 where bitcasts interfere
with a single cmp/select sequence, but in this case we have 2 uses of each bitcast to 
produce min and max ops:

define void @minmax_bc_store(<4 x float> %a, <4 x float> %b, <4 x float>* %ptr1, <4 x float>* %ptr2) {
  %cmp = fcmp olt <4 x float> %a, %b
  %bc1 = bitcast <4 x float> %a to <4 x i32>
  %bc2 = bitcast <4 x float> %b to <4 x i32>
  %sel1 = select <4 x i1> %cmp, <4 x i32> %bc1, <4 x i32> %bc2
  %sel2 = select <4 x i1> %cmp, <4 x i32> %bc2, <4 x i32> %bc1
  %bc3 = bitcast <4 x float>* %ptr1 to <4 x i32>*
  store <4 x i32> %sel1, <4 x i32>* %bc3
  %bc4 = bitcast <4 x float>* %ptr2 to <4 x i32>*
  store <4 x i32> %sel2, <4 x i32>* %bc4
  ret void
}

With this patch, we move the selects up to use the input args which allows getting rid of
all of the bitcasts:

define void @minmax_bc_store(<4 x float> %a, <4 x float> %b, <4 x float>* %ptr1, <4 x float>* %ptr2) {
  %cmp = fcmp olt <4 x float> %a, %b
  %sel1.v = select <4 x i1> %cmp, <4 x float> %a, <4 x float> %b
  %sel2.v = select <4 x i1> %cmp, <4 x float> %b, <4 x float> %a
  store <4 x float> %sel1.v, <4 x float>* %ptr1, align 16
  store <4 x float> %sel2.v, <4 x float>* %ptr2, align 16
  ret void
}

The asm for x86 SSE then improves from:

movaps  %xmm0, %xmm2
cmpltps %xmm1, %xmm2
movaps  %xmm2, %xmm3
andnps  %xmm1, %xmm3
movaps  %xmm2, %xmm4
andnps  %xmm0, %xmm4
andps %xmm2, %xmm0
orps  %xmm3, %xmm0
andps %xmm1, %xmm2
orps  %xmm4, %xmm2
movaps  %xmm0, (%rdi)
movaps  %xmm2, (%rsi)

To:

movaps  %xmm0, %xmm2
minps %xmm1, %xmm2
maxps %xmm0, %xmm1
movaps  %xmm2, (%rdi)
movaps  %xmm1, (%rsi)

The TODO comments show that we're limiting this transform only to vectors and only to bitcasts
because we need to improve other transforms or risk creating worse codegen.

Differential Revision: http://reviews.llvm.org/D21190

llvm-svn: 273011
2016-06-17 16:46:50 +00:00
Adam Nemet e7709d92ba [LLE] Don't hard-code the name of the preheader in test
Turns out I didn't get this right because symbolic stride versioning
changes the name.  Relax the matching.

llvm-svn: 272992
2016-06-17 09:13:15 +00:00
Matt Arsenault ce56f7bbaa Revert "InstCombine: Reduce trunc (shl x, K) width."
This reverts commit r272987.

This might be causing crashes on some bots.

llvm-svn: 272990
2016-06-17 06:28:53 +00:00
Matt Arsenault 028fd50642 InstCombine: Reduce trunc (shl x, K) width.
llvm-svn: 272987
2016-06-17 04:43:22 +00:00
Evgeniy Stepanov 45fa0fd758 [safestack] Sink unsafe address computation to each use.
This is a fix for PR27844.
When replacing uses of unsafe allocas, emit the new location
immediately after each use. Without this, the pointer stays live from
the function entry to the last use, while it's usually cheaper to
recalculate.

llvm-svn: 272969
2016-06-16 22:34:04 +00:00
Evgeniy Stepanov 72d961a1da [safestack] Fixup llvm.dbg.value when rewriting unsafe allocas.
When moving unsafe allocas to the unsafe stack, dbg.declare intrinsics are
updated to refer to the new location.

This change does the same to dbg.value intrinsics.

llvm-svn: 272968
2016-06-16 22:34:00 +00:00
Sanjoy Das 07c6521aed [EarlyCSE] Fold invariant loads
Redundant invariant loads can be CSE'ed with very little extra effort
over what early-cse already tracks, so it looks reasonable to make
early-cse handle this case.
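
A minimal sketch of the redundancy in question (illustrative, not the commit's test): because an !invariant.load location is known never to change, the intervening store does not block CSE of the second load:

define i32 @f(i32* %p, i32* %q) {
  %a = load i32, i32* %p, !invariant.load !0
  store i32 42, i32* %q
  ; redundant: early-cse can now reuse %a here
  %b = load i32, i32* %p, !invariant.load !0
  %s = add i32 %a, %b
  ret i32 %s
}
!0 = !{}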

llvm-svn: 272954
2016-06-16 20:47:57 +00:00
Justin Lebar b0bd07aff7 Fix strip-dead-debug-info test if path contains "bar".
This test checks that the string 'bar' (no quotes) doesn't exist in the
output after running opt.  But opt embeds the absolute path to the
filename, and on my machine, the filename contains the string 'jlebar',
causing the test to fail.

This patch changes the test to look for the string '"bar"' instead.

llvm-svn: 272941
2016-06-16 19:39:55 +00:00
Davide Italiano 41315f7873 [PM] Revert the port of MergeLoadStoreMotion to the new pass manager.
Daniel Berlin expressed some real concerns about the port and proposed
and alternative approach. I'll revert this for now while working on a
new patch, which I hope to put up for review shortly. Sorry for the churn.

llvm-svn: 272925
2016-06-16 17:40:53 +00:00
Adam Nemet 776346848a [LLE] New test to check that no versioning for symbolic strides occurs. NFC
This is currently only performed in the Vectorizer.  I will change this
as symbolic stride collection is moved to LAA.

This test will track when the actual functional change occurs.

llvm-svn: 272918
2016-06-16 16:45:29 +00:00
Igor Laevsky 87f0d0e185 Revert r272891 "[JumpThreading] Prevent dangling pointer problems in BranchProbabilityInfo"
It was causing failures in Profile-i386 and Profile-x86_64 tests.

llvm-svn: 272912
2016-06-16 16:25:53 +00:00
Igor Laevsky c9179fd2c2 [JumpThreading] Prevent dangling pointer problems in BranchProbabilityInfo
We should update the results of BranchProbabilityInfo after removing a block in JumpThreading. Otherwise
we will get a dangling pointer inside the BranchProbabilityInfo cache.

Differential Revision: http://reviews.llvm.org/D20957

llvm-svn: 272891
2016-06-16 13:28:25 +00:00
Patrik Hagglund 0acaefaf9d PR27938: Don't remove valid DebugLoc in Scalarizer
Added checks to make sure that Scalarizer::transferMetadata() doesn't
remove valid debug locations from instructions. This is important as
the verifier pass requires that e.g. inlinable callsites have a valid
debug location.

https://llvm.org/bugs/show_bug.cgi?id=27938

Patch by Karl-Johan Karlsson

Reviewers: dblaikie

Differential Revision: http://reviews.llvm.org/D20807

llvm-svn: 272884
2016-06-16 10:48:54 +00:00
Chuang-Yu Cheng dbe00d51b4 SimplifyCFG is able to detect the pattern:
(i == 5334 || i == 5335)
to:
    ((i & -2) == 5334)

This transformation has some incorrect side conditions. Specifically, the
transformation is only applied when the right-hand side constant (5334 in
the example) is a power of two and not equal to the negated mask.
These side conditions were added in r258904 to fix PR26323. The correct side
condition is that: ((Constant & Mask) == Constant) [(5334 & -2) == 5334].

It's a little bit hard to see why these transformations are correct and what
the side conditions ought to be. Here is a CVC3 program to verify them for
64-bit values:
    ONE  : BITVECTOR(64) = BVZEROEXTEND(0bin1, 63);
    x    : BITVECTOR(64);
    y    : BITVECTOR(64);
    z    : BITVECTOR(64);
    mask : BITVECTOR(64) = BVSHL(ONE, z);
    QUERY( (y & ~mask = y) =>
           ((x & ~mask = y) <=> (x = y OR x = (y |  mask)))
    );

Please note that each pattern must be a dual implication (<--> or iff). One
directional implication can create spurious matches. If the implication is
only one-way, an unsatisfiable condition on the left side can imply a
satisfiable condition on the right side. Dual implication ensures that
satisfiable conditions are transformed to other satisfiable conditions and
unsatisfiable conditions are transformed to other unsatisfiable conditions.

Here is a concrete example of an unsatisfiable condition on the left
implying a satisfiable condition on the right:
    mask = (1 << z)
    (x & ~mask) == y --> (x == y || x == (y | mask))

Substituting y = 3, z = 0 yields:
    (x & -2) == 3 --> (x == 3 || x == 2)

The version of this code before r258904 had no side-conditions and
incorrectly justified itself in comments through one-directional
implication.

Thanks to Chandler for the suggestion!

Author: Thomas Jablin (tjablin)
Reviewers: chandlerc majnemer hfinkel cycheng

http://reviews.llvm.org/D21417

llvm-svn: 272873
2016-06-16 04:44:25 +00:00
Eli Friedman bd254a6f45 [InstCombine] Don't widen metadata on store-to-load forwarding
The original check for load CSE or store-to-load forwarding is wrong
when the forwarded stored value happened to be a load.

Ref https://github.com/JuliaLang/julia/issues/16894

Differential Revision: http://reviews.llvm.org/D21271

Patch by Yichao Yu!

llvm-svn: 272868
2016-06-16 02:33:42 +00:00
Xinliang David Li 1eaecefaf9 [PM] Port Add discriminator pass to new PM
llvm-svn: 272847
2016-06-15 21:51:30 +00:00
David Majnemer b62692e2e0 [TargetLibraryInfo] Teach isValidProtoForLibFunc about tan
We would fail to validate the type of the tan function which would cause
downstream users of isValidProtoForLibFunc to assert.

This fixes PR28143.

llvm-svn: 272802
2016-06-15 16:47:23 +00:00
Sean Silva e0a9e66040 [PM] Port SLPVectorizer to the new PM
This uses the "runImpl" approach to share code with the old PM.

Porting to the new PM meant abandoning the anonymous namespace enclosing
most of SLPVectorizer.cpp which is a bit of a bummer (but not a big deal
compared to having to pull the pass class into a header which the new PM
requires since it calls the constructor directly).

llvm-svn: 272766
2016-06-15 08:43:40 +00:00
Sean Silva a4c2d150d0 [PM] Port AlignmentFromAssumptions to the new PM.
This uses the "runImpl" pattern to share code between the old and new PM.

llvm-svn: 272757
2016-06-15 06:18:01 +00:00
Michael Kuperstein 3277a05fcf Recommit [LV] Enable vectorization of loops where the IV has an external use
r272715 broke libcxx because it did not correctly handle cases where the
last iteration of one IV is the second-to-last iteration of another.

Original commit message:
Vectorizing loops with "escaping" IVs has been disabled since r190790, due to
PR17179. This re-enables it, with support for external use of both
"post-increment" (last iteration) and "pre-increment" (second-to-last iteration)
IVs.

llvm-svn: 272742
2016-06-15 00:35:26 +00:00
David Majnemer 4a697c312f [LoopUnroll] Don't crash trying to unroll loop with EH pad exit
We do not support splitting cleanuppad or catchswitches.  This is
problematic for passes which assume that a loop is in loop simplify
form (the loop would have a dedicated exit block instead of sharing it).

While it isn't great that we don't support this for cleanups, we still
cannot make loop-simplify form an assertable precondition because
indirectbr will also disable these sorts of CFG cleanups.

This fixes PR28132.

llvm-svn: 272739
2016-06-15 00:19:56 +00:00
David Majnemer cbf614a93b Remove the ScalarReplAggregates pass
Nearly all the changes to this pass have been done while maintaining and
updating other parts of LLVM.  LLVM has had another pass, SROA, which
has superseded ScalarReplAggregates for quite some time.

Differential Revision: http://reviews.llvm.org/D21316

llvm-svn: 272737
2016-06-15 00:19:09 +00:00
Matt Arsenault f42c69206d AMDGPU: Run pointer optimization passes
llvm-svn: 272736
2016-06-15 00:11:01 +00:00
Michael Kuperstein d4bd3ab5fe Reverting r272715 since it broke libcxx.
llvm-svn: 272730
2016-06-14 22:30:41 +00:00
Davide Italiano d737dd2ec6 [PM] Port WholeProgramDevirt to the new pass manager.
llvm-svn: 272721
2016-06-14 21:44:19 +00:00
Michael Kuperstein 23b6d6adc9 [LV] Enable vectorization of loops where the IV has an external use
Vectorizing loops with "escaping" IVs has been disabled since r190790, due to
PR17179. This re-enables it, with support for external use of both
"post-increment" (last iteration) and "pre-increment" (second-to-last iteration)
IVs.

Differential Revision: http://reviews.llvm.org/D21048

llvm-svn: 272715
2016-06-14 21:27:27 +00:00
Evgeniy Stepanov 0be3cf1d35 Add a missing test.
This is a test for r272421: Disable MSan-hostile loop unswitching.

llvm-svn: 272713
2016-06-14 21:24:13 +00:00
Peter Collingbourne 96efdd6107 IR: Introduce local_unnamed_addr attribute.
If a local_unnamed_addr attribute is attached to a global, the address
is known to be insignificant within the module. It is distinct from the
existing unnamed_addr attribute in that it only describes a local property
of the module rather than a global property of the symbol.
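
Illustrative syntax only (these globals are made up, not from the patch):

@gv = linkonce_odr local_unnamed_addr constant i32 42

define linkonce_odr i32 @get() local_unnamed_addr {
  %v = load i32, i32* @gv
  ret i32 %v
}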

This attribute is intended to be used by the code generator and LTO to allow
the linker to decide whether the global needs to be in the symbol table. It is
possible to exclude a global from the symbol table if three things are true:
- This attribute is present on every instance of the global (which means that
  the normal rule that the global must have a unique address can be broken without
  being observable by the program by performing comparisons against the global's
  address)
- The global has linkonce_odr linkage (which means that each linkage unit must have
  its own copy of the global if it requires one, and the copy in each linkage unit
  must be the same)
- It is a constant or a function (which means that the program cannot observe that
  the unique-address rule has been broken by writing to the global)

Although this attribute could in principle be computed from the module
contents, LTO clients (i.e. linkers) will normally need to be able to compute
this property as part of symbol resolution, and it would be inefficient to
materialize every module just to compute it.

See:
http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20160509/356401.html
http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20160516/356738.html
for earlier discussion.

Part of the fix for PR27553.

Differential Revision: http://reviews.llvm.org/D20348

llvm-svn: 272709
2016-06-14 21:01:22 +00:00
Sanjoy Das d7e8206b58 [ValueTracking] Calls to @llvm.assume always return
This change teaches llvm::isGuaranteedToTransferExecutionToSuccessor
that calls to @llvm.assume always terminate.  Most other relevant
intrinsics should be covered by the "CS.onlyReadsMemory() ||
CS.onlyAccessesArgMemory()" bit but we were missing @llvm.assumes
because we state that it clobbers memory.

Added an LICM test case, but this change is not specific to LICM.

llvm-svn: 272703
2016-06-14 20:23:16 +00:00
Adam Nemet 73a26957fc [LoopVer] Update all existing PHIs in the exit block
We only used to add the edge from the cloned loop to PHIs that
corresponded to values defined by the loop.  We need to do this for all
PHIs obviously since we need a PHI operand for each incoming edge.

This includes things like PHIs with a constant value or with values
defined before the original loop (see the testcases).

After the patch the PHIs are added to the exit block in two passes.

In the first pass we ensure there is a single-operand (LCSSA) PHI for
each value defined by the loop.

In the second pass we loop through each (single-operand) PHI and add the
value for the edge from the cloned loop.  If the value is defined in the
loop we'll use the cloned instruction from the cloned loop.

Fixes PR28037

llvm-svn: 272649
2016-06-14 09:38:54 +00:00
Davide Italiano cccf4f01ad [PM] Port Mem2Reg to the new pass manager.
llvm-svn: 272630
2016-06-14 03:22:22 +00:00
Sean Silva 6347df0f81 [PM] Port MemCpyOpt to the new PM.
The need for all these Lookup* functions is just because of calls to
getAnalysis inside methods (i.e. not at the top level) of the
runOnFunction method. They should be straightforward to clean up when
the old PM is gone.

llvm-svn: 272615
2016-06-14 02:44:55 +00:00
Davide Italiano 5669ef1efe Placate bots fixing a typo in AA-pipeline description. Sorry.
llvm-svn: 272608
2016-06-14 01:11:12 +00:00
Sean Silva 46590d556a Bring back "[PM] Port JumpThreading to the new PM" with a fix
This reverts commit r272603 and adds a fix.

Big thanks to Davide for pointing me at r216244 which gives some insight
into how to fix this VS2013 issue. VS2013 can't synthesize a move
constructor. So the fix here is to add one explicitly to the
JumpThreadingPass class.

llvm-svn: 272607
2016-06-14 00:51:09 +00:00
Davide Italiano 89ab89d6cd [PM] Port MergedLoadStoreMotion to the new pass manager.
llvm-svn: 272606
2016-06-14 00:49:23 +00:00
Sean Silva 7d5a57cbfc Revert "[PM] Port JumpThreading to the new PM"
This reverts commit r272597.

Will investigate issue with VS2013 compilation and then recommit.

llvm-svn: 272603
2016-06-14 00:26:31 +00:00
Sean Silva f81328d0b4 [PM] Port JumpThreading to the new PM
This follows the approach in r263208 (for GVN) pretty closely:
- move the bulk of the body of the function to the new PM class.
- expose a runImpl method on the new-PM class that takes the IRUnitT and
  pointers/references to any analyses and use that to implement the
  old-PM class.
- use a private namespace in the header for stuff that used to be file
  scope

llvm-svn: 272597
2016-06-13 22:52:52 +00:00
Sanjoy Das 98ac278b86 Move previously added test case to the right location
In rL272580 I accidentally added a test case to test/CodeGen when
test/Transforms/DeadStoreElimination/ is a better place for it.

llvm-svn: 272581
2016-06-13 20:12:07 +00:00
Sean Silva e3bb457423 [PM] Port DeadArgumentElimination to the new PM
The approach taken here follows r267631.

deadarghaX0r should be easy to port when the time comes to add new-PM
support to bugpoint.

llvm-svn: 272507
2016-06-12 09:16:39 +00:00
Sean Silva f5080194fd [PM] Port ReversePostOrderFunctionAttrs to the new PM
Below are my super rough notes when porting. They can probably serve as
a basic guide for porting other passes to the new PM. As I port more
passes I'll expand and generalize this and make a proper
docs/HowToPortToNewPassManager.rst document. There is also missing
documentation for general concepts and API's in the new PM which will
require some documentation.
Once there is proper documentation in place we can put up a list of
passes that have to be ported and game-ify/crowdsource the rest of the
porting (at least of the middle end; the backend is still unclear).

I will however be taking personal responsibility for ensuring that the
LLD/ELF LTO pipeline is ported in a timely fashion. The remaining passes
to be ported are (do something like
`git grep "<the string in the bullet point below>"` to find the pass):

General Scalar:
[ ] Simplify the CFG
[ ] Jump Threading
[ ] MemCpy Optimization
[ ] Promote Memory to Register
[ ] MergedLoadStoreMotion
[ ] Lazy Value Information Analysis

General IPO:
[ ] Dead Argument Elimination
[ ] Deduce function attributes in RPO

Loop stuff / vectorization stuff:
[ ] Alignment from assumptions
[ ] Canonicalize natural loops
[ ] Delete dead loops
[ ] Loop Access Analysis
[ ] Loop Invariant Code Motion
[ ] Loop Vectorization
[ ] SLP Vectorizer
[ ] Unroll loops

Devirtualization / CFI:
[ ] Cross-DSO CFI
[ ] Whole program devirtualization
[ ] Lower bitset metadata

CGSCC passes:
[ ] Function Integration/Inlining
[ ] Remove unused exception handling info
[ ] Promote 'by reference' arguments to scalars

Please let me know if you are interested in working on any of the passes
in the above list (e.g. reply to the post-commit thread for this patch).
I'll probably be tackling "General Scalar" and "General IPO" first FWIW.

Steps as I port "Deduce function attributes in RPO"
---------------------------------------------------

(note: if you are doing any work based on these notes, please leave a
note in the post-commit review thread for this commit with any
improvements / suggestions / incompleteness you ran into!)

Note: "Deduce function attributes in RPO" is a module pass.

1. Do preparatory refactoring.

In this case, all I had to do was to pull out a static helper (r272503).
(TODO: give more advice here e.g. if pass holds state or something)

2. Rename the old pass class.

llvm/lib/Transforms/IPO/FunctionAttrs.cpp
Rename class ReversePostOrderFunctionAttrs -> ReversePostOrderFunctionAttrsLegacyPass
in preparation for adding a class ReversePostOrderFunctionAttrs as the pass in the new PM.
(edit: actually wait what? The new class name will be
ReversePostOrderFunctionAttrsPass, so it doesn't conflict. So this step is
sort of useless churn).

llvm/include/llvm/InitializePasses.h
llvm/lib/LTO/LTOCodeGenerator.cpp
llvm/lib/Transforms/IPO/IPO.cpp
llvm/lib/Transforms/IPO/FunctionAttrs.cpp
Rename initializeReversePostOrderFunctionAttrsPass -> initializeReversePostOrderFunctionAttrsLegacyPassPass
(note that the "PassPass" thing falls out of `s/ReversePostOrderFunctionAttrs/ReversePostOrderFunctionAttrsLegacyPass/`)
Note that the INITIALIZE_PASS macro is what creates this identifier name, so renaming the class requires this renaming too.

Note that createReversePostOrderFunctionAttrsPass does not need to be
renamed since its name is not generated from the class name.

3. Add the new PM pass class.

In the new PM all passes need to have their
declaration in a header somewhere, so you will often need to add a header.
In this case
llvm/include/llvm/Transforms/IPO/FunctionAttrs.h is already there because
PostOrderFunctionAttrsPass was already ported.
The file-level comment from the .cpp file can be used as the file-level
comment for the new header. You may want to tweak the wording slightly
from "this file implements" to "this file provides" or similar.

Add declaration for the new PM pass in this header:

    class ReversePostOrderFunctionAttrsPass
        : public PassInfoMixin<ReversePostOrderFunctionAttrsPass> {
    public:
      PreservedAnalyses run(Module &M, AnalysisManager<Module> &AM);
    };

Its name should end with `Pass` for consistency (note that this doesn't
collide with the names of most old PM passes). E.g. call it
`<name of the old PM pass>Pass`.

Also, move the doxygen comment from the old PM pass to the declaration of
this class in the header.
Also, include the declaration for the new PM class
`llvm/Transforms/IPO/FunctionAttrs.h` at the top of the file (in this case,
it was already done when the other pass in this file was ported).

Now define the `run` method for the new class.
The main things here are:
a) Use AM.getResult<...>(M) to get results instead of `getAnalysis<...>()`

b) If the old PM pass would have returned "false" (i.e. `Changed ==
false`), then you should return PreservedAnalyses::all();

c) In the old PM getAnalysisUsage method, observe the calls
   `AU.addPreserved<...>();`.

   In the case `Changed == true`, for each preserved analysis you should do
   call `PA.preserve<...>()` on a PreservedAnalyses object and return it.
   E.g.:

       PreservedAnalyses PA;
       PA.preserve<CallGraphAnalysis>();
       return PA;

Note that calls to skipModule/skipFunction are not supported in the new PM
currently, so optnone and optimization bisect support do not work. You can
just drop those calls for now.

4. Add the pass to the new PM pass registry to make it available in opt.

In llvm/lib/Passes/PassBuilder.cpp add a #include for your header.
`#include "llvm/Transforms/IPO/FunctionAttrs.h"`
In this case there is already an include (from when
PostOrderFunctionAttrsPass was ported).

Add your pass to llvm/lib/Passes/PassRegistry.def
In this case, I added
`MODULE_PASS("rpo-functionattrs", ReversePostOrderFunctionAttrsPass())`
The string is from the `INITIALIZE_PASS*` macros used in the old pass
manager.

Then choose a test that uses the pass and use the new PM `-passes=...` to
run it.
E.g. in this case there is a test that does:
; RUN: opt < %s -basicaa -functionattrs -rpo-functionattrs -S | FileCheck %s
I have added the line:
; RUN: opt < %s -aa-pipeline=basic-aa -passes='require<targetlibinfo>,cgscc(function-attrs),rpo-functionattrs' -S | FileCheck %s
The `-aa-pipeline=basic-aa` and
`require<targetlibinfo>,cgscc(function-attrs)` are what is needed to run
functionattrs in the new PM (note that in the new PM "functionattrs"
becomes "function-attrs" for some reason). This is just pulled from
`readattrs.ll` which contains the change from when functionattrs was ported
to the new PM.
Adding rpo-functionattrs causes the pass that was just ported to run.

llvm-svn: 272505
2016-06-12 07:48:51 +00:00
Eli Friedman 9f8031c2da [MergedLoadStoreMotion] Use correct helper for load hoist safety.
It isn't legal to hoist a load past a call which might not return;
even if it doesn't throw, it could, for example, call exit().
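
As a hypothetical illustration (not the actual regression test): merging the two loads below into the common predecessor would move the dereference of %p above @may_exit, which might call exit() and never return, so the load could now execute in runs where it previously did not:

declare void @may_exit()

define i32 @f(i1 %c, i32* %p) {
entry:
  br i1 %c, label %then, label %else
then:
  call void @may_exit()
  %a = load i32, i32* %p
  br label %end
else:
  %b = load i32, i32* %p
  br label %end
end:
  %r = phi i32 [ %a, %then ], [ %b, %else ]
  ret i32 %r
}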

Fixes http://llvm.org/PR27953.

llvm-svn: 272495
2016-06-12 02:11:20 +00:00
Eli Friedman f1da33e4d3 [LICM] Make isGuaranteedToExecute more accurate.
Summary:
Make isGuaranteedToExecute use the
isGuaranteedToTransferExecutionToSuccessor helper, and make that helper
a bit more accurate.

There's a potential performance impact here from assuming that arbitrary
calls might not return. This probably has little impact on loads and
stores to a pointer because most things alias analysis can reason about
are dereferenceable anyway. The other impacts, like less aggressive
hoisting of sdiv by a variable and less aggressive hoisting around
volatile memory operations, are unlikely to matter for real code.

This also impacts SCEV, which uses the same helper.  It's a minor
improvement there because we can tell that, for example, memcpy always
returns normally. Strictly speaking, it's also introducing
a bug, but it's not any worse than everywhere else we assume readonly
functions terminate.

Fixes http://llvm.org/PR27857.

Reviewers: hfinkel, reames, chandlerc, sanjoy

Subscribers: broune, llvm-commits

Differential Revision: http://reviews.llvm.org/D21167

llvm-svn: 272489
2016-06-11 21:48:25 +00:00
Simon Pilgrim 3fc09f7be6 [CostModel][X86][SSE] Updated costs for vector BITREVERSE ops on SSSE3+ targets
To account for the fast PSHUFB implementation now available

llvm-svn: 272484
2016-06-11 19:23:02 +00:00
Matthew Simpson 12b9c5ba98 Reapply "[TTI] Refine default cost for interleaved load groups with gaps"
This reapplies commit r272385 with a fix. The build was failing when compiled
with gcc, but not with clang. With the fix, we now get the data layout from the
current TTI implementation, which will hopefully solve the issue.

llvm-svn: 272395
2016-06-10 14:33:30 +00:00
Matthew Simpson 65c7b74de4 Revert "[TTI] Refine default cost for interleaved load groups with gaps"
This reverts commit r272385. This commit broke the build. I'm temporarily
reverting to investigate.

llvm-svn: 272391
2016-06-10 12:41:33 +00:00
Matthew Simpson b16907f17a [TTI] Refine default cost for interleaved load groups with gaps
This patch refines the default cost for interleaved load groups having gaps. If
a load group has gaps, the legalized instructions corresponding to the unused
elements will be dead. Thus, we don't need to account for them in the cost
model. Instead, we only need to account for the fraction of legalized loads
that will actually be used.

Differential Revision: http://reviews.llvm.org/D20873

llvm-svn: 272385
2016-06-10 11:27:51 +00:00
Easwaran Raman 71069cf67d Use ProfileSummaryInfo in inline cost analysis.
Instead of directly using MaxFunctionCount and function entry count to determine callee hotness, use the isHotFunction/isColdFunction methods provided by ProfileSummaryInfo.

Differential revision: http://reviews.llvm.org/D21045

llvm-svn: 272321
2016-06-09 22:23:21 +00:00
Easwaran Raman e12c487b8c [PM] Port LCSSA to the new PM.
Differential Revision: http://reviews.llvm.org/D21090

llvm-svn: 272294
2016-06-09 19:44:46 +00:00
Michael Kuperstein c5edcdeb0e [LV] Use vector phis for some secondary induction variables
Previously, we materialized secondary vector IVs from the primary scalar IV,
by offsetting the primary to match the correct start value, and then broadcasting
it - inside the loop body. Instead, we can use a real vector IV, like we do for
the primary.

This enables using vector IVs for secondary integer IVs whose type matches the
type of the primary.

Differential Revision: http://reviews.llvm.org/D20932

llvm-svn: 272283
2016-06-09 18:03:15 +00:00
Sanjoy Das c7f69b921f Be wary of abnormal exits from loop when exploiting UB
We can safely rely on a NoWrap add recurrence causing UB down the road
only if we know the loop does not have an exit expressed in a way that is
opaque to ScalarEvolution (e.g. by a function call that conditionally
calls exit(0)).

I believe with this change PR28012 is fixed.

Note: I had to change some llvm-lit tests in LoopReroll, since it looks
like they were depending on this incorrect behavior.

llvm-svn: 272237
2016-06-09 01:13:59 +00:00
Michael Zolotukhin 8e7e76729d [LoopSimplify] Preserve LCSSA when merging exit blocks.
Summary:
This fixes PR26682. Also add LCSSA as a preserved pass to LoopSimplify,
that looks correct to me and allows to write a test for the issue.

Reviewers: chandlerc, bogner, sanjoy

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D21112

llvm-svn: 272224
2016-06-08 23:13:21 +00:00
Michael Zolotukhin 987ab631fa [SLPVectorizer] Handle GEP with differing constant index types
Summary:
This fixes PR27617.

Bug description: The SLPVectorizer asserts on encountering GEPs with different index types, such as i8 and i64.

The patch includes a simple relaxation of the assert to allow constants of different types, along with a regression test that will provoke the unrelaxed assert.
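
A rough sketch of the kind of input meant here (hypothetical, not the added regression test): two otherwise identical GEPs whose constant indices merely use different integer types, e.g. i8 and i64:

define void @f(i32* %base, i32 %v) {
  %g0 = getelementptr inbounds i32, i32* %base, i8 1
  %g1 = getelementptr inbounds i32, i32* %base, i64 2
  store i32 %v, i32* %g0
  store i32 %v, i32* %g1
  ret void
}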

Reviewers: nadav, mzolotukhin

Subscribers: JesperAntonsson, llvm-commits, mzolotukhin

Differential Revision: http://reviews.llvm.org/D20685

Patch by Jesper Antonsson!

llvm-svn: 272206
2016-06-08 21:55:16 +00:00
Evgeny Stupachenko 3e2f389a7e The patch sets the unroll disable pragma when unrolling
with a user-specified count has been applied.

Summary:
Previously SetLoopAlreadyUnrolled() set the disable pragma only if
there was some loop metadata.
Now it sets the pragma in all cases. This helps to prevent multiple
unrolling when -unroll-count=N is given.

Reviewers: mzolotukhin

Differential Revision: http://reviews.llvm.org/D20765

From: Evgeny Stupachenko <evstupac@gmail.com>
llvm-svn: 272195
2016-06-08 20:21:24 +00:00
Tim Shen 7aa0ad65ce [MemCpyOpt] Do not exchange llvm.lifetime.start and llvm.memcpy
Reviewers: iteratee

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D21087

llvm-svn: 272192
2016-06-08 19:42:32 +00:00
Easwaran Raman f894b5e89c Use FileCheck instead of grepping for patterns. NFC.
llvm-svn: 272065
2016-06-07 21:46:14 +00:00
Andrey Turetskiy 94c2179550 Quick fix for the test from rL272014 "[LAA] Improve non-wrapping pointer
detection by handling loop-invariant case" (a couple of buildbots failed).

Patch by Roman Shirokiy.

llvm-svn: 272019
2016-06-07 15:52:35 +00:00
Andrey Turetskiy 9f02c58670 [LAA] Improve non-wrapping pointer detection by handling loop-invariant case.
This fixes PR26314. This patch adds a new helper “isNoWrap” that detects the
loop-invariant pointer case.

Patch by Roman Shirokiy.

Ref: https://llvm.org/bugs/show_bug.cgi?id=26314

Differential Revision: http://reviews.llvm.org/D17268

llvm-svn: 272014
2016-06-07 14:55:27 +00:00
Simon Pilgrim db9893fb90 [InstCombine][AVX2] Add support for simplifying AVX2 per-element shifts to native shifts
Unlike native shifts, the AVX2 per-element shift instructions VPSRAV/VPSRLV/VPSLLV handle out of range shift values (logical shifts set the result to zero, arithmetic shifts splat the sign bit).

If the shift amount is constant we can sometimes convert these instructions to native shifts:

1 - if all shift amounts are in range then the conversion is trivial.
2 - out of range arithmetic shifts can be clamped to the (bitwidth - 1) (a legal shift amount) before conversion.
3 - logical shifts just return zero if all elements have out of range shift amounts.

In addition, UNDEF shift amounts are handled - either as an UNDEF shift amount in a native shift or as an UNDEF in the logical 'all out of range' zero constant special case for logical shifts.
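
A hypothetical example of case 1 (all shift amounts in range; not from the patch's tests):

declare <4 x i32> @llvm.x86.avx2.psllv.d(<4 x i32>, <4 x i32>)

define <4 x i32> @psllv_const_amounts(<4 x i32> %v) {
  ; every element's shift amount is < 32, so this can become
  ; shl <4 x i32> %v, <i32 1, i32 2, i32 3, i32 4>
  %r = call <4 x i32> @llvm.x86.avx2.psllv.d(<4 x i32> %v, <4 x i32> <i32 1, i32 2, i32 3, i32 4>)
  ret <4 x i32> %r
}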

Differential Revision: http://reviews.llvm.org/D19675

llvm-svn: 271996
2016-06-07 10:27:15 +00:00
Simon Pilgrim 91e3ac8293 [InstCombine][SSE] Add MOVMSK constant folding (PR27982)
This patch adds support for folding undef/zero/constant inputs to MOVMSK instructions.

The SSE/AVX versions can be fully folded, but the MMX version can only handle undef inputs.
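
An illustrative fully-foldable SSE case (not taken from the patch):

declare i32 @llvm.x86.sse2.pmovmskb.128(<16 x i8>)

define i32 @movmsk_zero() {
  ; an all-zero input has no sign bits set, so the call folds to 0
  %r = call i32 @llvm.x86.sse2.pmovmskb.128(<16 x i8> zeroinitializer)
  ret i32 %r
}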

Differential Revision: http://reviews.llvm.org/D20998

llvm-svn: 271990
2016-06-07 08:18:35 +00:00
Michael Kuperstein a0c6ae02a5 [InstCombine] scalarizePHI should not assume the code it sees has been CSE'd
scalarizePHI only looked for phis that have exactly two uses - the "latch"
use, and an extract. Unfortunately, we can not assume all equivalent extracts
are CSE'd, since InstCombine itself may create an extract which is a duplicate
of an existing one. This extends it to handle several distinct extracts from
the same index.

This should fix at least some of the performance regressions from PR27988.

Differential Revision: http://reviews.llvm.org/D20983

llvm-svn: 271961
2016-06-06 23:38:33 +00:00
Michael Zolotukhin 19edbadfc5 [LoopUnrollAnalyzer] Fix a crash in analyzeLoopUnrollCost.
In some cases, when simplifying with SCEV, we might consider pointer values as
just usual integer values.  Thus, we might get a different type from what we
had originally in the map of simplified values, and hence we need to check
types before operating on the values.

This fixes PR28015.

llvm-svn: 271931
2016-06-06 19:21:40 +00:00
Geoff Berry 43e5160d0e Reapply [LSR] Create fewer redundant instructions.
Summary:
Fix LSRInstance::HoistInsertPosition() to check the original insert
position block first for a canonical insertion point that is dominated
by all inputs.  This leads to SCEV being able to reuse more instructions
since it currently tracks the instructions it creates for reuse by
keeping a table of <Value, insert point> pairs.

Originally reviewed in http://reviews.llvm.org/D18001

Reviewers: atrick

Subscribers: llvm-commits, mzolotukhin, mcrosier

Differential Revision: http://reviews.llvm.org/D18480

llvm-svn: 271929
2016-06-06 19:10:46 +00:00
Sanjay Patel 6a333c3ed9 [InstCombine] limit icmp transform to ConstantInt (PR28011)
In r271810 ( http://reviews.llvm.org/rL271810 ), I loosened the check
above this to work for any Constant rather than ConstantInt. AFAICT, 
that part makes sense if we can determine that the shrunken/extended 
constant remained equal. But it doesn't make sense for this later 
transform where we assume that the constant DID change. 

This could assert for a ConstantExpr:
https://llvm.org/bugs/show_bug.cgi?id=28011

And it could be wrong for a vector as shown in the added regression test.

llvm-svn: 271908
2016-06-06 16:56:57 +00:00
Sanjay Patel 027c469158 regenerate checks
llvm-svn: 271904
2016-06-06 16:03:06 +00:00
Sanjay Patel 70aa568c4e regenerate checks
llvm-svn: 271903
2016-06-06 15:55:00 +00:00
Eli Friedman ee89505799 LICM: Don't sink stores out of loops that may throw.
Summary:
This hasn't been caught before because it requires noalias or similarly
strong alias analysis to actually reproduce.

Fixes http://llvm.org/PR27952 .

Reviewers: hfinkel, sanjoy

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D20944

llvm-svn: 271858
2016-06-05 22:13:52 +00:00
Sanjoy Das b7e861a488 Add safety check to InstCombiner::commonIRemTransforms
Since FoldOpIntoPhi speculates the binary operation to potentially each
of the predecessors of the PHI node (pulling it out of arbitrary control
dependence in the process), we can FoldOpIntoPhi only if we know the
operation doesn't have UB.

This also brings up an interesting profitability question -- the way it
is written today, commonIRemTransforms will hoist out work from
dynamically dead code into code that will execute at runtime.  Perhaps
that isn't the best canonicalization?

Fixes PR27968.

llvm-svn: 271857
2016-06-05 21:17:04 +00:00
Sanjoy Das 0dcd1d859c Add test case for InstCombiner::commonIRemTransforms; NFC
The PHI case in commonIRemTransforms was untested; add a trivial test
case.

llvm-svn: 271856
2016-06-05 21:17:00 +00:00
Davide Italiano a1cbc3f8cc [Internalize] Test that __stack_chk_{guard, fail} are not internalized.
r154645 introduced this feature without test. This should have better
coverage now.

llvm-svn: 271853
2016-06-05 19:08:54 +00:00
Sanjoy Das 4d4339d1e8 [PM] Port IndVarSimplify to the new pass manager
Summary:
There are some rough corners, since the new pass manager doesn't have
(as far as I can tell) LoopSimplify and LCSSA, so I've updated the
tests to run them separately in the old pass manager in the lit tests.
We also don't have an equivalent for AU.setPreservesCFG() in the new
pass manager, so I've left a FIXME.

Reviewers: bogner, chandlerc, davide

Subscribers: sanjoy, mcrosier, llvm-commits

Differential Revision: http://reviews.llvm.org/D20783

llvm-svn: 271846
2016-06-05 18:01:19 +00:00
Sanjoy Das f90e28d6fd [IndVars] Remove -liv-reduce
It is an off-by-default option that no one seems to use[0], and given
that SCEV directly understands the overflow intrinsics, there is no real
need for it anymore.

[0]: http://lists.llvm.org/pipermail/llvm-dev/2016-April/098181.html

llvm-svn: 271845
2016-06-05 18:01:12 +00:00
Sanjay Patel 0fab306eb5 fix checks
update_test_checks.py got confused matching the variable names. 

llvm-svn: 271844
2016-06-05 17:54:56 +00:00
Sanjay Patel a6fbc82392 [InstCombine] allow vector icmp bool transforms
llvm-svn: 271843
2016-06-05 17:49:45 +00:00
Sanjay Patel 54d7010627 add tests to show missing vector transforms
llvm-svn: 271842
2016-06-05 17:32:58 +00:00
Sanjay Patel 51dc83c052 regenerate checks
llvm-svn: 271841
2016-06-05 17:29:45 +00:00
Sanjay Patel 009c3da65f update test to use FileCheck
llvm-svn: 271840
2016-06-05 17:13:09 +00:00
Sanjay Patel f48b909f28 update test to use FileCheck
llvm-svn: 271838
2016-06-05 16:41:20 +00:00
Sanjay Patel 8a3b6d0d8b update test to FileCheck
llvm-svn: 271837
2016-06-05 16:29:15 +00:00
Xinliang David Li 64dbb295b6 [PM] Port GCOVProfiler pass to the new pass manager
llvm-svn: 271823
2016-06-05 05:12:23 +00:00
David Majnemer 2482e1c017 [SimplifyCFG] Don't kill empty cleanuppads with multiple uses
A basic block could contain:
  %cp = cleanuppad []
  cleanupret from %cp unwind to caller

This basic block is empty and is thus a candidate for removal.  However,
there can be other uses of %cp outside of this basic block.  This is
only possible in unreachable blocks.

Make our transform more correct by checking that the pad has a single
user before removing the BB.

This fixes PR28005.

llvm-svn: 271816
2016-06-04 23:50:03 +00:00
Sanjay Patel ea8a211169 [InstCombine] allow vector constants for cast+icmp fold
This is step 1 of unknown towards fixing PR28001:
https://llvm.org/bugs/show_bug.cgi?id=28001

llvm-svn: 271810
2016-06-04 22:04:05 +00:00
Sanjay Patel 8e63999bee [InstCombine] add test for missing vector optimization
llvm-svn: 271808
2016-06-04 21:41:25 +00:00
Sanjay Patel 4c42211c6f [InstCombine] add test for missing vector optimization
llvm-svn: 271806
2016-06-04 21:20:03 +00:00
Sanjay Patel 58a92a327d [InstCombine] minimize test case and use FileCheck
llvm-svn: 271805
2016-06-04 21:04:59 +00:00
Simon Pilgrim ba319ded5e [Analysis] Enabled BITREVERSE as a vectorizable intrinsic
Allows XOP to vectorize BITREVERSE - other targets will follow as their costmodels improve.

llvm-svn: 271803
2016-06-04 20:21:07 +00:00
Simon Pilgrim fda22d66fc [InstCombine][MMX] Extend SimplifyDemandedUseBits MOVMSK support to MMX
Add the MMX implementation to the SimplifyDemandedUseBits SSE/AVX MOVMSK support added in D19614

Requires a minor tweak as llvm.x86.mmx.pmovmskb takes an x86_mmx argument - so we have to be explicit about the implied v8i8 vector type.

llvm-svn: 271789
2016-06-04 13:42:46 +00:00
Sanjay Patel 6cf18af1c5 [InstCombine] look through bitcasts to find selects
There was concern that creating bitcasts for the simpler potential select pattern:

define <2 x i64> @vecBitcastOp1(<4 x i1> %cmp, <2 x i64> %a) {
  %a2 = add <2 x i64> %a, %a
  %sext = sext <4 x i1> %cmp to <4 x i32>
  %bc = bitcast <4 x i32> %sext to <2 x i64>
  %and = and <2 x i64> %a2, %bc
  ret <2 x i64> %and
}

might lead to worse code for some targets, so this patch is matching the larger
patterns seen in the test cases.

The motivating example for this patch is this IR produced via SSE intrinsics in C:

define <2 x i64> @gibson(<2 x i64> %a, <2 x i64> %b) {
  %t0 = bitcast <2 x i64> %a to <4 x i32>
  %t1 = bitcast <2 x i64> %b to <4 x i32>
  %cmp = icmp sgt <4 x i32> %t0, %t1
  %sext = sext <4 x i1> %cmp to <4 x i32>
  %t2 = bitcast <4 x i32> %sext to <2 x i64>
  %and = and <2 x i64> %t2, %a
  %neg = xor <4 x i32> %sext, <i32 -1, i32 -1, i32 -1, i32 -1>
  %neg2 = bitcast <4 x i32> %neg to <2 x i64>
  %and2 = and <2 x i64> %neg2, %b
  %or = or <2 x i64> %and, %and2
  ret <2 x i64> %or
}

For an AVX target, this is currently:

vpcmpgtd  %xmm1, %xmm0, %xmm2
vpand     %xmm0, %xmm2, %xmm0
vpandn    %xmm1, %xmm2, %xmm1
vpor      %xmm1, %xmm0, %xmm0
retq

With this patch, it becomes:

vpmaxsd   %xmm1, %xmm0, %xmm0

Differential Revision: http://reviews.llvm.org/D20774

llvm-svn: 271676
2016-06-03 14:42:07 +00:00
Sanjay Patel 172bf6edd1 [InstCombine] change tests to show a more obvious transform possibility
The original tests were intended to show a missing transform that would
be solved by D20774:
http://reviews.llvm.org/D20774

But it's not clear that the transform for the simpler tests is a win for
all targets. Make the tests show a larger pattern that should be a win
regardless of the cost of bitcast instructions.

llvm-svn: 271603
2016-06-02 22:45:49 +00:00
Sanjay Patel dba8b4c04d transform obscured FP sign bit ops into a fabs/fneg using TLI hook
This is effectively a revert of:
http://reviews.llvm.org/rL249702 - [InstCombine] transform masking off of an FP sign bit into a fabs() intrinsic call (PR24886)
and:
http://reviews.llvm.org/rL249701 - [ValueTracking] teach computeKnownBits that a fabs() clears sign bits
and a reimplementation as a DAG combine for targets that have IEEE754-compliant fabs/fneg instructions.

This is intended to resolve the objections raised on the dev list:
http://lists.llvm.org/pipermail/llvm-dev/2016-April/098154.html
and:
https://llvm.org/bugs/show_bug.cgi?id=24886#c4

In the interest of patch minimalism, I've only partly enabled AArch64. PowerPC, MIPS, x86 and others can enable later.

Differential Revision: http://reviews.llvm.org/D19391

llvm-svn: 271573
2016-06-02 20:01:37 +00:00
Xinliang David Li 7008ce3f98 [profile] value profiling bug fix -- missing icall targets in profile-use
Inline virtual functions have linkonce_odr linkage (emitted in comdat on 
supporting targets). If the vtable for the class is not emitted in the
defining module, the function won't be address taken, thus its address is not
recorded. At the mercy of the linker, if the per-func prf_data from this
module (in comdat) is picked at link time, we will lose the mapping from
the function address to its hash val. This leads to missing icall promotion.
The second test case (currently disabled) in compiler_rt (r271528): 
instrprof-icall-prom.test demonstrates the bug. The first profile-use
subtest is fine due to a linker order difference.

With this change, no missing icall targets is found in instrumented clang's
raw profile.

llvm-svn: 271532
2016-06-02 16:33:41 +00:00
Xinliang David Li 0b29330612 make icall pass name consistent /NFC
llvm-svn: 271467
2016-06-02 01:52:05 +00:00
Geoff Berry b96d3b2dd8 [MemorySSA] Port to new pass manager
Add support for the new pass manager to MemorySSA pass.

Change MemorySSA to be computed eagerly upon construction.

Change MemorySSAWalker to be owned by the MemorySSA object that creates
it.

Reviewers: dberlin, george.burgess.iv

Subscribers: mcrosier, llvm-commits

Differential Revision: http://reviews.llvm.org/D19664

llvm-svn: 271432
2016-06-01 21:30:40 +00:00
Daniel Berlin 73694bb92b Revert "Claim NoAlias if two GEPs index different fields of the same struct"
This reverts commit 2d5d6493f43eb68493a3852b8c226ac9fafdc7eb.

llvm-svn: 271422
2016-06-01 18:55:32 +00:00
Daniel Berlin e846c9dc52 Claim NoAlias if two GEPs index different fields of the same struct
Patch by Taewook Oh

Summary: Patch for Bug 27478. Make BasicAliasAnalysis claim NoAlias if two GEPs index different fields of the same structure.

Reviewers: hfinkel, dberlin

Subscribers: dberlin, mcrosier, llvm-commits

Differential Revision: http://reviews.llvm.org/D20665

llvm-svn: 271415
2016-06-01 18:12:01 +00:00
Michael Kuperstein 3a3c64d23e [LV] For some IVs, use vector phis instead of widening in the loop body
Previously, whenever we needed a vector IV, we would create it on the fly,
by splatting the scalar IV and adding a step vector. Instead, we can create a
real vector IV. This tends to save a couple of instructions per iteration.

This only changes the behavior for the most basic case - integer primary
IVs with a constant step.

Differential Revision: http://reviews.llvm.org/D20315

llvm-svn: 271410
2016-06-01 17:16:46 +00:00
Guozhi Wei b994f4cdbc [SLP] Pass in correct alignment when query memory access cost
This patch fixes bug https://llvm.org/bugs/show_bug.cgi?id=27897.

When querying memory access cost, current SLP always passes in an alignment value of 1 (unaligned), so it gets a very high cost for scalar memory access and wrongly vectorizes memory loads in the test case.

It can be fixed by simply giving correct alignment.

llvm-svn: 271333
2016-05-31 20:41:19 +00:00
Erik Eckstein 0c48dd8ca5 Fix a crash in MergeFunctions related to ordering of weak/strong functions
The assumption, made in insert() that weak functions are always inserted after strong functions,
is only true in the first round of adding functions.
In subsequent rounds this is no longer guaranteed, because we might remove a strong function from the tree (because it's modified) and add it later,
where an equivalent weak function already exists in the tree.
This change removes the assert in insert() and explicitly enforces a weak->strong order.
This also removes the need of two separate loops in runOnModule().

llvm-svn: 271299
2016-05-31 17:20:23 +00:00
Sanjoy Das ae09b3cd4c [IndVars] Eliminate op.with.overflow when possible (re-apply)
Summary:
If we can prove that an op.with.overflow intrinsic does not overflow, we
can get rid of the intrinsic, and replace it with non-wrapping
arithmetic.

This was first checked in at r265913 but reverted in r265950 because it
exposed some issues around how SCEV handled post-inc add recurrences.
Those issues have now been fixed.

Reviewers: atrick, regehr

Subscribers: sanjoy, mcrosier, llvm-commits

Differential Revision: http://reviews.llvm.org/D18685

llvm-svn: 271153
2016-05-29 00:36:25 +00:00
Sanjoy Das 7e4a64167d [SCEV] Don't always add no-wrap flags to post-inc add recs
Fixes PR27315.

The post-inc version of an add recurrence needs to "follow the same
rules" as a normal add or subtract expression.  Otherwise we miscompile
programs like

```
int main() {
  int a = 0;
  unsigned a_u = 0;
  volatile long last_value;
  do {
    a_u += 3;
    last_value = (long) ((int) a_u);
    if (will_add_overflow(a, 3)) {
      // Leave, and don't actually do the increment, so no UB.
      printf("last_value = %ld\n", last_value);
      exit(0);
    }
    a += 3;
  } while (a != 46);
  return 0;
}
```

This patch changes SCEV to put no-wrap flags on post-inc add recurrences
only when the poison from a potential overflow will go ahead to cause
undefined behavior.

To avoid regressing performance too much, I've assumed infinite loops
without side effects is undefined behavior to prove poison<->UB
equivalence in more cases.  This isn't ideal, but is not new to LLVM as
a whole, and far better than the situation I'm trying to fix.

llvm-svn: 271151
2016-05-29 00:32:17 +00:00
Simon Pilgrim 9602d678cb [X86][SSE] (Reapplied) Replace (V)PMOVSX and (V)PMOVZX integer extension intrinsics with generic IR (llvm)
This patch removes the llvm (V)PMOVSX and (V)PMOVZX sign/zero extension intrinsics and auto-upgrades to SEXT/ZEXT calls instead. We already did this for SSE41 PMOVSX some time ago, so much of that implementation can be reused.

Reapplied now that the the companion patch (D20684) removes/auto-upgrade the clang intrinsics has been committed.

Differential Revision: http://reviews.llvm.org/D20686

llvm-svn: 271131
2016-05-28 18:03:41 +00:00
Sanjay Patel 395eca8d26 [InstCombine] add tests to show bitcast interference
llvm-svn: 271125
2016-05-28 16:10:37 +00:00
Sanjay Patel f49c7b1570 regenerate checks
llvm-svn: 271117
2016-05-28 15:44:28 +00:00
Sanjay Patel cbc4aa6bdd join RUN lines; NFC
llvm-svn: 271115
2016-05-28 15:34:05 +00:00
Sean Silva 02b9d892c5 Bring back r271090 in a way that doesn't depend on r271089.
llvm-svn: 271092
2016-05-28 04:05:36 +00:00
Sean Silva 9dd4b5c51d Revert r271089 and r271090.
It was triggering an msan bot.

Revert "[IRPGO] Set the function entry count metadata."

This reverts commit r271090.

Revert "[IRPGO] Centralize the function attribute inliner hint logic. NFC."

This reverts commit r271089.

llvm-svn: 271091
2016-05-28 03:56:25 +00:00
Sean Silva 7884633c5b [IRPGO] Set the function entry count metadata.
llvm-svn: 271090
2016-05-28 03:02:54 +00:00
Xinliang David Li d38392ecd6 [PM] Port the Sample FDO to new PM (part-2)
llvm-svn: 271072
2016-05-27 23:20:16 +00:00
Evgeny Stupachenko ea2aef4a1d The patch refactors unroll pass.
Summary:
Unroll factor (Count) calculations moved to a new function.
Early exits on pragma and "-unroll-count" defined factor added.
New type of unrolling "Force" introduced (previously used implicitly).
New unroll preference "AllowRemainder" introduced and set "true" by default.
(should be set to false for architectures that suffer from it).

Reviewers: hfinkel, mzolotukhin, zzheng

Differential Revision: http://reviews.llvm.org/D19553

From: Evgeny Stupachenko <evstupac@gmail.com>
llvm-svn: 271071
2016-05-27 23:15:06 +00:00
Sanjoy Das 6fff9dc932 [GVN] Preserve !range metadata when PRE'ing loads
Reviewers: dberlin, reames, george.burgess.iv

Subscribers: mcrosier, llvm-commits

Differential Revision: http://reviews.llvm.org/D20743

llvm-svn: 271034
2016-05-27 19:03:10 +00:00
Tim Northover 32b4d15e0a Move test to X86 directory: I think it depends on X86 TTI.
llvm-svn: 271019
2016-05-27 16:56:54 +00:00
Tim Northover 10a1e8b1fe Vectorizer: track non-fast FP instructions through phis when finding reductions.
When we traced through a phi node looking for floating-point reductions, we
forgot whether we'd ever seen an instruction without fast-math flags (that
would block vectorization). This propagates it through to the end.

llvm-svn: 271015
2016-05-27 16:40:27 +00:00
Dehao Chen 80b16d4135 Remove sample profile dependency to instcombine, which is not a analysis pass.
Summary: This patch removes dependency from sample profile pass to instcombine pass.

Reviewers: davidxl, dnovillo

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D20501

llvm-svn: 271009
2016-05-27 16:14:15 +00:00
Igor Laevsky df9db45c94 [RewriteStatepointsForGC] All constant should have null base pointer
Currently we consider that each constant has itself as a base value, i.e. "base(const) = const". 
This introduces a couple of problems when we are trying to avoid reporting constants in statepoint live sets:

1. When querying "base( phi(const1, const2) )" we will get "phi(const1, const2)" as a base pointer. Since 
   it's not a constant we will record it in a stack map. However in practice we don't want this to happen
   (constants are never relocated).
2. base( phi(const, gc ptr) ) = phi( const, base(gc ptr) ). This particular case imposes a challenge on our 
   runtime - we don't expect to see constant base pointers other than null. These problems can be avoided 
   by treating all constants as if they were derived from a null pointer base. I.e. in the first case we will 
   not include the constant pointer in a stack map at all. In the second case we will get "phi(null, base(gc ptr))" 
   as a base pointer, which is a lot more convenient.

Differential Revision: http://reviews.llvm.org/D20584

llvm-svn: 270993
2016-05-27 13:13:59 +00:00
Simon Pilgrim 4642a57fbf Revert: r270973 - [X86][SSE] Replace (V)PMOVSX and (V)PMOVZX integer extension intrinsics with generic IR (llvm)
llvm-svn: 270976
2016-05-27 09:02:25 +00:00
Simon Pilgrim c013e5737b [X86][SSE] Replace (V)PMOVSX and (V)PMOVZX integer extension intrinsics with generic IR (llvm)
This patch removes the llvm (V)PMOVSX and (V)PMOVZX sign/zero extension intrinsics and auto-upgrades to SEXT/ZEXT calls instead. We already did this for SSE41 PMOVSX some time ago, so much of that implementation can be reused.

A companion patch (D20684) removes/auto-upgrade the clang intrinsics.

Differential Revision: http://reviews.llvm.org/D20686

llvm-svn: 270973
2016-05-27 08:49:15 +00:00
Pete Cooper 1929b5539a Form objc_storeStrong in the presence of bitcasts.
objc_storeStrong can be formed from a sequence such as

  %0 = tail call i8* @objc_retain(i8* %p) nounwind
  %tmp = load i8*, i8** @x, align 8
  store i8* %0, i8** @x, align 8
  tail call void @objc_release(i8* %tmp) nounwind

The code was already looking through bitcasts for most of the values
involved, but had missed one case where the pointer operand for the
store was a bitcast.  Ultimately the pointer for the load and store
have to be the same value, after stripping casts.

llvm-svn: 270955
2016-05-27 02:13:53 +00:00
Michael Zolotukhin 15e745133e [LoopUnrollAnalyzer] Bail out instead of dying with assert when facing huge index.
This fixes PR27902.

llvm-svn: 270946
2016-05-27 00:55:16 +00:00
Easwaran Raman 5fe04a1d8e Attach profile summary in IR based instrumentation pass.
Differential revision: http://reviews.llvm.org/D20655

llvm-svn: 270933
2016-05-26 22:57:11 +00:00
Michael Zolotukhin 1ecdedad8d [LoopUnrollAnalyzer] Fix a crash in analyzeLoopUnrollCost.
The condition might be simplified to a Constant, but it doesn't have to be a
ConstantInt, so we should use dyn_cast instead of cast.

This fixes PR27886.

llvm-svn: 270924
2016-05-26 21:42:51 +00:00
David Majnemer d99068d26d [MemCpyOpt] Don't perform callslot optimization across may-throw calls
An exception could prevent a store from occurring but MemCpyOpt's
callslot optimization would fire anyway, causing the store to occur.

This fixes PR27849.

llvm-svn: 270892
2016-05-26 19:24:24 +00:00
Michael Kuperstein 9a81b62a01 [BBVectorize] Don't vectorize selects with a scalar condition and vector operands.
This fixes PR27879.

Differential Revision: http://reviews.llvm.org/D20659

llvm-svn: 270888
2016-05-26 18:43:57 +00:00
David Majnemer 7f32420ed5 [CaptureTracking] Volatile operations capture their memory location
The memory location that corresponds to a volatile operation is very
special.  Such locations are observed by the machine in ways which we cannot
reason about.

Differential Revision: http://reviews.llvm.org/D20555

llvm-svn: 270879
2016-05-26 17:36:22 +00:00
Chad Rosier e5819e2732 [InstCombine] Catch more bswap cases missed due to zext and truncs.
Fixes PR27824.
Differential Revision: http://reviews.llvm.org/D20591.

llvm-svn: 270853
2016-05-26 14:58:51 +00:00
David Majnemer 474512576e [MergedLoadStoreMotion] Don't transform across may-throw calls
It is unsafe to hoist a load before a function call which may throw; the
throw might prevent a pointer dereference.

Likewise, it is unsafe to sink a store after a call which may throw.
The caller might be able to observe the difference.

This fixes PR27858.

llvm-svn: 270828
2016-05-26 07:11:09 +00:00
Adam Nemet c68534bd13 [ConstantFold] Fix incorrect index rewrites for GEPs
Summary:
If an index for a vector or array type is out-of-range, GEP constant
folding tries to factor it into preceding dimensions.  The code, however,
does not consider addressing of structure field padding, which should not
qualify as an out-of-range index.

As demonstrated by the testcase, this can occur if the indexing is
performed on a vector type and the preceding index is an array type.

SROA generates GEPs for example involving padding bytes as it slices an
alloca.

My fix disables this folding if the element type is a vector type.  I
believe that this is the only way we can end up with padding.  (We have
no access to DataLayout, so I am not sure if there is an actually robust way
of checking the presence of padding.)

Reviewers: majnemer

Subscribers: llvm-commits, Gerolf

Differential Revision: http://reviews.llvm.org/D20663

llvm-svn: 270826
2016-05-26 07:08:05 +00:00
Peter Collingbourne b9aa1f4a03 MemorySSA: Revert r269678 and r268068; replace with special casing in MemorySSA.
It turns out that too many passes are relying on alias analysis results
for control dependencies. Until we fix that by introducing a more accurate
modelling of control dependencies, special case assume in MemorySSA instead.

Also introduce tests to ensure we don't regress the FunctionAttrs or LICM
passes.

Differential Revision: http://reviews.llvm.org/D20658

llvm-svn: 270823
2016-05-26 04:58:46 +00:00
Sanjoy Das a099268e85 [IRCE] Optimize conjunctions of range checks
After this change, we do the expected thing for cases like

```
Check0Passed = /* range check IRCE can optimize */
Check1Passed = /* range check IRCE can optimize */
if (!(Check0Passed && Check1Passed))
  throw_Exception();
```

llvm-svn: 270804
2016-05-26 00:09:02 +00:00
Davide Italiano 1021c68e92 [PM] Port PartiallyInlineLibCalls to the new pass manager.
llvm-svn: 270798
2016-05-25 23:38:53 +00:00
Hal Finkel 2f6886844e Look for a loop's starting location in the llvm.loop metadata
Getting accurate locations for loops is important, because those locations are
used by the frontend to generate optimization remarks. Currently, optimization
remarks for loops often appear on the wrong line, often the first line of the
loop body instead of the loop itself. This is confusing because that line might
itself be another loop, or might be somewhere else completely if the body was
inlined function call. This happens because of the way we find the loop's
starting location. First, we look for a preheader, and if we find one, and its
terminator has a debug location, then we use that. Otherwise, we look for a
location on an instruction in the loop header.

The fallback heuristic is not bad, but will almost always find the beginning of
the body, and not the loop statement itself. The preheader location search
often fails because there's often not a preheader, and even when there is a
preheader, depending on how it was formed, it sometimes carries the location of
some preceding code.

I don't see any good theoretical way to fix this problem. On the other hand,
this seems like a straightforward solution: Put the debug location in the
loop's llvm.loop metadata. A companion Clang patch will cause Clang to insert
llvm.loop metadata with appropriate locations when generating debugging
information. With these changes, our loop remarks have much more accurate
locations.

Differential Revision: http://reviews.llvm.org/D19738

llvm-svn: 270771
2016-05-25 21:42:37 +00:00
Ahmed Bougacha 201b97f550 [TLI] Also cover Linux 64 libfunc (stat64, ...) prototype checking.
My script missed those in r270750.

llvm-svn: 270763
2016-05-25 21:16:33 +00:00
Ahmed Bougacha 1fe3f1ca50 [TLI] Fix NumParams==0 prototype checking typo.
There was a typo in r267758. It caused invalid accesses when
given something like "void @free(...)", as NumParams == 0, and
we then try to look at the 0th parameter.

Turns out, most of these were untested; add both attribute
and missing-prototype checks for all libc libfuncs.

Differential Revision: http://reviews.llvm.org/D20543

llvm-svn: 270750
2016-05-25 20:22:45 +00:00
Reid Kleckner c0a0363d5c [IR] Copy comdats in GlobalObject::copyAttributesFrom
This is probably correct for all uses except cross-module IR linking,
where we need to move the comdat from the source module to the
destination module.

Fixes PR27870.

Reviewers: majnemer

Differential Revision: http://reviews.llvm.org/D20631

llvm-svn: 270743
2016-05-25 18:36:22 +00:00
Sanjay Patel aedc347b29 [x86] avoid code explosion from LoopVectorizer for gather loop (PR27826)
By making pointer extraction from a vector more expensive in the cost model,
we avoid the vectorization of a loop that is very likely to be memory-bound:
https://llvm.org/bugs/show_bug.cgi?id=27826

There are still bugs related to this, so we may need a more general solution
to avoid vectorizing obviously memory-bound loops when we don't have HW gather
support.

Differential Revision: http://reviews.llvm.org/D20601

llvm-svn: 270729
2016-05-25 17:27:54 +00:00
Craig Topper 12e322a8cf [X86] Remove the llvm.x86.sse2.storel.dq intrinsic. It hasn't been used in a long time.
llvm-svn: 270677
2016-05-25 06:56:32 +00:00
David Majnemer 124bdb7497 [FunctionAttrs] Volatile loads should disable readonly
A volatile load has side effects beyond what callers expect readonly to
signify.  For example, it is not safe to reorder two function calls
which each perform a volatile load to the same memory location.

llvm-svn: 270671
2016-05-25 05:53:04 +00:00
Davide Italiano 655a145e83 [PM] Port BDCE to the new pass manager.
llvm-svn: 270647
2016-05-25 01:57:04 +00:00
Michael Zolotukhin 8f7a242c7b Re-enable "[LoopUnroll] Enable advanced unrolling analysis by default" one more time.
This reverts commit r270577.

llvm-svn: 270630
2016-05-24 23:00:05 +00:00
Michael Zolotukhin 7216dd4668 [LoopUnrollAnalyzer] Fix a crash in UnrolledInstAnalyzer::visitCastInst.
This fixes PR27847. Now for real.

llvm-svn: 270629
2016-05-24 22:59:58 +00:00
Chad Rosier 47f0148c98 [InstCombine] Clean up and FileCheckize test case.
llvm-svn: 270586
2016-05-24 17:35:49 +00:00
Hans Wennborg b64e4390a3 Revert r270518, which re-enabled "[LoopUnroll] Enable advanced unrolling analysis by default.
Chromium builds are still hitting the assert in PR27874.

llvm-svn: 270577
2016-05-24 16:10:12 +00:00
Sanjay Patel 23019d1006 [ValueTracking, InstSimplify] extend isKnownNonZero() to handle vector constants
Similar in spirit to D20497 :
If all elements of a constant vector are known non-zero, then we can say that the
whole vector is known non-zero.

It seems like we could extend this to FP scalar/vector too, but isKnownNonZero()
says it only works for integers and pointers for now.

Differential Revision: http://reviews.llvm.org/D20544

llvm-svn: 270562
2016-05-24 14:18:49 +00:00
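As a hedged illustration (my own IR sketch, not from the patch's tests) of the kind of fold this enables, a comparison against zero of a value ORed with an all-non-zero vector constant can now be resolved:

```
define <2 x i1> @cmp_or_nonzero_vec(<2 x i32> %x) {
  ; every element of the constant is non-zero, so the OR result is known non-zero
  %o = or <2 x i32> %x, <i32 3, i32 5>
  %c = icmp eq <2 x i32> %o, zeroinitializer
  ret <2 x i1> %c
}
; expected to simplify to: ret <2 x i1> zeroinitializer
```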
Simon Pilgrim 0295fbe1bb [InstCombine][X86][SSE41] The SSE41 PMOVSX intrinsics are auto upgraded now and aren't handled by InstCombine any more
llvm-svn: 270561
2016-05-24 13:52:44 +00:00
Michael Zolotukhin 96c150d154 Revert "Revert r270478 "[LoopUnroll] Enable advanced unrolling analysis by default.""
This reverts commit r270512 and reapplies r270478. Originally it caused
PR27847, but it was fixed in r270517.

llvm-svn: 270518
2016-05-24 01:22:20 +00:00
Michael Zolotukhin 3898b2b587 [LoopUnrollAnalyzer] Fix a crash in UnrolledInstAnalyzer::visitCastInst.
This fixes PR27847.

llvm-svn: 270517
2016-05-24 00:51:01 +00:00
Hans Wennborg 6951028b61 Revert r270478 "[LoopUnroll] Enable advanced unrolling analysis by default."
This caused PR27847.

llvm-svn: 270512
2016-05-23 23:42:35 +00:00
Sanjoy Das aa83c47bab [IRCE] Optimize "uses" not branches; NFCI
This changes IRCE to optimize uses, and not branches.  This change is
NFCI since the uses we do inspect are in practice only ever going to be
the condition use in conditional branches; but this flexibility will
later allow us to analyze more complex expressions than just a direct
branch on a range check.

llvm-svn: 270500
2016-05-23 22:16:45 +00:00
Sanjay Patel adcaef7238 [InstSimplify] add vector tests for isKnownNonZero
llvm-svn: 270498
2016-05-23 22:09:04 +00:00
Gerolf Hoflehner 00e7092f68 [InstCombine] Fix assertion when bitcast is converted to gep
When an aggregate contains an opaque type its size cannot be
determined. This triggers an "Invalid GetElementPtrInst indices for type" assert
in function checkGEPType. The fix suppresses the conversion in this case.

http://reviews.llvm.org/D20319

llvm-svn: 270479
2016-05-23 19:23:17 +00:00
Michael Zolotukhin be080fc51d [LoopUnroll] Enable advanced unrolling analysis by default.
Summary:
This patch turns on LoopUnrollAnalyzer by default. To mitigate compile
time regressions, I chose very conservative thresholds for now. Later we
can make them more aggressive, but it might require being smarter in
which loops we're optimizing. E.g. currently the biggest issue is that
with more aggressive thresholds we unroll many cold loops, which
increases compile time for no performance benefit (performance of those
loops is improved, but it doesn't matter since they are cold).

Test results for compile time(using 4 samples to reduce noise):
```
MultiSource/Benchmarks/VersaBench/ecbdes/ecbdes 5.19%
SingleSource/Benchmarks/Polybench/medley/reg_detect/reg_detect  4.19%
MultiSource/Benchmarks/FreeBench/fourinarow/fourinarow  3.39%
MultiSource/Applications/JM/lencod/lencod 1.47%
MultiSource/Benchmarks/Fhourstones-3_1/fhourstones3_1 -6.06%
```

I didn't see any performance changes in the testsuite, but it improves
some internal tests.

Reviewers: hfinkel, chandlerc

Subscribers: llvm-commits, mzolotukhin

Differential Revision: http://reviews.llvm.org/D20482

llvm-svn: 270478
2016-05-23 19:10:19 +00:00
Sanjay Patel e2e89ef936 [ValueTracking, InstCombine] extend isKnownToBeAPowerOfTwo() to handle vector splat constants
We could try harder to handle non-splat vector constants too, 
but that seems much rarer to me.

Note that the div test isn't resolved because there's a check
for isIntegerTy() guarding that transform.

Differential Revision: http://reviews.llvm.org/D20497

llvm-svn: 270369
2016-05-22 15:41:53 +00:00
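A hedged sketch (my own example, assuming the usual urem-by-power-of-two fold) of what recognizing a splat base constant buys:

```
define <2 x i32> @urem_by_shifted_splat(<2 x i32> %x, <2 x i32> %y) {
  ; %p is a power of two in every lane because the shifted base is the splat <1, 1>
  %p = shl <2 x i32> <i32 1, i32 1>, %y
  %r = urem <2 x i32> %x, %p
  ret <2 x i32> %r
}
; with the splat recognized, InstCombine may rewrite the urem as  and %x, (%p - 1)
```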
David Majnemer 9f92f4c497 [SimplifyCFG] Remove cleanuppads which are empty except for calls to lifetime.end
A cleanuppad is not cheap: it turns into many instructions and results
in additional spills and fills.  It is not worth keeping a cleanuppad
around if all it does is hold a lifetime.end instruction.

N.B.  We first try to merge the cleanuppad with another cleanuppad to
avoid dropping the lifetime and debug info markers.

llvm-svn: 270314
2016-05-21 05:12:32 +00:00
Sanjoy Das be6c7a12cb [GuardWidening] Fix incorrect use of remove_if
I had used `std::remove_if` under the assumption that it moves the
predicate-matching elements to the end, but actually the elements
remaining towards the end (after the iterator returned by
`std::remove_if`) are indeterminate.  Fix the bug (and make the code
more straightforward) by using a temporary SmallVector, and add a test
case demonstrating the issue.

llvm-svn: 270306
2016-05-21 02:24:44 +00:00
Matt Arsenault 2907e51246 Fix constant folding of addrspacecast of null
This should not be making assumptions on the value of
the casted pointer.

llvm-svn: 270293
2016-05-21 00:14:04 +00:00
Sanjay Patel ec4d91a4d7 add test vector sdiv
llvm-svn: 270285
2016-05-20 22:08:40 +00:00
Sanjay Patel 312c9afd90 add test for vector shift
llvm-svn: 270284
2016-05-20 22:08:16 +00:00
Sanjay Patel 54acedf88f add tests for vector urem
llvm-svn: 270271
2016-05-20 20:55:17 +00:00
Sanjay Patel 3eded68bef use FileCheck instead of grep for exact checking
llvm-svn: 270265
2016-05-20 20:07:18 +00:00
Mark Lacey 9b5fcf65ec Functions with differing phis should not be merged.
Check that the incoming blocks of phi nodes are identical, and block
function merging if they are not.

rdar://problem/26255167

Differential Revision: http://reviews.llvm.org/D20462

llvm-svn: 270250
2016-05-20 18:39:11 +00:00
Sanjay Patel 75892a1543 [SimplifyCFG] eliminate switch cases based on known range of switch condition
This was noted in PR24766:
https://llvm.org/bugs/show_bug.cgi?id=24766#c2

We may not know whether the sign bit(s) are zero or one, but we can still
optimize based on knowing that the sign bit is repeated.

Differential Revision: http://reviews.llvm.org/D20275

llvm-svn: 270222
2016-05-20 14:53:09 +00:00
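A small IR sketch of my own (assuming the common sign-extension case) showing a case that becomes removable because the switch condition's known range excludes it:

```
define i32 @switch_on_sext(i8 %x) {
entry:
  ; %c is a sign-extended i8, so its value is known to lie in [-128, 127]
  %c = sext i8 %x to i32
  switch i32 %c, label %default [
    i32 1,   label %one
    i32 300, label %dead        ; outside the known range of %c
  ]
one:
  ret i32 1
dead:
  ret i32 2
default:
  ret i32 0
}
; expected: the "i32 300" case is deleted because it can never be taken
```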
Easwaran Raman bb578ef0dd Allow -inline-threshold to override default threshold.
Before r257832, the threshold used by SimpleInliner was explicitly specified or generated from opt levels and passed to the base class Inliner's constructor. There, it was first overridden by explicitly specified -inline-threshold. The refactoring in r257832 did not preserve this behavior for all opt levels. This change brings back the original behavior.

Differential Revision: http://reviews.llvm.org/D20452

llvm-svn: 270153
2016-05-19 23:02:09 +00:00
Sanjoy Das f5f0331a3b [GuardWidening] Introduce range check merging
Sequences of range checks expressed using guards, like

  guard((I - 2) u< L)
  guard((I - 1) u< L)
  guard((I + 0) u< L)
  guard((I + 1) u< L)
  guard((I + 2) u< L)

can sometimes be combined into a smaller sequence:

  guard((I - 2) u< L AND (I + 2) u< L)

if we can prove that (I - 2) u< L AND (I + 2) u< L implies all of the checks
expressed in the previous sequence.

This change teaches GuardWidening to do this kind of merging when
feasible.

llvm-svn: 270151
2016-05-19 22:55:46 +00:00
Guozhi Wei b1d37199cc [InstCombine] Avoid combining the bitcast of a var that is used as both address and result of load instructions
This patch fixes https://llvm.org/bugs/show_bug.cgi?id=27703.

If there is a sequence of one or more load instructions in which each loaded value is used as the address of a later load, and a bitcast is necessary to change the value type, don't optimize it.

llvm-svn: 270135
2016-05-19 21:07:01 +00:00
Wei Mi 0456d9dd18 Recommit r255691 since PR26509 has been fixed.
llvm-svn: 270113
2016-05-19 20:38:03 +00:00
Peter Collingbourne fe12d0e3e5 CodeGen: Make the global-merge pass independently testable, and add a test.
llvm-svn: 270023
2016-05-19 04:38:56 +00:00
Sanjoy Das b784ed36c0 [GuardWidening] Use getEquivalentICmp to fold constant compares
`ConstantRange::getEquivalentICmp` is more general, and better
factored.

llvm-svn: 270019
2016-05-19 03:53:17 +00:00
Sanjoy Das 083f38939b New pass: guard widening
Summary:
Implement guard widening in LLVM. Description from GuardWidening.cpp:

The semantics of the `@llvm.experimental.guard` intrinsic let LLVM
transform it so that it fails more often than it did before the
transform.  This optimization is called "widening" and can be used to hoist
and common runtime checks in situations like these:

```
%cmp0 = 7 u< Length
call @llvm.experimental.guard(i1 %cmp0) [ "deopt"(...) ]
call @unknown_side_effects()
%cmp1 = 9 u< Length
call @llvm.experimental.guard(i1 %cmp1) [ "deopt"(...) ]
...
```

to

```
%cmp0 = 9 u< Length
call @llvm.experimental.guard(i1 %cmp0) [ "deopt"(...) ]
call @unknown_side_effects()
...
```

If `%cmp0` is false, `@llvm.experimental.guard` will "deoptimize" back
to a generic implementation of the same function, which will have the
correct semantics from that point onward.  It is always _legal_ to
deoptimize (so replacing `%cmp0` with false is "correct"), though it may
not always be profitable to do so.

NB! This pass is a work in progress.  It hasn't been tuned to be
"production ready" yet.  It is known to have quadriatic running time and
will not scale to large numbers of guards

Reviewers: reames, atrick, bogner, apilipenko, nlewycky

Subscribers: mcrosier, llvm-commits

Differential Revision: http://reviews.llvm.org/D20143

llvm-svn: 269997
2016-05-18 22:55:34 +00:00
Dehao Chen f16376b505 Follow-up patch of http://reviews.llvm.org/D19948 to handle missing profiles when simplifying CFG.
Summary: Set the default branch weight to 1:1 if one of the branches has its profile missing when simplifying the CFG.

Reviewers: spatel, davidxl

Subscribers: danielcdh, llvm-commits

Differential Revision: http://reviews.llvm.org/D20307

llvm-svn: 269995
2016-05-18 22:41:03 +00:00
Michael Zolotukhin d2268a73bc [LoopUnrollAnalyzer] Take into account cost of instructions controlling branches, along with their operands.
Previously, we didn't add their cost or their operands' cost, which could've
resulted in unrolling loops for no actual benefit.

llvm-svn: 269985
2016-05-18 21:20:12 +00:00
Matt Arsenault 1735da460b AMDGPU: Other sizes of popcnt are fast
We can chain bcnt instructions together, so
any width popcnt is pretty fast.

llvm-svn: 269950
2016-05-18 16:10:19 +00:00
Matt Arsenault 71fa1f375e AMDGPU: Fix a few slightly broken tests
Fix minor bugs and uses of undef which break when
pointer related optimization passes are run.

llvm-svn: 269944
2016-05-18 15:48:44 +00:00
Davide Italiano 98f7e0e790 [PM] Port per-function SCCP to the new pass manager.
llvm-svn: 269937
2016-05-18 15:18:25 +00:00
Justin Bogner 594e07bd78 [PM] Port DSE to the new pass manager
Patch by JakeVanAdrighem. Thanks!

llvm-svn: 269847
2016-05-17 21:38:13 +00:00
Sanjay Patel 22b01febd4 [InstCombine] add another test for wrong icmp constant (PR27792)
It doesn't matter if the comparison is unsigned; the inc/dec is always signed.

llvm-svn: 269831
2016-05-17 20:20:40 +00:00
Sanjay Patel de96f39392 [InstCombine] add test for wrong icmp constant (PR27792)
The code fix for this was checked in at r269797.

llvm-svn: 269803
2016-05-17 19:25:55 +00:00
Sanjoy Das fd67038c8b [Guards] Add branch metadata when lowering
Guards are expected to basically never fail.  Reflect this in the branch
probabilities in their lowered form.

llvm-svn: 269791
2016-05-17 17:51:19 +00:00
Igor Laevsky 953f2d2a54 [RewriteStatepointsForGC] Remove obsolete assertion
This assertion is no longer necessary since we never record
constants in the live set anyway. (They are never recorded in 
the initial live set, and constant bases are removed near line 2119)

Differential Revision: http://reviews.llvm.org/D20293

llvm-svn: 269764
2016-05-17 13:54:10 +00:00
Benjamin Kramer ca9a0fe2b9 [InstCombine] Don't crash when trying to take an element of a ConstantExpr.
Fixes PR27786.

llvm-svn: 269757
2016-05-17 12:08:55 +00:00
Sanjay Patel e9b2c32e7f [InstCombine] check vector elements before trying to transform LE/GE vector icmp (PR27756)
Fix a bug introduced with rL269426 :
[InstCombine] canonicalize* LE/GE vector integer comparisons to LT/GT (PR26701, PR26819)

We were assuming that a ConstantDataVector / ConstantVector / ConstantAggregateZero operand of
an ICMP was composed of ConstantInt elements, but it might have ConstantExpr or UndefValue 
elements. Handle those appropriately.

Also, refactor this function to join the scalar and vector paths and eliminate the switches.

Differential Revision: http://reviews.llvm.org/D20289

llvm-svn: 269728
2016-05-17 00:57:57 +00:00
Matthew Simpson 37ec5f914e [LAA] Rename forwarding conflict detection option (NFC)
This patch renames the option enabling the store-to-load forwarding conflict
detection optimization. This change was requested in the review of D20241.

llvm-svn: 269668
2016-05-16 17:00:56 +00:00
Xinliang David Li f3c7a35238 [PM] Port indirect call promotion pass to new pass manager
llvm-svn: 269660
2016-05-16 16:31:07 +00:00
Matthew Simpson e43198dc4b [LV] Ensure safe VF for loops with interleaved accesses
The selection of the vectorization factor currently doesn't consider
interleaved accesses. The vectorization factor is based on the maximum safe
dependence distance computed by LAA. However, for loops with interleaved
groups, we should instead base the vectorization factor on the maximum safe
dependence distance divided by the maximum interleave factor of all the
interleaved groups. Interleaved accesses not in a group will be scalarized.

Differential Revision: http://reviews.llvm.org/D20241

llvm-svn: 269659
2016-05-16 15:08:20 +00:00
Sanjay Patel 399780f088 add test to show missing optimization
llvm-svn: 269601
2016-05-15 18:41:18 +00:00
Sanjay Patel ecdd13d788 regenerate checks
llvm-svn: 269596
2016-05-15 18:05:10 +00:00
Elena Demikhovsky ee004bc0a2 Vector GEP - fixed a crash on InstSimplify Pass.
A vector GEP with mixed (vector and scalar) indices caused the InstSimplify pass to crash when all indices were constants.

Differential revision http://reviews.llvm.org/D20149

llvm-svn: 269590
2016-05-15 12:30:25 +00:00
Davide Italiano 9922344178 [PM] Port LowerAtomic to the new pass manager.
llvm-svn: 269511
2016-05-13 22:52:35 +00:00
Michael Zolotukhin 963a6d9c69 Revert "Revert "[Unroll] Implement a conservative and monotonically increasing cost tracking system during the full unroll heuristic analysis that avoids counting any instruction cost until that instruction becomes "live" through a side-effect or use outside the...""
This reverts commit r269395.

Try to reapply with a fix from chapuni.

llvm-svn: 269486
2016-05-13 21:23:25 +00:00
Sanjay Patel 23fa090738 regenerate checks and add a run to show missed shrinkage
llvm-svn: 269449
2016-05-13 18:04:39 +00:00
Sanjay Patel 4e0cf49318 regenerate checks
llvm-svn: 269447
2016-05-13 18:02:16 +00:00
Sanjay Patel 0c8f3f9332 [InstCombine] handle zero constant vectors for LE/GE comparisons too
Enhancement to: http://reviews.llvm.org/rL269426
With discussion in: http://reviews.llvm.org/D17859

This should complete the fixes for: PR26701, PR26819:
https://llvm.org/bugs/show_bug.cgi?id=26701
https://llvm.org/bugs/show_bug.cgi?id=26819
 

llvm-svn: 269439
2016-05-13 17:28:12 +00:00
Jun Bum Lim f28beac419 [MemCpyOpt] Use MaxIntSize in byte instead of bit
Summary: This change fixes the bug in isProfitableToUseMemset() where MaxIntSize should be in bytes, not bits.

Reviewers: arsenm, joker.eph, mcrosier

Subscribers: mcrosier, llvm-commits

Differential Revision: http://reviews.llvm.org/D20176

llvm-svn: 269433
2016-05-13 16:52:24 +00:00
Sanjay Patel b79ab27853 [InstCombine] canonicalize* LE/GE vector integer comparisons to LT/GT (PR26701, PR26819)
*We don't currently handle the  edge case constants (min/max values), so it's not a complete
canonicalization.

To fully solve the motivating bugs, we need to enhance this to recognize a zero vector
too because that's a ConstantAggregateZero which is a ConstantData, not a ConstantVector
or a ConstantDataVector.

Differential Revision: http://reviews.llvm.org/D17859 

llvm-svn: 269426
2016-05-13 15:10:46 +00:00
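A minimal sketch of my own showing the canonical form; the constant is adjusted so the predicate becomes strict:

```
define <4 x i1> @sge_becomes_sgt(<4 x i32> %x) {
  %c = icmp sge <4 x i32> %x, <i32 5, i32 5, i32 5, i32 5>
  ret <4 x i1> %c
}
; expected canonical form: icmp sgt <4 x i32> %x, <i32 4, i32 4, i32 4, i32 4>
```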
Michael Zolotukhin 9be3b8b9bb Revert "[Unroll] Implement a conservative and monotonically increasing cost tracking system during the full unroll heuristic analysis that avoids counting any instruction cost until that instruction becomes "live" through a side-effect or use outside the..."
This reverts commit r269388.

It caused some bots to fail, I'm reverting it until I investigate the
issue.

llvm-svn: 269395
2016-05-13 06:32:25 +00:00
Michael Zolotukhin b7b8052982 [Unroll] Implement a conservative and monotonically increasing cost tracking system during the full unroll heuristic analysis that avoids counting any instruction cost until that instruction becomes "live" through a side-effect or use outside the...
Summary:
...loop after the last iteration.

This is really hard to do correctly. The core problem is that we need to
model liveness through the induction PHIs from iteration to iteration in
order to get the correct results, and we need to correctly de-duplicate
the common subgraphs of instructions feeding some subset of the
induction PHIs. All of this can be driven either from a side effect at
some iteration or from the loop values used after the loop finishes.

This patch implements this by storing the forward-propagating analysis
of each instruction in a cache to recall whether it was free and whether
it has become live and thus counted toward the total unroll cost. Then,
at each sink for a value in the loop, we recursively walk back through
every value that feeds the sink, including looping back through the
iterations as needed, until we have marked the entire input graph as
live. Because we cache this, we never visit instructions more than twice
-- once when we analyze them and put them into the cache, and once when
we count their cost towards the unrolled loop. Also, because the cache
is only two bits and because we are dealing with relatively small
iteration counts, we can store all of this very densely in memory to
avoid this from becoming an excessively slow analysis.

The code here is still pretty gross. I would appreciate suggestions
about better ways to factor or split this up; I've stared too long at
the algorithmic side to really have a good sense of what the design
should probably look like.

Also, it might seem like we should do all of this bottom-up, but I think
that is a red herring. Specifically, the simplification power is *much*
greater working top-down. We can forward propagate very effectively,
even across strange and interesting recurrences around the backedge.
Because we use data to propagate, this doesn't cause a state space
explosion. Doing this level of constant folding, etc, would be very
expensive to do bottom-up because it wouldn't be until the last moment
that you could collapse everything. The current solution is essentially
a top-down simplification with a bottom-up cost accounting which seems
to get the best of both worlds. It makes the simplification incremental
and powerful while leaving everything dead until we *know* it is needed.

Finally, a core property of this approach is its *monotonicity*. At all
times, the current UnrolledCost is a conservatively low estimate. This
ensures that we will never early-exit from the analysis due to exceeding
a threshold when, if we had continued, the cost would have gone back
below the threshold. These kinds of bugs can cause incredibly hard to
track down random changes to behavior.

We could use a similar (but much simpler) technique within the inliner
as well to avoid considering speculated code in the inline cost.

Reviewers: chandlerc

Subscribers: sanjoy, mzolotukhin, llvm-commits

Differential Revision: http://reviews.llvm.org/D11758

llvm-svn: 269388
2016-05-13 01:42:39 +00:00
Michael Zolotukhin a59a308e8d [LoopUnrollAnalyzer] Don't treat gep-instructions with simplified offset as simplified.
Summary:
Currently we consider such instructions as simplified, which is incorrect,
because if their user isn't simplified, we can't actually simplify them too.
This biases our estimates of profitability: for instance the analyzer expects
much more gains from unrolling memcpy loops than there actually are.

Reviewers: hfinkel, chandlerc

Subscribers: mzolotukhin, llvm-commits

Differential Revision: http://reviews.llvm.org/D17365

llvm-svn: 269387
2016-05-13 01:42:34 +00:00
David Majnemer 96f0d383a7 [SCCP] Resolve shifts beyond the bitwidth to undef
Shifts beyond the bitwidth are undef but SCCP resolved them to zero.
Instead, DTRT and resolve them to undef.

This reimplements the transform which caused PR27712.

llvm-svn: 269269
2016-05-12 03:07:40 +00:00
Sanjoy Das e0aa414acf All llvm.deoptimize declarations must use the same calling convention
This new verifier rule lets us unambiguously pick a calling convention
when creating a new declaration for
`@llvm.experimental.deoptimize.<ty>`.  It is also congruent with our
lowering strategy -- since all calls to `@llvm.experimental.deoptimize`
are lowered to calls to `__llvm_deoptimize`, it is reasonable to enforce
a unique calling convention.

Some of the tests that were breaking this verifier rule have had to be
split up into different .ll files.

The inliner was violating this rule as well, and has been fixed to avoid
producing invalid IR.

llvm-svn: 269261
2016-05-12 01:17:38 +00:00
Davide Italiano cd7c84bd8b Revert "[SCCP] Partially propagate informations when the input is not fully defined."
This reverts commit r269105 as it caused PR27712.

llvm-svn: 269252
2016-05-11 23:06:10 +00:00
Easwaran Raman 9b792923d0 Revert r269131
llvm-svn: 269138
2016-05-10 23:26:04 +00:00
Dehao Chen b76e5d948a Propagate branch metadata when some branch probability is missing.
Summary: In sample profiles, some branches may have missing profile data due to profile inaccuracy. We want existing branch probabilities to remain valid after propagation.

Reviewers: hfinkel, davidxl, spatel

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D19948

llvm-svn: 269137
2016-05-10 23:07:19 +00:00
Easwaran Raman 7eccf4ee0e Reapply r266477 and r266488
llvm-svn: 269131
2016-05-10 22:03:23 +00:00
Xinliang David Li da1955835d [PM]: port IR based profUse pass to new pass manager
llvm-svn: 269129
2016-05-10 21:59:52 +00:00
Tim Northover 3961735f03 Revert "MemCpyOpt: combine local load/store sequences into memcpy."
This reverts commit r269125. It was in my tree when I ran "git svn dcommit".
It's really still under review.

llvm-svn: 269127
2016-05-10 21:49:40 +00:00
Tim Northover 6c65c71639 MemCpyOpt: combine local load/store sequences into memcpy.
Sort of the BB-local equivalent to idiom-recognizer: if we have a basic-block
that really implements a memcpy operation, backends can benefit from seeing
this.

llvm-svn: 269125
2016-05-10 21:48:11 +00:00
Hans Wennborg 719b26ba54 Loop unroller: set thresholds for optsize and minsize functions to zero
Before r268509, Clang would disable the loop unroll pass when optimizing
for size. That commit enabled it in order to support unroll pragmas
in -Os builds. However, this regressed binary size in one of Chromium's
DLLs by ~100 KB.

This restores the original behaviour of no unrolling at -Os, but doing it
in LLVM instead of Clang makes more sense, and also allows the pragmas to
keep working.

Differential revision: http://reviews.llvm.org/D20115

llvm-svn: 269124
2016-05-10 21:45:55 +00:00
Lawrence Hu e58a814c07 Enable loopreroll for sext of loop control only IV
This patch extends loopreroll to allow the instruction chain
        of the loop-control-only IV to contain sext.

        Differential Revision: http://reviews.llvm.org/D19820

llvm-svn: 269121
2016-05-10 21:16:49 +00:00
Lawrence Hu fe7c87beac Revert r269084: Enable loopreroll for sext of loop control only IV
llvm-svn: 269119
2016-05-10 21:11:09 +00:00
Lawrence Hu 4c623d27b5 Revert r269093: Enable loopreroll for sext of loop control only IV
llvm-svn: 269117
2016-05-10 21:04:28 +00:00
Sanjay Patel 6786bc5390 [InstSimplify] use computeKnownBits on shift amount operands
Do simplifications common to all shift instructions based on the amount shifted:
1. If the shift amount is known larger than the bitwidth, the result is undefined.
2. If the valid bits of the shift amount are all known to be 0, it's a shift by zero, so the shift operand is the result.

Note that we could generalize the shift-by-zero transform into a shift-by-constant if all of the valid bits in the shift
amount are known, but that would have to be done in InstCombine rather than here because it would mean we need to create
a new shift instruction.

Differential Revision: http://reviews.llvm.org/D19874

llvm-svn: 269114
2016-05-10 20:46:54 +00:00
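A hedged IR sketch of my own (not the patch's tests) for the second simplification: only the low five bits matter for an i32 shift, and they are known to be zero here, so the shift operand itself is a valid result:

```
define i32 @shift_amount_bits_known_zero(i32 %x, i32 %y) {
  ; the low 5 bits of %amt are zero, so %amt is either 0 or >= 32 (an undefined shift)
  %amt = and i32 %y, -32
  %r = shl i32 %x, %amt
  ret i32 %r
}
; expected to simplify to: ret i32 %x
```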
Chad Rosier 4e6cda2db5 [InstCombine] Fold icmp ugt/ult (udiv i32 C2, X), C1.
This patch adds support for two optimizations:
icmp ugt (udiv C2, X), C1 -> icmp ule X, C2/(C1+1)
icmp ult (udiv C2, X), C1 -> icmp ugt X, C2/C1

Differential Revision: http://reviews.llvm.org/D20123

llvm-svn: 269109
2016-05-10 20:22:09 +00:00
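Plugging concrete constants into the first form (a sketch of my own): with C2 = 64 and C1 = 3, the fold gives icmp ule X, 64/(3+1) = 16:

```
define i1 @udiv_ugt(i32 %x) {
  %d = udiv i32 64, %x
  %c = icmp ugt i32 %d, 3
  ret i1 %c
}
; expected: icmp ule i32 %x, 16
```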
Davide Italiano 7860c9bbf4 [SCCP] Partially propagate informations when the input is not fully defined.
With this patch:
%r1 = lshr i64 -1, 4294967296 -> undef

Before this patch:
%r1 = lshr i64 -1, 4294967296 -> 0

llvm-svn: 269105
2016-05-10 19:49:47 +00:00
Justin Bogner da0fe183c3 LPM: Drop require<loops> from these tests, it's redundant. NFC
The LoopPassManager needs to calculate the loops analysis in order to
iterate over the loops at all. Requiring it is redundant and just adds
noise to the RUN lines here.

llvm-svn: 269097
2016-05-10 18:28:10 +00:00
Rafael Espindola 32483a7641 Make "@name =" mandatory for globals in .ll files.
An oddity of the .ll syntax is that the "@var = " in

@var = global i32 42

is optional. Writing just

global i32 42

is equivalent to

@0 = global i32 42

This means that there is a pretty big First set at the top level. The
current implementation maintains it manually. I was trying to refactor
it, but then started wondering why keep it at all. I personally find the
above syntax confusing. It looks like something is missing.

This patch removes the feature and simplifies the parser.

llvm-svn: 269096
2016-05-10 18:22:45 +00:00
Lawrence Hu b68f16e007 Enable loopreroll for sext of loop control only IV
This patch extends loopreroll to allow the instruction chain
    of the loop-control-only IV to contain sext.

    Differential Revision: http://reviews.llvm.org/D19820

llvm-svn: 269093
2016-05-10 18:00:42 +00:00
Rong Xu b6211a0b4f [PGO] resubmit r268969
Put the test into a target specific directory.

llvm-svn: 269090
2016-05-10 17:45:33 +00:00
Lawrence Hu 8cc3b37d2c Enable loopreroll for sext of loop control only IV
This patch extends loopreroll to allow the instruction chain
    of the loop-control-only IV to contain sext.

llvm-svn: 269084
2016-05-10 17:42:27 +00:00
James Molloy aa1d638800 Revert "[VectorUtils] Query number of sign bits to allow more truncations"
This was a fairly simple patch but on closer inspection was seriously flawed and caused PR27690.

This reverts commit r268921.

llvm-svn: 269051
2016-05-10 12:27:23 +00:00
Chuang-Yu Cheng 175741d5a7 Update Debug Intrinsics in RewriteUsesOfClonedInstructions in LoopRotation
Loop rotation clones instructions from the old header into the preheader. If
there were uses of values produced by these instructions that were outside
the loop, we have to insert PHI nodes to merge the two values. If the values
are used by DbgIntrinsics they will be used as a MetadataAsValue of a
ValueAsMetadata of the original values, and iterating all of the uses of the
original value will not update the DbgIntrinsics. The new code checks if the
values are used by DbgIntrinsics and if so, updates them using essentially
the same logic as the original code.

The attached testcase demonstrates the issue. Without the fix, the
DbgIntrinsic outside the loop uses values computed inside the loop, even
though these values do not dominate the DbgIntrinsic.

Author: Thomas Jablin (tjablin)
Reviewers: dblaikie aprantl kbarton hfinkel cycheng

http://reviews.llvm.org/D19564

llvm-svn: 269034
2016-05-10 09:45:44 +00:00
Arnaud A. de Grandmaison 333ef381b8 [InstCombine] Remove trivially empty va_start/va_end and va_copy/va_end ranges.
When a va_start or va_copy is immediately followed by a va_end (ignoring
debug information or other start/end in between), then it is safe to
remove the pair. As this code shares some commonalities with the lifetime
markers, this has been factored to helper functions.

This InstCombine pattern kicks in 3 times when running the LLVM test
suite.

llvm-svn: 269033
2016-05-10 09:24:49 +00:00
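A minimal IR sketch of my own (with an ad-hoc 24-byte buffer standing in for the target's va_list) of the trivially empty range that now gets deleted:

```
declare void @llvm.va_start(i8*)
declare void @llvm.va_end(i8*)

define void @empty_va_range(i32 %n, ...) {
  %buf = alloca [24 x i8]
  %ap = getelementptr inbounds [24 x i8], [24 x i8]* %buf, i32 0, i32 0
  call void @llvm.va_start(i8* %ap)
  ; nothing reads the va_list between the start and the end
  call void @llvm.va_end(i8* %ap)
  ret void
}
; expected: both intrinsic calls are removed
```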
Renato Golin d876eecf02 Revert "[PGO] Fix __llvm_profile_raw_version linkage in MACHO IR instrumentation generates a COMDAT symbol __llvm_profile_raw_version to overwrite the same symbol in profile run-time to distinguish IR profiles from Clang generated profiles. In MACHO, LinkOnceODR linkage is used due to the lack of COMDAT support."
This reverts commits r268969, r268979 and r268984. They had target specific tests
in generic directories without the correct specifiers and made it hard for us to
come up with a good solution by rapidly committing untested changes.

This test needs to be in a target specific directory or have the correct REQUIRED
identifier.

llvm-svn: 269027
2016-05-10 08:23:57 +00:00
Elena Demikhovsky c434d091c5 [LoopVectorize] Handling induction variable with non-constant step.
Allow vectorization when the step is a loop-invariant variable.
This is the loop example that is getting vectorized after the patch:

 int int_inc;
 int bar(int init, int *restrict A, int N) {

  int x = init;
  for (int i=0;i<N;i++){
    A[i] = x;
    x += int_inc;
  }
  return x;
 }

"x" is an induction variable with *loop-invariant* step.
But it is not a primary induction. A primary induction variable with a non-constant step is not handled yet.

Differential Revision: http://reviews.llvm.org/D19258

llvm-svn: 269023
2016-05-10 07:33:35 +00:00
Sanjoy Das 12c91dc4c8 [ValueTracking] Use guards to prove non-nullness of a value
Reviewers: apilipenko, majnemer, reames

Subscribers: mcrosier, llvm-commits

Differential Revision: http://reviews.llvm.org/D20044

llvm-svn: 269008
2016-05-10 02:35:44 +00:00
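A hedged sketch of my own of the pattern this helps: after the guard, %p is known non-null on the fall-through path, so a pass querying isKnownNonZero there may fold the later null check:

```
declare void @llvm.experimental.guard(i1, ...)

define i1 @null_check_after_guard(i8* %p) {
  %nonnull = icmp ne i8* %p, null
  call void (i1, ...) @llvm.experimental.guard(i1 %nonnull) [ "deopt"() ]
  %c = icmp eq i8* %p, null
  ret i1 %c
}
; expected: %c can be resolved to false
```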
Evgeniy Stepanov 6694ec7406 Don't inline functions with different SafeStack attributes.
llvm-svn: 268999
2016-05-10 00:33:07 +00:00
Adam Nemet 0a77dfad95 [LV] Hint at the new loop distribution pragma in optimization remark
When we encounter unsafe memory dependencies, loop distribution could
help.

Even though the diagnostic is in LAA, it's currently only emitted in
the vectorizer.

llvm-svn: 268987
2016-05-09 23:03:44 +00:00
Rong Xu 2c9bdd0d3d Fix buildbot failure from r268968.
llvm-svn: 268984
2016-05-09 22:45:47 +00:00
Sanjay Patel 0f153424a9 [Inliner] don't assume that a Constant alloca size is a ConstantInt (PR27277)
Differential Revision: http://reviews.llvm.org/D20077

llvm-svn: 268980
2016-05-09 21:51:53 +00:00
Rong Xu c5508046b8 Fix buildbot failure from r268968.
llvm-svn: 268979
2016-05-09 21:51:50 +00:00
Rong Xu a12f6d3c7b [PGO] Fix __llvm_profile_raw_version linkage in MACHO
IR instrumentation generates a COMDAT symbol __llvm_profile_raw_version to
overwrite the same symbol in profile run-time to distinguish IR profiles from
Clang generated profiles. In MACHO, LinkOnceODR linkage is used due to the
lack of COMDAT support.

But LinkOnceODR linkage might have a .weak_def_can_be_hidden assembly directive,
while the weak variable in the run-time has a .weak_definition directive. The linker
will not merge these two symbols even if they have the same name. The end result
is IR profiles are not properly flagged in MACHO.

This patch changes the linkage for __llvm_profile_raw_version in each module to
LinkOnceAny so that it has same .weak_definition directive as in the run-time.

Differential Revision: http://reviews.llvm.org/D20078

llvm-svn: 268969
2016-05-09 21:03:06 +00:00
Chad Rosier 131a42ccdf [InstCombine] Fold icmp eq/ne (udiv i32 A, B), 0 -> icmp ugt/ule B, A.
Differential Revision: http://reviews.llvm.org/D20036

llvm-svn: 268960
2016-05-09 19:30:20 +00:00
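A minimal sketch of my own for the eq form; A udiv B is zero exactly when B is unsigned-greater than A:

```
define i1 @udiv_eq_zero(i32 %a, i32 %b) {
  %d = udiv i32 %a, %b
  %c = icmp eq i32 %d, 0
  ret i1 %c
}
; expected: icmp ugt i32 %b, %a
```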
Sanjay Patel da7fe0c4a4 clean up; NFC
llvm-svn: 268949
2016-05-09 18:54:14 +00:00
Joerg Sonnenberger 8ffe7ab7c2 Optimize a printf with a double percent to putchar.
llvm-svn: 268922
2016-05-09 14:36:16 +00:00
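A hedged sketch of my own of the input: the format string is just "%%", which prints a single '%', so the call can become putchar(37):

```
@fmt = private unnamed_addr constant [3 x i8] c"%%\00"

declare i32 @printf(i8*, ...)

define void @print_percent() {
  call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([3 x i8], [3 x i8]* @fmt, i32 0, i32 0))
  ret void
}
; expected: the printf call is replaced with  call i32 @putchar(i32 37)
```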
James Molloy 5c20e27b7f [VectorUtils] Query number of sign bits to allow more truncations
When deciding if a vector calculation can be done in a smaller bitwidth, use sign bit information from ValueTracking to add more information and allow more truncations.

llvm-svn: 268921
2016-05-09 14:32:30 +00:00
Mehdi Amini 581f0e1451 Refactor stripDebugInfo(Function) to handle intrinsic
This moves the code that handles stripping debug info intrinsics from
StripDebugInfo(Module) to StripDebugInfo(Function). The latter is
already walking every instruction, so it makes sense to do it at the
same time.
This also makes stripDebugInfo(Function) more useful as an API: it
really drops every piece of debug info in the Function.
Finally, the existing code was triggering an assertion when the Module
is not fully materialized.

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 268847
2016-05-07 04:10:52 +00:00
Simon Pilgrim 45964c3742 [SLPVectorizer][X86] Regenerated SEXT/ZEXT cast vectorization tests
Added 256-bit vector test as well

llvm-svn: 268811
2016-05-06 22:22:18 +00:00
Philip Reames 6f4d0088c6 Reapply 267210 with fix for PR27490
Original Commit Message
Extend load/store type canonicalization to handle unordered operations

Extend the type canonicalization logic to work for unordered atomic loads and stores.  Note that while this change itself is fairly simple and low risk, there's a reasonable chance this will expose problems in the backends by suddenly generating IR they wouldn't have seen before.  Anything of this nature will be an existing bug in the backend (you could write an atomic float load), but this will definitely change the frequency with which such cases are encountered.  If you see problems, feel free to revert this change, but please make sure you collect a test case. 

Note that the concern about lowering is now much less likely.  PR27490 proved that we already *were* mucking with the types of ordered atomics and volatiles.  As a result, this change doesn't introduce as much new behavior as originally thought.

llvm-svn: 268809
2016-05-06 22:17:01 +00:00
Philip Reames 4a3c3b66d7 [GVN] PRE of unordered loads
Again, fairly simple.  Only change is ensuring that we actually copy the property of the load correctly.  The aliasing legality constraints were already handled by the FRE patches.  There's nothing special about unorder atomics from the perspective of the PRE algorithm itself.

llvm-svn: 268804
2016-05-06 21:43:51 +00:00
Simon Pilgrim 2def0a878a [SLPVectorizer][X86] Added BSWAP/BITREVERSE vectorization tests
llvm-svn: 268803
2016-05-06 21:41:55 +00:00
Simon Pilgrim a2220ea456 [SLPVectorizer][X86] Added CTPOP/CTLZ/CTTZ vectorization tests
llvm-svn: 268800
2016-05-06 21:33:01 +00:00
Philip Reames 1fdce639d2 [GVN] Handle unordered atomics in cross block FRE
You'll note there are essentially no code changes here.  Cross block FRE heavily reuses code from the block local FRE.  All of the tricky parts were done as part of the previous patch and the refactoring that removed the original code duplication.  

llvm-svn: 268775
2016-05-06 18:46:45 +00:00
Philip Reames ae8997f496 [GVN] Do local FRE for unordered atomic loads
This patch is the first in a small series teaching GVN to optimize unordered loads aggressively. This change just handles block local FRE because that's the simplest thing which lets me test MDA, and the AvailableValue pieces. Somewhat surprisingly, MDA appears fine and only a couple of small changes are needed in GVN.

Once this is in, I'll tackle non-local FRE and PRE. The former looks like a natural extension of this; the latter will require a couple of minor changes.

Differential Revision: http://reviews.llvm.org/D19440

llvm-svn: 268770
2016-05-06 18:17:13 +00:00
Sanjay Patel 1cb6241a89 [SimplifyCFG] propagate branch metadata when creating select (retry r268550 / r268751 with possible fix)
Retrying r268550/r268751, which were reverted at r268577/r268765 due to a memory sanitizer failure.
I have not been able to reproduce that failure, but I've taken another guess at fixing
the problem in this version of the patch and will watch for another failure.

Original commit message:
Unlike earlier similar fixes, we need to recalculate the branch weights
in this case.

Differential Revision: http://reviews.llvm.org/D19674

llvm-svn: 268767
2016-05-06 18:07:46 +00:00
Sanjay Patel 84a0bf64a8 revert r268751 - caused same failures on msan bot
llvm-svn: 268765
2016-05-06 17:51:37 +00:00
Sanjay Patel 6609510c32 [SimplifyCFG] propagate branch metadata when creating select (retry r268550 with possible fix)
Retrying r268550, which was reverted at r268577 due to a memory sanitizer failure.
I have not been able to reproduce that failure, but I've taken a guess at fixing
the problem in this version of the patch and will watch for another failure.

Original commit message:
Unlike earlier similar fixes, we need to recalculate the branch weights
in this case.

Differential Revision: http://reviews.llvm.org/D19674

llvm-svn: 268751
2016-05-06 17:07:47 +00:00
Chad Rosier 4ab37c0037 [SimplifyCFG] Prefer a simplification based on a dominating condition.
Rather than merge two branches with a common destination.
Differential Revision: http://reviews.llvm.org/D19743

llvm-svn: 268735
2016-05-06 14:25:14 +00:00
Xinliang David Li 8aebf44c97 [PM] port IR based PGO prof-gen pass to new pass manager
llvm-svn: 268710
2016-05-06 05:49:19 +00:00
Davide Italiano f54f2f0893 [PM] Port Interprocedural SCCP to the new pass manager.
llvm-svn: 268684
2016-05-05 21:05:36 +00:00
Dehao Chen f50c67ce7c Revert http://reviews.llvm.org/D19926 as it breaks tests.
llvm-svn: 268681
2016-05-05 20:47:53 +00:00
Dehao Chen e48b4ee98c Simplify CFG before assigning discriminator.
Summary: We need to clean up the CFG before assigning discriminators to minimize the impact of optimization on debug info.

Reviewers: davidxl, dblaikie, dnovillo

Subscribers: dnovillo, danielcdh, llvm-commits

Differential Revision: http://reviews.llvm.org/D19926

llvm-svn: 268675
2016-05-05 20:18:49 +00:00
Chad Rosier 25cfb7dbd6 [ValueTracking] Improve isImpliedCondition for matching LHS and Imm RHSs.
llvm-svn: 268636
2016-05-05 15:39:18 +00:00
Silviu Baranga c05bab8a9c [LV] Identify more induction PHIs by coercing expressions to AddRecExprs
Summary:
Some PHIs can have expressions that are not AddRecExprs due to the presence
of sext/zext instructions. In order to prevent the Loop Vectorizer from
bailing out when encountering these PHIs, we now coerce the SCEV
expressions to AddRecExprs using SCEV predicates (when possible).

We only do this when the alternative would be to not vectorize.

Reviewers: mzolotukhin, anemet

Subscribers: mssimpso, sanjoy, mzolotukhin, llvm-commits

Differential Revision: http://reviews.llvm.org/D17153

llvm-svn: 268633
2016-05-05 15:20:39 +00:00
Davide Italiano 344e838fea [PM] Port EliminateAvailableExternally pass to the new pass manager.
llvm-svn: 268599
2016-05-05 02:37:32 +00:00
Davide Italiano 164b9bc6fe [PM] Port ConstantMerge to the new pass manager.
llvm-svn: 268582
2016-05-05 00:51:09 +00:00
Adam Nemet 3c5eabfcbc [LoopDataPrefetch] Add optimization remark
With -Rpass=loop-data-prefetch, show the memory access that got
prefetched.

llvm-svn: 268578
2016-05-05 00:08:15 +00:00
Vitaly Buka fdcea9d78a Revert "[SimplifyCFG] propagate branch metadata when creating select"
MemorySanitizer: use-of-uninitialized-value
0x4910e47 in count /mnt/b/sanitizer-buildbot2/sanitizer-x86_64-linux-bootstrap/build/llvm/include/llvm/Support/MathExtras.h:159:12
0x4910e47 in countLeadingZeros<unsigned long> /mnt/b/sanitizer-buildbot2/sanitizer-x86_64-linux-bootstrap/build/llvm/include/llvm/Support/MathExtras.h:183
0x4910e47 in FitWeights /mnt/b/sanitizer-buildbot2/sanitizer-x86_64-linux-bootstrap/build/llvm/lib/Transforms/Utils/SimplifyCFG.cpp:855
0x4910e47 in SimplifyCondBranchToCondBranch /mnt/b/sanitizer-buildbot2/sanitizer-x86_64-linux-bootstrap/build/llvm/lib/Transforms/Utils/SimplifyCFG.cpp:2895

This reverts commit 609f4dd4bf3bc735c8c047a4d4b0a8e9e4d202e2.

llvm-svn: 268577
2016-05-04 23:59:33 +00:00
Balaram Makam 569eaec5f3 "Reapply r268521 "[InstCombine] Canonicalize icmp instructions based on dominating conditions.""
This reapplies commit r268521, that was reverted in r268530 due to a test failure in select-implied.ll
Modified the test case to reflect the new change.

llvm-svn: 268557
2016-05-04 21:32:14 +00:00
Sanjay Patel 7e8c285814 [SimplifyCFG] propagate branch metadata when creating select
Unlike earlier similar fixes, we need to recalculate the branch weights
in this case.

Differential Revision: http://reviews.llvm.org/D19674

llvm-svn: 268550
2016-05-04 20:48:24 +00:00
Hal Finkel e2b89118bd [ConstantFold] Don't try to strip fp -> int bitcasts to simplify icmps
ConstantFold has logic to take icmp (bitcast x to y), null and strip the
bitcast. This makes sense in general, but not if x has floating-point type. In
this case, we'd need a fcmp, not an icmp, and the code will assert. We normally
don't see this situation because we constant fold fp -> int bitcasts, however,
we'll see it for bitcasts of ppc_fp128 -> i128. This is because that bitcast is
Endian-dependent, and as a result, we don't simplify it in ConstantFold (we
could, but no one has yet added the necessary logic). Regardless, ConstantFold
should not depend on that canonicalization for correctness.

llvm-svn: 268534
2016-05-04 19:37:08 +00:00
Balaram Makam 31e7e13789 Revert "[InstCombine] Canonicalize icmp instructions based on dominating conditions."
This reverts commit 573a40f79b35cf3e71db331bb00f6a84f03b835d.

llvm-svn: 268530
2016-05-04 18:37:35 +00:00
Marianne Mailhot-Sarrasin b192670279 Adding test cases showing the behavior of LoopUnrollPass according to optnone and optsize attributes
The unroll pass was disabled by clang in /Os. These new test cases show that the pass will behave correctly even if it is not fully disabled. This patch is related in some way to the clang commit (http://reviews.llvm.org/D19827), which re-enables the pass in /Os.

Differential Revision: http://reviews.llvm.org/D19870

llvm-svn: 268524
2016-05-04 17:45:40 +00:00
Balaram Makam cf3bcb2625 [InstCombine] Canonicalize icmp instructions based on dominating conditions.
Summary:
    This patch canonicalizes conditions based on the constant range information
    of the dominating branch condition.
    For example:

      %cmp = icmp slt i64 %a, 0
      br i1 %cmp, label %land.lhs.true, label %lor.rhs
      lor.rhs:
        %cmp2 = icmp sgt i64 %a, 0

    Would now be canonicalized into:

      %cmp = icmp slt i64 %a, 0
      br i1 %cmp, label %land.lhs.true, label %lor.rhs
      lor.rhs:
        %cmp2 = icmp ne i64 %a, 0

Reviewers: mcrosier, gberry, t.p.northover, llvm-commits, reames, hfinkel, sanjoy, majnemer

Subscribers: MatzeB, majnemer, mcrosier

Differential Revision: http://reviews.llvm.org/D18841

llvm-svn: 268521
2016-05-04 17:34:20 +00:00
Hans Wennborg 0c3518e84b [SimplifyCFG] isSafeToSpeculateStore now ignores debug info
This patch fixes PR27615.

@llvm.dbg.value instructions no longer count towards the maximum number of
instructions to look back at in the instruction list when searching for a
store instruction. This should make the output consistent between debug and
non-debug build.

Patch by Henric Karlsson <henric.karlsson@ericsson.com>!

Differential Revision: http://reviews.llvm.org/D19912

llvm-svn: 268512
2016-05-04 15:40:57 +00:00
Igor Laevsky fb1811d3a0 [RS4GC] Use SetVector/MapVector instead of DenseSet/DenseMap to guarantee stable ordering
Goal of this change is to guarantee stable ordering of the statepoint arguments and other 
newly inserted values such as gc.relocates. Previously we had explicit sorting in a couple
of places. However for unnamed values ordering was partial and overall we didn't have any 
strong invariant regarding it. This change switches all data structures to use SetVector's
and MapVector's which provide possibility for deterministic iteration over them.
Explicit sorting is now redundant and was removed.

Differential Revision: http://reviews.llvm.org/D19669

llvm-svn: 268502
2016-05-04 14:55:36 +00:00
David Majnemer 3918cdd2a1 [ConstantFolding, ValueTracking] Fold constants involving bitcasts of ConstantVector
We assumed that ConstantVectors would be rather uninteresting from the
perspective of analysis.  However, this is not the case due to a quirk
of how LLVM handles vectors of i1.  Vectors of i1 are not
ConstantDataVectors like vectors of i8, i16, i32 or i64 because i1's
SizeInBits differs from its StoreSizeInBytes.  This leads to it being
categorized as a ConstantVector instead of a ConstantDataVector.

Instead, treat ConstantVector more uniformly.

This fixes PR27591.

llvm-svn: 268479
2016-05-04 06:13:33 +00:00
David Majnemer 95549497ec [GlobalDCE, Misc] Don't remove functions referenced by ifuncs
We forgot to consider the target of ifuncs when considering if a
function was alive or dead.

N.B. Also update a few auxiliary tools like bugpoint and
verify-uselistorder.

This fixes PR27593.

llvm-svn: 268468
2016-05-04 00:20:48 +00:00
Justin Bogner d0d2341f30 PM: Port LoopRotation to the new loop pass manager
llvm-svn: 268452
2016-05-03 22:02:31 +00:00
Justin Bogner ab6a513b4e PM: Port LoopSimplifyCFG to the new pass manager
llvm-svn: 268446
2016-05-03 21:47:32 +00:00
Sanjoy Das 8a004551d0 [RS4GC] Add a test case around calling conventions; NFC
llvm-svn: 268436
2016-05-03 20:58:10 +00:00
Davide Italiano 66228c4cf1 [IPO/GlobalDCE] Port to the new pass manager.
Differential Revision:  http://reviews.llvm.org/D19782

llvm-svn: 268425
2016-05-03 19:39:15 +00:00
Jack Liu f101c0f7a1 [SROA] Function canConvertValue needs to check whether both NewTy and OldTy pointers are
pointing to the same addr space. This can prevent SROA from creating a bitcast
between pointers with different addr spaces.

Differential Revision: http://reviews.llvm.org/D19697

llvm-svn: 268424
2016-05-03 19:30:48 +00:00
Jack Liu 430e2c2140 Revert 268409 due to missing comment.
llvm-svn: 268421
2016-05-03 19:15:02 +00:00
Jack Liu 1ff4a0b7ee (no commit message)
llvm-svn: 268409
2016-05-03 18:01:43 +00:00
Sanjoy Das 4ae3920c5b [LICM] Kill SCEV loop dispositions if needed
SCEV caches whether SCEV expressions are loop invariant, variant or
computable.  LICM breaks this cache, almost by definition; so clear the
SCEV disposition cache if LICM changed anything.

llvm-svn: 268408
2016-05-03 17:50:11 +00:00
Sanjoy Das 7e7a5a050a Use all_of instead of a raw loop; NFC
Added some tests despite being NFC, since it looks like nothing was
exercising the "all incoming values to exit PHIs are same" logic.

llvm-svn: 268407
2016-05-03 17:50:06 +00:00
Sanjoy Das 905fc27ebf [LoopDeletion] Clear SCEV loop dispositions
`Loop::makeLoopInvariant` can hoist instructions out of loops, so loop
dispositions for the loop it operated on may need to be cleared.  We can
be smarter here (especially around how `forgetLoopDispositions` is
implemented), but let's be correct first.

Fixes PR27570.

llvm-svn: 268406
2016-05-03 17:50:02 +00:00
Anna Thomas 43d7e1cbff Fold compares irrespective of whether allocation can be elided
Summary
When a non-escaping pointer is compared to a global value, the
comparison can be folded even if the corresponding malloc/allocation
call cannot be elided.
We need to make sure the global value is not null, since comparisons to
null cannot be folded.

In the future, we should also handle cases where the comparison
instruction dominates the pointer escape.
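
For example (a hypothetical sketch of the kind of code this affects), the
comparison below can be folded to false even though the allocation itself
is not removed, because p never escapes and &g is known to be non-null:

    #include <cstdlib>

    int g;

    int compareToGlobal() {
      int *p = (int *)std::malloc(sizeof(int));
      *p = 42;
      bool same = (p == &g);   // foldable to false: p does not escape and
                               // &g cannot be null
      std::free(p);
      return same;
    }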

Reviewers: sanjoy
Subscribers: s.egerton, llvm-commits

Differential Revision: http://reviews.llvm.org/D19549

llvm-svn: 268390
2016-05-03 14:58:21 +00:00
Kristof Beyls c08f70588d Mark that SpeculativeExecution preserves Globals Alias Analysis.
A few benchmarks with lots of accesses to global variables in the hot
loops regressed a lot since r266399, which added the
SpeculativeExecution pass to the default pipeline. The problem is that
this pass doesn't mark Globals Alias Analysis as preserved. Globals
Alias Analysis is computed in a module pass, whereas
SpeculativeExecution is a function pass, and many of the passes that depend
on Globals Alias Analysis to optimize these benchmarks are also
function passes. As such, the Globals Alias Analysis information cannot
be recomputed between SpeculativeExecution and the following function
passes needing that information.

SpeculativeExecution doesn't invalidate Globals Alias Analysis, so mark
it as such to fix those performance regressions.

Differential Revision: http://reviews.llvm.org/D19806

llvm-svn: 268370
2016-05-03 08:33:26 +00:00
Jack Liu cd777c8b35 test commit
llvm-svn: 268358
2016-05-03 04:06:24 +00:00
David Majnemer 3d90bb79c4 [LoopUnroll] Unroll loops which have exit blocks to EH pads
We were overly cautious in our analysis of loops which have invokes that
unwind to EH pads.  The loop unroll transform is safe because it
only clones blocks in the loop body; it does not try to split critical
edges involving EH pads.  Instead, move the necessary safety check to
LoopUnswitch.
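
For illustration (hypothetical source, not from the commit), a loop like the
one below has an invoke whose unwind edge leaves the loop for an EH pad, yet
unrolling it only duplicates the blocks of the loop body:

    void mayThrow(int i);   // assumed external; may throw

    void runAll(int n) {
      try {
        for (int i = 0; i < n; ++i)
          mayThrow(i);       // an invoke whose unwind destination is the
      } catch (...) {        // EH pad outside the loop
      }
    }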

N.B. The safety check for loop unswitch is covered by an existing test
which fails without it.

llvm-svn: 268357
2016-05-03 03:57:40 +00:00
Mehdi Amini 5b85d8d67b ThinLTO: do not import function whose linkage prevents inlining.
There is no point in importing a "weak" or a "linkonce" function,
since we won't be able to inline it anyway.
We already had a targeted check for WeakAny; this uses the
same check on the GlobalValue as the inliner does, i.e.
isMayBeOverriddenLinkage()
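
One source-level way such a function arises (an illustrative sketch, assuming
a GCC/Clang weak attribute on an ELF target):

    // A weak definition may be overridden by another module at link time, so
    // importing its body across modules buys nothing: it cannot be inlined.
    __attribute__((weak)) int config_value() { return 42; }

    int user() { return config_value(); }  // must stay an opaque call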

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 268341
2016-05-03 00:27:28 +00:00
Mehdi Amini 1e918c9cb3 Revert "ThinLTO: do not import function whose linkage prevents inlining."
This reverts commit r268315; the tests are not passing.

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 268317
2016-05-02 22:26:04 +00:00
Mehdi Amini bda9b2ae9e ThinLTO: do not import function whose linkage prevents inlining.
There is no point in importing a "weak" or a "linkonce" function,
since we won't be able to inline it anyway.
We already had a targeted check for WeakAny; this uses the
same check on the GlobalValue as the inliner does, i.e.
isMayBeOverriddenLinkage()

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 268315
2016-05-02 22:11:27 +00:00
Reid Kleckner bca59d2a43 Revert "[SimplifyCFG] Extend TryToSimplifyUncondBranchFromEmptyBlock for empty block including lifetime intrinsics"
This reverts commit r268254.

This change causes assertion failures while building Chromium. Reduced
test case coming soon.

llvm-svn: 268288
2016-05-02 19:43:22 +00:00
Hans Wennborg b7599329fc [SimplifyCFG] Extend TryToSimplifyUncondBranchFromEmptyBlock for empty block including lifetime intrinsics
Make it possible for TryToSimplifyUncondBranchFromEmptyBlock to merge an empty
basic block containing lifetime intrinsics, in addition to phi nodes and an
unconditional branch, into its successor or predecessor(s).

If the successor of the empty block has a single predecessor, all contents
including lifetime intrinsics are sunk into the successor. Otherwise, they are
hoisted into its predecessor(s) and then merged into the predecessor(s).

Patch by Josh Yoon <josh.yoon@samsung.com>!

Differential Revision: http://reviews.llvm.org/D19257

llvm-svn: 268254
2016-05-02 17:22:54 +00:00
Chad Rosier 84567343bc Remove extra whitespace. NFC.
llvm-svn: 268248
2016-05-02 16:45:00 +00:00
Sanjay Patel ec41cd2461 remove blank lines
llvm-svn: 268246
2016-05-02 15:49:09 +00:00
Sanjay Patel ebc0faa8d4 [InstCombine] regenerate checks
llvm-svn: 268245
2016-05-02 15:32:10 +00:00
Sanjay Patel 0d0181006a [InstCombine] regenerate checks
llvm-svn: 268244
2016-05-02 15:25:49 +00:00
Sanjay Patel 0b75fd81e1 [InstCombine] regenerate checks
llvm-svn: 268242
2016-05-02 15:21:41 +00:00
Sanjay Patel 933f9da43d [InstCombine] regenerate checks
llvm-svn: 268241
2016-05-02 15:18:13 +00:00
Sanjay Patel b193fe943f [InstCombine] regenerate checks
llvm-svn: 268239
2016-05-02 15:06:55 +00:00
Sanjay Patel 1540b19407 [InstCombine] regenerate checks
llvm-svn: 268232
2016-05-02 14:21:55 +00:00
Davide Italiano 4f277763cf [GlobalDCE] Modernize. Use FileCheck instead of grep.
llvm-svn: 268207
2016-05-01 22:51:14 +00:00
Simon Pilgrim ca140b17cb [InstCombine][SSE] Added support to VPERMD/VPERMPS to shuffle combine to accept UNDEF elements.
llvm-svn: 268206
2016-05-01 20:43:02 +00:00
Simon Pilgrim c590492075 Dropped FIXME comment
llvm-svn: 268205
2016-05-01 20:33:25 +00:00
Simon Pilgrim eeacc40e27 [InstCombine][SSE] Added support to VPERMILVAR to shuffle combine to accept UNDEF elements.
llvm-svn: 268204
2016-05-01 20:22:42 +00:00
Simon Pilgrim cc7f567b6a [InstCombine][AVX] Fixed PERMILVAR identity tests and added additional decode tests
llvm-svn: 268203
2016-05-01 20:06:47 +00:00
Simon Pilgrim e5e8c2fde0 [InstCombine][SSE] Added support to PSHUFB to shuffle combine to accept UNDEF elements.
llvm-svn: 268202
2016-05-01 19:26:21 +00:00
Simon Pilgrim cae3e70707 [InstCombine][SSE] Regenerate MOVSX/MOVZX tests
llvm-svn: 268201
2016-05-01 18:28:45 +00:00
Simon Pilgrim 8cddf8b3c6 [InstCombine][AVX2] Combine VPERMD/VPERMPS intrinsics with constant masks to shufflevector.
llvm-svn: 268199
2016-05-01 16:41:22 +00:00
Simon Pilgrim c179435055 [InstCombine][AVX2] Added VPERMD/VPERMPS shuffle combining placeholder tests.
For future support for VPERMD/VPERMPS to generic shuffles combines

llvm-svn: 268166
2016-04-30 20:41:52 +00:00
Simon Pilgrim 8e38a5439b [InstCombine][AVX] Split off VPERMILVAR tests and added additional tests for UNDEF mask elements
llvm-svn: 268159
2016-04-30 07:32:19 +00:00
Sanjoy Das 47cf2affbd [LowerGuardIntrinsics] Keep track of !make.implicit metadata
If a guard call being lowered by LowerGuardIntrinsics has the
`!make.implicit` metadata attached, then reattach the metadata to the
branch in the resulting expanded form of the intrinsic.  This allows us
to implement null checks as guards and still get the benefit of implicit
null checks.

llvm-svn: 268148
2016-04-30 00:55:59 +00:00
Lawrence Hu 1befea2bdc Reroll loops with multiple IVs and negative step, part 3
Support multiple induction variables.

    This patch enables loop reroll for the following case:
        for (int i = 0; i < N; i += 2) {
           S += *a++;
           S += *a++;
        }
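
    After rerolling, the loop is expected to end up roughly as follows
    (a sketch of the intended result, not taken from the patch):
        for (int i = 0; i < N; i++) {
           S += *a++;
        }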

Differential Revision: http://reviews.llvm.org/D16550

llvm-svn: 268147
2016-04-30 00:51:22 +00:00
Sanjoy Das 52c68bb0f5 [LowerGuardIntrinsics] Preserve calling conv when lowering
llvm-svn: 268142
2016-04-30 00:17:47 +00:00
Sanjay Patel bc6fad0bdf add minimal test to show dropped metadata
llvm-svn: 268141
2016-04-30 00:12:54 +00:00
Sanjay Patel 6748ec49e9 remove the metadata added with r267827
We can demonstrate the 'select' bug and fix with a simpler test case.
The merged weight values are already tested in another test.

llvm-svn: 268139
2016-04-30 00:02:36 +00:00
Sanjoy Das 107aefc2fc Mark guards on true as "trivially dead"
This moves some logic added to EarlyCSE in rL268120 into
`llvm::isInstructionTriviallyDead`.  Adds a test case for DCE to
demonstrate that passes other than EarlyCSE can now pick up on the new
information.

llvm-svn: 268126
2016-04-29 22:23:16 +00:00
Sanjoy Das ee81b23fe7 [EarlyCSE] Simplify guard intrinsics
Summary:
This change teaches EarlyCSE some basic properties of guard intrinsics:

 - Guard intrinsics read all memory, but don't write to any memory
 - After a guard has executed, the condition it was guarding on can be
   assumed to be true
 - Guard intrinsics on a constant `true` are no-ops

Reviewers: reames, hfinkel

Subscribers: mcrosier, llvm-commits

Differential Revision: http://reviews.llvm.org/D19578

llvm-svn: 268120
2016-04-29 21:52:58 +00:00
Chad Rosier cd62bf5821 [InstCombine] Determine the result of a select based on a dominating condition.
Differential Revision: http://reviews.llvm.org/D19550

llvm-svn: 268104
2016-04-29 21:12:31 +00:00
David Majnemer d2a074b1f4 [ValueTracking] matchSelectPattern needs to be more careful around FP
matchSelectPattern attempts to see through casts which obscure min/max
patterns.  Under certain circumstances, it would
misidentify a sequence of instructions as a min/max because it assumed
that folding casts would preserve the result.  This is not the case for
floating point <-> integer casts.
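
A sketch of the hazard (hypothetical code, not the PR test case): the select
below looks like an integer min when viewed through the casts, but the
int-to-float conversion rounds, so folding it to a min would change the result.

    int pickThroughCasts(int a, int b) {
      float fa = (float)a, fb = (float)b;
      // For a = 2147483646 and b = 2147483647 both values convert to the
      // same float, the compare is false, and b is returned -- which is
      // not min(a, b).
      return fa < fb ? a : b;
    }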

This fixes PR27575.

llvm-svn: 268086
2016-04-29 18:40:34 +00:00
Geoff Berry b92cd5293e [BasicAA] Treat llvm.assume as not accessing memory in getModRefBehavior(Function)
Reviewers: dberlin, chandlerc, hfinkel, reames, sanjoy

Subscribers: mcrosier, llvm-commits

Differential Revision: http://reviews.llvm.org/D19730

llvm-svn: 268068
2016-04-29 17:18:28 +00:00
Sanjay Patel 362dcf9615 auto-generate checks
llvm-svn: 268061
2016-04-29 16:39:37 +00:00
Simon Pilgrim 07a691c706 [InstCombine][SSE] Added x86 pshufb undef mask tests
FIXME: We currently don't support folding constant pshufb shuffle masks containing undef elements.
llvm-svn: 268016
2016-04-29 09:13:53 +00:00
Simon Pilgrim 5779fb61b0 [InstCombine][SSE] Regenerated x86 pshufb tests
llvm-svn: 268014
2016-04-29 08:53:35 +00:00
David Majnemer 1a5799fe3e [DeadArgumentElimination] Propagate operand bundles to promoted call sites
We neglected to transfer operand bundles when performing argument
promotion.

llvm-svn: 268008
2016-04-29 07:22:36 +00:00
Adam Nemet c60d8f8fd0 [LoopDist] Add missing RUN line in test from r268006
llvm-svn: 268007
2016-04-29 07:16:00 +00:00
Adam Nemet 88ec491830 [LoopDist] Also emit optimization remark on success (-Rpass=)
The option -Rpass=loop-distribute now reports the loops that were
distributed.

llvm-svn: 268006
2016-04-29 07:10:46 +00:00
David Majnemer 13d5526392 [SLPVectorizer] Add operand bundles to vectorized functions
SLPVectorizing a call site should result in further propagation of its
bundles.

llvm-svn: 268004
2016-04-29 07:09:51 +00:00
David Majnemer 50ddc0e1b6 [LoopVectorize] Add operand bundles to vectorized functions
Also, do not crash when calculating a cost model for loop-invariant
token values.

llvm-svn: 268003
2016-04-29 07:09:48 +00:00
Matt Arsenault 7d1b6c81af AMDGPU: Stop reporting an addressing mode for unknown addrspace
This was being treated the same as private, which has an immediate
offset. For unknown, it probably means it's for a computation not
actually being used for accessing memory, so it should not have a
nontrivial addressing mode.

llvm-svn: 268002
2016-04-29 06:25:10 +00:00
David Majnemer cd24bb1d3a [ArgumentPromotion] Propagate operand bundles to promoted call sites
We neglected to transfer operand bundles when performing argument
promotion.

This fixes PR27568.

llvm-svn: 267986
2016-04-29 04:56:12 +00:00
Michael Zolotukhin 1816d03b7d [PR25281] Remove AAResultsWrapper from preserved analyses of loop vectorizer.
We don't preserve AAResults, because, for one, we don't preserve SCEV-AA.
That fixes PR25281.

llvm-svn: 267980
2016-04-29 03:31:25 +00:00
Hal Finkel 1b66f7e3c8 [LoopVectorize] Keep hints from original loop on the vector loop
We need to keep loop hints from the original loop on the new vector loop.
Failure to do this meant that, for example:

  void foo(int *b) {
  #pragma clang loop unroll(disable)
    for (int i = 0; i < 16; ++i)
      b[i] = 1;
  }

this loop would be unrolled. Why? Because we'd vectorize it, thus dropping the
hints that unrolling should be disabled, and then we'd unroll it.

llvm-svn: 267970
2016-04-29 01:27:40 +00:00
Adam Nemet 0ba164bbcb [LoopDist] Emit optimization remarks (-Rpass*)
I closely followed the precedents set by the vectorizer:

* With -Rpass-missed, the loop is reported with further details pointing
to -Rpass-analysis.

* -Rpass-analysis reports the details why distribution has failed.

* Regardless of -Rpass*, when distribution fails for a loop where
distribution was forced with the pragma, a warning is produced according
to -Wpass-failed.  In this case the analysis info is also printed even
without -Rpass-analysis.

llvm-svn: 267952
2016-04-28 23:08:32 +00:00
Hal Finkel 50316d95a9 [Inliner] Preserve llvm.mem.parallel_loop_access metadata
When inlining a call site with llvm.mem.parallel_loop_access metadata, this
metadata needs to be propagated to all cloned memory-accessing instructions.
Otherwise, inlining parts of the loop body will invalidate the annotation.

With this functionality, we now vectorize the following as expected:

  void Body(int *res, int *c, int *d, int *p, int i) {
    res[i] = (p[i] == 0) ? res[i] : res[i] + d[i];
  }

  void Test(int *res, int *c, int *d, int *p, int n) {
    int i;

  #pragma clang loop vectorize(assume_safety)
    for (i = 0; i < 1600; i++) {
      Body(res, c, d, p, i);
    }
  }

llvm-svn: 267949
2016-04-28 23:00:04 +00:00
Arch D. Robison 0e61034018 [SLPVectorizer] Extend SLP Vectorizer to deal with aggregates.
The refactoring portion part was done as r267748.

http://reviews.llvm.org/D14185

llvm-svn: 267899
2016-04-28 16:11:45 +00:00
Simon Pilgrim bd4a3be7d2 [InstCombine][SSE] Add MOVMSK support to SimplifyDemandedUseBits
The MOVMSK instructions copy the sign bits of a vector's elements to the low bits of a scalar register and zero the high bits.

This patch adds MOVMSK support to SimplifyDemandedUseBits so that it's aware that the upper bits are known to be zero. It also removes the call to MOVMSK if none of the lower bits are actually required and just returns zero.
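
For example (a hypothetical use, not one of the added tests), only the low four
bits of _mm_movemask_ps can ever be set, so masking with only high bits lets
the whole expression fold to zero:

    #include <xmmintrin.h>

    int movmskHighBits(__m128 v) {
      int m = _mm_movemask_ps(v);  // only bits 0..3 can be non-zero
      return m & 0xFFF0;           // every demanded bit is known zero, so
                                   // this can be simplified to 0
    }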

Differential Revision: http://reviews.llvm.org/D19614

llvm-svn: 267873
2016-04-28 12:22:53 +00:00
Sanjay Patel 21bd38a07b Update test to use FileCheck
Also, add some metadata to show what that currently looks like.

llvm-svn: 267827
2016-04-28 00:29:27 +00:00
Rong Xu a4c3f67fe8 more buildbot failure fix to r267792
__llvm_prf_nm length is embedded in llvm_used. Relax llvm_used check.

llvm-svn: 267816
2016-04-27 23:23:53 +00:00
Rong Xu 6e34c490ff [PGO] Promote indirect calls to conditional direct calls with value-profile
This patch implements the transformation that promotes indirect calls to
conditional direct calls when the indirect-call value profile meta-data is
available.
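
Roughly speaking (a hand-written sketch with made-up names, not the pass
output), a hot indirect call site is rewritten into a guarded direct call:

    void hot_callee(int);   // the target the value profile says dominates

    void dispatch(void (*fp)(int), int x) {
      // Before promotion the body is simply:  fp(x);
      if (fp == &hot_callee)
        hot_callee(x);   // conditional direct call: predictable and inlinable
      else
        fp(x);           // fallback for every other target
    }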

Differential Revision: http://reviews.llvm.org/D17864

llvm-svn: 267815
2016-04-27 23:20:27 +00:00
Rong Xu 4b1dc5d60b Fix buildbot failure due to r267792
Relax the test check as some targets do not have name compression.

llvm-svn: 267803
2016-04-27 22:06:35 +00:00
Rong Xu af5aebaa32 [PGO] Prohibit address recording if the function is both internal and COMDAT
Differential Revision: http://reviews.llvm.org/D19515

llvm-svn: 267792
2016-04-27 21:17:30 +00:00
Simon Pilgrim 3f595aabe2 [InstCombine][AVX2] Add AVX2 per-element vector shift tests
At the moment we don't simplify PSRAV/PSRLV/PSLLV intrinsics to generic IR for constant shift amounts, but we could.

llvm-svn: 267777
2016-04-27 20:25:34 +00:00
David Majnemer 0c80e2eac6 [CodeGenPrepare] Don't sink a cast past its user
The sink cast machinery is supposed to sink casts as close to their user
as possible.  However, an EH pad is the first instruction in its basic
block.  Don't sink if the user is an EH pad.

This fixes PR27536.

llvm-svn: 267767
2016-04-27 19:36:38 +00:00
Ahmed Bougacha ace97c1f7d [LIR] Set attributes on memset_pattern16.
"inferattrs" will deduce the attribute, but it will be too late for
many optimizations. Set it ourselves when creating the call.

Differential Revision: http://reviews.llvm.org/D17598

llvm-svn: 267762
2016-04-27 19:04:50 +00:00
Ahmed Bougacha 44c19876c7 [InferAttrs] Mark memset_pattern16 params nocapture.
Differential Revision: http://reviews.llvm.org/D19471

llvm-svn: 267760
2016-04-27 19:04:43 +00:00
Matthew Simpson 622b95be7b [LV] Reallow positive-stride interleaved load groups with gaps
We previously disallowed interleaved load groups that may cause us to
speculatively access memory out-of-bounds (r261331). We did this by ensuring
each load group had an access corresponding to the first and last member.
Instead of bailing out for these interleaved groups, this patch enables us to
peel off the last vector iteration, ensuring that we execute at least one
iteration of the scalar remainder loop. This solution was proposed in the
review of the previous patch.
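
For instance (an illustrative loop, not from the test suite), the loads below
form a stride-2 interleaved group with a gap at the odd elements; peeling the
last vector iteration keeps the final wide load from reading past the end of
'in':

    void copyEven(int *out, const int *in, int n) {
      for (int i = 0; i < n; ++i)
        out[i] = in[2 * i];   // interleaved load group with a gap
    }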

Differential Revision: http://reviews.llvm.org/D19487

llvm-svn: 267751
2016-04-27 18:21:36 +00:00
Gerolf Hoflehner 88017c08a6 [InstCombine] Sharpened test case in pr21210.ll
llvm-svn: 267742
2016-04-27 17:19:54 +00:00
Matthew Simpson e5dfb08fcb [TTI] Add hook for vector extract with extension
This change adds a new hook for estimating the cost of vector extracts followed
by zero- and sign-extensions. The motivating example for this change is the
SMOV and UMOV instructions on AArch64. These instructions move data from vector
to general purpose registers while performing the corresponding extension
(sign-extend for SMOV and zero-extend for UMOV) at the same time. For these
operations, TargetTransformInfo can assume the extensions are free and only
report the cost of the vector extract. The SLP vectorizer has been updated to
make use of the new hook.

Differential Revision: http://reviews.llvm.org/D18523

llvm-svn: 267725
2016-04-27 15:20:21 +00:00
Simon Pilgrim f23aa2a9c9 [InstCombine][SSE] Regenerated vector shift tests
llvm-svn: 267699
2016-04-27 12:04:44 +00:00
Simon Pilgrim d2ea708739 [InstCombine][SSE] Added DemandedBits tests for MOVMSK instructions
MOVMSK zeros the upper bits of the gpr - we should be able to use this.

llvm-svn: 267686
2016-04-27 09:53:09 +00:00
Adam Nemet d2fa414718 [LoopDist] Add llvm.loop.distribute.enable loop metadata
Summary:
D19403 adds a new pragma for loop distribution.  This change adds
support for the corresponding metadata that the pragma is translated to
by the FE.

As part of this I had to rethink the flag -enable-loop-distribute.  My
goal was to be backward compatible with the existing behavior:

  A1. pass is off by default from the optimization pipeline
  unless -enable-loop-distribute is specified

  A2. pass is on when invoked directly from opt (e.g. for unit-testing)

The new pragma/metadata overrides these defaults so the new behavior is:

  B1. A1 + enable distribution for individual loop with the pragma/metadata

  B2. A2 + disable distribution for individual loop with the pragma/metadata

The default value of whether the pass is on or off comes from the initiator
of the pass.  From the PassManagerBuilder the default is off; from opt
it's on.

I moved -enable-loop-distribute under the pass.  If the flag is
specified it overrides the default from above.

Then the pragma/metadata can further modify this per loop.

As a side-effect, we can now also use -enable-loop-distribute=0 from opt
to emulate the default from the optimization pipeline.  So to be precise
this is the new behavior:

  C1. pass is off by default from the optimization pipeline
  unless -enable-loop-distribute or the pragma/metadata enables it

  C2. pass is on when invoked directly from opt
  unless -enable-loop-distribute=0 or the pragma/metadata disables it
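
For a single loop the source-level form looks roughly like this (assuming the
clang pragma spelling introduced by D19403):

    void compute(float *a, float *b, const float *c, const float *d, int n) {
    #pragma clang loop distribute(enable)
      for (int i = 0; i < n; ++i) {
        a[i] = c[i] + 1.0f;   // the pragma is translated into the new
        b[i] = d[i] * 2.0f;   // metadata and enables distribution for this
      }                       // loop only, whatever the pass-level default
    }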

Reviewers: hfinkel

Subscribers: joker.eph, mzolotukhin, llvm-commits

Differential Revision: http://reviews.llvm.org/D19431

llvm-svn: 267672
2016-04-27 05:28:18 +00:00
Evgeny Stupachenko 23ce61b663 The patch fixes PR27392.
Summary:
 It is incorrect to compare TripCount (which is BECount + 1)
  with extraiters (or Count) to check whether we should enter the unrolled
  loop, because TripCount can potentially overflow
  (when BECount is the maximum unsigned integer).
 Comparing BECount with (Count - 1), on the other hand, is overflow safe and
  therefore correct.
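
 A small arithmetic sketch of the overflow (illustration only, using 32-bit
 unsigned values):

    #include <cstdint>
    #include <cstdio>

    int main() {
      uint32_t BECount = UINT32_MAX;     // backedge-taken count
      uint32_t TripCount = BECount + 1;  // wraps around to 0
      uint32_t Count = 4;                // unroll factor

      // Wrong: the wrapped TripCount claims there are fewer than Count
      // iterations, although the loop actually runs 2^32 times.
      std::printf("%d\n", TripCount >= Count);    // prints 0
      // Overflow-safe: BECount >= Count - 1 holds, as it should.
      std::printf("%d\n", BECount >= Count - 1);  // prints 1
      return 0;
    }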

Reviewer: hfinkel

Differential Revision: http://reviews.llvm.org/D19256

From: Evgeny Stupachenko <evstupac@gmail.com>
llvm-svn: 267662
2016-04-27 03:04:54 +00:00
Philip Reames 3f83dbeed9 [LVI] Reduce compile time by lazily scanning blocks if needed
When encountering a non-local pointer, LVI would eagerly scan the block for dereferences of the given object to prove the pointer to be non null.  That's all well and good, but *then* we'd go recurse through our input blocks.  As a result, we could end up scanning each and every block we traverse, even if the final definition was obviously non null or we found a constant value somewhere up the chain.  The previous code papered over this by using the isKnownNonNull routine from value tracking.  This made the duplication less painful in the common case.

Instead, we now do the block scan only *after* we've gotten the recursive results back.  This lets us stop scanning individual blocks as soon as we've determined it to be non-null in any predecessor block and use our usual merge rules to propagate that information cheaply through successor blocks.  For a pointer which can be found non-null, this does strictly less work and sometimes substantially so.

Note that the case where we *can't* prove something non-null is still the really expensive case.  We end up scanning each and every block looking for a dereference and never end up finding one.

llvm-svn: 267642
2016-04-27 00:30:55 +00:00
Justin Bogner c2bf63d29d PM: Port Reassociate to the new pass manager
llvm-svn: 267631
2016-04-26 23:39:29 +00:00
Sanjay Patel 29dea0d230 [SimplifyCFG] propagate branch metadata when creating select
llvm-svn: 267624
2016-04-26 23:15:48 +00:00
Philip Reames 053c2a6f25 [LVI] Apply transfer rule for overdefine inputs for binary operators
As pointed out by John Regehr over in http://reviews.llvm.org/D19485, LVI was being incredibly stupid about applying its transfer rules.  Rather than gathering local facts from the expression itself, it was simply giving up entirely if one of the inputs was overdefined.  This greatly impacts the precision of the overall analysis and makes it far more fragile as well.

This patch builds on 267609 which did the same thing for unary casts.

llvm-svn: 267620
2016-04-26 23:10:35 +00:00
Philip Reames e5030e85ea [LVI] A better fix for the assertion error introduced by 267609
Essentially, I was using the wrong size function.  For types which were sized, but not primitive, I wasn't getting a useful size for the operand and failed an assert.  I fixed this, and also added a guard that the input is a sized type.  Test case is for the original mistake.  I'm not sure how to actually exercise the sized type check.

llvm-svn: 267618
2016-04-26 22:52:30 +00:00
Sanjay Patel d2d2aa52cd [LowerExpectIntrinsic] make default likely/unlikely ratio bigger
We need the default ratio to be sufficiently large that it triggers transforms 
based on block frequency info (BFI) and plays well with the recently introduced
BranchProbability used by CGP.

Differential Revision: http://reviews.llvm.org/D19435

llvm-svn: 267615
2016-04-26 22:23:38 +00:00
Philip Reames 38c87c2e50 [LVI] Infer local facts from unary expressions
As pointed out by John Regehr over in http://reviews.llvm.org/D19485, LVI was being incredibly stupid about applying its transfer rules. Rather than gathering local facts from the expression itself, it was simply giving up entirely if one of the inputs was overdefined. This greatly impacts the precision of the overall analysis and makes it far more fragile as well.

This patch implements only the unary operation case. Once this is in, I'll implement the same for the binary operations.

Differential Revision: http://reviews.llvm.org/D19492

llvm-svn: 267609
2016-04-26 21:48:16 +00:00
David Majnemer abb9f55c80 Revert "[SimplifyLibCalls] sprintf doesn't copy null bytes"
The destination buffer that sprintf uses is restrict-qualified, so we do
not need to worry about derived pointers referenced via format
specifiers.

This reverts commit r267580.

llvm-svn: 267605
2016-04-26 21:04:47 +00:00
Elena Demikhovsky 308a7eb0d2 Masked Store in Loop Vectorizer - bugfix
Fixed a bug in loop vectorization with conditional store.
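
The shape of the loop involved is roughly the following (an illustrative
sketch; the actual reduced test case is in the patch):

    void storeIfFlagged(int *a, const int *c, int x, int n) {
      for (int i = 0; i < n; ++i)
        if (c[i])
          a[i] = x;   // conditional store; vectorized as a masked store
    }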

Differential Revision: http://reviews.llvm.org/D19532

llvm-svn: 267597
2016-04-26 20:18:04 +00:00
Justin Bogner 4563a06cee PM: Port Internalize to the new pass manager
llvm-svn: 267596
2016-04-26 20:15:52 +00:00
David Majnemer 8cd77baebc [SimplifyLibCalls] sprintf doesn't copy null bytes
sprintf doesn't read or copy the terminating null byte from its string
operands.  sprintf will append its own after processing all of the
format specifiers.
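
For reference, a small standalone example of the library behaviour in question
(standard C semantics, not code from the patch):

    #include <cstdio>

    int main() {
      char buf[4] = {'X', 'X', 'X', 'X'};
      std::sprintf(buf, "%s", "ab");
      // sprintf read 'a' and 'b' from its operand and then appended its own
      // terminating '\0'; buf is now {'a', 'b', '\0', 'X'}.
      std::puts(buf);   // prints "ab"
      return 0;
    }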

This fixes PR27526.

llvm-svn: 267580
2016-04-26 18:16:49 +00:00
Dehao Chen 5d6d4841ed Tune basic block annotation algorithm.
Summary:
Instead of using the maximum IR weight as the basic block weight, this patch uses a voting algorithm to find the most likely weight for the basic block. This can effectively avoid cases where some IR instructions are annotated incorrectly due to code motion in the profiled binary.

This patch also updates the propagate.ll unit test to include discriminators in the input file so that it tests something meaningful.

Reviewers: davidxl, dnovillo

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D19301

llvm-svn: 267519
2016-04-26 04:59:11 +00:00
Hal Finkel e4c0c1679b [SimplifyCFG] Preserve !llvm.mem.parallel_loop_access when merging
When SimplifyCFG merges identical instructions from both sides of a diamond, it
can preserve !llvm.mem.parallel_loop_access (as it does with most of the other
metadata). There's no real data or control dependency change in this case.

llvm-svn: 267515
2016-04-26 02:06:06 +00:00
Hal Finkel 411d31ad72 [LoopVectorize] Don't consider conditional-load dereferenceability for marked parallel loops
I really thought we were doing this already, but we were not. Given this input:

void Test(int *res, int *c, int *d, int *p) {
  for (int i = 0; i < 16; i++)
    res[i] = (p[i] == 0) ? res[i] : res[i] + d[i];
}

we did not vectorize the loop. Even with "assume_safety", the check that we
don't if-convert conditionally-executed loads (to protect against
data-dependent dereferenceability) was not elided.

One subtlety: As implemented, it will still prefer to use a masked-load
intrinsic (given target support) over the speculated load. The choice here
seems architecture specific; the best option depends on how expensive the
masked load is compared to a regular load. Ideally, using the masked load still
reduces unnecessary memory traffic, and so should be preferred. If we'd rather
do it the other way, flipping the order of the checks is easy.

The LangRef is updated to make explicit that llvm.mem.parallel_loop_access also
implies that if-conversion is okay.

Differential Revision: http://reviews.llvm.org/D19512

llvm-svn: 267514
2016-04-26 02:00:36 +00:00
Sanjay Patel a31b0c0ece [CodeGenPrepare] don't convert an unpredictable select into control flow
Suggested in the review of D19488:
http://reviews.llvm.org/D19488
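
One source-level way such a select arises (a sketch assuming clang's
__builtin_unpredictable and that the resulting !unpredictable metadata
survives if-conversion):

    int pick(int x, int y) {
      // The branch is marked unpredictable; if it is turned into a select,
      // CodeGenPrepare should now leave that select alone rather than
      // expanding it back into data-dependent control flow.
      if (__builtin_unpredictable(x < y))
        return x;
      return y;
    }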

llvm-svn: 267504
2016-04-26 00:47:39 +00:00
Justin Bogner 1a07501379 PM: Port GlobalOpt to the new pass manager
llvm-svn: 267499
2016-04-26 00:28:01 +00:00