Commit Graph

8113 Commits

Author SHA1 Message Date
Daniel Berlin 97f34e887f MemorySSAUpdater: Only add phis to insertedphis if we actually inserted them, not if we just found existing ones
llvm-svn: 314273
2017-09-27 05:35:19 +00:00
Matthias Braun cc603ee3d5 TargetLibraryInfo: Stop guessing wchar_t size
Usually the frontend communicates the size of wchar_t via metadata and
we can optimize wcslen (and possibly other calls in the future). In
cases without the wchar_size metadata we would previously try to guess
the correct size based on the target triple; however, this is fragile to
keep up to date and may miss users manually changing the size via flags.
It is safer to stop guessing and optimizing if the frontend didn't
communicate the size.
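
For reference, a minimal C++ sketch of how a pass could read the wchar_t size the frontend emits as the "wchar_size" module flag (the flag name matches what Clang produces; the helper itself is illustrative, not the exact TargetLibraryInfo code):

  #include "llvm/IR/Constants.h"
  #include "llvm/IR/Metadata.h"
  #include "llvm/IR/Module.h"
  using namespace llvm;

  // Returns the size of wchar_t in bytes, or 0 if the frontend did not
  // communicate it via the "wchar_size" module flag.
  static unsigned getWCharSizeInBytes(const Module &M) {
    if (auto *CI = mdconst::extract_or_null<ConstantInt>(
            M.getModuleFlag("wchar_size")))
      return static_cast<unsigned>(CI->getZExtValue());
    return 0;
  }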

Differential Revision: https://reviews.llvm.org/D38106

llvm-svn: 314185
2017-09-26 02:36:57 +00:00
Michael Liao b30286d81c Remove trailing whitespaces.
llvm-svn: 314115
2017-09-25 16:21:21 +00:00
Clement Courbet 2807c0a442 [CodeGenPrepare][NFC] Rename TargetTransformInfo::expandMemCmp -> TargetTransformInfo::enableMemCmpExpansion.
Summary:
Right now there are two functions with the same name: one does the work,
and the other returns true if expansion is needed. Rename
TargetTransformInfo::expandMemCmp to make it more consistent with other
members of TargetTransformInfo.

Remove the unused Instruction* parameter.

Differential Revision: https://reviews.llvm.org/D38165

llvm-svn: 314096
2017-09-25 06:35:16 +00:00
Daniel Neilson 1341ac2ced [SCEV] Generalize folding of trunc(x)+n*trunc(y) into folding m*trunc(x)+n*trunc(y)
Summary:
A SCEV such as:
 {%v2,+,((-1 * (trunc i64 (-1 * %v1) to i32)) + (-1 * (trunc i64 %v1 to i32)))}<%loop>

can be folded into, simply, {%v2,+,0}. However, the current code in ::getAddExpr()
will not try to apply the simplification m*trunc(x)+n*trunc(y) -> trunc(trunc(m)*x+trunc(n)*y)
because it only keys off having a non-multiplied trunc as the first term in the simplification.

This patch generalizes this code to try to do a more generic fold of these trunc
expressions.

Reviewers: sanjoy

Reviewed By: sanjoy

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D37888

llvm-svn: 313988
2017-09-22 15:47:57 +00:00
Sanjoy Das 388b012f4e Rename markAsErased to erase, as pointed out in a previous review; NFC
llvm-svn: 313951
2017-09-22 01:47:41 +00:00
Hans Wennborg 57c3341ada Revert r313771 "[SLP] Vectorize jumbled memory loads."
This broke the buildbots, e.g.
http://bb.pgr.jp/builders/test-llvm-i686-linux-RA/builds/391

> Summary:
> This patch tries to vectorize loads of consecutive memory accesses, accessed
> in a non-consecutive or jumbled way. An earlier attempt was made with patch D26905,
> which was reverted due to a basic issue with representing the 'use mask' of
> jumbled accesses.
>
> This patch fixes the mask representation by recording the 'use mask' in the usertree entry.
>
> Change-Id: I9fe7f5045f065d84c126fa307ef6ebe0787296df
>
> Subscribers: mzolotukhin
>
> Reviewed By: ayal
>
> Differential Revision: https://reviews.llvm.org/D36130
>
> Review comments updated accordingly
>
> Change-Id: I22ab0a8a9bac9d49d74baa81a08e1e486f5e75f0
>
> Added a TODO for sortLoadAccesses API
>
> Change-Id: I3c679bf1865422d1b45e17ea28f1992bca660b58
>
> Modified the TODO for sortLoadAccesses API
>
> Change-Id: Ie64a66cb5f9e2a7610438abb0e750c6e090f9565
>
> Review comment update for using OpdNum to insert the mask in respective location
>
> Change-Id: I016d0c1b29874e979efc0205bbf078991f92edce
>
> Fixes '-Wsign-compare warning' in LoopAccessAnalysis.cpp and code rebase
>
> Change-Id: I64b2ea5e68c1d7b6a028f5ef8251c5a97333f89b

llvm-svn: 313781
2017-09-20 18:00:03 +00:00
Mohammad Shahid 2b281de576 [SLP] Vectorize jumbled memory loads.
Summary:
This patch tries to vectorize loads of consecutive memory accesses, accessed
in a non-consecutive or jumbled way. An earlier attempt was made with patch D26905,
which was reverted due to a basic issue with representing the 'use mask' of
jumbled accesses.

This patch fixes the mask representation by recording the 'use mask' in the usertree entry.

Change-Id: I9fe7f5045f065d84c126fa307ef6ebe0787296df

Subscribers: mzolotukhin

Reviewed By: ayal

Differential Revision: https://reviews.llvm.org/D36130

Review comments updated accordingly

Change-Id: I22ab0a8a9bac9d49d74baa81a08e1e486f5e75f0

Added a TODO for sortLoadAccesses API

Change-Id: I3c679bf1865422d1b45e17ea28f1992bca660b58

Modified the TODO for sortLoadAccesses API

Change-Id: Ie64a66cb5f9e2a7610438abb0e750c6e090f9565

Review comment update for using OpdNum to insert the mask in respective location

Change-Id: I016d0c1b29874e979efc0205bbf078991f92edce

Fixes '-Wsign-compare warning' in LoopAccessAnalysis.cpp and code rebase

Change-Id: I64b2ea5e68c1d7b6a028f5ef8251c5a97333f89b
llvm-svn: 313771
2017-09-20 17:19:57 +00:00
Alexander Kornienko 6a140234ed Revert r313736: "[SLP] Vectorize jumbled memory loads."
The revision breaks buildbots:
http://lab.llvm.org:8011/builders/clang-x86_64-debian-fast/builds/6694/steps/test/logs/stdio

llvm-svn: 313758
2017-09-20 14:53:07 +00:00
Alexander Kornienko 7302344bdf Revert r313753: "Fix a -Wsign-compare warning in LoopAccessAnalysis.cpp"
llvm-svn: 313757
2017-09-20 14:52:56 +00:00
Alexander Kornienko 6c629b5728 Fix a -Wsign-compare warning in LoopAccessAnalysis.cpp
llvm-svn: 313753
2017-09-20 12:18:22 +00:00
Mohammad Shahid f8db9bd857 [SLP] Vectorize jumbled memory loads.
Summary:
This patch tries to vectorize loads of consecutive memory accesses, accessed
in a non-consecutive or jumbled way. An earlier attempt was made with patch D26905,
which was reverted due to a basic issue with representing the 'use mask' of
jumbled accesses.

This patch fixes the mask representation by recording the 'use mask' in the usertree entry.

Change-Id: I9fe7f5045f065d84c126fa307ef6ebe0787296df

Reviewers: mkuper, loladiro, Ayal, zvi, danielcdh

Reviewed By: Ayal

Subscribers: mzolotukhin

Differential Revision: https://reviews.llvm.org/D36130

Commit after rebase for patch D36130

Change-Id: I8add1c265455669ef288d880f870a9522c8c08ab
llvm-svn: 313736
2017-09-20 08:18:28 +00:00
Sanjoy Das 09613b122e Tighten the invariants around LoopBase::invalidate
Summary:
With this change:
 - Methods in LoopBase trip an assert if the receiver has been invalidated
 - LoopBase::clear frees up the memory held by the LoopBase instance

This change also shuffles things around as necessary to work with this stricter invariant.

Reviewers: chandlerc

Subscribers: mehdi_amini, mcrosier, llvm-commits

Differential Revision: https://reviews.llvm.org/D38055

llvm-svn: 313708
2017-09-20 02:31:57 +00:00
Sanjoy Das 66a004ac0c Clang-format few files to make later diffs leaner; NFC
llvm-svn: 313705
2017-09-20 01:12:09 +00:00
Sanjoy Das 76ab23234c [LoopInfo] Make LoopBase and Loop destructors non-public
Summary:
See comment for why I think this is a good idea.

This change also:

 - Removes an SCEV test case.  The SCEV test was not testing anything useful (most of it was `#if 0`'d out) and it would need to be updated to deal with a private ~Loop::Loop.
 - Updates the loop pass manager test case to deal with a private ~Loop::Loop.
 - Renames markAsRemoved to markAsErased to contrast with removeLoop, via the usual remove vs. erase idiom we already have for instructions and basic blocks.

Reviewers: chandlerc

Subscribers: mehdi_amini, mcrosier, llvm-commits

Differential Revision: https://reviews.llvm.org/D37996

llvm-svn: 313695
2017-09-19 23:19:00 +00:00
Sanjay Patel 0d4fd5b668 [InstSimplify] fold sdiv/srem based on compare of dividend and divisor
This should bring signed div/rem analysis up to the same level as unsigned. 
We use icmp simplification to determine when the divisor is known greater than the dividend.

Each positive test is followed by a negative test to show that we're not overstepping the boundaries of the known bits.
There are extra tests for the signed-min-value special cases.

Alive proofs:
http://rise4fun.com/Alive/WI5

Differential Revision: https://reviews.llvm.org/D37713

llvm-svn: 313264
2017-09-14 14:59:07 +00:00
Sanjay Patel cca8f7853f [InstSimplify] clean up div/rem handling; NFCI
The idea to make an 'isDivZero' helper was suggested for the signed case in D37713:
https://reviews.llvm.org/D37713

This clean-up makes it clear that D37713 is just filling the gap for signed div/rem,
removes unnecessary code, and allows us to remove a bit of duplicated code from the
planned improvement in D37713.

llvm-svn: 313261
2017-09-14 14:09:11 +00:00
Chandler Carruth 7376ae88eb [PM/CGSCC] Teach the CGSCC pass manager components to gracefully handle
invalidated SCCs even when we do not have an updated SCC to redirect
towards.

This comes up in a fairly subtle and surprising circumstance: we need to
have a connected but internal node in the call graph which later becomes
a disconnected island, and then gets deleted. All of this needs to
happen mid-CGSCC walk. Because it is disconnected, we have no way of
computing a new "current" SCC when it gets deleted. Instead, we need to
explicitly check for a deleted "current" SCC and bail out of the current
CGSCC step. This will bubble all the way up to the post-order walk and
then resume correctly.

I've included minimal tests for this bug. The specific behavior
matches something we've seen in the wild with the new PM combined with
ThinLTO and sample PGO, but I've not yet confirmed whether this is the
only issue there.

llvm-svn: 313242
2017-09-14 08:33:57 +00:00
Alon Kom 682cfc1d4c [LV] Fix maximum legal VF calculation
This patch fixes PR34283, which exposed that the computation of the
maximum legal width for vectorization was wrong: it relied
on MaxInterleaveFactor to obtain the maximum stride used in the loop,
but not all strided accesses in the loop have an interleave group
associated with them.
Instead of recording the maximum stride in the loop, which can be overly
conservative (e.g. if the access with the maximum stride is not involved
in the dependence limitation), this patch tracks the actual maximum legal
width imposed by accesses that are involved in dependencies.

Differential Revision: https://reviews.llvm.org/D37507

llvm-svn: 313237
2017-09-14 07:40:02 +00:00
Easwaran Raman 4924bb002d [Inliner] Add another way to compute full inline cost.
Summary:
Full inline cost is computed when -inline-cost-full is true or ORE is
non-null. This patch adds another way to compute full inline cost by
adding a field to InlineParams. This will be used by SampleProfileLoader
to check legality of inlining a callee that it wants to inline.

Reviewers: danielcdh, haicheng

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D37819

llvm-svn: 313185
2017-09-13 20:16:02 +00:00
Hiroshi Yamauchi a43913cfaf Add options to dump PGO counts in text.
Summary:
Added text options to -pgo-view-counts and -pgo-view-raw-counts that dump block frequency and branch probability info in text.

This is useful when the graph is very large and complex (the dot command crashes, lines/edges too close to tell apart, hard to navigate without textual search) or simply when text is preferred.

Reviewers: davidxl

Reviewed By: davidxl

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D37776

llvm-svn: 313159
2017-09-13 17:20:38 +00:00
Teresa Johnson cbdc5ff628 [ThinLTO] AliasSummary should not have any references
Summary: References should only be on the aliasee.

Reviewers: pcc

Subscribers: llvm-commits, inglorion

Differential Revision: https://reviews.llvm.org/D37814

llvm-svn: 313158
2017-09-13 17:10:24 +00:00
Silviu Baranga ac920f7716 [LAA] Allow more run-time alias checks by coercing pointer expressions to AddRecExprs
Summary:
LAA can only emit run-time alias checks for pointers with affine AddRec
SCEV expressions. However, non-AddRecExprs can now be converted to
affine AddRecExprs using SCEV predicates.

This change tries to add the minimal set of SCEV predicates in order
to enable run-time alias checking.

Reviewers: anemet, mzolotukhin, mkuper, sanjoy, hfinkel

Reviewed By: hfinkel

Subscribers: mssimpso, Ayal, dorit, roman.shirokiy, mzolotukhin, llvm-commits

Differential Revision: https://reviews.llvm.org/D17080

llvm-svn: 313012
2017-09-12 07:48:22 +00:00
Marcello Maggioni ce90060d1c [ScalarEvolution] Refactor forgetLoop() to improve performance
forgetLoop() has pretty bad performance because it goes over
the same instructions over and over again, in particular when
nested loops are involved.
The refactoring changes the function into a non-recursive one
and reuses the allocations for its data structures and the Visited
set.

NFCI

Differential Revision: https://reviews.llvm.org/D37659

llvm-svn: 312920
2017-09-11 15:44:20 +00:00
Sanjay Patel fa877fd464 [InstSimplify] reorder methods; NFC
I'm trying to refactor some shared code for integer div/rem,
but I keep having to scroll through fdiv. The FP ops have
nothing in common with the integer ops, so I'm moving FP
below everything else. 

While here, improve a couple of comments and fix some formatting.

llvm-svn: 312913
2017-09-11 13:34:27 +00:00
Sanjay Patel 5876189ff1 [InstSimplify] refactor udiv/urem code and add tests; NFCI
This removes some duplicated code and makes it easier to support signed div/rem
in a similar way if we want to do that. Note that the existing comments were not
accurate - we don't need a constant divisor to simplify; icmp simplification does
more than that. But as the added tests show, it could go even further.

llvm-svn: 312885
2017-09-10 17:55:08 +00:00
Nuno Lopes 404f106d71 Merge isKnownNonNull into isKnownNonZero
It now knows the tricks of both functions.
Also, fix a bug that considered allocas of non-zero address space to be always non-null.

Differential Revision: https://reviews.llvm.org/D37628

llvm-svn: 312869
2017-09-09 18:23:11 +00:00
Sanjay Patel 6fd4391ddd [DivRemPairs] add a pass to optimize div/rem pairs (PR31028)
This is intended to be a superset of the functionality from D31037 (EarlyCSE) but implemented 
as an independent pass, so there's no stretching of scope and feature creep for an existing pass. 
I also proposed a weaker version of this for SimplifyCFG in D30910. And I initially had almost 
this same functionality as an addition to CGP in the motivating example of PR31028:
https://bugs.llvm.org/show_bug.cgi?id=31028

The advantage of positioning this ahead of SimplifyCFG in the pass pipeline is that it can allow 
more flattening. But it needs to be after passes (InstCombine) that could sink a div/rem and
undo the hoisting that is done here.

Decomposing remainder may allow removing some code from the backend (PPC and possibly others).

Differential Revision: https://reviews.llvm.org/D37121 

llvm-svn: 312862
2017-09-09 13:38:18 +00:00
Guozhi Wei 62d6414465 [TargetTransformInfo] Add a new public interface getInstructionCost
The current TargetTransformInfo supports a throughput cost model and a code size model, but sometimes we also need an instruction latency cost model in different optimizations. Hal suggested we need a single public interface to query the different costs of an instruction. So I proposed the following interface:

  enum TargetCostKind {
    TCK_RecipThroughput, ///< Reciprocal throughput.
    TCK_Latency,         ///< The latency of instruction.
    TCK_CodeSize         ///< Instruction code size.
  };

  int getInstructionCost(const Instruction *I, enum TargetCostKind kind) const;

All clients should mainly use this function to query the cost of an instruction; the parameter <kind> specifies the desired cost model.
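
A minimal usage sketch against the interface shown above (the TTI reference and the instruction I are assumed to be available from the surrounding pass; illustrative only):

  #include "llvm/Analysis/TargetTransformInfo.h"
  #include "llvm/IR/Instruction.h"
  using namespace llvm;

  // Query the latency-based cost of a single instruction.
  static int getLatencyCost(const TargetTransformInfo &TTI,
                            const Instruction &I) {
    return TTI.getInstructionCost(&I, TargetTransformInfo::TCK_Latency);
  }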

This patch also provides a simple default implementation of getInstructionLatency.

The default getInstructionLatency provides latency numbers for only a small number of instruction classes; those latency numbers are only reasonable for modern out-of-order processors. It can be extended in the following ways:

   - Add more detail to this function.
   - Add a getXXXLatency function and call it from here.
   - Implement a target-specific getInstructionLatency function.

Differential Revision: https://reviews.llvm.org/D37170

llvm-svn: 312832
2017-09-08 22:29:17 +00:00
Alexey Bataev 6dd29fccb8 [SLP] Support for horizontal min/max reduction.
The SLP vectorizer supports horizontal reductions for Add/FAdd binary
operations. This patch adds support for horizontal min/max reductions.
The function getReductionCost() is split into getArithmeticReductionCost() for
binary operation reductions and getMinMaxReductionCost() for min/max
reductions.
The patch fixes PR26956.

Differential revision: https://reviews.llvm.org/D27846

llvm-svn: 312791
2017-09-08 13:49:36 +00:00
Peter Collingbourne 681fbb64a4 ModuleSummaryAnalysis: Correctly handle all function operand references.
The current code that handles personality functions when creating a
module summary does not correctly handle the case where a function's
personality function operand refers to the function indirectly
(e.g. via a bitcast). This patch handles such cases by treating
personality function references like any other reference, i.e. by
adding them to the function's reference list. This has the minor side
benefit of allowing personality functions to participate in early
dead stripping.

We do this by calling findRefEdges on the function itself. This way
we also end up handling other function operands (specifically prefix
data and prologue data) for free.

Differential Revision: https://reviews.llvm.org/D37553

llvm-svn: 312698
2017-09-07 05:35:35 +00:00
Matt Arsenault 3ced3d90c3 InstSimplify: canonicalize is idempotent
llvm-svn: 312685
2017-09-07 01:21:43 +00:00
Nuno Lopes ba1c9f7aee Fix PR33878: BasicAA incorrectly assumes different address spaces don't alias
Remove code that assumed that a nullptr of address space != 0 couldn't alias with a non-null pointer. This is incorrect, since nothing can be concluded about a null pointer in an address space != 0.
This code was written before address spaces were introduced.

Differential Revision: https://reviews.llvm.org/D37518

llvm-svn: 312648
2017-09-06 16:55:31 +00:00
Sanjay Patel 6840c5ff75 [ValueTracking, InstCombine] canonicalize fcmp ord/uno with non-NAN ops to null constants
This is a preliminary step towards solving the remaining part of PR27145 - IR for isfinite():
https://bugs.llvm.org/show_bug.cgi?id=27145

In order to solve that one more generally, we need to add matching for and/or of fcmp ord/uno
with a constant operand.

But while looking at those patterns, I realized we were missing a canonicalization for nonzero
constants. Rather than limiting to just folds for constants, we're adding a general value
tracking method for this based on an existing DAG helper.

By transforming everything to 0.0, we can simplify the existing code in foldLogicOfFCmps()
and pick up missing vector folds.

Differential Revision: https://reviews.llvm.org/D37427

llvm-svn: 312591
2017-09-05 23:13:13 +00:00
Daniel Neilson 3f0e4ad833 [SCEV] Ensure ScalarEvolution::createAddRecFromPHIWithCastsImpl properly handles out of range truncations of the start and accum values
Summary:
 When constructing the predicate P1 in ScalarEvolution::createAddRecFromPHIWithCastsImpl() it is possible
for the PHISCEV from which the predicate is constructed to be a SCEVConstant instead of a SCEVAddRec. If
this happens, then the cast<SCEVAddRec>(PHISCEV) in the code will assert.

 Such a PHISCEV is possible if either the start value or the accumulator value is a constant value
that is not equal to its truncated value, and if the truncated value is zero.

 This patch adds tests that demonstrate the cast<> assertion, and fixes this problem by checking
whether the PHISCEV is a constant before constructing the P1 predicate; if it is, then P1 is
equivalent to one of P2 or P3. Additionally, if we know that the start value or accumulator
value are constants then we check whether the P2 and/or P3 predicates are known false at compile
time; if either is, then we bail out of constructing the AddRec.

Reviewers: sanjoy, mkazantsev, silviu.baranga

Reviewed By: mkazantsev

Subscribers: mkazantsev, llvm-commits

Differential Revision: https://reviews.llvm.org/D37265

llvm-svn: 312568
2017-09-05 19:54:03 +00:00
Eugene Zelenko 75075efe5e [Analysis, Transforms] Fix some Clang-tidy modernize and Include What You Use warnings; other minor fixes (NFC).
llvm-svn: 312383
2017-09-01 21:37:29 +00:00
Craig Topper 924f20262b [InstCombine][InstSimplify] Teach decomposeBitTestICmp to look through truncate instructions
This patch teaches decomposeBitTestICmp to look through truncate instructions on the input to the compare. If a truncate is found it will now return the pre-truncated Value and appropriately extend the APInt mask.

This allows some code to be removed from InstSimplify that was doing this functionality.

This allows InstCombine's bit test combining code to match a pre-truncate Value with the same Value appear with an 'and' on another icmp. Or it allows us to combine a truncate to i16 and a truncate to i8. This also required removing the type check from the beginning of getMaskedTypeForICmpPair, but I believe that's ok because we still have to find two values from the input to each icmp that are equal before we'll do any transformation. So the type check was really just serving as an early out.

There was one user of decomposeBitTestICmp that didn't want to look through truncates, so I've added a flag to prevent that behavior when necessary.

Differential Revision: https://reviews.llvm.org/D37158

llvm-svn: 312382
2017-09-01 21:27:34 +00:00
Peter Collingbourne 5e8b94c137 ModuleSummaryAnalysis: Correctly handle refs from function inline asm to module inline asm.
If a function contains inline asm and the module-level inline asm
contains the definition of a local symbol, prevent the function from
being imported in case the function-level inline asm refers to a
symbol in the module-level inline asm.

Differential Revision: https://reviews.llvm.org/D37370

llvm-svn: 312332
2017-09-01 16:24:02 +00:00
Alexandre Isoard 405728fd47 [SCEV] Add URem support to SCEV
In LLVM IR the following code:

    %r = urem <ty> %t, %b

is equivalent to

    %q = udiv <ty> %t, %b
    %s = mul <ty> nuw %q, %b
    %r = sub <ty> nuw %t, %s ; (t / b) * b + (t % b) = t

As UDiv, Mul and Sub are already supported by SCEV, URem can be implemented
with minimal effort using that relation:

    %r --> (-%b * (%t /u %b)) + %t

We implement two special cases:

  - if %b is 1, the result is always 0
  - if %b is a power-of-two, we produce a zext/trunc based expression instead

That is, the following code:

    %r = urem i32 %t, 65536

Produces:

    %r --> (zext i16 (trunc i32 %t to i16) to i32)

Note that while this helps get a tighter bound on the range analysis and the
known-bits analysis, this exposes some normalization shortcomings of SCEVs:

    %div = udiv i32 %a, 65536
    %mul = mul i32 %div, 65536
    %rem = urem i32 %a, 65536
    %add = add i32 %mul, %rem

Will usually not be reduced.
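
A small C++ sketch of how the general rewrite above can be expressed with the existing ScalarEvolution expression builders (the SE reference and the operand SCEVs T and B are assumed to be in scope; this is illustrative, not the exact code of the patch):

  #include "llvm/Analysis/ScalarEvolution.h"
  using namespace llvm;

  // Expand "T urem B" into (-B * (T /u B)) + T, as described above.
  static const SCEV *expandURem(ScalarEvolution &SE, const SCEV *T,
                                const SCEV *B) {
    const SCEV *Quotient = SE.getUDivExpr(T, B);              // T /u B
    const SCEV *NegMul = SE.getMulExpr(SE.getNegativeSCEV(B), Quotient);
    return SE.getAddExpr(NegMul, T);                          // -B*(T /u B) + T
  }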

llvm-svn: 312329
2017-09-01 14:59:59 +00:00
Eugene Zelenko fa6434bebb [Analysis] Fix some Clang-tidy modernize-use-using and Include What You Use warnings; other minor fixes. Also affected in files (NFC).
llvm-svn: 312289
2017-08-31 21:56:16 +00:00
Adam Nemet 4846e66fdd Remove an unnecessary const_cast.
I think this dates back to when emit used to take a const reference.

llvm-svn: 311948
2017-08-28 23:00:13 +00:00
Don Hinton a67e13129d [Dominators] Remove redundant explicit template instantiation.
Summary:
Remove redundant explicit template instantiation.

This was reported by Andrew Kelley building release_50 with gcc7.2.0 on MacOS: duplicate symbol llvm::DominatorTreeBase.

Reviewers: kuhar, andrewrk, davide, hans

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D37185

llvm-svn: 311835
2017-08-26 21:08:51 +00:00
Hiroshi Yamauchi 63e17ebf8b Add options to dump block frequency/branch probability info in text.
Summary:
Add options -print-bfi/-print-bpi that dump block frequency and branch
probability info like -view-block-freq-propagation-dags and
-view-machine-block-freq-propagation-dags do but in text.

This is useful when the graph is very large and complex (the dot command
crashes, lines/edges too close to tell apart, hard to navigate without textual
search) or simply when text is preferred.

Reviewers: davidxl

Reviewed By: davidxl

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D37165

llvm-svn: 311822
2017-08-26 00:31:00 +00:00
Haicheng Wu 61995364de [InlineCost] Small changes to early exit condition. NFC.
Change the early exit condition from Cost > Threshold to Cost >= Threshold
because the inline condition is Cost < Threshold.

Differential Revision: https://reviews.llvm.org/D37087

llvm-svn: 311791
2017-08-25 19:00:33 +00:00
Michael Kruse c0a6aab6b6 Normalize to LF line endings.
Commit r297442 introduced mixed CRLF/LF line endings in two files.
Normalize to LF-only line endings.

llvm-svn: 311774
2017-08-25 12:38:53 +00:00
Dehao Chen f0e27e63e7 Move accurate-sample-profile into the function attribute.
Summary: We need to have accurate-sample-profile in the function attributes so that it works with LTO.

Reviewers: davidxl, rsmith

Reviewed By: davidxl

Subscribers: sanjoy, mehdi_amini, javed.absar, llvm-commits, eraman

Differential Revision: https://reviews.llvm.org/D37113

llvm-svn: 311706
2017-08-24 21:37:04 +00:00
Tobias Grosser d7eb619299 Model cache size and associativity in TargetTransformInfo
Summary:
We add the precise cache sizes and associativity for the following Intel
architectures:

  - Penryn
  - Nehalem
  - Westmere
  - Sandy Bridge
  - Ivy Bridge
  - Haswell
  - Broadwell
  - Skylake
  - Kabylake

For several months, Polly has used a performance model for BLAS computations that
derives optimal cache and register tile sizes from cache and latency
information (based on ideas from "Analytical Modeling Is Enough for High-Performance BLIS" by Tze Meng Low et al., published in TOMS 2016).
While bootstrapping this model, these target values have been kept in Polly.
However, as our implementation is now rather mature, it seems time to teach
LLVM itself about cache sizes.

Interestingly, L1 and L2 cache sizes are pretty constant across
micro-architectures, hence a set of architecture specific default values
seems like a good start. They can be expanded to more target specific values,
in case certain newer architectures require different values. For now a set
of Intel architectures are provided.

Just as a little teaser, for a simple gemm kernel this model allows us to
improve performance from 1.2s to 0.27s. For gemm kernels with less optimal
memory layouts even larger speedups can be reported.
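
A hedged usage sketch of querying this information from TTI (the hook names getCacheSize/getCacheAssociativity and the CacheLevel enum are assumptions based on this description, not quoted from the patch; the fallback value is illustrative only):

  #include "llvm/Analysis/TargetTransformInfo.h"
  using namespace llvm;

  // Pick an L1-sized tile, falling back to a conservative default when the
  // target does not report a cache size.
  static unsigned pickL1TileBytes(const TargetTransformInfo &TTI) {
    if (auto Size = TTI.getCacheSize(TargetTransformInfo::CacheLevel::L1D))
      return *Size;
    return 16 * 1024; // assumed fallback for illustration only
  }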

Reviewers: Meinersbur, bollu, singam-sanjay, hfinkel, gareevroman, fhahn, sebpop, efriedma, asb

Reviewed By: fhahn, asb

Subscribers: lsaba, asb, pollydev, llvm-commits

Differential Revision: https://reviews.llvm.org/D37051

llvm-svn: 311647
2017-08-24 09:46:25 +00:00
Rong Xu 15848e5977 [PGO] Set edge weights for indirectbr instruction with profile counts
Current PGO only annotates the edge weights for branch and switch instructions
with profile counts. We should also annotate indirectbr instructions, as
all the information is there. This patch enables this annotation for indirectbr
instructions and also uses it in branch probability analysis.
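
For reference, a minimal sketch of how profile counts are typically attached to a terminator as branch-weight metadata (illustrative only; the pass has its own scaling logic for the per-successor weights):

  #include "llvm/ADT/ArrayRef.h"
  #include "llvm/IR/Instructions.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/MDBuilder.h"
  using namespace llvm;

  // Attach per-successor weights to an indirectbr (or any terminator).
  static void annotateWeights(Instruction &Term, ArrayRef<uint32_t> Weights) {
    MDBuilder MDB(Term.getContext());
    Term.setMetadata(LLVMContext::MD_prof, MDB.createBranchWeights(Weights));
  }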

Differential Revision: https://reviews.llvm.org/D37074

llvm-svn: 311604
2017-08-23 21:36:02 +00:00
George Rimar 1e94ca115d [lib/Analysis] - Mark personality functions as live.
This is PR33245.

The case I am fixing is the following:
Imagine we have two BC files: one defines and uses a personality routine,
and the second has only a declaration and also uses it.

Previously, the algorithm computing dead symbols (llvm::computeDeadSymbols) did
not know about personality routines and left them dead even if the function that
has the routine was live.

As a result, the thinLTOInternalizeAndPromoteGUID() method changed the binding of
such a symbol to local. Later, when LLD tried to link these objects, it failed
because one object had an undefined global symbol for the routine and the second
object contained a local definition instead of a global one.

The patch sets the live root flag on the corresponding FunctionSummary
for personality routines when we build the per-module summaries
during the compile step.

Differential revision: https://reviews.llvm.org/D36834

llvm-svn: 311432
2017-08-22 08:50:56 +00:00
Craig Topper 7227ebad9c [ValueTracking] Add assertions that the starting Depth in isKnownToBeAPowerOfTwo and ComputeNumSignBitsImpl is not above MaxDepth
The function does an equality check later to terminate the recursion, but that won't work if it starts out too high. A similar assert already exists in computeKnownBits.

llvm-svn: 311400
2017-08-21 22:56:12 +00:00
Haicheng Wu 0812c5bea3 [InlineCost] Add cl::opt to allow full inline cost to be computed for debugging purposes.
Currently, the inline cost model will bail once the inline cost exceeds the
inline threshold in order to avoid unnecessary compile-time. However, when
debugging it is useful to compute the full cost, so this command line option
is added to override the default behavior.

I took over this work from Chad Rosier (mcrosier@codeaurora.org).

Differential Revision: https://reviews.llvm.org/D35850

llvm-svn: 311371
2017-08-21 20:00:09 +00:00
Chad Rosier 4eb18742ca [InlineCost] Add more debug during inline cost computation.
llvm-svn: 311370
2017-08-21 19:56:46 +00:00
Eugene Zelenko be709f2c19 [Analysis] Fix some Clang-tidy modernize and Include What You Use warnings; other minor fixes (NFC).
llvm-svn: 311212
2017-08-18 23:51:26 +00:00
Amjad Aboud 88ffa3afe2 [InstCombine] Teach ComputeNumSignBitsImpl to handle integer multiply instruction.
Differential Revision: https://reviews.llvm.org/D36679

llvm-svn: 311206
2017-08-18 22:56:55 +00:00
Eugene Zelenko bb1b2d09cf [Analysis] Fix some Clang-tidy modernize and Include What You Use warnings; other minor fixes (NFC).
llvm-svn: 311048
2017-08-16 22:07:40 +00:00
Sanjay Patel 042a53624c [DemandedBits] simplify call; NFC
llvm-svn: 311009
2017-08-16 14:28:23 +00:00
Craig Topper b1e4b1a070 [InstSimplify] Teach decomposeBitTestICmp to handle non-canonical compares
This adds support for non-canonical compare predicates. InstSimplify can't rely on canonicalization to have occurred.

Differential Revision: https://reviews.llvm.org/D36646

llvm-svn: 310893
2017-08-14 22:11:43 +00:00
Craig Topper 0aa3a19512 Recommit r310869, "[InstSimplify][InstCombine] Modify the interface of decomposeBitTestICmp and use it in the InstSimplify"
This recommits r310869, with the moved files and no extra changes.

Original commit message:

This addresses a fixme in InstSimplify about using decomposeBitTest. This also fixes InstSimplify to handle ugt and ult compares too.

I've modified the interface a little to return only the APInt version of the mask that InstSimplify needs. InstCombine now has a small wrapper routine to create a Constant out of it. I've also dropped the returning of 0 since InstSimplify doesn't need that. So InstCombine creates a zero constant itself.

I also had to make decomposeBitTest support vectors since InstSimplify needs that.

As InstSimplify can't use something from the Transforms library, I've moved the CmpInstAnalysis code to the Analysis library.

Differential Revision: https://reviews.llvm.org/D36593

llvm-svn: 310889
2017-08-14 21:39:51 +00:00
Chandler Carruth bba762a13f [InlineCost] Refactor the checks for different analyses to be a bit more
localized to the code that uses those analyses.

Technically, this can change behavior as we no longer require the
existence of the ProfileSummaryInfo analysis to use local profile
information via BFI. We didn't actually require the PSI to have an
interesting profile though, so this only really impacts the behavior in
non-default pass pipelines.

IMO, this makes it substantially less surprising how everything works --
before an analysis that wasn't actually used had to exist to trigger
*any* profile aware inlining. I think the new organization makes it more
obvious where various checks for profile signals happen.

Differential Revision: https://reviews.llvm.org/D36710

llvm-svn: 310888
2017-08-14 21:25:00 +00:00
Andrew Kaylor 53a5fbb45f Add strictfp attribute to prevent unwanted optimizations of libm calls
Differential Revision: https://reviews.llvm.org/D34163

llvm-svn: 310885
2017-08-14 21:15:13 +00:00
Craig Topper 69fa8e0d99 Revert r310869 "[InstSimplify][InstCombine] Modify the interface of decomposeBitTestICmp and use it in the InstSimplify"
Failed to add the two files that moved. And then added an extra change I didn't mean to while trying to fix that. Reverting everything.

llvm-svn: 310873
2017-08-14 19:09:32 +00:00
Craig Topper 9c7b881677 Revert r310870 "[InstCombine][InstSimplify] 'git add' two files that moved in r310869."
An extra change crept in here.

llvm-svn: 310872
2017-08-14 19:09:28 +00:00
Craig Topper 914c836842 [InstCombine][InstSimplify] 'git add' two files that moved in r310869.
llvm-svn: 310870
2017-08-14 19:01:32 +00:00
Craig Topper 2f0b450666 [InstSimplify][InstCombine] Modify the interface of decomposeBitTestICmp and use it in the InstSimplify
This addresses a fixme in InstSimplify about using decomposeBitTest. This also fixes InstSimplify to handle ugt and ult compares too.

I've modified the interface a little to return only the APInt version of the mask that InstSimplify needs. InstCombine now has a small wrapper routine to create a Constant out of it. I've also dropped the returning of 0 since InstSimplify doesn't need that. So InstCombine creates a zero constant itself.

I also had to make decomposeBitTest support vectors since InstSimplify needs that.

As InstSimplify can't use something from the Transforms library, I've moved the CmpInstAnalysis code to the Analysis library.

Differential Revision: https://reviews.llvm.org/D36593

llvm-svn: 310869
2017-08-14 18:49:42 +00:00
Hal Finkel b03dd4be70 [ValueTracking] Don't delete assumes of side-effectful instructions
ValueTracking has to strike a balance when attempting to propagate information
backwards from assumes, because if the information is trivially propagated
backwards, it can appear to LLVM that the assumption is known to be true, and
therefore can be removed.

This is sound (because an assumption has no semantic effect except for causing
UB), but prevents the assume from allowing further optimizations.

The isEphemeralValueOf check exists to try and prevent this issue by not
removing the source of an assumption. This tries to make it a little bit more
general to handle the case of side-effectful instructions, such as in

  %0 = call i1 @get_val()
  %1 = xor i1 %0, true
  call void @llvm.assume(i1 %1)

Patch by Ariel Ben-Yehuda, thanks!

Differential Revision: https://reviews.llvm.org/D36590

llvm-svn: 310859
2017-08-14 17:11:43 +00:00
Chandler Carruth 37c7b08710 [ValueTracking] Revert r310583 which enabled functionality that still is
causing compile time issues.

Moreover, the patch *deleted* the flag in addition to changing the
default, and links to a code review that doesn't even discuss the flag
and just has an update to a Clang test case.

I've followed up on the commit thread to ask for numbers on compile time
at this point, leaving the flag in place until things stabilize, and
pointing at specific code that seems to exhibit excessive compile time
with this patch.

Original commit message for r310583:
"""
[ValueTracking] Enabling ValueTracking patch by default (recommit). Part 2.

The original patch was an improvement to IR ValueTracking on
non-negative integers. It has been checked in to trunk (D18777,
r284022). But was disabled by default due to performance regressions.
Perf impact has improved. The patch would be enabled by default.
""""

llvm-svn: 310816
2017-08-14 07:03:24 +00:00
Eugene Zelenko 530851c2bc [Analysis] Fix some Clang-tidy modernize-use-using and Include What You Use warnings; other minor fixes (NFC).
llvm-svn: 310766
2017-08-11 21:30:02 +00:00
Chandler Carruth 19913b22c0 [PM] Switch the CGSCC debug messages to use the standard LLVM debug
printing techniques with a DEBUG_TYPE controlling them.

It was a mistake to start re-purposing the pass manager `DebugLogging`
variable for generic debug printing -- those logs are intended to be
very minimal and primarily used for testing. More detailed and
comprehensive logging doesn't make sense there (it would only make for
brittle tests).

Moreover, we kept forgetting to propagate the `DebugLogging` variable to
various places making it also ineffective and/or unavailable. Switching
to `DEBUG_TYPE` makes this a non-issue.

llvm-svn: 310695
2017-08-11 05:47:13 +00:00
Nikolai Bozhenov d97136c182 [ValueTracking] Enabling ValueTracking patch by default (recommit). Part 2.
The original patch was an improvement to IR ValueTracking on non-negative
integers. It has been checked in to trunk (D18777, r284022). But was disabled by
default due to performance regressions.
Perf impact has improved. The patch would be enabled by default.
 
Reviewers: reames, hfinkel
 
Differential Revision: https://reviews.llvm.org/D34101
 
Patch by: Olga Chupina <olga.chupina@intel.com>

llvm-svn: 310583
2017-08-10 11:24:57 +00:00
Chandler Carruth 9c161e894a [LCG] Fix an assert in an on-scope-exit lambda that checked the contents
of the returned value.

Checking the returned value from inside of a scoped exit isn't actually
valid. It happens to work when NRVO fires and the stars align, which
they reliably do with Clang but don't, for example, on MSVC builds.

llvm-svn: 310547
2017-08-10 03:05:21 +00:00
Hiroshi Yamauchi ccd412f48d [LVI] Fix LVI compile time regression around constantFoldUser()
Summary:
Avoid checking each operand and calling getValueFromCondition() before calling
constantFoldUser() when the instruction type isn't supported by
constantFoldUser().

This fixes a large compile time regression in an internal build.

Reviewers: sanjoy

Reviewed By: sanjoy

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D36552

llvm-svn: 310545
2017-08-10 02:23:14 +00:00
Craig Topper ba69187988 [InstSimplify] Add test cases that show that simplifySelectWithICmpCond doesn't work with non-canonical comparisons.
llvm-svn: 310542
2017-08-10 01:02:02 +00:00
Nuno Lopes 7829506731 CFLAA: return MustAlias when pointers p, q are equal, i.e.,
must-alias(p, sz_p, p, sz_q)  irrespective of access sizes sz_p, sz_q

As discussed a couple of weeks ago on the ML.
This makes the behavior consistent with that of BasicAA.
AA clients already check the obj size themselves and may not require the
obj size to match exactly the access size (e.g., in case of store forwarding)

llvm-svn: 310495
2017-08-09 17:02:18 +00:00
Davide Italiano 1a943a90f5 [ValueTracking] Turn a test into an assertion.
As discussed with Chad, this should never happen, but this
assertion is basically free, so, keep it around just in case.

llvm-svn: 310493
2017-08-09 16:06:54 +00:00
Davide Italiano 30e5194287 [ValueTracking] Honour recursion limit.
The recently improved support for `icmp` in ValueTracking
(r307304) exposes the fact that `isImplied` condition doesn't
really bail out if we hit the recursion limit (and calls
`computeKnownBits` which increases the depth and asserts).

Differential Revision:  https://reviews.llvm.org/D36512

llvm-svn: 310481
2017-08-09 15:13:50 +00:00
Jonas Paulsson 6228aeda65 [LSR / TTI / SystemZ] Eliminate TargetTransformInfo::isFoldableMemAccess()
isLegalAddressingMode() has recently gained the extra optional Instruction*
parameter, and therefore it can now do the job that previously only
isFoldableMemAccess() could do.

The SystemZ implementation of isLegalAddressingMode() has gained the
functionality of checking for offsets, which used to be done with
isFoldableMemAccess().

The isFoldableMemAccess() hook has been removed everywhere.

Review: Quentin Colombet, Ulrich Weigand
https://reviews.llvm.org/D35933

llvm-svn: 310463
2017-08-09 11:28:01 +00:00
Chandler Carruth 2cd28b2ba0 [LCG] Completely remove the map-based association of post-order numbers
to Nodes when removing ref edges from a RefSCC.

This map based association turns out to be pretty expensive for large
RefSCCs and pointless as we already have embedded data members inside
nodes that we use to track the DFS state. We can reuse one of those and
the map becomes unnecessary.

This also fuses the update of those numbers into the scan across the
pending stack of nodes so that we don't walk the nodes twice during the
DFS.

With this I expect the new PM to be faster than the old PM for the test
case I have been optimizing. That said, it also seems simpler and more
direct in many ways. The side storage was always pretty awkward.

The last remaining hot-spot in the profile of the LCG once this is done
will be the edge iterator walk in the DFS. I'll take a look at improving
that next.

llvm-svn: 310456
2017-08-09 09:37:39 +00:00
Chandler Carruth 9c3deaa653 [LCG] Special case when removing a ref edge from a RefSCC leaves
that RefSCC still connected.

This is common and can be handled much more efficiently. As soon as we
know we've covered every node in the RefSCC with the DFS, we can simply
reset our state and return. This avoids numerous data structure updates
and other complexity.

On top of other changes, this appears to get new PM back to parity with
the old PM for a large protocol buffer message source code. The dense
map updates are very hot in this function.

llvm-svn: 310451
2017-08-09 09:14:34 +00:00
Chandler Carruth 23c2f44cc7 [LCG] Switch one of the update methods for the LazyCallGraph to support
limited batch updates.

Specifically, allow removing multiple reference edges starting from
a common source node. There are a few constraints that play into
supporting this form of batching:

1) The way updates occur during the CGSCC walk, about the most we can
   functionally batch together are those with a common source node. This
   also makes the batching simpler to implement, so it seems
   a worthwhile restriction.
2) The far and away hottest function for large C++ files I measured
   (generated code for protocol buffers) showed a huge amount of time
   was spent removing ref edges specifically, so it seems worth focusing
   there.
3) The algorithm for removing ref edges is very amenable to this
   restricted batching. There are just both API and implementation
   special casing for the non-batch case that gets in the way. Once
   removed, supporting batches is nearly trivial.

This does modify the API in an interesting way -- now, we only preserve
the target RefSCC when the RefSCC structure is unchanged. In the face of
any splits, we create brand new RefSCC objects. However, all of the
users were OK with it that I could find. Only the unittest needed
interesting updates here.

How much does batching these updates help? I instrumented the compiler
when run over a very large generated source file for a protocol buffer
and found that the majority of updates are intrinsically updating one
function at a time. However, nearly 40% of the total ref edges removed
are removed as part of a batch of removals greater than one, so these
are the cases batching can help with.

When compiling the IR for this file with 'opt' and 'O3', this patch
reduces the total time by 8-9%.

Differential Revision: https://reviews.llvm.org/D36352

llvm-svn: 310450
2017-08-09 09:05:27 +00:00
Nuno Lopes 598d1632e1 BasicAA: assert on another case where aliasGEP shouldn't get a PartialAlias response
llvm-svn: 310420
2017-08-08 21:25:26 +00:00
Dehao Chen 34cfcb29aa Make ICP use PSI to check for hotness.
Summary: Currently, ICP checks the count against a fixed value to see if it is hot enough to be promoted. This does not work for SamplePGO because the sampled count may be much smaller. This patch uses PSI to check if the count is hot enough to be promoted.
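
A minimal sketch of the kind of PSI query this refers to (the helper is illustrative; the actual promotion logic has more conditions):

  #include "llvm/Analysis/ProfileSummaryInfo.h"
  using namespace llvm;

  // Ask the profile summary whether a callsite count qualifies as hot.
  static bool isPromotionCandidate(ProfileSummaryInfo &PSI, uint64_t Count) {
    return PSI.isHotCount(Count);
  }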

Reviewers: davidxl, tejohnson, eraman

Reviewed By: davidxl

Subscribers: sanjoy, llvm-commits, mehdi_amini

Differential Revision: https://reviews.llvm.org/D36341

llvm-svn: 310416
2017-08-08 20:57:33 +00:00
Craig Topper b498a23f0e [KnownBits][ValueTracking] Move the math for calculating known bits for add/sub into a static method in KnownBits object
I want to reuse this code in SimplifyDemandedBits handling of Add/Sub. This will make that easier.

Wonder if we should use it in SelectionDAG's computeKnownBits too.
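
For reference, a hedged usage sketch of the new static helper (the name computeForAddSub and its parameters are taken from the in-tree KnownBits interface as I understand it, not quoted from this message):

  #include "llvm/Support/KnownBits.h"
  using namespace llvm;

  // Combine the known bits of two addends of an 'add' with no NSW assumption.
  static KnownBits knownBitsOfAdd(const KnownBits &LHS, const KnownBits &RHS) {
    return KnownBits::computeForAddSub(/*Add=*/true, /*NSW=*/false, LHS, RHS);
  }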

Differential Revision: https://reviews.llvm.org/D36433

llvm-svn: 310378
2017-08-08 16:29:35 +00:00
Nuno Lopes c7d4110aa7 BasicAA: aliasGEP shouldn't get a PartialAlias response here
add an assert() to ensure that's the case (as I'm not convinced it won't happen)

llvm-svn: 310373
2017-08-08 16:13:24 +00:00
Chandler Carruth 6e35c31d2d [PM] Fix a likely more critical infloop bug in the CGSCC pass manager.
This was just a bad oversight on my part. The code in question should
never have worked without this fix. But it turns out, there are
relatively few places that involve libfunctions that participate in
a single SCC, and unless they do, this happens to not matter.

The effect of not having this correct is that each time through this
routine, the edge from write_wrapper to write was toggled between a call
edge and a ref edge. First time through, it becomes a demoted call edge
and is turned into a ref edge. Next time it is a promoted call edge from
a ref edge. On, and on it goes forever.

I've added the asserts which should have always been here to catch silly
mistakes like this in the future as well as a test case that will
actually infloop without the fix.

The other (much scarier) infinite-inlining issue I think didn't actually
occur in practice, and I simply misdiagnosed this minor issue as that
much more scary issue. The other issue *is* still a real issue, but I'm
somewhat relieved that so far it hasn't happened in real-world code
yet...

llvm-svn: 310342
2017-08-08 10:13:23 +00:00
Chandler Carruth 691d0243a5 [LCG] Remove yet another variable only used inside of asserts.
llvm-svn: 310174
2017-08-05 08:33:16 +00:00
Benjamin Kramer ef42fd43f4 [LCG] Fold otherwise unused variable into assert.
No functionality change intended.

llvm-svn: 310173
2017-08-05 08:28:48 +00:00
Chandler Carruth adbf14ab85 [LCG] Completely remove the parent set and leaf tracking for RefSCCs.
After the previous series of patches, this is now trivial and deletes
a pretty astonishing amount of complexity. This has been a long time
coming, as the move toward a PO sequence of RefSCCs started eroding the
underlying use cases for this half of the data structure.

Among the biggest advantages here is that now there aren't two
independent data structures that need to stay in sync.

Some of my profiling has also indicated that updating the parent sets
was among the most expensive parts of the lazy call graph. Eliminating
it wholesale is likely to be a nice win in terms of compile time.

Last but not least, I had previously discussed with some folks keeping
it around for asserts and other correctness checking, but once the
fundamentals of the parent and child checking were implemented without
the parent sets, their value in correctness checking was tiny and nowhere
near worth the cost of the complexity required to keep everything
up-to-date.

llvm-svn: 310171
2017-08-05 07:37:00 +00:00
Chandler Carruth 38bd6b50ef [LCG] Re-implement the basic isParentOf, isAncestorOf, isChildOf, and
isDescendantOf methods on RefSCCs in terms of the forward edges rather
than the parent sets.

This is technically slower, but probably not interestingly slower, and
all of these routines were already so expensive that they're guarded
behind both !NDEBUG and EXPENSIVE_CHECKS.

This removes another non-critical usage of parent sets.

I've also added some comments to try and help clarify to any potential
users the costs of these routines. They're mostly useful for debugging,
asserts, or other queries.

llvm-svn: 310170
2017-08-05 06:24:09 +00:00
Chandler Carruth c718b8e7c3 [LCG] Add the concept of a "dead" node and use it to avoid a complex
walk over the parent set.

When removing a single function from the call graph, we previously would
walk the entire RefSCC's parent set and then walk every outgoing edge
just to find the ones to remove. In addition to this being quite high
complexity in theory, it is also the last fundamental use of the parent
sets.

With this change, when we remove a function we transform the node
containing it to be recognizably "dead" and then teach the edge
iterators to recognize edges to such nodes and skip them the same way
they skip null edges.

We can't move fully to using "dead" nodes -- when disconnecting two live
nodes we need to null out the edge. But the complexity this adds to the
edge sequence isn't too bad and the simplification of lazily handling
this seems like a significant win.

llvm-svn: 310169
2017-08-05 05:47:37 +00:00
Chandler Carruth 39df40d8c2 [LCG] Replace an implicit bool operator with a named function. (NFC)
The definition of 'false' here was already pretty vague and debatable,
and I'm about to add another potential 'false' that would actually make
much more sense in a bool operator. Especially given how rarely this is
used, a nicely named method seems better.

llvm-svn: 310165
2017-08-05 04:04:06 +00:00
Chandler Carruth 403d3c4b2b [LCG] When removing a dead function and clearing out the data
structures, actually null out the graph pointers as well. We won't ever
update these, and we certainly shouldn't be calling any methods on them,
so it seems good to defensively nuke them.

llvm-svn: 310164
2017-08-05 03:37:39 +00:00
Chandler Carruth 7cb23e705f [LCG] Rather than walking the directed graph structure to update graph
pointers in node objects, just walk the map from function to node.

It doesn't have stable ordering, but works just as well and is much
simpler. We don't need ordering when just updating internal pointers.

llvm-svn: 310163
2017-08-05 03:37:39 +00:00
Chandler Carruth 2c58e1a45c [LCG] Remove the complex walk of the parent sets to update graph
pointers.

This is completely unnecessary as we have a trivial list of RefSCCs now
that we can walk.

llvm-svn: 310162
2017-08-05 03:37:38 +00:00
Chandler Carruth 13ffd110ad [LCG] Remove the use of the parent sets to compute connectivity when
merging RefSCCs.

The logic to directly use the reference edges is simpler and not
substantially slower (despite the comments to the contrary) because this
is not actually an especially hot part of LCG in practice.

llvm-svn: 310161
2017-08-05 03:37:37 +00:00
Amara Emerson 56dca4e3ca [SCEV] Preserve NSW information for sext(subtract).
Pushes the sext onto the operands of a Sub if NSW is present.
Also adds support for propagating the nowrap flags of the
llvm.ssub.with.overflow intrinsic during analysis.

Differential Revision: https://reviews.llvm.org/D35256

llvm-svn: 310117
2017-08-04 20:19:46 +00:00
Easwaran Raman ff77cc750c [Inliner] Fix a typo in option description. NFC.
llvm-svn: 310073
2017-08-04 17:15:17 +00:00
Craig Topper 4e22ee6745 [ConstantInt] Use ConstantInt::getValue instead of Constant::getUniqueInteger in a few places where we obviously have a ConstantInt. NFC
getUniqueInteger will ultimately call ConstantInt::getValue, but calling ConstantInt::getValue should be inlined.

llvm-svn: 310069
2017-08-04 16:59:29 +00:00
Dehao Chen 63799512b2 Adjust the hotness threshold from 99.9% to 99%.
Summary: We originally set the hotness threshold to 99.9% to be consistent with GCC FDO. But the inline heuristics differ between the two compilers: LLVM uses a bottom-up algorithm while GCC uses a priority-based one. The LLVM algorithm tends to inline too much too early, which prevents hot callsites from being further inlined into their callers. Due to this restriction, we think it is reasonable to lower the hotness threshold to give priority to those callsites that are really hot. Our experiments show that this change improves performance on large applications. Note that the inline heuristic has great room for further tuning; once the inline heuristics are refined, we could adjust this threshold to allow inlining for less hot callsites.

Reviewers: davidxl, tejohnson, eraman

Reviewed By: tejohnson

Subscribers: sanjoy, llvm-commits

Differential Revision: https://reviews.llvm.org/D36317

llvm-svn: 310065
2017-08-04 16:20:54 +00:00
Charles Saternos 75da10d1b2 [ThinLTO] Add FunctionAttrs to ThinLTO index
Adds function attributes to the index: ReadNone, ReadOnly, NoRecurse, NoAlias. These attributes will be used for future ThinLTO optimizations that will propagate function attributes across modules.

llvm-svn: 310061
2017-08-04 16:00:58 +00:00
Nikolai Bozhenov 1545eb3408 [InstCombine] Canonicalize clamp of float types to minmax in fast mode.
Summary:
This commit allows matchSelectPattern to recognize clamp of float
arguments in the presence of FMF the same way as already done for
integers.

This case is a little different though. With integers, given the
min/max pattern is recognized, DAGBuilder starts selecting MIN/MAX
"automatically". That is not the case for float, because for them only
full FMINNAN/FMINNUM/FMAXNAN/FMAXNUM ISD nodes exist and they do care
about NaNs. On the other hand, some backends (e.g. X86) have only
FMIN/FMAX nodes that do not care about NaNs, and the former NAN/NUM
nodes are illegal, thus selection is not happening. So I decided to do
this kind of transformation in IR (InstCombiner) instead of
complicating the logic in the backend.
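
A hedged sketch of the ValueTracking query involved (assuming the existing matchSelectPattern/SelectPatternFlavor interface; illustrative only, not the patch itself):

  #include "llvm/Analysis/ValueTracking.h"
  using namespace llvm;

  // Report whether V is recognized as a floating-point min/max-style select
  // (the building block of a clamp) by ValueTracking.
  static bool looksLikeFPMinMax(Value *V) {
    Value *LHS, *RHS;
    SelectPatternResult SPR = matchSelectPattern(V, LHS, RHS);
    return SPR.Flavor == SPF_FMINNUM || SPR.Flavor == SPF_FMAXNUM;
  }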

Reviewers: spatel, jmolloy, majnemer, efriedma, craig.topper

Reviewed By: efriedma

Subscribers: hiraditya, javed.absar, n.bozhenov, llvm-commits

Patch by Andrei Elovikov <andrei.elovikov@intel.com>

Differential Revision: https://reviews.llvm.org/D33186

llvm-svn: 310054
2017-08-04 12:22:17 +00:00
Teresa Johnson 8482e56920 Use profile summary to disable peeling for huge working sets
Summary:
Detect when the working set size of a profiled application is huge,
by comparing the number of counts required to reach the hot percentile
in the profile summary to a large threshold*.

When the working set size is determined to be huge, disable peeling
to avoid bloating the working set further.

*Note that the selected threshold (15K) is significantly larger than the
largest working set value in SPEC cpu2006 (which is gcc at around 11K).

Reviewers: davidxl

Subscribers: mehdi_amini, mzolotukhin, eraman, llvm-commits

Differential Revision: https://reviews.llvm.org/D36288

llvm-svn: 310005
2017-08-03 23:42:58 +00:00
Easwaran Raman 974d4eea93 [Inliner] Increase threshold for hot callsites without PGO.
Summary:
This increases the inlining threshold for hot callsites. Hotness is
defined in terms of block frequency of the callsite relative to the
caller's entry block's frequency. Since this requires BFI in the
inliner, this only affects the new PM pipeline. This is enabled by
default at -O3.
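
A hedged sketch of the relative-frequency check described above (the HotMultiplier value and the exact comparison are assumptions for illustration; the real threshold logic lives in the inliner's cost analysis):

  #include "llvm/Analysis/BlockFrequencyInfo.h"
  #include "llvm/IR/BasicBlock.h"
  using namespace llvm;

  // Treat a callsite as "hot" when its block frequency is at least
  // HotMultiplier times the caller's entry block frequency.
  static bool isHotCallSiteSketch(const BasicBlock &CallBB,
                                  const BlockFrequencyInfo &CallerBFI,
                                  uint64_t HotMultiplier = 3) {
    uint64_t CallFreq = CallerBFI.getBlockFreq(&CallBB).getFrequency();
    uint64_t EntryFreq = CallerBFI.getEntryFreq();
    return CallFreq >= HotMultiplier * EntryFreq;
  }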

This improves the performance of some internal benchmarks. Notably, an
internal benchmark for Gipfeli compression
(https://github.com/google/gipfeli) improves by ~7%. Povray in SPEC2006
improves by ~2.5%. I am running more experiments and will update the
thread if other benchmarks show improvement/regression.

In terms of text size, the LLVM test-suite shows a 1.22% text size
increase. Diving into the results, 13 of the benchmarks in the
test-suite increase by > 10%. Most of these are small, but
Adobe-C++/loop_unroll (17.6% increase) and tramp3d (20.7% size increase)
have >250K text size. On a large application, the text size increases by
2%.

Reviewers: chandlerc, davidxl

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D36199

llvm-svn: 309994
2017-08-03 22:23:33 +00:00
Hiroshi Yamauchi 144ee2b4d7 [LVI] Constant-propagate a zero extension of the switch condition value through case edges
Summary:
(This is a second attempt as https://reviews.llvm.org/D34822 was reverted.)

LazyValueInfo currently computes the constant value of the switch condition through case edges, which allows the constant value to be propagated through the case edges.

But we have seen a case where a zero-extended value of the switch condition is used past case edges for which the constant propagation doesn't occur.

This patch adds a small logic to handle such a case in getEdgeValueLocal().

This is motivated by the Python 2.7 eval loop in PyEval_EvalFrameEx() where the lack of the constant propagation causes longer live ranges and more spill code than necessary.

With this patch, we see that the code size of PyEval_EvalFrameEx() decreases by ~5.4% and a performance test improves by ~4.6%.

Reviewers: sanjoy

Reviewed By: sanjoy

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D36247

llvm-svn: 309986
2017-08-03 21:11:30 +00:00
Dehao Chen f58df39529 Do not want to use BFI to get profile count for sample pgo
Summary: For SamplePGO, we already record the callsite count in the call instruction itself, so we do not want to use BFI to get the profile count, as it is less accurate.

Reviewers: tejohnson, davidxl, eraman

Reviewed By: eraman

Subscribers: sanjoy, llvm-commits, mehdi_amini

Differential Revision: https://reviews.llvm.org/D36025

llvm-svn: 309964
2017-08-03 17:11:41 +00:00
Max Kazantsev 2cb3653404 [SCEV] Re-enable "Cache results of computeExitLimit"
The patch rL309080 was reverted because it did not clean up the cache on "forgetValue"
method call. This patch re-enables this change, adds the missing check and introduces
two new unit tests that make sure that the cache is cleaned properly.

Differential Revision: https://reviews.llvm.org/D36087

llvm-svn: 309925
2017-08-03 08:41:30 +00:00
Hiroshi Inoue 0bd906ec8f [StackColoring] Update AliasAnalysis information in stack coloring pass (part 2)
This patch is update after the first patch (https://reviews.llvm.org/rL309651) based on the post-commit comments.

The stack coloring pass needs to maintain AliasAnalysis information when merging stack slots of different types.
Actually, there is a FIXME comment in StackColoring.cpp:

// FIXME: In order to enable the use of TBAA when using AA in CodeGen,
// we'll also need to update the TBAA nodes in MMOs with values
// derived from the merged allocas.

But TBAA has already been enabled in CodeGen without fixing this pass.
The incorrect TBAA metadata results in recent failures in the bootstrap test on ppc64le (PR33928) by allowing unsafe instruction scheduling.
Although we observed the problem on ppc64le, this is a platform-neutral issue.

This patch makes the stack coloring pass maintain AliasAnalysis information when merging multiple stack slots.

This patch fixes PR33928.

llvm-svn: 309849
2017-08-02 18:16:32 +00:00
Chad Rosier 5ce28f4f92 [InlineCost] Remove redundant call. NFC.
llvm-svn: 309819
2017-08-02 14:50:27 +00:00
Chad Rosier 2e1c050e52 [InlineCost] Simplify more 'and' and 'or' operations.
Differential Revision: https://reviews.llvm.org/D35856

llvm-svn: 309817
2017-08-02 14:40:42 +00:00
Sanjoy Das 4cad61adb3 [SCEV/IndVars] Always compute loop exiting values if the backedge count is 0
If SCEV can prove that the backedge taken count for a loop is zero, it does not
need to "understand" a recursive PHI to compute its exiting value.

This should fix PR33885.

llvm-svn: 309758
2017-08-01 22:37:58 +00:00
Chad Rosier dfd1de687d [Value Tracking] Default argument to true and rename accordingly. NFC.
IMHO this is a bit more readable.

llvm-svn: 309739
2017-08-01 20:18:54 +00:00
Chad Rosier f73a10d2df [Value Tracking] Refactor and/or logic into helper. NFC.
llvm-svn: 309726
2017-08-01 19:22:36 +00:00
Chandler Carruth 3c6a820ce3 [PM] Add a comment clarifying what a particular predicate is doing.
This came up as a point of confusion while working on a fundamental
problem with the combination of CGSCC iteration and the inliner.

llvm-svn: 309662
2017-08-01 06:40:11 +00:00
Daniel Jasper 43cd2ef49c Revert r309415: "[LVI] Constant-propagate a zero extension of the switch condition value through case edges"
This causes assertion failures in (a somewhat old version of) SpiderMonkey.
I have already forwarded reproduction instructions to the patch author.

llvm-svn: 309659
2017-08-01 05:30:49 +00:00
Hiroshi Inoue b9417dbd48 [StackColoring] Update AliasAnalysis information in stack coloring pass
The stack coloring pass needs to maintain AliasAnalysis information when merging stack slots of different types.
Actually, there is a FIXME comment in StackColoring.cpp:

// FIXME: In order to enable the use of TBAA when using AA in CodeGen,
// we'll also need to update the TBAA nodes in MMOs with values
// derived from the merged allocas.

But TBAA has already been enabled in CodeGen without fixing this pass.
The incorrect TBAA metadata results in recent failures in the bootstrap test on ppc64le (PR33928) by allowing unsafe instruction scheduling.
Although we observed the problem on ppc64le, this is a platform-neutral issue.

This patch makes the stack coloring pass maintain AliasAnalysis information when merging multiple stack slots.

llvm-svn: 309651
2017-08-01 03:32:15 +00:00
Alina Sbirlea 967e7966fc Allow None as a MemoryLocation to getModRefInfo
Summary:
Adding part of the changes in D30369 (needed to make progress):
Current patch updates AliasAnalysis and MemoryLocation, but does _not_ clean up MemorySSA.

Original summary from D30369, by dberlin:
Currently, we have instructions which affect memory but have no memory
location. If you call, for example, MemoryLocation::get on a fence,
it asserts. This means things specifically have to avoid that. It
also means we end up with a copy of each API, one taking a memory
location, one not.

This starts to fix that.

We add MemoryLocation::getOrNone as a new call, and reimplement the
old asserting version in terms of it.

We make MemoryLocation optional in the (Instruction, MemoryLocation)
version of getModRefInfo, and kill the old one argument version in
favor of passing None (it had one caller). Now both can handle fences
because you can just use MemoryLocation::getOrNone on an instruction
and it will return a correct answer.

We use all this to clean up part of MemorySSA that had to handle this difference.

Note that literally every actual getModRefInfo interface we have could be made private and replaced with:

getModRefInfo(Instruction, Optional<MemoryLocation>)
and
getModRefInfo(Instruction, Optional<MemoryLocation>, Instruction, Optional<MemoryLocation>)

and delegating to the right ones, if we wanted to.

I have not attempted to do this yet.

Reviewers: dberlin, davide, dblaikie

Subscribers: sanjoy, hfinkel, chandlerc, llvm-commits

Differential Revision: https://reviews.llvm.org/D35441

llvm-svn: 309641
2017-08-01 00:28:29 +00:00
Alexey Bataev 0ab22bb991 [SLP] Initial rework for min/max horizontal reduction vectorization, NFC.
Summary: All getReductionCost() functions are renamed to getArithmeticReductionCost(), and basic infrastructure is added to handle non-binary reduction operations.

Reviewers: spatel, mzolotukhin, Ayal, mkuper, gilr, hfinkel

Subscribers: RKSimon, llvm-commits

Differential Revision: https://reviews.llvm.org/D29402

llvm-svn: 309566
2017-07-31 14:36:05 +00:00
Alexey Bataev 3e9b3eb91d [Cost] Rename getReductionCost() to getArithmeticReductionCost(), NFC.
llvm-svn: 309563
2017-07-31 14:19:32 +00:00
Sanjoy Das b5a968f62d [SCEV] Change an early exit to an assert; NFC
llvm-svn: 309480
2017-07-29 05:32:47 +00:00
Easwaran Raman 51b809bf2f [Inliner] Do not apply any bonus for cold callsites.
Summary:
Inlining threshold is increased by application of bonuses when the
callee has a single reachable basic block or is rich in vector
instructions. Similarly, inlining cost is reduced by applying a large
bonus when the last call to a static function is considered for
inlining. This patch disables the application of these bonuses when the
callsite or the callee is cold. The intention here is to prevent a large
cold callsite from being inlined into a non-cold caller, which could prevent
the caller from being inlined. This is especially important when the
cold callsite is the last call to a static function, since the associated bonus is
very high.

Reviewers: chandlerc, davidxl

Subscribers: danielcdh, llvm-commits

Differential Revision: https://reviews.llvm.org/D35823

llvm-svn: 309441
2017-07-28 21:47:36 +00:00
Chad Rosier 2f49803c1f [Value Tracking] Refactor icmp comparison logic into helper. NFC.
llvm-svn: 309417
2017-07-28 18:47:43 +00:00
Hiroshi Yamauchi 1b179bc5ff [LVI] Constant-propagate a zero extension of the switch condition value through case edges
Summary:
LazyValueInfo currently computes the constant value of the switch condition through case edges, which allows the constant value to be propagated through the case edges.

But we have seen a case where a zero-extended value of the switch condition is used past case edges for which the constant propagation doesn't occur.

This patch adds a small logic to handle such a case in getEdgeValueLocal().

This is motivated by the Python 2.7 eval loop in PyEval_EvalFrameEx() where the lack of the constant propagation causes longer live ranges and more spill code than necessary.

With this patch, we see that the code size of PyEval_EvalFrameEx() decreases by ~5.4% and a performance test improves by ~4.6%.




Reviewers: wmi, dberlin, sanjoy

Reviewed By: sanjoy

Subscribers: davide, davidxl, llvm-commits

Differential Revision: https://reviews.llvm.org/D34822

llvm-svn: 309415
2017-07-28 18:35:25 +00:00
Chad Rosier e42b44b87d [ValueTracking] Remove a number of unused arguments. NFC.
llvm-svn: 309385
2017-07-28 14:39:06 +00:00
Max Kazantsev fa4969539a [SCEV] Do not visit nodes twice in containsConstantSomewhere
This patch reworks the function that searches constants in Add and Mul SCEV expression
chains so that now it does not visit a node more than once, and also renames this function
for better correspondence between its implementation and semantics.

Differential Revision: https://reviews.llvm.org/D35931

llvm-svn: 309367
2017-07-28 06:42:15 +00:00
Sanjoy Das 843ab57457 Revert "[SCEV] Cache results of computeExitLimit"
This reverts commit r309080.  The patch needs to clear out the
ScalarEvolution::ExitLimits cache in forgetMemoizedResults.

I've replied on the commit thread for the patch with more details.

llvm-svn: 309357
2017-07-28 03:25:07 +00:00
Dehao Chen e70a472bad Changing the default MaxNumPromotions from 2 to 3.
Summary: In performance tuning, we see performance benefits when enlarging the maximum number of promotion targets to 3. This is safe as long as the total percentage threshold is properly set up (https://reviews.llvm.org/D35962).

Reviewers: davidxl, tejohnson

Reviewed By: tejohnson

Subscribers: llvm-commits, sanjoy

Differential Revision: https://reviews.llvm.org/D35966

llvm-svn: 309346
2017-07-28 01:03:10 +00:00
Dehao Chen f4240b5b91 Separate the ICP total threshold and remaining threshold.
Summary: In the current implementation, isPromotionProfitable only checks if the call count to a direct target is no less than a certain percentage threshold of the remaining call counts that have not been promoted. This causes code size problems when the target count is small but greater than a large portion of the remaining counts. E.g. target1 takes 99.9%, while target2 takes 0.1%. Both targets will be promoted and inlined, making the function size too large, which potentially prevents it from being further inlined into its callers. This patch adds another percentage threshold against the total indirect call count: the target count needs to be no less than both thresholds in order to be promoted speculatively.
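
A minimal sketch of the two-threshold check (illustrative only; the signature and percentage values are assumptions, not the actual isPromotionProfitable code):

  #include <cstdint>

  // Promote only if the target count clears a percentage of the remaining
  // (not-yet-promoted) count AND a percentage of the total indirect call
  // count. With TotalPercent = 30, a 0.1% target like target2 above is
  // rejected even though it dominates what remains after target1.
  bool promotionProfitable(uint64_t TargetCount, uint64_t RemainingCount,
                           uint64_t TotalCount, unsigned RemainingPercent,
                           unsigned TotalPercent) {
    return TargetCount * 100 >= RemainingCount * RemainingPercent &&
           TargetCount * 100 >= TotalCount * TotalPercent;
  }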

Reviewers: davidxl, tejohnson

Reviewed By: tejohnson

Subscribers: sanjoy, llvm-commits

Differential Revision: https://reviews.llvm.org/D35962

llvm-svn: 309345
2017-07-28 01:02:54 +00:00
Evgeny Astigeevich 61c1bd5abc [InlineCost, NFC] Change CallAnalyzer::isGEPFree to use TTI::getUserCost instead of TTI::getGEPCost
Currently CallAnalyzer::isGEPFree uses TTI::getGEPCost to check if a GEP is free.
TTI::getGEPCost cannot handle cases where GEPs participate in Def-Use dependencies
(see https://reviews.llvm.org/D31186 for an example).
There is TTI::getUserCost, which can calculate the cost more accurately by
taking dependencies into account.

Differential Revision: https://reviews.llvm.org/D33685

llvm-svn: 309268
2017-07-27 12:49:27 +00:00
Mohammed Agabaria cef53dcb6f [TTI] fixing a bug in the isLegalMaskedScatter API
isLegalMaskedScatter called the Gather version, which is a bug.
A use/test case is provided within the AVX2 gathers patch at: https://reviews.llvm.org/D35772

Differential Revision: https://reviews.llvm.org/D35786

llvm-svn: 309260
2017-07-27 10:28:16 +00:00
Max Kazantsev f282aed428 [SCEV] Cache results of computeExitLimit
This patch adds a cache for computeExitLimit to save compilation time. A lot of examples of
tests that take extensive time to compile are attached to bug 33494.

Differential Revision: https://reviews.llvm.org/D35827

llvm-svn: 309080
2017-07-26 04:55:54 +00:00
Sanjoy Das 469e740f2b [SCEV] Remove unnecessary call to forgetMemoizedResults
`SCEVUnknown::allUsesReplacedWith` does not need to call `forgetMemoizedResults`
since RAUW does a value-equivalent replacement by assumption.  If this
assumption was false then the later setValPtr(New) call would be incorrect too.

This is a non-trivial performance optimization for functions with a large number
of loops since `forgetMemoizedResults` walks all loop backedge taken counts to
see if any of them use the SCEVUnknown being RAUWed.  However, this improvement
is difficult to demonstrate without checking in an excessively large IR file.

llvm-svn: 309072
2017-07-26 01:32:19 +00:00
Eugene Zelenko 48666a694c [Analysis] Fix some Clang-tidy modernize-use-using and Include What You Use warnings; other minor fixes (NFC).
llvm-svn: 308936
2017-07-24 23:16:33 +00:00
Max Kazantsev 0e9e0796f4 [SCEV] Limit max size of AddRecExpr during evolving
When SCEV calculates the product of two SCEVAddRecs from the same loop, it
tries to combine them into one big AddRecExpr. If the sizes of the initial
SCEVs were `S1` and `S2`, the size of their product is `S1 + S2 - 1`, and every
operand of the resulting SCEV is combined from operands of the initial SCEVs and
has much higher complexity than they do.

As a result, if we try to calculate something like:
  %x1 = {a,+,b}
  %x2 = mul i32 %x1, %x1
  %x3 = mul i32 %x2, %x1
  %x4 = mul i32 %x3, %x2
  ...
The size of such SCEVs grows as `2^N`, and the arguments
become more and more complex as we go further. This leads
to long compilation times and huge memory consumption.

This patch sets a limit after which we don't try to combine two
`SCEVAddRecExpr`s into one. By default, max allowed size of the
resulting AddRecExpr is set to 16.

Differential Revision: https://reviews.llvm.org/D35664

llvm-svn: 308847
2017-07-23 15:40:19 +00:00
Eugene Zelenko 38c02bc7f5 [Analysis] Fix some Clang-tidy modernize and Include What You Use warnings; other minor fixes (NFC).
llvm-svn: 308787
2017-07-21 21:37:46 +00:00
Jonas Paulsson 024e319489 [SystemZ, LoopStrengthReduce]
This patch makes LSR generate better code for SystemZ in the cases of memory
intrinsics, Load->Store pairs or comparison of immediate with memory.

In order to achieve this, the following common code changes were made:

 * New TTI hook: LSRWithInstrQueries(), which defaults to false. Controls if
 LSR should do instruction-based addressing evaluations by calling
 isLegalAddressingMode() with the Instruction pointers.
 * In LoopStrengthReduce: handle address operands of memset, memmove and memcpy
 as address uses, and call isFoldableMemAccessOffset() for any LSRUse::Address,
 not just loads or stores.

SystemZ changes:

 * isLSRCostLess() implemented with Insns first, and without ImmCost.
 * New function supportedAddressingMode() that is a helper for TTI methods
 looking at Instructions passed via pointers.

Review: Ulrich Weigand, Quentin Colombet
https://reviews.llvm.org/D35262
https://reviews.llvm.org/D35049

llvm-svn: 308729
2017-07-21 11:59:37 +00:00
Chandler Carruth 06a86301a1 [PM/LCG] Follow-up fix to r308088 to handle deletion of library
functions.

In the prior commit, we provide ordering to the LCG between functions
and library function definitions that they might begin to call through
transformations. But we still would delete these library functions from
the call graph if they became dead during inlining.

While this immediately crashed, it also exposed a loss of information.
We shouldn't remove definitions of library functions that can still
usefully participate in the LCG-powered CGSCC optimization process. If
new call edges are formed, we want to have definitions to be called.

We can still remove these functions if truly dead using global-dce, etc,
but removing them during the CGSCC walk is premature.

This fixes a crash in the new PM when optimizing some unusual libraries
that end up with "internal" lib functions such as the code in the "R"
language's libraries.

llvm-svn: 308417
2017-07-19 04:12:25 +00:00
Dorit Nuzman ca4fd18ddc [PSCEV] Create AddRec for Phis in cases of possible integer overflow,
using runtime checks

Extend the SCEVPredicateRewriter to work a bit harder when it encounters an
UnknownSCEV for a Phi node; Try to build an AddRecurrence also for Phi nodes
whose update chain involves casts that can be ignored under the proper runtime
overflow test. This is one step towards addressing PR30654.

Differential revision: http://reviews.llvm.org/D30041

llvm-svn: 308299
2017-07-18 11:57:08 +00:00
Craig Topper 9e465894f8 [Analysis] Remove TotalMemInst counting in InstCount to avoid reading back other Statistic variables
Summary:
Previously, we counted TotalMemInst by reading certain instruction counters before and after calling visit and then finding the difference. But that wouldn't be thread safe if this same pass was being run on multiple threads.

This list of "memory instructions" doesn't make sense to me as it includes call/invoke and is missing atomics.

This patch removes the counter all together.

Reviewers: hfinkel, chandlerc, davide

Reviewed By: davide

Subscribers: davide, llvm-commits

Differential Revision: https://reviews.llvm.org/D33608

llvm-svn: 308260
2017-07-18 02:41:12 +00:00
NAKAMURA Takumi 5869ba8792 Analysis/MemorySSA.cpp: Prune unused "llvm/Transforms/Scalar.h".
llvm-svn: 308162
2017-07-17 04:31:26 +00:00
Craig Topper dad7d8dfb0 [InstSimplify] Use commutable matchers to simplify some code. NFC
llvm-svn: 308125
2017-07-16 06:57:41 +00:00
Chandler Carruth f59a838720 [PM/LCG] Teach the LazyCallGraph to maintain reference edges from every
function to every defined function known to LLVM as a library function.

LLVM can introduce calls to these functions either by replacing other
library calls or by recognizing patterns (such as memset_pattern or
vector math patterns) and replacing those with calls. When these library
functions are actually defined in the module, we need to have reference
edges to them initially so that we visit them during the CGSCC walk in
the right order and can effectively rebuild the call graph afterward.

This was discovered when building code with Fortify enabled as that is
a common case of both inline definitions of library calls and
simplifications of code into calling them.

This can in extreme cases of LTO-ing with libc introduce *many* more
reference edges. I discussed a bunch of different options with folks but
all of them are unsatisfying. They either make the graph operations
substantially more complex even when there are *no* defined libfuncs, or
they introduce some other complexity into the callgraph. So this patch
goes with the simplest possible solution of actual synthetic reference
edges. If this proves to be a memory problem, I'm happy to implement one
of the clever techniques to save memory here.

llvm-svn: 308088
2017-07-15 08:08:19 +00:00
Haicheng Wu abdef9ee7e [TTI] Refine the cost of EXT in getUserCost()
Now, getUserCost() only checks the src and dst types of EXT to decide whether it is free
or not. This change first checks the types, then calls isExtFreeImpl(), and
finally checks whether the EXT can form an ExtLoad. Currently, only AArch64 has a customized
implementation of isExtFreeImpl() to check if EXT can be folded into its use.

Differential Revision: https://reviews.llvm.org/D34458

llvm-svn: 308076
2017-07-15 02:12:16 +00:00
Jakub Kuderski b292c22c8d [Dominators] Make IsPostDominator a template parameter
Summary:
DominatorTreeBase used to have IsPostDominators (bool) member to indicate if the tree is a dominator or a postdominator tree. This made it possible to switch between the two 'modes' at runtime, but it isn't used in practice anywhere.

This patch makes IsPostDominator a template argument. This way, it is easier to switch between different algorithms at compile time based on this argument and to design external utilities around it. It also makes it impossible to incidentally assign a postdominator tree to a dominator tree (and vice versa), and allows template code in GenericDominatorTreeConstruction to be simplified further.

Reviewers: dberlin, sanjoy, davide, grosser

Reviewed By: dberlin

Subscribers: mzolotukhin, llvm-commits

Differential Revision: https://reviews.llvm.org/D35315

llvm-svn: 308040
2017-07-14 18:26:09 +00:00
Chandler Carruth 051bdb0b22 [PM] Fix a silly bug in my recent update to the CG update logic.
I used the wrong variable to update. This was even covered by a unittest
I wrote, and the comments for the unittest were correct (if confusing)
but the test itself just matched the buggy behavior. =[

llvm-svn: 307764
2017-07-12 09:08:11 +00:00
Mikael Holmen ad7e718307 [MemoryBuiltins] Allow truncation in visitAllocaInst()
Summary:
Solves PR33689.

If the pointer size is less than the size of the type used for the array
size in an alloca (the <ty> type below) then we could trigger the assert in
the PR. In that example we have pointer size i16 and <ty> is i32.

<result> = alloca [inalloca] <type> [, <ty> <NumElements>] [, align <alignment>]

Handle the situation by allowing truncation as well as zero extension in
ObjectSizeOffsetVisitor::visitAllocaInst().

Also, we now detect overflow in visitAllocaInst(), similar to how it was
already done in visitCallSite().
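
For intuition, a standalone C++ sketch of the size computation being fixed (the names are made up; the real code works on APInts inside ObjectSizeOffsetVisitor):

  #include <cstdint>
  #include <optional>

  // Pointer width is 16 bits in the PR's example while NumElements is i32, so
  // the product must be truncated to the pointer width, and that truncation
  // can overflow.
  std::optional<uint16_t> allocaSizeInBytes(uint32_t NumElements,
                                            uint32_t ElemSizeInBytes) {
    uint64_t Size = uint64_t(NumElements) * ElemSizeInBytes;
    if (Size > UINT16_MAX)
      return std::nullopt;      // overflow: report "unknown size" instead
    return uint16_t(Size);      // truncation to the pointer width is safe here
  }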

Reviewers: craig.topper, rnk, george.burgess.iv

Reviewed By: george.burgess.iv

Subscribers: davide, llvm-commits

Differential Revision: https://reviews.llvm.org/D35003

llvm-svn: 307754
2017-07-12 06:19:10 +00:00
NAKAMURA Takumi a089dd86a3 Whitespace.
llvm-svn: 307614
2017-07-11 02:31:54 +00:00
NAKAMURA Takumi 76bab1f20b Revert r307581, "Avoid doing conservative phi checks in aliasSameBasePointerGEPs() if no phis have been visited yet."
It broke stage2 tests in selfhosting.

llvm-svn: 307613
2017-07-11 02:31:51 +00:00
Farhana Aleen 2ff973f2a5 Avoid doing conservative phi checks in aliasSameBasePointerGEPs() if no phis have been visited yet.
Reviewers: Daniel Berlin

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D34478

llvm-svn: 307581
2017-07-10 20:15:40 +00:00
Hiroshi Inoue a86c920b1e fix typos in comments and error messages; NFC
llvm-svn: 307533
2017-07-10 12:44:25 +00:00
Chandler Carruth c213c67df8 [PM] Fix a nasty bug in the new PM where we failed to properly
invalidate analyses when merging SCCs.

While I've added a bunch of testing of this, it takes something much
more like the inliner to really trigger this as you need to have
partially-analyzed SCCs with updates at just the right time. So I've
added a direct test for this using the inliner and verifying the
domtree. Without the changes here, this test ends up finding a stale
dominator tree.

However, to handle this properly, we need to invalidate analyses
*before* merging the SCCs. After talking to Philip and Sanjoy about this
they convinced me this was the right approach. To do this, we need
a callback mechanism when merging SCCs so we can observe the cycle that
will be merged before the merge happens. This API update ended up being
surprisingly easy.

With this commit, the new PM passes the test-suite again. It hadn't
since MemorySSA was enabled for EarlyCSE as that also will find this bug
very quickly.

llvm-svn: 307498
2017-07-09 13:45:11 +00:00
Chandler Carruth 7c8964d885 [PM] Add unittesting of the call graph update logic with complex
dependencies between analyses.

This uncovers even more issues with the proxies and the splitting apart
of SCCs which are fixed in this patch. I discovered this while trying to
add more rigorous testing for a change I'm making to the call graph
update invalidation logic.

llvm-svn: 307497
2017-07-09 13:16:55 +00:00
Craig Topper fde4723ebe [IR] Add Type::isIntOrIntVectorTy(unsigned) similar to the existing isIntegerTy(unsigned), but also works for vectors.
llvm-svn: 307492
2017-07-09 07:04:03 +00:00
Craig Topper 95d2347ae1 [IR] Make use of Type::isPtrOrPtrVectorTy/isIntOrIntVectorTy/isFPOrFPVectorTy to shorten code. NFC
llvm-svn: 307491
2017-07-09 07:04:00 +00:00
Hiroshi Inoue 713b5ba2de fix trivial typos; NFC
sucessor -> successor 

llvm-svn: 307488
2017-07-09 05:54:44 +00:00
Chandler Carruth bd9c29039e [PM] Finish implementing and fix a chain of bugs uncovered by testing
the invalidation propagation logic from an SCC to a Function.

I wrote the infrastructure to test this but didn't actually use it in
the unit test where it was designed to be used. =[ My bad. Once
I actually added it to the test case I discovered that it also hadn't
been properly implemented, so I've implemented it. The logic in the FAM
proxy for an SCC pass to propagate invalidation follows the same ideas
as the FAM proxy for a Module pass, but the implementation is a bit
different to reflect the fact that it is forwarding just for an SCC.

However, implementing this correctly uncovered a surprising "bug" (it
was conservatively correct but relatively very expensive) in how we
handle invalidation when splitting one SCC into multiple SCCs. We did an
eager invalidation when in reality we should be deferring invalidation
for the *current* SCC to the CGSCC pass manager and just invalidating the
newly constructed SCCs. Otherwise we end up invalidating too much too
soon. This was exposed by the inliner test case that I've updated. Now,
we invalidate *just* the split off '(test1_f)' SCC when doing the CG
update, and then the inliner finishes and invalidates the '(test1_g,
test1_h)' SCC's analyses. The first few attempts at fixing this hit
still more bugs, but all of those are covered by existing tests. For
example, the inliner should also preserve the FAM proxy to avoid
unnecessary invalidation, and this is safe because the CG update
routines it uses handle any necessary adjustments to the FAM proxy.

Finally, the unittests for the CGSCC pass manager needed a bunch of
updates where we weren't correctly preserving the FAM proxy because it
hadn't been fully implemented and failing to preserve it didn't matter.

Note that this doesn't yet fix the current crasher due to MemSSA finding
a stale dominator tree, but without this the fix to that crasher doesn't
really make any sense when testing because it relies on the proxy
behavior.

llvm-svn: 307487
2017-07-09 03:59:31 +00:00
Dehao Chen 64c46574b0 Increase the import-threshold for critical functions.
Summary: For iterative sample-pgo, if a hot call site is inlined in the profiling binary, we should inline it before profile annotation in the backend. Before that, the compile phase first collects all GUIDs that need to be imported and creates virtual "hot" call edges in the summary. However, "hot" is not good enough to guarantee that the callsites get inlined. This patch introduces "critical" call edges, and assigns a much higher importing threshold for those edges.

Reviewers: tejohnson

Reviewed By: tejohnson

Subscribers: sanjoy, mehdi_amini, llvm-commits, eraman

Differential Revision: https://reviews.llvm.org/D35096

llvm-svn: 307439
2017-07-07 21:01:00 +00:00
Sanjay Patel 1bbdf4e11a [DemandedBits] fix formatting; NFC
llvm-svn: 307403
2017-07-07 14:39:26 +00:00
Chad Rosier 3f02123f7c [ValueTracking] Fix the identity case (LHS => RHS) when the LHS is false.
Prior to this commit both of the added test cases were passing.  However, in the
latter case (test7) we were doing a lot more work to arrive at the same answer
(i.e., we were using isImpliedCondMatchingOperands() to determine the
implication.).

llvm-svn: 307400
2017-07-07 13:55:55 +00:00
Sean Fertile 9cd1cdf814 Extend memcpy expansion in Transform/Utils to handle wider operand types.
Adds loop expansions for known-size and unknown-sized memcpy calls, allowing the
target to provide the operand types through TTI callbacks. The default values
for the TTI callbacks use int8 operand types and match the existing behaviour
if they aren't overridden by the target.
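
Roughly what the known-size expansion amounts to, sketched as plain C++ (illustrative only; the real pass emits IR, and the unit type comes from the TTI callback rather than being hard-coded to uint64_t):

  #include <cstddef>
  #include <cstdint>
  #include <cstring>

  // Main loop copies in wide units, then a byte loop handles the residual.
  void expandedMemCpy(void *Dst, const void *Src, size_t Size) {
    auto *D = static_cast<uint8_t *>(Dst);
    auto *S = static_cast<const uint8_t *>(Src);
    const size_t Unit = sizeof(uint64_t);
    size_t MainLoopCount = Size / Unit;
    for (size_t I = 0; I < MainLoopCount; ++I) {
      uint64_t Tmp;
      std::memcpy(&Tmp, S + I * Unit, Unit); // wide load (memcpy avoids UB)
      std::memcpy(D + I * Unit, &Tmp, Unit); // wide store
    }
    for (size_t I = MainLoopCount * Unit; I < Size; ++I)
      D[I] = S[I];                           // residual bytes
  }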

Differential revision: https://reviews.llvm.org/D32536

llvm-svn: 307346
2017-07-07 02:00:06 +00:00
Chad Rosier a72a9ff557 [ValueTracking] Support icmps fed by 'and' and 'or'.
This patch adds support for handling some forms of ands and ors in
ValueTracking's isImpliedCondition API.

PR33611
https://reviews.llvm.org/D34901

llvm-svn: 307304
2017-07-06 20:00:25 +00:00
Craig Topper ca2c87653c [Constants] Replace calls to ConstantInt::equalsInt(0)/equalsInt(1) with isZero and isOne. NFCI
llvm-svn: 307293
2017-07-06 18:39:49 +00:00
Craig Topper 79ab643da8 [Constants] If we already have a ConstantInt*, prefer to use isZero/isOne/isMinusOne instead of isNullValue/isOneValue/isAllOnesValue inherited from Constant. NFCI
Going through the Constant methods requires redetermining that the Constant is a ConstantInt and then calling isZero/isOne/isMinusOne.

llvm-svn: 307292
2017-07-06 18:39:47 +00:00
Brendon Cahoon cb8c7b912d [DependenceAnalysis] Make sure base objects are the same when comparing GEPs
The dependence analysis was returning incorrect information when using the GEPs
to compute dependences. The analysis uses the GEP indices under certain
conditions, but was doing it incorrectly when the base objects of the GEP are
aliases, but pointing to different locations in the same array.

This patch adds another check for the base objects. If the base pointer SCEVs
are not equal, then the dependence analysis should fall back on the path
that uses the whole SCEV for the dependence check. This fixes PR33567.

Differential Revision: https://reviews.llvm.org/D34702

llvm-svn: 307203
2017-07-05 21:35:47 +00:00
Hiroshi Inoue ef1c2ba22a fix trivial typos, NFC
llvm-svn: 306952
2017-07-01 07:12:15 +00:00
Jakub Kuderski 604a22b9fb [Dominators] Reapply r306892, r306893, r306893.
This reverts commit r306907 and reapplies the patches in the title.
The patches used to make one of the
CodeGen/ARM/2011-02-07-AntidepClobber.ll test to fail because of a
missing null check.

llvm-svn: 306919
2017-07-01 00:23:01 +00:00
Brian Gesiak 4ef3daafef [ORE] Add diagnostics hotness threshold
Summary:
Add an option to prevent diagnostics that do not meet a minimum hotness
threshold from being output. When generating optimization remarks for
large codebases with a ton of cold code paths, this option can be used
to limit the optimization remark output to a reasonable size. Discussion of
this change can be read here:
http://lists.llvm.org/pipermail/llvm-dev/2017-June/114377.html

Reviewers: anemet, davidxl, hfinkel

Reviewed By: anemet

Subscribers: qcolombet, javed.absar, fhahn, eraman, llvm-commits

Differential Revision: https://reviews.llvm.org/D34867

llvm-svn: 306912
2017-06-30 23:14:53 +00:00
Jakub Kuderski 0c3d76179c Revert "[Dominators] Teach IDF to use level information"
This reverts commit r306894.

Revert "[Dominators] Add NearestCommonDominator verification"

This reverts commit r306893.

Revert "[Dominators] Keep tree level in DomTreeNode and use it to find NCD and answer dominance queries"

This reverts commit r306892.

llvm-svn: 306907
2017-06-30 22:56:28 +00:00
Jakub Kuderski c008779918 [Dominators] Teach IDF to use level information
Summary: This patch teaches IteratedDominanceFrontier to use the level information stored in DomTreeNodes instead of calculating it manually.

Reviewers: dberlin, sanjoy, davide

Reviewed By: davide

Subscribers: davide, llvm-commits

Differential Revision: https://reviews.llvm.org/D34703

llvm-svn: 306894
2017-06-30 21:51:43 +00:00
Brian Gesiak 44e5f6c4ac [ORE] Unify spelling as "diagnostics hotness"
Summary:
To enable profile hotness information in diagnostics output, Clang takes
the option `-fdiagnostics-show-hotness` -- that's "diagnostics", with an
"s" at the end. Clang also defines `CodeGenOptions::DiagnosticsWithHotness`.

LLVM, on the other hand, defines
`LLVMContext::getDiagnosticHotnessRequested` -- that's "diagnostic", not
"diagnostics". It's a small difference, but it's confusing, typo-inducing, and
frustrating.

Add a new method with the spelling "diagnostics", and "deprecate" the
old spelling.

Reviewers: anemet, davidxl

Reviewed By: anemet

Subscribers: llvm-commits, mehdi_amini

Differential Revision: https://reviews.llvm.org/D34864

llvm-svn: 306848
2017-06-30 18:13:59 +00:00
Nikolai Bozhenov bde9b14c6f Revert of r306525: "Canonicalize clamp of float types to minmax"
llvm-svn: 306815
2017-06-30 10:39:09 +00:00
Max Kazantsev 8d0322e612 [SCEV] Use depth limit instead of local cache for SExt and ZExt
In rL300494 there was an attempt to deal with excessive compile time on
invocations of getSign/ZeroExtExpr using local caching. This approach only
helps if we request the same SCEV multiple times throughout recursion. But
in bug PR33431 we see a case where we request different values all the time,
so caching does not help and the size of the cache grows enormously.

In this patch we remove the local cache for these methods and add a recursion
depth limit instead, as we do for arithmetic. This gives us a guarantee that the
invocation sequence is limited and reasonably short.

Differential Revision: https://reviews.llvm.org/D34273

llvm-svn: 306785
2017-06-30 05:04:09 +00:00
Davide Italiano f6b3d21198 [CFLAA] Remove unneded function declaration. NFCI.
llvm-svn: 306754
2017-06-29 22:57:37 +00:00
Alexandre Isoard 41044876fc Reverting r306695 while investigating failing test case.
Failing test case:
    Transforms/LoopVectorize.iv_outside_user.ll

llvm-svn: 306723
2017-06-29 18:48:56 +00:00
Alexandre Isoard aa29afc756 ScalarEvolution: Add URem support
In LLVM IR the following code:

    %r = urem <ty> %t, %b

is equivalent to:

    %q = udiv <ty> %t, %b
    %s = mul nuw <ty> %q, %b
    %r = sub nuw <ty> %t, %s ; (t / b) * b + (t % b) = t

As UDiv, Mul and Sub are already supported by SCEV, URem can be
implemented with minimal effort this way.

Note: While SRem and SDiv are also related this way, SCEV does not
provide SDiv yet.
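
A quick standalone sanity check of the identity, in C++ (illustration only):

  #include <cassert>
  #include <cstdint>

  int main() {
    uint32_t T = 12345, B = 7;
    uint32_t Q = T / B;   // udiv
    uint32_t S = Q * B;   // mul nuw
    uint32_t R = T - S;   // sub nuw
    assert(R == T % B);   // (t / b) * b + (t % b) == t
    return 0;
  }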

llvm-svn: 306695
2017-06-29 16:29:04 +00:00
Florian Hahn 8a44b7be76 [TBAA] Remove metadata keyword from IR examples in comments (NFC).
The metadata keyword has been removed from the IR.

llvm-svn: 306675
2017-06-29 13:55:23 +00:00
Evgeny Astigeevich 70ed78e504 [TargetTransformInfo, API] Add a list of operands to TTI::getUserCost
The changes are a result of discussion of https://reviews.llvm.org/D33685.
It solves the following problem:

1. We can inform getGEPCost about simplified indices to help it with
   calculating the cost. But getGEPCost does not take into account the
   context which GEPs are used in.
2. We have getUserCost which can take the context into account but we cannot
   inform about simplified indices.

With these changes getUserCost will have access to the same additional
information as getGEPCost has.

The one-parameter getUserCost is also provided.

Differential Revision: https://reviews.llvm.org/D34057

llvm-svn: 306674
2017-06-29 13:42:12 +00:00
Eric Christopher 7ad02eee8a Fix a typo.
llvm-svn: 306599
2017-06-28 21:10:31 +00:00
Geoff Berry 66d9bdbca8 [LoopUnroll] Pass SCEV to getUnrollingPreferences hook. NFCI.
Reviewers: sanjoy, anna, reames, apilipenko, igor-laevsky, mkuper

Subscribers: jholewinski, arsenm, mzolotukhin, nemanjai, nhaehnle, javed.absar, mcrosier, llvm-commits

Differential Revision: https://reviews.llvm.org/D34531

llvm-svn: 306554
2017-06-28 15:53:17 +00:00
Nikolai Bozhenov 6710ba07c7 Revert r306528
llvm-svn: 306536
2017-06-28 12:15:13 +00:00
Nikolai Bozhenov 77b5536e4e [ValueTracking] Enabling existing ValueTracking patch by default.
The original patch was an improvement to IR ValueTracking on non-negative
integers. It was checked in to trunk (D18777, r284022) but was disabled by
default due to performance regressions.
The perf impact has since improved, so the patch is now enabled by default.

Reviewers: reames

Differential Revision: https://reviews.llvm.org/D34101

Patch by: Olga Chupina <olga.chupina@intel.com>

llvm-svn: 306528
2017-06-28 10:08:08 +00:00
Nikolai Bozhenov b01e6b5a52 [InstCombine] Canonicalize clamp of float types to minmax in fast mode.
Summary:
This commit allows matchSelectPattern to recognize clamp of float
arguments in the presence of FMF the same way as already done for
integers.

This case is a little different though. With integers, given the
min/max pattern is recognized, DAGBuilder starts selecting MIN/MAX
"automatically". That is not the case for float, because for them only
full FMINNAN/FMINNUM/FMAXNAN/FMAXNUM ISD nodes exist and they do care
about NaNs. On the other hand, some backends (e.g. X86) have only
FMIN/FMAX nodes that do not care about NaNs, and the former NAN/NUM
nodes are illegal, so selection does not happen. So I decided to do
this kind of transformation in IR (InstCombiner) instead of
complicating the logic in the backend.

Reviewers: spatel, jmolloy, majnemer, efriedma, craig.topper

Reviewed By: efriedma

Subscribers: hiraditya, javed.absar, n.bozhenov, llvm-commits

Patch by Andrei Elovikov <andrei.elovikov@intel.com>

Differential Revision: https://reviews.llvm.org/D33186

llvm-svn: 306525
2017-06-28 09:26:20 +00:00
Easwaran Raman c5fa6358ba [NewPM/Inliner] Reduce threshold for cold callsites in the non-PGO case
Differential Revision: https://reviews.llvm.org/D34312

llvm-svn: 306484
2017-06-27 23:11:18 +00:00
Eugene Zelenko 4f820d0e01 [Analysis] Fix some Clang-tidy modernize-use-using and Include What You Use warnings; other minor fixes (NFC).
llvm-svn: 306472
2017-06-27 21:52:05 +00:00
Davide Italiano 31d4c1bbbc [CFLAA] Move a common function to the header to reduce duplication.
Differential Revision:  https://reviews.llvm.org/D34660

llvm-svn: 306354
2017-06-27 02:25:06 +00:00
Davide Italiano 604c003f5f [CFLAA] Use raw pointers instead of Optional<Pointer>. NFC.
Using Optional<> here doesn't seem to be terribly valuable, but
this is not the main point of this change. The change enables
us to merge the (now) two identical copies of parentFunctionOfValue()
that Steensgaard's and Andersens' provide.

llvm-svn: 306351
2017-06-27 00:33:37 +00:00
Davide Italiano e34a806431 [CFLAA] Change FunctionHandle to be common to Steensgaard's and Andersens'
Differential Revision:  https://reviews.llvm.org/D34638

llvm-svn: 306348
2017-06-26 23:59:14 +00:00
Davide Italiano 9a02494230 [CFL-AA] Remove unneeded function declaration. NFCI.
llvm-svn: 306268
2017-06-26 03:55:41 +00:00
Davide Italiano f15fb368a3 [MemDep] Cleanup return after else & use `auto`. NFC.
llvm-svn: 306255
2017-06-25 22:12:59 +00:00
Xin Tong 70f7512add [AST] Fix a bug in aliasesUnknownInst. Make sure we are comparing the unknown instructions in the alias set and the instruction we are interested in.
Summary:
Make sure we are comparing the unknown instructions in the alias set and the instruction we are interested in.
I believe this is clearly a bug (missed opportunity). I can also add some test cases if desired.

Reviewers: hfinkel, davide, dberlin

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D34597

llvm-svn: 306241
2017-06-25 12:55:11 +00:00
Craig Topper 010203964d [SCEV] Avoid copying ConstantRange just to get the min/max value
Summary:
This patch changes getRange to getRangeRef and returns a reference to the ConstantRange object stored inside the DenseMap caches. We then take advantage of that to add new helper methods that can return min/max value of a signed or unsigned ConstantRange using that reference without first copying the ConstantRange.

getRangeRef calls itself recursively and I believe the reference return is fine for those calls.

I've left getSignedRange and getUnsignedRange returning a ConstantRange object so they will make a copy now. This is to ensure safety since the reference will be invalidated if the DenseMap changes.

I'm sure there are still more places that can take advantage of the reference and I'll submit future patches as I find them.

Reviewers: sanjoy, davide

Reviewed By: sanjoy

Subscribers: zzheng, llvm-commits, mzolotukhin

Differential Revision: https://reviews.llvm.org/D32978

llvm-svn: 306229
2017-06-24 23:34:50 +00:00
Hiroshi Inoue a85d24b73d fix trivial typos in comment, NFC
llvm-svn: 306211
2017-06-24 16:00:26 +00:00
Craig Topper 8bec6a4e1c [IR][AssumptionCache] Add m_Shift and m_BitwiseLogic matchers to replace a couple m_CombineOr
Summary:
m_CombineOr isn't very efficient. The code using it is also quite verbose.

This patch adds m_Shift and m_BitwiseLogic matchers to make the using code more concise and improve the match efficiency.

Reviewers: spatel, davide

Reviewed By: davide

Subscribers: davide, llvm-commits

Differential Revision: https://reviews.llvm.org/D34593

llvm-svn: 306206
2017-06-24 06:27:14 +00:00
Craig Topper 7b66ffe875 [ValueTracking][InstCombine] Use m_Shr instead m_CombineOr(m_LShr, m_AShr). NFC
llvm-svn: 306205
2017-06-24 06:24:04 +00:00
Craig Topper 72ee6945af [Analysis][Transforms] Use commutable matchers instead of m_CombineOr in a few places. NFC
llvm-svn: 306204
2017-06-24 06:24:01 +00:00
Vitaly Buka 9c2a036276 Make visible isDereferenceableAndAlignedPointer(..., const APInt &Size, ...)
Summary: Used by D34311 and D34467

Reviewers: hfinkel, efriedma

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D34585

llvm-svn: 306193
2017-06-24 01:35:13 +00:00
Jun Bum Lim 506cfb7ab7 [InlineCost] Do not take INT_MAX when Cost is negative
Summary: visitSwitchInst should not take INT_MAX when Cost is negative. Instead of INT_MAX, we also use a valid upper-bound cost when overflow occurs in Cost.

Reviewers: hans, echristo, dmgreen

Reviewed By: dmgreen

Subscribers: mcrosier, javed.absar, llvm-commits, eraman

Differential Revision: https://reviews.llvm.org/D34436

llvm-svn: 306118
2017-06-23 16:12:37 +00:00
Craig Topper 2c20c42cb6 [JumpThreading] Teach jump threading how to analyze (and (cmp A, C1), (cmp A, C2)) after InstCombine has turned it into (cmp (add A, C3), C4)
Currently JumpThreading can use LazyValueInfo to analyze an 'and' or 'or' of compare if the compare is fed by a livein of a basic block. This can be used to prove the condition can't be met for some predecessor and the jump from that predecessor can be moved to the false path of the condition.

But if the compare is something that InstCombine turns into an add and a single compare, it can't be analyzed because the livein is now an input to the add and not the compare.

This patch adds a new method to LVI to get a ConstantRange on an edge. Then we teach jump threading to detect the add livein feeding a compare and to get the ConstantRange and propagate it.
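
For intuition, the source-level shape of the pattern in C++ (an illustration, not the IR the pass actually sees):

  #include <cstdint>

  bool inRangeOriginal(uint32_t A) {
    return A >= 5 && A <= 10;  // (and (cmp A, 5), (cmp A, 10))
  }

  bool inRangeCombined(uint32_t A) {
    // InstCombine-style form: one add plus one unsigned compare; LVI can now
    // describe the edge as the ConstantRange [5, 11) for A.
    return (A - 5) <= 5u;
  }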

Differential Revision: https://reviews.llvm.org/D33262

llvm-svn: 306085
2017-06-23 05:41:35 +00:00
Craig Topper b60f866a8b [LVI] Teach LVI to reason about ORs of icmps similar to how it reasons about ANDs of icmps
Summary: LVI can reason about an AND of icmps on the true dest of a branch. I believe we can do something similar for the false dest of ORs. This allows us to get the same answer for the De Morgan-transformed versions of some of the AND test cases, as you can see.

Reviewers: anna, reames

Reviewed By: reames

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D34431

llvm-svn: 306076
2017-06-23 01:08:16 +00:00
Craig Topper d3711ee93e [BasicAA] Add type check and Value equality check around code added in r305481.
This matches the checks done at the beginning of isKnownNonEqual that this code is partially emulating.

Without this we can get assertion failures due to the bit widths of the KnownBits not matching.

llvm-svn: 306044
2017-06-22 19:04:14 +00:00
Michael Kruse 47f856095a [BasicAA] Use MayAlias instead of PartialAlias for fallback.
Using various methods, BasicAA tries to determine whether two
GetElementPtr memory locations alias when its base pointers are known
to be equal. When none of its heuristics are applicable, it falls back
to PartialAlias to, according to a comment, protect TBAA from making a wrong
decision in the case of unions and malloc. PartialAlias is not correct,
because a PartialAlias result implies that some, but not all, bytes
overlap which is not necessarily the case here.

AAResults returns the first analysis result that is not MayAlias.
BasicAA is always the first alias analysis. When it returns
PartialAlias, no other analysis is queried to give a more exact result
(which was the intention of returning PartialAlias instead of MayAlias).
For instance, ScopedAA could return a more accurate result.

The PartialAlias hack was introduced in r131781 (and re-applied in
r132632 after some reverts) to fix llvm.org/PR9971 where TBAA returns a
wrong NoAlias result due to a union. A test case for the malloc case
mentioned in the comment was not provided and I don't think it is
affected since it returns an omnipotent char anyway.

Since r303851 (https://reviews.llvm.org/D33328) clang does not emit specific
TBAA for unions anymore (but "omnipotent char" instead). Hence, the
PartialAlias workaround is not required anymore.

This patch passes the test-suite and check-llvm/check-clang of a
self-hosted build on x64.

Reviewed By: hfinkel

Differential Revision: https://reviews.llvm.org/D34318

llvm-svn: 305938
2017-06-21 18:25:37 +00:00
Max Kazantsev eac01d4c62 [SCEV] Make MulOpsInlineThreshold lower to avoid excessive compilation time
The MulOpsInlineThreshold option of SCEV defaults to 1000, which is inadequately high.
When constructing SCEVs of expressions like:

  x1 = a * a
  x2 = x1 * x1
  x3 = x2 * x2
    ...

We actually end up with huge SCEVs with the maximum allowed number of operands inlined.
Such expressions are easy to get from unrolling loops looking like

  x = a
  for (i = 0; i < n; i++)
    x = x * x

Or more tricky cases where big powers are involved. If some non-linear analysis
tries to work with a SCEV that has 1000 operands, it may lead to excessively long
compilation. The attached test does not pass within 1 minute with the default threshold.

This patch decreases its default value to 32, which looks much more reasonable if we
use analyses with complexity O(N^2) or O(N^3) working with SCEV.

Differential Revision: https://reviews.llvm.org/D34397

llvm-svn: 305882
2017-06-21 07:28:13 +00:00
Max Kazantsev 0bcf6ec85c [SCEV][NFC] Fix a misleading description of AddOpsInlineThreshold
The description of this option was copy-pasted from another one and does not
correspond to reality.

Differential Revision: https://reviews.llvm.org/D34390

llvm-svn: 305782
2017-06-20 08:37:31 +00:00
Xin Tong bb8dbcf915 [BDCE] Add comments. NFC
llvm-svn: 305739
2017-06-19 20:10:41 +00:00
Max Kazantsev 35b2a18eb9 [SCEV] Teach SCEVExpander to expand BinPow
The current implementation of SCEVExpander demonstrates very naive behavior when
it deals with power calculation. For example, a SCEV for x^8 looks like

  (x * x * x * x * x * x * x * x)

If we try to expand it, it generates a very straightforward sequence of muls, like:

  x2 = mul x, x
  x3 = mul x2, x
  x4 = mul x3, x
      ...
  x8 = mul x7, x

This is an inefficient way of doing that. A better way is to generate a sequence of
binary power calculations. In this case the expanded calculation will look like:

  x2 = mul x, x
  x4 = mul x2, x2
  x8 = mul x4, x4

In some cases the code size reduction for such SCEVs is dramatic. If we had a loop:

  x = a;
  for (int i = 0; i < 3; i++)
    x = x * x;

And this loop has been fully unrolled, we have something like:

  x = a;
  x2 = x * x;
  x4 = x2 * x2;
  x8 = x4 * x4;

The SCEV for x8 is the same as in the example above, and if we for some reason
want to expand it, we will naively generate 7 multiplications instead of 3.
The BinPow expansion algorithm here keeps the code size reasonable.

This patch teaches SCEV Expander to generate a sequence of BinPow multiplications
if we have repeating arguments in SCEVMulExpressions.
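
The underlying idea is ordinary binary exponentiation; a small standalone C++ sketch of it (not the SCEVExpander code itself):

  #include <cstdint>

  // Square-and-multiply: O(log n) multiplications instead of n - 1, matching
  // the x2/x4/x8 chain above (3 muls for x^8).
  uint64_t binPow(uint64_t X, unsigned N) {
    uint64_t Result = 1;
    while (N != 0) {
      if (N & 1)
        Result *= X; // this bit of the exponent is set
      X *= X;        // square the base for the next bit
      N >>= 1;
    }
    return Result;
  }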

Differential Revision: https://reviews.llvm.org/D34025

llvm-svn: 305663
2017-06-19 06:24:53 +00:00
Alexander Timofeev 0f9c84cd93 DivergencyAnalysis patch for review
llvm-svn: 305494
2017-06-15 19:33:10 +00:00
Craig Topper 587525468d [BasicAA] Don't call isKnownNonEqual if we might be have gone through a PHINode.
This is a fix for the test case in PR32314.

Basic Alias Analysis can ask if two nodes are known non-equal after looking through a phi node to find a GEP. isAddOfNonZero saw an add of a constant from the same phi and said that its output couldn't be equal. But Basic Alias Analysis was really asking about the value from the previous loop iteration.

This patch at least makes that case not happen anymore; I'm not sure if there are still other ways this can fail. As was discussed in the bug, it looks like fixing BasicAA would be difficult, so this patch seemed like a possible workaround.

Differential Revision: https://reviews.llvm.org/D33136

llvm-svn: 305481
2017-06-15 17:16:56 +00:00
Max Kazantsev dc80366d52 [ScalarEvolution] Apply Depth limit to getMulExpr
This is a fix for PR33292 that shows a case of extremely long compilation
of a single .c file with clang, with most time spent within SCEV.

We have a mechanism of limiting recursion depth for getAddExpr to avoid
long analysis in SCEV. However, there are calls from getAddExpr to getMulExpr
and back that do not propagate the info about depth. As a result of this, a chain

  getAddExpr -> ... .> getAddExpr -> getMulExpr -> getAddExpr -> ... -> getAddExpr

can be extremely long, with every segment of getAddExpr's being up to max depth long.
This leads either to long compilation or a crash by stack overflow. We face this situation while
analyzing big SCEVs in the test of PR33292.

This patch applies the same limit on max expression depth for getAddExpr and getMulExpr.

Differential Revision: https://reviews.llvm.org/D33984

llvm-svn: 305463
2017-06-15 11:48:21 +00:00
Craig Topper f93b7b1c1f [ValueTracking] Correct early out in computeKnownBitsFromOperator to work with non power of 2 bit widths
There's an early out that's trying to detect when we don't know any bits that make up the legal range of a shift. The code subtracts one from BitWidth, which creates a mask in the lower bits for power-of-2 bit widths. This is then ANDed with the known bits to see if any of those bits are known. If the bit width isn't a power of 2, this creates a nonsensical mask.

This patch corrects this by rounding up to a power of 2 before doing the subtract and mask.
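
A worked example of the mask change, as a standalone C++ snippet (illustrative numbers; the in-tree fix uses APInt helpers rather than <bit>):

  #include <cassert>
  #include <bit>       // std::bit_ceil, C++20
  #include <cstdint>

  int main() {
    unsigned BitWidth = 20;                          // not a power of 2
    uint32_t OldMask = BitWidth - 1;                 // 19 = 0b10011, misses bits 2 and 3
    uint32_t NewMask = std::bit_ceil(BitWidth) - 1;  // 31 = 0b11111, covers bits 0..4
    assert(OldMask == 0x13 && NewMask == 0x1F);
    return 0;
  }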

Differential Revision: https://reviews.llvm.org/D34165

llvm-svn: 305400
2017-06-14 17:04:59 +00:00
Simon Pilgrim 7ce9926ce4 Strip UTF8 BOM that got added for some reason in rL305163
llvm-svn: 305282
2017-06-13 09:58:27 +00:00
Sanjay Patel 2ad88f81f0 fix typos/formatting; NFC
llvm-svn: 305243
2017-06-12 22:34:37 +00:00
Yaron Keren 7d46392124 Address http://bugs.llvm.org/pr32207 by making BannerPrinted local to runOnSCC and skipping banner for function declarations.
Reviewed By: Mehdi AMINI

Differential Revision: https://reviews.llvm.org/D34086

llvm-svn: 305179
2017-06-12 02:18:50 +00:00
Simon Pilgrim 516938452f Fix unused variable warning on non-debug EXPENSIVE_CHECKS builds
llvm-svn: 305163
2017-06-11 12:49:29 +00:00
Davide Italiano 83122058cf [MemorySSA] preservesAll() implies preserves<MemorySSA>(). NFCI.
llvm-svn: 305160
2017-06-11 01:05:45 +00:00
Andrew Kaylor 647025f9e1 [InstSimplify] Don't constant fold or DCE calls that are marked nobuiltin
Differential Revision: https://reviews.llvm.org/D33737

llvm-svn: 305132
2017-06-09 23:18:11 +00:00
Craig Topper 7ad13f259f [LVI] Fix spelling error in comment. NFC
llvm-svn: 305115
2017-06-09 21:21:17 +00:00
Craig Topper 6dd9dcf26e [LVI] Const correct and rename the LVILatticeVal parameter to getPredicateResult. NFC
Previously it was a non-const reference named Result, which would tend to make someone think that it was an outparam when really it's an input.

llvm-svn: 305114
2017-06-09 21:18:16 +00:00
Craig Topper 31ce4ec2fd [LazyValueInfo] Don't run the more complex predicate handling code for EQ and NE in getPredicateResult
Summary:
Unless I'm mistaken, the special handling for EQ/NE should cover everything and there is no reason to fall through to the more complex code. For that matter I'm not sure there's any reason to special-case EQ/NE other than avoiding creating temporary ConstantRanges.

This patch moves the complex code into an else so we only do it when we are handling a predicate other than EQ/NE.

Reviewers: anna, reames, resistor, Farhana

Reviewed By: anna

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D34000

llvm-svn: 305086
2017-06-09 16:16:20 +00:00
Sanjay Patel fef83e8fb9 [ValueTracking] fix typo; NFC
llvm-svn: 305080
2017-06-09 14:21:18 +00:00
Peter Collingbourne e357fbd243 Write summaries for merged modules when splitting modules for ThinLTO.
This is to prepare to allow for dead stripping of globals in the
merged modules.

Differential Revision: https://reviews.llvm.org/D33921

llvm-svn: 305027
2017-06-08 23:01:49 +00:00
Craig Topper db52809e77 [LazyValueInfo] Make LVILatticeVal intersect method take arguments by reference so we don't copy ConstantRanges unless we need to.
llvm-svn: 304990
2017-06-08 17:08:58 +00:00
John Brawn da4a68a1d2 [BPI] Don't assume that strcmp returning >0 is more likely than <0
The zero heuristic assumes that integers are more likely positive than negative,
but this also has the effect of assuming that strcmp return values are more
likely positive than negative. Given that for nonzero strcmp return values it's
the ordering of arguments that determines the sign of the result, there's no
reason to assume that's true.

Fix this by inspecting the LHS of the compare and using TargetLibraryInfo to
decide if it's strcmp-like, and if so only assume that nonzero is more likely
than zero i.e. strings are more often different than the same. This causes a
slight code generation change in the spec2006 benchmark 403.gcc, but with no
noticeable performance impact. The intent of this patch is to allow better
optimisation of dhrystone on Cortex-M cpus, but it currently doesn't, as there are
also some changes that need to be made to if-conversion.
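
In source terms, the heuristic change amounts to the following (illustrative C++ only):

  #include <cstring>

  // Before: the zero heuristic treated "strcmp(...) > 0" as more likely than
  // "< 0", as if the result were an ordinary integer. After this patch only
  // "!= 0" (strings differ) is considered more likely than "== 0".
  bool differ(const char *A, const char *B) {
    return std::strcmp(A, B) != 0;  // still predicted likely
  }

  bool lessThan(const char *A, const char *B) {
    return std::strcmp(A, B) < 0;   // no longer assumed less likely than "> 0"
  }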

Differential Revision: https://reviews.llvm.org/D33934

llvm-svn: 304970
2017-06-08 09:44:40 +00:00
David Blaikie 7a9b788830 GlobalsModRef: Ensure optnone+readonly/readnone attributes are respected
llvm-svn: 304945
2017-06-07 21:37:39 +00:00
Alina Sbirlea 33e5872367 [mssa] Fix case when there is no definition in a block prior to an inserted use.
Summary:
Check that the first access before the one being tested is valid.
Before this patch, if there was no definition prior to the Use being tested,
the first time Iter was dereferenced, it hit the sentinel.

Reviewers: dberlin, gbiv

Subscribers: sanjoy, Prazek, llvm-commits

Differential Revision: https://reviews.llvm.org/D33950

llvm-svn: 304926
2017-06-07 16:46:53 +00:00
Craig Topper 73ba1c84be [InstCombine][InstSimplify] Use APInt::isNullValue/isOneValue to reduce compiled code for comparing APInts with 0 and 1. NFC
These methods are specifically optimized to only count leading zeros without an additional uint64_t compare.

llvm-svn: 304876
2017-06-07 07:40:37 +00:00
NAKAMURA Takumi 92c99cd6dc Update libdeps to add BinaryFormat, introduced in r304864.
llvm-svn: 304869
2017-06-07 04:48:49 +00:00
NAKAMURA Takumi ef9d9481b5 Reorder and reformat.
llvm-svn: 304868
2017-06-07 04:48:45 +00:00
Craig Topper 7945248267 [LazyValueInfo] Remove redundant calls to ConstantRange::contains. The same exact call was made in the if above and we already know it returned true. NFC
llvm-svn: 304857
2017-06-07 00:58:09 +00:00
Davide Italiano c88f2c712f [CFLAA] Remove unused include. NFCI.
llvm-svn: 304842
2017-06-06 23:16:19 +00:00
David Blaikie c662b50150 GlobalsModRef+OptNone: Don't prove readnone/other properties from an optnone function
Seems like at least one reasonable interpretation of optnone is that the
optimizer never "looks inside" a function. This fix is consistent with
that interpretation.

Specifically this came up in the situation:

f3 calls f2 calls f1
f2 is always_inline
f1 is optnone

The application of readnone to f1 (& thus to f2) caused the inliner to
kill the call to f2 as being trivially dead (without even checking the
cost function, as it happens - not sure if that's also a bug).
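
The shape of the scenario in C++ attribute form (a sketch of the problem, not the actual reproducer):

  // f1 is optnone: the optimizer must not "look inside" it, so GlobalsModRef
  // should not conclude it is readnone.
  static int GlobalState = 0;

  __attribute__((optnone, noinline)) void f1() { GlobalState = 1; }

  __attribute__((always_inline)) inline void f2() { f1(); }

  void f3() {
    f2(); // must not be dropped as trivially dead: f1 writes GlobalState
  }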

llvm-svn: 304833
2017-06-06 20:51:15 +00:00
Anna Thomas 4acfc7e16e [LVI Printer] Rely on the LVI analysis functions rather than the LVI cache
Summary:
LVIPrinter pass was previously relying on the LVICache. We now directly call
the LVI functions, which solve the value if the LVI information is not already
available in the cache. This has 2 benefits over printing the LVI cache:
1. higher coverage (i.e. catches errors) in LVI code when cache value is
invalidated.
2. relies on the core functions, and is not dependent on the LVI cache (which may
be scrapped at some point).
It would still catch any cache invalidation errors, since we first go through
the cache.

Reviewers: reames, dberlin, sanjoy

Reviewed by: reames

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D32135

llvm-svn: 304819
2017-06-06 19:25:31 +00:00
Anna Thomas b2a212c070 [Atomics][LoopIdiom] Recognize unordered atomic memcpy
Summary:
Expanding the loop idiom test for memcpy to also recognize
unordered atomic memcpy. The only difference for recognizing
an unordered atomic memcpy instead of a normal memcpy is
that the loads and/or stores involved are unordered atomic operations.

Background:  http://lists.llvm.org/pipermail/llvm-dev/2017-May/112779.html

Patch by Daniel Neilson!

Reviewers: reames, anna, skatkov

Reviewed By: reames, anna

Subscribers: llvm-commits, mzolotukhin

Differential Revision: https://reviews.llvm.org/D33243

llvm-svn: 304806
2017-06-06 16:45:25 +00:00
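As a rough, hand-written illustration (not from the patch), the source-level shape LoopIdiomRecognize turns into a memcpy looks like the loop below; with this change the same recognition also fires when the IR-level loads/stores are single-element unordered-atomic accesses, so the element-wise unordered-atomic form of the intrinsic is emitted instead of plain llvm.memcpy. The function name is illustrative.

  #include <cstddef>

  // The classic byte-copy idiom recognized as a memcpy.  Whether the plain or
  // the unordered-atomic form of the intrinsic is emitted depends on the
  // atomicity of the loads/stores in the IR, not on this C++ source.
  void copyBytes(unsigned char *Dst, const unsigned char *Src, std::size_t N) {
    for (std::size_t I = 0; I != N; ++I)
      Dst[I] = Src[I];
  }
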
Chandler Carruth 6bda14b313 Sort the remaining #include lines in include/... and lib/....
I did this a long time ago with a janky python script, but now
clang-format has built-in support for this. I fed clang-format every
line with a #include and let it re-sort things according to the precise
LLVM rules for include ordering baked into clang-format these days.

I've reverted a number of files where the results of sorting includes
isn't healthy. Either places where we have legacy code relying on
particular include ordering (where possible, I'll fix these separately)
or where we have particular formatting around #include lines that
I didn't want to disturb in this patch.

This patch is *entirely* mechanical. If you get merge conflicts or
anything, just ignore the changes in this patch and run clang-format
over your #include lines in the files.

Sorry for any noise here, but it is important to keep these things
stable. I was seeing an increasing number of patches with irrelevant
re-ordering of #include lines because clang-format was used. This patch
at least isolates that churn, makes it easy to skip when resolving
conflicts, and gets us to a clean baseline (again).

llvm-svn: 304787
2017-06-06 11:49:48 +00:00
Joey Gouly 61eaa63b65 [InstSimplify] Constant fold the new GEP in SimplifyGEPInst.
llvm-svn: 304784
2017-06-06 10:17:14 +00:00
Craig Topper aa9a24bd8b [InstSimplify] Remove some redundant code from InstSimplify now that llvm::isKnownNonEqual handles vectors.
isKnownNonEqual is called a little earlier in this function and can handle the case that we were checking here as well as more complex cases.

llvm-svn: 304775
2017-06-06 07:13:17 +00:00
Craig Topper 3002d5b0bf [ValueTracking] Remove scalar only restriction from isKnownNonEqual. The computeKnownBits and isKnownNonZero calls this code relies on should work fine for vectors.
This will be used by another commit to remove some code from InstSimplify that is redundant for scalars, but was needed for vectors due to this issue.

llvm-svn: 304774
2017-06-06 07:13:15 +00:00
Craig Topper 2dfb4804f2 [InstSimplify] Use the getTrue/getFalse helpers and make sure we use the computed result type instead of hardcoding to i1. NFC
Currently, isKnownNonEqual punts on vectors so the hardcoding to i1 doesn't matter. But I plan to fix that in a future patch.

llvm-svn: 304773
2017-06-06 07:13:13 +00:00
Craig Topper 8e662f7f81 [ValueTracking] Use the computeKnownBits version that returns a KnownBits object instead of taking one by reference. NFC
llvm-svn: 304772
2017-06-06 07:13:11 +00:00
Craig Topper 8365df825e [ValueTracking] Use APInt::intersects to avoid some temporary APInts. NFC
llvm-svn: 304771
2017-06-06 07:13:09 +00:00
Craig Topper c2790ecda8 [InstSimplify] Use ICmpInst::isEquality predicate method. NFC
llvm-svn: 304770
2017-06-06 07:13:04 +00:00
Evgeny Stupachenko f2b3b467e5 Fix PR23384 (part 2 of 3) NFC
Summary:
The patch moves LSR cost comparison to target part.

Reviewers: qcolombet

Differential Revision: http://reviews.llvm.org/D30561

From: Evgeny Stupachenko <evstupac@gmail.com>
llvm-svn: 304750
2017-06-05 23:37:00 +00:00
Craig Topper da8037f299 [InstSimplify] Use llvm::all_of instead of a manual loop. NFC
llvm-svn: 304692
2017-06-04 22:41:56 +00:00
Craig Topper d470d73c2d [ConstantFolding] Combine an if statement into an earlier one that checked the same condition. NFC
llvm-svn: 304681
2017-06-04 08:21:53 +00:00
Craig Topper 0dd29e2256 [ConstantFolding][X86] Replace an LLVM_FALLTHROUGH with a break because it really shouldn't fallthrough.
This is actually NFC because the next case starts with the same if statement as this case did. So the result will be the same and it will fall through to the end of the switch. But there's no reason to rely on that, so we should just break.

llvm-svn: 304680
2017-06-04 08:21:51 +00:00
Craig Topper fe9ad82e44 [ConstantFolding] Properly support constant folding of vector powi intrinsic. The second argument is not a vector so needs special treatment.
llvm-svn: 304679
2017-06-04 07:30:28 +00:00
Craig Topper 7c553edced [ConstantFolding] Fix constant folding for vector cttz and ctlz intrinsics to understand that the second argument is still a scalar.
llvm-svn: 304668
2017-06-03 18:50:29 +00:00
Craig Topper a803d5b8b0 [LazyValueInfo] Use Type::getIntegerBitWidth instead of casting to IntegerType to call getBitWidth. NFC
llvm-svn: 304656
2017-06-03 07:47:14 +00:00
Craig Topper 0e5f1093ee [LazyValueInfo] Make solveBlockValueCast take a CastInst* instead of Instruction*. Makes getOpcode return the appropriate enum without a cast. NFC
llvm-svn: 304655
2017-06-03 07:47:08 +00:00
Jun Bum Lim 2960d41e68 [InlineCost] Enable the new switch cost heuristic
Summary:
This is to enable the new switch inline cost heuristic (r301649) by removing the
old heuristic as well as the flag itself.
In my experiments on the LLVM test suite and spec2000/2006, a +17.82% performance gain and
an 8% code size reduction were observed in spec2000/vertex with O3 LTO on AArch64.
No significant code size / performance regression was found in O3/O2/Os. No
significant complaints were reported from the llvm-dev thread.

Reviewers: hans, chandlerc, eraman, haicheng, mcrosier, bmakam, eastig, ddibyend, echristo

Reviewed By: echristo

Subscribers: javed.absar, kristof.beyls, echristo, aemerson, rengolin, mehdi_amini

Differential Revision: https://reviews.llvm.org/D32653

llvm-svn: 304594
2017-06-02 20:42:54 +00:00
Craig Topper 9277a86f03 [LazyValueInfo] Fix formatting NFC.
llvm-svn: 304567
2017-06-02 17:28:12 +00:00
Craig Topper 3778c8943b [LazyValueInfo] Make solveBlockValueBinaryOp take a BinaryOperator* instead of Instruction*. This removes a cast of getOpcode to BinaryOps.
llvm-svn: 304563
2017-06-02 16:33:13 +00:00
Craig Topper 84a9f168f1 [LazyValueInfo] Fix typo in comment. NFC
llvm-svn: 304560
2017-06-02 16:21:13 +00:00
Craig Topper b23e7c78a5 [InstSimplify][ConstantFolding] Teach constant folding how to handle icmp null, (inttoptr x) as well as it handles icmp (inttoptr x), null
Summary:
The constant folding code currently assumes that the constant expression will always be on the left and the simple null will be on the right. But that's not true at least on the path from InstSimplify.

This patch adds support to ConstantFolding to detect the reversed case.

Reviewers: spatel, dberlin, majnemer, davide, joey

Reviewed By: joey

Subscribers: joey, llvm-commits

Differential Revision: https://reviews.llvm.org/D33801

llvm-svn: 304559
2017-06-02 16:17:32 +00:00
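A hedged sketch of the reversed-operand handling described above; the helper is illustrative and not the actual ConstantFolding code. It simply normalizes the compare so that the constant expression ends up on the left, where the existing folding already knows how to handle it.

  #include "llvm/IR/Constants.h"
  #include "llvm/IR/InstrTypes.h"
  #include <utility>
  using namespace llvm;

  // If the compare looks like "icmp pred null, (inttoptr x)", swap the
  // operands and the predicate to recover the already-supported
  // "icmp pred' (inttoptr x), null" form.
  static void canonicalizeNullCompare(CmpInst::Predicate &Pred, Constant *&LHS,
                                      Constant *&RHS) {
    if (LHS->isNullValue() && isa<ConstantExpr>(RHS)) {
      std::swap(LHS, RHS);
      Pred = CmpInst::getSwappedPredicate(Pred);
    }
  }
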
Benjamin Kramer c1f5ae236c [OrderedBasicBlock] Return false for comesBefore(A, A)
So far it would return true for the first uncached query, then cached
queries return false.

llvm-svn: 304545
2017-06-02 13:10:31 +00:00
Eli Friedman 0d823d610d Add opt-bisect support for region passes.
This is necessary to get opt-bisect working with polly.

Differential Revision: https://reviews.llvm.org/D33751

llvm-svn: 304476
2017-06-01 21:22:26 +00:00
Teresa Johnson 596b2e7ab2 [PGO] Adjust indirect call promotion threshold
Summary:
Reduce min percent required for indirect call promotion from 33% to 30%,
which matches gcc's threshold and catches the same hot opportunities.

Reviewers: davidxl

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D33798

llvm-svn: 304469
2017-06-01 21:10:10 +00:00
Evgeniy Stepanov 56584bbf16 (NFC) Track global summary liveness in GVFlags.
Replace GVFlags::LiveRoot with GVFlags::Live and use that instead of
all the DeadSymbols sets. This is refactoring in order to make
liveness information available in the RegularLTO pipeline.

llvm-svn: 304466
2017-06-01 20:30:06 +00:00
Reid Kleckner fc7ba565ed [EH] Recognize __(gxx|gcc)_personality_seh0 as the GNU EH personalities
These are no-ops when there are no invokes. We don't need to emit LSDAs
for them.

Fixes PR33220.

llvm-svn: 304367
2017-05-31 22:35:52 +00:00
Galina Kistanova 244621faad Added LLVM_FALLTHROUGH to address warning: this statement may fall through. NFC.
llvm-svn: 304361
2017-05-31 22:16:24 +00:00
Galina Kistanova 8514dd540d Added LLVM_FALLTHROUGH to address warning: this statement may fall through. NFC.
llvm-svn: 304358
2017-05-31 22:09:46 +00:00
Galina Kistanova 0b69e363f6 Added LLVM_FALLTHROUGH to address warning: this statement may fall through. NFC.
llvm-svn: 304356
2017-05-31 22:02:05 +00:00
Galina Kistanova c2b642d009 Added missing break; added LLVM_FALLTHROUGH to address warning: this statement may fall through. NFC.
llvm-svn: 304340
2017-05-31 20:25:13 +00:00
Zaara Syeda 3a7578c658 [PPC] Inline expansion of memcmp
This patch does an inline expansion of memcmp.
It changes the memcmp library call into an inline expansion when the size is
known at compile time and is under a target specified threshold.
This expansion is implemented in CodeGenPrepare and expands into straight line
code. The target specifies a maximum load size and the expansion works by using
this size to load the two sources, compare, and exit early if a difference is
found. It also has a special case when the memcmp result is used in a compare
to zero equality.

Differential Revision: https://reviews.llvm.org/D28637

llvm-svn: 304313
2017-05-31 17:12:38 +00:00
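To make the shape of the expansion concrete, here is a conceptual C++ model (an assumption for illustration, not the emitted IR) of a known-size memcmp with a maximum load size of 8 bytes: load both sources in chunks, compare, and bail out on the first difference. The real expansion derives the ordering of the differing chunk with byte-swapped integer compares; the fallback call below just keeps the model short.

  #include <cstdint>
  #include <cstring>

  // Straight-line expansion of memcmp(A, B, 16) with 8-byte loads.
  int memcmp16(const void *A, const void *B) {
    const unsigned char *PA = static_cast<const unsigned char *>(A);
    const unsigned char *PB = static_cast<const unsigned char *>(B);
    for (unsigned Off = 0; Off != 16; Off += 8) {
      std::uint64_t VA, VB;
      std::memcpy(&VA, PA + Off, 8);   // one maximum-width load per source
      std::memcpy(&VB, PB + Off, 8);
      if (VA != VB)                    // early exit on the first difference
        return std::memcmp(PA + Off, PB + Off, 8);
    }
    return 0;                          // all chunks equal
  }
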
George Burgess IV 0a7b989036 [CFLAA] Add missing break; note things are broken.
Thanks to Galina Kistanova for finding the missing break!

When trying to make a test for this, I realized our logic for handling
extractvalue/insertvalue/... is somewhat broken. This makes constructing
a test-case for this missing break nontrivial.

llvm-svn: 304275
2017-05-31 02:35:26 +00:00
Daniel Berlin 71ff663e1b InstructionSimplify: Remove now-redundant reachability tests, as dominates() already does them
llvm-svn: 304270
2017-05-31 01:47:24 +00:00
Max Kazantsev d8fe3eb9cb [SCEV][NFC] Remove redundant params from isAvailableAtLoopEntry
Params DT and LI are redundant, because these values are contained in fields anyways.

Differential Revision: https://reviews.llvm.org/D33668

llvm-svn: 304204
2017-05-30 10:54:58 +00:00
Tobias Grosser e3684d0b84 [SCEV] Assume parameters coming from function calls contain IVs
The optimistic delinearization implemented in LLVM detects array sizes by
looking for non-linear products between parameters and induction variables.
In OpenCL code, such products often look like:

  A[get_global_id(0) * N + get_global_id(1)]

Hence, the IV is hidden in the get_global_id() call and consequently
delinearization would fail as no induction variable is available that helps
us to identify N as an array size parameter.

We now use a very simple heuristic to change this. We assume that each parameter
that comes directly from a function call is a hidden induction variable. As
a result, we can delinearize the access above to:

  A[get_global_id(0)][get_global_id(1)]

llvm-svn: 304073
2017-05-27 15:17:49 +00:00
Keno Fischer 090f1959c1 [SCEVExpander] Try harder to avoid introducing inttoptr
Summary:
This fixes introduction of an incorrect inttoptr/ptrtoint pair in
the included test case which makes use of non-integral pointers. I
suspect there are more cases like this left, but this takes care of
the one I was seeing at the moment.

Reviewers: sanjoy

Subscribers: mzolotukhin, llvm-commits

Differential Revision: https://reviews.llvm.org/D33129

llvm-svn: 304058
2017-05-27 03:22:55 +00:00
Craig Topper 348314dfb8 [InstSimplify] Push commuted op checks for and/or of icmp further down to avoid duplicate work
Previously, we called simplifyPossiblyCastedAndOrOfICmps twice with the operands commuted, but the call to simplifyAndOrOfICmpsWithConstants further down already handles commuting and doesn't need to be called both ways.

This patch pushes double calls further down to just the individual routines that need to be called twice.

Differential Revision: https://reviews.llvm.org/D33603

llvm-svn: 304044
2017-05-26 22:42:34 +00:00
Craig Topper 9bce1ad232 [InstSimplify] Move a variable declaration to make simplifyAndOfICmps look more like simplifyOrOfICmps. NFC
llvm-svn: 304023
2017-05-26 19:04:02 +00:00
Craig Topper c8bebb1e84 [InstSimplify] Use commutable matchers to shorten some code
This code was replicated two additional times to handle commuted cases, but I think a commutable matcher can take care of it.

Differential Revision: https://reviews.llvm.org/D33585

llvm-svn: 304022
2017-05-26 19:03:59 +00:00
Craig Topper 1da22c3244 [InstSimplify] Use m_APInt instead of m_ConstantInt in ((V + N) & C1) | (V & C2) handling in order to support splat vectors.
The tests here have operands commuted to provide more coverage. I also commuted one of the instructions in the scalar tests so the 4 tests cover the 4 commuted variations.

Differential Revision: https://reviews.llvm.org/D33599

llvm-svn: 304021
2017-05-26 19:03:53 +00:00
Max Kazantsev 41450329f7 Re-enable "[SCEV] Do not fold dominated SCEVUnknown into AddRecExpr start"
The patch rL303730 was reverted because test lsr-expand-quadratic.ll failed on
many non-X86 configs with this patch. The reason of this is that the patch
makes a correctless fix that changes optimizer's behavior for this test.
Without the change, LSR was making an overconfident simplification basing on a
wrong SCEV. Apparently it did not need the IV analysis to do this. With the
change, it chose a different way to simplify (that wasn't so confident), and
this way required the IV analysis. Now, following the right execution path,
LSR tries to make a transformation relying on IV Users analysis. This analysis
is target-dependent due to this code:

  // LSR is not APInt clean, do not touch integers bigger than 64-bits.
  // Also avoid creating IVs of non-native types. For example, we don't want a
  // 64-bit IV in 32-bit code just because the loop has one 64-bit cast.
  uint64_t Width = SE->getTypeSizeInBits(I->getType());
  if (Width > 64 || !DL.isLegalInteger(Width))
    return false;

To make a proper transformation in this test case, the type i32 needs to be
legal for the specified data layout. When the test runs on some non-X86
configuration (e.g. pure ARM 64), opt gets confused by the specified target
and does not use it, rejecting the specified data layout as well. Instead,
it uses some default layout that does not treat i32 as a legal type
(currently the layout that is used when it is not specified does not have
legal types at all). As a result, the transformation we expect to happen does
not happen for this test.

This re-enabling patch does not have any source code changes compared to the
original patch rL303730. The only difference is that the failing test is
moved to X86 directory and now has requirement of running on x86 only to comply
with the specified target triple and data layout.

Differential Revision: https://reviews.llvm.org/D33543

llvm-svn: 303971
2017-05-26 06:47:04 +00:00
Craig Topper 25d9ba9a12 [InstSimplify] Use APInt::isMask isntead of manually implementing it. NFC
llvm-svn: 303968
2017-05-26 05:16:22 +00:00
Craig Topper 50500d5054 [InstSimplify] Use m_ConstantInt matchers to short some code. NFC
llvm-svn: 303967
2017-05-26 05:16:20 +00:00
Chandler Carruth 29c22d2835 [LegacyPM] Make the 'addLoop' method accept a loop to add rather than
having it internally allocate the loop.

This is a much more flexible API and necessary in the new loop unswitch
to reasonably support both new and old PMs in common code. It also just
seems like a cleaner separation of concerns.

NFC, this should just be a pure refactoring.

Differential Revision: https://reviews.llvm.org/D33528

llvm-svn: 303834
2017-05-25 03:01:31 +00:00
Craig Topper 77e07cc010 [InstSimplify] Simplify uadd/sadd/umul/smul with overflow intrinsics when the Zero or Undef is on the LHS.
Summary: This code was migrated from InstCombine a few years ago. InstCombine had nearby code that would move Constants to the RHS for these, but InstSimplify doesn't have such code on this path.

Reviewers: spatel, majnemer, davide

Reviewed By: spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D33473

llvm-svn: 303774
2017-05-24 17:05:28 +00:00
Craig Topper 8205a1a9b6 [ValueTracking] Convert most of the calls to computeKnownBits to use the version that returns the KnownBits object.
This continues the changes started when computeSignBit was replaced with this new version of computeKnowBits.

Differential Revision: https://reviews.llvm.org/D33431

llvm-svn: 303773
2017-05-24 16:53:07 +00:00
Craig Topper a2025eaaef [ValueTracking] Add OptimizationRemarkEmitter to the other signature for computeKnownBits.
This is needed for an upcoming patch.

llvm-svn: 303772
2017-05-24 16:53:03 +00:00
Diana Picus 183863fc3b Revert "[SCEV] Do not fold dominated SCEVUnknown into AddRecExpr start"
This reverts commit r303730 because it broke all the buildbots.

llvm-svn: 303747
2017-05-24 14:16:04 +00:00
Jonas Paulsson 8624b7e1ce [LoopVectorizer] Let target prefer scalar addressing computations.
The loop vectorizer usually vectorizes any instruction it can and then
extracts the elements for a scalarized use. On SystemZ, all elements
containing addresses must be extracted into address registers (GRs). Since
this extraction is not free, it is better to have the address in a suitable
register to begin with. By forcing address arithmetic instructions and loads
of addresses to be scalar after vectorization, two benefits result:

* No need to extract the register
* LSR optimizations trigger (LSR isn't handling vector addresses currently)

Benchmarking shows improvements on SystemZ with this new behaviour.

Any other target could try this by returning false in the new hook
prefersVectorizedAddressing().

Review: Renato Golin, Elena Demikhovsky, Ulrich Weigand
https://reviews.llvm.org/D32422

llvm-svn: 303744
2017-05-24 13:42:56 +00:00
Max Kazantsev 13e016bf48 [SCEV] Do not fold dominated SCEVUnknown into AddRecExpr start
When folding arguments of AddExpr or MulExpr with recurrences, we rely on the fact that
the loop of our base recurrence is the bottom-most in terms of domination. This assumption
may be broken by an expression which is treated as invariant, and which depends on a complex
Phi for which SCEVUnknown was created. If such a Phi is a loop Phi, and this loop is lower than
the chosen AddRecExpr's loop, it is invalid to fold our expression with the recurrence.

Another reason why it might be invalid to fold SCEVUnknown into Phi start value is that unlike
other SCEVs, SCEVUnknown are sometimes position-bound. For example, here:

for (...) { // loop
  phi = {A,+,B}
}
X = load ...
Folding phi + X into {A+X,+,B}<loop> actually makes no sense, because X does not exist and cannot
exist while we are iterating in the loop (this memory may not even be allocated or filled at that moment).
Such folding is only valid if X is defined before the loop. In that case the recurrence {A+X,+,B}<loop>
can exist.

This patch prohibits folding of SCEVUnknown (and those that use them) into the start value of an AddRecExpr,
if this instruction is dominated by the loop. Merging the dominating unknown values is still valid. Some tests that
relied on the fact that some SCEVUnknown should be folded into AddRecs are changed so that they no longer
expect such behavior.

llvm-svn: 303730
2017-05-24 08:52:18 +00:00
Tim Northover 997f5f10c6 InstructionSimplify: don't speculate about Constants changing.
When presented with an icmp/select pair, we can end up asking what would happen
if we replaced one constant with another in an instruction. This is a mistake:
while non-constant Values could become constants, constants cannot change, and
trying to do so can lead to completely invalid IR (a GEP referencing a
non-existent field in the original case).

llvm-svn: 303580
2017-05-22 21:28:08 +00:00
Sanjoy Das 036dda25a5 [SCEV] Clarify behavior around max backedge taken count
This is a re-application of a r303497 that was reverted in r303498.
I thought it had broken a bot when it had not (the breakage did not
go away with the revert).

This change makes the split between the "exact" backedge taken count
and the "maximum" backedge taken count a bit more obvious.  Both of
these are upper bounds on the number of times the loop header
executes (since SCEV does not account for most kinds of abnormal
control flow), but the latter is guaranteed to be a constant.

There were a few places where the max backedge taken count *was* a
non-constant; I've changed those to compute constants instead.

At this point, I'm not sure if the constant max backedge count can be
computed by calling `getUnsignedRange(Exact).getUnsignedMax()` without
losing precision.  If it can, we can simplify even further by making
`getMaxBackedgeTakenCount` a thin wrapper around
`getBackedgeTakenCount` and `getUnsignedRange`.

llvm-svn: 303531
2017-05-22 06:46:04 +00:00
Sanjoy Das 8963650cfa Revert "[SCEV] Clarify behavior around max backedge taken count"
This reverts commit r303497 since it breaks the msan bootstrap bot:
http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-bootstrap/builds/1379/

llvm-svn: 303498
2017-05-21 05:02:12 +00:00
Sanjoy Das 5207168383 [SCEV] Clarify behavior around max backedge taken count
This change makes the split between the "exact" backedge taken count
and the "maximum" backedge taken count a bit more obvious.  Both of
these are upper bounds on the number of times the loop header
executes (since SCEV does not account for most kinds of abnormal
control flow), but the latter is guaranteed to be a constant.

There were a few places where the max backedge taken count *was* a
non-constant; I've changed those to compute constants instead.

At this point, I'm not sure if the constant max backedge count can be
computed by calling `getUnsignedRange(Exact).getUnsignedMax()` without
losing precision.  If it can, we can simplify even further by making
`getMaxBackedgeTakenCount` a thin wrapper around
`getBackedgeTakenCount` and `getUnsignedRange`.

llvm-svn: 303497
2017-05-21 01:47:50 +00:00
Xin Tong 9fbfeefadf Revert "Add pthread_self function prototype and make it speculatable."
This reverts commit 143d7445b5dfa2f6d6c45bdbe0433d9fc531be21.

Build breaking

llvm-svn: 303496
2017-05-21 00:37:55 +00:00
Xin Tong 75af3af957 Add pthread_self function prototype and make it speculatable.
Summary: This allows pthread_self to be pulled out of a loop by LICM.

Reviewers: hfinkel, arsenm, davide

Reviewed By: davide

Subscribers: davide, wdng, llvm-commits

Differential Revision: https://reviews.llvm.org/D32782

llvm-svn: 303495
2017-05-20 22:40:25 +00:00
Matthias Braun 57fd12db0c Fix breakage after r303461
- Improve wchar_t size predictions based on target triple.
- Be less strict in wchar_t size verifier.

llvm-svn: 303477
2017-05-20 01:28:52 +00:00
Matthias Braun 50ec0b5dce SimplifyLibCalls: Optimize wcslen
Refactor the strlen optimization code to work for both strlen and wcslen.

This especially helps with programs in the wild where people pass
L"string"s to const std::wstring& function parameters and the wstring
constructor gets inlined.

This also fixes a lingering API problem/bug in getConstantStringInfo()
where zeroinitializers would always give you an empty string (without a
length) back regardless of the actual length of the initializer, which
did not work well in the TrimAtNul==false case, causing the PR mentioned
below.

Note that the fixed getConstantStringInfo() needed fixes to SelectionDAG
memcpy lowering and may lead to some cases for out-of-bounds
zeroinitializer accesses not getting optimized anymore. So some code
with UB may produce out of bound memory reads now instead of just
producing zeros.

The refactoring "accidentally" fixes http://llvm.org/PR32124

Differential Revision: https://reviews.llvm.org/D32839

llvm-svn: 303461
2017-05-19 22:37:09 +00:00
Daniel Berlin a5130bbd12 BasicAA: Uninserted instructions have no parent, and notDifferentParent explicitly allows for this case, but getParent crashes when handed one.
llvm-svn: 303442
2017-05-19 19:01:21 +00:00
Craig Topper 9c913bfd49 [InstSimplify] Fix 80 column violation. NFC
llvm-svn: 303433
2017-05-19 16:56:53 +00:00
Reid Kleckner 96ab8726a3 [IR] De-virtualize ~Value to save a vptr
Summary:
Implements PR889

Removing the virtual table pointer from Value saves 1% of RSS when doing
LTO of llc on Linux. The impact on time was positive, but too noisy to
conclusively say that performance improved. Here is a link to the
spreadsheet with the original data:

https://docs.google.com/spreadsheets/d/1F4FHir0qYnV0MEp2sYYp_BuvnJgWlWPhWOwZ6LbW7W4/edit?usp=sharing

This change makes it invalid to directly delete a Value, User, or
Instruction pointer. Instead, such code can be rewritten to a null check
and a call Value::deleteValue(). Value objects tend to have their
lifetimes managed through iplist, so for the most part, this isn't a big
deal.  However, there are some places where LLVM deletes values, and
those places had to be migrated to deleteValue.  I have also created
llvm::unique_value, which has a custom deleter, so it can be used in
place of std::unique_ptr<Value>.

I had to add the "DerivedUser" Deleter escape hatch for MemorySSA, which
derives from User outside of lib/IR. Code in IR cannot include MemorySSA
headers or call the MemoryAccess object destructors without introducing
a circular dependency, so we need some level of indirection.
Unfortunately, no class derived from User may have any virtual methods,
because adding a virtual method would break User::getHungOffOperands(),
which assumes that it can find the use list immediately prior to the
User object. I've added a static_assert to the appropriate OperandTraits
templates to help people avoid this trap.

Reviewers: chandlerc, mehdi_amini, pete, dberlin, george.burgess.iv

Reviewed By: chandlerc

Subscribers: krytarowski, eraman, george.burgess.iv, mzolotukhin, Prazek, nlewycky, hans, inglorion, pcc, tejohnson, dberlin, llvm-commits

Differential Revision: https://reviews.llvm.org/D31261

llvm-svn: 303362
2017-05-18 17:24:10 +00:00
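For illustration, the migration pattern this change forces on code that owned a raw Value pointer (a sketch only; the function name is made up, and llvm::unique_value wraps the same call in a custom deleter):

  #include "llvm/IR/Value.h"
  using namespace llvm;

  // Per the commit above, directly deleting a Value pointer is no longer
  // valid; ownership releases go through Value::deleteValue() instead.
  void destroyOwnedValue(Value *V) {
    if (V)
      V->deleteValue();   // was: delete V;
  }
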
Max Kazantsev 627ad0fec3 [SCEV][NFC] Remove duplication of isLoopInvariant code
Replace two places that duplicate the code of isLoopInvariant method with
the invocation of this method.

Differential Revision: https://reviews.llvm.org/D33313

llvm-svn: 303336
2017-05-18 08:26:41 +00:00
Serguei Katkov ba831f78fd [BPI] Reduce the probability of unreachable edge to minimal value greater than 0
The probability of an edge coming to an unreachable block should be as low as possible.
The change reduces the probability to the minimal value greater than zero.

The bug https://bugs.llvm.org/show_bug.cgi?id=32214 shows an example where
the probability of the edge coming to an unreachable block is greater than that
of an edge going out of the loop, and this causes incorrect loop rotation.

Please note that with this change the behavior of the unreachable heuristic is a bit different
from the others. Specifically, before this change the sum of probabilities
coming to unreachable blocks had the same weight for all branches
(it was just split over all edges of this block coming to unreachable blocks).
With this change it might be slightly different, but not by much, because the probability
of a taken branch to an unreachable block is really small.

Reviewers: chandlerc, sanjoy, vsk, congh, junbuml, davidxl, dexonsmith
Reviewed By: chandlerc, dexonsmith
Subscribers: reames, llvm-commits
Differential Revision: https://reviews.llvm.org/D30633

llvm-svn: 303327
2017-05-18 06:11:56 +00:00
Craig Topper 8a950275f7 [Statistics] Add a method to atomically update a statistic that contains a maximum
Summary:
There are several places in the codebase that try to calculate a maximum value in a Statistic object. We currently do this in one of two ways:

  MaxNumFoo = std::max(MaxNumFoo, NumFoo);

or

  MaxNumFoo = (MaxNumFoo > NumFoo) ? MaxNumFoo : NumFoo;

The first version reads from MaxNumFoo one time and unconditionally writes to it. The second version possibly reads it twice depending on the result of the first compare.  But we have no way of knowing if the value was changed by another thread between the reads and the writes.

This patch adds a method to the Statistic object that can ensure that we only store if our value is the max and the previous max didn't change after we read it. If it changed we'll recheck if our value should still be the max or not and try again.

This spawned from an audit I'm trying to do of all places where we use the implicit conversion to unsigned on the Statistic objects. See my previous thread on llvm-dev https://groups.google.com/forum/#!topic/llvm-dev/yfvxiorKrDQ

Reviewers: dberlin, chandlerc, hfinkel, dblaikie

Reviewed By: chandlerc

Subscribers: llvm-commits, sanjoy

Differential Revision: https://reviews.llvm.org/D33301

llvm-svn: 303318
2017-05-18 00:51:39 +00:00
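The compare-exchange idea behind the new helper, shown on a plain std::atomic rather than the Statistic class itself (a sketch under that substitution; the real member operates on the Statistic's internal counter):

  #include <atomic>
  #include <cstdint>

  // Only store V if it is still larger than the current maximum; if another
  // thread updated Max in between, Prev is refreshed and the check repeats.
  void updateMax(std::atomic<std::uint64_t> &Max, std::uint64_t V) {
    std::uint64_t Prev = Max.load(std::memory_order_relaxed);
    while (Prev < V &&
           !Max.compare_exchange_weak(Prev, V, std::memory_order_relaxed))
      ; // compare_exchange_weak reloads Prev on failure
  }
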
Sanjay Patel e2787b9a35 [InstSimplify] handle all icmp i1 X, C in one place; NFCI
We already handled all of the new tests identically, but several
of those went through a lot of unnecessary processing before
getting folded.

Another motivation for grouping these cases together is that
InstCombine needs a similar fold. Currently, it handles the
'not' cases inefficiently which can lead to bugs as described
in the post-commit comments of:
https://reviews.llvm.org/D32143 

llvm-svn: 303295
2017-05-17 20:27:55 +00:00
Max Kazantsev 4c7f293d24 [SCEV] Always sort AddRecExprs from different loops by dominance
Sorting of AddRecExprs by loop nesting does not make sense since we only invoke
CompareSCEVComplexity for AddRecExprs that are used by one SCEV. This
guarantees that there is always a dominance relationship between them. This
patch removes the sorting by nesting, which is dead code in the current usage of
this function.

Reviewed By: sanjoy

Differential Revision: https://reviews.llvm.org/D33228

llvm-svn: 303235
2017-05-17 04:09:14 +00:00
Max Kazantsev b67d344850 [SCEV][NFC] Replace redundant dyn_cast with cast in getAddExpr
Replace dyn_cast which is ensured by isa just one line above with cast.

Differential Revision: https://reviews.llvm.org/D33231

llvm-svn: 303234
2017-05-17 03:58:42 +00:00
Francis Visoiu Mistrih b52e036600 BitVector: add iterators for set bits
Differential revision: https://reviews.llvm.org/D32060

llvm-svn: 303227
2017-05-17 01:07:53 +00:00
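A usage sketch for the new iterators (assuming the range accessor is spelled set_bits(), as in current LLVM; the surrounding function is just an example):

  #include "llvm/ADT/BitVector.h"
  using namespace llvm;

  // Iterate only over indices whose bit is set, instead of scanning all bits
  // or chaining find_first()/find_next() by hand.
  unsigned sumOfSetIndices(const BitVector &BV) {
    unsigned Sum = 0;
    for (unsigned Idx : BV.set_bits())
      Sum += Idx;
    return Sum;
  }
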
Sanjay Patel 877364ff99 [InstSimplify] add folds for constant mask of value shifted by constant
We would eventually catch these via demanded bits and computing known bits in InstCombine,
but I think it's better to handle the simple cases as soon as possible as a matter of efficiency.

This fold allows further simplifications based on distributed ops transforms. eg:
  %a = lshr i8 %x, 7
  %b = or i8 %a, 2
  %c = and i8 %b, 1

InstSimplify can directly fold this now:
  %a = lshr i8 %x, 7

Differential Revision: https://reviews.llvm.org/D33221

llvm-svn: 303213
2017-05-16 21:51:04 +00:00
Easwaran Raman 3cd1479c3f [Inliner] Do not mix callsite and callee hotness based updates.
Update threshold based on callee's hotness only when BFI is not available.
Otherwise use only callsite's hotness. This makes it easier to reason about
hotness related threshold updates.

Differential revision: https://reviews.llvm.org/D33157

llvm-svn: 303210
2017-05-16 21:18:09 +00:00
Easwaran Raman dadc0f11ad Add hasProfileSummary and has{Sample|Instrumentation}Profile methods
ProfileSummaryInfo already checks whether the module has sample profile
in determining profile counts. This will also be useful in inliner to
clean up threshold updates.

llvm-svn: 303204
2017-05-16 20:14:39 +00:00
Max Kazantsev b09b5db793 [SCEV] Fix sorting order for AddRecExprs
The existing sorting order defined in CompareSCEVComplexity sorts AddRecExprs
by loop depth, but does not pay attention to dominance of loops. This can
lead us to the following buggy situation:

for (...) { // loop1
  op1 = {A,+,B}
}
for (...) { // loop2
  op2 = {A,+,B}
  S = add op1, op2
}

In this case there is no guarantee that in the operand list of S op2 comes
before op1 (loop depth is the same, so they will be sorted just
lexicographically), so we can incorrectly treat S as a recurrence of loop1,
which is wrong.

This patch changes the sorting logic so that it places the dominated recs
before the dominating recs. This ensures that when we pick the first recurrence
in the operand order, it will be the bottom-most in terms of the dominator tree.
The attached test set includes some tests that produce incorrect SCEV
estimates and crashes with the old logic.

Reviewers: sanjoy, reames, apilipenko, anna

Reviewed By: sanjoy

Subscribers: llvm-commits, mzolotukhin

Differential Revision: https://reviews.llvm.org/D33121

llvm-svn: 303148
2017-05-16 07:27:06 +00:00
Peter Collingbourne 6f0ecca3b5 IR: Give function GlobalValue::getRealLinkageName() a less misleading name: dropLLVMManglingEscape().
This function gives the wrong answer on some non-ELF platforms in some
cases. The function that does the right thing lives in Mangler.h. To try to
discourage people from using this function, give it a different name.

Differential Revision: https://reviews.llvm.org/D33162

llvm-svn: 303134
2017-05-16 00:39:01 +00:00
Adam Nemet e29686e5c1 [SLP] Enable 64-bit wide vectorization on AArch64
ARM Neon has native support for half-sized vector registers (64 bits).  This
is beneficial for example for 2D and 3D graphics.  This patch adds the option
to lower MinVecRegSize from 128 via a TTI in the SLP Vectorizer.

*** Performance Analysis

This change was motivated by some internal benchmarks but it is also
beneficial on SPEC and the LLVM testsuite.

The results are with -O3 and PGO.  A negative percentage is an improvement.
The testsuite was run with a sample size of 4.

** SPEC

* CFP2006/482.sphinx3  -3.34%

A pretty hot loop is SLP vectorized resulting in nice instruction reduction.
This used to be a +22% regression before rL299482.

* CFP2000/177.mesa     -3.34%
* CINT2000/256.bzip2   +6.97%

My current plan is to extend the fix in rL299482 to i16 which brings the
regression down to +2.5%.  There are also other problems with the codegen in
this loop so there is further room for improvement.

** LLVM testsuite

* SingleSource/Benchmarks/Misc/ReedSolomon               -10.75%

There are multiple small SLP vectorizations outside the hot code.  It's a bit
surprising that it adds up to 10%.  Some of this may be code-layout noise.

* MultiSource/Benchmarks/VersaBench/beamformer/beamformer -8.40%

The opt-viewer screenshot can be seen at F3218284.  We start at a colder store
but the tree leads us into the hottest loop.

* MultiSource/Applications/lambda-0.1.3/lambda            -2.68%
* MultiSource/Benchmarks/Bullet/bullet                    -2.18%

This is using 3D vectors.

* SingleSource/Benchmarks/Shootout-C++/Shootout-C++-lists +6.67%

Noise, binary is unchanged.

* MultiSource/Benchmarks/Ptrdist/anagram/anagram          +4.90%

There is an additional SLP in the cold code.  The test runs for ~1sec and
prints out over 2000 lines. This is most likely noise.

* MultiSource/Applications/aha/aha                        +1.63%
* MultiSource/Applications/JM/lencod/lencod               +1.41%
* SingleSource/Benchmarks/Misc/richards_benchmark         +1.15%

Differential Revision: https://reviews.llvm.org/D31965

llvm-svn: 303116
2017-05-15 21:15:01 +00:00
Sanjay Patel a23b141cd2 [InstSimplify] restrict icmp fold with 2 sdiv exact operands (PR32949)
These folds were introduced with https://reviews.llvm.org/rL127064 as part of solving:
https://bugs.llvm.org/show_bug.cgi?id=9343

As shown here:
http://rise4fun.com/Alive/C8
...however, the sdiv exact case needs a stronger predicate.

I opted for duplicated code instead of adding another fallthrough because I think that's 
easier to read (and edit in case we need/want to restrict/loosen the predicates any more).

This should fix:
https://bugs.llvm.org/show_bug.cgi?id=32949
https://bugs.llvm.org/show_bug.cgi?id=32948

Differential Revision: https://reviews.llvm.org/D32954

llvm-svn: 303104
2017-05-15 19:16:49 +00:00
Craig Topper 716cad8bb7 [SCEV] Use copy initialization of APInts instead of direct initialization.
This is based on post commit feed back from r302769.

llvm-svn: 303092
2017-05-15 18:14:16 +00:00
Craig Topper 1a36b7d836 [ValueTracking] Replace all uses of ComputeSignBit with computeKnownBits.
This patch finishes off the conversion of ComputeSignBit to computeKnownBits.

Differential Revision: https://reviews.llvm.org/D33166

llvm-svn: 303035
2017-05-15 06:39:41 +00:00
Sanjoy Das f6f6fb903e Move some code into ScalarEvolution.cpp; NFC
I need to add some asserts to these constructors that are easier to
add once they're in the .cpp file.

llvm-svn: 303032
2017-05-15 04:22:09 +00:00
Craig Topper bb9737247a [InstCombine] Merge duplicate functionality between InstCombine and ValueTracking
Summary:
Merge overflow computation for signed add,
appearing both in InstCombine and ValueTracking.

As part of the merge,
cleanup the interface for overflow checks in InstCombine.

Patch by Yoav Ben-Shalom.

Reviewers: craig.topper, majnemer

Reviewed By: craig.topper

Subscribers: takuto.ikuta, llvm-commits

Differential Revision: https://reviews.llvm.org/D32946

llvm-svn: 303029
2017-05-15 02:44:08 +00:00
Craig Topper 479daaf74c [InstSimplify] Add patterns for folding (A & B) | (~A ^ B) -> (~A ^ B) and its commuted variants.
We already had (A & ~B) | (A ^ B), but we missed the cases where the not was part of the xor.

llvm-svn: 303004
2017-05-14 07:54:43 +00:00
Craig Topper dfc8955ee6 [BasicAA] Alphabetize includes. NFC
llvm-svn: 303002
2017-05-14 06:18:34 +00:00
Craig Topper 9fe357971c [ValueTracking] Remove const_casts on several calls to computeKnownBits and ComputeSignBit. NFC
llvm-svn: 302991
2017-05-13 17:22:16 +00:00
Andrew Kaylor b01e94ee8d [TLI] Add mapping for various '__<func>_finite' forms of the math routines to SVML routines
Patch by Chris Chrulski

Differential Revision: https://reviews.llvm.org/D31789

llvm-svn: 302957
2017-05-12 22:11:26 +00:00
Andrew Kaylor f7c864f89c [ConstantFolding] Add folding for various math '__<func>_finite' routines generated from -ffast-math
Patch by Chris Chrulski

Differential Revision: https://reviews.llvm.org/D31788

llvm-svn: 302956
2017-05-12 22:11:20 +00:00
Andrew Kaylor 3cd8c16d7f [TLI] Add declarations for various math header file routines from math-finite.h that create '__<func>_finite as functions
Patch by Chris Chrulski

Differential Revision: https://reviews.llvm.org/D31787

llvm-svn: 302955
2017-05-12 22:11:12 +00:00
Craig Topper 8df66c602a [KnownBits] Add bit counting methods to KnownBits struct and use them where possible
This patch adds min/max population count, leading/trailing zero/one bit counting methods.

The min methods return answers based on bits that are known without considering unknown bits. The max methods give answers taking into account the largest count that unknown bits could give.

Differential Revision: https://reviews.llvm.org/D32931

llvm-svn: 302925
2017-05-12 17:20:30 +00:00
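A small usage sketch of the counting helpers (method names as in the current KnownBits header; treat the exact spellings as assumptions for this revision, and the function itself is only an example):

  #include "llvm/Support/KnownBits.h"
  using namespace llvm;

  // The "min" flavor counts only bits that are known; the "max" flavor also
  // budgets for bits that are still unknown, so min <= actual <= max.
  void leadingZeroBounds(const KnownBits &Known, unsigned &AtLeast,
                         unsigned &AtMost) {
    AtLeast = Known.countMinLeadingZeros();
    AtMost = Known.countMaxLeadingZeros();
  }
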
Serguei Katkov 63c9c81152 [BPI] Ignore remainder while distributing the remaining probability from unreachable
This is a follow up patch for https://reviews.llvm.org/rL300440
to address a comment.

To make the implementation consistent with the other cases we just
ignore the remainder after distributing the remaining probability between
reachable edges.

If we reduced the probability of some edges coming to unreachable
blocks we should distribute the remaining part across other edges
coming to reachable blocks to satisfy the condition that the sum of all
probabilities is equal to one. If this remaining part is not evenly
divisible by the number of "reachable" edges then we get this remainder.
This remainder probability should be pretty small. Other cases just ignore it
if the sum of probabilities is not equal to one, so we do the same.

Reviewers: chandlerc, sanjoy, vsk, junbuml, reames
Reviewed By: reames
Subscribers: reames, llvm-commits
Differential Revision: https://reviews.llvm.org/D32124

llvm-svn: 302883
2017-05-12 07:50:06 +00:00
Peter Collingbourne f3e9f12296 CallGraph: Remove almost-unused field 'Root'.
llvm-svn: 302852
2017-05-11 23:59:05 +00:00
Teresa Johnson 2a6b7991d4 Restrict call metadata based hotness detection to Sample PGO mode
Summary:
Don't use the metadata on call instructions for determining hotness
unless we are in sample PGO mode, where it is needed because profile
counts are not accurate. In instrumentation mode this is not necessary
and does more harm than good when calls have VP metadata that hasn't
been properly scaled after transformations or dropped after constant
prop based devirtualization (both should be fixed, but we don't need
to do this in the first place for instrumentation PGO).

This required adjusting a number of tests to distinguish between sample
and instrumentation PGO handling, and to add in profile summary metadata
so that getProfileCount can get the summary.

Reviewers: davidxl, danielcdh

Subscribers: aemerson, rengolin, mehdi_amini, Prazek, llvm-commits

Differential Revision: https://reviews.llvm.org/D32877

llvm-svn: 302844
2017-05-11 23:18:05 +00:00
Easwaran Raman c103ef89ee Decrease inlinecold-threshold to 45
I ran the test-suite (including SPEC 2006) in PGO mode comparing cold
thresholds of 225 and 45. Here are some stats on the text size:

Out of 904 tests that ran, 197 see a change in text size. The average
text size reduction (of all the 904 binaries) is 1.07%. Of the 197
binaries, 19 see a text size increase, as high as 18%, but most of them
are small single source benchmarks. There are 3 multisource benchmarks
with a >0.5% size increase (0.7, 1.3 and 2.1 are their % increases). On
the other side of the spectrum, 31 benchmarks see >10% size reduction
and 6 of them are MultiSource.

I haven't run the test-suite with other values of inlinecold-threshold.
Since we have a cold callsite threshold of 45, I picked this value.

Differential revision: https://reviews.llvm.org/D33106

llvm-svn: 302829
2017-05-11 21:36:28 +00:00
Craig Topper e3e1a35f68 [SCEV] Reduce possible APInt allocations a bit.
llvm-svn: 302769
2017-05-11 06:48:54 +00:00
Craig Topper 6694a4e6d6 [SCEV] Remove unneeded 'using namespace APIntOps'.
llvm-svn: 302768
2017-05-11 06:48:51 +00:00
Teresa Johnson 94624aca2a Ensure non-null ProfileSummaryInfo passed to ModuleSummaryIndex builder
This fixes a ubsan bot failure after r302597, which made getProfileCount
non-static, but ended up invoking it on a null ProfileSummaryInfo object
in some cases from buildModuleSummaryIndex.

Most testing passed because the non-static getProfileCount currently
doesn't access any member variables, but I found this when testing a
follow on patch (D32877) that adds a member variable access.

llvm-svn: 302705
2017-05-10 18:52:16 +00:00
Amara Emerson 836b0f48c1 Add a late IR expansion pass for the experimental reduction intrinsics.
This pass uses a new target hook to decide whether or not to expand a particular
intrinsic to the shufflevector sequence.

Differential Revision: https://reviews.llvm.org/D32245

llvm-svn: 302631
2017-05-10 09:42:49 +00:00
Easwaran Raman f5f9160072 [ProfileSummary] Make getProfileCount a non-static member function.
This change is required because the notion of count is different for
sample profiling and getProfileCount will need to determine the
underlying profile type.

Differential revision: https://reviews.llvm.org/D33012

llvm-svn: 302597
2017-05-09 23:21:10 +00:00
Amara Emerson cf9daa33a7 Introduce experimental generic intrinsics for horizontal vector reductions.
- This change allows targets to opt-in to using them instead of the log2
  shufflevector algorithm.
- The SLP and Loop vectorizers have the common code to do shuffle reductions
  factored out into LoopUtils, and now have a unified interface for generating
  reductions regardless of the preference of the target. LoopUtils now uses TTI
  to determine what kind of reductions the target wants to handle.
- For CodeGen, basic legalization support is added.

Differential Revision: https://reviews.llvm.org/D30086

llvm-svn: 302514
2017-05-09 10:43:25 +00:00
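A scalar model of the log2 shufflevector algorithm the new intrinsics can replace (an illustration only; the real lowering shuffles vector registers rather than indexing an array, and the function name is made up):

  #include <cstddef>

  // Repeatedly fold the upper half of the lanes onto the lower half until a
  // single lane holds the sum.  N must be a power of two.
  int reduceAdd(int Lanes[], std::size_t N) {
    for (std::size_t Half = N / 2; Half != 0; Half /= 2)
      for (std::size_t I = 0; I != Half; ++I)
        Lanes[I] += Lanes[I + Half];
    return Lanes[0];
  }
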
Craig Topper ef869ecf0e [SCEV] Don't use std::move on both inputs to APInt::operator+ or operator-. It might be confusing to the reader. NFC
llvm-svn: 302448
2017-05-08 17:39:01 +00:00
Craig Topper 868813ffbb [ValueTracking] Use KnownOnes to provide a better bound on known zeros for ctlz/cttz intrinsics
This patch uses KnownOnes of the input of ctlz/cttz to bound the value that can be returned from these intrinsics. This makes these intrinsics more similar to the handling for ctpop which already uses known bits to produce a similar bound.

Differential Revision: https://reviews.llvm.org/D32521

llvm-svn: 302444
2017-05-08 17:22:34 +00:00
Sanjay Patel 6745447753 [InstSimplify] fix typo; NFC
llvm-svn: 302439
2017-05-08 16:35:02 +00:00
Craig Topper 6e11a05e7e [ValueTracking] Introduce a version of computeKnownBits that returns a KnownBits struct. Begin using it to replace internal usages of ComputeSignBit
This introduces a new interface for computeKnownBits that returns the KnownBits object instead of requiring it to be pre-constructed and passed in by reference.

This is a much more convenient interface as it doesn't require the caller to figure out the BitWidth to pre-construct the object. It's so convenient that I believe we can use this interface to remove the special ComputeSignBit flavor of computeKnownBits.

As a step towards that idea, this patch replaces all of the internal usages of ComputeSignBit with this new interface. As you can see from the patch there were a couple places where we called ComputeSignBit which really called computeKnownBits, and then called computeKnownBits again directly. I've reduced those places to only making one call to computeKnownBits. I bet there are probably external users that do it too.

A future patch will update the external users and remove the ComputeSignBit interface. I'll also working on moving more locations to the KnownBits returning interface for computeKnownBits.

Differential Revision: https://reviews.llvm.org/D32848

llvm-svn: 302437
2017-05-08 16:22:48 +00:00
Sanjay Patel 2df38a80f1 [InstCombine/InstSimplify] add comments about code duplication; NFC
llvm-svn: 302436
2017-05-08 16:21:55 +00:00
Zvi Rackover 558f86b4bc InstructionSimplify: Refactor foldIdentityShuffles. NFC.
Summary:
Minor refactoring of foldIdentityShuffles() which allows the removal of a
ConstantDataVector::get() in SimplifyShuffleVectorInstruction.

Reviewers: spatel

Reviewed By: spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D32955

Conflicts:
	lib/Analysis/InstructionSimplify.cpp

llvm-svn: 302433
2017-05-08 15:46:58 +00:00
Zvi Rackover dfbd3d7903 IR: Add a shufflevector mask commutation helper function. NFC.
Summary:
Following up on Sanjay's suggestion in D32955, move this functionality
into ShuffleVectorInst.

Reviewers: spatel, RKSimon

Reviewed By: RKSimon

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D32956

llvm-svn: 302420
2017-05-08 12:40:18 +00:00
Craig Topper 389d8cebd1 [SCEV] Use APInt::operator*=(uint64_t) to avoid a temporary APInt for a constant.
llvm-svn: 302404
2017-05-08 04:55:13 +00:00
Craig Topper d6f2639fd7 [SCEV] Have getRangeForAffineARHelper take StartRange by const reference to avoid a copy in many of the cases.
llvm-svn: 302398
2017-05-08 02:29:15 +00:00
Zvi Rackover 973ff7c74c InstructionSimplify: Relanding r301766
Summary:
Re-applying r301766 with a fix to a typo and a regression test.

The log message for r301766 was:
==================================================================================
    InstructionSimplify: Canonicalize shuffle operands. NFC-ish.

    Summary:
     Apply canonicalization rules:
        1. Input vectors with no elements selected from can be replaced with undef.
        2. If only one input vector is constant it shall be the second one.

    This allows constant-folding to cover more ad-hoc simplifications that
    were in place and avoid duplication for RHS and LHS checks.

    There are more rules we may want to add in the future when we see a
    justification. e.g. mask elements that select undef elements can be
    replaced with undef.
==================================================================================

Reviewers: spatel, RKSimon

Reviewed By: spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D32863

llvm-svn: 302373
2017-05-07 18:16:37 +00:00
Craig Topper 252682a41b [SCEV] Use move semantics in ScalarEvolution::setRange
Summary: This makes setRange take ConstantRange by rvalue reference since most callers were passing an unnamed temporary ConstantRange. We can then move that ConstantRange into the DenseMap caches. For the callers that weren't passing a temporary, I've added std::move to the local variable being passed.

Reviewers: sanjoy, mzolotukhin, efriedma

Reviewed By: sanjoy

Subscribers: takuto.ikuta, llvm-commits

Differential Revision: https://reviews.llvm.org/D32943

llvm-svn: 302371
2017-05-07 16:28:17 +00:00
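The shape of the API change, with std::string standing in for ConstantRange as an allocating value type (a generic sketch, not the SCEV code; all names below are made up): the setter takes its argument by rvalue reference and moves it into the cache, so call sites passing temporaries pay no copy and call sites passing named values add std::move.

  #include <map>
  #include <string>
  #include <utility>

  struct RangeCache {
    std::map<unsigned, std::string> Ranges;

    // Accept by rvalue reference and move into the map entry.
    void setRange(unsigned Key, std::string &&R) {
      Ranges[Key] = std::move(R);
    }
  };

  void example(RangeCache &C) {
    C.setRange(0, std::string("0,10"));   // temporary: moved straight in
    std::string Local("10,20");
    C.setRange(1, std::move(Local));      // named value: caller adds std::move
  }
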
Sanjay Patel 599e65b1ff [InstSimplify] use ConstantRange to simplify or-of-icmps
We can simplify (or (icmp X, C1), (icmp X, C2)) to 'true' or one of the icmps in many cases.
I had to check some of these with Alive to prove to myself it's right, but everything seems 
to check out. Eg, the deleted code in instcombine was completely ignoring predicates with
mismatched signedness.

This is a follow-up to:
https://reviews.llvm.org/rL301260
https://reviews.llvm.org/D32143

llvm-svn: 302370
2017-05-07 15:11:40 +00:00
Sanjoy Das df8c2ebe73 Remove unnecessary const_cast
llvm-svn: 302368
2017-05-07 05:29:36 +00:00
Sanjoy Das 40415eeb59 Use array_pod_sort instead of std::sort
llvm-svn: 302367
2017-05-07 05:29:34 +00:00
Craig Topper 6c5e22a4b8 [SCEV] Remove extra APInt copies from getRangeForAffineARHelper.
This changes one parameter to be a const APInt& since we only read from it. Use std::move on local APInts once they are no longer needed so we can reuse their allocations. Lastly, use operator+=(uint64_t) instead of adding 1 to an APInt twice creating a new APInt each time.

llvm-svn: 302335
2017-05-06 06:03:07 +00:00
Craig Topper 69f1af29fb [SCEV] Use std::move to avoid some APInt copies.
llvm-svn: 302334
2017-05-06 05:22:56 +00:00
Craig Topper c97fdb846e [SCEV] Use APInt's uint64_t operations instead of creating a temporary APInt to hold 1.
llvm-svn: 302333
2017-05-06 05:15:11 +00:00
Craig Topper 8f26b7945e [SCEV] Avoid a couple APInt copies by capturing by reference since the method returns a reference.
llvm-svn: 302332
2017-05-06 05:15:09 +00:00
Craig Topper 2b195fd2c3 [LazyValueInfo] Avoid unnecessary copies of ConstantRanges
Summary:
ConstantRange contains two APInts which can allocate memory if their width is larger than 64-bits. So we shouldn't copy it when we can avoid it.

This changes LVILatticeVal::getConstantRange() to return its internal ConstantRange by reference. This allows many places that just need a ConstantRange reference to avoid making a copy.

Several places now capture the return value of getConstantRange() by reference so they can call methods on it that don't need a new object.

Lastly it adds std::move in one place to capture to move a local ConstantRange into an LVILatticeVal.

Reviewers: reames, dberlin, sanjoy, anna

Reviewed By: reames

Subscribers: grandinj, llvm-commits

Differential Revision: https://reviews.llvm.org/D32884

llvm-svn: 302331
2017-05-06 03:35:15 +00:00
Matthias Braun 60b40b8fec TargetLibraryInfo: Introduce wcslen
wcslen is part of the C99 and C++98 standards.

- This introduces the function to TargetLibraryInfo.
- Also set attributes for wcslen in llvm::inferLibFuncAttributes().

Differential Revision: https://reviews.llvm.org/D32837

llvm-svn: 302278
2017-05-05 20:25:50 +00:00
Craig Topper f0aeee01c3 [KnownBits] Add wrapper methods for setting and clear all bits in the underlying APInts in KnownBits.
This adds routines for resetting KnownBits to unknown, making the value all zeros or all ones. It also adds methods for querying if the value is zero, all ones or unknown.

Differential Revision: https://reviews.llvm.org/D32637

llvm-svn: 302262
2017-05-05 17:36:09 +00:00
Sanjay Patel e42b4d566e [InstSimplify] add folds for or-of-casted-icmps
The sibling folds for 'and' with casts were added with https://reviews.llvm.org/rL273200.
This is a preliminary step for adding the 'or' variants for the folds added with https://reviews.llvm.org/rL301260.

The reason for the strange form with constant LHS in the 1st test is because there's another missing fold in that
case for the inverted predicate. That should be fixed when we add the ConstantRange functionality for 'or-of-icmps' 
that already exists for 'and-of-icmps'.

I'm hoping to share more code for the and/or cases, so we won't have these differences. This will allow us to remove
code from InstCombine. It's also possible that we can remove some code here in InstSimplify. I think we have some 
duplicated folds because patterns are not matched in a general way.

Differential Revision: https://reviews.llvm.org/D32876

llvm-svn: 302189
2017-05-04 19:51:34 +00:00
Sanjay Patel 142cb83768 [InstSimplify] move logic-of-icmps helper functions; NFC
Putting these next to each other should make it easier to see
what's missing from each side. Patch to plug one of those holes
should be posted soon.

llvm-svn: 302178
2017-05-04 18:19:17 +00:00
Peter Collingbourne 9667b91b13 Re-apply r302108, "IR: Use pointers instead of GUIDs to represent edges in the module summary. NFCI."
with a fix for the clang backend.

llvm-svn: 302176
2017-05-04 18:03:25 +00:00
Michael Zolotukhin 3207d30fdd Fix a typo.
llvm-svn: 302175
2017-05-04 17:42:34 +00:00
Eric Liu f6039f255e Revert "IR: Use pointers instead of GUIDs to represent edges in the module summary. NFCI."
This reverts commit r302108. This causes crash in clang bootstrap with LTO.

Contacted the author in the original commit.

llvm-svn: 302140
2017-05-04 11:49:39 +00:00
Peter Collingbourne 5f85a9deda IR: Use pointers instead of GUIDs to represent edges in the module summary. NFCI.
When profiling a no-op incremental link of Chromium I found that the functions
computeImportForFunction and computeDeadSymbols were consuming roughly 10% of
the profile. The goal of this change is to improve the performance of those
functions by changing the map lookups that they were previously doing into
pointer dereferences.

This is achieved by changing the ValueInfo data structure to be a pointer to
an element of the global value map owned by ModuleSummaryIndex, and changing
reference lists in the GlobalValueSummary to hold ValueInfos instead of GUIDs.
This means that a ValueInfo will take a client directly to the summary list
for a given GUID.

Differential Revision: https://reviews.llvm.org/D32471

llvm-svn: 302108
2017-05-04 03:36:16 +00:00
Michael Zolotukhin 37162adf3e [SCEV] createAddRecFromPHI: Optimize for the most common case.
Summary:
The existing implementation creates a symbolic SCEV expression every
time we analyze a phi node and then has to remove it when the analysis
is finished. This is very expensive, and in most cases it's also
unnecessary. According to the data I collected, ~60-70% of analyzed phi
nodes (measured on SPEC) have the following form:
  PN = phi(Start, OP(Self, Constant))
Handling such cases separately significantly speeds this up.

Reviewers: sanjoy, pete

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D32663

llvm-svn: 302096
2017-05-03 23:53:38 +00:00
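A hedged sketch (not the SCEV implementation; the helper name is made up) of recognizing the common shape described above, PN = phi(Start, OP(Self, Constant)), using plain IR accessors:

  #include "llvm/IR/Constants.h"
  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  // True if one incoming value of a two-input phi is a binary operator over
  // the phi itself and a constant.
  static bool isSimpleRecurrence(const PHINode *PN) {
    if (PN->getNumIncomingValues() != 2)
      return false;
    for (unsigned I = 0; I != 2; ++I) {
      const auto *BO = dyn_cast<BinaryOperator>(PN->getIncomingValue(I));
      if (!BO)
        continue;
      if ((BO->getOperand(0) == PN && isa<Constant>(BO->getOperand(1))) ||
          (BO->getOperand(1) == PN && isa<Constant>(BO->getOperand(0))))
        return true;
    }
    return false;
  }
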
Craig Topper 8189a87a1e [KnownBits] Add methods for determining if KnownBits is a constant value
This patch adds isConstant and getConstant for determining if KnownBits represents a constant value and to retrieve the value. Use them to simplify code.

Differential Revision: https://reviews.llvm.org/D32785

llvm-svn: 302091
2017-05-03 23:12:29 +00:00
Craig Topper 6b3940a4b3 [ValueTracking] Remove handling for BitWidth being 0 in ComputeSignBit and isKnownNonZero.
I don't believe it's possible to have non-zero values here since DataLayout became required. The APInt constructor inside of the KnownBits object will assert if this ever happens.

llvm-svn: 302089
2017-05-03 22:25:19 +00:00
Craig Topper d938fd1397 [KnownBits] Add zext, sext, and trunc methods to KnownBits
This patch adds zext, sext, and trunc methods to KnownBits and uses them where possible.

Differential Revision: https://reviews.llvm.org/D32784

llvm-svn: 302088
2017-05-03 22:07:25 +00:00
Reid Kleckner a0b45f4bfc [IR] Abstract away ArgNo+1 attribute indexing as much as possible
Summary:
Do three things to help with that:
- Add AttributeList::FirstArgIndex, which is an enumerator currently set
  to 1. It allows us to change the indexing scheme with fewer changes.
- Add addParamAttr/removeParamAttr. This just shortens addAttribute call
  sites that would otherwise need to spell out FirstArgIndex.
- Remove some attribute-specific getters and setters from Function that
  take attribute list indices.  Most of these were only used from
  BuildLibCalls, and doesNotAlias was only used to test or set if the
  return value is malloc-like.

I'm happy to split the patch, but I think they are probably easier to
review when taken together.

This patch should be NFC, but it sets the stage to change the indexing
scheme to this, which is more convenient when indexing into an array:
  0: func attrs
  1: retattrs
  2...: arg attrs
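
A standalone sketch of the indexing issue (illustrative names only, not the real AttributeList API): hiding the offset behind a FirstArgIndex constant and a parameter-specific helper keeps call sites independent of the scheme above.

```
#include <cassert>
#include <map>
#include <string>

struct AttrListSketch {
  enum : unsigned { ReturnIndex = 0, FirstArgIndex = 1 };
  std::map<unsigned, std::string> Attrs;   // attribute-list index -> attribute

  void addAttribute(unsigned Index, const std::string &A) { Attrs[Index] = A; }
  // Call sites talk in argument numbers; only this helper knows the offset.
  void addParamAttr(unsigned ArgNo, const std::string &A) {
    addAttribute(ArgNo + FirstArgIndex, A);
  }
};

int main() {
  AttrListSketch AL;
  AL.addParamAttr(0, "nonnull");   // attribute on the first argument
  assert(AL.Attrs.count(AttrListSketch::FirstArgIndex + 0) == 1);
}
```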

Reviewers: chandlerc, pete, javed.absar

Subscribers: david2050, llvm-commits

Differential Revision: https://reviews.llvm.org/D32811

llvm-svn: 302060
2017-05-03 18:17:31 +00:00
Matt Arsenault 6a288c1e32 Replace hardcoded intrinsic list with speculatable attribute.
No change in which intrinsics should be speculated.

llvm-svn: 301995
2017-05-03 02:26:10 +00:00
Peter Collingbourne e95901caa4 Revert r295861, "[ModuleSummaryAnalysis] Don't crash when referencing unnamed globals."
We should always expect values to be named before running the module summary
analysis (see NameAnonGlobals pass), so it's fine if we crash in that case.

llvm-svn: 301991
2017-05-03 00:18:48 +00:00
Sanjay Patel d091e76e0e revert r301766: InstructionSimplify: Canonicalize shuffle operands. NFC-ish
Turns out this wasn't NFC-ish at all because there's a bug processing shuffles
that change the size of their input vectors (that case always seems to trip us
up). 

This should fix PR32872 while we investigate how it failed and reduce a testcase:
https://bugs.llvm.org/show_bug.cgi?id=32872
 

llvm-svn: 301977
2017-05-02 21:37:28 +00:00
Xinliang David Li 351d9b01b9 Refactor callsite cost computation into a helper function /NFC
Makes code more readable. The function will also be used
by the partial inlining's cost analysis.

llvm-svn: 301899
2017-05-02 05:38:41 +00:00
George Burgess IV 7bc507a2e8 Revert r301880
This change caused buildbot failures, apparently because we're not
passing around types that InstSimplify is used to seeing. I'm not overly
familiar with InstSimplify, so I'm reverting this until I can figure out
what exactly is wrong.

llvm-svn: 301885
2017-05-01 23:54:41 +00:00
George Burgess IV 6935aefdf0 [InstSimplify] Handle selects of GEPs with 0 offset
In particular (since it wouldn't fit nicely in the summary):
(select (icmp eq V 0) P (getelementptr P V)) -> (getelementptr P V)
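
The C++ pointer-arithmetic analogue, as an illustrative sanity check (not the InstSimplify code): when V is 0 the GEP is a no-op, so always taking P + V yields the same pointer the select would.

```
#include <cassert>
#include <cstddef>

int main() {
  int Buf[8] = {};
  int *P = Buf;
  for (std::ptrdiff_t V = 0; V < 8; ++V) {
    int *Sel = (V == 0) ? P : P + V;   // select (icmp eq V 0), P, (gep P, V)
    assert(Sel == P + V);              // the select is redundant
  }
}
```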

Differential Revision: https://reviews.llvm.org/D31435

llvm-svn: 301880
2017-05-01 23:12:08 +00:00
Sanjoy Das e6bca0eecb Rename WeakVH to WeakTrackingVH; NFC
This relands r301424.

llvm-svn: 301812
2017-05-01 17:07:49 +00:00
Sanjoy Das 08989c7ecd Rename isKnownNotFullPoison to programUndefinedIfPoison; NFC
Summary:
programUndefinedIfPoison makes more sense, given what the function
does; and I'm about to add a function with a name similar to
isKnownNotFullPoison (so do the rename to avoid confusion).

Reviewers: broune, majnemer, bjarke.roune

Reviewed By: broune

Subscribers: mcrosier, llvm-commits, mzolotukhin

Differential Revision: https://reviews.llvm.org/D30444

llvm-svn: 301776
2017-04-30 19:41:19 +00:00
Zvi Rackover 9d8cd821e6 InstructionSimplify: Canonicalize shuffle operands. NFC-ish.
Summary:
 Apply canonicalization rules:
    1. Input vectors with no elements selected from can be replaced with undef.
    2. If only one input vector is constant it shall be the second one.

This allows constant-folding to cover more ad-hoc simplifications that
were in place and avoid duplication for RHS and LHS checks.

There are more rules we may want to add in the future when we see a
justification. e.g. mask elements that select undef elements can be
replaced with undef.

Reviewers: spatel, RKSimon, andreadb, davide

Reviewed By: spatel, RKSimon

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D32338

llvm-svn: 301766
2017-04-30 06:25:04 +00:00
Zvi Rackover 0411e46fff InstructionSimplify: One getShuffleMask() replacing multiple getMaskValue(). NFC.
Summary: This is a preparatory step for D32338.

Reviewers: RKSimon, spatel

Reviewed By: RKSimon, spatel

Subscribers: spatel, llvm-commits

Differential Revision: https://reviews.llvm.org/D32388

llvm-svn: 301765
2017-04-30 06:10:54 +00:00
Zvi Rackover 4086e13e0d InstructionSimplify: Simplify a shuffle with a undef mask to undef
Summary:
Following the discussion in pr32486, adding the simplification:
 shuffle %x, %y, undef -> undef

Reviewers: spatel, RKSimon, andreadb, davide

Reviewed By: spatel

Subscribers: jroelofs, davide, llvm-commits

Differential Revision: https://reviews.llvm.org/D32293

llvm-svn: 301764
2017-04-30 06:06:26 +00:00
Craig Topper ca48af3c87 [KnownBits] Add methods for determining if the known bits represent a negative/nonnegative number and add methods for changing the negative/nonnegative state
Summary: This patch adds isNegative, isNonNegative for querying whether the sign bit is known. It also adds makeNegative and makeNonNegative for controlling the sign bit.

Reviewers: RKSimon, spatel, davide

Reviewed By: RKSimon

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D32651

llvm-svn: 301747
2017-04-29 16:43:11 +00:00
Michael Zolotukhin 146a221260 [SCEV] Use early exit in createAddRecFromPHI. NFC.
llvm-svn: 301703
2017-04-28 22:14:27 +00:00
Matt Arsenault cf5e7fe358 [ValueTracking] Teach isSafeToSpeculativelyExecute() about the speculatable attribute
Patch by Tom Stellard

llvm-svn: 301688
2017-04-28 21:13:09 +00:00
Daniel Berlin 4d0fe64ae3 Kill off the old SimplifyInstruction API by converting remaining users.
llvm-svn: 301673
2017-04-28 19:55:38 +00:00
Reid Kleckner 6652a52e2b Use Argument::hasAttribute and AttributeList::ReturnIndex more
This eliminates many extra 'Idx' induction variables in loops over
arguments in CodeGen/ and Target/. It also reduces the number of places
where we assume that ReturnIndex is 0 and that we should add one to
argument numbers to get the corresponding attribute list index.

NFC

llvm-svn: 301666
2017-04-28 18:37:16 +00:00
Craig Topper 24db6b800f [APInt] Add clearSignBit method. Use it and setSignBit in a few places. NFCI
llvm-svn: 301656
2017-04-28 16:58:05 +00:00
Craig Topper 96d6ee8576 [LazyValueInfo] Fix typo in comment. NFC
llvm-svn: 301655
2017-04-28 16:57:59 +00:00
Craig Topper 9eb2d72a1d [ValueTracking] Use APInt::isSubsetOf and APInt::intersects. NFC
llvm-svn: 301654
2017-04-28 16:57:55 +00:00
Jun Bum Lim 919f9e8d65 [InlineCost] Improve the cost heuristic for Switch
Summary:
The motivating example is shown below; it has 13 cases but only 2 distinct targets

```
lor.lhs.false2:                                   ; preds = %if.then
  switch i32 %Status, label %if.then27 [
    i32 -7012, label %if.end35
    i32 -10008, label %if.end35
    i32 -10016, label %if.end35
    i32 15000, label %if.end35
    i32 14013, label %if.end35
    i32 10114, label %if.end35
    i32 10107, label %if.end35
    i32 10105, label %if.end35
    i32 10013, label %if.end35
    i32 10011, label %if.end35
    i32 7008, label %if.end35
    i32 7007, label %if.end35
    i32 5002, label %if.end35
  ]
```
which is compiled into a balanced binary tree like this on AArch64 (similar on X86)

```
.LBB853_9:                              // %lor.lhs.false2
        mov     w8, #10012
        cmp             w19, w8
        b.gt    .LBB853_14
// BB#10:                               // %lor.lhs.false2
        mov     w8, #5001
        cmp             w19, w8
        b.gt    .LBB853_18
// BB#11:                               // %lor.lhs.false2
        mov     w8, #-10016
        cmp             w19, w8
        b.eq    .LBB853_23
// BB#12:                               // %lor.lhs.false2
        mov     w8, #-10008
        cmp             w19, w8
        b.eq    .LBB853_23
// BB#13:                               // %lor.lhs.false2
        mov     w8, #-7012
        cmp             w19, w8
        b.eq    .LBB853_23
        b       .LBB853_3
.LBB853_14:                             // %lor.lhs.false2
        mov     w8, #14012
        cmp             w19, w8
        b.gt    .LBB853_21
// BB#15:                               // %lor.lhs.false2
        mov     w8, #-10105
        add             w8, w19, w8
        cmp             w8, #9          // =9
        b.hi    .LBB853_17
// BB#16:                               // %lor.lhs.false2
        orr     w9, wzr, #0x1
        lsl     w8, w9, w8
        mov     w9, #517
        and             w8, w8, w9
        cbnz    w8, .LBB853_23
.LBB853_17:                             // %lor.lhs.false2
        mov     w8, #10013
        cmp             w19, w8
        b.eq    .LBB853_23
        b       .LBB853_3
.LBB853_18:                             // %lor.lhs.false2
        mov     w8, #-7007
        add             w8, w19, w8
        cmp             w8, #2          // =2
        b.lo    .LBB853_23
// BB#19:                               // %lor.lhs.false2
        mov     w8, #5002
        cmp             w19, w8
        b.eq    .LBB853_23
// BB#20:                               // %lor.lhs.false2
        mov     w8, #10011
        cmp             w19, w8
        b.eq    .LBB853_23
        b       .LBB853_3
.LBB853_21:                             // %lor.lhs.false2
        mov     w8, #14013
        cmp             w19, w8
        b.eq    .LBB853_23
// BB#22:                               // %lor.lhs.false2
        mov     w8, #15000
        cmp             w19, w8
        b.ne    .LBB853_3
```
However, the inline cost model estimates the cost to be linear with the number
of distinct targets and the cost of the above switch is just 2 InstrCosts.
The function containing this switch is then inlined about 900 times.

This change uses the general way of switch lowering for the inline heuristic. It
estimates the number of case clusters with the suitability check for a jump table
or bit test. Considering the binary search tree built for the clusters, this
change modifies the model to be linear with the size of the balanced binary
tree. The model is off by default for now:
  -inline-generic-switch-cost=false

This change was originally proposed by Haicheng in D29870.

Reviewers: hans, bmakam, chandlerc, eraman, haicheng, mcrosier

Reviewed By: hans

Subscribers: joerg, aemerson, llvm-commits, rengolin

Differential Revision: https://reviews.llvm.org/D31085

llvm-svn: 301649
2017-04-28 16:04:03 +00:00
Craig Topper f42b23f7d8 [ValueTracking] Convert computeKnownBitsFromRangeMetadata to use KnownBits struct.
llvm-svn: 301626
2017-04-28 06:28:56 +00:00
Daniel Berlin 99397cea69 Kill the old Simplify* APIs, leave SimplifyInstruction for the moment
llvm-svn: 301467
2017-04-26 20:56:17 +00:00
Daniel Berlin e6cb21a287 PHITransAddr: Use new SimplifyQuery based API.
llvm-svn: 301465
2017-04-26 20:56:13 +00:00
Craig Topper b45eabcf82 [ValueTracking] Introduce a KnownBits struct to wrap the two APInts for computeKnownBits
This patch introduces a new KnownBits struct that wraps the two APInt used by computeKnownBits. This allows us to treat them as more of a unit.

Initially I've just altered the signatures of computeKnownBits and InstCombine's simplifyDemandedBits to pass a KnownBits reference instead of two separate APInt references. I'll do similar to the SelectionDAG version of computeKnownBits/simplifyDemandedBits as a separate patch.

I've added a constructor that allows initializing both APInts to the same bit width with a starting value of 0. This reduces the repeated pattern of initializing both APInts. One place default-constructed the APInts, so I added a default constructor for those cases.

Going forward I would like to add more methods that will work on the pairs. For example trunc, zext, and sext occur on both APInts together in several places. We should probably add a clear method that can be used to clear both pieces. Maybe a method to check for conflicting information. A method to return (Zero|One) so we don't write it out everywhere. Maybe a method for (Zero|One).isAllOnesValue() to determine if all bits are known. I'm sure there are many other methods we can come up with.

Differential Revision: https://reviews.llvm.org/D32376

llvm-svn: 301432
2017-04-26 16:39:58 +00:00
Sanjoy Das 2cbeb00f38 Reverts commit r301424, r301425 and r301426
Commits were:

"Use WeakVH instead of WeakTrackingVH in AliasSetTracker's UnkownInsts"
"Add a new WeakVH value handle; NFC"
"Rename WeakVH to WeakTrackingVH; NFC"

The changes assumed pointers are 8 byte aligned on all architectures.

llvm-svn: 301429
2017-04-26 16:37:05 +00:00
Sanjoy Das 01de557738 Rename WeakVH to WeakTrackingVH; NFC
Summary:
I plan to use WeakVH to mean "nulls itself out on deletion, but does
not track RAUW" in a subsequent commit.

Reviewers: dblaikie, davide

Reviewed By: davide

Subscribers: arsenm, mehdi_amini, mcrosier, mzolotukhin, jfb, llvm-commits, nhaehnle

Differential Revision: https://reviews.llvm.org/D32266

llvm-svn: 301424
2017-04-26 16:20:52 +00:00
Daniel Berlin 3fef15b73f InstructionSimplify: Use braced initializer list for SimplifyQuery creation
llvm-svn: 301381
2017-04-26 04:10:02 +00:00
Daniel Berlin e8d74dce81 InstructionSimplify: Have SimplifyFPBinOp pass FastMathFlags by value, like we do everywhere else
llvm-svn: 301380
2017-04-26 04:10:00 +00:00
Daniel Berlin 5e3fcb1a2b InstructionSimplify: End our long national nightmare of ever-growing Simplify* arguments.
Summary:
Expose the internal query structure, start using it.

Note: This is the most minimal change possible i could create.  I have
trivial followups, like fixing the one use of const FastMathFlags &,
the renaming of CtxI to be consistent, etc.

This should be NFC.

Reviewers: majnemer, davide

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D32448

llvm-svn: 301379
2017-04-26 04:09:56 +00:00
Craig Topper f3dbd17d0a [APInt] Use isSubsetOf, intersects, and bit counting methods to reduce temporary APInts
This patch uses various APInt methods to reduce temporary APInt creation.

This should be all of the unrelated cleanups that got buried in D32376(creating a KnownBits struct) as well as some pointed out by Simon during the review of that. Plus a few improvements to use counting instead of masking.

I've left out any places where we do something like (KnownZero & KnownOne) != 0 as I plan to add a helper method to KnownBits to ask that question and didn't want to thrash that code an additional time.

Differential Revision: https://reviews.llvm.org/D32495

llvm-svn: 301338
2017-04-25 17:46:30 +00:00
Craig Topper 0b650d3569 [InstSimplify] Handle (~A & ~B) | (~A ^ B) -> ~A ^ B
The code Sanjay Patel moved over from InstCombine doesn't work properly if the 'and' has both inputs as nots because we used a commuted op matcher on the 'and' first. But this will bind to the first 'not' on 'and' when there could be two 'not's. InstCombine could rely on DeMorgan to ensure the 'and' wouldn't have two 'not's eventually, but InstSimplify can't rely on that.

This patch matches the xor first then checks for the ands and allows a not of either operand of the xor.
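
For what it's worth, the identity being simplified can be checked exhaustively at 8 bits (an illustration of the fold itself, not of the matcher change):

```
#include <cassert>
#include <cstdint>

int main() {
  for (unsigned A = 0; A < 256; ++A)
    for (unsigned B = 0; B < 256; ++B) {
      uint8_t NA = uint8_t(~A), NB = uint8_t(~B);
      uint8_t LHS = uint8_t((NA & NB) | (NA ^ B));   // (~A & ~B) | (~A ^ B)
      uint8_t RHS = uint8_t(NA ^ B);                 // ~A ^ B
      assert(LHS == RHS);
    }
}
```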

Differential Revision: https://reviews.llvm.org/D32458

llvm-svn: 301329
2017-04-25 17:01:32 +00:00
Craig Topper 2d9afa7745 [ValueTracking] Use APInt::operator|=(uint64_t) instead of creating a temporary APInt. NFC
llvm-svn: 301325
2017-04-25 16:48:14 +00:00
Craig Topper da8ff4181c [ValueTracking] Use APInt instead of auto. NFC
This is a pre-commit for a patch I'm working on to turn KnownZero/One into a struct. Once I do that the type here will be less obvious.

llvm-svn: 301324
2017-04-25 16:48:09 +00:00
Craig Topper 9c932d31e1 [ValueTracking] Use BitWidth local variable instead of re-reading it from KnownZero. NFC
This is a pre-commit for a patch that I'm working on to merge KnownZero/KnownOne into a KnownBits struct which would have had to touch this line.

llvm-svn: 301323
2017-04-25 16:48:03 +00:00
Sanjoy Das 561247a823 [IVUsers] Don't bail out of normalizing non-affine add recs
Summary:
In a previous change I changed SCEV's normalization / denormalization
to work with non-affine add recs.  So the bailout in IVUsers can be
removed.

Reviewers: atrick, efriedma

Reviewed By: atrick

Subscribers: davide, mcrosier, llvm-commits

Differential Revision: https://reviews.llvm.org/D32105

llvm-svn: 301298
2017-04-25 06:53:25 +00:00
Sanjoy Das bbebcb6c4d Teach SCEV normalization to de/normalize non-affine add recs
Summary:
Before this change, SCEV Normalization would incorrectly normalize
non-affine add recurrences.  To work around this there was (still is)
a check in place to make sure we only tried to normalize affine add
recurrences.

We recently found a bug in the aforementioned check to bail out of
normalizing non-affine add recurrences.  However, instead of fixing
the bailout, I have decided to teach SCEV normalization to work
correctly with non-affine add recurrences, making the bailout
unnecessary (I'll remove it in a subsequent change).

I've also added some unit tests (which would have failed before this
change).

Reviewers: atrick, sunfish, efriedma

Reviewed By: atrick

Subscribers: mcrosier, mzolotukhin, llvm-commits

Differential Revision: https://reviews.llvm.org/D32104

llvm-svn: 301281
2017-04-25 00:09:19 +00:00
Sanjay Patel 35c362ebbb [InstSimplify] use ConstantRange to simplify more and-of-icmps
We can simplify (and (icmp X, C1), (icmp X, C2)) to one of the icmps in many cases. 
I had to check some of these with Alive to prove to myself it's right, but everything 
seems to check out. Eg, the code in instcombine was completely ignoring predicates with 
mismatched signedness.

Handling or-of-icmps would be a follow-up step.
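
As one concrete flavor of what the ConstantRange reasoning enables (an illustrative check, not the actual implementation): the range implied by the stricter compare is contained in the range implied by the looser one, so the 'and' collapses to the stricter compare.

```
#include <cassert>

int main() {
  // (X u< 10) && (X u< 20)  simplifies to  (X u< 10); check all 8-bit X.
  for (unsigned X = 0; X < 256; ++X)
    assert(((X < 10) && (X < 20)) == (X < 10));
}
```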

Differential Revision: https://reviews.llvm.org/D32143

llvm-svn: 301260
2017-04-24 21:52:39 +00:00
Piotr Padlewski 610c966a4e Handle invariant.group.barrier in BasicAA
Summary:
llvm.invariant.group.barrier returns a pointer that mustalias the
pointer it takes. It can't be marked with the `returned` attribute,
because then it would be removed easily. The other reason is that
only Alias Analysis can know about this, because if any other
pass knew it, then the result would be replaced with its
argument, which would be invalid.

We can think about returned pointer as something that mustalias, but
it doesn't have to be bitwise the same as the argument.

Reviewers: dberlin, chandlerc, hfinkel, sanjoy

Subscribers: reames, nlewycky, rsmith, anna, amharc

Differential Revision: https://reviews.llvm.org/D31585

llvm-svn: 301227
2017-04-24 19:37:17 +00:00
Sanjay Patel 0889225f51 [InstSimplify] move (A & ~B) | (A ^ B) -> (A ^ B) from InstCombine
This is a straight cut and paste, but there's a bigger problem: if this
fold exists for simplifyOr, there should be a DeMorganized version for
simplifyAnd. But more than that, we have a patchwork of ad hoc logic
optimizations in InstCombine. There should be some structure to ensure 
that we're not missing sibling folds across and/or/xor.
 

llvm-svn: 301213
2017-04-24 18:24:36 +00:00
Davide Italiano ebd77645cc [DomPrinter] Add a way to programmatically dump a dot representation.
Differential Revision:  https://reviews.llvm.org/D32145

llvm-svn: 301205
2017-04-24 17:48:44 +00:00
Sanjoy Das 0cdcdf018e Revert "[SCEV] Enable SCEV verification by default in EXPENSIVE_CHECKS builds"
This reverts commit r301150.  It breaks CodeGen/Hexagon/hwloop-wrap2.ll, reverting
while I investigate.

llvm-svn: 301154
2017-04-24 02:35:19 +00:00
Sanjoy Das 25972aa82e Fix unused variables / fields warnings in release builds
llvm-svn: 301151
2017-04-24 00:46:40 +00:00
Sanjoy Das 8919303b0a [SCEV] Enable SCEV verification by default in EXPENSIVE_CHECKS builds
llvm-svn: 301150
2017-04-24 00:41:58 +00:00
Sanjoy Das bdbc4938f9 [SCEV] Fix exponential time complexity by caching
llvm-svn: 301149
2017-04-24 00:09:46 +00:00
Sanjoy Das 148e49f3c8 [SCEV] Move towards a verifier without false positives
This change reboots SCEV's current (off by default) verification logic
to avoid false failures.  Instead of stringifying trip counts, it maps
old and new trip counts to the same ScalarEvolution "universe" and
asks ScalarEvolution to compute the difference between them.  If the
difference comes out to be a non-zero constant, then (barring some
corner cases) we *know* we messed up.

I've not yet enabled this by default since it hits an exponential time
issue in SCEV, but once I fix that, I'll flip it on by default in
EXPENSIVE_CHECKS builds.

llvm-svn: 301146
2017-04-23 23:04:45 +00:00
Easwaran Raman e1bd7cceca Remove a repeated comment line. NFC.
llvm-svn: 301059
2017-04-21 23:12:16 +00:00
Craig Topper 72f31a8381 [ValueTracking] Use APInt::setAllBits and APInt::intersects to simplify some code. NFC
llvm-svn: 300997
2017-04-21 16:43:32 +00:00
George Burgess IV 56169ed753 [MSSA] Clean up the updater a bit. NFC
- Mark an internal function static
- Remove the llvm namespace (just holding on to the `using namespace
  llvm;` Works on My Machine(TM))

llvm-svn: 300947
2017-04-21 04:54:52 +00:00
Eli Friedman d0e6ae5678 Revert r300746 (SCEV analysis for or instructions).
There have been multiple reports of this causing problems: a
compile-time explosion on the LLVM testsuite, and a stack
overflow for an opencl kernel.

llvm-svn: 300928
2017-04-20 23:59:05 +00:00
Craig Topper bcfd2d1789 [APInt] Rename getSignBit to getSignMask
getSignBit is a static function that creates an APInt with only the sign bit set. getSignMask seems like a better name to convey its functionality. In fact several places use it and then store it in an APInt named SignMask.

Differential Revision: https://reviews.llvm.org/D32108

llvm-svn: 300856
2017-04-20 16:56:25 +00:00
Craig Topper 9b71a402c2 [APInt] Cast calls to add/sub/mul overflow methods to void if only their overflow bool out param is used.
This is preparation for a clang change to improve the [[nodiscard]] warning to not be ignored on methods that return a class marked [[nodiscard]] that are defined in the class itself. See D32207.

We should consider adding wrapper methods to APInt that return the overflow flag directly and discard the APInt result. This would eliminate the void casts and the need to create a bool before the call to pass to the out param.

llvm-svn: 300758
2017-04-19 21:09:45 +00:00
Eli Friedman e77d2b86b4 [SCEV] Make SCEV or modeling more aggressive.
Use haveNoCommonBitsSet to figure out whether an "or" instruction
is equivalent to addition. This handles more cases than just
checking for a constant on the RHS.
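
The property being relied on can be checked exhaustively at 8 bits (a sanity check of the arithmetic fact, not the SCEV code): when two values share no set bits, 'or' and addition agree.

```
#include <cassert>
#include <cstdint>

int main() {
  for (unsigned A = 0; A < 256; ++A)
    for (unsigned B = 0; B < 256; ++B)
      if ((A & B) == 0)                            // no common bits set
        assert(uint8_t(A | B) == uint8_t(A + B));  // 'or' behaves like 'add'
}
```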

Differential Revision: https://reviews.llvm.org/D32239

llvm-svn: 300746
2017-04-19 20:19:58 +00:00
Sanjay Patel a3c297dba4 [InstSimplify] fold identity shuffles (recursing if needed)
This patch simplifies the examples from D31509 and D31927 (PR30630) and catches 
the basic identity shuffle tests that Zvi recently added.

I'm not sure if we have something like this in DAGCombiner, but we should?

It's worth noting that "MaxRecurse / RecursionLimit" is only 3 on entry at the moment. 
We might want to bump that up if there are longer shuffle chains like this in the wild.

For now, we're ignoring shuffles that have undef mask elements because it's not
clear how those should be handled.

Differential Revision: https://reviews.llvm.org/D31960

llvm-svn: 300714
2017-04-19 16:48:22 +00:00
Davide Italiano a9f047a594 [InstSimplify] Deduce correct type for vector GEP.
InstSimplify returned the wrong type when simplifying a vector GEP
and we ended up crashing when trying to replace all uses with the
new value. Fixes PR32697.

Differential Revision: https://reviews.llvm.org/D32180

llvm-svn: 300693
2017-04-19 14:23:42 +00:00
Sanjoy Das f09c1e346e Add a getPointerOperandType() helper to LoadInst and StoreInst; NFC
I will use this in a later change.

llvm-svn: 300613
2017-04-18 22:00:54 +00:00
Craig Topper 09bb760baa [MemoryBuiltins] Add isMallocOrCallocLikeFn so BasicAA can check for both at the same time
BasicAA wants to know if a function is either a malloc-like or a calloc-like function. Currently we have to check both separately. This means both calls check if it's an intrinsic, query TLI, check the nobuiltin attribute, scan the AllocationFnData, etc.

This patch adds a isMallocOrCallocLikeFn so we can go through all of the checks once per call.

This also changes the one other location I saw that called both together.

Differential Revision: https://reviews.llvm.org/D32188

llvm-svn: 300608
2017-04-18 21:43:46 +00:00
Craig Topper eae6db0e5c [MemoryBuiltins] Use ImmutableCallSite instead of CallSite to remove a const_cast and const correct. NFCI
llvm-svn: 300585
2017-04-18 20:17:23 +00:00
Craig Topper fc947bcfba [APInt] Use lshrInPlace to replace lshr where possible
This patch uses lshrInPlace to replace code where the object that lshr is called on is being overwritten with the result.

This adds an lshrInPlace(const APInt &) version as well.

Differential Revision: https://reviews.llvm.org/D32155

llvm-svn: 300566
2017-04-18 17:14:21 +00:00
Benjamin Kramer 61d85bc9ae [SCEV] Fix another unused variable warning in release builds.
llvm-svn: 300500
2017-04-17 21:07:26 +00:00
Wei Mi 66c4dd2e29 Fix an unused variable error in rL300494.
llvm-svn: 300499
2017-04-17 21:00:45 +00:00
Wei Mi 8c4053372e [SCEV] Add a local cache for getZeroExtendExpr and getSignExtendExpr to prevent
the exponential behavior.

The patch is to fix PR32043. Functions getZeroExtendExpr and getSignExtendExpr
may call themselves recursively more than once. This is potentially a 2^N
complexity behavior. The exponential behavior was not commonly exposed before
because of existing global cache mechanisms like UniqueSCEVs or some early-return
mechanisms when the flags FlagNSW or FlagNUW are seen. However, we still have cases
which can expose the exponential behavior, like the case in PR32043, so we add
a local cache in getZeroExtendExpr and getSignExtendExpr. If the input of the
functions -- a SCEV and type pair -- has been seen before, we can find the extended
expression directly in the local cache.
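
A generic sketch of the local-cache idea (illustrative code, not the actual getZeroExtendExpr implementation): memoizing on a (node, type) key turns a recursion that revisits shared subexpressions exponentially often into one that expands each pair at most once.

```
#include <map>
#include <utility>

struct Node { Node *L = nullptr, *R = nullptr; };

// Without the cache this recursion is O(2^depth) on DAGs with shared nodes.
int expand(Node *N, int Ty, std::map<std::pair<Node *, int>, int> &Cache) {
  if (!N)
    return 1;
  auto Key = std::make_pair(N, Ty);
  auto It = Cache.find(Key);
  if (It != Cache.end())
    return It->second;                       // (expr, type) pair seen before
  int Result = expand(N->L, Ty, Cache) + expand(N->R, Ty, Cache);
  Cache[Key] = Result;
  return Result;
}

int main() {
  Node A, B;
  A.L = A.R = &B;                            // shared subexpression
  std::map<std::pair<Node *, int>, int> Cache;
  return expand(&A, /*Ty=*/32, Cache) == 4 ? 0 : 1;
}
```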

Differential Revision: https://reviews.llvm.org/D30350

llvm-svn: 300494
2017-04-17 20:40:05 +00:00
Craig Topper d23004c37b Introduce APInt::isSignBitSet/isSignBitClear. Use isSignBitSet in place of isNegative in known bits tracking.
This makes statements like KnownZero.isNegative() (which means the value we're tracking is positive) less confusing.

llvm-svn: 300457
2017-04-17 16:38:20 +00:00
Serguei Katkov 11d9c4f691 [BPI] NFC: reorder ifs to bail out earlier
This is non-functional change to re-order if statements to bail out earlier
from unreachable and ColdCall heuristics.

Reviewers: sanjoy, reames, junbuml, vsk, chandlerc

Reviewed By: chandlerc

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D31704

llvm-svn: 300442
2017-04-17 06:39:47 +00:00
Serguei Katkov 2616bbb16d [BPI] Use metadata info before any other heuristics
Metadata is potentially more precise than any heuristics we use, so
it makes sense to use the metadata info first if it is available. However, it makes
sense to check it against other strong heuristics, like the unreachable one.
If an edge coming into an unreachable block has a higher probability than the
unreachable heuristic expects, we use the heuristic, and the remaining probability is
distributed equally among the other reachable blocks.

An example where metadata might be stronger than the unreachable heuristic is
as follows: it is possible that there are two branches, and for branch A
metadata says that its probability is (0, 2^25). For branch B
the probability is (1, 2^25).
So the expectation is that the first edge of B is hotter than the first edge of A,
because the first edge of A was not executed even once.
If the first edge of A points to the unreachable block, then using the unreachable
heuristic we'll set the probability for A to (1, 2^20), and now the edge of A
becomes hotter than the edge of B.
This is unexpected behavior.

This fixes the biggest part of https://bugs.llvm.org/show_bug.cgi?id=32214

Reviewers: sanjoy, junbuml, vsk, chandlerc

Reviewed By: chandlerc

Subscribers: llvm-commits, reames, davidxl

Differential Revision: https://reviews.llvm.org/D30631

llvm-svn: 300440
2017-04-17 04:33:04 +00:00
Craig Topper da886c665b [InstCombine][ValueTracking] When computing known bits for Srem make sure we don't compute known bits for the LHS twice.
If we already called computeKnownBits for the RHS being a constant power of 2, we've already computed everything we can and should just stop. I think previously we would still recurse if we had determined the result was negative or had not determined the sign bit at all.

llvm-svn: 300432
2017-04-16 21:46:12 +00:00
Bryant Wong c819ba8874 MemorySSA: Stop tracking def-or-use blocks.
The tracking is unused, since MemoryPhis are not pruned as of r282419.

Differential Revision: https://reviews.llvm.org/D32121

llvm-svn: 300428
2017-04-16 19:45:51 +00:00
Sanjay Patel 35ed2413af [InstSimplify] improve getTrue/getFalse; NFCI
The ConstantInt version has the same assert, and using null/allOnes is likely less efficient.
The only advantage of these local variants (and there's probably a better way to achieve this?)
is to save typing "ConstantInt::" over and over.

llvm-svn: 300426
2017-04-16 17:43:11 +00:00
Eric Christopher 908ed7f20c Tidy checking for the soft float attribute.
llvm-svn: 300394
2017-04-15 06:14:52 +00:00
Eric Christopher 85be8ca881 Cache the DataLayout rather than looking it up frequently.
llvm-svn: 300393
2017-04-15 06:14:50 +00:00
Reid Kleckner fb502d2f5e [IR] Make paramHasAttr to use arg indices instead of attr indices
This avoids the confusing 'CS.paramHasAttr(ArgNo + 1, Foo)' pattern.

Previously we were testing return value attributes with index 0, so I
introduced hasReturnAttr() for that use case.

llvm-svn: 300367
2017-04-14 20:19:02 +00:00
Sanjoy Das 3470e14ba4 Rewrite SCEV Normalization using SCEVRewriteVisitor; NFC
Removes all of the boilerplate, cache management etc. from
ScalarEvolutionNormalization, and keeps only the interesting bits.

llvm-svn: 300349
2017-04-14 17:42:10 +00:00
Sanjoy Das 01545beb75 Remove "#if 0"ed out assert
It won't compile after the recent changes I've made, and I think
keeping it in provides very little value.

Instead I've added (in an earlier commit) a C++ unit test to check the
Denormalize(Normalized(X)) == X property for specific instances of X,
which is what the assert was trying to do anyway.

llvm-svn: 300339
2017-04-14 16:47:15 +00:00
Sanjoy Das 369f3039a3 Delete some unnecessary boilerplate
The PostIncTransform class was not pulling its weight, so delete it
and use free functions instead.

This also makes the use of `function_ref` more idiomatic.  We were
storing an instance of function_ref in the PostIncTransform class
before, which was fine in that specific case, but the usage after this
change is more obviously okay.

llvm-svn: 300338
2017-04-14 16:47:12 +00:00
Sanjoy Das 478cd98b22 Use range for
llvm-svn: 300334
2017-04-14 15:50:19 +00:00
Sanjoy Das c5a87a1949 Simplify PostIncTransform further; NFC
Instead of having two ways to check if an add recurrence needs to be
normalized, just pass in one predicate to decide that.

llvm-svn: 300333
2017-04-14 15:50:07 +00:00
Sanjoy Das e3a15e832c Tighten the API for ScalarEvolutionNormalization
llvm-svn: 300331
2017-04-14 15:49:59 +00:00
Sanjoy Das ac9f3ea0b4 Remove NormalizeAutodetect; NFC
It is cleaner to have a callback based system where the logic of
whether an add recurrence is normalized or not lives on IVUsers.

This is one step in a multi-step cleanup.

llvm-svn: 300330
2017-04-14 15:49:53 +00:00
Craig Topper 66df10ff63 [ValueTracking] Calculate the KnownZeros for Intrinsic::ctpop without using a temporary APInt to count leading zeros on.
The APInt was created from an 'unsigned' and we just wanted to know how many bits were needed to represent the value. We can just use Log2_32 from MathExtras.h to get that info.

llvm-svn: 300309
2017-04-14 06:43:34 +00:00
Craig Topper 1281deaa00 [ValueTracking] Use APInt::isNegative(). NFC
llvm-svn: 300308
2017-04-14 06:43:32 +00:00
Craig Topper f8631cd1de [ValueTracking] Use APInt::sext instead of zext and setBitsFrom. NFC
llvm-svn: 300307
2017-04-14 06:43:29 +00:00
Sanjoy Das b4654299f3 Use range-for; NFC
llvm-svn: 300292
2017-04-14 01:33:15 +00:00
Sanjoy Das 62f4b6bece Use transform instead of manual loop; NFC
llvm-svn: 300291
2017-04-14 01:33:13 +00:00
Craig Topper e953dec673 [ValueTracking] Remove duplicate call to computeKnownBits for the operands of Select.
We call it unconditionally on the operands of the select. Then we decide if it's a min/max and call it on the min/max operands or on the select operands again. Either of those second calls will overwrite the results of the initial call, so we can just delete the first call.

llvm-svn: 300256
2017-04-13 20:39:37 +00:00
Craig Topper a80f2041f7 [ValueTracking] Prevent a call to computeKnownBits if we already know the state of the bit we would calculate. Also reuse a temporary APInt instead of creating a new one.
llvm-svn: 300239
2017-04-13 19:04:45 +00:00
Craig Topper 9ce07b6a15 [ValueTracking] Move a temporary APInt instead of copying it.
llvm-svn: 300233
2017-04-13 18:25:53 +00:00
Brian Gesiak 0a7894d99c [Analysis] Support bitreverse in -demanded-bits pass
Summary:
* Add a bitreverse case in the demanded bits analysis pass.
* Add tests for the bitreverse (and bswap) intrinsic in the
  demanded bits pass.
* Add a test case to the BDCE tests: that manipulations to
  high-order bits are eliminated once the bits are reversed
  and then right-shifted.

Reviewers: mkuper, jmolloy, hfinkel, trentxintong

Reviewed By: jmolloy

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D31857

llvm-svn: 300215
2017-04-13 16:44:25 +00:00
Craig Topper 81c03a7784 [InstSimplify] Don't try to constant fold AllocaInsts since it won't do anything.
Should give a small compile time improvement.

llvm-svn: 300125
2017-04-12 22:54:24 +00:00
Craig Topper 854824139e [ValueTracking] Teach GetUnderlyingObject to stop when it reachs an alloca instruction.
Previously it tried to call SimplifyInstruction, which doesn't know anything about allocas, so it defers to constant folding, which also doesn't do anything with allocas. This results in wasted cycles making calls that won't do anything. Given the frequency with which this function is called, this time adds up.

llvm-svn: 300118
2017-04-12 22:29:23 +00:00
Jonas Paulsson da74ed42da [LoopVectorizer, TTI] New method supportsEfficientVectorElementLoadStore()
Since SystemZ supports vector element load/store instructions, there is no
need for extracts/inserts if a vector load/store gets scalarized.

This patch lets Target specify that it supports such instructions by means of
a new TTI hook that defaults to false.

The use for this is in the LoopVectorizer getScalarizationOverhead() method,
which will with this patch produce a smaller sum for a vector load/store on
SystemZ.

New test: test/Transforms/LoopVectorize/SystemZ/load-store-scalarization-cost.ll

Review: Adam Nemet
https://reviews.llvm.org/D30680

llvm-svn: 300056
2017-04-12 12:41:37 +00:00
Jonas Paulsson fccc7d66c3 [SystemZ] TargetTransformInfo cost functions implemented.
getArithmeticInstrCost(), getShuffleCost(), getCastInstrCost(),
getCmpSelInstrCost(), getVectorInstrCost(), getMemoryOpCost(),
getInterleavedMemoryOpCost() implemented.

Interleaved access vectorization enabled.

BasicTTIImpl::getCastInstrCost() improved to check for legal extending loads,
in which case the cost of the z/sext instruction becomes 0.

Review: Ulrich Weigand, Renato Golin.
https://reviews.llvm.org/D29631

llvm-svn: 300052
2017-04-12 11:49:08 +00:00
Chandler Carruth 927d8e610a [IR] Redesign the case iterator in SwitchInst to actually be an iterator
and to expose a handle to represent the actual case rather than having
the iterator return a reference to itself.

All of this allows the iterator to be used with common STL facilities,
standard algorithms, etc.

Doing this exposed some missing facilities in the iterator facade that
I've fixed and required some work to the actual iterator to fully
support the necessary API.
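
A hedged usage sketch, assuming the post-change SwitchInst API in which cases() yields case handles with getCaseValue()/getCaseSuccessor(); the exact names are taken on trust rather than from this log:

```
#include "llvm/IR/Instructions.h"
using namespace llvm;

// Count how many cases of a switch jump to a given block: the loop variable
// is a case handle, not the iterator itself.
unsigned countCasesTo(SwitchInst &SI, BasicBlock *Target) {
  unsigned N = 0;
  for (auto Case : SI.cases())
    if (Case.getCaseSuccessor() == Target)
      ++N;
  return N;
}
```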

Differential Revision: https://reviews.llvm.org/D31548

llvm-svn: 300032
2017-04-12 07:27:28 +00:00
Serguei Katkov ecebc3db72 [BPI] Refactor post domination calculation and simple fix for ColdCall
Collection of PostDominatedByUnreachable and PostDominatedByColdCall has been
split out of the heuristics themselves. The update of the data now happens for each basic
block (previously, the update for PostDominatedByColdCall might be skipped if the
unreachable or metadata heuristic handled this basic block).

This separation allows re-ordering of the heuristics without losing
the post-domination information.

Reviewers: sanjoy, junbuml, vsk, chandlerc, reames

Reviewed By: chandlerc

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D31701

llvm-svn: 300029
2017-04-12 05:42:14 +00:00
Zvi Rackover 30efd24d78 InstSimplify: A shuffle of a splat is always the splat itself
Summary:
Fold:
 shuffle (splat-shuffle), undef, M --> splat-shuffle

Reviewers: spatel, RKSimon, craig.topper

Reviewed By: RKSimon

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D31527

llvm-svn: 299990
2017-04-11 21:37:02 +00:00
Daniel Berlin 554dcd8c89 MemorySSA: Move to Analysis, from Transforms/Utils. It's used as
Analysis, it has Analysis passes, and once NewGVN is made an Analysis,
this removes the cross dependency from Analysis to Transform/Utils.
NFC.

llvm-svn: 299980
2017-04-11 20:06:36 +00:00
Vassil Vassilev e1f12fadc0 Remove unused functions. Remove static qualifier from functions in header files. NFC.
llvm-svn: 299947
2017-04-11 14:55:32 +00:00
Craig Topper 0c19861051 [InstSimplify] Use cast instead of dyn_cast after isa<> check. NFCI
llvm-svn: 299870
2017-04-10 19:37:10 +00:00
Craig Topper 492db48733 [ConstantFolding] Use Intrinsic::not_intrinsic instead of 0 for readability. NFCI
llvm-svn: 299801
2017-04-07 21:36:32 +00:00
Craig Topper 60dd9cd8e4 [InstSimplify] Use Instruction::BinaryOps instead of unsigned for a few function operands to remove some casts. NFC
llvm-svn: 299745
2017-04-07 05:57:51 +00:00
Daniel Berlin d952ceae2f AliasAnalysis: Be less conservative about volatile than atomic.
Summary:
getModRefInfo is meant to answer the question "what impact does this
instruction have on a given memory location" (not even another
instruction).

Long debate on this on IRC comes to the conclusion the answer should be "nothing special".

That is, a noalias volatile store does not affect a memory location
just by being volatile.  Note: DSE and GVN and memdep currently
believe this, because memdep just goes behind AA's back after it says
"modref" right now.

see line 635 of memdep. Prior to this patch we would get modref there, then check aliasing,
and if it said noalias, we would continue.

getModRefInfo *already* has this same AA check, it just wasn't being used because volatile was
lumped in with ordering.

(I am separately testing whether this code in memdep is now dead except for the invariant load case)

Reviewers: jyknight, chandlerc

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D31726

llvm-svn: 299741
2017-04-07 01:28:36 +00:00
Craig Topper 8ef20ea7c2 [InstSimplify] Remove unreachable default from SimplifyBinOp.
We have dedicated handlers for every opcode so nothing can get here anymore. The switch doesn't get detected as fully covered because Opcode is an unsigned. Casting to Instruction::BinaryOps still doesn't detect it because BinaryOpsEnd is in the enum and 1 past the last opcode.

llvm-svn: 299687
2017-04-06 18:59:08 +00:00
Craig Topper 2f1e1c351b [InstSimplify] Teach SimplifyMulInst to recognize vectors of i1 as And. Not just scalar i1.
llvm-svn: 299665
2017-04-06 17:33:37 +00:00
Craig Topper aa5f524095 [InstSimplify] Teach SimplifyAddInst and SimplifySubInst that vectors of i1 can be treated as Xor too.
llvm-svn: 299626
2017-04-06 05:28:41 +00:00
James Molloy 37dd4d7aaa [LAA] Correctly return a half-open range in expandBounds
This is a latent bug that's been hanging around for a while. For a loop-invariant
pointer, expandBounds would return the range {Ptr, Ptr}, but this was interpreted
as a half-open range, not a closed range. So we ended up planting incorrect
bounds checks. Even worse, they were tautological, so we ended up incorrectly
executing the optimized loop.

llvm-svn: 299526
2017-04-05 09:24:26 +00:00
Zvi Rackover 8f460655a2 InstSimplify: Add a hook for shufflevector
Summary:
Add a hook for simplification of shufflevector's with the following rules:
- Constant folding - NFC, as it was already being done by the default handler.
- If only one of the operands is constant, constant fold the shuffle if the
  mask does not select elements from the variable operand, to show the hook is firing and affecting the test-cases.

Reviewers: RKSimon, craig.topper, spatel, sanjoy, nlopes, majnemer

Reviewed By: spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D31525

llvm-svn: 299393
2017-04-03 22:05:30 +00:00
Jun Bum Lim dee5565869 [CodeGenPrep] move aarch64-type-promotion to CGP
Summary:
Move the aarch64-type-promotion pass within the existing type promotion framework in CGP.
This change also supports forking sexts when a new sext is required for promotion.
Note that this change is based on D27853, and I am submitting it early to provide a better idea of D27853.

Reviewers: jmolloy, mcrosier, javed.absar, qcolombet

Reviewed By: qcolombet

Subscribers: llvm-commits, aemerson, rengolin, mcrosier

Differential Revision: https://reviews.llvm.org/D28680

llvm-svn: 299379
2017-04-03 19:20:07 +00:00
Craig Topper d33ee1b960 [APInt] Move isMask and isShiftedMask out of APIntOps and into the APInt class. Implement them without memory allocation for multiword
This moves the isMask and isShiftedMask functions to be class methods. They now use the MathExtras.h function for single word size and leading/trailing zeros/ones or countPopulation for the multiword size. The previous implementation made multiple temporary memory allocations to do the bitwise arithmetic operations to match the MathExtras.h implementation.

Differential Revision: https://reviews.llvm.org/D31565

llvm-svn: 299362
2017-04-03 16:34:59 +00:00
Sanjay Patel 8b5ad3f00e [InstSimplify] add constant folding for fdiv/frem
Also, add a helper function so we don't have to repeat this code for each binop.

llvm-svn: 299309
2017-04-01 19:05:11 +00:00
Sanjay Patel 1fd16f073d fix formatting; NFC
llvm-svn: 299307
2017-04-01 18:40:30 +00:00
Craig Topper 9ab8d7f9c3 [APInt] Remove the mul/urem/srem/udiv/sdiv functions from the APIntOps namespace. Replace the few usages with calls to the class methods. NFC
llvm-svn: 299292
2017-04-01 05:08:57 +00:00
Craig Topper 885fa12e8a [APInt] Remove shift functions from APIntOps namespace. Replace the few users with the APInt class methods. NFCI
llvm-svn: 299248
2017-03-31 20:01:16 +00:00
Max Kazantsev 2e44d2969a [ScalarEvolution] Re-enable Predicate implication from operations
The patch rL298481 was reverted due to a crash on the clang-with-lto-ubuntu build.
The reason for the crash was a type mismatch between either a or b and RHS in the following situation:

  LHS = sext(a +nsw b) > RHS.

This is a quite rare, but still possible, situation. Normally we need to cast all of {a, b, RHS} to their widest type.
But we try to avoid creating new SCEVs that are not constants, to avoid initiating recursive analysis that
can take a lot of time and/or cache a bad value for the iteration count. To deal with this, in this patch we
reject this case and will not try to analyze it if the type of the sum doesn't match the type of RHS. In this
situation we don't need to create any non-constant SCEVs.

This patch also adds an assertion to the method IsProvedViaContext so that we fail on it and do not
go further into range analysis etc. (because in some situations these analyses succeed even when the passed
arguments have wrong types, which should not normally happen).

The patch also contains a fix for a problem with a too-narrow scope of the analysis, caused by incorrect
usage of predicates in recursive invocations.

The regression test on the said failure: test/Analysis/ScalarEvolution/implied-via-addition.ll

Reviewers: reames, apilipenko, anna, sanjoy

Reviewed By: sanjoy

Subscribers: mzolotukhin, mehdi_amini, llvm-commits

Differential Revision: https://reviews.llvm.org/D31238

llvm-svn: 299205
2017-03-31 12:05:30 +00:00
Simon Pilgrim 6bdc755519 Spelling mistakes in comments. NFCI.
llvm-svn: 299197
2017-03-31 10:59:37 +00:00
Peter Collingbourne 61781ac26e ModuleSummaryAnalysis: Use a more precise #include. NFC.
llvm-svn: 299142
2017-03-31 00:08:24 +00:00
Craig Topper 3a40a397c3 [InstSimplify] Use m_SignBit instead of calling getSignBit and using m_Specific. NFCI
llvm-svn: 299121
2017-03-30 22:21:16 +00:00
Craig Topper 6856d341a8 [InstSimplify] Use APInt::isMaxSignedValue() instead of comparing with ~APInt::getSignBit. NFC
llvm-svn: 299120
2017-03-30 22:10:54 +00:00
Craig Topper 8fbb74b5b2 Revert r298711 "[InstCombine] Provide a way to calculate KnownZero/One for Add/Sub in SimplifyDemandedUseBits without recursing into ComputeKnownBits"
Tsan bot is failing.

llvm-svn: 298745
2017-03-24 22:12:10 +00:00
Craig Topper d4521c2fc2 [InstCombine] Provide a way to calculate KnownZero/One for Add/Sub in SimplifyDemandedUseBits without recursing into ComputeKnownBits
SimplifyDemandedUseBits for Add/Sub already recursed down LHS and RHS for simplifying bits. If that didn't provide any simplifications we fall back to calling computeKnownBits which will recurse again. Instead just take the known bits for LHS and RHS we already have and call into a new function in ValueTracking that can calculate the known bits given the LHS/RHS bits.

llvm-svn: 298711
2017-03-24 16:56:51 +00:00
Max Kazantsev 7696a7edf9 Revert "[ScalarEvolution] Re-enable Predicate implication from operations"
This reverts commit rL298690

Causes failures on clang.

llvm-svn: 298693
2017-03-24 07:04:31 +00:00
Max Kazantsev 89554446e7 [ScalarEvolution] Re-enable Predicate implication from operations
The patch rL298481 was reverted due to a crash on the clang-with-lto-ubuntu build.
The reason for the crash was a type mismatch between either a or b and RHS in the following situation:

  LHS = sext(a +nsw b) > RHS.

This is a quite rare, but still possible, situation. Normally we need to cast all of {a, b, RHS} to their widest type.
But we try to avoid creating new SCEVs that are not constants, to avoid initiating recursive analysis that
can take a lot of time and/or cache a bad value for the iteration count. To deal with this, in this patch we
reject this case and will not try to analyze it if the type of the sum doesn't match the type of RHS. In this
situation we don't need to create any non-constant SCEVs.

This patch also adds an assertion to the method IsProvedViaContext so that we fail on it and do not
go further into range analysis etc. (because in some situations these analyses succeed even when the passed
arguments have wrong types, which should not normally happen).

The patch also contains a fix for a problem with a too-narrow scope of the analysis, caused by incorrect
usage of predicates in recursive invocations.

The regression test on the said failure: test/Analysis/ScalarEvolution/implied-via-addition.ll

llvm-svn: 298690
2017-03-24 06:19:00 +00:00
Craig Topper 059b98e044 [ValueTracking] Use uint64_t for CarryIn in computeKnownBitsAddSub instead of a creating a temporary APInt. NFC
llvm-svn: 298688
2017-03-24 05:38:09 +00:00
Craig Topper 2bd9514e8c [ValueTracking] Convert more places to use setHighBits/setLowBits/setSignBit. NFCI
llvm-svn: 298683
2017-03-24 03:57:24 +00:00
Dehao Chen 775341a14c Use isFunctionHotInCallGraph to set the function section prefix.
Summary: The current prefix-based function layout algorithm only looks at a function's entry count, which is not sufficient. A function should be grouped together if its entry count or any call edge count is hot.

Reviewers: davidxl, eraman

Reviewed By: eraman

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D31225

llvm-svn: 298656
2017-03-23 23:14:11 +00:00
Anna Thomas a8ce8fa700 [LVIPrinterPass] Print LVI info for function arguments
Using AssemblyAnnotationWriter for LVI printer prints
for instructions and basic blocks.
So, we explicitly need to print LVI info for the arguments of the function (these
are values and not instructions).

llvm-svn: 298640
2017-03-23 20:00:54 +00:00
Zhaoshi Zheng e3c9070f06 Model ashr(shl(x, n), m) as mul(x, 2^(n-m)) when n > m
Given below case:

  %y = shl %x, n
  %z = ashr %y, m

when n = m, SCEV models it as sext(trunc(x)). This patch tries to handle
the case where n > m by using sext(mul(trunc(x), 2^(n-m))) as the SCEV
expression.
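
One way to sanity-check the underlying arithmetic is a brute-force check at a fixed width (an illustration only; the widths used in the real SCEV expression may be placed differently): with W = 8, n = 5, m = 3, the arithmetic right shift of the shifted value equals the sign-extended low 3 bits of x times 2^(n-m).

```
#include <cassert>
#include <cstdint>

int8_t sextLow3(uint8_t X) {                 // sign-extend the low 3 bits to i8
  return int8_t(int8_t(uint8_t(X << 5)) >> 5);
}

int main() {
  for (int V = 0; V < 256; ++V) {
    uint8_t X = uint8_t(V);
    int8_t Shifted = int8_t(int8_t(uint8_t(X << 5)) >> 3);  // ashr(shl(x,5),3)
    int8_t Modeled = int8_t(sextLow3(X) * 4);                // sext(trunc(x)) * 2^(5-3)
    assert(Shifted == Modeled);
  }
}
```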

llvm-svn: 298631
2017-03-23 18:06:09 +00:00
Zhaoshi Zheng f47c27513b revert test commit r298629
llvm-svn: 298630
2017-03-23 17:52:20 +00:00
Zhaoshi Zheng 49ae35580e test commit
llvm-svn: 298629
2017-03-23 17:38:47 +00:00
Craig Topper 93683b6aff [ValueTracking] Use APInt::isNegative instead of using operator[BitWidth-1]. NFCI
llvm-svn: 298584
2017-03-23 07:06:42 +00:00
Craig Topper d73c6b4ef8 [ValueTracking] Use setAllBits/setSignBit/setLowBits/setHighBits. NFCI
llvm-svn: 298583
2017-03-23 07:06:39 +00:00
Anna Thomas e27b39a976 [LVI] Add an LVI printer pass to capture test LVI cache after transformations
Summary:
Adding a printer pass for printing the LVI cache values after transformations
that use LVI.
This will help us in identifying cases where LVI
invariants are violated, or transforms that leave LVI in an incorrect state.
Right now, I have added two test cases to show that the printer pass is working.
I will be adding more test cases in a later change, once this change is
checked in upstream.

Reviewers: reames, dberlin, sanjoy, apilipenko

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D30790

llvm-svn: 298542
2017-03-22 19:27:12 +00:00
Max Kazantsev c6effaa495 Revert "[ScalarEvolution] Predicate implication from operations"
This reverts commit rL298481

Fails clang-with-lto-ubuntu build.

llvm-svn: 298489
2017-03-22 07:50:33 +00:00
Craig Topper ad5c2d04f7 [ValueTracking] Make sure we keep range metadata information when calculating known bits for calls to bitreverse intrinsic.
llvm-svn: 298488
2017-03-22 07:22:49 +00:00
Craig Topper 57d8ca72d1 [ValueTracking] use setLowBits/setHighBits/setBitsFrom to replace |= getHighBits/getLowBits. NFCI
llvm-svn: 298486
2017-03-22 06:19:37 +00:00
Max Kazantsev 15e76aa0f8 [ScalarEvolution] Predicate implication from operations
This patch allows SCEV predicate analysis to prove implication of some expression predicates
from context predicates related to arguments of those expressions.
It introduces three new rules:

For addition:
  (A > X && B >= 0) || (B >= 0 && A > X) ===> (A + B) > X.

For division:
  (A > X) && (0 < B <= X + 1) ===> (A / B > 0).
  (A > X) && (-B <= X < 0) ===> (A / B >= 0).

Using these rules, SCEV is able to prove facts like "if X > 1 then X / 2 > 0".
They can also be combined with the same context, to prove more complex expressions like
"if X > 1 then X/2 + 1 > 1".

Differential Revision: https://reviews.llvm.org/D30887

Reviewed by: sanjoy

llvm-svn: 298481
2017-03-22 04:48:46 +00:00
George Burgess IV 56c7e88c2c Let llvm.objectsize be conservative with null pointers
This adds a parameter to @llvm.objectsize that makes it return
conservative values if it's given null.

This fixes PR23277.

Differential Revision: https://reviews.llvm.org/D28494

llvm-svn: 298430
2017-03-21 20:08:59 +00:00
Dehao Chen 9907e9d860 Do not inline hot callsites for samplepgo in thinlto compile phase.
Summary: Because SamplePGO passes will be invoked twice in a ThinLTO build (once in the compile phase, the other in the backend), we want to make sure the IR at the 2nd phase matches the hot part of the profile; thus we do not want to inline hot callsites in the first phase.

Reviewers: tejohnson, eraman

Reviewed By: tejohnson

Subscribers: mehdi_amini, llvm-commits, Prazek

Differential Revision: https://reviews.llvm.org/D31201

llvm-svn: 298428
2017-03-21 19:55:36 +00:00
Dehao Chen 190f17cae7 Use ProfileSummary:getProfileCount to get ScaledCount for ModuleSummary
Summary: ModuleSummary should use the standard interface of ProfileSummary::getProfileCount.

Reviewers: eraman, tejohnson

Reviewed By: tejohnson

Subscribers: tejohnson, mehdi_amini, llvm-commits

Differential Revision: https://reviews.llvm.org/D31154

llvm-svn: 298404
2017-03-21 17:22:35 +00:00
Reid Kleckner b518054b87 Rename AttributeSet to AttributeList
Summary:
This class is a list of AttributeSetNodes corresponding the function
prototype of a call or function declaration. This class used to be
called ParamAttrListPtr, then AttrListPtr, then AttributeSet. It is
typically accessed by parameter and return value index, so
"AttributeList" seems like a more intuitive name.

Rename AttributeSetImpl to AttributeListImpl to follow suit.

It's useful to rename this class so that we can rename AttributeSetNode
to AttributeSet later. AttributeSet is the set of attributes that apply
to a single function, argument, or return value.

Reviewers: sanjoy, javed.absar, chandlerc, pete

Reviewed By: pete

Subscribers: pete, jholewinski, arsenm, dschuff, mehdi_amini, jfb, nhaehnle, sbc100, void, llvm-commits

Differential Revision: https://reviews.llvm.org/D31102

llvm-svn: 298393
2017-03-21 16:57:19 +00:00
David Green da21170c49 [ConstantFolding] Fix to prevent constant folding having to repeatedly scan operands. NFCI
After the loop unroll threshold was increased in r295538, very
large constant expressions can be created. This prevents them
from having to be recursively scanned, leading to a compile
time blow-up.

Differential Revision: https://reviews.llvm.org/D30689

llvm-svn: 298356
2017-03-21 10:17:39 +00:00
Eli Friedman b1578d3612 [SCEV] Fix trip multiple calculation
If the loop bound contains calculations like min(a,b), the Scalar
Evolution API getSmallConstantTripMultiple returns 4294967295 ("-1")
as the trip multiple. The problem is that SCEV uses -1 * umax to
represent umin. The multiple constant -1 was returned, and the logic
of guarding against huge trip counts was skipped, because -1 has 32
active bits.

The fix attempts to handle more general cases. First, try to get the
greatest power-of-two divisor of the trip count expression. In case
overflow happens, the trip count expression is still divisible by the
greatest power-of-two divisor returned. Return 1 if not divisible by 2.

Patch by Huihui Zhang <huihuiz@codeaurora.org>

Differential Revision: https://reviews.llvm.org/D30840

llvm-svn: 298301
2017-03-20 20:25:46 +00:00
Simon Pilgrim 00b34996b4 Use MutableArrayRef for APFloat::convertToInteger
As discussed on D31074, use MutableArrayRef for destination integer buffers to help assert before stack overflows happen.

llvm-svn: 298253
2017-03-20 14:40:12 +00:00
Xin Tong aef0fcb191 Extract FindAvailablePtrLoadStore out of FindAvailableLoadedValue. NFCI
Summary:
Extract FindAvailablePtrLoadStore out of FindAvailableLoadedValue.
Prepare for upcoming change which will do phi-translation for load on
phi pointer in jump threading SimplifyPartiallyRedundantLoad.

This is in preparation for https://reviews.llvm.org/D30543

Reviewers: efriedma, sanjoy, davide, dberlin

Reviewed By: davide

Subscribers: junbuml, davide, llvm-commits

Differential Revision: https://reviews.llvm.org/D30524

llvm-svn: 298216
2017-03-19 15:27:52 +00:00
Brian Gesiak 1640e68728 [Analysis] bitreverse(undef) returns undef
Summary:
The reverse of an arbitrary bitpattern is also an arbitrary
bitpattern.

Reviewers: trentxintong, arsenm, majnemer

Reviewed By: majnemer

Subscribers: majnemer, wdng, llvm-commits

Differential Revision: https://reviews.llvm.org/D31118

llvm-svn: 298201
2017-03-19 04:40:42 +00:00
Craig Topper 7cfd4a9d7a [ValueTracking] Remove deadish code from computeKnownBitsAddSub.
The code assigned to KnownZero, but later code unconditionally assigned over it. I'm pretty sure the later code can handle the same cases and more equally well.

llvm-svn: 298190
2017-03-18 18:21:46 +00:00
Craig Topper 3eb0d80de0 [ValueTracking] Add APInt::setSignBit and use it to replace ORing with getSignBit which will malloc if the bit width is larger than 64.
llvm-svn: 298180
2017-03-18 04:01:29 +00:00
Eli Friedman f7b060bd3e [SCEV] Use const Loop *L instead of Loop *L. NFC
Use const pointer in the trip count and trip multiple calculations.

Patch by Huihui Zhang <huihuiz@codeaurora.org>

llvm-svn: 298161
2017-03-17 22:19:52 +00:00
Michael Zolotukhin 99de88d1f3 [SCEV] Compute affine range in another way to avoid bitwidth extending.
Summary:
This approach has two major advantages over the existing one:
1. We don't need to extend bitwidth in our computations. Extending
bitwidth is a big issue for compile time as we often end up working with
APInts wider than 64bit, which is a slow case for APInt.
2. When we zero extend a wrapped range, we lose some information (we
replace the range with [0, 1 << src bit width)). Thus, avoiding such
extensions better preserves information.

Correctness testing:
I ran 'ninja check' with assertions that the new implementation of
getRangeForAffineAR gives the same results as the old one (this
functionality is not present in this patch). There were several failures
- I inspected them manually and found out that they all are caused by
the fact that we're returning more accurate results now (see bullet (2)
above).
Without such assertions 'ninja check' works just fine, as well as
SPEC2006.

Compile time testing:
CTMark/Os:
 - mafft/pairlocalalign	-16.98%
 - tramp3d-v4/tramp3d-v4	-12.72%
 - lencod/lencod	-11.51%
 - Bullet/bullet	-4.36%
 - ClamAV/clamscan	-3.66%
 - 7zip/7zip-benchmark	-3.19%
 - sqlite3/sqlite3	-2.95%
 - SPASS/SPASS	-2.74%
 - Average	-5.81%

Performance testing:
The changes are expected to be neutral for runtime performance.

Reviewers: sanjoy, atrick, pete

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D30477

llvm-svn: 297992
2017-03-16 21:07:38 +00:00
Oliver Stannard 062041113f [ValueTracking] Out of range shifts might be undef
If it is possible for the RHS of a shift operation to be greater than or equal
to the bit-width, then the result might be undef, and we can't report any known
bits.

In some cases, this was allowing a transformation in instcombine which widened
an undef value from i1 to i32, increasing the range of values that a function
could return.

Differential revision: https://reviews.llvm.org/D30781

llvm-svn: 297724
2017-03-14 10:13:17 +00:00
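As a rough illustration of the rule above (and not the ValueTracking code itself), here is a toy known-bits model in which a possibly out-of-range shift amount forces us to report nothing known; the type and field names are invented.

  #include <cstdint>

  // Toy known-bits pair for an 8-bit value: bits known 0 and bits known 1.
  struct KnownBits8 {
    uint8_t Zero = 0;
    uint8_t One = 0;
  };

  KnownBits8 knownBitsForShl(KnownBits8 LHS, uint8_t ShiftAmt) {
    const unsigned BitWidth = 8;
    // If the shift amount may be >= the bit width, the result may be undef,
    // so we must not claim any known bits at all.
    if (ShiftAmt >= BitWidth)
      return {};
    KnownBits8 Result;
    Result.One = static_cast<uint8_t>(LHS.One << ShiftAmt);
    // Known-zero bits stay zero, and the shifted-in low bits are known zero.
    Result.Zero = static_cast<uint8_t>((LHS.Zero << ShiftAmt) | ((1u << ShiftAmt) - 1u));
    return Result;
  }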
Jonas Paulsson a48ea231c0 [TargetTransformInfo] getIntrinsicInstrCost() scalarization estimation improved
getIntrinsicInstrCost() used to only compute scalarization cost based on types.
This patch improves this so that the actual arguments are checked when they are
available, in order to handle only unique non-constant operands.

Tests updates:

Analysis/CostModel/X86/arith-fp.ll
Transforms/LoopVectorize/AArch64/interleaved_cost.ll
Transforms/LoopVectorize/ARM/interleaved_cost.ll

The improvement in getOperandsScalarizationOverhead() to differentiate on
constants made it necessary to update the interleaved_cost.ll tests even
though they do not relate to intrinsics.

Review: Hal Finkel
https://reviews.llvm.org/D29540

llvm-svn: 297705
2017-03-14 06:35:36 +00:00
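A hedged sketch of the costing idea described above, using invented types rather than the TTI interfaces: only unique, non-constant arguments contribute an extract cost.

  #include <set>
  #include <string>
  #include <vector>

  struct Operand {
    std::string Name; // identifies the value this operand refers to (illustrative)
    bool IsConstant;
  };

  // Charge one extract per unique non-constant operand, instead of charging
  // per operand type unconditionally.
  unsigned operandsScalarizationCost(const std::vector<Operand> &Args,
                                     unsigned ExtractCostPerOperand) {
    std::set<std::string> Seen;
    unsigned Cost = 0;
    for (const Operand &Op : Args) {
      if (Op.IsConstant)
        continue; // constants need no extraction
      if (!Seen.insert(Op.Name).second)
        continue; // this operand was already counted
      Cost += ExtractCostPerOperand;
    }
    return Cost;
  }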
Anna Thomas a10e3e4c34 [LVI] Add Datalayout to the class LazyValueInfo since all its Impls require it. NFC
llvm-svn: 297583
2017-03-12 14:06:41 +00:00
Sanjoy Das 3f1e8e0102 Use a WeakVH for UnknownInstructions in AliasSetTracker
Summary:
This change solves the same problem as D30726, except that this only
throws out the bathwater.

AST was not correctly tracking and deleting UnknownInstructions via
handles.  The existing code only tracks "pointers" in its
`ASTCallbackVH`, so an UnknownInstruction (that isn't also def'ing a
pointer used by another memory instruction) never gets a
`ASTCallbackVH`.

There are two other ways to solve this problem:

 - Use the `PointerRec` scheme for both known and unknown instructions.
 - Use a `CallbackVH` that erases the offending Instruction from the
   UnknownInstruction list.

Both of the above changes seemed to be significantly (and unnecessarily
IMO) more complex than this.

Reviewers: chandlerc, dberlin, hfinkel, reames

Subscribers: mcrosier, llvm-commits

Differential Revision: https://reviews.llvm.org/D30849

llvm-svn: 297539
2017-03-11 01:15:48 +00:00
Davide Italiano 574e59786e [ProfileSummaryInfo] Remove unneeded braces. NFCI.
llvm-svn: 297506
2017-03-10 20:50:51 +00:00
Dehao Chen c2048155a0 Refactor the PSI to extract getCallSiteCount and remove checks for profile type.
Summary: There is no need to check profile count as only CallInst will have metadata attached.

Reviewers: eraman

Reviewed By: eraman

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D30799

llvm-svn: 297500
2017-03-10 19:45:16 +00:00
Michael Kuperstein 5fb39a7966 [SLP] Revert everything that has to do with memory access sorting.
This reverts r293386, r294027, r294029 and r296411.

Turns out the SLP tree isn't actually a "tree" and we don't handle
accessing the same packet of loads in several different orders well,
causing miscompiles.

Revert until we can fix this properly.

llvm-svn: 297493
2017-03-10 18:59:07 +00:00
Yaron Keren 1de4792c55 Implement getPassName() for IR printing passes.
llvm-svn: 297442
2017-03-10 07:09:20 +00:00
Dehao Chen 22645eea7a Do not use branch metadata to check if a basic block is hot.
Summary: We should not use that to check basic block hotness, as optimizations may mess it up.

Reviewers: eraman

Reviewed By: eraman

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D30800

llvm-svn: 297437
2017-03-10 01:44:37 +00:00
Sanjay Patel 962a8431ea [InstSimplify] allow folds for bool vector div/rem
llvm-svn: 297411
2017-03-09 21:56:03 +00:00
Sanjay Patel 2b1f6f4b92 [InstSimplify] vector div/rem with any zero element in divisor is undef
This was suggested as a DAG simplification in the review for rL297026:
http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20170306/435253.html
...but let's start with IR since we have actual docs for IR (LangRef).

Differential Revision:
https://reviews.llvm.org/D30665

llvm-svn: 297390
2017-03-09 16:20:52 +00:00
Teresa Johnson d820447212 Perform symbol binding for .symver versioned symbols
Summary:
In a .symver assembler directive like:
.symver name, name2@@nodename
"name2@@nodename" should get the same symbol binding as "name".

While the ELF object writer updates the symbol binding for .symver
aliases before emitting the object file, not doing so when the module
inline assembly is handled by the RecordStreamer causes the wrong
behavior in *LTO mode.

E.g. when "name" is global, "name2@@nodename" must also be marked as
global. Otherwise, the symbol is skipped when iterating over the LTO
InputFile symbols (InputFile::Symbol::shouldSkip). So, for example,
when performing any *LTO via the gold-plugin, the versioned symbol
definition is not recorded by the plugin and passed back to the
linker. If the object was in an archive, and there were no other symbols
needed from that object, the object would not be included in the final
link and references to the versioned symbol are undefined.

The llvm-lto2 tests added will give an error about an unused symbol
resolution without the fix.

Reviewers: rafael, pcc

Reviewed By: pcc

Subscribers: mehdi_amini, llvm-commits

Differential Revision: https://reviews.llvm.org/D30485

llvm-svn: 297332
2017-03-09 00:19:49 +00:00
Amjad Aboud 5448e989a5 [SLP] Fixed non-deterministic behavior in Loop Vectorizer.
Differential Revision: https://reviews.llvm.org/D30638

llvm-svn: 297257
2017-03-08 05:09:10 +00:00
Sebastian Pop 4a4d245b19 Handle UnreachableInst in isGuaranteedToTransferExecutionToSuccessor
A block with an UnreachableInst does not transfer execution to a successor.
The problem was exposed by GVN-hoist. This patch fixes bug 32153.

Patch by Aditya Kumar.

Differential Revision: https://reviews.llvm.org/D30667

llvm-svn: 297254
2017-03-08 01:54:50 +00:00
Michael Kuperstein 768d013a03 [SLP] Revert r296863 due to miscompiles.
Details and reproducer are on the email thread for r296863.

llvm-svn: 297103
2017-03-06 23:54:51 +00:00
Sanjay Patel 0cb2ee9287 [InstSimplify] refactor related div/rem folds; NFCI
llvm-svn: 297052
2017-03-06 19:08:35 +00:00
Sanjay Patel 79a9ecbe80 [InstSimplify] remove misleading comments; NFC
Div/rem-of-0 does not cause faults/undef (not the same as div/rem-by-0).

llvm-svn: 297029
2017-03-06 16:49:35 +00:00
Sanjoy Das 1bd479dd5c [SCEV] Decrease the recursion threshold for CompareValueComplexity
Fixes PR32142.

r287232 accidentally increased the recursion threshold for
CompareValueComplexity from 2 to 32.  This change reverses that change
by introducing a separate flag for CompareValueComplexity's threshold.

llvm-svn: 296992
2017-03-05 23:49:17 +00:00
Mohammad Shahid bdac9f30c0 [SLP] Fixes the bug due to absence of in order uses of scalars which needs to be available
for the VectorizeTree() API. This API uses it for proper mask computation to be used in shufflevector IR.
The fix is to compute the mask for out-of-order memory accesses while building the vectorizable tree
instead of during actual vectorization of the vectorizable tree. It also needs to recompute the proper Lane for
external use of vectorizable scalars based on the shuffle mask.

Reviewers: mkuper

Differential Revision: https://reviews.llvm.org/D30159

Change-Id: Ide8773ce0ad3562f3cf4d1a0ad0f487e2f60ce5d
llvm-svn: 296863
2017-03-03 10:02:47 +00:00
Hans Wennborg cc4ff78c9d Revert r296575 "[SLP] Fixes the bug due to absence of in order uses of scalars which needs to be available"
It caused miscompiles, e.g. in Chromium (PR32109).

llvm-svn: 296654
2017-03-01 18:57:16 +00:00
Igor Laevsky 37cba43604 [BasicAA] Take attributes into account when requesting modref info for a call site
Differential Revision: https://reviews.llvm.org/D29989

llvm-svn: 296617
2017-03-01 13:19:51 +00:00
Mohammad Shahid 175ffa8c35 [SLP] Fixes the bug due to absence of in order uses of scalars which needs to be available
for the VectorizeTree() API. This API uses it for proper mask computation to be used in shufflevector IR.
The fix is to compute the mask for out-of-order memory accesses while building the vectorizable tree
instead of during actual vectorization of the vectorizable tree.

Reviewers: mkuper

Differential Revision: https://reviews.llvm.org/D30159

Change-Id: Id1e287f073fa4959713ba545fa4254db5da8b40d
llvm-svn: 296575
2017-03-01 03:51:54 +00:00
Francis Visoiu Mistrih 262ad16a3a [LCG] Fix EXPENSIVE_CHECKS typo. NFC
Differential Revision: https://reviews.llvm.org/D30434

llvm-svn: 296500
2017-02-28 18:34:55 +00:00
Dehao Chen a60cdd3881 Add function importing info from samplepgo profile to the module summary.
Summary: For SamplePGO, the profile may contain cross-module inline stacks. As we need to make sure the profile annotation happens when all the hot inline stacks are expanded, we need to pass this info to the module importer so that it can import proper functions if necessary. This patch implements this feature by emitting cross-module targets as part of function entry metadata. In the module-summary phase, the metadata is used to build call edges that point to the functions that need to be imported.

Reviewers: mehdi_amini, tejohnson

Reviewed By: tejohnson

Subscribers: davidxl, llvm-commits

Differential Revision: https://reviews.llvm.org/D30053

llvm-svn: 296498
2017-02-28 18:09:44 +00:00
Michael Kuperstein c07cca85fb [SLP] Load sorting should not try to sort things that aren't loads.
We may get a VL where the first element is a load, but the others
aren't. Trying to sort such VLs can only lead to sorrow.

llvm-svn: 296411
2017-02-27 23:18:11 +00:00
Sanjoy Das 39a684d117 [ValueTracking] Don't do an unchecked shift in ComputeNumSignBits
Summary:
Previously we used to return a bogus result, 0, for IR like `ashr %val,
-1`.

I've also added an assert checking that `ComputeNumSignBits` at least
returns 1.  That assert found an already checked in test case where we
were returning a bad result for `ashr %val, -1`.

Fixes PR32045.

Reviewers: spatel, majnemer

Reviewed By: spatel, majnemer

Subscribers: efriedma, mcrosier, llvm-commits

Differential Revision: https://reviews.llvm.org/D30311

llvm-svn: 296273
2017-02-25 20:30:45 +00:00
Easwaran Raman a8b9cdc9e2 [InlineCost] Move the code in isGEPOffsetConstant to a lambda.
Differential revision: https://reviews.llvm.org/D30112

llvm-svn: 296208
2017-02-25 00:10:22 +00:00
Xin Tong 68ea9aa23a Fix Indentation. NFCI
llvm-svn: 296169
2017-02-24 20:59:26 +00:00
Adam Nemet 2531df3d49 [ORE] Remove ORE.emit{{.+}} functions
Last use was killed in my previous patch. The preferred way is now to
construct the remark, pipe things to it and pass it to ORE.emit.

llvm-svn: 296019
2017-02-23 21:32:53 +00:00
Adam Nemet 41b019a39c [LAA] Remove unused LoopAccessReport
The need for this was removed when I converted everything to use the opt-remark
classes directly with the streaming interface.

llvm-svn: 296017
2017-02-23 21:17:36 +00:00
Davide Italiano e122d6885a [ModuleSummaryAnalysis] Don't crash when referencing unnamed globals.
Instead, just be conservative, as these are infrequent enough. Thanks
to Peter Collingbourne for the discussion about this on IRC.

llvm-svn: 295861
2017-02-22 18:53:38 +00:00
Justin Bogner 8281c81413 OptDiag: Add const to some interfaces that don't modify anything. NFC
This needed a const_cast for the dominator tree recalculation in
OptimizationRemarkEmitter, but we do that all over the place already
and it's safe.

llvm-svn: 295812
2017-02-22 07:38:17 +00:00
Sanjoy Das 5cd6c5cacf [ValueTracking] Make poison propagation more aggressive
Summary:
Motivation: fix PR31181 without regression (the actual fix is still in
progress).  However, the actual content of PR31181 is not relevant
here.

This change makes poison propagation more aggressive in the following
cases:

 1. poison * Val == poison, for any Val.  In particular, this changes
    existing intentional and documented behavior in these two cases:
     a. Val is 0
     b. Val is 2^k * N
 2. poison << Val == poison, for any Val
 3. getelementptr is poison if any input is poison

I think all of these are justified (and are axiomatically true in the
new poison / undef model):

1a: we need poison * 0 to be poison to allow transforms like these:

  A * (B + C) ==> A * B + A * C

If poison * 0 were 0 then the above transform could not be allowed
since e.g. we could have A = poison, B = 1, C = -1, making the LHS

  poison * (1 + -1) = poison * 0 = 0

and the RHS

  poison * 1 + poison * -1 = poison + poison = poison

1b: we need e.g. poison * 4 to be poison since we want to allow

  A * 4 ==> A + A + A + A

If poison * 4 were a value with all of its bits poison except the
last two, then we'd not be able to do this transform, since then if A
were poison the LHS would only be "partially" poison while the RHS
would be "full" poison.

2: Same reasoning as (1b); we'd like to have the following kinds of
transforms be legal:

  A << 1 ==> A + A

Reviewers: majnemer, efriedma

Subscribers: mcrosier, llvm-commits

Differential Revision: https://reviews.llvm.org/D30185

llvm-svn: 295809
2017-02-22 06:52:32 +00:00
Sanjoy Das 7b0b408973 [ValueTracking] clang-format a section I'm about to touch; NFC
(Whitespace only change)

llvm-svn: 295690
2017-02-21 02:42:42 +00:00
Sanjay Patel fe67255961 [InstSimplify] add nsw/nuw (xor X, signbit), signbit --> X
The change to InstCombine in:
https://reviews.llvm.org/D29729
...exposes this missing fold in InstSimplify, so adding this
first to avoid a regression.

llvm-svn: 295573
2017-02-18 21:59:09 +00:00
Easwaran Raman 617f63640b Refactor instruction simplification code in visitors. NFC.
Several visitors check whether the operands to the instruction are constants,
either as-is or after looking up SimplifiedValues, check whether the
result is a constant, and update the SimplifiedValues map. This
refactoring splits it into a common function that does the checking of
whether the operands are constants and updating of the SimplifiedValues
table, and an instruction specific part that is implemented by each
instruction visitor as a lambda and passed to the common function.

Differential revision: https://reviews.llvm.org/D30104

llvm-svn: 295552
2017-02-18 17:22:52 +00:00
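A small sketch of the refactoring pattern described above, with stand-in types (std::string for values, int for constants) rather than the InlineCost code: a common helper does the constant lookup and the cache update, and each visitor supplies only the fold as a lambda.

  #include <functional>
  #include <map>
  #include <optional>
  #include <string>

  using Value = std::string;  // stand-in for a value handle
  using Constant = int;       // stand-in for a folded constant
  using SimplifiedValues = std::map<Value, Constant>;

  static std::optional<Constant> lookupConstant(const SimplifiedValues &SV,
                                                const Value &V) {
    auto It = SV.find(V);
    return It == SV.end() ? std::nullopt : std::optional<Constant>(It->second);
  }

  // Common part: check that both operands are (simplified to) constants and
  // record the folded result; the instruction-specific fold is a lambda.
  bool simplifyBinOp(SimplifiedValues &SV, const Value &Result, const Value &LHS,
                     const Value &RHS,
                     const std::function<Constant(Constant, Constant)> &Fold) {
    auto L = lookupConstant(SV, LHS), R = lookupConstant(SV, RHS);
    if (!L || !R)
      return false;
    SV[Result] = Fold(*L, *R);
    return true;
  }

  // An instruction visitor then reduces to a one-liner:
  bool visitAdd(SimplifiedValues &SV, const Value &Res, const Value &A,
                const Value &B) {
    return simplifyBinOp(SV, Res, A, B,
                         [](Constant X, Constant Y) { return X + Y; });
  }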
Justin Bogner d890f95bf6 OptDiag: Decouple backend diagnostics from debug info metadata
This creates and uses a DiagnosticLocation type rather than using
DebugLoc for this purpose in the backend diagnostics. This is NFC for
now, but will allow us to create locations for diagnostics without
having to create new metadata nodes when we don't have a DILocation.

llvm-svn: 295519
2017-02-18 00:42:23 +00:00
Matthew Simpson a899f86054 [LAA] Remove unused code (NFC)
llvm-svn: 295493
2017-02-17 20:46:52 +00:00
Peter Collingbourne 9421c2dc54 AssumptionCache: Disable the verifier by default, move it behind a hidden cl::opt and verify from releaseMemory().
This is a short term solution to the problem that many passes currently fail
to update the assumption cache. In the long term the verifier should not
be controllable with a flag. We should either fix all passes to correctly
update the assumption cache and enable the verifier unconditionally or
somehow arrange for the assumption list to be updated automatically by passes.

Differential Revision: https://reviews.llvm.org/D30003

llvm-svn: 295236
2017-02-15 21:10:09 +00:00
Adam Nemet 4c98023724 [LazyBFI] Fix typos
llvm-svn: 295073
2017-02-14 17:21:12 +00:00
Igor Laevsky c11c1ed909 [SCEV] Cache results during GetMinTrailingZeros query
Differential Revision: https://reviews.llvm.org/D29759

llvm-svn: 295060
2017-02-14 15:53:12 +00:00
Sanjay Patel 97e4b98749 [ValueTracking] use nonnull argument attribute to eliminate null checks
Enhancing value tracking's analysis of null-ness was suggested in D27855, so here's a first attempt at that.

This is part of solving:
https://llvm.org/bugs/show_bug.cgi?id=28430

Differential Revision: https://reviews.llvm.org/D28204

llvm-svn: 294897
2017-02-12 15:35:34 +00:00
Dorit Nuzman eac89d736c [LV/LoopAccess] Check statically if an unknown dependence distance can be
proven larger than the loop-count

This fixes PR31098: try to statically resolve data dependences whose
compile-time-unknown distance can be proven larger than the loop count,
instead of resorting to runtime dependence checks (which are not always
possible).

For vectorization it is sufficient to prove that the dependence distance 
is >= VF; but in some cases we can prune unknown dependence distances early,
and even before selecting the VF, and without a runtime test, by comparing 
the distance against the loop iteration count. Since the vectorized code 
will be executed only if LoopCount >= VF, proving distance >= LoopCount 
also guarantees that distance >= VF. This check is also equivalent to the 
Strong SIV Test.

Reviewers: mkuper, anemet, sanjoy

Differential Revision: https://reviews.llvm.org/D28044

llvm-svn: 294892
2017-02-12 09:32:53 +00:00
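To make the reasoning concrete (this is just an illustrative C++ loop with made-up names, not the LoopAccessAnalysis code): if the unknown offset between the read and the write can be proven to be at least the trip count, the two access ranges cannot overlap, so no runtime dependence check is needed.

  #include <cstddef>

  // The write A[i] covers indices [0, N) and the read A[i + Offset] covers
  // [Offset, Offset + N). If the caller guarantees Offset >= N, the ranges
  // are disjoint, so the loop can be vectorized without a runtime check.
  void shiftAdd(float *A, std::size_t N, std::size_t Offset) {
    for (std::size_t i = 0; i < N; ++i)
      A[i] = A[i + Offset] + 1.0f;
  }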
Peter Collingbourne be9ffaacfa IR: Function summary extensions for whole-program devirtualization pass.
The summary information includes all uses of llvm.type.test and
llvm.type.checked.load intrinsics that can be used to devirtualize calls,
including any constant arguments for virtual constant propagation.

Differential Revision: https://reviews.llvm.org/D29734

llvm-svn: 294795
2017-02-10 22:29:38 +00:00
Chandler Carruth 1f8fcfeac5 [PM/LCG] Teach LCG to support spurious reference edges.
Somewhat amazingly, this only requires teaching it to clean them up when
deleting a dead function from the graph. And we already have exactly the
necessary data structures to do that in the parent RefSCCs.

This allows ArgPromote to work in a much simpler way by merely letting
reference edges linger in the graph after the causing IR is deleted. We
will clean up these edges when we run any function pass over the IR, but
don't remove them eagerly.

This avoids all of the quadratic update issues both in the current pass
manager and in my previous attempt with the new pass manager.

Differential Revision: https://reviews.llvm.org/D29579

llvm-svn: 294663
2017-02-09 23:30:14 +00:00
Chandler Carruth aaad9f84be [PM/LCG] Teach the LazyCallGraph how to replace a function without
disturbing the graph or having to update edges.

This is motivated by porting argument promotion to the new pass manager.
Because of how LLVM IR Function objects work, in order to change their
signature a new object needs to be created. This is efficient and
straightforward in the IR but previously was very hard to implement in
LCG. We could easily replace the function a node in the graph
represents. The challenging part is how to handle updating the edges in
the graph.

LCG previously used an edge to a raw function to represent a node that
had not yet been scanned for calls and references. This was the core
of its laziness. However, that model causes this kind of update to be
very hard:
1) The keys to lookup an edge need to be `Function*`s that would all
   need to be updated when we update the node.
2) There will be some unknown number of edges that haven't transitioned
   from `Function*` edges to `Node*` edges.

All of this complexity isn't necessary. Instead, we can always build
a node around any function, always pointing edges at it and always using
it as the key to lookup an edge. To maintain the laziness, we need to
sink the *edges* of a node into a secondary object and explicitly model
transitioning a node from empty to populated by scanning the function.
This design seems much cleaner in a number of ways, but importantly
there is now exactly *one* place where the `Function*` has to be
updated!

Some other cleanups that fall out of this include having something to
model the *entry* edges more accurately. Rather than hand rolling parts
of the node in the graph itself, we have an explicit `EdgeSequence`
object that gives us exactly the functionality needed. We also have
a consistent place to define the edge iterators and can use them for
both the entry edges and the internal edges of the graph.

The API used to model the separation between a node and its edges is
intentionally very thin as most clients are expected to deal with nodes
that have populated edges. We model this exactly as an optional does
with an additional method to populate the edges when that is
a reasonable thing for a client to do. This is based on API design
suggestions from Richard Smith and David Blaikie, credit goes to them
for helping pick how to model this without it being either too explicit
or too implicit.

The patch is somewhat noisy due to shifting around iterator types and
new syntax for walking the edges of a node, but most of the
functionality change is in the `Edge`, `EdgeSequence`, and `Node` types.

Differential Revision: https://reviews.llvm.org/D29577

llvm-svn: 294653
2017-02-09 23:24:13 +00:00
Daniel Berlin 73ad5cb9b1 Drop graph_ prefix
llvm-svn: 294621
2017-02-09 20:37:46 +00:00
Daniel Berlin 58a6e57394 GraphTraits: Add range versions of graph traits functions (graph_nodes, graph_children, inverse_graph_nodes, inverse_graph_children).
Summary:
Convert all obvious node_begin/node_end and child_begin/child_end
pairs to range based for.

Sending for review in case someone has a good idea how to make
graph_children able to be inferred. It looks like it would require
changing GraphTraits to be two argument or something. I presume
inference does not happen because it would have to check every
GraphTraits in the world to see if the noderef types matched.

Note: This change was 3-staged with clang as well, which uses
Dominators/etc from LLVM.

Reviewers: chandlerc, tstellarAMD, dblaikie, rsmith

Subscribers: arsenm, llvm-commits, nhaehnle

Differential Revision: https://reviews.llvm.org/D29767

llvm-svn: 294620
2017-02-09 20:37:24 +00:00
Vitaly Buka 9987d98370 LVI: Fix use-of-uninitialized-value after r294463
BlockValueStack can be reallocated, making the reference `e` invalid.

llvm-svn: 294572
2017-02-09 09:28:05 +00:00
Daniel Berlin 9c92a469b4 LVI: Add a per-value worklist limit to LazyValueInfo.
Summary:
LVI is now depth first, which is optimal for iteration strategy in
terms of work per call.  However, the way the results get cached means
it can still go very badly N^2 or worse right now.  The overdefined
cache is per-block, because LVI wants to try to get different results
for the same name in different blocks (IE solve the problem
PredicateInfo solves).  This means even if we discover a value is
overdefined after going very deep, it doesn't cache this information,
causing it to end up trying to rediscover it again and again.  The
same is true for values along the way.  In practice, overdefined
anywhere should mean overdefined everywhere (this is how, for example,
SCCP works).

Until we get around to reworking the overdefined cache, we need to
limit the worklist size we process.  Note that permanently reverting
the DFS strategy exploration seems the wrong strategy (temporarily
seems fine if we really want).  BFS is clearly the wrong approach, it
just gets luckier on some testcases.  It's also very hard to design
an effective throttle for BFS. For DFS, the throttle is directly related
to the depth of the CFG.  So really deep CFGs will get cutoff, smaller
ones will not. As the CFG simplifies, you get better results.
In BFS, the limit is related to the fan-out times the average block size,
which is harder to reason about or make good choices for.

Bug being filed about the overdefined cache, but it will require major
surgery to fix it (plumbing predicateinfo through CVP or LVI).

Note: I did not make this number configurable because I'm not sure
anyone really needs to tweak this knob.  We run CVP 3 times. On the
testcases I have, the slow ones happen in the middle, where CVP is
doing cleanup work that other things are effective at.  Over the course of
3 runs, we don't seem to have any real loss of performance.

I haven't gotten a minimized testcase yet, but just imagine in your
head a testcase where, going *up* the CFG, you have branches, one of
which leads 50000 blocks deep, and the other, to something where the
answer is overdefined immediately.  BFS would discover the overdefined
faster than DFS, but do more work to do so.  In practice, the right
answer is "once DFS discovers overdefined for a value, stop trying to
get more info about that value" (and so, DFS would normally cache the
overdefined results for every value it passed through in those 50k
blocks, and never do that work again. But it doesn't, because of the
naming problem)

Reviewers: chandlerc, djasper

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D29715

llvm-svn: 294463
2017-02-08 15:22:52 +00:00
Chandler Carruth 346542b769 Revert r293017 and fix the actual underlying issue.
The patch committed in r293017, as discussed on the list, doesn't really
make sense but was causing an actual issue to go away.

The issue turns out to be that in one place the extra template arguments
were dropped from the OuterAnalysisManagerProxy. This in turn caused the
types used in one set of places to access the key to be completely
different from the types used in another set of places for both Loop and
CGSCC cases where there are extra arguments.

I have literally no idea how anything seemed to work with this bug in
place. It blows my mind. But it did, except for mingw64 in a DLL build.

I've added a really handy static assert that helps ensure we don't break
this in the future. It immediately diagnoses the issue with a compile
failure and a very clear error message. Much better that staring at
backtraces on a build bot. =]

llvm-svn: 294267
2017-02-07 01:50:48 +00:00
Philip Reames c80bd0486d [LVI] Switch from BFS to DFS exploration order
This patch changes the order in which LVI explores previously unexplored paths.

Previously, the code used a BFS strategy where each unexplored input was added to the search queue before any of them were explored. This has the effect of causing all inputs to be explored before returning to re-evaluate the merge point (non-local or phi node). This has the unfortunate property of doing redundant work if one of the inputs to the merge is found to be overdefined (i.e. unanalysable). If any input is overdefined, the result of the merge will be too, regardless of the values of the other inputs.

The new code uses a DFS strategy where we re-evaluate the merge after evaluating each input. If we discover an overdefined input, we immediately return without exploring other inputs.

We have reports of large (4-10x) improvements of compile time with this patch and some reports of more precise analysis results as well.  See the review discussion for details.  The original motivating case was pr10584.

Differential Revision: https://reviews.llvm.org/D28190

llvm-svn: 294264
2017-02-07 00:25:24 +00:00
Chandler Carruth a80cfb3063 [PM/LCG] Fix the no-asserts build after r294227. Sorry for the noise.
llvm-svn: 294235
2017-02-06 20:59:07 +00:00
Chandler Carruth 2e0fe3e65b [PM/LCG] Remove the lazy RefSCC formation from the LazyCallGraph during
iteration.

The lazy formation of RefSCCs isn't really the most important part of
the laziness here -- that has to do with walking the functions
themselves -- and isn't essential to maintain. Originally, there were
incremental update algorithms that relied on updates happening
predominantly near the most recent RefSCC formed, but those have been
replaced with ones that have much tighter general case bounds at this
point. We do still perform asserts that only scale well due to this
incrementality, but those are easy to place behind EXPENSIVE_CHECKS.

Removing this simplifies the entire analysis by having a single up-front
step that builds all of the RefSCCs in a direct Tarjan walk. We can even
easily replace this with other or better algorithms at will and with
much less confusion now that there is no iterator-based incremental
logic involved. This removes a lot of complexity from LCG.

Another advantage of moving in this direction is that it simplifies
testing the system substantially as we no longer have to worry about
observing and mutating the graph half-way through the RefSCC formation.

We still need a somewhat special iterator for RefSCCs because we want
the iterator to remain stable in the face of graph updates. However,
this now merely involves relative indexing to the current RefSCC's
position in the sequence which isn't too hard.

Differential Revision: https://reviews.llvm.org/D29381

llvm-svn: 294227
2017-02-06 19:38:06 +00:00
Sanjay Patel 54656ca7db [ValueTracking] emit a remark when we detect a conflicting assumption (PR31809)
This is a follow-up to D29395 where we try to be good citizens and let the user know that
we've probably gone off the rails.

This should allow us to resolve:
https://llvm.org/bugs/show_bug.cgi?id=31809

Differential Revision: https://reviews.llvm.org/D29404

llvm-svn: 294208
2017-02-06 18:26:06 +00:00
Daniil Fukalov 6378bdb2dd [SCEV] limit recursion depth and operands number in getAddExpr
for a quite big function with source like

%add = add nsw i32 %mul, %conv
%mul1 = mul nsw i32 %add, %conv
%add2 = add nsw i32 %mul1, %add
%mul3 = mul nsw i32 %add2, %add
; repeat couple of thousands times
that can be produced by loop unrolling, getAddExpr() tries to recursively construct the SCEV and takes a nearly unbounded amount of time.

Added a recursion depth restriction (with a new parameter to set it).

Reviewers: sanjoy

Subscribers: hfinkel, llvm-commits, mzolotukhin

Differential Revision: https://reviews.llvm.org/D28158

llvm-svn: 294181
2017-02-06 12:38:06 +00:00
Michael Kuperstein 2a735b71b6 [SLP] Make sortMemAccesses explicitly return an error. NFC.
llvm-svn: 294029
2017-02-03 19:32:50 +00:00
Michael Kuperstein 723999d4aa [SLP] Use SCEV to sort memory accesses.
This generalizes memory access sorting to use differences between SCEVs,
instead of relying on constant offsets. That allows us to properly do
SLP vectorization of non-sequentially ordered loads within loop bodies.

Differential Revision: https://reviews.llvm.org/D29425

llvm-svn: 294027
2017-02-03 19:09:45 +00:00
Mehdi Amini 1380edf4ef Revert "[ThinLTO] Add an auto-hide feature"
This reverts commit r293970.

After more discussion, this belongs to the linker side and
there is no added value to do it at this level.

llvm-svn: 293993
2017-02-03 07:41:43 +00:00
Mehdi Amini b0a8ff71e5 [ThinLTO] Add an auto-hide feature
When a symbol is not exported outside of the
DSO, it can be hidden. Usually we try to internalize
as much as possible, but it is not always possible; for
instance, a symbol can be referenced outside of the LTO
unit, or there can be cross-module references in ThinLTO.

This is a recommit of r293912 after fixing build failures,
and a recommit of r293918 after fixing LLD tests.

Differential Revision: https://reviews.llvm.org/D28978

llvm-svn: 293970
2017-02-03 00:32:38 +00:00
Mehdi Amini 21c89dc920 Revert "[ThinLTO] Add an auto-hide feature"
This reverts commit r293918, one lld test does not pass.

llvm-svn: 293961
2017-02-02 23:20:36 +00:00
Xinliang David Li 58fcc9bdce [PGO] internal option cleanups
1. Added comments for options
2. Added missing option cl::desc field
3. Unified the function filter option for graph viewing.
   Now PGO count/raw-counts share the same
   filter option: -view-bfi-func-name=.

llvm-svn: 293938
2017-02-02 21:29:17 +00:00
Xinliang David Li 1eb4ec6a2e [PGO] make graph view internal options available for all builds
Differential Revision: https://reviews.llvm.org/D29259

llvm-svn: 293921
2017-02-02 19:18:56 +00:00
Mehdi Amini 97624fb1ec [ThinLTO] Add an auto-hide feature
When a symbol is not exported outside of the
DSO, it can be hidden. Usually we try to internalize
as much as possible, but it is not always possible; for
instance, a symbol can be referenced outside of the LTO
unit, or there can be cross-module references in ThinLTO.

This is a recommit of r293912 after fixing build failures.

Differential Revision: https://reviews.llvm.org/D28978

llvm-svn: 293918
2017-02-02 18:31:35 +00:00
Mehdi Amini 827600deaf Revert "[ThinLTO] Add an auto-hide feature"
This reverts r293912, bots are broken.

llvm-svn: 293914
2017-02-02 18:24:37 +00:00
Mehdi Amini dc5a7444f0 [ThinLTO] Add an auto-hide feature
When a symbol is not exported outside of the
DSO, it can be hidden. Usually we try to internalize
as much as possible, but it is not always possible; for
instance, a symbol can be referenced outside of the LTO
unit, or there can be cross-module references in ThinLTO.

Differential Revision: https://reviews.llvm.org/D28978

llvm-svn: 293912
2017-02-02 18:13:46 +00:00
Jun Bum Lim 180bc5a021 [JumpThread] Enhance finding partial redundant loads by continuing scanning single predecessor
Summary: While scanning predecessors to find an available loaded value, if the predecessor has a single predecessor, we can continue scanning through the single predecessor.

Reviewers: mcrosier, rengolin, reames, davidxl, haicheng

Reviewed By: rengolin

Subscribers: zzheng, llvm-commits

Differential Revision: https://reviews.llvm.org/D29200

llvm-svn: 293896
2017-02-02 15:12:34 +00:00
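A generic sketch of the scanning idea in the summary above, using a hypothetical Block type and an arbitrary depth limit; it is not the JumpThreading implementation.

  #include <set>
  #include <vector>

  struct Block {
    std::vector<Block *> Preds;
    bool HasLoadValue = false; // stand-in for "the loaded value is available here"
  };

  // Walk upward through chains of single predecessors looking for a block
  // that makes the value available, stopping at merges, cycles, or a limit.
  Block *findAvailableThroughSinglePreds(Block *Start, unsigned MaxDepth = 6) {
    std::set<Block *> Visited;
    Block *BB = Start;
    for (unsigned Depth = 0; BB && Depth < MaxDepth; ++Depth) {
      if (!Visited.insert(BB).second)
        break; // already seen: avoid cycles
      if (BB->HasLoadValue)
        return BB;
      if (BB->Preds.size() != 1)
        break; // only continue scanning through a single predecessor
      BB = BB->Preds[0];
    }
    return nullptr;
  }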
Adam Nemet 0bf1b863b9 [LV] Also port failure remarks to new OptimizationRemarkEmitter API
llvm-svn: 293866
2017-02-02 05:41:51 +00:00
Sanjay Patel 52e4e6594e [ValueTracking] remove a FIXME for something we don't want to do; NFC
The comment was added with:
https://reviews.llvm.org/rL293773
...but there would be a cost to implement this and possibly no payoff.

llvm-svn: 293823
2017-02-01 22:27:34 +00:00
Matthew Simpson ba5cf9dfee [LV] Move interleaved access helper functions to VectorUtils (NFC)
This patch moves some helper functions related to interleaved access
vectorization out of LoopVectorize.cpp and into VectorUtils.cpp. We would like
to use these functions in a follow-on patch that improves interleaved load and
store lowering in (ARM/AArch64)ISelLowering.cpp. One of the functions was
already duplicated there and has been removed.

Differential Revision: https://reviews.llvm.org/D29398

llvm-svn: 293788
2017-02-01 17:45:46 +00:00
Sanjay Patel 25f6d710d9 [ValueTracking] avoid crashing from bad assumptions (PR31809)
A program may contain llvm.assume info that disagrees with other analyses.
This may be caused by UB in the program, so we must not crash because of that.

As noted in the code comments:
https://llvm.org/bugs/show_bug.cgi?id=31809
...we can do better, but this at least avoids the assert/crash in the bug report.

Differential Revision: https://reviews.llvm.org/D29395

llvm-svn: 293773
2017-02-01 15:41:32 +00:00
Eli Friedman 10d1ff64fe [SCEV] Simplify/generalize howFarToZero solving.
Make SolveLinEquationWithOverflow take the start as a SCEV, so we can
solve more cases. With that implemented, get rid of the special case
for powers of two.

The additional functionality probably isn't particularly useful,
but it might help a little for certain cases involving pointer
arithmetic.

Differential Revision: https://reviews.llvm.org/D28884

llvm-svn: 293576
2017-01-31 00:42:42 +00:00
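The underlying arithmetic can be illustrated with a standalone sketch (assumed semantics, not the SCEV code): solving A*X == B modulo 2^32 by stripping the common power of two and multiplying by the inverse of the remaining odd factor.

  #include <cstdint>
  #include <optional>

  static unsigned countTrailingZeros32(uint32_t V) {
    unsigned K = 0;
    while ((V & 1u) == 0u) { V >>= 1; ++K; }
    return K;
  }

  // Inverse of an odd D modulo 2^32 via Newton iteration; each step doubles
  // the number of correct low bits (D is already its own inverse modulo 8).
  static uint32_t inverseOfOddMod2_32(uint32_t D) {
    uint32_t X = D;
    for (int I = 0; I < 5; ++I)
      X *= 2u - D * X;
    return X;
  }

  // Solve A*X == B (mod 2^32). Write A = D * 2^K with D odd; a solution
  // exists iff 2^K also divides B, and then X = (B >> K) * inverse(D).
  std::optional<uint32_t> solveLinEquationMod2_32(uint32_t A, uint32_t B) {
    if (A == 0) {
      if (B == 0)
        return 0u; // every X solves 0*X == 0; return one of them
      return std::nullopt;
    }
    unsigned K = countTrailingZeros32(A);
    if (K > 0 && (B & ((1u << K) - 1u)) != 0u)
      return std::nullopt; // B is not divisible by 2^K: no solution
    return (B >> K) * inverseOfOddMod2_32(A >> K);
  }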
Matt Arsenault 42b6478344 NVPTX: Refactor NVPTXInferAddressSpaces to check TTI
Add a new TTI hook for getting the generic address space value.

llvm-svn: 293563
2017-01-30 23:02:12 +00:00
Sanjay Patel 14a4b8185f [ValueTracking] clean up lookThroughCast; NFCI
1. Use auto with dyn_cast.
2. Don't use else after return.
3. Convert chain of 'else if' to switch.
4. Improve variable names.

llvm-svn: 293432
2017-01-29 16:34:57 +00:00
Mohammad Shahid 3121334d32 [SLP] Vectorize loads of consecutive memory accesses, accessed in non-consecutive (jumbled) way.
The jumbled scalar loads will be sorted while building the tree and these accesses will be marked to generate shufflevector after the vectorized load with proper mask.

Reviewers: hfinkel, mssimpso, mkuper

Differential Revision: https://reviews.llvm.org/D26905

Change-Id: I9c0c8e6f91a00076a7ee1465440a3f6ae092f7ad
llvm-svn: 293386
2017-01-28 17:59:44 +00:00
Matthias Braun 8c209aa877 Cleanup dump() functions.
We had various variants of defining dump() functions in LLVM. Normalize
them (this should just consistently implement the things discussed in
http://lists.llvm.org/pipermail/cfe-dev/2014-January/034323.html).

For reference:
- Public headers should just declare the dump() method but not use
  LLVM_DUMP_METHOD or #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
- The definition of a dump method should look like this:
  #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
  LLVM_DUMP_METHOD void MyClass::dump() {
    // print stuff to dbgs()...
  }
  #endif

llvm-svn: 293359
2017-01-28 02:02:38 +00:00
Peter Collingbourne 5ad775f2e8 Analysis: Add appropriate const qualification to functions in TypeMetadataUtils.cpp. NFC.
llvm-svn: 293341
2017-01-27 22:55:30 +00:00
Mehdi Amini 1726fc698c Fix BasicAA incorrect assumption on GEP
This is fixing pr31761: BasicAA is deducing NoAlias
on the result of the GEP if the base pointer is itself NoAlias.

This is possible only if the NoAlias on the base pointer is
deduced with a non-sized query: this should guarantee that
the pointers belong to different memory allocations
and that the GEP can't legally jump from one to another.

Differential Revision: https://reviews.llvm.org/D29216

llvm-svn: 293293
2017-01-27 16:12:22 +00:00
Justin Lebar 322c127bee [ValueTracking] Add comment that CannotBeOrderedLessThanZero does the wrong thing for powi.
Summary:
CannotBeOrderedLessThanZero(powi(x, exp)) returns true if
CannotBeOrderedLessThanZero(x).  But powi(-0, exp) is negative if exp is
odd, so we actually want to return SignBitMustBeZero(x).

Except that also isn't right, because we want to return true if x is
NaN, even if x has a negative sign bit.

What we really need in order to fix this is a consistent approach in
this function to handling the sign bit of NaNs.  Without this it's very
difficult to say what the correct behavior here is.

Reviewers: hfinkel, efriedma, sanjoy

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D28927

llvm-svn: 293243
2017-01-27 00:58:34 +00:00
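A tiny demonstration of the corner case discussed above, using a hand-rolled repeated multiplication in place of the intrinsic (llvm.powi is not a libm function, so this is only an analogy): with an odd exponent, -0.0 keeps a negative sign bit even though it is not ordered less than zero.

  #include <cmath>
  #include <iostream>

  static double powiLike(double X, unsigned Exp) {
    double R = 1.0;
    for (unsigned I = 0; I < Exp; ++I)
      R *= X;
    return R;
  }

  int main() {
    double R = powiLike(-0.0, 3);
    // Prints signbit = 1 but (R < 0) = 0: the sign bit is set, yet the value
    // is not ordered less than zero.
    std::cout << "result = " << R << ", signbit = " << std::signbit(R)
              << ", (R < 0) = " << (R < 0.0) << "\n";
    return 0;
  }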
Daniil Fukalov b09dac59fc [SCEV] Introduce add operation inlining limit
Inlining in getAddExpr() can cause abnormal computational time in some cases.
A new parameter, -scev-addops-inline-threshold, is introduced with default value 500.

Reviewers: sanjoy

Subscribers: mzolotukhin, llvm-commits

Differential Revision: https://reviews.llvm.org/D28812

llvm-svn: 293176
2017-01-26 13:33:17 +00:00
Chandler Carruth 41421df02b [PM] Use PoisoningVH correctly when merely deleting entries in a map
with it.

This code was dereferencing the PoisoningVH which isn't allowed once it
is poisoned. But the code itself really doesn't need to access the
pointer, it is just doing the safe stuff of clearing out data structures
keyed on the pointer value.

Change the code to use iterators to erase directly from a DenseMap. This
is also substantially more efficient as it avoids lots of hashing and
lookups to do the erasure. DenseMap supports erasing entries while
iterating, which is fairly easy to implement.

Sadly, I don't have a test case here. I'm not even close and I don't
know that I ever will be. The issue is that several of the tricky
aspects of fixing this only show up when you cause the stack's
SmallVector to be in *EXACTLY* the right location. I only ever got
a reproduction for those with Clang, and only with *exactly* the right
command line flags. Any adjustment, even to seemingly unrelated flags,
would make partial and half-way solutions magically start to "work". In
good news, all of this was caught with the LLVM test suite. Also, there
is no *specific* code here that is untested, just that the old pattern
of code won't immediately fail on any test case I've managed to
contrive.

llvm-svn: 293160
2017-01-26 08:31:54 +00:00
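The erasure pattern reads roughly like the sketch below; it is shown with std::unordered_map instead of llvm::DenseMap, and the predicate is hypothetical, but the point is the same: erase through the iterator in a single walk, without re-looking keys up or dereferencing them.

  #include <string>
  #include <unordered_map>

  // One pass over the map: stale entries are erased directly through the
  // iterator, so no per-key hash lookup and no dereference of the stale key.
  void eraseStaleEntries(std::unordered_map<void *, std::string> &Cache,
                         bool (*IsStaleKey)(void *)) {
    for (auto It = Cache.begin(); It != Cache.end();) {
      if (IsStaleKey(It->first))
        It = Cache.erase(It); // erase() returns the next valid iterator
      else
        ++It;
    }
  }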
Jonas Paulsson 8e2f948ef0 [TargetTransformInfo] Refactor and improve getScalarizationOverhead()
Refactoring to remove duplications of this method.

A new method, getOperandsScalarizationOverhead(), looks at the present unique
operands and adds extract costs for them. The old behaviour was to always add extract
costs for one operand of the type, which still happens in
getArithmeticInstrCost() if no operands are provided by the caller.

This is a good start of improving on this, but there are more places
that can be improved by using getOperandsScalarizationOverhead().

Review: Hal Finkel
https://reviews.llvm.org/D29017

llvm-svn: 293155
2017-01-26 07:03:25 +00:00
Adam Nemet 916923e689 [llc] Add -pass-remarks-output
This is the opt/llc counterpart of -fsave-optimization-record to output
optimization remarks in a YAML file.

llvm-svn: 293121
2017-01-26 00:39:51 +00:00
Justin Lebar 7e3184c412 [ValueTracking] Implement SignBitMustBeZero correctly for sqrt.
Summary:
Previously we assumed that the result of sqrt(x) always had 0 as its
sign bit.  But sqrt(-0) == -0.

Reviewers: hfinkel, efriedma, sanjoy

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D28928

llvm-svn: 293115
2017-01-26 00:10:26 +00:00
Adam Nemet a964066705 New OptimizationRemarkEmitter pass for MIR
This allows MIR passes to emit optimization remarks with the same level
of functionality that is available to IR passes.

It also hooks up the greedy register allocator to report spills.  This
allows for interesting use cases like increasing interleaving on a loop
until spilling of registers is observed.

I still need to experiment with whether reporting every spill scales, but this
demonstrates for now that the functionality works from llc
using -pass-remarks*=<pass>.

Differential Revision: https://reviews.llvm.org/D29004

llvm-svn: 293110
2017-01-25 23:20:33 +00:00
Adam Nemet 484f93db30 [OptDiag] Split code region out of DiagnosticInfoOptimizationBase
Code region is the only part of this class that is IR-specific.  Code
region is moved down in the inheritance tree to a new derived class,
called DiagnosticInfoIROptimization.

All the existing remarks are derived from this new class now.

This allows the new MIR pass-remark classes to be derived from
DiagnosticInfoOptimizationBase.

Also because we keep the name DiagnosticInfoOptimizationBase, the clang
parts don't need any adjustment.

Differential Revision: https://reviews.llvm.org/D29003

llvm-svn: 293109
2017-01-25 23:20:25 +00:00
whitequark 16f1e5f1ca Mark @llvm.powi.* as safe to speculatively execute.
Floating point intrinsics in LLVM are generally not speculatively
executed, since most of them are defined to behave the same as libm
functions, which set errno.

However, the @llvm.powi.* intrinsics do not correspond to any libm
function, and lack any defined error handling semantics in LangRef.
It most certainly does not alter errno.

llvm-svn: 293041
2017-01-25 09:32:30 +00:00
NAKAMURA Takumi 28dc4d5122 Rewind instantiations of OuterAnalysisManagerProxy in r289317, r291651, and r291662.
I found that the root class should be instantiated for the variadic template to instantiate its static member explicitly.

This will fix failures in mingw DLL build.

llvm-svn: 293017
2017-01-25 04:26:29 +00:00
Sanjay Patel 562272536a [InstSimplify] try to eliminate icmp Pred (add nsw X, C1), C2
I was surprised to see that we're missing icmp folds based on 'add nsw' in InstCombine, 
but we should handle the InstSimplify cases first because that could make the InstCombine
code simpler.

Here are Alive-based proofs for the logic:

Name: add_neg_constant
Pre: C1 < 0 && (C2 > ((1<<(width(C1)-1)) + C1))
%a = add nsw i7 %x, C1
%b = icmp sgt %a, C2
  =>
%b = false

Name: add_pos_constant
Pre: C1 > 0 && (C2 < ((1<<(width(C1)-1)) + C1 - 1))
%a = add nsw i6 %x, C1
%b = icmp slt %a, C2
  =>
%b = false

Name: nuw
Pre: C1 u>= C2
%a = add nuw i11 %x, C1
%b = icmp ult %a, C2
  =>
%b = false

Differential Revision: https://reviews.llvm.org/D29053

llvm-svn: 292952
2017-01-24 17:03:24 +00:00
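As a sanity check of the third (nuw) fold above, here is a brute-force enumeration at a reduced width of i6 (the proof itself was done with Alive at i11; this is only an exhaustive spot check under that reduced-width assumption).

  #include <iostream>

  int main() {
    const unsigned Width = 6, Mask = (1u << Width) - 1u;
    bool FoldHolds = true;
    for (unsigned C1 = 0; C1 <= Mask; ++C1)
      for (unsigned C2 = 0; C2 <= C1; ++C2)      // precondition: C1 u>= C2
        for (unsigned X = 0; X <= Mask; ++X) {
          unsigned Sum = X + C1;
          if (Sum > Mask)
            continue;                            // add would wrap: nuw violated
          if (Sum < C2)                          // "icmp ult %a, C2" must be false
            FoldHolds = false;
        }
    std::cout << (FoldHolds ? "nuw fold holds for all i6 values\n"
                            : "counterexample found\n");
    return 0;
  }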
Chandler Carruth 6acdca78a0 [PH] Replace uses of AssertingVH from members of analysis results with
a lazy-asserting PoisoningVH.

AssertVH is fundamentally incompatible with cache-invalidation of
analysis results. The invalidation happens after the AssertingVH has
already fired. Instead, use a PoisoningVH that will assert if the
dangling handle is ever used rather than merely be assigned or
destroyed.

This patch also removes all of the (numerous) doomed attempts to work
around this fundamental incompatibility. It is a pretty significant
simplification IMO.

The most interesting change is in the Inliner where we still do some
clearing because we don't want to rely on the coarse grained
invalidation strategy of the containing pass manager. However, I prefer
the approach that contains this logic to the cleanup phase of the
Inliner, and I think we could enhance the CGSCC analysis management
layer to make this even better in the future if desired.

The rest is straight cleanup.

I've also added a test for one of the harder cases to work around: when
a *module analysis* contains many AssertingVHes pointing at functions.

Differential Revision: https://reviews.llvm.org/D29006

llvm-svn: 292928
2017-01-24 12:55:57 +00:00
Serge Pavlov 69b3ff9d93 Make VerifyDomInfo and VerifyLoopInfo global variables
Verifications of dominator tree and loop info are expensive operations
so they are disabled by default. They can be enabled by command line
options -verify-dom-info and -verify-loop-info. These options however
enable checks only in files Dominators.cpp and LoopInfo.cpp. If some
transformation changes dominator tree and/or loop info, it would be
convenient to place similar checks to the files implementing the
transformation.

This change makes corresponding flags global, so they can be used in
any file to optionally turn verification on.

llvm-svn: 292889
2017-01-24 05:52:07 +00:00
David L. Jones d21529fa0d [Analysis] Add LibFunc_ prefix to enums in TargetLibraryInfo. (NFC)
Summary:
The LibFunc::Func enum holds enumerators named for libc functions.
Unfortunately, there are real situations, including libc implementations, where
function names are actually macros (musl uses "#define fopen64 fopen", for
example; any other transitively visible macro would have similar effects).

Strictly speaking, a conforming C++ Standard Library should provide any such
macros as functions instead (via <cstdio>). However, there are some "library"
functions which are not part of the standard, and thus not subject to this
rule (fopen64, for example). So, in order to be both portable and consistent,
the enum should not use the bare function names.

The old enum naming used a namespace LibFunc and an enum Func, with bare
enumerators. This patch changes LibFunc to be an enum with enumerators prefixed
with "LibFFunc_". (Unfortunately, a scoped enum is not sufficient to override
macros.)

There are additional changes required in clang.

Reviewers: rsmith

Subscribers: mehdi_amini, mzolotukhin, nemanjai, llvm-commits

Differential Revision: https://reviews.llvm.org/D28476

llvm-svn: 292848
2017-01-23 23:16:46 +00:00
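A standalone illustration of the macro problem and the prefixed fix (not the actual TargetLibraryInfo declarations; the enumerators here are invented):

  // A libc may implement a "function" as a macro; the macro then rewrites any
  // bare enumerator with the same name. Scoping does not help, because the
  // enumerator token itself is macro-expanded, but a prefix sidesteps it.
  #define fopen64 fopen // e.g. musl effectively does this

  // enum Func { fopen64 };  // after macro expansion this declares "fopen"
  enum LibFunc { LibFunc_fopen64, LibFunc_fclose, NumLibFuncs };

  int main() {
    LibFunc F = LibFunc_fopen64; // the prefixed name is untouched by the macro
    return F == LibFunc_fopen64 ? 0 : 1;
  }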
Xinliang David Li cb253ce90b [PGO] add debug option to view annotated cfg after prof use annotation
Differential Revision: http://reviews.llvm.org/D28967 

llvm-svn: 292815
2017-01-23 18:58:24 +00:00
Sanjay Patel be332137fd [InstSimplify] refactor finding limits for icmp with binop; NFCI
llvm-svn: 292812
2017-01-23 18:22:26 +00:00
Chandler Carruth a504f2b8e8 [PM] Teach LVI to correctly invalidate itself when its dependencies
become unavailable.

The AssumptionCache is now immutable but it still needs to respond to
DomTree invalidation if it ended up caching one.

This lets us remove one of the explicit invalidates of LVI but the
other one continues to avoid hitting a latent bug.

llvm-svn: 292769
2017-01-23 06:35:12 +00:00
Sanjay Patel 24c6f88e4c [ValueTracking] tighten up matchMinMax(); NFCI
This is similar to what the caller (matchSelectPattern()) does. In all
cases where we succeed in matching a min/max pattern, the values in
that pattern will be the values of the 'select', so hoist that and
remove a bunch of duplicated code.

llvm-svn: 292725
2017-01-21 17:51:25 +00:00