Commit Graph

29059 Commits

Author SHA1 Message Date
Fabian Wolff 7eec832def [DSE] Improve handling of `strncpy` in Dead Store Elimination
Fixes PR#52062 and one of the remaining cases of PR#47644.

Reviewed By: fhahn

Differential Revision: https://reviews.llvm.org/D114035
2021-11-19 17:46:29 +00:00
Florian Hahn 76effb001d
[LV] Remove obsolete comment about creating a dummy block (NFC)
No dummy pre-entry block is created since a6c4969f5f. The comment is
stale now and can be removed.

Mentioned by @Ayal in D113182.
2021-11-19 17:17:04 +00:00
Alexey Bataev d1fdf867b1 [SLP][NFC]Introduce TreeEntry::getVectorFactor member function, NFC.
Added TreeEntry::getVectorFactor to get the final vectorization factor
to simplify the code.

Differential Revision: https://reviews.llvm.org/D114190
2021-11-19 06:32:19 -08:00
Senran Zhang 0425ea4621 [NFC][OpaquePtr][Evaluator] Remove call to PointerType::getElementType
There are still another 2 uses of PointerType::getElementType in
Evaluator when evaluating BitCasts on pointers. BitCasts on pointers
should be removed when opaque ptr is ready, so I just keep them as is.

Reviewed By: aeubanks

Differential Revision: https://reviews.llvm.org/D114131
2021-11-19 10:32:55 +08:00
Nikita Popov 46c26991ae [DSE] Remove getLocForWrite() (NFCI)
This implements nearly the same logic as getLocForWriteEx(), and
is only used in one place. In that context, we should also know
that getLocForWriteEx() returns a non-None result. As such,
consolidate everything to use one function.
2021-11-18 21:19:18 +01:00
Nikita Popov f1295563f1 [DSE] Move removePartiallyOverlappedStores() into DSEState (NFC)
So it can use getLocForWriteEx().
2021-11-18 21:19:18 +01:00
Arnold Schwaighofer 7d11c5dac2 Coro: Remove coro_end and coro_suspend_retcon in private unprocessed functions
We might emit functions that are private and never called. The coro
split pass only processes functions that might be called. Remove
intrinsics that we can't generate code for.

rdar://84619859

Differential Revision: https://reviews.llvm.org/D114021
2021-11-18 07:48:24 -08:00
Stanislav Mekhanoshin 6d3db28088 [InstCombine] Generalize complex OR patterns to AND
For every pattern with only NOT, OR, and AND operations there is
always a symmetric pattern with AND and OR swapped.

This adds 2 transformations: https://reviews.llvm.org/D113526

```
(~(a & b) | c) & (~(a & c) | b) --> ~((b ^ c) & a)
(~(a & b) | c) & ~(a & c) --> ~((b | c) & a)
```

```
----------------------------------------
define i4 @src(i4 %a, i4 %b, i4 %c) {
%0:
  %and1 = and i4 %b, %a
  %not1 = xor i4 %and1, 15
  %and2 = and i4 %a, %c
  %not2 = xor i4 %and2, 15
  %or = or i4 %not2, %b
  %r = and i4 %or, %not1
  ret i4 %r
}
=>
define i4 @tgt(i4 %a, i4 %b, i4 %c) {
%0:
  %or = or i4 %b, %c
  %and = and i4 %or, %a
  %r = xor i4 %and, 15
  ret i4 %r
}
Transformation seems to be correct!

----------------------------------------
define i4 @src(i4 %a, i4 %b, i4 %c) {
%0:
  %and1 = and i4 %a, %b
  %not1 = xor i4 %and1, 15
  %or1 = or i4 %not1, %c
  %and2 = and i4 %a, %c
  %not2 = xor i4 %and2, 15
  %or2 = or i4 %not2, %b
  %and3 = and i4 %or1, %or2
  ret i4 %and3
}
=>
define i4 @tgt(i4 %a, i4 %b, i4 %c) {
%0:
  %xor = xor i4 %b, %c
  %and = and i4 %xor, %a
  %not = xor i4 %and, 15
  ret i4 %not
}
Transformation seems to be correct!
```

Differential Revision: https://reviews.llvm.org/D113526
2021-11-17 10:47:36 -08:00
Arthur Eubanks e3e25b5112 [NewPM] Add option to prevent rerunning function pipeline on functions in CGSCC adaptor
In a CGSCC pass manager, we may visit the same function multiple times
due to SCC mutations. In the inliner pipeline, this results in running
the function simplification pipeline on a function multiple times even
if it hasn't been changed since the last function simplification
pipeline run.

We use a newly introduced analysis to keep track of whether or not a
function has changed since the last time the function simplification
pipeline has run on it. If we see this analysis available for a function
in a CGSCCToFunctionPassAdaptor, we skip running the function passes on
the function. The analysis is queried at the end of the function passes
so that it's available after the first time the function simplification
pipeline runs on a function. This is a per-adaptor option so it doesn't
apply to every adaptor.

The goal of this is to improve compile times. However, currently we
can't turn this on by default at least for the higher optimization
levels since the function simplification pipeline is not robust enough
to be idempotent in many cases, resulting in performance regressions if
we stop running the function simplification pipeline on a function
multiple times. We may be able to turn this on for -O1 in the near
future, but turning this on for higher optimization levels would require
more investment in the function simplification pipeline.

Heavily inspired by D98103.

Example compile time improvements with flag turned on:
https://llvm-compile-time-tracker.com/compare.php?from=998dc4a5d3491d2ae8cbe742d2e13bc1b0cacc5f&to=5c27c913687d3d5559ef3ab42b5a3d513531d61c&stat=instructions

Reviewed By: asbirlea, nikic

Differential Revision: https://reviews.llvm.org/D113947
2021-11-17 09:06:46 -08:00
Dmitry Vyukov a7c57c4ec8 tsan: don't consider debug calls as calls
The tsan pass does 2 optimizations based on the presence of calls:
1. Don't emit function entry/exit callbacks if there are no calls
and no memory accesses.
2. Combine read/write of the same variable if there are no
intervening calls.
However, all debug info is represented as CallInsts as well,
which effectively disables these optimizations.
Don't consider debug info calls as calls.

Reviewed By: glider, melver

Differential Revision: https://reviews.llvm.org/D114079
2021-11-17 14:42:16 +01:00
David Sherwood 670dd40244 [Analysis] Fix getNumberOfParts to return 0 when the answer is unknown
When asking how many parts are required for a scalable vector type
there are occasions when it cannot be computed. For example, <vscale x 1 x i3>
is one such vector for AArch64+SVE because at the moment no matter how we
promote the i3 type we never end up with a legal vector. This means
that getTypeConversion returns TypeScalarizeScalableVector as the
LegalizeKind, and then getTypeLegalizationCost returns an invalid cost.
This then causes BasicTTImpl::getNumberOfParts to dereference an invalid
cost, which triggers an assert. This patch changes getNumberOfParts to
return 0 for such cases, since the definition of getNumberOfParts in
TargetTransformInfo.h states that we can use a return value of 0 to represent
an unknown answer.

Currently, LoopVectorize.cpp is the only place where we need to check for
0 as a return value, because all other instances will not currently
ask for the number of parts for <vscale x 1 x iX> types.

In addition, I have changed the target-independent interface for
getNumberOfParts to return 1 and assume there is a single register
that can fit the type. The loop vectoriser has lots of tests that are
target-independent and they relied upon the 0 value to mean the
answer is known and that we are not scalarising the vector.

I have added tests here that show we correctly return an invalid cost
for VF=vscale x 1 when the loop contains unusual types such as i7:

  Transforms/LoopVectorize/AArch64/sve-inductions-unusual-types.ll

Differential Revision: https://reviews.llvm.org/D113772
2021-11-17 12:07:09 +00:00
Stanislav Mekhanoshin c74f2e5b27 [InstCombine] Use SpecificBinaryOp_match in two more places
Differential Revision: https://reviews.llvm.org/D114038
2021-11-17 01:16:06 -08:00
Hongtao Yu 042cefd2b5 [CSSPGO] Fix a hash code truncating issue in ContextTrieNode.
std::hash returns a 64-bit hash code, but previously we were using only the lower 32 bits, which caused hash collisions for large workloads.

Reviewed By: wenlei, wlei

Differential Revision: https://reviews.llvm.org/D113688
2021-11-16 11:01:52 -08:00
Sanjay Patel 8fce94f916 [InstCombine] canonicalize icmp with trunc op into mask and cmp, part 2
If C is a high-bit mask:
(trunc X) u< C --> (X & C) != C (are any masked-high-bits clear?)

If C is low-bit mask:
(trunc X) u> C --> (X & ~C) != 0 (are any masked-high-bits set?)

If C is not-of-power-of-2 (one clear bit):
(trunc X) u> C --> (X & (C+1)) == C+1 (are all masked-high-bits set?)

This extends the fold added with:
acabad9ff6 (https://alive2.llvm.org/ce/z/aFr7qV)

Using decomposeBitTestICmp() to generalize this is a planned follow-up, but that requires removing an inverse fold.

Here are Alive2 generalizations for these folds:
https://alive2.llvm.org/ce/z/u-ZpC_ (ult, the previous patch)
https://alive2.llvm.org/ce/z/YsuAu2 (ult, this patch)
https://alive2.llvm.org/ce/z/ekktQP (ugt, low bitmask)
https://alive2.llvm.org/ce/z/pJY9wR (ugt, one clear bit)
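
For illustration, a minimal Alive2-style sketch of the first fold (a
hypothetical example, assuming an i32 source truncated to i8 and C = 0xF0;
not taken from the patch's tests):

```
define i1 @src(i32 %x) {
  %t = trunc i32 %x to i8
  %r = icmp ult i8 %t, -16    ; C = 0xF0, a high-bit mask
  ret i1 %r
}
=>
define i1 @tgt(i32 %x) {
  %m = and i32 %x, 240        ; the zero-extended mask stays within the low byte
  %r = icmp ne i32 %m, 240    ; some masked high bit clear <=> trunc is u< C
  ret i1 %r
}
```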

Differential Revision: https://reviews.llvm.org/D112634
2021-11-16 09:27:30 -05:00
Alexey Bataev 900cc1a226 [SLP]Improve cost of the gather nodes.
No need to count the final shuffle cost for the constants; gathering of
the constants is just a constant vector + extra inserts, if required.

Differential Revision: https://reviews.llvm.org/D113770
2021-11-16 06:25:07 -08:00
Alexey Bataev cdf8a53c1d [SLP]Fix windows build, NFC.
Need to put `IndexIdx` var to the list of captures.
2021-11-16 06:09:51 -08:00
Alexey Bataev aa9bbb64be [SLP]Adjust GEP indices types when trying to build entries.
Need to adjust the types of GEPs indices when building the tree
entries/operands. Otherwise some of the nodes might differ and
the vectorizer is unable to correctly find them and count their cost.

Differential Revision: https://reviews.llvm.org/D113792
2021-11-16 05:44:33 -08:00
Sander.DeSmalen@arm.com 305816ff1e [IndVarSimplify] Reduce nondeterministic behaviour in visitIVCast.
rGf39978b84f1d3a1da6c32db48f64c8daae64b3ad led to and/or exposed
an issue with IndVarSimplification for a loop where a i32 phi node is
no longer replaced by a widened (i64) phi node, because the SCEVs of a
sign-extend no longer folded the same way. I'm unsure how to properly
explain this because it's all rather complicated, but in short: SCEVs
don't fold as nicely as they used to and this caused a difference.

While investigating this, I found that IndVarSimplify can actually
optimise the case in the way we want to if it chooses the widened IV to
be 'signed' (the i32 IV is both sign and zero-extended). Oddly enough,
there is some level of nondeterminism in the way the algorithm works,
it just picks the sign of the 'first' zext/sext user, where the order of
the users-iterator is not guaranteed to be the same on each invocation
of the pass (e.g. shown by first running loop-rotate, which puts the
users in a different order).

While I think the fix is valid in the sense that consistently picking
_any_ order is better than having a nondeterministic order, I could
use a bit of advice from people more familiar with this area of the
code-base.

For example, I'm not sure if this fix is hiding another issue where the
IndVarSimplify pass could actually draw the same conclusions (i.e. that
it only needs an i64 phi node) if it does a bit more work, regardless
of whether it chooses the induction variable to be signed or unsigned.

I'm also not sure if choosing signed is better than unsigned, or whether
that just happens to be beneficial only in this individual case.

Any feedback would be much appreciated!

Reviewed By: reames

Differential Revision: https://reviews.llvm.org/D112573
2021-11-16 12:41:04 +00:00
Arthur Eubanks 19867de9e7 [NewPM] Only invalidate modified functions' analyses in CGSCC passes + turn on eagerly invalidate analyses
Previously, any change in any function in an SCC would cause all
analyses for all functions in the SCC to be invalidated. With this
change, we now manually invalidate analyses for functions we modify,
then let the pass manager know that all function analyses should be
preserved since we've already handled function analysis invalidation.

So far this only touches the inliner, argpromotion, function-attrs, and
updateCGAndAnalysisManager(), since they are the most used.

This is part of an effort to investigate running the function
simplification pipeline less on functions we visit multiple times in the
inliner pipeline.

However, this causes major memory regressions especially on larger IR.
To counteract this, turn on the option to eagerly invalidate function
analyses. This invalidates analyses on functions immediately after
they're processed in a module or scc to function adaptor for specific
parts of the pipeline.

Within an SCC, if a pass only modifies one function, other functions in
the SCC do not have their analyses invalidated, so in later function
passes in the SCC pass manager the analyses may still be cached. It is
only after the function passes that the eager invalidation takes effect.
For the default pipelines this makes sense because the inliner pipeline
runs the function simplification pipeline after all other SCC passes
(except CoroSplit which doesn't request any analyses).

Overall this has mostly positive effects on compile time and positive effects on memory usage.
https://llvm-compile-time-tracker.com/compare.php?from=7f627596977624730f9298a1b69883af1555765e&to=39e824e0d3ca8a517502f13032dfa67304841c90&stat=instructions
https://llvm-compile-time-tracker.com/compare.php?from=7f627596977624730f9298a1b69883af1555765e&to=39e824e0d3ca8a517502f13032dfa67304841c90&stat=max-rss

D113196 shows that we slightly regressed compile times in exchange for
some memory improvements when turning on eager invalidation.  D100917
shows that we slightly improved compile times in exchange for major
memory regressions in some cases when invalidating less in SCC passes.
Turning these on at the same time keeps the memory improvements while
keeping compile times neutral/slightly positive.

Reviewed By: asbirlea, nikic

Differential Revision: https://reviews.llvm.org/D113304
2021-11-15 14:44:53 -08:00
Philip Reames 8f95e915cd [unroll-runtime] Relax two profitability limitations on multi-exit unrolling
This change is mostly about getting rid of some "uninteresting" cases in a follow-on deeper heuristic change.  If anyone sees actually interesting code differences out of this, please let me know.  I'm not expecting this to have much impact at all.

Case 1 - With the single deoptimize non-latch exit, we can't have two exiting blocks sharing an exit block.  We can only hit this with a poorly documented debug flag.

Case 2 - Why should we treat epilog cases differently from prolog cases?  Or to say it differently, why should starting with a constant control whether a multiple exit loop gets unrolled?

Sorry for the lack of tests here.  These are both *exceedingly* narrow cases in practice, and after a while trying, I couldn't come up with a test which did anything "useful" as opposed to simply exercise a random combination of force flags.  Note that the legality cases for each are already exercised with force flags.
2021-11-15 13:00:14 -08:00
Philip Reames 423da61835 [runtime-unroll] Inline canSafelyUnrollMultiExitLoop [NFC]
All of the interesting logic from this routine has been removed, inline the single check into the sole non-assert caller.  The assert use has little value with the restructured code and is simply dropped.
2021-11-15 11:39:07 -08:00
Stanislav Mekhanoshin e785f4ab6a [PatternMatch] Add m_BinOp/m_c_BinOp with specific opcode
Differential Revision: https://reviews.llvm.org/D113508
2021-11-15 11:24:27 -08:00
Philip Reames e99902a872 [runtime-unroll] Restructure if-clause to improve readability [NFC] 2021-11-15 11:13:27 -08:00
Alexey Bataev 224e46d355 [SLP][DOT][NFCI]Output all scalars for the splats, not only the first one. 2021-11-15 10:54:26 -08:00
Mehrnoosh Heidarpour 7daa95c8fa [InstCombine] Fold (A^B)|~A-->~(A&B)
https://alive2.llvm.org/ce/z/2v6rhF

Fixes:
https://llvm.org/PR52478
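
For illustration, a minimal Alive2-style sketch of the fold (hypothetical
i8 example, not taken from the patch's tests):

```
define i8 @src(i8 %a, i8 %b) {
  %xor = xor i8 %a, %b
  %nota = xor i8 %a, -1     ; ~A
  %r = or i8 %xor, %nota    ; (A ^ B) | ~A
  ret i8 %r
}
=>
define i8 @tgt(i8 %a, i8 %b) {
  %and = and i8 %a, %b
  %r = xor i8 %and, -1      ; ~(A & B)
  ret i8 %r
}
```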

Differential Revision: https://reviews.llvm.org/D113783
2021-11-15 12:29:37 -05:00
Alexey Bataev 036207d5f2 [SLP]Improve splat detection.
A bunch of scalars can be treated as a splat not only if all elements
are the same but also if some of them are undef values.

Differential Revision: https://reviews.llvm.org/D113774
2021-11-15 07:50:34 -08:00
Alexey Bataev b85152f8b1 [SLP][NFC]Use `isa_and_nonnull` and fix comment, NFC. 2021-11-15 06:49:33 -08:00
ksyx 72b5138d37 Revert "[GVN][NFC] Remove redundant check"
This reverts commit c35e8185d8.

mstorsjo reported in the revision thread that a VNCoercion assertion
is violated, seemingly in relation to this commit. As per "If a test case
that demonstrates a problem is reported in the commit thread, please
revert and investigate offline", this commit is reverted.
2021-11-15 09:14:13 -05:00
Alexey Bataev 6fb5bed7d1 [SLP]Do not create unused gather nodes for scalar arguments of vector intrinsics.
If the vector intrinsic has a scalar argument, we currently still create
a tree entry for it. This entry is not used; it just consumes
resources and increases the cost of the tree.

Differential Revision: https://reviews.llvm.org/D113806
2021-11-15 06:11:19 -08:00
Sander de Smalen f835fe8ef7 [LV] Rename blockNeedsPredication to blockNeedsPredicationForAnyReason.
The interface is a convenience function to ask if a block requires
predication when widening, but it's important that there are two
separate concepts to consider:
(A) The block was predicated in the original loop.
(B) The block was unpredicated in the original loop, but requires
    predication because of tail folding.

In the case of (B) we know that at least one lane of the vector will
be executed, which means we can implement a load from a uniform address
with a scalar load + splat (D112552). In the case of predication because
of (A), we cannot do this, because the scalar load itself requires
predication.

The name 'blockNeedsPredication' does not make the distinction between
(A) and (B), hence the reason to rename it.

Reviewed By: david-arm

Differential Revision: https://reviews.llvm.org/D113392
2021-11-15 08:04:20 +00:00
Kazu Hirata feb40a3a47 [llvm] Use range-based for loops with instructions (NFC) 2021-11-14 19:40:48 -08:00
Kazu Hirata d243cbf8ea [llvm] Use isa instead of dyn_cast (NFC) 2021-11-14 19:40:46 -08:00
Mircea Trofin a32c2c3808 [NFC] Use Optional<ProfileCount> to model invalid counts
ProfileCount could model invalid values, but a user had no indication
that the getCount method could return bogus data. Optional<ProfileCount>
addresses that, because the user must dereference the optional. In
addition, the patch removes concept duplication.

Differential Revision: https://reviews.llvm.org/D113839
2021-11-14 19:03:30 -08:00
Kazu Hirata 7379736774 [llvm] Use range-based for loops with User::operands (NFC) 2021-11-14 09:32:38 -08:00
Kazu Hirata 098e935174 [llvm] Use range-based for loops with CallBase::args (NFC) 2021-11-14 09:32:36 -08:00
Mircea Trofin 0662a3612c [NFC][InlineFunction] Renamed some vars to conform to coding style 2021-11-14 07:26:44 -08:00
Kazu Hirata 7505b7045f [llvm] Use GetElementPtrInst::indices (NFC) 2021-11-13 21:43:28 -08:00
ksyx c35e8185d8
[GVN][NFC] Remove redundant check
The if-check above the deleted part guarantees that StoreOffset <= LoadOffset
and that StoreOffset + StoreSize >= LoadOffset + LoadSize, and we know that
LoadOffset + LoadSize > LoadOffset whenever LoadSize > 0. Thus
StoreOffset + StoreSize > LoadOffset is guaranteed given LoadSize > 0,
and it would be meaningless to have a type with a non-positive size, so
the check can be removed.

Part of revision D100179
Reviewed By: nikic
2021-11-13 15:59:43 -05:00
Philip Reames 37ead201e6 [runtime-unroll] Use incrementing IVs instead of decrementing ones
This is one of those wonderful "in theory X doesn't matter, but in practice it does" changes. In this particular case, we shift the IVs inserted by the runtime unroller to clamp the iteration count of the loops* from decrementing to incrementing.

Why does this matter?  A couple of reasons:
* SCEV doesn't have a native subtract node.  Instead, all subtracts (A - B) are represented as A + -1 * B, dropping any flags invalidated by this.  As a result, SCEV is slightly less good at reasoning about edge cases involving decrementing addrecs than incrementing ones.  (You can see this in the inferred flags in some of the test cases.)
* Other parts of the optimizer produce incrementing IVs, and they're common in idiomatic source language.  We do have support for reversing IVs, but in general if we produce one of each, the pair will persist surprisingly far through the optimizer before being coalesced.  (You can see this looking at nearby phis in the test cases.)

Note that if the hardware prefers decrementing (i.e. zero tested) loops, LSR should convert back immediately before codegen.

* Mostly irrelevant detail: The main loop of the prolog case is handled independently and will simply use the original IV with a changed start value.  We could in theory use this scheme for all iteration clamping, but that's a larger and more invasive change.
2021-11-12 15:44:58 -08:00
Philip Reames de2fed6152 [unroll] Keep unrolled iterations with initial iteration
The unrolling code was previously inserting new cloned blocks at the end of the function.  The result of this with typical loop structures is that the new iterations are placed far from the initial iteration.

With unrolling, the general assumption is that a) the loop is reasonably hot, and b) the first Count-1 copies of the loop are rarely (if ever) loop exiting.  As such, placing Count-1 copies out of line is a fairly poor code placement choice.  We'd much rather fall through into the hot (non-exiting) path.  For code with branch profiles, later layout would fix this, but this may have a positive impact on non-PGO compiled code.

However, the real motivation for this change isn't performance.  Its readability and human understanding.  Having to jump around long distances in an IR file to trace an unrolled loop structure is error prone and tedious.
2021-11-12 11:40:50 -08:00
Joel E. Denny c9dfe322ee [OpenMP] Fix main thread barrier for Pascal and amdgpu
Fixes what's left of https://bugs.llvm.org/show_bug.cgi?id=51781.

Reviewed By: jdoerfert, JonChesterfield, tianshilei1992

Differential Revision: https://reviews.llvm.org/D113602
2021-11-12 11:18:45 -05:00
Alexey Bataev 352c46e707 [SLP]Improve vectorization of split loads.
Need to fix the cost estimation for split loads: since we already look at
the subregs, there is no need to permute them; we just need to estimate the
subregister insert, if it is smaller than the real register. Also, using
split loads, it might be profitable already to vectorize smaller trees
with gathering of the loads.

Differential Revision: https://reviews.llvm.org/D107188
2021-11-12 06:13:22 -08:00
Nikita Popov 986416251b [InstCombine] Drop redundant fold for and/or of icmp eq/ne (NFCI)
This handles a special case of foldAndOrOfICmpsUsingRanges()
with two equality predicates.
2021-11-11 20:25:40 +01:00
Nikita Popov 84e273cced [InstCombine] Handle undefs in and of icmp eq zero fold
For the scalar/splat case, this fold is subsumed by
foldLogOpOfMaskedICmps(). However, the conjugated fold for "or"
also supports splats with undef. Make both code paths consistent
by using m_ZeroInt() for the "and" implementation as well.

https://alive2.llvm.org/ce/z/tN63cu
https://alive2.llvm.org/ce/z/ufB_Ue
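
For illustration, a minimal sketch of the now-consistent vector case (a
hypothetical example with an undef lane in the splat constant, not taken
from the patch's tests):

```
define <2 x i1> @src(<2 x i8> %x, <2 x i8> %y) {
  %a = icmp eq <2 x i8> %x, <i8 0, i8 undef>   ; m_ZeroInt() matches this splat
  %b = icmp eq <2 x i8> %y, <i8 0, i8 undef>
  %r = and <2 x i1> %a, %b
  ret <2 x i1> %r
}
=>
define <2 x i1> @tgt(<2 x i8> %x, <2 x i8> %y) {
  %o = or <2 x i8> %x, %y
  %r = icmp eq <2 x i8> %o, zeroinitializer
  ret <2 x i1> %r
}
```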
2021-11-11 19:07:07 +01:00
Nikita Popov 0242a6adf7 [InstCombine] Support splat vectors in some or of icmp folds
Replace m_ConstantInt() with m_APInt() in order to support splat
constants in addition to scalar integers.
2021-11-10 22:59:09 +01:00
Nikita Popov 861adaf2ad [InstCombine] Support splat vectors in some and of icmp folds
Replace m_ConstantInt() with m_APInt() to support splat vectors
in addition to scalar integers.
2021-11-10 22:37:54 +01:00
Nikita Popov 58ebc79a64 [InstCombine] Strip offset when folding and/or of icmps
When folding and/or of icmps, look through add of a constant and
adjust the icmp range instead. Effectively, this decomposes
X + C1 < C2 style range checks back into a normal range. This allows
us to fold comparisons involving two range checks or one range check
and some other condition. We had a fold for a really specific case
of this (or of range check and eq, and only on one side!) while
this handles it in full generality.
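
For illustration, a minimal sketch of the kind of fold this enables (a
hypothetical i8 example, not taken from the patch's tests): one plain
range check plus one offset range check collapse into a single compare.

```
define i1 @src(i8 %x) {
  %c1 = icmp ult i8 %x, 3      ; x in [0, 2]
  %add = add i8 %x, -3
  %c2 = icmp ult i8 %add, 2    ; x in [3, 4]
  %r = or i1 %c1, %c2
  ret i1 %r
}
=>
define i1 @tgt(i8 %x) {
  %r = icmp ult i8 %x, 5       ; union [0, 4]
  ret i1 %r
}
```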

Differential Revision: https://reviews.llvm.org/D113510
2021-11-10 22:01:52 +01:00
Stanislav Mekhanoshin 5731381594 [InstCombine] Relax and reorganize one use checks in the ~(a | b) & c
Since there is just a single check for the LHS in the ~(A | B) & C | ...
transforms, and multiple RHS checks inside with more coming, I am
removing the m_OneUse checks for the LHS and adding new checks for the RHS.
This is not essential as long as there is an overall benefit.

In addition, the (~(A | B) & C) | (~(A | C) & B) --> (B ^ C) & ~A
checks were overly restrictive; it should be good without any
additional checks.

Differential Revision: https://reviews.llvm.org/D113141
2021-11-10 10:14:12 -08:00
Sanjay Patel 67299aa84f [InstCombine] add check for integer source type from cast to prevent crash
A problem was noted in the post-commit review for
c36b7e21bd / D113035:

If the source type is not an integer or integer vector,
then we could crash when trying to call ComputeNumSignBits().
2021-11-10 09:44:55 -05:00
Florian Hahn 93931d78cf
[LV] Do not rely on InductionDescriptor::getCastInsts. (NFC)
Now that CastDef is passed as VPValue, there is no need to access
ID.getCastInsts, as CastDef can instead be checked.
2021-11-10 13:03:44 +00:00
Florian Hahn e7f1232cb7
[LV] Move optimized IV recipes to phi section of header after sinking.
Unfortunately sinking recipes for first-order recurrences relies on
the original position of recipes. So if a recipe needs to be sunk after
an optimized induction, it needs to stay in the original position, until
sinking is done. This is causing PR52460.

To fix the crash, keep the recipes in the original position until
sink-after is done.

Post-commit follow-up to c45045bfd0 to address PR52460.
2021-11-10 11:41:08 +00:00
Kerry McLaughlin 6f16ee5e14 Revert "[LoopVectorize] Extract the last lane from a uniform store"
This reverts commit 0d748b4d32.
This is causing some failures when building Spec2017 with scalable
vectors. Reverting to investigate.
2021-11-10 11:21:19 +00:00
Dmitry Makogon 62f86d4f95 Reapply 5ec2386 "Reapply db28934 "[IndVars] Pass TTI to replaceCongruentIVs""
This reverts commit 7cd273c339.

Several patches with test fixes have been applied:
0cada82f0a "[Test] Remove incorrect test in GVN"
97cb13615d "[Test] Separate IndVars test into AArch64 and X86 parts"
985cc490f1 "[Test] Remove separated test in IndVars",
and test failures caused by 5ec2386 should be resolved now.
2021-11-10 17:36:14 +07:00
Itay Bookstein f9059efa0d [InstCombine] Extend stacksave/restore elimination
Previously, InstCombine detected a pair of llvm.stacksave/stackrestore
instructions that are adjacent modulo debug instructions in order to
eliminate the llvm.stackrestore. This misses situations where
intervening instructions (e.g. loads) prevent the llvm.stacksave and
llvm.stackrestore from becoming adjacent. This commit extends the logic
and allows for eliminating the llvm.stackrestore when the range of
instructions between them does not include any alloca or side-effect
causing instructions.
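
For illustration, a minimal sketch of a case that is now handled (a
hypothetical example, not taken from the patch's tests): the intervening
load is neither an alloca nor side-effect causing, so the pair can go away.

```
declare i8* @llvm.stacksave()
declare void @llvm.stackrestore(i8*)

define i32 @src(i32* %p) {
  %ss = call i8* @llvm.stacksave()
  %v = load i32, i32* %p       ; intervening, but safe
  call void @llvm.stackrestore(i8* %ss)
  ret i32 %v
}
=>
define i32 @tgt(i32* %p) {
  %v = load i32, i32* %p
  ret i32 %v
}
```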

Signed-off-by: Itay Bookstein <itay.bookstein@nextsilicon.com>

Reviewed By: lebedev.ri

Differential Revision: https://reviews.llvm.org/D113105
2021-11-10 10:41:58 +02:00
Itay Bookstein fe7491d32f [InstCombine][NFC] Refactor llvm.stackrestore handling
Hoist the instruction classification logic outside the loop
in preparation for reuse in a future commit.

Signed-off-by: Itay Bookstein <itay.bookstein@nextsilicon.com>

Reviewed By: lebedev.ri

Differential Revision: https://reviews.llvm.org/D113464
2021-11-10 10:41:56 +02:00
Joseph Huber e52937eba0 [OpenMP] Use AAAssumptionInfo to get assumptions in OpenMPOpt
This patch uses the abstract attributor introduced in D111054 to get the
assumption values instead of the `hasAssumption` function. This also
calls it, so assumption information should propagate through the device
where applicable.

Reviewed By: jdoerfert

Differential Revision: https://reviews.llvm.org/D111445
2021-11-09 17:39:21 -05:00
Joseph Huber b8a825b483 [Attributor] Introduce AAAssumptionInfo to propagate assumptions
This patch introduces a new abstract attributor instance that propagates
assumption information from functions. Conceptually, if a function is
only called by functions that have certain assumptions, then we can
apply the same assumptions to that function. This problem is similar to
calculating the dominator set, but the assumptions are merged instead of
nodes.

Reviewed By: jdoerfert

Differential Revision: https://reviews.llvm.org/D111054
2021-11-09 17:39:18 -05:00
Kostya Serebryany b7f3a4f4fa [sancov] add tracing for loads and stores
add tracing for loads and stores.

The primary goal is to have more options for data-flow-guided fuzzing,
i.e. use data flow insights to perform better mutations or more aggressive
corpus expansion. But the feature is general purpose and could be used for
other things too.

Pipe the flag through clang and the clang driver, same as for the other SanitizerCoverage flags.
While at it, change some plain arrays into std::array.

Tests: clang flags test, LLVM IR test, compiler-rt executable test.

Reviewed By: morehouse

Differential Revision: https://reviews.llvm.org/D113447
2021-11-09 14:35:13 -08:00
Nikita Popov 0aabdad1ef [InstCombine] Combine code for and/or of icmps (NFC)
The implementation for and/or is the same, apart from the choice
of exactIntersectWith() vs exactUnionWith(). Extract a common
function to make future extension easier.
2021-11-09 21:18:31 +01:00
Nikita Popov bb12dedede [InstCombine] Refactor and/or of icmp with constant (NFCI)
Rather than testing for many specific combinations of predicates
and values, compute the exact icmp regions for both comparisons
and check whether they union/intersect exactly. If they do,
construct the equivalent icmp for the new range. Assuming that the
existing code handled all possible cases, this should be NFC.

Differential Revision: https://reviews.llvm.org/D113367
2021-11-09 21:05:46 +01:00
Stanislav Mekhanoshin 791baf38e1 [InstCombine] Fuse checks for LHS (~(A | B) & C) | ... NFC.
Differential Revision: https://reviews.llvm.org/D113132
2021-11-09 11:31:22 -08:00
Sanjay Patel d5c002bdc7 [InstCombine] fix code comment to match code; NFC 2021-11-09 14:27:29 -05:00
Arthur Eubanks 1d8750c3da [NFC] Rename GVN -> GVNPass and SROA -> SROAPass
To be more consistent with other pass struct names.

There are still more passes that don't end with "Pass", but these are the important ones.

Reviewed By: asbirlea

Differential Revision: https://reviews.llvm.org/D112935
2021-11-09 10:35:58 -08:00
Douglas Yung 7cd273c339 Revert "Reapply db28934 "[IndVars] Pass TTI to replaceCongruentIVs""
This reverts commit 5ec2386332.

This change is causing test failures on the PS4 linux build bot: https://lab.llvm.org/buildbot/#/builders/139/builds/12871
2021-11-09 10:28:41 -08:00
Sanjay Patel 2a88d00cf2 [InstCombine] fold sub-of-umax to 0-usubsat
Op0 - umax(X, Op0) --> 0 - usub.sat(X, Op0)

I'm not sure if this is really an improvement in IR because
we probably have better recognition/analysis for min/max,
but this lines up with the fold we do for the icmp+select
idiom and removes another diff from D98152.

This is similar to the previous fold in the code that was
added with:
83c2fb9f66
baa6a85130

https://alive2.llvm.org/ce/z/5MrVB9
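
For illustration, a minimal sketch of the fold (hypothetical i8 example,
not taken from the patch's tests):

```
declare i8 @llvm.umax.i8(i8, i8)
declare i8 @llvm.usub.sat.i8(i8, i8)

define i8 @src(i8 %x, i8 %y) {
  %m = call i8 @llvm.umax.i8(i8 %x, i8 %y)
  %r = sub i8 %y, %m           ; Op0 - umax(X, Op0)
  ret i8 %r
}
=>
define i8 @tgt(i8 %x, i8 %y) {
  %s = call i8 @llvm.usub.sat.i8(i8 %x, i8 %y)
  %r = sub i8 0, %s            ; 0 - usub.sat(X, Op0)
  ret i8 %r
}
```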
2021-11-09 12:46:03 -05:00
Sanjay Patel baa6a85130 [InstCombine] allow commute in sub-of-umax fold
This fold was added with:
83c2fb9f66
...but missed the commuted pattern:
https://alive2.llvm.org/ce/z/_tYEGy
2021-11-09 10:50:11 -05:00
Kerry McLaughlin 0d748b4d32 [LoopVectorize] Extract the last lane from a uniform store
Changes VPReplicateRecipe to extract the last lane from an unconditional,
uniform store instruction. collectLoopUniforms will also add stores to
the list of uniform instructions where Legal->isUniformMemOp is true.

setCostBasedWideningDecision now sets the widening decision for
all uniform memory ops to Scalarize, where previously GatherScatter
may have been chosen for scalable stores.

This fixes an assert ("Cannot yet scalarize uniform stores") in
setCostBasedWideningDecision when we have a loop containing a
uniform i1 store and a scalable VF, which we cannot create a scatter for.

Reviewed By: sdesmalen, david-arm, fhahn

Differential Revision: https://reviews.llvm.org/D112725
2021-11-09 14:43:16 +00:00
Sanjay Patel c36b7e21bd [InstCombine] enhance vector bitwise select matching
(Cond & C) | (~bitcast(Cond) & D) --> bitcast (select Cond, (bc C), (bc D))

This is part of fixing:
https://llvm.org/PR34047

That report shows a case where a bitcast is sitting between the select condition
candidate and its 'not' value due to current cast canonicalization rules.

There's a bitcast type restriction that might be violated in existing matching,
but I still need to investigate if that is possible -
Alive2 shows we can only do this transform safely when the bitcast is from
narrow to wide vector elements (otherwise poison could leak into elements
that were safe in the original code):
https://alive2.llvm.org/ce/z/Hf66qh

Differential Revision: https://reviews.llvm.org/D113035
2021-11-09 08:54:59 -05:00
Chris Jackson 116dc70cf3 [DebugInfo][LSR] Add more stringent checks on IV selection and salvage
attempts

Prevent the selection of IVs that have a SCEV containing an undef. Also
prevent salvaging attempts for values for which a SCEV could not be
created by ScalarEvolution and that have only a SCEVUnknown.

Reviewed by: Orlando

Differential Revision: https://reviews.llvm.org/D111810
2021-11-09 13:09:37 +00:00
Florian Hahn 2ead34716a
[SimplifyCFG] Add early bailout if Use is not in same BB.
Without this patch, passingValueIsAlwaysUndefined will iterate over all
instructions from I to the end of the basic block, even if the use is
outside the block.

This patch adds an early bail out, if the use instruction is outside I's
BB. This can greatly reduce compile-time in cases where very large basic
blocks are involved, with a large number of PHI nodes and incoming
values.

Note that the refactoring makes the handling of the case where I is a
phi and Use is in a PHI more explicit as well: for phi nodes, we can also
directly bail out. In the existing code, we would iterate until we reach
the end and return false.

Based on an earlier patch by Matt Wala.

Reviewed By: lebedev.ri

Differential Revision: https://reviews.llvm.org/D113293
2021-11-09 12:57:03 +00:00
Max Kazantsev cb728cb8a9 [NFC] Get rid of hardcoded magical constant and use Optionals instead
Refactor calculateIterationsToInvariance so that it doesn't need a magical
constant to signify an unknown answer.
2021-11-09 18:13:19 +07:00
Dmitry Makogon 5ec2386332 Reapply db28934 "[IndVars] Pass TTI to replaceCongruentIVs"
This reapplies patch db289340c8.

The test failures on build with expensive checks caused by the patch happened due
to the fact that we sorted loop Phis in replaceCongruentIVs using llvm::sort,
which shuffles the given container if the expensive checks are enabled,
so equivalent Phis in the sorted vector had different mutual order from run
to run. replaceCongruentIVs tries to replace narrow Phis with truncations
of wide ones. In some test cases there were several Phis with the same
width, so if their order differs from run to run, the narrow Phis would
be replaced with a different Phi, depending on the shuffling result.

The patch ae14fae0ff fixed this issue by
replacing llvm::sort with llvm::stable_sort.
2021-11-09 17:42:29 +07:00
Florian Hahn acbefbf19f [VPlan] Guard code to dump instructions after d9361bfbe2.
This should fix build failures when built without assertions enabled,
e.g.
    https://lab.llvm.org/buildbot/#/builders/205/builds/172
2021-11-09 10:29:05 +00:00
Florian Hahn d9361bfbe2 [VPlan] Add initial inner-loop VPlan verification.
This patch adds a function to verify general properties of VPlans. The
first check makes sure that all phi-like recipes are at the beginning of
a block, with no other recipes in between.

Note that this currently may not hold for VPBlendRecipes at the moment,
as other recipes may be inserted before the VPBlendRecipe during mask
creation.

Note that this patch depends on D111300 and D111301, which fix code that
breaks the checked invariant.

Reviewed By: Ayal

Differential Revision: https://reviews.llvm.org/D111302
2021-11-09 10:18:28 +00:00
Dmitry Makogon ae14fae0ff [SCEVExpander] Use stable_sort to sort loop Phis in SCEVExpander::replaceCongruentIVs
This is a fix for test failures on expensive checks build caused by db289340c8.

With LLVM_ENABLE_EXPENSIVE_CHECKS enabled the llvm::sort shuffles the given container.
However, the sort is only called when the TTI is passed to replaceCongruentIVs.
In the mentioned patch we pass it TTI, so the sort happens. But due to shuffling
equivalent Phis may appear in different order from run to run.
With the stable_sort instead of sort this is impossible - the order of sorted Phis
is preserved.
2021-11-09 16:29:57 +07:00
Liqiang Tao 6cad45d5c6 [llvm][Inline] Add a module level inliner
Add a module-level inliner, which is a minimum viable product at this point.
Also add some tests for it.

RFC: https://lists.llvm.org/pipermail/llvm-dev/2021-August/152297.html

Reviewed By: kazu

Differential Revision: https://reviews.llvm.org/D106448
2021-11-09 11:03:29 +08:00
Akira Hatanaka 8f8d9f743d [ObjC][ARC] Handle operand bundle "clang.arc.attachedcall" on targets
that don't use the inline asm marker

This patch makes the changes to the ARC middle-end passes that are
needed to handle operand bundle "clang.arc.attachedcall" on targets that
don't use the inline asm marker for the retainRV/autoreleaseRV
handshake (e.g., x86-64).

Note that anyone who wants to use the operand bundle on their target has
to teach their backend to handle the operand bundle. The x86-64 backend
already knows about the operand bundle (see
https://reviews.llvm.org/D94597).

Differential Revision: https://reviews.llvm.org/D111334
2021-11-08 18:38:39 -08:00
Arthur Eubanks 28a06a1b87 [NFC][FuncAttrs] Keep track of modified functions
This is in preparation for only invalidating analyses on changed
functions.

Reviewed By: asbirlea

Differential Revision: https://reviews.llvm.org/D113303
2021-11-08 15:04:56 -08:00
Akira Hatanaka f2c7c3c7c7 [ObjC][ARC] Invalidate an entry of UnderlyingObjCPtrCache when the
instruction the key points to is deleted

Use weak value handles for both the key and the value. The entry is
invalid if either value handle is null.

This fixes an assertion failure in BasicAAResult::alias that is caused
by UnderlyingObjCPtrCache returning a wrong value.

I don't have a test case for this patch that fails reliably.

rdar://83984790
2021-11-08 14:41:06 -08:00
Michael Liao bf225939bc [InferAddressSpaces] Support assumed addrspaces from addrspace predicates.
- CUDA cannot associate memory space with pointer types. Even though Clang could add extra attributes to specify the address space explicitly on a pointer type, it breaks the portability between Clang and NVCC.
- This change proposes to infer the address space of a pointer from assumptions built upon target-specific address space predicates, such as `__isGlobal` from CUDA. E.g.,

```
  void foo(float *p) {
    __builtin_assume(__isGlobal(p));
    // From there, we could assume p is a global pointer instead of a
    // generic one.
  }
```

This makes the code portable without introducing implementation-specific features.

Note that NVCC starts to support __builtin_assume from version 11.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D112041
2021-11-08 16:51:57 -05:00
Nikita Popov 1376301c87 [InstCombine] Canonicalize range test idiom
InstCombine converts range tests of the form (X > C1 && X < C2) or
(X < C1 || X > C2) into checks of the form (X + C3 < C4) or
(X + C3 > C4). It is possible to express all range tests in either
of these forms (with different choices of constants), but currently
neither of them is considered canonical. We may have equivalent
range tests using either ult or ugt.

This proposes to canonicalize all range tests to use ult. An
alternative would be to canonicalize to either ult or ugt depending
on the specific constants involved -- e.g. in practice we currently
generate ult for && style ranges and ugt for || style ranges when
going through the insertRangeTest() helper. In fact, the "clamp like"
fold was relying on this, which is why I had to tweak it to not
assume whether inversion is needed based on just the predicate.

Proof: https://alive2.llvm.org/ce/z/_SP_rQ
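
For illustration, a minimal sketch of the canonical form (hypothetical i8
example, not taken from the patch's tests): a signed range test becomes an
offset plus a single ult.

```
define i1 @src(i8 %x) {
  %a = icmp sgt i8 %x, 5
  %b = icmp slt i8 %x, 10
  %r = and i1 %a, %b           ; x in [6, 9]
  ret i1 %r
}
=>
define i1 @tgt(i8 %x) {
  %off = add i8 %x, -6
  %r = icmp ult i8 %off, 4     ; (x - 6) u< 4
  ret i1 %r
}
```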

Differential Revision: https://reviews.llvm.org/D113366
2021-11-08 21:15:46 +01:00
Florian Hahn e3bfb6a146
[VPlan] Make sure recurrence splice is not inserted between phis.
All phi-like recipes should be at the beginning of a VPBasicBlock with
no other recipes in between. Ensure that the recurrence-splicing recipe
is not added between phi-like recipes, but after them.

Reviewed By: Ayal

Differential Revision: https://reviews.llvm.org/D111301
2021-11-08 17:42:32 +00:00
Sander de Smalen 2829376bb2 [LV] Use VScaleForTuning to fine-tune the cost per lane.
When targeting a specific CPU with scalable vectorization, the knowledge
of that particular CPU's vscale value can be used to tune the cost-model
and make the cost per lane less pessimistic.

If the target implements 'TTI.getVScaleForTuning()', the cost-per-lane
is calculated as:

  Cost / (VScaleForTuning * VF.KnownMinLanes)

Otherwise, it assumes a value of 1, meaning that the behavior
is unchanged and the cost is calculated as:

  Cost / VF.KnownMinLanes

Reviewed By: kmclaughlin, david-arm

Differential Revision: https://reviews.llvm.org/D113209
2021-11-08 16:59:46 +00:00
Anton Afanasyev ce4fa93db8 [SCCP] Tune cast instruction handling for overdefined operand
The extended value is known to be inside a range smaller than the full one.
Prevent SCCP from marking such a value as overdefined.

Fixes PR52253

Differential Revision: https://reviews.llvm.org/D112721
2021-11-08 18:34:30 +03:00
David Sherwood c63b0f471b [NFC][LoopVectorize] Make the createStepForVF interface more caller-friendly
The common use case for calling createStepForVF is currently something
like:

  Value *Step = createStepForVF(Builder, ConstantInt::get(Ty, UF), VF);

and it makes more sense to reduce overall lines of code and change the
function to let it create the constant instead. With my patch this
becomes:

  Value *Step = createStepForVF(Builder, Ty, VF, UF);

and the ConstantInt is created inside createStepForVF instead. A side effect
of this is that the code in createStepForVF also becomes simpler.

As part of this patch I've also replaced some calls to getRuntimeVF
with calls to createStepForVF, i.e.

  getRuntimeVF(Builder, Count->getType(), VFactor * UFactor) ->
  createStepForVF(Builder, Count->getType(), VFactor, UFactor)

because this feels semantically better.

Differential Revision: https://reviews.llvm.org/D113122
2021-11-08 15:14:14 +00:00
Dmitry Makogon 8d4eba6c0d Revert "[IndVars] Pass TTI to replaceCongruentIVs"
This reverts commit db289340c8.

The patch caused 2 crashes with expensive checks enabled.
2021-11-08 19:35:14 +07:00
Dmitry Makogon db289340c8 [IndVars] Pass TTI to replaceCongruentIVs
In IndVarSimplify after simplifying and extending loop IVs we call 'replaceCongruentIVs'.
This function optionally takes a TTI argument to be able to replace narrow IVs uses
with truncates of the widest one.
For some reason the TTI wasn't passed to the function, so it couldn't perform such
a transform.
This patch fixes it.

Reviewed By: mkazantsev

Differential Revision: https://reviews.llvm.org/D113024
2021-11-08 19:20:53 +07:00
David Sherwood c42bb30b9e [LoopVectorize] Permit fixed-width epilogue loops for scalable vector bodies
At the moment in LoopVectorizationCostModel::selectEpilogueVectorizationFactor
we bail out if the main vector loop uses a scalable VF. This patch adds
support for generating epilogue vector loops using a fixed-width VF when the
main vector loop uses a scalable VF.

I've changed LoopVectorizationCostModel::selectEpilogueVectorizationFactor
so that we convert the scalable VF into a fixed-width VF and do profitability
checks on that instead. In addition, since the scalable and fixed-width VFs
live in different VPlans, I had to change the calls to
LVP.hasPlanWithVFs so that we only pass in the fixed-width VF.

New tests added here:

  Transforms/LoopVectorize/AArch64/sve-epilog-vect.ll

Differential Revision: https://reviews.llvm.org/D109432
2021-11-08 09:41:13 +00:00
Kazu Hirata 0d182d9d1e [Transforms] Use make_early_inc_range (NFC) 2021-11-07 17:03:15 -08:00
Nikita Popov 2060895c9c [ConstantRange] Add exact union/intersect (NFC)
For some optimizations on comparisons it's necessary that the
union/intersect is exact and not a superset. Add methods that
return Optional<ConstantRange> only if the result is exact.

For the sake of simplicity this is implemented by comparing
the subset and superset approximations for now, but it should be
possible to do this more directly, as unionWith() and intersectWith()
already distinguish the cases where the result is imprecise for the
preferred range type functionality.
2021-11-07 21:46:06 +01:00
Simon Pilgrim f057756a1a [SLP] Fix Wdocumentation warning - remove \returns from void function. NFC. 2021-11-07 15:08:39 +00:00
Benjamin Kramer 9b8b16457c Put implementation details into anonymous namespaces. NFCI. 2021-11-07 15:18:30 +01:00
Kazu Hirata 843d1eda18 [llvm] Use llvm::reverse (NFC) 2021-11-06 19:31:18 -07:00
Nikita Popov 9f0194be45 [ConstantRange] Add getEquivalentICmp() variant with offset (NFCI)
Add a variant of getEquivalentICmp() that produces an optional
offset. This allows us to create an equivalent icmp for all ranges.

Use this in the with.overflow folding code, which was doing this
adjustment separately -- this clarifies that the fold will indeed
always apply.
2021-11-06 21:59:45 +01:00
Nikita Popov f8627877a9 [SCEV] Make eraseValueFromMap() private (NFC)
The public API for this functionality is forgetValue(). There was
only one call from LoopVectorize, which was directly next to a
forgetValue() call and as such redundant.
2021-11-06 17:14:02 +01:00
Sanjay Patel 83c2fb9f66 [InstCombine] match usub.sat from umax intrinsic
umax(X, Op1) - Op1 --> usub.sat(X, Op1)

https://alive2.llvm.org/ce/z/HpcGiJ

This happens in 2 or more steps with an icmp-select idiom
instead of an intrinsic. This is another step towards
canonicalization of the min/max intrinsics. See:
D98152
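
For illustration, a minimal sketch of the fold (hypothetical i8 example,
not taken from the patch's tests):

```
declare i8 @llvm.umax.i8(i8, i8)
declare i8 @llvm.usub.sat.i8(i8, i8)

define i8 @src(i8 %x, i8 %y) {
  %m = call i8 @llvm.umax.i8(i8 %x, i8 %y)
  %r = sub i8 %m, %y           ; umax(X, Op1) - Op1
  ret i8 %r
}
=>
define i8 @tgt(i8 %x, i8 %y) {
  %r = call i8 @llvm.usub.sat.i8(i8 %x, i8 %y)
  ret i8 %r
}
```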
2021-11-06 08:32:52 -04:00
Kazu Hirata 87e53a0ad8 [llvm] Use make_early_inc_range (NFC) 2021-11-05 19:39:07 -07:00
David Green 08056e1888 [InstCombine] Generalize sadd.sat combine to compute sign bits.
There is a combine in instcombine to transform a saturated add/sub into
a saddsat/ssubsat, currently handling inputs which are both sign
extended (https://alive2.llvm.org/ce/z/68qpTn). This can generalize to,
for example ashr of at least the bitwidth (https://alive2.llvm.org/ce/z/4TFyX-
and https://alive2.llvm.org/ce/z/qDWzFs for example). Which means it
generalizes further to "the number of sign bits", needing to be enough
to truncate to the size of the saturate. (An example using `or`:
https://alive2.llvm.org/ce/z/EI_h_A).

So this patch makes use of ComputeNumSignBits (with the newly added
ComputeMinSignedBits) in matchSAddSubSat to generalize the fold to any
inputs with enough sign bits known, truncating the inputs to the new
size of the saturate.
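
For illustration, a minimal sketch of the generalized pattern (a
hypothetical example, not taken from the patch's tests): the ashr operands
have at least 9 sign bits, so their values fit in i8 and can be truncated.

```
declare i16 @llvm.smax.i16(i16, i16)
declare i16 @llvm.smin.i16(i16, i16)
declare i8 @llvm.sadd.sat.i8(i8, i8)

define i8 @src(i16 %x, i16 %y) {
  %a = ashr i16 %x, 8          ; >= 9 sign bits
  %b = ashr i16 %y, 8
  %add = add i16 %a, %b        ; cannot overflow i16
  %lo = call i16 @llvm.smax.i16(i16 %add, i16 -128)
  %hi = call i16 @llvm.smin.i16(i16 %lo, i16 127)
  %r = trunc i16 %hi to i8     ; saturate to the i8 range
  ret i8 %r
}
=>
define i8 @tgt(i16 %x, i16 %y) {
  %a = ashr i16 %x, 8
  %b = ashr i16 %y, 8
  %ta = trunc i16 %a to i8
  %tb = trunc i16 %b to i8
  %r = call i8 @llvm.sadd.sat.i8(i8 %ta, i8 %tb)
  ret i8 %r
}
```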

Differential Revision: https://reviews.llvm.org/D112298
2021-11-05 15:05:09 +00:00
David Green 61225c0818 [ValueTracking][InstCombine] Introduce and use ComputeMinSignedBits
This introduces a new ComputeMinSignedBits method for ValueTracking that
returns the BitWidth - SignBits + 1 from ComputeNumSignBits, and represents
the minimum bit size for the value as a signed integer.  Similar to the
existing APInt::getMinSignedBits method, this can make some of the
reasoning around ComputeNumSignBits more natural.

See https://reviews.llvm.org/D112298
2021-11-05 14:41:37 +00:00
Kirill Stoimenov 3f1aca58df [ASan] Added stack safety support in address sanitizer.
Added and implemented the -asan-use-stack-safety flag, which controls whether ASan uses the Stack Safety results to emit less code for operations which are marked as 'safe' by the static analysis.

Reviewed By: vitalybuka

Differential Revision: https://reviews.llvm.org/D112098
2021-11-04 17:22:31 -07:00
Philip Reames dec15d9a0a [indvars] Use loop guards when canonicalizing exit conditions
This extends the logic in canonicalizeExitConditions to use loop guards to specialize the SCEV of the loop-invariant term before querying its range.
2021-11-04 15:23:34 -07:00
Philip Reames c0d9bf2f6a [indvars] Allow rotation (narrowing) of exit test when discovering trip count
This relaxes the one-use requirement on the rotation transform specifically for the case where we know we're zexting an IV of the loop.  This allows us to discover trip count information in SCEV, which seems worth a single extra loop invariant truncate.  Honestly, I'd prefer if SCEV could just compute the trip count directly (e.g. D109457), but this unblocks practical benefit.
2021-11-04 14:49:24 -07:00
Philip Reames 453fdebd48 [indvars] Extend canonicalizeExitConditions to inverted operands
As discussed in the original reviews, but done in a follow on.
2021-11-04 14:20:37 -07:00
Sjoerd Meijer 3fd1902ad8 [FuncSpec] Enable it only with -O3
Function specialisation was running at all optimisation levels (if enabled on
the command line, it is not on by default). That was an oversight and not
something we want to do. Function specialisation duplicates functions when it
triggers, so the backend is processing more functions/instructions resulting in
compile-time increases, which seems more appropriate at -O3 and in line with
GCC. Please note that since function specialisation is not enabled by default,
this didn't require updating any pass manager tests.

Differential Revision: https://reviews.llvm.org/D112129
2021-11-04 13:59:00 +00:00
Florian Hahn b4992dbb21
[LV] Clarify uniform worklist contains instrs demanding lane 0. 2021-11-04 13:11:50 +01:00
Tim Northover 3d39612b3d Coroutines: don't infer function attrs before lowering
Coroutines have weird semantics that don't quite match normal LLVM functions,
so trying to infer even simple attributes based on their contents can go wrong.
2021-11-04 10:24:28 +00:00
David Green 1e5f814302 [InstCombine] Fix infinite recursion in ashr/xor vector fold.
The added test has poison lanes due to the vector shuffle. This can
cause an infinite loop of combines in instcombine where it folds
xor(ashr, -1) -> select (icmp slt 0), -1, 0 -> sext (icmp slt 0) -> xor(ashr, -1).
We usually prevent this by checking that the xor constant is not -1,
but with vectors some of the lanes may be -1, some may be poison. So
this changes the way we detect that from "!C1->isAllOnesValue()" to
"!match(C1, m_AllOnes())", which is more able to detect that some of the
lanes are poison.

Fixes PR52397
2021-11-04 09:24:27 +00:00
Philip Reames d4708fa480 Backout must-exit based parts of 3fc9882e, and 412eb0
Not sure these are correct.  I think I missed a case when porting this from the original SCEV change to the IndVar changes.  I may end up reapplying this later with a comment about how this is correct, but in case the current bad feeling turns out to be true, I'm removing from tree while investigating further.
2021-11-03 15:19:49 -07:00
Arthur Eubanks 88052fc362 [ArgPromo] Preserve FunctionAnalysisManagerCGSCCProxy
We already make sure to properly clear analyses for deleted functions.

This makes investigating some future potential compile time improvements easier.

Reviewed By: asbirlea

Differential Revision: https://reviews.llvm.org/D113032
2021-11-03 14:56:58 -07:00
Philip Reames 64990f1408 Revert "[indvars] Move a check slightlly earlier [NFC]"
This reverts commit 7ff943a9ed.

This wasn't NFC.  isSigned != !isUnsigned as there are also relational operators.
2021-11-03 13:38:52 -07:00
Kirill Stoimenov a55c4ec1ce [ASan] Process functions in Asan module pass
This came up as recommendation while reviewing D112098.

Reviewed By: vitalybuka

Differential Revision: https://reviews.llvm.org/D112732
2021-11-03 20:27:53 +00:00
Philip Reames 7ff943a9ed [indvars] Move a check slightlly earlier [NFC] 2021-11-03 12:24:10 -07:00
Philip Reames 3fc9882e88 [indvars] Rotate zext though icmp to reduce loop varying computation
This change looks for cases where we can prove that an exit test of a loop can be performed in a narrower bitwidth, and that by doing so we can replace a loop-varying extend with a loop-invariant truncate.

The motivation here is that doing this unblocks the trip count analysis for narrow IVs involved in extended compare exit tests. It also has the nice side effect of simply making the code faster, even if we gain no other benefit from the improved analysis ability.

I've noted a few places this could be extended, but I think this stands reasonably on its own as well.

Differential Revision: https://reviews.llvm.org/D112262
2021-11-03 12:09:20 -07:00
Vitaly Buka 3131714f8d [NFC][asan] Use AddressSanitizerOptions in ModuleAddressSanitizerPass
Reviewed By: kstoimenov

Differential Revision: https://reviews.llvm.org/D113072
2021-11-03 11:32:14 -07:00
Kirill Stoimenov b3145323b5 Revert "[ASan] Process functions in Asan module pass"
This reverts commit 76ea87b94e.

Reviewed By: kstoimenov

Differential Revision: https://reviews.llvm.org/D113129
2021-11-03 18:01:01 +00:00
Kirill Stoimenov 76ea87b94e [ASan] Process functions in Asan module pass
This came up as recommendation while reviewing D112098.

Reviewed By: vitalybuka

Differential Revision: https://reviews.llvm.org/D112732
2021-11-03 17:51:01 +00:00
Sanjay Patel c85df3c7d5 [InstCombine] refactor fold for icmp with trunc op; NFC
There are at least 3 related folds we can add here - see D112634.
2021-11-03 12:43:15 -04:00
David Sherwood c0f2774973 [NFC][LoopVectorize] Simple tidy-up in InnerLoopVectorizer::createVectorIntOrFpInductionPHI
Use getSignedIntOrFpConstant instead of creating int or FP constants
manually.
2021-11-03 14:05:21 +00:00
Florian Hahn 64bc31ee93
[LV] Drop unneeded use of getVPSingleValue (NFC).
VPReductionPHIRecipe inherits from VPValue, so there's no need to call
getVPSingleValue.
2021-11-03 14:26:15 +01:00
Florian Hahn 8e44bdd12a
[VPlan] Make VPWidenCanonicalIVRecipe a VPValue (NFC).
The recipe produces exactly one VPValue and can inherit directly from
it. This is in line with other recipes and avoids having to use
getVPSingleValue.
2021-11-03 14:11:01 +01:00
Piotr Sobczak 03961709ed [InstCombine] Extend pattern to replace shuffle's insertelement operand
In D71220 a pattern was added to replace shuffle's insertelement operand
if the inserted scalar is not demanded. The pattern was added only for
the case where the shuffle's mask size is equal to the insertelement's vector size.
However, that condition is not required because the pattern does not
change the shuffle vector size.

This patch extends the pattern to also include cases where the shuffle's
mask size is not equal to the insertelement's vector size.
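
For illustration, a minimal sketch of the extended case (hypothetical
example, not taken from the patch's tests): the mask size (4) differs from
the vector size (2), and the inserted scalar is not demanded.

```
define <4 x float> @src(<2 x float> %v, float %s) {
  %ins = insertelement <2 x float> %v, float %s, i32 1
  %shuf = shufflevector <2 x float> %ins, <2 x float> undef, <4 x i32> zeroinitializer
  ret <4 x float> %shuf
}
=>
define <4 x float> @tgt(<2 x float> %v, float %s) {
  %shuf = shufflevector <2 x float> %v, <2 x float> undef, <4 x i32> zeroinitializer
  ret <4 x float> %shuf
}
```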

Differential Revision: https://reviews.llvm.org/D112318
2021-11-03 09:43:04 +01:00
Johannes Doerfert d61aac76bf [OpenMP][FIX] Do not signal SPMD-mode but then keep generic-mode
If we assume SPMD-mode during the fixpoint iteration we have to execute
the kernel in SPMD-mode. If we change our mind during manifest there is
the chance of a mismatch between the simplification, e.g., of
`__kmpc_is_spmd_exec_mode` calls, and the execution mode. This problem
was introduced in D109438.

This patch is a compromise to resolve the problem purely in OpenMP-opt
while trying to keep the benefits of D109438 around. This might not
always work (see `get_hardware_num_threads_in_block_fold`), but it often
does. At the same time we do keep value specialization and execution
mode in sync.

Proper solutions to this problem should be considered. I believe a new
execution mode is the easiest way forward (Singleton-SPMD).
Alternatively, SPMD-mode execution can be used with a way to provide a
new thread_limit (here 1) to the runtime. This is more general and could
be useful if we see `num_threads` clauses or workshared loops with small
trip counts in the kernel. In either proposal we need to disable the
guarding for the kernel (which was the motivation for D109438).

Reviewed By: jhuber6

Differential Revision: https://reviews.llvm.org/D112894
2021-11-02 23:22:04 -05:00
Johannes Doerfert 73720c8059 [OpenMP][FIX] Introduce and use a simple generic-mode barrier
Before we had aligned barriers, `__kmpc_barrier_simple_spmd` was OK to
use in the custom state machine. Now that SPMD barriers are assumed to
be aligned, we need to use a "generic" barrier in places that are not
aligned.

Reviewed By: tianshilei1992

Differential Revision: https://reviews.llvm.org/D112893
2021-11-02 23:22:01 -05:00
Johannes Doerfert e6e440ae5f [OpenMP][FIX] Ensure guarding uses proper global name
Global symbols cannot have arbitrary names, so we need to sanitize the
string first. Also remove an assertion that is neither necessary nor
true in general.

Reviewed By: ggeorgakoudis

Differential Revision: https://reviews.llvm.org/D112892
2021-11-02 23:21:53 -05:00
Kazu Hirata 1b108ab975 [Transforms] Use make_early_inc_range (NFC) 2021-11-02 18:13:23 -07:00
Dmitry Makogon e09958d5eb [LoopPeel] Peel loops with exits followed by an unreachable or deopt block
Added support for peeling loops with exits that are followed either by an
unreachable-terminated block or by a block that has a terminating
deoptimize call. All blocks in the sequence must have a unique successor,
except possibly the last one. A sketch of one newly peelable shape follows.
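A minimal, hypothetical sketch (all names are illustrative): the exit
block's sole successor ends in a terminating deoptimize call, so the loop
is now a peeling candidate.

```
declare i32 @llvm.experimental.deoptimize.i32(...)

define i32 @sketch(i32 %n) {
entry:
  br label %loop

loop:
  %iv = phi i32 [ 0, %entry ], [ %iv.next, %loop ]
  %iv.next = add nsw i32 %iv, 1
  %cond = icmp slt i32 %iv.next, %n
  br i1 %cond, label %loop, label %exit

exit:                            ; unique successor: %deopt
  br label %deopt

deopt:                           ; block with a terminating deoptimize call
  %res = call i32 (...) @llvm.experimental.deoptimize.i32() [ "deopt"() ]
  ret i32 %res
}
```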

Reviewed By: mkazantsev

Differential Revision: https://reviews.llvm.org/D110922
2021-11-02 23:12:04 +07:00
Youngsuk Kim 76b53da3ce
[SimpleLoopUnswitch] Remove duplicate include.
Header "llvm/Transforms/Scalar/SimpleLoopUnswitch.h" is currently
included twice. This commit removes the duplicate 'include' line.

Previous commit 693eedb138
seems to have mistakenly added the duplicate 'include'.

Reviewed By: fhahn

Differential Revision: https://reviews.llvm.org/D112979
2021-11-02 15:22:41 +01:00
Sanjay Patel 829146164f [InstCombine] change 'not' match for bitwise select
The test diffs are logically equivalent, so this is generally NFC,
but it makes the code match the code comment.

It should also be more efficient: if we choose the 'not' operand
(rather than the 'not' instruction) as the select condition, then we
don't have to invert the select condition/operands in a subsequent
transform.
2021-11-02 10:16:01 -04:00
Daniele Vettorel 67887b0f81 [Scalarizer] Do not insert instructions between PHI nodes and debug intrinsics.
The scalarizer pass could insert instructions in between PHI nodes or
debug intrinsics, and those instructions could still be there when the
pass finished, resulting in malformed IR and violated assumptions.

This patch adds a check to make sure the `extractelement` instructions
that the pass adds are placed after all PHI nodes and debug intrinsics.

Patch by vettoreldaniele.

Reviewed By: bjope

Differential Revision: https://reviews.llvm.org/D112472
2021-11-02 09:53:59 -04:00
Rosie Sumpter dcb8222d87 [LoopVectorize] Propagate fast-math flags for inloop reductions
This patch updates VPReductionRecipe::execute so that the fast-math
flags associated with the underlying instruction of the VPRecipe are
propagated through to the reductions that are created (see the sketch
below).
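A hedged illustration of the effect (the intrinsic and names are
illustrative of what the recipe emits, not quoted from the patch): the
`fast` flags of the scalar instruction now appear on the generated
reduction.

```
declare float @llvm.vector.reduce.fadd.v4f32(float, <4 x float>)

define float @sketch(float %acc, <4 x float> %vec) {
  ; Previously the reduction was emitted without fast-math flags; now
  ; the underlying instruction's flags are carried over.
  %r = call fast float @llvm.vector.reduce.fadd.v4f32(float %acc, <4 x float> %vec)
  ret float %r
}
```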

Differential Revision: https://reviews.llvm.org/D112548
2021-11-02 08:59:53 +00:00
Sanjay Patel 42c94bc1ab [InstCombine] allow vector splat matching for bitwise logic fold
Similar to 54e969cffd (and with cosmetic updates to hopefully
make that easier to read), this fold has been around since early
in LLVM history.

Intermediate folds have been added subsequently, so extra uses
are required to exercise this code.

The test example actually shows an unintended consequence with extra
uses: we end up with an extra instruction compared to what we started
with. But this at least makes the scalar and vector handling consistent.

General proof:
https://alive2.llvm.org/ce/z/tmuBza
2021-11-01 11:39:48 -04:00
Matt Morehouse 4d8b0aa5c0 [HWASan] Apply TagMaskByte to every global tag.
Previously we applied it only to the first global tag, which could allow
subsequent global tags to exceed the valid number of bits.

Reviewed By: hctim

Differential Revision: https://reviews.llvm.org/D112853
2021-11-01 06:31:44 -07:00
Sanjay Patel 54e969cffd [InstCombine] allow vector splat matching for bitwise logic folds
This fold was added long ago (part of fixing PR4216),
and it matched scalars only. Intermediate folds have
been added subsequently, so extra uses are required
to exercise this code.

General proof:
https://alive2.llvm.org/ce/z/G6BBhB

One of the specific tests:
https://alive2.llvm.org/ce/z/t0JhEB
2021-11-01 08:26:42 -04:00
Sanjay Patel 511ee8759f [InstCombine] reduce code duplication with commutative matcher; NFC 2021-11-01 08:26:41 -04:00
David Sherwood 87a294d5eb [LoopVectorize] Change getRuntimeVFAsFloat to use unsigned int->FP conversion
We never expect the runtime VF to be negative, so we should use the
uitofp instruction instead of sitofp (illustrated below).
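A minimal illustration of the emitted conversion (the function name is
made up):

```
define float @vf_to_float(i32 %vf) {
  ; uitofp matches the invariant that the runtime VF is never negative;
  ; sitofp would treat a VF with the sign bit set as a negative value.
  %vf.f = uitofp i32 %vf to float
  ret float %vf.f
}
```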

Differential Revision: https://reviews.llvm.org/D112610
2021-11-01 09:58:14 +00:00
Roman Lebedev b554e41e2d
[CVP] Canonicalize signed relational comparisons of scalar integers to unsigned comparison predicates
Now that the reasoning was added to ConstantRange in D90924,
this replicates the IndVars variant of this transform (D111836)
in a pass that uses value range reasoning for the transform.
A sketch of the canonicalization follows.
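A hedged sketch (the masks are chosen just to make the ranges obvious):
when value ranges prove both operands non-negative, the signed predicate
can be replaced by its unsigned counterpart.

```
define i1 @sketch(i32 %x, i32 %y) {
  %xm = and i32 %x, 127        ; known range [0, 127]
  %ym = and i32 %y, 127        ; known range [0, 127]
  ; Both operands are provably non-negative, so this compare can be
  ; canonicalized to 'icmp ult i32 %xm, %ym'.
  %c = icmp slt i32 %xm, %ym
  ret i1 %c
}
```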

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D112895
2021-11-01 12:16:05 +03:00
Jun Ma c93f93b2e3 Revert "Revert "Recommit "Revert "[CVP] processSwitch: Remove default case when switch cover all possible values.""""
This reverts commit 3a998c06a8.
2021-11-01 15:31:59 +08:00
Kazu Hirata c714da2ceb [Transforms] Use {DenseSet,SetVector,SmallPtrSet}::contains (NFC) 2021-10-31 07:57:32 -07:00
Kazu Hirata c8b1ed5fb2 [clang, llvm] Use Optional::getValueOr (NFC) 2021-10-30 19:00:21 -07:00
Roman Lebedev 25043c8276
[NFCI] Introduce `ICmpInst::compare()` and use it where appropriate
As noted in https://reviews.llvm.org/D90924#inline-1076197,
this is apparently a pretty common pattern;
let's not repeat it yet again, but have it in a common place.

There may be some more places where it could be used,
but these are the most obvious ones.
2021-10-30 17:50:06 +03:00
David Green 66281baea1 [InstCombine] Fix type of constant in canonicalizeClampLike
As a follow-up to D108049: one of the constants could be generated with
an incorrect type now that the input can be truncated.
2021-10-30 09:06:21 +01:00
Kazu Hirata 972d4133e9 Use {DenseSet,SmallPtrSet}::contains (NFC) 2021-10-29 20:26:07 -07:00
Florian Hahn 274a9b0f0b
[DSE] Support redundant stores eliminated by memset.
This patch adds support for removing stores that write the same value
as earlier memsets.

It uses isOverwrite to check that a memset completely overwrites a later
store. The candidate store must store the same bytewise value as the
byte stored by the memset (see the example below).
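A minimal, hypothetical example of a store this can now remove (typed
pointers, matching the IR of this era; names are illustrative):

```
declare void @llvm.memset.p0i8.i64(i8*, i8, i64, i1)

define void @sketch(i8* %p) {
  call void @llvm.memset.p0i8.i64(i8* %p, i8 0, i64 16, i1 false)
  %gep = getelementptr inbounds i8, i8* %p, i64 4
  %bc = bitcast i8* %gep to i32*
  ; Removable: it stores the same bytewise value (0) into bytes the
  ; memset has already written.
  store i32 0, i32* %bc
  ret void
}
```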

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D112321
2021-10-29 22:19:53 +01:00
Sanjay Patel 8f786b4618 [InstCombine] fix comments to match code; NFC 2021-10-29 15:48:35 -04:00
modimo 5caad9b5d3 [InlineAdvisor] Add fallback/format switches and negative remark processing to Replay Inliner
Adds the following switches:

1. --sample-profile-inline-replay-fallback/--cgscc-inline-replay-fallback: controls what the replay advisor does for inline sites that are not present in the replay. Options are:

 1. Original: defers to original advisor
 2. AlwaysInline: inline all sites not in replay
 3. NeverInline: inline no sites not in replay

2. --sample-profile-inline-replay-format/--cgscc-inline-replay-format: controls what format should be generated to match against the replay remarks. Options are:

  1. Line
  2. LineColumn
  3. LineDiscriminator
  4. LineColumnDiscriminator

Adds support for negative inlining decisions. These are denoted by "will not be inlined into" as compared to the positive "inlined into" in the remarks.

All of these, together with the previous `--sample-profile-inline-replay-scope/--cgscc-inline-replay-scope`, allow tweaking how replay is applied. In my testing, I'm using:
1. --sample-profile-inline-replay-scope/--cgscc-inline-replay-scope = Function to only replay on a function
2. --sample-profile-inline-replay-fallback/--cgscc-inline-replay-fallback = NeverInline since I'm feeding in only positive remarks to the replay system
3. --sample-profile-inline-replay-format/--cgscc-inline-replay-format = Line since I'm generating the remarks from DWARF information from GCC which can conflict quite heavily in column number compared to Clang

An alternative configuration could be to do Function, AlwaysInline, Line fallback with negative remarks which closer matches the final call-sites. Note that this can lead to unbounded inlining if a negative remark doesn't match/exist for one reason or another.

Updated various tests to cover the new switches and negative remarks

Testing:
ninja check-all

Reviewed By: wenlei, mtrofin

Differential Revision: https://reviews.llvm.org/D112040
2021-10-29 12:32:03 -07:00
modimo 51ce567b38 [SampleProfile] Add all callsites to AllCandidates if InlineReplay is in effect
Replay in sample profiling needs to be consulted for candidates that may not have counts or that are below the threshold. If replay is in effect for a function, make sure these candidates are captured and also imported during ThinLTO.

Testing:
ninja check-all

Reviewed By: wenlei

Differential Revision: https://reviews.llvm.org/D112033
2021-10-29 12:04:52 -07:00
Roman Lebedev 0ae7bf124a
[NFC][LoopDeletion] Count the number of broken backedges
Those don't contribute to the number of deleted loops.
2021-10-29 21:58:16 +03:00
Sanjay Patel d0e9879d96 [InstCombine] allow vector splat matching for bitwise logic folds
These transforms are also likely missing a one-use check,
but that's another patch.
2021-10-29 14:22:50 -04:00
Stanislav Mekhanoshin a905c54b76 [InstCombine] Fold `(~(a | b) & c) | ~(a | c)` into `~((b & c) | a)`
```
----------------------------------------
define i4 @src(i4 %a, i4 %b, i4 %c) {
  %or1 = or i4 %b, %a
  %not1 = xor i4 %or1, -1
  %or2 = or i4 %a, %c
  %not2 = xor i4 %or2, -1
  %and = and i4 %not2, %b
  %or3 = or i4 %and, %not1
  ret i4 %or3
}

define i4 @tgt(i4 %a, i4 %b, i4 %c) {
  %and = and i4 %c, %b
  %or = or i4 %and, %a
  %or3 = xor i4 %or, -1
  ret i4 %or3
}
Transformation seems to be correct!
```

Differential Revision: https://reviews.llvm.org/D112338
2021-10-29 10:58:09 -07:00
Jay Foad 1b758925ad [IR] Merge createReplacementInstr into ConstantExpr::getAsInstruction
createReplacementInstr was a trivial wrapper around
ConstantExpr::getAsInstruction that also inserted the new instruction
into a basic block. Implement this directly in getAsInstruction by
adding an InsertBefore parameter, and change all callers to use it. NFC.

A follow-up patch will remove createReplacementInstr.

Differential Revision: https://reviews.llvm.org/D112791
2021-10-29 15:02:58 +01:00