Commit Graph

23852 Commits

Author SHA1 Message Date
Florian Hahn e6a74803d4 [VPlan] Use underlying value for printing, if available.
When an underlying value is available, we can use its name for
printing, as discussed in D73078.

Reviewers: rengolin, hsaito, Ayal, gilr

Reviewed By: Ayal

Differential Revision: https://reviews.llvm.org/D76200
2020-03-18 17:46:57 +00:00
Simon Pilgrim f4e495a18e [InstCombine][X86] simplifyX86varShift - convert variable in-range per-element shift amounts to generic shifts (PR40391)
AVX2/AVX512 per-element shifts can be replaced with generic shifts if the shift amounts are guaranteed to be in-range (upper bits are known zero).
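As an illustration, here is a minimal IR sketch (the intrinsic choice and value names are my own, not taken from the patch): once the per-element amounts are masked so the upper bits are known zero, the target-specific shift can become a generic shl.

```llvm
define <4 x i32> @known_in_range(<4 x i32> %v, <4 x i32> %amt) {
  ; clamp the per-element amounts to 0..31, so bits 5 and up are known zero
  %inrange = and <4 x i32> %amt, <i32 31, i32 31, i32 31, i32 31>
  %r = call <4 x i32> @llvm.x86.avx2.psllv.d(<4 x i32> %v, <4 x i32> %inrange)
  ; expected to simplify to: %r = shl <4 x i32> %v, %inrange
  ret <4 x i32> %r
}

declare <4 x i32> @llvm.x86.avx2.psllv.d(<4 x i32>, <4 x i32>)
```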
2020-03-18 11:26:54 +00:00
Florian Hahn 5672ae8d86 [SCCP] Use constant ranges for select, if cond is overdefined.
For selects with an unknown condition, we can approximate the result by
merging the state of both options. This automatically takes care of
the case where one operand is undef.
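A minimal IR sketch of that case (my own example, not from the patch): with %cond overdefined, merging the constant arm with the undef arm still yields the constant.

```llvm
define i32 @select_undef_arm(i1 %cond) {
  ; merge(constant 1, undef) -> constant 1, so %r can be treated as 1
  %r = select i1 %cond, i32 1, i32 undef
  ret i32 %r
}
```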

Reviewers: davide, efriedma, mssimpso

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D71935
2020-03-18 09:26:02 +00:00
Michael Liao f2f8bdc2b1 Fix `-Wunused-variable` warning. NFC. 2020-03-17 20:15:50 -04:00
Florian Hahn a72ae99cf9 [SCCP] Split up callsite handling, only propagate result on change (NFC)
Functions include their arguments in the use-list. Changed function
values mean that the result of the function changed. We only need
to update the call sites with the new function result and do not
have to propagate the call arguments.

To do so, this patch splits up the visitCallSite into handleCallResult
and handleCallArguments and updates markUsersAsChanged to only update
call results for functions.

Reviewers: efriedma, davide

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D75846
2020-03-17 20:05:35 +00:00
Sanjay Patel be9e3d9416 [InstCombine] reduce demand-limited bool math to logic, part 2
Follow-on suggested in:
D75961
2020-03-17 15:18:18 -04:00
Tyker e8ac825f5b [AssumeBundles] Detection of Empty bundles
Summary: Prevent InstCombine from removing llvm.assume calls whose argument is true when they have operand bundles with useful information.
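A hedged IR sketch of what a non-empty bundle means here (the bundle tag and operands are illustrative assumptions): the condition is trivially true, but the operand bundle still carries alignment information, so the call should not be dropped as trivially dead.

```llvm
declare void @llvm.assume(i1)

define void @keep_assume(i8* %p) {
  ; argument is true, but the bundle is not empty: it records that %p is 16-byte aligned
  call void @llvm.assume(i1 true) ["align"(i8* %p, i64 16)]
  ret void
}
```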

Reviewers: jdoerfert, nikic, lebedev.ri

Reviewed By: jdoerfert

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76147
2020-03-17 15:50:15 +01:00
Florian Hahn 1d6f919df2 [SCCP] Explicitly mark values as overdefined (NFC).
This was part of D60582 but can be committed separately.
2020-03-17 12:13:30 +00:00
Roman Lebedev 398b497cd0
[NFC] LoopRotate: do issue debug message when not rotating due to instr count
It is somewhat problematic to notice this issue otherwise.
2020-03-17 09:26:09 +03:00
Serguei Katkov 80c351cdb6 [InstCombine] Transform to undef incorrect atomic unordered mem intrinsics
According to LangRef:
If len is not a positive integer multiple of element_size, then the behaviour of the intrinsic is undefined.

Add an InstCombine rule to transform such an intrinsic into an undef operation.

This is a follow-up for D76116.
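A minimal IR sketch of a call this rule applies to (length and element size are my own, assumed values): len (7) is not a positive integer multiple of element_size (4), so the call's behaviour is undefined.

```llvm
define void @bad_len(i8* align 4 %dst, i8* align 4 %src) {
  ; 7 is not a multiple of the element size 4 -> UB per the LangRef,
  ; so InstCombine may now rewrite this call as an undef operation
  call void @llvm.memcpy.element.unordered.atomic.p0i8.p0i8.i32(i8* align 4 %dst, i8* align 4 %src, i32 7, i32 4)
  ret void
}

declare void @llvm.memcpy.element.unordered.atomic.p0i8.p0i8.i32(i8*, i8*, i32, i32)
```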

Reviewers: reames
Reviewed By: reames
Subscribers: hiraditya, jfb, dantrushin, llvm-commits
Differential Revision: https://reviews.llvm.org/D76215
2020-03-17 10:20:16 +07:00
Matt Arsenault b0bdb186f5 Utils: Always set alignment when expanding mem intrinsics
This was creating naturally aligned loads and stores, which may not be
correct: the target could request a load of a wider type with less
alignment.
2020-03-16 14:34:29 -04:00
Matt Arsenault 05e7d8d6ce TTI: Add addrspace parameters to memcpy lowering functions 2020-03-16 14:34:29 -04:00
Florian Hahn 4878aa36d4 [ValueLattice] Add new state for undef constants.
This patch adds a new undef lattice state, which is used to represent
UndefValue constants or instructions producing undef.

The main difference from the unknown state is that merging undef values
with constants (or single-element constant ranges) produces the
constant/constant range, assuming all uses of the merge result will be
replaced by the found constant.

In contrast, merging non-single-element ranges with undef needs to go to
overdefined. Using unknown for UndefValues currently causes mis-compiles
in CVP/LVI (PR44949) and will become problematic once we use
ValueLatticeElement for SCCP.
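A small IR sketch of the merge in question (example of my own making): when the branch condition is unknown, the phi merges undef with the constant 1, and with the new undef state the result stays the constant 1, provided all uses of %p get replaced.

```llvm
define i32 @merge_undef_and_constant(i1 %c) {
entry:
  br i1 %c, label %a, label %b
a:
  br label %exit
b:
  br label %exit
exit:
  ; lattice merge of undef and the constant 1 -> constant 1
  %p = phi i32 [ undef, %a ], [ 1, %b ]
  ret i32 %p
}
```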

Reviewers: efriedma, reames, davide, nikic

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D75120
2020-03-14 17:19:59 +00:00
Whitney Tsang aca7167535 [NFC][LoopUnrollAndJam] clang-format.
I am currently working on this file.
2020-03-14 00:04:10 +00:00
Akira Hatanaka c6f1713c46 [ObjC][ARC] Don't remove autoreleaseRV/retainRV pairs if the call isn't
a tail call

This reapplies the patch in https://reviews.llvm.org/rG1f5b471b8bf4,
which was reverted because it was causing crashes.

https://bugs.chromium.org/p/chromium/issues/detail?id=1061289#c2

Check that HasSafePathToCall is true before checking the call is a tail
call.

Original commit message:

Previously the ARC optimizer removed the autoreleaseRV/retainRV pair in the
following code, which caused the object returned by @something to be
placed in the autorelease pool because the call to @something isn't a
tail call:

```
  %call = call i8* @something(...)
  %2 = call i8* @objc_retainAutoreleasedReturnValue(i8* %call)
  %3 = call i8* @objc_autoreleaseReturnValue(i8* %2)
  ret i8* %3
```

Fix the bug by checking whether @something is a tail call.

rdar://problem/59275894
2020-03-13 13:52:14 -07:00
Alexey Zhikhartsev f71abec661 [LoopInterchange] Fix interchanging contents of preheader BBs
Summary:
Previously LCSSA was getting broken by placing instructions into the
(newly) inner *header* instead of the *pre*header.

Fixes PR43474

Reviewers: fhahn

Reviewed By: fhahn

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75943
2020-03-13 15:59:37 -04:00
Reid Kleckner 478b06e687 Revert "[ObjC][ARC] Check the basic block size before calling DominatorTree::dominate"
This reverts commit 5c3117b0a9

This should not be necessary after
7593a480db, and Florian Hahn has confirmed
that the problem no longer reproduces with this patch.

I happened to notice this code because the FIXME talks about
OrderedBasicBlock.

Reviewed By: fhahn, dexonsmith

Differential Revision: https://reviews.llvm.org/D76075
2020-03-13 11:57:55 -07:00
Huihui Zhang fc1f205745 [SLPVectorizer][SVE] Bail out early for scalable vector.
Summary:
SLPVectorizer tries to vectorize lists of scalar instructions of the same type;
instructions that are already vectorized are rejected through isValidElementType().

Without this patch, tryToVectorizeList() will first try to determine the
vectorization factor of a list of instructions before checking whether each
instruction has an unsupported type. For instructions already vectorized for SVE,
it will crash in getVectorElementSize(), where it tries to return a fixed size.

This patch makes sure invalid element types are rejected before trying to get the
vectorization factor, ensuring we do not try to vectorize instructions that are already vectorized.

Reviewers: sdesmalen, efriedma, spatel, RKSimon, ABataev, apazos, rengolin

Reviewed By: efriedma

Subscribers: tschuett, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76017
2020-03-13 11:23:31 -07:00
Sanjay Patel 94f5d73182 [SimplifyCFG] fix formatting; NFC 2020-03-13 14:12:28 -04:00
Sanjay Patel 51e53af11c [SimplifyCFG] fix debug print formatting; NFC 2020-03-13 14:12:28 -04:00
Florian Hahn 0c5b6e2ea5 Recommit "[SCCP] Use ValueLatticeElement instead of LatticeVal (NFCI)"
This patch should fix the cause of the stage2 failures and
PR45185.

This reverts the revert commit c52f839e72.
2020-03-13 17:03:22 +00:00
Tyker 69375fd0a3 [AssumeBundles] Preserve Information in the inliner
Summary:
During inlining, create and insert an llvm.assume with attributes to preserve them.
To prevent any changes for now, generation of the llvm.assume is under a flag, disabled by default.

Reviewers: jdoerfert

Reviewed By: jdoerfert

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75825
2020-03-13 17:35:47 +01:00
omarahmed1111 b285b333dc [Attributor] Detect possibly unbounded cycles in functions
This patch adds the mayContainUnboundedCycle helper function, which checks whether a function has any cycle that is not known to be bounded.
Loops with a known maximum trip count are considered bounded; any other cycle is not.
It also contains some fixed tests, and some added tests contain bounded and
unbounded loops as well as non-loop cycles.

Reviewed By: jdoerfert, uenoku, baziotis

Differential Revision: https://reviews.llvm.org/D74691
2020-03-13 11:17:33 -05:00
Pankaj Gode bf990530ae [Attributor] Improve noalias preservation using reachability
Resolution for the FIXME below:
(ii) Check whether the value is captured in the scope using AANoCapture.
FIXME: This is conservative though; it is better to look at the CFG and
check only uses possibly executed before this callsite.

Propagates the caller argument's noalias attribute to the callee.

Reviewed by: jdoerfert, uenoku

Reviewers: jdoerfert, sstefan1, uenoku

Subscribers: uenoku, sstefan1, hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D71617
2020-03-13 21:09:08 +05:30
Sanjay Patel cbeffa3f6c [SimplifyCFG] convert if-else chain to switch; NFC
Fix formatting of related function names while changing the code.
2020-03-13 10:28:41 -04:00
Nico Weber 86eb2c3991 Revert "[ObjC][ARC] Don't remove autoreleaseRV/retainRV pairs if the call isn't"
This reverts commit 1f5b471b8b.
Causes asserts when building code with arc. See
https://bugs.chromium.org/p/chromium/issues/detail?id=1061289#c2
for a full repro. Will post a creduced repro once creduce is done
running.
2020-03-13 10:16:02 -04:00
Johannes Doerfert a198adb490 [Attributor] IPO across definition boundary of a function marked alwaysinline
Reviewed By: jdoerfert

Differential Revision: https://reviews.llvm.org/D75590
2020-03-13 01:06:12 -05:00
rathod-sahaab 263c4a3c75 Fix compiler warning when compiling without asserts
This patch aims to prevent warning-as-error failures in release builds,
as suggested in this comment:
https://reviews.llvm.org/D69930#1910922

Reviewed By: jdoerfert

Differential Revision: https://reviews.llvm.org/D75970
2020-03-13 00:26:49 -05:00
Huihui Zhang 118abf2017 [SVE] Update API ConstantVector::getSplat() to use ElementCount.
Summary:
Support ConstantInt::get() and Constant::getAllOnesValue() for scalable
vector types. This requires ConstantVector::getSplat() to take an 'ElementCount'
instead of an 'unsigned' number of elements.

This change is needed for D73753.

Reviewers: sdesmalen, efriedma, apazos, spatel, huntergr, willlovett

Reviewed By: efriedma

Subscribers: tschuett, hiraditya, rkruppe, psnobl, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74386
2020-03-12 13:22:41 -07:00
Florian Hahn c52f839e72 Revert "[SCCP] Use ValueLatticeElement instead of LatticeVal (NFCI)"
This commit is likely causing clang-with-lto-ubuntu to fail
http://lab.llvm.org:8011/builders/clang-with-lto-ubuntu/builds/16052

Also causes PR45185.

This reverts commit f1ac5d2263.
2020-03-12 18:49:11 +00:00
Hideto Ueno d9bf79f4e9 [Attributor][FIX] Add a missing dependence track in noalias deduction 2020-03-12 15:27:35 +00:00
Florian Hahn f1ac5d2263 [SCCP] Use ValueLatticeElement instead of LatticeVal (NFCI)
This patch switches SCCP to use ValueLatticeElement for lattice values,
instead of the local LatticeVal, as first step to enable integer range support.

This patch does not make use of constant ranges for additional operations
and the only difference for now is that integer constants are represented by
single element ranges. To preserve the existing behavior, the following helpers
are used

* isConstant(LV): returns true when LV is either a constant or a constant range with a single element. This should return true in the same cases where LV.isConstant() returned true previously.
* getConstant(LV): returns a constant if LV is either a constant or a constant range with a single element. This should return a constant in the same cases as LV.getConstant() previously.
* getConstantInt(LV): same as getConstant, but additionally casted to ConstantInt.

Reviewers: davide, efriedma, mssimpso

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D60582
2020-03-12 12:03:06 +00:00
Max Kazantsev 3dc6e53c97 [LoopPeel] Turn incorrect assert into a check
Summary:
This patch replaces an incorrect assert with a check. Previously it asserted that
if SCEV cannot prove `isKnownPredicate(A != B)`, then it should be able to prove
`isKnownPredicate(A == B)`.

Both of these facts may be unprovable. It is shown in the provided test:

Could not prove: `{-294,+,-2}<%bb1> !=  0`
Asserting: `{-294,+,-2}<%bb1> == 0`

Obviously, this SCEV is not the constant zero, but 0 is in its range, so we
cannot prove that it is non-zero either.

Instead of assert, we should be checking the required conditions explicitly.

Reviewers: lebedev.ri, fhahn, sanjoy, fedor.sergeev
Reviewed By: lebedev.ri
Subscribers: hiraditya, zzheng, javed.absar, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76050
2020-03-12 17:23:07 +07:00
Sanjay Patel fae900921b [InstCombine] reduce demand-limited bool math to logic
The cmp math test is inspired by memcmp() patterns seen in D75840.
I know there's at least 1 related fold we can do here if both
values are sext'd, but I'm not seeing a way to generalize further.

We have some other bool math patterns that we want to reduce, but
that might require fixing the bogus transforms noted in D72396.

Alive proof translations of the regression tests:
https://rise4fun.com/Alive/zGWi

  Name: demand add 1
  %xz = zext i1 %x to i32
  %ys = sext i1 %y to i32
  %sub = add i32 %xz, %ys
  %r = lshr i32 %sub, 31
  =>
  %notx = xor i1 %x, 1
  %and = and i1 %y, %notx
  %r = zext i1 %and to i32

  Name: demand add 2
  %xz = zext i1 %x to i5
  %ys = sext i1 %y to i5
  %sub = add i5 %xz, %ys
  %r = and i5 %sub, 16
  =>
  %notx = xor i1 %x, 1
  %and = and i1 %y, %notx
  %r = select i1 %and, i5 -16, i5 0

  Name: demand add 3
  %xz = zext i1 %x to i8
  %ys = sext i1 %y to i8
  %a = add i8 %ys, %xz
  %r = ashr i8 %a, 7
  =>
  %notx = xor i1 %x, 1
  %and = and i1 %y, %notx
  %r = sext i1 %and to i8

  Name: cmp math
  %gt = icmp ugt i32 %x, %y
  %lt = icmp ult i32 %x, %y
  %xz = zext i1 %gt to i32
  %yz = zext i1 %lt to i32
  %s = sub i32 %xz, %yz
  %r = lshr i32 %s, 31
  =>
  %r = zext i1 %lt to i32

Differential Revision: https://reviews.llvm.org/D75961
2020-03-11 15:45:58 -04:00
Florian Hahn bc6c8c4bbb [Matrix] Add remark propagation along the inlined-at chain.
This patch adds support for propagating matrix expressions along the
inlined-at chain and emitting remarks at the traversed function scopes.

To motivate this new behavior, consider the example below. Without the
remark 'up-leveling', we would only get remarks in load.h and store.h,
but we cannot generate a remark describing the full expression in
toplevel.cpp, which is the place where the user has the best chance of
spotting/fixing potential problems.

With this patch, we generate a remark for the load in load.h, one for
the store in store.h and one for the complete expression in
toplevel.cpp. For a bigger example, please see remarks-inlining.ll.

    load.h:
    template <typename Ty, unsigned R, unsigned C> Matrix<Ty, R, C> load(Ty *Ptr) {
      Matrix<Ty, R, C> Result;
      Result.value = *reinterpret_cast <typename Matrix<Ty, R, C>::matrix_t *>(Ptr);
      return Result;
    }

    store.h:
    template <typename Ty, unsigned R, unsigned C> void store(Matrix<Ty, R, C> M1, Ty *Ptr) {
       *reinterpret_cast<typename decltype(M1)::matrix_t *>(Ptr) = M1.value;
    }

    toplevel.cpp
    void test(double *A, double *B, double *C) {
      store(add(load<double, 3, 5>(A), load<double, 3, 5>(B)), C);
    }

For a given function, we traverse the inlined-at chain for each
matrix instruction (= instructions with shape information). We collect
the matrix instructions in each DISubprogram we visit. This produces a
mapping of DISubprogram -> (List of matrix instructions visible in the
subprogram). We then generate remarks using the list of instructions for
each subprogram in the inlined-at chain. Note that the list of instructions
for a subprogram includes the instructions from its own subprograms
recursively. For example using the example above, for the subprogram
'test' this includes inline functions 'load' and 'store'. This allows
surfacing the remarks at a level useful to users.

Please note that the current approach may create a lot of extra remarks.
Additional heuristics to cut-off the traversal can be implemented in the
future. For example, it might make sense to stop 'up-leveling' once all
matrix instructions are at the same debug location.

Reviewers: anemet, Gerolf, thegameg, hfinkel, andrew.w.kaylor, LuoYuanke

Reviewed By: anemet

Differential Revision: https://reviews.llvm.org/D73600
2020-03-11 17:40:08 +00:00
Anna Welker a6d3bec83f [TTI][ARM][MVE] Refine gather/scatter cost model
Refines the gather/scatter cost model, but also changes the TTI
function getIntrinsicInstrCost to accept an additional parameter
which is needed for the gather/scatter cost evaluation.
This did require trivial changes in some non-ARM backends to
adopt the new parameter.
Extending gathers and truncating scatters are now priced cheaper.

Differential Revision: https://reviews.llvm.org/D75525
2020-03-11 10:23:41 +00:00
Fangrui Song a0c0389ffb [SimplifyLibcalls] Don't replace locked IO (fgetc/fgets/fputc/fputs/fread/fwrite) with unlocked IO (*_unlocked)
This essentially reverts some of the SimplifyLibcalls part changes of D45736 [SimplifyLibcalls] Replace locked IO with unlocked IO.

C11 7.21.5.2 The fflush function

> If stream is a null pointer, the fflush function performs this flushing action on all streams for which the behavior is defined above.

i.e. fopen'ed FILE* is inherently captured.

POSIX.1-2017 getc_unlocked, getchar_unlocked, putc_unlocked, putchar_unlocked - stdio with explicit client locking

> These functions can safely be used in a multi-threaded program if and only if they are called while the invoking thread owns the ( FILE *) object, as is the case after a successful call to the flockfile() or ftrylockfile() functions.

After a thread has fopen'ed a FILE*, if it calls foobar(), which is now replaced by foobar_unlocked(),
while another thread is concurrently calling fflush(0), the behavior is undefined.

C11 7.22.4.4 The exit function

> Next, all open streams with unwritten buffered data are flushed, all open streams are closed, and all files created by the tmpfile function are removed.

The replacement is only feasible if the program is single threaded, or exit or fflush(0) is never called.
See also http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20180528/556615.html
for how the replacement makes libc interceptors difficult to implement.

dalias: in a worst case, it's unbounded data corruption because of concurrent access to pointers
without synchronization.  f->wpos or rpos could get outside of the buffer, thread A could do
f->wpos += j after knowing j is in bounds, while thread B also changes it concurrently.

This can produce exploitable conditions depending on libc internals.

Revert the SimplifyLibcalls part change because the cons obviously
outweigh the pros.  Even when the replacement is feasible, the benefit
is indemonstrable, more so in an application instead of an artificial
glibc benchmark.  Theoretically the replacement could be beneficial when
calling getc_unlocked/putc_unlocked in a loop, but then it is better
to use a blocked IO operation, and the user is likely aware of that.

The function attribute inference is still useful and thus kept.
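For reference, a hedged sketch of the kind of rewrite being reverted (FILE* shown as i8* and the signature simplified for brevity):

```llvm
declare i32 @fgetc(i8*)

define i32 @read_char(i8* %stream) {
  ; SimplifyLibcalls used to turn this into: call i32 @fgetc_unlocked(i8* %stream)
  ; which is only safe if no other thread can touch the stream concurrently
  %c = call i32 @fgetc(i8* %stream)
  ret i32 %c
}
```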

Reviewed By: xbolva00

Differential Revision: https://reviews.llvm.org/D75933
2020-03-10 11:11:58 -07:00
Benjamin Kramer 247a177cf7 Give helpers internal linkage. NFC. 2020-03-10 18:27:42 +01:00
Tyker a4cde9ad7b Fixed [AssumeBundles] Move to IR so it can be used by Analysis
This is a recommit of 57c964aaa7
after fixing modules build.
2020-03-10 18:02:39 +01:00
Simon Moll d871ef4e6a [instcombine] remove fsub to fneg hacks; only emit fneg
Summary: Rewrite the fsub-0.0 idiom to fneg and always emit fneg for fp
negation. This also extends the scalarization cost in instcombine for unary
operators to result in the same IR rewrites for fneg as for the idiom.

Reviewed By: cameron.mcinally

Differential Revision: https://reviews.llvm.org/D75467
2020-03-10 16:57:02 +01:00
Florian Hahn c8c14d979a [InstCombine] Support vectors in SimplifyAddWithRemainder.
SimplifyAddWithRemainder currently also matches for vector types, but
tries to create an integer constant, which causes a crash.

By using Constant::getIntegerValue() we can support both the scalar and
vector cases.

The 2 added test cases crash without the fix.

Reviewers: spatel, lebedev.ri

Reviewed By: spatel, lebedev.ri

Differential Revision: https://reviews.llvm.org/D75906
2020-03-10 14:29:40 +00:00
Jonas Paulsson c2dafe12dc [SimplifyCFG] Skip merging return blocks if it would break a CallBr.
SimplifyCFG should not merge empty return blocks and leave a CallBr behind
with a duplicated destination since the verifier will then trigger an
assert. This patch checks for this case and avoids the transformation.

CodeGenPrepare has a similar check which also has a FIXME comment about why
this is needed. It would perhaps be better if these two passes eventually
updated the CallBr instruction instead of just checking and avoiding the transformation.

This fixes https://bugs.llvm.org/show_bug.cgi?id=45062.

Review: Craig Topper

Differential Revision: https://reviews.llvm.org/D75620
2020-03-10 14:59:13 +01:00
Sanjay Patel 467eec0910 [InstCombine] fold gep-of-select-of-constants (PR45084)
As shown in:
https://bugs.llvm.org/show_bug.cgi?id=45084
...we failed to combine a gep with constant indexes with a
pointer operand that is a select of constants.
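A minimal IR sketch of the pattern (globals and indexes are made up for illustration):

```llvm
@a = global [4 x i32] zeroinitializer
@b = global [4 x i32] zeroinitializer

define i32* @gep_of_select(i1 %c) {
  ; expected to fold to a select over the two constant geps
  %sel = select i1 %c, [4 x i32]* @a, [4 x i32]* @b
  %gep = getelementptr inbounds [4 x i32], [4 x i32]* %sel, i64 0, i64 1
  ret i32* %gep
}
```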

Differential Revision: https://reviews.llvm.org/D75807
2020-03-10 09:25:13 -04:00
Florian Hahn 2d6ecf4648 [SLP] Support vectorizing functions provided by vector libs.
It seems like the SLPVectorizer is currently not aware of vector
versions of functions provided by libraries like Accelerate [1].
This patch updates SLPVectorizer to use the same infrastructure
the LoopVectorizer uses to detect vectorizable library functions.

For calls, it computes the cost of an intrinsic call (existing behavior)
and the cost of a vector function library call, if available. Like
LoopVectorizer, it assumes the cost of the vector function is simply the
cost of a call to a vector function.

[1] https://developer.apple.com/documentation/accelerate

Reviewers: ABataev, RKSimon, spatel

Reviewed By: ABataev

Differential Revision: https://reviews.llvm.org/D75878
2020-03-10 13:10:50 +00:00
ahatanak 1f5b471b8b [ObjC][ARC] Don't remove autoreleaseRV/retainRV pairs if the call isn't
a tail call

Previously the ARC optimizer removed the autoreleaseRV/retainRV pair in the
following code, which caused the object returned by @something to be
placed in the autorelease pool because the call to @something isn't a
tail call:

```
  %call = call i8* @something(...)
  %2 = call i8* @objc_retainAutoreleasedReturnValue(i8* %call)
  %3 = call i8* @objc_autoreleaseReturnValue(i8* %2)
  ret i8* %3
```

Fix the bug by checking whether @something is a tail call.

rdar://problem/59275894
2020-03-09 13:21:38 -07:00
Nikita Popov c3ca6876ed [InstCombine] Don't simplify calls without uses
When simplifying a call without uses, replaceInstUsesWith() is
going to do nothing, but we'll skip all following folds. We can
only run into this problem with calls that both simplify and are
not trivially dead if unused, which currently seems to happen only
with calls to undef, as the test diff shows. When extending
SimplifyCall() to handle "returned" attributes, this becomes a much
bigger problem, so I'm fixing this first.

Differential Revision: https://reviews.llvm.org/D75814
2020-03-09 18:47:46 +01:00
Jonas Devlieghere 882f589e20 Revert "[AssumeBundles] Move to IR so it can be used by Analysis"
This breaks the modules build:

http://green.lab.llvm.org/green/job/clang-stage2-Rthinlto/
http://green.lab.llvm.org/green/view/LLDB/job/lldb-cmake/

This reverts commit 57c964aaa7.
2020-03-09 09:02:47 -07:00
evgeny 6d2032e259 [WPD] Provide a way to prevent functions from being devirtualized
Differential revision: https://reviews.llvm.org/D75617
2020-03-09 14:05:15 +03:00
Hideto Ueno bdcbdb4848 [Attributor] Deduction based on path exploration
This patch introduces the propagation of known information based on path exploration.
For example,
```
int u(int c, int *p){
  if(c) {
     return *p;
  } else {
     return *p + 1;
  }
}
```
An argument `p` is dereferenced whatever c's value is.

For an instruction `CtxI`, we accumulate branch instructions in the must-be-executed-context of `CtxI` and then, we take the conjunction of the successors' known state.

Reviewed By: jdoerfert

Differential Revision: https://reviews.llvm.org/D65593
2020-03-09 14:29:26 +09:00
Sanjay Patel a69158c12a [VectorCombine] fold extract-extract-op with different extraction indexes
opcode (extelt V0, Ext0), (ext V1, Ext1) --> extelt (opcode (splat V0, Ext0), V1), Ext1

The first part of this patch generalizes the cost calculation to accept
different extraction indexes. The second part creates a shuffle+extract
before feeding into the existing code to create a vector op+extract.

The patch conservatively uses "TargetTransformInfo::SK_PermuteSingleSrc"
rather than "TargetTransformInfo::SK_Broadcast" (splat specifically
from element 0) because we do not have a more general "SK_Splat"
currently. That does not affect any of the current regression tests,
but we might be able to find some cost model target specialization where
that comes into play.

I suspect that we can expose some missing x86 horizontal op codegen with
this transform, so I'm speculatively adding a debug flag to disable the
binop variant of this transform to allow easier testing.

The test changes show that we're sensitive to cost model diffs (as we
should be), so that means that patches like D74976
should have better coverage.

Differential Revision: https://reviews.llvm.org/D75689
2020-03-08 09:57:55 -04:00
Tyker 57c964aaa7 [AssumeBundles] Move to IR so it can be used by Analysis
Summary:
Assume bundles need to be usable by Analysis, and Transforms/Utils isn't,
so this commit moves the utilities for dealing with assume bundles to IR.

Reviewers: jdoerfert

Reviewed By: jdoerfert

Subscribers: mgorny, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75618
2020-03-08 12:21:50 +01:00
Tyker 84056394e9 [AssumeBundles] Add API to query a bundles from a use
Summary: Finding what information is known about a value from a use is generally useful and can be done quickly.

Reviewers: jdoerfert

Reviewed By: jdoerfert

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75616
2020-03-08 12:04:23 +01:00
Nikita Popov 51a466a61f [InstCombine] Fix known bits handling in SimplifyDemandedUseBits
Fixes a regression from D75801. SimplifyDemandedUseBits() is also
supposed to compute the known bits (of the demanded subset) of the
instruction. For unknown instructions it does so by directly calling
computeKnownBits(). For known instructions it will compute known
bits itself. However, for instructions where only some cases are
handled directly (e.g. a constant shift amount) the known bits
invocation for the unhandled case is sometimes missing. This patch
adds the missing calls and thus removes the main discrepancy with
ExpensiveCombines mode.

Differential Revision: https://reviews.llvm.org/D75804
2020-03-07 18:16:41 +01:00
Stefanos Baziotis 01c48d7d11 [Attributor] Fold terminators before changing instructions to unreachable
It is possible that an instruction to be changed to unreachable is
in the same block with a terminator that can be constant-folded.
In this case, as of now, the instruction will be changed to
unreachable before the terminator is folded. But, then the
whole BB becomes invalidated and so when we go ahead to fold
the terminator, we trap.

Change the order of these two.

Differential Revision: https://reviews.llvm.org/D75780
2020-03-07 12:38:44 +02:00
Andrew Monshizadeh c5a06019d2 Extend TimeTrace to LLVM's new pass manager
With the addition of the LLD time tracing it made sense to include coverage
for LLVM's various passes. Doing so ensures that ThinLTO is also covered
with a time trace.

Before:
{F11333974}

After:
{F11333928}

Reviewed By: rnk

Differential Revision: https://reviews.llvm.org/D74516
2020-03-06 14:45:19 -08:00
Anna Thomas 59029b9eef [RS4GC] Handle uses of extractelement for conversion from vector to scalar base
As mentioned in the comments, extractelement is special
since we actually want a scalar base for that element we extracted from
the vector (i.e. not a vector base).
This same logic should apply to uses of the extractelement such as phis
and selects which have the same BDV as the extractelement.
However, for these uses we conservatively mark the BDV state as
conflict, since setting the EE's new base BDV does not always dominate
these uses.

Added testcase showcases the problem where the BDV identification chokes
on the incorrect cast from vector to scalar for the phi use of
extractelement.

Tests-Run: make check, internal fuzzer testing

Reviewers: reames, skatkov, dantrushin
Reviewed-By: dantrushin

Differential Revision: https://reviews.llvm.org/D75704
2020-03-06 16:28:49 -05:00
Roman Lebedev 1badf7c33a
[InstCombine] Forego of one-use check in `(X - (X & Y)) --> (X & ~Y)` if Y is a constant
Summary:
This is potentially more friendly for further optimizations and
analyses, e.g.: https://godbolt.org/z/G24anE

This resolves a phase-ordering bug that was introduced
in D75145 for https://godbolt.org/z/2gBwF2
https://godbolt.org/z/XvgSua
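A minimal IR sketch (values chosen arbitrarily): with Y a constant, the fold is still worthwhile even though the `and` has another use, because ~42 is just another constant (-43).

```llvm
define i32 @sub_of_and_with_constant(i32 %x, i32* %p) {
  %a = and i32 %x, 42
  store i32 %a, i32* %p        ; extra use of %a; the one-use check is skipped
  ; expected: %r = and i32 %x, -43  (i.e. %x & ~42)
  %r = sub i32 %x, %a
  ret i32 %r
}
```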

Reviewers: spatel, nikic, dmgreen, xbolva00

Reviewed By: nikic, xbolva00

Subscribers: hiraditya, zzheng, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75757
2020-03-06 21:39:07 +03:00
Jay Foad 11d1573bb6 [APFloat] Make use of new overloaded comparison operators. NFC.
Reviewers: ekatz, spatel, jfb, tlively, craig.topper, RKSimon, nikic, scanon

Subscribers: arsenm, jvesely, nhaehnle, hiraditya, dexonsmith, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75744
2020-03-06 16:42:53 +00:00
Fangrui Song 952ee0df9e ThinLTOBitcodeWriter: drop dso_local when a GlobalVariable is converted to a declaration
If we infer the dso_local flag for -fpic, dso_local should be dropped
when we convert a GlobalVariable to a declaration. dso_local causes the
generation of direct access (e.g. R_X86_64_PC32). Such relocations referencing
STB_GLOBAL STV_DEFAULT objects are not allowed in a -shared link.

Reviewed By: tejohnson

Differential Revision: https://reviews.llvm.org/D74749
2020-03-05 18:09:33 -08:00
Zhongduo Lin eae228a292 [IndVarSimplify] Extend previous special case for load use instruction to any narrow type loop variant to avoid extra trunc instruction
Summary:
The widenIVUse avoids generating a trunc by evaluating the use as an AddRec. This
will not work when:
   1) SCEV traces back to an instruction inside the loop that SCEV can not
expand, eg. add %indvar, (load %addr)
   2) SCEV finds a loop variant, eg. add %indvar, %loopvariant

While SCEV fails to avoid the trunc, we can still try to use an instruction
combining approach to prove the trunc is not required. This can be further
extended with other instruction combining checks, but for now we handle the
following case (sub can be "add" or "mul", "nsw + sext" can be "nuw + zext"):
```
Src:
  %c = sub nsw %b, %indvar
  %d = sext %c to i64
Dst:
  %indvar.ext1 = sext %indvar to i64
  %m = sext %b to i64
  %d = sub nsw i64 %m, %indvar.ext1
```
Therefore, as long as the result of the add/sub/mul is extended to the wide type
with the right extension and overflow-wrap combination, no trunc is required
regardless of how %b is generated. This pattern is common when calculating
addresses on 64-bit architectures.

Note that this patch reuses almost all the code from D49151 by @az:
https://reviews.llvm.org/D49151

It extends it by providing a proof of why the trunc is unnecessary in the more general case;
it should also resolve some of the concerns from the following discussion with @reames.

http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20180910/585945.html

Reviewers: sanjoy, efriedma, sebpop, reames, az, javed.absar, amehsan

Reviewed By: az, amehsan

Subscribers: hiraditya, llvm-commits, amehsan, reames, az

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73059
2020-03-05 16:27:59 -05:00
Hiroshi Yamauchi 76b9901fb1 [PGO][PGSO] Use IsColdXNthPercentile for sample PGO.
Summary:
This performs better for sample PGO.
NFC as PGSOColdCodeOnlyForSamplePGO is still true.

Reviewers: davidxl

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75550
2020-03-05 09:54:54 -08:00
Florian Hahn 40e7bfc424 [VPlan] Use consecutive numbers to print VPValues instead of addresses.
Currently when printing VPValues we use the object address, which makes
it hard to distinguish VPValues as they usually are large numbers with
varying distance between them.

This patch adds a simple slot tracker, similar to the ModuleSlotTracker
used for IR values. In order to dump a VPValue or anything containing a
VPValue, a slot tracker for the enclosing VPlan needs to be created. The
existing VPlanPrinter can take care of that for the existing code. We
assign consecutive numbers to each VPValue we encounter in a reverse
post order traversal of the VPlan.

Reviewers: rengolin, hsaito, fhahn, Ayal, dorit, gilr

Reviewed By: gilr

Differential Revision: https://reviews.llvm.org/D73078
2020-03-05 14:55:15 +00:00
Simon Pilgrim 01a91a6de7 Fix static analyzer uninitialized variable warning. NFCI. 2020-03-05 14:22:24 +00:00
Jun Ma b10deb9487 [Coroutines] Optimized coroutine elision based on reachability
Differential Revision: https://reviews.llvm.org/D75440
2020-03-05 14:43:50 +08:00
Sameer Sahasrabuddhe 42febbab91 StructurizeCFG: simplify phi nodes when possible
After structurization, some phi nodes can have a single incoming edge
and can be simplified away. This change runs a simplify query on all
phis that are either modified or added by the structurizer. This also
moves some phis closer to their use as a side benefit.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D75500
2020-03-05 10:33:15 +05:30
Guozhi Wei ee9a3eba76 [CodeGenPrepare] Handle ExtractValueInst in dupRetToEnableTailCallOpts
As the test case shows, if there is an ExtractValueInst in the Ret block, the function dupRetToEnableTailCallOpts can't duplicate it into the block containing the call, so no tail call is generated later in CodeGen.

This patch adds ExtractValueInst handling code to dupRetToEnableTailCallOpts and FoldReturnIntoUncondBranch, so that a tail call can be generated for this case.

Differential Revision: https://reviews.llvm.org/D74242
2020-03-04 11:10:32 -08:00
David Green 38e532278e [LSR] Add masked load and store handling
This teaches Loop Strength Reduction the details about masked load and
store address operands, so that it can have a better time optimising
them as it would for normal loads and stores.

Differential Revision: https://reviews.llvm.org/D75371
2020-03-04 18:36:10 +00:00
Nikita Popov 293d813020 [InstCombine] Don't explicitly invoke const folding in shift combine
InstCombine uses an IRBuilder that automatically performs
target-dependent constant folding, so explicitly invoking it here
is not necessary.
2020-03-04 18:33:00 +01:00
Nikita Popov 9b5de84e27 [InstCombine] Use IRBuilder to create bitcast
This makes sure that the constant expression bitcast goes through
target-dependent constant folding, and thus avoids an additional
iteration of InstCombine.
2020-03-04 18:28:38 +01:00
Nikita Popov 0e890cd4d4 [ConstantFolding] Always return something from ConstantFoldConstant
Spin-off from D75407. As described there, ConstantFoldConstant()
currently returns null for non-ConstantExpr/ConstantVector inputs,
but otherwise always returns non-null, independently of whether
any folding has happened or not.

This is confusing and makes consumer code more complicated.
I would expect either that ConstantFoldConstant() returns a value only if
it actually folded something, or that it always returns non-null.
I'm going with the latter possibility here, which appears to be more
useful considering existing usage.

Differential Revision: https://reviews.llvm.org/D75543
2020-03-04 18:24:47 +01:00
Sanjay Patel 71a316883d [PassManager] adjust VectorCombine placement
The initial placement of vector-combine in the opt pipeline revealed phase ordering bugs:
https://bugs.llvm.org/show_bug.cgi?id=45015
https://bugs.llvm.org/show_bug.cgi?id=42022

This patch contains a few independent changes:

1. Move the pass up in the pipeline, so it happens just after loop-vectorization.
   This is only to keep vectorization passes together in the pipeline at the moment.
   I don't have evidence of interaction between these yet.
2. Add an -early-cse pass after -vector-combine to clean up redundant ops. This was
   partly proposed as far back as rL219644 (which is why it's effectively being moved
   in the old PM code). This is important because the subsequent -instcombine doesn't
   work as well without EarlyCSE. With the CSE, -instcombine is able to squash
   shuffles together in 1 of the tests (because those are simple "select" shuffles).
3. Remove the -vector-combine pass that was running after SLP. We may want to do that
   eventually, but I don't have a test case to support it yet.

Differential Revision: https://reviews.llvm.org/D75145
2020-03-04 11:10:49 -05:00
Matt Arsenault f9047ede58 LICM: Reorder condition checks
Check the fast math flag before the more expensive loop check.
2020-03-03 17:15:57 -05:00
Brian Gesiak aa85b437a9 [Coroutines] Use dbg.declare for frame variables
Summary:
https://gist.github.com/modocache/ed7c62f6e570766c0f39b35dad675c2f
is an example of a small C++ program that uses C++20 coroutines that
is difficult to debug, due to the loss of debug info for variables that
"spill" across coroutine suspension boundaries. This patch addresses
that issue by inserting 'llvm.dbg.declare' intrinsics that point the
debugger to the variables' location at an offset to the coroutine frame.

With this patch, I confirmed that running the 'frame variable' commands in
https://gist.github.com/modocache/ed7c62f6e570766c0f39b35dad675c2f at
the specified breakpoints results in the correct values being printed
for coroutine frame variables 'i' and 'j' when using an lldb built from
trunk, as well as with gdb 8.3 (lldb 9.0.1, however, could not print the
values). The added test case also verifies this improved behavior.

The existing coro-debug.ll test case is also modified to reflect the
locations at which Clang actually places calls to 'dbg.declare', and
additional checks are added to ensure this patch works as intended in that
example as well.

Reviewers: vsk, jmorse, GorNishanov, lewissbaker, wenlei

Subscribers: EricWF, aprantl, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75338
2020-03-03 17:13:46 -05:00
Tyker c5ec8890c9 [NFC] Try fix ubsan buildbot after 876d133789 2020-03-03 17:53:02 +01:00
Tyker 876d133789 [AssumeBundles] Add API to fill a map from operand bundles of an llvm.assume.
Summary: This patch adds a new way to query the operand bundles of an llvm.assume that is much better suited to users like the Attributor that need to do many queries on the operand bundles of llvm.assume. Some modifications of the IR, like replaceAllUsesWith, can cause information in the map to become outdated, so this API is better suited to analysis passes and passes that don't make modifications that could invalidate the map.

Reviewers: jdoerfert, sstefan1, uenoku

Reviewed By: jdoerfert

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75020
2020-03-03 14:22:52 +01:00
Florian Hahn 05afa55521 [VPlan] Add getPlan() to VPBlockBase.
This patch adds a getPlan accessor to VPBlockBase, which finds the entry
block of the plan containing the block and returns the plan set for this
block.

VPBlockBase contains a VPlan pointer, but it should only be set for
the entry block of a plan. This allows moving blocks without updating
the pointer for each moved block and in the future we might introduce a
parent relationship between plans and blocks, similar to the one in LLVM IR.

Reviewers: rengolin, hsaito, fhahn, Ayal, dorit, gilr

Reviewed By: gilr

Differential Revision: https://reviews.llvm.org/D74445
2020-03-03 13:20:13 +00:00
David Green ec7e4a9a80 [LoopVectorizer] Add reduction tests for inloop reductions. NFC
Also adds a force-reduction-intrinsics option for testing, for forcing
the generation of reduction intrinsics even when the backend is not
requesting them.
2020-03-03 10:54:00 +00:00
Alok Kumar Sharma 6f029dadf6 [DebugInfo] Avoid generating duplicate llvm.dbg.value
Summary:
This is to avoid generating a duplicate llvm.dbg.value intrinsic if one already exists after the Instruction.

Before inserting an llvm.dbg.value instruction, LLVM checks if the same instruction is already present before the instruction to avoid duplicates.
Currently it fails to check whether it already exists after the instruction.
flang generates IR like this.

%4 = load i32, i32* %i1_311, align 4, !dbg !42
call void @llvm.dbg.value(metadata i32 %4, metadata !35, metadata !DIExpression()), !dbg !33
When this IR is processed in llvm, it ends up inserting duplicates.
%4 = load i32, i32* %i1_311, align 4, !dbg !42
call void @llvm.dbg.value(metadata i32 %4, metadata !35, metadata !DIExpression()), !dbg !33
call void @llvm.dbg.value(metadata i32 %4, metadata !35, metadata !DIExpression()), !dbg !33
We have now updated LdStHasDebugValue to include the cases when the instruction is already
followed by the same dbg.value instruction we intend to insert.

Now,

Definition and usage of function LdStHasDebugValue are deleted.
RemoveRedundantDbgInstrs is called for the cleanup of duplicate dbg.value's

Testing:
Added unit test for validation
check-llvm
check-debuginfo (the debug info integration tests)

Reviewers: aprantl, probinson, dblaikie, jmorse, jini.susan.george
SouraVX, awpandey, dstenb, vsk

Reviewed By: aprantl, jmorse, dstenb, vsk

Differential Revision: https://reviews.llvm.org/D74030
2020-03-03 09:56:45 +05:30
Juneyoung Lee 9f1f244d3c [LICM] Allow freeze to hoist/sink out of a loop
Summary: This patch allows LICM to hoist/sink freeze instructions out of a loop.
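A small IR sketch of the hoisting case (example and names are my own): %fr depends only on the loop-invariant %x, so LICM may now move the freeze into the preheader.

```llvm
declare void @use(i32)

define void @hoist_freeze(i32 %x, i64 %n) {
entry:
  br label %loop
loop:
  %i = phi i64 [ 0, %entry ], [ %i.next, %loop ]
  ; loop-invariant: can be hoisted to the preheader
  %fr = freeze i32 %x
  call void @use(i32 %fr)
  %i.next = add i64 %i, 1
  %cont = icmp ult i64 %i.next, %n
  br i1 %cont, label %loop, label %exit
exit:
  ret void
}
```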

Reviewers: reames, fhahn, efriedma

Reviewed By: reames

Subscribers: jfb, lebedev.ri, hiraditya, asbirlea, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75400
2020-03-03 12:29:39 +09:00
Sumanth Gundapaneni 9897daa6bf Update LSR's logic that identifies a post-increment SCEV value.
One of the checks has been removed as it seems invalid.
The LoopStep size is almost always 32-bit.

Differential Revision: https://reviews.llvm.org/D75079
2020-03-02 16:34:18 -06:00
Teresa Johnson 80bf137fa1 Revert "Restore "[WPD/LowerTypeTests] Delay lowering/removal of type tests until after ICP""
This reverts commit 80d0a137a5, and the
follow on fix in 873c0d0786. It is
causing test failures after a multi-stage clang bootstrap. See
discussion on D73242 and D75201.
2020-03-02 14:02:13 -08:00
Teresa Johnson 873c0d0786 [ThinLTO/LowerTypeTests] Handle unpromoted local type ids
Summary:
Fixes an issue that cropped up after the changes in D73242 to delay
the lowering of type tests. LTT couldn't handle any type tests with
non-string type id (which happens for local vtables, which we try to
promote during the compile step but cannot always when there are no
exported symbols).

We can simply treat these the same as having an Unknown resolution, which
delays their lowering, still allowing such type tests to be used in
subsequent optimization (e.g. planned usage during ICP). The final
lowering, which simply removes these, handles them fine.

Beefed up an existing ThinLTO test for such unpromoted type ids so that
the internal vtable isn't removed before lower type tests, which hides
the problem.

Reviewers: evgeny777, pcc

Subscribers: inglorion, hiraditya, steven_wu, dexonsmith, aganea, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75201
2020-03-02 09:31:44 -08:00
Arkady Shlykov 3dcaf296ae [Loop Peeling] Add possibility to enable peeling on loop nests.
Summary:
The current peeling implementation bails out in the case of loop nests.
The patch introduces a field in the TargetTransformInfo structure that
certain targets can use to relax the constraints if it's
profitable (disabled by default).
An additional option is also added to enable peeling manually for
experimentation and testing purposes.

Reviewers: fhahn, lebedev.ri, xbolva00

Reviewed By: xbolva00

Subscribers: RKSimon, xbolva00, hiraditya, zzheng, llvm-commits

Differential Revision: https://reviews.llvm.org/D70304
2020-03-02 08:37:11 -08:00
David Green d0d38df091 [LoopVectorizer] Change types of lists from pointers to references. NFC
getReductionVars, getInductionVars and getFirstOrderRecurrences were all
being returned from LoopVectorizationLegality as pointers to lists. This
just changes them to be references, cleaning up the interface slightly.

Differential Revision: https://reviews.llvm.org/D75448
2020-03-02 15:04:41 +00:00
Reid Kleckner 1adbe86d87 [WinEH] Fix inttoptr+phi optimization in presence of catchswitch
getFirstInsertionPt's return value must be checked for validity before
casting it to Instruction*. Don't attempt to insert casts after a phi in
a catchswitch block.

Fixes PR45033, introduced in D37832.

Reviewed By: davidxl, hfinkel

Differential Revision: https://reviews.llvm.org/D75381
2020-03-01 07:49:28 -08:00
Juneyoung Lee 5cbb265694 [GVN] Fold equivalent freeze instructions
Summary:
This patch defines two freeze instructions to have the same value number if they are equivalent.

This is allowed because GVN replaces all uses of a duplicated instruction with another.

If it partially rewrites uses, it is not allowed, e.g.:

```
a = freeze(x)
b = freeze(x)
use(a)
use(a)
use(b)
=>
use(a)
use(b) // This is not allowed!
use(b)
```

Reviewers: fhahn, reames, spatel, efriedma

Reviewed By: fhahn

Subscribers: lebedev.ri, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75398
2020-03-01 07:32:05 +09:00
Simon Pilgrim 7e9747b50b [X86][F16C] Remove cvtph2ps intrinsics and use generic half2float conversion (PR37554)
This removes everything but int_x86_avx512_mask_vcvtph2ps_512 which provides the SAE variant, but even this can use the fpext generic if the rounding control is the default.

Differential Revision: https://reviews.llvm.org/D75162
2020-02-29 18:57:35 +00:00
Vedant Kumar dd1ea9de2e Reland: [Coverage] Revise format to reduce binary size
Try again with an up-to-date version of D69471 (99317124 was a stale
revision).

---

Revise the coverage mapping format to reduce binary size by:

1. Naming function records and marking them `linkonce_odr`, and
2. Compressing filenames.

This shrinks the size of llc's coverage segment by 82% (334MB -> 62MB)
and speeds up end-to-end single-threaded report generation by 10%. For
reference the compressed name data in llc is 81MB (__llvm_prf_names).

Rationale for changes to the format:

- With the current format, most coverage function records are discarded.
  E.g., more than 97% of the records in llc are *duplicate* placeholders
  for functions visible-but-not-used in TUs. Placeholders *are* used to
  show under-covered functions, but duplicate placeholders waste space.

- We reached general consensus about giving (1) a try at the 2017 code
  coverage BoF [1]. The thinking was that using `linkonce_odr` to merge
  duplicates is simpler than alternatives like teaching build systems
  about a coverage-aware database/module/etc on the side.

- Revising the format is expensive due to the backwards compatibility
  requirement, so we might as well compress filenames while we're at it.
  This shrinks the encoded filenames in llc by 86% (12MB -> 1.6MB).

See CoverageMappingFormat.rst for the details on what exactly has
changed.

Fixes PR34533 [2], hopefully.

[1] http://lists.llvm.org/pipermail/llvm-dev/2017-October/118428.html
[2] https://bugs.llvm.org/show_bug.cgi?id=34533

Differential Revision: https://reviews.llvm.org/D69471
2020-02-28 18:12:04 -08:00
Vedant Kumar 3388871714 Revert "[Coverage] Revise format to reduce binary size"
This reverts commit 99317124e1. This is
still busted on Windows:

http://lab.llvm.org:8011/builders/lld-x86_64-win7/builds/40873

The llvm-cov tests report 'error: Could not load coverage information'.
2020-02-28 18:03:15 -08:00
Vedant Kumar 99317124e1 [Coverage] Revise format to reduce binary size
Revise the coverage mapping format to reduce binary size by:

1. Naming function records and marking them `linkonce_odr`, and
2. Compressing filenames.

This shrinks the size of llc's coverage segment by 82% (334MB -> 62MB)
and speeds up end-to-end single-threaded report generation by 10%. For
reference the compressed name data in llc is 81MB (__llvm_prf_names).

Rationale for changes to the format:

- With the current format, most coverage function records are discarded.
  E.g., more than 97% of the records in llc are *duplicate* placeholders
  for functions visible-but-not-used in TUs. Placeholders *are* used to
  show under-covered functions, but duplicate placeholders waste space.

- We reached general consensus about giving (1) a try at the 2017 code
  coverage BoF [1]. The thinking was that using `linkonce_odr` to merge
  duplicates is simpler than alternatives like teaching build systems
  about a coverage-aware database/module/etc on the side.

- Revising the format is expensive due to the backwards compatibility
  requirement, so we might as well compress filenames while we're at it.
  This shrinks the encoded filenames in llc by 86% (12MB -> 1.6MB).

See CoverageMappingFormat.rst for the details on what exactly has
changed.

Fixes PR34533 [2], hopefully.

[1] http://lists.llvm.org/pipermail/llvm-dev/2017-October/118428.html
[2] https://bugs.llvm.org/show_bug.cgi?id=34533

Differential Revision: https://reviews.llvm.org/D69471
2020-02-28 17:33:25 -08:00
Matt Morehouse 30bb737a75 [DFSan] Add __dfsan_cmp_callback.
Summary:
When -dfsan-event-callbacks is specified, insert a call to
__dfsan_cmp_callback on every CMP instruction.

Reviewers: vitalybuka, pcc, kcc

Reviewed By: kcc

Subscribers: hiraditya, #sanitizers, eugenis, llvm-commits

Tags: #sanitizers, #llvm

Differential Revision: https://reviews.llvm.org/D75389
2020-02-28 15:49:44 -08:00
Matt Morehouse f668baa459 [DFSan] Add __dfsan_mem_transfer_callback.
Summary:
When -dfsan-event-callbacks is specified, insert a call to
__dfsan_mem_transfer_callback on every memcpy and memmove.

Reviewers: vitalybuka, kcc, pcc

Reviewed By: kcc

Subscribers: eugenis, hiraditya, #sanitizers, llvm-commits

Tags: #sanitizers, #llvm

Differential Revision: https://reviews.llvm.org/D75386
2020-02-28 15:48:25 -08:00
Matt Morehouse 52f889abec [DFSan] Add __dfsan_load_callback.
Summary:
When -dfsan-event-callbacks is specified, insert a call to
__dfsan_load_callback() on every load.

Reviewers: vitalybuka, pcc, kcc

Reviewed By: vitalybuka, kcc

Subscribers: hiraditya, #sanitizers, llvm-commits, eugenis, kcc

Tags: #sanitizers, #llvm

Differential Revision: https://reviews.llvm.org/D75363
2020-02-28 14:26:09 -08:00
Austin Kerbow 4fa63fd452 [VectorCombine] Fix assert on compare extract index
Extract index could be a differnet integral type.

Differential Revision: https://reviews.llvm.org/D75327
2020-02-28 10:37:08 -08:00
Valery N Dmitriev d723ec4f04 [SLP][NFC] Assert that tree entry operands completed when scheduler looks for dependencies.
This change adds an assertion to prevent a tricky bug related to the recursive
approach of building the vectorization tree. The for-loop below takes the number of
operands directly from the tree entry rather than from the scalars.
If the entry at this moment turns out to be incomplete (i.e. not all operands are set),
then not all the dependencies will be seen by the scheduler.
This can lead to failed scheduling (and thus failed vectorization)
for a perfectly vectorizable tree.
Here is a code example which is likely to fire the assertion:
for (i : VL0->getNumOperands()) {
  ...
  TE->setOperand(i, Operands);
  buildTree_rec(Operands, Depth + 1,...);
}

The correct way is a two-step process: first set all operands on a tree entry
and then recursively process each operand.

Differential Revision: https://reviews.llvm.org/D75296
2020-02-28 10:34:48 -08:00
Hiroshi Yamauchi f16d2bec40 Devirtualize a call on alloca without waiting for post inline cleanup and next DevirtSCCRepeatedPass iteration.
This aims to fix a missed inlining case.

If there's a virtual call in the callee on an alloca (stack allocated object) in
the caller, and the callee is inlined into the caller, the post-inline cleanup
would devirtualize the virtual call, but if the next iteration of
DevirtSCCRepeatedPass doesn't happen (under the new pass manager), which is
based on a heuristic to determine whether to reiterate, we may miss inlining the
devirtualized call.

This enables inlining in clang/test/CodeGenCXX/member-function-pointer-calls.cpp.

This is a second commit after a revert
https://reviews.llvm.org/rG4569b3a86f8a4b1b8ad28fe2321f936f9d7ffd43 and a fix
https://reviews.llvm.org/rG41e06ae7ba91.

Differential Revision: https://reviews.llvm.org/D69591
2020-02-28 09:43:32 -08:00
Hiroshi Yamauchi 41e06ae7ba [CallPromotionUtils] Add missing promotion legality check to tryPromoteCall.
Summary: This fixes the crash that led to the revert of D69591.

Reviewers: davidxl

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75307
2020-02-28 09:35:09 -08:00
Valery N Dmitriev 02e5e47e17 [SLP][NFC] Delete some unreachable code.
This patch deletes some dead code out of SLP vectorizer.
Couple of changes taken out of D57059 to slightly lighten it
plus one more similar case fixed.

Differential Revision: https://reviews.llvm.org/D75276
2020-02-28 09:22:51 -08:00
Teresa Johnson f9ca75f19b [Inliner] Inlining should honor nobuiltin attributes
Summary:
Final patch in series to fix inlining between functions with different
nobuiltin attributes/options, which was specifically an issue in LTO.
See discussion on D61634 for background.

The prior patch in this series (D67923) enabled per-Function TLI
construction that identified the nobuiltin attributes.

Here I have allowed inlining to proceed if the callee's nobuiltins are a
subset of the caller's nobuiltins, but not in the reverse case, which
should be conservatively correct. This is controlled by a new option,
-inline-caller-superset-nobuiltin, which is enabled by default.

Reviewers: hfinkel, gchatelet, chandlerc, davidxl

Subscribers: arsenm, jvesely, nhaehnle, mehdi_amini, eraman, hiraditya, haicheng, dexonsmith, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74162
2020-02-28 07:34:14 -08:00
Pierre-vh 2809abbd98 [Transform][MemCpyOpt] Add missing DebugLoc to %tmpbitcast
Fix for https://bugs.llvm.org/show_bug.cgi?id=37967

Differential Revision: https://reviews.llvm.org/D75173
2020-02-28 15:20:51 +00:00
Juneyoung Lee cc28a75467 Let EarlyCSE fold equivalent freeze instructions
Summary:
This patch makes EarlyCSE fold equivalent freeze instructions.

Another optimization that I think will be useful is to remove freeze if its operand is used as a branch condition or at llvm.assume:

```
  %c = ...
  br i1 %c, label %A, ..
A:
  %d = freeze %c ; %d can be optimized to %c because %c cannot be poison or undef (or 'br %c' would be UB otherwise)
```

If it make sense for EarlyCSE to support this as well, I will make a patch for this.

Reviewers: spatel, reames, lebedev.ri

Reviewed By: lebedev.ri

Subscribers: lebedev.ri, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75334
2020-02-28 20:35:20 +09:00
Hans Wennborg d48c981697 SROA: Don't drop atomic load/store alignments (PR45010)
SROA will drop the explicit alignment on allocas when the ABI guarantees
enough alignment. Because the alignment on new load/store instructions
is set based on the alloca's alignment, SROA would end up
dropping the alignment from atomic loads and stores, which is not
allowed (see bug). For those, make sure to always carry over the
alignment from the previous instruction.
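
A rough illustration of the failure mode (assuming a 32-bit target such as i686, where the ABI alignment of i64 is only 4 bytes): the explicit `align 8` on the atomic operations must not be dropped when SROA rewrites the alloca.

```
define i64 @atomic_roundtrip(i64 %v) {
  %a = alloca i64, align 8
  store atomic i64 %v, i64* %a seq_cst, align 8
  %l = load atomic i64, i64* %a seq_cst, align 8
  ret i64 %l
}
```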

Differential revision: https://reviews.llvm.org/D75266
2020-02-28 10:38:40 +01:00
Jun Ma 43c8307c6c [Coroutines] CoroElide enhancement
Fix a regression in the CoroElide pass when the current function is
a coroutine.

Differential Revision: https://reviews.llvm.org/D71663
2020-02-28 10:41:59 +08:00
Juneyoung Lee 2b5a897651 Revert "[SimpleLoopUnswitch] Fix introduction of UB when hoisted condition may be undef or poison"
.. due to performance regression.

This patch is reverted until infrastructure for CSE/LICM support for freeze is
added.

This reverts commit 181628b
2020-02-28 11:10:46 +09:00
Eli Friedman b299926453 [IndVars] Fix sort comparator.
std::sort will compare an element to itself in some cases.  We should
not crash if this happens.

Differential Revision: https://reviews.llvm.org/D75000
2020-02-27 17:25:18 -08:00
Matt Morehouse 470db54cbd [DFSan] Add flag to insert event callbacks.
Summary:
For now just insert the callback for stores, similar to how MSan tracks
origins.  In the future we may want to add callbacks for loads, memcpy,
function calls, CMPs, etc.

Reviewers: pcc, vitalybuka, kcc, eugenis

Reviewed By: vitalybuka, kcc, eugenis

Subscribers: eugenis, hiraditya, #sanitizers, llvm-commits, kcc

Tags: #sanitizers, #llvm

Differential Revision: https://reviews.llvm.org/D75312
2020-02-27 17:14:19 -08:00
Matt Morehouse 2a29617b9d [DFSan] Remove unused IRBuilder. NFC
Reviewers: pcc, vitalybuka, kcc

Reviewed By: kcc

Subscribers: hiraditya, llvm-commits, kcc

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75190
2020-02-27 16:27:20 -08:00
Artur Pilipenko 02e3d5c3a2 Fix DSE miscompile when store is clobbered across loop iterations
DSE would mistakenly remove store (2):

  a = calloc(n+1)
  for (int i = 0; i < n; i++) {
    store 1, a[i+1] // (1)
    store 0, a[i]   // (2)
  }

The fix is to do PHI translation while looking for clobbering
instructions between the store and the calloc.

Reviewed By: efriedma, bjope

Differential Revision: https://reviews.llvm.org/D68006
2020-02-27 14:43:01 -08:00
Nikita Popov 4ef272ec9c [InstCombine] DCE instructions earlier
When InstCombine initially populates the worklist, it already
performs constant folding and DCE. However, as the instructions
are initially visited in program order, this DCE can pick up only
the last instruction of a dead chain, the rest would only get
picked up in the main InstCombine run.

To avoid this, we instead perform the DCE in a separate pass over the
collected instructions in reverse order, which will allow us to
pick up full dead instruction chains. We already need to do this
reverse iteration anyway to populate the worklist, so this
shouldn't add extra cost.
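
For illustration (a hypothetical function, not from the patch): when visiting in program order, only %c is trivially dead at the time it is seen; removing it makes %b and then %a dead, which the reverse scan now picks up in one go.

```
define void @dead_chain(i32 %x) {
  %a = add i32 %x, 1
  %b = mul i32 %a, 2
  %c = sub i32 %b, 3
  ret void
}
```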

This by itself only fixes a small part of the problem though:
The same basic issue also applies during the main InstCombine loop.
We generally always want DCE to occur as early as possible,
because it will allow one-use folds to happen. Address this by also
performing DCE while adding deferred instructions to the main worklist.

This drops the number of tests that perform more than 2 InstCombine
iterations from ~80 to ~40. There are some spurious test changes due
to operand order / icmp toggling.

Differential Revision: https://reviews.llvm.org/D75008
2020-02-27 18:45:59 +01:00
Simon Moll ddd11273d9 Remove BinaryOperator::CreateFNeg
Use UnaryOperator::CreateFNeg instead.

Summary:
With the introduction of the native fneg instruction, the
fsub -0.0, %x idiom is obsolete. This patch makes LLVM
emit fneg instead of the idiom in all places.
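
For reference, the two forms in IR (a sketch, not taken from the patch's tests):

```
; Old idiom:
define float @fneg_idiom(float %x) {
  %r = fsub float -0.000000e+00, %x
  ret float %r
}

; What is emitted now:
define float @native_fneg(float %x) {
  %r = fneg float %x
  ret float %r
}
```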

Reviewed By: cameron.mcinally

Differential Revision: https://reviews.llvm.org/D75130
2020-02-27 09:06:03 -08:00
Pierre-vh f64e457cb7 [Transforms][Debugify] Ignore PHI nodes when checking for DebugLocs
Fix for: https://bugs.llvm.org/show_bug.cgi?id=37964

Differential Revision: https://reviews.llvm.org/D75242
2020-02-27 16:14:11 +00:00
Kirill Bobyrev 4569b3a86f
Revert "Devirtualize a call on alloca without waiting for post inline cleanup and next"
This reverts commit 59fb9cde7a.

The patch caused internal miscompilations.
2020-02-27 15:58:39 +01:00
Jay Foad f41e82c82c [InstCombine] Fix confusing variable name. 2020-02-27 11:27:49 +00:00
Nemanja Ivanovic c46b85aaf4 [LoopVectorize] Fix cost for calls to functions that have vector versions
A recent commit
(https://reviews.llvm.org/rG66c120f02560ef528a60924104ead66f330190f1) changed
the cost for calls to functions that have a vector version for some
vectorization factor. However, no check is performed for whether the
vectorization factor matches the current one being cost modeled. This leads to
attempts to widen call instructions to a vectorization factor for which such a
function does not exist, which in turn leads to an assertion failure.

This patch adds the check for vectorization factor (i.e. not just that the
called function has a vector version for some VF, but that it has a vector
version for this VF).

Differential revision: https://reviews.llvm.org/D74944
2020-02-26 21:39:11 -06:00
Sanjay Patel 25c6544f32 [VectorCombine] add a debug flag to skip all transforms
As suggested in D75145 -

I'm not sure why, but several passes have this kind of disable/enable flag
implemented at the pass manager level. But that means we have to duplicate
the flag for both pass managers and add code to check the flag every time
the pass appears in the pipeline.

We want a debug option to see if this pass is misbehaving regardless of the
pass managers, so just add a disablement check at the single point before
any transforms run.

Differential Revision: https://reviews.llvm.org/D75204
2020-02-26 15:15:42 -05:00
Nikita Popov 00f54050f7 [SimpleLoopUnswitch] Remove unnecessary include; NFC 2020-02-26 20:40:43 +01:00
Nikita Popov 9d9633fb70 [CVP] Simplify cmp of local phi node
CVP currently does not simplify cmps with instructions in the same
block, because LVI getPredicateAt() currently does not provide
much useful information for that case (D69686 would change that,
but is stuck.) However, if the instruction is a Phi node, then
LVI can compute the result of the predicate by threading it into
the predecessor blocks, which allows it simplify some conditions
that nothing else can handle. Relevant code:
6d6a4590c5/llvm/lib/Analysis/LazyValueInfo.cpp (L1904-L1927)
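
A hedged example of the kind of comparison this enables (names are made up): the predicate on %p can be resolved by evaluating it against each incoming value in the predecessors.

```
define i1 @cmp_of_local_phi(i1 %c) {
entry:
  br i1 %c, label %a, label %b
a:
  br label %merge
b:
  br label %merge
merge:
  %p = phi i32 [ 1, %a ], [ 2, %b ]
  %cmp = icmp slt i32 %p, 10   ; foldable to true via the predecessors
  ret i1 %cmp
}
```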

Differential Revision: https://reviews.llvm.org/D72169
2020-02-26 20:36:41 +01:00
Nikita Popov 7da3b5e45c [InstCombine] Simplify DCE code; NFC
As pointed out on D75008, MadeIRChange is already set by
eraseInstFromFunction(), which also already does a debug print.
2020-02-26 20:33:00 +01:00
Nikita Popov 56f7de5baa [InstCombine] Remove trivially empty ranges from end
InstCombine removes pairs of start+end intrinsics that don't
have anything in between them. Currently this is done by starting
at the start intrinsic and scanning forwards. This patch changes
it to start at the end intrinsic and scan backwards.

The motivation here is as follows: When we process the start
intrinsic, we have not yet looked at the following instructions,
which may still get folded/removed. If they do, we will only be
able to remove the start/end pair on the next iteration. When we
process the end intrinsic, all the instructions before it have
already been visited, and we don't run into this problem.
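
The kind of pair being removed, as a small hand-written sketch (assuming the usual lifetime intrinsics):

```
declare void @llvm.lifetime.start.p0i8(i64, i8* nocapture)
declare void @llvm.lifetime.end.p0i8(i64, i8* nocapture)

define void @empty_range() {
  %a = alloca [4 x i8]
  %p = bitcast [4 x i8]* %a to i8*
  ; Nothing between the start and the end, so both calls can be removed.
  call void @llvm.lifetime.start.p0i8(i64 4, i8* %p)
  call void @llvm.lifetime.end.p0i8(i64 4, i8* %p)
  ret void
}
```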

Differential Revision: https://reviews.llvm.org/D75011
2020-02-26 20:04:11 +01:00
Hiroshi Yamauchi 59fb9cde7a Devirtualize a call on alloca without waiting for post inline cleanup and next
DevirtSCCRepeatedPass iteration.

This aims to fix a missed inlining case.

If there's a virtual call in the callee on an alloca (stack allocated object) in
the caller, and the callee is inlined into the caller, the post-inline cleanup
would devirtualize the virtual call, but if the next iteration of
DevirtSCCRepeatedPass doesn't happen (under the new pass manager), which is
based on a heuristic to determine whether to reiterate, we may miss inlining the
devirtualized call.

This enables inlining in clang/test/CodeGenCXX/member-function-pointer-calls.cpp.
2020-02-26 09:51:24 -08:00
Reid Kleckner 465dca79b3 Avoid SmallString.h include in MD5.h, NFC
Saves 200 includes, which is mostly immaterial.
2020-02-26 09:10:24 -08:00
Hans Wennborg 546918cbb4 Revert "[compiler-rt] Add a critical section when flushing gcov counters"
See discussion on PR44792.

This reverts commit 02ce9d8ef5.

It also reverts the follow-up commits
8f46269f0 "[profile] Don't dump counters when forking and don't reset when calling exec** functions"
62c7d8402 "[profile] gcov_mutex must be static"
2020-02-26 13:27:44 +01:00
Juneyoung Lee 1cb7ec870d [SimpleLoopUnswitch] Canonicalize variable names 2020-02-26 15:33:02 +09:00
Juneyoung Lee 181628b52d [SimpleLoopUnswitch] Fix introduction of UB when hoisted condition may be undef or poison
Summary:
Loop unswitch hoists branches on loop-invariant conditions. However, if this
condition is poison/undef and the branch wasn't originally reachable, loop
unswitch introduces UB (since the optimized code will branch on poison/undef and
the original one didn't).
We fix this problem by freezing the condition to ensure we don't introduce UB.

We will now transform the following:
  while (...) {
    if (C) { A }
    else   { B }
  }

Into:
  C' = freeze(C)
  if (C') {
    while (...) { A }
  } else {
    while (...) { B }
  }

This patch fixes the root cause of the following bug reports (which use the old loop unswitch, but can be reproduced with minor changes in the code and -enable-nontrivial-unswitch):
- https://llvm.org/bugs/show_bug.cgi?id=27506
- https://llvm.org/bugs/show_bug.cgi?id=31652

Reviewers: reames, majnemer, chenli, sanjoy, hfinkel

Reviewed By: reames

Subscribers: hiraditya, jvesely, nhaehnle, filcab, regehr, trentxintong, nlopes, llvm-commits, mzolotukhin

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D29015
2020-02-26 13:47:33 +09:00
Johannes Doerfert 396b725394 [OpenMP][Opt] Combine `struct ident_t*` during deduplication
If we deduplicate OpenMP runtime calls we have multiple `ident_t*` that
represent information like source location. So far, we simply kept the
one used by the replacement call. However, as exposed by PR44893, that
can cause problems if we have stack allocated `ident_t` objects. While
we need to revisit the use of these as well, it is clear that we
eventually want to merge source location information in some way. With
this patch we add the infrastructure to do so but without doing the
actual merge. Instead we pick a global `ident_t` from the replaced
calls, if possible, or create a new one with an unknown location
instead.

Reviewed By: JonChesterfield

Differential Revision: https://reviews.llvm.org/D74925
2020-02-25 14:07:14 -08:00
Akira Hatanaka 430512ed7d [ObjC][ARC] Don't move a retain call living outside a loop into the loop
body

We started seeing cases where the ARC optimizer would move retain calls into
loop bodies, causing an imbalance in the number of retain and release
calls. This began after changes were made to delete inert ARC calls, since
the inert calls that used to block code motion are gone.

Fix the bug by setting the CFG hazard flag when visiting a loop header.

rdar://problem/56908836
2020-02-25 13:00:10 -08:00
Roman Lebedev 400ceda425
[SCEV][IndVars] Always provide insertion point to the SCEVExpander::isHighCostExpansion()
Summary: This addresses the `llvm/test/Transforms/IndVarSimplify/elim-extend.ll` `@nestedIV` regression from D73728

Reviewers: reames, mkazantsev, wmi, sanjoy

Reviewed By: mkazantsev

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73777
2020-02-25 23:05:59 +03:00
Roman Lebedev 44edc6fd2c
[SCEV] rewriteLoopExitValues(): even if have hard uses, still rewrite if cheap (PR44668)
Summary:
Replacing uses of the IV outside of the loop is likely generally useful,
but `rewriteLoopExitValues()` is cautious: if it isn't told to always
perform the replacement and there are hard uses of the IV in the loop,
it doesn't replace.

In [[ https://bugs.llvm.org/show_bug.cgi?id=44668 | PR44668 ]],
that prevents `-indvars` from replacing uses of induction variable
after the loop, which might be one of the optimization failures
preventing that code from being vectorized.

Instead, now that the cost model is fixed, I believe we should be
a little bit more optimistic, and also perform the replacement
if we believe it is within our budget.

Fixes [[ https://bugs.llvm.org/show_bug.cgi?id=44668 | PR44668 ]].

Reviewers: reames, mkazantsev, asbirlea, fhahn, skatkov

Reviewed By: mkazantsev

Subscribers: nikic, hiraditya, zzheng, javed.absar, dmgreen, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73501
2020-02-25 23:05:59 +03:00
Roman Lebedev b99c91a087
[NFC][SCEV] Piping to pass new SCEVCheapExpansionBudget option into SCEVExpander::isHighCostExpansionHelper()
Summary:
In future patches, `SCEVExpander::isHighCostExpansionHelper()` will respect the allocated budget by performing TTI cost modelling.
This is a fully NFC patch to make things reviewable.

Reviewers: reames, mkazantsev, wmi, sanjoy

Reviewed By: mkazantsev

Subscribers: hiraditya, zzheng, javed.absar, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73705
2020-02-25 23:05:57 +03:00
Roman Lebedev 0789f28048
[NFC][SCEV] Piping to pass TTI into SCEVExpander::isHighCostExpansionHelper()
Summary:
Future patches will make use of TTI to perform cost-model-driven `SCEVExpander::isHighCostExpansionHelper()`
This is a fully NFC patch to make things reviewable.

Reviewers: reames, mkazantsev, wmi, sanjoy

Reviewed By: mkazantsev

Subscribers: hiraditya, zzheng, javed.absar, dmgreen, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73704
2020-02-25 23:05:56 +03:00
Philip Reames 14845b2c45 Revert "[LICM] Support hosting of dynamic allocas out of loops"
This reverts commit 8d22100f66.

There was a functional regression reported (https://bugs.llvm.org/show_bug.cgi?id=44996).  I'm not actually sure the patch is wrong, but I don't have time to investigate currently, and this line of work isn't something I'm likely to get back to quickly.
2020-02-25 09:05:31 -08:00
Roman Lebedev 2855c8fed9
[InstCombine] foldShiftIntoShiftInAnotherHandOfAndInICmp(): fix miscompile (PR44802)
Much like with reassociateShiftAmtsOfTwoSameDirectionShifts(),
as input, we have the following pattern:
  icmp eq/ne (and ((x shift Q), (y oppositeshift K))), 0
We want to rewrite that as:
  icmp eq/ne (and (x shift (Q+K)), y), 0  iff (Q+K) u< bitwidth(x)

While we know that originally (Q+K) would not overflow
(because 2 * (N-1) u<= iN - 1), we may have looked past extensions of
shift amounts, so it may now overflow in a smaller bitwidth.

To ensure that does not happen, we need to ensure that the total maximal
shift amount is still representable in that smaller bitwidth.
If the overflow would happen, (Q+K) u< bitwidth(x) check would be bogus.

https://bugs.llvm.org/show_bug.cgi?id=44802
2020-02-25 18:23:58 +03:00
Roman Lebedev 781d077afb
[InstCombine] reassociateShiftAmtsOfTwoSameDirectionShifts(): fix miscompile (PR44802)
As input, we have the following pattern:
  Sh0 (Sh1 X, Q), K
We want to rewrite that as:
  Sh x, (Q+K)  iff (Q+K) u< bitwidth(x)
While we know that originally (Q+K) would not overflow
(because 2 * (N-1) u<= iN - 1), we may have looked past extensions of
shift amounts, so it may now overflow in a smaller bitwidth.

To ensure that does not happen, we need to ensure that the total maximal
shift amount is still representable in that smaller bitwidth.
If the overflow would happen, (Q+K) u< bitwidth(x) check would be bogus.

https://bugs.llvm.org/show_bug.cgi?id=44802
2020-02-25 18:23:51 +03:00
Sanjay Patel 10ea01d80d [VectorCombine] make cost calc consistent for binops and cmps
Code duplication (subsequently removed by refactoring) allowed
a logic discrepancy to creep in here.

We were being conservative about creating a vector binop -- but
not a vector cmp -- in the case where a vector op has the same
estimated cost as the scalar op. We want to be more aggressive
here because that can allow other combines based on reduced
instruction count/uses.

We can reverse the transform in DAGCombiner (potentially with a
more accurate cost model) if this causes regressions.

AFAIK, this does not conflict with InstCombine. We have a
scalarize transform there, but it relies on finding a constant
operand or a matching insertelement, so that means it eliminates
an extractelement from the sequence (so we won't have 2 extracts
by the time we get here if InstCombine succeeds).

Differential Revision: https://reviews.llvm.org/D75062
2020-02-25 08:41:59 -05:00
Florian Hahn b8d638d337 [DSE,MSSA] Do not attempt to remove un-removable memdefs.
We have to skip MemoryDefs that cannot be removed. This fixes a crash in
the newly added test case and fixes a wrong case in
memset-and-memcpy.ll.
2020-02-25 13:31:46 +00:00
Hideto Ueno 2c0edbf19c [Attributor] Use AssumptionCache in AANonNullFloating::initialize 2020-02-25 13:00:03 +09:00
Ayke van Laethem 2a7a989c3e
[LLVM-C] Add bindings for addCoroutinePassesToExtensionPoints
This patch adds bindings to C and Go for
addCoroutinePassesToExtensionPoints, which is used to add coroutine
passes to the correct locations in PassManagerBuilder.

Differential Revision: https://reviews.llvm.org/D51642
2020-02-24 20:15:51 +01:00
Calixte Denizet 8f46269f0c [profile] Don't dump counters when forking and don't reset when calling exec** functions
Summary:
There is no need to write out gcdas when forking because we can just reset the counters in the child process.
Let's say a counter is N before the fork; after the fork, this counter is set to 0 in the child process.
In the parent process, the counter is incremented by P and in the child process it's incremented by C.
When the dump is run at exit, the parent process will dump N+P for the given counter and the child process will dump 0+C, so when the gcdas are merged the resulting counter will be N+P+C.
As for the exec** functions, since the current process is replaced by another one, there is no need to reset the counters; we just write out the gcdas, since the counters are definitely lost.
To avoid having the lists in a bad state, we lock them during the fork and the flush (if called explicitly), and lock them when an element is added.

Reviewers: marco-c

Reviewed By: marco-c

Subscribers: hiraditya, cfe-commits, #sanitizers, llvm-commits, sylvestre.ledru

Tags: #clang, #sanitizers, #llvm

Differential Revision: https://reviews.llvm.org/D74953
2020-02-24 10:38:33 +01:00
Florian Hahn 7769030b93 Recommit "[PatternMatch] Match XOR variant of unsigned-add overflow check."
This version fixes a buildbot failure caused by picking the wrong insert
point for XORs. We cannot pick the XOR binary operator as insert point,
as it is not guaranteed that both input operands for the overflow
intrinsic are defined before it.

This reverts the revert commit
c7fc0e5da6.
2020-02-23 18:33:18 +00:00
Florian Hahn af69d5e10e [DSE] Track overlapping stores.
Add a map from BasicBlocks to overlap intervals. For partial writes, we
can keep track of those in IOLs. We only add candidates that are valid
for eliminations.

Reviewers: dmgreen, bryant, asbirlea, Tyker

Reviewed By: asbirlea

Differential Revision: https://reviews.llvm.org/D73757
2020-02-23 15:44:40 +00:00
Tyker 837d8129e9 [NFC] Remove some GCC warning from c9e93c84f6 2020-02-22 14:11:31 +01:00
Johannes Doerfert 9708279c72 [Attributor][FIX] Undo 16188f9 until SCC iterator bug is fixed
The buildbot
  http://lab.llvm.org:8011/builders/llvm-clang-x86_64-expensive-checks-win
shows some strange SCC iterator bug since 16188f9 which we need to
investigate. This patch should remove the part of 16188f9 that could
have exposed the problem.
2020-02-21 14:20:42 -08:00
Whitney Tsang 0a70edd696 [CloneFunction] Update loop headers after cloning all blocks in loop.
Summary:
Blocks in a loop can be in any order as long as the loop header is the
first block in Blocks.
With some order of Blocks, cloneLoopWithPreheader would trigger the
assertion in addBasicBlockToLoop.

Example:

define void @test(i64 %N) {
preheader.i:
  br label %header.i

header.i:
  %i = phi i64 [ 0, %preheader.i ], [ %inc.i, %latch.i ]
  br label %header.j

header.j:
  %j = phi i64 [ 0, %header.i ], [ %inc.j, %latch.j ]
  br label %header.k

header.k:
  %k = phi i64 [ 0, %header.j ], [ %inc.k, %latch.k ]
  call void @baz(i64 %i, i64 %j, i64 %k)
  br label %latch.k

latch.k:
  %inc.k = add nsw i64 %k, 1
  %cmp.k = icmp slt i64 %inc.k, %N
  br i1 %cmp.k, label %header.k, label %latch.j

latch.j:
  %inc.j = add nsw i64 %j, 1
  %cmp.j = icmp slt i64 %inc.j, %N
  br i1 %cmp.j, label %header.j, label %latch.i

latch.i:
  %inc.i = add nsw i64 %i, 1
  %cmp.i = icmp slt i64 %inc.i, %N
  br i1 %cmp.i, label %header.i, label %exit.i

exit.i:
  ret void
}
declare void @baz(i64, i64, i64)
If the blocks of loop-i are in the order: header.i, latch.k, header.k,
header.j, latch.j, latch.i,
then cloneLoopWithPreheader would trigger the assertion in
addBasicBlockToLoop
assert(contains(SameHeader) && getHeader() == SameHeader->getHeader() &&
"Incorrect LI specified for this loop!");

As latch.k is in both loop-j and loop-k, it would be set as the header
of both loops after adding latch.k.
If we update loop headers during cloning blocks, then after adding
header.k,
the header of loop-k would be updated with header.k,
while the header of loop-j stays as latch.k.

When adding header.j, SameHeader is loop-k, SameHeader->getHeader() is
header.k, but getHeader() is latch.k, which triggers the assertion.
Reviewer: jdoerfert, Meinersbur, fhahn, kbarton, hfinkel, bmahjour,
etiotto
Reviewed By: Meinersbur
Subscribers: hiraditya, llvm-commits
Tag: LLVM
Differential Revision: https://reviews.llvm.org/D74382
2020-02-21 22:18:24 +00:00
Sanjay Patel e9c79a7aef [VectorCombine] refactor to reduce duplicated code; NFC
This should be the last step in the current cleanup.
Follow-ups should resolve the TODO about cost calc
and enable the more general case where we extract
different elements.
2020-02-21 15:56:00 -05:00
Sanjay Patel 34e3485560 [VectorCombine] refactor cost calcs to reduce duplication; NFC
More cleanup is possible now, but we probably need to
resolve the TODO about the existing difference between
compares and binops.
2020-02-21 15:12:00 -05:00
Krzysztof Parzyszek d2b7c09e79 [Hexagon] Simplify intrinsic (vandvrt (vandqrt q b) m) -> q if possible
When each byte in b&m is non-zero, this conversion Q->V->Q is a no-op.
2020-02-21 13:56:04 -06:00
Nikita Popov b178555318 [InstCombine] Improve simplify demanded bits worklist management
This fixes a small mistake from D72944: The worklist add should
happen before assigning the new operand, not after.

In case an actual replacement happens, the old operand needs to
be added for DCE. If no actual replacement happens, then old/new
are the same, so it doesn't matter.

This drops one iteration from the annotated test case.
2020-02-21 18:51:41 +01:00
Nikita Popov 656dff9af4 [InstCombine] Use replaceOperand() in more places
Followup to D73919 with another batch of replacements of
setOperand() -> replaceOperand(), to make sure the old
operand gets DCEd right away.

Differential Revision: https://reviews.llvm.org/D74932
2020-02-21 18:41:16 +01:00
Florian Hahn 98f5268a72 [VectorUtils] Move ToVectorTy to VectorUtils.h (NFC).
ToVectorTy is defined and used in multiple places. Hoist it to
VectorUtils.h to avoid duplication and improve re-usability.

Reviewers: rengolin, hsaito, Ayal, gilr, fpetrogalli

Reviewed By: fpetrogalli

Differential Revision: https://reviews.llvm.org/D74959
2020-02-21 17:31:24 +00:00
Nikita Popov a8db806d52 [SimplifyLibCalls][IRBuilder] Accept any IRBuilder in SimplifyLibCalls
This changes the SimplifyLibCalls utility to accept an IRBuilderBase,
which allows us to pass through the IRBuilder used by InstCombine.
This will ensure that new instructions get added to the worklist.
The annotated test-case drops from 4 to 2 InstCombine iterations thanks
to this.

To achieve this, I'm adding an IRBuilderBase::OperandBundlesGuard,
which is basically the same as the existing InsertPointGuard and
FastMathFlagsGuard, but for operand bundles. Also add a
setDefaultOperandBundles() method so these can be set outside the
constructor.

Differential Revision: https://reviews.llvm.org/D74792
2020-02-21 18:26:05 +01:00
Sanjay Patel fc4455891c [VectorCombine] refactor matching code to reduce duplication; NFC
cmp/binop were already diverging even though they are largely
the same logic.
2020-02-21 12:06:51 -05:00
Florian Hahn 134bab7cd5 [DSE,MSSA] Add debug counter.
Can be used like
-debug-counter=dse-memoryssa-skip=10,dse-memoryssa-counter-count=20

Reviewers: dmgreen, rnk, efriedma, bryant, asbirlea

Reviewed By: asbirlea

Differential Revision: https://reviews.llvm.org/D72147
2020-02-21 17:04:37 +00:00
Bill Wendling 2fe457690d Filter callbr insts from critical edge splitting
Similarly to how splitting predecessors with an indirectbr isn't handled
in the generic way, we also shouldn't split callbrs, for similar
reasons.
2020-02-20 16:24:42 -08:00
Florian Hahn 99809f98d7 [SCCP] Do not mark unknown loads as overdefined.
For tracked globals that are unknown after solving, we expect all
non-store uses to be replaced.

This is a follow-up to f8045b250d, which removed forcedconstant.

We should not mark unknown loads as overdefined, as they either load
from an unknown pointer or an undef global. Restore the original logic
for loads.
2020-02-20 22:48:58 +01:00
dfukalov dbfc682e2b SpeculativeExecution: fix ignoring of blocks with only free instructions
Summary:
After updating the cost model in the AMDGPU target (47a5c36b37), the pass started to
ignore some BBs since all of their instructions were estimated as free.

Reviewers: arsenm, chandlerc, nhaehnle

Reviewed By: nhaehnle

Subscribers: jvesely, wdng, nhaehnle, tpr, hiraditya, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74825
2020-02-20 14:45:02 +03:00
Johannes Doerfert d95cb56649 [Attributor] Make sure abstract attributes are properly initialized 2020-02-20 02:46:40 -06:00
Johannes Doerfert 6185fb13d6 [Attributor][NFC] Refactor interface 2020-02-20 02:46:40 -06:00
Johannes Doerfert 8e76fec0ae [Attributor][NFC] Improve the debug output & add a TODO 2020-02-19 23:46:08 -06:00
Johannes Doerfert f8ad735729 [Attributor] Use existing `returned` information better
We can look through calls with `returned` argument attributes when we
collect subsuming positions. This allows us to get existing attributes
from more places.
2020-02-19 23:46:07 -06:00
Johannes Doerfert a801ee869d [Attributor][FIX] Avoid setting wrong load/store alignments 2020-02-19 23:46:07 -06:00
Johannes Doerfert e1eed6c5b9 [Attributor] Generalize `getAssumedConstantInt` interface
We are often interested in an assumed constant and sometimes it has to
be an integer constant. Before we only looked for the latter, now we can
ask for either.
2020-02-19 22:33:51 -06:00
Johannes Doerfert 16188f9d70 [Attributor][FIX] Do not create new calls edge we cannot handle
If we propagate function pointers across function boundaries we can
create new call edges. These need to be represented in the CG if we run
as a CGSCC pass. In the new pass manager that is currently not handled
by the CallGraphUpdater so we need to prevent the situation for now.
2020-02-19 22:33:51 -06:00
Johannes Doerfert 1e99fc9d58 [Attributor] Add initial AAIsDead for arguments
We will usually ask for liveness of an argument anyway, so we ended up
lazily creating the attribute regardless. However, that is not always the
case and even if it is we should go the eager route here. Various tests
show how this can improve the outcome. One test exposed a problem with
type mismatches between argument and call site argument, a fix is
included. For liveness various more tests were added as well.
2020-02-19 21:39:45 -06:00
Johannes Doerfert c6ac717aa7 [Attributor] Allow multiple uses of a casted function pointer
If a function pointer is cast into a different type, the resulting
expression can be a constant. If so, it can be used multiple times, which
cannot be handled by the AbstractCallSite constructor alone. Instead, we
follow the cast expression uses now explicitly during the call site
traversal.
2020-02-19 20:43:38 -06:00
Michael Kruse e4d20ec8ad [IndVarSimplify] Fix assert/release build difference.
In builds with assertions enabled (!NDEBUG), IndVarSimplify does an
additional query to ScalarEvolution which may change future SCEV queries
since it fills the internal cache differently. The result is actually
only used with the -verify-indvars command line option. We fix the issue
by only calling SE->getBackedgeTakenCount(L) if -verify-indvars is
enabled such that only -verify-indvars shows the behavior, but not debug
builds themselves. Also add a remark to the description of
-verify-indvars about this behavior.

Fixes llvm.org/PR44815

Differential Revision: https://reviews.llvm.org/D74810
2020-02-19 14:36:22 -06:00
Florian Hahn c7fc0e5da6 Revert "[PatternMatch] Match XOR variant of unsigned-add overflow check."
This reverts commit e01a3d49c2.
and commit a6a585b803.

This causes a failure on GreenDragon:
http://lab.llvm.org:8080/green/view/LLDB/job/lldb-cmake/9597
2020-02-19 19:37:08 +01:00
Florian Hahn e01a3d49c2 [PatternMatch] Match XOR variant of unsigned-add overflow check.
InstCombine folds (a + b <u a) to (a ^ -1 <u b), and that does not match
the expected pattern in CodeGenPrepare via UAddWithOverflow.

This causes a regression over Clang 7 on both X86 and AArch64:
https://gcc.godbolt.org/z/juhXYV

This patch extends UAddWithOverflow to also catch the XOR case, if the
XOR is only used in the ICMP. This covers just a single case, but I'd
like to make sure I am not missing anything before tackling the other
cases.
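
The two equivalent forms, as a hand-written sketch (not from the patch's tests): (a + b <u a) and (~a <u b) both compute the unsigned-add overflow bit.

```
define i1 @overflow_check_add(i32 %a, i32 %b) {
  %add = add i32 %a, %b
  %ov = icmp ult i32 %add, %a
  ret i1 %ov
}

; InstCombine's canonical form of the same check:
define i1 @overflow_check_xor(i32 %a, i32 %b) {
  %not.a = xor i32 %a, -1
  %ov = icmp ult i32 %not.a, %b
  ret i1 %ov
}
```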

Reviewers: nikic, RKSimon, lebedev.ri, spatel

Reviewed By: nikic, lebedev.ri

Differential Revision: https://reviews.llvm.org/D74228
2020-02-19 15:25:18 +01:00
Brian Gesiak 5a187d8ed1 [Coroutines][4/6] New pass manager: coro-cleanup
Summary:
Depends on https://reviews.llvm.org/D71900.

The fourth in a series of patches that ports the LLVM coroutines passes
to the new pass manager infrastructure. This patch implements
'coro-cleanup'.

No existing regression tests check the behavior of coro-cleanup on its
own, so this patch adds one. (A test named 'coro-cleanup.ll' exists, but
it relies on the entire coroutines pipeline being run. It's updated to
test the new pass manager in the 5th patch of this series.)

Reviewers: GorNishanov, lewissbaker, chandlerc, junparser, deadalnix, wenlei

Reviewed By: wenlei

Subscribers: wenlei, EricWF, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71901
2020-02-19 00:30:27 -05:00
Brian Gesiak 2365238b9d Re-land new pass manager coro-split and coro-elide
This re-applies patches https://reviews.llvm.org/D71899 and
https://reviews.llvm.org/D71900, which were reverted in
https://reviews.llvm.org/rG11053a1cc61 and
https://reviews.llvm.org/rGe999aa38d16. The underlying problem that
caused two buildbots to fail with these patches is explained in
https://reviews.llvm.org/rG26f356350bd -- older compilers disagree with
the order in which the left- and right-hand side of an assignment in
LazyCallGraph ought to be evaluated, which caused an assertion in
SmallVector::operator[] to fire when the test suite was run.
2020-02-19 00:11:23 -05:00
Reid Kleckner 0c2b09a9b6 [IR] Lazily number instructions for local dominance queries
Essentially, fold OrderedBasicBlock into BasicBlock, and make it
auto-invalidate the instruction ordering when new instructions are
added. Notably, we don't need to invalidate it when removing
instructions, which is helpful when a pass mostly delete dead
instructions rather than transforming them.

The downside is that Instruction grows from 56 bytes to 64 bytes.  The
resulting LLVM code is substantially simpler and automatically handles
invalidation, which makes me think that this is the right speed and size
tradeoff.

The important change is in SymbolTableTraitsImpl.h, where the numbering
is invalidated. Everything else should be straightforward.

We probably want to implement a fancier re-numbering scheme so that
local updates don't invalidate the ordering, but I plan for that to be
future work, maybe for someone else.

Reviewed By: lattner, vsk, fhahn, dexonsmith

Differential Revision: https://reviews.llvm.org/D51664
2020-02-18 14:44:24 -08:00
Fangrui Song 13a97305ba [JumpThreading] Skip unconditional PredBB when threading jumps through two basic blocks
Fixes https://bugs.llvm.org/show_bug.cgi?id=44922 (caused by 4698bf145d)

ThreadThroughTwoBasicBlocks assumes PredBBBranch is conditional. The following code can segfault.

  AddPHINodeEntriesForMappedBlock(PredBBBranch->getSuccessor(1), PredBB, NewBB,
                                  ValueMapping);

We can also allow unconditional PredBB, but the produced code is not
better.

Reviewed By: kazu

Differential Revision: https://reviews.llvm.org/D74747
2020-02-18 11:01:46 -08:00
Tyker c9e93c84f6 Add Query API for llvm.assume holding attributes
Reviewers: jdoerfert, sstefan1, uenoku

Reviewed By: jdoerfert

Subscribers: mgorny, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72885
2020-02-18 19:42:07 +01:00
Huihui Zhang 8ee0e1dc02 [NFC] Silence compiler warning [-Wmissing-braces]. 2020-02-18 10:37:12 -08:00
Florian Hahn e32522ca17 [SLPVectorizer] Do not assume extracelement idx is a ConstantInt.
The index of an ExtractElementInst is not guaranteed to be a
ConstantInt. It can be any integer value. Check explicitly for
ConstantInts.

The new test cases illustrate scenarios where we crash without
this patch. I've also added another test case to check the matching
of extractelement vector ops works.
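
For illustration (a made-up reduced case): an extractelement whose index is an ordinary SSA value rather than a ConstantInt, which an unchecked cast would trip over.

```
define float @variable_index(<4 x float> %v, i32 %i) {
  %e = extractelement <4 x float> %v, i32 %i
  ret float %e
}
```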

Reviewers: RKSimon, ABataev, dtemirbulatov, vporpo

Reviewed By: ABataev

Differential Revision: https://reviews.llvm.org/D74758
2020-02-18 18:16:06 +01:00
Nikita Popov ec6c623ff9 [SimplifyLibCalls] Accept IRBuilderBase; NFC 2020-02-18 17:59:07 +01:00
Nikita Popov 28ffe38bba [LoopUtils] Accept IRBuilderBase; NFC 2020-02-18 17:58:46 +01:00
Nikita Popov ed6d30b517 [BuildLibCalls] Accept IRBuilderBase; NFC
Accept IRBuilderBase instead of IRBuilder<>. Remove dependency
on IRBuilder from header.
2020-02-18 17:58:16 +01:00
Nikita Popov 1ab37fad61 [InstCombine] Fix worklist management when simplifying demanded bits
When simplifying demanded bits, we currently only report the
instruction on which SimplifyDemandedBits was called as changed.
However, this is a recursive call, and the actually modified
instruction will usually be further up the chain. Additionally,
all the intermediate instructions should also be revisited,
as additional combines may be possible after the demanded bits
simplification. We fix this by explicitly adding them back to the
worklist.

Differential Revision: https://reviews.llvm.org/D72944
2020-02-18 17:55:40 +01:00
Nikita Popov c9540fe59b [InstCombine] Fix multi-use handling in cttz transform
The select-of-cttz transform can currently duplicate cttz intrinsics
and zext/trunc ops. The cause is that it unnecessarily duplicates
the intrinsic and the zext/trunc when setting the "undef_on_zero"
flag to false. However, it's always legal to set the flag from true
to false, so we can make this replacement even if there are extra users.
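
The underlying pattern, sketched by hand (the flag is the second argument of the intrinsic):

```
declare i32 @llvm.cttz.i32(i32, i1)

define i32 @select_of_cttz(i32 %x) {
  %tz = call i32 @llvm.cttz.i32(i32 %x, i1 true)
  %is.zero = icmp eq i32 %x, 0
  %r = select i1 %is.zero, i32 32, i32 %tz
  ret i32 %r
}

; Folds to a single call with the flag cleared, which already returns 32 for
; a zero input:
define i32 @folded(i32 %x) {
  %r = call i32 @llvm.cttz.i32(i32 %x, i1 false)
  ret i32 %r
}
```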

Differential Revision: https://reviews.llvm.org/D74685
2020-02-18 17:55:00 +01:00
Nikita Popov 9adedd146d [InstCombine] Relax preconditions for ashr+and+icmp fold (PR44754)
Fix for https://bugs.llvm.org/show_bug.cgi?id=44754. We already have
a fold that converts icmp (and (ashr X, C3), C2), C1 into
icmp (and C2'), C1', but it imposed overly strict requirements on the
transform.

Relax this by checking that both C2 and C1 don't shift out bits
(in a signed sense) when forming the new constants.

Alive proofs (https://rise4fun.com/Alive/PTz0):

    Name: ashr_legal
    Pre: ((C2 << C3) >> C3) == C2 && ((C1 << C3) >> C3) == C1
    %a = ashr i16 %x, C3
    %b = and i16 %a, C2
    %c = icmp i16 %b, C1
    =>
    %d = and i16 %x, C2 << C3
    %c = icmp i16 %d, C1 << C3

    Name: ashr_shiftout_eq
    Pre: ((C2 << C3) >> C3) == C2 && ((C1 << C3) >> C3) != C1
    %a = ashr i16 %x, C3
    %b = and i16 %a, C2
    %c = icmp eq i16 %b, C1
    =>
    %c = false

Note that >> corresponds to ashr here. The case of an equality
comparison has some special handling in this transform, because
it will fold to a true/false result if the condition on the comparison
constant is violated.

Differential Revision: https://reviews.llvm.org/D74294
2020-02-18 17:49:46 +01:00
Florian Hahn 9063022573 [InstCombine] Avoid nested Create calls, to guarantee order.
The original code allowed creating the != checks in unpredictable order,
causing http://lab.llvm.org:8011/builders/clang-cmake-x86_64-sde-avx512-linux/builds/34014
to fail.
2020-02-18 09:44:11 +01:00
Florian Hahn 6c85e92bcf [InstCombine] Simplify a umul overflow check to a != 0 && b != 0.
This patch adds a simplification if an OR weakens the overflow condition
for umul.with.overflow by treating any non-zero result as overflow. In that
case, we overflow if both umul.with.overflow operands are != 0, since with
both operands non-zero the result can only be 0 if the multiplication overflows.

Code like this is generated by code using __builtin_mul_overflow with
negative integer constants, e.g.
   bool test(unsigned long long v, unsigned long long *res) {
     return __builtin_mul_overflow(v, -4775807LL, res);
   }

```
----------------------------------------
Name: D74141
  %res = umul_overflow {i8, i1} %a, %b
  %mul = extractvalue {i8, i1} %res, 0
  %overflow = extractvalue {i8, i1} %res, 1
  %cmp = icmp ne %mul, 0
  %ret = or i1 %overflow, %cmp
  ret i1 %ret
=>
  %t0 = icmp ne i8 %a, 0
  %t1 = icmp ne i8 %b, 0
  %ret = and i1 %t0, %t1
  ret i1 %ret
  %res = umul_overflow {i8, i1} %a, %b
  %mul = extractvalue {i8, i1} %res, 0
  %cmp = icmp ne %mul, 0
  %overflow = extractvalue {i8, i1} %res, 1

Done: 1
Optimization is correct!

```

Reviewers: nikic, lebedev.ri, spatel, Bigcheese, dexonsmith, aemerson

Reviewed By: lebedev.ri

Differential Revision: https://reviews.llvm.org/D74141
2020-02-18 09:11:55 +01:00
Brian Gesiak 11053a1cc6 Revert new pass manager coro-split and coro-elide
This reverts
https://reviews.llvm.org/rG7125d66f9969605d886b5286780101a45b5bed67 and
https://reviews.llvm.org/rG00fec8004aca6588d8d695a2c3827c3754c380a0 due
to buildbot failures:
http://lab.llvm.org:8011/builders/clang-cmake-x86_64-sde-avx512-linux/builds/34004
2020-02-17 23:55:10 -05:00
Brian Gesiak 00fec8004a [Coroutines][3/6] New pass manager: coro-elide
Summary:
Depends on https://reviews.llvm.org/D71899.

The third in a series of patches that ports the LLVM coroutines passes
to the new pass manager infrastructure. This patch implements 'coro-elide'.

The new pass manager infrastructure does not implicitly repeat CGSCC
pass pipelines when a function is devirtualized, and so the tests
for the new pass manager that rely on that behavior now explicitly
specify `repeat<2>`.

Reviewers: GorNishanov, lewissbaker, chandlerc, jdoerfert, junparser, deadalnix, wenlei

Reviewed By: wenlei

Subscribers: wenlei, EricWF, Prazek, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71900
2020-02-17 23:41:57 -05:00
Brian Gesiak 7125d66f99 [Coroutines][2/6] New pass manager: coro-split
Summary:
This patch has four dependencies:

1. The first in this series of patches that implement coroutine passes in the
   new pass manager: https://reviews.llvm.org/D71898.
2. A patch that introduces an API for CGSCC passes to add new reference
   edges to a `LazyCallGraph`, `updateCGAndAnalysisManagerForCGSCCPass`:
   https://reviews.llvm.org/D72025.
3. A patch that introduces a `CallGraphUpdater` helper class that is
   capable of mutating internal `LazyCallGraph` state in order to insert
   new function nodes into a specific SCC: https://reviews.llvm.org/D70927.
4. And finally, a small edge case fix for updating `LazyCallGraph` that
   patch 3 above happens to run into: https://reviews.llvm.org/D72226.

This is the second in a series of patches that ports the LLVM coroutines
passes to the new pass manager infrastructure. This patch implements
'coro-split'.

Some notes:
* Using the new CGSCC pass manager resulted in IR being printed in the
  reverse order in some tests. To prevent FileCheck checks from failing due
  to these reversed orders, this patch splits up test files that test
  multiple different coroutine functions: specifically
  coro-alloc-with-param.ll, coro-split-eh.ll, and coro-eh-aware-edge-split.ll.
* CoroSplit.cpp contained 2 overloads of `splitCoroutine`, one of which
  dispatched to the other based on the coroutine ABI being used (C++20
  switch-based versus Swift returned-continuation-based). I found this
  confusing, especially with the additional branching based on `CallGraph`
  vs. `LazyCallGraph`, so I removed the ABI-checking overload of
  `splitCoroutine`.

Reviewers: GorNishanov, lewissbaker, chandlerc, jdoerfert, junparser, deadalnix, wenlei

Reviewed By: wenlei

Subscribers: wenlei, qcolombet, EricWF, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71899
2020-02-17 23:35:27 -05:00
Vedant Kumar c74026daf3 [HotColdSplit] Mark entire function cold when entry block is cold
rdar://58855712
2020-02-17 15:57:50 -08:00
Nicolai Hähnle 58297e4d8f LowerMatrixIntrinsics: Avoid use of deprecated CreateCall methods
Reviewers: t.p.northover

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74675
2020-02-18 00:24:09 +01:00
Tim Northover 464d4cf7e6 Coroutines: avoid use of deprecated CreateLoad and CreateCall methods
Summary: Patch originally by Tim Northover

Reviewers: t.p.northover

Subscribers: EricWF, hiraditya, modocache, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74674
2020-02-18 00:24:09 +01:00
Brian Gesiak e9849d5195 [Coroutines][1/6] New pass manager: coro-early
Summary:
The first in a series of patches that ports the LLVM coroutines passes
to the new pass manager infrastructure. This patch implements
'coro-early'.

NB: All coroutines passes begin by checking that coroutine intrinsics are
declared within the LLVM IR module they're operating on. To do so, they call
`coro::declaresIntrinsics`. The next 3 patches in this series, which add new
pass manager implementations of the 'coro-split', 'coro-elide', and
'coro-cleanup' passes, use a similar pattern as the one used here: a static
function is shared across both old and new passes to detect if relevant
coroutine intrinsics are declared. To make this pattern easier to read, this
patch adds `const` keywords to the parameters of `coro::declaresIntrinsics`.

Reviewers: GorNishanov, lewissbaker, junparser, chandlerc, deadalnix, wenlei

Reviewed By: wenlei

Subscribers: ychen, wenlei, EricWF, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71898
2020-02-17 13:27:48 -05:00
Nikita Popov 3eaa53e805 Reapply "[IRBuilder] Virtualize IRBuilder"
Relative to the original commit, this fixes some warnings,
and is based on the deletion of the IRBuilder copy constructor
in D74693. The automatic copy constructor would no longer be
safe.

-----

Related llvm-dev thread:
http://lists.llvm.org/pipermail/llvm-dev/2020-February/138951.html

This patch moves the IRBuilder from templating over the constant
folder and inserter towards making both of these virtual.
There are a couple of motivations for this:

1. It's not possible to share code between use-sites that use
different IRBuilder folders/inserters (short of templating the code
and moving it into headers).
2. Methods currently defined on IRBuilderBase (which is not templated)
do not use the custom inserter, resulting in subtle bugs (e.g.
incorrect InstCombine worklist management). It would be possible to
move those into the templated IRBuilder, but...
3. The vast majority of the IRBuilder implementation has to live
in the header, because it depends on the template arguments.
4. We have many unnecessary dependencies on IRBuilder.h,
because it is not easy to forward-declare. (Significant parts of
the backend depend on it via TargetLowering.h, for example.)

This patch addresses the issue by making the following changes:

* IRBuilderDefaultInserter::InsertHelper becomes virtual.
  IRBuilderBase accepts a reference to it.
* IRBuilderFolder is introduced as a virtual base class. It is
 implemented by ConstantFolder (default), NoFolder and TargetFolder.
  IRBuilderBase has a reference to this as well.
* All the logic is moved from IRBuilder to IRBuilderBase. This means
  that methods can in the future replace their IRBuilder<> & uses
  (or other specific IRBuilder types) with IRBuilderBase & and thus
  be usable with different IRBuilders.
* The IRBuilder class is now a thin wrapper around IRBuilderBase.
  Essentially it only stores the folder and inserter and takes care
  of constructing the base builder.

What this patch doesn't do, but should be simple followups after this change:

* Fixing use of the inserter for creation methods originally defined
  on IRBuilderBase.
* Replacing IRBuilder<> uses in arguments with IRBuilderBase, where useful.
* Moving code from the IRBuilder header to the source file.

From the user perspective, these changes should be mostly transparent:
The only thing that consumers using a custom inserter may need to do is
inherit from IRBuilderDefaultInserter publicly and mark their InsertHelper
as public.

Differential Revision: https://reviews.llvm.org/D73835
2020-02-17 19:04:11 +01:00
Nikita Popov 80397d2d12 [IRBuilder] Delete copy constructor
D73835 will make IRBuilder no longer trivially copyable. This patch
deletes the copy constructor in advance, to separate out the breakage.

Currently, the IRBuilder copy constructor is usually used by accident,
not by intention.  In rG7c362b25d7a9 I've fixed a number of cases where
functions accepted IRBuilder rather than IRBuilder &, thus performing
an unnecessary copy. In rG5f7b92b1b4d6 I've fixed cases where an
IRBuilder was copied, while an InsertPointGuard should have been used
instead.

The only non-trivial use of the copy constructor is the
getIRBForDbgInsertion() helper, for which I separated construction and
setting of the insertion point in this patch.

Differential Revision: https://reviews.llvm.org/D74693
2020-02-17 18:14:48 +01:00
Benjamin Kramer 564a9de28e Hide implementation details. NFC. 2020-02-17 17:55:23 +01:00
Benjamin Kramer 5fc5c7db38 Strength reduce vectors into arrays. NFCI. 2020-02-17 15:37:35 +01:00
Nikita Popov 5f7b92b1b4 [IRBuilder] Prefer InsertPointGuard over full copy; NFC
Don't copy the IRBuilder when an InsertPointGuard would also do.
2020-02-16 18:02:29 +01:00
Nikita Popov 7c362b25d7 [IRBuilder] Fix unnecessary IRBuilder copies; NFC
Fix a few cases where an IRBuilder is passed to a helper function
by value, while a by reference pass was intended.
2020-02-16 17:57:18 +01:00
Nikita Popov af480e8c63 Revert "[IRBuilder] Virtualize IRBuilder"
This reverts commit 0765d3824d.
This reverts commit 1b04866a3d.

Relevant looking crashes observed on:
http://lab.llvm.org:8011/builders/llvm-clang-x86_64-expensive-checks-win
2020-02-16 17:01:10 +01:00
Sanjay Patel 62dd44d76d [VectorCombine] fix cost calc for extract-cmp
getOperationCost() is not the cost we wanted; that's not the
throughput value that the rest of the calculation uses.

We may want to switch everything in this code to use the
getInstructionThroughput() wrapper to avoid these kinds of
problems, but I'll look at that as a follow-up because that
can create other logical diffs via using optional parameters
(we'd need to speculatively create the vector instruction to
make a fair(er) comparison).
2020-02-16 10:40:28 -05:00
Nikita Popov 893c630fbe [InstCombine] Create new log2 intrinsic; NFCI
Rather than mixing creation of new instructions and in-place
modification here, create a new log2 intrinsic. This should be
NFC apart from worklist order changes.
2020-02-16 15:52:09 +01:00
Nikita Popov 1b04866a3d [IRBuilder] Try to fix warnings
Try to fix -Wnon-virtual-dtor warnings that cause build failure
on clang-ppc64le-rhel.
2020-02-16 15:32:11 +01:00
Nikita Popov 0765d3824d [IRBuilder] Virtualize IRBuilder
Related llvm-dev thread:
http://lists.llvm.org/pipermail/llvm-dev/2020-February/138951.html

This patch moves the IRBuilder from templating over the constant
folder and inserter towards making both of these virtual.
There are a couple of motivations for this:

1. It's not possible to share code between use-sites that use
different IRBuilder folders/inserters (short of templating the code
and moving it into headers).
2. Methods currently defined on IRBuilderBase (which is not templated)
do not use the custom inserter, resulting in subtle bugs (e.g.
incorrect InstCombine worklist management). It would be possible to
move those into the templated IRBuilder, but...
3. The vast majority of the IRBuilder implementation has to live
in the header, because it depends on the template arguments.
4. We have many unnecessary dependencies on IRBuilder.h,
because it is not easy to forward-declare. (Significant parts of
the backend depend on it via TargetLowering.h, for example.)

This patch addresses the issue by making the following changes:

* IRBuilderDefaultInserter::InsertHelper becomes virtual.
  IRBuilderBase accepts a reference to it.
* IRBuilderFolder is introduced as a virtual base class. It is
 implemented by ConstantFolder (default), NoFolder and TargetFolder.
  IRBuilderBase has a reference to this as well.
* All the logic is moved from IRBuilder to IRBuilderBase. This means
  that methods can in the future replace their IRBuilder<> & uses
  (or other specific IRBuilder types) with IRBuilderBase & and thus
  be usable with different IRBuilders.
* The IRBuilder class is now a thin wrapper around IRBuilderBase.
  Essentially it only stores the folder and inserter and takes care
  of constructing the base builder.

What this patch doesn't do, but should be simple followups after this change:

* Fixing use of the inserter for creation methods originally defined
  on IRBuilderBase.
* Replacing IRBuilder<> uses in arguments with IRBuilderBase, where useful.
* Moving code from the IRBuilder header to the source file.

From the user perspective, these changes should be mostly transparent:
The only thing that consumers using a custom inserter may need to do is
inherit from IRBuilderDefaultInserter publicly and mark their InsertHelper
as public.

Differential Revision: https://reviews.llvm.org/D73835
2020-02-16 13:48:55 +01:00
Johannes Doerfert 1d5da8cd30 [Attributor][FIX] Use pointer not reference as it can be null 2020-02-15 20:38:49 -06:00
Florian Hahn f8045b250d Recommit "[SCCP] Remove forcedconstant, go to overdefined instead"
This includes a fix for cases where things get marked as overdefined in
ResolvedUndefsIn, but we later discover a constant. To avoid crashing,
we consistently bail out on overdefined values in the visitors. This is
similar to the previous behavior with forcedconstant.

This reverts the revert commit 02b72f564c.
2020-02-15 18:36:44 +01:00
Simon Pilgrim 8a48c4a97c Fix boolean/bitwise operator precedence warnings. NFCI. 2020-02-15 13:53:18 +00:00
Johannes Doerfert ef746aa11f [Attributor] Collect memory accesses with their respective kind and location
In addition to a single bit per memory locations, e.g., globals and
arguments, we now collect more information about the actual accesses,
e.g., what instruction caused it, was it a read/write/read+write, and
what the underlying base pointer was. Follow up patches will make
explicit use of this.

Reviewed By: uenoku

Differential Revision: https://reviews.llvm.org/D73527
2020-02-15 02:12:04 -06:00
Fangrui Song fd5665af2c [Attributor] Fix -Wunused-variable for -DLLVM_ENABLE_ASSERTIONS=off builds after b4352e43d8 2020-02-14 21:47:19 -08:00
Johannes Doerfert b70297a39a [Attributor][FIX] Ensure abstract attributes are existing before manifest
While the function-return updateImpl only looked at call sites, the
manifest method looked at return values. If we don't do this during the
updateImpl we might create new abstract attributes during manifest. This
is a problem when it comes to liveness information.
2020-02-14 21:44:46 -06:00
Johannes Doerfert ad121ea14d [Attributor] Manifest simplified (return) values properly
If we simplify a function return value we have to modify the return
instructions.
2020-02-14 21:44:46 -06:00
Johannes Doerfert b53af0e7f9 [Attributor][FIX] Collapse `undef` to a proper value
If we see an undef we cannot assume it's the same as "no value". For now
we just collapse it to 0.
2020-02-14 21:44:46 -06:00
Johannes Doerfert 137c99a6a5 [Attributor][FIX] Restrict cross-SCC call deletion
If we know a call was not needed we might have ended up deleting it even
if it was in a different SCC. This prevents us from doing so.
2020-02-14 21:44:46 -06:00
Johannes Doerfert 32e98a7089 [Attributor][FIX] Carefully strip casts in AANoAlias
We can strip casts in AANoAlias but that might cause us to end up with a
non-pointer type. We do properly handle that case now.
2020-02-14 21:44:46 -06:00
Johannes Doerfert b4352e43d8 [Attributor][FIX] Do not RAUW void values
This caused an error when passes iterated over cached assumptions in the
tracker and assumed them to be `null` or an instruction. I failed to
create a test case so far.
2020-02-14 21:44:46 -06:00
Johannes Doerfert 282f5d7ad1 [Attributor] Derive memory location attributes (argmemonly, ...)
In addition to memory behavior attributes (readonly/writeonly) we now
derive memory location attributes (argmemonly/inaccessiblememonly/...).
The former is part of AAMemoryBehavior and the latter part of
AAMemoryLocation. While they are similar in nature it got messy when
they were put in a single AA. Location attributes for arguments and
floating values will follow later.

Note that both memory attributes kinds can derive readnone. If there are
no accesses AAMemoryBehavior will derive readnone. If there are accesses
but only to stack (=local) locations AAMemoryLocation will derive
readnone.

Reviewed By: uenoku

Differential Revision: https://reviews.llvm.org/D73426
2020-02-14 19:05:51 -06:00
Johannes Doerfert 7cbb107feb [Attributor][FIX] Validate the type for AAValueConstantRange as needed
Due to the genericValueTraversal we might visit values for which we did
not create an AAValueConstantRange object, e.g., as they are behind a
PHI or select or call with `returned` argument. As a consequence we need
to validate the types as we are about to query AAValueConstantRange for
operands.
2020-02-14 17:22:40 -06:00
Alina Sbirlea 1326a5a4cf [LoopRotate] Get and update MSSA only if available in legacy pass manager.
Summary:
Potential fix for: https://bugs.llvm.org/show_bug.cgi?id=44889 and https://bugs.llvm.org/show_bug.cgi?id=44408

In the legacy pass manager, loop rotate need not compute MemorySSA when not being in the same loop pass manager with other loop passes.
There isn't currently a way to differentiate between the two cases, so this attempts to limit the usage in LoopRotate to only update MemorySSA when the analysis is already available.
The side-effect of this is that it will split the Loop pipeline.

This issue does not apply to the new pass manager, where we have a flag specifying if all loop passes in that loop pass manager preserve MemorySSA.

Reviewers: dmgreen, fedor.sergeev, nikic

Subscribers: Prazek, hiraditya, george.burgess.iv, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74574
2020-02-14 10:47:26 -08:00
Kadir Cetinkaya 1674f772b4
[VecotrCombine] Fix unused variable for assertion disabled builds 2020-02-14 09:30:29 +01:00
Vedant Kumar 8e77b33b3c [Local] Do not move around dbg.declares during replaceDbgDeclare
replaceDbgDeclare is used to update the descriptions of stack variables
when they are moved (e.g. by ASan or SafeStack). A side effect of
replaceDbgDeclare is that it moves dbg.declares around in the
instruction stream (typically by hoisting them into the entry block).
This behavior was introduced in llvm/r227544 to fix an assertion failure
(llvm.org/PR22386), but no longer appears to be necessary.

Hoisting a dbg.declare generally does not create problems. Usually,
dbg.declare either describes an argument or an alloca in the entry
block, and backends have special handling to emit locations for these.
In optimized builds, LowerDbgDeclare places dbg.values in the right
spots regardless of where the dbg.declare is. And no one uses
replaceDbgDeclare to handle things like VLAs.

However, there doesn't seem to be a positive case for moving
dbg.declares around anymore, and this reordering can get in the way of
understanding other bugs. I propose getting rid of it.

Testing: stage2 RelWithDebInfo sanitized build, check-llvm

rdar://59397340

Differential Revision: https://reviews.llvm.org/D74517
2020-02-13 14:35:02 -08:00
Sanjay Patel 19b62b79db [VectorCombine] try to form vector binop to eliminate an extract element
binop (extelt X, C), (extelt Y, C) --> extelt (binop X, Y), C

This is a transform that has been considered for canonicalization (instcombine)
in the past because it reduces instruction count. But as shown in the x86 tests,
it's impossible to know if it's profitable without a cost model. There are many
potential target constraints to consider.
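
A minimal sketch of the fold (illustrative IR, not one of the patch's
tests), assuming the cost model deems the vector op profitable:

```
; Before:
define i32 @ext_ext_add(<4 x i32> %x, <4 x i32> %y) {
  %x2 = extractelement <4 x i32> %x, i32 2
  %y2 = extractelement <4 x i32> %y, i32 2
  %r = add i32 %x2, %y2
  ret i32 %r
}

; After: one vector add, one extract.
define i32 @ext_ext_add(<4 x i32> %x, <4 x i32> %y) {
  %v = add <4 x i32> %x, %y
  %r = extractelement <4 x i32> %v, i32 2
  ret i32 %r
}
```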

We have implemented similar transforms in the backend (DAGCombiner and
target-specific), but I don't think we have this exact fold there either (and if
we did it in SDAG, it wouldn't work across blocks).

Note: this patch was intended to handle the more general case where the extract
indexes do not match, but it got too big, so I scaled it back to this pattern
for now.

Differential Revision: https://reviews.llvm.org/D74495
2020-02-13 17:23:27 -05:00
Vedant Kumar 02b72f564c Revert "Recommit "[SCCP] Remove forcedconstant, go to overdefined instead""
This reverts commit bb310b3f73. This
breaks the stage2 ASan build, see:

https://bugs.llvm.org/show_bug.cgi?id=44898

rdar://59431448
2020-02-13 11:55:18 -08:00
stozer 9bda7ab835 Re-revert: Recover debug intrinsics when killing duplicated/empty blocks
This reverts commit 61b35e4111.

This commit causes a timeout in Chromium builds, likely for a similar
reason as the previous timeout issue caused by this commit (see
6ded69f294 for more details). It is possible that there is no way to
fix this bug without causing this issue; further investigation into the
efficiency of handling large amounts of debug info will be necessary.
2020-02-13 11:48:19 +00:00
Johannes Doerfert 70cac41a2b Reapply "[OpenMP][IRBuilder] Perform finalization (incl. outlining) late"
Reapply 8a56d64d76 with minor fixes.

The problem was that cancellation can cause new edges to the parallel
region exit block which is not outlined. The CodeExtractor will encode
the information which "exit" was taken as a return value. The fix is to
ensure we do not return any value from the outlined function; to prevent
control-to-value conversion we ensure a single exit block for the
outlined region.

This reverts commit 3aac953afa.
2020-02-12 22:29:07 -06:00
Johannes Doerfert 3aac953afa Revert "[OpenMP][IRBuilder] Perform finalization (incl. outlining) late"
This reverts commit 8a56d64d76.

Will be recommitted once the clang test problem is addressed.
2020-02-12 18:50:43 -06:00
Johannes Doerfert 8a56d64d76 [OpenMP][IRBuilder] Perform finalization (incl. outlining) late
In order to fix PR44560 and to prepare for loop transformations we now
finalize a function late, which will also do the outlining late. The
logic is as before but the actual outlining step happens now after the
function was fully constructed. Once we have loop transformations we
can apply them in the finalize step before the outlining.

Reviewed By: JonChesterfield

Differential Revision: https://reviews.llvm.org/D74372
2020-02-12 17:55:01 -06:00
Johannes Doerfert 23f41f16d4 [Attributor] Use fine-grained liveness in all helpers
Before, we used coarse-grained liveness, i.e., we checked whether the
instruction was executed at all, but we did not use fine-grained
liveness, i.e., whether the instruction was actually needed or could be
deleted even if the surrounding ones are live. This patch introduces
this level of liveness checks together with other liveness queries,
e.g., for uses.

For more control we enforce that all liveness queries go through the
Attributor.

Tests have been adjusted to reflect the changes or augmented to prevent
deletion of the parts we want to check.

Reviewed By: sstefan1

Differential Revision: https://reviews.llvm.org/D73313
2020-02-12 17:36:38 -06:00
Johannes Doerfert b2c76002ca [Attributor] Ignore uses if a value is simplified
If we have a replacement for a value, via AAValueSimplify, the original
value will lose all its uses. Thus, as long as a value is simplified we
can skip the uses in checkForAllUses, given that these uses are
transitive uses for the simplified version and will therefore affect the
simplified version as necessary.

Since this allowed us to remove calls without side-effects and a known
return value, we need to make sure not to eliminate `musttail` calls.
Those we keep around, or later remove the entire `musttail` call chain.
2020-02-12 17:36:38 -06:00
Johannes Doerfert 86509e8c3b [Attributor] Use assumed information to determine side-effects
We relied on wouldInstructionBeTriviallyDead before, but that function
does not take assumed information, especially for calls, into account.
The replacement, AAIsDead::isAssumeSideEffectFree, does.

This change makes AAIsDeadCallSiteReturn more complex as we can have
a dead call or only dead users.

The tests have been modified to include a side effect where there was
none in order to keep the coverage.

Reviewed By: sstefan1

Differential Revision: https://reviews.llvm.org/D73311
2020-02-12 17:36:38 -06:00
Ehud Katz d8a2ea9fd5 [LoopExtractor] Fix legacy pass dependencies
Fixes a memory leak caused by allocating `LoopInfoWrapperPass` and `DominatorTreeWrapperPass`.
2020-02-12 22:39:21 +02:00
Vedant Kumar 34d9f93977 [AddressSanitizer] Ensure only AllocaInst is passed to dbg.declare
Various parts of the LLVM code generator assume that the address
argument of a dbg.declare is not a `ptrtoint`-of-alloca. ASan breaks
this assumption, and this results in local variables sometimes being
unavailable at -O0.

GlobalISel, SelectionDAG, and FastISel all do not appear to expect
dbg.declares to have a `ptrtoint` as an operand. This means that they do
not place entry block allocas in the usual side table reserved for local
variables available in the whole function scope. This isn't always a
problem, as LLVM can try to lower the dbg.declare to a DBG_VALUE, but
those DBG_VALUEs can get dropped for all the usual reasons DBG_VALUEs
get dropped. In the ObjC test case I'm looking at, the cause happens to
be that `replaceDbgDeclare` has hoisted dbg.declares into the entry
block, causing LiveDebugValues to "kill" the DBG_VALUEs because the
lexical dominance check fails.

To address this, I propose:

1) Have ASan (always) pass an alloca to dbg.declares (this patch). This
   is a narrow bugfix for -O0 debugging.

2) Make replaceDbgDeclare not move dbg.declares around. This should be a
   generic improvement for optimized debug info, as it would prevent the
   lexical dominance check in LiveDebugValues from killing as many
   variables.

   This means reverting llvm/r227544, which fixed an assertion failure
   (llvm.org/PR22386) but no longer seems to be necessary. I was able to
   complete a stage2 build with the revert in place.

rdar://54688991

Differential Revision: https://reviews.llvm.org/D74369
2020-02-12 11:24:02 -08:00
Jay Foad 32aac25637 [KnownBits] Introduce anyext instead of passing a flag into zext
Summary:
This was a very odd API, where you had to pass a flag into a zext
function to say whether the extended bits really were zero or not. All
callers passed in a literal true or false.

I think it's much clearer to make the function name reflect the
operation being performed on the value we're tracking (rather than on
the KnownBits Zero and One fields), so zext means the value is being
zero extended and new function anyext means the value is being extended
with unknown bits.

NFC.

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74482
2020-02-12 19:06:53 +00:00
Florian Hahn bb310b3f73 Recommit "[SCCP] Remove forcedconstant, go to overdefined instead"
This version includes a fix for a set of crashes caused by marking
values depending on a yet unknown & tracked call as overdefined.

In some cases, we would later discover that the call has a constant
result and try to mark a user of it as constant, although it was already
marked as overdefined. Most instruction handlers bail out early if the
instruction is already overdefined. But that is not necessary for
CastInsts for example. By skipping values that depend on skipped
calls, we resolve the crashes and also improve the precision in some
cases (see resolvedundefsin-tracked-fn.ll).

Note that we may not skip PHI nodes that may depend on a skipped call,
but they can be safely marked as overdefined, as we bail out early if
the PHI node is overdefined.

This reverts the revert commit
fa74b31a3e9cd844c7ce2087978568e3f5ec8519.
2020-02-12 18:02:18 +00:00
Anh Tuyen Tran a5b6480d05 [NFC] Remove extra headers included in Loop Unroll and LoopUnrollAndJam files
Summary:
This refactoring patch removes some header files which are not needed and also adds some to meet IWYU principles.

Reviewers: rnk (Reid Kleckner), Meinersbur (Michael Kruse), dmgreen (Dave Green)

Reviewed By: dmgreen (Dave Green), rnk (Reid Kleckner), Meinersbur (Michael Kruse)

Subscribers: dmgreen (Dave Green), Whitney (Whitney Tsang), hiraditya (Aditya Kumar), zzheng (Z. Zheng), llvm-commits, LLVM

Tag: LLVM

Differential Revision: https://reviews.llvm.org/D73498
2020-02-12 17:57:56 +00:00
Alina Sbirlea 4f33a68973 Compute ORE, BPI, BFI in Loop passes.
Summary:
Passes ORE, BPI, BFI are not being preserved by Loop passes, hence it
is incorrect to retrieve these passes as cached.
This patch makes the loop passes in question compute a new instance.

In some of these cases, however, it may be beneficial to change the Loop pass to
a Function pass instead, similar to the change for LoopUnrollAndJam.

Reviewers: chandlerc, dmgreen, jdoerfert, reames

Subscribers: mehdi_amini, hiraditya, zzheng, steven_wu, dexonsmith, Whitney, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72891
2020-02-12 09:15:18 -08:00
Sven van Haastregt 665dcdacc0 Add missing newlines at EOF; NFC 2020-02-12 15:57:25 +00:00
stozer 61b35e4111 Re-reapply: Recover debug intrinsics when killing duplicated/empty blocks
This reverts commit 636c93ed11.

The original patch caused build failures on TSan buildbots. Commit 6ded69f294
fixes this issue by reducing the rate at which empty debug intrinsics
propagate, reducing the memory footprint and preventing a fatal spike.
2020-02-12 14:36:30 +00:00
Florian Hahn 81dbb6aec6 Recommit "[DSE] Add first version of MemorySSA-backed DSE (Bottom up walk)."
This includes a fix for the santizier failures.

This reverts the revert commit
42f8b915eb.
2020-02-12 14:17:50 +00:00
Ayman Musa 35f02aa021 Revert "[AggressiveInstCombine] Add support for ICmp instr that feeds a select instr's condition operand."
This reverts commit cf155150f9.
2020-02-12 15:04:49 +02:00
Ayman Musa cf155150f9 [AggressiveInstCombine] Add support for ICmp instr that feeds a select instr's condition operand. 2020-02-12 15:01:27 +02:00
stozer ffeb64db35 Reapply "[DebugInfo] Prevent explosion of debug intrinsics during jump threading"
This reverts commit 6ded69f294.
2020-02-12 12:39:54 +00:00
Ayman Musa 3bda9059b8 [AggressiveInstCombine] Add support for select instruction.
Differential Revision: https://reviews.llvm.org/D72837
2020-02-12 13:59:34 +02:00
stozer 6ded69f294 Revert "[DebugInfo] Prevent explosion of debug intrinsics during jump threading"
This reverts commit fe6f6cd6b8.

Found test failure on several buildbots.
2020-02-12 11:48:00 +00:00
Ayman Musa 49a4d85f6d [NFC][AggressiveInstCombine] Remove redundant std::max.
Differential Revision: https://reviews.llvm.org/D74476
2020-02-12 13:47:40 +02:00
stozer fe6f6cd6b8 [DebugInfo] Prevent explosion of debug intrinsics during jump threading
This patch is a fix following the revert of 72ce759
(https://reviews.llvm.org/rG72ce759928e6dfee6a9efa310b966c19722352ba)
and fixes the failure that it caused.

The above patch failed on the Thread Sanitizer buildbot with an out of
memory error. After an investigation, the cause was identified as an
explosion in debug intrinsics while running the Jump Threading pass on
ModuleMap.ll. The above patch prevented debug intrinsics from being
dropped when their Basic Block was deleted due to being "empty". In this
case, one of the functions in ModuleMap.ll had (after many optimization
passes) a very large number of debug intrinsics representing a set of
repeatedly inlined variables. Previously the vast majority of these were
silently dropped during Jump Threading when their blocks were deleted,
but as of the above patch they survived for longer, causing a large
increase in the number of debug intrinsics. These intrinsics were then
repeatedly cloned by the Jump Threading pass as edges were threaded,
multiplying the intrinsic count further. The memory consumed by this
process spiralled out of control, crashing the buildbot that uses TSan
(which has an estimated 5-10x memory overhead compared to non-sanitized
builds).

This patch adds RemoveRedundantDbgInstrs to the Jump Threading pass, in
order to reduce the number of debug intrinsics down to a manageable
amount in cases where many intrinsics for the same variable end up
bunched together contiguously, as in this case.

Differential Revision: https://reviews.llvm.org/D73054
2020-02-12 11:22:54 +00:00
Florian Hahn fa74b31a3e Revert "[SCCP] Remove forcedconstant, go to overdefined instead"
This causes a crash for the reproducer below

 enum { a };
 enum b { c, d };
 e;
 static _Bool g(struct f *h, enum b i) {
   i &&j();
   return a;
 }
 static k(char h, enum b i) {
   _Bool l = g(e, i);
   l;
 }
 m(h) {
   k(h, c);
   g(h, d);
 }

This reverts commit aadb635e04.
2020-02-12 09:41:19 +00:00
Matt Arsenault 86f9117d47 AMDGPU: Don't report 2-byte alignment as fast
This is apparently worse than 1-byte alignment. This does not attempt
to decompose 2-byte aligned wide stores, but will stop trying to
produce them.

Also fix bug in LoadStoreVectorizer which was decreasing the alignment
and vectorizing stack accesses. It was assuming a stack object was an
alloca that could have its base alignment changed, which is not true
if the pointer is derived from a function argument.
2020-02-11 18:35:00 -05:00
Johannes Doerfert 52aec3221f [Attributor][NFC] Clarify the documentation a bit more 2020-02-11 15:11:55 -06:00
Johannes Doerfert 8e62968d45 [Attributor] Identify dead uses in PHIs (almost) based on dead edges
As an approximation to a dead edge we can check if the terminator is
dead. If so, the corresponding operand use in a PHI node is dead even if
the PHI node itself is not.
2020-02-11 15:11:55 -06:00
Teresa Johnson 80d0a137a5 Restore "[WPD/LowerTypeTests] Delay lowering/removal of type tests until after ICP"
This restores commit 748bb5a0f1, along
with a fix for a Chromium test suite build issue (and a new test for
that case).

Differential Revision: https://reviews.llvm.org/D73242
2020-02-11 10:48:05 -08:00
Johannes Doerfert 185e9b083e [Attributor][NFC] Improve documentation 2020-02-11 11:19:34 -06:00
Johannes Doerfert f95553923f [Attributor] Return uses do not free pointers
If a pointer is returned that does not mean it is freed in the current
(function) scope. We can ignore such uses in AANoFree.
2020-02-11 11:02:59 -06:00
Johannes Doerfert 4c62a35860 [Attributor][FIX] Remove duplicate, half-broken functionality
The changeXXXAfterManifest functions are better suited to deal with
changes so we should prefer them. These functions also recursively
delete dead instructions which is why we see test changes.
2020-02-11 11:02:59 -06:00
Johannes Doerfert 77a9e61c9a [Attributor][NFC] Improve debug message 2020-02-11 11:02:59 -06:00
Nikita Popov 5a8819b216 [InstCombine] Use replaceOperand() in more places
This is a followup to D73803, which uses the replaceOperand()
helper in more places.

This should be NFC apart from changes to worklist order.

Differential Revision: https://reviews.llvm.org/D73919
2020-02-11 17:38:23 +01:00
Florian Hahn aadb635e04 [SCCP] Remove forcedconstant, go to overdefined instead
This patch removes forcedconstant to simplify things for the
move to ValueLattice, which includes constant ranges, but no
forced constants.

This patch removes forcedconstant and changes ResolvedUndefsIn
to mark instructions with unknown operands as overdefined. This
means we do not do simplifications based on undef directly in SCCP
any longer, but this seems to hardly come up in practice (see stats
below), presumably because InstCombine & others take care
of most of the relevant folds already.

It is still beneficial to keep ResolvedUndefsIn, as it allows us to delay
going to overdefined until we have propagated all known information.

I also built MultiSource, SPEC2000 and SPEC2006 and compared
sccp.IPNumInstRemoved and sccp.NumInstRemoved. It looks like the impact
is quite low:

Tests: 244
Same hash: 238 (filtered out)
Remaining: 6
Metric: sccp.IPNumInstRemoved

Program                                        base     patch    diff
 test-suite...arks/VersaBench/dbms/dbms.test     4.00    3.00  -25.0%
 test-suite...TimberWolfMC/timberwolfmc.test    38.00   34.00  -10.5%
 test-suite...006/453.povray/453.povray.test   158.00  155.00  -1.9%
 test-suite.../CINT2000/176.gcc/176.gcc.test   668.00  668.00   0.0%
 test-suite.../CINT2006/403.gcc/403.gcc.test   1209.00 1209.00  0.0%
 test-suite...arks/mafft/pairlocalalign.test    76.00   76.00   0.0%

Tests: 244
Same hash: 238 (filtered out)
Remaining: 6
Metric: sccp.NumInstRemoved

Program                                        base    patch     diff
 test-suite...arks/mafft/pairlocalalign.test   185.00  175.00  -5.4%
 test-suite.../CINT2006/403.gcc/403.gcc.test   2059.00 2056.00 -0.1%
 test-suite.../CINT2000/176.gcc/176.gcc.test   2358.00 2357.00 -0.0%
 test-suite...006/453.povray/453.povray.test   317.00  317.00   0.0%
 test-suite...TimberWolfMC/timberwolfmc.test    12.00   12.00   0.0%

Reviewers: davide, efriedma, mssimpso

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D61314
2020-02-11 15:24:15 +00:00
Kadir Cetinkaya 42f8b915eb
Revert "[DSE] Add first version of MemorySSA-backed DSE (Bottom up walk)."
This reverts commit d0c4d4fe09.

Revert "[DSE,MSSA] Move more passing test cases from todo to simple.ll."

This reverts commit 02266e64bb.

Revert "[DSE,MSSA] Adjust mda-with-dbg-values.ll to MSSA backed DSE."

This reverts commit 74f03e4ff0.
2020-02-11 15:34:48 +01:00
Sanjay Patel a2a0f9a43a [VectorCombine] remove unused debug counter; NFC
The variable was added to the initial commit via copy/paste of existing
code, but it wasn't actually used in the code. We can add it back with
the proper usage if/when that is needed.
2020-02-11 08:24:07 -05:00
Sanjay Patel b8ebc11f03 [EarlyCSE] avoid crashing when detecting min/max/abs patterns (PR41083)
As discussed in PR41083:
https://bugs.llvm.org/show_bug.cgi?id=41083
...we can assert/crash in EarlyCSE using the current hashing scheme and
instructions with flags.

ValueTracking's matchSelectPattern() may rely on overflow (nsw, etc) or
other flags when detecting patterns such as min/max/abs composed of
compare+select. But the value numbering / hashing mechanism used by
EarlyCSE intersects those flags to allow more CSE.
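
For example, a compare+select maximum like the following can be matched
purely structurally, without consulting instruction flags (illustrative
IR, not one of the patch's tests):

```
define i32 @smax(i32 %x, i32 %y) {
  ; Recognized as a signed max from the icmp/select shape alone.
  %cmp = icmp sgt i32 %x, %y
  %max = select i1 %cmp, i32 %x, i32 %y
  ret i32 %max
}
```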

Several alternatives to solve this are discussed in the bug report.
This patch avoids the issue by doing simple matching of min/max/abs
patterns that never requires instruction flags. We give up some CSE
power because of that, but that is not expected to result in much
actual performance difference because InstCombine will canonicalize
these patterns when possible. It even has this comment for abs/nabs:

  /// Canonicalize all these variants to 1 pattern.
  /// This makes CSE more likely.

(And this patch adds PhaseOrdering tests to verify that the expected
transforms are still happening in the standard optimization pipelines.)

I left this code to use ValueTracking's "flavor" enum values, so we
don't have to change the callers' code. If we decide to go back to
using the ValueTracking call (by changing the hashing algorithm
instead), it should be obvious how to replace this chunk.

Differential Revision: https://reviews.llvm.org/D74285
2020-02-10 17:25:34 -05:00
Hiroshi Yamauchi bb383ae612 [CallPromotionUtils] Add tryPromoteCall.
Summary: It attempts to devirtualize a call on an alloca through vtable loads.

Reviewers: davidxl

Subscribers: mgorny, Prazek, hiraditya, llvm-commits

Tags: #llvm
Differential Revision: https://reviews.llvm.org/D71308
2020-02-10 13:43:16 -08:00
Sanjay Patel 62ce7e650a [InstCombine] fix use check when canonicalizing abs/nabs
We were checking for extra uses of the negated operand even
if we were not going to create it as part of this canonicalization.

This was showing up as a regression when we limit EarlyCSE as
proposed in D74285.
2020-02-10 14:57:37 -05:00
David Stenberg 982944525c Revert "[InstCombine][DebugInfo] Fold constants wrapped in metadata"
This reverts commit b54a8ec1bc.

The commit triggered debug invariance (different output with/without
-g). The patch seems to have exposed a pre-existing invariance problem
in GlobalOpt, which I'll write a bug report for.
2020-02-10 17:58:33 +01:00
Bill Wendling c55cf4afa9 Revert "Remove redundant "std::move"s in return statements"
The build failed with

  error: call to deleted constructor of 'llvm::Error'

errors.

This reverts commit 1c2241a793.
2020-02-10 07:07:40 -08:00
Bill Wendling 1c2241a793 Remove redundant "std::move"s in return statements 2020-02-10 06:39:44 -08:00
Mikael Holmen a50c0b0df7 Fix compiler warning when compiling without asserts [NFC] 2020-02-10 13:55:52 +01:00
Florian Hahn d0c4d4fe09 [DSE] Add first version of MemorySSA-backed DSE (Bottom up walk).
This patch adds a first version of a MemorySSA based DSE. It is missing
a lot of features, which will get added as follow-ups, to help to keep
the review manageable.

The patch uses the following general approach: given a MemoryDef, walk
upwards to find clobbering MemoryDefs that may be killed by the
starting def. Then check that there are no uses that may read the
location of the original MemoryDef in between both MemoryDefs. A bit
more concretely:

For all MemoryDefs StartDef:
1. Get the next dominating clobbering MemoryDef (DomAccess) by walking upwards.
2. Check that there are no reads between DomAccess and the StartDef by checking
   all uses starting at DomAccess and walking until we see StartDef.
3. For each found DomDef, check that:
  1. There are no barrier instructions between DomDef and StartDef (like
     throws or stores with ordering constraints).
  2. StartDef is executed whenever DomDef is executed.
  3. StartDef completely overwrites DomDef.
4. Erase DomDef from the function and MemorySSA.

The patch uses a very simple approach to guarantee that no throwing
instructions are between 2 stores: We only allow accesses to stack
objects, accesses that are in the same basic block if the block does not
contain any throwing instructions, or accesses in functions that do not
not contain any throwing instructions. This will get lifted later.

Besides adding support for the missing cases, there is plenty of additional
potential for improvements as follow-up work, e.g. the way we visit stores
(could be just a traversal of the MemorySSA, rather than collecting them
up-front), using the alias information discovered during walking to optimize
the MemorySSA.

This is loosely based on D40480 by Dave Green.

Reviewers: dmgreen, rnk, efriedma, bryant, asbirlea, Tyker

Reviewed By: asbirlea

Differential Revision: https://reviews.llvm.org/D72700
2020-02-10 11:52:11 +00:00
Florian Hahn da52b9c118 [DSE] Add tests for MemorySSA based DSE.
This copies the DSE tests into a MSSA subdirectory to test the MemorySSA
backed DSE implementation, without disturbing the original tests.

Differential Revision: https://reviews.llvm.org/D72145
2020-02-10 10:28:43 +00:00
Johannes Doerfert 87ddf1f4fa [Attributor] Simple casts preserve no-alias property
This is a minimal but important advancement over the existing code. A
cast with an operand that is only used in the cast retains the no-alias
property of the operand.
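
A hypothetical example of the case this covers:

```
define i8* @pass_through(i32* noalias %p) {
  ; %p is only used by the cast, so the cast result can be treated as
  ; no-alias as well.
  %c = bitcast i32* %p to i8*
  ret i8* %c
}
```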
2020-02-10 01:11:32 -06:00
Johannes Doerfert 8155439331 [Attributor] Allow PHI nodes in AAValueConstantRangeFloating
Traversing PHI nodes is natural with the genericValueTraversal but also
a bit tricky. The problem is similar to the ones we have seen in AAAlign
and AADereferenceable, namely that we continue to increase the range in
each iteration. We use a pessimistic approach here to stop the
iterations. Nevertheless, optimistic information can now be propagated
through a PHI node.
2020-02-10 00:55:10 -06:00
Johannes Doerfert 63adbb9a0e [Attributor][FIX] Remove FIXME that seems outdated
The change is performed as stated by the FIXME and the tests are
adjusted. All changes look fine to me and values can be inferred as
undef without it being an error.
2020-02-10 00:55:10 -06:00
Johannes Doerfert 7e7e6594b3 [Attributor] Allow SelectInst in AAValueConstantRangeFloating
The genericValueTraversal will already handle SelectInst properly and we
just needed to allow them in the initialize method.
2020-02-10 00:55:09 -06:00
Johannes Doerfert ffdbd2a06c [Attributor] Look through (some) casts in AAValueConstantRangeFloating
Casts can be handled natively by the ConstantRange class. We do limit it
to extends for now as we assume an integer type in different locations.
A TODO and a test case with a FIXME was added to remove that restriction
in the future.
2020-02-10 00:38:01 -06:00
Johannes Doerfert 028db8c490 [Attributor][FIX] Call right base method in AAValueConstantRangeFloating
We now call the base class method as we should.
2020-02-10 00:38:01 -06:00
Michael Liao ab3da5dd66 Fix `-Wparentheses` warning. NFC. 2020-02-10 00:45:02 -05:00
Sanjay Patel a17f03bd93 [VectorCombine] new IR transform pass for partial vector ops
We have several bug reports that could be characterized as "reducing scalarization",
and this topic was also raised on llvm-dev recently:
http://lists.llvm.org/pipermail/llvm-dev/2020-January/138157.html
...so I'm proposing that we deal with these patterns in a new, lightweight IR vector
pass that runs before/after other vectorization passes.

There are 4 alternate options that I can think of to deal with this kind of problem
(and we've seen various attempts at all of these), but they all have flaws:

    InstCombine - can't happen without TTI, but we don't want target-specific
                  folds there.
    SDAG - too late to assist other vectorization passes; TLI is not equipped
           for these kind of cost queries; limited to a single basic block.
    CGP - too late to assist other vectorization passes; would need to re-implement
          basic cleanups like CSE/instcombine.
    SLP - doesn't fit with existing transforms; limited to a single basic block.

This initial patch/transform is based on existing code in AggressiveInstCombine:
we walk backwards through the function looking for a pattern match. But we diverge
from that cost-independent IR canonicalization pass by using TTI to decide if the
vector alternative is profitable.

We probably have at least 10 similar bug reports/patterns (binops, constants,
inserts, cheap shuffles, etc) that would fit in this pass as follow-up enhancements.
It's possible that we could iterate on a worklist to fix-point like InstCombine does,
but it's safer to start with a most basic case and evolve from there, so I didn't
try to do anything fancy with this initial implementation.

Differential Revision: https://reviews.llvm.org/D73480
2020-02-09 10:04:41 -05:00
Ehud Katz 3b70ee27a5 [LoopExtractor] Convert LoopExtractor from LoopPass to ModulePass
The LoopExtractor created new functions (by definition), which violates
the restrictions of a LoopPass.
The correct implementation of this pass should be as a ModulePass.
This includes reverting the implications of rL82990 on the LoopExtractor.

Fixes PR3082 and PR8929.

Differential Revision: https://reviews.llvm.org/D69069
2020-02-09 12:25:21 +02:00
Johannes Doerfert b0c77c36d2 [Attributor] Add an Attributor CGSCC pass and run it
In addition to the module pass, this patch introduces a CGSCC pass that
runs the Attributor on a strongly connected component of the call graph
(both old and new PM). The Attributor was always designed to be used on a
subset of functions, which makes this patch mostly mechanical.

The one change is that we give up `norecurse` deduction in the module
pass in favor of doing it during the CGSCC pass. This makes the
interfaces simpler but can be revisited if needed.

Reviewed By: hfinkel

Differential Revision: https://reviews.llvm.org/D70767
2020-02-08 21:27:34 -06:00
Johannes Doerfert e565db49c6 [OpenMP][Opt] Delete terminating and read-only parallel regions
Parallel regions known to be read-only, e.g., after we removed all dead
write accesses, and terminating (`willreturn`) can be removed.

Reviewed By: JonChesterfield

Differential Revision: https://reviews.llvm.org/D69954
2020-02-08 18:52:04 -06:00
Johannes Doerfert e28936f613 [OpenMP][Opt] Annotate known runtime functions and deduplicate more
This adds ~27 more runtime calls to the OpenMPKinds.def file, all with
attributes. We deduplicate 16 of those automatically in function =
thread scope. And we annotate all of them automatically during the
OpenMPOpt discovery step. A test with all omp_XXXX runtime calls to
track annotation coverage is included.

Reviewed By: JonChesterfield

Differential Revision: https://reviews.llvm.org/D69984
2020-02-08 18:35:39 -06:00
Nikita Popov a05932931c [InstCombine] Refactor foldICmpAndShift(); NFCI
Separate out handling for shl, lshr and ashr. The combined handling
obscured some overly pessimistic requirements for the transform.
2020-02-08 22:27:43 +01:00
Johannes Doerfert 9548b74a83 [OpenMP] Introduce the OpenMPOpt transformation pass
The OpenMPOpt pass is a CGSCC pass in which OpenMP specific
optimizations can reside.

The OpenMPOpt pass uses the OpenMPKinds.def file to identify runtime
calls and their uses. This allows targeted transformations and eases
their implementation.

This initial patch deduplicates `__kmpc_global_thread_num` and
`omp_get_thread_num` calls. We can also identify arguments that are
equivalent to such a call result and use it instead. Later we can
determine "gtid" arguments based on the use in kernel functions etc.

Reviewed By: JonChesterfield

Differential Revision: https://reviews.llvm.org/D69930
2020-02-08 14:47:03 -06:00
Johannes Doerfert 72277ecd62 Introduce a CallGraph updater helper class
The CallGraphUpdater is a helper that simplifies the process of updating
the call graph, both old and new style, while running an CGSCC pass.

The uses are contained in different commits, e.g. D70767.

More functionality is added as we need it.

Reviewed By: modocache, hfinkel

Differential Revision: https://reviews.llvm.org/D70927
2020-02-08 14:16:48 -06:00
George Burgess IV f8c9ceb1ce [SimplifyLibCalls] Add __strlen_chk.
Bionic has had `__strlen_chk` for a while. Optimizing that into a
constant is quite profitable, when possible.
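
For instance, a call on a known-constant string can fold to a constant
(illustrative IR, assuming a 64-bit size_t):

```
@str = private constant [6 x i8] c"hello\00"

declare i64 @__strlen_chk(i8*, i64)

define i64 @len() {
  %p = getelementptr inbounds [6 x i8], [6 x i8]* @str, i64 0, i64 0
  %n = call i64 @__strlen_chk(i8* %p, i64 6)
  ; With the folding in place, this simplifies to: ret i64 5
  ret i64 %n
}
```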

Differential Revision: https://reviews.llvm.org/D74079
2020-02-08 11:51:00 -08:00
Nikita Popov a148b9e990 [InstCombine] Fix infinite min/max canonicalization loop (PR44541)
While D72944 also fixes https://bugs.llvm.org/show_bug.cgi?id=44541,
it does so in a more roundabout manner and there might be other
loopholes to trigger the same issue. This is a more direct fix,
that prevents the transform if the min/max is based on a
non-canonical sub X, 0 instruction.

Differential Revision: https://reviews.llvm.org/D73849
2020-02-08 20:42:17 +01:00
Nikita Popov 5b2b67be8e [InstCombine] Remove unnecessary worklist push; NFCI
This is no longer needed after d4627b90a0,
should have dropped it there...
2020-02-08 17:09:28 +01:00
Nikita Popov d4627b90a0 [InstCombine] Avoid modifying instructions in-place
As discussed on D73919, this replaces a few cases where we were
modifying multiple operands of instructions in-place with the
creation of a new instruction, which we generally prefer nowadays.

This tends to be more readable and less prone to worklist management
bugs.

Test changes are only superficial (instruction naming and order).
2020-02-08 17:05:56 +01:00
Nikita Popov 9d03b7d0d0 [InstCombine] Use swapValues(); NFC
Less code, and makes it more obvious that these operands do not
need to be added back to the worklist.
2020-02-08 16:57:28 +01:00
Nikita Popov 23db9724d0 [InstCombine] Fix infinite loop in min/max load/store bitcast combine (PR44835)
Fixes https://bugs.llvm.org/show_bug.cgi?id=44835. Skip the transform
if it wouldn't actually do anything (apart from removing and reinserting
the same instructions).

Note that the test case doesn't loop on current master anymore, only
on the LLVM 10 release branch. The issue is already mitigated on master
due to worklist order fixes, but we should fix the root cause there as well.

As a side note, we should probably assert in combineLoadToNewType()
that it does not combine to the same type. Not doing this here, because
this assertion would also be triggered in another place right now.

Differential Revision: https://reviews.llvm.org/D74278
2020-02-08 16:55:22 +01:00
Akira Hatanaka 4dcc029edb [ObjC][ARC] Keep track of phis that have been discovered to avoid an
infinite loop

This fixes a bug introduced in 6770fbb314.

rdar://problem/59137105
2020-02-07 20:33:11 -08:00
Akira Hatanaka 6770fbb314 [ObjC][ARC] Delete ARC runtime calls that take inert phi values
This improves on the following patch, which removed ARC runtime calls
taking inert global variables:

https://reviews.llvm.org/D62433

rdar://problem/59137105
2020-02-07 16:31:36 -08:00
Hiroshi Yamauchi 4ed205c816 [PGO][PGSO] Enable profile guided size optimization for non-cold code under instrumentation PGO.
Summary:
This enables it for large working set size cases only.

This does not enable it under sample PGO.

Reviewers: davidxl

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74073
2020-02-06 10:29:01 -08:00
Denis Antrushin 99a6e405ed [IRCE] Use SCEVExpander to modify loop bound
The IRCE pass checks that it can calculate loop bounds by checking
SCEV availability at loop entry. However, it is possible that the loop
bound SCEV is loop invariant, but the instruction used to compute it
resides within the loop. In such a case, adjusting the loop bound in
the preheader using IRBuilder leads to malformed SSA.
Use SCEVExpander instead to generate proper instructions.

Reviewed-by: mkazantsev
Differential Revision: https://reviews.llvm.org/D73496
2020-02-06 12:44:43 +03:00
Teresa Johnson 25aa2eef99 Revert "[WPD/LowerTypeTests] Delay lowering/removal of type tests until after ICP"
This reverts commit 748bb5a0f1.

Due to Chromium CFI+ThinLTO test crashes reported on patch.
2020-02-05 19:27:32 -08:00
Juneyoung Lee 5687acf431 [MemCpyOpt] Simplify find*Alignment 2020-02-06 06:42:07 +09:00
Juneyoung Lee ad9ae6ee2b MemCpyOpt cannot use ABI alignment even if it was not given
Summary: This patch fixes https://bugs.llvm.org/show_bug.cgi?id=44388, where an ABI alignment was incorrectly assigned to a memset when no explicit alignment was given.
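
An illustrative case (hypothetical IR): with no align attribute on the
destination, only 1-byte alignment may be assumed for the memset, not
the ABI alignment of the pointee type.

```
declare void @llvm.memset.p0i8.i64(i8* nocapture writeonly, i8, i64, i1 immarg)

define void @zero(i8* %dst) {
  ; No alignment attribute on %dst: assume align 1 here.
  call void @llvm.memset.p0i8.i64(i8* %dst, i8 0, i64 16, i1 false)
  ret void
}
```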

Reviewers: gchatelet, lenary, nikic

Reviewed By: nikic

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74083
2020-02-06 06:21:55 +09:00
Hiroshi Yamauchi b70f23f599 [PGO][PGSO] Tune flags for profile guided size optimization.
Summary:
Tune the profile threshold flag value for instrumentation PGO based on internal
benchmarks.

Also, add flags to allow profile guided size optimizations for non-cold code
to be enabled separately for instrumentation and sample PGSO.

Neither changes the default behavior (yet) as it's disabled for non-cold code.

Reviewers: davidxl

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72937
2020-02-05 09:37:32 -08:00
Kazu Hirata 4698bf145d Resubmit^2: [JumpThreading] Thread jumps through two basic blocks
This reverts commit 41784bed01.

Since the original revision ead815924e,
this revision fixes three issues:

- This revision fixes the Windows build.  My original patch improperly
  copied EH pads on Windows.  This patch disregards jump threading
  opportunities having to do with EH pads.

- This revision fixes jump threading to a wrong destination.
  Specifically, my original patch treated any Constant other than 0 as 1
  while evaluating the branch condition.  This bug led to treating
  constant expressions like:

    icmp ugt i8* null, inttoptr (i64 4 to i8*)

  to "true".  This patch fixes the bug by calling isOneValue.

- This revision fixes the cost calculation of two basic blocks being
  threaded through.  Note that getJumpThreadDuplicationCost returns
  "(unsigned)~0" for those basic blocks that cannot be duplicated.  If
we sum the two return values from getJumpThreadDuplicationCost, we
  could have an unsigned overflow like:

    (unsigned)~0 + 5 = 4

  and mistakenly determine that it's safe and profitable to proceed
  with the jump threading opportunity.  The patch fixes the bug by
  checking each return value before summing them up.

[JumpThreading] Thread jumps through two basic blocks

Summary:
This patch teaches JumpThreading.cpp to thread through two basic
blocks like:

  bb3:
    %var = phi i32* [ null, %bb1 ], [ @a, %bb2 ]
    %tobool = icmp eq i32 %cond, 0
    br i1 %tobool, label %bb4, label ...

  bb4:
    %cmp = icmp eq i32* %var, null
    br i1 %cmp, label bb5, label bb6

by duplicating basic blocks like bb3 above.  Once we duplicate bb3 as
bb3.dup and redirect edge bb2->bb3 to bb2->bb3.dup, we have:

  bb3:
    %var = phi i32* [ @a, %bb2 ]
    %tobool = icmp eq i32 %cond, 0
    br i1 %tobool, label %bb4, label ...

  bb3.dup:
    %var = phi i32* [ null, %bb1 ]
    %tobool = icmp eq i32 %cond, 0
    br i1 %tobool, label %bb4, label ...

  bb4:
    %cmp = icmp eq i32* %var, null
    br i1 %cmp, label bb5, label bb6

Then the existing code in JumpThreading.cpp can thread edge
bb3.dup->bb4 through bb4 and eventually create bb3.dup->bb5.

Reviewers: wmi

Subscribers: hiraditya, jfb, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70247
2020-02-05 09:23:37 -08:00
Alina Sbirlea 67904db23c [IRCE] Make IRCE a Function pass.
Summary: Make InductiveRangeCheckElimination a FunctionPass.

Reviewers: reames, mkazantsev

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73592
2020-02-05 09:22:41 -08:00
Teresa Johnson 748bb5a0f1 [WPD/LowerTypeTests] Delay lowering/removal of type tests until after ICP
Summary:
Currently type test assume sequences inserted for devirtualization are
removed during WPD. This patch delays their removal until later in the
optimization pipeline. This is an enabler for upcoming enhancements to
indirect call promotion, for example streamlined promotion guard
sequences that compare against vtable address instead of the target
function, when there are small number of possible vtables (either
determined via WPD or by in-progress type profiling). We need the type
tests to correlate the callsites with the address point offset needed in
the compare sequence, and optionally to associated type summary info
computed during WPD.

This depends on work in D71913 to enable invocation of LowerTypeTests to
drop type test assume sequences, which will now be invoked following ICP
in the ThinLTO post-LTO link pipelines, and also after the existing
export phase LowerTypeTests invocation in regular LTO (which is already
after ICP). We cannot simply move the existing import phase
LowerTypeTests pass later in the ThinLTO post link pipelines, as the
comment in PassBuilder.cpp notes (it must run early because when
performing CFI other passes may disturb the sequences it looks for).

This necessitated adding a new type test resolution "Unknown" that we
can use on the type test assume sequences previously removed by WPD,
that we now want LTT to ignore.

Depends on D71913.

Reviewers: pcc, evgeny777

Subscribers: mehdi_amini, Prazek, hiraditya, steven_wu, dexonsmith, arphaman, davidxl, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D73242
2020-02-05 08:59:48 -08:00
Alina Sbirlea 1c03cc5a39 [NFCI] Update according to style.
clang-tidy + clang-format
2020-02-04 17:11:36 -08:00
Sanjay Patel dc42ff6697 [InstCombine] add FIXME comment to shuffle transform; NFC
Existing tests:
rG5d04e008f708
rG2a191cf8500f
...should verify that the underlying analysis doesn't improve
too much without updating this user code.
2020-02-04 13:02:06 -05:00
Matt Arsenault a3c814d234 Separately track input and output denormal mode
AMDGPU and x86 at least both have separate controls for whether
denormal results are flushed on output, and for whether denormals are
implicitly treated as 0 as an input. The current DAGCombiner use only
really cares about the input treatment of denormals.
2020-02-04 12:59:21 -05:00
Sanjay Patel 0cf0be993c [InstCombine] fix operands of shouldChangeType() for casted phi transform
This is a bug noted in the recent D72733 and seen
in the similar transform just above the changed source code.

I added tests with illegal types and zexts to show the bug -
we could transform legal phi ops to illegal, etc. I did not add
tests with trunc because we won't see any diffs on those patterns.
That is because InstCombiner::SliceUpIllegalIntegerPHI() appears to
do those transforms independently of datalayout. It can also create
more casts than are present in existing code.

There are some existing regression tests that do not include a
datalayout that would be altered by this fix. I assumed that the
lack of a datalayout in those regression files is an oversight, so
I added the minimal layout (make i32 legal) necessary to preserve
behavior on those tests.

Differential Revision: https://reviews.llvm.org/D73907
2020-02-04 07:45:48 -05:00
Thomas Raoux e53bbf1213 [GVN] Add GVNOption to control load-pre more fine-grained.
Adds the global (cl::opt) GVNOption enable-load-in-loop-pre in order
to control whether the optimization will be performed if the load
is part of a loop.

Patch by Hendrik Greving!

Differential Revision: https://reviews.llvm.org/D73804
2020-02-03 23:00:58 -08:00
Tyker 15f54d348b [NFC] Factor out function to detect if an attribute has an argument. 2020-02-03 22:27:24 +01:00
Alina Sbirlea 388de9dfcd [LoopUtils] Make duplicate method a utility. [NFCI]
Summary:
Method appendLoopsToWorklist is duplicate in LoopUnroll and in the
LoopPassManager as an internal method. Make it an utility.

Reviewers: dmgreen, chandlerc, fedor.sergeev, yamauchi

Subscribers: mehdi_amini, hiraditya, zzheng, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73569
2020-02-03 10:24:18 -08:00
Nikita Popov 575a975afd [SimplifyLibCalls] Remove unused IRBuilder argument; NFC
isLocallyOpenedFile() does not use IRBuilder.
2020-02-03 19:12:57 +01:00
Nikita Popov 878cb38a5c [InstCombine] Add replaceOperand() helper
Adds a replaceOperand() helper, which is like Instruction.setOperand()
but adds the old operand to the worklist. This reduces the amount of
missing or incorrect worklist management.

This only applies the helper to a relatively small subset of
setOperand() calls in InstCombine, namely those of the pattern
`I.setOperand(); return &I;`, where it is most obviously applicable.

Differential Revision: https://reviews.llvm.org/D73803
2020-02-03 19:00:17 +01:00
Nikita Popov e6c9ab4fb7 [InstCombine] Rename worklist methods; NFC
This renames Worklist.AddDeferred() to Worklist.add() and
Worklist.Add() to Worklist.push(). The intention here is that
Worklist.add() should be the go-to method for explicit worklist
management, while the raw Worklist.push() is mostly for
InstCombine internals. I will then migrate uses of Worklist.push()
to Worklist.add() in followup changes.

As suggested by spatel on D73411 I'm also changing the remaining
method names to lowercase first character, in line with current
coding standards.

Differential Revision: https://reviews.llvm.org/D73745
2020-02-03 18:56:51 +01:00
Nikita Popov a59954051e [InstCombine] Fix unused variable warning; NFC 2020-02-03 18:47:38 +01:00
Teresa Johnson bed4d9c897 [ThinLTO] More efficient export computation (NFC)
Summary:
A recent change to enable more importing of global variables with
references exposed some efficiency issues with export computation.
See D73724 for more information and detailed analysis.

The first was specific to variable importing. The code was marking every
copy of a referenced value (from possibly thousands of files in the case
of linkonce_odr) as exported, and we only need to mark the copy in the
module containing the variable def being imported as exported. The
reason is that this is tracking what values are newly exported as a
result of importing. Anything that was defined in another module and
simply used in the exporting module is already exported, and would have
been identified by the caller (e.g. the LTO API implementations).

The second issue is that the code was re-adding previously exported
values (along with all references). It is easy to identify when a
variable was already imported into the same module (via the
import list insert call return value), and we already did this for
function importing. However, what we weren't doing for either function
or variable importing was avoiding a re-insertion when it was previously
exported into a different importing module. The reason we couldn't do
this is there was no way of telling from the export list whether it was
previously inserted there because its definition was exported (in which
case we already marked all its references as exported) from when it was
inserted there because it was referenced by another exported value (in
which case we haven't yet inserted its own references).

To address this we can restructure the way the export list is
constructed. This patch only adds the actual imported definitions
(variable or function) to the export list for its module during the
import computation. After import computation is complete, where we were
already post-processing the export list we go ahead and add all
references made by those exported values to the export list.

These changes speed up the thin link not only with constant variable
importing enabled, but also without (due to the efficiency improvement
in function importing).

Some thin link user time measurements for one large application, average
of 5 runs:

With constant variable importing enabled:
- without this patch: 479.5s
- with this patch: 74.6s

Without constant variable importing enabled:
- without this patch: 80.6s
- with this patch: 70.3s

Note I have not re-enabled constant variable importing here, as I would
like to do additional compile time measurements with these fixes first.

Reviewers: evgeny777

Subscribers: mehdi_amini, inglorion, hiraditya, dexonsmith, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73851
2020-02-03 09:15:33 -08:00
Sanjay Patel e78fb556c5 [InstCombine] reassociate splatted vector ops
bo (splat X), (bo Y, OtherOp) --> bo (splat (bo X, Y)), OtherOp

This patch depends on the splat analysis enhancement in D73549.
See the test with comment:
; Negative test - mismatched splat elements
...as the motivation for that first patch.

The motivating case for reassociating splatted ops is shown in PR42174:
https://bugs.llvm.org/show_bug.cgi?id=42174

In that example, a slight change in order-of-associative math results
in a big difference in IR and codegen. This patch gets all of the
unnecessary shuffles out of the way, but doesn't address the potential
scalarization (see D50992 or D73480 for that).

Differential Revision: https://reviews.llvm.org/D73703
2020-02-03 09:08:36 -05:00
Sam Parker 2663a25fad [JumpThreading] Halve the duplicate threshold at Oz
Duplicating instructions can lead to code size increases, but using
a threshold of 3 is good for reducing code size.

Differential Revision: https://reviews.llvm.org/D72916
2020-02-03 08:40:20 +00:00
Johannes Doerfert 26d02b0f28 [Attributor] AANoRecurse check all call sites for `norecurse`
If all call sites are in `norecurse` functions we can derive `norecurse`
as the ReversePostOrderFunctionAttrsPass does. This should make
ReversePostOrderFunctionAttrsLegacyPass obsolete once the Attributor is
enabled.

Reviewed By: uenoku

Differential Revision: https://reviews.llvm.org/D72017
2020-02-02 23:57:17 -06:00
Johannes Doerfert 368f7ee7a5 [Attributor] Propagate known information from `checkForAllCallSites`
If we know that all call sites have been processed we can derive an
early fixpoint. The use in this patch is likely not to trigger right now
but a follow up patch will make use of it.

Reviewed By: uenoku, baziotis

Differential Revision: https://reviews.llvm.org/D72016
2020-02-02 23:57:17 -06:00
Juneyoung Lee 578d2e2cb1 [llvm-extract] Add -keep-const-init commandline option
Summary:
This adds -keep-const-init option to llvm-extract which preserves initializers of
used global constants.

For example:

```
$ cat a.ll
@g = constant i32 0
define i32 @f() {
  %v = load i32, i32* @g
  ret i32 %v
}

$ llvm-extract --func=f a.ll -S -o -
@g = external constant i32
define i32 @f() { .. }

$ llvm-extract --func=f a.ll -keep-const-init -S -o -
@g = constant i32 0
define i32 @f() { .. }
```

This option is useful in checking whether a function that uses a constant global is optimized correctly.

Reviewers: jsji, MaskRay, david2050

Reviewed By: MaskRay

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73833
2020-02-03 14:30:28 +09:00
Johannes Doerfert 342357c568 [Inliner][NoAlias] Use call site attributes too
If we had `noalias` on an argument the inliner created alias scope
metadata already. However, the call site `noalias` annotation was not
considered. Since the Attributor can derive such call site `noalias`
annotation we should treat them the same as argument annotations.

Reviewed By: hfinkel

Differential Revision: https://reviews.llvm.org/D73528
2020-02-02 23:21:29 -06:00
Tyker a7bbe45a3e Build assume from call
Fix attempt

This is part of the implementation of http://lists.llvm.org/pipermail/llvm-dev/2019-December/137632.html

This patch gives the basis for building an assume to preserve all information from an instruction, and adds support for building an assume that preserves the information from a call.
2020-02-02 19:43:36 +01:00
Tyker 7cb5d96fbe Revert "[WIP] Build assume from call"
casued buildbot failure

This reverts commit 8ebe001553.
2020-02-02 18:35:19 +01:00
Tyker 8ebe001553 [WIP] Build assume from call
Summary:
This is part of the implementation of http://lists.llvm.org/pipermail/llvm-dev/2019-December/137632.html

This patch gives the basis for building an assume to preserve all information from an instruction, and adds support for building an assume that preserves the information from a call.

Reviewers: jdoerfert

Reviewed By: jdoerfert

Subscribers: mgrang, fhahn, mgorny, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72475
2020-02-02 18:15:50 +01:00
Tyker c2d0336208 Revert "[WIP] Build assume from call"
caused build bot failure

This reverts commit 780d2c532f.
2020-02-02 18:09:06 +01:00
Tyker 780d2c532f [WIP] Build assume from call
Summary:
This is part of the implementation of http://lists.llvm.org/pipermail/llvm-dev/2019-December/137632.html

This patch gives the basis for building an assume to preserve all information from an instruction, and adds support for building an assume that preserves the information from a call.

Reviewers: jdoerfert

Reviewed By: jdoerfert

Subscribers: mgrang, fhahn, mgorny, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72475
2020-02-02 17:54:31 +01:00
Tyker ad8ffc5010 Revert "[WIP] Build assume from call"
This reverts commit 355e4bfd78.
2020-02-02 17:49:23 +01:00
Tyker 355e4bfd78 [WIP] Build assume from call
Summary:
This is part of the implementation of http://lists.llvm.org/pipermail/llvm-dev/2019-December/137632.html

This patch gives the basis for building an assume to preserve all information from an instruction, and adds support for building an assume that preserves the information from a call.

Reviewers: jdoerfert

Reviewed By: jdoerfert

Subscribers: mgrang, fhahn, mgorny, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72475
2020-02-02 17:17:46 +01:00
Tyker 0adda3df92 Revert "[WIP] Build assume from call"
This reverts commit 2ff5602cb5.
2020-02-02 15:05:33 +01:00
Tyker d431c5d9af Revert "[NFC] Factor out function to detect if an attribute has an argument."
This reverts commit ff1b9add2f.
2020-02-02 15:03:06 +01:00
Tyker ff1b9add2f [NFC] Factor out function to detect if an attribute has an argument.
Reviewers: jdoerfert

Reviewed By: jdoerfert

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72884
2020-02-02 14:50:31 +01:00
Tyker 2ff5602cb5 [WIP] Build assume from call
Summary:
This is part of the implementation of http://lists.llvm.org/pipermail/llvm-dev/2019-December/137632.html

This patch gives the basis for building an assume to preserve all information from an instruction, and adds support for building an assume that preserves the information from a call.

Reviewers: jdoerfert

Reviewed By: jdoerfert

Subscribers: mgrang, fhahn, mgorny, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72475
2020-02-02 14:50:31 +01:00
Fangrui Song ba3a1774a9 [Transforms] Simplify with make_early_inc_range 2020-02-02 00:54:32 -08:00
Artur Pilipenko 34547ac959 NFC. Comments cleanup in DSE::memoryIsNotModifiedBetween
Separated from https://reviews.llvm.org/D68006 review.
2020-01-31 15:22:33 -08:00
Nikita Popov ff17da3f75 [InstCombine] Push negation through multiply (PR44234)
Fixes https://bugs.llvm.org/show_bug.cgi?id=44234 by adding
multiply support to freelyNegateValue(). Only one of the operands
needs to be negatible, so this still fits within the framework.
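
A small illustration of the identity this relies on (hypothetical IR):

```
define i32 @neg_of_mul(i32 %x) {
  ; -(%x * 3) == %x * -3, so negating the multiply is free as long as
  ; one of its operands is negatible.
  %m = mul i32 %x, 3
  %n = sub i32 0, %m
  ret i32 %n
}
```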

Differential Revision: https://reviews.llvm.org/D73410
2020-01-31 20:58:55 +01:00
Hiroshi Yamauchi ac8da31a0f [PGO][PGSO] Handle MBFIWrapper
Some code gen passes use MBFIWrapper to keep track of the frequency of new
blocks. This was not taken into account and could lead to incorrect frequencies
as MBFI silently returns zero frequency for unknown/new blocks.

Add a variant for MBFIWrapper in the PGSO query interface.

Depends on D73494.
2020-01-31 09:36:55 -08:00
Sanjay Patel bc1148e7bc [PATCH] D73727: [SLP] drop poison-generating flags for shuffle reduction ops (PR44536)
We may calculate reassociable math ops in arbitrary order when creating a shuffle reduction,
so there's no guarantee that things like 'nsw' hold on those intermediate values. Drop all
poison-generating flags for safety.

This change is limited to shuffle reductions because I don't think we have a problem in the
general case (where we intersect flags of each scalar op that goes into a vector op), but if
there's evidence of other cases being wrong, we can extend this fix to cover those cases.

https://bugs.llvm.org/show_bug.cgi?id=44536

Differential Revision: https://reviews.llvm.org/D73727
2020-01-31 09:54:35 -05:00
Nikita Popov 480391035c [InstCombine] Remove unnecessary worklist add; NFCI
Again, this will already be added by IRBuilder.
2020-01-30 23:24:59 +01:00
Nikita Popov 90b5ed996b [InstCombine] Remove unnecessary worklist add; NFCI
The IRBuilder will automatically add instructions to the worklist.
Adding it manually is unnecessary, but may mess up worklist order.
2020-01-30 23:06:28 +01:00
Nikita Popov cad91074a6 [InstCombine] Create new insts in foldICmpEqIntrinsicWithConstant; NFCI
In line with current conventions, create new instructions rather
than modify two operands in place and performing manual worklist
management.

This should be NFC apart from possible worklist order changes.
2020-01-30 23:03:16 +01:00
Whitney Tsang e44f4a8a54 [LoopFusion] Move instructions from FC1.GuardBlock to FC0.GuardBlock and
from FC0.ExitBlock to FC1.ExitBlock when proven safe.

Summary:
Currently LoopFusion gives up when the second loop nest guard
block or the first loop nest exit block is not empty. For example:

if (0 < N) {
  for (int i = 0; i < N; ++i) {}
  x+=1;
}
y+=1;
if (0 < N) {
  for (int i = 0; i < N; ++i) {}
}
The above example should be safe to fuse.
This PR moves instructions in the FC1 guard block (e.g. y+=1;) to
the FC0 guard block, or instructions in the FC0 exit block (e.g. x+=1;) to
the FC1 exit block, so that LoopFusion is then able to fuse the two loop nests.
Reviewer: kbarton, jdoerfert, Meinersbur, dmgreen, fhahn, hfinkel,
bmahjour, etiotto
Reviewed By: jdoerfert
Subscribers: hiraditya, llvm-commits
Tag: LLVM
Differential Revision: https://reviews.llvm.org/D73641
2020-01-30 18:02:22 +00:00
David Stenberg b54a8ec1bc [InstCombine][DebugInfo] Fold constants wrapped in metadata
Summary:
When constant folding, constants that are wrapped in metadata were not
folded. This could lead to dbg.values being the only user of a constant
expression, due to the non-dbg uses having been rewritten, resulting in
the constant later on being removed by some other pass. This occurred
with the attached test case, in which the non-rewritten GEP in the
dbg.value intrinsic was later on removed by globalopt.

This patch makes the code look through metadata and fold such constants.
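
For example (illustrative IR; the global and metadata ids are assumed), a constant expression that is only referenced from debug info is wrapped in metadata like this:

    call void @llvm.dbg.value(metadata i32* getelementptr inbounds ([4 x i32], [4 x i32]* @g, i64 0, i64 1),
                              metadata !7, metadata !DIExpression()), !dbg !13

Previously the folding code skipped such wrapped constants; now they are folded like the non-dbg uses.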

In the future we may want to allow dbg.values using GEPs and
other constant expressions to be emitted even if there are no non-dbg
uses, but SelectionDAG, for example, does not support that.

Reviewers: jmorse, aprantl, vsk, davide

Reviewed By: aprantl, vsk, davide

Subscribers: hiraditya, llvm-commits

Tags: #debug-info, #llvm

Differential Revision: https://reviews.llvm.org/D73630
2020-01-30 15:50:16 +01:00
Piotr Sobczak dd7148822b [InstCombine][AMDGPU] Trim components of s_buffer_load
Summary:
Add trimming of unused components of s_buffer_load.

For s_buffer_load and unformatted buffer_load, also trim unused
components at the beginning of the vector and update the offset accordingly.
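
A rough sketch of the rewrite (the intrinsic signature and types are assumed here for illustration):

    %data = call <4 x float> @llvm.amdgcn.s.buffer.load.v4f32(<4 x i32> %rsrc, i32 %off, i32 0)
    %e0 = extractelement <4 x float> %data, i32 0
    %e1 = extractelement <4 x float> %data, i32 1
    ; only the first two components are used, so the load can be trimmed:
    %data2 = call <2 x float> @llvm.amdgcn.s.buffer.load.v2f32(<4 x i32> %rsrc, i32 %off, i32 0)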

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71785
2020-01-30 10:48:25 +01:00
Michael Forster 676c29694c Inline debug variable.
Summary:
In a release build this variable becomes unused and may break the build
with `-Werror,-Wunused-variable`.

Reviewers: gribozavr2, jdoerfert, sstefan1

Reviewed By: gribozavr2

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73683
2020-01-30 10:29:24 +01:00
Nikita Popov 8058196677 [InstCombine] Process newly inserted instructions in the correct order
InstCombine operates on the basic premise that the operands of the
currently processed instruction have already been simplified. It
achieves this by pushing instructions to the worklist in reverse
program order, so that instructions are popped off in program order.
The worklist management in the main combining loop also makes sure
to uphold this invariant.

However, the same is not true for all the code that is performing
manual worklist management. The largest problem (addressed in this
patch) are instructions inserted by InstCombine's IRBuilder. These
will be pushed onto the worklist in order of insertion (generally
matching program order), which means that a) the users of the
original instruction will be visited first, as they are pushed later
in the main loop and b) the newly inserted instructions will be
visited in reverse program order.

This causes a number of problems: First, folds operate on instructions
that have not had their operands simplified, which may result in
optimizations being missed (ran into this in
https://reviews.llvm.org/D72048#1800424, which was the original
motivation for this patch). Additionally, this increases the amount
of folds InstCombine has to perform, both within one iteration, and
by increasing the number of total iterations.

This patch addresses the issue by adding a Worklist.AddDeferred()
method, which is used for instructions inserted by IRBuilder. These
will only be added to the real worklist after the combine has finished,
and in reverse order, so they will end up processed in program order.
I should note that the same should also be done to nearly all other
uses of Worklist.Add(), but I'm starting with just this occurrence,
which has by far the largest test fallout.

Most of the test changes are due to
https://bugs.llvm.org/show_bug.cgi?id=44521 or other cases where
we don't canonicalize something. These are neutral. One regression
has been addressed in D73575 and D73647. The remaining regression
in a shl+sdiv fold can't really be fixed without dropping another
transform, but does not seem particularly problematic in the first
place.

Differential Revision: https://reviews.llvm.org/D73411
2020-01-30 09:40:10 +01:00
Francesco Petrogalli 623cff81fe [llvm][VectorUtils] Tweak VFShape for scalable vector functions.
Summary:
This patch makes sure that the field VFShape.VF is greater than zero
when demangling the vector function name of scalable vector functions
encoded in the "vector-function-abi-variant" attribute.

This change is required to be able to provide instances of VFShape
that can be used to query the VFDatabase for the vectorization passes,
as such passes always require a positive value for the Vectorization Factor (VF)
needed by the vectorization process.

It is not possible to extract the value of VFShape.VF from the mangled
name of scalable vector functions, because it is encoded as
`x`. Therefore, the VFABI demangling function has been modified to
extract such information from the IR declaration of the vector
function, under the assumption that _all_ vectors in the signature of
the vector function have the same number of lanes. This assumption is
valid because it is also made by the Vector Function ABI
specifications supported by the demangling function (x86, AArch64, and
the LLVM internal one).

The unit tests that demangle scalable names have been modified by
adding the IR module that carries the declaration of the vector
function name being demangled.

In particular, the demangling function fails in the following cases:

1. When the declaration of the scalable vector function is not
    present in the module.

2. When the value of VFShape.VF is not greater than 0.

Reviewers: jdoerfert, sdesmalen, andwar

Reviewed By: jdoerfert

Subscribers: mgorny, kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73286
2020-01-30 05:53:56 +00:00
Johannes Doerfert 89c2e733e8 [Attributor] Pointer privatization attribute (argument promotion)
A pointer is privatizeable if it can be replaced by a new, private one.
Privatizing a pointer reduces the use count and the interaction between unrelated
code parts. This is a first step towards replacing argument promotion.
While we can already handle recursion (unlike argument promotion!) we
are restricted to stack allocations for now because we do not analyze
the uses in the callee.

Reviewed By: uenoku

Differential Revision: https://reviews.llvm.org/D68852
2020-01-29 21:31:04 -06:00
Johannes Doerfert 791c9f1145 [Attributor] Fix TODO to avoid recomputation of results
The helpers AAReturnedFromReturnedValues and
AACallSiteReturnedFromReturned are useful not only to avoid code
duplication but also to avoid recomputation of results. If we have N
call sites we should not recompute the function return information N
times but once. These are mostly straightforward usages with some minor
improvements to the helpers and the addition of a new one
(IRPosition::getAssociatedType) that knows about function return types.
2020-01-29 19:24:34 -06:00
Nikita Popov e086e23024 [InstCombine] Support non-splat vectors in icmp eq + add/sub fold
For the

    icmp eq (add X, C1), C2 => icmp eq X, C2-C1
    icmp eq (sub C1, X), C2 => icmp eq X, C1-C2

folds, this allows C1 to be non-splat and contain undefs.
C2 is still splat, due to the structure of the code.
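
For example (illustrative values), with a non-splat C1 the first fold now applies to vectors such as:

    %a = add <2 x i32> %x, <i32 1, i32 2>
    %c = icmp eq <2 x i32> %a, <i32 5, i32 5>
    ; becomes:
    %c = icmp eq <2 x i32> %x, <i32 4, i32 3>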

This is to address the remaining part of the regression in D73411,
where demanded element analysis replaces some elements with undef.

Differential Revision: https://reviews.llvm.org/D73647
2020-01-29 20:56:58 +01:00
Elia Geretto ab2300bc15 [PassManagerBuilder] Remove global extension when a plugin is unloaded
This commit fixes PR39321.

GlobalExtensions is not guaranteed to be destroyed when optimizer plugins are unloaded. If it is indeed destroyed after a plugin is dlclose-d, the destructor of the corresponding ExtensionFn is not mapped anymore, causing a call to unmapped memory during destruction.

This commit guarantees that extensions coming from external plugins are removed from GlobalExtensions when the plugin is unloaded if GlobalExtensions has not been destroyed yet.

Differential Revision: https://reviews.llvm.org/D71959
2020-01-29 16:15:45 +00:00
Simon Pilgrim 79748add70 Fix MSVC lambda default capture mode warning. NFCI. 2020-01-29 15:47:04 +00:00
Whitney Tsang da58e68fdf [LoopFusion] Move instructions from FC1.Preheader to FC0.Preheader when
proven safe.

Summary:
Currently LoopFusion gives up when the second loop nest preheader is
not empty. For example:

for (int i = 0; i < 100; ++i) {}
x+=1;
for (int i = 0; i < 100; ++i) {}
The above example should be safe to fuse.
This PR moves instructions in the FC1 preheader (e.g. x+=1;) to
the FC0 preheader, so that LoopFusion is then able to fuse the two loops.
Reviewer: kbarton, Meinersbur, jdoerfert, dmgreen, fhahn, hfinkel,
bmahjour, etiotto
Reviewed By: jdoerfert
Subscribers: hiraditya, llvm-commits
Tag: LLVM
Differential Revision: https://reviews.llvm.org/D71821
2020-01-29 15:06:11 +00:00
Sanjay Patel 87f6314f8c [InstCombine] canonicalize splat shuffle after cmp
cmp (splat V1, M), SplatC --> splat (cmp V1, SplatC'), M

As discussed in PR44588:
https://bugs.llvm.org/show_bug.cgi?id=44588
...we try harder to push shuffles after binops than after compares.

This patch handles the special (but presumably most common case) of
splat shuffles. If both operands are splats, then we can do the
comparison on the non-splat inputs followed by splat of the compare.
That should take care of the regression noted in D73411.
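
In IR terms, the canonicalization looks roughly like this (illustrative values):

    %s = shufflevector <4 x i32> %x, <4 x i32> undef, <4 x i32> zeroinitializer
    %c = icmp sgt <4 x i32> %s, <i32 42, i32 42, i32 42, i32 42>
    ; becomes:
    %c1 = icmp sgt <4 x i32> %x, <i32 42, i32 42, i32 42, i32 42>
    %c  = shufflevector <4 x i1> %c1, <4 x i1> undef, <4 x i32> zeroinitializer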

There's another potential fold requested in PR37463 to scalarize the
compare, but that's another patch (and it's not clear if we can do
that without the ability to undo it later):
https://bugs.llvm.org/show_bug.cgi?id=37463

Differential Revision: https://reviews.llvm.org/D73575
2020-01-29 08:34:29 -05:00
Johannes Doerfert 76843ba37f [Attributor][Fix] Initialize unused but loaded variable
This hopefully un-breaks:
  http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-fast/builds/38333
2020-01-28 23:52:16 -06:00
Johannes Doerfert ea5fabe60c [Attributor] Reuse existing logic to avoid duplication
There was a TODO in AAValueConstantRangeArgument to reuse
AAArgumentFromCallSiteArguments. We now do this by allowing new States
to be built from the bestState.
2020-01-28 23:45:59 -06:00
Johannes Doerfert 224085409d [Attributor][FIX] Treat invalidated attributes as changed
If we invalidate an attribute we need to inform all dependent ones even
if the fixpoint state is not invalid. Before we only continued
invalidation if the fixpoint state was invalid, now we signal a change
in case the fixpoint state is valid.

The test case was already included in D71620 but the problem was hiding
because it only manifested with the old PM (for that input).
2020-01-28 23:40:41 -06:00
Johannes Doerfert 53992c7bf7 [Attributor] Modularize AANoAliasCallSiteArgument to simplify extensions
This patch modularizes the way we check for no-alias call site arguments
by putting the existing logic into helper functions. The reasoning was
not changed but special cases for readonly/readnone were added.
2020-01-28 23:39:29 -06:00
Johannes Doerfert 24ae77eebf [Attributor] Mark a non-defined `null` pointer as `noalias`
If `null` is not defined we cannot access it, hence the pointer is
`noalias`. While this is not helpful on its own, it simplifies later
deductions that can skip over already known `noalias` pointers in
certain situations.
2020-01-28 23:09:37 -06:00