Commit Graph

31127 Commits

Author SHA1 Message Date
Chenbing Zheng ffaaf2498b [InstCombine] (rot X, ?) == 0/-1 --> X == 0/-1
In this patch we add a function foldICmpInstWithConstantAllowUndef
to fold integer comparisons with a constant operand: icmp Pred X, C
where X is some kind of instruction and C is a constant that may
contain undef elements.

We move this fold into the new function so that it can handle undef
elements in vectors.
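
A minimal IR sketch of the rotate case named in the title (hypothetical values; the rotate is modeled with the funnel-shift intrinsic):

```
; rotating cannot change whether all bits are zero (or all ones),
; so the compare can look through the funnel shift
define i1 @src(i8 %x, i8 %y) {
  %rot = call i8 @llvm.fshl.i8(i8 %x, i8 %x, i8 %y)
  %cmp = icmp eq i8 %rot, 0
  ret i1 %cmp
}
; --> %cmp = icmp eq i8 %x, 0
declare i8 @llvm.fshl.i8(i8, i8, i8)
```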

Reviewed By: spatel, RKSimon

Differential Revision: https://reviews.llvm.org/D125220
2022-05-19 11:22:26 +08:00
Chenbing Zheng 51df77f36d [InstCombine] Allow undef vectors when foldSelectToCopysign
Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D125671
2022-05-19 10:57:49 +08:00
Alexey Bataev 7d8060bc19 [SLP]Improve reductions vectorization.
The pattern matching and vectorization for reductions was not very
effective. Some of the possible reduction values were marked as
external arguments, SLP could not find some reduction patterns because
of a too-early attempt to vectorize pairs of binop arguments, and the
cost of constant reductions was not correct. This patch addresses these
issues and improves the analysis/cost estimation and vectorization of
the reductions.

The most significant changes in SLP.NumVectorInstructions:

Metric: SLP.NumVectorInstructions

Program                                                                                        results  results0 diff
               test-suite :: SingleSource/Benchmarks/Adobe-C++/loop_unroll.test   920.00  3548.00 285.7%
                test-suite :: SingleSource/Benchmarks/BenchmarkGame/n-body.test    66.00   122.00  84.8%
      test-suite :: MultiSource/Benchmarks/DOE-ProxyApps-C/miniGMG/miniGMG.test   100.00   128.00  28.0%
 test-suite :: MultiSource/Benchmarks/Prolangs-C/TimberWolfMC/timberwolfmc.test   664.00   810.00  22.0%
                 test-suite :: MultiSource/Benchmarks/mafft/pairlocalalign.test   592.00   687.00  16.0%
  test-suite :: MultiSource/Benchmarks/MiBench/consumer-lame/consumer-lame.test   402.00   426.00   6.0%
                   test-suite :: MultiSource/Applications/JM/lencod/lencod.test  1665.00  1745.00   4.8%
  test-suite :: External/SPEC/CINT2017rate/500.perlbench_r/500.perlbench_r.test   135.00   139.00   3.0%
 test-suite :: External/SPEC/CINT2017speed/600.perlbench_s/600.perlbench_s.test   135.00   139.00   3.0%
                  test-suite :: MultiSource/Benchmarks/7zip/7zip-benchmark.test   388.00   397.00   2.3%
                   test-suite :: MultiSource/Applications/JM/ldecod/ldecod.test   895.00   914.00   2.1%
    test-suite :: MultiSource/Benchmarks/MiBench/telecomm-gsm/telecomm-gsm.test   240.00   244.00   1.7%
           test-suite :: MultiSource/Benchmarks/mediabench/gsm/toast/toast.test   240.00   244.00   1.7%
             test-suite :: External/SPEC/CINT2017speed/602.gcc_s/602.gcc_s.test   820.00   832.00   1.5%
              test-suite :: External/SPEC/CINT2017rate/502.gcc_r/502.gcc_r.test   820.00   832.00   1.5%
       test-suite :: External/SPEC/CFP2017rate/526.blender_r/526.blender_r.test 14804.00 14914.00   0.7%
                        test-suite :: MultiSource/Benchmarks/Bullet/bullet.test  8125.00  8183.00   0.7%
           test-suite :: External/SPEC/CINT2017speed/625.x264_s/625.x264_s.test  1330.00  1338.00   0.6%
            test-suite :: External/SPEC/CINT2017rate/525.x264_r/525.x264_r.test  1330.00  1338.00   0.6%
         test-suite :: External/SPEC/CFP2017rate/510.parest_r/510.parest_r.test  9832.00  9880.00   0.5%
         test-suite :: External/SPEC/CFP2017rate/511.povray_r/511.povray_r.test  5267.00  5291.00   0.5%
       test-suite :: External/SPEC/CFP2017rate/538.imagick_r/538.imagick_r.test  4018.00  4024.00   0.1%
      test-suite :: External/SPEC/CFP2017speed/638.imagick_s/638.imagick_s.test  4018.00  4024.00   0.1%
              test-suite :: External/SPEC/CFP2017speed/644.nab_s/644.nab_s.test   426.00   424.00  -0.5%
               test-suite :: External/SPEC/CFP2017rate/544.nab_r/544.nab_r.test   426.00   424.00  -0.5%
          test-suite :: External/SPEC/CINT2017rate/541.leela_r/541.leela_r.test   201.00   192.00  -4.5%
         test-suite :: External/SPEC/CINT2017speed/641.leela_s/641.leela_s.test   201.00   192.00  -4.5%

644.nab_s and 544.nab_r - reduced number of shuffles but increased number
of useful vectorized instructions.

641.leela_s and 541.leela_r - the function
`@_ZN9FastBoard25get_pattern3_augment_specEiib` is not inlined anymore
but its body gets vectorized successfully. Before, the function was
inlined twice and vectorized just after inlining; now that is no longer
required. The resulting vector code looks very similar to what it was before.

Differential Revision: https://reviews.llvm.org/D111574
2022-05-18 13:22:18 -07:00
Sanjay Patel ebbc37391f [InstCombine] allow variable shift amount in bswap + shift fold
When shifting by a byte-multiple:
bswap (shl X, Y) --> lshr (bswap X), Y
bswap (lshr X, Y) --> shl (bswap X), Y

This was limited to constants as a first step in D122010 / 60820e53ec ,
but issue #55327 shows a source example (and there's a test based on that here)
where a variable shift amount is used in this pattern.
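
A minimal IR sketch of the variable-amount case (hypothetical values; the shift amount must still be provably a multiple of 8):

```
define i32 @src(i32 %x, i32 %n) {
  %y = shl i32 %n, 3                         ; shift amount is n * 8
  %shl = shl i32 %x, %y
  %bs = call i32 @llvm.bswap.i32(i32 %shl)
  ret i32 %bs
}
; --> %bs = call i32 @llvm.bswap.i32(i32 %x)
;     %r  = lshr i32 %bs, %y
declare i32 @llvm.bswap.i32(i32)
```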
2022-05-18 14:38:16 -04:00
NAKAMURA Takumi 6ca7eb2c6d [SCEV] Part 1, Serialize function calls in function arguments.
Evaluation ordering of function call arguments is implementation-dependent.
In fact, gcc evaluates bottom-top and clang does top-bottom.

Fixes #55283 partially.

Part of https://reviews.llvm.org/D125627
2022-05-18 23:20:08 +09:00
Sanjay Patel 990cc49ca0 [InstCombine] avoid crash on fold of icmp with cast operand
We could do better by inserting a bitcast from scalar int
to vector int or using an insertelement (the alternate test
does not crash because there's an independent fold like that).

But this doesn't seem like a likely pattern, so just bail out
for now.

Fixes issue #55516.
2022-05-18 09:16:30 -04:00
Sanjay Patel be6d7cc93c [InstCombine] reduce code duplication for checking types; NFC 2022-05-18 09:16:30 -04:00
Nikita Popov c9e7049754 [JumpThreading] Look through freeze in getPredicateAt() fold
This code is valid for any icmp, so we can safely look through a
freeze when trying to find one.

A caveat here is that replaceFoldableUses() may not end up replacing
any uses in this case. It might make sense to use the freeze as the
context instruction (rather than the terminator) if there is a
freeze, to ensure that it always gets folded. This would require
some changes to how replaceFoldedUses() works though, as it
currently assumes that the value is valid at the end of the block.
2022-05-18 12:09:59 +02:00
Sun Ziping 242961f23b [llvm][fix-irreducible] ensure that loop subtree under child is correctly reconnected to new loop
The modified function was incorrectly (not unnecessarily) ignoring grandchild
loops, and this change fixes the bug. In particular, this fixes the handling of
the loop { inner, body }. The TODO in the same function is talking about the b1
self loop, which may be "unnecessarily" lost, but that is a different issue.
2022-05-18 10:45:52 +01:00
Nikita Popov 18c70a7bd9 [JumpThreading] Simplify getPredicateAt() based folding
It's sufficient to just fold the icmp to true/false here, and then
let constant terminator folding take care of the rest.

It should be noted that while replaceFoldableUses() may not replace
all uses of the icmp, at least the use in the terminator we're
working on is always replaceable, so terminator constant folding
should be reliably enabled as a subsequent step.
2022-05-18 11:24:52 +02:00
Nikita Popov d4cdf013c7 [JumpThreading] Use common code to skip freeze (NFC)
There are multiple places that want to look through freeze, so
store condition without freeze in a separate variable.
2022-05-18 10:49:41 +02:00
Florian Hahn fcfb86483b
[LV] set Header earlier, use variable instead of repeated access (NFC). 2022-05-18 09:29:59 +01:00
Nikita Popov e9a1c82d69 [SCEVExpander] Expand umin_seq using freeze
%x umin_seq %y is currently expanded to %x == 0 ? 0 : umin(%x, %y).
This patch changes the expansion to umin(%x, freeze %y) instead
(https://alive2.llvm.org/ce/z/wujUhp).
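
A sketch of the two expansions of %x umin_seq %y (names hypothetical):

```
; new expansion: freeze the second operand and use a plain umin
define i64 @expand_umin_seq(i64 %x, i64 %y) {
  %fr = freeze i64 %y
  %r = call i64 @llvm.umin.i64(i64 %x, i64 %fr)
  ret i64 %r
}
declare i64 @llvm.umin.i64(i64, i64)
; old expansion for comparison:
;   %cmp = icmp eq i64 %x, 0
;   %min = call i64 @llvm.umin.i64(i64 %x, i64 %y)
;   %r   = select i1 %cmp, i64 0, i64 %min
```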

The motivation for this change is the set of test cases affected by
D124910, where the freeze expansion ultimately produces better
optimization results. This is largely because
`(%x umin_seq %y) == %x` is a common expansion pattern, which
reliably optimizes in the freeze representation, but only sometimes
with the zero comparison (in particular, if %x == 0 can fold to
something else, we generally won't be able to recover reasonable
code from this).

Differential Revision: https://reviews.llvm.org/D125372
2022-05-18 09:53:07 +02:00
Nikita Popov 323514de58 [LoopUnroll] Avoid branch on poison for runtime unroll with multiple exits
When performing runtime unrolling with multiple exits, one of the
earlier (non-latch) exits may exit the loop on the first iteration,
such that we never branch on the latch exit condition. As such, we
need to freeze the condition of the new branch that is introduced
before the loop, as it now executes unconditionally.

Differential Revision: https://reviews.llvm.org/D125754
2022-05-18 09:51:22 +02:00
Juneyoung Lee 3adcf96b4f [JumpThreading] Let ProcessImpliedCondition look into freeze instructions
This patch makes JumpThreading's ProcessImpliedCondition deal with frozen
conditions.

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D84941
2022-05-18 10:41:31 +09:00
Sanjay Patel dbf3b5f114 [InstCombine] fold more shuffles with FP<->Int cast operands
shuffle (cast X), (cast Y), Mask --> cast (shuffle X, Y, Mask)

This extends the transform added with 0353c2c996.

If the casts are to a larger element type, the transform
reduces shuffle bit width, so that should be a win for
most codegen (if not, it can be inverted).
2022-05-17 14:25:11 -04:00
Sanjay Patel f31d39c42c [InstCombine] remove cast-of-signbit to shift transform
The transform was wrong in 3 ways:

1. It created an extra instruction when the source and dest types don't match.
2. It did not account for an extra use of the icmp, so could create 2 extra insts.
3. It favored bit hacks over icmp (icmp generally has better analysis).

This fixes #54692 (modeled by the PhaseOrdering tests).

This is a minimal step to fix the bug, but we should likely invert
this and the sibling transform for the "is negative" pattern too.

The backend should be able to invert this back to a shift if that
leads to better codegen.

This is a reduced try of 3794cc0e99 - that was reverted because
it could cause infinite loops by conflicting with the related
transforms in this block that create shifts.
2022-05-17 11:10:28 -04:00
Florian Hahn 5b00d13c00
[LV] Fetch vector loop region once and remember it (NFC).
This avoids an unnecessary lookup and makes the code slightly more
compact.
2022-05-17 15:57:23 +01:00
Alexey Bataev b0f0313feb [SLP]Add an extra check for select minmax reduction to avoid crash.
Need to check if the reduction is still (not)cmp-select pattern min/max
reduction to avoid compiler crash during building list of reduction
operations. cmp-sel pattern provides 2 reduction operations, while
intrinsics - just one.
2022-05-17 06:05:52 -07:00
Florian Hahn c1a9d14982
[VPlan] Move usesScalars/onlyFirstLaneUsed to VPUser.
Those helpers model properties of a user and they should also be
available to non-recipe users. This will be used in D123537 for a new
exit value user.

Reviewed By: Ayal

Differential Revision: https://reviews.llvm.org/D124936
2022-05-17 11:20:06 +01:00
Nikita Popov 9ba452b08e [JumpThreading] Don't pass DT to isGuaranteedNotToBeUndefOrPoison()
JumpThreading intentionally does not force updating of the DT
during optimization, because this may be expensive when many CFG
updates and DT calculations are interleaved.

We shouldn't be fetching the DT just for the purpose of calling
isGuaranteedNotToBeUndefOrPoison(), especially as DT availability
doesn't even show benefit in tests.
2022-05-17 11:53:49 +02:00
Dmitry Vassiliev 7759680e2f [SROA] Avoid postponing rewriting load/store by ignoring lifetime intrinsics in partition's promotability checking
This patch fixes a bug that generates unnecessary packing/unpacking structure code because lifetime intrinsics were handled incorrectly.
For example, a partition of an alloca may contain many slices:
```
Partition [0, 4):
  Slice0: [0, 4) used by: load i32 addr;
  Slice1: [0, 4) used by: store i32 v, addr;
  Slice2: [0, 16) used by lifetime.start(16, addr);
```
When SROA determines whether the partition can be promoted, lifetime.start is currently treated as a load/store of the whole alloca, so Slice0 and Slice1 cannot be promoted on this attempt,
but the packing/unpacking code for Slice0 and Slice1 has already been generated.
After rewriting the lifetime.start/end intrinsics, SROA tries again with Slice0 and Slice1 and finally promotes them, but the redundant packing/unpacking code remains in the IR.
This patch changes the promotability check to ignore lifetime intrinsics (they will be rewritten to the correct sizes later), so we can promote the real users (load/store) on the first attempt with optimal code.

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D124967
2022-05-17 11:25:59 +02:00
Nikita Popov a694546f7c [KnownBits] Add operator==
Checking whether two KnownBits are the same is somewhat common,
mainly in test code.

I don't think there is a lot of room for confusion with "determine
what the KnownBits for an icmp eq would be", as that has a
different result type (this is what the eq() method implements,
which returns Optional<bool>).

Differential Revision: https://reviews.llvm.org/D125692
2022-05-17 09:38:13 +02:00
Sanjay Patel 07d549bce9 Revert "[InstCombine] invert canonicalization for cast of signbit test"
This reverts commit 3794cc0e99.
This change is suspected of causing bots to hang at stage 2
compiles, so reverting to confirm and investigate.
2022-05-16 17:47:02 -04:00
Ellis Hoag 9a90ea1fdc [InstrProf] Fix promoter when using counter relocations
When using counter relocations, two instructions are emitted to compute
the address of the counter variable.

```
%BiasAdd = add i64 ptrtoint <__profc_>, <__llvm_profile_counter_bias>
%Addr = inttoptr i64 %BiasAdd to i64*
```

When promoting a counter, these instructions might not be available in
the block, so we need to copy these instructions.

This fixes https://github.com/llvm/llvm-project/issues/55125

Reviewed By: phosek

Differential Revision: https://reviews.llvm.org/D125710
2022-05-16 14:32:39 -07:00
Sanjay Patel 3794cc0e99 [InstCombine] invert canonicalization for cast of signbit test
The existing transform was wrong in 3 ways:
1. It created an extra instruction when the source and dest types don't match.
2. It did not account for an extra use of the icmp, so could create 2 extra insts.
3. It favored bit hacks over icmp (icmp generally has better analysis).

This fixes #54692 (modeled by the PhaseOrdering tests).

This is a minimal step to fix the bug, but we should likely invert
the sibling transform for the "is negative" pattern too.

The backend should be able to invert this back to a shift if that
leads to better codegen.
2022-05-16 12:55:52 -04:00
Ellis Hoag 6e23cd2bf0 [InstrProf][NFC] Save profile bias to function map
Add a map from functions to load instructions that compute the profile bias. Previously we assumed that if the first instruction in the function was a load instruction, then it must be computing the bias. This was likely to work out because functions usually start with the `llvm.instrprof.increment` instruction, but optimizations could change this. For example, inlining into a non-profiled function.

Reviewed By: phosek

Differential Revision: https://reviews.llvm.org/D114319
2022-05-16 08:32:31 -07:00
Sanjay Patel be7f09f7b2 [IR] create and use helper functions that test the signbit; NFCI 2022-05-16 11:26:23 -04:00
Alexey Bataev 152072801e [SLP]Check if the root of the buildvector has one use only.
The root of the buildvector can have only one use, otherwise it can be
treated only as a final element of the previous buildvector sequence.
2022-05-16 07:30:36 -07:00
Florian Hahn b7315ffc3c
[LAA,LV] Add initial support for pointer-diff memory checks.
This patch adds initial support for a pointer diff based runtime check
scheme for vectorization. This scheme requires fewer computations and
checks than the existing full overlap checking, if it is applicable.

The main idea is to only check if source and sink of a dependency are
far enough apart so the accesses won't overlap in the vector loop. To do
so, it is sufficient to compute the difference and compare it to the
`VF * UF * AccessSize`. It is sufficient to check
`(Sink - Src) <u VF * UF * AccessSize` to rule out a backwards
dependence in the vector loop with the given VF and UF. If Src >=u Sink,
there is no dependence preventing vectorization, hence the overflow
should not matter and using the ULT should be sufficient.
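
A rough sketch of such a check (names hypothetical; an AccessSize of 4 bytes is assumed):

```
define i1 @needs_scalar_fallback(i32* %src, i32* %sink, i64 %vf.times.uf) {
  %src.int  = ptrtoint i32* %src to i64
  %sink.int = ptrtoint i32* %sink to i64
  %diff     = sub i64 %sink.int, %src.int
  %bound    = mul i64 %vf.times.uf, 4      ; VF * UF * AccessSize
  %conflict = icmp ult i64 %diff, %bound   ; (Sink - Src) <u VF * UF * AccessSize
  ret i1 %conflict                         ; true --> fall back to the scalar loop
}
```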

Note that the initial version is restricted in multiple ways:

1. Pointers must only either be read or written, by a single
   instruction (this allows re-constructing source/sink for
   dependences with the available information)
2. Source and sink pointers must be add-recs, with matching steps
3. The step must be a constant.
4. abs(step) == AccessSize.

Most of those restrictions can be relaxed in the future.

See https://github.com/llvm/llvm-project/issues/53590.

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D119078
2022-05-16 15:27:22 +01:00
Biplob Mishra ec4adf1f6c [InstCombine] Combine instructions of type or/and where AND masks can be combined.
The patch simplifies some of the patterns as below

(A | (B & C0)) | (B & C1) -> A | (B & C0|C1)
((B & C0) | A) | (B & C1) -> (B & C0|C1) | A

In some scenarios like byte reverse on half word, we can see this pattern multiple times and this conversion can optimize these patterns.
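
A minimal IR sketch of the first pattern (constants hypothetical):

```
define i32 @src(i32 %a, i32 %b) {
  %m0 = and i32 %b, 255          ; B & C0
  %o0 = or i32 %a, %m0           ; A | (B & C0)
  %m1 = and i32 %b, 65280        ; B & C1
  %r  = or i32 %o0, %m1
  ret i32 %r
}
; --> %m = and i32 %b, 65535     ; B & (C0 | C1)
;     %r = or i32 %a, %m
```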

Differential Revision: https://reviews.llvm.org/D124119
2022-05-16 12:43:33 +01:00
Nikita Popov 7ba484660b [ControlHeightReduction] Freeze condition when converting select to branch
While select conditions can be poison, branch on poison is
immediate UB. As such, we need to freeze the condition when
converting a select into a branch.

Differential Revision: https://reviews.llvm.org/D125398
2022-05-16 10:37:26 +02:00
David Sherwood befc952045 [LoopVectorize] Permit tail-folding for low trip counts using scalable vectors
When the loop vectoriser encounters a known low trip count it tries
to create a single predicated loop in order to get the benefit of
vectorisation and eliminate the scalar tail. However, until now the
vectoriser prevented the use of scalable vectors in this case due
to concerns in the past about stability. I believe that tail-folded
loops using scalable vectors are now sufficiently well tested that
we can enable this. For the same reason I've also enabled it when
optimising for code size too.

Tests added here:

  Transforms/LoopVectorize/AArch64/sve-low-trip-count.ll
  Transforms/LoopVectorize/AArch64/sve-tail-folding-optsize.ll
  Transforms/LoopVectorize/RISCV/low-trip-count.ll

Differential Revision: https://reviews.llvm.org/D121595
2022-05-16 09:14:24 +01:00
Florian Hahn 8b7c3d2179
[LV] Set SCEVCheckCond to nullptr whenever it was used.
Under some circumstances, SCEVExpander will insert new instructions when
expanding a predicate, but the final result of the expansion can be a
false constant.

In those cases, the expanded instructions may later be used by other
expansions, e.g. the trip count. This may trigger an assertion during
SCEVExpander cleanup. To avoid this, always mark the result as used.

Fixes #55100.
2022-05-15 21:52:07 +01:00
Craig Topper b3097eb6cd [SLP] Fix misspelling of 'analyzed'. NFC 2022-05-15 10:30:24 -07:00
Florian Hahn 39552964e1
[VPlan] Improve printing of VPReplicateRecipe with calls.
Suggested as part of D124718.
2022-05-15 15:51:26 +01:00
Wende Tan 59afc4038b [LowerTypeTests][clang] Implement and allow -fsanitize=cfi-icall for RISCV
Reviewed By: MaskRay

Differential Revision: https://reviews.llvm.org/D106888
2022-05-14 18:05:06 -07:00
Fangrui Song 60e5fd00cd [RS4GC] Fix -Wunused-function in -DLLVM_ENABLE_ASSERTIONS=off build after D125000 2022-05-14 10:47:50 -07:00
Chenbing Zheng acbad5086a [InstCombine] [NFC] separate a function foldICmpBinOpWithConstant
There is a long function foldICmpInstWithConstant,
we can separate a function foldICmpBinOpWithConstant from it.

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D125457
2022-05-14 10:54:15 +08:00
Alexander Shaposhnikov badd088c57 [GlobalOpt] Enable optimization of constructors with different priorities
Adjust `optimizeGlobalCtorsList` to handle the case of different priorities.
This addresses the issue https://github.com/llvm/llvm-project/issues/55083.

Test plan: ninja check-all

Differential revision: https://reviews.llvm.org/D125278
2022-05-13 22:19:29 +00:00
Alexey Bataev 8b8281f354 [SLP]Do not vectorize non-profitable alternate nodes.
If an alternate node has only 2 instructions and the tree is already big
enough, it is better to skip the vectorization of such nodes, as they are not
very profitable (the resulting code contains 3 instructions instead of the
original 2 scalars). SLP can try to vectorize the buildvector sequence
in the next attempt, if it is profitable.

Metric: SLP.NumVectorInstructions

Program                                                                                       SLP.NumVectorInstructions
                                                                               results                   results0 diff
     test-suite :: MultiSource/Benchmarks/DOE-ProxyApps-C/miniAMR/miniAMR.test    72.00                     73.00   1.4%
test-suite :: MultiSource/Benchmarks/Prolangs-C/TimberWolfMC/timberwolfmc.test  1186.00                   1198.00   1.0%
     test-suite :: MultiSource/Benchmarks/DOE-ProxyApps-C++/miniFE/miniFE.test   241.00                    242.00   0.4%
                  test-suite :: MultiSource/Applications/JM/lencod/lencod.test  2131.00                   2139.00   0.4%
 test-suite :: External/SPEC/CINT2017rate/523.xalancbmk_r/523.xalancbmk_r.test  6377.00                   6384.00   0.1%
test-suite :: External/SPEC/CINT2017speed/623.xalancbmk_s/623.xalancbmk_s.test  6377.00                   6384.00   0.1%
        test-suite :: External/SPEC/CFP2017rate/510.parest_r/510.parest_r.test 12650.00                  12658.00   0.1%
      test-suite :: External/SPEC/CFP2017rate/526.blender_r/526.blender_r.test 26169.00                  26147.00  -0.1%
          test-suite :: MultiSource/Benchmarks/Trimaran/enc-3des/enc-3des.test    99.00                     86.00 -13.1%

Gains:
526.blender_r - more vectorized trees.
enc-3des - same.

Others:
510.parest_r - no changes.
miniFE - same
623.xalancbmk_s - some (non-profitable) parts of the trees are not
    vectorized.
523.xalancbmk_r - same
lencod - same
timberwolfmc - same
miniAMR - same

Differential Revision: https://reviews.llvm.org/D125571
2022-05-13 14:28:54 -07:00
Alexey Bataev 85f6b15ee5 [SLP]Do not look for buildvector sequence, if the index is reused.
If the insert index was used already or is not constant, we should stop
looking for a unique buildvector sequence; it must be split into
2 different buildvectors.
2022-05-13 13:56:02 -07:00
Nikita Popov afc21c7e79 [ControlHeightReduction] Simplify addToMergedCondition() (NFC) 2022-05-13 15:30:09 +02:00
David Sherwood 92c645b5c1 [LoopVectorize] Add overflow checks when tail-folding with scalable vectors
In InnerLoopVectorizer::getOrCreateVectorTripCount there is an
assert that the known minimum value for the VF is a power of 2
when tail-folding is enabled. However, for scalable vectors the
value of vscale may not be a power of 2, which means we have
to worry about the possibility of overflow. I have solved this
problem by adding preheader checks that prevent us from entering
the vector body if the canonical IV would overflow, i.e.

  if ((IntMax - TripCount) < (VF * UF)) ... skip vector loop ...
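
A rough sketch of such a preheader guard for a 32-bit canonical IV (names and exact shape hypothetical):

```
define void @guard(i32 %trip.count) {
entry:
  %vscale = call i32 @llvm.vscale.i32()
  %vf.uf = mul i32 %vscale, 8              ; e.g. VF = vscale x 4, UF = 2
  %slack = sub i32 -1, %trip.count         ; IntMax - TripCount (unsigned)
  %overflow = icmp ult i32 %slack, %vf.uf
  br i1 %overflow, label %scalar.ph, label %vector.ph
scalar.ph:                                 ; skip the vector loop
  ret void
vector.ph:                                 ; safe to enter the vector body
  ret void
}
declare i32 @llvm.vscale.i32()
```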

Differential Revision: https://reviews.llvm.org/D125235
2022-05-13 14:09:43 +01:00
Nikita Popov ed1cb01baf [IRBuilder] Add IsInBounds parameter to CreateGEP()
We commonly want to create either an inbounds or non-inbounds GEP
based on a boolean value, e.g. when preserving inbounds from
existing GEPs. Directly accept such a boolean in the API, rather
than requiring a ternary between CreateGEP and CreateInBoundsGEP.

This change is not entirely NFC, because we now preserve an
inbounds flag in a constant expression edge-case in InstCombine.
2022-05-13 14:30:55 +02:00
Florian Hahn 8e6d481f3b
[ConstraintElimination] Simplify ssub(A,B) if A s>= B && B s>= 0.
A first patch to use the reasoning in ConstraintElimination to simplify
sub with overflow to a regular sub, if the operation is guaranteed to
not overflow.

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D125264
2022-05-13 13:19:41 +01:00
Nikita Popov d9ad6a2c8b [InstCombine] Fix unused variable warning (NFC) 2022-05-13 12:43:21 +02:00
Dmitry Makogon 1da42c9f71 [RS4GC] Cache BDVs and bases along with IsKnownBase flag (NFC)
This refactors RS4GC to cache results returned by findBaseDefiningValue
and also gets rid of BaseDefiningValueResult by caching the
IsKnownBase flag for BDVs and bases.

Differential Revision: https://reviews.llvm.org/D125000
2022-05-13 14:14:17 +07:00
Chenbing Zheng 2a0837aab1 [InstCombine] fix sub(add(X,Y),umin(Y,Z)) --> add(X,usub.sat(Y,Z))
This patch fixes a bug left in D124503. We should do
sub(add(X,Z),umin(Y,Z)) --> add(X,usub.sat(Z,Y)) instead of
sub(add(X,Z),umin(Y,Z)) --> add(X,usub.sat(Y,Z)).
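
A minimal IR sketch of the corrected fold (hypothetical i8 values; note the swapped usub.sat operands):

```
define i8 @src(i8 %x, i8 %y, i8 %z) {
  %add = add i8 %x, %z
  %min = call i8 @llvm.umin.i8(i8 %y, i8 %z)
  %sub = sub i8 %add, %min
  ret i8 %sub
}
; --> %sat = call i8 @llvm.usub.sat.i8(i8 %z, i8 %y)
;     %r   = add i8 %x, %sat
declare i8 @llvm.umin.i8(i8, i8)
declare i8 @llvm.usub.sat.i8(i8, i8)
```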

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D125352
2022-05-13 09:54:10 +08:00
Sanjay Patel 2fa8fc3d0a [InstCombine] freeze operand in div+mul fold
As discussed in issue #37809, this transform is not safe
if the input is an undefined value.

This is similar to recent changes for urem and sdiv:
d428f09b2c
99ef341ce9

There is no difference in codegen on the basic examples,
but this could lead to regressions. We may need to
improve freeze analysis or lowering if that happens.

Presumably, in real cases that are similar to the tests
where a subsequent transform removes the rem, we
will also be able to remove the freeze by seeing that
the parameter has 'noundef'.
2022-05-12 13:49:29 -04:00
Quentin Colombet 9766fed9c1 [DeadArgElim] Re-apply: Set unused arguments for internal functions
The re-apply includes fixes to clang tests that were missed in
the original commit.

Original message:
Prior to this patch we would only set to undef the unused arguments of the
external functions. The rationale was that unused arguments of internal
functions wouldn't need to be turned into undef arguments because they
should have been simply eliminated by the time we reach that code.

This is actually not true because there are plenty of cases where we can't
remove unused arguments. For instance, if the internal function is used in
an indirect call, it may not be possible to change the function signature.
Yet, for statically known call-sites we would still like to mark the unused
arguments as undef.

This patch enables the "set undef arguments" optimization on internal
functions when we encounter cases where internal functions cannot be
optimized. I.e., whenever an internal function is marked "live".

Differential Revision: https://reviews.llvm.org/D124699
2022-05-12 08:46:16 -07:00
Pavel Samolysov 098afdb0a0 [ArgPromotion] Make a non-byval promotion attempt first
It makes sense to make a non-byval promotion attempt first and then
fall back to the byval one. The non-byval ('usual') promotion is
generally better, for example it does promotion even when a structure
has more elements than 'MaxElements' but not all of them are actually
used in the function.

Differential Revision: https://reviews.llvm.org/D124514
2022-05-12 16:44:52 +02:00
Vasileios Porpodas 0950d4060c Recommit "[SLP] Make reordering aware of external vectorizable scalar stores."
This reverts commit c2a7904aba.

Original code review: https://reviews.llvm.org/D125111
2022-05-11 16:47:29 -07:00
Arthur Eubanks c2a7904aba Revert "[SLP] Make reordering aware of external vectorizable scalar stores."
This reverts commit 71bcead98b.

Causes crashes, see comments in D125111.
2022-05-11 15:28:00 -07:00
Sanjay Patel 99ef341ce9 [InstCombine] freeze operand in sdiv expansion
As discussed in issue #37809, this transform is not safe
if the input is an undefined value.

This is similar to a recent change for urem:
d428f09b2c

There is no difference in codegen on the basic examples,
but this could lead to regressions. We may need to
improve freeze analysis or lowering if that happens.

Presumably, in real cases that are similar to the tests
where a subsequent transform removes the select, we
will also be able to remove the freeze by seeing that
the parameter has 'noundef'.
2022-05-11 14:01:28 -04:00
Sanjay Patel d428f09b2c [InstCombine] freeze operand in urem expansion
As discussed in issue #37809, this transform is not safe
if the input is an undefined value.

There is no difference in codegen on the basic examples,
but this could lead to regressions. We may need to
improve freeze analysis or lowering if that happens.
2022-05-11 12:47:26 -04:00
Nikita Popov 6001bfcedc [InstCombine] Freeze other uses of frozen value
If there is a freeze %x, we currently replace all other uses of %x
with freeze %x -- as long as they are dominated by the freeze
instruction. This patch extends this behavior to cases where we
did not originally dominate the use by moving the freeze
instruction directly after the definition of the frozen value.

The motivation can be seen in test @combine_and_after_freezing_uses:
Canonicalizing everything to freeze %x allows folds that are based
on value identity (i.e. same operand occurring in two places) to
trigger. This also covers the case from D125248.
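
A minimal before/after sketch (hypothetical values):

```
; before: only uses dominated by the freeze could be rewritten
define i64 @before(i64 %x) {
  %use1 = add i64 %x, 1
  %fr = freeze i64 %x
  %use2 = add i64 %fr, %use1
  ret i64 %use2
}
; after: the freeze is placed directly after the definition (here, the
; argument), so every use of %x can be rewritten to the frozen value
define i64 @after(i64 %x) {
  %fr = freeze i64 %x
  %use1 = add i64 %fr, 1
  %use2 = add i64 %fr, %use1
  ret i64 %use2
}
```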

Differential Revision: https://reviews.llvm.org/D125321
2022-05-11 16:47:12 +02:00
Alexey Bataev f5d45d70a5 [SLP]Further improvement of the cost model for scalars used in buildvectors.
Further improvement of the cost model for the scalars used in
buildvectors sequences. The main functionality is outlined into
a separate function.
The cost is calculated in the following way:
1. If the Base vector is not an undef vector, resize the very first mask to
have a common VF and perform the action for 2 input vectors (including the
non-undef Base). Other shuffle masks are combined with the result of the
first stage and processed as a shuffle of 2 elements.
2. If the Base is an undef vector and there is only 1 shuffle mask, perform the
action only for 1 vector with the given mask, if it is not the identity
mask.
3. If > 2 masks are used, perform a series of shuffle actions for 2 vectors,
combining the masks properly between the steps.

The original implementation misses the very first analysis for the Base
vector, so the cost might be too optimistic in some cases. But it improves
the cost for the insertelements which are part of the current SLP graph.

Part of D107966.

Differential Revision: https://reviews.llvm.org/D115750
2022-05-11 06:08:55 -07:00
Florian Hahn 635b752211
[VPlan] VPInterleaveRecipe only uses first lane if op not stored.
With opaque pointers, both the stored value and the address can be the
same. Only consider the recipe using the first lane only *if* the
address is not stored.

Fixes #55375.
2022-05-11 11:24:56 +01:00
Nikita Popov c1bb4a881e [SCEVExpander] Deduplicate min/max expansion code (NFC) 2022-05-11 12:11:11 +02:00
Alexander Shaposhnikov da823382d2 [Transform][Utils][NFC] Clean up CtorUtils.cpp 2022-05-11 01:07:54 +00:00
Nick Desaulniers c167c0a4dc [BuildLibCalls] infer inreg param attrs from NumRegisterParameters
We're having a hard time booting the ARCH=i386 Linux kernel with clang
after removing -ffreestanding because instcombine was dropping inreg
from callers during libcall simplification, but not the callees defined
in different translation units. This led the callers and callees to have
wildly different calling conventions, which (predictably) blew up at
runtime.

Infer the inreg param attrs on function declarations from the module
metadata "NumRegisterParameters." This allows us to boot the ARCH=i386
Linux kernel (w/ -ffreestanding removed).

Fixes: https://github.com/llvm/llvm-project/issues/53645

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D125285
2022-05-10 16:21:17 -07:00
Vasileios Porpodas 71bcead98b [SLP] Make reordering aware of external vectorizable scalar stores.
The current reordering scheme only checks the ordering of in-tree operands.
There are some cases, however, where we need to adjust the ordering based on
the ordering of a future SLP-tree whose instructions are not part of the
current tree, but are external users.

This patch is a simple implementation of this. We keep track of scalar stores
that are users of TreeEntries and if they look profitable to vectorize, then
we keep track of their ordering. During the reordering step we take this new
index order into account. This can remove some shuffles in cases like in the
lit test.

Differential Revision: https://reviews.llvm.org/D125111
2022-05-10 15:25:35 -07:00
Sanjay Patel 0353c2c996 [InstCombine] fold shuffles with FP<->Int cast operands
shuffle (cast X), (cast Y), Mask --> cast (shuffle X, Y, Mask)

This is similar to a recent transform with fneg ( b331a7ebc1 ),
but this is intentionally the most conservative first step to
try to avoid regressions in codegen. There are several
restrictions that could be removed as follow-up enhancements.

Note that a cast with a unary shuffle is currently canonicalized
in the other direction (shuffle after cast - D103038 ). We might
want to invert that to be consistent with this patch.
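
A minimal IR sketch of the general fold direction (types and mask hypothetical; the initial patch applies extra restrictions):

```
define <4 x float> @src(<4 x i32> %x, <4 x i32> %y) {
  %cx = sitofp <4 x i32> %x to <4 x float>
  %cy = sitofp <4 x i32> %y to <4 x float>
  %r = shufflevector <4 x float> %cx, <4 x float> %cy, <4 x i32> <i32 0, i32 4, i32 1, i32 5>
  ret <4 x float> %r
}
; --> %s = shufflevector <4 x i32> %x, <4 x i32> %y, <4 x i32> <i32 0, i32 4, i32 1, i32 5>
;     %r = sitofp <4 x i32> %s to <4 x float>
```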
2022-05-10 14:20:43 -04:00
Craig Topper 4b36d9bde7 [CVP] Preserve exact name when converting sext->zext and ashr->lshr.
Previously we took the old name and always appended a numeric suffix.
Since we're doing a 1:1 replacement, it's clearer to keep the original
name exactly.

Reviewed By: fhahn

Differential Revision: https://reviews.llvm.org/D125281
2022-05-10 09:13:59 -07:00
Craig Topper 7b362ddda9 [SCCP] Preserve Name when converting SExt->ZExt.
This makes the output IR more readable since we're doing a one to
one replacement.

Reviewed By: fhahn

Differential Revision: https://reviews.llvm.org/D125280
2022-05-10 09:13:59 -07:00
Dawid Jurczak 009f6ce0ef [GVNSink] Make GVNSink resistant against self referencing instructions (PR36954)
Before this change, the GVNSink pass suffered a stack overflow while processing a self-referencing instruction in an unreachable basic block.
According to [1] and [2], it's reasonable to make the pass resistant against self-referencing instructions.
To fix the issue, we skip the sinking analysis when we reach an instruction coming from an unreachable block.

[1] https://groups.google.com/g/llvm-dev/c/843Tig9IzwA
[2] https://lists.llvm.org/pipermail/llvm-dev/2015-February/082629.html

Differential Revision: https://reviews.llvm.org/D113897
2022-05-10 16:06:12 +02:00
Nikita Popov 0eafef1171 [SCEVExpander] Remove handling for mixed int/pointer min/max (NFCI)
Mixed int/pointer min/max are no longer possible.
2022-05-10 15:11:39 +02:00
Chuanqi Xu 02d6845234 [NFC] [Coroutines] Remove EnableReuseStorageInFrame option
The EnableReuseStorageInFrame option is designed for testing only.
But it is better to use *_PASS_WITH_PARAMS macro to keep consistent with
other passes.
2022-05-10 17:28:43 +08:00
Nikita Popov d222bab672 [InstCombine] Handle GEP scalar/vector base mismatch (PR55363)
30a12f3f63 switched the type check
to use the GEP result type rather than the GEP operand type.
However, the GEP result types may match even if the operand types
don't, in case GEPs with scalar/vector base and vector index
are compared.

Fixes https://github.com/llvm/llvm-project/issues/55363.
2022-05-10 11:26:43 +02:00
Chuanqi Xu beeed0994e [Coroutines] Use PassManager instead of Legacy PassManager internally
This is a follow-up cleanup for the previous work D123918. I missed
several places which still use legacy pass managers. This patch tries
to remove them.
2022-05-10 13:15:11 +08:00
Hongtao Yu 9641b9be9d [Inliner] Preserve !prof metadata when converting call to invoke.
When a callee function is inlined via an invoke instruction, every function call inside the callee, if not an invoke, will be converted to an invoke after being cloned into the caller body. I found that during the conversion the !prof metadata was dropped. This in turn caused a cloned indirect call not to be properly promoted in subsequent passes.

The particular scenario I was investigating was with AutoFDO and thinLTO. In prelink, no ICP was triggered (neither by the sample loader nor PGO ICP), no indirect call was promoted. This is because 1) the particular indirect call did not have inlined samples;  and 2) PGO ICP was intentionally disabled.  After inlining, the prof metadata was dropped. Then in postlink, PGO ICP jumped in but didn't do anything. Thus the opportunity was missed.

I'm making a simple fix to preserve !prof metadata when converting call to invoke.

Reviewed By: davidxl

Differential Revision: https://reviews.llvm.org/D125249
2022-05-09 15:08:09 -07:00
Alexey Bataev 4212ef8a0e Revert "[SLP]Further improvement of the cost model for scalars used in buildvectors."
This reverts commit 99f31acfce and several
others to fix detected crashes, reported in https://reviews.llvm.org/D115750
2022-05-09 13:46:06 -07:00
Alexey Bataev cce80bd8b7 [SLP]Adjust assertion check for scalars in several insertelements.
If the same scalar is inserted several times into the same buildvector,
the mask index may already be used. In this case we need to check that
this scalar is already part of the vectorized buildvector.
2022-05-09 13:07:59 -07:00
Florian Hahn 266ea446ab
Revert "Recommit "[VPlan] Remove uneeded needsVectorIV check.""
This reverts commit 8b48223447.

This triggers an assertion on a test case mentioned in D123720.
Revert while I investigate.
2022-05-09 20:33:14 +01:00
Alexey Bataev 9dc4ced204 [SLP]Try partial store vectorization if supported by target.
We can try to vectorize a number of stores less than MinVecRegSize
/ scalar_value_size, if it is allowed by the target. This gives an extra
opportunity for vectorization.

Fixes PR54985.

Differential Revision: https://reviews.llvm.org/D124284
2022-05-09 09:48:15 -07:00
Alexey Bataev 9c3a75eabf [SLP]Fix a crash when preparing a mask for external scalars.
Need to use the actual index instead of the tree entry position, since the
insert index may be different from 0. It means that we vectorized part
of the buildvector starting from a non-initial insertelement instruction
for some reason.
2022-05-09 07:59:34 -07:00
Florian Hahn 41e142fdc7
Recommit "[SimpleLoopUnswitch] Collect either logical ANDs/ORs but not both."
This reverts commit 7211d5ce07.

This version fixes a crash that caused buildbot failures with the first
version.
2022-05-09 13:49:12 +01:00
David Green 6f9e1ea0ef [VectorCombine] Attempt to fold select shuffles from reductions
Given a commutative reduction leading from a shuffle, the order of the
lanes on the shuffle is not important for the result. This means we can
reorder the shuffle to something simpler, which we attempt by shuffling the
first vector lanes first. This was D123494.

The new shuffle may not be profitable though, and if it is not we can
try the folding of select shuffles from D123911. This, with some
adjustment as the output lane ordering is now unimportant, can allow the
final shuffle to simplify given the inputs to the patterns from D123911.
Whereas each transformation on its own is not profitable, the
combination is.

We can only support a single shuffle when called from reductions, but we
are able to sort the ReconstructMask, potentially allowing it to
simplify to an identity or concat mask.

Differential Revision: https://reviews.llvm.org/D125086
2022-05-08 10:32:41 +01:00
Andrew Litteken e38f014c40 [IROutliner] Accommodate blocks containing PHINodes with one entry outside the region and others inside the region.
When a PHINode has an incoming block from outside the region, it must be handled specially when assigning a global value number to each incoming value. A PHINode has multiple predecessors, and we must handle this case rather than only the single predecessor case.

Reviewer: paquette

Differential Revision: https://reviews.llvm.org/D124777
2022-05-07 17:11:21 -05:00
David Green 802e15c576 [SLP] Cluster ordering for loads
Given a load without a better order, this patch partially sorts the
elements to form clusters of adjacent elements in memory. These clusters
can potentially be loaded in fewer loads, meaning less overall shuffling
(for example loading v4i8 clusters of a v16i8 as single f32 loads, as
opposed to multiple independent byte loads and inserts).

Differential Revision: https://reviews.llvm.org/D122145
2022-05-07 14:38:11 +01:00
Sanjay Patel 8650f05c97 [InstCombine] fix miscompile when casting int->FP->int
As shown in https://github.com/llvm/llvm-project/issues/55150 -
the existing fold may be wrong when converting to a signed value.
This is a quick fix to avoid the miscompile.

I added tests/comments for all of the signed/unsigned combinations
at either side of the boundary width, and tried to confirm with Alive2:
https://alive2.llvm.org/ce/z/3p9DSu

There are already some TODO items in the test file that suggest
possible refinements, so the regression with ui->FP->si is probably ok.
It seems unlikely that we'd see these kind of edge cases with
non-byte-width integer types in real code. The potential miscompile
went undetected for several years.

This and 747c6a0c73 fixes #55150.

Differential Revision: https://reviews.llvm.org/D124692
2022-05-07 08:46:25 -04:00
Serge Pavlov eb28da89a6 [InstCombine] Remove side effect of replaced constrained intrinsics
If a constrained intrinsic call was replaced by some value, it was not
removed in some cases. The dangling instruction resulted in useless
instructions executed at runtime. It happened because constrained
intrinsics usually have a side effect, which is used to model the interaction
with the floating-point environment. In some cases the side effect is actually
absent or can be ignored.

This change adds specific treatment of constrained intrinsics so that
their side effect can be removed if it is actually absent.

Differential Revision: https://reviews.llvm.org/D118426
2022-05-07 19:04:11 +07:00
Chenbing Zheng 394c683d40 [InstCombine] sub(add(X,Y),umin(Y,Z)) --> add(X,usub.sat(Y,Z))
Alive2: https://alive2.llvm.org/ce/z/2UNVbp

Reviewed By: RKSimon, spatel

Differential Revision: https://reviews.llvm.org/D124503
2022-05-07 17:17:48 +08:00
Chenbing Zheng 8eaa1ef0d8 [InstCombine] add casts from splat-a-bit pattern if necessary
Splatting a bit of constant-index across a value:
sext (ashr (trunc iN X to iM), M-1) to iN --> ashr (shl X, N-M), N-1
If the dest type is different, use a cast (adjust use check).
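
A minimal IR sketch of the base pattern with N = 32 and M = 8 (hypothetical values):

```
define i32 @src(i32 %x) {
  %t = trunc i32 %x to i8
  %a = ashr i8 %t, 7           ; splat bit 7 across the i8 (M-1 = 7)
  %s = sext i8 %a to i32
  ret i32 %s
}
; --> %shl = shl i32 %x, 24    ; N - M = 24
;     %r   = ashr i32 %shl, 31 ; N - 1 = 31
```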

https://alive2.llvm.org/ce/z/acAan3

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D124590
2022-05-07 15:34:57 +08:00
Alexander Shaposhnikov f827ee671f [Scalar][NFC] Minor cleanups in CallSiteSplitting.cpp 2022-05-06 23:03:49 +00:00
Florian Hahn 7211d5ce07
Revert "[SimpleLoopUnswitch] Collect either logical ANDs/ORs but not both."
This reverts commit db7a87ed4f.

This seems to cause a PPC buildbot failure:
https://lab.llvm.org/buildbot#builders/93/builds/8787
2022-05-06 22:38:15 +01:00
Sanjay Patel b331a7ebc1 [InstCombine] canonicalize fneg after shuffle
For the unary shuffle pattern, this is opposite to what we try
to do with binops, but it seems better to keep it consistent
with the motivating binary shuffle pattern. For that pattern, it is
clearly better in the usual no-extra-uses case.

There is a chance that this will pull an fneg away from some
other binop and cause a regression in codegen, but that should
be invertible in the backend. The transform is bidirectional:
https://alive2.llvm.org/ce/z/kKaKCU
https://alive2.llvm.org/ce/z/3Desfw

Fixes #45631
2022-05-06 16:30:26 -04:00
Nikita Popov 82190f917a [InstCombine] Fold icmp of select with implied condition
When threading the icmp over the select, check whether the
condition can be folded when taking into account the select
condition.
2022-05-06 17:13:32 +02:00
Nikita Popov 0863abe3ac [InstCombine] Fold icmp of select with non-constant operand
Try to push an icmp into a select even if the icmp operand isn't
constant - perform a generic SimplifyICmpInst instead.

This doesn't appear to impact compile-time much, and forming
logical and/or is generally profitable, as we have very good
support for them.
2022-05-06 16:04:39 +02:00
Max Kazantsev 5a08e81779 [RS4GC] Add support for 'freeze' instruction to findBaseDefiningValue
Because this instruction is a noop, we can simply go through it in
search of the base.
2022-05-06 20:46:29 +07:00
Max Kazantsev e6a7afae03 [NFC] Fix typo in assert message 2022-05-06 20:31:34 +07:00
Nikita Popov b457ac4240 [InstCombine] Extract icmp of select transform (NFC)
To make it easier to extend to the case where the other operand
is not a constant.
2022-05-06 14:46:44 +02:00
Fraser Cormack bafab9c09f [InstCombine] Fix scalable-vector bitwise select matching
D113035 enhanced the matching of bitwise selects from vector types. This
change unfortunately introduced crashes as it tries to cast scalable
vector types to integers.

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D124997
2022-05-06 12:59:39 +01:00
Florian Hahn db7a87ed4f
[SimpleLoopUnswitch] Collect either logical ANDs/ORs but not both.
After D97756, collectHomogenousInstGraphLoopInvariants may collect
conditions for both logical ANDs and logical ORs in case the root is a
select that matches both logical AND & OR.

This means the function won't return invariant values of either AND/OR
chains, but both. This can result in incorrect transformations.

See llvm/test/Transforms/SimpleLoopUnswitch/trivial-unswitch-logical-and-or.ll.
Without the patch, Alive2 rejects the modified tests with:
    Source and target don't have the same return domain.

Note that this also applies to the test case added in D97756
(@test_partial_condition_unswitch_or_select). We can't unswitch on
%cond6, because the graph leading to it contains an AND and an OR.

This only fixes trivial unswitching for now, but a similar problem
likely exists with non-trivial unswitching.

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D124526
2022-05-06 09:50:03 +01:00
Marco Elver 9ae87b5973 [Instrumentation] Share InstrumentationIRBuilder between TSan and SanCov
Factor out InstrumentationIRBuilder and share it between ThreadSanitizer
and SanitizerCoverage. Simplify its usage at the same time (use function
of passed Instruction or BasicBlock).

This class may be used in other instrumentation passes in future.

NFCI.

Reviewed By: nickdesaulniers

Differential Revision: https://reviews.llvm.org/D125038
2022-05-06 09:15:17 +02:00
David Green 100cb9a2ba [VectorCombine] Fold shuffle select pattern
This patch adds a combine to attempt to reduce the costs of certain
select-shuffle patterns. The form of code it attempts to detect is:
  %x = shuffle ...
  %y = shuffle ...
  %a = binop %x, %y
  %b = binop %x, %y
  shuffle %a, %b, selectmask

A classic select-mask will pick items from each lane of a or b. These
do not always have a great lowering on many architectures. This patch
attempts to pack a and b into the lower elements, creating a differently
ordered shuffle for reconstructing the orignal which may be better than
the select mask. This can be better for performance, especially if less
elements of a and b need to be computed and the input shuffles are
cheaper.

Because select-masks are just one form of shuffle, we generalize to any
mask. So long as the backend has a decent cost model for the shuffles, this
can generally improve things when they come up. For more basic cost
models the folds do not appear to be profitable, not getting past the
cost checks.

Differential Revision: https://reviews.llvm.org/D123911
2022-05-06 08:13:18 +01:00
Chuanqi Xu 2d037873a3 [Coroutines] Don't re-materialize for debug instructions
Re-materializing debug instructions would cause different code to be
generated if we enabled `-g`. This is bad, so we disable
re-materialization for debug instructions.
2022-05-06 13:52:19 +08:00
Chenbing Zheng 4c8c101b49 [InstCombine] try to narrow more shifted bswap-of-zext
Try to narrow more bswaps, if the shift amount is less than the zext width:
(bswap (zext X)) >> C --> (zext (bswap X)) << C'

https://alive2.llvm.org/ce/z/i7ddjn
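
A minimal IR sketch with a 16-to-32-bit zext and a shift of 8 (hypothetical values):

```
define i32 @src(i16 %x) {
  %z = zext i16 %x to i32
  %b = call i32 @llvm.bswap.i32(i32 %z)
  %r = lshr i32 %b, 8
  ret i32 %r
}
; --> %nb  = call i16 @llvm.bswap.i16(i16 %x)
;     %ext = zext i16 %nb to i32
;     %r   = shl i32 %ext, 8
declare i32 @llvm.bswap.i32(i32)
```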

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D124598
2022-05-06 10:45:10 +08:00
Florian Hahn f9f7aa30f8
[VPlan] Remove dead code to create VPWidenPHIRecipes (NFCI).
After introducing VPWidenPointerInductionRecipe, VPWidenPHIRecipes
should not be created at this point. Turn check into an assert.
2022-05-05 19:29:02 +01:00
Serge Pavlov e1554ac63a Revert "[InstCombine] Remove side effect of replaced constrained intrinsics"
This reverts commit 83914ee96f.
The change caused discussion: https://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20220502/1034841.html
2022-05-06 01:09:16 +07:00
Marco Elver 47bdea3f7e [ThreadSanitizer] Add fallback DebugLocation for instrumentation calls
When building with debug info enabled, some load/store instructions do
not have a DebugLocation attached. When using the default IRBuilder, it
attempts to copy the DebugLocation from the insertion-point instruction.
When there's no DebugLocation, no attempt is made to add one.

This is problematic for inserted calls, where the enclosing function has
debug info but the call ends up without a DebugLocation in e.g. LTO
builds that verify that both the enclosing function and calls to
inlinable functions have debug info attached.

This issue was noticed in Linux kernel KCSAN builds with LTO and debug
info enabled:

  | ...
  | inlinable function call in a function with debug info must have a !dbg location
  |   call void @__tsan_read8(i8* %432)
  | ...

To fix, ensure that all calls to the runtime have a DebugLocation
attached, where the possibility exists that the insertion-point might
not have any DebugLocation attached to it.

Reviewed By: nickdesaulniers

Differential Revision: https://reviews.llvm.org/D124937
2022-05-05 15:21:35 +02:00
Alexey Bataev 99f31acfce [SLP]Further improvement of the cost model for scalars used in buildvectors.
Further improvement of the cost model for the scalars used in
buildvectors sequences. The main functionality is outlined into
a separate function.
The cost is calculated in the following way:
1. If the Base vector is not an undef vector, resize the very first mask to
have a common VF and perform the action for 2 input vectors (including the
non-undef Base). Other shuffle masks are combined with the result of the
first stage and processed as a shuffle of 2 elements.
2. If the Base is an undef vector and there is only 1 shuffle mask, perform the
action only for 1 vector with the given mask, if it is not the identity
mask.
3. If > 2 masks are used, perform a series of shuffle actions for 2 vectors,
combining the masks properly between the steps.

The original implementation misses the very first analysis for the Base
vector, so the cost might be too optimistic in some cases. But it improves
the cost for the insertelements which are part of the current SLP graph.

Part of D107966.

Differential Revision: https://reviews.llvm.org/D115750
2022-05-05 06:04:25 -07:00
Florian Hahn 6bd2b70877
[SimpleLoopUnswitch] Add freeze if branch execs for partial unswitching.
We cannot skip freezing the condition if the unswitched branch
executes and the condition is a chain of ANDs/ORs. For example, if we
have an AND %c1, %c2 with %c1 == undef and %c2 == 0, there would be no
branch on undef in the original code, but a branch on undef if we
unswitch %c1.

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D124603
2022-05-05 09:44:07 +01:00
Chuanqi Xu 405bf90235 [NFC] [Pipelines] Hoist CoroCleanup as Module Pass
This is similar to the previous patch https://reviews.llvm.org/D123925. It
could also reduce the time spent calling declaresCoroCleanupIntrinsics, and it
is helpful for further changes.

Reviewed By: aeubanks

Differential Revision: https://reviews.llvm.org/D124362
2022-05-05 15:15:09 +08:00
Serge Pavlov 83914ee96f [InstCombine] Remove side effect of replaced constrained intrinsics
If a constrained intrinsic call was replaced by some value, it was not
removed in some cases. The dangling instruction resulted in useless
instructions executed at runtime. It happened because constrained
intrinsics usually have a side effect, which is used to model the interaction
with the floating-point environment. In some cases that is correct behavior,
but often the side effect is actually absent or can be ignored.

This change adds specific treatment of constrained intrinsics so that
their side effect can be removed if it is actually absent.

Differential Revision: https://reviews.llvm.org/D118426
2022-05-05 12:02:42 +07:00
Wael Yehia 2407c13aa4 [AIX][PGO] Enable linux style PGO on AIX
This patch switches the PGO implementation on AIX from using the runtime
registration-based section tracking to the __start_SECNAME/__stop_SECNAME
based approach. In order to enable the recognition of __start_SECNAME/__stop_SECNAME
symbols in the AIX linker, the -bdbg:namedsects:ss option needs to be used.

Reviewed By: jsji, MaskRay, davidxl

Differential Revision: https://reviews.llvm.org/D124857
2022-05-05 04:10:39 +00:00
Alexander Shaposhnikov ec7122f64b [InstCombine] Fold ((A&B)^C)|B
Fold ((A&B)^C)|B into C|B.

https://alive2.llvm.org/ce/z/zSGSor

This addresses the issue https://github.com/llvm/llvm-project/issues/55169
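
A minimal IR sketch (hypothetical i32 values):

```
define i32 @src(i32 %a, i32 %b, i32 %c) {
  %and = and i32 %a, %b
  %xor = xor i32 %and, %c
  %or  = or i32 %xor, %b       ; bits where B is set are 1 either way;
  ret i32 %or                  ; where B is clear, A&B is 0 and the xor is just C
}
; --> %or = or i32 %c, %b
```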

Test plan: ninja check-all

Differential revision: https://reviews.llvm.org/D124710
2022-05-05 00:56:20 +00:00
Sanjay Patel 14f257620c [InstCombine] add type constraint to intrinsic+shuffle fold
This check is in the related fold for binops,
but it was missed when the code was adapted
for intrinsics in 432c199e84. The new test
would crash when trying to create a new
intrinsic with mismatched types.
2022-05-04 13:07:26 -04:00
Sanjay Patel 7e6d318c50 [InstCombine] move shuffle after funnel shift with same-shuffled operands
This extends 432c199e84 and 9c4770eaab with an intrinsic
cited directly in issue #46238

Eventually, we will want to use llvm::isTriviallyVectorizable()
or create some new API for this list, but for now, I am intentionally
making a minimum change to reduce risk and only affect an intrinsic
with regression tests in place.
2022-05-04 13:07:26 -04:00
Sanjay Patel 15042f44a2 [InstCombine] propagate FMF when reordering intrinsics and shuffles
This was missed when extending the fold to allow fma with
9c4770eaab
2022-05-04 12:10:38 -04:00
Sanjay Patel 9c4770eaab [InstCombine] move shuffle after fma with same-shuffled operands
https://alive2.llvm.org/ce/z/sD-JVv

This extends 432c199e84 with a 3 arg intrinsic to demonstrate
that the code works with the extra operand.

Eventually, we will want to use llvm::isTriviallyVectorizable()
or create some new API for this list, but for now, I am intentionally
making a minimum change to reduce risk and only affect an intrinsic
with regression tests in place.
2022-05-04 11:50:38 -04:00
Florian Hahn 8b48223447
Recommit "[VPlan] Remove uneeded needsVectorIV check."
This reverts commit f4e1eaa375.

The patch was originally reverted because it uncovered an issue that has
now been fixed in 0ef8ca6d88.
2022-05-04 10:53:42 +01:00
serge-sans-paille 7030654296 [iwyu] Handle regressions in libLLVM header include
Running iwyu-diff on the LLVM codebase since fa5a4e1b95 detected a few
regressions; this fixes them.

Differential Revision: https://reviews.llvm.org/D124847
2022-05-04 08:32:38 +02:00
Hongtao Yu 3113e5bb52 [CSSPGO] Relax size limitation for priority inlining with preinlined profile
As a follow-up to D124632, I'm turning on unlimited size caps for inlining with preinlined profile. It should be safe as a preinlined profile has "bounded" inline contexts.

No noticeable size or perf delta was seen with two of our internal large services, but I think this is still a good change to be consistent with the other case.

Reviewed By: wenlei

Differential Revision: https://reviews.llvm.org/D124793
2022-05-03 18:43:07 -07:00
Hongtao Yu e95ae395aa [CSSPGO][NFC] Replace SampleProfileLoader::ProfileIsCS with FunctionSamples::ProfileIsCS.
The two fields have the same meaning. Their values come from the reader. Therefore I'm removing one.

Reviewed By: wenlei

Differential Revision: https://reviews.llvm.org/D124788
2022-05-03 18:32:37 -07:00
Sanjay Patel 432c199e84 [InstCombine] move shuffle after min/max with same-shuffled operands
This is an intrinsic version of the existing fold for binops.
As a first step, I only allowed min/max, but the code is set
up to make adding more intrinsics easy (with more or less than
2 arguments).
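
For illustration, a hedged IR sketch of moving the shuffle after a min/max intrinsic (names are illustrative):

```
declare <2 x i32> @llvm.smax.v2i32(<2 x i32>, <2 x i32>)

define <2 x i32> @src(<2 x i32> %x, <2 x i32> %y) {
  %sx = shufflevector <2 x i32> %x, <2 x i32> poison, <2 x i32> <i32 1, i32 0>
  %sy = shufflevector <2 x i32> %y, <2 x i32> poison, <2 x i32> <i32 1, i32 0>
  %m  = call <2 x i32> @llvm.smax.v2i32(<2 x i32> %sx, <2 x i32> %sy)
  ret <2 x i32> %m
}

define <2 x i32> @tgt(<2 x i32> %x, <2 x i32> %y) {
  %m = call <2 x i32> @llvm.smax.v2i32(<2 x i32> %x, <2 x i32> %y)
  %s = shufflevector <2 x i32> %m, <2 x i32> poison, <2 x i32> <i32 1, i32 0>
  ret <2 x i32> %s
}
```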

This (and possible follow-ups) are discussed in issue #46238.
2022-05-03 16:23:11 -04:00
Augie Fackler 1deea714b3 BuildLibCalls: simplify switch statement slightly
Per feedback on D123086 after submit.

Also added a test for vec_malloc et al attribute inference to show it's
doing the right thing.

The new tests exposed a defect, corrected by adding vec_free to the list of
free functions in MemoryBuiltins.cpp, which had been overlooked all the
way back in D94710, over a year ago.

Differential Revision: https://reviews.llvm.org/D124859
2022-05-03 13:17:33 -04:00
Dawid Jurczak 9c46a9cf61 [NFC][GVNSink] Don't pretend that iteration is over instructions when it's actually over blocks
Differential Revision: https://reviews.llvm.org/D124764
2022-05-03 17:19:40 +02:00
Igor Kirillov 4e5e042d9a [LoopVectorize] Support reductions that store intermediary result
Adds ability to vectorize loops containing a store to a loop-invariant
address as part of a reduction that isn't converted to SSA form due to
lack of aliasing info. Runtime checks are generated to ensure the store
does not alias any other accesses in the loop.
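
A rough sketch of the kind of loop this enables (names and types are illustrative; without aliasing info the store to %dst needs a runtime check against %src):

```
define void @sketch(ptr %dst, ptr %src, i64 %n) {
entry:
  br label %loop
loop:
  %iv  = phi i64 [ 0, %entry ], [ %iv.next, %loop ]
  %sum = phi i32 [ 0, %entry ], [ %sum.next, %loop ]
  %gep = getelementptr i32, ptr %src, i64 %iv
  %v = load i32, ptr %gep
  %sum.next = add i32 %sum, %v
  ; store of the running reduction to a loop-invariant address
  store i32 %sum.next, ptr %dst
  %iv.next = add i64 %iv, 1
  %ec = icmp eq i64 %iv.next, %n
  br i1 %ec, label %exit, label %loop
exit:
  ret void
}
```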

Ordered fadd reductions are not yet supported.

Differential Revision: https://reviews.llvm.org/D110235
2022-05-03 10:12:30 +01:00
David Green 6f81903e89 [LV][SLP] Mark fptosi_sat as vectorizable
This adds fptosi_sat and fptoui_sat to the list of trivially
vectorizable functions, mainly so that the loop vectorizer can vectorize
the instruction. Marking them as trivially vectorizable also allows them
to be SLP vectorized, and Scalarized.

The signature of a fptosi_sat requires two type overrides
(@llvm.fptosi.sat.v2i32.v2f32), unlike other intrinsics that often only
take a single one. This patch alters hasVectorInstrinsicOverloadedScalarOpd
to isVectorIntrinsicWithOverloadTypeAtArg, so that it can mark the first
operand of the intrinsic as an overloaded (but not scalar) operand.

Differential Revision: https://reviews.llvm.org/D124358
2022-05-03 09:32:34 +01:00
Vitaly Buka 098e807074 Revert "[DeadArgElim] Set unused arguments for internal functions"
Breaks bots, see https://reviews.llvm.org/D124699

This reverts commit e547a333a4.
2022-05-02 15:10:26 -07:00
Teresa Johnson 084b65f7dc [memprof] Only insert dynamic shadow load when needed
We don't need to insert a load of the dynamic shadow address unless there
are interesting memory accesses to profile.

Split out of D124703.

Differential Revision: https://reviews.llvm.org/D124797
2022-05-02 13:36:00 -07:00
Alexey Bataev e74a73782f [SLP][NFC]Minor code changes for better readability, NFC. 2022-05-02 12:58:25 -07:00
Teresa Johnson a0b5af46a2 [memprof] Don't instrument PGO and other compiler inserted variables
Suppress instrumentation of PGO counter accesses, which is unnecessary
and costly. Also suppress accesses to other compiler inserted variables
starting with "__llvm". This is a slightly expanded variant of what is
done for tsan in shouldInstrumentReadWriteFromAddress.

Differential Revision: https://reviews.llvm.org/D124703
2022-05-02 12:17:52 -07:00
Alexey Bataev 7ea03f0b4e [SLP]Improve reductions analysis and emission, part 1.
Currently SLP vectorizer walks through the instructions and selects
3 main classes of values: 1) reduction operations - instructions with same
reduction opcode (add, mul, min/max, etc.), which build the reduction,
2) reduced values - instructions with the same opcodes, but different
from the reduction opcode, 3) extra arguments - all other values,
instructions from a different basic block than the root node, or
instructions with too many or too few uses.

This scheme is not very efficient. It excludes some instructions and all
non-instruction values from the reductions (constants, profitable
gathers), and too many possibly reduced values are marked as extra arguments.
Patch improves this process by introducing a bit extended analysis
stage. During this stage, we still try to select 3 classes of the
values: 1) reduction operations - same as before, 2) possibly reduced
values - all instructions from the current block/non-instructions, which
may build a vectorization tree, 3) extra arguments - instructions from
the different basic blocks. Additionally, an extra sorting of the
possibly reduced values occurs to build the scalar sequences which
will most likely be vectorized, e.g. loads are grouped by the
distance between them, constants are grouped together, cmp instructions
are sorted by their compare types and predicates, extractelement
instructions are sorted by the vector operand, etc. Also, these groups
are reordered by their length so the longest group is the first in the
list of the possibly reduced values.

The vectorization process tries to emit the reductions for all these
groups. These reductions, remaining non-vectorized possible reduced
values and extra arguments are then combined into the final expression
just like it was before.

Differential Revision: https://reviews.llvm.org/D114171
2022-05-02 12:03:58 -07:00
Florian Hahn 0ef8ca6d88
[VPlan] Do not create VPWidenCall recipes for scalar vector factors.
'Widen' recipes are only used when actual vector values are generated.
Fix tryToWidenCall to not create VPWidenCallRecipes for scalar
vectorization factors.

This was exposed by D123720, because the widened recipes are considered
vector users.

Reviewed By: Ayal

Differential Revision: https://reviews.llvm.org/D124718
2022-05-02 19:40:33 +01:00
Quentin Colombet e547a333a4 [DeadArgElim] Set unused arguments for internal functions
Prior to this patch we would only set to undef the unused arguments of the
external functions. The rationale was that unused arguments of internal
functions wouldn't need to be turned into undef arguments because they
should have been simply eliminated by the time we reach that code.

This is actually not true because there are plenty of cases where we can't
remove unused arguments. For instance, if the internal function is used in
an indirect call, it may not be possible to change the function signature.
Yet, for statically known call-sites we would still like to mark the unused
arguments as undef.

This patch enables the "set undef arguments" optimization on internal
functions when we encounter cases where internal functions cannot be
optimized. I.e., whenever an internal function is marked "live".

Differential Revision: https://reviews.llvm.org/D124699
2022-05-02 11:16:32 -07:00
Jonas Paulsson 304378fd09 Reapply "[BuildLibCalls] Introduce getOrInsertLibFunc() for use when building
libcalls." (was 0f8c626). This reverts commit 14d9390.

The patch previously failed to recognize cases where a user had defined a
function alias with the same name as the library function.
Module::getFunction() would then return nullptr, which is what the
sanitizer discovered.

In this updated version a new function isLibFuncEmittable() has been
introduced, which is now used instead of TLI->has() any time a library function
is to be emitted. It additionally makes sure there is e.g. no function
alias with the same name in the module.

Reviewed By: Eli Friedman

Differential Revision: https://reviews.llvm.org/D123198
2022-05-02 19:37:00 +02:00
Arthur Eubanks b07aab8fc1 [GlobalOpt] Iterate over replaced values deterministically to constprop
If there are pre-existing dead instructions, the order we visit replaced
values can cause us sometimes to not delete dead instructions.

The added test non-deterministically failed without the change.
2022-05-02 09:43:20 -07:00
Nikita Popov 95fedfab6c [InstCombine] Handle non-canonical GEP index in indexed compare fold (PR55228)
Normally the index type will already be canonicalized here, but
this is not guaranteed depending on visitation order. The code
was already accounting for a potentially needed sext, but a trunc
may also be needed.

Add a ConstantExpr::getSExtOrTrunc() helper method to make this
simpler. This matches the corresponding IRBuilder method in behavior.

Fixes https://github.com/llvm/llvm-project/issues/55228.
2022-05-02 17:56:01 +02:00
Augie Fackler c7ae423e39 BuildLibCalls: add alloc-family attribute to many allocator functions
Differential Revision: https://reviews.llvm.org/D123086
2022-05-02 11:12:55 -04:00
Augie Fackler e940456531 BuildLibCalls: infer allocptr attribute for free and realloc() family functions
Differential Revision: https://reviews.llvm.org/D123084
2022-05-02 09:43:21 -04:00
Nikita Popov aae5f8115a [Local] Consider atomic loads from constant global as dead
Per the guidance in
https://llvm.org/docs/Atomics.html#atomics-and-ir-optimization,
an atomic load from a constant global can be dropped, as there can
be no stores to synchronize with. Any write to the constant global
would be UB.

IPSCCP will already drop such loads, but the main helper in Local
doesn't recognize this currently. This is motivated by D118387.

Differential Revision: https://reviews.llvm.org/D124241
2022-05-02 10:52:58 +02:00
Phoebe Wang 7c04454227 [ArgPromotion][Attributor] Update min-legal-vector-width when do promotion
X86 codegen uses the function attribute `min-legal-vector-width` to select the proper ABI. The intention of the attribute is to reflect the user's requirement when passing or returning vector arguments. So the Clang front-end iterates over the vector arguments and sets `min-legal-vector-width` to the maximum width for both caller and callee.

It is assumed that middle-end optimizations won't care about the attribute, except inlining and argument promotion.
- For inlining, we propagate the attribute of inlined functions because the inlining function becomes the new caller.
- For argument promotion, we check the `min-legal-vector-width` of the caller and callee and refuse to promote when they don't match.

The problem comes from the combination of these optimizations, as shown by https://godbolt.org/z/zo3hba8xW. The caller `foo` has two callees `bar` and `baz`. When doing argument promotion, both `foo` and `bar` have the same `min-legal-vector-width`, so the argument is promoted to a vector. Then inlining inlines `baz` into `foo` and updates `min-legal-vector-width`, which results in an ABI mismatch between `foo` and `bar`.

This patch fixes the problem by expanding the concept of `min-legal-vector-width` to be an indicator of a function's arguments. That is, any pass that touches function arguments has to set `min-legal-vector-width` to a value that reflects the width of the vector arguments. This makes sense because any argument modification is ABI related and is responsible for ABI compatibility.

Differential Revision: https://reviews.llvm.org/D123284
2022-05-02 14:13:05 +08:00
Florian Hahn 5387a38c38
[SimpleLoopUnswitch] Freeze individual OR/AND operands.
In some cases, it is not enough to freeze the final AND/OR operation
when chaining a number of invariant conditions together.

After creating a chain of ANDs/ORs, we assume all unswitched operands to
be either true or false. But if any of the operands is poison, the rest
of the operands could have any value after branching on the frozen
condition.

To avoid that, freeze individual operands, if needed. In some cases this
may lead to unnecessary freezes, but it seems required at least for some
cases (see trivial-unswitch-freeze-individual-conditions.ll)
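
A minimal sketch of the idea, assuming two unswitched invariant conditions (names are made up):

```
define void @sketch(i1 %inv.1, i1 %inv.2) {
entry:
  ; freeze each invariant operand before chaining them, so poison in
  ; %inv.1 cannot make the value of %inv.2 unconstrained after branching
  %fr.1 = freeze i1 %inv.1
  %fr.2 = freeze i1 %inv.2
  %cond = and i1 %fr.1, %fr.2
  br i1 %cond, label %unswitched, label %exit
unswitched:
  br label %exit
exit:
  ret void
}
```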

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D124554
2022-05-01 20:11:05 +01:00
Simon Pilgrim 34f97a3709 [VectorCombine] Merge isa<>/cast<> into dyn_cast<>. NFC.
We want to handle the assert in VectorCombine, so avoid the repeated isa/cast code.
2022-05-01 20:09:10 +01:00
Simon Pilgrim 09761ce295 [SLPVectorizer] Remove weird unicode character from comment. NFCI.
Whatever it was, Visual Assist really didn't like it....
2022-05-01 16:37:21 +01:00
Florian Hahn 8b022f87b0
[SimpleLoopUnswitch] Freeze trivial conditions if needed.
Trivial unswitching can also introduce new branches on undef/poison.
Freeze the conditions if needed.

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D124549
2022-04-30 19:53:36 +01:00
Juneyoung Lee 40a2e35599 [InstCombine] Remove the undef-related workaround code in visitSelectInst
This patch removes an old hack in visitSelectInst that was written to avoid miscompilation bugs in loop unswitch.
(Added via https://reviews.llvm.org/D35811)

The legacy loop unswitch pass will be removed after D124376, and the new simple loop unswitch pass correctly uses freeze to avoid introducing UB after D124252.

Since the hack is not necessary anymore, this patch removes it.

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D124426
2022-04-30 20:48:42 +09:00
Hongtao Yu bdb8c50a1c [CSSPGO] Turn on priority inlining for probe-only profile
We have seen that the priority inliner delivered on-par performance with the old inliner for probe-only CSSPGO profiles, as long as there is no size budget. I'm turning on the priority inliner for probe-only profiles by default.

Reviewed By: wenlei

Differential Revision: https://reviews.llvm.org/D124632
2022-04-29 17:31:56 -07:00
Hongtao Yu e36786d15f [CSSPGO] Rename ProfileIsCSNested and ProfileIsCSFlat
To be more clear and definitive, I'm renaming `ProfileIsCSFlat` back to `ProfileIsCS` which stands for full context-sensitive flat profiles.  `ProfileIsCSNested` is now renamed to `ProfileIsPreInlined` and is extended to be applicable for CS flat profiles too. More specifically, `ProfileIsPreInlined` is for any kind of profiles (flat or nested) that contain 'ShouldBeInlined' contexts. The flag is encoded in the profile summary section for extbinary profiles and is computed on-the-fly for text profiles.

Reviewed By: wenlei

Differential Revision: https://reviews.llvm.org/D122602
2022-04-29 17:03:52 -07:00
James Y Knight 02aa795785 Revert "[JumpThreading][NFC][CompileTime] Do not recompute BPI/BFI analyzes"
This change has caused non-reproducibility of a self-build of Clang
when using NewPM and providing profile data.

This reverts commit 35f38583d2.
2022-04-29 21:15:47 +00:00
Alexey Bataev 484fcb9888 [SLP][NFC]Fix a comment. 2022-04-29 09:27:13 -07:00
Florian Hahn a80081763c
[SimplifyCFG] Avoid shifting by a too large exponent.
TI->getBitWidth can be > 64 and in those cases the shift will be UB due
to the exponent being too large.

To fix this, cap the shift at 63. I think this should work out fine,
because TableSize is itself a 64 bit type and the maximum table size
must fit in the type. Also, if we would underestimate the size here, at
most we get an extra ZExt.

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D124608
2022-04-29 15:19:06 +01:00
Anna Thomas 205246cb64 [CompileTime] [Passes] Avoid computing unnecessary analyses. NFC
Similar to c515b2f39e, if there are no loops in the function as seen
through LI, we should avoid computing the remaining expensive analyses
(such as SCEV, BPI). Reordered the analysis requests and return early
if there are no loops.

The logic of avoiding expensive analyses is applied to LoopVectorizer,
LoopLoadElimination and LoopUnrollPass, i.e. all function passes which operate
on loops.

This is an NFC with compile time improvement.

Differential Revision: https://reviews.llvm.org/D124529
2022-04-29 10:00:06 -04:00
Nikita Popov 1881711fbb [InstCombine] Remove memset of undef value
This removes memset with undef char. We already do this for stores
of undef value.
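
A minimal IR sketch of the removed pattern (intrinsic signature written from memory, so treat it as illustrative):

```
declare void @llvm.memset.p0.i64(ptr, i8, i64, i1)

define void @sketch(ptr %p) {
  ; memset with an undef fill value can be dropped, like a store of undef
  call void @llvm.memset.p0.i64(ptr %p, i8 undef, i64 32, i1 false)
  ret void
}
```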

This comes with the caveat that this optimization is not, strictly
speaking, legal for undef values, because we might be overwriting
a poison value. However, our entire load/store model currently still
operates on undef values, so we need to support undef here as well
for internal consistency.

Once https://github.com/llvm/llvm-project/issues/52930 is resolved,
these and related folds can be limited to poison -- I've added
FIXMEs to that effect.

Differential Revision: https://reviews.llvm.org/D124173
2022-04-29 14:51:18 +02:00
Ricky Zhou 24a133e16f
[LV] Rename CountRoundDown to VectorTripCount (NFC)
The name CountRoundDown is potentially misleading, as the number of
iterations can be rounded up when folding the tail.

Reviewed By: fhahn

Differential Revision: https://reviews.llvm.org/D119681
2022-04-29 13:50:00 +01:00
Nikita Popov 982cbed819 [InstCombine] Fold logical and/or of range icmps with nowrap flags
This is an edge-case where we don't convert to bitwise and/or based
on implies poison reasoning, so explicitly try to perform the fold
in logical form. The transform itself is poison-safe, as both icmps
are based on the same value and any nowrap flags are discarded as
part of the fold (https://alive2.llvm.org/ce/z/aCwC8b for the used
example).
2022-04-29 14:42:42 +02:00
Florian Hahn e66127e69b
[VPlan] Simplify & adjust code as suggested in D123005.
Improve code as suggested in D123005. Applied separately, because the
comments where made a diff that has not been rebased to current main.
2022-04-29 13:34:54 +01:00
Nikita Popov 57aaeefc18 [InstCombine] Pass ICmpInsts to foldAndOrOfICmpsUsingRanges() (NFC)
Pass the whole instruction rather than unpacking it. This makes it
easier to reuse the function in another place, as the entire
logic is encapsulated.
2022-04-29 12:46:31 +02:00
Nikita Popov 1f53932a95 [InstCombine] Remove foldAndOrOfEqualityCmpsWithConstants() fold
This fold handles a special subset of foldAndOrOfICmpsUsingRanges();
use the more generic implementation instead.

The result can differ if a representation using a range comparison
is possible, in which case that is preferred over masking. There is
a canonicalization opportunity here.
2022-04-29 12:23:00 +02:00
Nikita Popov 5515263e44 [InstCombine] Fold and of two ranges differing by mask
This is the de Morgan conjugated variant of the existing fold for
ors. Implement this by switching the range code to always work
on ors and inverting the operands at the start and end. This makes
reasoning easier and makes the extension more obviously correct.
2022-04-29 12:01:38 +02:00
Florian Hahn fb4113ef0c
[Passes] Remove legacy LoopUnswitch pass.
The legacy LoopUnswitch pass is only used in the legacy pass manager
pipeline, which is deprecated.

The NewPM replacement is SimpleLoopUnswitch and I think it is time to
remove the legacy LoopUnswitch code.

Fixes #31000.

Reviewed By: aeubanks, Meinersbur, asbirlea

Differential Revision: https://reviews.llvm.org/D124376
2022-04-29 10:30:49 +01:00
Nikita Popov d5ee20fcc9 [InstCombine] Switch an or of icmps fold to use constant ranges
We can express this fold more naturally when working on the constant
range implementation. This change is not entirely NFC, because the
code now also handles cases that don't match the precise pattern
this previously looked for, e.g. we can omit an add on one of the
ranges.
2022-04-29 11:15:54 +02:00
David Green 7047c47918 [VecCombine] Fix sort comparator logic in foldShuffleFromReductions
I think this sort comparator was overly complex, and the windows
expensive check bot agreed, failing as it was not giving a strict weak
ordering. Change it to use the comparison of the mask values as unsigned
integers. This should sort the undef elements to the end whilst keeping
X<Y otherwise.
2022-04-29 09:30:02 +01:00
Nikita Popov 884e9a877b [SimplifyCFG] Replace condition value when threading
Replace the condition value with the known constant value on the
threaded edge. This happens implicitly with phi threading because
we replace with the incoming value, but not for non-phi threading.
2022-04-29 09:50:27 +02:00
Nikita Popov 4e545bdb35 [SimplifyCFG] Thread branches on same condition in more cases (PR54980)
SimplifyCFG implements basic jump threading, if a branch is
performed on a phi node with constant operands. However,
InstCombine canonicalizes such phis to the condition value of a
previous branch, if possible. SimplifyCFG does support this as
well, but only in the very limited case where the same condition
is used in a direct predecessor -- notably, this does not include
the common diamond pattern (i.e. two consecutive if/elses on the
same condition).

This patch extends the code to look back a limited number of
blocks to find a branch on the same value, rather than only
looking at the direct predecessor.
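
A minimal sketch of the diamond shape that can now be threaded (block names are made up):

```
define i32 @sketch(i1 %c) {
entry:
  br i1 %c, label %a, label %b
a:
  br label %merge
b:
  br label %merge
merge:
  ; second branch on the same condition; the block with the first branch
  ; is not a direct predecessor of %merge, but the edges can still be threaded
  br i1 %c, label %x, label %y
x:
  ret i32 1
y:
  ret i32 0
}
```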

Fixes https://github.com/llvm/llvm-project/issues/54980.

Differential Revision: https://reviews.llvm.org/D124159
2022-04-29 09:44:05 +02:00
Florian Hahn f4e1eaa375
Revert "[VPlan] Remove uneeded needsVectorIV check."
This reverts commit 43842b887e while I
investigate a buildbot failure.

It also reverts the follow-up commit
2883de0514.
2022-04-28 20:16:21 +01:00
David Green ded8187e35 [VectorCombine] Try to reduce shuffle cost for commutative reduction operands
Given a shuffle feeding a commutative reduction, the lane ordering of
the shuffle will not alter the result. This is also true if there are a
number of operations between the reduction and the shuffle, provided
they only operate lane-wise. This patch searches for such cases in
VectorCombine, allowing us to check the cost of the shuffle vs. an
in-order identity shuffle and replace the order if possible. It only
handles a single shuffle at the moment to keep things simple, and it
can ignore splats that produce results where every element is the
same.
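
A minimal sketch of the shape being targeted, assuming an integer add reduction (names are illustrative):

```
declare i32 @llvm.vector.reduce.add.v4i32(<4 x i32>)

define i32 @sketch(<4 x i32> %x) {
  ; the reduction is commutative, so the lane order picked by the shuffle
  ; does not change the result and a cheaper mask may be used instead
  %s = shufflevector <4 x i32> %x, <4 x i32> poison, <4 x i32> <i32 3, i32 2, i32 1, i32 0>
  %r = call i32 @llvm.vector.reduce.add.v4i32(<4 x i32> %s)
  ret i32 %r
}
```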

This is a more powerful version of a combine that already happens in
InstCombine, capable of optimizing more cases by looking through more
instructions and being able to cost the shuffle.

Differential Revision: https://reviews.llvm.org/D123494
2022-04-28 19:46:12 +01:00
Alexey Bataev 75e1cf4a6a [COST]Improve cost model for shuffles in SLP.
Introduced masks where they were not added and improved target-dependent
cost models to avoid returning incorrect cost results after
adding masks.

Differential Revision: https://reviews.llvm.org/D100486
2022-04-28 10:04:41 -07:00
Pavel Samolysov 9197959e13 [ArgPromotion] Move ArgPart and OffsetAndArgPart to anonymous namespace
The structure ArgPart and alias OffsetAndArgPart have been moved
into the anonymous namespace. NFC.

Reviewed By: aeubanks

Differential Revision: https://reviews.llvm.org/D124617
2022-04-28 09:51:46 -07:00
Pavel Samolysov 6b825e50f7 [ArgPromotion] Change the condition to check the promotion limit
The condition should be 'ArgParts.size() > MaxElements', so that if we
have exactly 3 elements in the 'ArgParts' vector, the promotion should
be allowed because the 'MaxElement' threshold is not exceeded yet.

The default value for 'MaxElement' has been decreased to 2 in order
to avoid an actual change in argument promoting behavior. However,
this changes byval argument transformation behavior by allowing
no more than 2 arguments to be added to the function instead of the
3 allowed before.

Reviewed By: aeubanks

Differential Revision: https://reviews.llvm.org/D124178
2022-04-28 09:42:58 -07:00
Florian Hahn 2883de0514
[VPlan] Fix comment formatting from 43842b887e. 2022-04-28 16:31:48 +01:00
Florian Hahn 43842b887e
[VPlan] Remove uneeded needsVectorIV check.
Remove one of the last remaining uses of ::needsVectorIV, preparing for
its removal. Now that usesScalars is available and based on the
information explicit in VPlan, there is no need to use the pre-computed
needsVectorIV.

Reviewed By: Ayal

Differential Revision: https://reviews.llvm.org/D123720
2022-04-28 16:27:34 +01:00
Alexey Bataev 9861ca0c23 Revert "[COST]Improve cost model for shuffles in SLP."
This reverts commit 29a470e380 to fix
a crash reported in https://reviews.llvm.org/D100486#3479989.
2022-04-28 08:11:56 -07:00
Pavel Samolysov 744a837838 [ArgPromotion] Rename variables according to the code style. NFC
Some loop counters ('i', 'e') and variables ('type') were not named
in accordance with the code style, and clang-tidy issues warnings
about the use of such variables. This patch renames the variables
and fixes some typos in the comments within the source file.

Differential Revision: https://reviews.llvm.org/D123662
2022-04-28 15:32:05 +02:00
Chris Jackson c792884589 [Debuginfo][LSR] Add salvaging variadic dbg.value intrinsics [2/2]
Reland 3f2b76ec90 with the test corrected
to require x86-registered-target.

Differential Revision: https://reviews.llvm.org/D120169
2022-04-28 14:21:56 +01:00
Chris Jackson cd5f9efc4d Revert "[Debuginfo][LSR] Add salvaging variadic dbg.value intrinsics [2/2]"
This reverts commit 3f2b76ec90.
2022-04-28 14:07:31 +01:00
Nikita Popov 90dba831ae [InstCombine] Fold or of icmp ne trunc/and
This adds the de Morgan conjugated variant for the existing
"and eq" style fold.

Proof: https://alive2.llvm.org/ce/z/tkNAcG
2022-04-28 15:07:16 +02:00
Chris Jackson 3f2b76ec90 [Debuginfo][LSR] Add salvaging variadic dbg.value intrinsics [2/2]
Reland commit 74273d575f following a fix
for a memory leak. The DVIRecoveryRecord vectors now use unique_ptr.

Differential Revision: https://reviews.llvm.org/D120169
2022-04-28 13:55:49 +01:00
Nikita Popov b9dc565147 [GVN] Encode GEPs in offset representation
When using opaque pointers, convert GEPs into offset representation
of the form P + V1 * Scale1 + V2 * Scale2 + ... + ConstantOffset.
This allows us to recognize equivalent address calculations even if
the GEPs don't use the same source element type.

This fixes an opaque pointer codegen regression seen in rustc.
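
A small sketch of two GEPs that compute the same address with different element types (illustrative only):

```
define i32 @sketch(ptr %p, i64 %i) {
  ; both addresses are %p + 4 * %i; in offset form GVN can treat the
  ; loads as redundant even though the GEP element types differ
  %g1 = getelementptr i32, ptr %p, i64 %i
  %off = shl i64 %i, 2
  %g2 = getelementptr i8, ptr %p, i64 %off
  %v1 = load i32, ptr %g1
  %v2 = load i32, ptr %g2
  %s = add i32 %v1, %v2
  ret i32 %s
}
```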

Differential Revision: https://reviews.llvm.org/D124527
2022-04-28 09:32:05 +02:00
Max Kazantsev 35f38583d2 [JumpThreading][NFC][CompileTime] Do not recompute BPI/BFI analyzes
They can already be available, and even if not, DT/LI can be available.
We should not recompute them. Old PM is unchanged because it would
require changing dependencies, and we don't care enough about it.

Differential Revision: https://reviews.llvm.org/D124439
Reviewed By: nikic, aeubanks
2022-04-28 10:46:08 +07:00
Wenju He 96d3be8443 [InferAddressSpaces] Check if AS are the same in isNoopPtrIntCastPair
isNoopAddrSpaceCast expects SrcAS to be different from DestAS.
If the two address spaces are the same, consider the ptrtoint/inttoptr pair a no-op cast.
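
A minimal sketch of the same-address-space case (illustrative):

```
define ptr @sketch(ptr %p) {
  ; SrcAS == DestAS, so the ptrtoint/inttoptr round trip is a no-op pair
  %i = ptrtoint ptr %p to i64
  %q = inttoptr i64 %i to ptr
  ret ptr %q
}
```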

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D123573
2022-04-28 11:10:55 +08:00
Arthur Eubanks 4e65291837 [OpaquePtr][GlobalOpt] Don't attempt to evaluate global constructors with arguments
Previously all entries in global_ctors had to have the void()* type and
we'd skip evaluating bitcasted functions. With opaque pointers we may
see the function directly.

Fixes #55147.

Reviewed By: #opaque-pointers, nikic

Differential Revision: https://reviews.llvm.org/D124553
2022-04-27 19:00:44 -07:00
Fangrui Song c74a706893 [LegacyPM] Remove ThreadSanitizerLegacyPass
Using the legacy PM for the optimization pipeline was deprecated in 13.0.0.
Following recent changes to remove non-core features of the legacy
PM/optimization pipeline, remove ThreadSanitizerLegacyPass.

Reviewed By: #sanitizers, vitalybuka

Differential Revision: https://reviews.llvm.org/D124209
2022-04-27 16:25:41 -07:00
Kirill Stoimenov 761366e6ae Revert "[Debuginfo][LSR] Add salvaging variadic dbg.value intrinsics [2/2]"
This reverts commit 74273d575f.

Buildbot: https://lab.llvm.org/buildbot/#/builders/5/builds/22795
Failing with memory leak.
2022-04-27 23:11:48 +00:00
Nicolas Abram Lujan f8a574bf4d [InstCombine] C0 >> (X - C1) --> (C0 << C1) >> X
With the right pre-conditions, we can fold the offset
into the shifted constant:
https://alive2.llvm.org/ce/z/drMRBU
https://alive2.llvm.org/ce/z/cUQv-_

Fixes #55016

Differential Revision: https://reviews.llvm.org/D124369
2022-04-27 14:18:30 -04:00
Martin Sebor efa0f12c0b [InstCombine] Fold strnlen calls in equality to zero.
Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D123818
2022-04-27 12:03:24 -06:00
Alexey Bataev 29a470e380 [COST]Improve cost model for shuffles in SLP.
Introduced masks where they were not added and improved target-dependent
cost models to avoid returning incorrect cost results after
adding masks.

Differential Revision: https://reviews.llvm.org/D100486
2022-04-27 10:56:26 -07:00
Wei Wang 26a0d53b15 [CHR] Skip region containing llvm.coro.id
When a block containing llvm.coro.id is cloned during CHR, an invalid
PHI node with token type is inserted at the beginning of the block containing llvm.coro.begin.
To avoid such cases, we exclude regions with llvm.coro.id.

Reviewed By: ChuanqiXu

Differential Revision: https://reviews.llvm.org/D124418
2022-04-27 10:27:25 -07:00
Roman Lebedev ffafa71f64
[InstCombine] 'round up integer': if bias is just right, just reuse instructions
This is only useful if we can't create a new instruction
because %x.aligned has other uses and already sticks around.
2022-04-27 17:27:02 +03:00
Roman Lebedev aac0afd1dd
[InstCombine] Fold 'round up integer' pattern (when alignment is a power of two)
But don't deal with non-splats.

The test coverage is sufficiently exhaustive,
and alive is happy about the changes there.

Example with constants: https://alive2.llvm.org/ce/z/EUaJ5- / https://alive2.llvm.org/ce/z/Bkng2X
General proof: https://alive2.llvm.org/ce/z/3RjJ5A
2022-04-27 17:26:55 +03:00
Shilei Tian a6b355dd31 [SLP] Fix a typo that causes redundant assertion and potential segment fault
Reviewed By: ABataev

Differential Revision: https://reviews.llvm.org/D124497
2022-04-27 10:07:59 -04:00
Anna Thomas c515b2f39e [IRCE] Avoid computing potentially unnecessary analyses. NFC
IRCE is a function pass that operates on loops. If there are no loops in
the function (as seen through LI), we should avoid computing the
remaining expensive analyses (such as BPI). Reordered the analysis
requests and return early if there are no loops. This is an NFC with
a compile time improvement.

The same will be done in a follow-up patch for the loop vectorizer.

Reviewed-By: nikic
Differential Revision: https://reviews.llvm.org/D124478
2022-04-27 09:22:10 -04:00
Chris Jackson 74273d575f [Debuginfo][LSR] Add salvaging variadic dbg.value intrinsics [2/2]
This relands commit 8f550368b1.

The test is amended with REQUIRES: x86-registered-target, in line with
the other debuginfo-scev-salvage tests.

Differential Revision: https://reviews.llvm.org/D120169
2022-04-27 13:10:30 +01:00
Chris Jackson 855752e563 Revert [Debuginfo][LSR] Add salvaging variadic dbg.value intrinsics[2/2]
This reverts commit 8f550368b1.
2022-04-27 13:06:03 +01:00
Chris Jackson 8f550368b1 [Debuginfo][LSR] Add salvaging variadic dbg.value intrinsics [2/2]
Second of two patches to extend SCEV-based salvaging to dbg.value
intrinsics that have multiple location ops pre-LSR. This second patch
adds the core implementation.

Reviewers: @StephenTozer, @djtodoro

Differential Revision: https://reviews.llvm.org/D120169
2022-04-27 12:47:35 +01:00
Chris Jackson c45e4c140f [Debuginfo][LSR] Add salvaging variadic dbg.value intrinsics [1/2] [NFC]
First of two patches that extend SCEV-based salvaging to enable
salvaging of dbg.value intrinsics that have multiple location ops
before the Loop Strength Reduction pass.

The existing single-op SCEV-based salvaging can generate variadic
dbg.value intrinsics in order to salvage a dbg.value that has a single
location op. If a dbg.value has multiple location ops before LSR, and
LSR optimises away one or more of the location operands, then currently
no salvaging will be attempted.

Salvaging can now be added, but first this patch cleans up consistency
in both the code and comments, and applies some refactoring to make
application of the new salvaging implementation more straightforward.

- Use SCEVDbgValueBuilder for both types of recovery expressions:
  IV-offset based and iteration count based.
- Combine the functions that write the final DIExpression.
- Move some static functions into member functions.

Reviewers: @Orlando

Differential Revision: https://reviews.llvm.org/D120168
2022-04-27 11:43:05 +01:00
Nikita Popov c103f5e9da [InstCombine] Combine opaque pointer GEPs with mismatching element types
Currently, two GEPs will only be combined if the result element
type of one is the same as the source element type of the other.
However, this means we may miss folding opportunities where the
second GEP could be rewritten using a different element type. This
is especially relevant for opaque pointers, where constant GEPs
often use i8 element type.

Address this by converting GEP indices to offsets, adding them,
and then converting them back to indices. The first (inner) GEP
is allowed to have variable indices as well, in which case only
the constant suffix is converted into an offset.
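
A small sketch of the kind of GEP pair this now handles with opaque pointers (names are illustrative):

```
define ptr @sketch(ptr %p, i64 %i) {
  ; %g2 is %p + 4*%i + 4; converting indices to offsets lets the pair be
  ; combined even though the element types (i32 vs i8) do not match
  %g1 = getelementptr i32, ptr %p, i64 %i
  %g2 = getelementptr i8, ptr %g1, i64 4
  ret ptr %g2
}
```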

This should address the regression reported in
https://reviews.llvm.org/D123300#3467615.

Differential Revision: https://reviews.llvm.org/D124459
2022-04-27 09:33:47 +02:00
Alexandros Lamprineas a910337b5d [FuncSpec] Conditional jump or move depends on uninitialised value(s).
I found this bug when performing a two-stage build of clang with
Function Specialization enabled and tuned aggressively. The crash
appears only on release builds.

Fixes https://github.com/llvm/llvm-project/issues/55000.

Before accessing the contents of the ArgInfo iterator inside
SCCPInstVisitor::markArgInFuncSpecialization, we should be
checking that the iterator is valid.

Differential Revision: https://reviews.llvm.org/D124114
2022-04-27 07:28:25 +01:00
Martin Sebor ffed0cfcdb [SimplifyLibCalls] avoid slicing 64-bit integers in an ILP32 build (PR #54739)
Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D123472
2022-04-26 17:20:56 -06:00
Martin Sebor 449adafabe [InstCombine] Fold strnlen of constant strings.
Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D123817
2022-04-26 16:15:28 -06:00
Ricky Zhou 4041c44853 [InstCombine] Update predicate when canonicalizing comparisons in canonicalizeClampLike.
canonicalizeClampLike canonicalizes the ule/ugt comparisons to ult/uge,
respectively. However, it does not update the variable holding the
comparison predicate type after doing this. Later code fails to handle
the non-canonical predicate type (specifically, the swap of
ThresholdLowIncl and ThresholdHighExcl when Pred0 has been canonicalized
from ugt to uge). This leads to the miscompile reported in PR53252. Fix
this by updating the comparison predicate after canonicalizing.

Fixes #53252

Differential Revision: https://reviews.llvm.org/D119690
2022-04-26 17:35:45 -04:00
Michael Kruse ff289feeba [OpenMPIRBuilder] Remove ContinuationBB argument from Body callback.
The callback is expected to create a branch to the ContinuationBB (sometimes called FiniBB in some lambdas) argument when finishing. This creates problems:

 1. The InsertPoint used for CodeGenIP does not need to be the end of a block. If it is not, a naive callback will insert a branch instruction into the middle of the block.

 2. The BasicBlock the CodeGenIP is pointing to may or may not have a terminator. There is a conflict about where to branch to if the block already has a terminator.

 3. Some API functions work only with blocks having a terminator. Some workarounds have been used to insert a temporary terminator that is removed again.

 4. Some callbacks are sensitive to whether the BasicBlock has a terminator or not. This creates a callback ordering problem where different callbacks may have different behaviour depending on whether a previous callback created a terminator or not. The problem also exists for FinalizeCallbackTy, where some callbacks do create a branch to another "continue" block, but unlike BodyGenCallbackTy it does not receive the target as an argument. This is not addressed in this patch.

With this patch, the callback receives an CodeGenIP into a BasicBlock where to insert instructions. If it has to insert control flow, it can split the block at that position as needed but otherwise no separate ContinuationBB is needed. In particular, a callback can be empty without breaking the emitted IR. If the caller needs the control flow to branch to a specific target, it can insert the branch instruction itself and pass an InsertPoint before the terminator to the callback.

Certain frontends such as Clang may expect the current IRBuilder position to be at the end of a basic block. In this case its callbacks must split the block at CodeGenIP before setting the IRBuilder position such that the instructions after CodeGenIP are moved to another basic block and before returning create a new branch instruction to the split block.

Some utility functions such as `splitBB` are supporting correct splitting of BasicBlocks, independent of whether they have a terminator or not, returning/setting the InsertPoint of an IRBuilder to the end of split predecessor block, and optionally omitting creating a branch to the split successor block to be added later.

Reviewed By: kiranchandramohan

Differential Revision: https://reviews.llvm.org/D118409
2022-04-26 16:35:01 -05:00
Vasileios Porpodas fa8a9fea47 Recommit "[SLP][TTI] Refactoring of `getShuffleCost` `Args` to work like `getArithmeticInstrCost`"
This reverts commit 6a9bbd9f20.

Code review: https://reviews.llvm.org/D124202
2022-04-26 14:02:40 -07:00
Martin Sebor ce8f42d4af [InstCombine] Fold memrchr calls with a constant character.
Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D123629
2022-04-26 14:02:50 -06:00
Martin Sebor 10c99ce67d [InstCombine] Fold memrchr calls with constant size, bail on excessive.
Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D123626
Differential Revision: https://reviews.llvm.org/D123628
2022-04-26 14:02:50 -06:00
Martin Sebor 25febbd155 [InstCombine] Fold strnlen with a bound of zero and one.
Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D123816
2022-04-26 14:02:50 -06:00
Martin Sebor 2807c420cd [InstCombine] add a strnlen handler stub.
Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D123815
2022-04-26 14:02:49 -06:00
Sanjay Patel 903aa5e0f8 [InstCombine] try to fold icmp with mismatched extended operands
If a value is known to be non-negative and zexted,
that's the same thing as sexted.

So for the purpose of looking past the casts with
an icmp, treat it as if it was a sext:
https://alive2.llvm.org/ce/z/_BDsGV
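
A hedged sketch of the idea (the 'and' is just one way to make the value provably non-negative):

```
define i1 @sketch(i8 %a, i8 %b) {
  ; %nonneg is known non-negative, so its zext behaves like a sext and
  ; the compare can be reasoned about as if both operands were sext'd
  %nonneg = and i8 %a, 127
  %zx = zext i8 %nonneg to i32
  %sx = sext i8 %b to i32
  %cmp = icmp slt i32 %zx, %sx
  ret i1 %cmp
}
```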

This is necessary, but not enough to solve the
motivating problem:
https://github.com/llvm/llvm-project/issues/55013

Differential Revision: https://reviews.llvm.org/D124419
2022-04-26 14:26:36 -04:00
Vasileios Porpodas 6a9bbd9f20 Revert "[SLP][TTI] Refactoring of `getShuffleCost` `Args` to work like `getArithmeticInstrCost`"
This reverts commit 55ce296d6f.
2022-04-26 11:25:26 -07:00
Sanjay Patel c8ed784ee6 [InstCombine] fold freeze of partial undef/poison vector constants
We can always replace the undef elements in a vector constant
with regular constants to get rid of the freeze:
https://alive2.llvm.org/ce/z/nfRb4F
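
A minimal sketch (the replacement constant for the undef lane is arbitrary; 0 is just one choice):

```
define <2 x i32> @sketch() {
  ; the undef lane can be replaced by any defined constant, after which
  ; the freeze is no longer needed
  %f = freeze <2 x i32> <i32 undef, i32 7>
  ret <2 x i32> %f
}
; ->  ret <2 x i32> <i32 0, i32 7>
```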

The select diffs show that we might do better by adjusting the
logic for a frozen select condition. We may also want to refine
the vector constant replacement to consider forming a splat.

Differential Revision: https://reviews.llvm.org/D123962
2022-04-26 14:16:11 -04:00
Vasileios Porpodas 55ce296d6f [SLP][TTI] Refactoring of `getShuffleCost` `Args` to work like `getArithmeticInstrCost`
Before this patch `Args` was used to pass a broadcast's arguments by SLP.
This patch changes this. `Args` is now used for passing the operands of
the shuffle.

Differential Revision: https://reviews.llvm.org/D124202
2022-04-26 11:11:29 -07:00
Augie Fackler a907d36cfe Attributes: add a new `allocptr` attribute
This continues the push away from hard-coded knowledge about functions
towards attributes. We'll use this to annotate free(), realloc() and
cousins and obviate the hard-coded list of free functions.

Differential Revision: https://reviews.llvm.org/D123083
2022-04-26 13:57:11 -04:00
Igor Kudrin 39ce68886b [LoopPeel][NFCI] Simplify the code to calculate peel count for PGO
This reorganizes the code as a preparation for D123865:

 * Use more descriptive names for variables
 * Simplify a condition by use an already calculated value
   for `MaxPeelCount`
 * Remove a duplicate log entry
 * Report basic values for loop costs

Differential Revision: https://reviews.llvm.org/D124388
2022-04-26 18:44:24 +04:00
Igor Kudrin c71890e158 [LoopPeel][NFC] Exit early if there is no room for peeling
Differential Revision: https://reviews.llvm.org/D123864
2022-04-26 18:43:56 +04:00
Florian Hahn c59d95f6a4
[ConstraintElimination] Check if const. is small enough before using it
Check if the value of a ConstantInt is small enough to be used for
solving before calling getSExtValue.

Fixes #55085
2022-04-26 13:56:32 +01:00
Liqiang Tao b9fc18f89a [llvm][Inline] Remove PriorityInlineOrder in SCC inliner
Since most SCCs have size 1, the PriorityInlineOrder would not change the inline
order in the SCC inliner.

Reviewed By: kazu

Differential Revision: https://reviews.llvm.org/D123608
2022-04-26 20:20:10 +08:00
Florian Hahn 857c612d89
[IPSCCP] Support unfeasible default dests for switch.
At the moment, unfeasible default destinations are not handled properly
in removeNonFeasibleEdges. So far, only unfeasible cases are removed,
but later code expects unreachable blocks to have no predecessors.

This is causing the crash reported in PR49573.

If the default destination is unfeasible it won't be executed. Create
a new unreachable block on demand and use that as default
destination.
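
A rough sketch of the resulting shape (names are made up):

```
define void @sketch(i32 %x) {
entry:
  ; the unfeasible default edge is redirected to a fresh block that only
  ; contains unreachable, so no other unreachable block keeps predecessors
  switch i32 %x, label %unfeasible.default [ i32 0, label %case0 ]
case0:
  ret void
unfeasible.default:
  unreachable
}
```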

Note that at the moment this only is relevant for cases where
resolvedUndefsIn marks the first case as executable. Regular switch
handling has a FIXME/TODO to support determining whether the default
case is feasible or not.

Fixes #48917.

Differential Revision: https://reviews.llvm.org/D113497
2022-04-26 12:41:41 +01:00
Dmitry Makogon d03d2d8aea [RS4GC] Prune inputs of BDV if they are BDV themselves
Don't check whether an input of a BDV can be pruned if the input
is the BDV itself. The BDV is present in the states map, so if the input
is the BDV itself we would return false. So explicitly check this case.

Differential Revision: https://reviews.llvm.org/D123846
2022-04-26 16:05:00 +07:00
Sanjay Patel 6631907ad2 [InstCombine] use isKnownNonNegative to reduce code duplication; NFC
We may be able to make the ValueTracking wrapper smarter
in the future (for example, analyze a simple recurrence),
so this will automatically benefit if that happens.
2022-04-25 17:13:29 -04:00
Valery N Dmitriev 88b9e46fb5 [SLP] Steer for the best chance in tryToVectorize() when rooting with binary ops.
The tryToVectorize() method implements one of the search paths for vectorizable tree roots in the SLP
vectorizer, specifically for binary and comparison operations. The order of probing various scalar pairs
was defined by its implementation: the instruction operands, then climb over one operand if
the instruction is its sole user, and then perform the same actions for the other operand if previous
attempts failed. The problem with this approach is that among these options we can have more than a
single vectorizable tree candidate, and it is not necessarily the one encountered first.
Trying to build a vectorizable tree for each possible combination just for evaluation is expensive.
But we already have a lookahead heuristics mechanism which we use for finding the best pick among
operands of commutative instructions. It calculates a cumulative score for candidates in two
consecutive lanes. This patch introduces use of the heuristics for choosing the best pair among
several combinations. We only try the one that looks most promising for vectorization.
An additional benefit is that we reduce the total number of vectorization trees built for probes
because we skip those that look non-profitable early.

Reviewed By: Alexey Bataev (ABataev), Vasileios Porpodas (vporpo)
Differential Revision: https://reviews.llvm.org/D124309
2022-04-25 12:25:33 -07:00
Nathan Lanza 950c95cfdd [coroutines] Get an IntegerType from the value instead of defaulting to 64 bit
This AliasPtr is always being created from an Int64 even for targets
where 32 bit is the proper type, e.g. “thumbv7-none-linux-android16”.
This causes the assert in the `get` func to fail as we're getting a
32-bit value from the APInt.

Fix this by simply always just getting the type from the value instead.

Reviewed By: ChuanqiXu

Differential Revision: https://reviews.llvm.org/D123272
2022-04-25 11:10:46 -07:00
Fangrui Song 39e23bb059 [LegacyPM] Remove HWAsanSanitizerLegacyPass
Using the legacy PM for the optimization pipeline was deprecated in 13.0.0.
Following recent changes to remove non-core features of the legacy
PM/optimization pipeline, remove HWAddressSanitizerLegacyPass.

MemorySanitizerLegacyPass was removed in D123894.
AddressSanitizerLegacyPass was removed in D124216.

Reviewed By: #sanitizers, vitalybuka

Differential Revision: https://reviews.llvm.org/D124337
2022-04-25 10:21:26 -07:00
David Green 9727c77d58 [NFC] Rename Instrinsic to Intrinsic 2022-04-25 18:13:23 +01:00
Florian Hahn 6a6cc5542b
[SimpleLoopUnswitch] Enable freezing of conditions by default.
This fixes a series of mis-compiles by SimpleLoopUnswitch.

My measurements showed no performance regression with -O3 on AArch64
in SPEC2006, SPEC2017 and a set of internal benchmarks.

Fixes #50387, #50430

Depends on D124251.

Reviewed By: nikic, aqjune

Differential Revision: https://reviews.llvm.org/D124252
2022-04-25 14:26:41 +01:00
Nikita Popov e8945110d2 [InstCombine] Remove redundant unsigned underflow fold (NFCI)
This is now handled as a combination of two other folds:
(A+B) <= A & (A+B) != 0  -->  (A+B)-1 < A
(A+B)-1 < A  -->  -B < A
2022-04-25 14:22:43 +02:00
Nikita Popov ee50925894 [InstCombine] Fold (X != 0) & (Y u>= X)
This adds the De Morgan conjugated fold for the existing
(X == 0) | (Y u< X) fold.

Proof: https://alive2.llvm.org/ce/z/3Me3JQ
2022-04-25 13:16:47 +02:00
Nikita Popov 2bec8d6d59 [InstCombine] Fold X + Y + C u< X
This is a variation on the X + Y u< X fold with an extra constant.
Proof: https://alive2.llvm.org/ce/z/VNb8pY
2022-04-25 12:53:39 +02:00
Max Kazantsev 606a000d1a [LoopInstSimplify] Ignore users in unreachable blocks. PR55072
Logic in this pass assumes that all users of loop instructions are
either in the same loop or are LCSSA Phis. In fact, there can also
be users in unreachable blocks that currently break assertions.
Such users don't need to go to the next round of simplifications.

Reviewed By: fhahn
Differential Revision: https://reviews.llvm.org/D124368
2022-04-25 17:35:28 +07:00
Chenbing Zheng 5805cfb901 [InstCombine] Complete folding of fneg-of-fabs
This patch adds a function foldSelectWithFCmpToFabs and does more combining for
fneg-of-fabs.
With 'nsz':
fold (X <  +/-0.0) ? X : -X or (X <= +/-0.0) ? X : -X to -fabs(x)
fold (X >  +/-0.0) ? X : -X or (X >= +/-0.0) ? X : -X to -fabs(x)
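
A minimal IR sketch of the first form (names are illustrative):

```
define float @sketch(float %x) {
  ; with 'nsz': (X < +/-0.0) ? X : -X  -->  -fabs(X)
  %cmp = fcmp olt float %x, 0.000000e+00
  %neg = fneg float %x
  %sel = select nsz i1 %cmp, float %x, float %neg
  ret float %sel
}
; ->  %fabs = call float @llvm.fabs.f32(float %x)
;     %sel  = fneg float %fabs
```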

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D123830
2022-04-25 09:53:36 +08:00
Valery N Dmitriev edf7bed87b [SLP][NFC] Outline lookahead heuristics into a separate helper class.
Minor refactoring to reduce size of functional change D124309:
  look-ahead scoring routines pulled out of VLOperands and formed
  new LookAheadHeuristics helper class.

Reviewed By: Alexey Bataev (ABataev), Vasileios Porpodas (vporpo)
Differential Revision: https://reviews.llvm.org/D124313
2022-04-22 18:59:08 -07:00
Paul Kirth 4683a2effa [llvm][misexpect] Avoid division by 0 when using sample profiling
MisExpect diagnostics should not prevent compilation from succeeding, and the
assertion is insufficient to prevent division by zero in release builds.

This patch addresses that by replacing the assert with an early return.

Additionally, it disables MisExpect diagnostics when using sample profiling,
since this is the only known case where this error has manifested.

Reviewed By: tejohnson

Differential Revision: https://reviews.llvm.org/D124302
2022-04-22 22:48:00 +00:00
Florian Hahn b341c44010
[SimpleLoopUnswitch] Check if freeze is needed for partial unswitching.
We only need to insert a Freeze instruction if any of the conditions
may be poison. Similar checks are already done in the other places
SimpleLoopUnswitch creates Freeze instruction.

Reviewed By: aeubanks, efriedma

Differential Revision: https://reviews.llvm.org/D124259
2022-04-22 21:24:55 +01:00
Simon Pilgrim ffe13960b5 [InstCombine] Fold (A & 2^C1) + A => A & (2^C1 - 1) iff bit C1 in A is a sign bit (PR21929)
Alive2: https://alive2.llvm.org/ce/z/Ygq26C

This is the final missing fold to handle the modulo2 simplification: https://github.com/llvm/llvm-project/issues/22303
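
A minimal IR sketch for i8, where bit 7 is the sign bit (names are made up):

```
define i8 @src(i8 %a) {
  %m = and i8 %a, -128     ; A & 0x80
  %r = add i8 %m, %a
  ret i8 %r
}

define i8 @tgt(i8 %a) {
  %r = and i8 %a, 127      ; A & 0x7f
  ret i8 %r
}
```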

Fixes #22303

Differential Revision: https://reviews.llvm.org/D123374
2022-04-22 16:59:02 +01:00
Nikita Popov 369ef9bf60 [InstCombine] Extract code for or of icmp eq zero and icmp fold (NFC)
To make it easier to extend this to the congruent and case.
2022-04-22 16:48:59 +02:00
Nikita Popov ba46ae7bd8 [InstCombine] Merge foldAndOfICmps() and foldOrOfICmps() (NFCI)
Folds are supposed to always be added in conjugated pairs for and
and or. Merge the two functions to make folds for which this is
currently not the case more obvious.
2022-04-22 12:48:03 +02:00
Nikita Popov 3e1d2c352c [InstCombine] Fix or of commuted foldable predicates
1d90e53044 switched this code to store
the predicates and operands in variables, but retained a
swapOperands() call here. Thus the commuted cases were no longer
folded. Additionally, as the change was not reported, the next
InstCombine iteration would not pick it up either.
2022-04-22 12:31:26 +02:00
Nikita Popov 993b166deb Reapply [SimplifyCFG] Handle branch on same condition in pred more directly
Reapplying without changes, after a fix to a dependent patch.

-----

Rather than creating a PHI node and then using the PHI threading
code, directly handle this case in
FoldCondBranchOnValueKnownInPredecessor().

This change is supposed to be NFC-ish, but may cause changes due
to different transform order.
2022-04-22 10:27:38 +02:00
Nikita Popov df18e37541 Reapply [SimplifyCFG] Make FoldCondBranchOnPHI more amenable to extension (NFCI)
Reapply with SmallMapVector instead of SmallDenseMap, which should
address the non-determinism issue.

-----

This general threading transform can be performed whenever we know
a constant value for the condition in a predecessor, which would
currently just be the case of a phi node with constant arguments.
2022-04-22 09:42:11 +02:00
Fangrui Song 16a4d3a85c [LegacyPM] Remove AddressSanitizerLegacyPass
Using the legacy PM for the optimization pipeline was deprecated in 13.0.0.
Following recent changes to remove non-core features of the legacy
PM/optimization pipeline, remove AddressSanitizerLegacyPass,
ModuleAddressSanitizerLegacyPass, and ASanGlobalsMetadataWrapperPass.

MemorySanitizerLegacyPass was removed in D123894.

Reviewed By: #sanitizers, vitalybuka

Differential Revision: https://reviews.llvm.org/D124216
2022-04-21 19:25:57 -07:00
Nico Weber 0e0759f441 Revert "[LegacyPM] Remove AddressSanitizerLegacyPass"
This reverts commit e68c589e53.
Breaks check-llvm, see comments on https://reviews.llvm.org/D124216
2022-04-21 22:14:36 -04:00
Fangrui Song e68c589e53 [LegacyPM] Remove AddressSanitizerLegacyPass
Using the legacy PM for the optimization pipeline was deprecated in 13.0.0.
Following recent changes to remove non-core features of the legacy
PM/optimization pipeline, remove AddressSanitizerLegacyPass,
ModuleAddressSanitizerLegacyPass, and ASanGlobalsMetadataWrapperPass.

MemorySanitizerLegacyPass was removed in D123894.

Reviewed By: #sanitizers, vitalybuka

Differential Revision: https://reviews.llvm.org/D124216
2022-04-21 18:18:39 -07:00
Vitaly Buka 9be90748f1 Revert "[asan] Emit .size directive for global object size before redzone"
Revert "[docs] Fix underline"

Breaks a lot of asan tests in google.

This reverts commit 365c3e85bc.
This reverts commit 78a784bea4.
2022-04-21 16:21:17 -07:00
Alex Brachet 78a784bea4 [asan] Emit .size directive for global object size before redzone
This emits an `st_size` that represents the actual useable size of an object before the redzone is added.

Reviewed By: vitalybuka, MaskRay, hctim

Differential Revision: https://reviews.llvm.org/D123010
2022-04-21 20:46:38 +00:00
Sanjay Patel 664ae7bbcc [InstCombine] C0 <<{nsw, nuw} (X - C1) --> (C0 >> C1) << X (2nd try)
The first attempt at this missed a check to make sure the offset
constant was in range and caused many bot failures.

That was missed in the Alive2 proof because an overshift creates
poison rather than triggering the assert from APInt. Here's an alternate
attempt at a proof using count-trailing-zeros:
https://alive2.llvm.org/ce/z/pnXQYR

Original commit message:

This is similar to an existing pre-shift-of-constant fold:
8a9c70fc01
...but in this case, we need no-wrap on the shl and a negative
offset:
https://alive2.llvm.org/ce/z/_RVz99
2022-04-21 16:18:46 -04:00
Fangrui Song 35e350d5ba Revert "[SimplifyCFG] Handle branch on same condition in pred more directly" and "[SimplifyCFG] Make FoldCondBranchOnPHI more amenable to extension"
This reverts commit 3df86e799e.
This reverts commit 8988254667.

`[SimplifyCFG] Handle branch on same condition in pred more directly`
caused non-determinism when compiling opt with a bootstrapped clang.
I have to revert the dependent commit as well.
2022-04-21 12:58:58 -07:00
Fangrui Song 409eb5dc3e [LegacyPM] Remove GCOVProfilerLegacyPass
Using the legacy PM for the optimization pipeline was deprecated in 13.0.0.
Following recent changes to remove non-core features of the legacy
PM/optimization pipeline, remove GCOVProfilerLegacyPass.

I have checked many LLVM users and only llvm-hs[1] uses the legacy gcov pass.

[1]: https://github.com/llvm-hs/llvm-hs/issues/392

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D123829
2022-04-21 10:59:30 -07:00
Fangrui Song d133538b8b [LegacyPM] Remove MemorySanitizerLegacyPass
Using the legacy PM for the optimization pipeline was deprecated in 13.0.0.
Following recent changes to remove non-core features of the legacy
PM/optimization pipeline, remove MemorySanitizerLegacyPass.

Differential Revision: https://reviews.llvm.org/D123894
2022-04-21 10:21:46 -07:00
Vasileios Porpodas 889588ee97 [SLP] Refactoring isLegalBroadcastLoad() to use `ElementCount`.
Replacing `unsigned` with `ElementCount` in the argument of `isLegalBroadcastLoad()`.
This helps reduce the diff of a future SLP patch for AArch64.
2022-04-21 10:19:00 -07:00
chenglin.bi 25aba1abb5 Revert "[InstCombine] Add one use limitation for (X * C2) << C1 --> X * (C2 << C1)"
This reverts commit b543d28df7.
2022-04-22 00:56:20 +08:00
chenglin.bi b543d28df7 [InstCombine] Add one use limitation for (X * C2) << C1 --> X * (C2 << C1)
Follow-up to D123453: add a one-use limitation for
(X * C2) << C1 --> X * (C2 << C1)
to make it consistent with
lshr (mul nuw x, MulC), ShAmtC -> mul nuw x, (MulC >> ShAmtC)

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D124183
2022-04-22 00:32:36 +08:00
Sanjay Patel 8960ba7491 Revert "[InstCombine] C0 <<{nsw, nuw} (X - C1) --> (C0 >> C1) << X"
This reverts commit 5819f4a422.
This caused bots to fail with a crash/assert during the fold,
so some constraint was missed.
2022-04-21 12:15:27 -04:00
Sanjay Patel 5819f4a422 [InstCombine] C0 <<{nsw, nuw} (X - C1) --> (C0 >> C1) << X
This is similar to an existing pre-shift-of-constant fold:
8a9c70fc01
...but in this case, we need no-wrap on the shl and a negative
offset:
https://alive2.llvm.org/ce/z/_RVz99

Fixes #54890
2022-04-21 11:38:27 -04:00
Nikita Popov 46c2b41d02 [InstCombine] Remove dead code (NFC)
This was a leftover condition without code.
2022-04-21 15:53:53 +02:00
Nikola Tesic c5600aef88 [Debugify] Limit number of processed functions for original mode
Debugify in OriginalDebugInfo mode does (DebugInfo) collect-before-pass & check-after-pass
for each instruction, which is pretty expensive. When used to analyze DebugInfo losses
in large projects (like LLVM), this raises the build time unacceptably.
This patch introduces a limit on the number of processed functions per compile unit.
By default, the limit is set to UINT_MAX (practically unlimited), and by using the introduced
option -debugify-func-limit the limit can be set to any positive integer.

Differential revision: https://reviews.llvm.org/D115714
2022-04-21 13:58:17 +02:00
Nikita Popov 3df86e799e [SimplifyCFG] Handle branch on same condition in pred more directly
Rather than creating a PHI node and then using the PHI threading
code, directly handle this case in
FoldCondBranchOnValueKnownInPredecessor().

This change is supposed to be NFC-ish, but may cause changes due
to different transform order.
2022-04-21 11:22:02 +02:00
Nikita Popov 8988254667 [SimplifyCFG] Make FoldCondBranchOnPHI more amenable to extension
This general threading transform can be performed whenever we know
a constant value for the condition in a predecessor, which would
currently just be the case of a phi node with constant arguments.
2022-04-21 10:49:49 +02:00
Chuanqi Xu 7eaa84eac3 [NFC] Code cleanups for coroutines after we removed legacy passes 2022-04-21 15:32:46 +08:00
Chuanqi Xu 483efc9ad0 [Pipelines] Remove Legacy Passes in Coroutines
The legacy passes are deprecated now and will be removed in the near
future. This patch removes the legacy passes in coroutines.

Reviewed By: aeubanks

Differential Revision: https://reviews.llvm.org/D123918
2022-04-21 10:59:11 +08:00
Florian Hahn bea69b232f
[VPlan] Initial modeling of middle block in VPlan.
This patch extends the scope of VPlan to also include the exit (aka
middle) block.

For now, the exit block remains empty, but handling of exit values will
subsequently be moved to VPlan, by adding recipes to model exit values
in the exit block.

As a first step, this will allow fixing #51366.

Reviewed By: Ayal

Differential Revision: https://reviews.llvm.org/D123457
2022-04-20 19:34:41 +01:00
Craig Topper e3f6c2d288 [InstCombine] Don't look through bitcast from vector in collectInsertionElements.
We're making a recursive call here and everything in the function
assumes we're looking at scalars. This would be violated if we
looked through a bitcast from vectors.

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D124015
2022-04-20 09:15:32 -07:00
chenglin.bi 1fae4b492d [InstCombine] Fold mul nuw+lshr to a single multiplication when the latter is a factor
If c is divisible by (1 << ShAmtC), we can fold this pattern:
lshr (mul nuw x, c), ShAmtC -> mul nuw x, (c >> ShAmtC)

https://alive2.llvm.org/ce/z/ox4wAt

Fix https://github.com/llvm/llvm-project/issues/54824
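
A standalone C++ check of the identity for one concrete choice of constants (24 is divisible by 1 << 3); this is an illustration only, not code from the patch:

```
#include <cassert>
#include <cstdint>

int main() {
  const uint32_t C = 24, ShAmtC = 3; // (1 << 3) divides 24
  // Sample values are small enough that `mul nuw` holds (no unsigned wrap).
  for (uint32_t X = 0; X <= 1000000; ++X)
    assert(((X * C) >> ShAmtC) == X * (C >> ShAmtC));
  return 0;
}
```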

Reviewed By: spatel, lebedev.ri, craig.topper

Differential Revision: https://reviews.llvm.org/D123453
2022-04-21 00:13:36 +08:00
Sanjay Patel bf09a925f2 [InstCombine] remove likely redundant ValueTracking-based folds for shifts
This is not expected to have a functional difference as discussed in the
post-commit comments for 8a9c70fc01. All of the motivating tests for
the older fold still optimize as expected because other code can infer
the 'nuw'.
2022-04-20 11:28:31 -04:00
Nikita Popov d727505e40 [SimplifyCFG] Remove one-use limitation in FoldCondBranchOnPHI()
BlockIsSimpleEnoughToThreadThrough() already checks that the phi
(and all other instructions) are not used outside the block, so
this one-use check is not necessary for legality. I also don't
see any reason why it would be necessary for profitability (in
fact, those extra uses will be replaced with constants, which
should be generally profitable).
2022-04-20 15:56:20 +02:00
Chuanqi Xu 5b6742a6bd [NFC] Return correct PreservedAnalysis for CoroEarly
This is a fix for a previous typo. It makes no sense to return
PreservedAnalyses::all() if anything is changed. This change also simplifies
the code further.
2022-04-20 16:47:10 +08:00
Fangrui Song 14d9390721 Revert D123198 "[BuildLibCalls] Introduce getOrInsertLibFunc() for use when building libcalls."
test/Transforms/InstCombine/pr39177.ll failed in a -DLLVM_USE_SANITIZER=Undefined build.
```
lib/Transforms/Utils/BuildLibCalls.cpp:1217:17: runtime error: reference binding to null pointer of type 'llvm::Function'
```
`Function &F = *M->getFunction(Name);`

This reverts commit 0f8c626723.
2022-04-19 22:26:10 -07:00
Andrew Browne 204c12eef9 [DFSan] Print an error before calling null extern_weak functions, in case dfsan instrumentation optimized out a null check.
Reviewed By: vitalybuka

Differential Revision: https://reviews.llvm.org/D124051
2022-04-19 17:01:41 -07:00
Ilia Diachkov 6c69427e88 [SPIR-V](3/6) Add MC layer, object file support, and InstPrinter
The patch adds SPIRV-specific MC layer implementation, SPIRV object
file support and SPIRVInstPrinter.

Differential Revision: https://reviews.llvm.org/D116462

Authors: Aleksandr Bezzubikov, Lewis Crawford, Ilia Diachkov,
Michal Paszkowski, Andrey Tretyakov, Konrad Trifunovic

Co-authored-by: Aleksandr Bezzubikov <zuban32s@gmail.com>
Co-authored-by: Ilia Diachkov <iliya.diyachkov@intel.com>
Co-authored-by: Michal Paszkowski <michal.paszkowski@outlook.com>
Co-authored-by: Andrey Tretyakov <andrey1.tretyakov@intel.com>
Co-authored-by: Konrad Trifunovic <konrad.trifunovic@intel.com>
2022-04-20 01:10:25 +02:00
Paul Kirth bac6cd5bf8 [misexpect] Re-implement MisExpect Diagnostics
Reimplements MisExpect diagnostics from D66324 to reconstruct its
original checking methodology only using MD_prof branch_weights
metadata.

New checks rely on 2 invariants:

1) For frontend instrumentation, MD_prof branch_weights will always be
   populated before llvm.expect intrinsics are lowered.

2) For IR and sample profiling, llvm.expect intrinsics will always be
   lowered before branch_weights are populated from the IR profiles.

These invariants allow the checking to assume how the existing branch
weights are populated depending on the profiling method used, and emit
the correct diagnostics. If these invariants are ever invalidated, the
MisExpect related checks would need to be updated, potentially by
re-introducing MD_misexpect metadata, and ensuring it always will be
transformed the same way as branch_weights in other optimization passes.

Frontend based profiling is now enabled without using LLVM Args, by
introducing a new CodeGen option, and checking if the -Wmisexpect flag
has been passed on the command line.

Reviewed By: tejohnson

Differential Revision: https://reviews.llvm.org/D115907
2022-04-19 21:23:48 +00:00
Jonas Paulsson 0f8c626723 [BuildLibCalls] Introduce getOrInsertLibFunc() for use when building libcalls.
A new set of overloaded functions named getOrInsertLibFunc() are now supposed
to be used instead of getOrInsertFunction() when building a libcall from
within an LLVM optimizer. The idea is that this new function also makes
sure that any mandatory argument attributes are added to the function
prototype (after calling getOrInsertFunction()).

inferLibFuncAttributes() is renamed to inferNonMandatoryLibFuncAttrs() as it
only adds attributes that are not necessary for correctness but merely
help with later optimizations.

Generally, the front end is responsible for building a correct function
prototype with the needed argument attributes. If the middle end however is
the one creating the call, e.g. when replacing one libcall with another, it
then must take this responsibility.

This continues the work of properly handling argument extension if required
by the target ABI when building a lib call. getOrInsertLibFunc() now does
this for all libcalls currently built by any LLVM optimizer. It is expected
that when in the future a new optimization builds a new libcall with an
integer argument it is to be added to getOrInsertLibFunc() with the proper
handling. Note that not all targets have it in their ABI to sign/zero extend
integer arguments to the full register width, but this will be done
selectively as determined by getExtAttrForI32Param().

Review: Eli Friedman, Nikita Popov, Dávid Bolvanský

Differential Revision: https://reviews.llvm.org/D123198
2022-04-19 21:22:07 +02:00
Sanjay Patel 8a9c70fc01 [InstCombine] C0 shift (X add nuw C) --> (C0 shift C) shift X
With 'nuw' we can convert the increment of the shift amount
into a pre-shift (constant fold) of the shifted constant:
https://alive2.llvm.org/ce/z/FkTyR2

Fixes issue #41976
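
For the shl case, a standalone C++ check of the identity with arbitrary constants (C0 = 3, C = 2); shift amounts are kept in range, since out-of-range shifts are poison in IR and undefined behaviour in C++ (illustration only, not code from the patch):

```
#include <cassert>
#include <cstdint>

int main() {
  const uint32_t C0 = 3, C = 2; // arbitrary example constants
  // X + C stays below the bit width, matching the `add nuw` / in-range
  // shift-amount requirements of the fold.
  for (uint32_t X = 0; X + C < 32; ++X)
    assert((C0 << (X + C)) == ((C0 << C) << X));
  return 0;
}
```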
2022-04-19 15:21:34 -04:00
Vasileios Porpodas 8d4b5e0833 [NFC][SLP] Improved description of getShallowScore() and getScoreAtLevelRec()
Differential Revision: https://reviews.llvm.org/D124027
2022-04-19 12:15:36 -07:00
Florian Hahn 4026b718b8
[VPlan] Remove unused SCEV forward declaration (NFC). 2022-04-19 17:16:17 +02:00
Alexey Bataev 883571928c Revert "[SLP]Improve reductions analysis and emission, part 1."
This reverts commit 0e1f4d4d3c to fix
a crash reported in PR54976
2022-04-19 06:17:03 -07:00
Florian Hahn a65f2730d2
[VPlan] Expand induction step in VPlan pre-header.
This patch moves SCEV expansion of steps used by
VPWidenIntOrFpInductionRecipes to the pre-header using
VPExpandSCEVRecipe. This ensures that those steps are expanded while the
CFG is in a valid state. Previously, SCEV expansion may happen during
vector body code-generation, during which the CFG may be invalid,
causing issues with SCEV expansion.

Depends on D122095.

Reviewed By: Ayal

Differential Revision: https://reviews.llvm.org/D122096
2022-04-19 13:06:39 +02:00
Chuanqi Xu f9bee35689 [Pipelines] Hoist CoroEarly as a module pass
This change could reduce the number of times we call `declaresCoroEarlyIntrinsics`.
And it is helpful for future changes.

Reviewed By: aeubanks

Differential Revision: https://reviews.llvm.org/D123925
2022-04-19 11:04:24 +08:00
Michael Kruse 2d92ee97f1 Reapply "[OpenMP] Refactor OMPScheduleType enum."
This reverts commit af0285122f.

The test "libomp::loop_dispatch.c" on builder
openmp-gcc-x86_64-linux-debian fails from time to time.
See #54969. This patch is unrelated.
2022-04-18 21:56:47 -05:00
Vasileios Porpodas b1333f03d9 Recommit "[SLP] Support internal users of splat loads"
Code review: https://reviews.llvm.org/D121940

This reverts commit 359dbb0d3d.
2022-04-18 15:58:01 -07:00
Michael Kruse af0285122f Revert "[OpenMP] Refactor OMPScheduleType enum."
This reverts commit 9ec501da76.

It may have caused the openmp-gcc-x86_64-linux-debian buildbot to fail.
https://lab.llvm.org/buildbot/#/builders/4/builds/20377
2022-04-18 14:38:31 -05:00
Sanjay Patel 3a27b51b27 [InstCombine] reduce code for freeze of undef
The description was ambiguous about the behavior
when both select arms are constant or both arms
are not constant. I don't think there's any
evidence to support either way, but this matches
the code with a more specified description.

We can extend this to deal with vector constants
with undef/poison elements. Currently, those don't
get folded anywhere.
2022-04-18 15:14:02 -04:00
Vasileios Porpodas 359dbb0d3d Revert "[SLP] Support internal users of splat loads"
This reverts commit f8e1337115.
2022-04-18 12:12:34 -07:00
Michael Kruse 9ec501da76 [OpenMP] Refactor OMPScheduleType enum.
The OMPScheduleType enum stores the constants from libomp's internal sched_type in kmp.h, which are used by several kmp API functions. The enum values have an internal structure, namely each scheduling algorithm exists in four variants: unordered, ordered, nomerge unordered, and nomerge ordered.

This patch (basically a followup to D114940) splits the "ordered" and "nomerge" bits into separate flags, as was already done for the "monotonic" and "nonmonotonic" bits, so we can apply bit-flag operations on them. It also now contains all possible combinations according to kmp's sched_type. Derivation of the OMPScheduleType enum from clause parameters has been moved from MLIR's OpenMPToLLVMIRTranslation.cpp to OpenMPIRBuilder to make it available for clang as well. Since the primary purpose of the flag is the binary interface to libomp, it has been made more private to LLVMFrontend. The primary interface for generating a worksharing loop using OpenMPIRBuilder becomes `applyWorkshareLoop`, which derives the OMPScheduleType automatically and calls the appropriate emitter function.

While this is mostly a NFC refactor, it still applies the following functional changes:
 * The logic from OpenMPToLLVMIRTranslation to derive the OMPScheduleType also applies to clang. Most notably, it now applies the nonmonotonic flag for non-static schedules by default.
 * In OpenMPToLLVMIRTranslation, the nonmonotonic default flag was previously not applied if the simd modifier was used. I assume this was a bug, since the effect was due to `loop.schedule_modifier()` returning `mlir::omp::ScheduleModifier::none` instead of `llvm::Optional::None`.
 * In OpenMPToLLVMIRTranslation, the nonmonotonic default flag was set even if ordered was specified, in breach of what the preceding comment citing the OpenMP specification says. I assume this was an oversight.

The ordered flag with parameter was not considered in this patch. Changes will need to be made (e.g. adding/modifying function parameters) when support for it is added. The lengthy names of the enum values can be discussed; for the moment this avoids reusing previously existing enum value names such as `StaticChunked` to avoid confusion.
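
As a rough, purely illustrative C++ sketch of the bit-flag idea (the names and bit positions below are invented for illustration and do not match the real OMPScheduleType or kmp.h constants): the base scheduling algorithm and the ordered/nomerge/monotonicity modifiers live in separate bit ranges, so they can be combined and queried with ordinary bit operations.

```
#include <cstdint>

// Hypothetical layout, for illustration only.
enum class SchedKind : uint32_t {
  BaseStaticChunked = 1,
  BaseDynamicChunked = 2,
  BaseGuidedChunked = 3,
  BaseMask = 0xFF, // low bits: the scheduling algorithm

  ModifierOrdered = 1u << 8, // high bits: independent modifier flags
  ModifierNomerge = 1u << 9,
  ModifierMonotonic = 1u << 10,
  ModifierNonmonotonic = 1u << 11,
};

constexpr SchedKind operator|(SchedKind A, SchedKind B) {
  return static_cast<SchedKind>(static_cast<uint32_t>(A) |
                                static_cast<uint32_t>(B));
}
constexpr SchedKind operator&(SchedKind A, SchedKind B) {
  return static_cast<SchedKind>(static_cast<uint32_t>(A) &
                                static_cast<uint32_t>(B));
}

// Deriving a full schedule type from clause-like inputs, similar in spirit
// to what applyWorkshareLoop does with the real enum.
constexpr SchedKind makeSchedule(bool Ordered, bool Dynamic) {
  SchedKind S = Dynamic ? SchedKind::BaseDynamicChunked
                        : SchedKind::BaseStaticChunked;
  if (Ordered)
    S = S | SchedKind::ModifierOrdered;
  else if (Dynamic)
    S = S | SchedKind::ModifierNonmonotonic; // nonmonotonic by default
  return S;
}

static_assert((makeSchedule(false, true) & SchedKind::BaseMask) ==
                  SchedKind::BaseDynamicChunked,
              "base algorithm is recoverable from the combined value");
```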

Reviewed By: peixin

Differential Revision: https://reviews.llvm.org/D123403
2022-04-18 14:03:17 -05:00
Vasileios Porpodas f8e1337115 [SLP] Support internal users of splat loads
Until now we would only accept a broadcast load pattern if it is only used
by a single vector of instructions.

This patch relaxes this, and allows for the broadcast to have more than one
user vector, as long as all of its uses are internal to the SLP graph and
vectorized.

Differential Revision: https://reviews.llvm.org/D121940
2022-04-18 11:59:44 -07:00
Arthur Eubanks 2e6ac54cf4 [LegacyPM] Remove ThinLTO/LTO pipelines
Using the legacy PM for the optimization pipeline was deprecated in 13.0.0.
Following recent changes to remove non-core features of the legacy
PM/optimization pipeline, remove the (Thin)LTO pipelines.

Reviewed By: MaskRay

Differential Revision: https://reviews.llvm.org/D123882
2022-04-18 10:09:41 -07:00
Zarko Todorovski ce87133120 [llvm][IPO] Inclusive language: Rename mergefunc-sanity to mergefunc-verify and remove other instances of sanity in MergeFunctions.cpp
This patch renames the mergefunc-sanity to mergefunc-verify and renames the related functions to use more
inclusive language

Reviewed By: cebowleratibm

Differential Revision: https://reviews.llvm.org/D114374
2022-04-18 11:50:08 -04:00
Aaron Ballman 86cdb2929c Silence a "not all control paths return a value" warning; NFC 2022-04-18 08:54:08 -04:00
Johannes Doerfert e87f10a771 [Attributor] CGSCC pass should not recompute results outside the SCC (reapply)
When we run the CGSCC pass we should only invest time on the SCC. We can
initialize AAs with information from the module slice but we should not
update those AAs. We make an exception for the call sites of the SCC as
they are helpful in providing information for the SCC.

Minor modifications to pointer privatization allow us to perform it even
in the CGSCC pass, similar to ArgumentPromotion.
2022-04-17 12:48:49 -05:00
Andrew Litteken d7c56a076e [IROutliner] Ensure that phi values that are passed in as arguments are remapped as arguments
Issue: https://github.com/llvm/llvm-project/issues/54430

For incoming values of phi nodes added to an outlined function to accommodate different exit paths in the function, when a value is a constant that is passed into the outlined function as an argument, we find the corresponding value in the first extracted function used to fill the overall outlined function. When this value is an argument, the corresponding value used will be the old value, prior to outlining. This patch maintains a mapping from these values to arguments, and uses this mapping to update the added phi node accordingly.

Reviewers: paquette

Recommit of d6eb480afb

Differential Revision: https://reviews.llvm.org/D122206
2022-04-16 15:47:52 -05:00
eop Chen 38ec33d6b9 [LSR] Update outdated comment 2022-04-16 12:11:15 -07:00
Joseph Huber 984a0dc386 [OpenMP] Use new offloading binary when embedding offloading images
The previous patch introduced the offloading binary format so we can
store some metadata along with the binary image. This patch introduces
using this inside the linker wrapper and Clang instead of the previous
method that embedded the metadata in the section name.

Differential Revision: https://reviews.llvm.org/D122683
2022-04-15 20:35:26 -04:00
Johannes Doerfert 3be3b40188 [Attributor][NFCI] Introduce AttributorConfig to bundle all options
Instead of lengthy constructors we can now set the members of a
read-only struct before the Attributor is created. Should make it
clearer what is configurable and also help introducing new options in
the future. This actually added IsModulePass and avoids deduction
through the Function set size. No functional change was intended.
2022-04-15 18:17:19 -05:00
Florian Hahn 73f5d7d0d6
[VPlan] Handle equal address and store ops in onlyFirstLaneDemanded.
With opaque pointers, the stored value and address can be the same.

Previously the code in VPWidenMemoryInstructionRecipe::onlyFirstLaneDemanded
incorrectly considered stores with matching store and pointer operands as
only demanding the first lane, causing a crash.
2022-04-15 22:53:33 +02:00
Johannes Doerfert 04f3a224bc [Attributor][NFC] Introduce a flag to distinguish the scope of a query 2022-04-15 14:56:10 -05:00
Johannes Doerfert bd72acf4d8 [Attributor][NFC] Code cleanup to minimize follow up changes 2022-04-15 14:56:09 -05:00
Johannes Doerfert 2d8e7834b0 [Attributor][NFC] Rename AAPotentialValues to AAPotentialConstantValues 2022-04-15 14:56:09 -05:00
Johannes Doerfert 1fb415fee9 [AMDGPU][FIX] Proper load-store-vectorizer result with opaque pointers
The original code relied on the fact that we needed a bitcast
instruction (for non constant base objects). With opaque pointers there
might not be a bitcast. Always check if reordering is required instead.

Fixes: https://github.com/llvm/llvm-project/issues/54896

Differential Revision: https://reviews.llvm.org/D123694
2022-04-15 13:42:46 -05:00
Fangrui Song 04e094a336 [PGO] Remove legacy PM passes
Legacy PM for optimization pipeline was deprecated in 13.0.0 and Clang dropped
legacy PM support in D123609. This change removes legacy PM passes for PGO so
that downstream projects won't be able to use it. It seems appropriate to start
removing such "add-on" features like instrumentations, before we remove more
stuff after 15.x is branched.

I have checked many LLVM users and only ldc[1] uses the legacy PGO pass.

[1]: https://github.com/ldc-developers/ldc/issues/3961

Reviewed By: davidxl

Differential Revision: https://reviews.llvm.org/D123834
2022-04-15 10:26:43 -07:00
Andrew Browne cddcf2170a [DFSan] Avoid replacing uses of functions in comparisions.
This can cause crashes by accidentally optimizing out checks for
extern_weak_func != nullptr, when replaced with a known-not-null wrapper.

This solution isn't perfect (only avoids replacement on specific patterns)
but should address common cases.

Internal reference: b/185245029

Reviewed By: vitalybuka

Differential Revision: https://reviews.llvm.org/D123701
2022-04-14 14:14:52 -07:00
Sanjay Patel 2c2568f39e [InstCombine] canonicalize select with signbit test
This is part of solving issue #54750 - in that example
we have both forms of the compare and do not recognize
the equivalence.
2022-04-14 14:28:47 -04:00
Andrew Litteken 6f8eba06c2 Revert "[IROutliner] Ensure that phi values that are passed in as arguments are remapped as arguments"
Failing test due to typo

This reverts commit d6eb480afb.
2022-04-14 12:23:33 -05:00
Andrew Litteken d6eb480afb [IROutliner] Ensure that phi values that are passed in as arguments are remapped as arguments
Issue: https://github.com/llvm/llvm-project/issues/54430

For incoming values of phi nodes added to an outlined function to accommodate different exit paths in the function, when a value is a constant that is passed into the outlined function as an argument, we find the corresponding value in the first extracted function used to fill the overall outlined function. When this value is an argument, the corresponding value used will be the old value, prior to outlining. This patch maintains a mapping from these values to arguments, and uses this mapping to update the added phi node accordingly.

Reviewers: paquette

Differential Revision: https://reviews.llvm.org/D122206
2022-04-14 12:16:23 -05:00
Andrew Litteken a919d3d888 [IROutliner] Ensure that incoming blocks of PHINodes are included in the unique numbering generation for phi nodes for each exit path
Issue: https://github.com/llvm/llvm-project/issues/54431

PHINodes that need to be generated to accommodate a PHINode outside the region due to different output paths need to have their own numbering to determine the number of output schemes required to properly handle all the outlined regions. This numbering was previously only determined by the order and values of the incoming values, as well as the parent block of the PHINode. This adds the incoming blocks to the calculation of a hash value for these PHINodes as well, and the supporting infrastructure to give each block in a region a corresponding canonical numbering.

Reviewer: paquette

Differential Revision: https://reviews.llvm.org/D122207
2022-04-14 12:13:17 -05:00
chenglin.bi 00871e2f4f [SimplifyCFG] Try to fold switch with single result value and power-of-2 cases to mask+select
When a switch with 2^n cases goes to one result, check if the 2^n cases can be covered by an n-bit mask.
If yes, we can use "and condition, ~mask" to simplify the switch

case 0 2 4 6 -> and condition, -7
https://alive2.llvm.org/ce/z/jjH_0N

case 0 2 8 10 -> and condition, -11
https://alive2.llvm.org/ce/z/K7E-2V

case 2 4 8 12 -> and (sub condition, 2), -11
https://alive2.llvm.org/ce/z/CrxbYg

Fix one case of https://github.com/llvm/llvm-project/issues/39957
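
A standalone C++ illustration (not the pass code) of the first example above, `case 0 2 4 6 -> and condition, -7`, where -7 is ~6 in two's complement:

```
#include <cassert>
#include <cstdint>

// Branchy form: a switch where all 2^n cases yield the same result.
static int orig(uint32_t X) {
  switch (X) {
  case 0: case 2: case 4: case 6:
    return 42;
  default:
    return 7;
  }
}

// Folded form: 0, 2, 4 and 6 are exactly the values whose bits outside
// bit 1 and bit 2 are clear, i.e. (X & ~6) == 0.
static int folded(uint32_t X) {
  return (X & ~6u) == 0 ? 42 : 7;
}

int main() {
  for (uint32_t X = 0; X < 1024; ++X)
    assert(orig(X) == folded(X));
  return 0;
}
```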

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D122485
2022-04-15 00:10:00 +08:00
Florian Hahn 2c14cdf831
[VPlan] Turn external defs in Value -> VPValue mapping.
This addresses an existing TODO by keeping a mapping of external IR
Value * definitions wrapped in VPValues for use in a VPlan.

Reviewed By: Ayal

Differential Revision: https://reviews.llvm.org/D123700
2022-04-14 12:03:09 +02:00
Ruiling Song 1e01f95057 LowerSwitch: Avoid inserting NewDefault block
The NewDefault was used to simplify the updating of PHI nodes, but it
causes some inefficiency for targets that will run the structurizer later. For
example, for a simple two-case switch, the extra NewDefault causes an
unstructured CFG like:

        O
       / \
      O   O
     / \ / \
    C1  ND C2
     \  |  /
      \ | /
        D

The change is to avoid the ND (NewDefault) block, that is, we will get a
structured CFG for the above example like:

        O
       / \
      /   \
     O     O
    / \   / \
   C1  \ /  C2
    \-> D <-/

The IR change introduced by this patch should be trivial to other targets,
so I am doing this unconditionally.

Fall-through among the cases will also cause an unstructured CFG, but it needs
more work and will be addressed in a separate change.

Reviewed by: arsenm

Differential Revision: https://reviews.llvm.org/D123607
2022-04-14 13:30:56 +08:00
Sanjay Patel 0ef46dc0f9 [SimplifyCFG] improve readability in switch-to-select; NFC 2022-04-13 17:14:45 -04:00
serge-sans-paille fa5a4e1b95 [iwyu] Handle regressions in libLLVM header include
Running iwyu-diff on the LLVM codebase since a96638e50e detected a few
regressions; this patch fixes them.
2022-04-13 20:53:19 +02:00
Florian Hahn ad95255b92
Revert "[LICM] Only create load in pre-header when promoting load."
This reverts commit 4bf3b7dc92.

This might be causing another buildbot failure.
2022-04-13 20:24:28 +02:00
serge-sans-paille 262eba01b3 Revert "[ValueTracking] Make getStringLenth aware of strdup"
This reverts commit e810d55809.

The commit did not take into account the fact that a strduped string could be
modified. Checking if such a modification happens would make the function very
costly, and without a test case in mind it's not worth the effort.
2022-04-13 19:17:28 +02:00
Anna Thomas 28f27dd264 Check users of intrinsics instead of traversing entire function. NFC
Updated LowerGuardIntrinsic and LowerWidenableCondition to check for
users of the respective intrinsic, instead of checking for guards and
widenable conditions by traversing the entire function.

This is an NFC. Should save some compile time.
2022-04-13 12:28:51 -04:00
Florian Hahn 4bf3b7dc92
Recommit "[LICM] Only create load in pre-header when promoting load."
This reverts the revert commit 1ddc719680.

This version of the patch sets the initial available value to poison,
which resolves an issue with the SSAUpdater breaking LCSSA form.
2022-04-13 17:20:39 +02:00
Nikita Popov 8c74169990 [SimplifyLibCalls] Don't mark memchr() memory as fully dereferenceable
C11 specifies memchr() as follows:

> The memchr function locates the first occurrence of c (converted
> to an unsigned char) in the initial n characters (each interpreted
> as unsigned char) of the object pointed to by s. The implementation
> shall behave as if it reads the characters sequentially and stops
> as soon as a matching character is found.

In particular, it is well-defined to specify a memchr size larger
than the underlying object, as long as the character is found before
the end of the object.

Differential Revision: https://reviews.llvm.org/D123665
2022-04-13 16:46:18 +02:00
Sanjay Patel cd0d0d633b [SimplifyCFG] make a debug option for case max when converting switch to select
This should be "NFC" as written, but it will make D122485 smaller
and give us more flexibility to experiment with optimization level
vs. compile-time.

Differential Revision: https://reviews.llvm.org/D123625
2022-04-13 06:55:13 -04:00
Vitaly Buka 79fa8be4ae [NFC][msan] Switch pointer to a reference 2022-04-12 18:45:50 -07:00
Alexey Bataev 0e1f4d4d3c [SLP]Improve reductions analysis and emission, part 1.
Currently SLP vectorizer walks through the instructions and selects
3 main classes of values: 1) reduction operations - instructions with same
reduction opcode (add, mul, min/max, etc.), which build the reduction,
2) reduced values - instructions with the same opcodes, but different
from the reduction opcode, 3) extra arguments - all other values,
instructions from the different basic block rather than the root node,
instructions with too many/too few uses.

This scheme is not very efficient. It excludes some instructions and all
non-instruction values from the reductions (constants, proficient
gathers), and too many possibly reduced values are marked as extra arguments.
Patch improves this process by introducing a bit extended analysis
stage. During this stage, we still try to select 3 classes of the
values: 1) reduction operations - same as before, 2) possibly reduced
values - all instructions from the current block/non-instructions, which
may build a vectorization tree, 3) extra arguments - instructions from
the different basic blocks. Additionally, an extra sorting of the
possibly reduced values occurs to build the scalar sequences which
highly likely will be vectorized, e.g. loads are grouped by the
distance between them, constants are grouped together, cmp instructions
are sorted by their compare types and predicates, extractelement
instructions are sorted by the vector operand, etc. Also, these groups
are reordered by their length so the longest group is the first in the
list of the possibly reduced values.

The vectorization process tries to emit the reductions for all these
groups. These reductions, remaining non-vectorized possible reduced
values and extra arguments are then combined into the final expression
just like it was before.

Differential Revision: https://reviews.llvm.org/D114171
2022-04-12 17:46:11 -07:00
Muhammad Omair Javaid 42ebfa8269 Revert "[AArch64] Set maximum VF with shouldMaximizeVectorBandwidth"
This reverts commit 64b6192e81.

This broke LLVM AArch64 buildbot clang-aarch64-sve-vls-2stage:

https://lab.llvm.org/buildbot/#/builders/176/builds/1515

llvm-tblgen crashes after applying this patch.
2022-04-13 04:53:07 +05:00
Arthur Eubanks 51561b5e80 [ArgPromo][OpaquePointer] Don't promote mismatched function types
A call with mismatched call/callee function types is considered an indirect call.

Fixes crash in https://reviews.llvm.org/D123300#3446023.
2022-04-12 15:17:45 -07:00
Nikita Popov 0adadfa68f [MSan] Ensure argument shadow initialized on memcpy
We need to explicitly query the shadow here, because it is lazily
initialized for byval arguments. Without opaque pointers this used to
mostly work out, because there would be a bitcast to `i8*` present, and
that would query, and copy in case of byval, the argument shadow.

Reviewed By: vitalybuka, eugenis

Differential Revision: https://reviews.llvm.org/D123602
2022-04-12 14:53:02 -07:00
Vitaly Buka efdc90baaa Revert "[MSan] Ensure argument shadow initialized on memcpy"
Invalid author.

This reverts commit 163a9f4552.
2022-04-12 14:53:02 -07:00
Vitaly Buka 163a9f4552 [MSan] Ensure argument shadow initialized on memcpy
We need to explicitly query the shadow here, because it is lazily
initialized for byval arguments. Without opaque pointers this used to
mostly work out, because there would be a bitcast to `i8*` present, and
that would query, and copy in case of byval, the argument shadow.

Reviewed By: vitalybuka, eugenis

Differential Revision: https://reviews.llvm.org/D123602
2022-04-12 14:49:52 -07:00
Sanjay Patel d9211be13d [SimplifyCFG] cleanup code for converting switch to select (NFC)
This renames functions for more general usage (and current capitalization style)
before a proposed logic change in D122485.

Differential Revision: https://reviews.llvm.org/D123614
2022-04-12 12:17:54 -04:00
serge-sans-paille e810d55809 [ValueTracking] Make getStringLenth aware of strdup
During strlen compile-time evaluation, make it possible to track size of
strduped strings.

Differential Revision: https://reviews.llvm.org/D123497
2022-04-12 14:47:29 +02:00
Liqin Weng fa4b4f0fcb [InstCombine] fold more constant remainder to select-of-constants remainder
Reviewed By: xbolva00, spatel, Chenbing.Zheng

Differential Revision: https://reviews.llvm.org/D123486
2022-04-12 09:40:56 +08:00
Alexander Shaposhnikov f6bb156fb1 [InstCombine] Fold icmp(X) ? f(X) : C
This diff extends foldSelectInstWithICmp to handle the case icmp(X) ? f(X) : C
when f(X) is guaranteed to be equal to C for all X in the exact range of the inverse predicate.
This addresses the issue https://github.com/llvm/llvm-project/issues/54089.
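
A standalone C++ illustration of the shape of this fold, with arbitrarily chosen constants (not taken from the patch): `(X >= 16) ? (X >> 4) : 0` can be folded to plain `X >> 4`, because for every X in the inverse range (X < 16) the shift already yields the constant 0.

```
#include <cassert>
#include <cstdint>

int main() {
  // select (icmp uge X, 16), (lshr X, 4), 0  -->  lshr X, 4
  for (uint32_t X = 0; X < 100000; ++X)
    assert(((X >= 16) ? (X >> 4) : 0u) == (X >> 4));
  return 0;
}
```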

Differential revision: https://reviews.llvm.org/D123159

Test plan: make check-all
2022-04-12 01:32:55 +00:00
Sanjay Patel 1206a18d41 [InstCombine] guard against splat-mul corner case
The test is already simplified, and I'm not sure how
to write a test to exercise the new clause. But it
protects the 2-bit pattern from miscompiling as noted
in D123453.

https://alive2.llvm.org/ce/z/QPyVfv
(If we managed to fall into the mul transform, it
would wrongly create a zero on this pattern.)
2022-04-11 15:50:13 -04:00
Whitney Tsang 80304c5f88 [LoopUnroll] Always respect user unroll pragma
IMO when the user provides an unroll pragma, the compiler should always respect it.
It is not clear to me why the loop unroll pass currently ensures that the
unrolled loop size is limited by PragmaUnrollThreshold.
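
As an illustration (this snippet is not from the patch), this is the kind of user directive affected: with an explicit unroll count, the unroller now always honours it instead of bailing out when the unrolled body would exceed PragmaUnrollThreshold.

```
// Hypothetical example loop; the pragma below is the existing clang loop pragma.
void scale(float *A, int N) {
#pragma clang loop unroll_count(8) // user explicitly requests 8x unrolling
  for (int I = 0; I < N; ++I)
    A[I] *= 2.0f;
}
```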

Reviewed By: Meinersbur

Differential Revision: https://reviews.llvm.org/D119148
2022-04-11 14:33:24 -04:00
Sanjay Patel 7783db55af [InstCombine] try to fold low-mask of ashr to lshr
With one-use, we handle this via demanded-bits.
But we need to handle extra uses to improve issue #54750.

https://alive2.llvm.org/ce/z/aDYkPv
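
A standalone C++ illustration with an arbitrary shift amount and mask (not code from the patch): when the mask keeps only bits that lshr and ashr agree on, the arithmetic shift can be replaced by a logical one. This assumes arithmetic right shift for signed values, which mainstream compilers use and C++20 guarantees.

```
#include <cassert>
#include <cstdint>

int main() {
  const uint32_t Mask = 0x0FFFFFFFu; // low 28 bits: the 4 sign-fill bits are masked away
  for (int64_t I = -100000; I <= 100000; ++I) {
    int32_t X = static_cast<int32_t>(I);
    uint32_t AShr = static_cast<uint32_t>(X >> 4) & Mask;   // ashr + low mask
    uint32_t LShr = (static_cast<uint32_t>(X) >> 4) & Mask; // lshr + low mask
    assert(AShr == LShr);
  }
  return 0;
}
```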
2022-04-11 11:56:40 -04:00
Florian Hahn 1ddc719680
Revert "[LICM] Only create load in pre-header when promoting load."
This reverts commit 42229b96bf.

This appears to cause crashes on multiple bots.
2022-04-11 17:37:23 +02:00
Nikita Popov 9af8cc8d17 [SimplifyLibCalls] Remove unnecessary inbounds check
Even if the GEP is not inbounds, the GEP will have provenance of
the global, and accessing past the extent of the global would be
undefined behavior.
2022-04-11 16:51:09 +02:00
Florian Hahn 42229b96bf
[LICM] Only create load in pre-header when promoting load.
When only a store is sunk, there is no need to create a load in the
pre-header, as the result of the load will never get used.

The dead load can introduce UB, if the function is marked as
writeonly.

Fixes #51248.

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D123473
2022-04-11 16:45:18 +02:00
Simon Pilgrim 431e93f4f5 [InstCombine] Fold sub(add(x,y),min/max(x,y)) -> max/min(x,y) (PR38280)
As discussed on Issue #37628, we can flip a min/max node if we're subtracting from the sum of the node's operands

Alive2: https://alive2.llvm.org/ce/z/W_KXfy
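
An exhaustive standalone C++ check of the identity over 8-bit values, including the case where the add wraps (illustration only, not code from the patch):

```
#include <algorithm>
#include <cassert>
#include <cstdint>

int main() {
  // sub(add(x,y), min(x,y)) == max(x,y) and sub(add(x,y), max(x,y)) == min(x,y)
  for (unsigned I = 0; I < 256; ++I)
    for (unsigned J = 0; J < 256; ++J) {
      uint8_t X = static_cast<uint8_t>(I), Y = static_cast<uint8_t>(J);
      uint8_t Sum = static_cast<uint8_t>(X + Y); // may wrap, as in IR
      assert(static_cast<uint8_t>(Sum - std::min(X, Y)) == std::max(X, Y));
      assert(static_cast<uint8_t>(Sum - std::max(X, Y)) == std::min(X, Y));
    }
  return 0;
}
```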

Differential Revision: https://reviews.llvm.org/D123399
2022-04-11 11:32:56 +01:00
Florian Hahn 5f1eb74850
[VPlan] Place VPExpandSCEVRecipe in pre-header.
After D121624 models the pre-header in VPlan, VPExpandSCEVRecipes can be
placed there. This ensures SCEV expansion happens before modifying the
CFG during VPlan execution, when CFG is incomplete.

Depends on D121624.

Reviewed By: Ayal

Differential Revision: https://reviews.llvm.org/D122095
2022-04-10 10:26:20 +02:00
Florian Hahn 256c6b0ba1
[VPlan] Model pre-header explicitly.
This patch extends the scope of VPlan to also model the pre-header.
The pre-header can be used to place recipes that should be code-gen'd
outside the loop, like SCEV expansion.

Depends on D121623.

Reviewed By: Ayal

Differential Revision: https://reviews.llvm.org/D121624
2022-04-09 14:19:47 +02:00
Matt Arsenault 9fdd25848a Transforms: Fix code duplication between LowerAtomic and AtomicExpand 2022-04-08 19:06:36 -04:00
Florian Hahn 467dbcd9f1
[LV] Set debug loc after setting insert point.
This fixes the code to actually use the location of the instruction, if
available. Previously, SetInsertPoint would overwrite the insert point
set from the instruction.
2022-04-08 20:34:40 +02:00
Arthur Eubanks b22ffc7b98 [CaptureTracking] Ignore ephemeral values in EarliestEscapeInfo
And thread DSE's ephemeral values to EarliestEscapeInfo.

This allows more precise analysis in DSEState::isReadClobber() via BatchAA.

Followup to D123162.

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D123342
2022-04-08 10:07:26 -07:00
Florian Hahn 29fe998eaa
[VPlan] Preserve debug location when creating branch.
Update createEmptyBasicBlock to preserve the debug location of the
previous terminator.
2022-04-08 17:22:53 +02:00
Zaara Syeda 07005440ae [LSR] Optimize unused IVs to final values in the exit block
Loop Strength Reduce sometimes optimizes away all uses of an induction variable
from a loop but leaves the IV increments. When the only remaining use of the IV
is the PHI in the exit block, this patch will call rewriteLoopExitValues to
replace the exit block PHI with the final value of the IV to skip the updates
in each loop iteration.

Differential Revision: https://reviews.llvm.org/D118808
2022-04-08 11:16:37 -04:00
Nikita Popov c8c6362560 [LICM] Pass MemorySSAUpdater by reference (NFC)
Make it clearer that this is a required dependency.
2022-04-08 10:08:57 +02:00
Nikita Popov 5cefe7d9f5 [LoopSink] Require MemorySSA
This makes MemorySSA in LoopSink required, and removes the AST-based
implementation, as well as the related support code in LICM.

Differential Revision: https://reviews.llvm.org/D123288
2022-04-08 09:49:44 +02:00
serge-sans-paille aa15ea47e2 [builtin_object_size] Basic support for posix_memalign
It actually implements support for seeing through loads, using alias analysis to
refine the result.

This is rather limited, but I didn't want to rely on more than available
analysis at that point (to be gentle with compilation time), and it does seem to
catch common scenarios, as showcased by the included tests.

Differential Revision: https://reviews.llvm.org/D122431
2022-04-08 09:31:11 +02:00
Evgeniy Brevnov da41214d65 Add support for atomic memory copy lowering
Currently, the utility supports lowering of non-atomic memory transfer routines only. This patch adds support for the atomic version of memcpy. This may be useful for targets not supporting atomic memcpy.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D118443
2022-04-08 10:41:31 +07:00
Austin Kerbow 26b14c3ea7 [InferAddressSpaces] Fix assert on invalid bitcast placement
Similar to the problem in 0bb25b4603, bitcasts that are inserted must
dominate all uses. When rewriting "values" with "new values" that have
the updated address space, we may replace the "new value" with a bitcast
if one of the original users is an addresspace cast. This bitcast must
be inserted before ALL users, not only before the addresspace cast.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D122964
2022-04-07 20:07:53 -07:00
Chenbing Zheng 467cbb6249 [InstCombine] fold more constant divisor to select-of-constants divisor
By adding a parameter to function FoldOpIntoSelect, we can fold more Ops to Select.
For this example, we tend to fold the division instruction,
so we no longer care whether the SelectInst has one use.

This patch solves a TODO left in InstCombine/div.ll.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D122967
2022-04-08 10:19:24 +08:00
Augie Fackler f3c702fbd1 InstCombineCalls: fix annotateAnyAllocCallSite to report changes
Spotted during review of D123052.

Differential Revision: https://reviews.llvm.org/D123232
2022-04-07 13:49:09 -04:00
Arthur Eubanks 17fdaccccf [CaptureTracking] Ignore ephemeral values when determining pointer escapeness
Ephemeral values cannot cause a pointer to escape.

No change in compile time:
https://llvm-compile-time-tracker.com/compare.php?from=4371710085ba1c376a094948b806ddd3b88319de&to=c5ddbcc4866f38026737762ee8d7b9b00395d4f4&stat=instructions

This partially fixes some regressions caused by more calls to `__builtin_assume` (D122397).

Reviewed By: asbirlea

Differential Revision: https://reviews.llvm.org/D123162
2022-04-07 10:11:14 -07:00
Augie Fackler b916414096 BuildLibCalls: also set allocsize() attributes
This is part of being able to get rid of two more columns in
MemoryBuiltins.cpp's large table. We'll have two more changes before
we can finish the job.

Differential Revision: https://reviews.llvm.org/D119582
2022-04-07 12:38:44 -04:00
Augie Fackler f120be6c86 InstCombineCalls: when adding an align attribute, never reduce it
Sometimes we can infer an align from an allocalign but the function
already promised it'd be more-aligned than the allocalign and there's an
existing align that we shouldn't reduce. Make sure we handle that
correctly.

Differential Revision: https://reviews.llvm.org/D121642
2022-04-07 12:38:44 -04:00
Augie Fackler ca051a46fb InstCombineCalls: infer return alignment from allocalign attributes
This exposes a couple of lingering bugs, which will be fixed in
the next two commits.

Differential Revision: https://reviews.llvm.org/D123052
2022-04-07 12:38:44 -04:00
Nikita Popov e22af03a79 [Sink] Don't sink non-willreturn calls (PR51188)
Fixes https://github.com/llvm/llvm-project/issues/51188.
2022-04-07 16:35:05 +02:00
Simon Pilgrim afa1ae9e0c [InstCombine] SimplifyDemandedUseBits - allow and(srem(X,Pow2),C) -> and(X,C) to work on vector types
Replace m_ConstantInt with m_APInt to match uniform (no-undef) vector remainder amounts.
2022-04-07 15:24:45 +01:00
Simon Pilgrim 5909c67883 [InstCombine] SimplifyDemandedUseBits - add TODO to remove shl node if we only demand known sign bits of the shift source
Similar to what we already perform for ashr/lshr
2022-04-07 14:35:11 +01:00
Simon Pilgrim 5e90224839 [InstCombine] SimplifyDemandedUseBits - remove lshr node if we only demand known sign bit
This is a lshr equivalent to D122340 - if we don't demand any of the additional sign bits introduced by the ashr, the lshr can be treated as an ashr and we can remove the shift entirely if we only demand already known sign bits.

Another step towards PR21929

https://alive2.llvm.org/ce/z/6f3kjq

Differential Revision: https://reviews.llvm.org/D123118
2022-04-07 14:33:31 +01:00
Benjamin Kramer ff485d727f Transforms: Remove unused include
Utils can't depend on Scalar transforms.
2022-04-07 10:40:28 +02:00
Florian Hahn 4388c979da
[VPlan] Use vector.body as header name in VPlan native path.
This brings the VPlan block naming in line with the naming of the
generated basic blocks.
2022-04-07 10:31:12 +02:00
Nikita Popov 674ee4d353 [LoopSink] Use MemorySSA with legacy pass manager
LoopSink with the legacy pass manager still uses AST, because we
can't compute MemorySSA conditionally. I think now that the legacy
pass manager will be removed soon(TM) we don't need to care about
compile-time impact here anymore. Additionally, since MemorySSA is
no longer eagerly optimized, the impact is actually not that high
anymore (~0.2% geomean regression on CTMark).

This just makes legacy PM and new PM behavior line up -- as a
followup I'll drop these options entirely and make MemorySSA use
mandatory.

Differential Revision: https://reviews.llvm.org/D123216
2022-04-07 09:40:29 +02:00
Matt Arsenault 39f1568633 Transforms: Split LowerAtomics into separate Utils and pass
This will allow code sharing from AtomicExpandPass. Not entirely sure
why these exist as separate passes though.
2022-04-06 20:54:45 -04:00
Alina Sbirlea 08075a7ee8 Revert f7381a795a
Roll-forward 29fada4a3d.
Issue triggered was due to UB.

Differential Revision: https://reviews.llvm.org/D121987
2022-04-06 16:06:14 -07:00
Congzhe Cao eac3487510 [LoopInterchange] Try to achieve the most optimal access pattern after interchange
Motivated by pr43326 (https://bugs.llvm.org/show_bug.cgi?id=43326), where a slightly
modified case is as follows.

 void f(int e[10][10][10], int f[10][10][10]) {
   for (int a = 0; a < 10; a++)
     for (int b = 0; b < 10; b++)
       for (int c = 0; c < 10; c++)
         f[c][b][a] = e[c][b][a];
 }

The ideal optimal access pattern after running interchange is supposed to be the following

 void f(int e[10][10][10], int f[10][10][10]) {
   for (int c = 0;  c < 10; c++)
     for (int b = 0; b < 10; b++)
       for (int a = 0; a < 10; a++)
         f[c][b][a] = e[c][b][a];
 }

Currently loop interchange is limited to picking up the innermost loop and finding an order
that is locally optimal for it. However, the pass failed to produce the globally optimal
loop access order. For more complex examples what we get could be quite far from the
globally optimal ordering.

What is proposed in this patch is to do interchange in a "bubble-sort" fashion.
By comparing neighbors in `LoopList` in each iteration, we would be able to move each loop
onto its most appropriate place, hence this is an approach that tries to achieve the
globally optimal ordering.

The motivating example above is added as a test case.

Reviewed By: Meinersbur

Differential Revision: https://reviews.llvm.org/D120386
2022-04-06 15:31:56 -04:00
Nikita Popov 1dc1d5a0d2 [SimplifyLibCalls] Use KnownBits helper APIs (NFC)
Use helper APIs for isNonNegative() and getMaxValue() instead of
flipping the zero value and having a long comment explaining why
that is necessary.
2022-04-06 16:01:24 +02:00
Martin Storsjö 46776f7556 Fix warnings about variables that are set but only used in debug mode
Add void casts to mark the variables used, next to the places where
they are used in assert or `LLVM_DEBUG()` expressions.

Differential Revision: https://reviews.llvm.org/D123117
2022-04-06 10:01:46 +03:00
Evgeniy Brevnov acfc785c0e Preserve aliasing info during memory intrinsics lowering
By specification, source and destination of llvm.memcpy.* must either be equal or non-overlapping. This semantics is hard or impossible to figure out once lowered. This patch explicitly marks loads from source and stores to destination as not aliasing if source and destination are known to be not equal.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D118441
2022-04-06 11:33:54 +07:00
Johannes Doerfert af30de7788 [Attributor] Introduce AAInstanceInfo
The Attributor, as many other parts in LLVM, uses pointer equivalence
for `llvm::Value`s. This only works as long as `llvm::Value`s are
dynamically unique, or, to be exact, we will never end up with the same
`llvm::Value` representing two dynamic instances. We already provided a
helper to check the former, namely `AA::isDynamicallyUnique`, however we
could not check the latter. In this patch we move the logic into a
separate AA which helps with the growing complexity and use cases. We
also extend the interface to answer the second question rather than the
first. So we do not determine dynamically uniqueness but if we might end
up with the `llvm::Value` describing a different dynamic instance. Note
that the latter is very much tied to the Attributor capabilities to look
through memory, recursion, etc. so we need to update the logic as we go.
2022-04-05 23:07:13 -05:00
Johannes Doerfert c42aa1be74 [Attributor] Keep loads feeding in `llvm.assume` if stores stays
If a load is only used by an `llvm.assume` and the stores feeding into
the load are not removable, keep the load.
2022-04-05 23:07:12 -05:00
Johannes Doerfert 857bf306d7 [Attributor] Remove broken and duplicated load simplification
We look through loads in the "generic value traversal" and we
consequently don't need to look through them again in AAValueSimplify*.
The test changes stem from the fact that we allowed any simplified
value, incl. non-dynamically unique ones, as long as the underlying
memory was an alloca. This doesn't seem to make sense as allocas do not
protect against dynamically non-unique values. We need to make the
unique check better rather than excluding allocas. That in mind, we can
remove a lot of code by simply relying on the generic value traversal
load look through.

To soften the blow some minor adjustments have been made that allow more
simplification through the now used scheme and some tests have been
given a `norecurse` for now.
2022-04-05 20:49:03 -05:00
Johannes Doerfert a8610d7523 [Attributor] Move recursion reasoning into `AA::isPotentiallyReachable`
With D106397 we ensured that `AAReachability` will not answer queries for
potentially recursive functions. This was necessary as we did not treat
recursion explicitly otherwise. Now that we have
`AA::isPotentiallyReachable` we can make `AAReachability` a purely
intra-procedural AA which does not care about recursion.
`AA::isPotentiallyReachable`, however, does already deal with "going
back" the call graph and can now do so for potentially recursive
functions.
2022-04-05 20:49:03 -05:00
Teresa Johnson ced9a795fd [WPD] Add statistics
Add statistics to count overall devirtualized targets as well as the
various types of devirtualizations applied at callsites.

Differential Revision: https://reviews.llvm.org/D123152
2022-04-05 18:48:23 -07:00
Gulfem Savrun Yeniceri bcf8f2188b Revert "[InstrProfiling] No runtime hook for unused funcs"
This reverts commit c7f91e227a.
This patch caused an issue in Fuchsia source code coverage builders.
2022-04-06 01:41:44 +00:00
Johannes Doerfert 3e8c4366e2 [Attributor] Visit droppable uses in AAIsDead
If we ignore droppable users everything only used in llvm.assume (among
other things) is going to be deleted as dead. This is not helpful.
Instead we want to only delete things we actually don't need anymore. A
follow up will deal with loads in a smarter way.
2022-04-05 18:20:45 -05:00
Bert Abrath 019e7b7f6e
[PartiallyInlineLibCalls] Don't partially inline a musttail libcall.
Partially inlining a libcall that has the musttail attribute
leads to broken LLVM IR, triggering an assertion in the IR verifier.

Reviewed By: lebedev.ri

Differential Revision: https://reviews.llvm.org/D123116
2022-04-05 22:30:50 +03:00
Andrew Browne 5748219fd2 [DFSan] Add dfsan-combine-taint-lookup-table option as work around for
false negatives when dfsan-combine-pointer-labels-on-load=0 and
dfsan-combine-offset-labels-on-gep=0 miss data flows through lookup tables.

Example case:
628a2825f8/absl/strings/ascii.h (L182)

Reviewed By: vitalybuka

Differential Revision: https://reviews.llvm.org/D122787
2022-04-05 11:05:10 -07:00
Matt Devereau 2c3f66519c [SVE] Extend support for folding select + masked gathers
Extend the work done in D106376 to include masked gathers

Differential Revision: https://reviews.llvm.org/D122896
2022-04-05 16:27:11 +00:00
serge-sans-paille 1e02737593 [iwyu] Fix some header include regression
Running iwyu-diff from https://github.com/serge-sans-paille/preprocessor-utils
makes it possible to quickly spot regressions in unused includes. This patch
contains the few regressions since the last header cleanup.

Differential Revision: https://reviews.llvm.org/D123036
2022-04-05 15:02:03 +02:00
Jingu Kang 64b6192e81 [AArch64] Set maximum VF with shouldMaximizeVectorBandwidth
Set the maximum VF for AArch64 to 128 / the size of the smallest type in the loop.

Differential Revision: https://reviews.llvm.org/D118979
2022-04-05 13:16:52 +01:00
Florian Hahn 1ff022e21b
[LV] Add vector.body block to parent loop during skeleton creation.
When creating induction resume values, SCEV queries may rely on
LoopInfo. Make sure vector.body gets added to the loop of the pre-header
during skeleton construction.

%vector.body will be moved to the vector preheader during VPlan
execution.

Fixes #54745.
2022-04-05 11:54:17 +01:00
Jonas Paulsson dbb6a75fbb [LibCalls] Respect TLI.getExtAttrForI32Param() in inferLibFuncAttributes().
getExtAttrForI32Param() is the method to be used for determining the type of
extension attribute (if any) that is to be added for a signed/unsigned
argument.

Previously, the SExt attribute was always added to the i32 ldexp* argument as
it was expected to be ignored by targets not needing it. This patch now
changes this so that it is only added for the targets that need it in the
first place.

The putchar() argument is now also extended as required by the target (SystemZ in
the test), to fix the issue below. Many more libcalls will be handled
similarly in a following patch.

Fixes https://github.com/llvm/llvm-project/issues/54532.

Differential Revision: https://reviews.llvm.org/D123030

Review: Eli Friedman
2022-04-05 10:29:42 +02:00
Johannes Doerfert 79962df386 [Attributor] Allow to reproduce instructions for simplification
When simplifying values we might end up with an instruction from a
different scope or just one that does not dominate the use. If the
instruction can be reproduced without side-effect (incl. UB) we can
now do that. For now this is mostly used for speculatable (intrinsic)
calls but as we learn to make things like arguments or loads available
this will become more powerful.

This will also allow us to remove dead stores more easily in a follow
up.
2022-04-04 12:28:08 -05:00
Florian Hahn 1817c526e1
[VPlan] Update VPInterleavedAccessInfo to use getVectorLoopRegion.
Update VPInterleavedAccessInfo  to use the generic getVectorLoopRegion
helper instead of relying on the entry block being the top-most vector
loop region.
2022-04-04 10:26:39 +01:00
Martin Sebor 5ccfd5f6d4 [SimplifyLibCalls] Optimize memchr() with known char+str and unknown length
If both the character and string are known, but the length
potentially isn't, we can optimize the memchr() call to a select
of either the known position of the character or null.
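
A standalone C++ illustration of the transform at source level (this is not the SimplifyLibCalls code; the string, character and helper names are invented for the example): 'l' first occurs at index 2 of "hello", so the call becomes a select on the length against that known position.

```
#include <cassert>
#include <cstddef>
#include <cstring>

static const char S[] = "hello";

static const char *orig(std::size_t N) {
  return static_cast<const char *>(std::memchr(S, 'l', N));
}

static const char *folded(std::size_t N) {
  // Select of the known match position or null, depending only on N.
  return N > 2 ? S + 2 : nullptr;
}

int main() {
  for (std::size_t N = 0; N <= sizeof(S); ++N)
    assert(orig(N) == folded(N));
  return 0;
}
```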

Split off from https://reviews.llvm.org/D122836.
2022-04-04 11:01:33 +02:00
Martin Sebor 5197d2791f [SimplifyLibCalls] Move handling of constant char earlier (NFC)
Handle the simple constant char case before the bitmask optimization.
This will allow extending the code to handle a non-constant size
argument in a followup change.

Split out from https://reviews.llvm.org/D122836.
2022-04-04 11:01:33 +02:00
Martin Sebor d18991debf [SimplifyLibCalls] Fold memchr() with size 1
If the memchr() size is 1, then we can convert the call into a
single-byte comparison. This works even if both the string and the
character are unknown.

Split off from https://reviews.llvm.org/D122836.
2022-04-04 10:41:20 +02:00
Florian Hahn 8cd1892725
[VPlan] Remember previous loop and reset vector loop.
At the moment this is NFC, but will be needed once nested loops are also
modeled as regions. Preparation for D123005.
2022-04-04 09:27:15 +01:00
Nikita Popov a5c3b5748c [MemCpyOpt] Work around PR54682
As discussed on https://github.com/llvm/llvm-project/issues/54682,
MemorySSA currently has a bug when computing the clobber of calls
that access loop-varying locations. I think a "proper" fix for this
on the MemorySSA side might be non-trivial, but we can easily work
around this in MemCpyOpt:

Currently, MemCpyOpt uses a location-less getClobberingMemoryAccess()
call to find a clobber on either the src or dest location, and then
refines it for the src and dest clobber. This was intended as an
optimization, as the location-less API is cached, while the
location-affected APIs are not.

However, I don't think this really makes a difference in practice,
because I don't think anything will use the cached clobbers on
those calls later anyway. On CTMark, this patch seems to be very
mildly positive actually.

So I think this is a reasonable way to avoid the problem for now,
though MemorySSA should also get a fix.

Differential Revision: https://reviews.llvm.org/D122911
2022-04-04 10:19:51 +02:00
Nikita Popov c0cc98251a [Float2Int] Make sure dependent ranges are calculated first (PR54669)
The range calculation in walkForwards() assumes that the ranges of
the operands have already been calculated. With the used visit
order, this is not necessarily the case when there are multiple
roots. (There is nothing guaranteeing that instructions are visited
in topological order.)

Fix this by queuing instructions for reprocessing if the operand
ranges haven't been calculated yet.

Fixes https://github.com/llvm/llvm-project/issues/54669.

Differential Revision: https://reviews.llvm.org/D122817
2022-04-04 10:18:39 +02:00
Augie Fackler 603ae73146 AttributorAttributes: guard against TLI being nullptr
I didn't dig into this very much because it appears to be totally valid
(especially once these properties can come from attributes instead
of only from hard-coded library functions) for TLI to not be defined,
and nothing broke when I added this check, including with all my other
patches applied.

Differential Revision: https://reviews.llvm.org/D122917
2022-04-03 23:19:23 -04:00
Philip Reames 88de27e3fd [LV] Handle non-integral types when considering interleave widening legality
In general, anywhere we might need to insert a blind bitcast, we need to make sure the types are losslessly convertible.

This fixes pr54634.
2022-04-03 20:16:20 -07:00
Philip Reames 7c51669c21 [memcpyopt] Restructure store(load src, dest) form of callslotopt for compile time
The search for the clobbering call is fairly expensive if uses are not optimized at construction.  Defer the clobber walk to the point in the implementation we need it; there are a bunch of bailouts before that point.  (e.g. If the source pointer is not an alloca, we can't do callslotopt.)

On a test case which involves a bunch of copies from argument pointers, this switches memcpyopt from > 1/2 second to < 10ms.
2022-04-03 20:16:20 -07:00
Xiang1 Zhang f830392be7 Correct spelling error in TLS-Load-Hoist 2022-04-04 08:27:54 +08:00
Alexander Shaposhnikov 6cf10b7e6e [InstCombine] Fold srem(X, PowerOf2) == C into (X & Mask) == C for positive C
This diff extends InstCombinerImpl::foldICmpSRemConstant to handle the cases
srem(X, PowerOf2) == C and
srem(X, PowerOf2) != C
for positive C.
This addresses the issue https://github.com/llvm/llvm-project/issues/54650
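
A standalone C++ check of the equivalence for one concrete case (Pow2 = 8, C = 5). The mask used below, the sign bit plus the low bits, is one mask for which the equivalence holds; the exact mask construction in the patch may differ (illustration only):

```
#include <cassert>
#include <cstdint>

int main() {
  const uint32_t Mask = 0x80000007u; // sign bit | (8 - 1)
  for (int64_t I = -100000; I <= 100000; ++I) {
    int32_t X = static_cast<int32_t>(I);
    bool SRem = (X % 8) == 5;                             // srem(X, 8) == 5
    bool Masked = (static_cast<uint32_t>(X) & Mask) == 5; // (X & Mask) == 5
    assert(SRem == Masked);
  }
  return 0;
}
```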

Differential revision: https://reviews.llvm.org/D122942

Test plan: make check-all
2022-04-03 03:57:05 +00:00
Sanjay Patel 5f8c2b884d [InstCombine] limit icmp fold with sub if other sub user is a phi
This is a hacky fix for:
https://github.com/llvm/llvm-project/issues/54558

As discussed there, codegen regressed when we opened up this transform
to allow extra uses ( 61580d0949 ), and it's not clear how to
undo the transforms at the later stage of compilation.

As noted in the code comments, there's a set of remaining folds that
are still limited to one-use, so we can try harder to refine and
expand the limitations on these folds, but it's likely to be an
up-and-down battle as we find and overcome similar regressions.

Differential Revision: https://reviews.llvm.org/D122909
2022-04-02 19:23:42 -04:00
Sanjay Patel 97ac0cd6c4 [InstCombine] fold fcmp with lossy casted constant (2nd try)
This is a retry of 9397bdc67e - that was reverted until
we had a clang warning in place to alert users about a
possible mistake in source. The warning was added with
ab982eace6.

This is noted as a missing clang warning in #54222,
but it is also a missing optimization opportunity.

Alive2 proofs:
https://alive2.llvm.org/ce/z/Q8drDq
https://alive2.llvm.org/ce/z/pE6LRt

I don't see a single conversion for all predicates
using "getFCmpCode" logic, so other predicates are
left as a TODO item.
2022-04-02 19:23:01 -04:00
Roman Lebedev 308ca349cb
[InstCombine] Fold `(X | C2) ^ C1 --> (X & ~C2) ^ (C1^C2)`
These two are equivalent,
and I *think* the `and` form is more-ish canonical.

General proof: https://alive2.llvm.org/ce/z/RrF5s6

If the constant on the (outer) `xor` is an `undef`,
the whole lane is dead: https://alive2.llvm.org/ce/z/mu4Sh2

However, if the constant on the (inner) `or` is an `undef`,
we must sanitize it first: https://alive2.llvm.org/ce/z/MHYJL7
I guess producing a zero `and`-mask is optimal in that case.

alive-tv is happy about the entirety of `xor-of-or.ll`.
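
A concrete instance of the fold, sketched with made-up constants (C2 = 12, C1 = 7, so ~C2 = -13 and C1^C2 = 11):

  define i8 @src(i8 %x) {
    %o = or i8 %x, 12
    %r = xor i8 %o, 7
    ret i8 %r
  }
  define i8 @tgt(i8 %x) {
    %a = and i8 %x, -13      ; X & ~C2
    %r = xor i8 %a, 11       ; C1 ^ C2
    ret i8 %r
  }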
2022-04-03 00:12:56 +03:00
Florian Hahn 95b2aa511e
[VPlan] Set VPlan header block name to vector.body.
This brings the VPlan block naming in line with the naming of the
generated basic blocks.
2022-04-02 19:34:32 +01:00
Florian Hahn 5bedc1f093
[ConstraintElimination] Move logic to build worklist to helper (NFC).
This refactor makes it easier to extend the logic to collect information
from blocks in the future, without even further increasing the size of
eliminateConstraints.
2022-04-02 16:55:05 +01:00
Serge Pavlov c625b6051c Remove duplicate code from wouldInstructionBeTriviallyDead
There is a similar check a few lines above in this function.
2022-04-02 16:04:39 +07:00
Florian Hahn f8101e4d68
Recommit "[LV] Remove unneeded createHeaderBranch.(NFCI)"
This reverts commit 14e3650f01.

The issue causing the revert were fixed independently in
a08c90a402 and 14e5f9785c.
2022-04-01 16:53:39 +01:00
Florian Hahn 14e5f9785c
[LV] Add SCEV workaround from 80e8025 to epilogue vector code path.
This was exposed by 14e3650f. The recommit of 14e3650f will hit the
problematic code path requiring the workaround. Includes a
test case that crashes without the workaround.
2022-04-01 15:14:47 +01:00
Florian Hahn a08c90a402
[LV] Re-use TripCount from EPI.TripCount.
During skeleton construction for the epilogue vector loop, generic
helpers use getOrCreateTripCount, which will re-expand the trip count
computation. Instead, re-use the TripCount created during main loop
vectorization.
2022-04-01 13:47:34 +01:00
Nikita Popov 792f80e166 [CoroSplit] Use freeze instead of bitcast for dummy instructions
Not all types that can appear in arguments can be bitcast -- in
particular, bitcast does not support struct types.
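
A sketch of the difference (hypothetical example):

  ; invalid: bitcast cannot operate on aggregate types
  ;   %dummy = bitcast { i32, i32 } undef to { i32, i32 }
  ; freeze accepts any type, so it can serve as the dummy instruction:
  %dummy = freeze { i32, i32 } undef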
2022-04-01 13:07:25 +02:00
Xiang1 Zhang a56f264958 Refine tls-load-hoist llvm option
Reviewed By: LuoYuanke

Differential Revision: https://reviews.llvm.org/D122890
2022-04-01 19:03:58 +08:00
Nikita Popov 68d27587e4 [CoroSplit] Handle argument being the frame pointer (PR54523)
If the frame pointer is an argument of the original pointer (which
happens with opaque pointers), then we currently first replace the
argument with undef, which will prevent later replacement of the
old frame pointer with the new one.

Fix this by replacing arguments with some dummy instructions first,
and then replacing those with undef later. This gives us a chance
to replace the frame pointer before it becomes undef.

Fixes https://github.com/llvm/llvm-project/issues/54523.

Differential Revision: https://reviews.llvm.org/D122375
2022-04-01 12:37:29 +02:00
Alexandros Lamprineas f364278c45 [FuncSpec][NFC] Cache code metrics for analyzed functions.
This isn't expected to reduce compilation times as 'max-iters' is set to
one by default, but it helps with recursive functions that require higher
iteration counts.

Differential Revision: https://reviews.llvm.org/D122819
2022-04-01 10:58:26 +01:00
Jorge Gorbe Moya fc7573f29c Revert "[misexpect] Re-implement MisExpect Diagnostics"
This reverts commit 46774df307.
2022-03-31 14:54:41 -07:00
Florian Hahn 14e3650f01
Revert "Recommit "[LV] Remove unneeded createHeaderBranch.(NFCI)""
This reverts commit 8378a71b6c.

It looks like this patch uncovered another issue, e.g. see
https://lab.llvm.org/buildbot/#/builders/168/builds/5518
2022-03-31 19:00:48 +01:00
Paul Kirth 46774df307 [misexpect] Re-implement MisExpect Diagnostics
Reimplements MisExpect diagnostics from D66324 to reconstruct its
original checking methodology only using MD_prof branch_weights
metadata.

New checks rely on 2 invariants:

1) For frontend instrumentation, MD_prof branch_weights will always be
   populated before llvm.expect intrinsics are lowered.

2) For IR and sample profiling, llvm.expect intrinsics will always be
   lowered before branch_weights are populated from the IR profiles.

These invariants allow the checking to assume how the existing branch
weights are populated depending on the profiling method used, and emit
the correct diagnostics. If these invariants are ever invalidated, the
MisExpect related checks would need to be updated, potentially by
re-introducing MD_misexpect metadata, and ensuring it always will be
transformed the same way as branch_weights in other optimization passes.

Frontend based profiling is now enabled without using LLVM Args, by
introducing a new CodeGen option, and checking if the -Wmisexpect flag
has been passed on the command line.

Reviewed By: tejohnson

Differential Revision: https://reviews.llvm.org/D115907
2022-03-31 17:38:21 +00:00
Nikita Popov 33ac23e7cf [Float2Int] Avoid unnecessary lamdbas (NFC)
Instead of first creating a lambda for calculating the range,
then collecting the ranges for the operands, and then calling the
lambda on those ranges, we can first calculate the operand ranges
and then calculate the result directly in the switch.
2022-03-31 16:13:13 +02:00
Nikita Popov f66975555f [Float2Int] Extract calcRange() method (NFC)
This avoids the awkward "Abort" flag, because we can simply
early-return instead.
2022-03-31 16:13:13 +02:00
Florian Hahn 8378a71b6c
Recommit "[LV] Remove unneeded createHeaderBranch.(NFCI)"
This reverts the revert commit 2760cdc9c6.

This version pulls in the code to create the vector loop object in VPlan
from D121624.

This is needed because otherwise existing LoopInfo verification will
fail, as a loop block doesn't have in-loop successors now that we
do not replace the branch.

Now that we do not add new loops during skeleton construction, there's
also no need to verify LI there.
2022-03-31 14:48:32 +01:00
Serge Pavlov 47b3b76825 Implement inlining of strictfp functions
According to the current design, if a floating point operation is
represented by a constrained intrinsic somewhere in a function, all
floating point operations in the function must be represented by
constrained intrinsics. This imposes additional requirements on the inlining
mechanism. If a non-strictfp function is inlined into a strictfp function,
all ordinary FP operations must be replaced with their constrained
counterparts.
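
For example, a plain fadd from the inlined callee would be rewritten into its constrained counterpart, roughly (a sketch; the actual rounding and exception metadata depend on the caller's floating-point environment):

  ; in the non-strictfp callee:
  %s = fadd float %a, %b
  ; after inlining into a strictfp caller:
  %s = call float @llvm.experimental.constrained.fadd.f32(float %a, float %b,
           metadata !"round.dynamic", metadata !"fpexcept.strict")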

Inlining a strictfp function into a non-strictfp one is not implemented, as it
would require replacement of all FP operations in the host function,
which is currently undesirable due to the expected performance loss.

Differential Revision: https://reviews.llvm.org/D69798
2022-03-31 19:15:52 +07:00
Alexandros Lamprineas b4417075dc [FuncSpec] Constant propagate multiple arguments for recursive functions.
This fixes a TODO in constantArgPropagation() to make it feature complete.
However, I do find myself in agreement with the review comments in
https://reviews.llvm.org/D106426. I don't think we should pursue
specializing such recursive functions, as the code size increase becomes
linear in 'max-iters'. Compiling the modified test just with -O3 (no
function specialization) generates the same code.

Differential Revision: https://reviews.llvm.org/D122755
2022-03-31 13:00:08 +01:00
Florian Hahn 2760cdc9c6
Revert "[LV] Remove unneeded createHeaderBranch.(NFCI)"
This reverts commit 32bc83d11e.

This is causing bots with expensive-checks to fail. Revert while I
investigate.
2022-03-31 12:32:50 +01:00
Florian Hahn 32bc83d11e
[LV] Remove unneeded createHeaderBranch.(NFCI)
The only remaining use was to get the exit block of the loop. Instead of
relying on the loop, use the successor of VectorHeaderBB
(LoopMiddleBlock) directly to set VPTransformState::CFG::ExitB

Depends on D121621.

Reviewed By: Ayal

Differential Revision: https://reviews.llvm.org/D121623
2022-03-31 11:48:52 +01:00
Florian Hahn 2c494f0941
[VPlan] Remove unneeded Loop variable (NFC).
Suggested in D121623. The remaining uses of L can be replaced, reducing
the need for the variable.
2022-03-31 10:34:28 +01:00
Marco Elver b8e49fdcb1 [AddressSanitizer] Allow prefixing memintrinsic calls in kernel mode
Allow receiving memcpy/memset/memmove instrumentation by using __asan or
__hwasan prefixed versions for AddressSanitizer and HWAddressSanitizer
respectively when compiling in kernel mode, by passing params
-asan-kernel-mem-intrinsic-prefix or -hwasan-kernel-mem-intrinsic-prefix.

By default the kernel-specialized versions of both passes drop the
prefixes for calls generated by memintrinsics. This assumes that all
locations that can lower the intrinsics to libcalls can safely be
instrumented. This unfortunately is not the case when implicit calls to
memintrinsics are inserted by the compiler in no_sanitize functions [1].

To solve the issue, normal memcpy/memset/memmove need to be
uninstrumented, and instrumented code should instead use the prefixed
versions. This also aligns with ASan behaviour in user space.

[1] https://lore.kernel.org/lkml/Yj2yYFloadFobRPx@lakrids/
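
Roughly, with -asan-kernel-mem-intrinsic-prefix the pass keeps the call but redirects it to the prefixed runtime routine, a sketch (the exact prototype of the prefixed function is defined by the kernel runtime, not by this patch):

  ; before:
  call void @llvm.memset.p0.i64(ptr %p, i8 0, i64 %n, i1 false)
  ; after, in kernel mode with the prefix enabled:
  %r = call ptr @__asan_memset(ptr %p, i32 0, i64 %n)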

Reviewed By: glider

Differential Revision: https://reviews.llvm.org/D122724
2022-03-31 11:14:42 +02:00
David Green b65267ca7b [LV] Invalidate widening decisions after maximizing vector bandwidth
When MaximizeVectorBandwidth is enabled, we can end up (via calls to
collectUniformsAndScalars/setCostBasedWideningDecision through
calculateRegisterUsage) making widening decisions before we have decided
whether to fold the tail by masking. These decisions will be wrong if we
later decided to fold the tail, for example when the trip count is very
low. It will use incorrect costs for loads that should get masked, using
standard memory operation costs instead.

This still at the moment uses the EmulatedMaskMemRefHack costs (a bit
unfortunately), but the old costs without this change were 1, leading to
too optimistic vectorization.

This slightly changes the way that the MaximizeVectorBandwidth option
works to make it easier to test, always honouring the option if it is
set.

Differential Revision: https://reviews.llvm.org/D120215
2022-03-31 09:19:31 +01:00
Aditya Kumar 368681f803 [GVNHoist] drop debug location according to the debug info guide
According to the LLVM debug info update guide: https://llvm.org/docs/HowToUpdateDebugInfo.html,
"Hoisting identical instructions which appear in several successor
blocks into a predecessor block. In this case there is no single
merged instruction. The rule for dropping locations applies".

Thanks to Yuanbo Li for reporting this.

Reviewed By: dblaikie

Reviewers: sebpop, tejohnson, dblaikie

Differential Revision: https://reviews.llvm.org/D122730
2022-03-30 20:17:53 -07:00
Stephen Long e02f4976ac [LoopIdiom] Merge TBAA of adjacent stores when creating memset
Factor in the TBAA of adjacent stores instead of just the head store
when merging stores into a memset. We were seeing GVN remove a load that
had a TBAA that matched the 2nd store because GVN determined it didn't
match the TBAA of the memset. The memset had the TBAA of only the first
store.

i.e. Loading the field pi_ of shared_count after memset to create an
array of shared_ptr

template<class T>
class shared_ptr {
  T *p;
  shared_count refcount;
};

class shared_count {
  sp_counted_base *pi_;
};

Differential Revision: https://reviews.llvm.org/D122205
2022-03-30 16:54:49 -07:00
Florian Hahn e4543af4e6
[VPlan] Track current vector loop in VPTransformState (NFC).
Instead of looking up the vector loop using the header, keep track of
the current vector loop in VPTransformState. This removes the
requirement for the vector header block being part of the loop up front.

A follow-up patch will move the code to generate the Loop object for the
vector loop to VPRegionBlock.

Depends on D121619.

Reviewed By: Ayal

Differential Revision: https://reviews.llvm.org/D121621
2022-03-30 22:16:40 +01:00
Chang-Sun Lin Jr c28ce745cf Value-number GVNHoist loads by result type as well as pointer address.
Avoids merge errors when opaque pointers are loaded into different types.
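
A sketch of the situation (hypothetical example): with opaque pointers, two hoistable loads can share the same pointer operand while producing different types, so the result type has to be part of the value number:

  %a = load i32, ptr %p
  %b = load float, ptr %p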

Reviewed by: jcranmer-intel, hiraditya
Differential Revision: https://reviews.llvm.org/D122521
2022-03-30 11:33:49 -07:00
Florian Hahn e8673f2f20
[LV] Do not create separate latch block in VPlan::execute.
Now that all dependencies on creating the latch block up-front have been
removed, there is no need to create it early.

Depends on D121618.

Reviewed By: Ayal

Differential Revision: https://reviews.llvm.org/D121619
2022-03-30 17:31:38 +01:00
Florian Hahn 8a4077fac0
[LV] Pass LoopHeaderBB directly to updateDominatorTree. (NFC)
At the call site, we already know what the vector header block is. Pass
it directly.
2022-03-30 13:11:20 +01:00
Florian Hahn ecb4171dcb
[LV] Handle zero cost loops in selectInterleaveCount.
In some case, like in the added test case, we can reach
selectInterleaveCount with loops that actually have a cost of 0.

Unfortunately a loop cost of 0 is also used to communicate that the cost
has not been computed yet. To resolve the crash, bail out if the cost
remains zero after computing it.

This seems like the best option, as there are multiple code paths that
return a cost of 0 to force a computation in selectInterleaveCount.
Computing the cost at multiple places up front there would unnecessarily
complicate the logic.

Fixes #54413.
2022-03-29 22:52:43 +01:00
Chris Bieneman 9130e471fe Add DXContainer
DXIL is wrapped in a container format defined by the DirectX 11
specification. Codebases differ in calling this format either DXBC or
DXILContainer.

Since eventually we want to add support for DXBC as a target
architecture and the format is used by DXBC and DXIL, I've termed it
DXContainer here.

Most of the changes in this patch are just adding cases to switch
statements to address warnings.

Reviewed By: pete

Differential Revision: https://reviews.llvm.org/D122062
2022-03-29 14:34:23 -05:00
Florian Hahn d1d3563278
[LV] Move code to place pointer induction increment to VPlan post-processing.
This patch moves the code to set the correct incoming block for the
backedge value to VPlan::execute.

When generating the phi node, the backedge value is temporarily added
using the pre-header as incoming block. The invalid phi node will be
fixed up during VPlan::execute after main VPlan code generation.
At the same time, the backedge value is also moved to the latch.

This change removes the requirement to create the latch block up-front
for VPWidenInductionPHIRecipe::execute, which in turn will enable
modeling the pre-header in VPlan.

Depends on D121617.

Reviewed By: Ayal

Differential Revision: https://reviews.llvm.org/D121618
2022-03-29 20:27:59 +01:00
Hirochika Matsumoto a3cffc1150 [InstCombine] Fold (ctpop(X) == 1) | (X == 0) into ctpop(X) < 2
https://alive2.llvm.org/ce/z/94yRMN
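
A minimal IR sketch of the fold (hypothetical example, not from the patch's tests):

  declare i32 @llvm.ctpop.i32(i32)

  define i1 @src(i32 %x) {
    %p = call i32 @llvm.ctpop.i32(i32 %x)
    %one = icmp eq i32 %p, 1
    %zero = icmp eq i32 %x, 0
    %r = or i1 %one, %zero
    ret i1 %r
  }
  define i1 @tgt(i32 %x) {
    %p = call i32 @llvm.ctpop.i32(i32 %x)
    %r = icmp ult i32 %p, 2
    ret i1 %r
  }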

Fixes #54177

Differential Revision: https://reviews.llvm.org/D122077
2022-03-29 11:30:06 -04:00
Nikita Popov 682ef39b1a [InstCombine] Remove call to getPointerElementType()
This was erroneously re-introduced as part of
bb0b23174e.
2022-03-29 16:52:29 +02:00
Florian Hahn 3dbb5eb2cd
[ConstraintElimination] Move ConstraintInfo after ConstraintTy. (NFC)
Code movement to make it slightly easier to use ConstraintTy & co in
ConstraintInfo directly, for follow-up patches.
2022-03-29 09:59:03 +01:00
Serguei Katkov 6444a65514 [LSR] Fixup canonicalization formula and its checker.
According to the definition of canonical form, a formula is canonical
if, when the scale reg does not contain an addrec for loop L, none of
the base regs contain an addrec for this loop.

The critical word here is "contains".

The current checker of canonical form tests the "is an addrec" property
rather than the "contains an addrec" property.

Fix the checker and canonicalizing utility to follow definition.

Without this fix, in the attached test the base formula
reg((-1 * {0,+,8}<nuw><nsw><%bb2>)<nsw>) + 1*reg((8 * (%arg /u 8))<nuw>)
is considered canonical even though the base contains an addrec,
and the modified formula we want to insert
reg({0,+,8}<nuw><nsw><%bb2>) + 1*reg((-8 * (%arg /u 8)))
is considered not canonical.

Reviewed By: mkazantsev
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D122457
2022-03-29 14:05:04 +07:00
serge-sans-paille 01be9be2f2 Cleanup includes: final pass
Cleanup a few extra files, this closes the work on libLLVM dependencies on my
side.

Impact on libLLVM preprocessed output: -35876 lines

Discourse thread: https://discourse.llvm.org/t/include-what-you-use-include-cleanup
Differential Revision: https://reviews.llvm.org/D122576
2022-03-29 09:00:21 +02:00
Paul Kirth 90cb325abd Revert "[misexpect] Re-implement MisExpect Diagnostics"
This reverts commit 2add3fbd97.
2022-03-29 06:20:30 +00:00
Philip Reames 33deaa13b8 [memcpyopt] Common code into performCallSlotOptzn [NFC]
We have the same code repeated in both callers, sink it into callee.

The motivation here isn't just code style; we can also defer the relatively expensive aliasing checks until the cheap structural preconditions have been validated.  (e.g. Don't bother with aliasing checks if src is not an alloca.)  This helps compile time significantly.
2022-03-28 20:10:13 -07:00
Philip Reames 7d6e8f2a96 [slp] Delete dead scalar instructions feeding vectorized instructions
If we vectorize a e.g. store, we leave around a bunch of getelementptrs for the individual scalar stores which we removed. We can go ahead and delete them as well.

This is purely for test output quality and readability. It should have no effect in any sane pipeline.

Differential Revision: https://reviews.llvm.org/D122493
2022-03-28 20:10:13 -07:00
Johannes Doerfert 7df2eba7fa [Attributor][OpenMP] Add assumption for non-call assembly instructions
Inline assembly is scary but we need to support it for the OpenMP GPU
device runtime. The new assumption expresses the fact that it may not
have call semantics, that is, it will not call another function but
simply perform an operation or side-effect. This is important for
reachability in the presence of inline assembly.

Differential Revision: https://reviews.llvm.org/D109986
2022-03-28 20:57:52 -05:00
Johannes Doerfert bb0b23174e [InstCombineCalls] Optimize call of bitcast even w/ parameter attributes
Before, we gave up if a call through a bitcast had parameter attributes.
Interestingly, we allowed attributes for the return value already. We
now handle both the same way, namely, we drop the ones that are
incompatible with the new type and keep the rest. This cannot cause
"more UB" than initially present.

Differential Revision: https://reviews.llvm.org/D119967
2022-03-28 20:57:52 -05:00
Paul Kirth 2add3fbd97 [misexpect] Re-implement MisExpect Diagnostics
Reimplements MisExpect diagnostics from D66324 to reconstruct its
original checking methodology only using MD_prof branch_weights
metadata.

New checks rely on 2 invariants:

1) For frontend instrumentation, MD_prof branch_weights will always be
   populated before llvm.expect intrinsics are lowered.

2) For IR and sample profiling, llvm.expect intrinsics will always be
   lowered before branch_weights are populated from the IR profiles.

These invariants allow the checking to assume how the existing branch
weights are populated depending on the profiling method used, and emit
the correct diagnostics. If these invariants are ever invalidated, the
MisExpect related checks would need to be updated, potentially by
re-introducing MD_misexpect metadata, and ensuring it always will be
transformed the same way as branch_weights in other optimization passes.

Frontend based profiling is now enabled without using LLVM Args, by
introducing a new CodeGen option, and checking if the -Wmisexpect flag
has been passed on the command line.

Reviewed By: tejohnson

Differential Revision: https://reviews.llvm.org/D115907
2022-03-28 23:30:04 +00:00
Alina Sbirlea f7381a795a Revert 29fada4a3d
Seeing a test failure with asan in Halide generated code, reverting
while I investigate.

Differential Revision: https://reviews.llvm.org/D121987
2022-03-28 16:17:41 -07:00
chenglin.bi 9a53793ab8 [InstCombine] Fold two select patterns into and-or
select (~a | c), a, b -> and a, (or c, b) https://alive2.llvm.org/ce/z/bnDobs
select (~c & b), a, b -> and b, (or a, c) https://alive2.llvm.org/ce/z/k2jJHJ
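
For the first pattern, an i1 sketch (hypothetical example) of what the fold produces:

  define i1 @src(i1 %a, i1 %b, i1 %c) {
    %na = xor i1 %a, true
    %cond = or i1 %na, %c
    %r = select i1 %cond, i1 %a, i1 %b
    ret i1 %r
  }
  define i1 @tgt(i1 %a, i1 %b, i1 %c) {
    %o = or i1 %c, %b
    %r = and i1 %a, %o
    ret i1 %r
  }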

Differential Revision: https://reviews.llvm.org/D122152
2022-03-28 16:07:55 -04:00
Florian Hahn e7bf2ea934
[LV] Move code to place induction increment to VPlan post-processing.
This patch moves the code to set the correct incoming block for the
backedge value to VPlan::execute.

When generating the phi node, the backedge value is temporarily added
using the pre-header as incoming block. The invalid phi node will be
fixed up during VPlan::execute after main VPlan code generation.
At the same time, the backedge value is also moved to the latch.

This change removes the requirement to create the latch block up-front
for VPWidenIntOrFpInductionRecipe::execute, which in turn will enable
modeling the pre-header in VPlan.

As an alternative, the increment could be modeled as separate recipe,
but that would require more work and a bit of redundant code, as we need
to create the step-vector during VPWidenIntOrFpInductionRecipe::execute
anyways, to create the values for different parts.

Reviewed By: Ayal

Differential Revision: https://reviews.llvm.org/D121617
2022-03-28 16:20:02 +01:00
Nikita Popov db561064f6 [GlobalOpt] Handle non-instruction MTI source (PR54572)
This was reusing a cast to GlobalVariable to check for an
Instruction, which means we'll try to dereference a null pointer
if it's not actually a GlobalVariable. We should be casting
MTI->getSource() instead.

I don't think this problem is really specific to opaque pointers,
but it certainly makes it a lot easier to reproduce.

Fixes https://github.com/llvm/llvm-project/issues/54572.
2022-03-28 14:28:47 +02:00
Alexandros Lamprineas 8045bf9d0d [FuncSpec] Support function specialization across multiple arguments.
The current implementation of Function Specialization does not allow
specializing more than one arguments per function call, which is a
limitation I am lifting with this patch.

My main challenge was to choose the most suitable ADT for storing the
specializations. We need an associative container for binding all the
actual arguments of a specialization to the function call. We also
need a consistent iteration order across executions. Lastly we want
to be able to sort the entries by Gain and reject the least profitable
ones.

MapVector fits the bill but not quite; erasing elements is expensive
and using stable_sort messes up the indices to the underlying vector.
I am therefore using the underlying vector directly after calculating
the Gain.

Differential Revision: https://reviews.llvm.org/D119880
2022-03-28 12:01:53 +01:00
Gulfem Savrun Yeniceri ead8586645 [InstrProfiling] Add comments for no runtime hook
This patch adds comments about c7f91e227a, and
follows LLVM style guideline about nested if statements.
2022-03-26 00:26:43 +00:00
Philip Reames f80aaa675f [SLP] Simplify eraseInstruction [NFC]
This simplifies the implementation of eraseInstruction by moving the odd-replace-users-with-undef handling back to the only caller which uses it.  This handling was not obviously correct, so add the asserts which make it clear why this is safe to do at all.  The result is simpler code and stronger assertions.
2022-03-25 12:01:52 -07:00
Florian Hahn 8c3281db49
[ConstraintElimination] Use AddOverflow for offset summation.
Fixes an incorrect transformation due to values overflowing
https://alive2.llvm.org/ce/z/uizoea
2022-03-25 18:08:24 +00:00
Philip Reames 48cc9287f5 Reapply "[SLP] Schedule only sub-graph of vectorizable instructions"" (try 3)
The original commit exposed several missing dependencies (e.g. latent bugs in SLP scheduling).  Most of these were fixed over the weekend and have had several days to bake.  The last was fixed this morning after being noticed in manual review of test changes yesterday.  See the review thread for links to each change.

Original commit message follows:

SLP currently schedules all instructions within a scheduling window which stretches from the first instruction potentially vectorized to the last. This window can include a very large number of unrelated instructions which are not being considered for vectorization. This change switches the code to only schedule the sub-graph consisting of the instructions being vectorized and their transitive users.

This has the effect of greatly reducing the amount of work performed in large basic blocks, and thus greatly improves compile time on degenerate examples. To understand the effects, I added some statistics (not planned for upstream contribution). Here's an illustration from my motivating example:

   Before this patch:

   704357 SLP                          - Number of calcDeps actions
   699021 SLP                          - Number of schedule calls
   5598 SLP                          - Number of ReSchedule actions
   59 SLP                          - Number of ReScheduleOnFail actions
   10084 SLP                          - Number of schedule resets
   8523 SLP                          - Number of vector instructions generated

   After this patch:

   102895 SLP                          - Number of calcDeps actions
   161916 SLP                          - Number of schedule calls
   5637 SLP                          - Number of ReSchedule actions
   55 SLP                          - Number of ReScheduleOnFail actions
   10083 SLP                          - Number of schedule resets
   8403 SLP                          - Number of vector instructions generated

I do want to highlight that there is a small difference in number of generated vector instructions. This example is hitting the bailout due to maximum window size, and the change in scheduling is slightly perturbing when and how we hit it. This can be seen in the RescheduleOnFail counter change. Given that, I think we can safely ignore.

The downside of this change can be seen in the large test diff. We group all vectorizable instructions together at the bottom of the scheduling region. This means that vector instructions can move quite far from their original point in code. While maybe undesirable, I don't see this as being a major problem as this pass is not intended to be a general scheduling pass.

For context, it's worth noting that the pre-scheduling that SLP does while building the vector tree is exactly the sub-graph scheduling implemented by this patch.

Differential Revision: https://reviews.llvm.org/D118538
2022-03-25 10:39:23 -07:00
Philip Reames ec858f0201 [SLP] Optimize stacksave dependence handling [NFC]
After writing the commit message for 4b1bace28, I realized that the mentioned optimization was rather straightforward.  We already have the code for scanning a block during region initialization, so we can simply keep track of whether we've seen a stacksave or stackrestore.  If we haven't, none of these dependencies are relevant and we can avoid the relatively expensive scans entirely.
2022-03-25 10:04:10 -07:00
Philip Reames a16308c282 [SLP] Explicit track required stacksave/alloca dependency (try 3)
This is an extension of commit b7806c to handle one last case noticed in test changes for D118538.  Again, this is thought to be a latent bug in the existing code, though this time I have not managed to reduce tests for the original algorithm.

The prior attempt had failed to account for this case:
  %a = alloca i8
  stacksave
  stackrestore
  store i8 0, i8* %a

If we allow '%a' to reorder into the stacksave/restore region, then the alloca will be deallocated before the use.  We will have taken a well-defined program and introduced a use-after-free bug.

There's also an inverse case where the alloca originally follows the stackrestore, and we need to prevent reordering it above the restore, as shown in the sketch below.
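
A sketch of that inverse case, mirroring the example above (hypothetical, not one of the patch's tests):

  stacksave
  stackrestore
  %a = alloca i8
  store i8 0, i8* %a

If the alloca were hoisted above the stackrestore, the restore would deallocate it before the store.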

Compile time wise, we potentially do an extra scan of the block for each alloca seen in a bundle.  This is significantly more expensive than the stacksave rooted version and is why I'd tried to avoid this in the initial patch.  There is room to optimize this (by essentially caching a "has stacksave" bit per block), but I'm leaving that to future work if it actually shows up in practice.  Since allocas in bundles should be rare in practice, I suspect we can defer the complexity for a long while.
2022-03-25 10:04:10 -07:00
Gulfem Savrun Yeniceri c7f91e227a [InstrProfiling] No runtime hook for unused funcs
CoverageMappingModuleGen generates a coverage mapping record
even for unused functions with internal linkage, e.g.
static int foo() { return 100; }
Clang frontend eliminates such functions, but InstrProfiling pass
still pulls in profile runtime since there is a coverage record.
Fuchsia uses runtime counter relocation, and pulling in profile
runtime for unused functions causes a linker error:
undefined hidden symbol: __llvm_profile_counter_bias.
Since 389dc94d4b, we do not hook profile runtime for the binaries
that none of its translation units have been instrumented in Fuchsia.
This patch extends that for the instrumented binaries that
consist of only unused functions.

Differential Revision: https://reviews.llvm.org/D122336
2022-03-25 17:03:03 +00:00
Florian Hahn e47d220230
[LV] Use getVectorLoopRegion to retrieve header. (NFC)
Update all places that currently assume the entry block to the plan is
also the vector loop header to use getVectorLoopRegion instead.

getVectorLoopRegion will keep doing the right thing when the pre-header
is modeled explicitly (and becomes the new entry block in the plan).
2022-03-25 16:57:12 +00:00
Hongtao Yu 7a316c0a1f [CSSPGO] Turn on profi and ext-tsp when using probe-based profile.
Probe-based profiles lead to better performance when combined with profi and ext-tsp block layout, so I'm turning them on by default.

Reviewed By: wenlei

Differential Revision: https://reviews.llvm.org/D122442
2022-03-25 09:09:21 -07:00
Philip Reames d9756fa723 [slp] Factor out a lambda to avoid duplicating code a third time in an upcoming patch [nfc] 2022-03-25 09:02:39 -07:00
Simon Pilgrim 6a094a6264 [InstCombine] SimplifyDemandedUseBits - remove ashr node if we only demand known sign bits
We already do this for SelectionDAG, but we're missing it here.

Noticed while re-triaging PR21929

Differential Revision: https://reviews.llvm.org/D122340
2022-03-25 15:39:08 +00:00
Johannes Doerfert a81fff8afd Reapply "[Intrinsics] Add `nocallback` to the default intrinsic attributes"
This reverts commit c5f789050d and
reapplies 7aea3ea8c3 with additional test
changes.
2022-03-25 09:36:50 -05:00
Roman Lebedev f6b60b3b79
[SimplifyCFG] `FoldBranchToCommonDest()`: allow branch-on-select
This whole check is bogus; it's some kind of profitability check.
For now, simply extend it to allow not only branch-on-binary-ops,
but also branch-on-poison-safe-logic-ops.

Refs. https://github.com/llvm/llvm-project/issues/53861
Refs. https://github.com/llvm/llvm-project/issues/54553
2022-03-25 16:12:17 +03:00
Simon Pilgrim 1a943923b8 [Utils] stripDebugifyMetadata - use cast<> instead of dyn_cast_or_null<> to avoid dereference of nullptr
The pointer is dereferenced immediately, so assert the cast is correct instead of returning nullptr
2022-03-25 10:25:04 +00:00
Fraser Cormack 2e44b7872b [VectorCombine] Insert addrspacecast when crossing address space boundaries
We cannot bitcast pointers across different address spaces. This was
previously fixed in D89577, but then in D93229 an enhancement was added
which peeks further through the pointer operand, opening up the
possibility that address-space violations could be introduced.

Instead of bailing as the previous fix did, simply insert an
addrspacecast instruction.

Reviewed By: lebedev.ri

Differential Revision: https://reviews.llvm.org/D121787
2022-03-24 19:08:08 +00:00
Johannes Doerfert c5f789050d Revert "[Intrinsics] Add `nocallback` to the default intrinsic attributes"
This reverts commit 7aea3ea8c3 as it
breaks the buildbots.

I didn't see these failures in the pre-merge checks, looking into it.
2022-03-24 14:04:41 -05:00
Johannes Doerfert 7aea3ea8c3 [Intrinsics] Add `nocallback` to the default intrinsic attributes
Most intrinsics, especially "default" ones, will not call back into the
IR module. `nocallback` encodes this nicely. As it was not used before,
this patch also makes use of `nocallback` in the Attributor which
results in many more `norecurse` deductions.

Tablegen part is mechanical, test updates by script.

Differential Revision: https://reviews.llvm.org/D118680
2022-03-24 13:50:54 -05:00
Simon Pilgrim 597aefa89c Fix unused variable warning by embedding inside assertion 2022-03-24 17:41:24 +00:00
Florian Hahn 46432a0088
[VPlan] Add VPWidenPointerInductionRecipe.
This patch moves pointer induction handling from VPWidenPHIRecipe to its
own recipe. In the process, it adds all information required to generate
code for pointer inductions without relying on Legal to access the list
of induction phis.

Alternatively VPWidenPHIRecipe could also take an optional pointer to InductionDescriptor.

Reviewed By: Ayal

Differential Revision: https://reviews.llvm.org/D121615
2022-03-24 14:58:45 +00:00
Sanjay Patel 5dbb53b1b4 [InstCombine] merge shuffled vector negate and multiply
Add the "(0 - X) --> (X * -1)" reverse identity to the list of alternate form binops.

We need a little hack to make the existing logic work because it does not expect to
move constants from op0 to op1, but the code comment hopefully makes that clear.
I don't think there are any other identities like that.

Fixes #54364

Differential Revision: https://reviews.llvm.org/D122390
2022-03-24 10:25:16 -04:00
Djordje Todorovic 9dbc687a5e NFC: [LICM] Update some stale comments
After removing the MaybePromotable, some comments
became stale. This improves them.

Differential Revision: https://reviews.llvm.org/D122319
2022-03-24 14:37:20 +01:00
Alexey Bataev 20973c0841 [SLP][NFC]Fix param name in comments, NFC. 2022-03-24 05:58:42 -07:00
Dávid Bolvanský 4397504c2d [NFCI] Fix set-but-unused warning in InstCombineAddSub.cpp 2022-03-24 08:33:40 +01:00
Dávid Bolvanský 470e1d9584 [NFCI] Fix set-but-unused warning in AddressSanitizer.cpp 2022-03-24 08:13:29 +01:00
Julian Lettner 64902d335c Reland "Lower `@llvm.global_dtors` using `__cxa_atexit` on MachO"
For MachO, lower `@llvm.global_dtors` into `@llvm.global_ctors` with
`__cxa_atexit` calls to avoid emitting the deprecated `__mod_term_func`.

Reuse the existing `WebAssemblyLowerGlobalDtors.cpp` to accomplish this.

Enable fallback to the old behavior via Clang driver flag
(`-fregister-global-dtors-with-atexit`) or llc / code generation flag
(`-lower-global-dtors-via-cxa-atexit`).  This escape hatch will be
removed in the future.

Differential Revision: https://reviews.llvm.org/D121736
2022-03-23 18:36:55 -07:00
Vasileios Porpodas 39aa202aff Recommit "[SLP] Fix lookahead operand reordering for splat loads." attempt 3, fixed assertion crash.
Original review: https://reviews.llvm.org/D121354

This reverts commit e6ead19b77.
2022-03-23 18:32:17 -07:00
Zequan Wu 581dc3c729 Revert "Lower `@llvm.global_dtors` using `__cxa_atexit` on MachO"
This reverts commit 22570bac69.
2022-03-23 16:11:54 -07:00
Johannes Doerfert ee94a4a3d0 [Attributor][FIX] Avoid endless recursion, simple case
There is potential for endless recursion if we try to determine the
underlying objects of a load, just to end up with the load as underlying
object. A proper solution will require us to pass a visited set around.
This will happen as we cleanup genericValueTraversal soon.
2022-03-23 15:55:32 -05:00
minglotus-6 e2074de6a8 [ProfSampleLoader] When disable-sample-loader-inlining is true, merge profiles of inlined instances to outlining versions.
When --disable-sample-loader-inlining is true, skip the inline transformation, but merge profiles of inlined instances into their outlined versions.

Differential Revision: https://reviews.llvm.org/D121862
2022-03-23 13:02:48 -07:00
chenglin.bi 52f323d0f1 [InstCombine] Fold abs of known negative operand when source is sub
When the abs source comes from (x - y), check whether an "x > y" dominating
condition exists.

Fixes #54132

Differential Revision: https://reviews.llvm.org/D122013
2022-03-23 15:21:33 -04:00
Arthur Eubanks 9bd66b312c [PassManager][Coroutine] Run passes under -O0 conditionally and run GlobalDCE
CoroSplit lowers various coroutine intrinsics. It's a CGSCC pass and
CGSCC passes don't run on unreachable functions. Normally GlobalDCE will
come along and delete unreachable functions, but we don't run GlobalDCE
under -O0, so an unreachable function with coroutine intrinsics may
never have CoroSplit run on it.

This patch adds GlobalDCE when coroutine intrinsics are present. It
also now runs all coroutine passes conditionally, when coroutine intrinsics
are present. This should also solve the -O0 regression reported in
D105877 due to LazyCallGraph construction.

Fixes https://github.com/llvm/llvm-project/issues/54117

Reviewed By: ChuanqiXu

Differential Revision: https://reviews.llvm.org/D122275
2022-03-23 11:03:26 -07:00
Arthur Eubanks e6ead19b77 Revert "Recommit "[SLP] Fix lookahead operand reordering for splat loads." attempt 2, fixed assertion crash."
This reverts commit 27bd8f9492.

Causes crashes, see comments in D121973
2022-03-23 10:57:45 -07:00
Nikita Popov 29fada4a3d [EarlyCSE] Don't eagerly optimize MemoryUses
EarlyCSE currently optimizes all MemoryUses upfront. However,
EarlyCSE only actually queries the clobbering memory access for
a subset of uses, namely those where a CSE candidate has already
been identified. Delaying use optimization to the clobber query
improves compile-time in practice.

This change is not NFC because EarlyCSE has a limit on the number
of clobber queries (EarlyCSEMssaOptCap), in which case it falls
back to the defining access. The defining access for uses will now
no longer coincide with the optimized access.

If there are performance regressions from this change, we should
be able to address them by raising this limit.

Differential Revision: https://reviews.llvm.org/D121987
2022-03-23 16:47:35 +01:00
Sanjay Patel 0fcff69bcb [InstCombine] try to narrow shifted bswap-of-zext (2nd try)
The first attempt at this missed a validity check.
This version includes a test of the narrow source
type for modulo-16-bits.

Original commit message:

This is the IR counterpart to 370ebc9d9a
which provided a bswap narrowing fix for issue #53867.

Here we can be more general (although I'm not sure yet
what would happen for illegal types in codegen - too
rare to worry about?):
https://alive2.llvm.org/ce/z/3-CPfo

This will be more effective if we have moved the shift
after the bswap as proposed in D122010, but it is
independent of that patch.
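
A sketch of the narrowing for a 16-bit source (hypothetical example, not one of the patch's tests):

  declare i32 @llvm.bswap.i32(i32)
  declare i16 @llvm.bswap.i16(i16)

  define i32 @src(i16 %x) {
    %z = zext i16 %x to i32
    %b = call i32 @llvm.bswap.i32(i32 %z)
    %r = lshr i32 %b, 16
    ret i32 %r
  }
  define i32 @tgt(i16 %x) {
    %nb = call i16 @llvm.bswap.i16(i16 %x)
    %r = zext i16 %nb to i32
    ret i32 %r
  }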

Differential Revision: https://reviews.llvm.org/D122166
2022-03-23 11:28:37 -04:00
Alexandros Lamprineas a687f96b0f [FuncSpec][NFC] Clang-format the source code and fix debug typo. 2022-03-23 14:39:58 +00:00
Nikita Popov ba36556145 [InstrProfiling] Account for missing bitcast/GEP
This code is supposed to clean up a constexpr bitcast/GEP, but
with opaque pointers this ends up dropping references to the
global.
2022-03-23 15:39:39 +01:00
serge-sans-paille 1b89c83254 Cleanup includes: Transforms/Instrumentation & Transforms/Vectorize
Discourse thread: https://discourse.llvm.org/t/include-what-you-use-include-cleanup
Differential Revision: https://reviews.llvm.org/D122181
2022-03-23 11:06:13 +01:00
Nathan Chancellor 4e0008dcbe
Revert "[InstCombine] try to narrow shifted bswap-of-zext"
This reverts commit 9e9bda2e8f.

This causes a backend error when building the Linux kernel for arm64.
See https://reviews.llvm.org/D122166 for a simplified reproducer.
2022-03-22 17:32:33 -07:00
Vasileios Porpodas 27bd8f9492 Recommit "[SLP] Fix lookahead operand reordering for splat loads." attempt 2, fixed assertion crash.
Original review: https://reviews.llvm.org/D121354

This reverts commit f7d7d2a08d.
2022-03-22 16:41:55 -07:00
Philip Reames 7abefc4222 [instcombine] Fold away memset/memmove from otherwise unused alloca
The motivation for this is that while both memcpyopt and dse will catch this case, both are limited by MSSA's walk back threshold when finding clobbers.  As such, if you have a memcpy of an otherwise dead alloca placed towards the end of a long basic block with lots of other memory instructions, it would be missed.  This is a bit undesirable for such an "obviously" useless bit of code.

As noted in comments, we should probably generalize instcombine's escape analysis peephole (see visitAllocInst) to allow read xor write.  Doing that would subsume this code in a more general way, but is also a more involved change.  For the moment, I went with the easiest fix.
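
A minimal sketch of the kind of code this now removes (hypothetical, not one of the patch's tests):

  declare void @llvm.memset.p0.i64(ptr, i8, i64, i1)

  define void @f() {
    %buf = alloca [128 x i8]
    call void @llvm.memset.p0.i64(ptr %buf, i8 0, i64 128, i1 false)
    ; %buf has no other uses, so the memset is dead and can be dropped
    ret void
  }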
2022-03-22 13:48:48 -07:00
Arthur Eubanks f7d7d2a08d Revert "Recommit "[SLP] Fix lookahead operand reordering for splat loads.""
This reverts commit 79613185d3.

Causes crashes, see comments in https://reviews.llvm.org/D121973.
2022-03-22 13:33:49 -07:00
Sanjay Patel ccf8c969c2 [InstCombine] reorder code, fix formatting; NFC
The affected code can be updated to solve #54364,
so make some cosmetic diffs before real changes.
2022-03-22 16:33:01 -04:00
Florian Hahn 50c8588e44
[LV] Remove Loop argument from createInductionResumeValues (NFCI).
createInductionResumeValues uses its loop argument only to get the
pre-header, but the pre-header is already known (we created/cached it
earlier). Remove the unneeded loop argument.
2022-03-22 14:23:12 +00:00
Sanjay Patel 60820e53ec [InstCombine] try to canonicalize logical shift after bswap
When shifting by a byte-multiple:
bswap (shl X, C) --> lshr (bswap X), C
bswap (lshr X, C) --> shl (bswap X), C

This is an IR implementation of a transform suggested in D120648.
The "swaps cancel" test models the motivating optimization from
that proposal.

Alive2 checks (as noted in the other review, we could use
knownbits to handle shift-by-variable-amount, but that can be an
enhancement patch):
https://alive2.llvm.org/ce/z/pXUaRf
https://alive2.llvm.org/ce/z/ZnaMLf
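
As a rough sketch (hypothetical example, not from the patch's tests), for a shift by one byte:

  declare i32 @llvm.bswap.i32(i32)

  define i32 @src(i32 %x) {
    %s = shl i32 %x, 8
    %b = call i32 @llvm.bswap.i32(i32 %s)
    ret i32 %b
  }
  define i32 @tgt(i32 %x) {
    %b = call i32 @llvm.bswap.i32(i32 %x)
    %r = lshr i32 %b, 8
    ret i32 %r
  }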

Differential Revision: https://reviews.llvm.org/D122010
2022-03-22 09:10:55 -04:00
Djordje Todorovic 91ea247039 [Debugify] Use DebugifyLevel in Debugify original mode
Before this patch, the DebugifyLevel option was used only for
the synthetic mode; after this, it will be used in
the original mode as well.

Differential Revision: https://reviews.llvm.org/D115623
2022-03-22 14:04:56 +01:00
Nikita Popov afb526b3f4 [LICM] Handle store of pointer to itself (PR54495)
Rather than iterating over users and comparing operands, iterate
over uses and check operand number. Otherwise, we'll end up
promoting a store twice if it has two equal operands.

This can only happen with opaque pointers, as otherwise both
operands differ by a level of indirection, so a bitcast would have
to be involved.
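
The problematic pattern is a store whose value operand and pointer operand are the same value, which only comes up directly with opaque pointers, e.g. (sketch):

  store ptr %p, ptr %p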

Fixes https://github.com/llvm/llvm-project/issues/54495.
2022-03-22 14:00:07 +01:00
Sanjay Patel 9e9bda2e8f [InstCombine] try to narrow shifted bswap-of-zext
This is the IR counterpart to 370ebc9d9a
which provided a bswap narrowing fix for issue #53867.

Here we can be more general (although I'm not sure yet
what would happen for illegal types in codegen - too
rare to worry about?):
https://alive2.llvm.org/ce/z/3-CPfo

This will be more effective if we have moved the shift
after the bswap as proposed in D122010, but it is
independent of that patch.

Differential Revision: https://reviews.llvm.org/D122166
2022-03-22 08:22:30 -04:00
Djordje Todorovic 73777b4c35 [Debugify] Optimize debugify original mode
Before we start addressing the issue of having
a lot of false positives when using debugify in
the original mode, we have made a few patches that
should speed up the execution of the testing
utility Passes.

For example, when testing a large project
(let's say LLVM project itself), we can face
a lot of potential DI issues. Usually, we use
-verify-each-debuginfo-preserve (that is very
similar to -debugify-each) -- it collects
DI metadata before each Pass, and after the Pass
it checks if the Pass preserved the DI metadata.
However, we can speed up this process, since we
don't need to collect DI metadata before each
Pass -- we could use the DI metadata that are
collected after the previous Pass from
the pipeline as an input for the next Pass.

This patch speeds up the utility by ~2x.

Differential Revision: https://reviews.llvm.org/D115622
2022-03-22 12:14:00 +01:00
serge-sans-paille a53b689f0c Fix missing include under -DEXPENSIVE_CHECK
Regression introduced by f1985a3f85
2022-03-22 10:37:56 +01:00
serge-sans-paille f1985a3f85 Cleanup includes: Transforms/IPO
Preprocessor output diff: -238205 lines
Discourse thread: https://discourse.llvm.org/t/include-what-you-use-include-cleanup
Differential Revision: https://reviews.llvm.org/D122183
2022-03-22 10:06:28 +01:00
Chuanqi Xu 902f4708fe [NFC] [Coroutines] Remove unnecessary check and constraints on SmallVector
The CoroSplit pass would check for the existence of coroutine intrinsics
before starting work. This is unnecessary and wasteful since it would
iterate over the whole Module.

This patch also removes the corresponding size constraint on the
SmallVector for the possible coroutines in the Module. The original
value was 4, which is a relatively low threshold given that coroutines
are actually used in practice.
2022-03-22 14:24:46 +08:00
Vasileios Porpodas 79613185d3 Recommit "[SLP] Fix lookahead operand reordering for splat loads."
Original review: https://reviews.llvm.org/D121354

The original commit 9136145eb0 broke the build on several targets.

Differential Revision: https://reviews.llvm.org/D121973
2022-03-21 15:57:32 -07:00
Hirochika Matsumoto 86f970e595 [IROutliner][NFC] Fix typo in doc of findOrCreatePHIInBlock
Typo Fix in Documentation

Author: hkmatsumoto

Reviewers: AndrewLitteken

Differential Revision: https://reviews.llvm.org/D121627
2022-03-21 12:34:20 -05:00
Philip Reames ee7324b898 Rename mayBeMemoryDependent to mayHaveNonDefUseDependency [nfc] 2022-03-21 10:01:40 -07:00
Andrew Litteken 4e500df89e [IROutliner] Fix phi nodes when self referential within block but doesn't contain branch
When outlining a phi node, if the incoming branch is a block contained in the region and the branch from that block is not outlined, we create broken code. The fix is to recognize when that branch from the included incoming block is not contained, and ignore the region.

Reviewer: paquette

Differential Revision: https://reviews.llvm.org/D121311
2022-03-21 11:05:15 -05:00
psamolysov-intel 2ed030ba88 [InferAddressSpaces][NFC] Small code improvements for the InferAddressSpaces pass
There are a bunch of code improvements in the patch: marking as const everything that can be
const and fixing some typos in comments.

Also the patch removes the shadowing parameter TTI from the rewriteWithNewAddressSpaces
method, the TTI parameter is not required because the same field is in the class.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D121671
2022-03-21 11:03:12 -05:00
Alexey Bataev 79a182371e [SLP]Make stricter check for instructions that do not require
scheduling.

Need to check that the instructions with external operands can be
reordered safely before actually excluding them from the scheduling.
2022-03-21 06:09:12 -07:00
Sophia 72bde608d2 [LV] Fix typo in comment
Reviewed by: fhahn (Florian Hahn)

Differential Revision: https://reviews.llvm.org/D121781
2022-03-21 20:30:05 +08:00
Florian Hahn 0ebac76e6e
[LV] Remove unneeded Loop argument from completeLoopSkeleton. (NFCI)
completeLoopSkeleton uses its loop argument only to get the
pre-header, but the pre-header is already known (we created/cached it
earlier). Remove the unneeded loop argument.
2022-03-21 10:07:25 +00:00
Andrew Litteken 38e8880e93 [IROutliner] Do not outlined from functions with optnone
Since the IROutliner is performing an optimization, it should not outline from functions explicitly marked with optnone. This adds an extra check and test to make sure this does not occur.

Reviewers: paquette

Differential Revision: https://reviews.llvm.org/D121567
2022-03-20 23:39:23 -05:00
Florian Hahn 487629cc61
[LV] Remove dead Loop argument from emitMemRuntimeChecks. (NFC) 2022-03-20 21:01:15 +00:00
Philip Reames b7806c8b37 [SLP] Explicit track required stacksave/alloca dependency
The semantics of an inalloca alloca instruction require that it not be reordered with a preceding stacksave intrinsic call.  Unfortunately, there's no def/use edge or memory dependence edge.  (The memory point is slightly subtle, but in general a new allocation can't alias with a call which executes strictly before it comes into existence.)

I'd tried to tackle this same case previously in 689babdf6, but the fix chosen there turned out to be incomplete.  As such, this change contains a full revert of the first fix attempt.

This was noticed when investigating problems which surfaced with D118538, but this is definitely an existing bug.  This time around, I managed to reduce a couple of additional cases, including one which was being actively miscompiled even without the new scheduling change.  (See test diffs)

Compile time wise, we only spend extra time when seeing a stacksave (rare), and even then we walk the block at most once per schedule window extension.  Likely a non-issue.
2022-03-20 13:58:45 -07:00
Kazu Hirata bce1bf0ee2 [Transform] Apply clang-tidy fixes for readability-redundant-smartptr-get (NFC) 2022-03-20 10:41:22 -07:00
Philip Reames 6253b77da9 [SLP] Respect control dependence within a block during scheduling
This fixes an active miscompile visible in the test changes.  The basic problem is that the scheduling dependency graph didn't have any edges for control dependence within a single basic block.  The result is that we could (and in some rare cases *did*) perform reorderings within a block which could introduce new undefined behavior along paths which didn't previously contain any.

Impact wise, we have two major cases where control is not guaranteed to reach a later instruction in the block: may-throw calls, and calls containing infinite loops.
* The former case was mostly covered by the memory dependencies, and triggering it requires a function which can throw, but not write to memory.  In theory, such a case is possible, but not likely in practice.
* The latter case is likely more of an issue in practice.  After this code was first written, we changed the IR semantics to allow well-defined infinite loops without satisfying mustprogress.  Even for C/C++ - which do imply mustprogress - recent changes to how we treat atomics (e.g. an atomic read does not always imply a write) could expose this issue.  I'm a bit shocked we don't seem to have a bug report which hit this in real code actually.

Compile time wise, this results in a single extra scan of the scheduling window in the common case.  Since we stop scanning at the next instruction which isn't guaranteed to execute, no matter what order we traverse instructions in, we scan the block once.  The exception to this is that when we extend the scheduling window downwards, we invalidate all dependencies, and thus rescan.  So the potentially expensive case is when we have a call in a big schedule window which is frequently extended.  We could optimize this case (by caching the last instruction not guaranteed to transfer execution and scanning only the extended window starting there), but I decided to leave the complexity until it mattered.  That same case is already degenerate with memory dependences, which is more expensive than the control dependence scan.

We could also consider combining the memory dependence and control dependence sets to reduce memory usage, but since it complicates the code slightly and makes debugging a bit harder, I went with the simplest scheme for now.

This was noticed while trying to understand the failures reported against D118538, but is not otherwise related to that change.
2022-03-19 13:36:24 -07:00
Florian Hahn 1a820ff039
[LV] Remove unnecessary uses of Loop* (NFC).
Update functions that previously took a loop pointer but only to get the
pre-header. Instead, pass the block directly. This removes the
requirement for the loop object to be created up-front.
2022-03-19 20:18:47 +00:00
Johannes Doerfert 4166738c38 [OpenMP][FIX] Do not crash when kernels are debug wrapper functions
With debug information enabled (-g) Clang will wrap the actual target
region into a new function which is called from the "kernel". The problem
is that the "kernel" is now basically a wrapper without all the things
we expect. More importantly, if we end up asking for an AAKernelInfo
for the "target region function" we might try to turn it into SPMD mode.
That used to cause an assertion as that function doesn't have an
appropriately named `_exec_mode` global. While the global is going away
soon we still need to make sure to properly handle this case, e.g.,
perform optimizations reliably.

Differential Revision: https://reviews.llvm.org/D122043
2022-03-19 14:15:55 -05:00
Fangrui Song c6692f819e [GlobalOpt] Don't replace alias with aliasee if either alias/aliasee may be preemptible
Generalize D99629 for ELF. A default visibility non-local symbol is preemptible
in a -shared link. `isInterposable` is an insufficient condition.

Moreover, a non-preemptible alias may be referenced in a sub constant expression
which intends to lower to a PC-relative relocation. Replacing the alias with a
preemptible aliasee may introduce a linker error.

Respect dso_preemptable and suppress optimization to fix the above issues. With
the change, `alias = 345` will not be rewritten to use aliasee in a `-fpic`
compile.
```
int aliasee;
extern int alias __attribute__((alias("aliasee"), visibility("hidden")));
void foo() { alias = 345; } // intended to access the local copy
```

While here, refine the condition for the alias as well.

For some binary formats like COFF, `isInterposable` is a sufficient condition.
But I think canonicalization for the changed case has little advantage, so I
don't bother to add the `Triple(M.getTargetTriple()).isOSBinFormatELF()` or
`getPICLevel/getPIELevel` complexity.

For instrumentations, it's recommended not to create aliases that refer to
globals that have weak linkage or are preemptible. However, the following is
supported and the IR needs to handle such cases.
```
int aliasee __attribute__((weak));
extern int alias __attribute__((alias("aliasee")));
```

There are other places where GlobalAlias isInterposable usage may need to be
fixed.

Reviewed By: rnk

Differential Revision: https://reviews.llvm.org/D107249
2022-03-18 14:17:05 -07:00
Philip Reames 1093949cff [SLP] Add comment clarifying assumption that tripped me up [NFC]
I keep thinking this assumption is probably exploitable for a bug in the existing implementation, but all of my attempts at writing a test case have failed.  So for the moment, just document this very subtle assumption.
2022-03-18 11:40:19 -07:00
Kazu Hirata 3e0f7c7881 [Vectorize] Fix an 'unused function' warning
This patch fixes:

  llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp:3917:13: error:
  unused function 'needToScheduleSingleInstruction'
  [-Werror,-Wunused-function]
2022-03-18 11:24:57 -07:00
Kazu Hirata b3d8c0d069 [Vectorize] Fix an 'unused variable' warning
This patch fixes:

  llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp:8148:18: error:
  unused variable 'SDTE' [-Werror,-Wunused-variable]
2022-03-18 11:24:54 -07:00
Nick Desaulniers e1bae23f6f [SCCP] do not clean up dead blocks that have their address taken
[SCCP] do not clean up dead blocks that have their address taken

Fixes a crash observed in IPSCCP.

Because the SCCPSolver has already internalized BlockAddresses as
Constants or ConstantExprs, we don't want to try to update their Values
in the ValueLatticeElement. Instead, continue to propagate these
BlockAddress Constants, continue converting BasicBlocks to unreachable,
but don't delete the "dead" BasicBlocks which happen to have their
address taken.  Leave replacing the BlockAddresses to another pass.

Fixes: https://github.com/llvm/llvm-project/issues/54238
Fixes: https://github.com/llvm/llvm-project/issues/54251

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D121744
2022-03-18 11:02:15 -07:00
Philip Reames 8f108c32bc Revert "[SLP] Optionally preserve MemorySSA"
This reverts commit 1cfa986d68.  See https://github.com/llvm/llvm-project/issues/54256 for why I'm discontinuing the project.

Separately, it turns out that while this patch does correctly preserve MSSA, it's correct only at the end of the pass, not between vectorization attempts.  Even if we decide to resurrect this, we'll need to fix that before reapplying.
2022-03-18 10:45:59 -07:00
Florian Mayer 078b546555 [HWASan] do not replace lifetime intrinsics with tagged address.
Quote from the LLVM Language Reference
  If ptr is a stack-allocated object and it points to the first byte of the
  object, the object is initially marked as dead. ptr is conservatively
  considered as a non-stack-allocated object if the stack coloring algorithm
  that is used in the optimization pipeline cannot conclude that ptr is a
  stack-allocated object.

By replacing the alloca pointer with the tagged address before this change,
we confused the stack coloring algorithm.

Reviewed By: eugenis

Differential Revision: https://reviews.llvm.org/D121835
2022-03-18 10:39:51 -07:00
Florian Mayer dbc918b649 Revert "[HWASan] do not replace lifetime intrinsics with tagged address."
Failed on buildbot:

/home/buildbot/buildbot-root/llvm-clang-x86_64-sie-ubuntu-fast/build/bin/llc: error: : error: unable to get target for 'aarch64-unknown-linux-android29', see --version and --triple.
FileCheck error: '<stdin>' is empty.
FileCheck command line:  /home/buildbot/buildbot-root/llvm-clang-x86_64-sie-ubuntu-fast/build/bin/FileCheck /home/buildbot/buildbot-root/llvm-project/llvm/test/Instrumentation/HWAddressSanitizer/stack-coloring.ll --check-prefix=COLOR

This reverts commit 208b923e74.
2022-03-18 10:04:48 -07:00
Florian Hahn 5ab421fb4e
[LICM] Add allowspeculation pass options.
This adds a new option to control AllowSpeculation added in D119965 when
using `-passes=...`.

This allows reproducing #54023 using opt.

Reviewed By: aeubanks

Differential Revision: https://reviews.llvm.org/D121944
2022-03-18 16:51:57 +00:00
Florian Mayer 208b923e74 [HWASan] do not replace lifetime intrinsics with tagged address.
Quote from the LLVM Language Reference
  If ptr is a stack-allocated object and it points to the first byte of the
  object, the object is initially marked as dead. ptr is conservatively
  considered as a non-stack-allocated object if the stack coloring algorithm
  that is used in the optimization pipeline cannot conclude that ptr is a
  stack-allocated object.

By replacing the alloca pointer with the tagged address before this change,
we confused the stack coloring algorithm.

Reviewed By: eugenis

Differential Revision: https://reviews.llvm.org/D121835
2022-03-18 09:45:05 -07:00
Nikita Popov ab2284a643 [LowerConstantIntrinsics] Make TLI a required dependency
The way the pass is actually used in the optimization pipeline,
TLI will be available, but this is not the case when running just
-lower-constant-intrinsics in tests, which ends up being quite
confusing.

Require TLI unconditionally, as we usually do.
2022-03-18 14:59:18 +01:00
Nikita Popov fc8946fae7 [InstCombine] Remove integer SPF of SPF folds (NFCI)
Now that we canonicalize to intrinsics, these folds should no
longer be needed. Only one fold that also applies to floating-point
min/max is retained.
2022-03-18 10:20:48 +01:00
Nikita Popov f96428e16d [MemorySSA] Don't optimize uses during construction
This changes MemorySSA to be constructed in unoptimized form.
MemorySSA::ensureOptimizedUses() can be called to optimize all
uses (once). This should be done by passes where having optimized
uses is beneficial, either because we're going to query all uses
anyway, or because we're doing def-use walks.

This should help reduce the compile-time impact of MemorySSA for
some use cases (the reason why I started looking into this is
D117926), which can avoid optimizing all uses upfront, and instead
only optimize those that are actually queried.

Actually, we have an existing use-case for this, which is EarlyCSE.
Disabling eager use optimization there gives a significant
compile-time improvement, because EarlyCSE will generally only query
clobbers for a subset of all uses (this change is not included in
this patch).

Differential Revision: https://reviews.llvm.org/D121381
2022-03-18 09:56:16 +01:00
Florian Hahn 4a699ae9c6
[LoopSimplifyCFG] Check predecessors of exits before marking them dead.
LoopSimplifyCFG may process loops that are not in
loop-simplify/canonical form. For loops not in canonical form, exit
blocks may be reachable from non-loop blocks and we cannot consider them
as dead if they only are not reachable from the loop itself.

Unfortunately the smallest test I could come up with requires running
multiple passes:
    -passes='loop-mssa(loop-instsimplify,loop-simplifycfg,simple-loop-unswitch)'

The reason is that loops are canonicalized at the beginning of loop
pipelines, so a later transform has to break canonical form in a way
that breaks LoopSimplifyCFG's dead-exit analysis.

Alternatively we could try to require all loop passes to maintain
canonical form. That in turn would also require additional verification.

Fixes #54023, #49931.

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D121925
2022-03-18 08:54:44 +00:00
Andrew Wei 0af3e6a22d [InstCombine] Sink instructions with multiple users in a successor block.
This patch tries to sink instructions when they are only used in a successor block.

This is a further enhancement patch based on Anna's commit:
D109700, which allows sinking an instruction having multiple uses in a single user.

In this patch, sink instructions with multiple users in a single successor block will be supported.
It could fix a known issue from rust:
  https://github.com/rust-lang/rust/issues/51346#issuecomment-394443610
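For illustration, a hedged sketch of the kind of case this enables (identifiers invented, not taken from the patch's tests): %add has two distinct users, but both live in the single successor %succ, so it can now be sunk there.
```
define i32 @sink_multi_user(i32 %a, i32 %b, i1 %cond) {
entry:
  %add = add i32 %a, %b
  br i1 %cond, label %succ, label %exit

succ:                                 ; both users of %add are in this block
  %u1 = mul i32 %add, 3
  %u2 = xor i32 %add, 7
  %sum = add i32 %u1, %u2
  br label %exit

exit:
  %res = phi i32 [ %sum, %succ ], [ 0, %entry ]
  ret i32 %res
}
```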

Reviewed By: nikic, reames

Differential Revision: https://reviews.llvm.org/D121585
2022-03-18 11:53:45 +08:00
Vasileios Porpodas 9136145eb0 Revert "[SLP] Fix lookahead operand reordering for splat loads." due to build failures
This reverts commit 5efa78985b.
2022-03-17 18:22:04 -07:00
Vasileios Porpodas 5efa78985b [SLP] Fix lookahead operand reordering for splat loads.
Splat loads are inexpensive in X86. For a 2-lane vector we need just one
instruction: `movddup (%reg), xmm0`. Using the standard Splat score leads
to worse code. This patch adds a new score dedicated for splat loads.

Please note that a splat is usually three IR instructions:
- It is usually a load and 2 inserts:
 %ld = load double, double* %gep
 %ins1 = insertelement <2 x double> poison, double %ld, i32 0
 %ins2 = insertelement <2 x double> %ins1, double %ld, i32 1

- But it can also be a load, an insert and a shuffle:
 %ld = load double, double* %gep
 %ins = insertelement <2 x double> poison, double %ld, i32 0
 %shf = shufflevector <2 x double> %ins, <2 x double> poison, <2 x i32> zeroinitializer

Because of this some of the lit tests contain more IR instructions.

Differential Revision: https://reviews.llvm.org/D121354
2022-03-17 18:05:54 -07:00
Paul Kirth 964398ccb1 Revert "Revert "Revert "[misexpect] Re-implement MisExpect Diagnostics"""
This reverts commit 6cf560d69a.
2022-03-18 00:21:33 +00:00
Paul Kirth 6cf560d69a Revert "Revert "[misexpect] Re-implement MisExpect Diagnostics""
I mistakenly reverted my commit, so I'm relanding it.

This reverts commit 10866a1df4.
2022-03-18 00:04:22 +00:00
Paul Kirth 10866a1df4 Revert "[misexpect] Re-implement MisExpect Diagnostics"
This reverts commit e7749d4713.
2022-03-17 23:54:26 +00:00
Paul Kirth e7749d4713 [misexpect] Re-implement MisExpect Diagnostics
Reimplements MisExpect diagnostics from D66324 to reconstruct its
original checking methodology only using MD_prof branch_weights
metadata.

New checks rely on 2 invariants:

1) For frontend instrumentation, MD_prof branch_weights will always be
   populated before llvm.expect intrinsics are lowered.

2) for IR and sample profiling, llvm.expect intrinsics will always be
   lowered before branch_weights are populated from the IR profiles.

These invariants allow the checking to assume how the existing branch
weights are populated depending on the profiling method used, and emit
the correct diagnostics. If these invariants are ever invalidated, the
MisExpect related checks would need to be updated, potentially by
re-introducing MD_misexpect metadata, and ensuring it always will be
transformed the same way as branch_weights in other optimization passes.

Frontend based profiling is now enabled without using LLVM Args, by
introducing a new CodeGen option, and checking if the -Wmisexpect flag
has been passed on the command line.

Differential Revision: https://reviews.llvm.org/D115907
2022-03-17 23:46:23 +00:00
Johannes Doerfert 4308fdf83b [Attributor] Remove more non-deterministic behavior and debug output 2022-03-17 17:42:32 -05:00
Johannes Doerfert 59a6b668ab [OpenMP][FIX] Initialize member to avoid undefined value in debug output 2022-03-17 17:42:32 -05:00
Johannes Doerfert 88ea86c369 [Attributor][FIX] Remove reference into map that might dangle
The reference was taken and the map was modified after. This can (and
did) lead to dangling pointers and all sorts of problems afterwards.
2022-03-17 17:42:32 -05:00
Ellis Hoag f6b5142ac2 [AlwaysInliner] Emit inline remark only when successful
Failures in `InlineFunction()` are caught after D121722, but `emitInlinedIntoBasedOnCost()` should only be called when inlining is successful. This also removes an unnecessary call to `shouldInline()` which always returned `InlineCost::getAlways()`.

Reviewed By: kyulee, nikic

Differential Revision: https://reviews.llvm.org/D121946
2022-03-17 15:40:24 -07:00
Kyungwoo Lee ddb85f34f5 [ObjCARC] Fix non-determinism
We often failed in the assertion, non-deterministically with a large IR:
```
Assertion `notDifferentParent(LocA.Ptr, LocB.Ptr) && "BasicAliasAnalysis doesn't support interprocedural queries."
```
Looking at the comment in https://reviews.llvm.org/D87806, it appears it's actually a module pass for new PM while the legacy PM still works as a function pass.
The fix is to align the same behavior in between new PM and old PM, which initializes ObjCARCContract for each function.

Reviewed By: aeubanks

Differential Revision: https://reviews.llvm.org/D121949
2022-03-17 15:01:09 -07:00
Andrew Litteken f7d90ad57b [IROutliner] Make sure that loop debug info is stripped.
As pointed out in https://github.com/llvm/llvm-project/issues/54155#issuecomment-1057465479, there was a crash when loop info was being outlined. It was not being properly stripped and adjusted, so it would point to the wrong location. This uses similar logic to that found in the CodeExtractor to adjust the loop debug info.

Reviewer: fhahn, paquette

Differential Revision: https://reviews.llvm.org/D120869
2022-03-17 14:41:53 -06:00
Alexey Bataev d65cc85977 [SLP]Do not schedule instructions with constants/argument/phi operands and external users.
No need to schedule entry nodes where all instructions are not memory
read/write instructions and their operands are either constants, or
arguments, or phis, or instructions from other blocks, or their users
are phis or from other blocks.
The resulting vector instructions can be placed at
the beginning of the basic block without scheduling (if the operands do
not need to be scheduled) or at the end of the block (if users are
outside of the block).
It may save some compile time and scheduling resources.

Differential Revision: https://reviews.llvm.org/D121121
2022-03-17 11:03:45 -07:00
Julian Lettner 22570bac69 Lower `@llvm.global_dtors` using `__cxa_atexit` on MachO
For MachO, lower `@llvm.global_dtors` into `@llvm.global_ctors` with
`__cxa_atexit` calls to avoid emitting the deprecated `__mod_term_func`.

Reuse the existing `WebAssemblyLowerGlobalDtors.cpp` to accomplish this.

Enable fallback to the old behavior via Clang driver flag
(`-fregister-global-dtors-with-atexit`) or llc / code generation flag
(`-lower-global-dtors-via-cxa-atexit`).  This escape hatch will be
removed in the future.

Differential Revision: https://reviews.llvm.org/D121736
2022-03-17 10:47:13 -07:00
Ellis Hoag 84c6689b15 [AlwaysInliner] Check inliner errors even without asserts
When we build clang without asserts we should still check the result of
`InlineFunction()` to be sure there wasn't an error. Otherwise we could
incorrectly merge attributes in the next line.

This also removes a redundant call to `getCaller()`.

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D121722
2022-03-17 10:16:23 -07:00
Fraser Cormack fe74183564 [Coroutines][NFC] Format line to 80 cols 2022-03-17 15:34:24 +00:00
Marco Elver cbe1e67ead [Instruction] Introduce getAtomicSyncScopeID()
An analysis may just be interested in checking whether an instruction is
atomic and whether it is system scoped or single-thread scoped, like ThreadSanitizer's
isAtomic(). Unfortunately Instruction::isAtomic() can only answer the
"atomic" part of the question, but to also check scope becomes rather
verbose.

To simplify and reduce redundancy, introduce a common helper
getAtomicSyncScopeID() which returns the scope of an atomic operation.
Start using it in ThreadSanitizer.

NFCI.

Reviewed By: dvyukov

Differential Revision: https://reviews.llvm.org/D121910
2022-03-17 14:59:37 +01:00
Florian Hahn 151c144350
[LV] Use usesScalars in widenPHIInstruction.
This uses the existing VPlan helpers to check whether there are scalar
uses of a phi recipe. It remove one of the few remaining dependencies on
the cost model from VPlan code generation.

Depends on D121612.

Reviewed By: Ayal

Differential Revision: https://reviews.llvm.org/D121613
2022-03-17 13:16:32 +00:00
Florian Hahn a6e70e4056
[VPlan] VPInterleaveRecipe only requires the first lane of the address.
VPInterleaveRecipe only uses the first lane of the address. Add
onlyFirstLaneUsed implementation. This is needed for a follow-up patch.

Reviewed By: Ayal

Differential Revision: https://reviews.llvm.org/D121612
2022-03-17 11:56:43 +00:00
Nikita Popov 1dbeb64493 [SLP] Avoid unnecessary getIncomingValueForBlock() call (NFC)
This code just wants to check all incoming values; we don't care
what the incoming block is here.
2022-03-17 12:23:46 +01:00
Nikita Popov 4010a7a5d0 Reapply [InstCombine] Support switch in phi to cond fold
Reapply with an explicit check for multi-edges, as the expected
behavior of multi-edge dominance is unclear (D120811).

-----

For conditional branches, we know the value is i1 0 or i1 1 along
the outgoing edges. For switches we can apply exactly the same
optimization, just with the known values determined by the switch
cases.
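As a hedged illustration (a made-up example, not from the patch): along the edge out of a switch case, the switched-on value is known to equal the case constant, so a phi incoming value can be replaced by that constant, exactly as for i1 values on conditional-branch edges.
```
define i8 @phi_over_switch(i8 %x) {
entry:
  switch i8 %x, label %default [
    i8 0, label %case0
    i8 7, label %case7
  ]

case0:
  br label %join

case7:
  br label %join

default:
  br label %join

join:
  ; %x is known to be 0 on the %case0 edge and 7 on the %case7 edge,
  ; so those incoming values can become the constants 0 and 7.
  %p = phi i8 [ %x, %case0 ], [ %x, %case7 ], [ 1, %default ]
  ret i8 %p
}
```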
2022-03-17 10:03:09 +01:00
Alexey Bataev 150ea76543 Revert "[SLP]Do not schedule instructions with constants/argument/phi operands and external users."
This reverts commit 1eeb2bfe72 to fix
a bug reported in https://reviews.llvm.org/D121121
2022-03-16 13:54:59 -07:00
Florian Hahn 470a975c84
[ConstraintElimination] Add missing dominance check.
When dealing with an unconditional branch, the condition can only be added
if BB properly dominates the successor.
2022-03-16 20:01:24 +00:00
Malhar Jajoo a36d269658 [VPlan] Avoid collecting scalars for SVE
This patch ensures scalars (except for uniforms) are no
longer collected (prior to LVP planning phase) for
scalable vectorization.

This is to avoid the chances of generating scalarized
instructions later (during LVP execute phase) as they
are not supported for scalable vectorization.

Relevant test has also been added.

Differential Revision: https://reviews.llvm.org/D121452
2022-03-16 16:33:34 +00:00
Nikita Popov d7cf7ec05d [SROA] Handle over-large loads during presplitting
When a load extends past the extent of the alloca, SROA will
restrict the slice size to extend to the end of the alloca only.
However, presplitting was asserting that the load size and the
slice size match exactly, which does not hold in this case.
Relax the assertion to only require that the load size is greater
or equal than the slice size.
2022-03-16 15:41:11 +01:00
Florian Hahn f473d4aa80
[ConstraintElimination] Support BBs with single successor in CanAdd.
If BB has a single successor, conditions can be added safely.
2022-03-16 14:13:52 +00:00
Alexey Bataev 1eeb2bfe72 [SLP]Do not schedule instructions with constants/argument/phi operands and external users.
No need to schedule entry nodes where all instructions are not memory
read/write instructions and their operands are either constants, or
arguments, or phis, or instructions from other blocks, or their users
are phis or from other blocks.
The resulting vector instructions can be placed at
the beginning of the basic block without scheduling (if the operands do
not need to be scheduled) or at the end of the block (if users are
outside of the block).
It may save some compile time and scheduling resources.

Differential Revision: https://reviews.llvm.org/D121121
2022-03-16 06:05:43 -07:00
Florian Hahn e5822ded56
[FunctionAttrs] Infer argmemonly .
This patch adds initial argmemonly inference, by checking the underlying
objects of locations returned by MemoryLocation.

I think this should cover most cases, except function calls to other
argmemonly functions.

I'm not sure if there's a reason why we don't infer those yet.

Additional argmemonly can improve codegen in some cases. It also makes
it easier to come up with a C reproducer for 7662d1687b (already fixed,
but I'm trying to see if C/C++ fuzzing could help to uncover similar
issues.)

Compile-time impact:
NewPM-O3: +0.01%
NewPM-ReleaseThinLTO: +0.03%
NewPM-ReleaseLTO+g: +0.05%

https://llvm-compile-time-tracker.com/compare.php?from=067c035012fc061ad6378458774ac2df117283c6&to=fe209d4aab5b593bd62d18c0876732ddcca1614d&stat=instructions

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D121415
2022-03-16 10:24:33 +00:00
Nikita Popov 20531b3a6b [RelLookupTableConverter] Avoid querying TTI for declarations
This code queries TTI on a single function, which is considered to
be representative. This is a bit odd, but probably fine in practice.

However, I think we should at least avoid querying declarations,
which e.g. will generally lack target attributes, and for which
we don't seem to ever query TTI in other places.
2022-03-16 10:39:28 +01:00
Philip Reames 1cfa986d68 [SLP] Optionally preserve MemorySSA
This initial patch adds code to preserve MemorySSA through a run of SLP vectorizer. The eventual plan is to use MemorySSA to accelerate SLP's memory dependence checking, but we're a ways from that.  In particular, this patch is correct, but really slow. It's being landed so that we can work incrementally in tree, not because it's expected to be useful to anyone just yet.

The broader effort is being tracked in https://github.com/llvm/llvm-project/issues/54256.  It's worth noting explicitly that this may not work out, and if not, we will be reverting all of the MSSA support in SLP at some point in the next few weeks.

Differential Revision: https://reviews.llvm.org/D117926
2022-03-15 16:36:15 -07:00
Florian Hahn 014f5bcf7a
[FunctionAttrs] Replace MemoryAccessKind with FMRB.
Update FunctionAttrs to use FunctionModRefBehavior instead of
MemoryAccessKind.

This allows for adding support for inferring argmemonly and others,
see D121415.

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D121460
2022-03-15 19:35:54 +00:00
Sanjay Patel 598721f866 [InstCombine] try harder to propagate 'nsz' through fneg-of-select
This can be viewed as swapping the select arms:
https://alive2.llvm.org/ce/z/jUvFMJ
...so we don't have the 'nsz' problem with the more general fold.

This unlocks other folds for the motivating fabs example.
This was discussed in issue #38828.
2022-03-15 11:05:29 -04:00
Simon Pilgrim 7e4cf582cf [InstCombine] Add general constant support to eq/ne icmp(add(X,C1),add(Y,C2)) -> icmp(add(X,C1-C2),Y) fold
A further extension for Issue #32161

For eq/ne comparisons - the sign mismatch and bounds constraints are redundant, so if that fold fails, fall back and just fold the constants directly.

https://alive2.llvm.org/ce/z/cdodNQ
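A hedged example of the eq/ne case (constants chosen arbitrarily): since equality is unaffected by wrapping, the two constants can be folded together even when the usual sign/bounds checks fail.
```
define i1 @src(i8 %x, i8 %y) {
  %ax = add i8 %x, 10
  %ay = add i8 %y, 3
  %c = icmp eq i8 %ax, %ay
  ret i1 %c
}

; can become:
define i1 @tgt(i8 %x, i8 %y) {
  %ax = add i8 %x, 7                  ; 10 - 3
  %c = icmp eq i8 %ax, %y
  ret i1 %c
}
```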

The loop rotation test change looks mostly benign - the backend doesn't seem to suffer? https://gcc.godbolt.org/z/dErMY78To

Differential Revision: https://reviews.llvm.org/D121551
2022-03-15 14:17:38 +00:00
Simon Pilgrim 7262eacd41 Revert rG9c542a5a4e1ba36c24e48185712779df52b7f7a6 "Lower `@llvm.global_dtors` using `__cxa_atexit` on MachO"
Many of the build bots are complaining: Unknown command line argument '-lower-global-dtors'
2022-03-15 13:01:35 +00:00
Nikita Popov 875782bd9e [OpenMPOpt] Avoid pointer element type access during region merging
Hardcode the function type as ParallelTask, which is the guaranteed
pointee type of this runtime function argument (if pointee types
exist). The elimination of the callee bitcast is left for InstCombine.

Differential Revision: https://reviews.llvm.org/D120885
2022-03-15 09:52:46 +01:00
Florian Hahn ca1b2fc9fb
[LV] Remove LoopVectorBody from InnerLoopVectorizer. (NFCI)
Update places still referencing LoopVectorBody to use the vector loop to
get the vector loop header. This is needed to move vector loop
code-generation to VPlan completely, which in turn is needed to model
pre-header & exit blocks in VPlan as well.
2022-03-15 08:22:31 +00:00
Julian Lettner 9c542a5a4e Lower `@llvm.global_dtors` using `__cxa_atexit` on MachO
For MachO, lower `@llvm.global_dtors` into `@llvm.global_ctors` with
`__cxa_atexit` calls to avoid emitting the deprecated `__mod_term_func`.

Reuse the existing `WebAssemblyLowerGlobalDtors.cpp` to accomplish this.

Enable fallback to the old behavior via Clang driver flag
(`-fregister-global-dtors-with-atexit`) or llc / code generation flag
(`-lower-global-dtors-via-cxa-atexit`).  This escape hatch will be
removed in the future.

Differential Revision: https://reviews.llvm.org/D121327
2022-03-14 17:51:18 -07:00
Andrew Browne dbf8c00b09 [DFSan] Remove trampolines to unblock opaque pointers. (Reland with fix)
https://github.com/llvm/llvm-project/issues/54172

Reviewed By: pcc

Differential Revision: https://reviews.llvm.org/D121250
2022-03-14 16:03:25 -07:00
Andrew Litteken 228cc2c38b [IROutliner] Ensure merged PHINodes respect order and incoming blocks, not just incoming values
When matching PHINodes while merging functions, the IROutliner only checks that an incoming value exists in a phi node in the overall function. It doesn't check the length, the order, or that the incoming block also matches. In the given example, we see that both phi nodes have the same incoming values, but from different blocks.

The fix is to enforce a stricter match of the incoming value, and of the incoming block as well, when matching the created phi nodes.

Reviewers: paquette

Differential Revision: https://reviews.llvm.org/D121310
2022-03-14 16:48:21 -05:00
Craig Topper ce78e68261 [InstCombine] Fold select based logic of fcmps with same operands when FMF is present.
If we have a logical and/or in select form and the true/false operand
is an fcmp with poison generating FMF, we won't be able to fold it
to an and/or instruction. This prevents us from optimizing the case
where it is a logical operation of two fcmps with identical operands.

This patch adds explicit checks for this case that doesn't rely on
converting to and/or to do the optimization. It reuses the existing
foldLogicOfFCmps, but adds a new flag to disable the other combine
that is inside that function.

FMF flags from the two FCmps are intersected using the logic added in
D121243. The FIXME has been updated to indicate that we can only use
a union for the non-select form.

This allows us to optimize cases like this from compare-fp-3.c in the
gcc torture suite with fast math.

void
test1 (float x, float y)
{
  if ((x==y) && (x!=y))
    link_error0();
}
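The IR form of the case above would look roughly like this (a hedged sketch; the FMF and value names are illustrative): a select-form logical "and" of two fcmps on the same operands, which the new check can fold even though the poison-generating flags prevent turning the select into a plain `and`.
```
define i1 @logical_and_of_fcmps(float %x, float %y) {
  %c1 = fcmp nnan oeq float %x, %y
  %c2 = fcmp nnan une float %x, %y
  %r = select i1 %c1, i1 %c2, i1 false   ; logical and; folds to false
  ret i1 %r
}
```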

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D121323
2022-03-14 14:45:07 -07:00
Nick Desaulniers 236695e70c [IRLinker] make IRLinker::AddLazyFor optional (llvm::unique_function). NFC
2 of the 3 callsite of IRMover::move() pass empty lambda functions. Just
make this parameter llvm::unique_function.

Came about via discussion in D120781. Probably worth making this change
regardless of the resolution of D120781.

Reviewed By: dexonsmith

Differential Revision: https://reviews.llvm.org/D121630
2022-03-14 14:37:34 -07:00
Andrew Browne edc33fa569 Revert "[DFSan] Remove trampolines to unblock opaque pointers."
This reverts commit 84af90336f.
2022-03-14 13:47:41 -07:00
Andrew Browne 84af90336f [DFSan] Remove trampolines to unblock opaque pointers.
https://github.com/llvm/llvm-project/issues/54172

Reviewed By: pcc

Differential Revision: https://reviews.llvm.org/D121250
2022-03-14 13:39:49 -07:00
Andrew Litteken c79ab1065e [IROutliner] Separate split PHI nodes from multiple exits by different outlinable regions.
The IR Outliner is supposed to extract the outputs contained in an external phi node and place them into a phi node contained within the outlined function. However, when the output values of two outlined functions with two different output sets are contained within the same phi node, they are counted as the same exit path when first analyzed. In reality, these create two different phi nodes, creating an inconsistency, resulting in a mismatch in the expected number of output paths and a crash.  This fixes that counting when analyzing the outputs by also analyzing the incoming blocks rather than just the incoming values.

Reviewer: paquette

Differential Revision: https://reviews.llvm.org/D121313
2022-03-14 14:56:59 -05:00
Florian Hahn 4a0481e981
[LV] Check for users of truncated IVs, add more detailed comment.
Add missing outside user check for truncated IVs. Also hoist the code in
the helper with additional explanations.

Fixes #54370.
2022-03-14 19:39:30 +00:00
Teresa Johnson fee0bde4c6 [WPD] Extend checking mode to support fallback to indirect call
Extend -wholeprogramdevirt-check to support both the existing
trapping mode on an incorrect devirtualization, as well as a new
mode to fallback to an indirect call on a mismatch. The new mode is

The new mode is useful in cases where we want to enable
devirtualization but cannot fully guarantee whole program visibility
(e.g in the case where LTO has been disabled for a small set of objects
that could potentially override virtual methods without having a symbol
reference to anything in the base class including the vtable).

Remove !prof and !callees metadata (which are used by indirect call
promotion) from both the new direct call and the fallback indirect call
(so that we don't perform another round of promotion on the latter).
Also remove it from the direct call in the non-fallback cases, which
was an oversight, although it didn't seem to cause any issues. Add tests
for the metadata removal covering the various cases.

Differential Revision: https://reviews.llvm.org/D121419
2022-03-14 10:16:28 -07:00
Andrew Litteken 3c90812f3b [IROutliner] Avoid reusing PHINodes that have already been matched when merging outlined functions' phi node blocks
When there are two external phi nodes for two different outlined regions, when compressing the created phi nodes between the two regions, the matching for the second phi node in the second region matches the first phi node created for the first region rather than the second phi node created for the first region. This adds an extra output path where there should not be one.

The fix is to ignore phi nodes that have already been matched for each region.

Reviewer: paquette

Differential Revision: https://reviews.llvm.org/D121312
2022-03-14 12:00:01 -05:00
Nikita Popov 8361c5da30 [SLPVectorizer] Handle external load/store pointer uses with opaque pointers
In this case we may not generate a bitcast, so the new load/store
becomes the external user.
2022-03-14 16:55:09 +01:00
Florian Hahn d621ae30e2
[LV] Remove dead Loop argument from emitMinimumVector... (NFC)
The argument is not used, remove it.
2022-03-14 15:47:40 +00:00
Florian Hahn 3ee2d908a9
[LV] Remove dead Loop argument from emitSCEVChecks. (NFC)
The argument is not used, remove it.
2022-03-14 13:00:03 +00:00
Nikita Popov ce6ca00a92 [CoroSplit] Avoid self-replacement
With opaque pointers, the bitcast might be a no-op, and this can
end up trying to replace a value with itself, which is illegal.
2022-03-14 13:53:31 +01:00
Florian Hahn 8896c36624
[LV] Do not set insert point in completeLoopSkeleton. (NFCI)
The insertion point for the builder used during VPlan code generation is
set during code generation. Setting the insert point here is dead code
and can be removed.
2022-03-14 12:21:26 +00:00
Nikita Popov 3ec44c22b1 [DeadArgElim] Guard against function type mismatch
If the call's function type and the callee's function type don't match, we should
consider the function live (there is effectively a bitcast
sitting in between).
2022-03-14 13:03:04 +01:00
Nikita Popov cf18ec445d [GVN] Check load type in select PRE
This is no longer implicitly guaranteed with opaque pointers.
2022-03-14 12:46:54 +01:00
Benoit Jacob 9879c555f2 Expose ScalarizerPass options to C++ (not just commandline)
Context: I needed this for https://github.com/google/iree/pull/8474 .
I found that TSan instrumentation expects vector sizes to be <= 16,
and in my project (IREE) we have tests with higher vector sizes.
That left some test functions uninstrumented, resulting in crashes as
instrumented code called into them.

Differential Revision: https://reviews.llvm.org/D121182
2022-03-14 12:00:35 +01:00
Florian Hahn 1c0fc1f074
[VPlan] Ensure each iv user is only visited once in transform.
If a recipe has multiple uses of an IV, we crash; this was triggered when
building llvm-test-suite.

Exposed by 95f76bff1c.
2022-03-13 21:42:17 +00:00
Florian Hahn 95f76bff1c
[LV] Create & use VPScalarIVSteps for all scalar users.
This patch is a follow-up to D115953. It updates optimizeInductions
to also introduce new VPScalarIVStepsRecipes if an IV has both vector
and scalar uses.

It updates all uses that only need scalar values to use the newly
created recipe for the scalar steps.

This completes untangling of VPWidenIntOrFpInductionRecipe
code-generation. Now the recipe *only* creates the widened vector
values, as it says on the tin.

The code to generate IR has been moved directly to
VPWidenIntOrFpInductionRecipe::execute.

Note that the recipe has been updated to hold a reference to
ScalarEvolution, which is needed to expand the step, until we can place
the corresponding SCEV expansion in the pre-header.

Depends on D120827.

Reviewed By: Ayal

Differential Revision: https://reviews.llvm.org/D120828
2022-03-13 17:15:24 +00:00
serge-sans-paille ed98c1b376 Cleanup includes: DebugInfo & CodeGen
Discourse thread: https://discourse.llvm.org/t/include-what-you-use-include-cleanup
Differential Revision: https://reviews.llvm.org/D121332
2022-03-12 17:26:40 +01:00
Johannes Doerfert 85daf6973d [Attributor] Remove capture tracker usage and follow uses explicitly
Before we used the capture tracker to follow pointer uses, now we do it
explicitly ourselves through the Attributor API. There are multiple
benefits: For one, the boilerplate is cut down by a lot. The class,
potential copies vector, etc. are all not needed anymore. We also
avoid explicitly looking through memory here, something that was
duplicated and should only live in the `checkForAllUses` helper. More
importantly, as we do simplifications we need to make sure all parties
are in sync when they reason about uses. The old way did not allow us to
do this, but the new one does, as every use-visiting AA goes through
`checkForAllUses` now.
2022-03-11 22:56:16 -06:00
Johannes Doerfert f44f60a297 [Attributor] Avoid replacing return operands twice
As replacements will become more complex it is better to have a single
AA responsible for replacing a use. Before this patch AAValueSimplify*
and AAValueSimplifyReturned could both try to replace the returned
value. The latter was marginally better for the old pass manager
when a function was already carrying a `returned` attribute and when
the context of the return instruction was important. The second
shortcoming was resolved by looking for return attributes in the
AAValueSimplifyCallSiteReturned initialization. The old PM impact is
not concerning.

This is yet another step towards the removal of AAReturnedValues, the
very first AA we should now try to eliminate due to the overlapping
logic with value simplification.
2022-03-11 21:55:19 -06:00
Johannes Doerfert 55a970fbd4 [Attributor][FIX] Make sure to not ignore non-load users of stores
When we look through memory for a store we used to allow any other use
of the memory that is reachable. This is generally OK but we need to
make sure to actually let the user look at these properly. For now,
we simply require loads (via exact reloads).
2022-03-11 18:41:13 -06:00
Johannes Doerfert f3ad8cf00e [Attributor] Cleanup manifest and liveness for CGSCC passes
There was some ad-hoc handling of liveness and manifest to avoid
breaking CGSCC guarantees. Things always slipped through though.
This cleanup will:

1) Prevent us from manifesting any "information" outside the CGSCC.
   This might be too conservative, but we need to opt in to annotation
   rather than try to avoid some problematic ones.
2) Avoid running any liveness analysis outside the CGSCC. We did have
   some AAIsDeadFunction handling to this end but we need this for all
   AAIsDead classes. The reason is that AAIsDead information is only
   correct if we actually manifest it, since we don't (see point 1) we
   cannot actually derive/use it at all. We are currently trying to
   avoid running any AA updates outside the CGSCC but that seems to
   impact things quite a bit.
3) Assert, don't check, that our modifications (during cleanup) modifies
   only CGSCC functions.
2022-03-11 16:46:02 -06:00
Benjamin Kramer dbc32e2aa7 [LoopUnswitch] Use SmallPtrSet instead of std::set. NFCI. 2022-03-11 19:14:34 +01:00
Florian Hahn d3e1094473
[VPlan] Implement VPCanonicalIVPHIRecipe::onlyFirstLaneUsed.
The recipe only uses the first lane of its operands.

Suggested & split off D120827.
2022-03-11 18:07:26 +00:00
Johannes Doerfert 9ddb1a49ac [Attributor][FIX] Avoid double free (and useless state copy)
In an attempt to remove the memory leak we introduced a double free.
The problem was that we allowed a plain copy of the state and it was
actually used. The use was useless, so it is gone now. The copy
constructor is gone as well. The move constructor ensures the Accesses
pointers are owned by a single state, I hope.

Reported by: https://lab.llvm.org/buildbot/#/builders/16/builds/25820
2022-03-11 10:10:36 -06:00
Johannes Doerfert 3570b0c5c7 [Attributor][FIX] Remove memory leak
The leak was introduced when we made things deterministic. It was
reported by the sanitizer buildbot:
 https://lab.llvm.org/buildbot/#/builders/168
2022-03-11 09:52:44 -06:00
Florian Hahn ecea477df3
[VPlan] Helper to check if a recipe uses scalar values of op.
This patch adds a helper to check if a recipe only uses scalars of a
given operand. This is similar to onlyFirstLaneUsed, which was
introduced earlier.

By default, usesScalars falls back on onlyFirstLaneUsed.

Will be used by D120828.

Reviewed By: Ayal

Differential Revision: https://reviews.llvm.org/D120827
2022-03-11 13:41:08 +00:00
Florian Hahn e07b899192
[FunctionAttrs] Rename addReadAttrs -> addMemoryAttrs.
The addReadAttrs name is out of date, as the function also adds
the writeonly attribute. addMemoryAttrs is more accurate.
2022-03-11 11:49:22 +00:00
Johannes Doerfert e8fadafe77 [Attributor][NFCI] Make AAPointerInfo deterministic
The order in which we kept accesses was non-deterministic and a debug
output was a pointer value. Fixed both.
2022-03-10 23:27:47 -06:00
Johannes Doerfert 7211dbd01d [Attributor][NFCI] Remove non-deterministic behavior and debug output 2022-03-10 23:27:47 -06:00
Sanjay Patel 3491f2f4b0 [InstCombine] replace negated operand in fcmp with 0.0
X (any pred) -X --> X (any pred) 0.0

This works with all FP values and preserves FMF.
Alive2 examples:
https://alive2.llvm.org/ce/z/dj6jhp
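A small hedged example (not taken from the test diffs): comparing a value against its own negation becomes a comparison against 0.0, with the FMF kept.
```
define i1 @fcmp_vs_own_neg(float %x) {
  %neg = fneg float %x
  ; after the fold: %cmp = fcmp nsz olt float %x, 0.000000e+00
  %cmp = fcmp nsz olt float %x, %neg
  ret i1 %cmp
}
```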

This can also create one of the patterns that we match as "fabs"
as shown in one of the test diffs.
2022-03-10 12:53:32 -05:00
Sanjay Patel 9fac110bf7 Revert "[InstCombine] fold fcmp with lossy casted constant"
This reverts commit 9397bdc67e.

This optimization is likely to surprise programmers as seen
in post-commit comments, so we should add a clang warning
first (that is proposed in D121306).
2022-03-10 10:22:22 -05:00
Nikita Popov 067c035012 [GlobalOpt] Handle undef global_ctors gracefully
If there are no ctors, then this can have an arbitrary zero-sized
value. The current code checks for null, but it could also be
undef or poison.

Replacing the specific null check with a check for
non-ConstantArray.
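As a hedged illustration of the case being handled (using the usual ctor-entry struct type): a zero-element @llvm.global_ctors whose initializer is undef rather than zeroinitializer should be treated the same as an empty ctor list.
```
@llvm.global_ctors = appending global [0 x { i32, void ()*, i8* }] undef
```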
2022-03-10 16:02:12 +01:00
Simon Pilgrim 808d9d260b [InstCombine] Add vector support to icmp(add(X,C1),add(Y,C2)) -> icmp(add(X,C1-C2),Y) fold
As discussed on Issue #32161 this fold can be generalized a lot more than it currently is, but this patch at least adds vector support.

Differential Revision: https://reviews.llvm.org/D121358
2022-03-10 13:30:48 +00:00
Nikita Popov 479d684ba5 [Coroutines] Support opaque pointers in solveTypeName()
As far as I can tell, these names are only intended to be
informative, so just use a generic "PointerType" for opaque pointers.

The code in solveDIType() also treats pointers as basic types (and
does not try to encode the pointed-to type further), so I believe
this should be fine.

Differential Revision: https://reviews.llvm.org/D121280
2022-03-10 09:33:55 +01:00
Xiang1 Zhang c31014322c TLS loads optimization (hoist)
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D120000
2022-03-10 09:29:06 +08:00
Florian Mayer 0f770f4d00 [NFC] [HWASan] document why we tag Size but untag AlignedSize. 2022-03-09 16:18:04 -08:00
Michael Gottesman 0b647fc529 [debug-info] Debug salvage llvm.dbg.addr in original function that point into the coroutine frame when splitting coros.
We are already doing this in the split functions while we clone. This just
handles the original function.

I also updated the coroutine split test to validate that we are always referring
to the msg in the context object instead of in a shadow copy.

rdar://83957028

Reviewed By: aprantl

Differential Revision: https://reviews.llvm.org/D121324
2022-03-09 14:02:09 -08:00
Congzhe Cao abc8ca65c3 [LoopInterchange] Detect output dependency of a store instruction with itself
This patch is motivated by pr48057 where an output dependency is not detected
since loop interchange did not check a store instruction with itself.
Fixed that deficiency.

Reviewed By: bmahjour, Meinersbur, #loopoptwg

Differential Revision: https://reviews.llvm.org/D118102
2022-03-09 15:50:27 -05:00
Alok Kumar Sharma 94823500a7 [DebugInfo][SROA] Correct debug info for global variables in case of SROA
The existing handling produced a crash for the test case (attached with the patch).
Now the function transferSRADebugInfo is modified to
  - Ignore the current variable if it starts after the current Fragment.
  - Ignore the current variable if it ends before the current Fragment.
  - Generate (!DIExpression()) if current variable completely fits the
    current Fragment.
  - Otherwise (as earlier), generate the DW_OP_LLVM_fragment in IR if current
    Fragment partially defines current variable.

Reviewed By: aprantl

Differential Revision: https://reviews.llvm.org/D121107
2022-03-10 00:41:30 +05:30
Florian Hahn f98125abb2
Revert "[PassManager] Add pretty stack entries before P->run() call."
This reverts commit 128745cc26.

This increased compile-time unnecessarily. Revert this change and follow
ups 2c7afadb47 & add0c5856d.

http://llvm-compile-time-tracker.com/compare.php?from=338dfcd60f843082bb589b287d890dbd9394eb82&to=128745cc2681c284bc6d0150a319673a6d6e8424&stat=instructions
2022-03-09 18:46:32 +00:00
Andrew Litteken 0b3a6c8d20 [IROutliner] Handling outlined code with no exit paths
As a result of adding multiblock outlining, it became possible to outline the entirety of a basic block, and branches that only pointed to the basic blocks contained in the outlined section. This means that there are no exit paths, and no return statement. There was a previous assertion from the older version of the outliner that explicitly made sure there was a return statement. This removes that assertion.

Reviewers: paquette

Differential Revision: https://reviews.llvm.org/D120868
2022-03-09 10:43:48 -08:00
Benoit Jacob 851332a1f2 Fix linking error, undefined class static constants.
Reviewed By: spupyrev

Differential Revision: https://reviews.llvm.org/D121293
2022-03-09 10:01:38 -08:00
Craig Topper f72fe2ef67 [InstCombine] Preserve FMF in foldLogicOfFCmps.
This patch intersects the fast math flags from the two fcmps instead
of dropping them.

I poked at this a bunch with Alive2 for nnan and ninf flags and it seemed
to check out. With the other flags it told me "Couldn't prove the
correctness of the transformation". Not sure if I should just preserve
nnan and ninf?

Reviewed By: spatel, lebedev.ri

Differential Revision: https://reviews.llvm.org/D121243
2022-03-09 09:17:09 -08:00
Florian Hahn a12403cfea
[LV] Do not consider instrs dead if used by phi that's not in plan.
Single value phis won't be modeled in VPlan. If the phi only gets used
outside the loop, the current code misses the fact that the incoming
value is not dead. Update the code to also look through such phis to
check for outside users.

Fixes #54266
2022-03-09 16:04:44 +00:00
Nikita Popov e81f566de6 [Coroutines] Avoid pointer element access for resume function type
For switch ABI, the function type is always "void (%frame*)", so
just hardcode that rather than fetching it from a pointer element
type.
2022-03-09 14:47:17 +01:00
Florian Hahn 128745cc26
[PassManager] Add pretty stack entries before P->run() call.
This patch adds PrettyStackEntries before running passes. The entries
include the pass name and the IR unit the pass runs on.

The information is used to print additional information when a pass
crashes, including the name and a reference to the IR unit on which it
crashed. This is similar to the behavior of the legacy pass manager.

The improved stack trace now includes:

Stack dump:
0.	Program arguments: bin/opt -loop-vectorize -force-vector-width=4 crash.ll
1.	Running pass 'ModuleToFunctionPassAdaptor' on module 'crash.ll'
2.	Running pass 'LoopVectorizePass' on function '@a'

Reviewed By: aeubanks

Differential Revision: https://reviews.llvm.org/D120993
2022-03-09 13:01:09 +00:00
Nikita Popov 3d9386a349 [CoroFrame] Avoid pointer element type access for swifterror
These must have pointer-to-pointer type, and with opaque pointers
we don't care about the specific pointer type anymore.
2022-03-09 11:15:10 +01:00
Nikita Popov f682a8386b [Attributor] Use byval type instead of pointer element type
For compatibility with opaque pointers, use the byval type rather
than the pointer element type.

Differential Review: https://reviews.llvm.org/D120983
2022-03-09 09:30:42 +01:00
Florian Mayer 4bfd8a2c5f [NFC] [MTE] [HWASan] fixed orphaned comments. 2022-03-08 16:42:31 -08:00
Florian Mayer af22478933 [NFC] [MTE] [HWASan] simplify code. 2022-03-08 16:36:10 -08:00
Vitaly Buka ce29a0429b Revert "Attempt to fix linking issue on the bot"
The issue was fixed with 48c74bb2e2

This reverts commit ac423a8c8a.
2022-03-08 16:16:01 -08:00
Florian Mayer e86bd32b71 [NFC] [HWASan] [MTE] Use function_ref over template. 2022-03-08 15:49:55 -08:00
Vitaly Buka ac423a8c8a Attempt to fix linking issue on the bot 2022-03-08 15:33:10 -08:00
Fangrui Song 48c74bb2e2 [SampleProfileInference] Work around odr-use of const non-inline static data member to fix -O0 builds after D120508
MinBaseDistance may be odr-used by std::max, leading to an undefined symbol linker error:

```
ld.lld: error: undefined symbol: (anonymous namespace)::MinCostMaxFlow::MinBaseDistance
>>> referenced by SampleProfileInference.cpp:744 (/home/ray/llvm-project/llvm/lib/Transforms/Utils/SampleProfileInference.cpp:744)
>>>               lib/Transforms/Utils/CMakeFiles/LLVMTransformUtils.dir/SampleProfileInference.cpp.o:((anonymous namespace)::FlowAdjuster::jumpDistance(llvm::FlowJump*) const)
```

Since llvm-project is still using C++14, work around it with a cast.
2022-03-08 14:34:53 -08:00
Florian Hahn e10b0ea371
[ConstraintElimination] Remove over-eager assertion.
After moving the CanAdd check in c60cdb44f7 and using it for
the assume cases as well, the passed-in block may not have a branch
instruction as terminator. This can trigger the assertion. Given the new
use case, it doesn't add value any longer and can be removed.

Fixes https://github.com/llvm/llvm-project/issues/54281
2022-03-08 22:02:08 +00:00
spupyrev 81aedab7dd introducing some profi flags
Differential Revision: https://reviews.llvm.org/D120508
2022-03-08 12:35:15 -08:00
Sanjay Patel 9397bdc67e [InstCombine] fold fcmp with lossy casted constant
This is noted as a missing clang warning in #54222
(and we should still make that enhancement).

Alive2 proofs:
https://alive2.llvm.org/ce/z/Q8drDq
https://alive2.llvm.org/ce/z/pE6LRt

I don't see a single conversion for all predicates
using "getFCmpCode" logic, so other predicates are
left as a TODO item.
2022-03-08 12:41:12 -05:00
Arnold Schwaighofer dcdc1f29bb InstCombine: Can't fold a phi arg load into the phi if the load is from a swifterror address
`swifterror` addresses are only allowed as operands to load, store, and
calls.

The following transformation is not allowed. It would create a phi with a
`swifterror` address operand.

```
 %addr = alloca swifterror i8*
 br i1 %cond, label %bb1, label %bb2

 bb1:
   %val1 = load i8*, i8** %addr
   br exit

 bb2:
   %val2 = load i8*, i8** %addr
   br exit

 exit:
   %val = phi [%val1, %bb1] [%val2, %bb2]
```

=>

```
 %addr = alloca swifterror i8*
 br i1 %cond, label %bb1, label %bb2

 bb1:
   br exit

 bb2:
   br exit

 exit:
   %val_addr = phi [%addr, %bb1] [%addr, %bb2]
   %val2 = load i8*, i8** %val_addr
```

rdar://89865485

Differential Revision: https://reviews.llvm.org/D121217
2022-03-08 09:09:51 -08:00
Arthur Eubanks 53e5e58670 [NewPM][Inliner] Make inlined calls to functions in same SCC as callee exponentially expensive
Introduce a new attribute "function-inline-cost-multiplier" which
multiplies the inline cost of a call site (or all calls to a callee) by
the multiplier.

When processing the list of calls created by inlining, check each call
to see if the new call's callee is in the same SCC as the original
callee. If so, set the "function-inline-cost-multiplier" attribute of
the new call site to double the original call site's attribute value.
This does not happen when the original call site is intra-SCC.

This is an alternative to D120584, which marks the call sites as
noinline.

Hopefully fixes PR45253.

Reviewed By: davidxl

Differential Revision: https://reviews.llvm.org/D121084
2022-03-07 23:51:09 -08:00
Johannes Doerfert 5b4acb20ff [OpenMP][FIX] Ensure flag to disable de-globalization works properly
If the user disables de-globalization we did not seed the AAHeapToShared
and AAHeapToStack but we still could end up with them through in-flight
lookups. With this patch we disable AAHeapToShared completely if the
user disabled de-globalization. Heap-2-stack is still run though.

Differential Revision: https://reviews.llvm.org/D121059
2022-03-07 23:43:05 -06:00
Philip Reames a2e9c68fcd [SLP] Extract a helper for buildvector [nfc] 2022-03-07 19:11:40 -08:00
Philip Reames 8ab3befa3f [SLP] Fix spelling in a lambda name [NFC] 2022-03-07 18:52:57 -08:00
Ahmed Bougacha 1067f2177a [sancov] Don't instrument calls to bitcast funcs: they're not indirect.
Currently, when instrumenting indirect calls, this uses
CallBase::getCalledFunction to determine whether a given callsite is
eligible.

However, that returns null if:
  this is an indirect function invocation or the function signature
  does not match the call signature.

So, we end up instrumenting direct calls where the callee is a bitcast
ConstantExpr, even though we presumably don't need to.

Use isIndirectCall to ignore those funky direct calls.
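A hedged sketch of such a "funky" direct call (names invented): the callee is a bitcast ConstantExpr, so getCalledFunction() returns null even though the call is direct; isIndirectCall() classifies it correctly and the call is no longer instrumented as indirect.
```
declare void @callee(i32)

define void @caller() {
  call void bitcast (void (i32)* @callee to void ()*)()
  ret void
}
```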

Differential Revision: https://reviews.llvm.org/D119594
2022-03-07 12:43:37 -08:00
Roman Lebedev 2f80ea7f4f
[NFC][LV] Use different braces in debug output
The analysis passes output the function name encapsulated in `'` quotes,
but LV uses `"`. Harmonizing this may help in creating an update script
for the LV costmodel test checks.

Reviewed By: fhahn

Differential Revision: https://reviews.llvm.org/D121105
2022-03-07 19:32:37 +03:00
Florian Hahn 4bbee17ecb
[ConstraintElimination] Use ZExtValue for unsigned decomposition.
When decomposing constraints for unsigned conditions, we can use
negative values by zero-extending them, as long as they are less than
the maximum constraint value.

Fixes https://github.com/llvm/llvm-project/issues/54224
2022-03-07 13:34:01 +00:00
Florian Hahn c60cdb44f7
[ConstraintElimination] Only add cond from assume to succs if valid.
Add missing CanAdd check before adding a condition from an assume
to the successor blocks. When adding information from assume to
successor blocks we need to perform the same CanAdd as we do for adding
a condition from a branch.

Fixes https://github.com/llvm/llvm-project/issues/54217
2022-03-07 12:01:15 +00:00
Nikita Popov 0636c93d3e [Attributor] Remove restriction on simplifying function pointers
Dropping this restriction seems to work fine (there are no assertion
failures), so it appears that either the updater got smarter or the
problematic cases are restricted elsewhere.

If doing this still causes issues, then the place to address it
would probably be 8f5bdaf481/llvm/lib/Transforms/IPO/Attributor.cpp (L1856-L1859),
which already prevents replacement outside the SCC, so I'm not
quite sure what this check is intended to avoid.

Differential Revision: https://reviews.llvm.org/D120987
2022-03-07 11:54:37 +01:00
Nikita Popov 1bd33691cb [CoroElide] Remove fallback for frame layout determination
Only determine the frame layout based on dereferenceable and align
attributes, and remove the type-based fallback, which is incompatible
with opaque pointers. The dereferenceable attribute is required,
while the align attribute uses default alignment of 1 (commonly,
align 1 attributes do not get placed, relying on default alignment).

The CoroSplit pass producing the resume function adds the necessary
attributes in 7daed35911/llvm/lib/Transforms/Coroutines/CoroSplit.cpp (L840),
and their presence is checked in coro-debug.ll at least.

Differential Revision: https://reviews.llvm.org/D120988
2022-03-07 11:23:02 +01:00
Nikita Popov 9bca4ea364 [Coroutines] Allow FramePtr to be an Argument
With opaque pointers, after splitRetconCoroutine() the FramePtr
may be an Argument rather than an Instruction. With typed pointers,
this currently doesn't happen because the FramePtr would be a
bitcast instruction.

Fix this by making FramePtr a Value and adding a helper for the
"after FramePtr" insertion point, which would be the start of the
function in the Argument case.

Differential Revision: https://reviews.llvm.org/D120994
2022-03-07 10:58:56 +01:00
Florian Hahn 542c335159
[ConstraintElimination] Remove dead variables when dropping constraints.
This patch extends ConstraintElimination to also remove dead variables
when removing a constraint. When a constraint is removed because it is
out of scope, all new variables added for this constraint can also be
removed.

This keeps the total size of the systems much smaller, because it
reduces the number of variables drastically.

It also fixes a bug where variables were removed incorrectly.

Fixes https://github.com/llvm/llvm-project/issues/54228
2022-03-07 09:04:07 +00:00
Nikita Popov a9b03d9e2e [Attributor] Remove function pointer restriction for AAAlign
This check is not compatible with opaque pointers. We can avoid
it by adjusting the getPointerAlignment() implementation to avoid
creating unnecessary ptrtoint expressions for bitcasted pointers.
The code already uses OnlyIfReduced to not create an expression
if it does not simplify, and this makes sure that folding a
bitcast and ptrtoint into a ptrtoint doesn't count as a
simplification.

Differential Revision: https://reviews.llvm.org/D120904
2022-03-07 10:02:45 +01:00
Nikita Popov d1e880acaa [SCEV] Enable verification in LoopPM
Currently, we hardly ever actually run SCEV verification, even in
tests with -verify-scev. This is because the NewPM LPM does not
verify SCEV. The reason for this is that SCEV verification can
actually change the result of subsequent SCEV queries, which means
that you see different transformations depending on whether
verification is enabled or not.

To allow verification in the LPM, this limits verification to
BECounts that have actually been cached. It will not calculate
new BECounts.

BackedgeTakenInfo::getExact() is still not entirely readonly,
it still calls getUMinFromMismatchedTypes(). But I hope that this
is not problematic in the same way. (This could be avoided by
performing the umin in the other SCEV instance, but this would
require duplicating some of the code.)

Differential Revision: https://reviews.llvm.org/D120551
2022-03-07 09:46:20 +01:00
Johannes Doerfert 5af11ec34b [Attributor] Determine potentially loaded values through memory
We already look through memory to determine where a value that is stored
might pop up again (potential copies). This patch introduces the other
direction with similar logic. If a value is loaded, we can follow all
the accesses to the pointer (or better, the underlying object) and try to determine what
value might have been stored.
2022-03-06 23:26:37 -06:00
Johannes Doerfert eb73af4af4 [Attributor] Handle undef and null in AAAlignFloating
Both `undef` and `nullptr` are maximally aligned. This is especially
important as we often see `undef` until a proper value has been
identified during simplification.
2022-03-06 23:26:22 -06:00
Johannes Doerfert ad26e199ff [Attributor] Use CFG reasoning also for read accesses
With D106397 we used CFG reasoning to filter out writes that will not
interfere with a given load instruction. With this patch we use the
same logic (modulo the reversal in reachability check order) for store
instructions. As an example, we can now prove stores to shared memory
are dead if all the loads of the shared memory are not reachable from
them.
2022-03-06 23:26:22 -06:00
Johannes Doerfert acb3773491 [Attributor] Improve isValidAtPosition (mostly for old PM)
To minimize the test difference between old and new PM we perform some
local dominance check if no dominator tree is available.
2022-03-06 23:26:21 -06:00
Johannes Doerfert ff758372bd [Attributor][NFCI] Introduce fine-grained anonymous namespaces 2022-03-06 21:28:38 -06:00
Johannes Doerfert 192a34ddb0 [Attributor][OpenMPOpt][FIX] Register simplification callbacks
Heap-2-stack and heap-2-shared can replace an allocation call with
something else. To avoid us deriving information from the allocator
implementation we register a simplification callback now that will
force us to stop at the call site. We probably should create the
replacement memory eagerly and return that instead though.
2022-03-06 21:28:38 -06:00
Johannes Doerfert 5859ae6a5d [Attributor][FIX] Use maximal access for dereferenceability deduction
While we can use range information when we derive dereferenceability we
must make sure to pick the right end of the range. Before, we always went
with the minimal offset, which is not correct if we want to combine
the base dereferenceability with some offset. In that case it's the
maximum that gives the correct result.
2022-03-06 21:28:38 -06:00
Johannes Doerfert 1fcd4d0e3b [Attributor][FIX] Initialize stack variable 2022-03-06 21:28:38 -06:00
Johannes Doerfert 6158f4a466 [Attributor][NFCI] No repeated manifest of AAValueSimplifyReturned (CGSCC) 2022-03-06 19:59:23 -06:00
Johannes Doerfert efedf70aa5 [Attributor][NFC] Expose helper with more generic interface
This simply makes the function argument of the
`Attributor::checkForAllInstructions` helper explicit so one can iterate
over instructions in other functions.
2022-03-06 19:59:23 -06:00
Johannes Doerfert 8fa839aa58 [Attributor][NFC] Improve debug messages 2022-03-06 19:59:22 -06:00
William S. Moses 87ec6f41bb [OpenMPIRBuilder] Allocate temporary at the correct block in a nested parallel
The OpenMPIRBuilder has a bug. Specifically, suppose you have two nested openmp parallel regions (writing with MLIR for ease)

```
omp.parallel {
  %a = ...
  omp.parallel {
    use(%a)
  }
}
```

As OpenMP only permits pointer-like inputs, the builder will wrap all of the inputs into a stack allocation, and then pass this
allocation to the inner parallel. For example, we would want to get something like the following:

```
omp.parallel {
  %a = ...
  %tmp = alloc
  store %tmp[] = %a
  kmpc_fork(outlined, %tmp)
}
```

However, in practice, this is not what currently occurs in the context of nested parallel regions. Specifically to the OpenMPIRBuilder,
the entirety of the function (at the LLVM level) is currently inlined with blocks marking the corresponding start and end of each
region.

```
entry:
  ...

parallel1:
  %a = ...
  ...

parallel2:
  use(%a)
  ...

endparallel2:
  ...

endparallel1:
  ...
```

When the allocation is inserted, it is presently placed in the entry block of the entire function (e.g. entry) rather than in the
enclosing allocation scope of the region being outlined. If we were outlining parallel2, the corresponding alloca location would be parallel1.

This causes a variety of bugs, including https://github.com/llvm/llvm-project/issues/54165 as one example.

This PR allows the stack allocation to be created at the correct allocation block, and thus remedies such issues.
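
A hedged C++ sketch of the intended placement (hypothetical helper, not the actual OpenMPIRBuilder code): the temporary is created in the
allocation block of the enclosing region, while the store stays where the input value is available.

```
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

static AllocaInst *createInputAlloca(IRBuilder<> &Builder, Value *Input,
                                     BasicBlock *RegionAllocaBlock) {
  AllocaInst *Tmp;
  {
    // Place the alloca in the enclosing region's allocation block, e.g.
    // "parallel1" when outlining "parallel2" in the example above.
    IRBuilder<>::InsertPointGuard Guard(Builder);
    Builder.SetInsertPoint(RegionAllocaBlock,
                           RegionAllocaBlock->getFirstInsertionPt());
    Tmp = Builder.CreateAlloca(Input->getType());
  }
  // The store remains at the current position, where the input is defined.
  Builder.CreateStore(Input, Tmp);
  return Tmp;
}
```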

Reviewed By: jdoerfert

Differential Revision: https://reviews.llvm.org/D121061
2022-03-06 18:34:25 -05:00
Florian Hahn bc00f47c01
[LoopSink] Do not try to sink phi nodes.
Skip phi nodes in the preheader. They may not be considered loop
invariant by the assertion below.
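
A hedged C++ sketch of the guard (hypothetical helper, not the actual
LoopSink code):

```
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// Skip PHI nodes when collecting candidates from the preheader; they may
// not be loop invariant and would trip the later assertion.
static void collectSinkCandidates(BasicBlock &Preheader,
                                  SmallVectorImpl<Instruction *> &Candidates) {
  for (Instruction &I : Preheader) {
    if (isa<PHINode>(&I))
      continue;
    Candidates.push_back(&I);
  }
}
```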

Reviewed By: reames

Differential Revision: https://reviews.llvm.org/D121010
2022-03-06 11:16:22 +00:00
Roman Lebedev e47257e251
Revert "Reland [SROA] Maintain shadow/backing alloca when some slices are noncapturnig read-only calls to allow alloca partitioning/promotion"
There seems to be one more uncaught problem: SROA may now end up trying to
re-promote the just-promoted shadow alloca over and over, endlessly.

This reverts commit adc0984d81.
2022-03-05 01:09:51 +03:00
Roman Lebedev adc0984d81
Reland [SROA] Maintain shadow/backing alloca when some slices are noncapturing read-only calls to allow alloca partitioning/promotion
This is inspired by the original variant of D109749 by Graham Hunter,
but is a more general version.

Roughly, instead of promoting the alloca, we call it
a shadow/backing alloca, go through all its slices,
clone(!) the instructions that operated on it,
but make them operate on the cloned alloca,
and promote the cloned alloca instead.

This keeps the shadow/backing alloca, and all the original instructions
around, which results in said shadow/backing alloca being
a perfect mirror/representation of the promoted alloca's content,
so calls that take the alloca as arguments (non-capturingly!)
can be supported.

For now, we require that the calls also don't modify the alloca's content,
but that is only to simplify the initial implementation,
and that will be supported in a follow-up.
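
A hedged, source-level C++ illustration (not from the patch) of the kind of
pattern this unlocks, assuming the callee is known to be read-only and
non-capturing (e.g. via deduced attributes):

```
// The callee only reads through the pointer and does not retain it.
void inspect(const int *p);

int demo() {
  int a[4] = {1, 2, 3, 4}; // the candidate alloca
  inspect(a);              // non-capturing, read-only use of the address
  return a[0] + a[3];      // direct uses that can still be promoted, with
                           // the backing alloca kept around for the call
}
```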

Overall, this leads to *smaller* codesize:
https://llvm-compile-time-tracker.com/compare.php?from=a8b4f5bbab62091835205f3d648902432a4a5b58&to=aeae054055b125b011c1122f82c86457e159436f&stat=size-total
and is roughly neutral compile-time wise:
https://llvm-compile-time-tracker.com/compare.php?from=a8b4f5bbab62091835205f3d648902432a4a5b58&to=aeae054055b125b011c1122f82c86457e159436f&stat=instructions

This relands commit 703240c71f,
that was reverted by commit 7405581f7c,
because the assertion `isa<LoadInst>(OrigInstr)` didn't hold in practice,
as the newly added test `@select_of_ptrs` shows:
If the pointers into the alloca are used by selects/PHIs, then even if
we manage to fracture the alloca, some sub-allocas will likely remain.
And if there are any non-capturing calls, then we will also decide to
keep the original backing alloca around, and we suddenly roughly double
the alloca size and the amount of memory traffic.
I'm not sure if this is a problem or whether we could live with it,
but let's leave that for later...

Reviewed By: djtodoro

Differential Revision: https://reviews.llvm.org/D113520
2022-03-05 00:14:12 +03:00
Augie Fackler b32735d599 BuildLibCalls: add allocalign attributes for memalign and aligned_alloc
This gets us close to being able to remove a column from the table in
MemoryBuiltins.cpp.

Differential Revision: https://reviews.llvm.org/D117923
2022-03-04 15:57:53 -05:00
Augie Fackler d664c4b73c Attributes: add a new allocalign attribute
This will let us start moving away from hard-coded attributes in
MemoryBuiltins.cpp and put the knowledge about the attributes of the
various allocation functions into the compilers that emit those calls,
where it probably belongs.

Differential Revision: https://reviews.llvm.org/D117921
2022-03-04 15:57:53 -05:00
Johannes Doerfert f9c2d6005e [OpenMP][FIX] Ensure custom state machine works
The custom state machine had a check for surplus threads that filtered
out the main thread if the kernel was executed by a single warp only. We
now check for the main thread first and for surplus threads second, so
the former is no longer filtered out.
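
A hedged, schematic C++ sketch of the corrected ordering (hypothetical
names, not the generated state machine):

```
enum class Role { Main, Surplus, Worker };

// Checking for the main thread *before* the surplus check is the fix: with
// a single warp the main thread would otherwise be classified as surplus
// and filtered out.
static Role classifyThread(unsigned ThreadId, unsigned MainThreadId,
                           unsigned NumWorkerThreads) {
  if (ThreadId == MainThreadId)
    return Role::Main;
  if (ThreadId >= NumWorkerThreads)
    return Role::Surplus;
  return Role::Worker;
}
```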

Fixes #54214.

Reviewed By: jhuber6

Differential Revision: https://reviews.llvm.org/D121011
2022-03-04 13:51:19 -05:00
Roman Lebedev 7405581f7c
Revert "[SROA] Maintain shadow/backing alloca when some slices are noncapturnig read-only calls to allow alloca partitioning/promotion"
Bots are reporting that the assertion about only expecting loads is wrong.

This reverts commit 703240c71f.
2022-03-04 21:49:30 +03:00
Roman Lebedev 703240c71f
[SROA] Maintain shadow/backing alloca when some slices are noncapturing read-only calls to allow alloca partitioning/promotion
This is inspired by the original variant of D109749 by Graham Hunter,
but is a more general version.

Roughly, instead of promoting the alloca, we call it
a shadow/backing alloca, go through all its slices,
clone(!) the instructions that operated on it,
but make them operate on the cloned alloca,
and promote the cloned alloca instead.

This keeps the shadow/backing alloca, and all the original instructions
around, which results in said shadow/backing alloca being
a perfect mirror/representation of the promoted alloca's content,
so calls that take the alloca as arguments (non-capturingly!)
can be supported.

For now, we require that the calls also don't modify the alloca's content,
but that is only to simplify the initial implementation,
and that will be supported in a follow-up.

Overall, this leads to *smaller* codesize:
https://llvm-compile-time-tracker.com/compare.php?from=a8b4f5bbab62091835205f3d648902432a4a5b58&to=aeae054055b125b011c1122f82c86457e159436f&stat=size-total
and is roughly neutral compile-time wise:
https://llvm-compile-time-tracker.com/compare.php?from=a8b4f5bbab62091835205f3d648902432a4a5b58&to=aeae054055b125b011c1122f82c86457e159436f&stat=instructions

Reviewed By: djtodoro

Differential Revision: https://reviews.llvm.org/D113520
2022-03-04 21:08:43 +03:00
Augie Fackler 5e4c75db3b InstructionCombining: avoid eliding mismatched alloc/free pairs
Prior to this change, LLVM would happily elide a call to any allocation
function and a call to any free function operating on the same unused
pointer. This can cause problems in some obscure cases, for example if
the body of operator new can be inlined but the body of
operator delete can't, as in this example from jyknight:

    #include <stdlib.h>
    #include <stdio.h>

    int allocs = 0;

    void *operator new(size_t n) {
        allocs++;
        void *mem = malloc(n);
        if (!mem) abort();
        return mem;
    }

    __attribute__((noinline)) void operator delete(void *mem) noexcept {
        allocs--;
        free(mem);
    }

    void deleteit(int*i) { delete i; }
    int main() {
        int*i = new int;
        deleteit(i);
        if (allocs != 0)
          printf("MEMORY LEAK! allocs: %d\n", allocs);
    }

This patch addresses the issue by introducing the concept of an
allocator function family and uses it to make sure that alloc/free
function pairs are only removed if they're in the same family.
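
A hedged, schematic C++ sketch of the rule (hypothetical helpers and family
names, not the actual MemoryBuiltins interface):

```
#include <map>
#include <optional>
#include <string>

// Assumed, illustrative family table; the real data lives in MemoryBuiltins.cpp.
static std::optional<std::string> allocatorFamily(const std::string &Callee) {
  static const std::map<std::string, std::string> Families = {
      {"malloc", "malloc"},      {"free", "malloc"},
      {"_Znwm", "operator-new"}, {"_ZdlPv", "operator-new"},
  };
  auto It = Families.find(Callee);
  if (It == Families.end())
    return std::nullopt; // unknown callee: assume nothing
  return It->second;
}

// Elide an unused allocation/free pair only when both families are known
// and identical; a user-provided operator new paired with a library free
// (or vice versa) must be kept.
static bool mayElidePair(const std::string &AllocFn, const std::string &FreeFn) {
  auto A = allocatorFamily(AllocFn);
  auto F = allocatorFamily(FreeFn);
  return A && F && *A == *F;
}
```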

Differential Revision: https://reviews.llvm.org/D117356
2022-03-04 10:41:10 -05:00
Nikita Popov 6467d1d275 [CoroFrame] Remove unused insertSpills() return value (NFC) 2022-03-04 15:11:24 +01:00
Nikita Popov 6b5b367858 [Attributor] Remove function pointer type check (NFCI)
This check is not relevant for correctness; it can only avoid
walking some recursive uses if the cast is to a non-function
pointer type. As this distinction will no longer be possible
with opaque pointers and all users will have to be walked
anyway, I'm dropping the check in advance.
2022-03-04 12:09:51 +01:00
Nikita Popov d3a52089eb Reapply [MergeICmps] Don't require GEP
Recommit without changes over 53abe3ff66,
which addressed the cause of the reported crash.

-----

With opaque pointers, the zero-offset load will generally not use
a GEP. Allow a direct load without a GEP, which is treated the same
way as a zero-offset GEP.
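
A hedged C++ sketch of the adjusted matching (hypothetical helper, not the
actual MergeICmps code): a load whose address is not a GEP is handled as if
it had a constant offset of zero.

```
#include "llvm/ADT/APInt.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/Operator.h"
using namespace llvm;

// Compute the constant byte offset of a load's address relative to its
// base pointer, treating a plain pointer as a zero-offset GEP.
static bool getLoadOffset(const LoadInst &LI, const DataLayout &DL,
                          const Value *&Base, APInt &Offset) {
  Offset = APInt(DL.getIndexTypeSizeInBits(LI.getPointerOperandType()), 0);
  Base = LI.getPointerOperand();
  if (auto *GEP = dyn_cast<GEPOperator>(Base)) {
    if (!GEP->accumulateConstantOffset(DL, Offset))
      return false; // reject non-constant GEP offsets
    Base = GEP->getPointerOperand();
  }
  return true; // plain load and constant-offset GEP handled alike
}
```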
2022-03-04 11:39:11 +01:00