Commit Graph

26463 Commits

Author SHA1 Message Date
Roman Lebedev 2b437fcd47
[SimplifyCFG] SwitchToLookupTable(): switch to non-permissive DomTree updates
... which requires not deleting a DomTree edge that we just deleted.
2021-01-06 01:52:38 +03:00
Roman Lebedev fa5447aa3f
[NFC][SimplifyCFG] SwitchToLookupTable(): pull out SI->getParent() into a variable 2021-01-06 01:52:38 +03:00
Roman Lebedev d15d81ce15
[SimplifyCFG] FoldValueComparisonIntoPredecessors(): deal with each predecessor only once
If the predecessor is a switch and BB is not the default destination,
multiple cases could have the same destination, and it doesn't
make sense to re-process the predecessor, because we won't make any
further changes; once is enough.

I'm not sure this can really be tested, other than via the assertion
being added here, which fires without the fix.
2021-01-06 01:52:37 +03:00
Roman Lebedev fc96cb2dad
[SimplifyCFG] FoldValueComparisonIntoPredecessors(): switch to non-permissive DomTree updates
... which requires not adding a DomTree edge that we just added.
2021-01-06 01:52:37 +03:00
Roman Lebedev 29ca7d5a1a
[SimplifyCFG] simplifyUnreachable(): fix handling of degenerate same-destination conditional branch
One would hope that it would have already been canonicalized into an
unconditional branch, but that isn't really guaranteed to happen
with SimplifyCFG's visitation order.
2021-01-06 01:52:36 +03:00
Roman Lebedev 3460719f58
[NFC][SimplifyCFG] Add a test with same-destination conditional branch
Reported by Mikael Holmén as post-commit feedback on
https://reviews.llvm.org/rG2d07414ee5f74a09fb89723b4a9bb0818bdc2e18#968162
2021-01-06 01:52:36 +03:00
Roman Lebedev f98535686e
[SimplifyCFG] simplifyUnreachable(): switch to non-permissive DomTree updates
... which requires not removing a DomTree edge if the switch's default
still points at that destination, because it can't be removed;
... and not processing the same predecessor more than once.
2021-01-06 01:52:36 +03:00
Sanjay Patel 6a03f8ab62 [SLP] reduce code for finding reduction costs; NFC
We can get both (vector/scalar) costs in a single switch
instead of sequentially.
2021-01-05 17:35:54 -05:00
Arthur Eubanks 8cf1cc578d [FuncAttrs] Infer noreturn
A function is noreturn if all blocks terminating with a ReturnInst
contain a call to a noreturn function. Skip looking at naked functions
since there may be asm that returns.
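A minimal hypothetical sketch (the names @abort and @wrapper are made up, not from the patch's tests) of a function that this inference would now mark noreturn:

```llvm
declare void @abort() noreturn

; The only block ending in a ReturnInst calls a noreturn function first,
; so @wrapper itself can be inferred noreturn.
define void @wrapper() {
entry:
  call void @abort()
  ret void
}
```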

This can be further refined in the future by checking unreachable blocks
and taking into account recursion. It looks like the attributor pass
does this, but that is not yet enabled by default.

This seems to help with code size under the new PM: PruneEH does
not run under the new PM, so some opportunities to mark functions
noreturn were missed, which in turn kept SimplifyCFG from cleaning up dead code.
https://bugs.llvm.org/show_bug.cgi?id=46858.

Reviewed By: rnk

Differential Revision: https://reviews.llvm.org/D93946
2021-01-05 13:25:42 -08:00
Sanjay Patel 5a1d31a284 [SLP] use reduction kind's opcode for cost model queries; NFC
This should be no-functional-change because the reduction kind
opcodes are 1-for-1 mappings to the instructions we are matching
as reductions. But we want to remove the need for the
`OperationData` opcode field because that does not work when
we start matching intrinsics (eg, maxnum) as reduction candidates.
2021-01-05 15:12:40 -05:00
Sanjay Patel d4a999b453 [SLP] reduce code duplication; NFC 2021-01-05 15:12:40 -05:00
Atmn Patel f88a797521 [LoopDeletion] Allows deletion of possibly infinite side-effect free loops
From C11 and C++11 onwards, a forward-progress requirement has been
introduced for both languages. In the case of C, loops with non-constant
conditionals that do not have any observable side-effects (as defined by
6.8.5p6) can be assumed by the implementation to terminate, and in the
case of C++, this assumption extends to all functions. The clang
frontend will emit the `mustprogress` function attribute for C++
functions (D86233, D85393, D86841) and emit the loop metadata
`llvm.loop.mustprogress` for every loop in C11 or later that has a
non-constant conditional.

This patch modifies LoopDeletion so that only loops with
the `llvm.loop.mustprogress` metadata or loops contained in functions
that are required to make progress (`mustprogress` or `willreturn`) are
checked for observable side-effects. If these loops do not have an
observable side-effect, then we delete them.

Loops without observable side-effects that do not satisfy the above
conditions will not be deleted.
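A minimal sketch of the kind of loop this now permits deleting (hypothetical IR, not from the patch; it assumes the function-level `mustprogress` attribute rather than the loop metadata):

```llvm
; The loop has no observable side effects but cannot be proven finite.
; Because @f is required to make progress, LoopDeletion may now remove it.
define void @f(i1 %c) mustprogress {
entry:
  br label %loop
loop:
  br i1 %c, label %loop, label %exit
exit:
  ret void
}
```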

Reviewed By: jdoerfert

Differential Revision: https://reviews.llvm.org/D86844
2021-01-05 09:56:16 -05:00
Sanjay Patel 3b8b2c7da2 [SLP] delete unused pairwise reduction option
SLP tries to model 2 forms of vector reductions: pairwise and splitting.
From the cost model code comments, those are defined using an example as:

  /// Pairwise:
  ///  (v0, v1, v2, v3)
  ///  ((v0+v1), (v2+v3), undef, undef)
  /// Split:
  ///  (v0, v1, v2, v3)
  ///  ((v0+v2), (v1+v3), undef, undef)

I don't know the full history of this functionality, but it was partly
added back in D29402. There are apparently no users at this point (no
regression tests change). X86 might have managed to work around the need
for this through cost model and codegen improvements.

Removing this code makes it easier to continue the work that was started
in D87416 / D88193. The alternative -- if there is some target that is
silently using this option -- is to move this logic into LoopUtils. We
have related/duplicate functionality there via llvm::createTargetReduction().

Differential Revision: https://reviews.llvm.org/D93860
2021-01-05 13:23:07 -05:00
Florian Hahn 8a47e6252a
[VPlan] Re-add interleave group members to plan.
Creating in-loop reductions relies on IR references to map
IR values to VPValues after interleave group creation.

Make sure we re-add the updated member to the plan, so the look-ups
still work as expected.

This fixes a crash reported after D90562.
2021-01-05 15:06:47 +00:00
Simon Pilgrim 313d982df6 [IR] Add ConstantInt::getBool helpers to wrap getTrue/getFalse. 2021-01-05 11:01:10 +00:00
Florian Hahn 38c6933dcc
[LV] Simplify lambda in all_of to directly return hasVF() result. (NFC)
The if in the lambda is not necessary. We can directly return the result
of hasVF.
2021-01-05 10:34:06 +00:00
Simon Pilgrim a000366d05 [SimplifyIndVar] createWideIV - make WideIVInfo arg a const ref. NFCI.
The WideIVInfo arg is only ever used as a const.

Fixes cppcheck warning.
2021-01-05 10:31:45 +00:00
Simon Pilgrim 7a97eeb197 [Coroutines] checkAsyncFuncPointer - use cast<> instead of dyn_cast<> for dereferenced pointer. NFCI.
We're immediately dereferencing the casted pointer, so use cast<> which will assert instead of dyn_cast<> which can return null.

Fixes static analyzer warning.
2021-01-05 10:31:45 +00:00
Jeremy Morse 914066fe38 [DebugInfo] Avoid LSR crash on large integer inputs
Loop strength reduction tries to recover debug variable values by looking
for simple offsets from PHI values. In really extreme conditions there may
be an offset used that won't fit in an int64_t, hitting an APInt assertion.

This patch adds a regression test and adjusts the equivalent value
collecting code to filter out any values where the offset can't be
represented by an int64_t. This means that for very large integers with
very large offsets, the variable location will become undef, which is the
same behaviour as before 2a6782bb9f / D87494.

Differential Revision: https://reviews.llvm.org/D94016
2021-01-05 10:25:37 +00:00
Simon Pilgrim 84d5768d97 MemProfiler::insertDynamicShadowAtFunctionEntry - use cast<> instead of dyn_cast<> for dereferenced pointer. NFCI.
We're immediately dereferencing the casted pointer, so use cast<> which will assert instead of dyn_cast<> which can return null.

Fixes static analyzer warning.
2021-01-05 09:34:01 +00:00
Arthur Eubanks e30fbbe9a5 [JumpThreading][NewPM] Skip when target has divergent CF
Matches the legacy pass.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D94028
2021-01-04 16:08:08 -08:00
Roman Lebedev 32c47ebef1
[SimplifyCFG] SimplifyCondBranchToTwoReturns(): switch to non-permissive DomTree updates
... which requires not deleting an edge that just got deleted,
because we could be dealing with a block that didn't go through
ConstantFoldTerminator() yet, and thus has a degenerate cond br
with matching true/false destinations.
2021-01-05 01:26:37 +03:00
Roman Lebedev 110b3d7855
[SimplifyCFG] SimplifyEqualityComparisonWithOnlyPredecessor(): switch to non-permissive DomTree updates
... which requires not deleting an edge that just got deleted.
2021-01-05 01:26:37 +03:00
Roman Lebedev a8604e3d5b
[SimplifyCFG] simplifyIndirectBr(): switch to non-permissive DomTree updates
... which requires not deleting an edge that just got deleted.
2021-01-05 01:26:36 +03:00
Roman Lebedev ed9de61cc3
[SimplifyCFGPass] mergeEmptyReturnBlocks(): switch to non-permissive DomTree updates
... which requires not inserting an edge that already exists.
2021-01-05 01:26:36 +03:00
Roman Lebedev 3fb57222c4
[NFCI] SimplifyCFG: switch to non-permissive DomTree updates, where possible
Notably, this doesn't switch *every* case; the remaining cases
don't actually pass sanity checks in non-permissive mode,
and therefore require further analysis.

Note that SimplifyCFG still does not preserve DomTree by default,
so this is effectively an NFC change.
2021-01-05 01:26:36 +03:00
Sanjay Patel 36263a7ccc [LoopUtils] remove redundant opcode parameter; NFC
While here, rename the inaccurate getRecurrenceBinOp()
because that was also used to get CmpInst opcodes.

The recurrence/reduction kind should always refer to the
expected opcode for a reduction. SLP appears to be the
only direct caller of createSimpleTargetReduction(), and
that calling code ideally should not be carrying around
both an opcode and a reduction kind.

This should allow us to generalize reduction matching to
use intrinsics instead of only binops.
2021-01-04 17:05:28 -05:00
Sanjay Patel 9766957524 [LoopUtils] reduce code for creating reduction; NFC
We can return from each case instead of creating a temporary
variable just to have a common return.
2021-01-04 16:05:03 -05:00
Sanjay Patel 58b6c5d932 [LoopUtils] reorder logic for creating reduction; NFC
If we are using a shuffle reduction, we don't need to
go through the switch on opcode - return early.
2021-01-04 16:05:02 -05:00
Whitney Tsang de6d43f16c Revert "[LoopNest] Allow empty basic blocks without loops"
This reverts commit 9a17bff4f7.
2021-01-04 20:42:21 +00:00
Whitney Tsang 9a17bff4f7 [LoopNest] Allow empty basic blocks without loops
Allow loop nests with empty basic blocks (not belonging to any loop) at
different levels to still be considered perfect.

Reviewers: Meinersbur

Differential Revision: https://reviews.llvm.org/D93665
2021-01-04 19:59:50 +00:00
Philip Reames 7c63aac7bd Revert "[LoopDeletion] Break backedge of loops when known not taken"
This reverts commit dd6bb367d1.

Multi-stage builders are showing an assertion failure w/LCSSA not being preserved on entry to IndVars.  Reason isn't clear, reverting while investigating.
2021-01-04 09:50:47 -08:00
Philip Reames dd6bb367d1 [LoopDeletion] Break backedge of loops when known not taken
The basic idea is that if SCEV can prove the backedge isn't taken, we can go ahead and get rid of the backedge (and thus the loop) while leaving the rest of the control in place. This nicely handles cases with dispatch between multiple exits and internal side effects.
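A hypothetical sketch of the idea (names and constants are made up, not from the patch's tests):

```llvm
declare void @effect()

define void @single_iteration() {
entry:
  br label %loop
loop:
  %iv = phi i32 [ 0, %entry ], [ %iv.next, %loop ]
  call void @effect()                   ; side effect: the loop body itself can't just be deleted
  %iv.next = add i32 %iv, 1
  %cmp = icmp ult i32 %iv.next, 1
  br i1 %cmp, label %loop, label %exit  ; SCEV proves this backedge is never taken
exit:
  ret void
}
```

Here the backedge branch can be rewritten into an unconditional branch to %exit, dissolving the loop while keeping its body and side effects in place.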

Differential Revision: https://reviews.llvm.org/D93906
2021-01-04 09:19:29 -08:00
Florian Hahn c367258b5c
[SimplifyCFG] Enabled hoisting late in LTO pipeline.
bb7d3af113 disabled hoisting in SimplifyCFG by default, but enabled it
late in the pipeline. But it appears as if the LTO pipelines got missed.

This patch adjusts the LTO pipelines to also enable hoisting in the
later stages.

Unfortunately, I don't think there's an easy way to add a test for this change.

Reviewed By: lebedev.ri

Differential Revision: https://reviews.llvm.org/D93684
2021-01-04 16:26:58 +00:00
Florian Hahn e0905553b4
[ArgPromotion] Delay dead GEP removal until doPromotion.
Currently ArgPromotion removes dead GEPs as part of the legality check
in isSafeToPromoteArgument. If no promotion happens, this means the pass
claims no modifications happened, even though GEPs were removed.

This patch fixes the issue by delaying removal of dead GEPs until
doPromotion: isSafeToPromoteArgument can simply skip dead GEPs and
the code in doPromotion dealing with GEPs is updated to account for
dead GEPs. Once we have committed to promotion, it should be safe to
remove dead GEPs.

Alternatively, isSafeToPromoteArgument could return an additional boolean
to indicate whether it made changes, but this is quite cumbersome and
there should be no real benefit to weeding out some dead GEPs here if we
do not perform promotion.

I added a test for the case where dead GEPs need to be removed when
promotion happens in 578c5a0c6e.

Fixes PR47477.

Reviewed By: jdoerfert

Differential Revision: https://reviews.llvm.org/D93991
2021-01-04 09:51:20 +00:00
Andrew Litteken 5c951623bc [IROutliner] Refactoring errors in the cost model from past patches.
A variable was reused that should not have been, due to confusion
while committing patches.
2021-01-04 00:11:18 -06:00
Andrew Litteken 05e6ac4eb8 [IROutliner] Removing a duplicate addition, causing overestimates in IROutliner.
There was an extra addition left over from a previous commit for the
cost model; this removes it.
2021-01-03 23:36:28 -06:00
Roman Lebedev 98cd1c33e3
[NFC][SimplifyCFG] Hoist 'original' DomTree verification from simplifyOnce() into run()
This is NFC since SimplifyCFG still currently defaults to not preserving DomTree.

SimplifyCFGOpt::simplifyOnce() is only called from SimplifyCFGOpt::run(),
and cannot be called externally, since SimplifyCFGOpt is defined in a .cpp file.
This avoids some needless verifications, and is thus a bit faster
without sacrificing precision.
2021-01-04 01:02:02 +03:00
Roman Lebedev a7684940f0
[SimplifyCFG] SimplifyTerminatorOnSelect(): fix/tune DomTree updates
We only need to remove non-TrueBB/non-FalseBB successors,
and we only need to do that once. We don't need to insert
any new edges, because no new successors will be added.
2021-01-04 01:02:02 +03:00
Roman Lebedev 70935b9595
[NFC][SimplifyCFG] SimplifyTerminatorOnSelect(): pull out OldTerm->getParent() into a variable 2021-01-04 01:02:02 +03:00
Kazu Hirata ba82c0b315 [llvm] Call *(Set|Map)::erase directly (NFC)
We can erase an item in a set or map without checking its membership
first.
2021-01-03 09:57:47 -08:00
Juneyoung Lee 1fc992bd86 [Scalarizer] Use poison as insertelement's placeholder
This patch makes Scalarizer use poison as insertelement's placeholder.

It contains two changes in Scalarizer.cpp, and neither change alters the semantics of the optimized program,
because the placeholder value (poison) is already completely hidden by the following insertelement instructions.

The first change, in visitBitCastInst(), creates a poison vector of MidTy and consecutively inserts FanIn times,
where FanIn is the number of elements of MidTy.
The second change, in ScalarizerVisitor::finish(), creates a poison value of Op->getType(), which is then filled in by
Count insertelements.

The test diffs show that the poison value is never exposed after insertelements.
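For illustration, a hypothetical example of the shape of IR the Scalarizer produces: every lane of the result is overwritten by an insertelement, so the poison base value is never observed.

```llvm
define <4 x i32> @rebuild(i32 %s0, i32 %s1, i32 %s2, i32 %s3) {
  %v0 = insertelement <4 x i32> poison, i32 %s0, i32 0
  %v1 = insertelement <4 x i32> %v0, i32 %s1, i32 1
  %v2 = insertelement <4 x i32> %v1, i32 %s2, i32 2
  %v3 = insertelement <4 x i32> %v2, i32 %s3, i32 3
  ret <4 x i32> %v3
}
```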

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D93989
2021-01-04 00:35:28 +09:00
Roman Lebedev 5fa241a657
[SimplifyCFG] FoldValueComparisonIntoPredecessors(): fine-tune/fix DomTree preservation, take 2 2021-01-03 01:45:48 +03:00
Roman Lebedev 6a3a8d17eb
[SimplifyCFG] FoldValueComparisonIntoPredecessors(): fine-tune/fix DomTree preservation 2021-01-03 01:45:48 +03:00
Roman Lebedev 7c8b8063b6
[SimplifyCFG][AMDGPU] AMDGPUUnifyDivergentExitNodes: SimplifyCFG isn't ready to preserve PostDomTree
There are a number of transforms in SimplifyCFG that take DomTree out of
DomTreeUpdater, and do updates manually. Until they are fixed,
user passes are unable to claim that PDT is preserved.

Note that the default for SimplifyCFG is still not to preserve DomTree,
so this is still effectively NFC.
2021-01-03 01:45:46 +03:00
Kazu Hirata 530c5af6a4 [Transforms] Construct SmallVector with iterator ranges (NFC) 2021-01-02 09:24:17 -08:00
Florian Hahn c50f9b2351
[LV] Clean up trailing whitespace (NFC).
Clean up some stray whitespace that sneaked in recently.
2021-01-02 16:43:13 +00:00
Roman Lebedev b9da488ad7
[SimplifyCFG] Don't actually take DomTreeUpdater unless we intend to maintain DomTree validity
This guards against unintentional mistakes
like the one I just fixed in the previous commit.
2021-01-02 14:40:55 +03:00
Roman Lebedev b4429f3cdd
[SimplifyCFG] Teach removeUndefIntroducingPredecessor to preserve DomTree 2021-01-02 01:01:20 +03:00
Roman Lebedev 657c1e09da
[SimplifyCFG] Teach eliminateDeadSwitchCases() to preserve DomTree, part 2 2021-01-02 01:01:18 +03:00
Roman Lebedev f1ce696056
[SimplifyCFG] Teach tryWidenCondBranchToCondBranch() to preserve DomTree 2021-01-02 01:01:17 +03:00
Roman Lebedev e08fea3b24
[SimplifyCFGPass] Ensure that DominatorTreeWrapperPass is init'd before SimplifyCFG
It's probably better than hoping that it will happen to be
already initialized.
2021-01-02 01:01:17 +03:00
Kazu Hirata f43daf1b62 [SSAUpdater] Remove unused code InstrIsPHI (NFC)
The last use of this function was removed on Jan 4, 2018 in
commit 90ecac01e9.
2021-01-01 12:44:52 -08:00
Sanjay Patel c74e8539ff [Analysis] flatten enums for recurrence types
This is almost all mechanical search-and-replace and
no-functional-change-intended (NFC). Having a single
enum makes it easier to match/reason about the
reduction cases.

The goal is to remove `Opcode` from reduction matching
code in the vectorizers because that makes it harder to
adapt the code to handle intrinsics.

The code in RecurrenceDescriptor::AddReductionVar() is
the only place that required closer inspection. It uses
a RecurrenceDescriptor and a second InstDesc to sometimes
overwrite part of the struct. It seems like we should be
able to simplify that logic, but it's not clear exactly
which cmp+sel patterns we are trying to handle/avoid.
2021-01-01 12:20:16 -05:00
Florian Hahn d9f306aa52
[LV] Fix crash when generating remarks with multi-exit loops.
If DoExtraAnalysis is true (e.g. because remarks are enabled), we
continue with the analysis rather than exiting. Update the code to
conditionally check if the ExitBB has phis or does not have a single
predecessor. Otherwise a nullptr is dereferenced when DoExtraAnalysis is true.
2021-01-01 13:54:41 +00:00
Roman Lebedev 831636b0e6
[SimplifyCFG] SUCCESS! Teach createUnreachableSwitchDefault() to preserve DomTree
This pretty much concludes the patch series for updating SimplifyCFG
to preserve DomTree. All 318 dedicated `-simplifycfg` tests now pass
with `-simplifycfg-require-and-preserve-domtree=1`.

There are a few leftovers that apparently don't have good test coverage.
I do not yet know what gaps in test coverage the wider-scale testing
will reveal, but the default flip might be close.
2021-01-01 03:25:25 +03:00
Roman Lebedev e1440d43bc
[SimplifyCFG] Teach tryToSimplifyUncondBranchWithICmpInIt() to preserve DomTree 2021-01-01 03:25:25 +03:00
Roman Lebedev 8866583953
[SimplifyCFG] Teach FoldValueComparisonIntoPredecessors() to preserve DomTree, part 2 2021-01-01 03:25:24 +03:00
Roman Lebedev a815b6b2b2
[SimplifyCFG] Teach eliminateDeadSwitchCases() to preserve DomTree, part 1 2021-01-01 03:25:24 +03:00
Roman Lebedev 0d2f219d4d
[SimplifyCFG] Teach SimplifyEqualityComparisonWithOnlyPredecessor() to preserve DomTree, part 3 2021-01-01 03:25:23 +03:00
Roman Lebedev 9f17dab1f4
[SimplifyCFG] Teach simplifyIndirectBr() to preserve DomTree 2021-01-01 03:25:23 +03:00
Roman Lebedev b7c463d7b8
[SimplifyCFG] Teach FoldBranchToCommonDest() to preserve DomTree, part 2 2021-01-01 03:25:23 +03:00
Roman Lebedev c1b825d4b8
[SimplifyCFG] Teach FoldValueComparisonIntoPredecessors() to preserve DomTree, part 1 2021-01-01 03:25:22 +03:00
Andrew Litteken 1a9eb19af9 [IROutliner] Adding consistent function attribute merging
When combining extracted functions, they may have different function
attributes. We want to make sure that we do not make any assumptions,
or lose any information. This attempts to make sure that we consolidate
function attributes to their most general case.

Tests:
llvm/test/Transforms/IROutliner/outlining-compatible-and-attribute-transfer.ll
llvm/test/Transforms/IROutliner/outlining-compatible-or-attribute-transfer.ll

Reviewers: jdoerfert, paquette

Differential Revision: https://reviews.llvm.org/D87301
2020-12-31 12:30:23 -06:00
Fangrui Song a90b42b0fe [ThinLTO] Default -enable-import-metadata to false
The default value is dependent on `-DLLVM_ENABLE_ASSERTIONS={off,on}` (D22167), which is
error-prone. The few tests checking `!thinlto_src_module` can specify -enable-import-metadata explicitly.

Reviewed By: tejohnson

Differential Revision: https://reviews.llvm.org/D93959
2020-12-31 10:04:21 -08:00
Dávid Bolvanský ae69fa9b9f [InstCombine] Transform (A + B) - (A & B) to A | B (PR48604)
define i32 @src(i32 %x, i32 %y) {
%0:
  %a = add i32 %x, %y
  %o = and i32 %x, %y
  %r = sub i32 %a, %o
  ret i32 %r
}
=>
define i32 @tgt(i32 %x, i32 %y) {
%0:
  %b = or i32 %x, %y
  ret i32 %b
}
Transformation seems to be correct!

https://alive2.llvm.org/ce/z/2fhW6r
2020-12-31 15:04:32 +01:00
Dávid Bolvanský 742ea77ca4 [InstCombine] Transform (A + B) - (A | B) to A & B (PR48604)
define i32 @src(i32 %x, i32 %y) {
%0:
  %a = add i32 %x, %y
  %o = or i32 %x, %y
  %r = sub i32 %a, %o
  ret i32 %r
}
=>
define i32 @tgt(i32 %x, i32 %y) {
%0:
  %b = and i32 %x, %y
  ret i32 %b
}
Transformation seems to be correct!

https://alive2.llvm.org/ce/z/aQRh2j
2020-12-31 14:03:20 +01:00
Bogdan Graur 8bee4d4e8f Revert "[LoopDeletion] Allows deletion of possibly infinite side-effect free loops"
Test clang/test/Misc/loop-opt-setup.c fails when executed in Release.

This reverts commit 6f1503d598.

Reviewed By: SureYeaah

Differential Revision: https://reviews.llvm.org/D93956
2020-12-31 11:47:49 +00:00
Atmn Patel 6f1503d598 [LoopDeletion] Allows deletion of possibly infinite side-effect free loops
From C11 and C++11 onwards, a forward-progress requirement has been
introduced for both languages. In the case of C, loops with non-constant
conditionals that do not have any observable side-effects (as defined by
6.8.5p6) can be assumed by the implementation to terminate, and in the
case of C++, this assumption extends to all functions. The clang
frontend will emit the `mustprogress` function attribute for C++
functions (D86233, D85393, D86841) and emit the loop metadata
`llvm.loop.mustprogress` for every loop in C11 or later that has a
non-constant conditional.

This patch modifies LoopDeletion so that only loops with
the `llvm.loop.mustprogress` metadata or loops contained in functions
that are required to make progress (`mustprogress` or `willreturn`) are
checked for observable side-effects. If these loops do not have an
observable side-effect, then we delete them.

Loops without observable side-effects that do not satisfy the above
conditions will not be deleted.

Reviewed By: jdoerfert

Differential Revision: https://reviews.llvm.org/D86844
2020-12-30 21:43:01 -05:00
Kazu Hirata 95ea86587c [PGO] Use isa instead of dyn_cast (NFC) 2020-12-30 17:45:38 -08:00
Roman Lebedev 51879a5256
[LoopIdiom] 'left-shift until bittest': don't forget to check that PHI node is in loop header
Fixes an issue reported by Peter Collingbourne in
https://reviews.llvm.org/D91726#2475301
2020-12-30 23:58:41 +03:00
Roman Lebedev 7f221c9196
[SimplifyCFG] Teach SwitchToLookupTable() to preserve DomTree 2020-12-30 23:58:41 +03:00
Roman Lebedev a17025aa61
[SimplifyCFG] Teach switchToSelect() to preserve DomTree 2020-12-30 23:58:40 +03:00
Roman Lebedev c45f765c0d
[SimplifyCFG] Teach SimplifyBranchOnICmpChain() to preserve DomTree 2020-12-30 23:58:40 +03:00
Sanjay Patel 8ca60db40b [LoopUtils] reduce FMF and min/max complexity when forming reductions
I don't know if there's some way this changes what the vectorizers
may produce for reductions, but I have added test coverage with
3567908 and 5ced712 to show that both passes already have bugs in
this area. Hopefully this does not make things worse before we can
really fix it.
2020-12-30 15:22:26 -05:00
Yuanfang Chen 277ebe46c6 Fix `LLVM_ENABLE_MODULES=On` build
for commit 480936e741.
2020-12-30 10:54:04 -08:00
Andrew Litteken fe431103b6 [IROutliner] Adding option to enable outlining from linkonceodr functions
There are functions that the linker is able to automatically
deduplicate; we do not outline from these functions by default. This
option allows outlining from those functions.

Tests:
llvm/test/Transforms/IROutliner/outlining-odr.ll

Reviewers: jroelofs, paquette

Differential Revision: https://reviews.llvm.org/D87309
2020-12-30 12:08:04 -06:00
Sanjay Patel e90ea76380 [IR] remove 'NoNan' param when creating FP reductions
This is no-functional-change-intended (AFAIK, we can't
isolate this difference in a regression test).

That's because the callers should be setting the IRBuilder's
FMF field when creating the reduction and/or setting those
flags after creating. It doesn't make sense to override this
one flag alone.

This is part of a multi-step process to clean up the FMF
setting/propagation. See PR35538 for an example.
2020-12-30 09:51:23 -05:00
Juneyoung Lee 420d046d6b clang-format, address warnings 2020-12-30 23:05:07 +09:00
Juneyoung Lee 9b29610228 Use unary CreateShuffleVector if possible
As mentioned in D93793, there are quite a few places where unary `IRBuilder::CreateShuffleVector(X, Mask)` can be used
instead of `IRBuilder::CreateShuffleVector(X, Undef, Mask)`.
Let's update them.

Actually, it would have been more natural if the patches were made in this order:
(1) let them use unary CreateShuffleVector first
(2) update IRBuilder::CreateShuffleVector to use poison as a placeholder value (D93793)

The order is swapped, but in terms of correctness it is still fine.

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D93923
2020-12-30 22:36:08 +09:00
Juneyoung Lee bfedd5d2b6 [ConstraintElimination] Add support for select form of and/or
This patch adds support for the select form of and/or.
Currently there is an ongoing effort to move towards using `select a, b, false` instead of `and i1 a, b` and
`select a, true, b` instead of `or i1 a, b` as well.
D93065 has links to relevant changes.
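For reference, the select forms in question look like this (a hypothetical sketch, not from the patch's tests):

```llvm
define i1 @logical_and(i1 %a, i1 %b) {
  %r = select i1 %a, i1 %b, i1 false   ; select form of `and i1 %a, %b`
  ret i1 %r
}

define i1 @logical_or(i1 %a, i1 %b) {
  %r = select i1 %a, i1 true, i1 %b    ; select form of `or i1 %a, %b`
  ret i1 %r
}
```

Unlike the plain `and`/`or`, the select forms do not propagate poison from the second operand when the first operand already decides the result.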

Alive2 proof: (undef input was disabled due to timeout :( )
- and: https://alive2.llvm.org/ce/z/AgvFbQ
- or: https://alive2.llvm.org/ce/z/KjLJyb

Differential Revision: https://reviews.llvm.org/D93935
2020-12-30 21:27:36 +09:00
Andrew Litteken 30feb93036 [IROutliner] Adding support for swift errors in the IROutliner
Since some values can be swift errors, we need to make sure that we
correctly propagate the parameter attributes.

Tests found at:
llvm/test/Transforms/IROutliner/outlining-swift-error.ll

Reviewers: jroelofs, paquette

Recommit of: 71867ed5e6

Differential Revision: https://reviews.llvm.org/D87742
2020-12-30 01:17:27 -06:00
Andrew Litteken eeb99c2ac2 Revert "[IROutliner] Adding support for swift errors"
This reverts commit 71867ed5e6.

Reverting for lack of commit messages.
2020-12-30 01:17:27 -06:00
Andrew Litteken 71867ed5e6 [IROutliner] Adding support for swift errors 2020-12-30 01:14:55 -06:00
Luo, Yuanke 981a0bd858 [X86] Add x86_amx type for intel AMX.
The x86_amx type is used for AMX intrinsics. <256 x i32> is bitcast to x86_amx when
it is used by AMX intrinsics, and x86_amx is bitcast to <256 x i32> when it
is used by a load/store instruction. So AMX intrinsics only operate on type x86_amx.
It can help to separate AMX intrinsics from LLVM IR instructions (+-*/).
Thanks to Craig for the idea. This patch depends on https://reviews.llvm.org/D87981.

Differential Revision: https://reviews.llvm.org/D91927
2020-12-30 13:52:13 +08:00
Kazu Hirata 16d20e2554 [Transforms/Utils] Construct SmallVector with iterator ranges (NFC) 2020-12-29 19:23:23 -08:00
Andrew Litteken df4a931c63 [IROutliner] Adding OptRemarks to the IROutliner Pass
This prints OptRemarks at each location where a decision is made to not
outline, or to outline a specific section for the IROutliner pass.

Test:
llvm/test/Transforms/IROutliner/opt-remarks.ll

Reviewers: jroelofs, paquette

Differential Revision: https://reviews.llvm.org/D87300
2020-12-29 15:52:08 -06:00
Roman Lebedev 39a56f7f17
[SimplifyCFG] Teach SimplifyTerminatorOnSelect() to preserve DomTree 2020-12-30 00:48:12 +03:00
Roman Lebedev ec0b671a61
[SimplifyCFG] Teach SimplifyCondBranchToCondBranch() to preserve DomTree 2020-12-30 00:48:12 +03:00
Roman Lebedev 307156246f
[SimplifyCFG] Teach mergeConditionalStoreToAddress() to preserve DomTree 2020-12-30 00:48:11 +03:00
Roman Lebedev d4c0abb4a3
[SimplifyCFG] Teach FoldCondBranchOnPHI() to preserve DomTree 2020-12-30 00:48:11 +03:00
Roman Lebedev b8121b2e62
[SimplifyCFG] Teach SinkCommonCodeFromPredecessors() to preserve DomTree 2020-12-30 00:48:11 +03:00
Roman Lebedev 18c407bf4c
[SimplifyCFG] Teach HoistThenElseCodeToIf() to preserve DomTree 2020-12-30 00:48:10 +03:00
Roman Lebedev fe9bdd9621
[SimplifyCFG] Teach SimplifyEqualityComparisonWithOnlyPredecessor() to preserve DomTree, part 2 2020-12-30 00:48:10 +03:00
Roman Lebedev 6027e05dbf
[SimplifyCFG] Teach SimplifyEqualityComparisonWithOnlyPredecessor() to preserve DomTree, part 1 2020-12-30 00:48:10 +03:00
Sanjay Patel 8d18bc8e6d [Utils] reduce code in createTargetReduction(); NFC
The switch duplicated the translation in getRecurrenceBinOp().
This code is still weird because it translates to the TTI
ReductionFlags for min/max, but then createSimpleTargetReduction()
converts that back to RecurrenceDescriptor::MinMaxRecurrenceKind.
2020-12-29 15:56:19 -05:00
Sanjay Patel 21a3a0225d [SLP] replace local reduction enum with RecurrenceKind; NFCI
I'm not sure if the SLP enum was created before the IVDescriptor
RecurrenceDescriptor / RecurrenceKind existed, but the code in
SLP is now redundant with that class, so it just makes things
more complicated to have both. We eventually call LoopUtils
createSimpleTargetReduction() to create reduction ops, so we
might as well standardize on those enum names.

There's still a question of whether we need to use TTI::ReductionFlags
vs. MinMaxRecurrenceKind, but that can be another clean-up step.

Another option would just be to flatten the enums in RecurrenceDescriptor
into a single enum. There isn't much benefit (smaller switches?) to
having a min/max subset.
2020-12-29 14:52:11 -05:00
Andrew Litteken 6df161a2fb [IROutliner] Adding a cost model, and debug option to turn the model off.
This adds a cost model that takes into account the total number of
machine instructions to be removed from each region, the number of
instructions added by adding a new function with a set of instructions,
and the instructions added by handling arguments.

Tests not adding flags:

llvm/test/Transforms/IROutliner/outlining-cost-model.ll

Reviewers: jroelofs, paquette

Differential Revision: https://reviews.llvm.org/D87299
2020-12-29 12:43:41 -06:00
Roman Lebedev 374ef57f13
[InstCombine] 'hoist xor-by-constant from xor-by-value': completely give up on constant exprs
As Mikael Holmén notes in the post-commit review for the first fix,
https://reviews.llvm.org/rGd4ccef38d0bb#967466,
not hoisting constantexprs is not enough:
if the xor originally was a constantexpr (i.e. X is a constantexpr),
`SimplifyAssociativeOrCommutative()` in `visitXor()` will immediately
undo this transform, thus again causing an infinite combine loop.

This transform has resulted in a surprising number of constantexpr failures.
2020-12-29 16:28:18 +03:00
Arthur Eubanks c2ef06d3dd [NewPM] Port infer-address-spaces
And add it to the AMDGPU opt pipeline.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D93880
2020-12-28 19:58:12 -08:00
Kazu Hirata 5d2529f28f [Scalar] Construct SmallVector with iterator ranges (NFC) 2020-12-28 19:55:18 -08:00
Andrew Litteken 1e23802507 [IROutliner] Merging identical output blocks for extracted functions.
Many of the sets of output stores will be the same. When a block is
created, we check if there is an output block with the same set of store
instructions. If there is, we map the output block of the region back
to the block, so that the extra argument controlling the switch
statement can be set to the appropriate block value.

Tests:
- llvm/test/Transforms/IROutliner/outlining-same-output-blocks.ll

Reviewers: jroelofs, paquette

Differential Revision: https://reviews.llvm.org/D87298
2020-12-28 21:01:48 -06:00
Andrew Litteken e6ae623314 [IROutliner] Adding support for consolidating functions with different output arguments.
Certain regions can have values introduced inside the region that are
used outside of the region. These may not be the same for each similar
region, so we must create one over arching set of arguments for the
consolidated function.

We do this by iterating over the outputs for each extracted function,
and creating as many different arguments as needed to encapsulate the different
output sets. For each output set, we create a different block with the
necessary stores from the value to the output register. There is then
one switch statement, controlled by an argument to the function, to
differentiate which block to use.

Changed Tests for consistency:
llvm/test/Transforms/IROutliner/extraction.ll
llvm/test/Transforms/IROutliner/illegal-assumes.ll
llvm/test/Transforms/IROutliner/illegal-memcpy.ll
llvm/test/Transforms/IROutliner/illegal-memmove.ll
llvm/test/Transforms/IROutliner/illegal-vaarg.ll

Tests to test new functionality:
llvm/test/Transforms/IROutliner/outlining-different-output-blocks.ll
llvm/test/Transforms/IROutliner/outlining-remapped-outputs.ll
llvm/test/Transforms/IROutliner/outlining-same-output-blocks.ll

Reviewers: jroelofs, paquette

Differential Revision: https://reviews.llvm.org/D87296
2020-12-28 16:17:07 -06:00
Nikita Popov 4a16c507cb [InstCombine] Disable unsafe select transform behind a flag
This disables the poison-unsafe select -> and/or transform behind
a flag (we continue to perform the fold by default). This is intended
to simplify evaluation and testing while we teach various passes
to directly recognize the select pattern.

This only disables the main select -> and/or transform. A number of
related ones are instead changed to canonicalize to the a ? b : false
and a ? true : b forms which represent and/or respectively. This
requires a bit of care to avoid infinite loops, as we do not want
!a ? b : false to be converted into a ? false : b.

The basic idea here is the same as D93065, but keeps the change
behind a flag for now.

Differential Revision: https://reviews.llvm.org/D93840
2020-12-28 22:43:52 +01:00
Roman Lebedev ef93f7a11c
[SimplifyCFG] FoldBranchToCommonDest: gracefully handle unreachable code ()
We might be dealing with unreachable code,
so the bonus instruction we clone might be self-referencing.

There is a sanity check that all uses of bonus instructions
that are not in the original block with said bonus instructions
are PHI nodes, and that is obviously not the case
for self-referencing instructions.

So if we find such a use, just rewrite it.

Thanks to Mikael Holmén for the reproducer!

Fixes https://bugs.llvm.org/show_bug.cgi?id=48450#c8
2020-12-28 23:31:19 +03:00
Philip Reames 4b33b23877 Reapply "[LV] Vectorize (some) early and multiple exit loops" w/fix for builder
This reverts commit 4ffcd4fe9a thus restoring e4df6a40da.

The only change from the original patch is to add "llvm::" before the call to empty(iterator_range).  This is a speculative fix for the ambiguity reported on some builders.
2020-12-28 10:13:28 -08:00
Arthur Eubanks 4ffcd4fe9a Revert "[LV] Vectorize (some) early and multiple exit loops"
This reverts commit e4df6a40da.

Breaks Windows bots, e.g. http://45.33.8.238/win/30472/step_4.txt
and http://lab.llvm.org:8011/#/builders/83/builds/2078/steps/5/logs/stdio
2020-12-28 10:05:41 -08:00
Philip Reames e4df6a40da [LV] Vectorize (some) early and multiple exit loops
This patch is a major step towards supporting multiple exit loops in the vectorizer. This patch on its own extends the loop forms allowed in two ways:

    single exit loops which are not bottom tested
    multiple exit loops w/ a single exit block reached from all exits and no phis in the exit block (because of LCSSA this implies no values defined in the loop used later)

The restrictions on multiple exit loop structures will be removed in follow up patches; disallowing cases for now makes the code changes smaller and more obvious. As before, we can only handle loops with entirely analyzable exits. Removing that restriction is much harder, and is not part of currently planned efforts.

The basic idea here is that we can force the last iteration to run in the scalar epilogue loop (if we have one). From the definition of SCEV's backedge taken count, we know that no earlier iteration can exit the vector body. As such, we can leave the decision on which exit to be taken to the scalar code and generate a bottom tested vector loop which runs all but the last iteration.

The existing code already had the notion of requiring one iteration in the scalar epilogue, this patch is mainly about generalizing that support slightly, making sure we don't try to use this mechanism when tail folding, and updating the code to reflect the difference between a single exit block and a unique exit block (very mechanical).

Differential Revision: https://reviews.llvm.org/D93317
2020-12-28 09:40:42 -08:00
Roman Lebedev d4ccef38d0
[InstCombine] 'hoist xor-by-constant from xor-by-value': ignore constantexprs
As reported (in post-commit review) in
https://reviews.llvm.org/D93857,
this fold (as I expected, but failed to come up with test coverage
for despite trying) has issues with constant expressions.
Since we only care about true constants, which constantexprs are not,
don't perform such hoisting for constant expressions.
2020-12-28 20:15:20 +03:00
Yevgeny Rouban d76c1d2247 [RS4GC] Lazily set changed flag when folding single entry phis
The function FoldSingleEntryPHINodes() is changed to return whether
it has changed the IR or not. This return value is used by RS4GC to
set the MadeChange flag accordingly.

Reviewed By: reames
Differential Revision: https://reviews.llvm.org/D93810
2020-12-28 10:54:21 +07:00
Juneyoung Lee 9d70dbdc2b [InstCombine] use poison as placeholder for undemanded elems
Currently undef is used as a don’t-care vector when constructing a vector using a series of insertelement.
However, this is problematic because undef isn’t undefined enough.
In particular, a sequence of insertelement can be optimized to shufflevector, but using undef as its placeholder makes shufflevector a poison-blocking instruction because undef cannot be optimized to poison.
This makes a few straightforward optimizations incorrect, such as:

```
;  https://bugs.llvm.org/show_bug.cgi?id=44185

define <4 x float> @insert_not_undef_shuffle_translate_commute(float %x, <4 x float> %y, <4 x float> %q) {
  %xv = insertelement <4 x float> %q, float %x, i32 2
  %r = shufflevector <4 x float> %y, <4 x float> %xv, <4 x i32> { 0, 6, 2, undef }
  ret <4 x float> %r ; %r[3] is undef
}
=>
define <4 x float> @insert_not_undef_shuffle_translate_commute(float %x, <4 x float> %y, <4 x float> %q) {
  %r = insertelement <4 x float> %y, float %x, i32 1
  ret <4 x float> %r ; %r[3] = %y[3], incorrect if %y[3] = poison
}

Transformation doesn't verify!
ERROR: Target is more poisonous than source
```

I’d like to suggest
1. Using poison as insertelement’s placeholder value (IRBuilder::CreateVectorSplat should be patched too)
2. Updating shufflevector’s semantics to return poison element if mask is undef

Note that poison is currently lowered into UNDEF in SelDag, so codegen part is okay.
m_Undef() matches PoisonValue as well, so existing optimizations will still fire.

The only concern is hidden miscompilations that will go wrong when a poison constant is given.
A conservative way is copying all tests having `insertelement undef` & replacing it with `insertelement poison` & running Alive2 on them, but it will create many tests and people won’t like it. :(

Instead, I’ll simply locally maintain the tests and run Alive2.
If there is any bug found, I’ll report it.

Relevant links: https://bugs.llvm.org/show_bug.cgi?id=43958 , http://lists.llvm.org/pipermail/llvm-dev/2019-November/137242.html

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D93586
2020-12-28 08:58:15 +09:00
Florian Hahn 4ad41902e8
[GVN] Correctly set modified status when doing PRE on indices.
This patch updates GVN to correctly return the modified status, if PRE
is performed on indices. It fixes a crash when building the test-suite
with EXPENSIVE_CHECKS and LTO.
2020-12-27 21:58:31 +00:00
Juneyoung Lee d3f1f7b6bc [EarlyCSE] Use m_LogicalAnd/Or matchers to handle branch conditions
EarlyCSE's handleBranchCondition says:

```
// If the condition is AND operation, we can propagate its operands into the
// true branch. If it is OR operation, we can propagate them into the false
// branch.
```

This holds for the corresponding select patterns as well.

This is a part of an ongoing work for disabling buggy select->and/or transformations.
See llvm.org/pr48353 and D93065 for more context

Proof:
and: https://alive2.llvm.org/ce/z/MQWodU
or: https://alive2.llvm.org/ce/z/9GLbB_

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D93842
2020-12-28 05:36:26 +09:00
Juneyoung Lee f1d648b973 [GVN] Use m_LogicalAnd/Or to propagate equality from branch conditions
This patch makes GVN recognize `select c1, c2, false` as well as `select c1, true, c2`
as a branch condition and propagate equality from these.
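A hypothetical sketch of the kind of propagation this enables (function and label names are made up):

```llvm
define i1 @prop(i1 %c1, i1 %c2) {
entry:
  %cond = select i1 %c1, i1 %c2, i1 false   ; select form of `and i1 %c1, %c2`
  br i1 %cond, label %taken, label %skipped
taken:
  ; Both %c1 and %c2 are known true here, so this simplifies to `ret i1 true`.
  %x = and i1 %c1, %c2
  ret i1 %x
skipped:
  ret i1 false
}
```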

See llvm.org/pr48353, D93065

Differential Revision: https://reviews.llvm.org/D93841
2020-12-28 05:28:38 +09:00
Florian Hahn 0ea3749b3c
[LV] Set up branch from middle block earlier.
Previously the branch from the middle block to the scalar preheader & exit
was being set up at the end of skeleton creation in completeLoopSkeleton.
Inserting SCEV or runtime checks may result in LCSSA phis being created,
if they are required. Adjusting branches afterwards may break those
PHIs.

To avoid this, we can instead create the branch from the middle block
to the exit after we created the middle block, so we have the final CFG
before potentially adjusting/creating PHIs.

This fixes a crash for the included test case. For the non-crashing
case, this is almost an NFC with respect to the generated code. The
only change is the order of the predecessors of the involved branch
targets.

Note an assertion was moved from LoopVersioning() to
LoopVersioning::versionLoop. Adjusting the branches means loop-simplify
form may be broken before constructing LoopVersioning. But LV only uses
LoopVersioning to annotate the loop instructions with !noalias metadata,
which does not require loop-simplify form.

This is a fix for an existing issue uncovered by D93317.
2020-12-27 18:21:12 +00:00
Kazu Hirata 8299fb8f25 [Transforms] Use llvm::append_range (NFC) 2020-12-27 09:57:29 -08:00
Kazu Hirata 789d250613 [CodeGen, Transforms] Use *Map::lookup (NFC) 2020-12-27 09:57:27 -08:00
Sanjay Patel badf0f20f3 [SLP] rename reduction variables for readability; NFC
I am hoping to extend the reduction matching code, and it is
hard to distinguish "ReductionData" from "ReducedValueData".
So extend the tree/root metaphor to include leaves.

Another problem is that the name "OperationData" does not
provide insight into its purpose. I'm not sure if we can alter
that underlying data structure to make the code clearer.
2020-12-26 11:20:25 -05:00
Sanjay Patel c4ca108966 [SLP] use switch to improve readability; NFC
This will get more complicated when we handle intrinsics like maxnum.
2020-12-26 10:59:45 -05:00
Kazu Hirata 46bea9b297 [Local] Remove unused function RemovePredecessorAndSimplify (NFC)
The last use of the function was removed on Sep 29, 2010 in commit
99c985c37d.
2020-12-25 09:35:20 -08:00
Roman Lebedev 25aebe2ccf
[LoopIdiom] 'left-shift-until-bittest': keep no-wrap flags on shift, fix edge-case miscompilation for %x.next
While `%x.curr` is always safe to compute, because `LoopBackedgeTakenCount`
will always be smaller than `bitwidth(X)`, i.e. we never get poison,
rewriting `%x.next` is more complicated, however, because `X << LoopTripCount`
will be poison iff `LoopTripCount == bitwidth(X)` (which will happen
iff `BitPos` is `bitwidth(x) - 1` and `X` is `1`).

So unless we know that isn't the case (as alive2 notes, we know it's safe
to do iff the shift had no-wrap flags, or the bitpos does not indicate the sign bit,
or we know that %x is never `1`), we'll need to emit an alternative,
safe IR, by either just shifting the `%x.curr`, or conditionally selecting
between the computed `%x.next` and `0`.
The former IR looks better, so let's do that.

While there, ensure that we don't drop no-wrap flags from said shift.
2020-12-24 21:20:52 +03:00
Roman Lebedev d9ebaeeb46
[InstCombine] Hoist xor-by-constant from xor-by-value
This is one of the deficiencies that can be observed in
https://godbolt.org/z/YPczsG after the D91038 patch set.

This exposed two missing folds, one was fixed by the previous commit,
another one is `(A ^ B) | ~(A ^ B) --> -1` / `(A ^ B) & ~(A ^ B) --> 0`.

`-early-cse` will catch it: https://godbolt.org/z/4n1T1v,
but it isn't meaningful to fix this in InstCombine,
because we'd need to essentially do our own CSE,
and we can't even rely on `Instruction::isIdenticalTo()`,
because there are no guarantees that the order of operands matches.
So let's just accept it as a loss.
2020-12-24 21:20:50 +03:00
Roman Lebedev 5b78303433
[InstCombine] Fold `a & ~(a ^ b)` to `a & b`
```
----------------------------------------
define i32 @and_xor_not_common_op(i32 %a, i32 %b) {
%0:
  %b2 = xor i32 %b, 4294967295
  %t2 = xor i32 %a, %b2
  %t4 = and i32 %t2, %a
  ret i32 %t4
}
=>
define i32 @and_xor_not_common_op(i32 %a, i32 %b) {
%0:
  %t4 = and i32 %a, %b
  ret i32 %t4
}
Transformation seems to be correct!
```
2020-12-24 21:20:49 +03:00
Roman Lebedev b3021a72a6
[IR][InstCombine] Add m_ImmConstant(), that matches on non-ConstantExpr constants, and use it
A pattern to ignore ConstantExprs is quite common, since they frequently
lead to infinite combine loops, so let's make writing it easier.
2020-12-24 21:20:47 +03:00
Roman Lebedev ff3749fc79
[NFC] SimplifyCFGOpt::simplifyUnreachable(): pacify unused variable warning
Thanks to Luke Benes for pointing it out.
2020-12-24 21:20:46 +03:00
Kazu Hirata df812115e3 [CodeGen, Transforms] Use llvm::any_of (NFC) 2020-12-24 09:08:36 -08:00
Simon Pilgrim 89abe1cf83 [InstCombine] foldICmpUsingKnownBits - use KnownBits signed/unsigned getMin/MaxValue helpers. NFCI.
Replace the local compute*SignedMinMaxValuesFromKnownBits methods with the equivalent KnownBits helpers to determine the min/max value ranges.
2020-12-24 14:22:26 +00:00
Nikita Popov ef2f843347 Revert "[InstCombine] Check inbounds in load/store of gep null transform (PR48577)"
This reverts commit 899faa50f2.

Upon further consideration, this does not fix the right issue.
Doing this fold for non-inbounds GEPs is legal, because the
resulting pointer is still based on null, which has no associated
address range, and as such any access to it is UB.

https://bugs.llvm.org/show_bug.cgi?id=48577#c3
2020-12-24 12:36:56 +01:00
Nikita Popov 90177912a4 Revert "[InstCombine] Fold gep inbounds of null to null"
This reverts commit eb79fd3c92.

This causes stage2 crashes, possibly due to StringMap being
miscompiled. Reverting for now.
2020-12-24 10:20:31 +01:00
Roman Lebedev f8079355c6
[InstCombine] canonicalizeAbsNabs(): don't propagate NSW flag for NABS pattern
As Nuno notes in post-commit review in
https://reviews.llvm.org/D87188#2467915,
it is not correct to keep NSW for the negated abs pattern,
so don't do that.
2020-12-24 00:06:09 +03:00
Nikita Popov 759b8c11c3 [InstCombine] Handle different pointer types when folding gep of null
The source pointer type is not necessarily the same as the result
pointer type, so we can't simply return the original null pointer,
it might be a different one.
2020-12-23 21:58:26 +01:00
Nikita Popov eb79fd3c92 [InstCombine] Fold gep inbounds of null to null
Effectively, this is what we were previously already doing when
the GEP was used in conjunction with a load or store, but this
fold can also be applied more generally:

> The only in bounds address for a null pointer in the default
> address-space is the null pointer itself.
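A minimal hypothetical example of the fold (typed-pointer IR, as used at the time; the function name is made up):

```llvm
define i8* @gep_of_null(i64 %idx) {
  %p = getelementptr inbounds i8, i8* null, i64 %idx
  ret i8* %p          ; with this fold: ret i8* null
}
```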
2020-12-23 21:41:53 +01:00
Nikita Popov 899faa50f2 [InstCombine] Check inbounds in load/store of gep null transform (PR48577)
If the GEP isn't inbounds, then accessing a GEP of null location
is generally not UB.

While this is a minimal fix, the GEP of null handling should
probably be its own fold.
2020-12-23 21:03:22 +01:00
Craig Topper 897990e614 [IROutliner] Use isa instead of dyn_cast where the casted value isn't used. NFC
Fixes unused variable warnings.
2020-12-23 11:40:15 -08:00
Roman Lebedev 2b61e7c68c
[LoopIdiom] 'left-shift until bittest' idiom: support rewriting loop as countable, allow extra cruft
The current state of the transform is still not enough to support
my motivational pattern, because it has one more "induction variable".

I have delayed posting this patch, because originally even just rewriting
the loop as countable wasn't enough to nicely transform my motivational pattern,
because I expected that extra IV to be rewritten afterwards,
but that wasn't happening until I fixed it in D91800.

So, this patch allows the 'left-shift until bittest' loop idiom
as long as the inserted ops are cheap,
and lifts any and all extra use checks on the instructions.

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D92754
2020-12-23 22:28:10 +03:00
Roman Lebedev a0ddc61c5b
[LoopIdiom] 'left-shift until bittest' idiom: support canonical sign bit mask
If the bitmask is for the sign bit, instcombine would have canonicalized
the pattern into a proper sign bit check. Supporting that is still
simple, but requires a bit of a roundtrip - we first have to use
`decomposeBitTestICmp()`, and the rest again just works.

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D91726
2020-12-23 22:28:09 +03:00
Roman Lebedev cb2e5980ba
[LoopIdiom] 'left-shift until bittest' idiom: support constant bit mask
The handling of the case where the mask is a constant is trivial:
if said constant is a power of two, the bit in question is log2(mask),
and the rest just works.

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D91725
2020-12-23 22:28:09 +03:00
Roman Lebedev e124844709
[LoopIdiom] Introduce 'left-shift until bittest' idiom
The motivation here is the following inner loop in the fp16/fp24 -> fp32 expander
that runs as part of the floating-point DNG decompression in the RawSpeed library:
cd380bb9a2/src/librawspeed/decompressors/DeflateDecompressor.cpp (L112-L115)
```
      while (!(fp32_fraction & (1 << 23))) {
        fp32_exponent -= 1;
        fp32_fraction <<= 1;
      }
```
(https://godbolt.org/z/r13YMh)
As one might notice, that loop is currently uncountable, and that whole code stays scalar.
Yet, it is rather trivial to make that loop countable:
 https://godbolt.org/z/do8WMz
and we can prove that via alive2:
 https://alive2.llvm.org/ce/z/7vQnji (ha nice, isn't it?)
... and that allow for the whole fp16->fp32 code to vectorize:
 https://godbolt.org/z/7hYr13

Now, while I'd love to get there, I feel like I should take it in steps.

For now, this introduces support for the most basic case,
where the bit position is known as a variable,
and the loop *will* go away (has no live-outs other than the recurrence,
no extra instructions in the loop).

I have added sufficient (I believe) test coverage,
and alive2 is happy with those transforms.

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D91038
2020-12-23 22:28:09 +03:00
Andrew Litteken b1191c8438 [IROutliner] Adding support for elevating constants that are not the same in each region to arguments
When there are constants that have the same structural location, but not
the same value, between different regions, we cannot simply outline the
region. Instead, we find the constants that are not the same in each
location, and promote them to arguments to be passed into the respective
functions. At each call site, we pass the constant in as an argument
regardless of type.

Added/Edited Tests:

llvm/test/Transforms/IROutliner/outlining-constants-vs-registers.ll
llvm/test/Transforms/IROutliner/outlining-different-constants.ll
llvm/test/Transforms/IROutliner/outlining-different-globals.ll

Reviewers: paquette, jroelofs

Differential Revision: https://reviews.llvm.org/D87294
2020-12-23 13:03:05 -06:00
Evgeniy Brevnov 9fb074e7bb [BPI] Improve static heuristics for "cold" paths.
The current approach doesn't work well in cases when multiple paths are predicted to be "cold". By "cold" paths I mean those containing an "unreachable" instruction, a call marked with the 'cold' attribute, or the 'unwind' handler of an 'invoke' instruction. The issue is that heuristics are applied one by one until the first match, which essentially ignores the relative hotness/coldness of other paths.

The new approach unifies processing of "cold" paths by assigning a predefined absolute weight to each block estimated to be "cold". Then we propagate these weights up/down the IR similarly to the existing approach, and finally set up edge probabilities based on the estimated block weights.

One important difference is how we propagate weight up. The existing approach propagates the same weight to all blocks that are post-dominated by a block with some "known" weight. This is useless at least because it always gives a 50/50 distribution, which is assumed by default anyway. Worse, it causes the algorithm to skip further heuristics and can miss setting a more accurate probability. The new algorithm propagates the weight up only to the blocks that dominate and are post-dominated by a block with some "known" weight; in other words, those blocks that are either always executed or not executed together.

In addition, the new approach processes loops in a uniform way as well. Essentially, loop exit edges are estimated as "cold" paths relative to back edges and should be considered uniformly with other coldness/hotness markers.

Reviewed By: yrouban

Differential Revision: https://reviews.llvm.org/D79485
2020-12-23 22:47:36 +07:00
Kazu Hirata 3c707d73f2 [NewGVN] Remove for_each_found (NFC)
The last use of the function was removed on Sep 30, 2017 in commit
9b926e90d3.
2020-12-22 20:13:27 -08:00
Sanjay Patel 0d15d4b6f4 [SLP] use operand index abstraction for number of operands
I think this is NFC currently, but the bug would be exposed
when we allow binary intrinsics (maxnum, etc) as candidates
for reductions.

The code in matchAssociativeReduction() is using
OperationData::getNumberOfOperands() when comparing whether
the "EdgeToVisit" iterator is in-bounds, so this code must
use the same (potentially offset) operand value to set
the "EdgeToVisit".
2020-12-22 16:05:39 -05:00
Arnold Schwaighofer 333108e8be Add a llvm.coro.end.async intrinsic
The llvm.coro.end.async intrinsic allows specifying a function that is
to be called as the last action before returning. This function will be
inlined after coroutine splitting.

This function can contain a 'musttail' call to allow for guaranteed tail
calling as the last action.

Differential Revision: https://reviews.llvm.org/D93568
2020-12-22 10:52:28 -08:00
Florian Hahn ef4dbb2b7a [LV] Use ScalarEvolution::getURemExpr to reduce duplication.
ScalarEvolution should be able to handle both constant and variable trip
counts using getURemExpr, so we do not have to handle them separately.

This is a small simplification of a56280094e.

Reviewed By: gilr

Differential Revision: https://reviews.llvm.org/D93677
2020-12-22 14:48:42 +00:00
Florian Hahn c0c0ae16c3
[VPlan] Make VPInstruction a VPDef
This patch updates VPInstruction to manage the value it defines
using VPDef. The VPValue is used during VPlan construction and
code generation instead of the plain IR reference where possible.

Reviewed By: gilr

Differential Revision: https://reviews.llvm.org/D90565
2020-12-22 09:53:47 +00:00
Gil Rapaport a56280094e [LV] Avoid needless fold tail
When the trip-count is provably divisible by the maximal/chosen VF, folding the
loop's tail during vectorization is redundant. This commit extends the existing
test for constant trip-counts to any trip-count known to be divisible by
maximal/selected VF by SCEV.

Differential Revision: https://reviews.llvm.org/D93615
2020-12-22 10:25:20 +02:00
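A hedged example of the non-constant case the commit above extends to: the trip count below is not a compile-time constant, but SCEV can still prove it is divisible by a VF of 8, so no folded (masked) tail or scalar remainder loop is needed. Names are illustrative only.

```cpp
// Trip count n = 8 * m is provably divisible by 8, so with VF = 8 the
// vector loop covers every iteration and folding the tail is redundant.
void scale(float *a, int m) {
  int n = 8 * m;
  for (int i = 0; i < n; ++i)
    a[i] *= 2.0f;
}
```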
Ta-Wei Tu d7a6f3a105 [LoopNest] Extend `LPMUpdater` and adaptor to handle loop-nest passes
This is a follow-up patch of D87045.

The patch implements "loop-nest mode" for `LPMUpdater` and `FunctionToLoopPassAdaptor` in which only top-level loops are operated.

`createFunctionToLoopPassAdaptor` decides whether the returned adaptor is in loop-nest mode or not based on the given pass. If the pass is a loop-nest pass or the pass is a `LoopPassManager` which contains only loop-nest passes, the loop-nest version of adaptor is returned; otherwise, the normal (loop) version of adaptor is returned.

Reviewed By: Whitney

Differential Revision: https://reviews.llvm.org/D87531
2020-12-22 08:47:38 +08:00
Congzhe Cao c60a58f8d4 [InstCombine] Add check of i1 types in select-to-zext/sext transformation
When doing select-to-zext/sext transformations, we should
not handle TrueVal and FalseVal of i1 type; otherwise the
transform would result in a zext/sext from i1 to i1.

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D93272
2020-12-21 18:46:24 -05:00
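A minimal sketch of the kind of guard described above, assuming the usual InstCombine shape; this is not the literal D93272 diff, and the helper name is invented.

```cpp
#include "llvm/IR/Instructions.h"
using namespace llvm;

// Hedged sketch: refuse the select-to-zext/sext fold when the select itself
// produces an i1, since "zext/sext i1 to i1" would not be a widening cast.
static bool selectToExtIsApplicable(const SelectInst &SI) {
  return !SI.getType()->isIntegerTy(1);
}
```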
Michael Forster d56982b6f5 Remove unused variables.
Differential Revision: https://reviews.llvm.org/D93635
2020-12-21 16:24:43 +01:00
Simon Pilgrim 88c5b50060 [AggressiveInstCombine] Generalize foldGuardedRotateToFunnelShift to generic funnel shifts (REAPPLIED)
The fold currently only handles rotation patterns, but with the maturation of backend funnel shift handling we can now realistically handle all funnel shift patterns.

This should allow us to begin resolving PR46896 et al.

Ensure we block poison in a funnel shift value - similar to rG0fe91ad463fea9d08cbcd640a62aa9ca2d8d05e0

Reapplied with fix for PR48068 - we weren't checking that the shift values could be hoisted from their basicblocks.

Differential Revision: https://reviews.llvm.org/D90625
2020-12-21 15:22:27 +00:00
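To make the generalization concrete, here is the guarded funnel-shift shape in C++; a rotate is just the special case where both inputs are the same value. The exact patterns the fold recognizes live in the D90625 tests, so treat this as an illustrative shape only.

```cpp
// Guarded funnel shift: the zero-amount guard avoids the UB of shifting a
// 32-bit value by 32 and corresponds to llvm.fshl.i32(Hi, Lo, Amt).
unsigned funnel_shl(unsigned Hi, unsigned Lo, unsigned Amt) {
  Amt &= 31;
  return Amt ? (Hi << Amt) | (Lo >> (32 - Amt)) : Hi;
}

// The previously handled rotate pattern is the same thing with Hi == Lo.
unsigned rotl32(unsigned X, unsigned Amt) { return funnel_shl(X, X, Amt); }
```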
Florian Hahn f250892373
[VPlan] Make VPRecipeBase inherit from VPDef.
This patch makes VPRecipeBase a direct subclass of VPDef, moving the
SubclassID to VPDef.

Reviewed By: gilr

Differential Revision: https://reviews.llvm.org/D90564
2020-12-21 13:34:00 +00:00
Florian Hahn cd608dc8d3
[VPlan] Use VPDef for VPInterleaveRecipe.
This patch updates VPInterleaveRecipe to manage the values it defines
using VPDef. The VPValue is used during VPlan construction and
code generation instead of the plain IR reference where possible.

Reviewed By: gilr

Differential Revision: https://reviews.llvm.org/D90562
2020-12-21 10:56:53 +00:00
David Sherwood 3bf7d47a97 [NFC][InstructionCost] Remove isValid() asserts in SLPVectorizer.cpp
An earlier patch introduced asserts that the InstructionCost is
valid because at that time the ReuseShuffleCost variable was an
unsigned. However, now that the variable is an InstructionCost
instance the asserts can be removed.

See this thread for context:
http://lists.llvm.org/pipermail/llvm-dev/2020-November/146408.html

See this patch for the introduction of the type:
https://reviews.llvm.org/D91174
2020-12-21 09:12:28 +00:00
Kazu Hirata 5d24935f22 [PGO] Remove dead member variable InstrumentFuncEntry (NFC)
This patch removes InstrumentFuncEntry as it is dead.

The constructor of FuncPGOInstrumentation passes InstrumentFuncEntry
to MST, but it doesn't make a local copy as a member variable.
2020-12-20 09:57:05 -08:00
Andrew Litteken 7c6f28a438 [IROutliner] Deduplicating functions that only require inputs.
Extracted regions can have both inputs and outputs.  In addition, the
CodeExtractor removes inputs that are only used in llvm.assumes, and
sunken allocas (values are used entirely in the extracted region as
denoted by lifetime intrinsics).  We also cannot combine sections that
have different constants in the same structural location, and these
constants would have to be elevated to arguments. This patch deduplicates
extracted functions that only have inputs and none of the special cases.

We test that we correctly deduplicate in:
test/Transforms/IROutliner/outlining-same-globals.ll
test/Transforms/IROutliner/outlining-same-constants.ll
test/Transforms/IROutliner/outlining-different-structure.ll

Reviewers: jroelofs, paquette

Differential Revision: https://reviews.llvm.org/D86978
2020-12-19 17:34:34 -06:00
Andrew Litteken b8a2b6af37 Revert "[IROutliner] Deduplicating functions that only require inputs."
Missing reviewers and differential revision in commit message.

This reverts commit 5cdc4f57e5.
2020-12-19 17:33:49 -06:00
Andrew Litteken 5cdc4f57e5 [IROutliner] Deduplicating functions that only require inputs.
Extracted regions can have both inputs and outputs.  In addition, the
CodeExtractor removes inputs that are only used in llvm.assumes, and
sunken allocas (values are used entirely in the extracted region as
denoted by lifetime intrinsics).  We also cannot combine sections that
have different constants in the same structural location, and these
constants would have to be elevated to arguments. This patch deduplicates
extracted functions that only have inputs and none of the special cases.

We test that we correctly deduplicate in:
test/Transforms/IROutliner/outlining-same-globals.ll
test/Transforms/IROutliner/outlining-same-constants.ll
test/Transforms/IROutliner/outlining-different-structure.ll
2020-12-19 17:26:29 -06:00
Roman Lebedev c043f5055e
[SimplifyCFG] Teach FoldBranchToCommonDest() to preserve DomTree, part 1
... for conditional branch case
2020-12-20 00:18:36 +03:00
Roman Lebedev 262ff9c23e
[SimplifyCFG] Teach TryToMergeLandingPad() to preserve DomTree 2020-12-20 00:18:36 +03:00
Roman Lebedev 6a1617d67c
[SimplifyCFG] Teach SimplifyCondBranchToTwoReturns() to preserve DomTree, part 2
... for the custom case returning void.
2020-12-20 00:18:36 +03:00
Roman Lebedev b94520c9ee
[SimplifyCFG] Teach SimplifyCondBranchToTwoReturns() to preserve DomTree, part 1
... for the general case of returning a value.
2020-12-20 00:18:35 +03:00
Roman Lebedev 4d87a6ad13
[NFCI][SimplifyCFG] SimplifyCondBranchToTwoReturns(): pull out BI->getParent() into a variable 2020-12-20 00:18:35 +03:00
Roman Lebedev 83659c7076
[SimplifyCFG] simplifySingleResume(): FoldReturnIntoUncondBranch() already knows how to preserve DomTree
... so just ensure that we pass the DomTreeUpdater into it.

Apparently, there were no dedicated tests just for that functionality,
so I'm adding one here.
2020-12-20 00:18:34 +03:00
Roman Lebedev b7d00e29b7
[SimplifyCFG] Teach simplifySingleResume() to preserve DomTree 2020-12-20 00:18:34 +03:00
Roman Lebedev c209b88dd4
[SimplifyCFG] Teach simplifyCommonResume() to preserve DomTree 2020-12-20 00:18:34 +03:00
Roman Lebedev 76e74d9395
[SimplifyCFG] Teach removeEmptyCleanup() to preserve DomTree 2020-12-20 00:18:33 +03:00
Roman Lebedev 4be8707e64
[SimplifyCFG] Teach FoldTwoEntryPHINode() to preserve DomTree
Still boring, simply drop all edges to successors of DomBlock,
and add an edge to BB instead.
2020-12-20 00:18:33 +03:00
Roman Lebedev b43b77ff9b
[NFCI][SimplifyCFG] simplifyOnce(): also perform DomTree validation
And that exposes that a number of tests don't *actually* manage to
maintain DomTree validity, which is inline with my observations.

Once again, the SimplifyCFG pass currently does not require/preserve DomTree
by default, so this is effectively NFC.
2020-12-20 00:18:32 +03:00
Andrew Litteken c52bcf3a9b [IRSim][IROutliner] Limit to extracting regions that only require
inputs.

Extracted regions can have both inputs and outputs.  In addition, the
CodeExtractor removes inputs that are only used in llvm.assumes, and
sunken allocas (values are used entirely in the extracted region as
denoted by lifetime intrinsics).  We also cannot combine sections that
have different constants in the same structural location, and these
constants will have to elevated to argument. This patch limits the
extracted regions to those that only require inputs, and do not have any
 other special cases.

We test that we do not outline the wrong constants in:
test/Transforms/IROutliner/outliner-different-constants.ll
test/Transforms/IROutliner/outliner-different-globals.ll
test/Transforms/IROutliner/outliner-constant-vs-registers.ll

We test that we correctly outline in:
test/Transforms/IROutliner/outlining-same-globals.ll
test/Transforms/IROutliner/outlining-same-constants.ll
test/Transforms/IROutliner/outlining-different-structure.ll

Reviewers: paquette, plofti

Differential Revision: https://reviews.llvm.org/D86977
2020-12-19 13:33:54 -06:00
Kazu Hirata 56edfcada9 [Target, Transforms] Use contains (NFC) 2020-12-19 10:43:19 -08:00
Aditya Kumar 1ab4db0f84 [HotColdSplit] Reflect full cost of parameters in split penalty
Make the penalty for splitting a region more accurately reflect the cost
of materializing all of the inputs/outputs to/from the region.

This almost entirely eliminates code growth within functions which
undergo splitting in key internal frameworks, and reduces the size of
those frameworks by 2.6% to 3%.

rdar://49167240

Patch by: Vedant Kumar(@vsk)
Reviewers: hiraditya,rjf,t.p.northover
Reviewed By: hiraditya,rjf

Differential Revision: https://reviews.llvm.org/D59715
2020-12-18 17:06:17 -08:00
Akira Hatanaka ffd982f7db [ObjC][ARC] Fix a bug where the inline-asm retain/claim RV marker wasn't
inserted when the original call had a 'returned' argument

The code is testing whether the instruction BBI points to is the call
that is paired up with the retainRV/claimRV call, but it doesn't work
when the call has a 'returned' argument since GetArgRCIdentityRoot looks
through 'returned' arguments.

rdar://72485383
2020-12-18 16:59:06 -08:00
Sanjay Patel 37d0dda739 [SLP] fix typo; NFC 2020-12-18 16:55:52 -05:00
Nikita Popov 1f1145006b [DSE] Use correct memory location for read clobber check
MSSA DSE starts at a killing store, finds an earlier store and
then checks that the earlier store is not read along any paths
(without being killed first). However, it uses the memory location
of the killing store for that, not that of the earlier store that we're
attempting to eliminate.

This has a number of problems:

* Mismatches between what BasicAA considers aliasing and what DSE
  considers an overwrite (even though both are correct in isolation)
  can result in miscompiles. This is PR48279, which D92045 tries to
  fix in a different way. The problem is that we're using a location
  from a store that is potentially not executed and thus may be UB,
  in which case analysis results can be arbitrary.
* Metadata on the killing store may be used to determine aliasing,
  but there is no guarantee that the metadata is valid, as the specific
  killing store may not be executed. Using the metadata on the earlier
  store is valid (it is the store we're removing, so on any execution
  where its removal may be observed, it must be executed).
* The location is imprecise. For full overwrites the killing store
  will always have a location that is larger or equal than the earlier
  access location, so it's beneficial to use the earlier access
  location. This is not the case for partial overwrites, in which
  case either location might be smaller. There is some room for
  improvement here.

Using the earlier access location means that we can no longer cache
which accesses are read for a given killing store, as we may be
querying different locations. However, it turns out that simply
dropping the cache has no notable impact on compile-time.

Differential Revision: https://reviews.llvm.org/D93523
2020-12-18 20:26:53 +01:00
Kazu Hirata 5ac37725df [GVNHoist] Remove successorDominate (NFC)
The function was introduced on Aug 25, 2016 in commit
5f0d0e60d1.

Its last use was removed on Sep 13, 2017 in commit
dfa8741c96.
2020-12-18 10:29:52 -08:00
Roman Lebedev 897c985e1e
[InstCombine] Canonicalize SPF to abs intrinsic
This patch enables canonicalization of SPF_ABS and SPF_ABS
to the abs intrinsic.

This is a recommit, the original try was
05d4c4ebc2,
but it was reverted due to an apparent miscompile,
which since then has just been fixed by the previous commit.

Differential Revision: https://reviews.llvm.org/D87188
2020-12-18 21:18:14 +03:00
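For reference, this is the source-level shape of the SPF_ABS pattern that now becomes the intrinsic; the authoritative IR-level tests are in D87188.

```cpp
// Select-form absolute value: the icmp/negate/select pattern is now
// canonicalized to a call of llvm.abs.i32 in the IR.
int abs_select(int x) { return x < 0 ? -x : x; }
```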
Whitney Tsang 2a814cd9e1 Ensure SplitEdge to return the new block between the two given blocks
This PR implements the function splitBasicBlockBefore to address an
issue
that occurred during SplitEdge(BB, Succ, ...), inside splitBlockBefore.
The issue occurs in SplitEdge when the Succ has a single predecessor
and the edge between the BB and Succ is not critical. This produces
the result ‘BB->Succ->New’. The new function splitBasicBlockBefore
was added to splitBlockBefore to handle the issue and now produces
the correct result ‘BB->New->Succ’.

Below is an example of splitting the block bb1 at its first instruction.

/// Original IR
bb0:
	br bb1
bb1:
        %0 = mul i32 1, 2
	br bb2
bb2:
/// IR after splitEdge(bb0, bb1) using splitBasicBlock
bb0:
	br bb1
bb1:
	br bb1.split
bb1.split:
        %0 = mul i32 1, 2
	br bb2
bb2:
/// IR after splitEdge(bb0, bb1) using splitBasicBlockBefore
bb0:
	br bb1.split
bb1.split
	br bb1
bb1:
        %0 = mul i32 1, 2
	br bb2
bb2:

Differential Revision: https://reviews.llvm.org/D92200
2020-12-18 17:37:17 +00:00
Arnamoy Bhattacharyya 06d5b1c9ad [SROA] Remove Dead Instructions while creating speculative instructions
The SROA pass tries to be lazy about removing dead instructions, which are collected in the DeadInsts list during the iterative run of the pass.  However, it does not remove instructions from the dead list when eraseFromParent() is run on those instructions.

This causes (rare) null pointer dereferences.  For example, in the speculatePHINodeLoads() function, in the following code snippet:

```
   while (!PN.use_empty()) {
     LoadInst *LI = cast<LoadInst>(PN.user_back());
     LI->replaceAllUsesWith(NewPN);
     LI->eraseFromParent();
   }
```

If the Load instruction LI belongs to the DeadInsts list, it should be removed when eraseFromParent() is called.  However, the bug does not show up in most cases, because immediately in the same function, a new LoadInst is created in the following line:

```
LoadInst *Load = PredBuilder.CreateAlignedLoad(
         LoadTy, InVal, Alignment,
         (PN.getName() + ".sroa.speculate.load." + Pred->getName()));
```

This new LoadInst object takes the same memory address as the LI that was just deleted using eraseFromParent(), therefore the bug does not materialize.  In very rare cases the addresses differ, and therefore a dangling pointer is left behind, causing a crash.

Reviewed By: lebedev.ri

Differential Revision: https://reviews.llvm.org/D92431
2020-12-18 11:47:02 -05:00
Sanjay Patel 47aaa99c0e [VectorCombine] allow peeking through GEPs when creating a vector load
This is an enhancement motivated by https://llvm.org/PR16739
(see D92858 for another).

We can look through a GEP to find a base pointer that may be
safe to use for a vector load. If so, then we shuffle (shift)
the necessary vector element over to index 0.

Alive2 proof based on 1 of the regression tests:
https://alive2.llvm.org/ce/z/yPJLkh

The vector translation is independent of endian (verify by
changing to leading 'E' in the datalayout string).

Differential Revision: https://reviews.llvm.org/D93229
2020-12-18 09:25:03 -05:00
Yevgeny Rouban f0e3d1d6ca [IndVars] Fix adding trunc instructions to unwind blocks
A truncate instruction must not be inserted before a landing pad.
The insertion point is now fixed.
2020-12-18 12:52:23 +07:00
Kazu Hirata b621116716 [Transforms] Use llvm::erase_if (NFC) 2020-12-17 19:53:10 -08:00
Rong Xu 31c0b8700b Fix clang-ppc64le-rhel buildbot build error
Fix buildbot build error due to
commit 3733463d: [IR][PGO] Add hot func attribute and use hot/cold
attribute in func section
2020-12-17 19:14:43 -08:00
Rong Xu 3733463dbb [IR][PGO] Add hot func attribute and use hot/cold attribute in func section
The Clang FE currently has hot/cold function attributes, but we only have
the cold function attribute in LLVM IR.

This patch adds support for the hot function attribute to LLVM IR.  This
attribute will be used in setting function section prefix/suffix.
Currently .hot and .unlikely suffix only are added in PGO (Sample PGO)
compilation (through isFunctionHotInCallGraph and
isFunctionColdInCallGraph).

This patch changes the behavior. The new behavior is:
(1) If the user annotates a function as hot or isFunctionHotInCallGraph
    is true, this function will be marked as hot. Otherwise,
(2) If the user annotates a function as cold or
    isFunctionColdInCallGraph is true, this function will be marked as
    cold.

The changes are:
(1) The user-annotated function attribute will be used in setting the
    function section prefix/suffix.
(2) The hot attribute overwrites profile-count-based hotness.
(3) Profile-count-based hotness overwrites the user-annotated cold attribute.

The intention for these changes is to provide the user a way to mark
certain function as hot in cases where training input is hard to cover
all the hot functions.

Differential Revision: https://reviews.llvm.org/D92493
2020-12-17 18:41:12 -08:00
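For reference, the user-annotation side of the behavior above looks like this in C/C++ source; which section prefix/suffix these declarations actually get depends on the target and PGO options, so the comment is indicative only.

```cpp
// With the patch, a user-annotated hot function gets the hot section
// prefix/suffix even without profile data, while a cold annotation is kept
// unless profile counts show the function is actually hot.
__attribute__((hot))  void fast_path();
__attribute__((cold)) void error_path();
```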
Andrew Litteken cea807602a [IRSim][IROutliner] Adding InstVisitor to disallow certain operations.
This adds a custom InstVisitor to return false on instructions that
should not be allowed to be outlined.  These match the illegal
instructions in the IRInstructionMapper, with the exception of the
addition of the llvm.assume intrinsic.

Tested by the tests marked illegal-*.ll, with a test for each kind of
instruction that has been marked as illegal.

Reviewers: jroelofs, paquette

Differential Revisions: https://reviews.llvm.org/D86976
2020-12-17 19:33:57 -06:00
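A minimal sketch of how such a legality visitor can be structured; the instruction kinds rejected below are illustrative, not the IROutliner's actual list.

```cpp
#include "llvm/IR/InstVisitor.h"
using namespace llvm;

// Returns true for instructions that may be outlined; any override that
// returns false marks the candidate region as illegal.
struct OutlineLegalityVisitor
    : public InstVisitor<OutlineLegalityVisitor, bool> {
  bool visitInstruction(Instruction &I) { return true; }  // default: allowed
  bool visitAllocaInst(AllocaInst &) { return false; }    // illustrative
  bool visitCallBrInst(CallBrInst &) { return false; }    // illustrative
};
```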
Roman Lebedev 2d07414ee5
[SimplifyCFG] Teach simplifyUnreachable() to preserve DomTree
Pretty boring, removeUnwindEdge() already knows how to update DomTree,
so if we are to call it, we must first flush our own pending updates;
otherwise, we just stop predecessors from branching to us,
and for certain predecessors, stop their predecessors from
branching to them also.
2020-12-18 00:37:22 +03:00
Roman Lebedev 2ee724863e
[SimplifyCFG] ConstantFoldTerminator() already knows how to preserve DomTree
... so just ensure that we pass the DomTreeUpdater into it.

Fixes DomTree preservation for a number of tests,
all of which are marked as such so that they do not regress.
2020-12-18 00:37:22 +03:00
Roman Lebedev 164e0847a5
[SimplifyCFG] DeleteDeadBlock() already knows how to preserve DomTree
... so just ensure that we pass the DomTreeUpdater into it.

Fixes DomTree preservation for a large number of tests,
all of which are marked as such so that they do not regress.
2020-12-18 00:37:21 +03:00
Bangtian Liu 511cfe9441 Revert "Ensure SplitEdge to return the new block between the two given blocks"
This reverts commit d20e0c3444.
2020-12-17 21:00:37 +00:00
Johannes Doerfert 994bb6eb7d [OpenMP][NFC] Provide a new remark and documentation
If a GPU function is externally reachable we give up trying to find the
(unique) kernel it is called from. This can hinder optimizations. Emit a
remark and explain mitigation strategies.

Reviewed By: tianshilei1992

Differential Revision: https://reviews.llvm.org/D93439
2020-12-17 14:38:26 -06:00
Andrew Litteken dae34463e3 [IRSim][IROutliner] Adding the extraction basics for the IROutliner.
Extracting the similar regions is the first step in the IROutliner.

Using the IRSimilarityIdentifier, we collect the SimilarityGroups and
sort them by how many instructions will be removed.  Each
IRSimilarityCandidate is used to define an OutlinableRegion.  Each
region is ordered by their occurrence in the Module and the regions that
are not compatible with previously outlined regions are discarded.

Each region is then extracted with the CodeExtractor into its own
function.

We test that correctly extract in:
test/Transforms/IROutliner/extraction.ll
test/Transforms/IROutliner/address-taken.ll
test/Transforms/IROutliner/outlining-same-globals.ll
test/Transforms/IROutliner/outlining-same-constants.ll
test/Transforms/IROutliner/outlining-different-structure.ll

Recommit of bf899e8913 fixing memory
leaks.

Reviewers: paquette, jroelofs, yroux

Differential Revision: https://reviews.llvm.org/D86975
2020-12-17 11:27:26 -06:00
Nabeel Omer df2b9a3e02 [DebugInfo] Avoid re-ordering assignments in LCSSA
The LCSSA pass makes use of a function insertDebugValuesForPHIs() to
propagate dbg.value() intrinsics to newly inserted PHI instructions. Faulty
behaviour occurs when the parent PHI of a newly inserted PHI is not the
most recent assignment to a source variable. insertDebugValuesForPHIs ends
up propagating a value that isn't the most recent assignment.

This change removes the call to insertDebugValuesForPHIs() from LCSSA,
preventing incorrect dbg.value intrinsics from being propagated.
Propagating variable locations between blocks will occur later, during
LiveDebugValues.

Differential Revision: https://reviews.llvm.org/D92576
2020-12-17 16:17:32 +00:00
Bangtian Liu d20e0c3444 Ensure SplitEdge to return the new block between the two given blocks
This PR implements the function splitBasicBlockBefore to address an
issue
that occurred during SplitEdge(BB, Succ, ...), inside splitBlockBefore.
The issue occurs in SplitEdge when the Succ has a single predecessor
and the edge between the BB and Succ is not critical. This produces
the result ‘BB->Succ->New’. The new function splitBasicBlockBefore
was added to splitBlockBefore to handle the issue and now produces
the correct result ‘BB->New->Succ’.

Below is an example of splitting the block bb1 at its first instruction.

/// Original IR
bb0:
	br bb1
bb1:
        %0 = mul i32 1, 2
	br bb2
bb2:
/// IR after splitEdge(bb0, bb1) using splitBasicBlock
bb0:
	br bb1
bb1:
	br bb1.split
bb1.split:
        %0 = mul i32 1, 2
	br bb2
bb2:
/// IR after splitEdge(bb0, bb1) using splitBasicBlockBefore
bb0:
	br bb1.split
bb1.split
	br bb1
bb1:
        %0 = mul i32 1, 2
	br bb2
bb2:

Differential Revision: https://reviews.llvm.org/D92200
2020-12-17 16:00:15 +00:00
Florian Hahn 01089c876b
[InstCombine] Preserve !annotation on newly created instructions.
If the source instruction has !annotation metadata, all instructions
created during combining should also have it. Tell the builder to
add it.

The !annotation system was discussed on llvm-dev as part of
'RFC: Combining Annotation Metadata and Remarks'
(http://lists.llvm.org/pipermail/llvm-dev/2020-November/146393.html)

This patch is based on an earlier patch by Francis Visoiu Mistrih.

Reviewed By: thegameg, lebedev.ri

Differential Revision: https://reviews.llvm.org/D91444
2020-12-17 15:20:23 +00:00
Florian Hahn 75c04bfc61
[SimplifyCFG] Preserve !annotation in FoldBranchToCommonDest.
When folding a branch to a common destination, preserve !annotation on
the created instruction, if the terminator of the BB that is going to be
removed has !annotation. This should ensure that !annotation is attached
to the instructions that 'replace' the original terminator.

Reviewed By: jdoerfert, lebedev.ri

Differential Revision: https://reviews.llvm.org/D93410
2020-12-17 14:06:58 +00:00
Jun Ma 0138399903 [InstCombine] Remove scalable vector restriction in InstCombineCasts
Differential Revision: https://reviews.llvm.org/D93389
2020-12-17 22:02:33 +08:00
Florian Hahn 29077ae860
[IRBuilder] Generalize debug loc handling for arbitrary metadata.
This patch extends IRBuilder to allow adding/preserving arbitrary
metadata on created instructions.

Instead of using references to specific metadata nodes (like DebugLoc),
IRbuilder now keeps a vector of (metadata kind, MDNode *) pairs, which
are added to each created instruction.

The patch itself is an NFC and only moves the existing debug location
handling over to the new system. In a follow-up patch it will be used to
preserve !annotation metadata besides !dbg.

The current approach requires iterating over MetadataToCopy to avoid
adding duplicates, but given that the number of metadata kinds to
copy/preserve is going to be very small initially (0, 1 (for !dbg) or 2
(!dbg and !annotation)) that should not matter.

Reviewed By: lebedev.ri

Differential Revision: https://reviews.llvm.org/D93400
2020-12-17 13:27:43 +00:00
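A hedged usage sketch of the mechanism described above; the method names are taken from D93400 as I recall them, so verify against IRBuilder.h before relying on them.

```cpp
#include "llvm/IR/IRBuilder.h"
using namespace llvm;

// Ask the builder to copy !dbg and !annotation from Src onto every
// instruction it creates afterwards (hedged sketch, names per D93400).
static Value *createAddLikeSrc(Instruction *Src, Value *A, Value *B) {
  IRBuilder<> Builder(Src);
  Builder.CollectMetadataToCopy(
      Src, {LLVMContext::MD_dbg, LLVMContext::MD_annotation});
  return Builder.CreateAdd(A, B); // carries the collected metadata kinds
}
```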
Cullen Rhodes 1fd3a04775 [LV] Disable epilogue vectorization for scalable VFs
Epilogue vectorization doesn't support scalable vectorization factors
yet, disable it for now.

Reviewed By: sdesmalen, bmahjour

Differential Revision: https://reviews.llvm.org/D93063
2020-12-17 12:14:03 +00:00
dfukalov 9ed8e0caab [NFC] Reduce include files dependency and AA header cleanup (part 2).
Continuing work started in https://reviews.llvm.org/D92489:

Removed a bunch of includes from "AliasAnalysis.h" and "LoopPassManager.h".

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D92852
2020-12-17 14:04:48 +03:00
Barry Revzin 92310454bf Make LLVM build in C++20 mode
Part of the <=> changes in C++20 make certain patterns of writing equality
operators ambiguous with themselves (sorry!).
This patch goes through and adjusts all the comparison operators such that
they should work in both C++17 and C++20 modes. It also makes two other small
C++20-specific changes (adding a constructor to a type that ceases to be an
aggregate, and adding casts from u8 literals which no longer have type
const char*).

There were four categories of errors that this review fixes.
Here are canonical examples of them, ordered from most to least common:

// 1) Missing const
namespace missing_const {
    struct A {
    #ifndef FIXED
        bool operator==(A const&);
    #else
        bool operator==(A const&) const;
    #endif
    };

    bool a = A{} == A{}; // error
}

// 2) Type mismatch on CRTP
namespace crtp_mismatch {
    template <typename Derived>
    struct Base {
    #ifndef FIXED
        bool operator==(Derived const&) const;
    #else
        // in one case changed to taking Base const&
        friend bool operator==(Derived const&, Derived const&);
    #endif
    };

    struct D : Base<D> { };

    bool b = D{} == D{}; // error
}

// 3) iterator/const_iterator with only mixed comparison
namespace iter_const_iter {
    template <bool Const>
    struct iterator {
        using const_iterator = iterator<true>;

        iterator();

        template <bool B, std::enable_if_t<(Const && !B), int> = 0>
        iterator(iterator<B> const&);

    #ifndef FIXED
        bool operator==(const_iterator const&) const;
    #else
        friend bool operator==(iterator const&, iterator const&);
    #endif
    };

    bool c = iterator<false>{} == iterator<false>{} // error
          || iterator<false>{} == iterator<true>{}
          || iterator<true>{} == iterator<false>{}
          || iterator<true>{} == iterator<true>{};
}

// 4) Same-type comparison but only have mixed-type operator
namespace ambiguous_choice {
    enum Color { Red };

    struct C {
        C();
        C(Color);
        operator Color() const;
        bool operator==(Color) const;
        friend bool operator==(C, C);
    };

    bool c = C{} == C{}; // error
    bool d = C{} == Red;
}

Differential revision: https://reviews.llvm.org/D78938
2020-12-17 10:44:10 +00:00
Florian Hahn eba09a2db9
[InstCombine] Preserve !annotation for newly created instructions.
When replacing an instruction with !annotation with a newly created
replacement, add the !annotation metadata to the replacement.

This mostly covers cases where the new instructions are created using
the ::Create helpers. Instructions created by IRBuilder will be handled
by D91444.

Reviewed By: thegameg

Differential Revision: https://reviews.llvm.org/D93399
2020-12-17 09:06:51 +00:00
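For the ::Create-helper cases mentioned above, the preservation boils down to something like the following hedged sketch (not the literal D93399 diff):

```cpp
#include "llvm/IR/Instruction.h"
#include "llvm/IR/LLVMContext.h"
using namespace llvm;

// Hedged sketch: when New replaces Old, carry over Old's !annotation node so
// remarks can still be attributed to the replacement instruction.
static void copyAnnotationMD(const Instruction *Old, Instruction *New) {
  if (MDNode *Ann = Old->getMetadata(LLVMContext::MD_annotation))
    New->setMetadata(LLVMContext::MD_annotation, Ann);
}
```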
Kazu Hirata 4ad5b634f6 [GCN] Remove unused function handleNewInstruction (NFC)
The function was added without a user on Dec 22, 2016 in commit
7e274e02ae.  It seems to be unused since
then.
2020-12-16 21:57:48 -08:00
Hongtao Yu ac068e014b [CSSPGO] Consume pseudo-probe-based AutoFDO profile
This change enables pseudo-probe-based sample counts to be consumed by the sample profile loader under the regular `-fprofile-sample-use` switch with minimal adjustments to the existing sample file formats. After the counts are imported, a probe helper, aka a `PseudoProbeManager` object, is automatically launched to verify the CFG checksum of every function in the current compilation against the corresponding checksum from the profile. Mismatched checksums will cause a function profile to be skipped. A `SampleProfileProber` pass is scheduled before any of the `SampleProfileLoader` instances so that the CFG checksums as well as probe mappings are available during the profile loading time. The `PseudoProbeManager` object is set up right after the profile reading is done. In the future a CFG-based fuzzy matching could be done in `PseudoProbeManager`.

Samples will be applied only to pseudo probe instructions as well as probed callsites once the checksum verification goes through. Those instructions are processed in the same way that regular instructions would be processed in the line-number-based scenario. In other words, a function is processed in a regular way as if it was reduced to just containing pseudo probes (block probes and callsites).

**Adjustment to profile format**

A CFG checksum field is being added to the existing AutoFDO profile formats. So far only the text format and the extended binary format are supported. For the text format, a new line like
```
!CFGChecksum: 12345
```
is added to the end of the body sample lines. For the extended binary profile format, we introduce a metadata section to store the checksum map from function names to their CFG checksums.

Differential Revision: https://reviews.llvm.org/D92347
2020-12-16 15:57:18 -08:00
alex-t 35ec3ff76d Disable Jump Threading for the targets with divergent control flow
Details: Jump Threading does not make sense for targets with divergent CF
         since they do not use branch prediction for speculative execution.
         Also, in the high-level IR there is not enough information to conclude that the branch is divergent or uniform.
         This may cause errors in further CF lowering.

Reviewed By: rampitec

Differential Revision: https://reviews.llvm.org/D93302
2020-12-17 02:40:54 +03:00
Roman Lebedev d22a47e9ff
[SimplifyCFG] Teach mergeEmptyReturnBlocks() to preserve DomTree
A first real transformation that didn't already know how to do that,
but it's pretty tame - either change the successor of all the predecessors
of a block and carefully delay deletion of the block until after
the DomTree updates are applied, or add a successor to the block.

There wasn't great test coverage for this, so I added extra tests to be sure.
2020-12-17 01:03:50 +03:00
Roman Lebedev 5cce4aff18
[SimplifyCFG] TryToSimplifyUncondBranchFromEmptyBlock() already knows how to preserve DomTree
... so just ensure that we pass the DomTreeUpdater into it.

Fixes DomTree preservation for a large number of tests,
all of which are marked as such so that they do not regress.
2020-12-17 01:03:49 +03:00
Roman Lebedev 49dac4aca0
[SimplifyCFG] MergeBlockIntoPredecessor() already knows how to preserve DomTree
... so just ensure that we pass the DomTreeUpdater into it.

Fixes DomTree preservation for a large number of tests,
all of which are marked as such so that they do not regress.
2020-12-17 01:03:49 +03:00
Roman Lebedev 4fc169f664
[SimplifyCFG] removeUnreachableBlocks() already knows how to preserve DomTree
... so just ensure that we pass the DomTreeUpdater into it.

Apparently, there were no dedicated tests just for that functionality,
so I'm adding one here.
2020-12-17 01:03:49 +03:00
Rong Xu 0abd744597 [PGO] Use the sum of profile counts to fix the function entry count
Raw profile count values for each BB are not kept after profile
annotation. We record function entry count and branch weights
and use them to compute the count when needed.  This mechanism
works well in a perfect world, but often breaks in real programs
because of numeric precision, inconsistent profiles, or bugs in
BFI. This patch uses the sum of profile count values to fix the
function entry count, to make the BFI counts close to the real profile
counts.

Differential Revision: https://reviews.llvm.org/D61540
2020-12-16 13:37:43 -08:00
Nikita Popov e728024808 [DSE] Pass MemoryLocation by const ref (NFC) 2020-12-16 21:47:46 +01:00
Sanjay Patel 38ebc1a13d [VectorCombine] optimize alignment for load transform
Here's another minimal step suggested by D93229 / D93397 .
(I'm trying to be extra careful in these changes because
load transforms are easy to get wrong.)

We can optimistically choose the greater alignment of a
load and its pointer operand. As the test diffs show, this
can improve what would have been unaligned vector loads
into aligned loads.

When we enhance with gep offsets, we will need to adjust
the alignment calculation to include that offset.

Differential Revision: https://reviews.llvm.org/D93406
2020-12-16 15:25:45 -05:00
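A hedged sketch of the "greater alignment" choice from the commit above (variable names assumed; as noted, a gep offset will have to be folded into this once offsets are supported):

```cpp
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Instructions.h"
#include "llvm/Support/Alignment.h"
using namespace llvm;

// Prefer whichever alignment is larger: the one on the load itself or the
// one derivable from its pointer operand.
static Align bestLoadAlignment(const LoadInst *Load, const DataLayout &DL) {
  Align FromLoad = Load->getAlign();
  Align FromPtr = Load->getPointerOperand()->getPointerAlignment(DL);
  return FromLoad.value() >= FromPtr.value() ? FromLoad : FromPtr;
}
```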
Sanjay Patel aaaf0ec72b [VectorCombine] loosen alignment constraint for load transform
As discussed in D93229, we only need a minimal alignment constraint
when querying whether a hypothetical vector load is safe. We still
pass/use the potentially stronger alignment attribute when checking
costs and creating the new load.

There's already a test that changes with the minimum code change,
so splitting this off as a preliminary commit independent of any
gep/offset enhancements.

Differential Revision: https://reviews.llvm.org/D93397
2020-12-16 12:25:18 -05:00
Whitney Tsang fa3693ad0b [LoopNest] Handle loop-nest passes in LoopPassManager
Per http://llvm.org/OpenProjects.html#llvm_loopnest, the goal of this
patch (and other following patches) is to create facilities that allow
implementing loop nest passes that run on top-level loop nests for the
New Pass Manager.

This patch extends the functionality of LoopPassManager to handle
loop-nest passes by specializing the definition of LoopPassManager that
accepts both kinds of passes in addPass.

Only loop passes are executed if L is not a top-level one, and both
kinds of passes are executed if L is top-level. Currently, loop nest
passes should have the following run method:

PreservedAnalyses run(LoopNest &, LoopAnalysisManager &,
LoopStandardAnalysisResults &, LPMUpdater &);

Reviewed By: Whitney, ychen
Differential Revision: https://reviews.llvm.org/D87045
2020-12-16 17:07:14 +00:00
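A skeleton loop-nest pass using the run() signature quoted above; the pass name and body are placeholders.

```cpp
#include "llvm/Analysis/LoopNestAnalysis.h"
#include "llvm/Transforms/Scalar/LoopPassManager.h"
using namespace llvm;

// Placeholder loop-nest pass: when added to a loop-nest-mode adaptor it only
// runs on top-level loop nests.
class PrintOutermostLoopPass : public PassInfoMixin<PrintOutermostLoopPass> {
public:
  PreservedAnalyses run(LoopNest &LN, LoopAnalysisManager &AM,
                        LoopStandardAnalysisResults &AR, LPMUpdater &U) {
    Loop &Outermost = LN.getOutermostLoop(); // root of the nest
    (void)Outermost;
    return PreservedAnalyses::all();
  }
};
```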
Caroline Concatto be9184bc55 [SLPVectorizer]Migrate getEntryCost to return InstructionCost
This patch also changes the return type of getGatherCost and the
signature of the debug function dumpTreeCosts to use InstructionCost.

This patch is part of a series of patches to use InstructionCost instead of
unsigned/int for the cost model functions.

See this thread for context:
http://lists.llvm.org/pipermail/llvm-dev/2020-November/146408.html

See this patch for the introduction of the type:
https://reviews.llvm.org/D91174

Depends on D93049

Differential Revision: https://reviews.llvm.org/D93127
2020-12-16 14:18:40 +00:00
Caroline Concatto 07217e0a1b [CostModel]Migrate getTreeCost() to use InstructionCost
This patch changes the type of cost variables (for instance: Cost, ExtractCost,
SpillCost) to use InstructionCost.
This patch also changes the type of cost variables to InstructionCost in other
functions that use the result of getTreeCost()
This patch is part of a series of patches to use InstructionCost instead of
unsigned/int for the cost model functions.

See this thread for context:
http://lists.llvm.org/pipermail/llvm-dev/2020-November/146408.html

Depends on D91174

Differential Revision: https://reviews.llvm.org/D93049
2020-12-16 13:08:37 +00:00
Bangtian Liu c10757200d Revert "Ensure SplitEdge to return the new block between the two given blocks"
This reverts commit cf638d793c.
2020-12-16 11:52:30 +00:00
Philip Reames 1f6e15566f [LV] Weaken an unnecessarily strong assert [NFC]
Account for the fact that (in the future) the latch might be a switch, not a branch.  The existing code is correct, minus the assert.
2020-12-15 19:07:53 -08:00
Philip Reames af7ef895d4 [LV] Extend dead instruction detection to multiple exiting blocks
Given we haven't yet enabled multiple exiting blocks, this is currently non-functional, but it's an obvious extension which cleans up a later patch.

I don't think this is worth a review (as it's pretty obvious); if anyone disagrees, feel free to revert or comment and I will.
2020-12-15 18:46:32 -08:00
Bangtian Liu cf638d793c Ensure SplitEdge to return the new block between the two given blocks
This PR implements the function splitBasicBlockBefore to address an
issue
that occurred during SplitEdge(BB, Succ, ...), inside splitBlockBefore.
The issue occurs in SplitEdge when the Succ has a single predecessor
and the edge between the BB and Succ is not critical. This produces
the result ‘BB->Succ->New’. The new function splitBasicBlockBefore
was added to splitBlockBefore to handle the issue and now produces
the correct result ‘BB->New->Succ’.

Below is an example of splitting the block bb1 at its first instruction.

/// Original IR
bb0:
	br bb1
bb1:
        %0 = mul i32 1, 2
	br bb2
bb2:
/// IR after splitEdge(bb0, bb1) using splitBasicBlock
bb0:
	br bb1
bb1:
	br bb1.split
bb1.split:
        %0 = mul i32 1, 2
	br bb2
bb2:
/// IR after splitEdge(bb0, bb1) using splitBasicBlockBefore
bb0:
	br bb1.split
bb1.split
	br bb1
bb1:
        %0 = mul i32 1, 2
	br bb2
bb2:

Differential Revision: https://reviews.llvm.org/D92200
2020-12-15 23:32:29 +00:00
Johannes Doerfert dcaec81211 [OpenMP] Use assumptions during ICV tracking
The OpenMP 5.1 assumptions `no_openmp` and `no_openmp_routines` allow us
to ignore calls that would otherwise prevent ICV tracking.

Once we track more ICVs we might need to distinguish the ones that could
be impacted even with `no_openmp_routines`.

Reviewed By: sstefan1

Differential Revision: https://reviews.llvm.org/D92050
2020-12-15 16:51:34 -06:00
Johannes Doerfert d08d490a4c [OpenMPOpt][NFC] Clang format 2020-12-15 16:51:34 -06:00
Roman Lebedev e113317958
[NFCI][SimplifyCFG] Add basic scaffolding for gradually making the pass DomTree-aware
Two observations:
1. Unavailability of the DomTree makes it impossible to perform
   the `FoldBranchToCommonDest()` transform in certain cases,
   where the successor is dominated by the predecessor,
   because we then don't have PHIs, and can't recreate them,
   well, without hand-rolling an 'is dominated by' check,
   which doesn't really look like a great solution to me.
2. Avoiding invalidating DomTree in SimplifyCFG will
   decrease the number of `Dominator Tree Construction` by 5
   (from 28 now, i.e. -18%) in `-O3` old-pm pipeline
   (as per `llvm/test/Other/opt-O3-pipeline.ll`)
   This might or might not be beneficial for compile time.

So the plan is to make SimplifyCFG preserve DomTree, and then
eventually make DomTree fully required and preserved by the pass.

Now, SimplifyCFG is ~7KLOC. I don't think it will be nice
to do all this uplifting in a single mega-commit,
nor would it be possible to review it in any meaningful way.

But, I believe, it should be possible to do this in smaller steps,
introducing the new behavior, in an optional way, off-by-default,
opt-in option, and gradually fixing transforms one-by-one
and adding the flag to appropriate test coverage.

Then, eventually, the default should be flipped,
and eventually^2 the flag removed.

And that is what is happening here - when the new off-by-default option
is specified, DomTree is required and is claimed to be preserved,
and SimplifyCFG-internal assertions verify that the DomTree is still OK.
2020-12-16 00:38:00 +03:00
Philip Reames a81db8b315 [LV] Restructure handling of -prefer-predicate-over-epilogue option [NFC]
This should be purely non-functional.  When touching this code for another reason, I found the handling of the PredicateOrDontVectorize piece here very confusing.  Let's make it an explicit state (instead of an implicit combination of two variables), and use early return for options/hint processing.
2020-12-15 12:38:13 -08:00
Simon Pilgrim a3bd67f222 SeparateConstOffsetFromGEP::lowerToSingleIndexGEPs - don't use dyn_cast_or_null. NFCI.
ResultPtr is guaranteed to be non-null - and using dyn_cast_or_null causes unnecessary static analyzer warnings.

We can't say the same for FirstResult AFAICT, so keep dyn_cast_or_null for that.
2020-12-15 17:27:25 +00:00
Florian Hahn 7ea3932ab1
[AnnotationRemarks] Also generate annotation remarks when using -O0.
The AnnotationRemarks pass is already run at the end of the module
pipeline. This patch also adds it before bailing out for -O0, so remarks
are also generated with -O0.
2020-12-15 14:46:52 +00:00
Florian Hahn 7186a3965a
[VPlan] Use VPDef for VPWidenSelectRecipe.
This patch updates VPWidenSelectRecipe to manage the value
it defines using VPDef.

Reviewed By: gilr

Differential Revision: https://reviews.llvm.org/D90560
2020-12-15 14:15:01 +00:00
Jun Ma 52a3267ffa [InstCombine] Remove scalable vector restriction in foldVectorBinop
Differential Revision: https://reviews.llvm.org/D93289
2020-12-15 21:14:59 +08:00
Jun Ma ffe84d90e9 [InstCombine][NFC] Change cast of FixedVectorType to dyn_cast. 2020-12-15 20:36:57 +08:00
Jun Ma e12f584578 [InstCombine] Remove scalable vector restriction in InstCombineCompares
Differential Revision: https://reviews.llvm.org/D93269
2020-12-15 20:36:57 +08:00
Jun Ma 2ac58e21a1 [InstCombine] Remove scalable vector restriction when fold SelectInst
Differential Revision: https://reviews.llvm.org/D93083
2020-12-15 20:36:57 +08:00
Florian Hahn 318f5798d8
[VPlan] Use VPDef for VPWidenGEPRecipe.
This patch updates VPWidenGEPRecipe to manage the value it defines
using VPDef. The VPValue is used during VPlan construction and
code generation instead of the plain IR reference where possible.

Reviewed By: gilr

Differential Revision: https://reviews.llvm.org/D90561
2020-12-15 09:30:14 +00:00
Florian Hahn ad1161f9b5
[VPlan] Use VPDef for VPWidenCall.
This patch updates VPWidenCallRecipe to manage the value it defines
using VPDef.

Reviewed By: gilr

Differential Revision: https://reviews.llvm.org/D90559
2020-12-15 09:20:07 +00:00
Nico Weber a852ee199c Reland "[MachineDebugify] Insert synthetic DBG_VALUE instructions"
This reverts commit 841f9c937f.
The change landed many months ago; something else broke those tests.
2020-12-14 22:34:23 -05:00
Nico Weber 841f9c937f Revert "[MachineDebugify] Insert synthetic DBG_VALUE instructions"
This reverts commit 2a5675f11d.
The tests it adds fail: https://reviews.llvm.org/D78135#2453736
2020-12-14 22:14:48 -05:00
Reid Kleckner d2ed9d6b7e Revert "ADT: Migrate users of AlignedCharArrayUnion to std::aligned_union_t, NFC"
We determined that the MSVC implementation of std::aligned* isn't suited
to our needs. It doesn't support 16 byte alignment or higher, and it
doesn't really guarantee 8 byte alignment. See
https://github.com/microsoft/STL/issues/1533

Also reverts "ADT: Change AlignedCharArrayUnion to an alias of std::aligned_union_t, NFC"

Also reverts "ADT: Remove AlignedCharArrayUnion, NFC" to bring back
AlignedCharArrayUnion.

This reverts commit 4d8bf870a8.

This reverts commit d10f9863a5.

This reverts commit 4b5dc150b9.
2020-12-14 17:04:06 -08:00
Rong Xu 54e03d03a7 [PGO] Verify BFI counts after loading profile data
This patch adds the functionality to compare BFI counts with real
profile counts right after reading the profile. It will print remarks under
-Rpass-analysis=pgo, or the internal option -pass-remarks-analysis=pgo.

Differential Revision: https://reviews.llvm.org/D91813
2020-12-14 15:56:10 -08:00
Gulfem Savrun Yeniceri 7c0e3a77bc [clang][IR] Add support for leaf attribute
This patch adds support for leaf attribute as an optimization hint
in Clang/LLVM.

Differential Revision: https://reviews.llvm.org/D90275
2020-12-14 14:48:17 -08:00
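As a usage example (semantics paraphrased from the GCC attribute documentation, so treat the wording as a summary rather than a spec), the attribute goes on declarations of external functions:

```cpp
// 'leaf' promises that the external callee does not call back into the
// current translation unit except by returning (or via exception handling),
// which lets interprocedural analyses keep more facts alive across the call.
extern "C" double external_math(double x) __attribute__((leaf));
```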
Sanjay Patel d399f870b5 [VectorCombine] make load transform poison-safe
As noted in D93229, the transform from scalar load to vector load
potentially leaks poison from the extra vector elements that are
being loaded.

We could use freeze here (and x86 codegen at least appears to be
the same either way), but we already have a shuffle in this logic
to optionally change the vector size, so let's allow that
instruction to serve both purposes.

Differential Revision: https://reviews.llvm.org/D93238
2020-12-14 17:42:01 -05:00
Craig Topper 25067f179f [LoopIdiomRecognize] Teach detectShiftUntilZeroIdiom to recognize loops where the counter is decrementing.
This adds support for loops like

unsigned clz(unsigned x) {
    unsigned w = sizeof (x) * CHAR_BIT;
    while (x) {
        w--;
        x >>= 1;
    }

    return w;
}

and

unsigned clz(unsigned x) {
    unsigned w = sizeof (x) * CHAR_BIT - 1;
    while (x >>= 1) {
        w--;
    }

    return w;
}

To support these we look for add x, -1 as well as add x, 1 that
we already matched. If the value was -1 we need to subtract from
the initial counter value instead of adding to it.

Fixes PR48404.

Differential Revision: https://reviews.llvm.org/D92745
2020-12-14 14:25:05 -08:00
Philip Reames f5fe8493e5 [LAA] Relax restrictions on early exits in loop structure
This is a preparation patch for supporting multiple exits in the loop vectorizer; by itself it should be mostly NFC. This patch moves the loop structure checks from LAA to their respective consumers (where duplicates don't already exist).  Moving the checks does end up changing some of the optimization warnings and debug output slightly, but nothing that appears to be a regression.

Why do this? Well, after auditing the code, I can't actually find anything in LAA itself which relies on having all instructions within a loop execute an equal number of times. This patch simply makes this explicit so that if one consumer - say LV in the near future (hopefully) - wants to handle a broader class of loops, it can do so.

Differential Revision: https://reviews.llvm.org/D92066
2020-12-14 12:44:01 -08:00
Roman Lebedev 59560e8589
[SimplifyCFG] FoldBranchToCommonDest(): temporarily put back restrictions on liveout uses of bonus instructions (PR48450)
Even though d38205144f was mostly a correct
fix for the external non-PHI users, it's not a *generally* correct fix,
because the 'placeholder' values in those trivial PHI's we create
shouldn't be *always* 'undef', but the PHI itself for the backedges,
else we end up with wrong value, as the `@pr48450_2` test shows.

But we can't just do that, because we can't check that the PHI
can be its own incoming value when coming from a certain predecessor,
because we don't have a dominator tree.

So until we can address this correctness problem properly,
ensure that we don't perform the transformation
if there are such problematic external uses.

Making dominator tree available there is going to be involved,
since `-simplifycfg` pass currently does not preserve/update domtree...
2020-12-14 20:14:31 +03:00
Roman Lebedev e8360a8e1e
[NFC][SimplifyCFG] FoldBranchToCommonDest(): pull out 'common successor' into a variable
Makes it easier to use it elsewhere
2020-12-14 20:14:31 +03:00
Stanislav Mekhanoshin 87d7757bbe [SLP] Control maximum vectorization factor from TTI
D82227 has added a proper check to limit PHI vectorization to the
maximum vector register size. That unfortunately resulted in at
least a couple of regressions on SystemZ and x86.

This change reverts PHI handling from D82227 and replaces it with
a more general check in SLPVectorizerPass::tryToVectorizeList().
Moved to tryToVectorizeList(), it allows restarting vectorization
if the initial chunk fails.

However, this function is more general and handles not only PHIs
but everything which SLP handles. If the vectorization factor were
limited to the maximum vector register size, it would limit much
more vectorization than before, leading to further regressions.
Therefore a new TTI callback getMaximumVF() is added with the
default 0 to preserve current behavior and limit nothing. Then
targets can decide what is better for them.

The callback gets ElementSize just like a similar getMinimumVF()
function and the main opcode of the chain. The latter is to avoid
regressions at least on the AMDGPU. We can have loads and stores
up to 128 bit wide, and <2 x 16> bit vector math on some
subtargets, where the rest shall not be vectorized. I.e. we need
to differentiate based on the element size and operation itself.

Differential Revision: https://reviews.llvm.org/D92059
2020-12-14 08:49:40 -08:00
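A hedged sketch of a target override of the new hook; the struct is hypothetical and the numbers are invented for illustration, not what any in-tree target returns.

```cpp
#include "llvm/IR/Instruction.h"
using namespace llvm;

// Hypothetical TTI implementation: cap 16-bit math chains at 2 lanes while
// leaving loads/stores and everything else unlimited (0 means "no limit").
struct MyTTIImpl {
  unsigned getMaximumVF(unsigned ElemWidth, unsigned Opcode) const {
    if (ElemWidth == 16 && Opcode != Instruction::Load &&
        Opcode != Instruction::Store)
      return 2;
    return 0;
  }
};
```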
Markus Lavin 2a6782bb9f Reland [DebugInfo] Improve dbg preservation in LSR.
Use SCEV to salvage additional @llvm.dbg.value that have turned into
referencing undef after transformation (and traditional
salvageDebugInfo).  Before rewrite (but after introduction of new
induction variables) use SCEV to compute an equivalent set of values for
each @llvm.dbg.value in the loop body (among the loop header PHI-nodes).
After rewrite (and dead PHI elimination) update those @llvm.dbg.value
now referencing undef by picking a remaining value from its equivalence
set.  Allow match with offset by inserting compensation code in the
DIExpression.

Fixes : PR38815

Differential Revision: https://reviews.llvm.org/D87494
2020-12-14 16:15:18 +01:00
Florian Hahn e42e5263bd
[VPlan] Make VPWidenMemoryInstructionRecipe a VPDef.
This patch updates VPWidenMemoryInstructionRecipe to use VPDef
to manage the value it produces instead of inheriting from VPValue.

Reviewed By: gilr

Differential Revision: https://reviews.llvm.org/D90563
2020-12-14 14:13:59 +00:00
Anton Afanasyev fac7c7ec3c [SLP] Fix vector element size for the store chains
Vector element size could be different for different store chains.
This patch prevents wrong computation of maximum number of elements
for that case.

Differential Revision: https://reviews.llvm.org/D93192
2020-12-14 15:51:43 +03:00
Kazu Hirata 5891ad4e22 [Transforms] Use llvm::erase_value (NFC) 2020-12-13 09:48:47 -08:00
Florian Hahn 533f85767c
[VPlan] Use interleaveComma in printOperands() (NFC). 2020-12-13 16:29:16 +00:00
Roman Lebedev d38205144f
[SimplifyCFG] FoldBranchToCommonDest(): bonus instrns must only be used by PHI nodes in successors (PR48450)
In particular, if the successor block, which is about to get a new
predecessor block, currently only has a single predecessor,
then the bonus instructions will be directly used within said successor,
which is fine, since the block with bonus instructions dominates that
successor. But once there's a new predecessor, the IR is no longer valid,
and we don't fix it, because we only update PHI nodes.

Which means, the live-out bonus instructions must be exclusively used
by the PHI nodes in successor blocks. So we have to form trivial PHI nodes.
which will then be successfully updated to receive the cloned bonus instructions.

This all works fine, except for the fact that we don't have access to
the dominator tree, and we don't ignore unreachable code,
so we sometimes do end up having to deal with some weird IR.

Fixes https://bugs.llvm.org/show_bug.cgi?id=48450
2020-12-13 00:06:57 +03:00
Nikita Popov afbb6d97b5 [CVP] Simplify and generalize switch handling
CVP currently handles switches by checking an equality predicate
on all edges from predecessor blocks. Of course, this can only
work if the value being switched over is defined in a different block.

Replace this implementation with a call to getPredicateAt(), which
also does the predecessor edge predicate check (if not defined in
the same block), but can also do quite a bit more: It can reason
about phi-nodes by checking edge predicates for incoming values,
it can reason about assumes, and it can reason about block values.

As such, this makes the implementation both simpler and more
powerful. The compile-time impact on CTMark is in the noise.
2020-12-12 21:12:27 +01:00
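A hedged sketch of the direction of the change (the helper name and surrounding plumbing are assumptions; the real code walks all cases of the switch and removes the ones LVI refutes):

```cpp
#include "llvm/Analysis/LazyValueInfo.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// Ask LVI directly whether "Cond == CaseVal" can hold at the switch;
// getPredicateAt already looks through predecessor edges, phis, assumes and
// block values, so no hand-rolled per-edge equality check is needed.
static bool caseIsDead(LazyValueInfo &LVI, SwitchInst *SI,
                       ConstantInt *CaseVal) {
  return LVI.getPredicateAt(CmpInst::ICMP_EQ, SI->getCondition(), CaseVal,
                            SI) == LazyValueInfo::False;
}
```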
Kazu Hirata 215c1b1935 [Transforms] Use is_contained (NFC) 2020-12-12 09:37:49 -08:00
David Green ab97c9bdb7 [LV] Fix scalar cost for tail predicated loops
When it comes to the scalar cost of any predicated block, the loop
vectorizer by default regards this predication as a sign that it is
looking at an if-conversion and divides the scalar cost of the block by
2, assuming it would only be executed half the time. This however makes
no sense if the predication has been introduced to tail predicate the
loop.

Original patch by Anna Welker

Differential Revision: https://reviews.llvm.org/D86452
2020-12-12 14:21:40 +00:00
Fangrui Song b5ad32ef5c Migrate deprecated DebugLoc::get to DILocation::get
This migrates all LLVM (except Kaleidoscope and
CodeGen/StackProtector.cpp) DebugLoc::get to DILocation::get.

The CodeGen/StackProtector.cpp usage may have a nullptr Scope
and can trigger an assertion failure, so I don't migrate it.

Reviewed By: #debug-info, dblaikie

Differential Revision: https://reviews.llvm.org/D93087
2020-12-11 12:45:22 -08:00
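The mechanical shape of the migration, as a hedged sketch: the replacement factory needs an LLVMContext up front, which can be taken from the scope node.

```cpp
#include "llvm/IR/DebugInfoMetadata.h"
#include "llvm/IR/DebugLoc.h"
using namespace llvm;

// Previously: DebugLoc::get(Line, Col, Scope, InlinedAt).
// Now the context is passed explicitly via DILocation::get.
static DebugLoc makeLoc(unsigned Line, unsigned Col, MDNode *Scope,
                        MDNode *InlinedAt) {
  return DILocation::get(Scope->getContext(), Line, Col, Scope, InlinedAt);
}
```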
Marco Elver c28b18af19 [KernelAddressSanitizer] Fix globals exclusion for indirect aliases
GlobalAlias::getAliasee() may not always point directly to a
GlobalVariable. In such cases, try to find the canonical GlobalVariable
that the alias refers to.

Link: https://github.com/ClangBuiltLinux/linux/issues/1208

Reviewed By: dvyukov, nickdesaulniers

Differential Revision: https://reviews.llvm.org/D92846
2020-12-11 12:20:40 +01:00
David Sherwood 9b76160e53 [Support] Introduce a new InstructionCost class
This is the first in a series of patches that attempts to migrate
existing cost interfaces to return a new InstructionCost class
in place of a simple integer. This new class is intended to be
as light-weight and simple as possible, with a full range of
arithmetic and comparison operators that largely mirror the same
sets of operations on basic types, such as integers. The main
advantage to using an InstructionCost is that it can encode a
particular cost state in addition to a value. The initial
implementation only has two states - Normal and Invalid - but these
could be expanded over time if necessary. An invalid state can
be used to represent an unknown cost or an instruction that is
prohibitively expensive.

This patch adds the new class and changes the getInstructionCost
interface to return the new class. Other cost functions, such as
getUserCost, etc., will be migrated in future patches as I believe
this to be less disruptive. One benefit of this new class is that
it provides a way to unify many of the magic costs in the codebase
where the cost is set to a deliberately high number to prevent
optimisations taking place, e.g. vectorization. It also provides
a route to represent the extremely high, and unknown, cost of
scalarization of scalable vectors, which is not currently supported.

Differential Revision: https://reviews.llvm.org/D91174
2020-12-11 08:12:54 +00:00
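A hedged usage sketch of the new class at a call site (the cost kind and threshold handling are illustrative):

```cpp
#include "llvm/Analysis/TargetTransformInfo.h"
using namespace llvm;

// InstructionCost carries a validity state in addition to the numeric value,
// so "prohibitively expensive / unknown" no longer needs a magic number.
static bool isCheapEnough(const TargetTransformInfo &TTI, const Instruction *I,
                          int Threshold) {
  InstructionCost Cost =
      TTI.getInstructionCost(I, TargetTransformInfo::TCK_RecipThroughput);
  if (!Cost.isValid())
    return false;
  return *Cost.getValue() < Threshold;
}
```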
Hongtao Yu 705a4c149d [CSSPGO] Pseudo probe encoding and emission.
This change implements pseudo probe encoding and emission for CSSPGO. Please see RFC here for more context: https://groups.google.com/g/llvm-dev/c/1p1rdYbL93s

Pseudo probes are in the form of intrinsic calls on IR/MIR but they do not turn into any machine instructions. Instead they are emitted into the binary as a piece of data in standalone sections.  The probe-specific sections are not needed to be loaded into memory at execution time, thus they do not incur a runtime overhead. 

**ELF object emission**

The binary data to emit are organized as two ELF sections, i.e., the `.pseudo_probe_desc` section and the `.pseudo_probe` section. The `.pseudo_probe_desc` section stores a function descriptor for each function and the `.pseudo_probe` section stores the actual probes, each of which corresponds to an IR basic block or an IR function callsite. A function descriptor is stored as module-level metadata during the compilation and is serialized into the object file during object emission.

Both the probe descriptors and pseudo probes can be emitted into a separate ELF section per function to leverage the linker for deduplication.  A `.pseudo_probe` section shares the same COMDAT group with the function code so that when the function is dead, the probes are dead and disposed too. On the contrary, a `.pseudo_probe_desc` section has its own COMDAT group. This is because even if a function is dead, its probes may be inlined into other functions and its descriptor is still needed by the profile generation tool.

The format of `.pseudo_probe_desc` section looks like:

```
.section   .pseudo_probe_desc,"",@progbits
.quad   6309742469962978389  // Func GUID
.quad   4294967295           // Func Hash
.byte   9                    // Length of func name
.ascii  "_Z5funcAi"          // Func name
.quad   7102633082150537521
.quad   138828622701
.byte   12
.ascii  "_Z8funcLeafi"
.quad   446061515086924981
.quad   4294967295
.byte   9
.ascii  "_Z5funcBi"
.quad   -2016976694713209516
.quad   72617220756
.byte   7
.ascii  "_Z3fibi"
```

For each `.pseudoprobe` section, the encoded binary data consists of a single function record corresponding to an outlined function (i.e, a function with a code entry in the `.text` section). A function record has the following format :

```
FUNCTION BODY (one for each outlined function present in the text section)
    GUID (uint64)
        GUID of the function
    NPROBES (ULEB128)
        Number of probes originating from this function.
    NUM_INLINED_FUNCTIONS (ULEB128)
        Number of callees inlined into this function, aka number of
        first-level inlinees
    PROBE RECORDS
        A list of NPROBES entries. Each entry contains:
          INDEX (ULEB128)
          TYPE (uint4)
            0 - block probe, 1 - indirect call, 2 - direct call
          ATTRIBUTE (uint3)
            reserved
          ADDRESS_TYPE (uint1)
            0 - code address, 1 - address delta
          CODE_ADDRESS (uint64 or ULEB128)
            code address or address delta, depending on ADDRESS_TYPE
    INLINED FUNCTION RECORDS
        A list of NUM_INLINED_FUNCTIONS entries describing each of the inlined
        callees.  Each record contains:
          INLINE SITE
            GUID of the inlinee (uint64)
            ID of the callsite probe (ULEB128)
          FUNCTION BODY
            A FUNCTION BODY entry describing the inlined function.
```

To support building a context-sensitive profile, probes from inlinees are grouped by their inline contexts. An inline context is logically a call path through which a callee function lands in a caller function. The probe emitter builds an inline tree based on the debug metadata for each outlined function in the form of a trie. The tree root is the outlined function. Each tree edge stands for a callsite where inlining happens. Pseudo probes originating from an inlinee function are stored in a tree node, and the tree path starting from the root all the way down to that node is the inline context of the probes. The emission happens on the whole tree top-down recursively. Probes of a tree node are emitted together with their direct parent edge. Since a pseudo probe corresponds to a real code address, for size savings the address is encoded as a delta from the previous probe, except for the first probe. Variable-sized integer encoding, aka LEB128, is used for the address delta and the probe index.
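
As an aside on the variable-length encoding mentioned above, a generic ULEB128 encoder (a sketch for illustration; LLVM has its own helpers in llvm/Support/LEB128.h) works roughly like this:

```
// Generic ULEB128 encoding: emit 7 bits per byte, setting the high bit on
// every byte except the last to signal continuation.
#include <cstdint>
#include <vector>

void encodeULEB128(uint64_t Value, std::vector<uint8_t> &Out) {
  do {
    uint8_t Byte = Value & 0x7f;
    Value >>= 7;
    if (Value != 0)
      Byte |= 0x80; // more bytes follow
    Out.push_back(Byte);
  } while (Value != 0);
}
```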

**Assembling**

Alternatively, pseudo probes can be printed as assembly directives. This allows for good assembly code readability and also provides a view of how optimizations and pseudo probes affect each other, which is especially helpful for diff-time assembly analysis.

A pseudo probe directive has the following operands in order: function GUID, probe index, probe type, probe attributes and inline context. The directive is generated by the compiler and can be parsed by the assembler to form an encoded `.pseudoprobe` section in the object file.

An example assembly listing looks like:

```
foo2: # @foo2
# %bb.0: # %bb0
pushq %rax
testl %edi, %edi
.pseudoprobe 837061429793323041 1 0 0
je .LBB1_1
# %bb.2: # %bb2
.pseudoprobe 837061429793323041 6 2 0
callq foo
.pseudoprobe 837061429793323041 3 0 0
.pseudoprobe 837061429793323041 4 0 0
popq %rax
retq
.LBB1_1: # %bb1
.pseudoprobe 837061429793323041 5 1 0
callq *%rsi
.pseudoprobe 837061429793323041 2 0 0
.pseudoprobe 837061429793323041 4 0 0
popq %rax
retq
# -- End function
.section .pseudo_probe_desc,"",@progbits
.quad 6699318081062747564
.quad 72617220756
.byte 3
.ascii "foo"
.quad 837061429793323041
.quad 281547593931412
.byte 4
.ascii "foo2"
```

With inlining turned on, the assembly may look different around %bb2 with an inlined probe:

```
# %bb.2:                                # %bb2
.pseudoprobe    837061429793323041 3 0
.pseudoprobe    6699318081062747564 1 0 @ 837061429793323041:6
.pseudoprobe    837061429793323041 4 0
popq    %rax
retq
```

**Disassembling**

We have a disassembling tool (llvm-profgen) that can display disassembly alongside pseudo probes. So far it only supports ELF executables.

An example disassembly looks like:

```
00000000002011a0 <foo2>:
  2011a0: 50                    push   rax
  2011a1: 85 ff                 test   edi,edi
  [Probe]:  FUNC: foo2  Index: 1  Type: Block
  2011a3: 74 02                 je     2011a7 <foo2+0x7>
  [Probe]:  FUNC: foo2  Index: 3  Type: Block
  [Probe]:  FUNC: foo2  Index: 4  Type: Block
  [Probe]:  FUNC: foo   Index: 1  Type: Block  Inlined: @ foo2:6
  2011a5: 58                    pop    rax
  2011a6: c3                    ret
  [Probe]:  FUNC: foo2  Index: 2  Type: Block
  2011a7: bf 01 00 00 00        mov    edi,0x1
  [Probe]:  FUNC: foo2  Index: 5  Type: IndirectCall
  2011ac: ff d6                 call   rsi
  [Probe]:  FUNC: foo2  Index: 4  Type: Block
  2011ae: 58                    pop    rax
  2011af: c3                    ret
```

Reviewed By: wmi

Differential Revision: https://reviews.llvm.org/D91878
2020-12-10 17:29:28 -08:00
Mitch Phillips 7ead5f5aa3 Revert "[CSSPGO] Pseudo probe encoding and emission."
This reverts commit b035513c06.

Reason: Broke the ASan buildbots:
  http://lab.llvm.org:8011/#/builders/5/builds/2269
2020-12-10 15:53:39 -08:00
Zequan Wu b5216b2950 [PGO] Enable preinline and cleanup when optimizing for size
Differential Revision: https://reviews.llvm.org/D91673
2020-12-10 12:29:17 -08:00
Sanjay Patel 4f051fe374 [InstCombine] avoid crash sinking to unreachable block
The test is reduced from the example in D82005.

Similar to 94f6d365e, the test here would assert in
the DomTree when we tried to convert a select to a
phi with an unreachable block operand.

We may want to add some kind of guard code in DomTree
itself to avoid this sort of problem.
2020-12-10 13:10:26 -05:00
Sanjay Patel 12b684ae02 [VectorCombine] improve readability; NFC
If we are going to allow adjusting the pointer for GEPs,
rearranging the code a bit will make it easier to follow.
2020-12-10 13:10:26 -05:00
Hongtao Yu b035513c06 [CSSPGO] Pseudo probe encoding and emission.
This change implements pseudo probe encoding and emission for CSSPGO. Please see RFC here for more context: https://groups.google.com/g/llvm-dev/c/1p1rdYbL93s

Pseudo probes are in the form of intrinsic calls on IR/MIR but they do not turn into any machine instructions. Instead they are emitted into the binary as a piece of data in standalone sections. The probe-specific sections do not need to be loaded into memory at execution time, so they do not incur any runtime overhead.

**ELF object emission**

The binary data to emit are organized as two ELF sections, i.e., the `.pseudo_probe_desc` section and the `.pseudo_probe` section. The `.pseudo_probe_desc` section stores a function descriptor for each function and the `.pseudo_probe` section stores the actual probes, each of which corresponds to an IR basic block or an IR function callsite. A function descriptor is stored as module-level metadata during compilation and is serialized into the object file during object emission.

Both the probe descriptors and pseudo probes can be emitted into a separate ELF section per function to leverage the linker for deduplication. A `.pseudo_probe` section shares the same COMDAT group with the function code so that when the function is dead, the probes are dead and disposed of too. In contrast, a `.pseudo_probe_desc` section has its own COMDAT group. This is because even if a function is dead, its probes may have been inlined into other functions and its descriptor is still needed by the profile generation tool.

The format of `.pseudo_probe_desc` section looks like:

```
.section   .pseudo_probe_desc,"",@progbits
.quad   6309742469962978389  // Func GUID
.quad   4294967295           // Func Hash
.byte   9                    // Length of func name
.ascii  "_Z5funcAi"          // Func name
.quad   7102633082150537521
.quad   138828622701
.byte   12
.ascii  "_Z8funcLeafi"
.quad   446061515086924981
.quad   4294967295
.byte   9
.ascii  "_Z5funcBi"
.quad   -2016976694713209516
.quad   72617220756
.byte   7
.ascii  "_Z3fibi"
```

For each `.pseudo_probe` section, the encoded binary data consists of a single function record corresponding to an outlined function (i.e., a function with a code entry in the `.text` section). A function record has the following format:

```
FUNCTION BODY (one for each outlined function present in the text section)
    GUID (uint64)
        GUID of the function
    NPROBES (ULEB128)
        Number of probes originating from this function.
    NUM_INLINED_FUNCTIONS (ULEB128)
        Number of callees inlined into this function, aka number of
        first-level inlinees
    PROBE RECORDS
        A list of NPROBES entries. Each entry contains:
          INDEX (ULEB128)
          TYPE (uint4)
            0 - block probe, 1 - indirect call, 2 - direct call
          ATTRIBUTE (uint3)
            reserved
          ADDRESS_TYPE (uint1)
            0 - code address, 1 - address delta
          CODE_ADDRESS (uint64 or ULEB128)
            code address or address delta, depending on ADDRESS_TYPE
    INLINED FUNCTION RECORDS
        A list of NUM_INLINED_FUNCTIONS entries describing each of the inlined
        callees.  Each record contains:
          INLINE SITE
            GUID of the inlinee (uint64)
            ID of the callsite probe (ULEB128)
          FUNCTION BODY
            A FUNCTION BODY entry describing the inlined function.
```

To support building a context-sensitive profile, probes from inlinees are grouped by their inline contexts. An inline context is logically a call path through which a callee function lands in a caller function. The probe emitter builds an inline tree based on the debug metadata for each outlined function in the form of a trie. The tree root is the outlined function. Each tree edge stands for a callsite where inlining happens. Pseudo probes originating from an inlinee function are stored in a tree node, and the tree path starting from the root all the way down to that node is the inline context of the probes. The emission happens on the whole tree top-down recursively. Probes of a tree node are emitted together with their direct parent edge. Since a pseudo probe corresponds to a real code address, for size savings the address is encoded as a delta from the previous probe, except for the first probe. Variable-sized integer encoding, aka LEB128, is used for the address delta and the probe index.

**Assembling**

Alternatively, pseudo probes can be printed as assembly directives. This allows for good assembly code readability and also provides a view of how optimizations and pseudo probes affect each other, which is especially helpful for diff-time assembly analysis.

A pseudo probe directive has the following operands in order: function GUID, probe index, probe type, probe attributes and inline context. The directive is generated by the compiler and can be parsed by the assembler to form an encoded `.pseudoprobe` section in the object file.

An example assembly listing looks like:

```
foo2: # @foo2
# %bb.0: # %bb0
pushq %rax
testl %edi, %edi
.pseudoprobe 837061429793323041 1 0 0
je .LBB1_1
# %bb.2: # %bb2
.pseudoprobe 837061429793323041 6 2 0
callq foo
.pseudoprobe 837061429793323041 3 0 0
.pseudoprobe 837061429793323041 4 0 0
popq %rax
retq
.LBB1_1: # %bb1
.pseudoprobe 837061429793323041 5 1 0
callq *%rsi
.pseudoprobe 837061429793323041 2 0 0
.pseudoprobe 837061429793323041 4 0 0
popq %rax
retq
# -- End function
.section .pseudo_probe_desc,"",@progbits
.quad 6699318081062747564
.quad 72617220756
.byte 3
.ascii "foo"
.quad 837061429793323041
.quad 281547593931412
.byte 4
.ascii "foo2"
```

With inlining turned on, the assembly may look different around %bb2 with an inlined probe:

```
# %bb.2:                                # %bb2
.pseudoprobe    837061429793323041 3 0
.pseudoprobe    6699318081062747564 1 0 @ 837061429793323041:6
.pseudoprobe    837061429793323041 4 0
popq    %rax
retq
```

**Disassembling**

We have a disassembling tool (llvm-profgen) that can display disassembly alongside pseudo probes. So far it only supports ELF executables.

An example disassembly looks like:

```
00000000002011a0 <foo2>:
  2011a0: 50                    push   rax
  2011a1: 85 ff                 test   edi,edi
  [Probe]:  FUNC: foo2  Index: 1  Type: Block
  2011a3: 74 02                 je     2011a7 <foo2+0x7>
  [Probe]:  FUNC: foo2  Index: 3  Type: Block
  [Probe]:  FUNC: foo2  Index: 4  Type: Block
  [Probe]:  FUNC: foo   Index: 1  Type: Block  Inlined: @ foo2:6
  2011a5: 58                    pop    rax
  2011a6: c3                    ret
  [Probe]:  FUNC: foo2  Index: 2  Type: Block
  2011a7: bf 01 00 00 00        mov    edi,0x1
  [Probe]:  FUNC: foo2  Index: 5  Type: IndirectCall
  2011ac: ff d6                 call   rsi
  [Probe]:  FUNC: foo2  Index: 4  Type: Block
  2011ae: 58                    pop    rax
  2011af: c3                    ret
```

Reviewed By: wmi

Differential Revision: https://reviews.llvm.org/D91878
2020-12-10 09:50:08 -08:00
Jun Ma 137674f882 [TruncInstCombine] Remove scalable vector restriction
Differential Revision: https://reviews.llvm.org/D92819
2020-12-10 18:00:19 +08:00
Jianzhou Zhao ea981165a4 [dfsan] Track field/index-level shadow values in variables
*************
* The problem
*************
See the motivating examples in compiler-rt/test/dfsan/pair.cpp. The current
DFSan always uses a 16-bit shadow value for a variable of any type by
combining the shadow values of all bytes of the variable. So it cannot
distinguish two fields of a struct: each field's shadow value equals the
combined shadow value of all fields. This introduces an overtaint issue.

Consider a parsing function

   std::pair<char*, int> get_token(char* p);

where p points to a buffer to parse, the returned pair includes the next
token and the pointer to the position in the buffer after the token.

If the token is tainted, then both the returned pointer and the int are
tainted. If the parser keeps using get_token for the rest of the parsing,
all the following outputs are tainted because of the tainted pointer.

The CL is the first change to address the issue.

**************************
* The proposed improvement
**************************
Eventually all fields and indices have their own shadow values in
variables and memory.

For example, variables with types {i1, i3}, [2 x i1], {[2 x i4], i8},
[2 x {i1, i1}] have shadow values with types {i16, i16}, [2 x i16],
{[2 x i16], i16}, [2 x {i16, i16}] respectively; variables with
primitive types still have the shadow value i16.
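
For illustration only, the recursive type mapping could be sketched as below; this is a hypothetical simplification using LLVM's Type API, not the actual DFSan code:

```
// Hypothetical sketch: map an original type to its shadow type by mirroring
// the aggregate structure and using i16 for every primitive leaf.
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/Type.h"
#include "llvm/Support/Casting.h"
using namespace llvm;

Type *getShadowTy(Type *OrigTy, LLVMContext &Ctx) {
  if (auto *ST = dyn_cast<StructType>(OrigTy)) {
    SmallVector<Type *, 4> Elements;
    for (Type *El : ST->elements())
      Elements.push_back(getShadowTy(El, Ctx));
    return StructType::get(Ctx, Elements);
  }
  if (auto *AT = dyn_cast<ArrayType>(OrigTy))
    return ArrayType::get(getShadowTy(AT->getElementType(), Ctx),
                          AT->getNumElements());
  return Type::getInt16Ty(Ctx); // primitive shadow
}
```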

***************************
* A potential implementation plan
***************************

The idea is to adopt the change incrementally.

1) This CL
Support field-level accuracy at variables/args/ret in TLS mode;
load/store/alloca still use combined shadow values.

After the alloca promotion and SSA construction phases (>=-O1), we
assume alloca and memory operations are reduced. So if struct
variables do not relate to memory, their tracking is accurate at
field level.

2) Support field-level accuracy at alloca
3) Support field-level accuracy at load/store

These two should make O0 and real memory access work.

4) Support vector if necessary.
5) Support Args mode if necessary.
6) Support passing more accurate shadow values via custom functions if
necessary.

***************
* About this CL.
***************
The CL did the following

1) extended TLS arg/ret to work with aggregate types. This is similar
to what MSan does.

2) implemented how to map between an original type/value/zero-const to
its shadow type/value/zero-const.

3) extended (insert|extract)value to use field/index-level propagation.

4) for other instructions, the propagation rule combines inputs with or.
The CL converts between aggregate and primitive shadow values in these
cases.

5) Custom function interfaces also need such a conversion because
all existing custom functions use i16. It is unclear yet whether custom
functions need more accurate shadow propagation.

6) Added test cases for aggregate-type-related scenarios.

Reviewed-by: morehouse

Differential Revision: https://reviews.llvm.org/D92261
2020-12-09 19:38:35 +00:00
Sanjay Patel b2ef264096 [VectorCombine] allow peeking through an extractelt when creating a vector load
This is an enhancement to load vectorization that is motivated by
a pattern in https://llvm.org/PR16739.
Unfortunately, it's still not enough to make a difference there.
We will have to handle multi-use cases in some better way to avoid
creating multiple overlapping loads.

Differential Revision: https://reviews.llvm.org/D92858
2020-12-09 10:36:14 -05:00
Roman Lebedev e6f2a79d7a
[InstCombine] canonicalizeSaturatedAdd(): last fold is only valid for strict comparison (PR48390)
We could create uadd.sat under incorrect circumstances
if a select with -1 as the false value was canonicalized
by swapping the T/F values. Unlike the other transforms
in the same function, it is not invariant to equality.

Some alive proofs: https://alive2.llvm.org/ce/z/emmKKL

Based on original patch by David Green!

Fixes https://bugs.llvm.org/show_bug.cgi?id=48390

Differential Revision: https://reviews.llvm.org/D92717
2020-12-09 18:19:09 +03:00
Anton Afanasyev e5bf2e8989 [SLP] Use the width of value truncated just before storing
For store chain vectorization we choose the size of the vector
elements to ensure we fit within the minimum and maximum vector register
sizes for the given number of elements. This patch corrects the vector
element size by choosing the width of the value truncated just before
storing instead of the width of the value stored.

Fixes PR46983

Differential Revision: https://reviews.llvm.org/D92824
2020-12-09 16:38:45 +03:00
Sander de Smalen d568cff696 [LoopVectorizer][SVE] Vectorize a simple loop with a scalable VF.
* Steps are scaled by `vscale`, a runtime value.
* Changes to circumvent the cost-model for now (temporary)
  so that the cost-model can be implemented separately.

This can vectorize the following loop [1]:

   void loop(int N, double *a, double *b) {
     #pragma clang loop vectorize_width(4, scalable)
     for (int i = 0; i < N; i++) {
       a[i] = b[i] + 1.0;
     }
   }

[1] This source-level example is based on the pragma proposed
separately in D89031. This patch only implements the LLVM part.

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D91077
2020-12-09 11:25:21 +00:00
Sander de Smalen adc37145de [LoopVectorizer] NFC: Remove unnecessary asserts that VF cannot be scalable.
This patch removes a number of asserts that VF is not scalable, even though
the code where this assert lives does nothing that prevents VF being scalable.

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D91060
2020-12-09 11:25:21 +00:00
Joe Ellis 80c33de2d3 [SelectionDAG] Add llvm.vector.{extract,insert} intrinsics
This commit adds two new intrinsics.

- llvm.experimental.vector.insert: used to insert a vector into another
  vector starting at a given index.

- llvm.experimental.vector.extract: used to extract a subvector from a
  larger vector starting from a given index.

The codegen work for these intrinsics has already been completed; this
commit is simply exposing the existing ISD nodes to LLVM IR.
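
For illustration, usage in IR might look roughly like the following; the exact overloaded suffixes are an assumption based on the usual name-mangling scheme:

```
; Hypothetical example: insert a fixed-width <4 x i32> subvector into a
; scalable vector at element offset 0, then extract it back out.
declare <vscale x 4 x i32> @llvm.experimental.vector.insert.nxv4i32.v4i32(<vscale x 4 x i32>, <4 x i32>, i64)
declare <4 x i32> @llvm.experimental.vector.extract.v4i32.nxv4i32(<vscale x 4 x i32>, i64)

define <4 x i32> @roundtrip(<vscale x 4 x i32> %vec, <4 x i32> %sub) {
  %ins = call <vscale x 4 x i32> @llvm.experimental.vector.insert.nxv4i32.v4i32(<vscale x 4 x i32> %vec, <4 x i32> %sub, i64 0)
  %ext = call <4 x i32> @llvm.experimental.vector.extract.v4i32.nxv4i32(<vscale x 4 x i32> %ins, i64 0)
  ret <4 x i32> %ext
}
```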

Reviewed By: cameron.mcinally

Differential Revision: https://reviews.llvm.org/D91362
2020-12-09 11:08:41 +00:00
Philip Reames 5171b7b40e [indvars] Common a bit of code [NFC] 2020-12-08 15:25:48 -08:00
Anna Thomas 29356e3279 [ScalarizeMaskedMemIntrin] Add new PM support
This patch adds new PM support for the pass, and the pass can now be used
during middle-end transforms. The old pass is renamed to
ScalarizeMaskedMemIntrinLegacyPass.

Reviewed-By: skatkov, aeubanks
Differential Revision: https://reviews.llvm.org/D92743
2020-12-08 17:15:22 -05:00
Benjamin Kramer 5f18e2f31e Move createScalarizeMaskedMemIntrinPass to Scalar.h 2020-12-08 19:08:09 +01:00
Benjamin Kramer 10987e30be Remove unused include. NFC.
This is also a layering violation.
2020-12-08 19:03:56 +01:00
Anna Thomas 09f2f9605f [ScalarizeMaskedMemIntrinsic] Move from CodeGen into Transforms
ScalarizeMaskedMemIntrinsic is currently a CodeGen-level pass. The pass
actually operates at the IR level and does not use any codegen-specific
passes. It is useful to move it into the Transforms directory so that it
can be more widely used as a mid-level transform as well (apart from
its usage in the codegen pipeline).
In particular, we have a use case downstream where we would like to use
this pass in our mid-level pipeline, which operates at the IR level.

The next change will be to add support for new PM.

Reviewers: craig.topper, apilipenko, skatkov
Reviewed-By: skatkov
Differential Revision: https://reviews.llvm.org/D92407
2020-12-08 12:25:58 -05:00
Xun Li 31e60b9133 [coroutine] should disable inline before calling coro split
This is a rework of D85812, which didn't land.
When a callee coroutine function is inlined into a caller coroutine function before the coro-split pass, LLVM emits "coroutine should have exactly one defining @llvm.coro.begin". It seems that the coro-early pass cannot handle this case well.
So we believe that unsplit coroutine functions should not be inlined.
This patch fixes the issue by not inlining a function if it has the attribute "coroutine.presplit" (meaning the function has not been split yet).
test plan: check-llvm, check-clang

In D85812, there were suggestions to move the macros to Attributes.td to avoid a circular header dependency issue.
I believe it's not worth doing just to be able to use one constant string in one place.
Today, there are already 3 possible attribute values for "coroutine.presplit": c6543cc6b8/llvm/lib/Transforms/Coroutines/CoroInternal.h (L40-L42)
If we move them into Attributes.td, we would be adding 3 new attributes to EnumAttr just to support this, which I think is overkill.

Instead, I think the best way to do this is to add an API in the Function class that checks whether this function is a coroutine by checking the attribute by name directly.
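
A minimal sketch of such a helper (hypothetical naming; the actual API added to Function may differ) could be:

```
// Hypothetical helper: a coroutine that has not been split yet carries the
// "coroutine.presplit" function attribute.
#include "llvm/IR/Function.h"

bool isPresplitCoroutine(const llvm::Function &F) {
  return F.hasFnAttribute("coroutine.presplit");
}
```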

Differential Revision: https://reviews.llvm.org/D92706
2020-12-08 08:53:08 -08:00
Teresa Johnson 77b509710c [ICP] Don't promote when target not defined in module
This guards against cases where the symbol was dead code eliminated in
the binary by ThinLTO, and we have a sample profile collected for one
binary but used to optimize another.

Most of the benefit from ICP comes from inlining the target, which we
can't do with only a declaration anyway. If this is in the pre-ThinLTO
link step (e.g. for instrumentation based PGO), we will attempt the
promotion again in the ThinLTO backend after importing anyway, and we
don't need the early promotion to facilitate that.

Differential Revision: https://reviews.llvm.org/D92804
2020-12-08 07:45:36 -08:00
Sjoerd Meijer 1e260f955d [LICM][docs] Document that LICM is also a canonicalization transform. NFC.
This documents that LICM is a canonicalization transform, which we discussed
recently in:

http://lists.llvm.org/pipermail/llvm-dev/2020-December/147184.html

but which was also discussed earlier, e.g. in:

http://lists.llvm.org/pipermail/llvm-dev/2019-September/135058.html
2020-12-08 11:56:35 +00:00
Evgeniy Brevnov 2d1b024d06 [DSE][NFC] Need to be careful mixing signed and unsigned types
Currently in some places we use a signed type to represent the size of an access and add explicit casts from unsigned to signed.
For example: int64_t EarlierSize = int64_t(Loc.Size.getValue());

Even though it doesn't lose bits (immediately), it may overflow and we end up with a negative size. That can potentially cause later code to work incorrectly. A simple example is a check that the size is not negative.

I think it would be safer and clearer if we used an unsigned type for the size and handled it appropriately.
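
For illustration only (not code from the pass), the hazard is the usual unsigned-to-signed conversion pitfall:

```
// A large unsigned size converted to a signed type can come out negative,
// so later sign-based checks misbehave.
#include <cstdint>
#include <iostream>

int main() {
  uint64_t Size = uint64_t(1) << 63;     // pathologically large access size
  int64_t SignedSize = int64_t(Size);    // becomes negative on typical
                                         // two's-complement targets
  std::cout << (SignedSize < 0) << "\n"; // prints 1
  return 0;
}
```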

Reviewed By: fhahn

Differential Revision: https://reviews.llvm.org/D92648
2020-12-08 16:53:37 +07:00
Valentin Churavy 700cf7dcc9 [VNCoercion] Disallow coercion between different ni addrspaces
I'm not sure if it would be legal by the IR reference to introduce
an addrspacecast here, since the IR reference is a bit vague on
the exact semantics, but at least for our usage of it (and I
suspect for many others' usage) it is not. For us, addrspacecasts
between non-integral address spaces carry frontend information that the
optimizer cannot deduce afterwards in a generic way (though we
have frontend-specific passes in our pipeline that do propagate
these). In any case, I'm sure nobody is using it this way at
the moment, since it would have introduced inttoptrs, which
are definitely illegal.

Fixes PR38375

Co-authored-by: Keno Fischer <keno@alumni.harvard.edu>

Reviewed By: reames

Differential Revision: https://reviews.llvm.org/D50010
2020-12-07 20:19:48 -05:00
Sanjay Patel 5fe1a49f96 [SLP] fix typo in debug string; NFC 2020-12-07 15:09:21 -05:00
Bardia Mahjour 4db9b78c81 [LV] Epilogue Vectorization with Optimal Control Flow - Default Enablement
This patch enables epilogue vectorization by default per reviewer requests.

Differential Revision: https://reviews.llvm.org/D89566
2020-12-07 14:29:36 -05:00
Florian Hahn 32825e8636
[ConstraintElimination] Tweak placement in pipeline.
This patch adds the ConstraintElimination pass to the LTO pipeline and
also runs it after SCCP in the function simplification pipeline.

This increases the number of cases we can eliminate. Further tuning is
pending.
2020-12-07 19:08:40 +00:00
Simon Pilgrim 50dd1dba6e [IPO] Fix operator precedence warning. NFCI.
Check the entire assertion condition before && with the message.
2020-12-07 18:23:54 +00:00
Alexey Bataev 438682de6a [SLP] Merge reorder and reuse shuffles.
It is possible to merge reuse and reorder shuffles and reduce the total
cost of the vectorization tree/number of final instructions.

Differential Revision: https://reviews.llvm.org/D92668
2020-12-07 07:50:00 -08:00
Jun Ma 216689ace7 [Coroutines] Add DW_OP_deref for transformed dbg.value intrinsic.
Differential Revision: https://reviews.llvm.org/D92462
2020-12-07 10:24:44 +08:00
Craig Topper 305fcc9122 [LoopIdiomRecognize] Merge a conditional operator with an earlier if and remove an extra temporary variable. NFC
The CountPrev variable was only used to forward a value from
the if statement to the conditional operator under the same
condition.

While there, move some variable declarations to their first
assignment.
2020-12-06 15:23:18 -08:00
Fangrui Song 2832f3528c [Transforms] Delete unused declarations from NewGVN/CoroSplit/ValueMapper 2020-12-06 13:04:01 -08:00
Wenlei He 6b989a1710 [CSSPGO] Infrastructure for context-sensitive Sample PGO and Inlining
This change adds the context-sensitive sample PGO infrastructure described in the CSSPGO RFC (https://groups.google.com/g/llvm-dev/c/1p1rdYbL93s). It introduces an abstraction between the input profile and the profile loader that queries the input profile for functions. Specifically, there's now the notion of base profile and context profile, and they are managed by the new SampleContextTracker for adjusting and merging profiles based on inline decisions. It works with the top-down profile-guided inliner in the profile loader (https://reviews.llvm.org/D70655) for better inlining with specialization and better post-inline profile fidelity. In the future, we can also expose this infrastructure to the CGSCC inliner in order for it to take advantage of context-sensitive profiles. This change is the consumption part of the context-sensitive profile (the generation part is in this stack: https://reviews.llvm.org/D89707). We've seen good results internally in conjunction with Pseudo-probe (https://reviews.llvm.org/D86193). Patches for integration with Pseudo-probe are coming up soon.

Currently the new infrastructure kicks in when the input profile contains the new context-sensitive profile; otherwise it's a no-op and does not affect existing AutoFDO.

**Interface**

There are two sets of interfaces, for query and tracking respectively, exposed from SampleContextTracker. For query, instead of simply getting a profile from the input for a function, we can now explicitly query the base profile or a context profile for a given call path of a function. For tracking, there are separate APIs for marking a context profile as inlined, or for promoting and merging a not-inlined context profile.

- Query base profile (`getBaseSamplesFor`)
The base profile is the merged synthetic profile for a function's CFG profile from any outstanding (not inlined) context. We can query the base profile by function.

- Query context profile (`getContextSamplesFor`)
Context profile is a function's CFG profile for a given calling context. We can query context profile by context string.

- Track inlined context profile (`markContextSamplesInlined`)
When a function is inlined for given calling context, we need to mark the context profile for that context as inlined. This is to make sure we don't include inlined context profile when synthesizing base profile for that inlined function.

- Track not-inlined context profile (`promoteMergeContextSamplesTree`)
When a function is not inlined for a given calling context, we need to promote the context profile tree so that the not-inlined context becomes a top-level context. This preserves the sub-context under that function so that a later inline decision for that not-inlined function will still have a context profile for its call tree. Note that profiles will be merged as needed when promoting a context profile tree if any of the nodes already exist at their promoted destination.

**Implementation**

Implementation-wise, `SampleContext` is created as an abstraction for a context. Currently it's a string for the call path, and we can later optimize it to something more efficient, e.g. a context id. Each `SampleContext` also has a `ContextState` indicating whether it's a raw context profile from the input, whether it's inlined or merged, and whether it's a synthetic profile created by the compiler. Each `FunctionSamples` now has a `SampleContext` that tells whether it's a base profile or a context profile, and, for a context profile, what its context and state are.

On top of the above context representation, a custom trie is implemented to track and manage context profiles. Specifically, `SampleContextTracker` is implemented to encapsulate a trie with `ContextTrieNode` as the node type. Each node of the trie represents a frame in a calling context, so the path from the root to a node represents a valid calling context. We also track `FunctionSamples` for each node, so the trie can serve efficient queries for context profiles. Accordingly, context profile tree promotion now becomes moving a subtree to be under the root of the entire tree, merging subtree nodes if the move encounters existing nodes.
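
A heavily simplified sketch of such a trie node (hypothetical; the real `ContextTrieNode` carries more state) might look like:

```
// Each node is one frame of a calling context; the path from the root to a
// node identifies a full context, and the node may own that context's profile.
#include <map>
#include <string>

struct FunctionSamplesStub {}; // stand-in for the real FunctionSamples

struct ContextNode {
  std::string FuncName;                        // frame at this level
  FunctionSamplesStub *Samples = nullptr;      // profile for the root..here path
  std::map<std::string, ContextNode> Children; // keyed by callsite + callee
};
```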

**Integration**

`SampleContextTracker` is now also integrated with AutoFDO, `SampleProfileReader` and `SampleProfileLoader`. When we detect that the input profile contains context-sensitive profiles, `SampleContextTracker` is used to track profiles, and all profile queries automatically go to `SampleContextTracker` instead of `SampleProfileReader`. Tracking APIs are called automatically for each inline decision from `SampleProfileLoader`.

Differential Revision: https://reviews.llvm.org/D90125
2020-12-06 11:49:18 -08:00
Kazu Hirata ddb002d7c7 [InstCombine] Remove replacePointer (NFC)
The declaration was introduced on Feb 10, 2017 in commit
ba01ed00fe without a corresponding
definition.
2020-12-06 10:24:08 -08:00
Sanjay Patel 94f6d365e4 [InstCombine] avoid crash on phi with unreachable incoming block (PR48369) 2020-12-06 09:31:47 -05:00
Fangrui Song 204d0d51b3 [MemProf] Make __memprof_shadow_memory_dynamic_address dso_local in static relocation model
The x86-64 backend currently has a bug where it uses the wrong register for the GOTPCREL reference.
The program will crash without the dso_local specifier.
2020-12-05 21:36:31 -08:00
Florian Hahn 4ceecc820b [ConstraintElimination] Handle constraints with all zero var coeffs.
Constraints where all variable coefficients are 0 do not add any useful
information. When checking such a constraint, we can simply determine whether it is always true/false.
2020-12-05 12:06:53 +00:00
Kazu Hirata 8006043b13 [IRCE] Remove unused IsSigned and its accessor (NFC)
IsSigned and its accessor, isSigned, were introduced on Oct 25, 2017
in commit 9ac7021a25.  The last use was
removed on Nov 20, 2017 in commit
268467869b.
2020-12-04 21:26:12 -08:00
Jianzhou Zhao a28db8b27a [dfsan] Add empty APIs for field-level shadow
This is a child diff of D92261.

This diff adds APIs that return the shadow type/value/zero from original
objects. For the time being these APIs simply return the primitive
shadow type/value/zero. The following diff will implement the
conversion.

As D92261 explains, some cases still use primitive shadow during
the incremental changes. The cases include
1) alloca/load/store
2) custom function IO
3) vectors
In those cases this diff does not use the new APIs, but uses primitive
shadow objects explicitly.

Reviewed-by: morehouse

Differential Revision: https://reviews.llvm.org/D92629
2020-12-04 21:42:07 +00:00
Duncan P. N. Exon Smith d10f9863a5 ADT: Migrate users of AlignedCharArrayUnion to std::aligned_union_t, NFC
Prepare to delete `AlignedCharArrayUnion` by migrating its users over to
`std::aligned_union_t`.

I will delete `AlignedCharArrayUnion` and its tests in a follow-up
commit so that it's easier to revert in isolation in case some
downstream wants to keep using it.

Differential Revision: https://reviews.llvm.org/D92516
2020-12-04 12:34:49 -08:00
Duncan P. N. Exon Smith 5b267fb796 ADT: Stop peeking inside AlignedCharArrayUnion, NFC
Update all the users of `AlignedCharArrayUnion` to stop peeking inside
(to look at `buffer`) so that a follow-up patch can replace it with an
alias to `std::aligned_union_t`.

This was reviewed as part of https://reviews.llvm.org/D92512, but I'm
splitting this bit out to commit first to reduce churn in case the
change to `AlignedCharArrayUnion` needs to be reverted for some
unexpected reason.
2020-12-04 11:07:42 -08:00
Hiroshi Yamauchi f9c3954a6e Fix for Bug 48055.
Differential Revision: https://reviews.llvm.org/D92599
2020-12-04 11:05:01 -08:00
Arthur Eubanks 7f6f9f4cf9 [NewPM] Make pass adaptors less templatey
Currently PassBuilder.cpp is by far the file that takes longest to
compile. This is due to tons of templates being instantiated per pass.

Follow PassManager by using wrappers around passes to avoid making
the adaptors templated on the pass type. This allows us to move various
adaptors' run methods into .cpp files.
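
As a generic illustration of the wrapper technique (not the actual PassBuilder/PassManager code), type erasure along these lines keeps the adaptor untemplated:

```
// The wrapper stores a type-erased callable instead of being templated on the
// concrete pass type, so its run() can be defined once in a .cpp file.
#include <functional>
#include <utility>

struct Function; // stand-in for llvm::Function

struct PassWrapper {
  std::function<bool(Function &)> Impl; // erased pass invocation
  bool run(Function &F) { return Impl(F); }
};

template <typename PassT> PassWrapper wrapPass(PassT P) {
  return {[P = std::move(P)](Function &F) mutable { return P.run(F); }};
}
```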

This reduces the compile time of PassBuilder.cpp on my machine from 66
to 39 seconds. It also reduces the size of opt from 685M to 676M.

Reviewed By: dexonsmith

Differential Revision: https://reviews.llvm.org/D92616
2020-12-04 08:30:50 -08:00
Evgeniy Brevnov 061cebb46f [NFC][NARY-REASSOCIATE] Restructure code to avoid isPotentiallyReassociatable
Currently we have to duplicate the same checks in isPotentiallyReassociatable and tryReassociate. With simple patterns like add/mul this may not be a big deal. But the situation gets much worse when I try to add support for min/max. Min/max may be represented by several instructions and can take different forms. In order to reduce complexity for the upcoming min/max support we need to restructure the code a bit to avoid the mentioned code duplication.

Reviewed By: mkazantsev

Differential Revision: https://reviews.llvm.org/D88286
2020-12-04 16:19:43 +07:00
Evgeniy Brevnov f61c29b3a7 [NARY-REASSOCIATE] Simplify traversal logic by post deleting dead instructions
Currently we delete optimized instructions as we go. That has several negative consequences. First, it complicates the traversal logic itself. Second, if a newly generated instruction has been deleted, the traversal is repeated from scratch.

But the real motivation for the change is the upcoming change adding support for min/max reassociation. There we employ the SCEV expander to generate code. As a result, newly generated instructions may not be inserted right before the original instruction (because SCEV may do hoisting) and there is no way to know the 'next' instruction.

Reviewed By: mkazantsev

Differential Revision: https://reviews.llvm.org/D88285
2020-12-04 16:17:50 +07:00
Kazu Hirata e2fc11cf9f [JumpThreading] Call eraseBlock when folding a conditional branch
This patch teaches the jump threading pass to call BPI->eraseBlock
when it folds a conditional branch.

Without this patch, BranchProbabilityInfo could end up with stale edge
probabilities for the basic block containing the conditional branch --
one edge probability less than 1.0 and the other for a removed
edge.

Differential Revision: https://reviews.llvm.org/D92608
2020-12-03 23:50:17 -08:00
Max Kazantsev 12b6c5e682 Return "[IndVars] ICmpInst should not prevent IV widening"
This reverts commit 4bd35cdc3a.

The patch was reverted during the investigation. The investigation
showed that the patch did not cause any trouble, but just exposed
an existing problem that is addressed by the previous patch
"[IndVars] Quick fix LHS/RHS bug". Relanding without changes.
2020-12-04 12:34:43 +07:00
Max Kazantsev 3df0daceb2 [IndVars] Quick fix LHS/RHS bug
The code relies on the fact that LHS is the NarrowDef but never
really checks it. Add a conservative restrictive check; a follow-up
will handle the case where RHS is the NarrowDef.
2020-12-04 12:34:42 +07:00
Jianzhou Zhao 80e326a8c4 [dfsan] Support passing non-i16 shadow values in TLS mode
This is a child diff of D92261.

It extended TLS arg/ret to work with aggregate types.

For a function
  t foo(t1 a1, t2 a2, ... tn an)
its argument shadows are saved in the TLS args as
  a1_s, a2_s, ..., an_s
The TLS ret simply holds r_s. By calculating the type size of each shadow
value, we can compute their offsets.

This is similar to what MSan does. See __msan_retval_tls and __msan_param_tls
from llvm/lib/Transforms/Instrumentation/MemorySanitizer.cpp.

Note that this change does not add test cases for overflowed TLS
arg/ret because this is hard to test without supporting aggregate shadow
types. We will be adding them after supporting that.

Reviewed-by: morehouse

Differential Revision: https://reviews.llvm.org/D92440
2020-12-04 02:45:07 +00:00
Philip Reames 0c866a3d6a [LoopVec] Support non-instructions as argument to uniform mem ops
The initial step of the uniform-after-vectorization (only lane 0 demanded) analysis was very awkwardly written. It would revisit the use list of each pointer operand of a widened load/store. As a result, it was in the worst case O(N^2), where N is the number of instructions in the loop, and it restricted the operand Value types to reduce the size of the use lists.

This patch replaces the original algorithm with one which is at most O(2N) in the number of instructions in the loop. (The key observation is that each use of a potentially interesting pointer is visited at most twice: once on the first scan, and once in the use list of *its* operand. Only instructions within the loop have their uses scanned.)

In the process, we remove a restriction which required the operand of the uniform mem op to itself be an instruction.  This allows detection of uniform mem ops involving global addresses.

Differential Revision: https://reviews.llvm.org/D92056
2020-12-03 14:51:44 -08:00
dfukalov 2ce38b3f03 [NFC] Reduce include files dependency.
1. Removed #include "...AliasAnalysis.h" in other headers and modules.
2. Cleaned up includes in AliasAnalysis.h.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D92489
2020-12-03 18:25:05 +03:00
Max Kazantsev 4bd35cdc3a Revert "[IndVars] ICmpInst should not prevent IV widening"
This reverts commit 0c9c6ddf17.

We are seeing some failures with this patch locally. Not clear
if it's causing them or just triggering a problem in another
place. Reverting while investigating.
2020-12-03 18:01:41 +07:00
modimo c1ba991e8d [NFC] Fix typo 2020-12-02 22:23:57 -08:00
Jianzhou Zhao bd726d2796 [dfsan] Rename ShadowTy/ZeroShadow with prefix Primitive
This is a child diff of D92261.

After supporting field/index-level shadow, the existing shadow with type
i16 works for only primitive types.

Reviewed-by: morehouse

Differential Revision: https://reviews.llvm.org/D92459
2020-12-03 05:31:01 +00:00
Florian Hahn 2304528bb5 [ConstraintElimination] Make sure arguments of std::pow match.
This should fix a build failure on some systems, e.g. solaris11-sparcv9
http://lab.llvm.org:8014/#/builders/22
2020-12-02 22:23:26 +00:00
Hongtao Yu 24d4291ca7 [CSSPGO] Pseudo probes for function calls.
An indirect call site needs to be probed for its potential call targets. With CSSPGO a direct call also needs a probe so that a calling context can be represented by a stack of callsite probes. Unlike pseudo probes for basic blocks, which are in the form of standalone intrinsic call instructions, pseudo probes for callsites have to be attached to the call instruction, so a separate instruction would not work.

One possible way of attaching a probe to a call instruction is to use special metadata that carries information about the probe. The special metadata would have to make its way through the optimization pipeline down to object emission. This requires additional effort to maintain the metadata in various places. Given that the `!dbg` metadata is first-class metadata and has all essential support in place, leveraging the `!dbg` metadata as a channel to encode pseudo probe information is probably the easiest solution.

With the requirement of not inflating `!dbg` metadata, which is allocated for almost every instruction, we found that the 32-bit DWARF discriminator field, which mainly serves AutoFDO, can be reused for pseudo probes. DWARF discriminators distinguish identical source locations between instructions, and with pseudo probes such support is not required. In this change we are using the discriminator field to encode the ID and type of a callsite probe, and the encoded value will be unpacked and consumed right before object emission. When a callsite is inlined, the callsite discriminator field will go with the inlined instructions. The `!dbg` metadata of an inlined instruction is in the form of a scope stack. The top of the stack is the instruction's original `!dbg` metadata and the bottom of the stack is for the original callsite of the top-level inliner. Except for the top of the stack, all other elements of the stack actually refer to the nested inlined callsites, whose discriminator fields (each of which actually represents a callsite probe) can be used together to represent the inline context of an inlined PseudoProbeInst or CallInst.

To avoid collision with the baseline AutoFDO in various places that handle DWARF discriminators where a check against the `-pseudo-probe-for-profiling` switch is not available, a special encoding scheme is used to tell apart a pseudo probe discriminator from a regular discriminator. For a regular discriminator, if all of the lowest 3 bits are non-zero, it means the discriminator is basically empty, and all higher 29 bits can be reserved for pseudo probe use.
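
For illustration only (the real encoding helpers may differ), the check described above amounts to something like:

```
// A discriminator whose lowest 3 bits are all set is treated as a
// pseudo-probe discriminator; the higher bits carry the probe payload.
#include <cstdint>

bool looksLikePseudoProbeDiscriminator(uint32_t Discriminator) {
  return (Discriminator & 0x7) == 0x7;
}
```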

Callsite pseudo probes are inserted in `SampleProfileProbePass` and a target-independent MIR pass `PseudoProbeInserter` is added to unpack the probe ID/type from `!dbg`.

Note that with this work the switch -debug-info-for-profiling will not work with -pseudo-probe-for-profiling anymore. They cannot be used at the same time.

Reviewed By: wmi

Differential Revision: https://reviews.llvm.org/D91756
2020-12-02 13:45:20 -08:00
Jianzhou Zhao dad5d95883 [dfsan] Rename CachedCombinedShadow to be CachedShadow
In D92261, this type will be used to cache both combined shadow and
converted shadow values.

Reviewed-by: morehouse

Differential Revision: https://reviews.llvm.org/D92458
2020-12-02 21:39:16 +00:00
jasonliu a65d8c5d72 [XCOFF][AIX] Generate LSDA data and compact unwind section on AIX
Summary:
AIX uses the existing EH infrastructure in clang and llvm.
The major differences would be
1. AIX does not have CFI instructions.
2. AIX uses a new personality routine, named __xlcxx_personality_v1.
   It doesn't use the GCC personality routine, because the
   interoperability is not there yet on AIX.
3. AIX does not use eh_frame sections. Instead, it uses an eh_info
section (compact unwind section) to store the information about the
personality routine and the LSDA data address.

Reviewed By: daltenty, hubert.reinterpretcast

Differential Revision: https://reviews.llvm.org/D91455
2020-12-02 18:42:44 +00:00
Bardia Mahjour a7e2c26939 [LV] Epilogue Vectorization with Optimal Control Flow (Recommit)
This is yet another attempt at providing support for epilogue
vectorization following discussions raised in RFC http://llvm.1065342.n5.nabble.com/llvm-dev-Proposal-RFC-Epilog-loop-vectorization-tt106322.html#none
and reviews D30247 and D88819.

Similar to D88819, this patch achieves epilogue vectorization by
executing a single VPlan twice: once on the main loop and a second
time on the epilogue loop (using a different VF). However, it's able
to handle more loops and generates better control flow for
cases where the trip count is too small to execute any code in vector
form.

Reviewed By: SjoerdMeijer

Differential Revision: https://reviews.llvm.org/D89566
2020-12-02 10:09:56 -05:00
Sanjay Patel 56fd29e93b [SLP] use 'match' for binop/select; NFC
This might be a small improvement in readability, but the
real motivation is to make it easier to adapt the code to
deal with intrinsics like 'maxnum' and/or integer min/max.

There is potentially help in doing that with D92086, but
we might also just add specialized wrappers here to deal
with the expected patterns.
2020-12-02 09:04:08 -05:00
Alex Zinenko 240dd92432 [OpenMPIRBuilder] forward arguments as pointers to outlined function
OpenMPIRBuilder::createParallel outlines the body region of the parallel
construct into a new function that accepts any value previously defined outside
the region as a function argument. This function is called back by the OpenMP
runtime function __kmpc_fork_call, which expects trailing arguments to be
pointers. If the region uses a value that is not of a pointer type, e.g. a
struct, the produced code would be invalid. In such cases, make createParallel
emit IR that stores the value on the stack and passes the pointer to the outlined
function instead. The outlined function then loads the value back and uses it as
normal.
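
A minimal sketch of the spill/reload pattern (a hypothetical helper built with IRBuilder, not the actual OpenMPIRBuilder code):

```
// Spill a non-pointer value to a stack slot so that only a pointer needs to
// be forwarded to the outlined function; the outlined body reloads it.
#include "llvm/IR/IRBuilder.h"
using namespace llvm;

Value *spillForForkCall(IRBuilder<> &Builder, Value *V) {
  AllocaInst *Slot =
      Builder.CreateAlloca(V->getType(), nullptr, "byref.spill");
  Builder.CreateStore(V, Slot);
  return Slot; // pass this pointer as the trailing __kmpc_fork_call argument
}

// Inside the outlined function, the original value is recovered with:
//   Value *Reloaded = Builder.CreateLoad(OrigTy, ForwardedPtr);
```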

Reviewed By: jdoerfert, llitchev

Differential Revision: https://reviews.llvm.org/D92189
2020-12-02 14:59:41 +01:00
David Sherwood 71bd59f0cb [SVE] Add support for scalable vectors with vectorize.scalable.enable loop attribute
In this patch I have added support for a new loop hint called
vectorize.scalable.enable that says whether we should enable scalable
vectorization or not. If a user wants to instruct the compiler to
vectorize a loop with scalable vectors they can now do this as
follows:

  br i1 %exitcond, label %for.end, label %for.body, !llvm.loop !2
  ...
  !2 = !{!2, !3, !4}
  !3 = !{!"llvm.loop.vectorize.width", i32 8}
  !4 = !{!"llvm.loop.vectorize.scalable.enable", i1 true}

Setting the hint to false simply reverts the behaviour back to the
default, using fixed width vectors.

Differential Revision: https://reviews.llvm.org/D88962
2020-12-02 13:23:43 +00:00
Chen Zheng 3cb7d62452 [LSR][NFC] don't collect chains when isNumRegsMajorCostOfLSR is false.
Reviewed By: samparker

Differential Revision: https://reviews.llvm.org/D92159
2020-12-01 22:29:33 -05:00
Jianzhou Zhao 405ea2b93d [msan] Replace 8 by kShadowTLSAlignment
Reviewed-by: eugenis

Differential Revision: https://reviews.llvm.org/D92275
2020-12-02 01:09:49 +00:00
Fangrui Song a5309438fe static const char *const foo => const char foo[]
By default, a non-template variable of non-volatile const-qualified type
having namespace-scope has internal linkage, so no need for `static`.
2020-12-01 10:33:18 -08:00
Bardia Mahjour c94af03f7f Revert "[LV] Epilogue Vectorization with Optimal Control Flow"
This reverts commit 9c5504adce.
Reverting to investigate build failure in http://lab.llvm.org:8011/#/builders/98/builds/1461/steps/9
2020-12-01 12:50:36 -05:00
Bardia Mahjour 9c5504adce [LV] Epilogue Vectorization with Optimal Control Flow
This is yet another attempt at providing support for epilogue
vectorization following discussions raised in RFC http://llvm.1065342.n5.nabble.com/llvm-dev-Proposal-RFC-Epilog-loop-vectorization-tt106322.html#none
and reviews D30247 and D88819.

Similar to D88819, this patch achieves epilogue vectorization by
executing a single VPlan twice: once on the main loop and a second
time on the epilogue loop (using a different VF). However, it's able
to handle more loops and generates better control flow for
cases where the trip count is too small to execute any code in vector
form.

Reviewed By: SjoerdMeijer

Differential Revision: https://reviews.llvm.org/D89566
2020-12-01 12:04:29 -05:00
Nikita Popov 624af932a8 [MemCpyOpt] Port to MemorySSA
This is a straightforward port of MemCpyOpt to MemorySSA following
the approach of D26739. MemDep queries are replaced with MSSA queries
without changing the overall structure of the pass. Some care has
to be taken to account for differences between these APIs
(MemDep also returns reads, MSSA doesn't).

Differential Revision: https://reviews.llvm.org/D89207
2020-12-01 17:57:41 +01:00
Clement Courbet 735e6c888e [MergeICmps] Fix missing split.
We were not correctly splitting blocks for chains of length 1.

Before that change, additional instructions for blocks in chains of
length 1 were not split off from the block before removing it (this was
done correctly for longer chains).
If this first block contained an instruction referenced elsewhere,
deleting the block would result in invalidation of the produced value.

This caused a miscompile which motivated D92297 (before D17993, the
nonnull and dereferenceable attributes were not added, so MergeICmps was
not triggered). The new test gep-references-bb.ll demonstrates the issue.

The regression was introduced in
rG0efadbbcdeb82f5c14f38fbc2826107063ca48b2.

This supersedes D92364.

Test case by MaskRay (Fangrui Song).

Differential Revision: https://reviews.llvm.org/D92375
2020-12-01 16:50:55 +01:00
Sanjay Patel 9f60b8b3d2 [InstCombine] canonicalize sign-bit-shift of difference to ext(icmp)
icmp is the preferred spelling in IR because icmp analysis is
expected to be better than any other analysis. This should
lead to more follow-on folding potential.

It's difficult to say exactly what we should do in codegen to
compensate. For example on AArch64, which of these is preferred:
	sub	w8, w0, w1
	lsr	w0, w8, #31

vs:
	cmp	w0, w1
	cset	w0, lt

If there are perf regressions, then we should deal with those in
codegen on a case-by-case basis.

A possible motivating example for better optimization is shown in:
https://llvm.org/PR43198 but that will require other transforms
before anything changes there.

Alive proof:
https://rise4fun.com/Alive/o4E

  Name: sign-bit splat
  Pre: C1 == (width(%x) - 1)
  %s = sub nsw %x, %y
  %r = ashr %s, C1
  =>
  %c = icmp slt %x, %y
  %r = sext %c

  Name: sign-bit LSB
  Pre: C1 == (width(%x) - 1)
  %s = sub nsw %x, %y
  %r = lshr %s, C1
  =>
  %c = icmp slt %x, %y
  %r = zext %c
2020-12-01 09:58:11 -05:00
Florian Hahn 7a4f1d59b8 [ConstraintElimination] Decompose GEP %ptr, ZEXT(SHL()).
Add support to decompose a GEP with a ZEXT(SHL()) operand.
2020-12-01 14:23:21 +00:00
Bhramar Vatsa fd679107d6
[InstCombine] Optimize away the unnecessary multi-use sign-extend
C.f. https://bugs.llvm.org/show_bug.cgi?id=47765

Added a case to handle the sign-extend pattern (Shl+AShr) with multiple uses,
optimizing it away for an individual use
when the demanded bits aren't affected by the sign-extend.

https://rise4fun.com/Alive/lgf

Reviewed By: lebedev.ri

Differential Revision: https://reviews.llvm.org/D91343
2020-12-01 16:54:00 +03:00
Roman Lebedev 94ead0190f
[InstCombine] Improve vector undef handling for sext(ashr(shl(trunc()))) fold, 2
If the shift amount was undef for some lane, the shift amount in opposite
shift is irrelevant for that lane, and the new shift amount for that lane
can be undef.
2020-12-01 16:54:00 +03:00
Roman Lebedev 52533b52b8
Revert "[InstCombine] Improve vector undef handling for sext(ashr(shl(trunc()))) fold"
It seems I have missed check lines; temporarily reverting,
will reland momentarily.

This reverts commit aa1aa13509.
2020-12-01 15:47:04 +03:00
Roman Lebedev aa1aa13509
[InstCombine] Improve vector undef handling for sext(ashr(shl(trunc()))) fold
If the shift amount was undef for some lane, the shift amount in opposite
shift is irrelevant for that lane, and the new shift amount for that lane
can be undef.
2020-12-01 15:13:08 +03:00
Roman Lebedev 8e29e20e0d
[InstCombine] Evaluate new shift amount for sext(ashr(shl(trunc()))) fold in wide type (PR48343)
It is not correct to compute that new shift amount in its narrow type
and only then extend it into the wide type:

----------------------------------------
Optimization: PR48343 good
Precondition: (width(%X) == width(%r))
  %o0 = trunc %X
  %o1 = shl %o0, %Y
  %o2 = ashr %o1, %Y
  %r = sext %o2
=>
  %n0 = sext %Y
  %n1 = sub width(%o0), %n0
  %n2 = sub width(%X), %n1
  %n3 = shl %X, %n2
  %r = ashr %n3, %n2

Done: 2016
Optimization is correct!

----------------------------------------
Optimization: PR48343 bad
Precondition: (width(%X) == width(%r))
  %o0 = trunc %X
  %o1 = shl %o0, %Y
  %o2 = ashr %o1, %Y
  %r = sext %o2
=>
  %n0 = sub width(%o0), %Y
  %n1 = sub width(%X), %n0
  %n2 = sext %n1
  %n3 = shl %X, %n2
  %r = ashr %n3, %n2

Done: 1
ERROR: Domain of definedness of Target is smaller than Source's for i9 %r

Example:
%X i9 = 0x000 (0)
%Y i4 = 0x3 (3)
%o0 i4 = 0x0 (0)
%o1 i4 = 0x0 (0)
%o2 i4 = 0x0 (0)
%n0 i4 = 0x1 (1)
%n1 i4 = 0x8 (8, -8)
%n2 i9 = 0x1F8 (504, -8)
%n3 i9 = 0x000 (0)
Source value: 0x000 (0)
Target value: undef


I.e. we should be computing it in the wide type from the beginning.

Fixes https://bugs.llvm.org/show_bug.cgi?id=48343
2020-12-01 15:13:07 +03:00
Roman Lebedev 15f8060f6f
[SimplifyCFG] FoldBranchToCommonDest: don't require that cmp of br is last instruction
There is no correctness need for that, and since we allow live-out
uses, this could theoretically happen, because currently nothing
will move the cond to right before the branch in those tests.
But regardless, lifting that restriction even makes the transform
easier to understand.

This makes the transform happen in 81 more cases (+0.55%).
2020-12-01 15:13:06 +03:00
Cullen Rhodes cba4accda0 [LV] Clamp VF hint when unsafe
In the following loop the dependence distance is 2 and can only be
vectorized if the vector length is no larger than this.

  void foo(int *a, int *b, int N) {
    #pragma clang loop vectorize(enable) vectorize_width(4)
    for (int i=0; i<N; ++i) {
      a[i + 2] = a[i] + b[i];
    }
  }

However, when specifying a VF of 4 via a loop hint this loop is
vectorized. According to [1][2], loop hints are ignored if the
optimization is not safe to apply.

This patch introduces a check to bail out of vectorization if the
user-specified VF is greater than the maximum feasible VF, unless explicitly
forced with '-force-vector-width=X'.

[1] https://llvm.org/docs/LangRef.html#llvm-loop-vectorize-and-llvm-loop-interleave
[2] https://clang.llvm.org/docs/LanguageExtensions.html#extensions-for-loop-hint-optimizations

Reviewed By: sdesmalen, fhahn, Meinersbur

Differential Revision: https://reviews.llvm.org/D90687
2020-12-01 11:30:34 +00:00
Caroline Concatto 4b0ef2b075 [NFC][CostModel]Extend class IntrinsicCostAttributes to use ElementCount Type
This patch replaces the attribute `unsigned VF` in the class
IntrinsicCostAttributes with `ElementCount VF`.
This is a non-functional change to help upcoming patches compute the cost
model for scalable vectors inside this class.

Differential Revision: https://reviews.llvm.org/D91532
2020-12-01 11:12:51 +00:00
Florian Hahn efa9728a50 [ConstraintElimination] Decompose GEP %ptr, SHL().
Add support the decompose a GEP with an SHL operand.
2020-12-01 10:58:36 +00:00
Sjoerd Meijer f44ba25135 ExtractValue instruction costs
Instruction ExtractValue wasn't handled in
LoopVectorizationCostModel::getInstructionCost(). As a result, it was modeled
as a mul, which is not really accurate. Since it is free (most of the time),
this now gets a cost of 0 using getInstructionCost.

This is a follow-up of D92208, that required changing this regression test.
In a follow-up I will look at InsertValue, which also isn't handled yet.

Differential Revision: https://reviews.llvm.org/D92317
2020-12-01 10:42:23 +00:00
Greg Parker bcc802fa36 [DSE] Remove a redundant call to getLocForWriteEx()
Differential Revision: https://reviews.llvm.org/D92263
2020-11-30 21:12:24 -08:00
Mircea Trofin 5fe10263ab [llvm][inliner] Reuse the inliner pass to implement 'always inliner'
Enable performing mandatory inlinings upfront, by reusing the same logic
as the full inliner, instead of the AlwaysInliner. This has the
following benefits:
- reduce code duplication - one inliner codebase
- open the opportunity to help the full inliner by performing additional
function passes after the mandatory inlinings, but before the full
inliner. Performing the mandatory inlinings first simplifies the problem
the full inliner needs to solve: fewer call sites, more contextualization, and,
depending on the additional function optimization passes run between the
2 inliners, higher accuracy of cost models / decision policies.

Note that this patch does not yet enable much in terms of post-always
inline function optimization.

Differential Revision: https://reviews.llvm.org/D91567
2020-11-30 12:03:39 -08:00
Hongtao Yu 64fa8cce22 [CSSPGO] Pseudo probe instrumentation pass
This change introduces a pseudo probe instrumentation pass for block instrumentation. Please refer to https://reviews.llvm.org/D86193 for the whole story.

Given the following LLVM IR:

```
define internal void @foo2(i32 %x, void (i32)* %f) !dbg !4 {
bb0:
   %cmp = icmp eq i32 %x, 0
   br i1 %cmp, label %bb1, label %bb2
bb1:
   br label %bb3
bb2:
   br label %bb3
bb3:
   ret void
}
```

The instrumented IR will look like the following. Note that each llvm.pseudoprobe intrinsic call represents a pseudo probe at a block; its first parameter is the GUID of the probe’s owner function and its second parameter is the probe’s ID.

```
define internal void @foo2(i32 %x, void (i32)* %f) !dbg !4 {
bb0:
   %cmp = icmp eq i32 %x, 0
   call void @llvm.pseudoprobe(i64 837061429793323041, i64 1)
   br i1 %cmp, label %bb1, label %bb2
bb1:
   call void @llvm.pseudoprobe(i64 837061429793323041, i64 2)
   br label %bb3
bb2:
   call void @llvm.pseudoprobe(i64 837061429793323041, i64 3)
   br label %bb3
bb3:
   call void @llvm.pseudoprobe(i64 837061429793323041, i64 4)
   ret void
}
```

Reviewed By: wmi

Differential Revision: https://reviews.llvm.org/D86499
2020-11-30 10:16:54 -08:00
Florian Hahn fe83adb05a
[VPlan] Use VPUser to manage VPPredInstPHIRecipe operand (NFC).
VPPredInstPHIRecipe is one of the recipes that was missed during the
initial conversion. This patch adjusts the recipe to also manage its
operand using VPUser.
2020-11-30 13:09:58 +00:00
Roman Lebedev b0e9b7c59f
[NFC][SimplifyCFG] Add STATISTIC() to the FoldValueComparisonIntoPredecessors() fold 2020-11-30 12:27:16 +03:00
Max Kazantsev 0c9c6ddf17 [IndVars] ICmpInst should not prevent IV widening
If we decided to widen the IV with zext, then unsigned comparisons
should not prevent widening (likewise for sext and signed comparisons):
the result of the comparison does not change in the wider type.
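
A small made-up illustration of that equivalence (not from the patch):
zero-extending both operands preserves the result of an unsigned compare,
just as sign-extending preserves a signed one, so the compare by itself need
not block widening.

```
; These two functions always return the same value for the same inputs.
define i1 @cmp_narrow(i32 %iv, i32 %n) {
  %c = icmp ult i32 %iv, %n
  ret i1 %c
}

define i1 @cmp_wide(i32 %iv, i32 %n) {
  %iv.wide = zext i32 %iv to i64
  %n.wide = zext i32 %n to i64
  %c = icmp ult i64 %iv.wide, %n.wide
  ret i1 %c
}
```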

Differential Revision: https://reviews.llvm.org/D92207
Reviewed By: nikic
2020-11-30 10:51:31 +07:00
Fangrui Song 5408fdcd78 [VPlan] Fix -Wunused-variable after a813090072 2020-11-29 10:38:01 -08:00
Florian Hahn 4bc9b909d7
[VPlan] Use VPValue and VPUser ops to print VPReplicateRecipe. 2020-11-29 18:28:27 +00:00
Florian Hahn a813090072
[VPlan] Manage stored values of interleave groups using VPUser (NFC)
Interleave groups also depend on the values they store. Manage the
stored values as VPUser operands. This is currently an NFC, but is
required to allow VPlan transforms and to manage generated vector values
exclusively in VPTransformState.
2020-11-29 17:24:36 +00:00
Andrew Litteken a8a43b6338 Revert "[IRSim][IROutliner] Adding the extraction basics for the IROutliner."
Reverting commit due to address sanitizer errors.

> Extracting the similar regions is the first step in the IROutliner.
> 
> Using the IRSimilarityIdentifier, we collect the SimilarityGroups and
> sort them by how many instructions will be removed.  Each
> IRSimilarityCandidate is used to define an OutlinableRegion.  Each
> region is ordered by its occurrence in the Module, and regions that
> are not compatible with previously outlined regions are discarded.
> 
> Each region is then extracted with the CodeExtractor into its own
> function.
> 
> We test that we correctly extract in:
> test/Transforms/IROutliner/extraction.ll
> test/Transforms/IROutliner/address-taken.ll
> test/Transforms/IROutliner/outlining-same-globals.ll
> test/Transforms/IROutliner/outlining-same-constants.ll
> test/Transforms/IROutliner/outlining-different-structure.ll
> 
> Reviewers: paquette, jroelofs, yroux
> 
> Differential Revision: https://reviews.llvm.org/D86975

This reverts commit bf899e8913.
2020-11-27 19:55:57 -06:00
Andrew Litteken bf899e8913 [IRSim][IROutliner] Adding the extraction basics for the IROutliner.
Extracting the similar regions is the first step in the IROutliner.

Using the IRSimilarityIdentifier, we collect the SimilarityGroups and
sort them by how many instructions will be removed.  Each
IRSimilarityCandidate is used to define an OutlinableRegion.  Each
region is ordered by its occurrence in the Module, and regions that
are not compatible with previously outlined regions are discarded.

Each region is then extracted with the CodeExtractor into its own
function.

We test that we correctly extract in:
test/Transforms/IROutliner/extraction.ll
test/Transforms/IROutliner/address-taken.ll
test/Transforms/IROutliner/outlining-same-globals.ll
test/Transforms/IROutliner/outlining-same-constants.ll
test/Transforms/IROutliner/outlining-different-structure.ll

Reviewers: paquette, jroelofs, yroux

Differential Revision: https://reviews.llvm.org/D86975
2020-11-27 19:08:29 -06:00
Florian Hahn ae008798a4
[VPlan] Use VPTransformState::set in widenGEP.
This patch updates widenGEP to manage the resulting vector values using
the VPValue of the VPWidenGEP recipe.
2020-11-27 17:01:55 +00:00
Francesco Petrogalli 8e0148dff7 [AllocaInst] Update `getAllocationSizeInBits` to return `TypeSize`.
Reviewed By: peterwaller-arm, sdesmalen

Differential Revision: https://reviews.llvm.org/D92020
2020-11-27 16:39:10 +00:00
Sjoerd Meijer 10ad64aa3b [SLP] Dump Tree costs. NFC.
This adds LLVM_DEBUG messages to dump the (intermediate) tree cost
calculations, which is useful for tracing how the final cost is
calculated.
2020-11-27 11:37:33 +00:00
Roman Lebedev b33fbbaa34
Reland [SimplifyCFG] FoldBranchToCommonDest: lift use-restriction on bonus instructions
This was originally committed in 2245fb8aaa,
but was immediately reverted in f3abd54958
because of a PHI handling issue.

Original commit message:

1. It doesn't make sense to enforce that the bonus instruction
   is only used once in its basic block. What matters is
   whether those user instructions fit within our budget, sure,
   but that is another question.
2. It doesn't make sense to enforce that said bonus instructions
   are only used within their basic block. Perhaps the branch
   condition isn't using the value computed by said bonus instruction,
   and said bonus instruction is simply being calculated
   to be used in successors?

So, if we can clone bonus instructions to lift these restrictions,
we just need to carefully update their external uses
to use the new cloned instructions.
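
A hand-written sketch of the situation (the function and values are made up,
not from the patch): %bonus is a bonus instruction of %bb that is also used
in the common successor, so when it is cloned into the predecessor that
external use has to be updated carefully.

```
define i32 @sketch(i32 %x, i32 %y, i1 %c0) {
entry:
  br i1 %c0, label %common, label %bb

bb:
  ; "bonus" instruction: it feeds the branch condition and is also
  ; used outside this block, in %common, which used to block the fold.
  %bonus = add i32 %y, 1
  %c = icmp eq i32 %bonus, 42
  br i1 %c, label %common, label %other

common:
  %r = phi i32 [ 0, %entry ], [ %bonus, %bb ]
  ret i32 %r

other:
  ret i32 -1
}
```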

Notably, this transform (even without this change) appears to be
poison-unsafe as per alive2, but is otherwise (including the patch) legal.

We don't introduce any new PHI nodes, but only "move" the instructions
around, so I'm not really seeing much potential for extra cost modelling
for the transform, especially since now we allow at most one such
bonus instruction by default.

This causes the fold to fire +11.4% more (13216 -> 14725)
as of vanilla llvm test-suite + RawSpeed.

The motivational pattern is IEEE-754-2008 Binary16->Binary32
extension code:
ca57d77fb2/src/librawspeed/common/FloatingPoint.h (L115-L120)
^ that should be a switch, but it is not now: https://godbolt.org/z/bvja5v
That being said, even though it seemed like this would fix it: https://godbolt.org/z/xGq3TM
apparently that fold is happening somewhere else after all,
so something else also has a similar 'artificial' restriction.
2020-11-27 12:47:15 +03:00