If the condition of a select is a compare, pass its predicate to
TTI::getCmpSelInstrCost to get a more accurate cost value instead
of passing BAD_ICMP_PREDICATE.
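A minimal sketch of the idea (SI, VecTy, CondTy, CostKind, and TTI stand in for the surrounding cost-model state; illustrative only, not the exact in-tree code, and it assumes the usual LLVM headers):

  // Use the compare's real predicate when the select's condition is a compare.
  CmpInst::Predicate Pred = CmpInst::BAD_ICMP_PREDICATE;
  if (auto *Cmp = dyn_cast<CmpInst>(SI->getCondition()))
    Pred = Cmp->getPredicate();
  InstructionCost Cost = TTI.getCmpSelInstrCost(
      Instruction::Select, VecTy, CondTy, Pred, CostKind, SI);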
I noticed that the commit message from D90070 had a comment about the
vectorized select predicate possibly being composed of other compares with
different predicate values, but I wasn't able to construct an example
where this was an actual issue. If this is an issue, I guess we could
add another check that the block isn't predicated for any reason.
Reviewed By: dmgreen, fhahn
Differential Revision: https://reviews.llvm.org/D114646
VPWidenIntOrFpInductionRecipes can either be constructed with a PHI and
an optional cast or a PHI and a trunc instruction. Reflect this in 2
separate constructors. This also simplifies a follow-up change.
This reverts change 2c391a5/D87551. As noted in the llvm-dev thread "LICM as canonical form" sent earlier today, introducing this was a major design change made without sufficient cause.
A profile-driven LICM is not an unreasonable design; it simply is not what we have. Switching to such a model requires a lot more work than just this patch, and broad agreement that it is the right direction for the optimizer as a whole.
Worth noting is that all the tests included in the reverted change are probably handled if we allow running unconstrained LICM and later run LoopSink. As such, we have no public examples that motivate a profit-based hoisting approach.
When the Control Flow Guard check was inserted, the funclet bundle was not checked. Therefore, incorrect code was generated when the call to the target function carried a "funclet" bundle.
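A hedged sketch of the shape of the fix, forwarding an existing "funclet" bundle to the newly created guard-check call so it stays inside the correct funclet (OriginalCall, GuardCheckFnTy, GuardCheckFn, and Target are illustrative names, not the actual variables in the pass):

  SmallVector<OperandBundleDef, 1> Bundles;
  if (auto Bundle = OriginalCall.getOperandBundle(LLVMContext::OB_funclet))
    Bundles.emplace_back(*Bundle);   // keep the new call in the same funclet
  CallInst *Check = CallInst::Create(GuardCheckFnTy, GuardCheckFn, {Target},
                                     Bundles, "", &OriginalCall);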
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D114914
This fixes a bug in 740057d. There are two ways to describe the issue:
* One caller hasn't yet proven nocapture on the argument. Given that, the inference routine is responsible for bailing out on a potential capture.
* Even if we know the argument is nocapture, the access inference needs to traverse the exact set of users the capture tracking would (or exit conservatively). Even if capture tracking can prove a store is non-capturing (e.g. to a local alloc which doesn't escape), we still need to track the copy of the pointer to see if it's later reloaded and accessed again.
Note that all the test changes except the newly added ones appear to be false negatives. That is, cases where we could prove writeonly, but the current code isn't strong enough. That's why I didn't spot this originally.
The earlier usage of wouldInstructionBeTriviallyDead is based on the
assumption that the use count of the instruction being checked is
zero. This patch separates the API into two different forms:
1. The strictly conservative one where the instruction is trivially dead iff the uses are dead.
2. The slightly relaxed form, where an instruction is dead along paths where it is not used.
The second form can be used in identifying instructions that are valid
to sink down to uses (D109917).
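Roughly, the split could be pictured as the following pair of helpers (a sketch only; the second name and the exact signatures are illustrative rather than a verbatim copy of llvm/Transforms/Utils/Local.h):

  // 1) Strictly conservative: the instruction is trivially dead iff all of
  //    its uses are dead (its use count is zero).
  bool wouldInstructionBeTriviallyDead(Instruction *I,
                                       const TargetLibraryInfo *TLI = nullptr);

  // 2) Relaxed: the instruction is dead along paths where it is not used,
  //    which is what sinking-to-uses (D109917) needs.
  bool wouldInstructionBeTriviallyDeadOnUnusedPaths(
      Instruction *I, const TargetLibraryInfo *TLI = nullptr);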
Reviewed-By: reames
Differential Revision: https://reviews.llvm.org/D114647
DSE has some extra logic to determine the write location of library
calls like str*cpy and str*cat. This patch moves the logic to a new
MemoryLocation::getForDest variant, which takes a call and TLI.
This patch should be NFC, because no other places take advantage of the
new helper yet.
Suggested by @reames post-commit 7eec832def.
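A hedged usage sketch of the new helper (CB and TLI are illustrative names; this assumes the helper returns an Optional and may fail for calls it does not understand):

  if (Optional<MemoryLocation> DestLoc = MemoryLocation::getForDest(CB, TLI)) {
    // DestLoc describes the memory written by the libcall (e.g. strcpy's
    // destination) and can be fed to alias-analysis / DSE queries.
  }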
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D114872
Since a coroutine will be split into pieces, the compiler moves the
dbg.declare intrinsic after the Storage is created to make sure the
corresponding debug information is still available after splitting.
However, this is problematic if the storage instruction is an
InvokeInst, which is a terminator: we cannot move an instruction after an
InvokeInst. This patch instead moves the dbg.declare intrinsic into the
normal destination of the InvokeInst. This should be fine because the
Storage is invalid on the exception path.
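A sketch of the intended handling (DDI and Storage are illustrative names for the dbg.declare intrinsic and its storage definition; not the exact in-tree code):

  if (auto *II = dyn_cast<InvokeInst>(Storage)) {
    // An invoke is a terminator, so insert at the start of its normal
    // destination instead of "after" the invoke.
    BasicBlock *NormalDest = II->getNormalDest();
    DDI->moveBefore(&*NormalDest->getFirstInsertionPt());
  } else {
    DDI->moveAfter(cast<Instruction>(Storage));
  }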
This change extends the current logic for inferring readonly and readnone argument attributes to also infer writeonly.
This change is deliberately minimal; there are a couple of areas for follow-up.
* I left out all call handling and thus any benefit from the SCC walk. When examining the test changes, I realized the existing code is imprecise, and I am going to fix that in its own revision before adding in the writeonly handling. (Mostly because updating the tests is hard when I, the human, can't figure out whether the result is correct.)
* I left out handling for storing a value (as opposed to storing to a pointer). This should benefit readonly/readnone as well, and applies to a bunch of other instructions. Seemed worth having as a separate review.
Differential Revision: https://reviews.llvm.org/D114963
This is a continuation of D109860 and D109903.
An important challenge for profile inference is caused by the fact that the
sample profile is collected on a fully optimized binary, while the block and
edge frequencies are consumed at an early stage of the compilation that operates
on non-optimized IR. As a result, some of the basic blocks may not have
associated sample counts, and it is up to the algorithm to deduce missing
frequencies. The problem is illustrated in the figure, where three basic
blocks are not present in the optimized binary and hence receive no samples
during profiling.
We found that it is beneficial to treat all such blocks equally. Otherwise the
compiler may decide that some blocks are “cold” and apply undesirable
optimizations (e.g., hot-cold splitting) regressing the performance. Therefore,
we want to distribute the counts evenly along the blocks with missing samples.
This is achieved by a post-processing step that identifies "dangling" subgraphs
consisting of basic blocks with no sampled counts; once the subgraphs are
found, we rebalance the flow so that every branch probability is 50:50 within
the subgraphs.
Our experiments indicate up to a 1% performance win from the optimization on
some binaries and a significant improvement in the quality of profile counts
(when compared to ground-truth instrumentation-based counts).
{F19093045}
Reviewed By: hoy
Differential Revision: https://reviews.llvm.org/D109980
This is a continuation of D109860.
Traditional flow-based algorithms cannot guarantee that the resulting edge
frequencies correspond to a *connected* flow in the control-flow graph. For
example, for an instance in the attached figure, a flow-based (or any other)
inference algorithm may produce an output in which the hot loop is disconnected
from the entry block (refer to the rightmost graph in the figure). Furthermore,
creating a connected minimum-cost maximum flow is a computationally NP-hard
problem. Hence, we apply post-processing adjustments to the computed flow
by connecting all isolated flow components ("islands").
This feature helps to keep all blocks with sample counts connected and results
in significant performance wins for some binaries.
{F19077343}
Reviewed By: hoy
Differential Revision: https://reviews.llvm.org/D109903
If an extractelement instruction is used multiple times in different tree
entries (either vectorized or gathered), we need to compensate for the
scalar cost of such instructions. They are completely removed if all users
are part of the tree, but we need to compensate the cost only once for each
instruction.
Differential Revision: https://reviews.llvm.org/D114958
Outline the code for finding common vectors in insertelement
instructions into a separate function for future patches. This also
improves the process by adding some extra checks for early exit and
fixes a bug where a match was always found because of an erroneous
comparison of the same values.
Differential Revision: https://reviews.llvm.org/D114909
If several shuffle instructions are emitted, some of them might be the
same as, or compatible with (less defined than), previously emitted ones.
Such shuffles can be removed safely, improving the total cost of the
vectorized code.
Differential Revision: https://reviews.llvm.org/D114087
When doing load/store promotion within LICM, if we
cannot prove that it is safe to sink the store, we won't
hoist the load, even though we can prove the load could
be dereferenced and moved outside the loop. This patch
implements the load promotion by moving the load into the
loop preheader and inserting a proper PHI in the loop. The
store is kept as is in the loop. By doing this, we avoid
loading from the memory location in each iteration.
Please consider this small example:
loop {
var = *ptr;
if (var) break;
*ptr = var + 1;
}
After this patch, it will be:
var0 = *ptr;
loop {
var1 = phi (var0, var2);
if (var1) break;
var2 = var1 + 1;
*ptr = var2;
}
This addresses some problems from [0].
[0] https://bugs.llvm.org/show_bug.cgi?id=51193
Differential revision: https://reviews.llvm.org/D113289
Add support for memset_pattern{4,8} similar to the existing
memset_pattern16 handling.
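For reference, memset_pattern4 on Darwin fills a buffer by repeating a 4-byte pattern (memset_pattern8/16 are the 8- and 16-byte analogues); a small usage example:

  #include <string.h>   // Darwin declares memset_pattern4/8/16 here
  #include <stdint.h>

  void fillWithPattern(uint32_t *Buf, size_t NumElts) {
    uint32_t Pattern = 0xDEADBEEF;
    // Length is in bytes; the pattern is repeated across the whole range.
    memset_pattern4(Buf, &Pattern, NumElts * sizeof(uint32_t));
  }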
Reviewed By: ab
Differential Revision: https://reviews.llvm.org/D114883
This fixes the assertion failure reported in https://reviews.llvm.org/D114889#3166417,
by making RecursivelyDeleteTriviallyDeadInstructionsPermissive()
more permissive. As the function accepts a WeakTrackingVH, even if
originally only Instructions were inserted, we may end up with
different Value types after a RAUW operation. As such, we should
not assume that the vector only contains instructions.
Notably this matches the behavior of the
RecursivelyDeleteTriviallyDeadInstructions() function variant which
accepts a single value rather than vector.
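A sketch of the more permissive iteration (illustrative, not the exact in-tree loop): after RAUW, a WeakTrackingVH in the worklist may reference a non-instruction Value or be null, so both cases are skipped rather than asserted on.

  for (WeakTrackingVH &VH : DeadInsts) {
    Value *V = VH;
    if (!V)
      continue;                       // value was deleted already
    auto *I = dyn_cast<Instruction>(V);
    if (!I)
      continue;                       // e.g. replaced by a Constant via RAUW
    // ... proceed to delete I if it is trivially dead ...
  }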
`memcpy_chk` can be treated like `memcpy`, with the exception that it
may not return (if it aborts the program).
See D114793 for a similar patch for `memset_chk`.
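A small model of the semantics described above, under the usual fortify contract (illustrative C++, not the libc implementation):

  #include <cstddef>
  #include <cstdlib>
  #include <cstring>

  void *memcpy_chk_model(void *Dst, const void *Src, std::size_t Len,
                         std::size_t DstLen) {
    if (Len > DstLen)
      std::abort();                        // the "may not return" path
    return std::memcpy(Dst, Src, Len);     // otherwise behaves like memcpy
  }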
Reviewed By: xbolva00
Differential Revision: https://reviews.llvm.org/D114863
5c77aa2b91 "[unroll] Use early return in shouldFullUnroll [nfc]" wasn't
quite NFC, since !(x <= y) is x > y rather than x >= y.
Credit to Justin Bogner for spotting the bug.
Reviewed By: reames
Differential Revision: https://reviews.llvm.org/D114894
Previously we missed cloning metadata on function declarations because
we don't call CloneFunctionInto() on declarations in CloneModule().
Reviewed By: dexonsmith
Differential Revision: https://reviews.llvm.org/D113812
The benefits of sampling-based PGO crucially depend on the quality of profile
data. This diff implements a flow-based algorithm, called profi, that helps to
overcome the inaccuracies in a profile after it is collected.
Profi is an extended and significantly re-engineered version of the classic MCMF
(min-cost max-flow) approach suggested by Levin, Newman, and Haber [2008,
Complementing missing and inaccurate profiling using a minimum cost circulation
algorithm]. It
models profile inference as an optimization problem on a control-flow graph with
the objectives and constraints capturing the desired properties of profile data.
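For orientation, the classic min-cost flow formulation that profi extends has roughly this shape (a generic statement of MCMF, not the exact objective used in SampleProfileInference.cpp):

  \min_f \; \sum_{e \in E} cost(e)\, f(e)
  \quad \text{s.t.} \quad
  \sum_{(u,v) \in E} f(u,v) \;=\; \sum_{(v,w) \in E} f(v,w) \quad \forall\, v \in V \setminus \{s, t\},
  \qquad 0 \le f(e) \le cap(e) \quad \forall\, e \in E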
Three important challenges that are being solved by profi:
- "fixing" errors in profiles caused by sampling;
- converting basic block counts to edge frequencies (branch probabilities);
- dealing with "dangling" blocks having no samples in the profile.
The main implementation (and required docs) are in SampleProfileInference.cpp.
The worst-case time complexity is quadratic in the number of blocks in a
function, O(|V|^2). However, careful engineering and extensive evaluation show
that the running time is (slightly) super-linear. In particular, instances with
1000 blocks are solved within 0.1 seconds.
The algorithm has been extensively tested internally on prod workloads,
significantly improving the quality of generated profile data and providing
speedups in the range from 0% to 5%. For "smaller" benchmarks (SPEC06/17), it
generally improves the performance (with a few outliers) but extra work in
the compiler might be needed to re-tune existing optimization passes relying on
profile counts.
UPD Dec 1st 2021:
- synced the declaration and definition of the option `SampleProfileUseProfi` to use type `cl::opt<bool>`;
- added `inline` for `SampleProfileInference<BT>::findUnlikelyJumps` and `SampleProfileInference<BT>::isExit` to avoid linking problems on Windows.
Reviewed By: wenlei, hoy
Differential Revision: https://reviews.llvm.org/D109860
This bases the CleanupConstantGlobalUsers() implementation around
the ConstantFoldLoadFromConst() API. The general approach is that
we discover all users while looking through casts, and then
constant fold loads and drop stores and memintrinsics.
This avoids special cases and limitations in the previous
implementation, which is also incompatible with opaque pointers.
The result is a bit more powerful than before, because we now use
more general load folding logic which can for example look through
pointer bitcasts between different sizes. This is where the test
changes come from, as we now fold more loads and can thus remove
more globals.
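A hedged sketch of the core folding step (GV, LI, Offset, and DL are illustrative names; assumes llvm/Analysis/ConstantFolding.h and that the load's constant byte offset into the initializer has already been computed):

  // Fold a load of a constant global's initializer at a known byte offset,
  // instead of pattern-matching GEP/bitcast chains by hand.
  if (Constant *Val = ConstantFoldLoadFromConst(GV->getInitializer(),
                                                LI->getType(), Offset, DL)) {
    LI->replaceAllUsesWith(Val);
    LI->eraseFromParent();
  }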
Differential Revision: https://reviews.llvm.org/D114889
Instead of using `DenseMap::find()` and `DenseMap::insert()`, use
`DenseMap::operator[]` to get a reference to the profile data and update
the reference. This simplifies the changes in D114565.
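A minimal illustration of the pattern (the key and payload types are placeholders, not the real profile data types):

  #include "llvm/ADT/DenseMap.h"
  #include <cstdint>
  using namespace llvm;

  void addSample(DenseMap<const void *, uint64_t> &Counts, const void *Key) {
    // Before: find() + insert() just to obtain a valid iterator.
    //   auto It = Counts.find(Key);
    //   if (It == Counts.end())
    //     It = Counts.insert({Key, 0}).first;
    //   ++It->second;

    // After: operator[] value-initializes a missing entry and returns a
    // reference that can be updated in place.
    ++Counts[Key];
  }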
Reviewed By: kyulee
Differential Revision: https://reviews.llvm.org/D114828
If two reaching kernels disagree on the execution mode we cannot guard a
function right now. Ensure we do not as we otherwise will cause a
deadlock.
Reviewed By: JonChesterfield
Differential Revision: https://reviews.llvm.org/D114866
Improved the calculation of shuffled extracts, where possible: calculate the
cost of the extracted scalars if some users are not insertelements, and
improve the total cost estimation of the shuffled scalars used in
insertelement build vectors.
Differential Revision: https://reviews.llvm.org/D113782
An undefined vector might be not only an UndefValue; it can also be a
constant vector with undef or poison elements. We need to check for this
kind of undef too.
Differential Revision: https://reviews.llvm.org/D114873
The code in widenMemoryInstruction has already been transitioned
to only rely on information provided by VPWidenMemoryInstructionRecipe
directly.
Moving the code directly to VPWidenMemoryInstructionRecipe::execute
completes the transition for the recipe.
It provides the following advantages:
1. Less indirection, easier to see what's going on.
2. Removes accesses to fields of ILV.
Point 2 in particular ensures that no dependencies on fields in ILV for
vector code generation are re-introduced.
Reviewed By: Ayal
Differential Revision: https://reviews.llvm.org/D114324
When doing load/store promotion within LICM, if we
cannot prove that it is safe to sink the store, we won't
hoist the load, even though we can prove the load could
be dereferenced and moved outside the loop. This patch
implements the load promotion by moving the load into the
loop preheader and inserting a proper PHI in the loop. The
store is kept as is in the loop. By doing this, we avoid
loading from the memory location in each iteration.
Please consider this small example:
loop {
var = *ptr;
if (var) break;
*ptr = var + 1;
}
After this patch, it will be:
var0 = *ptr;
loop {
var1 = phi (var0, var2);
if (var1) break;
var2 = var1 + 1;
*ptr = var2;
}
This addresses some problems from [0].
[0] https://bugs.llvm.org/show_bug.cgi?id=51193
Differential revision: https://reviews.llvm.org/D113289
The memset_chk library function should match memset's attributes with
respect to memory effects (argmemonly, writeonly). It also does not
raise exceptions. It may not return, in case it aborts the program.
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D114793
This allows for better optimization of 'stores-of-existing-values' and
possibly helps passes further down the pipeline.
Reviewed By: asbirlea
Differential Revision: https://reviews.llvm.org/D113712
Fix printing of LoopNestPasses when using the opt pipeline printer
option -print-pipeline-passes.
Reviewed By: aeubanks
Differential Revision: https://reviews.llvm.org/D114771
The commit below causes an assertion when the insertelement type is not an
integer type, such as half. This is because the transformation creates a zext
before doing the bitwise OR, and zext is supported only for integer types.
80ab06c599
Reviewed By: spatel
Differential Revision: https://reviews.llvm.org/D114734
Extended support for undefined source vectors/extract indices/non-fixed
vector types; also, there is no need to check for the parent of
extractelement instructions with constant indices.
Differential Revision: https://reviews.llvm.org/D114121
For nodes that are in a cycle of a profiled call graph, the current order the underlying scc_iter computes purely depends on how those nodes are reached from outside the SCC and inside the SCC, based on the Tarjan algorithm. This does not honor profile edge hotness, and thus does not guarantee that hot callsites are inlined prior to cold callsites. To mitigate that, I'm adding an extra sorter on top of scc_iter to sort SCC functions in the order of callsite hotness, instead of changing the internals of scc_iter.
Sorting on callsite hotness can optimally be based on detecting cycles on a directed call graph, i.e., removing the coldest edge until the cycle is broken. However, detecting cycles isn't cheap. I'm using an MST-based approach which is faster and appears to deliver some performance wins.
Reviewed By: wenlei
Differential Revision: https://reviews.llvm.org/D114204
Using the optimized access enables additional optimizations in cases
where the defining access is a non-aliasing store.
Alternatively we could also walk upwards and skip non-aliasing defs
here, but my experiments so far showed that this will noticeably
increase compile-time for little extra gain compared to just using the
optimized access.
Improvements of dse.NumRedundantStores on MultiSource/CINT2006/CPF2006
on X86 with -O3 (before, after, relative change):
test-suite...-typeset/consumer-typeset.test 1.00 76.00 7500.0%
test-suite.../Benchmarks/Bullet/bullet.test 3.00 12.00 300.0%
test-suite...006/453.povray/453.povray.test 3.00 6.00 100.0%
test-suite...telecomm-gsm/telecomm-gsm.test 1.00 2.00 100.0%
test-suite...ediabench/gsm/toast/toast.test 1.00 2.00 100.0%
test-suite...marks/7zip/7zip-benchmark.test 1.00 2.00 100.0%
test-suite...ications/JM/lencod/lencod.test 7.00 10.00 42.9%
test-suite...6/464.h264ref/464.h264ref.test 6.00 8.00 33.3%
test-suite...ications/JM/ldecod/ldecod.test 6.00 7.00 16.7%
test-suite...006/447.dealII/447.dealII.test 33.00 33.00 0.0%
test-suite...6/471.omnetpp/471.omnetpp.test NaN 1.00 nan%
test-suite...006/450.soplex/450.soplex.test NaN 2.00 nan%
test-suite.../CINT2006/403.gcc/403.gcc.test NaN 7.00 nan%
test-suite...lications/ClamAV/clamscan.test NaN 1.00 nan%
test-suite...CI_Purple/SMG2000/smg2000.test NaN 3.00 nan%
Follow-up to D111727.
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D112315
This patch adds the `__kmpc_get_warp_size` OpenMP RTL function to the
externalization RAII struct. This was getting optimized out and then
being replaced with an undefined value once added back in, causing bugs
for complex reductions.
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D114802
The code in widenSelectInstruction has already been transitioned
to only rely on information provided by VPWidenSelectRecipe directly.
Moving the code directly to VPWidenSelectRecipe::execute completes
the transition for the recipe.
It provides the following advantages:
1. Less indirection, easier to see what's going on.
2. Removes accesses to fields of ILV.
Point 2 in particular ensures that no dependencies on fields in ILV for
vector code generation are re-introduced.
Reviewed By: Ayal
Differential Revision: https://reviews.llvm.org/D114323
We ask `TTI.getAddressComputationCost()` about the cost of computing a vector address,
and then multiply it by the vector width. This doesn't make any sense;
it implies that we'd do a vector GEP and then scalarize the vector of pointers,
but there is no such thing in the vectorized IR: we perform scalar GEPs.
This is *especially* bad on X86, and was effectively prohibiting any scalarized
vectorization of gathers/scatters, because `X86TTIImpl::getAddressComputationCost()`
says that cost of vector address computation is `10` as compared to `1` for scalar.
The computed costs are similar to the ones with D111222+D111220,
but we end up without masked memory intrinsics that we'd then have to
expand later on, without much luck. (D111363)
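Conceptually, the change is to query the scalar address-computation cost per lane rather than the vector one (a sketch with illustrative names, not the exact diff; assumes llvm/Analysis/TargetTransformInfo.h):

  // PtrTy stands in for the scalar pointer type of the access.
  InstructionCost perLaneAddressCost(const TargetTransformInfo &TTI,
                                     Type *PtrTy) {
    // Before (conceptually): the per-lane charge was the *vector* address
    // computation cost (10 on X86), even though the emitted code performs
    // scalar GEPs.
    // After: charge the scalar address computation cost (1 on X86) per lane.
    return TTI.getAddressComputationCost(PtrTy);
  }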
Differential Revision: https://reviews.llvm.org/D111460
Fixes PR#52190. There is already a check for converting ashr instructions with non-negative left-hand sides into lshr; this patch adds an optimization to remove the ashr altogether if the left-hand side is known to be in the range [-1, 1), i.e. its only possible values are -1 and 0, for which an arithmetic shift right by any amount leaves the value unchanged.
Differential Revision: https://reviews.llvm.org/D113835