At the moment, LoopAccessAnalysis is a loop analysis for the new pass
manager. The issue with that is that LAI caches SCEV expressions, and
modifications in a loop may impact SCEV expressions in other loops, but
we do not have a convenient way to invalidate LAI for other loops
within a loop pipeline.
To avoid this issue, turn it into a function analysis which returns a
manager object that keeps track of the individual LAI objects per loop.
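A hedged sketch of the resulting shape (member and constructor details
here are approximations of the committed LoopAccessInfoManager, not the
exact code):
```
// Function analysis result that lazily builds and caches one
// LoopAccessInfo per loop; invalidating the function analysis drops
// the whole cache at once, sidestepping per-loop invalidation.
class LoopAccessInfoManager {
  ScalarEvolution &SE;
  AAResults &AA;
  DominatorTree &DT;
  LoopInfo &LI;
  const TargetLibraryInfo *TLI = nullptr;
  DenseMap<Loop *, std::unique_ptr<LoopAccessInfo>> LoopAccessInfoMap;

public:
  const LoopAccessInfo &getInfo(Loop &L) {
    auto [It, Inserted] = LoopAccessInfoMap.insert({&L, nullptr});
    if (Inserted)
      It->second =
          std::make_unique<LoopAccessInfo>(&L, &SE, TLI, &AA, &DT, &LI);
    return *It->second;
  }
};
```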
Fixes #50940.
Fixes #51669.
Reviewed By: aeubanks
Differential Revision: https://reviews.llvm.org/D134606
Update both memprof and callsite metadata to reflect inlined functions.
For callsite metadata this is simply a concatenation of each cloned
call's call stack with that of the inlined callsite.
For memprof metadata, each profiled memory info block (MIB) is either
moved to the cloned allocation call or left on the original allocation
call depending on whether its context matches the newly refined call
stack context on the cloned call. We also reapply context trimming
optimizations based on the refined set of contexts on each of the calls
(cloned and original), via utilities in MemoryProfileInfo.
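For illustration, a hedged sketch of the callsite-stack concatenation
(variable names and the surrounding code are assumed; only the
concatenation order follows the description above):
```
// The cloned call keeps its own !callsite stack ids (leaf first) and
// gains the stack ids of the call site that was inlined.
SmallVector<Metadata *, 8> NewStackIds;
if (MDNode *CalleeStack = ClonedCall->getMetadata(LLVMContext::MD_callsite))
  NewStackIds.append(CalleeStack->op_begin(), CalleeStack->op_end());
if (MDNode *SiteStack = InlinedCallSite->getMetadata(LLVMContext::MD_callsite))
  NewStackIds.append(SiteStack->op_begin(), SiteStack->op_end());
ClonedCall->setMetadata(LLVMContext::MD_callsite,
                        MDNode::get(ClonedCall->getContext(), NewStackIds));
```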
Depends on D128142.
Differential Revision: https://reviews.llvm.org/D128143
See also related RFCs:
RFC: Sanitizer-based Heap Profiler [1]
RFC: A binary serialization format for MemProf [2]
RFC: IR metadata format for MemProf [3]*
* Note that the IR metadata format has changed from the RFC during
implementation, as described in the preceding patch adding the basic
metadata and verification support.
The matching is performed during the normal PGO annotation phase, to
ensure that the inlines applied in the IR at that point are a subset
of the inlines in the profiled binary and thus reflected in the
profile's call stacks. This is important because the call frames are
associated with functions in the profile based on the inlining in the
symbolized call stacks, and this simplifies locating the subset of
profile data relevant for matching onto each function's IR.
The PGOInstrumentationUse pass is enhanced to perform matching for
whatever combination of memprof and regular PGO profile data exists in
the profile.
Using the utilities introduced in D128854:
The memprof profile data for each context is converted to "cold" or
"notcold" based on parameterized thresholds for size, access count, and
lifetime. The memprof allocation contexts are trimmed to the minimal
amount of context required to uniquely identify whether the context is
cold or not cold. For allocations where all profiled contexts have the
same allocation type, no memprof metadata is attached and instead the
allocation call is directly annotated with an attribute specifying the
allocation type. This is the same attribute that will be applied to
allocation calls once cloned for different contexts, and later used
during LibCall simplification to emit allocation hints [4].
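A hedged sketch of the classification (the formula and the default
thresholds here are placeholders; the real parameterized thresholds
live in the MemoryProfileInfo utilities):
```
#include <cstdint>

enum class AllocationType : uint8_t { None, NotCold, Cold };

// Placeholder defaults; the real values are cl::opt-parameterized.
constexpr uint64_t ColdAccessesPerByteMax = 1;
constexpr uint64_t ColdLifetimeMinMs = 1000;

AllocationType getAllocType(uint64_t TotalSize, uint64_t AccessCount,
                            uint64_t LifetimeMs) {
  // "Cold": rarely touched relative to its size and long-lived enough
  // that hinting it out of hot memory is worthwhile.
  if (AccessCount <= ColdAccessesPerByteMax * TotalSize &&
      LifetimeMs >= ColdLifetimeMinMs)
    return AllocationType::Cold;
  return AllocationType::NotCold;
}
```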
Depends on D128141 and D128854.
[1] https://lists.llvm.org/pipermail/llvm-dev/2020-June/142744.html
[2] https://lists.llvm.org/pipermail/llvm-dev/2021-September/153007.html
[3] https://discourse.llvm.org/t/rfc-ir-metadata-format-for-memprof/59165
[4] ab87cf382d
Differential Revision: https://reviews.llvm.org/D128142
This is an extension of the existing min/max+select fold (which already
has a very large number of variations) to allow a vector shuffle because
that's what we have in the motivating example from issue #42100.
A couple of Alive2 checks of variants (I don't know how to generalize
these in Alive):
https://alive2.llvm.org/ce/z/jUFAqT
And verify the PR42100 test:
https://alive2.llvm.org/ce/z/3EcASf
It's possible there is some generalization of the fold or a
VectorCombine/SLP answer for the motivating test, but I haven't found a
better/smaller solution yet.
We can also add even more variants here as follow-up patches. For example,
we can have shuffle followed by min/max; we also don't have this
canonicalization or the reverse:
https://alive2.llvm.org/ce/z/StHD9f
Differential Revision: https://reviews.llvm.org/D134879
breakLoopBackedge may remove blocks and loops. Also clear block and
loop dispositions to avoid the cache containing invalid blocks and loops.
The coverage for the change is provided when using an ASAN build of opt
to run the LoopDeletion unit tests; without the fix, pointers to invalid
objects would be used.
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D134663
The bulk of the implementation is common between 'release' mode (==AOT-ed
model) and 'development' mode (for training); the main difference is
that in development mode, we may also log features (for training logs),
inject scoring information and then produce the log file.
Differential Revision: https://reviews.llvm.org/D133616
This patch teaches the module inliner a traversal order designed for
the instrumentation FDO (+ThinLTO) scenario.
The new traversal order prioritizes call sites in the following order:
1. Those call sites that are expected to reduce the caller size
2. Those call sites that have gone through the cost-benefit analysis
3. The remaining call sites
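A hedged sketch of how the three tiers could be told apart
(illustrative, not the committed module inliner code;
getStaticBonusApplied() is the accessor introduced by D134373):
```
// Classify a call site into the three tiers above; a lower tier number
// means the module inliner visits the call site earlier.
static int getInlineOrderTier(const InlineCost &IC) {
  // Tier 1: inlining is expected to shrink the caller, i.e. the cost
  // stays negative even after undoing the last-call-to-static bonus.
  if (IC.getCost() + IC.getStaticBonusApplied() < 0)
    return 1;
  // Tier 2: the decision came from the FDO cost-benefit analysis.
  if (IC.getCostBenefit())
    return 2;
  // Tier 3: everything else.
  return 3;
}
```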
With this fairly simple traversal order, a large internal benchmark
yields performance comparable to the bottom-up inliner -- both in
terms of the execution performance and .text* sizes.
Big thanks goes to Liqiang Tao for the module inliner infrastructure.
I still have hacks outside this patch to prevent excessively long
compilation or .text* size explosion. I'm trying to come up with
acceptable solutions in the near future.
Differential Revision: https://reviews.llvm.org/D134376
A CFG with cycles may require additional passes of the "while (Changed)"
iteration to propagate data back from later blocks to earlier blocks,
ordered according to depth_first.
The OR logic used for ::May converges to a stable state faster than the
AND logic used for ::Must.
The better solution would be to switch to some form of work queue, but
given that this one is good enough, I will consider doing that later.
We can switch ::Must to OR logic if we calculate "may be dead" instead
of "must be alive" directly, and then convert the values to match the
existing interface.
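A minimal sketch of the iteration scheme being described (the state
maps and the per-block initialization are assumed, not names from the
patch):
```
// With OR ("may") facts a block's state only ever gains bits, so this
// converges quickly; AND ("must") facts shrink, and propagating them
// backwards around cycles can take extra passes.
bool Changed = true;
while (Changed) {
  Changed = false;
  for (BasicBlock *BB : depth_first(&F.getEntryBlock())) {
    BitVector NewState = InitialState[BB]; // assumed per-block init
    for (BasicBlock *Pred : predecessors(BB))
      NewState |= BlockState[Pred]; // OR merge for ::May
    if (NewState != BlockState[BB]) {
      BlockState[BB] = std::move(NewState);
      Changed = true;
    }
  }
}
```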
Additionally, this fixes correctness in the "@cycle" test.
Reviewed By: kstoimenov, fmayer
Differential Revision: https://reviews.llvm.org/D134796
Interestingly, MathExtras.h doesn't use any <cmath> declarations, so move
the include out of that header and include it where needed.
No functional change intended, but there is no longer a transitive include
from MathExtras.h to <cmath>.
This is a purely NFC restructuring in advance of a change which actually
exposes zero strides. It is mostly motivated by the fact that I find this
interface confusing each time I look at it.
After unrolling a loop, the block and loop dispositions need to be
cleared. As we don't know which SCEVs in the loop/blocks may be
impacted, completely clear the cache. This should also fix some cases
where deleted loops remained in the LoopDispositions cache.
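A hedged sketch of the intended call pattern (illustrative):
```
// After a transform that may delete blocks or whole loops, drop every
// cached disposition rather than trying to enumerate the affected SCEVs.
if (UnrollResult != LoopUnrollResult::Unmodified && SE)
  SE->forgetBlockAndLoopDispositions();
```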
This fixes a verification failure surfaced by D134531.
I am planning on reviewing/updating the existing uses of
forgetLoopDispositions to check if they should be replaced by
forgetBlockAndLoopDispositions.
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D134612
The test shows that we would fail to consistently fold the
instruction based on the max value operand. This is also
the root cause for issue #57986, but I'll add an instcombine
test + assert for that exact problem in another commit.
It's still possible that there's a simpler way to specify
the conditions needed for this set of folds, but "getStrictPred"
converts >= to > for example, so there's no need to explicitly
check that.
InlineCostCallAnalyzer encourages inlining of the last call to the
static function by subtracting LastCallToStaticBonus from Cost.
This patch introduces getStaticBonusApplied to make available the
amount of LastCallToStaticBonus applied.
The intent is to allow the module inliner to determine whether
inlining a given call site is expected to reduce the caller size with
an expression like:
IC.getCost() + IC.getStaticBonusApplied() < 0
This patch does not add a use of getStaticBonusApplied yet.
Differential Revision: https://reviews.llvm.org/D134373
This extends e5d15e1162 to handle the inverse predicates
(there's probably a more elegant way to specify the preds).
These patterns correspond to the existing simplify:
max (min X, Y), X --> X
...and extra preds for (non)equality.
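A standalone scalar illustration of the underlying identity (a sketch,
not the InstSimplify code):
```
#include <algorithm>
#include <cassert>

// min(X, Y) <= X always holds, so a max-like select between min(X, Y)
// and X must produce X, for strict and non-strict predicates alike.
int foldExample(int X, int Y) {
  int M = std::min(X, Y);
  int Selected = (M < X) ? X : M; // select(icmp slt M, X), X, M
  assert(Selected == X);          // simplifies to X
  return Selected;
}
```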
The tests cycle through all 10 icmp preds for each min/max
variant with 4 swapped operand patterns each (and the min/max
operands are commuted in every other test within those).
Some Alive2 examples to verify:
https://alive2.llvm.org/ce/z/XMvEKQ
https://alive2.llvm.org/ce/z/QpMChr
This is similar to the existing simplify:
max (max X, Y), X --> max X, Y
...but the select condition can be one of
several predicates as shown in the tests.
The tests cycle through all 10 icmp preds for
each min/max variant with 4 swapped operand
patterns each (and the min/max operands are
commuted in every other test within those).
Some Alive2 examples to verify:
https://alive2.llvm.org/ce/z/lCAQm4
https://alive2.llvm.org/ce/z/kzxVXC
This reverts commit 794b7ea960, and
thus restores commit a212d8da94, and
follow on fixes 0cd6763fa9,
e9ff53d42f, and
37c6a25e9a.
Use a hash function (BLAKE3) instead of hash_combine/hash_code, which are
not guaranteed to be stable across executions.
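A sketch of the stable-id computation along these lines (assuming
LLVM's HashBuilder/TruncatedBLAKE3 support; the exact frame fields
hashed are an assumption):
```
#include "llvm/Support/BLAKE3.h"
#include "llvm/Support/HashBuilder.h"
#include <cstring>

// Derive a 64-bit stack id for one profiled frame with truncated BLAKE3;
// unlike hash_combine/hash_code, the result is stable across executions.
uint64_t computeStackId(uint64_t FunctionGUID, uint32_t LineOffset,
                        uint32_t Column) {
  llvm::HashBuilder<llvm::TruncatedBLAKE3<8>,
                    llvm::support::endianness::little>
      HashBuilder;
  HashBuilder.add(FunctionGUID, LineOffset, Column);
  llvm::BLAKE3Result<8> Hash = HashBuilder.final();
  uint64_t Id;
  std::memcpy(&Id, Hash.data(), 8);
  return Id;
}
```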
Additionally, it adds a "REQUIRES: x86_64-linux" to the tests that have
raw profile inputs to avoid failures on big endian bots.
Reviewers: snehasish, davidxl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D128142
The mul-by-constant cost models handle power-of-2 constants, but not negated-power-of-2 constants, despite the backends handling both.
This patch adds the OperandValueProperties::OP_NegatedPowerOf2 enum and wires it for use for basic mul cost analysis and SLP handling.
Fixes #50778
Differential Revision: https://reviews.llvm.org/D111968
The current implementation for call sites is pretty convoluted
when you take the underlying implementation of the used APIs
into account. We will query the call site attributes, and then
fall back to the function attributes while taking into account
operand bundles. However, getModRefBehavior() already has its
own (more accurate) logic for combining call-site FMRB with
function FMRB.
Clean this up by extracting a function that only fetches FMRB
from attributes, which can be directly used in getModRefBehavior()
for functions, and needs to be combined with an operand-bundle
respecting fallback in the call site case.
One caveat (that makes this non-NFC) is that CallBase function
attribute lookups allow using attributes from functions with
mismatching signature. To ensure we don't regress quality, do
the same for the function FMRB fallback.
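To make the intent concrete, a hedged sketch of the attribute-only
helper (names and the FMRB factory methods are approximations, not the
committed code):
```
// Derive FMRB purely from attributes; operand-bundle handling stays in
// the call-site path, which combines this with a bundle-aware fallback.
static FunctionModRefBehavior getAttributeFMRB(const CallBase &Call) {
  if (Call.doesNotAccessMemory())
    return FunctionModRefBehavior::none();
  if (Call.onlyReadsMemory())
    return FunctionModRefBehavior::readOnly();
  if (Call.onlyWritesMemory())
    return FunctionModRefBehavior::writeOnly();
  if (Call.onlyAccessesArgMemory())
    return FunctionModRefBehavior::argMemOnly(ModRefInfo::ModRef);
  if (Call.onlyAccessesInaccessibleMemory())
    return FunctionModRefBehavior::inaccessibleMemOnly();
  return FunctionModRefBehavior::unknown();
}
```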
This reverts commit a212d8da94, and follow
on fixes 0cd6763fa9,
e9ff53d42f, and
37c6a25e9a.
After re-reading the documentation for hash_combine, I don't think this
is the appropriate hash function to use for computing the hash to use as
a stack id in the metadata, since it is not guaranteed to produce stable
values across executions. I have not hit this problem, but plan to
switch to using an MD5 hash. However, I am hitting an issue with one of
the bots (https://lab.llvm.org/buildbot/#/builders/171/builds/20732)
where the values produced are only the lower 32 bits of the expected
hash values, which I assume is related to the implementation of
hash_combine and hash_code.
I believe I fixed all of the other bot failures with the follow on fixes,
which I'll merge into the new version before reapplying.
Spurious ref edges are ref edges that still exist in the call graph even
though the corresponding IR reference no longer exists. This can cause
issues when deleting a dead function which has a spurious ref edge
pointed at it because currently we expect the dead function's RefSCC to
be trivial.
In the case that the dead function's RefSCC is not trivial, remove all
ref edges from other nodes in the RefSCC to it.
Removing a ref edge can result in splitting RefSCCs. There's actually no
reason to revisit those RefSCCs because currently we only run passes on
SCCs, and we've already added all SCCs in the RefSCC to the worklist
(as opposed to removing the ref edge in
updateCGAndAnalysisManagerForPass(), which can modify the call graph of
SCCs we have not visited yet). We also don't expect that RefSCC
refinement will allow us to glean any more information for optimization
use. Also, doing so would drastically increase the complexity of
LazyCallGraph::removeDeadFunction(), requiring us to return a list of
invalidated RefSCCs and new RefSCCs to add to the worklist.
Fixes #56503
Reviewed By: asbirlea
Differential Revision: https://reviews.llvm.org/D133907
Profile matching and IR annotation for memprof profiles.
See also related RFCs:
RFC: Sanitizer-based Heap Profiler [1]
RFC: A binary serialization format for MemProf [2]
RFC: IR metadata format for MemProf [3]*
* Note that the IR metadata format has changed from the RFC during
implementation, as described in the preceding patch adding the basic
metadata and verification support.
The matching is performed during the normal PGO annotation phase, to
ensure that the inlines applied in the IR at that point are a subset
of the inlines in the profiled binary and thus reflected in the
profile's call stacks. This is important because the call frames are
associated with functions in the profile based on the inlining in the
symbolized call stacks, and this simplifies locating the subset of
profile data relevant for matching onto each function's IR.
The PGOInstrumentationUse pass is enhanced to perform matching for
whatever combination of memprof and regular PGO profile data exists in
the profile.
Using the utilities introduced in D128854:
The memprof profile data for each context is converted to "cold" or
"notcold" based on parameterized thresholds for size, access count, and
lifetime. The memprof allocation contexts are trimmed to the minimal
amount of context required to uniquely identify whether the context is
cold or not cold. For allocations where all profiled contexts have the
same allocation type, no memprof metadata is attached and instead the
allocation call is directly annotated with an attribute specifying the
allocation type. This is the same attribute that will be applied to
allocation calls once cloned for different contexts, and later used
during LibCall simplification to emit allocation hints [4].
Depends on D128141 and D128854.
[1] https://lists.llvm.org/pipermail/llvm-dev/2020-June/142744.html
[2] https://lists.llvm.org/pipermail/llvm-dev/2021-September/153007.html
[3] https://discourse.llvm.org/t/rfc-ir-metadata-format-for-memprof/59165
[4] ab87cf382d
Differential Revision: https://reviews.llvm.org/D128142
While we can't express this with attributes yet, we can model
these intrinsics as readonly + writing inaccessible memory (for
the control dependence) in FMRB. This way we don't need to
special-case them in getModRefInfo(), it falls out of the usual
logic.
Based on D130896, we can model operand bundles more precisely. In
addition to the baseline ModRefBehavior, a reading/clobbering operand
bundle may also read/write all locations. For example, a memcpy with
deopt bundle can read any memory, but only write argument memory.
This means that getModRefInfo() for memcpy with a pointer that does
not alias the arguments results in Ref, rather than ModRef, without
the need to implement any special handling.
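A hedged sketch of that combination (getCalleeFMRB stands in for the
attribute-based query; treating the bundle predicates as existing
CallBase helpers is an assumption):
```
// Start from what the callee's attributes allow, then widen for bundles:
// a reading bundle (e.g. "deopt") may read any location, a clobbering
// bundle may additionally write any location.
FunctionModRefBehavior FMRB = getCalleeFMRB(Call); // assumed helper
if (Call->hasReadingOperandBundles())
  FMRB |= FunctionModRefBehavior::readOnly();
if (Call->hasClobberingOperandBundles())
  FMRB |= FunctionModRefBehavior::writeOnly();
```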
Differential Revision: https://reviews.llvm.org/D130980
We can handle vectors inside simplifyWithOpReplaced(), as long as
cross-lane operations are excluded. The equality can hold (or not
hold) for each vector lane independently, so we shouldn't use the
replacement value from other lanes.
I believe the only operations relevant here are shufflevector (where
all previous bugs were seen) and calls (which might use shuffle-like
intrinsics and would require more careful classification).
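A hedged sketch of the resulting restriction (illustrative; the
committed check may be shaped differently):
```
// Per-lane equality cannot justify substituting a replacement value
// through operations that move data across lanes.
static bool isCrossLaneHazard(const Instruction &I) {
  // Shuffles read from arbitrary source lanes.
  if (isa<ShuffleVectorInst>(I))
    return true;
  // Calls may hide shuffle-like intrinsics; stay conservative until
  // they can be classified more precisely.
  if (isa<CallBase>(I))
    return true;
  return false;
}
```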
Differential Revision: https://reviews.llvm.org/D134348
Currently, if we mark an SCC as invalid without having set UR.UpdatedC,
we won't propagate the PreservedAnalyses up to the parent pass
(adaptor/pass manager).
In the provided test case, we inline the function into itself, then
delete it as it has no users. The SCC is marked as invalid without
providing a replacement UR.UpdatedC. Then the CGSCC pass manager and
adaptor discard the PreservedAnalyses. Instead, handle PreservedAnalyses
first before bailing out due to the invalid SCC.
Fixes crashes due to out of date analyses.
This patch factors out common code in InlineOrder.cpp.
Without this patch, the model is to ask classes like SizePriority and
CostPriority to compare a pair of call sites:
bool hasLowerPriority(const CallBase *L, const CallBase *R) const override {
while these priority classes have their own caches of priorities:
DenseMap<const CallBase *, PriorityT> Priorities;
This model results in a lot of duplicate code like hasLowerPriority
and updateAndCheckDecreased.
This patch changes the model so that priority classes just have two
methods to compute a priority for a given call site and to compare two
previously computed priorities (as opposed to call sites).
Facilities like hasLowerPriority and updateAndCheckDecreased move to
PriorityInlineOrder along with the map from call sites to their
priorities. PriorityInlineOrder becomes a template class so that it
can accommodate different priority classes.
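A hedged sketch of the resulting model (simplified; exact names and
members may differ from InlineOrder.cpp):
```
#include "llvm/ADT/DenseMap.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/InstrTypes.h"
#include <climits>

// A priority class only computes a priority for a call site and
// compares two previously computed priorities.
struct SizePriority {
  unsigned Size = UINT_MAX;
  SizePriority() = default;
  explicit SizePriority(const CallBase *CB) {
    if (const Function *Callee = CB->getCalledFunction())
      Size = Callee->getInstructionCount();
  }
  static bool isMoreDesirable(const SizePriority &P1,
                              const SizePriority &P2) {
    return P1.Size < P2.Size; // prefer smaller callees
  }
};

// The cache and the call-site-level comparison now live once, in the
// ordering container, instead of being duplicated per priority class.
template <typename PriorityT> class PriorityInlineOrder {
  DenseMap<const CallBase *, PriorityT> Priorities;

public:
  void push(const CallBase *CB) { Priorities[CB] = PriorityT(CB); }

  bool hasLowerPriority(const CallBase *L, const CallBase *R) const {
    return PriorityT::isMoreDesirable(Priorities.lookup(R),
                                      Priorities.lookup(L));
  }
};
```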
Differential Revision: https://reviews.llvm.org/D134149
The IR from https://github.com/llvm/llvm-project/issues/57368 results
in an assert firing when trying to create a runtime check for the
forked pointer. One of the forks is fine since it's loop invariant,
but the other is a scAddExpr (containing a scAddRecExpr, so not
invariant) when RtCheck::insert expects a scAddRecExpr.
This is a simple fix to just avoid forks which aren't AddRec or
loop invariant. We can allow it as a forked pointer later with
more work.
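A hedged sketch of the added guard (illustrative names; the real check
lives in the forked-pointer handling in LoopAccessAnalysis):
```
// Only keep a fork if a runtime check can be generated for it: it must
// be an AddRec or loop-invariant.
const SCEV *Fork = SE->getSCEV(ForkedPtr);
if (!isa<SCEVAddRecExpr>(Fork) && !SE->isLoopInvariant(Fork, L))
  return false; // give up on this fork for now
```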
Reviewed By: fhahn
Differential Revision: https://reviews.llvm.org/D133020
I believe this is no longer necessary, as the underlying problem
has been fixed in a different way: Nowadays, we will adjust the
location size to beforeOrAfterPointer() if the pointer is not loop
invariant. This makes merging results translated across loop
backedges safe.
The two tests in phi-translation.ll show an improvement while still
being correct: The loads in the loop no longer alias with noalias
pointers, but still alias with the store in the entry block (which
they originally did not -- this is the bug that
PerformedPhiTranslation originally fixed).
Differential Revision: https://reviews.llvm.org/D133404
Otherwise LLVM will optimise strrchr into memrchr on Windows, resulting in a linker error:
```
$ cat memrchr_test.c
int main(int argc, char **argv) {
    return (long)strrchr("KkMm", argv[argc-1][0]);
}
$ clang memrchr_test.c -O
memrchr_test.c:3:12: warning: cast to smaller integer type 'long' from 'char *' [-Wpointer-to-int-cast]
    return (long)strrchr("KkMm", argv[argc-1][0]);
           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1 warning generated.
ld.lld: error: undefined symbol: memrchr
>>> referenced by D:/msys64/tmp/memrchr_test-e7aabd.o:(main)
clang: error: linker command failed with exit code 1 (use -v to see invocation)
```
Example taken from the MSYS2 Discord and tested with the windows-gnu target.
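A hedged sketch of the likely gating (isLibFuncEmittable is the
existing BuildLibCalls helper; its use at this exact spot is an
assumption about the fix):
```
// Only fold strrchr(s, c) -> memrchr(s, c, n) when the target's runtime
// actually provides memrchr (a GNU extension that is absent on Windows).
if (!isLibFuncEmittable(CI->getModule(), TLI, LibFunc_memrchr))
  return nullptr;
```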
Reviewed By: aeubanks
Differential Revision: https://reviews.llvm.org/D134134