Commit Graph

11909 Commits

Author SHA1 Message Date
Matt Arsenault 841a0edd03 ConstantFolding: Constant fold some canonicalizes
+/-0 is obviously foldable. Other non-special, non-subnormal
values are also probably OK. For denormal values, check
the calling function's denormal mode. For now, don't fold
denormals to the input for IEEE mode because as far as I know
the langref is still pretending LLVM's float isn't IEEE.

Also folds undef to 0, although NaN may make more sense. Skips
folding NaNs and infinities, although it should be OK to fold those
in a future change.
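As a rough illustration (a hypothetical sketch, not taken from the commit or its
tests), the kind of IR this enables folding looks like:

  declare float @llvm.canonicalize.f32(float)

  define float @fold_zero() {
    ; +/-0 is unaffected by canonicalization, so this folds to -0.0.
    %r = call float @llvm.canonicalize.f32(float -0.0)
    ret float %r
  }

  define float @fold_normal() {
    ; A normal, non-denormal constant also folds to itself.
    %r = call float @llvm.canonicalize.f32(float 2.0)
    ret float %r
  }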
2022-11-18 10:35:19 -08:00
Roman Lebedev be1f994311
[Analysis] `isSafeToLoadUnconditionally()`: `lifetime` intrinsics can be ignored
In practice this means that we can speculate more loads in SROA.
This e.g. comes up in https://godbolt.org/z/G8716s6sj,
although we are missing the second half of the puzzle needed to optimize that.
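A hypothetical sketch of the kind of pattern affected (names invented for
illustration): SROA wants to speculate the load through the select, which
requires both allocas to be provably safe to load unconditionally; lifetime
markers in the scanned window no longer block that proof.

  declare void @llvm.lifetime.start.p0(i64 immarg, ptr nocapture)

  define i32 @speculate_select(i1 %c) {
    %a = alloca i32
    %b = alloca i32
    call void @llvm.lifetime.start.p0(i64 4, ptr %a)
    call void @llvm.lifetime.start.p0(i64 4, ptr %b)
    store i32 1, ptr %a
    store i32 2, ptr %b
    %p = select i1 %c, ptr %a, ptr %b
    ; Speculating this load requires isSafeToLoadUnconditionally to
    ; succeed for both %a and %b.
    %v = load i32, ptr %p
    ret i32 %v
  }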
2022-11-17 20:48:27 +03:00
Stanislav Mekhanoshin bcaf31ec3f [AMDGPU] Allow finer grain control of an unaligned access speed
A target can report whether a misaligned access is 'fast', as defined
by the target. In reality there can be different levels
of 'fast' and 'slow'. This patch changes the boolean 'Fast'
argument of the allowsMisalignedMemoryAccesses family of functions
to an unsigned representing its speed.

A target can still define it however it wants, and the direct translation
of the current code uses 0 and 1 for the current false and true. This
makes the change an NFC.

A subsequent patch will start using an actual speed value in
the load/store vectorizer to check whether a vectorized access is
going to be not just fast, but no slower than before.

Differential Revision: https://reviews.llvm.org/D124217
2022-11-17 09:23:53 -08:00
Matt Arsenault af7e80b7cb ValueTracking: Look through copysign in isKnownNeverInfinity 2022-11-17 08:52:09 -08:00
Matt Arsenault 29e4363dda ValueTracking: Look through fneg in isKnownNeverNaN 2022-11-17 08:35:49 -08:00
luxufan 3053a323c7 [MemorySSA] Relax assert condition in createDefinedAccess
If globals-aa is enabled then, because of the deletion of memory instructions,
there may be a call instruction that is not in ModOrRefSet but is a
MemoryUseOrDef. This causes a crash in the process of cloning uses and defs.

Fixes https://github.com/llvm/llvm-project/issues/58719

Reviewed By: asbirlea

Differential Revision: https://reviews.llvm.org/D137553
2022-11-17 23:43:20 +08:00
Matt Arsenault ba1669c81f ValueTracking: Look through fabs and fneg in isKnownNeverInfinity 2022-11-17 00:06:15 -08:00
Matt Arsenault d24fe812ec ValueTracking: Look through canonicalize in isKnownNeverInfinity 2022-11-17 00:06:15 -08:00
Teresa Johnson 9eacbba290 Restore "[MemProf] ThinLTO summary support" with more fixes
This restores commit 98ed423361 and
follow-on fix 00c22351ba, which were
reverted in 5d938eb6f7 due to an
MSVC bot failure. I've included a fix for that failure.

Differential Revision: https://reviews.llvm.org/D135714
2022-11-16 09:42:41 -08:00
Matt Arsenault ad3c91eedc MemoryBuiltins: Don't check for unsized allocas
The verifier rejects these.
2022-11-16 09:13:11 -08:00
Jeremy Morse 5d938eb6f7 Revert "Restore "[MemProf] ThinLTO summary support" with fixes"
This reverts commit 00c22351ba.
This reverts commit 98ed423361.

Seemingly MSVC has some kind of issue with this patch, in terms of linking:

  https://lab.llvm.org/buildbot/#/builders/123/builds/14137

I'll post more detail on D135714 momentarily.
2022-11-16 11:21:02 +00:00
Matt Arsenault fde4ef1973 InstSimplify: Fold arithmetic_fence as idempotent 2022-11-15 22:29:34 -08:00
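As a hypothetical illustration of the arithmetic_fence fold above (idempotence:
a fence of a fence simplifies to the inner fence):

  declare float @llvm.arithmetic.fence.f32(float)

  define float @nested_fence(float %x) {
    %inner = call float @llvm.arithmetic.fence.f32(float %x)
    ; The outer call simplifies to %inner.
    %outer = call float @llvm.arithmetic.fence.f32(float %inner)
    ret float %outer
  }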
Teresa Johnson 98ed423361 Restore "[MemProf] ThinLTO summary support" with fixes
This restores 4745945500, which was
reverted in commit 452a14efc8, along with
fixes for a couple of bot failures.
2022-11-15 08:55:17 -08:00
Nikita Popov 8051c1db95 [AST] Don't use WeakVH for unknown insts (NFCI)
After D138014 we do not support using AST with IR that is being
mutated. As such, we also no longer need to track unknown
instructions using WeakVH. Replace with AssertingVH to make sure
that they are not invalidated.
2022-11-15 16:46:05 +01:00
Teresa Johnson 452a14efc8 Revert "[MemProf] ThinLTO summary support"
This reverts commit 4745945500.

Revert while I try to fix a couple of non-Linux build failures.
2022-11-15 07:39:40 -08:00
Nikita Popov db5855d0e4 [AST] Restrict AliasSetTracker to immutable IR
This restricts usage of AliasSetTracker to IR that does not change.
We used to use it during LICM where the underlying IR could change,
but remaining uses all use AST as part of a separate analysis phase.

This is split out from D137955, which makes use of the new guarantee
to switch to BatchAA.

Differential Revision: https://reviews.llvm.org/D138014
2022-11-15 16:27:28 +01:00
Teresa Johnson 4745945500 [MemProf] ThinLTO summary support
Implements the ThinLTO summary support for memprof related metadata.

This includes support for the assembly format, and for building the
summary from IR during ModuleSummaryAnalysis.

To reduce space in both the bitcode format and the in memory index,
we do 2 things:
1. We keep a single vector of all unique stack id hashes, and record the
   index into this vector in the callsite and allocation memprof
   summaries.
2. When building the combined index during the LTO link, the callsite
   and allocation memprof summaries are only kept on the FunctionSummary
   of the prevailing copy.

Differential Revision: https://reviews.llvm.org/D135714
2022-11-15 06:45:12 -08:00
Fangrui Song 77bf0df376 Revert "[opt][clang] Enable using -module-summary/-flto=thin with -S/-emit-llvm"
This reverts commit bf8381a8bc.

There is a layering violation: LLVMAnalysis depends on LLVMCore, so
LLVMCore should not include the LLVMAnalysis header
llvm/Analysis/ModuleSummaryAnalysis.h
2022-11-14 15:51:03 -08:00
Alexander Shaposhnikov bf8381a8bc [opt][clang] Enable using -module-summary/-flto=thin with -S/-emit-llvm
Enable using -module-summary with -S
(similarly to what currently can be achieved with opt <input> -o - | llvm-dis).
This is a recommit of ef9e62469.

Test plan: ninja check-all

Differential revision: https://reviews.llvm.org/D137768
2022-11-14 23:24:08 +00:00
Sanjay Patel 0a37290dc8 [InstSimplify] restrict logic fold with partial undef vector
https://alive2.llvm.org/ce/z/4ncsnX

Fixes #58977
2022-11-14 12:09:23 -05:00
Nikita Popov b71bf080d0 [AA] Move MayBeCrossIteration into AAQI (NFC)
Move the MayBeCrossIteration flag from BasicAA into AAQI. This is
in preparation for exposing it to users of the AA API.
2022-11-14 16:42:22 +01:00
Nikita Popov 458ae539df [AST] Remove legacy AliasSetPrinter pass
A NewPM version of this pass exists; drop the legacy version of
this testing-only pass.
2022-11-14 15:50:38 +01:00
Florian Hahn 758699c399
[VectorUtils] Skip interleave members with diff type and alloca sizes.
Currently, codegen doesn't support cases where the type size doesn't
match the alloc size. Skip them for now.

Fixes #58722.
2022-11-13 22:06:20 +00:00
Matt Arsenault 4f2f7e84ff Analysis: Reorder code in isDereferenceableAndAlignedPointer
GEPs should be the most common and basic case, so try that first.
2022-11-11 16:38:51 -08:00
Sanjay Patel 21f1b2da95 [InstSimplify] fold fsub nnan with Inf operand
Similar to fbc2c8f2fb, but if we have a non-canonical
fsub with constant operand 1, then flip the sign of the
Infinity:
https://alive2.llvm.org/ce/z/vKWfhW

If Infinity is operand 0, then the sign remains:
https://alive2.llvm.org/ce/z/73d97C
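A minimal sketch of both cases (illustrative only, not the commit's tests):

  define float @sub_inf_rhs(float %x) {
    ; x - (+Inf): with nnan a NaN result (the x == +Inf case) would be
    ; poison, so this folds to -Inf, i.e. the Inf with its sign flipped.
    %r = fsub nnan float %x, 0x7FF0000000000000
    ret float %r
  }

  define float @sub_inf_lhs(float %x) {
    ; (+Inf) - x: folds to +Inf, the sign is kept.
    %r = fsub nnan float 0x7FF0000000000000, %x
    ret float %r
  }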
2022-11-11 08:42:44 -05:00
Sanjay Patel fbc2c8f2fb [InstSimplify] fold X +nnan Inf
If we exclude NaN (and therefore the opposite Inf),
anything plus Inf is Inf:
https://alive2.llvm.org/ce/z/og3dj9
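A minimal sketch of the fold (illustrative only):

  define float @add_inf(float %x) {
    ; With nnan, a NaN operand or result (the x == -Inf case) is poison,
    ; so x + Inf folds to +Inf.
    %r = fadd nnan float %x, 0x7FF0000000000000
    ret float %r
  }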
2022-11-10 17:13:26 -05:00
Matt Arsenault 7dd27a75a2 InstSimplify: Fold fdiv nnan ninf x, 0 -> poison
https://alive2.llvm.org/ce/z/JxX5in
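A minimal sketch of the fold (illustrative only):

  define float @div_by_zero(float %x) {
    ; The result of dividing by zero is +/-Inf or NaN, all of which are
    ; excluded by the flags, so the result folds to poison.
    %r = fdiv nnan ninf float %x, 0.0
    ret float %r
  }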
2022-11-07 08:43:22 -08:00
Nikita Popov a50c269c73 [InstCombine] Handle load smaller than one byte in memset forward
APInt::getSplat() requires that the new size is >= the original
one. If we're loading less than 8 bits, truncate instead.

Fixes https://github.com/llvm/llvm-project/issues/58845.
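A hypothetical sketch of the sub-byte case (not taken from the fix's tests):

  declare void @llvm.memset.p0.i64(ptr nocapture writeonly, i8, i64, i1 immarg)

  define i1 @load_bit(ptr %p, i8 %v) {
    call void @llvm.memset.p0.i64(ptr %p, i8 %v, i64 8, i1 false)
    ; The forwarded value is the memset byte truncated to i1 rather
    ; than an 8-bit splat, avoiding the APInt::getSplat assertion.
    %b = load i1, ptr %p
    ret i1 %b
  }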
2022-11-07 17:04:27 +01:00
David Green b46427b9a2 [InstSimplify] (~A & B) | ~(A | B) --> ~A with logical and
According to https://alive2.llvm.org/ce/z/opsdrb, it is valid to convert
(~A & B) | ~(A | B) --> ~A even if the And is a Logical And. This came
up from the vector masking of predicated blocks.
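A minimal sketch with the And in its logical (select) form; the whole
expression simplifies to %nota (illustrative only):

  define i1 @fold(i1 %a, i1 %b) {
    %nota = xor i1 %a, true
    ; Logical form of ~A & B.
    %and = select i1 %nota, i1 %b, i1 false
    %or = or i1 %a, %b
    %notor = xor i1 %or, true
    ; (~A & B) | ~(A | B) --> ~A
    %r = or i1 %and, %notor
    ret i1 %r
  }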

Differential Revision: https://reviews.llvm.org/D137435
2022-11-07 10:03:18 +00:00
Mircea Trofin 5617fb1411 [MLGO][NFC] Use std::map instead of DenseMap to avoid use after free
In `MLInlineAdvisor::getAdviceImpl`, we call `getCachedFPI` twice, once
for the caller, once for the callee, so the second may invalidate the
reference obtained by the first because the underlying implementation of
the cache is a `DenseMap`. `std::map` doesn't have that problem.
2022-11-04 16:07:24 -07:00
Nikita Popov 2f211f865d [LVI] Improve debug message (NFC) 2022-11-04 16:58:02 +01:00
Karthik Senthil d9c52c31a0 [LV][IVDescriptors] Fix recurrence identity element for FMin and FMax reductions
For min and max reduction idioms, the identity (i.e. neutral) element
should be the datatype's highest and lowest possible value, respectively.
The current implementation in IVDescriptors incorrectly returns -Inf for FMin
reduction and +Inf for FMax reduction. This patch fixes this bug, which
was causing incorrect reduction computation results in loops vectorized
by LV.

Differential Revision: https://reviews.llvm.org/D137220
2022-11-04 10:39:37 -04:00
Nikita Popov 304f1d59ca [IR] Switch everything to use memory attribute
This switches everything to use the memory attribute proposed in
https://discourse.llvm.org/t/rfc-unify-memory-effect-attributes/65579.
The old argmemonly, inaccessiblememonly and inaccessiblemem_or_argmemonly
attributes are dropped. The readnone, readonly and writeonly attributes
are restricted to parameters only.
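An illustrative sketch of the mapping on declarations (hypothetical functions,
not from the patch):

  ; Old-style attributes, now auto-upgraded:
  declare void @takes_arg(ptr) argmemonly
  declare void @side_channel() inaccessiblememonly
  declare i32 @reads_only(ptr) readonly

  ; Equivalent spelling with the new memory attribute:
  declare void @takes_arg2(ptr) memory(argmem: readwrite)
  declare void @side_channel2() memory(inaccessiblemem: readwrite)
  declare i32 @reads_only2(ptr) memory(read)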

The old attributes are auto-upgraded both in bitcode and IR.
The bitcode upgrade is a policy requirement that has to be retained
indefinitely. The IR upgrade is mainly there so it's not necessary
to update all tests using memory attributes in this patch, which
is already large enough. We could drop that part after migrating
tests, or retain it longer term, to make it easier to import IR
from older LLVM versions.

High-level Function/CallBase APIs like doesNotAccessMemory() or
setDoesNotAccessMemory() are mapped transparently to the memory
attribute. Code that directly manipulates attributes (e.g. via
AttributeList) on the other hand needs to switch to working with
the memory attribute instead.

Differential Revision: https://reviews.llvm.org/D135780
2022-11-04 10:21:38 +01:00
Nikita Popov 2ddcf721a0 [InstCombine] Perform memset -> load forwarding
InstCombine does some basic store to load forwarding. One case it
currently misses is the case where the store is actually a memset.
This patch adds support for this case. This is a minimal
implementation that only handles a load at the memset base address,
without an offset.
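A minimal sketch of the handled case (illustrative only): the load is at the
memset base address with no offset, so it can be forwarded to the splatted
byte value.

  declare void @llvm.memset.p0.i64(ptr nocapture writeonly, i8, i64, i1 immarg)

  define i32 @forward(ptr %p) {
    call void @llvm.memset.p0.i64(ptr %p, i8 0, i64 16, i1 false)
    ; Forwarded directly to i32 0.
    %v = load i32, ptr %p
    ret i32 %v
  }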

GVN is already capable of performing this optimization. Having it
in InstCombine can help with phase ordering issues, similar to the
existing store to load forwarding.

Differential Revision: https://reviews.llvm.org/D137323
2022-11-03 16:03:57 +01:00
Nikita Popov 68b24c3b44 [CVP] Simplify comparisons without constant operand
CVP currently only tries to simplify comparisons if there is a
constant operand. However, even if both are non-constant, we may
be able to determine the result of the comparison based on range
information.
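A hypothetical sketch of the kind of case this enables (illustrative only;
whether CVP catches a given pattern depends on what LVI can infer):

  define i1 @cmp_ranges(i32 %a, i32 %b) {
    ; %x is known to be in [0, 8) and %y in [16, 2^32).
    %x = and i32 %a, 7
    %y = or i32 %b, 16
    ; The ranges do not overlap, so the compare can fold to true even
    ; though neither operand is a constant.
    %c = icmp ult i32 %x, %y
    ret i1 %c
  }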

IPSCCP is already capable of doing this, but because it runs very
early, it may miss some cases.

Differential Revision: https://reviews.llvm.org/D137253
2022-11-03 15:35:27 +01:00
Nikita Popov 96a74c4527 [ValueLattice] Fix typo in condition (NFC)
Fix typo pointed out by Roman Divacky.

There should be no functional change, as the rest of the code will
return nullptr for undef anyway. The condition is just there for
clarity.
2022-11-02 17:52:13 +01:00
Nikita Popov 134bda4b61 [ValueLattice] Use DL-aware folding in getCompare()
Use DL-aware ConstantFoldCompareInstOperands() API instead of
ConstantExpr API. The practical effect of this is that SCCP can
now fold comparisons that require DL.
2022-11-02 10:41:11 +01:00
Nikita Popov 28b31d9ccc [ValueLattice] Move getCompare() out of line (NFC)
This is a fairly large method that is unlikely to benefit from
inlining.
2022-11-02 10:33:44 +01:00
Nikita Popov 41dba9e6a3 [AA] Remove some overloads (NFC)
Having all these instruction-specific overloads does not seem to
provide any compile-time benefit, so drop them in favor of the
generic methods accepting "const Instruction *". Only leave behind
the per-instruction AAQI overloads, which are part of the internal
implementation.
2022-11-02 10:21:10 +01:00
Philip Reames 9472a810ed Address post commit review feedback from D137046
It was pointed out the verifier rejects inttoptr and ptrtoint casts with inputs and outputs whose scalability doesn't match.  As such, checking the input type separately from the type of the cast itself is redundant.
2022-11-01 13:36:13 -07:00
Philip Reames 2e999b7dd1 Allow scalable vectors in ComputeNumSignBits and isKnownNonNull
This is a follow up to D136470 which extends the same scheme used there to ComputeNumSignBits and isKnownNonNull. As a reminder, for scalable vectors we track a single bit which is implicitly broadcast to all lanes. We do not know how many lanes there are statically, and thus have to be conservative along paths which require exact sizes.

Differential Revision: https://reviews.llvm.org/D137046
2022-11-01 09:29:42 -07:00
Nikita Popov 45143240b2 [AA] Add missing const qualifier (NFC) 2022-11-01 16:17:18 +01:00
Patrick Walton 01859da84b [AliasAnalysis] Introduce getModRefInfoMask() as a generalization of pointsToConstantMemory().
The pointsToConstantMemory() method returns true only if the memory pointed to
by the memory location is globally invariant. However, the LLVM memory model
also has the semantic notion of *locally-invariant*: memory that is known to be
invariant for the life of the SSA value representing that pointer. The most
common example of this is a pointer argument that is marked readonly noalias,
which the Rust compiler frequently emits.

It'd be desirable for LLVM to treat locally-invariant memory the same way as
globally-invariant memory when it's safe to do so. This patch implements that,
by introducing the concept of a *ModRefInfo mask*. A ModRefInfo mask is a bound
on the Mod/Ref behavior of an instruction that writes to a memory location,
based on the knowledge that the memory is globally-constant memory (in which
case the mask is NoModRef) or locally-constant memory (in which case the mask
is Ref). ModRefInfo values for an instruction can be combined with the
ModRefInfo mask by simply using the & operator. Where appropriate, this patch
has modified uses of pointsToConstantMemory() to instead examine the mask.

The most notable optimization change I noticed with this patch is that now
redundant loads from readonly noalias pointers can be eliminated across calls,
even when the pointer is captured. Internally, before this patch,
AliasAnalysis was assigning Ref to reads from constant memory; now AA can
assign NoModRef, which is a tighter bound.
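A hypothetical sketch of that effect (illustrative only, not the patch's tests):

  declare void @opaque(ptr)

  define i32 @sum_twice(ptr noalias readonly %p) {
    %a = load i32, ptr %p
    ; %p escapes here, but readonly noalias makes its memory invariant
    ; for the lifetime of %p, so the call cannot modify it.
    call void @opaque(ptr %p)
    ; This load is redundant and can be replaced with %a.
    %b = load i32, ptr %p
    %r = add i32 %a, %b
    ret i32 %r
  }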

Differential Revision: https://reviews.llvm.org/D136659
2022-10-31 13:03:41 -07:00
Philip Reames 93798fb740 Address post commit style comment from 087bb0f 2022-10-31 11:16:14 -07:00
Geza Lore d5e59e99f4 [ValueTracking] Improve performance of programUndefinedIfUndefOrPoison (NFC)
programUndefinedIfUndefOrPoison used to eagerly propagate the fact that
a value is poison to the users of the value. The problem is that if the
value has a lot of uses (orders of magnitude more than the scanning
limit we use in this function), then we spend the bulk of our time in
eagerly propagating the poison property, which we will mostly never use
later anyway due to the scanning limit.

I have a test case (of ~50k lines of machine generated C++), where this
results in ~60% of 35s compilation time being spent doing just this
eager propagation.

This patch changes programUndefinedIfUndefOrPoison to only propagate to
instructions actually visited, looking back to see if their operands are
poison. This should be equivalent and no functional change is intended,
but we regain virtually all of the 60% compilation time spent in this
function in my test case (i.e.: a 2.5x total compilation speedup).

Differential Revision: https://reviews.llvm.org/D137027
2022-10-31 10:20:11 +01:00
Nikita Popov efbb4d0245 [BasicAA] Include MayBeCrossIteration in cache key
Rather than switching to a new AAQI instance with empty cache when
MayBeCrossIteration is toggled, include the value in the cache key.

The implementation redundantly includes the information on both sides
of the pair, but that seems simpler than trying to store it only on
one side.

Differential Revision: https://reviews.llvm.org/D136175
2022-10-31 09:59:42 +01:00
Philip Reames 35a1161c24 [ValueTracking] Assert known bits sanity in isKnownNonZero
These are the same asserts we have in other query routines; cover this interface too.
2022-10-30 10:53:52 -07:00
Simon Pilgrim 55a11b542e [VectorUtils] Add getShuffleDemandedElts helper
We have similar code to translate a demanded elements mask for a shuffle's operands in multiple places - this patch adds a helper function to VectorUtils and updates a number of locations to use it directly.

Differential Revision: https://reviews.llvm.org/D136832
2022-10-30 17:03:55 +00:00
Philip Reames 087bb0f1fe Allow scalable vectors in computeKnownBits
This extends the computeKnownBits analysis to support scalable vectors. The critical detail is in deciding how to represent the demanded elements of a vector whose length is unknown at compile time.

For this patch, I adopt the convention that we track one bit which corresponds to all lanes. That is, that bit is implicitly broadcast to all lanes of the scalable vector resulting in all lanes being demanded. This is the same convention we use in getSplatValue in SelectionDAG.

Note that this convention doesn't actually impact much. Most of the code is agnostic to the interpretation of the demanded elements, and the few cases which actually care need case by case handling anyways. In this patch, I just bail out of those cases.

A prior patch (D128159) proposed using a different convention in SDAG. I don't see any strong reason to prefer one scheme over the other, so I propose we go with this one as it's conceptually the simplest. Getting known and demanded bit optimizations unblocked at all is a significant win.

I've locally implemented this scheme in reasonable large parts of ValueTracking.cpp and SelectionDAG equivalents, and have not hit any blockers. If this is approved, I plan to post a series of patches plumbing this through all the relevant parts.

In the discussion on that patch, a preference was expressed for introducing some form of abstraction around the demanded elements. I'll note that I've played with several variations on that idea locally, and have yet to find anything which results in more readable code. If anyone has concrete ideas in this area, I'm happy to explore in follow up patches. I'd strongly prefer to be making API changes in NFC manner with tests in place.

Differential Revision: https://reviews.llvm.org/D136470
2022-10-30 08:44:37 -07:00
Nikita Popov 8e5f57d738 [BasicAA] Remove redundant libcall handling
The writeonly attribute for memset_pattern16 (and other referenced
libcalls) is being added by InferFunctionAttrs nowadays. No need
to special-case it here.
2022-10-27 12:01:33 +02:00