This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated. The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.
This is part of an effort to migrate from llvm::Optional to
std::optional:
https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
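A typical mechanical replacement looks like this (the variable is made up for illustration):

```
#include <optional>

// Before the migration (llvm::Optional spelling):
//   Optional<unsigned> MaybeAlign = None;
// After the mechanical replacement:
std::optional<unsigned> MaybeAlign = std::nullopt;
```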
memset_chk may not write the number of bytes specified by the third
argument if that count is larger than the destination size (specified
by the fourth argument).
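For reference, a hedged sketch of the usual glibc-style prototype and of why DSE cannot rely on the full length being written; the parameter names below are conventional, not taken from this patch:

```
#include <cstddef>

// If Len exceeds DestSize, the call may abort instead of writing Len bytes,
// so a later store into [Dest, Dest + Len) cannot be considered dead merely
// because the memset_chk appears to cover it.
extern "C" void *__memset_chk(void *Dest, int C, std::size_t Len,
                              std::size_t DestSize);

void zeroPrefix(char *Dest, std::size_t DestSize, std::size_t Len) {
  __memset_chk(Dest, 0, Len, DestSize); // may not actually write Len bytes
}
```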
Reviewed By: asbirlea
Differential Revision: https://reviews.llvm.org/D115167
The Assignment Tracking debug-info feature is outlined in this RFC:
https://discourse.llvm.org/t/rfc-assignment-tracking-a-better-way-of-specifying-variable-locations-in-ir
DeadStoreElimination shortens stores that are shadowed by later stores,
omitting the overlapping part of the earlier store. Insert an unlinked
dbg.assign intrinsic with a variable fragment that describes the omitted part,
to signal that this fragment of the variable has a stale value in memory.
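A source-level sketch of the shortening situation this addresses (not taken from the patch; the names are illustrative):

```
#include <cstdint>
#include <cstring>

void use(char *);

// The memset covers all 16 bytes, but the memcpy shadows the first 8, so DSE
// may shorten the memset to the trailing 8 bytes. The shortened-away fragment
// of Buf then holds a stale value in memory, which is what the unlinked
// dbg.assign with a fragment expression records for the debugger.
void example(std::uint64_t V) {
  char Buf[16];
  std::memset(Buf, 0, sizeof(Buf)); // candidate for shortening
  std::memcpy(Buf, &V, sizeof(V));  // overwrites the first 8 bytes
  use(Buf);
}
```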
Reviewed By: jmorse
Differential Revision: https://reviews.llvm.org/D133315
If the pointer of the location to be killed is defined in no loop and the
function does not contain irreducible loops, then we can regard the location
as loop invariant.
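A minimal sketch of that reasoning, with a hypothetical helper name (not the actual patch):

```
#include "llvm/Analysis/LoopInfo.h"
#include "llvm/IR/Instruction.h"

using namespace llvm;

// A pointer whose defining instruction sits in no loop of a function without
// irreducible control flow cannot take a different value on a "later
// iteration", so the location it points to can be treated as loop invariant.
static bool ptrIsLoopInvariantSketch(const Instruction *PtrDef,
                                     const LoopInfo &LI,
                                     bool HasIrreducibleLoops) {
  return !HasIrreducibleLoops && LI.getLoopFor(PtrDef->getParent()) == nullptr;
}
```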
Differential Revision: https://reviews.llvm.org/D135369
Stop assuming that an 'int' is 32 bits in helpers that emit libcalls
to library functions that have 'int' in their signature. For most targets
this is NFC. For a target with a 16-bit 'int' type it could help detect
attempts to emit a libcall with an incorrect signature.
Similarly, we now derive the type mapping to 'size_t' by asking TLI
about the size of 'size_t'. This should be NFC (at least for in-tree
targets) since getSizeTSize(), in TLI, derives the size in the
same way as DataLayout::getIntPtrType().
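A hedged sketch of what that derivation could look like; the helper names are illustrative, and the exact TLI accessors are an assumption based on the description above:

```
#include "llvm/Analysis/TargetLibraryInfo.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/Type.h"

using namespace llvm;

// Derive the IR types corresponding to the C 'int' and 'size_t' in a libcall
// prototype from TLI rather than hardcoding i32 or using getIntPtrType().
static Type *getLibcallIntTy(LLVMContext &Ctx, const TargetLibraryInfo &TLI) {
  return Type::getIntNTy(Ctx, TLI.getIntSize());
}

static Type *getLibcallSizeTTy(Module &M, const TargetLibraryInfo &TLI) {
  return Type::getIntNTy(M.getContext(), TLI.getSizeTSize(M));
}
```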
Differential Revision: https://reviews.llvm.org/D135065
For a no-op store formed by a load LoadI and a store StoreI, the invariant
that must hold is that the memory state of the related MemoryLoc before
LoadI is the same as before StoreI.
For this example:
```
define void @pr49927(i32* %q, i32* %p) {
%v = load i32, i32* %p, align 4
store i32 %v, i32* %q, align 4
store i32 %v, i32* %p, align 4
ret void
}
```
Here the defining access of the store's destination differs from the
defining access of the load's destination, which makes it look as if the
invariant above is broken. But that intervening definition stores exactly
the value produced by LoadI, so the invariant is in fact preserved and we
can safely ignore it.
Fixes https://github.com/llvm/llvm-project/issues/49271
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D132657
The type information of the store values can diverge when checking for valid
masked store candidates to eliminate via DSE. This patch checks for
equivalence with respect to size and element count.
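A minimal sketch of the equivalence check, with a hypothetical helper name:

```
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/DerivedTypes.h"

using namespace llvm;

// Two masked-store value types only count as equivalent for DSE purposes if
// both their total bit size and their element count agree.
static bool maskedStoreTypesMatch(VectorType *KillingTy, VectorType *DeadTy,
                                  const DataLayout &DL) {
  return DL.getTypeSizeInBits(KillingTy) == DL.getTypeSizeInBits(DeadTy) &&
         KillingTy->getElementCount() == DeadTy->getElementCount();
}
```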
Reviewed By: fhahn, rui.zhang
Differential Revision: https://reviews.llvm.org/D132700
Remove isFreeCall() in favor of getFreedOperand(). Replace the
two remaining uses with a getFreedOperand() != nullptr check, as
they only care that something is getting freed. (The usage in DSE
is correct as such. The allocator-related checks in CFLGraph look
rather questionable in general.)
We currently assume in a number of places that free-like functions
free their first argument. This is true for all hardcoded free-like
functions, but with the new attribute-based design, the freed
argument is supposed to be indicated by the allocptr attribute.
To make sure we handle this correctly once allockind(free) is
respected, add a getFreedOperand() helper which returns the freed
argument, rather than just indicating whether the call frees *some*
argument.
This migrates most but not all users of isFreeCall() to the new
API. The remaining users are a bit more tricky.
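A usage sketch of the new helper (the wrapper is hypothetical):

```
#include "llvm/Analysis/MemoryBuiltins.h"
#include "llvm/Analysis/TargetLibraryInfo.h"
#include "llvm/IR/InstrTypes.h"

using namespace llvm;

// getFreedOperand() reports *which* operand a free-like call deallocates
// (nullptr if none), instead of merely answering "does this call free
// something".
static bool callFreesPointer(const CallBase *CB, const Value *Ptr,
                             const TargetLibraryInfo *TLI) {
  return getFreedOperand(CB, TLI) == Ptr;
}
```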
Drop the requirement that getInitialValueOfAllocation() must be
passed an allocator function, shifting the responsibility for
checking that into the function (which it does anyway). The
motivation is to avoid some calls to isAllocationFn(), which has
somewhat ill-defined semantics (given the number of
allocator-related attributes we have floating around...)
(For this function, all we eventually need is an allockind of
zeroed or uninitialized.)
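A hedged usage sketch (the wrapper is made up; the helper's signature is assumed from its description above):

```
#include "llvm/Analysis/MemoryBuiltins.h"
#include "llvm/Analysis/TargetLibraryInfo.h"
#include "llvm/IR/Constants.h"

using namespace llvm;

// The helper itself returns nullptr for values that are not allocator calls,
// so callers no longer need a separate isAllocationFn() check.
static bool allocationZeroInitializes(const Value *Alloc,
                                      const TargetLibraryInfo *TLI,
                                      Type *AccessTy) {
  Constant *Init = getInitialValueOfAllocation(Alloc, TLI, AccessTy);
  return Init && Init->isNullValue();
}
```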
Differential Revision: https://reviews.llvm.org/D127274
For non-mem-intrinsic and non-lifetime `CallBase`s, the current
`isRemovable` function only checks if the `CallBase` 1. has no uses 2.
will return 3. does not throw:
80fb782336/llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp (L1017)
But we should also exclude invokes even if they don't throw,
because they are terminators and thus cannot be removed. While it
doesn't seem to make much sense for `invoke`s to have a `nounwind`
target, such code can be generated and is also valid bitcode.
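A minimal sketch of the extra condition (not the exact code in the patch):

```
#include "llvm/IR/InstrTypes.h"
#include "llvm/IR/Instructions.h"

using namespace llvm;

// Even a nounwind invoke is still a block terminator, so it can never be
// erased by DSE, regardless of the other removability conditions.
static bool nonIntrinsicCallIsRemovable(const CallBase *CB) {
  return !isa<InvokeInst>(CB) && CB->use_empty() && CB->willReturn() &&
         CB->doesNotThrow();
}
```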
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D128224
And thread DSE's ephemeral values to EarliestEscapeInfo.
This allows more precise analysis in DSEState::isReadClobber() via BatchAA.
Followup to D123162.
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D123342
This changes MemorySSA to be constructed in unoptimized form.
MemorySSA::ensureOptimizedUses() can be called to optimize all
uses (once). This should be done by passes where having optimized
uses is beneficial, either because we're going to query all uses
anyway, or because we're doing def-use walks.
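A usage sketch, assuming a pass like DSE that walks many def-use chains:

```
#include "llvm/Analysis/MemorySSA.h"

using namespace llvm;

// Passes that will query most uses anyway ask for all uses to be optimized
// once, up front; passes that only query a few clobbers skip this call and
// pay per query instead.
static void prepareMemorySSA(MemorySSA &MSSA) {
  MSSA.ensureOptimizedUses();
}
```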
This should help reduce the compile-time impact of MemorySSA for
some use cases (the reason why I started looking into this is
D117926), which can avoid optimizing all uses upfront, and instead
only optimize those that are actually queried.
Actually, we have an existing use-case for this, which is EarlyCSE.
Disabling eager use optimization there gives a significant
compile-time improvement, because EarlyCSE will generally only query
clobbers for a subset of all uses (this change is not included in
this patch).
Differential Revision: https://reviews.llvm.org/D121381
This builds on @fhahn's D112313, and caches the liveOnEntry node as an optimized access. D112313 only cached a known clobber. This change additionally caches the fact that no clobber exists. It still does not cache may-clobber results.
Differential Revision: https://reviews.llvm.org/D120842
There is a general WalkerStepLimit adjustment higher up in the
loop, and I don't see any reason why this particular case would
need additional adjustment. Furthermore, this could underflow.
This should be no functional change, as the cases supported by the
helper and the cases supported by DSE are currently the same; the
code structure is just slightly different.
Change the DSE calloc handling to assume that it is
inaccessiblememonly, i.e. the defining access is liveOnEntry.
Differential Revision: https://reviews.llvm.org/D117543
canSkipDef() currently skips inaccessiblememonly calls, but not
if they are allocation functions. This check was added in D103009,
but actually seems to be a leftover from a previous implementation
in D101440. canSkipDef() is not used on the storeIsNoop() path,
where the relevant transform ended up being implemented.
Differential Revision: https://reviews.llvm.org/D117005
This change removes a direct check for calloc-like allocation functions and instead handles the generic case where we're storing a constant into constant-initialized memory. This is mostly to remove the call to isCallocLike, but if someone downstream happens to have an initialized allocator which initializes to e.g. -1, this will also kick in for them. (I don't know of such an example, for the record.)
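A source-level illustration of the generic case (not from the patch):

```
#include <cstdlib>

// calloc() hands back zero-initialized memory, so the explicit store of 0
// below stores a constant into constant-initialized memory and can be
// deleted. A downstream allocator documented to initialize with -1 would get
// the same treatment for a store of -1.
int *allocZeroed(std::size_t N) {
  int *P = static_cast<int *>(std::calloc(N, sizeof(int)));
  if (P)
    P[0] = 0; // already zero: removable
  return P;
}
```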
For these "visible on unwind/ret" checks we only care about the
fact that no other code has access to the pointer (unless it
escapes). A noalias call is sufficient for this, it does not
have to be a known allocation function.
This is basically the same change as D116728, but for DSE rather
than LICM.
If the killing store overwrites the whole object, we know that the
preceding store is dead, regardless of the accessed offset or size.
This case was previously only handled if the size of the dead store
was also known.
This allows us to perform conventional DSE for calls that write to
an argument (but without known size).
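A minimal sketch of the added reasoning (not the actual DSE code; the names are illustrative):

```
#include <cstdint>

enum class OW { Complete, Unknown };

// When the killing access starts at the base of the underlying object and
// covers all of it, the earlier access is completely overwritten, regardless
// of its own (possibly unknown) offset and size.
static OW isCompleteOverwrite(std::uint64_t KillingOffsetFromBase,
                              std::uint64_t KillingSize,
                              std::uint64_t ObjectSize) {
  if (KillingOffsetFromBase == 0 && KillingSize >= ObjectSize)
    return OW::Complete; // dead access offset/size need not be known
  return OW::Unknown;
}
```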
Differential Revision: https://reviews.llvm.org/D116267
This fixes a typo in 81d69e1bda.
Of course we should only skip the particular store if it isn't
removable, not bail out of the whole loop. Add a test to cover
this case.
At this point the instruction may either have an analyzable
write or be a terminator. For terminators, isRemovable() is not
necessarily well-defined. Move the check until after we have ensured
that it is not a terminator.
The only non-trivial change here is that the isReadClobber()
check for redundant stores is now on the DefLoc, not the
UpperLoc. This is semantically the right location to use, though
in practice it makes no difference (the locations are either the
same, or the def inst does not read).
We have Value->stripInBoundsConstantOffsets() which does what we
want here, but the inbounds requirement isn't actually necessary.
We should probably add Value->stripConstantOffsets() as well.
Remove the special casing for intrinsics in MemoryLocation::getForDest()
and handle them through the general attribute based code. On the DSE
side, this means that isRemovable() now needs to handle more than a
hardcoded list of intrinsics. We consider everything apart from
volatile memory intrinsics and lifetime markers to be removable.
This allows us to perform DSE on intrinsics that DSE has not been
specially taught about, using a matrix store as an example here.
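A hedged sketch of the relaxed removability rule for intrinsics (hypothetical helper, not the patch's exact code):

```
#include "llvm/IR/IntrinsicInst.h"

using namespace llvm;

// A memory-writing intrinsic is treated as removable unless it is a volatile
// memory intrinsic or a lifetime marker; everything else, e.g. a matrix
// store, is fair game.
static bool writingIntrinsicIsRemovable(const IntrinsicInst *II) {
  if (const auto *MI = dyn_cast<MemIntrinsic>(II))
    return !MI->isVolatile();
  return II->getIntrinsicID() != Intrinsic::lifetime_start &&
         II->getIntrinsicID() != Intrinsic::lifetime_end;
}
```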
There is an interesting test change for invariant.start, but I
believe that optimization is correct. It only looks a bit odd
because the code is immediate UB anyway.
Differential Revision: https://reviews.llvm.org/D116210
Fix handling of alloc-like instructions in isGuaranteedLoopInvariant(). It was not valid when the 'KillingDef' was outside of the loop while the 'CurrentDef' was inside the loop. In that case, the 'KillingDef' only overwrites the definition from the last iteration of the loop, not the ones from all iterations. Therefore it does not make 'CurrentDef' dead, and we must not remove it.
Fixes: https://github.com/llvm/llvm-project/issues/52774
Reviewed by: Florian Hahn
Differential revision: https://reviews.llvm.org/D115965
We can only drop calls if they have an analyzable write, the return
value is not used, they don't throw and they don't diverge. The last
two conditions were previously not checked, because all the libcalls
with analyzable writes already happened to satisfy those conditions
anyway. This may not be true for generalizations (with D115904 in mind).
No test changes because the necessary attributes are already inferred
for currently supported libcalls.
Differential Revision: https://reviews.llvm.org/D115962