With typed pointers, checking the pointer operand type covered both the
address space and the load/store type. With opaque pointers, the
load/store type has to be checked separately.
This rewrites ArgPromotion to be based on offsets rather than GEP
structure. We inspect all loads at constant offsets and remember
which types are loaded at which offsets. Then we promote based on
those types.
This generalizes ArgPromotion to work with bitcasted loads, and
is compatible with opaque pointers.
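As a rough sketch (made-up IR, not taken from the patch's tests), a callee
like the one below, which loads two i32 values at offsets 0 and 4 through a
single pointer argument, can now be promoted purely based on the
(offset, type) pairs:
```
; Before: two loads at constant offsets from the pointer argument.
define internal i32 @callee(ptr %p) {
  %x = load i32, ptr %p
  %q = getelementptr i8, ptr %p, i64 4
  %y = load i32, ptr %q
  %s = add i32 %x, %y
  ret i32 %s
}

; After promotion (roughly): one scalar argument per (offset, type) pair,
; with the callers loading the values and passing them directly.
define internal i32 @callee(i32 %p.0.val, i32 %p.4.val) {
  %s = add i32 %p.0.val, %p.4.val
  ret i32 %s
}
```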
This patch also fixes incorrect handling of alignment during
argument promotion. Previously, the implementation only checked
that the pointer is dereferenceable, but was happy to speculate
overaligned loads. (I would have fixed this separately in advance,
but I found this hard to do with the previous implementation
approach).
Differential Revision: https://reviews.llvm.org/D118685
As long as *all* the invokes in the set are indirect, we can merge them,
but don't merge direct invokes into the set, even though it would be
legal to do so.
Alloca promotion can only deal with cases where the load/store
types match the alloca type (it explicitly does not support
bitcasted load/stores).
With opaque pointers this is no longer enforced through the pointer
type, so add an explicit check.
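For example (hypothetical IR), the mismatched load below must now be rejected
by the explicit check; with typed pointers it would have required a bitcast of
`%a` and thus been rejected already:
```
define float @no_promote() {
  %a = alloca i32
  store i32 0, ptr %a
  ; The load type does not match the alloca type, so %a must not be promoted.
  %f = load float, ptr %a
  ret float %f
}
```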
If the original invokes had uses, those uses must have been in PHI nodes,
which would immediately make the incoming values incompatible.
But we'll replace the uses of the original invokes with the merged invoke,
so as long as the incoming values become compatible after that
replacement, we can merge.
Even if the invokes have a normal destination, we can merge them iff it
is the same block. For now, require that there are no PHI nodes and that
the returned values of the invokes aren't used.
D108992 added KnownBits handling for 'Quadratic Reciprocity' self-multiplication patterns (bit[1] == 0), which is only valid for non-undef values (poison is OK).
This patch adds noundef self-multiply handling to value tracking so demanded-bits patterns can make use of it.
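As an illustration (made-up IR): for any integer n, n*n is 0 or 1 modulo 4, so
bit 1 of a self-multiply is always zero, and a `noundef` operand guarantees both
sides of the multiply are the same value:
```
define i8 @square_bit1(i8 noundef %x) {
  ; %x is noundef, so both operands of the mul are the same value and
  ; bit[1] of %sq is known zero (n*n is 0 or 1 mod 4).
  %sq = mul i8 %x, %x
  %r = and i8 %sq, 2
  ret i8 %r            ; demanded-bits reasoning can fold this to 0
}
```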
Differential Revision: https://reviews.llvm.org/D117995
Before walking all the callers, check whether we have a
dereferenceable attribute directly on the argument.
Also make it clearer that the code currently does not treat
alignment correctly.
The helper `Attributor::checkForAllReturnedValuesAndReturnInsts`
simplifies the returned value optimistically. In `AAUndefinedBehavior`
we cannot use such optimistic values when deducing UB. As a result, we
assumed UB for the return value of a function because we initially
(=optimistically) thought the function return is `undef`. While we later
adjusted this properly, `AAUndefinedBehavior` was under the impression
that the return value was "known" (i.e., fixed) and could never change.
To correct this we use `Attributor::checkForAllInstructions` and then
manually perform simplification of the return value, allowing only known
values to be used. This actually matches the other UB deductions.
Fixes #53647
Use the existing functionality for stripping constant offsets, which
works well with AS casts and avoids code duplication.
Since we strip AS casts during the computation of the offset we also
need to adjust the APInt properly to avoid mismatches in the bit width.
This code ensures the caller of `compute` sees APInts that match the
index type size of the value passed to `compute`, not the value result
of the strip pointer cast.
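A rough sketch of the shape involved (hypothetical IR; the data layout is an
assumption chosen so the two address spaces use different pointer index widths):
```
target datalayout = "p:64:64-p1:32:32"

define void @store_via_cast(ptr addrspace(1) %p) {
  ; Stripping casts and constant offsets from %g walks through the
  ; addrspacecast down to %p, whose index type is only 32 bits wide.
  ; Because the index widths differ across the cast, the accumulated
  ; offset APInt must be adjusted to match the index type width of the
  ; value originally passed to compute(), not that of the stripped base.
  %c = addrspacecast ptr addrspace(1) %p to ptr
  %g = getelementptr i8, ptr %c, i64 4
  store i8 0, ptr %g
  ret void
}
```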
Fixes#53559.
Differential Revision: https://reviews.llvm.org/D118727
The vectorizer will choose at times to "vectorize" loops with a scalar
factor (VF=1) with interleaving (IC > 1). This can occasionally produce
better code than the unroller (notably for reductions, where it can
produce independent reduction chains that are combined after the loop).
At times this is not very beneficial though, for example when runtime
checks are needed or when the scalar code requires predication.
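For example, the predication case (the second point) looks roughly like the
hypothetical loop below, written as IR with made-up names; the conditional
store has to be predicated when scalarized, so each interleaved lane would get
its own branch:
```
define void @clamp_positives(ptr %a, i64 %n) {
entry:
  br label %loop

loop:
  %i = phi i64 [ 0, %entry ], [ %i.next, %latch ]
  %addr = getelementptr inbounds i32, ptr %a, i64 %i
  %v = load i32, ptr %addr
  %cmp = icmp sgt i32 %v, 0
  br i1 %cmp, label %if.then, label %latch

if.then:
  ; Conditional store: scalarizing with IC > 1 would replicate this
  ; branch once per interleaved lane, i.e. the scalar code needs predication.
  store i32 0, ptr %addr
  br label %latch

latch:
  %i.next = add nuw nsw i64 %i, 1
  %done = icmp eq i64 %i.next, %n
  br i1 %done, label %exit, label %loop

exit:
  ret void
}
```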
This addresses the second point, preventing the vectorizer from
interleaving when the scalar loop will require predication. This
prevents it from making a bit of a mess that is worse than the original
and better left for the unroller to unroll if beneficial. It helps
reverse some of the regressions from D118090.
Differential Revision: https://reviews.llvm.org/D118566
This is a follow-up suggested in D119060.
Instead of checking each of the bottom 2 bits individually,
we can check them together and handle the possibility that
we demand both together.
https://alive2.llvm.org/ce/z/C2ihC2
Differential Revision: https://reviews.llvm.org/D119139
D43208 extracted `useEmulatedMaskMemRefHack()` from legality into cost model.
What it essentially does is prevent scalarized vectorization of masked memory operations:
```
// TODO: Cost model for emulated masked load/store is completely
// broken. This hack guides the cost model to use an artificially
// high enough value to practically disable vectorization with such
// operations, except where previously deployed legality hack allowed
// using very low cost values. This is to avoid regressions coming simply
// from moving "masked load/store" check from legality to cost model.
// Masked Load/Gather emulation was previously never allowed.
// Limited number of Masked Store/Scatter emulation was allowed.
```
While I don't really understand what specifically `is completely broken`
refers to, I believe that at least on X86 with AVX2-or-later
this is no longer true (or at least, I would like to know what is still broken).
So I would like to follow suit after D111460 and likewise disable that hack for AVX2+.
But since this was added specifically for X86, let's just instead completely remove this hack.
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D114779
Enabled loop interchange support for floating point reductions
if it is allowed to reorder floating point operations.
Previously, when we encountered a floating point PHI node in the
outer loop exit block, we bailed out since we could not detect
floating point reductions in the early days. Now we remove this
limitation since we are able to detect floating point reductions.
Reviewed By: #loopoptwg, Meinersbur
Differential Revision: https://reviews.llvm.org/D117450
As per LangRef's definition of `noreturn` attribute:
```
noreturn
This function attribute indicates that the function never returns
normally, hence through a return instruction.
This produces undefined behavior at runtime if the function
ever does dynamically return. Annotated functions may still
raise an exception, i.e., nounwind is not implied.
```
So if we `invoke` a `noreturn` function, and the normal destination
of an invoke is not an `unreachable`, point it at the new `unreachable`
block.
The change/fix from the original commit is that we now actually create
the new block, and don't just repurpose the original block,
because said normal destination block could have other users.
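A hedged sketch of the resulting IR (made-up names):
```
declare void @throws_and_never_returns() noreturn
declare i32 @__gxx_personality_v0(...)

define void @caller() personality ptr @__gxx_personality_v0 {
entry:
  ; The callee is noreturn, so the normal destination can never execute;
  ; it is redirected to a freshly created unreachable block instead of
  ; repurposing the original destination, which may have other predecessors.
  invoke void @throws_and_never_returns()
          to label %invoke.cont.unreachable unwind label %lpad

invoke.cont.unreachable:
  unreachable

lpad:
  %lp = landingpad { ptr, i32 }
          cleanup
  resume { ptr, i32 } %lp
}
```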
This reverts commit db1176ce66,
relanding commit 598833c987.
As per LangRef's definition of `noreturn` attribute:
```
noreturn
This function attribute indicates that the function never returns
normally, hence through a return instruction.
This produces undefined behavior at runtime if the function
ever does dynamically return. Annotated functions may still
raise an exception, i.e., nounwind is not implied.
```
Changes the remark to be emitted on the function call that captures the
globalized variable instead of on the globalized variable itself. The user
should then be able to tell which variable it was from the function's
argument list.
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D106980
While nowadays SimplifyCFG knows how to hoist code from then-else blocks,
sink code from unconditional predecessors, and even promote the latter
by tail-merging `ret`/`resume` function terminators, that isn't everything.
While I (and others) have been trying to deal with merging/sinking `unreachable`,
perhaps the more impactful remaining problem is merging the `throw` calls.
If we start at the `landingpad`, all the predecessors are unwind edges of `invoke`s,
and in some cases some of the `invoke`s are mergeable.
```
/// This is a weird mix of hoisting and sinking. Visually, it goes from:
///      [...]        [...]
///        |            |
///    [invoke0]    [invoke1]
///       / \          / \
/// [cont0]  [landingpad]  [cont1]
/// to:
///      [...] [...]
///        \   /
///       [invoke]
///         / \
///    [cont] [landingpad]
```
This simplifies the IR/CFG, at the cost of debug info and extra PHI nodes.
Note that we don't require *all* the `invoke`s of the `landingpad`
to be mergeable; they can form more than a single set, and we handle that gracefully.
For now, I completely disallowed normal destinations, PHI nodes and indirect invokes,
but that can be supported.
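A hypothetical sketch of the shape handled initially: two direct invokes of the
same callee whose normal destinations are trivially `unreachable` and which
unwind to the same `landingpad`. SimplifyCFG can replace them with a single
invoke in a new block that both predecessors branch to (all names made up):
```
declare void @might_throw()
declare i32 @__gxx_personality_v0(...)

define void @merge_me(i1 %c) personality ptr @__gxx_personality_v0 {
entry:
  br i1 %c, label %bb0, label %bb1

bb0:                              ; first invoke of @might_throw
  invoke void @might_throw()
          to label %dead0 unwind label %lpad

bb1:                              ; second invoke of the same callee
  invoke void @might_throw()
          to label %dead1 unwind label %lpad

dead0:
  unreachable

dead1:
  unreachable

lpad:                             ; common landingpad of both invokes
  %lp = landingpad { ptr, i32 }
          cleanup
  resume { ptr, i32 } %lp
}
```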
Out of all the CTMark projects, only 7zip is C++, so there isn't much impact:
https://llvm-compile-time-tracker.com/compare.php?from=ba8eb31bd9542828f6424e15a3014f80f14522c8&to=722fc871c84f14157d45c2159bc9c8c7e2825785&stat=size-total
... but there it currently causes a size-total decrease.
Differential Revision: https://reviews.llvm.org/D117805
This patch adds initial support for signed conditions. To do so,
ConstraintElimination maintains two separate systems, one with facts
from signed and one for unsigned conditions.
To start with this means information from signed and unsigned conditions
is kept completely separate. When it is safe to do so, information from
signed conditions may also be transferred to the unsigned system and
vice versa. That's left for follow-ups.
In the initial version, decomposition of signed values just handles
constants and otherwise uses the value as-is, without trying to
decompose the operation. Again this can be extended in follow-up
changes.
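For illustration, a made-up example of a signed fact this already handles (both
comparisons use constants, so the limited decomposition suffices):
```
define i1 @signed_implied(i32 %x) {
entry:
  %c1 = icmp sge i32 %x, 10
  br i1 %c1, label %then, label %else

then:
  ; In this block the signed system knows %x >=s 10, which implies
  ; %x >=s 0, so %c2 simplifies to true.
  %c2 = icmp sge i32 %x, 0
  ret i1 %c2

else:
  ret i1 false
}
```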
The main benefit of this limited signed support is proving >=s 0
pre-conditions added in D118799. But even this initial version also
fixes PR53273.
Depends on D118799.
Reviewed By: reames
Differential Revision: https://reviews.llvm.org/D118806
With this patch pre-conditions can be added to a list of constraints.
Constraints with pre-conditions can only be used if all pre-conditions
are satisfied when the constraint is used.
The pre-conditions at the moment are specified as a list of
(Predicate, Value *, Value *) tuples. This allows checking them easily,
like any other condition, using the existing infrastructure.
This then is used to limit GEP decomposition to cases where we can
prove that offsets are signed positive.
This fixes a couple of incorrect transforms where GEP offsets were
assumed to be signed positive, but they were not.
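Roughly, in a case like the following (hypothetical IR), decomposing the GEP as
`%a + 4 * %i` for the unsigned system is only sound when the offset is signed
non-negative, so `%i >=s 0` is recorded as a pre-condition of that constraint:
```
define i1 @gep_cmp(ptr %a, i64 %i) {
  ; Decomposed as %a + 4 * %i, usable only under the pre-condition
  ; that %i (and hence the offset) is >=s 0.
  %gep = getelementptr inbounds i32, ptr %a, i64 %i
  %c = icmp uge ptr %gep, %a
  ret i1 %c
}
```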
Note that this effectively disables GEP decomposition, as there's no
support for reasoning about signed predicates. D118806 adds initial
signed support.
Fixes PR49624.
Reviewed By: reames
Differential Revision: https://reviews.llvm.org/D118799
This makes the statepoint methods in IRBuilder accept a
FunctionCallee, which carries both the callee and function type.
This is used to add the elementtype attribute to the statepoint call.
RS4GC requires an additional tweak to actually preserve that attribute
-- previously the attributes on the call were completely overwritten.
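The emitted IR then roughly matches the statepoint example from the
documentation, with the callee's function type carried by `elementtype` on the
target operand (a sketch with made-up names; relocation/result handling omitted):
```
declare void @do_safepoint()
declare token @llvm.experimental.gc.statepoint.p0(i64, i32, ptr, i32, i32, ...)

define void @caller() gc "statepoint-example" {
entry:
  ; The FunctionCallee supplies both @do_safepoint and its function type;
  ; the type ends up as the elementtype attribute on the callee operand.
  %tok = call token (i64, i32, ptr, i32, i32, ...)
      @llvm.experimental.gc.statepoint.p0(i64 2882400000, i32 0,
          ptr elementtype(void ()) @do_safepoint, i32 0, i32 0, i32 0, i32 0)
  ret void
}
```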
Differential Revision: https://reviews.llvm.org/D118886