Unfortunately, it's not just constant expressions that can trap:
a trapping constant expression may also be nested inside a constant
aggregate.
Perform the check during phi folding on Constant rather than
ConstantExpr, and extend the Constant::mayTrap() implementation
to also recurse into ConstantAggregates, not just ConstantExprs.
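To make the hazard concrete, here is a sketch of the kind of IR
involved (hypothetical reproducer; division constant expressions
still existed at the time):
define <2 x i32> @f(i1 %c) {
entry:
  br i1 %c, label %if, label %join
if:
  br label %join
join:
  ; the non-undef incoming value hides a div-by-zero constant
  ; expression inside a vector constant; the phi must not be
  ; folded to it, or the trap could be speculated
  %p = phi <2 x i32> [ <i32 1, i32 sdiv (i32 1, i32 0)>, %if ],
                     [ undef, %entry ]
  ret <2 x i32> %p
}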
Fixes https://github.com/llvm/llvm-project/issues/49839.
Clang-format InstructionSimplify and convert all "FunctionName"s to
"functionName". This patch does touch a lot of files but gets done with
the cleanup of InstructionSimplify in one commit.
This is the alternative to the less invasive clang-format only patch: D126783
Reviewed By: spatel, rengolin
Differential Revision: https://reviews.llvm.org/D126889
This adds two conjugated folds:
* A | B -> B if A implies B (https://alive2.llvm.org/ce/z/R6GU4j)
* A & B -> A if A implies B (https://alive2.llvm.org/ce/z/EGMqyy)
If A and B are icmps themselves, we will usually fold this through
other logic already (though the tests show a couple additional cases
we previously missed). However, isImpliedCond() also supports A
being of the form X & Y, which allows us to handle cases like
(X & Y) | B where X implies B. This addresses the regression from
D125398.
Something that notably doesn't work yet is the (X | Y) & B case.
This is due to an asymmetry in the isImpliedCondition()
implementation that will have to be addressed separately.
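A sketch of the (X & Y) | B case in IR (names invented for
illustration):
define i1 @src(i32 %a, i1 %y) {
  %x = icmp ult i32 %a, 8      ; X: %a < 8
  %b = icmp ult i32 %a, 16     ; B: %a < 16
  %and = and i1 %x, %y         ; A = X & Y
  %or = or i1 %and, %b
  ret i1 %or                   ; X implies B, so this simplifies to %b
}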
Differential Revision: https://reviews.llvm.org/D125530
This issue reproduces in the context of LoopDeletion, because the
bitcast does not get simplified away there. For a plain -inst-simplify
run the bitcast would get folded away first.
Fixes https://github.com/llvm/llvm-project/issues/54615.
If a constrained intrinsic call was replaced by some value, it was not
removed in some cases. The dangling instruction resulted in useless
instructions executed at runtime. This happened because constrained
intrinsics usually have a side effect, which is used to model the
interaction with the floating-point environment. In some cases the
side effect is actually absent or can be ignored.
This change adds specific treatment of constrained intrinsics so that
their side effect can be removed if it is actually absent.
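For example (a sketch, assuming the usual metadata arguments): with
"fpexcept.ignore" the call below has no observable side effect, so
once its result is replaced (fadd X, -0.0 --> X) the call itself can
be erased.
declare float @llvm.experimental.constrained.fadd.f32(float, float, metadata, metadata)
define float @f(float %x) strictfp {
  %r = call float @llvm.experimental.constrained.fadd.f32(
           float %x, float -0.000000e+00,
           metadata !"round.tonearest", metadata !"fpexcept.ignore") strictfp
  ret float %r   ; %r is replaced by %x; the call is then dead
}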
Differential Revision: https://reviews.llvm.org/D118426
Currently the fsub optimizations in InstSimplify don't know how to fold
-0.0 - (-X) to X when the constrained intrinsics are used. This adds partial
support. The rest of the support will come later with work on the IR
matchers.
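A sketch of the new fold (assuming non-strict exception behavior):
declare float @llvm.experimental.constrained.fsub.f32(float, float, metadata, metadata)
define float @f(float %x) strictfp {
  %n = fneg float %x
  %r = call float @llvm.experimental.constrained.fsub.f32(
           float -0.000000e+00, float %n,
           metadata !"round.tonearest", metadata !"fpexcept.ignore") strictfp
  ret float %r   ; simplifies to %x, as in the unconstrained case
}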
This review is split out from D107285.
Differential Revision: https://reviews.llvm.org/D123396
In D111530, I suggested that we add some relatively basic pattern-matching
folds for shifts and funnel shifts and avoid a more specialized solution
if possible.
We can start by implementing at least one of these in IR because it's
easier to write the code and verify with Alive2:
https://alive2.llvm.org/ce/z/qHpmNn
This will need to be adapted/extended for SDAG to handle the motivating
bug ( #49541 ) because the patterns only appear later with that example
(added some tests: bb850d422b)
This can be extended within InstSimplify to handle cases where we 'and'
with a shift too (in that case, kill the funnel shift).
We could also handle patterns where the shift and funnel shift directions
are inverted, but I think it's better to canonicalize that instead to
avoid pattern-match case explosion.
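My reading of the implemented pattern, sketched in IR (see the Alive2
link above for the verified form): the shl bits are already covered
by the funnel shift's high half, so the 'or' is redundant.
declare i32 @llvm.fshl.i32(i32, i32, i32)
define i32 @src(i32 %x, i32 %y, i32 %z) {
  %f = call i32 @llvm.fshl.i32(i32 %x, i32 %y, i32 %z)
  %s = shl i32 %x, %z
  %o = or i32 %f, %s
  ret i32 %o   ; simplifies to %f
}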
Differential Revision: https://reviews.llvm.org/D120253
The code was using exact sizing only, but since what we really need is just to make sure the offsets are in bounds, a minimum bound on the object size is sufficient.
To demonstrate the difference, support computing minimum sizes for objects of scalable vector type.
Remove some code which tried to handle the case of comparing two allocas where an object size could not be precisely computed. This code had zero coverage in tree, and at least one nasty bug.
The bug comes from the fact that the code uses the size of the result pointer as a proxy for whether the alloca can be of size zero. Since the result of an alloca is *always* a pointer type, and a pointer type can *never* be empty, this check was a nop. As a result, we blindly considered a zero offset from two allocas to never be equal. They can in fact be equal when one or more of the allocas is zero-sized.
This is particularly ugly because instcombine contains the exact opposite rule. If instcombine reaches the allocas first, it combines them into one (making them equal). If instsimplify reaches the compare first, it would consider them not equal. This creates all kinds of fun scenarios for order of optimization reaching different and contradictory conclusions.
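A sketch of the problematic case (opaque-pointer syntax for brevity):
define i1 @f() {
  ; both allocas are zero-sized; instcombine may merge them into one,
  ; so folding this compare to false would be a miscompile
  %a = alloca [0 x i8]
  %b = alloca [0 x i8]
  %c = icmp eq ptr %a, %b
  ret i1 %c
}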
In a prior review I was asked to move the helper function canIgnoreSNaN()
out to FPEnv.h. This wasn't possible at the time because that function
needs the fast math flags, and including them includes lots of other stuff
that isn't needed.
This patch moves the fast math flags out into a new FMF.h file unchanged,
and moves the helper function out to FPEnv.h also unchanged. This ticket
only moves code around.
Differential Revision: https://reviews.llvm.org/D119752
Currently the fsub optimizations in InstSimplify don't know how to fold
X - -0.0 to X when we know X is not zero and the constrained intrinsics
are used. This adds the support.
This review is split out from D107285.
Differential Revision: https://reviews.llvm.org/D119746
Instead of doing an inbounds strip first and another non-inbounds
strip afterward for equality comparisons, directly do a single
inbounds or non-inbounds strip based on whether we have an equality
predicate or not.
This is NFC-ish in that the alloca equality codepath is the only
part that sees additional non-inbounds offsets now, and for that
codepath it doesn't matter whether or not the GEP is inbounds, as
it does a stronger check itself. InstCombine would infer inbounds
for such GEPs.
Currently the fsub optimizations in InstSimplify don't know how to fold X
- +0.0 to X when using the constrained intrinsics. This adds the support.
This review is split out from D107285.
Differential Revision: https://reviews.llvm.org/D118928
With this change, callers no longer need to put the pointer operand
and the indices in a container like a vector when calling the
function. This is not really an issue with the existing callers, but
when calling it from IRBuilder, the inputs are available as a
separate pointer value and an indices ArrayRef.
Reviewed By: lebedev.ri
Differential Revision: https://reviews.llvm.org/D117038
We could use knownbits on both operands for even more folds (and there are
already tests in place for that), but this is enough to recover the example
from:
https://github.com/llvm/llvm-project/issues/51934
(the tests are derived from the code in that example)
I am assuming no noticeable compile-time impact from this because udiv/urem
are rare opcodes.
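One simple shape from this family (a sketch):
define i32 @f(i32 %x) {
  %m = and i32 %x, 15      ; knownbits: %m < 16
  %d = udiv i32 %m, 16     ; simplifies to 0
  %r = urem i32 %m, 16     ; simplifies to %m
  %s = add i32 %d, %r
  ret i32 %s
}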
Differential Revision: https://reviews.llvm.org/D116616
Alternative to D116817.
This introduces a new value-based folding interface for Or (FoldOr),
which takes 2 values and returns an existing Value or a constant if the
Or can be simplified. Otherwise nullptr is returned. This replaces the
more restrictive CreateOr which takes 2 constants.
This is then used to implement a folder that uses InstructionSimplify.
The logic to simplify `Or` instructions is moved there. Subsequent
patches are going to transition other CreateXXX to the more general
FoldXXX interface.
Reviewed By: nikic, lebedev.ri
Differential Revision: https://reviews.llvm.org/D116935
D92270 updated constant expression folding to fold inbounds GEP to
poison if the base is undef. Apply the same logic to SimplifyGEPInst.
The justification is that we can choose an out-of-bounds pointer as base
pointer.
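A sketch of the fold (opaque-pointer syntax):
define ptr @f(i64 %i) {
  %g = getelementptr inbounds i8, ptr undef, i64 %i
  ret ptr %g   ; folds to poison: the base may be chosen out of bounds
}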
Reviewed By: nikic, lebedev.ri
Differential Revision: https://reviews.llvm.org/D117015
We should not lose analysis precision if an 'add' has both no-wrap
flags (nsw and nuw) compared to just one or the other.
This patch is modeled on a similar construct that was added with
D59386.
I don't think it is possible to expose a problem with an unsigned
compare because of the way this was coded (nuw is handled first).
InstCombine has an assert that fires with the example from:
https://github.com/llvm/llvm-project/issues/52884
...because it was expecting InstSimplify to handle this kind of
pattern with an smax.
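A sketch of the kind of signed compare that should still fold when
the add carries both flags (hypothetical example):
define i1 @f(i32 %x) {
  %a = add nuw nsw i32 %x, 1   ; nsw alone already justifies the fold
  %c = icmp sgt i32 %a, %x
  ret i1 %c                    ; simplifies to true
}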
Fixes #52884
Differential Revision: https://reviews.llvm.org/D116322
Add the following fold opportunity:
((A | B) ^ A) & ((A | B) ^ B) --> 0
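Spelled out in IR (a sketch):
define i32 @f(i32 %a, i32 %b) {
  %or = or i32 %a, %b
  %x1 = xor i32 %or, %a   ; == B & ~A
  %x2 = xor i32 %or, %b   ; == A & ~B
  %r = and i32 %x1, %x2
  ret i32 %r              ; simplifies to 0
}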
Reviewed By: spatel, rampitec
Differential Revision: https://reviews.llvm.org/D115755
Refer to https://llvm.org/PR52546.
Simplifies the following cases:
not(X) == 0 -> X != 0 -> X
not(X) <=u 0 -> X >u 0 -> X
not(X) >=s 0 -> X <s 0 -> X
not(X) != 1 -> X == 1 -> X
not(X) <=u 1 -> X >=u 1 -> X
not(X) >s 1 -> X <=s -1 -> X
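One mathematically sound instance of the first case, assuming i1
operands (illustrative sketch only):
define i1 @f(i1 %x) {
  %not = xor i1 %x, true
  %c = icmp eq i1 %not, false
  ret i1 %c   ; simplifies to %x
}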
Differential Revision: https://reviews.llvm.org/D114666
Reduce code duplication for commutative pattern matching
and fix a miscompile.
We can't safely propagate an undef element in this transform:
https://alive2.llvm.org/ce/z/s5xy55
We needed a stricter version of m_Not for D114462, but I wasn't
sure if that was going to be required anywhere else, so I didn't bother
to make that reusable.
It turns out we have one more existing simplification that needs
this (currently miscompiles):
https://alive2.llvm.org/ce/z/9-nTKi
And there's at least one more fold in that family that we could add.
Differential Revision: https://reviews.llvm.org/D114882
https://alive2.llvm.org/ce/z/4PaPDy
There's a related fold where the inner 'or' is replaced by 'and',
but that needs to be more careful about matching a 'not'.
Reduce duplication for handling the top-level commuted operands.
There are several other folds that should be moved in here, but
we need to make sure there's good test coverage.
(~a & b) ^ (a | b) --> a
This is the swapped and/or (Demorgan?) sibling fold for
the fold added with D114462 ( 892648b18a ).
This case is easier to specify because we are returning
a root value, not a 'not':
https://alive2.llvm.org/ce/z/SRzj4f
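Spelled out in IR (a sketch):
define i32 @f(i32 %a, i32 %b) {
  %nota = xor i32 %a, -1
  %l = and i32 %nota, %b   ; ~a & b
  %r = or i32 %a, %b       ; a | b
  %x = xor i32 %l, %r
  ret i32 %x               ; simplifies to %a
}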
(a & b) ^ (~a | b) --> ~a
I was looking for a shortcut to reduce some of the complex logic
folds that are currently up for review (D113216
and others in that stack), and I found this missing from
instcombine/instsimplify.
There is a trade-off in putting it into instsimplify: because
we can't create new values here, we need a strict 'not' op (no
undef elements). Otherwise, the fold is not valid:
https://alive2.llvm.org/ce/z/k_AGGj
If this was in instcombine instead, we could create the proper
'not'. But having the fold here benefits other passes like GVN
that use instsimplify as an analysis.
There is a related fold where 'and' and 'or' are swapped, and
that is planned as a follow-up commit.
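Spelled out in IR (a sketch; the 'not' must have no undef elements):
define i32 @f(i32 %a, i32 %b) {
  %nota = xor i32 %a, -1   ; the strict 'not'
  %l = and i32 %a, %b      ; a & b
  %r = or i32 %nota, %b    ; ~a | b
  %x = xor i32 %l, %r
  ret i32 %x               ; simplifies to %nota
}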
Differential Revision: https://reviews.llvm.org/D114462
As described in https://bugs.llvm.org/show_bug.cgi?id=52429 this
fold is incorrect, because inbounds only guarantees that the
pointers don't wrap in the unsigned space: it is possible that
the sign boundary is crossed by an object.
I'm dropping the fold entirely rather than adjusting it, because
computePointerICmp() fully subsumes it (just with correct predicate
handling).
Differential Revision: https://reviews.llvm.org/D113343
Currently strip.invariant/launder.invariant are handled by
constructing constant expressions with the intrinsics skipped.
This takes an alternative approach of accumulating the offset
using stripAndAccumulateConstantOffsets(), with a flag to look
through invariant.group intrinsics.
Differential Revision: https://reviews.llvm.org/D112382
Currently the fadd optimizations in InstSimplify don't know how to do this
NoSignedZeros "X + 0.0 ==> X" fold when using the constrained intrinsics.
This adds the support.
This review is derived from D106362 with some improvements from D107285
and is a follow-on to D111085.
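A sketch (assuming fast-math flags can be carried on the constrained
call, with non-strict exception behavior):
declare float @llvm.experimental.constrained.fadd.f32(float, float, metadata, metadata)
define float @f(float %x) strictfp {
  %r = call nsz float @llvm.experimental.constrained.fadd.f32(
           float %x, float 0.000000e+00,
           metadata !"round.tonearest", metadata !"fpexcept.ignore") strictfp
  ret float %r   ; with nsz, simplifies to %x
}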
Differential Revision: https://reviews.llvm.org/D111450
Currently the fadd optimizations in InstSimplify don't know how to do this
"X + -0.0 ==> X" fold when using the constrained intrinsics. This adds the
support.
This commit is derived from D106362 with some improvements from D107285.
Differential Revision: https://reviews.llvm.org/D111085
https://alive2.llvm.org/ce/z/QagQMn
This fold is handled by instcombine via SimplifyUsingDistributiveLaws(),
but we are missing the sibling fold for 'logical and' (implemented with
'select'). Retrofitting the code in instcombine looks much harder
than just adding a small adjustment here, and this is potentially more
efficient and beneficial to other passes.
Stop using APInt constructors and methods that were soft-deprecated in
D109483. This fixes all the uses I found in llvm, except for the APInt
unit tests which should still test the deprecated methods.
Differential Revision: https://reviews.llvm.org/D110807
This renames the primary methods for creating a zero value to `getZero`
instead of `getNullValue` and renames predicates like `isAllOnesValue`
to simply `isAllOnes`. This achieves two things:
1) This starts standardizing predicates across the LLVM codebase,
following (in this case) ConstantInt. The word "Value" doesn't
convey anything of merit, and is missing in some of the other things.
2) Calling an integer "null" doesn't make any sense. The original sin
here is mine and I've regretted it for years. This moves us to calling
it "zero" instead, which is correct!
APInt is widely used and I don't think anyone is keen to take massive source
breakage on anything so core, at least not all in one go. As such, this
doesn't actually delete any entrypoints, it "soft deprecates" them with a
comment.
Included in this patch are changes to a bunch of the codebase, but there are
more. We should normalize SelectionDAG and other APIs as well, which would
make the API change more mechanical.
Differential Revision: https://reviews.llvm.org/D109483
These are similar to the rotate pattern added with:
dcf659e821
...but we don't have guard ops on the shift amount,
so we don't canonicalize to the intrinsic.
declare void @llvm.assume(i1)
define i32 @src(i32 %shamt, i32 %bitwidth) {
; subtract must be in range of bitwidth
%lt = icmp ule i32 %bitwidth, 32
call void @llvm.assume(i1 %lt)
%r = lshr i32 -1, %shamt
%s = sub i32 %bitwidth, %shamt
%l = shl i32 -1, %s
%o = or i32 %r, %l
ret i32 %o
}
define i32 @tgt(i32 %shamt, i32 %bitwidth) {
ret i32 -1
}
https://alive2.llvm.org/ce/z/aF7WHx
This is already done within InstCombine:
https://alive2.llvm.org/ce/z/MiGE22
...but leaving it out of analysis makes it
harder to avoid infinite loops there.
D106850 introduced a simplification for llvm.vscale by looking at the
surrounding function's vscale_range attributes. The call that's being
simplified may not yet have been inserted into the IR. This happens for
example during function cloning.
This patch fixes the issue by checking whether the instruction has a
parent basic block.
Users, especially the Attributor, might replace multiple operands at
once. The actual implementation of simplifyWithOpReplaced is able to
handle that just fine; the interface simply did not allow replacing
more than one operand at a time. This exposes a more generic
interface without intended changes for existing code.
Differential Revision: https://reviews.llvm.org/D106189
Currently InstructionSimplify.cpp knows how to simplify floating point
instructions that have a NaN operand. It does not know how to handle the
matching constrained FP intrinsic.
This patch teaches it how to simplify so long as the exception handling
is not "fpexcept.strict".
Differential Revision: https://reviews.llvm.org/D103169
If any operand of a math op is poison, that takes
precedence over general undef/NaN.
This should not be visible with binary ops because
it requires 2 constant operands to trigger (and if
both operands of a binop are constant, that should
get handled first in ConstantFolding).
We already have a fold for variable index with constant vector,
but if we can determine a scalar splat value, then it does not
matter whether that value is constant or not.
We overlooked this fold in D102404 and earlier patches,
but the fixed vector variant is shown in:
https://llvm.org/PR50817
Alive2 agrees on that:
https://alive2.llvm.org/ce/z/HpijPC
The same logic applies to scalable vectors.
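A scalable-vector sketch of the fold:
define i32 @f(i32 %v, i32 %i) {
  %ins = insertelement <vscale x 4 x i32> poison, i32 %v, i32 0
  %s = shufflevector <vscale x 4 x i32> %ins, <vscale x 4 x i32> poison,
                     <vscale x 4 x i32> zeroinitializer
  %e = extractelement <vscale x 4 x i32> %s, i32 %i
  ret i32 %e   ; simplifies to %v, the splatted scalar
}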
Differential Revision: https://reviews.llvm.org/D104867
This is the cause of the miscompile in:
https://llvm.org/PR50944
The problem has likely existed for some time, but it was made visible with:
5af8bacc94 ( D104661 )
handleOtherCmpSelSimplifications() assumed it could convert select of
constants to bool logic ops, but that does not work with poison.
We had a very similar construct in InstCombine, so the fix here
mimics the fix there.
The bug is in instsimplify, but I'm not sure how to reproduce it outside of
instcombine. The reason this is visible in instcombine is because we have a
hack (FIXME) to bypass simplification of a select when it has an icmp user:
955f125899/llvm/lib/Transforms/InstCombine/InstCombineSelect.cpp (L2632)
So we get to an unusual case where we are trying to simplify an
instruction that has an operand that would already have been
simplified if we had processed it in normal order.
Differential Revision: https://reviews.llvm.org/D105298
This adds more poison folding optimizations to InstSimplify.
Since all binary operators propagate poison, these are fine.
Also, the precondition of `select cond, undef, x` -> `x` is relaxed to allow the case when `x` is undef.
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D104661
We already have this fold:
fadd float poison, 1.0 --> poison
...via ConstantFolding, so this makes the behavior consistent
if the other operand(s) are non-constant.
The fold for undef was added before poison existed as a
value/type in IR.
This came up in D102673 / D103169
because we're trying to sort out the more complicated handling
for constrained math ops.
We should have the handling for the regular instructions done
first, so we can build on that (or diverge as needed).
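The fold, spelled out:
define float @f(float %x) {
  %r = fadd float poison, %x
  ret float %r   ; simplifies to poison, matching the constant-folded case
}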
Differential Revision: https://reviews.llvm.org/D104383
We can look through invariant group intrinsics for the purposes of
simplifying the result of a load.
Since intrinsics can't be constants, and we don't want to completely
rewrite load constant folding, we convert the load operand to a
constant. For GEPs and bitcasts we just treat them as constants. For
invariant group intrinsics, we treat them as a bitcast.
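A sketch (current opaque-pointer syntax):
@g = constant i32 42
declare ptr @llvm.launder.invariant.group.p0(ptr)
define i32 @f() {
  %p = call ptr @llvm.launder.invariant.group.p0(ptr @g)
  %v = load i32, ptr %p
  ret i32 %v   ; simplifies to 42 by looking through the intrinsic
}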
Relanding with a check for self-referential values.
Reviewed By: lebedev.ri
Differential Revision: https://reviews.llvm.org/D101103
This patch allows scalable vectors to also use the fold that already
exists for fixed vectors, but only when the lane index is lower than
the minimum number of elements of the vector.
Differential Revision: https://reviews.llvm.org/D102404
We can look through invariant group intrinsics for the purposes of
simplifying the result of a load.
Since intrinsics can't be constants, and we don't want to completely
rewrite load constant folding, we convert the load operand to a
constant. For GEPs and bitcasts we just treat them as constants. For
invariant group intrinsics, we treat them as a bitcast.
Reviewed By: lebedev.ri
Differential Revision: https://reviews.llvm.org/D101103
This is similar to the fix in c590a9880d ( PR49832 ), but
we missed handling the pattern for select of bools (no compare
inst).
We can't substitute a vector value because the equality condition
replacement that we are attempting requires that the condition
is true/false for the entire value. Vector select can be partly
true/false.
I added an assert for vector types, so we shouldn't hit this again.
Fixed formatting while auditing the callers.
https://llvm.org/PR50500
The semantics of select with undefined/poison condition
are not explicitly stated in the LangRef, but this matches
comments in the code and Alive2 appears to concur:
https://alive2.llvm.org/ce/z/KXytmd
We can find this pattern after demanded elements transforms.
As noted in D101191, fuzzers are finding infinite loops because
we may not account for this pattern in other passes.
The previous rule:
(insert_vector _, (extract_vector X, 0), 0) -> X
is not quite correct. The correct fold should be:
(insert_vector Y, (extract_vector X, 0), 0) -> X
where: Y is X, or Y is undef
This commit updates the pattern.
Reviewed By: peterwaller-arm, paulwalker-arm
Differential Revision: https://reviews.llvm.org/D102699
This commit removes some redundant {insert,extract}_vector intrinsic
chains by implementing the following patterns as instsimplifies:
(insert_vector _, (extract_vector X, 0), 0) -> X
(extract_vector (insert_vector _, X, 0), 0) -> X
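For example (a sketch using the current, non-experimental intrinsic
names; the inserted-into value is undef, so the fold is sound):
declare <4 x i32> @llvm.vector.extract.v4i32.nxv4i32(<vscale x 4 x i32>, i64)
declare <vscale x 4 x i32> @llvm.vector.insert.nxv4i32.v4i32(<vscale x 4 x i32>, <4 x i32>, i64)
define <vscale x 4 x i32> @f(<vscale x 4 x i32> %x) {
  %sub = call <4 x i32> @llvm.vector.extract.v4i32.nxv4i32(
             <vscale x 4 x i32> %x, i64 0)
  %res = call <vscale x 4 x i32> @llvm.vector.insert.nxv4i32.v4i32(
             <vscale x 4 x i32> undef, <4 x i32> %sub, i64 0)
  ret <vscale x 4 x i32> %res   ; simplifies to %x
}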
Reviewed By: peterwaller-arm
Differential Revision: https://reviews.llvm.org/D101986
This reverts commit ea1a0d7c9a.
While this is strictly more powerful, it is also strictly slower.
InstSimplify intentionally does not perform many folds that it
is allowed to perform, if doing so requires a KnownBits calculation
that will be repeated in InstCombine.
Maybe it's worthwhile to do this here, but that needs a more
explicitly stated motivation, evaluated in a review.