The change implements constant folding of ‘llvm.experimental.constrained.fcmp’
and ‘llvm.experimental.constrained.fcmps’ intrinsics.
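For example (illustrative, not taken from the patch), a call with
constant arguments such as

  %r = call i1 @llvm.experimental.constrained.fcmp.f64(double 1.0, double 2.0, metadata !"oeq", metadata !"fpexcept.strict")

can be folded to false, since comparing the ordered constants 1.0 and
2.0 raises no floating-point exception.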
Differential Revision: https://reviews.llvm.org/D110322
With opaque pointers, it's possible to directly call a function with
a different signature, without an intermediate bitcast. However,
lots of code using getCalledFunction() reasonably assumes that the
signatures match (which is always true without opaque pointers).
Add an explicit check to that effect.
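A hypothetical example of such a call (function names are made up):

  declare void @callee(i32)

  define void @caller() {
    ; With opaque pointers this direct call is valid IR even though
    ; the call site's signature does not match the declaration.
    call void @callee(float 1.0)
    ret void
  }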
The test case is from D105313, where I ran into the problem, but on
further investigation this also affects lots of other code; we just
have little test coverage for mismatching signatures. The change from
D105313 is still desirable for other reasons, but this patch
addresses the root problem when it comes to opaque pointers.
Differential Revision: https://reviews.llvm.org/D105733
Use the new PM syntax when specifying the pipeline in regression
tests that previously ran
"opt -newgvn ..."
Instead, we now do
"opt -passes=newgvn ..."
Notice that this also changes the aa-pipeline to the default
aa-pipeline instead of just basic-aa. Since these tests haven't
explicitly requested basic-aa in the past (unlike the test cases
updated in a separate patch involving "-basic-aa -newgvn"), it is
assumed that the exact aa-pipeline isn't important for the validity
of the test cases. An alternative could have been to add
-aa-pipeline=basic-aa to the run lines as well, but that might just
add clutter when the test cases do not care about the aa-pipeline.
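Concretely, such a run line would have been
"opt -aa-pipeline=basic-aa -passes=newgvn ..."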
This is another step to move away from the legacy PM syntax when
specifying passes in opt.
Differential Revision: https://reviews.llvm.org/D118341
The behavior in Analysis (knownbits) implements poison semantics already,
and we expect the transforms (for example, in instcombine) derived from
those semantics, so this patch changes the LangRef and remaining code to
be consistent. This is one more step in removing "undef" from LLVM.
Without this, I think https://github.com/llvm/llvm-project/issues/53330
has a legitimate complaint because that report wants to allow subsequent
code to mask off bits, and that is allowed with undef values. The clang
builtins are not actually documented anywhere AFAICT, but we might
want to add documentation to remove more uncertainty.
Differential Revision: https://reviews.llvm.org/D117912
Peculiarly, the necessary code to handle pointers (including the
check for non-integral address spaces) is already in place,
because we were already allowing vectors of pointers here, just
not plain pointers.
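A sketch of the plain-pointer case this enables (illustrative IR; the
vector-of-pointers analogue was already handled):

  @g = constant [2 x ptr] zeroinitializer

  define ptr @load_plain_ptr() {
    ; Loading a plain pointer can now be folded (here to null), with
    ; the non-integral address space check already in place.
    %v = load ptr, ptr @g
    ret ptr %v
  }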
The reinterpret load code will convert undef values into zero.
Check the uniform value case before it to produce a better result
for all-undef initializers.
However, the uniform value handling will return the uniform value
even if the access is out of bounds, while the reinterpret load
code will return undef. Add an explicit check to retain the
previous result in this case.
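A minimal sketch of the all-undef case (illustrative IR):

  @g = constant [2 x i32] undef

  define i32 @load_undef() {
    ; Checking the uniform value first yields undef here, where the
    ; reinterpret load path would have produced zero. An out-of-bounds
    ; load from @g still returns undef, as before.
    %v = load i32, ptr @g
    ret i32 %v
  }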
We could use knownbits on both operands for even more folds (and there are
already tests in place for that), but this is enough to recover the example
from:
https://github.com/llvm/llvm-project/issues/51934
(the tests are derived from the code in that example)
I am assuming no noticeable compile-time impact from this because udiv/urem
are rare opcodes.
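A sketch of the kind of single-operand knownbits fold involved
(hypothetical example, not taken from the issue):

  define i8 @urem_knownbits(i8 %x) {
    ; %a has at most the low two bits set, so knownbits proves it is
    ; less than 4 and the urem folds to %a (a udiv would fold to 0).
    %a = and i8 %x, 3
    %r = urem i8 %a, 4
    ret i8 %r
  }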
Differential Revision: https://reviews.llvm.org/D116616
D92270 updated constant expression folding to fold inbounds GEP to
poison if the base is undef. Apply the same logic to SimplifyGEPInst.
The justification is that we can choose an out-of-bounds pointer as base
pointer.
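A minimal illustration (hypothetical IR):

  define ptr @gep_undef_base(i64 %i) {
    ; The base is undef, so we may choose an out-of-bounds pointer
    ; for it, and the inbounds GEP folds to poison.
    %p = getelementptr inbounds i8, ptr undef, i64 %i
    ret ptr %p
  }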
Reviewed By: nikic, lebedev.ri
Differential Revision: https://reviews.llvm.org/D117015
In particular, this also preserves undef when loading from padding,
rather than converting it to zero through a different codepath.
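For example (illustrative, assuming the usual 4-byte alignment of i32):

  @g = constant { i8, i32 } { i8 1, i32 2 }

  define i8 @load_padding() {
    ; Bytes 1-3 of @g are padding; this load now folds to undef
    ; instead of zero.
    %p = getelementptr i8, ptr @g, i64 1
    %v = load i8, ptr %p
    ret i8 %v
  }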
This is the remaining part of D115924.
There are a number of places that specially handle loads from a
uniform value where all the bits are the same (zero, one, undef,
poison), because we a) don't care about the load offset in that
case and b) it bypasses casts that might not be legal generally but
do work with uniform values.
We had multiple implementations of this, with a different set of
supported values each time. This replaces two usages with a more
complete helper. Other usages will be replaced separately, because
they have larger impact.
This is part of D115924.
This folded (null + X) == g to false, but of course this is
incorrect if X is equal to the address of g.
Possibly this got confused with the null == g case, which is
already handled elsewhere.
This is now testing (null + g3) != g3 and still coming up with
"true" as the answer. The original case was a less obvious
miscompile with index overflow involved.
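Illustrative IR for the problematic pattern (using the test's @g3):

  @g3 = global i8 0

  define i1 @null_plus_g3() {
    ; If the index equals the address of @g3, both sides are equal,
    ; so this must not be folded to false.
    %p = getelementptr i8, ptr null, i64 ptrtoint (ptr @g3 to i64)
    %c = icmp eq ptr %p, @g3
    ret i1 %c
  }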
This fold is not correct, because indices might evaluate to zero
even if they are not a literal zero integer. Additionally, this
fold would be wrong (in the general case) for non-i8 types as well,
due to index overflow.
Drop this fold and instead let the target-dependent constant
folder compute the actual offset and fold the comparison based
on that.
This fold is incorrect, because it assumes that all indices are
non-zero. This happens to be true for the test as written, but
doesn't hold if we use an extern weak global instead, for which
ptrtoint might be zero.
Add separate tests for the simple constant int case.
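A sketch of why the assumption fails (hypothetical IR):

  @w = extern_weak global i8

  define i1 @cmp_weak(ptr %p) {
    ; If @w is absent at link time, ptrtoint (ptr @w to i64) is zero,
    ; so the GEP index can be zero and %q can equal %p.
    %q = getelementptr i8, ptr %p, i64 ptrtoint (ptr @w to i64)
    %c = icmp eq ptr %q, %p
    ret i1 %c
  }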
We can fold an equality or unsigned icmp between base+offset1 and
base+offset2 with inbounds offsets by comparing the offsets directly.
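For example (illustrative):

  @g = global [16 x i8] zeroinitializer

  define i1 @cmp_offsets() {
    ; Same base @g with inbounds offsets 4 and 8, so the unsigned
    ; compare reduces to icmp ult 4, 8, i.e. true.
    %p1 = getelementptr inbounds i8, ptr @g, i64 4
    %p2 = getelementptr inbounds i8, ptr @g, i64 8
    %c = icmp ult ptr %p1, %p2
    ret i1 %c
  }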
This replaces a pair of specialized folds that tried to reason
based on the GEP structure instead. One of those folds was plain
wrong (because it does not account for negative offsets), while
the other is unnecessarily complicated and limited (e.g. it will
fail with bitcasts involved).
The disadvantage of this change is that it requires data layout,
so the fold is no longer performed by datalayout-independent
constant folding. I don't think this is a loss in practice, but
it does regress the ConstantExprFold.ll test, which checks folding
without running any passes.
Differential Revision: https://reviews.llvm.org/D116332
We should not lose analysis precision when an 'add' has both no-wrap
flags (nsw and nuw) rather than just one or the other.
This patch is modeled on a similar construct that was added with
D59386.
I don't think it is possible to expose a problem with an unsigned
compare because of the way this was coded (nuw is handled first).
InstCombine has an assert that fires with the example from:
https://github.com/llvm/llvm-project/issues/52884
...because it was expecting InstSimplify to handle this kind of
pattern with an smax.
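A sketch of the kind of pattern involved (hypothetical, modeled
loosely on the issue):

  declare i8 @llvm.smax.i8(i8, i8)

  define i8 @smax_add(i8 %x) {
    ; With nsw present, %a >=s %x, so the smax simplifies to %a;
    ; this must not be missed just because nuw is also present.
    %a = add nuw nsw i8 %x, 1
    %m = call i8 @llvm.smax.i8(i8 %a, i8 %x)
    ret i8 %m
  }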
Fixes #52884
Differential Revision: https://reviews.llvm.org/D116322
An inbounds GEP may still cross the sign boundary, so signed icmps
cannot be folded (https://alive2.llvm.org/ce/z/XSgi4D). This was
previously fixed for other folds in this function, but this one
was missed.
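A sketch of the issue (illustrative IR):

  define i1 @signed_cmp(ptr %p) {
    ; Even with inbounds, %q may cross the sign boundary, so this
    ; cannot be folded to true.
    %q = getelementptr inbounds i8, ptr %p, i64 4
    %c = icmp slt ptr %p, %q
    ret i1 %c
  }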
Adding the following fold opportunity:
((A | B) ^ A) & ((A | B) ^ B) --> 0
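This holds because (A | B) ^ A == B & ~A and (A | B) ^ B == A & ~B,
and those two masks are disjoint. In IR (illustrative):

  define i8 @fold_to_zero(i8 %a, i8 %b) {
    %or = or i8 %a, %b
    %x1 = xor i8 %or, %a   ; == B & ~A
    %x2 = xor i8 %or, %b   ; == A & ~B
    %r = and i8 %x1, %x2   ; disjoint masks, always 0
    ret i8 %r
  }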
Reviewed By: spatel, rampitec
Differential Revision: https://reviews.llvm.org/D115755
This fixes the assertion failure reported at
https://reviews.llvm.org/D114889#3198921 with a straightforward
check, until the cleaner fix in D115924 can be reapplied.
This reverts commit 9fd4f80e33.
This breaks SingleSource/Regression/C/gcc-c-torture/execute/pr19687.c
in test-suite. Either the test is incorrect, or clang is generating
incorrect union initialization code. I've submitted
https://reviews.llvm.org/D115994 to fix the test, assuming my
interpretation is correct. Reverting this in the meantime as it
may take some time to resolve.
There are a number of places that specially handle loads from a
uniform value where all the bits are the same (zero, one, undef,
poison), because we a) don't care about the load offset in that
case and b) it bypasses casts that might not be legal generally
but do work with uniform values.
We had multiple implementations of this, with a different set of
supported values each time, as well as incomplete type checks in
some cases. In particular, this fixes the assertion reported in
https://reviews.llvm.org/D114889#3198921, as well as a similar
assertion that could be triggered via constant folding.
Differential Revision: https://reviews.llvm.org/D115924
Usually the case where the types are the same ends up being handled
fine because it's legal to do a trivial bitcast to the same type.
However, this is not true for aggregate types. Short-circuit the
whole code if the types match exactly to account for this.
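A sketch of the aggregate case this addresses (illustrative IR):

  @g = constant { i32, i32 } { i32 1, i32 2 }

  define { i32, i32 } @load_agg() {
    ; The types match exactly, but aggregates cannot be bitcast,
    ; so the load is now folded before any cast logic runs.
    %v = load { i32, i32 }, ptr @g
    ret { i32, i32 } %v
  }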
The test is switched to use -instsimplify as it is in the
InstSimplify directory. In this particular case InstCombine does
fold the load (in a very roundabout way), but InstSimplify does not.
Refer to https://llvm.org/PR52546.
Simplifies the following cases:
not(X) == 0 -> X != 0 -> X
not(X) <=u 0 -> X >u 0 -> X
not(X) >=s 0 -> X <s 0 -> X
not(X) != 1 -> X == 1 -> X
not(X) <=u 1 -> X >=u 1 -> X
not(X) >s 1 -> X <=s -1 -> X
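In IR, the first of these looks like (illustrative):

  define i1 @not_eq_zero(i1 %x) {
    ; not(X) == 0 -> X
    %n = xor i1 %x, true
    %c = icmp eq i1 %n, false
    ret i1 %c
  }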
Differential Revision: https://reviews.llvm.org/D114666
See D114666 for proposed code change to instsimplify.
The difference between the CHECK results of these 2 tests
highlights missed folds in instsimplify
(e.g. (icmp eq (xor X, true), false) -> X) that are
already being handled by instcombine.
The tests are based on:
llvm/test/Transforms/InstSimplify/icmp-bool-constant.ll
Differential Revision: https://reviews.llvm.org/D115209
Reduce code duplication for commutative pattern matching
and fix a miscompile.
We can't safely propagate an undef element in this transform:
https://alive2.llvm.org/ce/z/s5xy55