30a12f3f63 switched the type check
to use the GEP result type rather than the GEP operand type.
However, the GEP result types may match even if the operand types
don't, e.g. when a GEP with a scalar base and a vector index is
compared against a GEP with a vector base.
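A sketch of the problematic case (hypothetical IR): both GEPs produce
a <2 x ptr> result, so the result types match even though the base
operand types differ.

%g1 = getelementptr i8, ptr %p, <2 x i64> %idx
%g2 = getelementptr i8, <2 x ptr> %q, <2 x i64> %idx
%cmp = icmp eq <2 x ptr> %g1, %g2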
Fixes https://github.com/llvm/llvm-project/issues/55363.
Try to push an icmp into a select even if the icmp operand isn't
constant - perform a generic SimplifyICmpInst instead.
This doesn't appear to impact compile-time much, and forming
logical and/or is generally profitable, as we have very good
support for them.
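A sketch of what this enables (hypothetical values; note %y is not a
constant):

%s = select i1 %c, i8 %x, i8 %y
%r = icmp eq i8 %s, %y
; "icmp eq i8 %y, %y" simplifies to true, so the icmp can be
; pushed in: %r = select i1 %c, i1 (icmp eq i8 %x, %y), i1 true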
Normally the index type will already be canonicalized here, but
this is not guaranteed depending on visitation order. The code
was already accounting for a potentially needed sext, but a trunc
may also be needed.
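For example (hypothetical widths): if the GEP index type is i32 but the
constant offset was computed as i64, the constant must be truncated to
i32; in the opposite case it must be sign-extended.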
Add a ConstantExpr::getSExtOrTrunc() helper method to make this
simpler. This matches the corresponding IRBuilder method in behavior.
Fixes https://github.com/llvm/llvm-project/issues/55228.
This is a hacky fix for:
https://github.com/llvm/llvm-project/issues/54558
As discussed there, codegen regressed when we opened up this transform
to allow extra uses (61580d0949), and it's not clear how to undo the
transforms at a later stage of compilation.
As noted in the code comments, there's a set of remaining folds that
are still limited to one-use, so we can try harder to refine and
expand the limitations on these folds, but it's likely to be an
up-and-down battle as we find and overcome similar regressions.
Differential Revision: https://reviews.llvm.org/D122909
This is a retry of 9397bdc67e - that was reverted until
we had a clang warning in place to alert users about a
possible mistake in source. The warning was added with
ab982eace6.
This is noted as a missing clang warning in #54222,
but it is also a missing optimization opportunity.
Alive2 proofs:
https://alive2.llvm.org/ce/z/Q8drDq
https://alive2.llvm.org/ce/z/pE6LRt
I don't see a single conversion for all predicates
using "getFCmpCode" logic, so other predicates are
left as a TODO item.
X (any pred) -X --> X (any pred) 0.0
This works with all FP values and preserves FMF.
Alive2 examples:
https://alive2.llvm.org/ce/z/dj6jhp
This can also create one of the patterns that we match as "fabs"
as shown in one of the test diffs.
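One concrete instance (a sketch; the predicate is arbitrary):

%n = fneg float %x
%r = fcmp olt float %x, %n
; --> %r = fcmp olt float %x, 0.0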
This reverts commit 9397bdc67e.
This optimization is likely to surprise programmers as seen
in post-commit comments, so we should add a clang warning
first (that is proposed in D121306).
As discussed on Issue #32161, this fold can be generalized a lot more than it currently is, but this patch at least adds vector support.
Differential Revision: https://reviews.llvm.org/D121358
This is noted as a missing clang warning in #54222
(and we should still make that enhancement).
Alive2 proofs:
https://alive2.llvm.org/ce/z/Q8drDq
https://alive2.llvm.org/ce/z/pE6LRt
I don't see a single conversion for all predicates
using "getFCmpCode" logic, so other predicates are
left as a TODO item.
This is a recommit without changes. I originally reverted this
due to a significant code-size regression on tramp3d-v4, however
further investigation showed that in the tramp3d-v4 case this
change enables additional optimizations (in particular more
jump threading), which happens to reduce the size of a function
just enough to be eligible for inlining at hot callsites, which
results in the code size increase. As such, this was just bad
luck.
-----
This one-use limitation is artificial; we do not increase
instruction count if we perform the fold with multiple uses. The
motivating case is shown in @sub_eq_zero_select, where the one-use
limitation causes us to miss a subsequent select fold.
I believe the backend is pretty good about reusing flag-producing
subs for cmps with same operands, so I think doing this is fine.
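A sketch of the motivating pattern (modeled on @sub_eq_zero_select,
values hypothetical):

%sub = sub i32 %x, %y
%cmp = icmp eq i32 %sub, 0        ; folds to: icmp eq i32 %x, %y
%sel = select i1 %cmp, i32 42, i32 %sub
; %sub has a second use, so the old one-use check blocked the fold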
Differential Revision: https://reviews.llvm.org/D120337
This one-use limitation is artificial; we do not increase
instruction count if we perform the fold with multiple uses. The
motivating case is shown in @sub_eq_zero_select, where the one-use
limitation causes us to miss a subsequent select fold.
I believe the backend is pretty good about reusing flag-producing
subs for cmps with same operands, so I think doing this is fine.
Differential Revision: https://reviews.llvm.org/D120337
Replace the matchSelectPattern pattern match with the more general m_SMin so that it can handle smin intrinsics as well as the icmp+select pattern.
Noticed while reviewing regressions from D98152
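For reference, the two equivalent forms that m_SMin now matches (a
sketch, hypothetical values):

%m1 = call i8 @llvm.smin.i8(i8 %x, i8 %y)   ; smin intrinsic
%c = icmp slt i8 %x, %y
%m2 = select i1 %c, i8 %x, i8 %y            ; icmp+select smin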
Following Sanjay's proposal from discussion in D118317, this patch
generalizes and-reduce handling to fold the following pattern
```
icmp ne (bitcast(icmp ne (lhs, rhs)), 0)
```
into
```
icmp ne (bitcast(lhs), bitcast(rhs))
```
https://alive2.llvm.org/ce/z/WDcuJ_
Differential Revision: https://reviews.llvm.org/D118431
Reviewed By: lebedev.ri
It causes builds to fail with
llvm/include/llvm/Support/Casting.h:269:
typename llvm::cast_retty<X, Y*>::ret_type llvm::cast(Y*)
[with X = llvm::IntegerType; Y = const llvm::Type; typename llvm::cast_retty<X, Y*>::ret_type = const llvm::IntegerType*]:
Assertion `isa<X>(Val) && "cast<Ty>() argument of incompatible type!"' failed.
See the code review for link to a reproducer.
> This patch introduces folding of the and-reduce idiom and generates code
> that is easier to read and which is less costly in terms of icmp operations.
> The folding is
> ```
> icmp eq (bitcast(icmp ne (lhs, rhs)), 0)
> ```
> into
> ```
> icmp eq(bitcast(lhs), bitcast(rhs))
> ```
>
> See PR53419.
>
> Differential Revision: https://reviews.llvm.org/D118317
> Reviewed By: lebedev.ri, spatel
This reverts commit 8599bb0f26.
This also reverts the dependent change:
"[Test] Add 'ne' tests for and-reduce pattern folding"
This reverts commit a4aaa59953.
This patch introduces folding of the and-reduce idiom and generates code
that is easier to read and which is less costly in terms of icmp operations.
The folding is
```
icmp eq (bitcast(icmp ne (lhs, rhs)), 0)
```
into
```
icmp eq(bitcast(lhs), bitcast(rhs))
```
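A concrete instance (hypothetical types):
```
%ne  = icmp ne <4 x i32> %lhs, %rhs
%bc  = bitcast <4 x i1> %ne to i4
%res = icmp eq i4 %bc, 0
```
becomes
```
%lbc = bitcast <4 x i32> %lhs to i128
%rbc = bitcast <4 x i32> %rhs to i128
%res = icmp eq i128 %lbc, %rbc
```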
See PR53419.
Differential Revision: https://reviews.llvm.org/D118317
Reviewed By: lebedev.ri, spatel
This commit optimizes the code sequence:
icmp-XXX (ashr-exact (X, C_1), C_2).
Instcombine already implements this optimization for sgt, and this
patch adds support for additional predicates. The transformation is legal
for all predicates if the 'exact' flag is set, and for SGE, UGE, SLT, ULT
when the exact flag is not present.
This pattern is found in the bounds-check code of the std::vector at()
method.
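One instance (a sketch, hypothetical constants):

%s = ashr exact i32 %x, 2
%r = icmp ult i32 %s, 5
; --> %r = icmp ult i32 %x, 20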
Alive2 proof:
https://alive2.llvm.org/ce/z/JT_WL8
Differential Revision: https://reviews.llvm.org/D117252
This function returns an upper bound on the number of bits needed
to represent the signed value. Use "Max" to match similar functions
in KnownBits like countMaxActiveBits.
Rename APInt::getMinSignedBits->getSignificantBits. Keeping the old
name around to keep this patch size down. Will do a bulk rename as
follow up.
Rename KnownBits::countMaxSignedBits->countMaxSignificantBits.
Reviewed By: lebedev.ri, RKSimon, spatel
Differential Revision: https://reviews.llvm.org/D116522
((Op1 + C) & C) u< Op1 --> Op1 != 0
((Op1 + C) & C) u>= Op1 --> Op1 == 0
Op0 u> ((Op0 + C) & C) --> Op0 != 0
Op0 u<= ((Op0 + C) & C) --> Op0 == 0
https://alive2.llvm.org/ce/z/iUfXJN
https://alive2.llvm.org/ce/z/caAtjj
define i1 @src(i8 %x, i8 %y) {
; the add/mask must be with a low-bit mask (0x01ff...)
%y1 = add i8 %y, 1
%pop = call i8 @llvm.ctpop.i8(i8 %y1)
%ismask = icmp eq i8 %pop, 1
call void @llvm.assume(i1 %ismask)
%a = add i8 %x, %y
%m = and i8 %a, %y
%r = icmp ult i8 %m, %x
ret i1 %r
}
define i1 @tgt(i8 %x, i8 %y) {
%r = icmp ne i8 %x, 0
ret i1 %r
}
I suspect this can be generalized in some way, but this
is the pattern I'm seeing in a motivating test based on
issue #52851.
We need to make sure that the GEP source element types match.
A caveat here is that the used GEP source element type can be
arbitrary if no offset is stripped from the original GEP -- the
transform is somewhat inconsistent in that it always starts from
a GEP, but might not actually look through it if it has multiple
indices.
We need to also check that the source element type is the same,
otherwise the indices may have different meanings. The added
addrspacecast demonstrates that we do still need to check the
pointer type.
This is a follow-on suggested in D112634.
Two folds that were added with that patch are subsumed in the call to
decomposeBitTestICmp, and two other folds are potentially inverted.
The deleted folds were very specialized by instcombine standards
because they were restricted to legal integer types based on the data
layout. This generalizes the canonical form independent of target/types.
This change has a reasonable chance of exposing regressions either in
IR or codegen, but I don't have any evidence for either of those yet.
A spot check of asm across several in-tree targets shows variations
that I expect are mostly neutral.
We have one improvement in an existing IR test that I noted with a
comment. Using mask ops might also make more code match with D114272.
Differential Revision: https://reviews.llvm.org/D114386
InstCombine converts range tests of the form (X > C1 && X < C2) or
(X < C1 || X > C2) into checks of the form (X + C3 < C4) or
(X + C3 > C4). It is possible to express all range tests in either
of these forms (with different choices of constants), but currently
neither of them is considered canonical. We may have equivalent
range tests using either ult or ugt.
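For example (i8, hypothetical constants):

X s> 5 && X s< 10  -->  (X + -6) u< 4
X s< 6 || X s> 9   -->  (X + -6) u> 3, equivalently (X + -10) u< -4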
This proposes to canonicalize all range tests to use ult. An
alternative would be to canonicalize to either ult or ugt depending
on the specific constants involved -- e.g. in practice we currently
generate ult for && style ranges and ugt for || style ranges when
going through the insertRangeTest() helper. In fact, the "clamp like"
fold was relying on this, which is why I had to tweak it to not
assume whether inversion is needed based on just the predicate.
Proof: https://alive2.llvm.org/ce/z/_SP_rQ
Differential Revision: https://reviews.llvm.org/D113366
This introduces a new ComputeMinSignedBits method for ValueTracking that
returns the BitWidth - SignBits + 1 from ComputeNumSignBits, and represents
the minimum bit size for the value as a signed integer. Similar to the
existing APInt::getMinSignedBits method, this can make some of the
reasoning around ComputeNumSignBits more natural.
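For example (hypothetical numbers): for an i32 value with 25 known sign
bits, ComputeMinSignedBits returns 32 - 25 + 1 = 8, i.e. the value fits
in a signed 8-bit integer.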
See https://reviews.llvm.org/D112298
Fixes a crash observed by oss-fuzz in 39934. The issue at hand is that the code expects a pattern match on m_Mul to imply the operand is a mul instruction; however, mul constexprs are also valid here.
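A sketch of such an operand (hypothetical global @g): m_Mul matches
this constant expression even though it is not a mul instruction.

%r = icmp eq i64 mul (i64 ptrtoint (i32* @g to i64), i64 8), 0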
Inspired by D111968, provide an isNegatedPowerOf2() wrapper instead of obfuscating code with (-Value).isPowerOf2() patterns, which I'm sure are likely avenues for typos.
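For example, for i8 the value -8 is 0b11111000; negating it gives 8, a
power of two, so it qualifies. The negated powers of two are exactly
the patterns of a run of leading ones followed by trailing zeros.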
Differential Revision: https://reviews.llvm.org/D111998
There may be some other patterns like this or a generalization,
but this is an example that I noticed would definitely regress
with a planned follow-up to D111410.
https://alive2.llvm.org/ce/z/GVpQDb
The test diffs show that we have better analysis/folds for 'add'
(although we should at least have the simplifications
independently, so we don't have the one-use restriction).
This is related to solving regressions that would appear in
transforms related to D111410, and that is part of a series
of enhancements that may eventually help solve PR34047.
https://alive2.llvm.org/ce/z/3tB9KG
define i1 @src(i8 %x, i8 %C, i8 %C2) {
%sub = sub nuw i8 %C2, %x
%r = icmp slt i8 %sub, %C
ret i1 %r
}
define i1 @tgt(i8 %x, i8 %C, i8 %C2) {
%Cnot = xor i8 %C, -1
%C2not = xor i8 %C2, -1
%add = add nuw i8 %x, %C2not
%r = icmp sgt i8 %add, %Cnot
ret i1 %r
}
There were 2 related but over-specified folds for:
C1 - X == C
One allowed multi-use but was limited to equal constants.
The other allowed different constants but disallowed multi-use.
This combines the 2 folds into a more general match.
The test diffs show the multi-use cases that were falling
through the cracks.
https://alive2.llvm.org/ce/z/4_hEt2
define i1 @src(i8 %x, i8 %subC, i8 %C) {
%s = sub i8 %subC, %x
%r = icmp eq i8 %s, %C
ret i1 %r
}
define i1 @tgt(i8 %x, i8 %subC, i8 %C) {
%newC = sub i8 %subC, %C
%isneg = icmp eq i8 %x, %newC
ret i1 %isneg
}