Commit Graph

273 Commits

Author SHA1 Message Date
Nikita Popov e3cec17b2d [InstSimplify] Remove incorrect icmp of gep fold (PR52429)
As described in https://bugs.llvm.org/show_bug.cgi?id=52429 this
fold is incorrect, because inbounds only guarantees that the
pointers don't wrap in the unsigned space: It is possible that
the sign boundary is crossed by an object.

I'm dropping the fold entirely rather than adjusting it, because
computePointerICmp() fully subsumes it (just with correct predicate
handling).

Differential Revision: https://reviews.llvm.org/D113343
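A hypothetical sketch (not from the patch) of why signed reasoning is unsound here: inbounds only rules out unsigned wrap, so even a positive-offset GEP may cross the signed boundary.

  define i1 @signed_gep_cmp(i8* %p) {
    %gep = getelementptr inbounds i8, i8* %p, i64 1
    %cmp = icmp sgt i8* %gep, %p   ; not guaranteed true: the object may straddle
    ret i1 %cmp                    ; the signed midpoint of the address space
  }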
2021-11-06 21:03:21 +01:00
Sanjay Patel 727e642e97 [InstCombine] generalize fold for mask-with-signbit-splat
(iN X s>> (N-1)) & Y --> (X < 0) ? Y : 0

https://alive2.llvm.org/ce/z/qeYhdz

I was looking at a missing abs() transform and found my way to this
generalization of an existing fold that was added with D67799.
As discussed in that review, we want to make sure codegen handles
this difference well, and for all of the targets/types that I
spot-checked, it looks good.

I am leaving the existing fold in place in this commit because
it covers a potentially missing icmp fold, but I plan to remove
that as a follow-up commit as suggested during review.

Differential Revision: https://reviews.llvm.org/D111410
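A concrete i8 instance of the generalized fold (an assumed sketch based on the formula above, not taken from the patch's tests):

  define i8 @src(i8 %x, i8 %y) {
    %sign = ashr i8 %x, 7          ; splat of the sign bit: -1 or 0
    %r = and i8 %sign, %y
    ret i8 %r
  }

  define i8 @tgt(i8 %x, i8 %y) {
    %isneg = icmp slt i8 %x, 0
    %r = select i1 %isneg, i8 %y, i8 0
    ret i8 %r
  }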
2021-10-15 16:25:48 -04:00
Sanjay Patel a35673f4cf [InstCombine] add tests for (i32 X s>> 31) & Y; NFC
Also regenerate some check lines to more accurately show
current transforms via name changes.
2021-10-08 11:07:57 -04:00
Sanjay Patel 72d991c42e [InstCombine] add/move tests for icmp with gep operand(s); NFC 2021-09-28 09:40:52 -04:00
Sanjay Patel a488c7879e [InstCombine] reduce signbit test of logic ops to cmp with zero
This is the pattern from the description of:
https://llvm.org/PR50816

There might be a way to generalize this to a smaller or more
generic pattern, but I have not found it yet.

https://alive2.llvm.org/ce/z/ShzJoF

define i1 @src(i8 %x) {
  %add = add i8 %x, -1
  %xor = xor i8 %x, -1
  %and = and i8 %add, %xor
  %r = icmp slt i8 %and, 0
  ret i1 %r
}

define i1 @tgt(i8 %x) {
  %r = icmp eq i8 %x, 0
  ret i1 %r
}
2021-07-12 09:01:26 -04:00
Sanjay Patel 4b9d698243 [InstCombine][tests] add tests for signbit + logic; NFC
PR50816
2021-07-12 09:01:26 -04:00
Sanjay Patel 9eb613b2de [InstSimplify] do not propagate poison from select arm to icmp user
This is the cause of the miscompile in:
https://llvm.org/PR50944

The problem has likely existed for some time, but it was made visible with:
5af8bacc94 ( D104661 )
handleOtherCmpSelSimplifications() assumed it could convert a select of
constants to bool logic ops, but that does not work with poison.
We had a very similar construct in InstCombine, so the fix here
mimics the fix there.

The bug is in instsimplify, but I'm not sure how to reproduce it outside of
instcombine. The reason this is visible in instcombine is because we have a
hack (FIXME) to bypass simplification of a select when it has an icmp user:
955f125899/llvm/lib/Transforms/InstCombine/InstCombineSelect.cpp (L2632)

So we get to an unusual case where we are trying to simplify an instruction
that has an operand that would have already simplified if we had processed
it in normal order.

Differential Revision: https://reviews.llvm.org/D105298
2021-07-01 17:40:07 -04:00
Sanjay Patel 09e3bf01b3 [InstCombine][test] add tests for icmp simplify miscompile (PR50944); NFC 2021-07-01 17:40:07 -04:00
Juneyoung Lee 395607af3c Reapply [ConstantFold] Fold more operations to poison
This was reverted to mitigate miscompiles caused by
the logical and/or to bitwise and/or fold. Reapply it now that
the underlying issue has been fixed by D101191.

-----

This patch folds more operations to poison.

Alive2 proof: https://alive2.llvm.org/ce/z/mxcb9G (it does not contain tests about div/rem because they fold to poison when raising UB)

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D92270
2021-05-13 16:04:12 +02:00
Juneyoung Lee 8a156d1c27 [InstCombine] Fully disable select to and/or i1 folding
This is a patch that disables the poison-unsafe select -> and/or i1 folding.

It has been blocking D72396 and also has been the source of a few miscompilations
described in llvm.org/pr49688.
D99674 conditionally blocked this folding and successfully fixed the latter one.
The former one was still blocked, and this patch addresses it.

Note that a few test functions that have a `_logical` suffix are now deoptimized.
These are created by @nikic to check the impact of disabling this optimization
by copying existing original functions and replacing and/or with select.

I can see that most of these are poison-unsafe; they can be revived by introducing
a freeze instruction. I left comments at the fcmp + select optimizations (or-fcmp.ll,
and-fcmp.ll) because I think they are good targets for the freeze fix.

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D101191
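To make the poison-unsafety concrete, a hedged illustration (assumed example, not from the patch): the select form shields poison in the untaken arm, the bitwise form does not.

  define i1 @logical_or(i1 %a, i1 %b) {
    %r = select i1 %a, i1 true, i1 %b   ; %a = true, %b = poison --> true
    ret i1 %r
  }

  define i1 @bitwise_or(i1 %a, i1 %b) {
    %r = or i1 %a, %b                   ; %a = true, %b = poison --> poison
    ret i1 %r
  }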
2021-05-06 09:29:52 +09:00
Roman Lebedev 046bb8ea7c
[NFC][InstCombine] Autogenerate some checklines being affected by upcoming change 2021-03-23 00:51:18 +03:00
Juneyoung Lee 06829034ca Revert "[ConstantFold] Fold more operations to poison"
This reverts commit 53040a968d due to its
bad interaction with the select i1 -> and/or i1 transformation.

This fixes:
https://bugs.llvm.org/show_bug.cgi?id=49005
https://bugs.llvm.org/show_bug.cgi?id=48435
2021-02-04 00:24:02 +09:00
Nikita Popov fb063c933f [InstCombine] Duplicate tests for logical and/or (NFC)
This replicates existing and/or tests to also test variants using
select. This should help us get a more accurate view on which
optimizations we're missing if we disable the select -> and/or
fold.
2021-01-12 21:50:41 +01:00
Roman Lebedev 897c985e1e
[InstCombine] Canonicalize SPF to abs intrinsic
This patch enables canonicalization of SPF_ABS and SPF_NABS
to the abs intrinsic.

This is a recommit; the original attempt was
05d4c4ebc2,
but it was reverted due to an apparent miscompile,
which has since been fixed by the previous commit.

Differential Revision: https://reviews.llvm.org/D87188
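A sketch of the canonicalization on the usual select-based abs shape (illustrative only, not taken from the patch):

  define i8 @src(i8 %x) {
    %neg = sub i8 0, %x
    %cmp = icmp sgt i8 %x, -1
    %abs = select i1 %cmp, i8 %x, i8 %neg
    ret i8 %abs
  }

  define i8 @tgt(i8 %x) {
    %abs = call i8 @llvm.abs.i8(i8 %x, i1 false)
    ret i8 %abs
  }

  declare i8 @llvm.abs.i8(i8, i1)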
2020-12-18 21:18:14 +03:00
Juneyoung Lee 53040a968d [ConstantFold] Fold more operations to poison
This patch folds more operations to poison.

Alive2 proof: https://alive2.llvm.org/ce/z/mxcb9G (it does not contain tests about div/rem because they fold to poison when raising UB)

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D92270
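One assumed example of the kind of constant expression affected (the full set of cases is in the Alive2 proof above): a shift amount that is not smaller than the bit width now folds to poison rather than undef.

  define i32 @oversized_shift() {
    %r = shl i32 1, 33   ; shift amount >= bit width: folds to poison
    ret i32 %r
  }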
2020-11-29 21:19:48 +09:00
Roman Lebedev e465f9c303
[InstCombine] Negator: - (C - %x) --> %x - C (PR47997)
This relaxes the one-use restriction on that `sub` fold,
since apparently the addition of Negator broke the
preexisting `C-(C2-X) --> X+(C-C2)` (with C=0) fold.
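An illustrative instance with C = 42 (hypothetical example, not from the patch's tests):

  define i8 @src(i8 %x) {
    %sub = sub i8 42, %x
    %neg = sub i8 0, %sub
    ret i8 %neg
  }

  define i8 @tgt(i8 %x) {
    %r = sub i8 %x, 42
    ret i8 %r
  }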
2020-11-03 16:06:51 +03:00
Roman Lebedev 482d65331b
[NFC][InstCombine] Add test coverage for PR47997 2020-11-03 16:06:50 +03:00
Simon Pilgrim de885f1b2a [InstCombine] Add (icmp ne A, 0) | (icmp ne B, 0) --> (icmp ne (A|B), 0) vector support
Scalar cases were already being handled by foldLogOpOfMaskedICmps (so this was dead code), but refactoring to support non-uniform vectors will take some time, so tweak this fold in the meantime.
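A sketch of the vector form of the fold (hypothetical example, not from the patch):

  define <2 x i1> @src(<2 x i8> %a, <2 x i8> %b) {
    %c1 = icmp ne <2 x i8> %a, zeroinitializer
    %c2 = icmp ne <2 x i8> %b, zeroinitializer
    %r = or <2 x i1> %c1, %c2
    ret <2 x i1> %r
  }

  define <2 x i1> @tgt(<2 x i8> %a, <2 x i8> %b) {
    %ab = or <2 x i8> %a, %b
    %r = icmp ne <2 x i8> %ab, zeroinitializer
    ret <2 x i1> %r
  }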
2020-10-19 15:41:21 +01:00
Simon Pilgrim ecd25086d1 [InstCombine] Add (icmp eq B, 0) | (icmp ult/gt A, B) -> (icmp ule A, B-1) vector support 2020-10-19 15:23:48 +01:00
Simon Pilgrim aba7275bb3 [InstCombine] Add (icmp ne A, 0) | (icmp ne B, 0) --> (icmp ne (A|B), 0) tests 2020-10-19 13:42:53 +01:00
Simon Pilgrim 3dd2f02bb0 [InstCombine] Add (icmp eq B, 0) | (icmp ult A, B) -> (icmp ule A, B-1) vector tests 2020-10-19 11:48:32 +01:00
Nikita Popov 13e19d2e7c Revert "[InstCombine] Canonicalize SPF_ABS to abs intrinsic"
This reverts commit 05d4c4ebc2.

mstorsjo reports a miscompile after this change in
https://reviews.llvm.org/D87188#2281093. Reverting until I can
investigate this.
2020-09-18 09:38:26 +02:00
Nikita Popov 05d4c4ebc2 [InstCombine] Canonicalize SPF_ABS to abs intrinsic
Enable canonicalization of SPF_ABS and SPF_NABS to the abs intrinsic.

To be conservative, the one-use check on the comparison is retained,
this may be relaxed if all goes well.

It's pretty likely that this will uncover places that are missing
handling for the abs() intrinsic. Please report any performance
regressions you see.

Differential Revision: https://reviews.llvm.org/D87188
2020-09-17 22:28:34 +02:00
Sanjay Patel 11d8eedfa5 [InstCombine] move/add tests for icmp with mul operands; NFC 2020-09-07 12:40:37 -04:00
Nikita Popov ada8a17d94 [InstCombine] Fold abs intrinsic eq zero
Following the same transform for the select version of abs.
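A sketch of the intrinsic form of the fold (hypothetical example, not from the patch): abs(x) is zero only when x is zero.

  define i1 @src(i8 %x) {
    %abs = call i8 @llvm.abs.i8(i8 %x, i1 false)
    %r = icmp eq i8 %abs, 0
    ret i1 %r
  }

  define i1 @tgt(i8 %x) {
    %r = icmp eq i8 %x, 0
    ret i1 %r
  }

  declare i8 @llvm.abs.i8(i8, i1)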
2020-09-05 15:11:38 +02:00
Nikita Popov 94c71d6aa1 [InstCombine] Add tests for abs intrinsic eq zero (NFC) 2020-09-05 15:11:38 +02:00
Sanjay Patel ec06b38130 [InstCombine] canonicalize 'not' ops before logical shifts
This reverses the existing transform that would uniformly canonicalize any 'xor' after any shift. In the case of logical shifts, that turns a 'not' into an arbitrary 'xor' with a constant, and that's probably not as good for analysis, SCEV, or codegen.

The SCEV motivating case is discussed in:
http://bugs.llvm.org/PR47136

There's an analysis motivating case at:
http://bugs.llvm.org/PR38781

I did draft a patch that would do the same for 'ashr' but that's questionable because it's just swapping the position of a 'not' and uncovers at least 2 missing folds that we would probably need to deal with as preliminary steps.

Alive proofs:
https://rise4fun.com/Alive/BBV

  Name: shift right of 'not'
  Pre: C2 == (-1 u>> C1)
  %a = lshr i8 %x, C1
  %r = xor i8 %a, C2
  =>
  %n = xor i8 %x, -1
  %r = lshr i8 %n, C1

  Name: shift left of 'not'
  Pre: C2 == (-1 << C1)
  %a = shl i8 %x, C1
  %r = xor i8 %a, C2
  =>
  %n = xor i8 %x, -1
  %r = shl i8 %n, C1

  Name: ashr of 'not'
  %a = ashr i8 %x, C1
  %r = xor i8 %a, -1
  =>
  %n = xor i8 %x, -1
  %r = ashr i8 %n, C1

Differential Revision: https://reviews.llvm.org/D86243
2020-08-22 09:38:13 -04:00
Roman Lebedev f5df5cd558
Recommit "[InstCombine] Negator: -(X << C) --> X * (-1 << C)"
This reverts commit ac70b37a00,
which reverted commit 8aeb2fe13a
because codegen tests were broken and I needed time to investigate.

This shows some regressions in tests, but they are all around GEPs,
so I'm not really sure how important those are.

https://rise4fun.com/Alive/1Gn
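An illustrative instance with C = 3 (hypothetical example, not from the patch): -(x << 3) becomes x * -8.

  define i8 @src(i8 %x) {
    %shl = shl i8 %x, 3
    %neg = sub i8 0, %shl
    ret i8 %neg
  }

  define i8 @tgt(i8 %x) {
    %r = mul i8 %x, -8   ; -1 << 3 == -8
    ret i8 %r
  }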
2020-08-05 15:59:13 +03:00
Roman Lebedev ac70b37a00
Revert "[InstCombine] Negator: -(X << C) --> X * (-1 << C)"
Breaks codegen tests, will recommit later.

This reverts commit 8aeb2fe13a.
2020-08-05 03:19:38 +03:00
Roman Lebedev 8aeb2fe13a
[InstCombine] Negator: -(X << C) --> X * (-1 << C)
This shows some regressions in tests, but they are all around GEPs,
so I'm not really sure how important those are.

https://rise4fun.com/Alive/1Gn
2020-08-05 03:13:14 +03:00
Roman Lebedev 8fd57b06a4
[NFC][InstCombine] Fix value names (s/%tmp/%i/) and autogenerate a few tests being affected by negator change 2020-08-05 03:12:14 +03:00
Simon Pilgrim 769b979930 [InstCombine] Add (vXi1 trunc(lshr(x,c))) -> icmp_eq(and(x,c')) support for non-uniform vectors
As noted on PR46531, we were only performing this transform on uniform vectors as we were using the m_APInt pattern matcher to extract the shift amount.

Differential Revision: https://reviews.llvm.org/D83035
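A sketch of the non-uniform vector case (hypothetical example; the exact canonical output may differ, e.g. icmp eq against the mask): truncating an lshr result to i1 tests a single bit, so it can be rewritten as a masked compare.

  define <2 x i1> @src(<2 x i8> %x) {
    %sh = lshr <2 x i8> %x, <i8 3, i8 5>
    %r = trunc <2 x i8> %sh to <2 x i1>
    ret <2 x i1> %r
  }

  define <2 x i1> @tgt(<2 x i8> %x) {
    %m = and <2 x i8> %x, <i8 8, i8 32>   ; 1 << 3, 1 << 5
    %r = icmp ne <2 x i8> %m, zeroinitializer
    ret <2 x i1> %r
  }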
2020-07-02 16:56:33 +01:00
Simon Pilgrim 103d62e131 [InstCombine] Add some (vXi1 trunc(lshr(x,c))) -> icmp_eq(and(x,c')) tests for vectors with undef elements
Suggested on D83035
2020-07-02 16:04:30 +01:00
Simon Pilgrim 421c02e5c6 [InstCombine] Add some (vXi1 trunc(lshr(x,c))) -> icmp_eq(and(x,c')) tests for non-uniform vectors
As noticed on PR46531
2020-07-02 11:56:51 +01:00
Sanjay Patel 4abab5c5ca [InstCombine] generalize canonicalization of masked equality comparisons
(X | MaskC) == C --> (X & ~MaskC) == C ^ MaskC
  (X | MaskC) != C --> (X & ~MaskC) != C ^ MaskC

We have more analysis for 'and' patterns and already lean this way
in the existing code, so this should be neutral or better in IR.

If this does not do as well in codegen, the problem already exists
and we should fix that based on target costs/heuristics.

http://volta.cs.utah.edu:8080/z/oP3ecL

define void @src(i8 %x, i8 %OrC, i8 %C, i1* %p0, i1* %p1) {
  %or = or i8 %x, %OrC
  %eq = icmp eq i8 %or, %C
  store i1 %eq, i1* %p0

  %ne = icmp ne i8 %or, %C
  store i1 %ne, i1* %p1
  ret void
}

define void @tgt(i8 %x, i8 %OrC, i8 %C, i1* %p0, i1* %p1) {
  %NotOrC = xor i8 %OrC, -1
  %a = and i8 %x, %NotOrC
  %NewC = xor i8 %C, %OrC
  %eq = icmp eq i8 %a, %NewC
  store i1 %eq, i1* %p0

  %ne = icmp ne i8 %a, %NewC
  store i1 %ne, i1* %p1
  ret void
}
2020-04-25 11:31:57 -04:00
Eric Christopher 45dca04395 Exclude bitcast and ext/trunc signbit optimization on ppc_fp128
Revision a1c05fe <https://reviews.llvm.org/rGa1c05fe20f3def1f1be9f50d2adefc6b6f1578ad>
removed bitcast from the list of problematic transformations, however:

  %97 = fptrunc ppc_fp128 %2 to double            // we need to check ppc_fp128 here to prevent the transformation
  %98 = bitcast double %97 to i64                 // a1c05fe checks ppc_fp128 here
  %99 = icmp slt i64 %98, 0
  %100 = zext i1 %99 to i8
  store i8 %100, i8* %7, align 1

so this patch does that. I'm also disabling it in the presence of an extend, just in case.

I verified separately that the hashes of -std::infinity and std::infinity no longer match.

Differential Revision: https://reviews.llvm.org/D77911
2020-04-10 17:07:55 -07:00
Sanjay Patel a1c05fe20f [InstCombine] exclude bitcast of ppc_fp128 in icmp signbit fold
Based on the post-commit comments for rG0f56bbc, there might
be a problem with this transform:

(bitcast (fpext/fptrunc X) to iX) < 0 --> (bitcast X to iY) < 0

...and the ppc_fp128 data type, so conservatively bypass if we
are bitcasting a ppc_fp128.

We might be able to account for endian or other differences to
enable this for PowerPC again if that is useful.

Differential Revision: https://reviews.llvm.org/D77642
2020-04-08 08:56:19 -04:00
Sanjay Patel e268ec8e0d [InstCombine] add icmp+cast tests for ppc_fp128; NFC
See post-commit comments for rG0f56bbc.
2020-04-07 07:35:01 -04:00
Sanjay Patel 0f56bbc1a5 [InstCombine] reduce FP-casted and bitcasted signbit check
PR45305:
https://bugs.llvm.org/show_bug.cgi?id=45305

Alive2 proofs:
http://volta.cs.utah.edu:8080/z/bVyrko
http://volta.cs.utah.edu:8080/z/Vxpz9q
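A sketch of the reduction (hypothetical example, not from the patch; IEEE types only, the ppc_fp128 exclusion came later): fpext/fptrunc preserve the sign bit, so the signbit check can look at the original value.

  define i1 @src(float %x) {
    %e = fpext float %x to double
    %b = bitcast double %e to i64
    %r = icmp slt i64 %b, 0
    ret i1 %r
  }

  define i1 @tgt(float %x) {
    %b = bitcast float %x to i32
    %r = icmp slt i32 %b, 0
    ret i1 %r
  }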
2020-03-27 17:33:59 -04:00
Sanjay Patel e72730ee3a [InstCombine] add tests for FP cast+bitcast signbit checks; NFC
PR45305:
https://bugs.llvm.org/show_bug.cgi?id=45305
2020-03-27 17:25:25 -04:00
Nikita Popov 9adedd146d [InstCombine] Relax preconditions for ashr+and+icmp fold (PR44754)
Fix for https://bugs.llvm.org/show_bug.cgi?id=44754. We already have
a fold that converts icmp (and (ashr X, C3), C2), C1 into
icmp (and X, C2'), C1', but it imposed overly strict requirements on the
transform.

Relax this by checking that both C2 and C1 don't shift out bits
(in a signed sense) when forming the new constants.

Alive proofs (https://rise4fun.com/Alive/PTz0):

    Name: ashr_legal
    Pre: ((C2 << C3) >> C3) == C2 && ((C1 << C3) >> C3) == C1
    %a = ashr i16 %x, C3
    %b = and i16 %a, C2
    %c = icmp i16 %b, C1
    =>
    %d = and i16 %x, C2 << C3
    %c = icmp i16 %d, C1 << C3

    Name: ashr_shiftout_eq
    Pre: ((C2 << C3) >> C3) == C2 && ((C1 << C3) >> C3) != C1
    %a = ashr i16 %x, C3
    %b = and i16 %a, C2
    %c = icmp eq i16 %b, C1
    =>
    %c = false

Note that >> corresponds to ashr here. The case of an equality
comparison has some special handling in this transform, because
it will fold to a true/false result if the condition on the comparison
constant is violated.

Differential Revision: https://reviews.llvm.org/D74294
2020-02-18 17:49:46 +01:00
Nikita Popov 9bc6bc2d8c [InstCombine] Add more tests for icmp+and+ashr; NFC 2020-02-18 17:47:48 +01:00
Nikita Popov d4627b90a0 [InstCombine] Avoid modifying instructions in-place
As discussed on D73919, this replaces a few cases where we were
modifying multiple operands of instructions in-place with the
creation of a new instruction, which we generally prefer nowadays.

This tends to be more readable and less prone to worklist management
bugs.

Test changes are only superficial (instruction naming and order).
2020-02-08 17:05:56 +01:00
Mikhail Maltsev b6534b2a26 [Analysis] Don't assume that unsigned overflow can't happen in EmitGEPOffset (PR42699)
Summary:
Currently when computing a GEP offset using the function EmitGEPOffset
for the following instruction

  getelementptr inbounds i32, i32* %p, i64 %offs

we get

  mul nuw i64 %offs, 4

Unfortunately we cannot assume that unsigned wrapping won't happen
here because %offs is allowed to be negative.

Making such assumptions can lead to miscompilations: see the new test
test24_neg_offs in InstCombine/icmp.ll. Without the patch InstCombine
would generate the following comparison:

   icmp eq i64 %offs, 4611686018427387902; 0x3ffffffffffffffe

Whereas the correct value to compare with is -2.

This patch replaces the NUW flag with NSW in the multiplication
instructions generated by EmitGEPOffset and adjusts the test suite.

https://bugs.llvm.org/show_bug.cgi?id=42699

Reviewers: chandlerc, craig.topper, ostannard, lebedev.ri, spatel, efriedma, nlopes, aqjune

Reviewed By: lebedev.ri

Subscribers: reames, lebedev.ri, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D68342

llvm-svn: 375089
2019-10-17 08:59:06 +00:00
Sanjay Patel eb8d39e113 [InstCombine] allow icmp+binop folds before min/max bailout (PR43310)
This has the potential to uncover missed analysis/folds as shown in the
min/max code comment/test, but fewer restrictions on icmp folds should
be better in general to solve cases like:
https://bugs.llvm.org/show_bug.cgi?id=43310

llvm-svn: 372510
2019-09-22 14:31:53 +00:00
Sanjay Patel d2a524288d [InstCombine] add tests for icmp fold hindered by min/max; NFC
llvm-svn: 372509
2019-09-22 14:23:22 +00:00
Sanjay Patel f201b1c918 [InstCombine] add/move tests for icmp with add operand; NFC
llvm-svn: 371988
2019-09-16 14:05:19 +00:00
Sanjay Patel c5cd808156 [InstCombine] remove unneeded one-use checks for icmp fold
This fold and several others were added in:
rL125734 <https://reviews.llvm.org/rL125734>
...with no explanation for the one-use checks other than the code
comments about register pressure.

Given that this is IR canonicalization, we shouldn't be worried
about register pressure though; the backend should be able to
adjust for that as needed.

This is part of solving PR43310 the theoretically right way:
https://bugs.llvm.org/show_bug.cgi?id=43310
...ie, if we don't cripple basic transforms, then we won't
need to add special-case code to detect larger patterns.

rL371940 is a related patch in this series.

llvm-svn: 371981
2019-09-16 12:54:34 +00:00
Sanjay Patel 14ce3fde04 [InstCombine] add icmp tests with extra uses; NFC
llvm-svn: 371979
2019-09-16 12:19:18 +00:00
Sanjay Patel 3daf168fa9 [InstCombine] remove unneeded one-use checks for icmp fold
This fold and several others were added in:
rL125734
...with no explanation for the one-use checks other than the code
comments about register pressure.

Given that this is IR canonicalization, we shouldn't be worried
about register pressure though; the backend should be able to
adjust for that as needed.

There are similar checks as noted with the TODO comments. I'm
hoping to remove those restrictions too, but if any of these
does cause a regression, it should be easier to correct by making
small, individual commits.

This is part of solving PR43310 the theoretically right way:
https://bugs.llvm.org/show_bug.cgi?id=43310
...ie, if we don't cripple basic transforms, then we won't
need to add special-case code to detect larger patterns.

llvm-svn: 371940
2019-09-15 20:56:34 +00:00