Commit Graph

501 Commits

Author SHA1 Message Date
Craig Topper 1d149d08d3 [InstCombine] Remove insertRangeTest code that handles the equality case.
For equality, the function called getTrue/getFalse with the VT
of the comparison input. But getTrue/getFalse need the boolean VT.
So if this code ever executed, it would assert.

I believe these cases are removed by InstSimplify so we don't get here.

So this patch just fixes up an assert to exclude the equality
possibility and removes the broken code.

llvm-svn: 366649
2019-07-21 06:43:38 +00:00
Craig Topper 8fabdfe9fc [InstCombine] Don't use AddOne/SubOne to see if two APInts are 1 apart. Use APInt operations instead. NFCI
AddOne/SubOne create new Constant objects. That seems heavy for
comparing ConstantInts, which wrap APInts. Just do the math on
the APInts and compare them.

llvm-svn: 366648
2019-07-21 05:26:05 +00:00
Rui Ueyama 49a3ad21d6 Fix parameter name comments using clang-tidy. NFC.
This patch applies clang-tidy's bugprone-argument-comment tool
to LLVM, clang and lld source trees. Here is how I created this
patch:

$ git clone https://github.com/llvm/llvm-project.git
$ cd llvm-project
$ mkdir build
$ cd build
$ cmake -GNinja -DCMAKE_BUILD_TYPE=Debug \
    -DLLVM_ENABLE_PROJECTS='clang;lld;clang-tools-extra' \
    -DCMAKE_EXPORT_COMPILE_COMMANDS=On -DLLVM_ENABLE_LLD=On \
    -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ ../llvm
$ ninja
$ parallel clang-tidy -checks='-*,bugprone-argument-comment' \
    -config='{CheckOptions: [{key: StrictMode, value: 1}]}' -fix \
    ::: ../llvm/lib/**/*.{cpp,h} ../clang/lib/**/*.{cpp,h} ../lld/**/*.{cpp,h}

llvm-svn: 366177
2019-07-16 04:46:31 +00:00
Sanjay Patel 2675b0c8ab [InstCombine] squash is-not-power-of-2 using ctpop
This is the Demorgan'd 'not' of the pattern handled in:
D63660 / rL364153

This is another intermediate IR step towards solving PR42314:
https://bugs.llvm.org/show_bug.cgi?id=42314

We can test if a value is not a power-of-2 using ctpop(X) > 1,
so combining that with an is-zero check of the input is the
same as testing if not exactly 1 bit is set:

(X == 0) || (ctpop(X) u> 1) --> ctpop(X) != 1
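
A minimal before/after sketch in IR (illustrative; the value names are invented here, not taken from the commit's tests):

  %z = icmp eq i32 %x, 0
  %p = call i32 @llvm.ctpop.i32(i32 %x)
  %g = icmp ugt i32 %p, 1
  %r = or i1 %z, %g
  =>
  %p = call i32 @llvm.ctpop.i32(i32 %x)
  %r = icmp ne i32 %p, 1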

llvm-svn: 364246
2019-06-24 22:35:26 +00:00
Sanjay Patel 13a5ae58fc [InstCombine] squash is-power-of-2 that uses ctpop
This is another intermediate IR step towards solving PR42314:
https://bugs.llvm.org/show_bug.cgi?id=42314

We can test if a value is power-of-2-or-0 using ctpop(X) < 2,
so combining that with a non-zero check of the input is the
same as testing if exactly 1 bit is set:

(X != 0) && (ctpop(X) u< 2) --> ctpop(X) == 1
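
A minimal before/after sketch in IR (illustrative names only):

  %nz = icmp ne i32 %x, 0
  %p  = call i32 @llvm.ctpop.i32(i32 %x)
  %lt = icmp ult i32 %p, 2
  %r  = and i1 %nz, %lt
  =>
  %p = call i32 @llvm.ctpop.i32(i32 %x)
  %r = icmp eq i32 %p, 1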

Differential Revision: https://reviews.llvm.org/D63660

llvm-svn: 364153
2019-06-23 14:22:37 +00:00
Sanjay Patel 760f61ab36 [InstCombine] try harder to form rotate (funnel shift) (PR20750)
We have a similar match for patterns ending in a truncate. This
should be ok for all targets because even the default expansion would
still likely be better after replacing 2 'and' ops with 1.

Attempt to show the logic equivalence in Alive (which doesn't
currently have funnel-shift in its vocabulary AFAICT):

  %shamt = zext i8 %i to i32
  %m = and i32 %shamt, 31
  %neg = sub i32 0, %shamt
  %and4 = and i32 %neg, 31
  %shl = shl i32 %v, %m
  %shr = lshr i32 %v, %and4
  %or = or i32 %shr, %shl
  =>
  %a = and i8 %i, 31
  %shamt2 = zext i8 %a to i32
  %neg2 = sub i32 0, %shamt2
  %and4 = and i32 %neg2, 31
  %shl = shl i32 %v, %shamt2
  %shr = lshr i32 %v, %and4
  %or = or i32 %shr, %shl

https://rise4fun.com/Alive/V9r

llvm-svn: 360605
2019-05-13 17:28:19 +00:00
Craig Topper 16dc165046 [InstCombine] Don't transform ((C1 OP zext(X)) & C2) -> zext((C1 OP X) & C2) if either zext or OP has another use.
If they have other users we'll just end up increasing the instruction count.

We might be able to weaken this to only one of them having a single use if we can prove that the and will be removed.

Fixes PR41164.

Differential Revision: https://reviews.llvm.org/D59630

llvm-svn: 356690
2019-03-21 17:50:49 +00:00
Sanjay Patel 5b820323ca [InstCombine] fold logic-of-nan-fcmps (PR41069)
Combine 2 fcmps that are checking for nan-ness:
   and (fcmp ord X, 0), (and (fcmp ord Y, 0), Z) --> and (fcmp ord X, Y), Z
   or  (fcmp uno X, 0), (or  (fcmp uno Y, 0), Z) --> or  (fcmp uno X, Y), Z

This is an exact match for a minimal reassociation pattern.
If we want to handle this more generally that should go in
the reassociate pass and allow removing this code.
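
A minimal IR sketch of the 'and' form (invented names; %z stands for the extra operand Z):

  %ox = fcmp ord float %x, 0.000000e+00
  %oy = fcmp ord float %y, 0.000000e+00
  %t  = and i1 %oy, %z
  %r  = and i1 %ox, %t
  =>
  %oxy = fcmp ord float %x, %y
  %r   = and i1 %oxy, %z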

This should fix:
https://bugs.llvm.org/show_bug.cgi?id=41069

llvm-svn: 356471
2019-03-19 16:39:17 +00:00
Sanjay Patel 587fd849f0 [InstCombine] Fix matchRotate bug when one operand is a ConstantExpr shift
This bug seems to be harmless in release builds, but will cause an error in UBSAN
builds or an assertion failure in debug builds.

When it gets to this opcode comparison, it assumes both of the operands are BinaryOperators,
but the prior m_LogicalShift will also match a ConstantExpr. The cast<BinaryOperator> will
assert in a debug build, or reading an invalid value for BinaryOp from memory with
((BinaryOperator*)constantExpr)->getOpcode() will cause an error in a UBSAN build.

The test I added will fail without this change in debug/UBSAN builds, but not in release.

Patch by: @AndrewScheidecker (Andrew Scheidecker)

Differential Revision: https://reviews.llvm.org/D58049

llvm-svn: 353736
2019-02-11 19:26:27 +00:00
Chandler Carruth 2946cd7010 Update the file headers across all of the LLVM projects in the monorepo
to reflect the new license.

We understand that people may be surprised that we're moving the header
entirely to discuss the new license. We checked this carefully with the
Foundation's lawyer and we believe this is the correct approach.

Essentially, all code in the project is now made available by the LLVM
project under our new license, so you will see that the license headers
include that license only. Some of our contributors have contributed
code under our old license, and accordingly, we have retained a copy of
our old license notice in the top-level files in each project and
repository.

llvm-svn: 351636
2019-01-19 08:50:56 +00:00
Sanjay Patel d023dd60e9 [InstCombine] canonicalize another raw IR rotate pattern to funnel shift
This is matching the equivalent of the DAG expansion, 
so it should never end up with worse perf than the 
original code even if the target doesn't have a rotate
instruction.

llvm-svn: 350672
2019-01-08 22:39:55 +00:00
Sanjay Patel 3d5bb15a1d [CmpInstAnalysis] fix function signature for ICmp code to predicate; NFC
The old function underspecified the return type, took an unused parameter,
and had a misleading name.

llvm-svn: 348292
2018-12-04 18:53:27 +00:00
Sanjay Patel 472652ef68 [CmpInstAnalysis] fix formatting; NFC
There are potential improvements to the structure of this API
raised by D54994, but remove some cosmetic blemishes before
making any functional changes.

llvm-svn: 348149
2018-12-03 15:48:30 +00:00
Sanjay Patel 6072842770 [InstCombine] fix formatting for matchBSwap(); NFC
We should have a similar function for matching rotate and/or 
funnel shift, so tidy up the related existing call.

llvm-svn: 346871
2018-11-14 16:03:36 +00:00
Sanjay Patel 3b206305fd [InstCombine] try harder to form select from logic ops (2nd try)
The original patch was committed here:
rL344609
...and reverted:
rL344612
...because it did not properly check/test data types before calling
ComputeNumSignBits(). 

The tests that caused bot failures for the previous commit are 
over-reaching front-end tests that run the entire -O optimizer 
pipeline: 
    Clang :: CodeGen/builtins-systemz-zvector.c
    Clang :: CodeGen/builtins-systemz-zvector2.c

I've added a negative test here to ensure coverage for that case.
The new early exit check also tests the type of the 'B' parameter,
so we don't waste time on matching if either value is unsuitable.

Original commit message:

This is part of solving PR37549:
https://bugs.llvm.org/show_bug.cgi?id=37549

The patterns shown here are a special case of something
that we already convert to select. Using ComputeNumSignBits()
catches that case (but not the more complicated motivating
patterns yet).

The backend has hooks/logic to convert back to logic ops
if that's better for the target.

llvm-svn: 345149
2018-10-24 15:17:56 +00:00
Sanjay Patel bb3dd34e62 revert rL344609: [InstCombine] try harder to form select from logic ops
I noticed a missing check and added it at rL344610, but there actually
are codegen tests that will fail without that, so I'll edit those and
submit a fixed patch with more tests.

llvm-svn: 344612
2018-10-16 15:26:08 +00:00
Sanjay Patel f6a7c8b1fc [InstCombine] make sure type is integer before calling ComputeNumSignBits
llvm-svn: 344610
2018-10-16 14:44:50 +00:00
Sanjay Patel 0c48c977b8 [InstCombine] try harder to form select from logic ops
This is part of solving PR37549:
https://bugs.llvm.org/show_bug.cgi?id=37549

The patterns shown here are a special case of something
that we already convert to select. Using ComputeNumSignBits()
catches that case (but not the more complicated motivating
patterns yet).

The backend has hooks/logic to convert back to logic ops
if that's better for the target.

llvm-svn: 344609
2018-10-16 14:35:21 +00:00
Sanjay Patel 79dceb2903 [InstCombine] name change: foldShuffledBinop -> foldVectorBinop; NFC
This function will deal with more than shuffles with D50992, and I 
have another potential per-element fold that could live here.

llvm-svn: 343692
2018-10-03 15:20:58 +00:00
Craig Topper 8fc05ce340 [InstCombine] Fold (xor (min/max X, Y), -1) -> (max/min ~X, ~Y) when X and Y are freely invertible.
This allows the xor to be removed completely.

This might help with recommitting r341674, but seems good regardless.

Coincidentally fixes PR38915.

Differential Revision: https://reviews.llvm.org/D51964

llvm-svn: 342163
2018-09-13 18:52:58 +00:00
Craig Topper 040c2b0acf [InstCombine] Fold (min/max ~X, Y) -> ~(max/min X, ~Y) when Y is freely invertible
If the ~X wasn't able to simplify above the max/min, we might be able to simplify it by moving it below the max/min.

I had to modify the ~(min/max ~X, Y) transform to prevent getting stuck in a loop when we saw the new ~(max/min X, ~Y) before the ~Y had been folded away to remove the new not.

Differential Revision: https://reviews.llvm.org/D51398

llvm-svn: 341674
2018-09-07 16:19:50 +00:00
Sanjay Patel 93bd15a005 [InstCombine] add xor+not folds
This fold is needed to avoid a regression when we try
to recommit rL300977. 
We can't see the most basic win currently because 
demanded bits changes the patterns:
https://rise4fun.com/Alive/plpp

llvm-svn: 341559
2018-09-06 16:23:40 +00:00
Sanjay Patel 63cf26cf01 [InstCombine] fix xor-or-xor fold to check uses and handle commutes
I'm probably missing some way to use m_Deferred to remove the code
duplication, but that can be a follow-up.

The improvement in demand_shrink_nsw.ll is an example of missing
the fold because the pattern matching was deficient. I didn't try
to follow the bits in that test, but Alive says it's correct:
https://rise4fun.com/Alive/ugc

llvm-svn: 341426
2018-09-04 23:22:13 +00:00
Sanjay Patel 0f70f86ce0 [InstCombine] make ((X & C) ^ C) form consistent for vectors
It would be better to create a 'not' here, but that's not possible yet.

llvm-svn: 341410
2018-09-04 21:17:14 +00:00
Sanjay Patel a89f183253 [InstCombine] simplify code for xor folds; NFCI
This is just a cleanup step. The TODO comments show
what is wrong with the 'and' version of the fold.
Fixing this should be part of recommitting:
rL300977

llvm-svn: 341405
2018-09-04 21:00:13 +00:00
Sanjay Patel 2fe1f62c88 [InstCombine] simplify xor/not folds; NFCI
llvm-svn: 341336
2018-09-03 18:40:56 +00:00
Sanjay Patel d75064e6d5 [InstCombine] allow add+not --> sub for arbitrary vector constants.
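
A hedged sketch of the kind of non-splat vector case this enables (constants chosen only for illustration), using ~(X + C) == (-C - 1) - X:

  %a = add <2 x i32> %x, <i32 42, i32 12>
  %r = xor <2 x i32> %a, <i32 -1, i32 -1>
  =>
  %r = sub <2 x i32> <i32 -43, i32 -13>, %x
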
llvm-svn: 341335
2018-09-03 18:21:59 +00:00
Sanjay Patel 17e709b66a [InstCombine] allow not+sub fold for arbitrary vector constants
The fold was implemented for the general case with a use-limitation,
but the later constant version, which didn't check uses, was only
matching splat constants.
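
A hedged sketch with non-splat vector constants (illustrative values), using ~(C - X) == X + (-C - 1):

  %s = sub <2 x i32> <i32 10, i32 20>, %x
  %r = xor <2 x i32> %s, <i32 -1, i32 -1>
  =>
  %r = add <2 x i32> %x, <i32 -11, i32 -21>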

llvm-svn: 341292
2018-09-02 19:31:45 +00:00
Sanjay Patel 099b1a4b0c [InstCombine] simplify code for 'or' fold
This is no-outwardly-visible-change intended, so no test.
But the code is smaller and more efficient. The check for
a 'not' op is intended to avoid the expensive value tracking
call when it should not be necessary, and it might prevent
infinite looping when we resurrect:
rL300977

llvm-svn: 341280
2018-09-01 15:08:59 +00:00
Craig Topper 3d8fe39ca7 [InstCombine] Pull simple checks above a more complicated one. NFCI
I'm assuming it's easier to make sure the RHS of an XOR is all ones than it is to check for the many select patterns we have. So let's check that first. Same with the one-use check.

llvm-svn: 340321
2018-08-21 19:17:00 +00:00
Amara Emerson 070ac768ff [InstCombine] Fix IC trying to create a xor of pointer types.
rdar://42473741

Differential Revision: https://reviews.llvm.org/D50775

llvm-svn: 339796
2018-08-15 17:46:22 +00:00
Roman Lebedev 3534874fbf [InstCombine] Re-land: Optimize redundant 'signed truncation check pattern'.
Summary:
This comes with `Implicit Conversion Sanitizer - integer sign change` (D50250):
```
signed char test(unsigned int x) { return x; }
```
`clang++ -fsanitize=implicit-conversion -S -emit-llvm -o - /tmp/test.cpp -O3`
* Old: {F6904292}
* With this patch: {F6904294}

General pattern:
  X & Y

Where `Y` is checking that all the high bits (covered by a mask `4294967168`)
are uniform, i.e.  `%arg & 4294967168`  can be either  `4294967168`  or  `0`
Pattern can be one of:
  %t = add        i32 %arg,    128
  %r = icmp   ult i32 %t,      256
Or
  %t0 = shl       i32 %arg,    24
  %t1 = ashr      i32 %t0,     24
  %r  = icmp  eq  i32 %t1,     %arg
Or
  %t0 = trunc     i32 %arg  to i8
  %t1 = sext      i8  %t0   to i32
  %r  = icmp  eq  i32 %t1,     %arg
This pattern is a signed truncation check.

And `X` is checking that some bit in that same mask is zero.
I.e. can be one of:
  %r = icmp sgt i32   %arg,    -1
Or
  %t = and      i32   %arg,    2147483648
  %r = icmp eq  i32   %t,      0

Since we are checking that all the bits in that mask are the same,
and a particular bit is zero, what we are really checking is that all the
masked bits are zero.
So this should be transformed to:
  %r = icmp ult i32 %arg, 128

The transform itself ended up being rather horrible, even though I omitted some cases.
Surely there is some infrastructure that can help clean this up that I missed?

https://rise4fun.com/Alive/3Ou

The initial commit (rL339610)
was reverted, since the first assert was being triggered.
The @positive_with_extra_and test now has coverage for that case.

Reviewers: spatel, craig.topper

Reviewed By: spatel

Subscribers: RKSimon, erichkeane, vsk, llvm-commits

Differential Revision: https://reviews.llvm.org/D50465

llvm-svn: 339621
2018-08-13 21:54:37 +00:00
Roman Lebedev 28a42c7706 Revert "[InstCombine] Optimize redundant 'signed truncation check pattern'."
At least one buildbot was able to actually trigger that assert
on the top of the function. Will investigate.

This reverts commit r339610.

llvm-svn: 339612
2018-08-13 20:46:22 +00:00
Roman Lebedev 4c4750771f [InstCombine] Optimize redundant 'signed truncation check pattern'.
Summary:
This comes with `Implicit Conversion Sanitizer - integer sign change` (D50250):
```
signed char test(unsigned int x) { return x; }
```
`clang++ -fsanitize=implicit-conversion -S -emit-llvm -o - /tmp/test.cpp -O3`
* Old: {F6904292}
* With this patch: {F6904294}

General pattern:
  X & Y

Where `Y` is checking that all the high bits (covered by a mask `4294967168`)
are uniform, i.e.  `%arg & 4294967168`  can be either  `4294967168`  or  `0`
Pattern can be one of:
  %t = add        i32 %arg,    128
  %r = icmp   ult i32 %t,      256
Or
  %t0 = shl       i32 %arg,    24
  %t1 = ashr      i32 %t0,     24
  %r  = icmp  eq  i32 %t1,     %arg
Or
  %t0 = trunc     i32 %arg  to i8
  %t1 = sext      i8  %t0   to i32
  %r  = icmp  eq  i32 %t1,     %arg
This pattern is a signed truncation check.

And `X` is checking that some bit in that same mask is zero.
I.e. can be one of:
  %r = icmp sgt i32   %arg,    -1
Or
  %t = and      i32   %arg,    2147483648
  %r = icmp eq  i32   %t,      0

Since we are checking that all the bits in that mask are the same,
and a particular bit is zero, what we are really checking is that all the
masked bits are zero.
So this should be transformed to:
  %r = icmp ult i32 %arg, 128

https://rise4fun.com/Alive/3Ou

Reviewers: spatel, craig.topper

Reviewed By: spatel

Subscribers: RKSimon, erichkeane, vsk, llvm-commits

Differential Revision: https://reviews.llvm.org/D50465

llvm-svn: 339610
2018-08-13 20:33:08 +00:00
Craig Topper 8caccc32b5 [InstCombine] Fix typo in comment. NFC
llvm-svn: 339532
2018-08-13 00:54:23 +00:00
Craig Topper 8bb49218bc [InstCombine] Replace call to haveNoCommonBitsSet in visitXor with just the special case that doesn't use computeKnownBits.
Summary: computeKnownBits is expensive. The cases that would be detected by the computeKnownBits portion of haveNoCommonBitsSet were already handled by the earlier call to SimplifyDemandedInstructionBits.

Reviewers: spatel, lebedev.ri

Reviewed By: lebedev.ri

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D50604

llvm-svn: 339531
2018-08-13 00:38:27 +00:00
Roman Lebedev a677651a5a [InstCombine] De Morgan: sink 'not' into 'xor' (PR38446)
Summary:
https://rise4fun.com/Alive/IT3

Comes up in the [ugliest] `signed int` -> `signed char` case of
`-fsanitize=implicit-conversion` (https://reviews.llvm.org/D50250)
Previously, we were stuck with `not`: {F6867736}
But now we are able to completely get rid of it: {F6867737}
(FIXME: why are we losing the metadata? that seems wrong/strange.)

Here, we only want to do this if we will be able to completely
get rid of that 'not'.
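
A minimal sketch of the sink (invented names; in practice this is only done when the new 'not' of one operand folds away):

  %xor = xor i32 %x, %y
  %not = xor i32 %xor, -1
  =>
  %nx  = xor i32 %x, -1
  %not = xor i32 %nx, %y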

Reviewers: spatel, craig.topper

Reviewed By: spatel

Subscribers: vsk, erichkeane, llvm-commits

Differential Revision: https://reviews.llvm.org/D50301

llvm-svn: 339243
2018-08-08 13:31:19 +00:00
Sanjay Patel 9a801cb598 [InstCombine] simplify code for A & (A ^ B) --> A & ~B
This fold was written in an odd way and tried to avoid
an endless loop by bailing out on all constants instead
of the supposedly problematic case of -1. But (X & -1) 
should always be simplified before we reach here, so I'm
not sure how that is a problem.
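
A minimal sketch of the fold itself (invented names):

  %d = xor i32 %a, %b
  %r = and i32 %a, %d
  =>
  %nb = xor i32 %b, -1
  %r  = and i32 %a, %nb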

There were no tests for the commuted patterns, so I added
those at rL338364.

llvm-svn: 338367
2018-07-31 13:00:03 +00:00
Sanjay Patel 78e4b4d3c4 [InstCombine] not(sub X, Y) --> add (not X), Y
The tests with constants show a missing optimization.
Analysis for adds is better than subs, so this can also
help with other transforms. And codegen is better with 
adds for targets like x86 (destructive ops, no sub-from).

https://rise4fun.com/Alive/llK
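
A minimal sketch (invented names):

  %s = sub i32 %x, %y
  %r = xor i32 %s, -1
  =>
  %nx = xor i32 %x, -1
  %r  = add i32 %nx, %y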

llvm-svn: 338118
2018-07-27 10:54:48 +00:00
Sanjay Patel 70043b7e9a [InstCombine] return when SimplifyAssociativeOrCommutative makes a change
This bug was created by rL335258 because we used to always call instsimplify
after trying the associative folds. After that change it became possible
for subsequent folds to encounter unsimplified code (and potentially assert
because of it). 

Instead of carrying changed state through instcombine, we can just return 
immediately. This allows instsimplify to run, so we can continue assuming
that easy folds have already occurred.

llvm-svn: 336965
2018-07-13 01:18:07 +00:00
Sanjay Patel 7b0fc75f73 [InstCombine] simplify binops before trying other folds
This is outwardly NFC from what I can tell, but it should be more efficient 
to simplify first (despite the name, SimplifyAssociativeOrCommutative does
not actually simplify as InstSimplify does - it creates/morphs instructions).

This should make it easier to refactor duplicated code that runs for all binops.

llvm-svn: 335258
2018-06-21 17:06:36 +00:00
Sanjay Patel 3cd1aa88f9 [InstCombine] fold another shifty abs pattern to cmp+sel (PR36036)
The bug report:
https://bugs.llvm.org/show_bug.cgi?id=36036

...requests a DAG change for this, but an IR canonicalization
probably handles most cases. If we still want to match this
pattern in the backend, there's a proposal for that too:
D47831

Alive proofs including nsw/nuw cases that were first noted in:
D46988

https://rise4fun.com/Alive/Kmp

This patch is largely copied from the existing code that was
initially added with:
D40984
...but I didn't see much gain from trying to share code.

llvm-svn: 334137
2018-06-06 21:58:12 +00:00
David Blaikie 31b98d2e99 Move Analysis/Utils/Local.h back to Transforms
Review feedback from r328165. Split out just the one function from the
file that's used by Analysis. (As chandlerc pointed out, the original
change only moved the header and not the implementation anyway - which
was fine for the one function that was used (since it's a
template/inlined in the header) but not in general)

llvm-svn: 333954
2018-06-04 21:23:21 +00:00
Sanjay Patel bbc6d60677 [InstCombine] call simplify before trying vector folds
As noted in the review thread for rL333782, we could have
made a bug harder to hit if we were simplifying instructions
before trying other folds. 

The shuffle transform in question isn't ever a simplification;
it's just a canonicalization. So I've renamed that to make that 
clearer.

This is NFCI at this point, but I've regenerated the test file 
to show the cosmetic value naming difference of using 
instcombine's RAUW vs. the builder.

Possible follow-ups:
1. Move reassociation folds after simplifies too.
2. Refactor common code; we shouldn't have so much repetition.

llvm-svn: 333820
2018-06-02 16:27:44 +00:00
Roman Lebedev c0ecd06428 Revert rL333106 / D46814: [InstCombine] Fold unfolded masked merge pattern with variable mask!
In post-commit review, Eric Christopher notes that many
new MSan warnings are being observed with this patch.

The probable reason is: if 'y' is undef here, we could
evaluate it twice and get different results.
We can't increase the number of uses of a value.

llvm-svn: 333631
2018-05-31 06:00:36 +00:00
Roman Lebedev 6b6c553bb8 [InstCombine] Fold unfolded masked merge pattern with variable mask!
Summary:
Finally fixes [[ https://bugs.llvm.org/show_bug.cgi?id=6773 | PR6773 ]].

Now that the backend is all done, we can finally fold it!

The canonical unfolded masked merge pattern is
```(x &  m) | (y & ~m)```
There is a second, equivalent variant:
```(x | ~m) & (y |  m)```
Only one of them (the or-of-ands, I think) is canonical.
And if the mask is not a constant, we should fold it to:
```((x ^ y) & M) ^ y```

https://rise4fun.com/Alive/ndQw
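
In IR, a minimal sketch of the fold for a variable mask (invented names):

  %t1 = and i32 %x, %m
  %nm = xor i32 %m, -1
  %t2 = and i32 %y, %nm
  %r  = or i32 %t1, %t2
  =>
  %d = xor i32 %x, %y
  %a = and i32 %d, %m
  %r = xor i32 %a, %y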

Reviewers: spatel, craig.topper

Reviewed By: spatel

Subscribers: nicholas, RKSimon, llvm-commits

Differential Revision: https://reviews.llvm.org/D46814

llvm-svn: 333106
2018-05-23 17:47:52 +00:00
Craig Topper bd332588bd [InstCombine] Propagate the nsw/nuw flags from the add in the 'shifty' abs pattern to the sub in the select version.
According to Alive this is valid. I'm hoping to use this to make an assumption that the sign bit is zero after this sequence. The only way it wouldn't be is if the input was INT_MIN, but by preserving the flags we can make doing this to INT_MIN UB.

The nuw flag is weird because it creates such a contradiction that the original number would have to be positive, meaning we could remove the select entirely, but we don't get that far.

Differential Revision: https://reviews.llvm.org/D46988

llvm-svn: 332623
2018-05-17 16:29:52 +00:00
Adrian Prantl 5f8f34e459 Remove \brief commands from doxygen comments.
We've been running doxygen with the autobrief option for a couple of
years now. This makes the \brief markers into our comments
redundant. Since they are a visual distraction and we don't want to
encourage more \brief markers in new code either, this patch removes
them all.

Patch produced by

  for i in $(git grep -l '\\brief'); do perl -pi -e 's/\\brief //g' $i & done

Differential Revision: https://reviews.llvm.org/D46290

llvm-svn: 331272
2018-05-01 15:54:18 +00:00
Omer Paparo Bivas 82ef8e19ef [InstCombine] Adjusting bswap pattern matching to hold for And/Shift mixed case
Differential Revision: https://reviews.llvm.org/D45731

Change-Id: I85d4226504e954933c41598327c91b2d08192a9d
llvm-svn: 331257
2018-05-01 12:25:46 +00:00
Roman Lebedev aa4faec114 [InstCombine] Unfold masked merge with constant mask
Summary:
As discussed in D45733, we want to do this in InstCombine.

https://rise4fun.com/Alive/LGk

Reviewers: spatel, craig.topper

Reviewed By: spatel

Subscribers: chandlerc, xbolva00, llvm-commits

Differential Revision: https://reviews.llvm.org/D45867

llvm-svn: 331205
2018-04-30 17:59:33 +00:00
Roman Lebedev 136867931a [InstCombine] Canonicalize variable mask in masked merge
Summary:
Masked merge has a pattern of: `((x ^ y) & M) ^ y`.
But, there is no difference between `((x ^ y) & M) ^ y` and `((x ^ y) & ~M) ^ x`,
We should canonicalize the pattern to non-inverted mask.

https://rise4fun.com/Alive/Yol
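
A minimal sketch of the canonicalization (invented names):

  %d  = xor i32 %x, %y
  %nm = xor i32 %m, -1
  %a  = and i32 %d, %nm
  %r  = xor i32 %a, %x
  =>
  %d = xor i32 %x, %y
  %a = and i32 %d, %m
  %r = xor i32 %a, %y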

Reviewers: spatel, craig.topper

Reviewed By: spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D45664

llvm-svn: 331112
2018-04-28 15:45:07 +00:00
Roman Lebedev 6959b8e76f [PatternMatch] Stabilize the matching order of commutative matchers
Summary:
Currently, we
1. match `LHS` matcher to the `first` operand of binary operator,
2. and then match `RHS` matcher to the `second` operand of binary operator.
If that does not match, we swap the `LHS` and `RHS` matchers:
1. match `RHS` matcher to the `first` operand of binary operator,
2. and then match `LHS` matcher to the `second` operand of binary operator.

This works ok.
But it complicates writing of commutative matchers, where one would like to match
(`m_Value()`) the value on one side, and use (`m_Specific()`) it on the other side.

This is additionally complicated by the fact that `m_Specific()` stores the `Value *`,
not `Value **`, so it won't work at all out of the box.

The last problem is trivially solved by adding a new `m_c_Specific()` that stores the
`Value **`, not `Value *`. I'm choosing to add a new matcher, not change the existing
one because I guess all the current users are OK with the existing behavior,
and this additional pointer indirection may have performance drawbacks.
Also, I'm storing a pointer, not a reference, because for some mysterious-to-me reason
it did not work with the reference.

The first one appears trivial, too.
Currently, we
1. match `LHS` matcher to the `first` operand of binary operator,
2. and then match `RHS` matcher to the `second` operand of binary operator.
If that does not match, we swap the ~~`LHS` and `RHS` matchers~~ **operands**:
1. match ~~`RHS`~~ **`LHS`** matcher to the ~~`first`~~ **`second`** operand of binary operator,
2. and then match ~~`LHS`~~ **`RHS`** matcher to the ~~`second`~~ **`first`** operand of binary operator.

Surprisingly, `$ ninja check-llvm` still passes with this.
But I expect the bots will disagree...

The motivational unittest is included.
I'd like to use this in D45664.

Reviewers: spatel, craig.topper, arsenm, RKSimon

Reviewed By: craig.topper

Subscribers: xbolva00, wdng, llvm-commits

Differential Revision: https://reviews.llvm.org/D45828

llvm-svn: 331085
2018-04-27 21:23:20 +00:00
Sanjay Patel 30be665e82 [PatternMatch] allow undef elements when matching a vector zero
This is the last step in getting constant pattern matchers to allow
undef elements in constant vectors.

I'm adding a dedicated m_ZeroInt() function and building m_Zero() from
that. In most cases, calling code can be updated to use m_ZeroInt()
directly when there's no need to match pointers, but I'm leaving that
efficiency optimization as a follow-up step because it's not always
clear when that's ok.

There are just enough icmp folds in InstSimplify that can be used for 
integer or pointer types, that we probably still want a generic m_Zero()
for those cases. Otherwise, we could eliminate it (and possibly add a
m_NullPtr() as an alias for isa<ConstantPointerNull>()).

We're conservatively returning a full zero vector (zeroinitializer) in
InstSimplify/InstCombine on some of these folds (see diffs in InstSimplify),
but I'm not sure if that's actually necessary in all cases. We may be 
able to propagate an undef lane instead. One test where this happens is 
marked with 'TODO'.
 

llvm-svn: 330550
2018-04-22 17:07:44 +00:00
Roman Lebedev f84bfb2147 [InstCombine] Simplify 'xor' to 'or' if no common bits are set.
Summary:
In order to get the whole fold as specified in [[ https://bugs.llvm.org/show_bug.cgi?id=6773 | PR6773 ]],
let's first handle the simple straightforward things.
Let's start with the `xor` -> `or` simplification.

The one obvious thing missing here: the constant mask is not handled.
I have an idea how to handle it, but it will require some thinking,
and is not strictly required here, so i've left that for later.

https://rise4fun.com/Alive/Pkmg
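
A minimal sketch where the operands provably share no set bits (illustrative):

  %hi = shl i32 %a, 8      ; low 8 bits are known zero
  %lo = and i32 %b, 255    ; high 24 bits are known zero
  %r  = xor i32 %hi, %lo
  =>
  %r  = or i32 %hi, %lo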

Reviewers: spatel, craig.topper, eli.friedman, jingyue

Reviewed By: spatel

Subscribers: llvm-commits

Was reviewed as part of https://reviews.llvm.org/D45631

llvm-svn: 330103
2018-04-15 18:59:44 +00:00
Artur Gainullin d928201ac5 Eliminate a bitwise 'not' op of 'not' min/max by inverting the min/max.
Bitwise 'not' of the min/max could be eliminated in the pattern:

%notx = xor i32 %x, -1
%cmp1 = icmp sgt[slt/ugt/ult] i32 %notx, %y
%smax = select i1 %cmp1, i32 %notx, i32 %y
%res = xor i32 %smax, -1

https://rise4fun.com/Alive/lCN
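
A hedged sketch of the replacement for the sgt/smax case above (~smax(~x, y) becomes smin(x, ~y)):

  %noty = xor i32 %y, -1
  %cmp2 = icmp slt i32 %x, %noty
  %res  = select i1 %cmp2, i32 %x, i32 %noty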

Reviewers: spatel

Reviewed by: spatel

Subscribers: a.elovikov, llvm-commits

Differential Revision: https://reviews.llvm.org/D45317

llvm-svn: 329791
2018-04-11 10:29:37 +00:00
Sanjay Patel 93e64dd9a1 [PatternMatch] allow undef elements when matching vector FP +0.0
This continues the FP constant pattern matching improvements from:
https://reviews.llvm.org/rL327627
https://reviews.llvm.org/rL327339
https://reviews.llvm.org/rL327307

Several integer constant matchers also have this ability. I'm
separating matching of integer/pointer null from FP positive zero
and renaming/commenting to make the functionality clearer.

llvm-svn: 328461
2018-03-25 21:16:33 +00:00
Sanjay Patel 94c91b78e7 [InstCombine] add folds for xor-of-icmp signbit tests (PR36682)
This is a retry of r328119 which was reverted at r328145 because
it could crash by trying to combine icmps with different operand
types. This version has a check for that and additional tests.

Original commit message:

This is part of solving:
https://bugs.llvm.org/show_bug.cgi?id=36682

There's also a leftover improvement from the long-ago-closed:
https://bugs.llvm.org/show_bug.cgi?id=5438

https://rise4fun.com/Alive/dC1

llvm-svn: 328197
2018-03-22 14:08:16 +00:00
David Blaikie 2be3922807 Fix a couple of layering violations in Transforms
Remove #include of Transforms/Scalar.h from Transform/Utils to fix layering.

Transforms depends on Transforms/Utils, not the other way around. So
remove the header and the "createStripGCRelocatesPass" function
declaration (& definition) that is unused and motivated this dependency.

Move Transforms/Utils/Local.h into Analysis because it's used by
Analysis/MemoryBuiltins.cpp.

llvm-svn: 328165
2018-03-21 22:34:23 +00:00
Reid Kleckner 762331be07 Revert r328119 "[InstCombine] add folds for xor-of-icmp signbit tests (PR36682)"
This asserts when compiling safe_numerics_unittest.cpp in Chromium with
MSan.

llvm-svn: 328145
2018-03-21 20:35:36 +00:00
Sanjay Patel 778032f39d [InstCombine] add folds for xor-of-icmp signbit tests (PR36682)
This is part of solving:
https://bugs.llvm.org/show_bug.cgi?id=36682

There's also a leftover improvement from the long-ago-closed:
https://bugs.llvm.org/show_bug.cgi?id=5438

https://rise4fun.com/Alive/dC1

llvm-svn: 328119
2018-03-21 17:17:13 +00:00
Hiroshi Yamauchi e6a3dc7699 Simplify more cases of logical ops of masked icmps.
Summary:
For example,

((X & 255) != 0) && ((X & 15) == 8) -> ((X & 15) == 8).
((X & 7) != 0) && ((X & 15) == 8) -> false.
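
In IR, a minimal sketch of the first example (invented names):

  %a1 = and i32 %x, 255
  %c1 = icmp ne i32 %a1, 0
  %a2 = and i32 %x, 15
  %c2 = icmp eq i32 %a2, 8
  %r  = and i1 %c1, %c2
  =>
  %a2 = and i32 %x, 15
  %r  = icmp eq i32 %a2, 8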

Reviewers: davidxl

Reviewed By: davidxl

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D43835

llvm-svn: 327450
2018-03-13 21:13:18 +00:00
Craig Topper ee99aa4dd0 [InstCombine] Replace calls to getNumUses with hasNUses or hasNUsesOrMore
getNumUses is a linear time operation. It traverses the user linked list to the end and counts as it goes. Since we are only interested in small constant counts, we should use hasNUses or hasNUsesOrMore, which terminate the traversal as soon as they can provide the answer.

There are still two other locations in InstCombine, but changing those would force a rebase of D44266 which if accepted would remove them.

Differential Revision: https://reviews.llvm.org/D44398

llvm-svn: 327315
2018-03-12 18:46:05 +00:00
Sanjay Patel 8fdd87f929 [InstCombine] move constant check into foldBinOpIntoSelectOrPhi; NFCI
Also, rename 'foldOpWithConstantIntoOperand' because that's annoyingly 
vague. The constant check is redundant in some cases, but it allows 
removing duplication for most of the calls.

llvm-svn: 326329
2018-02-28 16:36:24 +00:00
Simon Pilgrim 19495198af [InstCombine] Add constant vector support for ~(C >> Y) --> ~C >> Y
Includes adding m_NonNegative constant pattern matcher

llvm-svn: 324825
2018-02-10 21:46:09 +00:00
Sanjay Patel 1d68112c4b [InstCombine] narrow masked zexted binops (PR35792)
This is guarded by shouldChangeType(), so the tests show that
we don't do the fold if the narrower type is not legal. Note
that there is a proposal (D42424) that would change the results
for the specific cases shown in these tests. That difference is
also discussed in PR35792:
https://bugs.llvm.org/show_bug.cgi?id=35792

Alive proofs for the cases handled here as well as the bitwise 
logic binops that we should already do better on:
https://rise4fun.com/Alive/c97
https://rise4fun.com/Alive/Lc5E
https://rise4fun.com/Alive/kdf

llvm-svn: 323437
2018-01-25 16:34:36 +00:00
Sanjay Patel 5a0cdac174 [InstCombine] canonicalize shifty abs(): ashr+add+xor --> cmp+neg+sel
We want to do this for 2 reasons:
1. Value tracking does not recognize the ashr variant, so it would fail to match for cases like D39766.
2. DAGCombiner does better at producing optimal codegen when we have the cmp+sel pattern.

More detail about what happens in the backend:
1. DAGCombiner has a generic transform for all targets to convert the scalar cmp+sel variant of abs 
   into the shift variant. That is the opposite of this IR canonicalization.
2. DAGCombiner has a generic transform for all targets to convert the vector cmp+sel variant of abs 
   into either an ABS node or the shift variant. That is again the opposite of this IR canonicalization.
3. DAGCombiner has a generic transform for all targets to convert the exact shift variants produced by #1 or #2
   into an ISD::ABS node. Note: It would be an efficiency improvement if we had #1 go directly to an ABS node 
   when that's legal/custom.
4. The pattern matching above is incomplete, so it is possible to escape the intended/optimal codegen in a 
   variety of ways.
   a. For #2, the vector path is missing the case for setlt with a '1' constant.
   b. For #3, we are missing a match for commuted versions of the shift variants.
5. Therefore, this IR canonicalization can only help get us to the optimal codegen. The version of cmp+sel 
   produced by this patch will be recognized in the DAG and converted to an ABS node when possible or the 
   shift sequence when not.
6. In the following examples with this patch applied, we may get conditional moves rather than the shift 
   produced by the generic DAGCombiner transforms. The conditional move is created using a target-specific 
   decision for any given target. Whether it is optimal or not for a particular subtarget may be up for debate.

define i32 @abs_shifty(i32 %x) {
  %signbit = ashr i32 %x, 31 
  %add = add i32 %signbit, %x  
  %abs = xor i32 %signbit, %add 
  ret i32 %abs
}

define i32 @abs_cmpsubsel(i32 %x) {
  %cmp = icmp slt i32 %x, zeroinitializer
  %sub = sub i32 zeroinitializer, %x
  %abs = select i1 %cmp, i32 %sub, i32 %x
  ret i32 %abs
}

define <4 x i32> @abs_shifty_vec(<4 x i32> %x) {
  %signbit = ashr <4 x i32> %x, <i32 31, i32 31, i32 31, i32 31> 
  %add = add <4 x i32> %signbit, %x  
  %abs = xor <4 x i32> %signbit, %add 
  ret <4 x i32> %abs
}

define <4 x i32> @abs_cmpsubsel_vec(<4 x i32> %x) {
  %cmp = icmp slt <4 x i32> %x, zeroinitializer
  %sub = sub <4 x i32> zeroinitializer, %x
  %abs = select <4 x i1> %cmp, <4 x i32> %sub, <4 x i32> %x
  ret <4 x i32> %abs
}

> $ ./opt -instcombine shiftyabs.ll -S | ./llc -o - -mtriple=x86_64 -mattr=avx 
> abs_shifty:
> 	movl	%edi, %eax
> 	negl	%eax
> 	cmovll	%edi, %eax
> 	retq
> 
> abs_cmpsubsel:
> 	movl	%edi, %eax
> 	negl	%eax
> 	cmovll	%edi, %eax
> 	retq
> 
> abs_shifty_vec:
> 	vpabsd	%xmm0, %xmm0
> 	retq
> 
> abs_cmpsubsel_vec:
> 	vpabsd	%xmm0, %xmm0
> 	retq
> 
> $ ./opt -instcombine shiftyabs.ll -S | ./llc -o - -mtriple=aarch64
> abs_shifty:
> 	cmp	w0, #0                  // =0
> 	cneg	w0, w0, mi
> 	ret
> 
> abs_cmpsubsel: 
> 	cmp	w0, #0                  // =0
> 	cneg	w0, w0, mi
> 	ret
>                                        
> abs_shifty_vec: 
> 	abs	v0.4s, v0.4s
> 	ret
> 
> abs_cmpsubsel_vec: 
> 	abs	v0.4s, v0.4s
> 	ret
> 
> $ ./opt -instcombine shiftyabs.ll -S | ./llc -o - -mtriple=powerpc64le 
> abs_shifty:  
> 	srawi 4, 3, 31
> 	add 3, 3, 4
> 	xor 3, 3, 4
> 	blr
> 
> abs_cmpsubsel:
> 	srawi 4, 3, 31
> 	add 3, 3, 4
> 	xor 3, 3, 4
> 	blr
> 
> abs_shifty_vec:   
> 	vspltisw 3, -16
> 	vspltisw 4, 15
> 	vsubuwm 3, 4, 3
> 	vsraw 3, 2, 3
> 	vadduwm 2, 2, 3
> 	xxlxor 34, 34, 35
> 	blr
> 
> abs_cmpsubsel_vec: 
> 	vspltisw 3, -16
> 	vspltisw 4, 15
> 	vsubuwm 3, 4, 3
> 	vsraw 3, 2, 3
> 	vadduwm 2, 2, 3
> 	xxlxor 34, 34, 35
> 	blr
>

Differential Revision: https://reviews.llvm.org/D40984

llvm-svn: 320921
2017-12-16 16:41:17 +00:00
Sanjay Patel 6840c5ff75 [ValueTracking, InstCombine] canonicalize fcmp ord/uno with non-NAN ops to null constants
This is a preliminary step towards solving the remaining part of PR27145 - IR for isfinite():
https://bugs.llvm.org/show_bug.cgi?id=27145

In order to solve that one more generally, we need to add matching for and/or of fcmp ord/uno
with a constant operand.

But while looking at those patterns, I realized we were missing a canonicalization for nonzero
constants. Rather than limiting to just folds for constants, we're adding a general value
tracking method for this based on an existing DAG helper.

By transforming everything to 0.0, we can simplify the existing code in foldLogicOfFCmps()
and pick up missing vector folds.
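
A minimal sketch of the canonicalization (constant chosen only for illustration; 42.0 is never NaN, so the compare really only tests %x):

  %c = fcmp ord float %x, 4.200000e+01
  =>
  %c = fcmp ord float %x, 0.000000e+00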

Differential Revision: https://reviews.llvm.org/D37427

llvm-svn: 312591
2017-09-05 23:13:13 +00:00
Sanjay Patel bc6da4e40f [InstCombine] replace unnecessary fcmp fold with assert
See https://reviews.llvm.org/rL312411 for related InstSimplify tests.

llvm-svn: 312421
2017-09-02 18:10:29 +00:00
Sanjay Patel 64fc5daf42 [InstCombine] combine foldAndOfFCmps and foldOrOfFcmps; NFCI
In addition to removing chunks of duplicated code, we don't
want these to diverge. If there's a fold for one, there
should be a fold of the other via DeMorgan's Laws.

llvm-svn: 312420
2017-09-02 17:53:33 +00:00
Sanjay Patel 275bb5a14e [InstCombine] fix misnamed locals and use them to reduce code; NFCI
We had these locals:
Value *Op0RHS = LHS->getOperand(1);
Value *Op1LHS = RHS->getOperand(0);
...so we confusingly transposed the meaning of left/right and op0/op1.

llvm-svn: 312418
2017-09-02 17:17:17 +00:00
Sanjay Patel da6f9b2fee [InstCombine] remove unnecessary code; NFC
llvm-svn: 312416
2017-09-02 16:32:37 +00:00
Sanjay Patel 4c52f765a5 [InstCombine] move related functions next to each other; NFC
This makes it easier to see that they're almost duplicates.
As with the similar icmp functions, there should be identical 
folds for both logic ops because those are DeMorganized variants.

llvm-svn: 312415
2017-09-02 16:30:27 +00:00
Craig Topper d3b465606a [InstCombine] Don't require the compare types to be the same in getMaskedTypeForICmpPair.
A future patch will make the code look through truncates feeding the compare. So the compares might be different types but the pretruncated types might be the same.

This should be safe because we still require the same Value* to be used truncated or not in both compares. So that serves to ensure the types are the same.

llvm-svn: 312381
2017-09-01 21:27:31 +00:00
Craig Topper 085c1f4dea [InstCombine] When converting decomposeBitTestICmp's APInt return to ConstantInt, make sure we use the type from the Value* that was also returned from decomposeBitTestICmp.
Previously we used the type from the LHS of the compare, but a future patch will change decomposeBitTestICmp to look through truncates so it will return a pretruncated Value* and the type needs to match that.

llvm-svn: 312380
2017-09-01 21:27:29 +00:00
Craig Topper ec4b82571c [InstCombine] Remove check for sext of vector icmp from shouldOptimizeCast
Looks like for 'and' and 'or' we end up performing at least some of the transformations this is blocking in a roundabout way anyway.

For 'and sext(cmp1), sext(cmp2)' we end up later turning it into 'select cmp1, sext(cmp2), 0'. Then we optimize that back to sext (and cmp1, cmp2). This is the same result we would have gotten if shouldOptimizeCast hadn't blocked it. We do something analogous for 'or'.

With this patch we allow that transformation to happen directly in foldCastedBitwiseLogic. And we now support the same thing for 'xor'. This is definitely opening up many other cases, but since we already went around it for some cases hopefully it's ok.

Differential Revision: https://reviews.llvm.org/D36213

llvm-svn: 311508
2017-08-22 23:40:15 +00:00
Craig Topper 775ffcc8f5 [InstCombine] Move the checks for pointer types in getMaskedTypeForICmpPair earlier in the function
I don't think there's any reason to have them scattered about and on all 4 operands. We already have an early check that both compares must be the same type. And within a given compare the LHS and RHS must have the same type. Beyond that I don't think there's any way this function returns anything valid for pointer types. So let's just return early and be done with it.

Differential Revision: https://reviews.llvm.org/D36561

llvm-svn: 311383
2017-08-21 21:00:45 +00:00
Craig Topper 0aa3a19512 Recommit r310869, "[InstSimplify][InstCombine] Modify the interface of decomposeBitTestICmp and use it in the InstSimplify"
This recommits r310869, with the moved files and no extra changes.

Original commit message:

This addresses a fixme in InstSimplify about using decomposeBitTest. This also fixes InstSimplify to handle ugt and ult compares too.

I've modified the interface a little to return only the APInt version of the mask that InstSimplify needs. InstCombine now has a small wrapper routine to create a Constant out of it. I've also dropped the returning of 0 since InstSimplify doesn't need that. So InstCombine creates a zero constant itself.

I also had to make decomposeBitTest support vectors since InstSimplify needs that.

As InstSimplify can't use something from the Transforms library, I've moved the CmpInstAnalysis code to the Analysis library.

Differential Revision: https://reviews.llvm.org/D36593

llvm-svn: 310889
2017-08-14 21:39:51 +00:00
Craig Topper 69fa8e0d99 Revert r310869 "[InstSimplify][InstCombine] Modify the interface of decomposeBitTestICmp and use it in the InstSimplify"
Failed to add the two files that moved. And then added an extra change I didn't mean to while trying to fix that. Reverting everything.

llvm-svn: 310873
2017-08-14 19:09:32 +00:00
Craig Topper 2f0b450666 [InstSimplify][InstCombine] Modify the interface of decomposeBitTestICmp and use it in the InstSimplify
This addresses a fixme in InstSimplify about using decomposeBitTest. This also fixes InstSimplify to handle ugt and ult compares too.

I've modified the interface a little to return only the APInt version of the mask that InstSimplify needs. InstCombine now has a small wrapper routine to create a Constant out of it. I've also dropped the returning of 0 since InstSimplify doesn't need that. So InstCombine creates a zero constant itself.

I also had to make decomposeBitTest support vectors since InstSimplify needs that.

As InstSimplify can't use something from the Transforms library, I've moved the CmpInstAnalysis code to the Analysis library.

Differential Revision: https://reviews.llvm.org/D36593

llvm-svn: 310869
2017-08-14 18:49:42 +00:00
Craig Topper f720099007 [InstCombine] Simplify and inline FoldOrWithConstants/FoldXorWithConstants
Summary:
These functions were overly complicated. The body of this function was rechecking for an And operation to find the constant, but we already knew we were looking at two Ands ORed together and the pieces are in variables. We already had earlier nearby code that checked for ConstantInts. So just inline the remaining parts into the earlier code.

Next step is to use m_APInt instead of ConstantInt.

Reviewers: spatel, efriedma, davide, majnemer

Reviewed By: spatel

Subscribers: zzheng, llvm-commits

Differential Revision: https://reviews.llvm.org/D36439

llvm-svn: 310806
2017-08-14 00:04:21 +00:00
Craig Topper 9a6110b2d3 [InstCombine] Make (X|C1)^C2 -> X^(C1^C2) iff X&~C1 == 0 work for splat vectors
This also corrects the description to match what was actually implemented. The old comment said X^(C1|C2), but it implemented X^((C1|C2)&~(C1&C2)). I believe ((C1|C2)&~(C1&C2)) is equivalent to (C1^C2).

Differential Revision: https://reviews.llvm.org/D36505

llvm-svn: 310658
2017-08-10 20:35:34 +00:00
Craig Topper 57b4d8646b [InstCombine] Fix a crash in getSelectCondition if we happen to have two inverse vectors of i1 constants.
We used to try to truncate the constant vector to vXi1, but if it's already i1 this would fail. Instead we now use IRBuilder::getZExtOrTrunc which should check the type and only create a trunc if needed. I believe this should trigger constant folding in the IRBuilder and ultimately do the same thing just with the additional type check.

llvm-svn: 310639
2017-08-10 17:48:14 +00:00
Craig Topper 5706c01c0b [InstCombine] Use regular dyn_cast instead of a matcher for a simple case. NFC
llvm-svn: 310446
2017-08-09 06:17:48 +00:00
Aaron Ballman 428f0fe910 Removing an unused variable that was missed with the refactoring in r310272; NFC.
llvm-svn: 310285
2017-08-07 19:26:17 +00:00
Craig Topper 7091a743b4 [InstCombine] Support (X | C1) & C2 --> (X & C2^(C1&C2)) | (C1&C2) for vector splats
Note the original code I deleted incorrectly listed this as (X | C1) & C2 --> (X & C2^(C1&C2)) | C1, which is only valid if C1 is a subset of C2. This relied on SimplifyDemandedBits to remove any extra bits from C1 before we got to that code.

My new implementation avoids relying on that behavior so that it can be naively verified with Alive.
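
A worked splat example (values invented; with C1 = 5 and C2 = 12 we get C1&C2 = 4 and C2^(C1&C2) = 8):

  %o = or <2 x i32> %x, <i32 5, i32 5>
  %r = and <2 x i32> %o, <i32 12, i32 12>
  =>
  %a = and <2 x i32> %x, <i32 8, i32 8>
  %r = or <2 x i32> %a, <i32 4, i32 4>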

Differential Revision: https://reviews.llvm.org/D36384

llvm-svn: 310272
2017-08-07 18:10:39 +00:00
Craig Topper 576fb91aef [InstCombine] Remove shift handling from OptAndOp.
Summary: This is all handled by SimplifyDemandedBits.

Reviewers: spatel, davide

Reviewed By: davide

Subscribers: davide, llvm-commits

Differential Revision: https://reviews.llvm.org/D36382

llvm-svn: 310234
2017-08-06 23:30:49 +00:00
Craig Topper a1693a2ed3 [InstCombine] Support (X ^ C1) & C2 --> (X & C2) ^ (C1&C2) for vector splats.
llvm-svn: 310233
2017-08-06 23:11:49 +00:00
Craig Topper 9cbdbefd0f [InstCombine] Support '(C - X) ^ signmask -> (C + signmask - X)' and '(X + C) ^ signmask -> (X + C + signmask)' for vector splats.
llvm-svn: 310232
2017-08-06 22:17:21 +00:00
Craig Topper b5bf016015 [InstCombine] Support ~(c-X) --> X+(-c-1) and ~(X-c) --> (-c-1)-X for splat vectors.
llvm-svn: 310195
2017-08-06 06:28:41 +00:00
Craig Topper 9ffda5ab86 [InstCombine] Fold (C - X) ^ signmask -> (C + signmask - X).
llvm-svn: 310186
2017-08-05 20:00:44 +00:00
Craig Topper 760ff6ee87 [InstCombine] Remove the (not (sext)) case from foldBoolSextMaskToSelect and inline the remaining code to match visitOr
Summary:
The (not (sext)) case is really (xor (sext), -1) which should have been simplified to (sext (xor, 1)) before we got here. So we shouldn't need to handle it.

With that taken care of we only need two cases, so we don't need the swap anymore. This brings us in sync with the equivalent code in visitOr, so inline this to match.

Reviewers: spatel, eli.friedman, majnemer

Reviewed By: spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D36240

llvm-svn: 310063
2017-08-04 16:07:20 +00:00
Craig Topper 4068e4eec5 [InstCombine] Remove explicit code for folding (xor(zext(cmp)), 1) and (xor(sext(cmp)), -1) to ext(!cmp).
As far as I can tell this should be handled by foldCastedBitwiseLogic which is called later in visitXor.

Differential Revision: https://reviews.llvm.org/D36214

llvm-svn: 309882
2017-08-02 20:30:27 +00:00
Craig Topper ae9b87d10c [InstCombine] Support sext in foldLogicCastConstant
This adds support for sext in foldLogicCastConstant. This is a prerequisite for D36214.

Differential Revision: https://reviews.llvm.org/D36234

llvm-svn: 309880
2017-08-02 20:25:56 +00:00
Sanjay Patel dac0ab272c [InstCombine] allow mask hoisting transform for vector types
llvm-svn: 309627
2017-07-31 21:01:53 +00:00
Craig Topper 2072aca51c [InstCombine] Move (0 - x) & 1 --> x & 1 to SimplifyDemandedUseBits.
This removes a dedicated matcher and allows us to support more than just an AND masking the lower bit.

llvm-svn: 308124
2017-07-16 05:37:58 +00:00
Craig Topper d918d5b36b [InstCombine] Improve the expansion in SimplifyUsingDistributiveLaws to handle cases where one side doesn't simplify, but the other side resolves to an identity value
Summary:
If one side simplifies to the identity value for inner opcode, we can replace the value with just the operation that can't be simplified.

I've removed a couple now unneeded special cases in visitAnd and visitOr. There are probably other cases I missed.

Reviewers: spatel, majnemer, hfinkel, dberlin

Reviewed By: spatel

Subscribers: grandinj, llvm-commits, spatel

Differential Revision: https://reviews.llvm.org/D35451

llvm-svn: 308111
2017-07-15 21:49:49 +00:00
Sanjay Patel 3437ee2740 [InstCombine] improve (1 << x) & 1 --> zext(x == 0) folding
1. Add a one-use check to prevent increasing instruction count.
2. Generalize the pattern matching to include vector types.
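
A minimal vector sketch of the generalized fold (invented names):

  %shl = shl <2 x i32> <i32 1, i32 1>, %x
  %and = and <2 x i32> %shl, <i32 1, i32 1>
  =>
  %cmp = icmp eq <2 x i32> %x, zeroinitializer
  %and = zext <2 x i1> %cmp to <2 x i32>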

llvm-svn: 308105
2017-07-15 17:26:01 +00:00
Sanjay Patel 55b9f88ecc [InstCombine] allow (0 - x) & 1 --> x & 1 for vectors
llvm-svn: 308098
2017-07-15 15:29:47 +00:00
Sanjay Patel 27339133a7 [InstCombine] remove dead code/tests; NFCI
These patterns and tests were added to InstSimplify with:
https://reviews.llvm.org/rL303004

llvm-svn: 308096
2017-07-15 15:01:33 +00:00
Craig Topper fde4723ebe [IR] Add Type::isIntOrIntVectorTy(unsigned) similar to the existing isIntegerTy(unsigned), but also works for vectors.
llvm-svn: 307492
2017-07-09 07:04:03 +00:00
Craig Topper bb4069e439 [InstCombine] Make InstCombine's IRBuilder be passed by reference everywhere
Previously the InstCombiner class contained a pointer to an IR builder that had been passed to the constructor. Sometimes this pointer was passed to helper functions directly, and sometimes it was dereferenced and passed by reference.

This patch makes it a reference everywhere, including in the InstCombiner class itself, so there is more consistency. This is a large but mechanical patch. I've done very minimal formatting changes on it despite what clang-format wanted to do.

llvm-svn: 307451
2017-07-07 23:16:26 +00:00
Craig Topper 79ab643da8 [Constants] If we already have a ConstantInt*, prefer to use isZero/isOne/isMinusOne instead of isNullValue/isOneValue/isAllOnesValue inherited from Constant. NFCI
Going through the Constant methods requires redetermining that the Constant is a ConstantInt and then calling isZero/isOne/isMinusOne.

llvm-svn: 307292
2017-07-06 18:39:47 +00:00
Craig Topper 95e4142f94 [InstCombine] Change helper method to a file local static method. NFC
llvm-svn: 307275
2017-07-06 16:24:23 +00:00
Craig Topper fc42acef92 [InstCombine] Clarify comment to mention other transform that it does. NFC
llvm-svn: 307274
2017-07-06 16:24:22 +00:00
Craig Topper 22795de20a [InstCombine] Add single use checks to SimplifyBSwap to ensure we are really saving instructions
Bswap isn't a simple operation so we need to make sure we are really removing a call to it before doing these simplifications.

For the case when both LHS and RHS are bswaps, I've allowed it to be moved if either LHS or RHS has a single use, since that at least allows us to move it later where it might find another bswap to combine with, and it decreases the use count on the other side so maybe the other user can be optimized.

Differential Revision: https://reviews.llvm.org/D34974

llvm-svn: 307273
2017-07-06 16:24:21 +00:00
Craig Topper cc418b656a [InstCombine] Use CmpInst::Predicate with m_Cmp instead of ICmpInst::Predicate. NFC
There isn't really an ICmpInst version so we're just accessing the CmpInst version through inheritance.

llvm-svn: 307199
2017-07-05 20:31:00 +00:00
Craig Topper 8036970008 [InstCombine] Add a TODO for a probable missing single use check. NFC
Will try to fix it soon, but in case I forget.

llvm-svn: 307003
2017-07-03 05:54:16 +00:00
Craig Topper 766ce6e9cf [InstCombine] Support BITWISE_OP( BSWAP(x), CONSTANT ) -> BSWAP( BITWISE_OP(x, BSWAP(CONSTANT) ) ) for splat vectors.
llvm-svn: 307002
2017-07-03 05:54:15 +00:00
Craig Topper 32fce4d647 [InstCombine] Remove support for BITWISE_OP(CONSTANT, BSWAP(x)) -> BSWAP(OP(BSWAP(CONSTANT), x)).
Constants were already canonicalized to the right hand side before we got here.

llvm-svn: 307000
2017-07-03 05:54:13 +00:00
Craig Topper 1e4643a98e [InstCombine] Support BITWISE_OP(BSWAP(A),BSWAP(B))->BSWAP(BITWISE_OP(A, B)) for vectors.
llvm-svn: 306999
2017-07-03 05:54:13 +00:00
Craig Topper c6948c25cc [InstCombine] Remove an if that should have been guaranteed by the caller. Replace with an assert. NFC
llvm-svn: 306997
2017-07-03 05:54:11 +00:00
Craig Topper f60ab47098 [InstCombine] Fold (a | b) ^ (~a | ~b) --> ~(a ^ b) and (a & b) ^ (~a & ~b) --> ~(a ^ b)
Summary:
I came across this while thinking about what would happen if one of the operands in this xor pattern was itself inverted, as in (A & ~B) ^ (~A & B) -> (A ^ B).

The patterns here assume that the (~a | ~b) will be demorganed to ~(a & b) first. Though I wonder if there's a multiple use case that would prevent the demorgan.
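
A minimal sketch of the 'or' form (invented names):

  %or1 = or i32 %a, %b
  %na  = xor i32 %a, -1
  %nb  = xor i32 %b, -1
  %or2 = or i32 %na, %nb
  %r   = xor i32 %or1, %or2
  =>
  %ab = xor i32 %a, %b
  %r  = xor i32 %ab, -1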

Reviewers: spatel

Reviewed By: spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D34870

llvm-svn: 306967
2017-07-02 01:15:51 +00:00
Craig Topper 880bf82685 [InstCombine] In foldXorToXor, move the commutable matcher from the LHS match to the RHS match. No meaningful change intended.
There are two conditions ORed here with similar checks, and each contains two matches that must be true for the if to succeed. With the commutable match on the first half of the OR, both ifs basically have the same first part and only the second part distinguishes them. With this change we move the commutable match to the second half and make the first half unique.

This caused some tests to change because we now produce a commuted result, but this shouldn't matter in practice.

llvm-svn: 306800
2017-06-30 07:37:41 +00:00
Craig Topper 798a19ab8e [InstCombine] In visitXor, use m_Not on the instruction itself instead of looking for all ones in Op1. This is consistent with 3 other not checks before this one. NFCI
llvm-svn: 306617
2017-06-29 00:07:08 +00:00
Craig Topper 0de5e6a729 [InstCombine] Add one use checks to or/and->xnor folding
If the components of the and/or had multiple uses, this transform created an additional instruction.

This patch makes sure we remove one of the components.

Differential Revision: https://reviews.llvm.org/D34498

llvm-svn: 306027
2017-06-22 16:12:02 +00:00
Sanjay Patel d1e811979c [InstCombine] reverse bitcast + bitwise-logic canonicalization (PR33138)
There are 2 parts to this patch made simultaneously to avoid a regression.

We're reversing the canonicalization that moves bitwise vector ops before bitcasts. 
We're moving bitwise vector ops *after* bitcasts instead. That's the 1st and 3rd hunks 
of the patch. The motivation is that there's only one fold that currently depends on 
the existing canonicalization (see next), but there are many folds that would 
automatically benefit from the new canonicalization. 
PR33138 ( https://bugs.llvm.org/show_bug.cgi?id=33138 ) shows why/how we have these 
patterns in IR.

There's an or(and,andn) pattern that requires an adjustment in order to continue matching
to 'select' because the bitcast changes position. This match is unfortunately complicated 
because it requires 4 logic ops with optional bitcast and sext ops.

Test diffs:

  1. The bitcast.ll and bitcast-bigendian.ll changes show the most basic difference - 
     bitcast comes before logic.
  2. There are also tests with no diffs in bitcast.ll that verify that we're still doing 
     folds that were enabled by the previous canonicalization.
  3. icmp-xor-signbit.ll shows the payoff. We don't need to adjust existing icmp patterns 
     to look through bitcasts.
  4. logical-select.ll contains several tests for the or(and,andn) --> select fold to 
     verify that we are still handling those cases. The lone diff shows the movement of 
     the bitcast from the new canonicalization rule.

Differential Revision: https://reviews.llvm.org/D33517

llvm-svn: 306011
2017-06-22 15:46:54 +00:00
Sanjay Patel e800df8eac [InstCombine] add peekThroughBitcast() helper; NFC
This is an NFC portion of D33517. We have similar helpers in the backend.

llvm-svn: 306008
2017-06-22 15:28:01 +00:00
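
The helper presumably just walks through any chain of bitcasts to the underlying value. A plausible sketch in the style of the surrounding code (assumes the llvm namespace and llvm/IR/Instructions.h; not a verbatim copy of the commit):

  // Strip any chain of bitcasts and return the underlying value.
  static Value *peekThroughBitcast(Value *V) {
    while (auto *BC = dyn_cast<BitCastInst>(V))
      V = BC->getOperand(0);
    return V;
  }
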
Craig Topper a074c101e5 [InstCombine] Cleanup using commutable matchers. Make a couple helper methods standalone static functions. Put 'if' around variable declaration instead of after. NFC
llvm-svn: 305941
2017-06-21 18:57:00 +00:00
Sanjay Patel 4ccbd58d70 [InstCombine] fix code/test comments for r305792; NFC
These diffs were in the last version of the patch in D33342,
but I accidentally committed the previous rev. 

llvm-svn: 305793
2017-06-20 12:45:46 +00:00
Sanjay Patel adca825dc1 [InstCombine] try to canonicalize xor-of-icmps to and-of-icmps
We have a large portfolio of folds for and-of-icmps and or-of-icmps in InstSimplify and InstCombine, 
but hardly anything for xor-of-icmps. Rather than trying to rethink and translate all of those folds, 
we can use the truth table definition of xor:

X ^ Y --> (X | Y) & !(X & Y)

...to see if we can convert the xor to and/or and then use the existing folds.

http://rise4fun.com/Alive/J9v

Differential Revision: https://reviews.llvm.org/D33342

llvm-svn: 305792
2017-06-20 12:40:55 +00:00
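
The truth-table identity driving the rewrite can be confirmed over the four boolean cases. A standalone check, not part of the patch:

  #include <cassert>

  int main() {
    const bool Vals[] = {false, true};
    for (bool X : Vals)
      for (bool Y : Vals)
        // X ^ Y --> (X | Y) & !(X & Y)
        assert((X != Y) == ((X || Y) && !(X && Y)));
  }
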
Craig Topper a7529b68cc [InstCombine] Cleanup some duplicated one use checks
Summary:
These 4 patterns have the same one-use check repeated twice for each: once without a cast and once with. But the cast has no effect on which method is called.

For the OR case I believe it is always profitable regardless of the number of uses since we'll never increase the instruction count.

For the AND case I believe it is profitable if the pair of xors has one use such that we'll get rid of it completely. Or if the C value is something freely invertible, in which case the not doesn't cost anything.

Reviewers: spatel, majnemer

Reviewed By: spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D34308

llvm-svn: 305705
2017-06-19 16:23:49 +00:00
Craig Topper da6ea0d3e8 [InstCombine] Fold (!iszero(A & K1) & !iszero(A & K2)) -> (A & (K1 | K2)) == (K1 | K2) if K1 and K2 are a 1-bit mask
Summary: This is the DeMorganized version of the case we already handle for the OR of iszero.

Reviewers: spatel

Reviewed By: spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D34244

llvm-svn: 305548
2017-06-16 05:10:37 +00:00
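
With K1 and K2 restricted to single-bit masks, this fold and its OR-of-iszero dual (the rL305487 commit below) can both be checked exhaustively. A standalone sketch, not LLVM code:

  #include <cassert>
  #include <cstdint>

  int main() {
    for (unsigned A = 0; A < 256; ++A)
      for (unsigned I = 0; I < 8; ++I)
        for (unsigned J = 0; J < 8; ++J) {
          uint8_t K1 = (uint8_t)(1u << I), K2 = (uint8_t)(1u << J);
          uint8_t M = (uint8_t)(K1 | K2);
          // (!iszero(A & K1) & !iszero(A & K2)) --> (A & (K1|K2)) == (K1|K2)
          assert((((A & K1) != 0) && ((A & K2) != 0)) == ((A & M) == M));
          // (iszero(A & K1) | iszero(A & K2)) --> (A & (K1|K2)) != (K1|K2)
          assert((((A & K1) == 0) || ((A & K2) == 0)) == ((A & M) != M));
        }
  }
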
Craig Topper 2ba991ff2c [InstCombine] Add two FIXMEs for bad single use checks. NFC
llvm-svn: 305510
2017-06-15 21:38:48 +00:00
Craig Topper f2d3e6d3d5 [InstCombine] Make the context instruction parameter of foldOrOfICmps a reference to discourage passing nullptr and to remove the '&' from all of the call sites. NFC
llvm-svn: 305493
2017-06-15 19:09:51 +00:00
Craig Topper 6eec9e21a5 [InstCombine] Handle (iszero(A & K1) | iszero(A & K2)) -> (A & (K1 | K2)) != (K1 | K2) when one of the Ands is commuted relative to the other
Currently we expect A to be on the same side in both Ands but nothing guarantees that.

While there, also switch to using matchers for some of the code.

Differential Revision: https://reviews.llvm.org/D34230

llvm-svn: 305487
2017-06-15 17:55:20 +00:00
Craig Topper a420562257 [InstCombine] Pass a proper context instruction to all of the calls into InstSimplify
Summary: This matches the behavior we already had for compares and makes us consistent everywhere.

Reviewers: dberlin, hfinkel, spatel

Reviewed By: dberlin

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D33604

llvm-svn: 305049
2017-06-09 03:21:29 +00:00
Craig Topper 73ba1c84be [InstCombine][InstSimplify] Use APInt::isNullValue/isOneValue to reduce compiled code for comparing APInts with 0 and 1. NFC
These methods are specifically optimized to only count leading zeros, without an additional uint64_t compare.

llvm-svn: 304876
2017-06-07 07:40:37 +00:00
Craig Topper d4039f7283 [InstCombine] Add an InstCombine specific wrapper around isKnownToBeAPowerOfTwo to shorten code. NFC
We have wrappers for several other ValueTracking methods that take care of passing all of the analysis and assumption cache parameters. This extends it to isKnownToBeAPowerOfTwo.

llvm-svn: 303924
2017-05-25 21:51:12 +00:00
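
Judging from the existing wrappers it mentions, the new one presumably forwards to the ValueTracking query with the InstCombiner's cached analyses. A sketch assuming members named DL, AC, and DT (not a verbatim copy of the commit):

  // InstCombiner member wrapper: supply the DataLayout, AssumptionCache,
  // and DominatorTree the pass already holds.
  bool isKnownToBeAPowerOfTwo(const Value *V, bool OrZero = false,
                              unsigned Depth = 0,
                              const Instruction *CxtI = nullptr) {
    return llvm::isKnownToBeAPowerOfTwo(V, DL, OrZero, Depth, &AC, CxtI, &DT);
  }
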
Sanjay Patel 5e456b943a [InstCombine] add helper to foldXorOfICmps(); NFCI
Also, fix the old-style capitalization of the related functions
and move them to the 'private' section of the class since they
are just helpers of the visit* functions.

As shown in the post-commit comments for D32143, we are missing
folds for xor-of-icmps. 

llvm-svn: 303381
2017-05-18 20:53:16 +00:00
Craig Topper 1a36b7d836 [ValueTracking] Replace all uses of ComputeSignBit with computeKnownBits.
This patch finishes off the conversion of ComputeSignBit to computeKnownBits.

Differential Revision: https://reviews.llvm.org/D33166

llvm-svn: 303035
2017-05-15 06:39:41 +00:00
Sanjay Patel 40a87a909b [InstCombine] remove fold that swaps xor/or with constants; NFCI
// (X ^ C1) | C2 --> (X | C2) ^ (C1&~C2)

This canonicalization was added at:
https://reviews.llvm.org/rL7264 

By moving xors out/down, we can more easily combine constants. I'm adding
tests that do not change with this patch, so we can verify that those kinds
of transforms are still happening.

This is no-functional-change-intended because there's a later fold:
// (X^C)|Y -> (X|Y)^C iff Y&C == 0
...and demanded-bits appears to guarantee that any fold that would have
hit the fold we're removing here would be caught by that 2nd fold.

Similar reasoning was used in:
https://reviews.llvm.org/rL299384

The larger motivation for removing this code is that it could interfere with 
the fix for PR32706:
https://bugs.llvm.org/show_bug.cgi?id=32706

I.e., we're not checking whether the 'xor' is actually a 'not', so we could reverse
a 'not' optimization and cause an infinite loop by altering an 'xor X, -1'. 

Differential Revision: https://reviews.llvm.org/D33050

llvm-svn: 302733
2017-05-10 21:33:55 +00:00
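
Both the fold being removed and the later fold said to subsume it are easy to confirm exhaustively. A standalone check over 8-bit values, not part of the patch:

  #include <cassert>
  #include <cstdint>

  int main() {
    for (unsigned X = 0; X < 256; ++X)
      for (unsigned C1 = 0; C1 < 256; ++C1)
        for (unsigned C2 = 0; C2 < 256; ++C2) {
          // (X ^ C1) | C2 --> (X | C2) ^ (C1&~C2)
          assert((uint8_t)((X ^ C1) | C2) == (uint8_t)((X | C2) ^ (C1 & ~C2)));
          // (X^C)|Y -> (X|Y)^C iff Y&C == 0   (here C = C1, Y = C2)
          if ((C2 & C1) == 0)
            assert((uint8_t)((X ^ C1) | C2) == (uint8_t)((X | C2) ^ C1));
        }
  }
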
Sanjay Patel 7caaa79879 [InstCombine] clean up matchDeMorgansLaws(); NFCI
The motivation for getting rid of dyn_castNotVal is to allow fixing:
https://bugs.llvm.org/show_bug.cgi?id=32706

So this was supposed to be functional-change-intended for the case
of inverting constants and applying DeMorgan. However, I can't find 
any cases where that pattern will actually get to matchDeMorgansLaws()
because we have other folds in visitAnd/visitOr that do the same
thing. So this ends up just being a clean-up patch with slight efficiency
improvement, but no-functional-change-intended.

llvm-svn: 302581
2017-05-09 20:05:05 +00:00
Sanjay Patel a1c8814891 [InstCombine] add folds for not-of-shift-right
This is another step towards getting rid of dyn_castNotVal, 
so we can recommit:
https://reviews.llvm.org/rL300977

As the tests show, we were missing the lshr case for constants
and both ashr/lshr vector splat folds. The ashr case with constant
was being performed inefficiently in 2 steps. It's also possible
there was a latent bug in that case because we can't do that fold
if the constant is positive:
http://rise4fun.com/Alive/Bge

llvm-svn: 302465
2017-05-08 20:49:59 +00:00
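
Reading the fold shapes as ~(ashr C, Y) -> ashr(~C, Y), which is always valid, and ~(lshr C, Y) -> ashr(~C, Y), valid only when C's sign bit is clear (matching the positive-constant caveat above), both can be checked exhaustively at 8 bits. A standalone sketch under that assumption:

  #include <cassert>
  #include <cstdint>

  static uint8_t lshr8(uint8_t V, unsigned S) { return (uint8_t)(V >> S); }

  static uint8_t ashr8(uint8_t V, unsigned S) {
    // Portable 8-bit arithmetic shift right: replicate the sign bit on top.
    if (S == 0)
      return V;
    uint8_t Fill = (V & 0x80) ? (uint8_t)(0xFFu << (8 - S)) : (uint8_t)0;
    return (uint8_t)((V >> S) | Fill);
  }

  int main() {
    for (unsigned C = 0; C < 256; ++C)
      for (unsigned Y = 0; Y < 8; ++Y) {
        uint8_t c = (uint8_t)C, nc = (uint8_t)~C;
        assert((uint8_t)~ashr8(c, Y) == ashr8(nc, Y)); // ashr: always valid
        if (!(c & 0x80))                               // lshr: needs C >= 0
          assert((uint8_t)~lshr8(c, Y) == ashr8(nc, Y));
      }
  }
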
Sanjay Patel 599e65b1ff [InstSimplify] use ConstantRange to simplify or-of-icmps
We can simplify (or (icmp X, C1), (icmp X, C2)) to 'true' or one of the icmps in many cases.
I had to check some of these with Alive to prove to myself it's right, but everything seems 
to check out. E.g., the deleted code in instcombine was completely ignoring predicates with
mismatched signedness.

This is a follow-up to:
https://reviews.llvm.org/rL301260
https://reviews.llvm.org/D32143

llvm-svn: 302370
2017-05-07 15:11:40 +00:00
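
The same range idea covers both this and the and-of-icmps change below (rL301260): map each compare on X to the set of inputs satisfying it, and then 'or' becomes a range union and 'and' a range intersection. A toy standalone illustration for unsigned less-than compares only; the real implementation uses llvm::ConstantRange and handles all predicates:

  #include <algorithm>
  #include <cassert>
  #include <cstdint>

  // Toy model: 'icmp ult X, Limit' is satisfied exactly on [0, Limit).
  struct ULTRange { uint8_t Limit; };

  // (X u< A) | (X u< B)  ==  X u< max(A, B)   (union of intervals)
  static ULTRange simplifyOr(ULTRange A, ULTRange B) {
    return ULTRange{std::max(A.Limit, B.Limit)};
  }

  // (X u< A) & (X u< B)  ==  X u< min(A, B)   (intersection)
  static ULTRange simplifyAnd(ULTRange A, ULTRange B) {
    return ULTRange{std::min(A.Limit, B.Limit)};
  }

  int main() {
    for (unsigned A = 0; A < 256; ++A)
      for (unsigned B = 0; B < 256; ++B)
        for (unsigned X = 0; X < 256; ++X) {
          ULTRange RA = {(uint8_t)A}, RB = {(uint8_t)B};
          assert(((X < A) || (X < B)) == (X < simplifyOr(RA, RB).Limit));
          assert(((X < A) && (X < B)) == (X < simplifyAnd(RA, RB).Limit));
        }
  }
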
Sanjay Patel 6381db18fe [InstCombine] don't use DeMorgan's Law on integer constants (2nd try)
This was originally checked in here:
https://reviews.llvm.org/rL301923

And reverted here:
https://reviews.llvm.org/rL301924

Because there's a clang test that would fail after this. I fixed/removed the
offending CHECK lines in:
https://reviews.llvm.org/rL301928

So let's try this again. Original commit message:

This is the fold that causes the infinite loop in BoringSSL
(https://github.com/google/boringssl/blob/master/crypto/cipher/e_rc2.c)
when we fix instcombine demanded bits to prefer 'not' ops as in https://reviews.llvm.org/D32255.

There are 2 or 3 problems with dyn_castNotVal, and I don't think we can
reinstate https://reviews.llvm.org/D32255 until dyn_castNotVal is completely eliminated.

1. As shown here, it transforms 'not' into random xor. This transform is harmful to SCEV and codegen because 'not' can often be folded while random xor cannot.
2. It does not transform vector constants. This is actually a good thing, but if you don't believe the above argument, then we shouldn't have excluded vectors.
3. It tries to avoid transforming not(not(X)). That's nice, but it doesn't match the greedy nature of instcombine. If we DeMorganize a pattern that has an extra 'not' in it: ~(~(~X) & Y) --> (~X | ~Y)

  That's just another case of DeMorgan, so we should trust that we'll fold that pattern too: (~X | ~Y) --> ~(X & Y)

Differential Revision: https://reviews.llvm.org/D32665

llvm-svn: 301929
2017-05-02 15:31:40 +00:00
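
The chain in point 3 is just double-negation plus two applications of DeMorgan, and it checks out exhaustively. A standalone check over 8-bit values, not part of the patch:

  #include <cassert>
  #include <cstdint>

  int main() {
    for (unsigned X = 0; X < 256; ++X)
      for (unsigned Y = 0; Y < 256; ++Y) {
        // ~(~(~X) & Y) --> (~X | ~Y)
        assert((uint8_t)~(~(~X) & Y) == (uint8_t)(~X | ~Y));
        // (~X | ~Y) --> ~(X & Y)
        assert((uint8_t)(~X | ~Y) == (uint8_t)~(X & Y));
      }
  }
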
Sanjay Patel da0b4deafa revert r301923 : [InstCombine] don't use DeMorgan's Law on integer constants
There's a clang test that is wrongly using -O1 and failing after this commit.

llvm-svn: 301924
2017-05-02 14:48:23 +00:00
Sanjay Patel 096a981982 [InstCombine] don't use DeMorgan's Law on integer constants
This is the fold that causes the infinite loop in BoringSSL 
(https://github.com/google/boringssl/blob/master/crypto/cipher/e_rc2.c) 
when we fix instcombine demanded bits to prefer 'not' ops as in D32255.

There are 2 or 3 problems with dyn_castNotVal, and I don't think we can 
reinstate D32255 until dyn_castNotVal is completely eliminated.
1. As shown here, it transforms 'not' into random xor. This transform is 
   harmful to SCEV and codegen because 'not' can often be folded while 
   random xor cannot.
2. It does not transform vector constants. This is actually a good thing, 
   but if you don't believe the above argument, then we shouldn't have 
   excluded vectors.
3. It tries to avoid transforming not(not(X)). That's nice, but it doesn't
   match the greedy nature of instcombine. If we DeMorganize a pattern 
   that has an extra 'not' in it:
   ~(~(~X) & Y) --> (~X | ~Y)

   That's just another case of DeMorgan, so we should trust that we'll fold
   that pattern too:
   (~X | ~Y) --> ~(X & Y)

Differential Revision: https://reviews.llvm.org/D32665

llvm-svn: 301923
2017-05-02 14:31:30 +00:00
Sanjay Patel 59d0aeaafe [InstCombine] check one-use before applying DeMorgan nor/nand folds
If we have ~(~X & Y), it only makes sense to transform it to (X | ~Y) when we do not need 
the intermediate (~X & Y) value. In that case, we would need an extra instruction to 
generate ~Y + 'or' (as shown in the test changes).

It's ok if we have multiple uses of ~X or Y, however. In those cases, we may not reduce the
instruction count or critical path, but we might improve throughput because we can generate 
~X and ~Y in parallel. Whether that actually makes perf sense or not for a target is something 
we can't answer in IR.

Differential Revision: https://reviews.llvm.org/D32703

llvm-svn: 301848
2017-05-01 22:25:42 +00:00
Sanjay Patel 73d8c43da8 [InstCombine] fix matcher to bind to specific operand (PR32830)
Matching any random value would be very wrong:
https://bugs.llvm.org/show_bug.cgi?id=32830

llvm-svn: 301594
2017-04-27 21:55:03 +00:00
Daniel Berlin 2c75c63063 InstCombine: Use the new SimplifyQuery versions of Simplify*. Use AssumptionCache, DominatorTree, TargetLibraryInfo everywhere.
llvm-svn: 301464
2017-04-26 20:56:07 +00:00
Craig Topper ba01143193 [InstCombine] Add missing commute handling to (A | B) & (B ^ (~A)) -> (A & B)
The matching here wasn't able to handle all the possible commutes. It always assumed the not would be on the left of the xor, but that's not guaranteed.

Differential Revision: https://reviews.llvm.org/D32474

llvm-svn: 301316
2017-04-25 15:19:04 +00:00
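
The underlying identity holds for every operand ordering and is easy to verify exhaustively. A standalone check, not part of the patch:

  #include <cassert>
  #include <cstdint>

  int main() {
    for (unsigned A = 0; A < 256; ++A)
      for (unsigned B = 0; B < 256; ++B)
        // (A | B) & (B ^ (~A)) --> (A & B)
        assert((uint8_t)((A | B) & (B ^ ~A)) == (uint8_t)(A & B));
  }
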
Craig Topper c4b48a32f0 [InstCombine] Use commutable matchers to reduce some code. NFC
llvm-svn: 301294
2017-04-25 06:02:11 +00:00
Sanjay Patel 35c362ebbb [InstSimplify] use ConstantRange to simplify more and-of-icmps
We can simplify (and (icmp X, C1), (icmp X, C2)) to one of the icmps in many cases. 
I had to check some of these with Alive to prove to myself it's right, but everything 
seems to check out. E.g., the code in instcombine was completely ignoring predicates with
mismatched signedness.

Handling or-of-icmps would be a follow-up step.

Differential Revision: https://reviews.llvm.org/D32143

llvm-svn: 301260
2017-04-24 21:52:39 +00:00
Sanjay Patel 0889225f51 [InstSimplify] move (A & ~B) | (A ^ B) -> (A ^ B) from InstCombine
This is a straight cut and paste, but there's a bigger problem: if this
fold exists for simplifyOr, there should be a DeMorganized version for
simplifyAnd. But more than that, we have a patchwork of ad hoc logic
optimizations in InstCombine. There should be some structure to ensure 
that we're not missing sibling folds across and/or/xor.
 

llvm-svn: 301213
2017-04-24 18:24:36 +00:00
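
The moved fold holds because any bit set in A & ~B (A set, B clear) is also set in A ^ B. A standalone exhaustive check, not part of the patch:

  #include <cassert>
  #include <cstdint>

  int main() {
    for (unsigned A = 0; A < 256; ++A)
      for (unsigned B = 0; B < 256; ++B)
        // (A & ~B) | (A ^ B) --> (A ^ B)
        assert((uint8_t)((A & ~B) | (A ^ B)) == (uint8_t)(A ^ B));
  }
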
Sanjay Patel e0c26e0640 [InstCombine] add/move folds for [not]-xor
We handled all of the commuted variants for plain xor already,
although they were scattered around and sometimes folded less
efficiently using distributive laws. We had no folds for not-xor.

Handling all of these patterns consistently is part of trying to 
reinstate:
https://reviews.llvm.org/rL300977

llvm-svn: 301144
2017-04-23 22:00:02 +00:00
Sanjay Patel d13b0bfdac [InstCombine] add pattern matches for commuted variants of xor-to-xor
There's probably some better way to write this that eliminates the
code duplication without hurting readability, but at least this
eliminates the logic holes and is hopefully slightly more efficient
than creating new instructions.

llvm-svn: 301129
2017-04-23 16:03:00 +00:00
Sanjay Patel 3b863f8a1e [InstCombine] use 'match' to reduce code; NFCI
The later uses of dyn_castNotVal in this block are either
incomplete (doesn't handle vector constants) or overstepping
(shouldn't handle constants at all), but this first use is
just unnecessary. 'I' is obviously not a constant, and it 
can't be a not-of-a-not because that would already be
instsimplified.

llvm-svn: 301088
2017-04-22 18:05:35 +00:00
Craig Topper bcfd2d1789 [APInt] Rename getSignBit to getSignMask
getSignBit is a static function that creates an APInt with only the sign bit set. getSignMask seems like a better name to convey its functionality. In fact, several places use it and then store the result in an APInt named SignMask.

Differential Revision: https://reviews.llvm.org/D32108

llvm-svn: 300856
2017-04-20 16:56:25 +00:00
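
Post-rename usage presumably looks like this (a short sketch assuming the renamed llvm::APInt API):

  // Requires llvm/ADT/APInt.h: build an i32 mask with only the sign bit set.
  #include "llvm/ADT/APInt.h"

  llvm::APInt signMaskFor32() {
    return llvm::APInt::getSignMask(32); // equal to 0x80000000
  }
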
Davide Italiano cdc937d0fc [InstCombine] Matchers work with both ConstExpr and Instructions.
So, `cast<Instruction>` is not guaranteed to succeed. Change the
code so that we create a new constant and use it in the newly
created instruction, as it's done in other places in InstCombine.

OK'ed by Sanjay/Craig. Fixes PR32686.

llvm-svn: 300495
2017-04-17 20:49:50 +00:00
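
The pitfall is that PatternMatch matchers also match constant expressions. A plausible defensive sketch (assumes llvm/IR/PatternMatch.h; illustrative, not the exact fix):

  #include "llvm/IR/Instructions.h"
  #include "llvm/IR/PatternMatch.h"
  using namespace llvm;
  using namespace llvm::PatternMatch;

  void handleNot(Value *V) {
    Value *X;
    if (match(V, m_Not(m_Value(X)))) {
      // V may be a ConstantExpr such as 'xor C, -1', so cast<Instruction>
      // would assert here; use dyn_cast and fall back to building constants.
      if (auto *I = dyn_cast<Instruction>(V)) {
        (void)I; // instruction path
      }
    }
  }
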
Sanjay Patel ef9f586bb2 [InstCombine] allow (X != C1 && X != C2) and similar patterns to match splat vector constants
llvm-svn: 300402
2017-04-15 17:55:06 +00:00