Commit Graph

588 Commits

Author SHA1 Message Date
Philip Reames fbffd126b8 [NFC] Factor out a helper function for checking if a block has a potential early implicit exit.
llvm-svn: 327065
2018-03-08 21:25:30 +00:00
Sanjay Patel 7ed0bc26ac [ValueTracking] move helpers for SelectPatterns from InstCombine to ValueTracking
Most of the folds based on SelectPatternResult belong in InstSimplify rather than
InstCombine, so the helper code should be available to other passes/analyses.

llvm-svn: 326812
2018-03-06 16:57:55 +00:00
Vedant Kumar 1a8456da17 Fix more spelling mistakes in comments of LLVM Analysis passes
Patch by Reshabh Sharma!

Differential Revision: https://reviews.llvm.org/D43939

llvm-svn: 326601
2018-03-02 18:57:02 +00:00
Vedant Kumar d319674a81 Fixed spelling mistake in comments of LLVM Analysis passes
Patch by Reshabh Sharma!

Differential Revision: https://reviews.llvm.org/D43861

llvm-svn: 326352
2018-02-28 19:08:52 +00:00
Craig Topper 301991080e [ValueTracking] Teach cannotBeOrderedLessThanZeroImpl to look through ExtractElement.
This is similar to what's done in computeKnownBits and computeSignBits. Don't do anything fancy; just collect information valid for any element.

Differential Revision: https://reviews.llvm.org/D43789

llvm-svn: 326237
2018-02-27 19:53:45 +00:00
Craig Topper 69c8972fd1 [ValueTracking] Teach cannotBeOrderedLessThanZeroImpl to handle vector constants.
Summary: This allows vector fabs to be removed in more cases.

Reviewers: spatel, arsenm, RKSimon

Reviewed By: spatel

Subscribers: wdng, llvm-commits

Differential Revision: https://reviews.llvm.org/D43739

llvm-svn: 326138
2018-02-26 22:33:17 +00:00
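A sketch of the per-element check in plain C++ (illustrative only, not LLVM's code): a constant vector cannot be ordered-less-than-zero when no element compares less than zero, and NaN elements pass because an ordered comparison against NaN is false.

  #include <array>

  // Per-element reasoning: the whole vector constant is "cannot be ordered
  // less than zero" only if no element is ordered less than zero.
  bool cannotBeOrderedLessThanZero(const std::array<float, 4> &Elts) {
    for (float E : Elts)
      if (E < 0.0f) // NaN never satisfies this ordered comparison
        return false;
    return true;
  }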
Elena Demikhovsky 945b7e5aa6 Adding a width of the GEP index to the Data Layout.
Make the width of the GEP index, which is used for address calculation, one of the pointer properties in the Data Layout:
p[address space]:size:memory_size:alignment:pref_alignment:index_size_in_bits.
The index size parameter is optional; if not specified, it is equal to the pointer size.

Until now, the InstCombiner normalized GEPs and extended the index operand to the pointer width.
That works fine if you can convert a pointer to an integer for address calculation, and all registered targets do this.
But some ISAs have a very restricted instruction set for pointer calculation. During the discussion it was decided to retrieve the information for the GEP index from the Data Layout:
http://lists.llvm.org/pipermail/llvm-dev/2018-January/120416.html

I added an interface to the Data Layout and changed the InstCombiner and some other passes to take the index width into account.
This change does not affect any in-tree target. I added tests to cover data layouts with an explicitly specified index size.

Differential Revision: https://reviews.llvm.org/D42123

llvm-svn: 325102
2018-02-14 06:58:08 +00:00
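A minimal sketch of what the index width means for offset arithmetic, assuming a hypothetical target with 64-bit pointers but a 32-bit GEP index (the constants and names are illustrative, not tied to any in-tree target):

  #include <cstdint>

  // Hypothetical target: 64-bit pointers, 32-bit GEP index width. Offset
  // arithmetic is performed and wrapped in the index width instead of being
  // extended to the full pointer width.
  constexpr unsigned IndexSizeInBits = 32;

  uint64_t scaleGEPIndex(uint64_t Index, uint64_t ElementSize) {
    const uint64_t IndexMask = ~uint64_t(0) >> (64 - IndexSizeInBits);
    return (Index * ElementSize) & IndexMask; // arithmetic in the index width
  }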
Sanjay Patel a60aec1ab7 [ValueTracking] don't crash when assumptions conflict (PR36270)
The last assume in the test says that %B12 is 0. 
The first assume says that %and1 is less than %B12. 
Therefore, %and1 is unsigned less than 0...does not compute.

That means this line:
Known.Zero.setHighBits(RHSKnown.countMinLeadingZeros() + 1);
...tries to set more bits than exist.

Differential Revision: https://reviews.llvm.org/D43052

llvm-svn: 324610
2018-02-08 14:52:40 +00:00
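One defensive way to avoid that overflow (a sketch of the idea only, using plain masks rather than APInt, and not necessarily the committed fix) is to clamp the count to the bit width before setting high bits:

  #include <algorithm>
  #include <cstdint>

  // Conflicting assumptions can make "leading zeros + 1" exceed the bit
  // width; clamp the count instead of asserting.
  void setHighKnownZeros(uint64_t &KnownZero, unsigned BitWidth,
                         unsigned MinLeadingZeros) {
    const uint64_t WidthMask =
        BitWidth >= 64 ? ~uint64_t(0) : (uint64_t(1) << BitWidth) - 1;
    unsigned N = std::min(MinLeadingZeros + 1, BitWidth); // avoid overflow
    if (N != 0)
      KnownZero |= (WidthMask << (BitWidth - N)) & WidthMask;
  }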
Simon Pilgrim 9f2ae7e2d1 [InstCombine][ValueTracking] Match non-uniform constant power-of-two vectors
Generalize existing constant matching to work with non-uniform constant vectors as well.

Differential Revision: https://reviews.llvm.org/D42818

llvm-svn: 324369
2018-02-06 18:39:23 +00:00
Sanjay Patel 1d91ec34b2 [ValueTracking] add recursion depth param to matchSelectPattern
We're getting bug reports:
https://bugs.llvm.org/show_bug.cgi?id=35807
https://bugs.llvm.org/show_bug.cgi?id=35840
https://bugs.llvm.org/show_bug.cgi?id=36045
...where we blow up the stack in value tracking because other passes are sending 
in selects that have an operand that is the select itself.

We don't currently have a reliable way to avoid analyzing dead code that may take 
non-standard forms, so bail out when things go too far.

This mimics the recursion depth limitations in other parts of value tracking.

Unfortunately, this pushes the underlying problems for other passes (jump-threading,
simplifycfg, correlated-propagation) into hiding. If someone wants to uncover those
again, the first draft of this patch on Phab would do that (it would assert rather
than bail out).

Differential Revision: https://reviews.llvm.org/D42442

llvm-svn: 323331
2018-01-24 15:20:37 +00:00
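The bail-out follows the usual depth-limit idiom in value tracking; a minimal self-contained sketch (the names are hypothetical, not LLVM's):

  // Toy stand-in for an IR value; real code walks llvm::Value operands.
  struct Expr {
    const Expr *Lhs = nullptr;
    const Expr *Rhs = nullptr;
  };

  constexpr unsigned MaxDepth = 6;

  // Every recursive query threads a Depth parameter and returns a
  // conservative "don't know" once the limit is reached, so self-referential
  // (dead) IR cannot blow the stack.
  bool matchesPattern(const Expr *E, unsigned Depth = 0) {
    if (!E || Depth >= MaxDepth)
      return false; // conservative bail-out
    // ... real pattern matching elided ...
    return matchesPattern(E->Lhs, Depth + 1) ||
           matchesPattern(E->Rhs, Depth + 1);
  }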
Sanjay Patel e63d8dda5a [ValueTracking] recognize min/max-of-min/max with notted ops (PR35875)
This was originally planned as the fix for:
https://bugs.llvm.org/show_bug.cgi?id=35834
...but simpler transforms handled that case, so I implemented a 
lesser solution. It turns out we need to handle the case with 'not'
ops too because the real code example that we are trying to solve:
https://bugs.llvm.org/show_bug.cgi?id=35875
...has extra uses of the intermediate values, so we can't rely on 
smaller canonicalizations to get us to the goal.

As with rL321672, I've tried to show every possibility in the
codegen tests because that's the simplest way to prove we're doing
the right thing in the wide variety of permutations of this pattern.

We can also show an InstCombine win because we added a fold for
this case in:
rL321998 / D41603

An Alive proof for one variant of the pattern to show that the 
InstCombine and codegen results are correct:
https://rise4fun.com/Alive/vd1

Name: min3_nots
  %nx = xor i8 %x, -1
  %ny = xor i8 %y, -1
  %nz = xor i8 %z, -1
  %cmpxz = icmp slt i8 %nx, %nz
  %minxz = select i1 %cmpxz, i8 %nx, i8 %nz
  %cmpyz = icmp slt i8 %ny, %nz
  %minyz = select i1 %cmpyz, i8 %ny, i8 %nz
  %cmpyx = icmp slt i8 %y, %x
  %r = select i1 %cmpyx, i8 %minxz, i8 %minyz
=>
  %cmpxyz = icmp slt i8 %minxz, %ny
  %r = select i1 %cmpxyz, i8 %minxz, i8 %ny

Name: min3_nots_alt
  %nx = xor i8 %x, -1
  %ny = xor i8 %y, -1
  %nz = xor i8 %z, -1
  %cmpxz = icmp slt i8 %nx, %nz
  %minxz = select i1 %cmpxz, i8 %nx, i8 %nz
  %cmpyz = icmp slt i8 %ny, %nz
  %minyz = select i1 %cmpyz, i8 %ny, i8 %nz
  %cmpyx = icmp slt i8 %y, %x
  %r = select i1 %cmpyx, i8 %minxz, i8 %minyz
=>
  %xz = icmp sgt i8 %x, %z
  %maxxz = select i1 %xz, i8 %x, i8 %z
  %xyz = icmp sgt i8 %maxxz, %y
  %maxxyz = select i1 %xyz, i8 %maxxz, i8 %y
  %r = xor i8 %maxxyz, -1

llvm-svn: 322283
2018-01-11 15:13:47 +00:00
Sanjay Patel 7dfe96ad16 [ValueTracking] remove overzealous assert
The test is derived from a failing fuzz test:
https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=5008

Credit to @rksimon for pointing out the problem.

llvm-svn: 322016
2018-01-08 18:31:13 +00:00
Sanjay Patel 7811430588 [ValueTracking] recognize min/max of min/max patterns
This is part of solving PR35717:
https://bugs.llvm.org/show_bug.cgi?id=35717

The larger IR optimization is proposed in D41603, but we can show 
the improvement in ValueTracking using codegen tests because 
SelectionDAG creates min/max nodes based on ValueTracking. 

Any target with min/max ops should show wins here. I chose AArch64
vector ops because they're clean and uniform.

Some Alive proofs for the tests (can't put more than 2 tests in 1 
page currently because the web app says it's too long):
https://rise4fun.com/Alive/WRN
https://rise4fun.com/Alive/iPm
https://rise4fun.com/Alive/HmY
https://rise4fun.com/Alive/CNm
https://rise4fun.com/Alive/LYf

llvm-svn: 321672
2018-01-02 20:56:45 +00:00
Simon Pilgrim 6720726d27 [ValueTracking] Don't assume shift values are in range
Reduced (as best I could...) from oss-fuzz #4857 test case

llvm-svn: 321634
2018-01-01 22:44:59 +00:00
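A sketch of the kind of range check this implies (plain integers and a hypothetical helper, not LLVM's code): an IR shift amount is not guaranteed to be smaller than the bit width, so analysis code has to test it before using it to index bits.

  #include <cstdint>

  // Returns false (give up) instead of shifting by an out-of-range amount,
  // which would be undefined behavior in C++ and poison in IR.
  bool knownZeroAfterShl(uint64_t KnownZero, uint64_t ShAmt, unsigned BitWidth,
                         uint64_t &OutKnownZero) {
    if (ShAmt >= BitWidth)
      return false;
    const uint64_t WidthMask =
        BitWidth >= 64 ? ~uint64_t(0) : (uint64_t(1) << BitWidth) - 1;
    // Known zero bits shift left; the vacated low bits are known zero too.
    OutKnownZero =
        ((KnownZero << ShAmt) | ((uint64_t(1) << ShAmt) - 1)) & WidthMask;
    return true;
  }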
Sanjay Patel 9a39979dd2 [ValueTracking] ignore FP signed-zero when detecting a casted-to-integer fmin/fmax pattern
This is a preliminary step for the patch discussed in D41136 (and denoted here with the FIXME comment).

When we match an FP min/max that is cast to integer, any intermediate difference between +0.0 and -0.0
is muted by the conversion of the result (either fptosi or fptoui). Thus, we can
enable 'nsz' for the purpose of matching fmin/fmax.

Note that there's probably room to generalize this more, possibly by changing the current calls to the
weak version of isKnownNonZero() in matchSelectPattern() to use the more powerful recursive version.

Differential Revision: https://reviews.llvm.org/D41333

llvm-svn: 321456
2017-12-26 15:09:19 +00:00
Haicheng Wu a446151552 [InlineCost] Find repeated loads in the callee
SROA analysis of InlineCost can figure out that some stores can be removed
after inlining and then the repeated loads clobbered by these stores are also
free.  This patch finds these clobbered loads and adjusts the inline cost
accordingly.

Differential Revision: https://reviews.llvm.org/D33946

llvm-svn: 320814
2017-12-15 14:34:41 +00:00
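Roughly the source-level shape being credited (an illustrative example, not taken from the patch): once the callee is inlined and its memory is promoted, the store can disappear and the second load becomes a reuse of the stored value, i.e. free.

  // Illustration only. After inlining into a caller where 'p' points at a
  // promotable local, SROA removes the store and forwards 'a + 1' to the
  // second load, so charging full cost for that load would overestimate.
  int callee(int *p) {
    int a = *p;   // first load
    *p = a + 1;   // store that SROA can remove after inlining
    int b = *p;   // repeated load, clobbered only by the removable store
    return a + b;
  }

  int caller() {
    int x = 1;
    return callee(&x);
  }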
Simon Dardis 70dbd5fbd0 Infer lowest bits of an integer Multiply when the low bits of the operands are known
When the lowest bits of the operands to an integer multiply are known, the low bits of the result are deducible.
Code to deduce known-zero bottom bits already existed, but this change improves on that by deducing known-ones.

Patch by: Pedro Ferreira

Reviewers: craig.topper, sanjoy, efriedma

Differential Revision: https://reviews.llvm.org/D34029

llvm-svn: 320269
2017-12-09 23:25:57 +00:00
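The underlying fact, shown with plain integers (a simplified model, not LLVM's KnownBits code): bit i of a product depends only on bits 0..i of the operands, so when the low K bits of both multiplicands are fully known, the low K bits of the product are known as well, ones included.

  #include <cassert>
  #include <cstdint>

  // Low K bits of any product a*b where a ends in KnownLowA and b ends in
  // KnownLowB (K low bits of each operand fully known).
  uint64_t knownLowProductBits(uint64_t KnownLowA, uint64_t KnownLowB,
                               unsigned K) {
    const uint64_t Mask = K >= 64 ? ~uint64_t(0) : (uint64_t(1) << K) - 1;
    return (KnownLowA * KnownLowB) & Mask;
  }

  int main() {
    // Any a ending in 0101 times any b ending in 0011 ends in 1111 (5*3=15).
    for (uint64_t HiA = 0; HiA < 4; ++HiA)
      for (uint64_t HiB = 0; HiB < 4; ++HiB) {
        uint64_t A = (HiA << 4) | 0x5, B = (HiB << 4) | 0x3;
        assert(((A * B) & 0xF) == knownLowProductBits(0x5, 0x3, 4));
      }
    return 0;
  }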
Evgeniy Stepanov c667c1f47a Hardware-assisted AddressSanitizer (llvm part).
Summary:
This is LLVM instrumentation for the new HWASan tool. It is basically
a stripped down copy of ASan at this point, w/o stack or global
support. Instrumentation adds a global constructor + runtime callbacks
for every load and store.

HWASan comes with its own IR attribute.

A brief design document can be found in
clang/docs/HardwareAssistedAddressSanitizerDesign.rst (submitted earlier).

Reviewers: kcc, pcc, alekseyshl

Subscribers: srhines, mehdi_amini, mgorny, javed.absar, eraman, llvm-commits, hiraditya

Differential Revision: https://reviews.llvm.org/D40932

llvm-svn: 320217
2017-12-09 00:21:41 +00:00
Igor Laevsky cec8f47e77 [InstCombine] Don't crash on out of bounds shifts
Differential Revision: https://reviews.llvm.org/D40649

llvm-svn: 319761
2017-12-05 12:18:15 +00:00
Sam McCall d0d43e6f14 Revert "[ValueTracking] Pass only a single lambda to computeKnownBitsFromShiftOperator by using KnownBits struct instead of separate APInts. NFCI"
This reverts commit r319624, which seems to cause a miscompile (breaks the
multistage PPC buildbots)

llvm-svn: 319652
2017-12-04 12:51:49 +00:00
Craig Topper 199acd88e3 [ValueTracking] Pass only a single lambda to computeKnownBitsFromShiftOperator by using KnownBits struct instead of separate APInts. NFCI
llvm-svn: 319624
2017-12-02 23:42:17 +00:00
Sanjay Patel 20df88a754 [ValueTracking] use 'auto' with 'dyn_cast'; NFC
llvm-svn: 318058
2017-11-13 17:56:23 +00:00
Sanjay Patel 9e3d8f4b39 [ValueTracking] simplify code in CannotBeNegativeZero() with match(); NFCI
llvm-svn: 318055
2017-11-13 17:40:47 +00:00
Dan Gohman 2c74fe977d Add an @llvm.sideeffect intrinsic
This patch implements Chandler's idea [0] for supporting languages that
require support for infinite loops with side effects, such as Rust, providing
part of a solution to bug 965 [1].

Specifically, it adds an `llvm.sideeffect()` intrinsic, which has no actual
effect, but which appears to optimization passes to have obscure side effects,
such that they don't optimize away loops containing it. It also teaches
several optimization passes to ignore this intrinsic, so that it doesn't
significantly impact optimization in most cases.

As discussed on llvm-dev [2], this patch is the first of two major parts.
The second part, to change LLVM's semantics to have defined behavior
on infinite loops by default, with a function attribute for opting into
potential-undefined-behavior, will be implemented and posted for review in
a separate patch.

[0] http://lists.llvm.org/pipermail/llvm-dev/2015-July/088103.html
[1] https://bugs.llvm.org/show_bug.cgi?id=965
[2] http://lists.llvm.org/pipermail/llvm-dev/2017-October/118632.html

Differential Revision: https://reviews.llvm.org/D38336

llvm-svn: 317729
2017-11-08 21:59:51 +00:00
Craig Topper 81d772c28a [ValueTracking] Use APInt::isNullValue/isOneValue which are more efficient for large APInts.
llvm-svn: 317712
2017-11-08 19:38:45 +00:00
Sanjay Patel 86d24f1668 [ValueTracking] readonly (const) is a requirement for converting sqrt to llvm.sqrt; nnan is not
As discussed in D39204, this is effectively a revert of rL265521 which required nnan 
to vectorize sqrt libcalls based on the old LangRef definition of llvm.sqrt. Now that
the definition has been updated so the libcall and intrinsic have the same semantics
apart from potentially setting errno, we can remove the nnan requirement.

We have the right check to know that errno is not set:

if (!ICS.onlyReadsMemory())

...ahead of the switch.

This will solve https://bugs.llvm.org/show_bug.cgi?id=27435 assuming that's being 
built for a target with -fno-math-errno.

Differential Revision: https://reviews.llvm.org/D39642

llvm-svn: 317519
2017-11-06 22:40:09 +00:00
Artur Gainullin af7ba8ff6b Improve clamp recognition in ValueTracking.
Summary:
ValueTracking was not recognizing all variations of clamp. Swapping of the
true and false values of the select was added to fix this problem. The
first patch was reverted because it caused a miscompile in the NVPTX target.
Added corresponding test cases.

Reviewers: spatel, majnemer, efriedma, reames

Subscribers: llvm-commits, jholewinski

Differential Revision: https://reviews.llvm.org/D39240

llvm-svn: 316795
2017-10-27 20:53:41 +00:00
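The swapped-select equivalence the matcher now recognizes, written out in plain C++ (illustrative only; neither function name comes from LLVM):

  #include <cassert>

  // Canonical clamp: min(x, hi) then max(.., lo).
  int clampCanonical(int X, int Lo, int Hi) {
    int T = X > Hi ? Hi : X;
    return T < Lo ? Lo : T;
  }

  // Same clamp, but each select has its true and false arms swapped (with
  // the compare inverted) -- the variation ValueTracking was missing.
  int clampSwappedSelects(int X, int Lo, int Hi) {
    int T = X <= Hi ? X : Hi;
    return T >= Lo ? T : Lo;
  }

  int main() {
    for (int X = -10; X <= 10; ++X)
      assert(clampCanonical(X, -3, 4) == clampSwappedSelects(X, -3, 4));
    return 0;
  }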
Craig Topper 8e8b6efdc8 [ValueTracking] Remove unnecessary temporary APInt from computeNumSignBitsVectorConstant.
We can just use getNumSignBits instead of inverting negative numbers.

llvm-svn: 316266
2017-10-21 16:35:41 +00:00
Craig Topper b98ee58511 [ValueTracking] Simplify the known bits code for constant vectors a little.
Neither of these cases really requires a temporary APInt outside the loop. For the ConstantDataSequential case the APInt will never be larger than 64 bits, so it's fine to just call getElementAsAPInt. For ConstantVector we can get the APInt by reference and only make a copy where the inversion is needed.

llvm-svn: 316265
2017-10-21 16:35:39 +00:00
Nikolai Bozhenov fa8c5514c5 [ValueTracking] Enabling ValueTracking patch by default
(recommit #2 after checking for the timeout issue).

The original patch was an improvement to IR ValueTracking on
non-negative integers. It was checked in to trunk (D18777, r284022)
but was disabled by default due to performance regressions. The perf
impact has since improved, so the patch is now enabled by default.

Reviewers: reames, hfinkel
 
Differential Revision: https://reviews.llvm.org/D34101
 
Patch by: Olga Chupina <olga.chupina@intel.com>

llvm-svn: 316208
2017-10-20 10:08:47 +00:00
Nikolai Bozhenov 8dcab54cb4 Revert r315992 because of a found miscompilation failure
llvm-svn: 316164
2017-10-19 15:36:18 +00:00
Nikolai Bozhenov 9723f12491 Fixup patch for revision rL316070.
Added a check that the type of CmpConst and the source type of the trunc are
equal, for correct matching of the case when we can set the widened C constant
equal to CmpConst.

  %cond = cmp iN %x, CmpConst
  %tr = trunc iN %x to iK
  %narrowsel = select i1 %cond, iK %tr, iK C

Patch by: Gainullin, Artur <artur.gainullin@intel.com>

llvm-svn: 316082
2017-10-18 14:24:50 +00:00
Nikolai Bozhenov 74c047eabb Improve lookThroughCast function.
Summary:
When we have the following case:

  %cond = cmp iN %x, CmpConst
  %tr = trunc iN %x to iK
  %narrowsel = select i1 %cond, iK %tr, iK C

After looking through the cast we may only be able to match a min/max pattern,
so it is more profitable if the widened C constant is equal to CmpConst.
That is why we simply set the widened C constant equal to CmpConst; there
is a further check in this function that trunc(CmpConst) == C.

A description of the lookThroughCast function was also added.

Reviewers: spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D38536

Patch by: Artur Gainullin <artur.gainullin@intel.com>

llvm-svn: 316070
2017-10-18 09:28:09 +00:00
Nikolai Bozhenov 346f4329c4 Improve clamp recognition in ValueTracking.
Summary:
ValueTracking was not recognizing all variations of clamp. Swapping of the
true and false values of the select was added to fix this problem. This
change breaks the canonical form of the cmp inside the matchMinMax function,
which is why additional checks for the compare predicates are needed. Added
corresponding test cases.

Reviewers: spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D38531

Patch by: Artur Gainullin <artur.gainullin@intel.com>

llvm-svn: 315992
2017-10-17 11:50:48 +00:00
Sanjay Patel b7d1238cfc [ValueTracking] fix typos, formatting; NFC
llvm-svn: 315909
2017-10-16 14:46:37 +00:00
Sanjay Patel e272be7c9a [ValueTracking] return zero when there's conflict in known bits of a shift (PR34838)
Poison allows us to return a better result than undef.

llvm-svn: 315595
2017-10-12 17:31:46 +00:00
Hiroshi Inoue b49b015bed [ScheduleDAGInstrs] fix behavior of getUnderlyingObjectsForCodeGen when no identifiable object found
This patch fixes the bug introduced in https://reviews.llvm.org/D35907; the bug was reported at http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20171002/491452.html.

Before D35907, when GetUnderlyingObjects failed to find an identifiable object, the allMMOsOkay lambda in getUnderlyingObjectsForInstr returned false and the Objects vector was cleared. This behavior was unintentionally changed by D35907.

This patch makes the behavior in such cases the same as before.
Since D35907 introduced getUnderlyingObjectsForCodeGen as a wrapper around GetUnderlyingObjects, getUnderlyingObjectsForCodeGen is modified to return a boolean value that asks the caller to clear the Objects vector.

Differential Revision: https://reviews.llvm.org/D38735

llvm-svn: 315565
2017-10-12 06:26:04 +00:00
Vivek Pandya 9590658fb8 [NFC] Convert OptimizationRemarkEmitter old emit() calls to new closure
parameterized emit() calls

Summary: This is a non-functional change to adopt the new emit() API added in r313691.

Reviewed By: anemet

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D38285

llvm-svn: 315476
2017-10-11 17:12:59 +00:00
Adam Nemet 0965da2055 Rename OptimizationDiagnosticInfo.* to OptimizationRemarkEmitter.*
Sync it up with the name of the class actually defined here.  This has been
bothering me for a while...

llvm-svn: 315249
2017-10-09 23:19:02 +00:00
Nuno Lopes 404f106d71 Merge isKnownNonNull into isKnownNonZero
It now knows the tricks of both functions.
Also, fix a bug that considered allocas in a non-zero address space to always be non-null.

Differential Revision: https://reviews.llvm.org/D37628

llvm-svn: 312869
2017-09-09 18:23:11 +00:00
Sanjay Patel 6840c5ff75 [ValueTracking, InstCombine] canonicalize fcmp ord/uno with non-NAN ops to null constants
This is a preliminary step towards solving the remaining part of PR27145 - IR for isfinite():
https://bugs.llvm.org/show_bug.cgi?id=27145

In order to solve that one more generally, we need to add matching for and/or of fcmp ord/uno
with a constant operand.

But while looking at those patterns, I realized we were missing a canonicalization for nonzero
constants. Rather than limiting to just folds for constants, we're adding a general value
tracking method for this based on an existing DAG helper.

By transforming everything to 0.0, we can simplify the existing code in foldLogicOfFCmps()
and pick up missing vector folds.

Differential Revision: https://reviews.llvm.org/D37427

llvm-svn: 312591
2017-09-05 23:13:13 +00:00
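The reasoning in miniature (illustrative C++, not the transform itself): 'ord' asks that neither operand is NaN, so with any non-NaN constant it collapses to "the variable is not NaN", which is exactly what comparing against 0.0 asks.

  #include <cassert>
  #include <cmath>

  // fcmp ord A, B <=> neither A nor B is NaN.
  bool fcmpOrd(double A, double B) {
    return !std::isnan(A) && !std::isnan(B);
  }

  int main() {
    const double Vals[] = {1.5, -0.0, INFINITY, NAN};
    for (double X : Vals)
      // With a non-NaN constant, ord(X, 42.0) == ord(X, 0.0).
      assert(fcmpOrd(X, 42.0) == fcmpOrd(X, 0.0));
    return 0;
  }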
Eugene Zelenko 75075efe5e [Analysis, Transforms] Fix some Clang-tidy modernize and Include What You Use warnings; other minor fixes (NFC).
llvm-svn: 312383
2017-09-01 21:37:29 +00:00
Craig Topper 7227ebad9c [ValueTracking] Add assertions that the starting Depth in isKnownToBeAPowerOfTwo and ComputeNumSignBitsImpl is not above MaxDepth
The function does an equality check later to terminate the recursion, but that won't work if it starts out too high. A similar assert already exists in computeKnownBits.

llvm-svn: 311400
2017-08-21 22:56:12 +00:00
Amjad Aboud 88ffa3afe2 [InstCombine] Teach ComputeNumSignBitsImpl to handle integer multiply instruction.
Differential Revision: https://reviews.llvm.org/D36679

llvm-svn: 311206
2017-08-18 22:56:55 +00:00
Hal Finkel b03dd4be70 [ValueTracking] Don't delete assumes of side-effectful instructions
ValueTracking has to strike a balance when attempting to propagate information
backwards from assumes, because if the information is trivially propagated
backwards, it can appear to LLVM that the assumption is known to be true, and
therefore can be removed.

This is sound (because an assumption has no semantic effect except for causing
UB), but prevents the assume from allowing further optimizations.

The isEphemeralValueOf check exists to try and prevent this issue by not
removing the source of an assumption. This tries to make it a little bit more
general to handle the case of side-effectful instructions, such as in

  %0 = call i1 @get_val()
  %1 = xor i1 %0, true
  call void @llvm.assume(i1 %1)

Patch by Ariel Ben-Yehuda, thanks!

Differential Revision: https://reviews.llvm.org/D36590

llvm-svn: 310859
2017-08-14 17:11:43 +00:00
Chandler Carruth 37c7b08710 [ValueTracking] Revert r310583 which enabled functionality that still is
causing compile time issues.

Moreover, the patch *deleted* the flag in addition to changing the
default, and links to a code review that doesn't even discuss the flag
and just has an update to a Clang test case.

I've followed up on the commit thread to ask for numbers on compile time
at this point, leaving the flag in place until things stabilize, and
pointing at specific code that seems to exhibit excessive compile time
with this patch.

Original commit message for r310583:
"""
[ValueTracking] Enabling ValueTracking patch by default (recommit). Part 2.

The original patch was an improvement to IR ValueTracking on
non-negative integers. It has been checked in to trunk (D18777,
r284022). But was disabled by default due to performance regressions.
Perf impact has improved. The patch would be enabled by default.
""""

llvm-svn: 310816
2017-08-14 07:03:24 +00:00
Nikolai Bozhenov d97136c182 [ValueTracking] Enabling ValueTracking patch by default (recommit). Part 2.
The original patch was an improvement to IR ValueTracking on non-negative
integers. It was checked in to trunk (D18777, r284022) but was disabled by
default due to performance regressions. The perf impact has since improved,
so the patch is now enabled by default.
 
Reviewers: reames, hfinkel
 
Differential Revision: https://reviews.llvm.org/D34101
 
Patch by: Olga Chupina <olga.chupina@intel.com>

llvm-svn: 310583
2017-08-10 11:24:57 +00:00
Davide Italiano 1a943a90f5 [ValueTracking] Turn a test into an assertion.
As discussed with Chad, this should never happen, but this
assertion is basically free, so keep it around just in case.

llvm-svn: 310493
2017-08-09 16:06:54 +00:00
Davide Italiano 30e5194287 [ValueTracking] Honour recursion limit.
The recently improved support for `icmp` in ValueTracking
(r307304) exposes the fact that the `isImplied` condition doesn't
really bail out if we hit the recursion limit (it calls
`computeKnownBits`, which increases the depth and asserts).

Differential Revision:  https://reviews.llvm.org/D36512

llvm-svn: 310481
2017-08-09 15:13:50 +00:00
Craig Topper b498a23f0e [KnownBits][ValueTracking] Move the math for calculating known bits for add/sub into a static method in KnownBits object
I want to reuse this code in SimplifyDemandedBits handling of Add/Sub. This will make that easier.

I wonder if we should use it in SelectionDAG's computeKnownBits too.

Differential Revision: https://reviews.llvm.org/D36433

llvm-svn: 310378
2017-08-08 16:29:35 +00:00
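For reference, the add case of that math in a self-contained form (a plain-integer model of the algorithm, not the KnownBits API): a result bit is known exactly when both operand bits and the carry into that position are known, and the carry can be bounded by adding the minimum and maximum values the operands can still take.

  #include <cassert>
  #include <cstdint>

  struct Known { uint64_t Zero = 0, One = 0; }; // bits known 0 / known 1

  Known knownAdd(Known L, Known R) {
    uint64_t MinSum = L.One + R.One;     // all unknown bits treated as 0
    uint64_t MaxSum = ~L.Zero + ~R.Zero; // all unknown bits treated as 1

    // Recover the carry into each bit from sum = a ^ b ^ carry. Carries are
    // monotone in the operand values, so a carry that is 1 even in the
    // minimum sum is always 1, and one that is 0 even in the maximum sum is
    // always 0.
    uint64_t CarryKnownOne = MinSum ^ L.One ^ R.One;
    uint64_t CarryKnownZero = ~(MaxSum ^ L.Zero ^ R.Zero);

    uint64_t AllKnown = (L.Zero | L.One) & (R.Zero | R.One) &
                        (CarryKnownZero | CarryKnownOne);
    return Known{~MinSum & AllKnown, MinSum & AllKnown};
  }

  int main() {
    Known A{0x1, 0x2};        // low two bits known to be ...10, rest unknown
    Known B{~uint64_t(4), 4}; // exactly 4
    Known S = knownAdd(A, B);
    assert((S.One & 0x3) == 0x2 && (S.Zero & 0x3) == 0x1); // sum ends in ...10
    return 0;
  }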