Commit Graph

412 Commits

Author SHA1 Message Date
Roman Lebedev 5a654bfeab
Revert "[InstCombine] `sext(trunc(x)) --> sext(x)` iff trunc is NSW (PR49543)"
I forgot about the case where we sign-extend to a width smaller than the original.

This reverts commit 1e6ca23ab8.
2021-04-21 01:11:15 +03:00
Roman Lebedev 1e68d338c1
Revert "[InstCombine] "Bypass" NUW trunc of lshr if we are going to sext the result (PR49543)"
I forgot about the case where we sign-extend to a width smaller than the original.

This reverts commit 41b71f718b.
2021-04-21 01:11:14 +03:00
Roman Lebedev 41b71f718b
[InstCombine] "Bypass" NUW trunc of lshr if we are going to sext the result (PR49543)
This is a more convoluted form of the same pattern "sext of NSW trunc",
but in this case the operand of trunc was a right-shift,
and the truncation chops off just the zero bits that were shifted-in.
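
A minimal IR sketch of the kind of pattern this targeted (hypothetical widths;
the trunc drops exactly the 16 zero bits introduced by the lshr, and the sext
widens back past the original type):

  %h = lshr i32 %x, 16        ; high 16 bits of %h are known zero
  %t = trunc i32 %h to i16    ; chops off exactly those zero bits
  %r = sext i16 %t to i64
=>
  %a = ashr i32 %x, 16        ; treat the shift as arithmetic instead
  %r = sext i32 %a to i64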
2021-04-21 00:31:46 +03:00
Roman Lebedev 1e6ca23ab8
[InstCombine] `sext(trunc(x)) --> sext(x)` iff trunc is NSW (PR49543)
If we can tell that trunc only chops off sign bits, and not all of them,
then we can simply sign-extend the trunc's source.
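
A minimal IR sketch of the intended fold, for the case where we sign-extend
back to a type at least as wide as the original (hypothetical widths; the ashr
guarantees the trunc only chops off redundant sign bits):

  %s = ashr i32 %x, 24        ; at least 25 sign bits, so the trunc is "NSW"
  %t = trunc i32 %s to i16    ; drops only sign bits
  %r = sext i16 %t to i64
=>
  %s = ashr i32 %x, 24
  %r = sext i32 %s to i64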
2021-04-21 00:31:45 +03:00
Luo, Yuanke bcdaccfe34 [X86][AMX] Verify illegal types or instructions for x86_amx.
This patch is related to https://reviews.llvm.org/D100032, which defines
some illegal types or operations for x86_amx. There are no arguments,
arrays, pointers, vectors or constants of x86_amx.

Reviewed By: pengfei

Differential Revision: https://reviews.llvm.org/D100472
2021-04-20 16:14:22 +08:00
Juneyoung Lee 1c10201d96 Update InstCombine to use undef matcher instead
This is a patch to use m_Undef() matcher instead of isa<UndefValue>().

As suggested in D100122, this update is separately committed.
2021-04-18 11:05:36 +09:00
Philip Reames 4bf8985f4f Replace calls to IntrinsicInst::Create with CallInst::Create [nfc]
There is no IntrinsicInst::Create.  These are binding to the method in the super type.  Be explicit about which method is being called.
2021-04-06 13:23:58 -07:00
Nashe Mncube 5d929794a8 [llvm-opt] Bug fix within combining FP vectors
A bug was found within InstCombineCasts where a function was
only implemented to work with fixed vectors, which caused a
crash when a scalable vector was passed to it.
This commit introduces a bug fix and a regression test that
recreates the failure.

Differential Revision: https://reviews.llvm.org/D98351
2021-03-23 12:13:41 +00:00
Roman Lebedev d37fe26a2b
[NFC][IR] Type: add getWithNewType() method
Sometimes you want to get a type with the same vector element count
as the current type but a different element type,
and there's no QOL wrapper to do that. Add one.
2021-03-23 00:50:58 +03:00
Philip Reames 5698537f81 Update basic deref API to account for possiblity of free [NFC]
This patch is plumbing to support work towards the goal outlined in the recent llvm-dev post "[llvm-dev] RFC: Decomposing deref(N) into deref(N) + nofree".

The point of this change is purely to simplify iteration on other pieces on way to making the switch. Rebuilding with a change to Value.h is slow and painful, so I want to get the API change landed. Once that's done, I plan to more closely audit each caller, add the inference rules in their own patch, then post a patch with the langref changes and test diffs. The value of the command line flag is that we can exercise the inference logic in standalone patches without needing the whole switch ready to go just yet.

Differential Revision: https://reviews.llvm.org/D98908
2021-03-19 11:17:19 -07:00
Luo, Yuanke 66fbf5fafb [X86][AMX] Prevent transforming load pointer from <256 x i32>* to x86_amx*.
The load/store instructions will be transformed to AMX intrinsics
in the AMX type lowering pass. Prohibiting the pointer cast
makes that pass happy.

Differential Revision: https://reviews.llvm.org/D98247
2021-03-14 09:24:56 +08:00
Sanjay Patel 4224a36957 [InstCombine] avoid creating an extra instruction in zext fold and possible inf-loop
The structure of this fold is suspect vs. most of instcombine
because it creates instructions and tries to delete them
immediately after.

If we don't have the operand types for the icmps, then we are
not behaving as assumed. And as shown in PR49475, we can inf-loop.
2021-03-13 08:30:51 -05:00
Florian Hahn c701f85c45
[STLExtras] Use return type from operator* of the wrapped iter.
Currently make_early_inc_range cannot be used with iterators with
operator* implementations that do not return a reference.

Most notably in the LLVM codebase, this means the User iterator ranges
cannot be used with make_early_inc_range, which slightly simplifies
iterating over ranges while elements are removed.

Instead of directly using BaseT::reference as return type of operator*,
this patch uses decltype to get the actual return type of the operator*
implementation in WrappedIteratorT.

This patch also updates a few places to make use of
make_early_inc_range.

Reviewed By: dblaikie

Differential Revision: https://reviews.llvm.org/D93992
2021-01-10 14:41:13 +00:00
Kazu Hirata 33bf1cad75 [llvm] Use *Set::contains (NFC) 2021-01-07 20:29:34 -08:00
Jun Ma 0138399903 [InstCombine] Remove scalable vector restriction in InstCombineCasts
Differential Revision: https://reviews.llvm.org/D93389
2020-12-17 22:02:33 +08:00
Jun Ma 2ac58e21a1 [InstCombine] Remove scalable vector restriction when fold SelectInst
Differential Revision: https://reviews.llvm.org/D93083
2020-12-15 20:36:57 +08:00
Roman Lebedev 94ead0190f
[InstCombine] Improve vector undef handling for sext(ashr(shl(trunc()))) fold, 2
If the shift amount was undef for some lane, the shift amount in opposite
shift is irrelevant for that lane, and the new shift amount for that lane
can be undef.
2020-12-01 16:54:00 +03:00
Roman Lebedev 52533b52b8
Revert "[InstCombine] Improve vector undef handling for sext(ashr(shl(trunc()))) fold"
It seems I have missed check lines; temporarily reverting,
will reland momentarily.

This reverts commit aa1aa13509.
2020-12-01 15:47:04 +03:00
Roman Lebedev aa1aa13509
[InstCombine] Improve vector undef handling for sext(ashr(shl(trunc()))) fold
If the shift amount was undef for some lane, the shift amount in opposite
shift is irrelevant for that lane, and the new shift amount for that lane
can be undef.
2020-12-01 15:13:08 +03:00
Roman Lebedev 8e29e20e0d
[InstCombine] Evaluate new shift amount for sext(ashr(shl(trunc()))) fold in wide type (PR48343)
It is not correct to compute that new shift amount in its narrow type
and only then extend it into the wide type:

----------------------------------------
Optimization: PR48343 good
Precondition: (width(%X) == width(%r))
  %o0 = trunc %X
  %o1 = shl %o0, %Y
  %o2 = ashr %o1, %Y
  %r = sext %o2
=>
  %n0 = sext %Y
  %n1 = sub width(%o0), %n0
  %n2 = sub width(%X), %n1
  %n3 = shl %X, %n2
  %r = ashr %n3, %n2

Done: 2016
Optimization is correct!

----------------------------------------
Optimization: PR48343 bad
Precondition: (width(%X) == width(%r))
  %o0 = trunc %X
  %o1 = shl %o0, %Y
  %o2 = ashr %o1, %Y
  %r = sext %o2
=>
  %n0 = sub width(%o0), %Y
  %n1 = sub width(%X), %n0
  %n2 = sext %n1
  %n3 = shl %X, %n2
  %r = ashr %n3, %n2

Done: 1
ERROR: Domain of definedness of Target is smaller than Source's for i9 %r

Example:
%X i9 = 0x000 (0)
%Y i4 = 0x3 (3)
%o0 i4 = 0x0 (0)
%o1 i4 = 0x0 (0)
%o2 i4 = 0x0 (0)
%n0 i4 = 0x1 (1)
%n1 i4 = 0x8 (8, -8)
%n2 i9 = 0x1F8 (504, -8)
%n3 i9 = 0x000 (0)
Source value: 0x000 (0)
Target value: undef


I.e. we should be computing it in the wide type from the beginning.

Fixes https://bugs.llvm.org/show_bug.cgi?id=48343
2020-12-01 15:13:07 +03:00
Simon Pilgrim 310f62b4ff [InstCombine] narrowFunnelShift - fold trunc/zext or(shl(a,x),lshr(b,sub(bw,x))) -> fshl(a,b,x) (PR35155)
As discussed on PR35155, this extends narrowFunnelShift (recently renamed from narrowRotate) to support basic funnel shift patterns.

Unlike matchFunnelShift, we don't include the computeKnownBits limitation, as extracting the pattern from the zext/trunc layers should be an indicator of reasonable funnel shift codegen; in D89139 we demonstrated how to efficiently promote funnel shifts to wider types.
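
A minimal IR sketch of the kind of zext/trunc-wrapped pattern this aims to
recognise (hypothetical i8-in-i32 widths; the masked shift amount keeps both
shifts in range):

  %za   = zext i8 %a to i32
  %zb   = zext i8 %b to i32
  %amt  = and i8 %shamt, 7
  %zamt = zext i8 %amt to i32
  %shl  = shl i32 %za, %zamt
  %sub  = sub i32 8, %zamt
  %lshr = lshr i32 %zb, %sub
  %or   = or i32 %shl, %lshr
  %r    = trunc i32 %or to i8
=>
  %r = call i8 @llvm.fshl.i8(i8 %a, i8 %b, i8 %amt)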

Differential Revision: https://reviews.llvm.org/D89542
2020-10-24 12:42:43 +01:00
Simon Pilgrim 1cf347e48b [InstCombine] narrowRotate - minor refactoring for funnel shift support. NFC.
Prep work for PR35155 - renamed narrowRotate to narrowFunnelShift, rewrote some comments and adjusted code to collect separate shift values, although we bail if they don't match (still, only rotations are actually folded).

I'm trying to match matchFunnelShift as much as possible in case we finally get to merge these one day.
2020-10-16 11:27:28 +01:00
Simon Pilgrim 89657b3a3b [InstCombine] narrowRotate - canonicalize to OR(SHL,LSHR). NFCI.
Match the canonicalization code that was added to matchFunnelShift at rG02295e6d1a15
2020-10-14 16:45:00 +01:00
Simon Pilgrim 9c3138bd6d [InstCombine] visitTrunc - pass through undefs for trunc(shift(trunc/ext(x),c)) patterns
Based on the recent patches D88475 and D88429 where we are losing undef values due to extension/comparisons.

I've added a Constant::mergeUndefsWith method that merges the undef scalar/elements from another Constant into a specific Constant.

Differential Revision: https://reviews.llvm.org/D88687
2020-10-13 14:35:18 +01:00
Simon Pilgrim d9f064dc0b [InstCombine] visitTrunc - trunc(shl(X, C)) --> shl(trunc(X),trunc(C)) vector support
Annoyingly, vectors aren't supported by shouldChangeType(), but we have precedents for always performing this on vector types (e.g. narrowBinOp).
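
A minimal sketch of the vector form (hypothetical types and shift amounts):

  %s = shl <2 x i32> %x, <i32 3, i32 5>
  %t = trunc <2 x i32> %s to <2 x i16>
=>
  %t0 = trunc <2 x i32> %x to <2 x i16>
  %t  = shl <2 x i16> %t0, <i16 3, i16 5>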

Differential Revision: https://reviews.llvm.org/D89067
2020-10-08 22:07:51 +01:00
Simon Pilgrim 0cf48a7065 [InstCombine] visitTrunc - trunc (*shr (trunc A), C) --> trunc(*shr A, C)
Attempt to fold trunc (*shr (trunc A), C) --> trunc(*shr A, C) iff the shift amount is small enough that all zero/sign bits created by the shift are removed by the last trunc.
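
For example (hypothetical widths; the 4 zero bits created by the lshr sit in
the top half of the i32 and are discarded by the final trunc to i16):

  %t1 = trunc i64 %A to i32
  %sh = lshr i32 %t1, 4
  %r  = trunc i32 %sh to i16
=>
  %sh = lshr i64 %A, 4
  %r  = trunc i64 %sh to i16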

Helps fix the regressions encountered in D88316.

I've tweaked a couple of shift values as suggested by @lebedev.ri to ensure we have coverage of shift values close (above/below) to the max limit.

Differential Revision: https://reviews.llvm.org/D88429
2020-09-29 18:27:42 +01:00
Simon Pilgrim b610d73b3f [InstCombine] visitTrunc - remove dead trunc(lshr (zext A), C) combine. NFCI.
I added additional test coverage at rG7a55989dc4305 - but all are handled independently of this combine and http://lab.llvm.org:8080/coverage/coverage-reports/ indicates the code is never used.

Differential revision: https://reviews.llvm.org/D88492
2020-09-29 17:15:16 +01:00
Simon Pilgrim 89a8a0c910 [InstCombine] Inherit exact flags on extended shifts in trunc (lshr (sext A), C) --> (ashr A, C)
This was missed in D88475
2020-09-29 15:32:09 +01:00
Simon Pilgrim 14ff38e235 [InstCombine] visitTrunc - trunc (lshr (sext A), C) --> (ashr A, C) non-uniform support
This came from @lebedev.ri's suggestion to use m_SpecificInt_ICMP for D88429 - since I was going to change the m_APInt to m_Constant for that patch I thought I would do it for the only other user of the APInt first.
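
A minimal sketch of a non-uniform case this enables (hypothetical types; both
lane shift amounts stay below the narrow bit width):

  %s  = sext <2 x i8> %a to <2 x i32>
  %sh = lshr <2 x i32> %s, <i32 2, i32 5>
  %r  = trunc <2 x i32> %sh to <2 x i8>
=>
  %r = ashr <2 x i8> %a, <i8 2, i8 5>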

I've added a ConstantExpr::getUMin helper - its trivial to add UMAX/SMIN/SMAX but thought I'd wait until we have use cases.

Differential Revision: https://reviews.llvm.org/D88475
2020-09-29 15:01:16 +01:00
David Sherwood 59c4d5aad0 [SVE] Fix InstCombinerImpl::PromoteCastOfAllocation for scalable vectors
In this patch I've fixed some warnings that arose from the implicit
cast of TypeSize -> uint64_t. I tried writing a variety of different
cases to show how this optimisation might work for scalable vectors
and found:

1. The optimisation does not work for cases where the cast type
is scalable and the allocated type is not. This is because we need to
know how many times the cast type fits into the allocated type.
2. If we pass all the various checks for the case when the allocated
type is scalable and the cast type is not, then when creating the
new alloca we have to take vscale into account. This leads to
sub-optimal IR that is worse than the original IR.
3. For the remaining case when both the alloca and cast types are
scalable it is hard to find examples where the optimisation would
kick in, except for simple bitcasts, because we typically fail the
ABI alignment checks.

For now I've changed the code to bail out if only one of the alloca
and cast types is scalable. This means we continue to support the
existing cases where both types are fixed, and also the specific case
when both types are scalable with the same size and alignment, for
example a simple bitcast of an alloca to another type.

I've added tests that show we don't attempt to promote the alloca,
except for simple bitcasts:

  Transforms/InstCombine/AArch64/sve-cast-of-alloc.ll

Differential revision: https://reviews.llvm.org/D87378
2020-09-23 08:43:05 +01:00
Eli Friedman 96ef6998df [InstCombine] Fix a couple crashes with extractelement on a scalable vector.
Differential Revision: https://reviews.llvm.org/D86989
2020-09-02 18:02:07 -07:00
Christopher Tetreault 640f20b0c7 [SVE] Remove calls to VectorType::getNumElements from InstCombine
Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D82237
2020-08-31 12:59:10 -07:00
Sanjay Patel 912c09e845 [InstCombine] eliminate a pointer cast around insertelement
I'm not sure if this solves PR46839 completely, but reducing the casting should help:
https://bugs.llvm.org/show_bug.cgi?id=46839

Differential Revision: https://reviews.llvm.org/D85647
2020-08-12 09:08:17 -04:00
Sanjay Patel bebca662d4 [InstCombine] rearrange code for readability; NFC
The code comment refers to the path where we change the
size of the integer type, so handle that first, otherwise
deal with the general case.
2020-08-10 08:07:29 -04:00
Sebastian Neubauer 2a6c871596 [InstCombine] Move target-specific inst combining
For a long time, the InstCombine pass handled target-specific
intrinsics. Having target-specific code in general passes had long
been noted as an area for improvement.

D81728 moves most target specific code out of the InstCombine pass.
Applying the target specific combinations in an extra pass would
probably result in inferior optimizations compared to the current
fixed-point iteration; therefore, the InstCombine pass resorts to newly
introduced functions in the TargetTransformInfo when it encounters
unknown intrinsics.
The patch should not have any effect on generated code (under the
assumption that code never uses intrinsics from a foreign target).

This introduces three new functions:
TargetTransformInfo::instCombineIntrinsic
TargetTransformInfo::simplifyDemandedUseBitsIntrinsic
TargetTransformInfo::simplifyDemandedVectorEltsIntrinsic

A few target specific parts are left in the InstCombine folder, where
it makes sense to share code. The largest left-over part in
InstCombineCalls.cpp is the code shared between arm and aarch64.

This allows moving about 3000 lines out of InstCombine to the targets.

Differential Revision: https://reviews.llvm.org/D81728
2020-07-22 15:59:49 +02:00
Florian Hahn 31971ca1c6 [InstCombine] Try to narrow expr if trunc cannot be removed.
Narrowing an input expression of a truncate to a type larger than the
result of the truncate won't allow removing the truncate, but it may
enable further optimizations, e.g. allowing for larger vectorization
factors.

For now this is intentionally limited to integer types only, to avoid
producing new vector ops that might not be suitable for the target.

If we know that the only user is a trunc, we can also allow more
cases, e.g. also shortening expressions with some additional shifts.

I would appreciate feedback on the best place to do such a narrowing.

This fixes PR43580.

Reviewers: spatel, RKSimon, lebedev.ri, xbolva00

Reviewed By: lebedev.ri

Differential Revision: https://reviews.llvm.org/D82973
2020-07-03 20:22:51 +01:00
Simon Pilgrim eb0e7acbd4 [InstCombine] canEvaluateTruncated - use KnownBits to check for inrange shift amounts
Currently canEvaluateTruncated can only attempt to truncate shifts if they are scalar/uniform constant amounts that are in range.

This patch replaces the constant extraction code with KnownBits handling, using KnownBits::getMaxValue to check that the amounts are in range.

This enables support for nonuniform constant cases, and also variable shift amounts that have been masked somehow. Annoyingly, this still won't work for vectors with (demanded) undefs as KnownBits returns nothing in those cases, but it's a definite improvement on what we currently have.

Differential Revision: https://reviews.llvm.org/D83127
2020-07-03 16:02:10 +01:00
Simon Pilgrim 3da42f4810 [InstCombine] Add sext(ashr(shl(trunc(x),c),c)) folding support for vectors
Replacing m_ConstantInt with m_Constant permits folding of vectors as well as scalars.
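
A minimal sketch with non-splat shift amounts (hypothetical types; each lane
sign-extends the low bits that survive the shl/ashr pair):

  %t = trunc <2 x i32> %x to <2 x i8>
  %l = shl <2 x i8> %t, <i8 4, i8 3>
  %a = ashr <2 x i8> %l, <i8 4, i8 3>
  %r = sext <2 x i8> %a to <2 x i32>
=>
  %l2 = shl <2 x i32> %x, <i32 28, i32 27>
  %r  = ashr <2 x i32> %l2, <i32 28, i32 27>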

Differential Revision: https://reviews.llvm.org/D83058
2020-07-03 10:04:37 +01:00
Simon Pilgrim 769b979930 [InstCombine] Add (vXi1 trunc(lshr(x,c))) -> icmp_eq(and(x,c')) support for non-uniform vectors
As noted on PR46531, we were only performing this transform on uniform vectors as we were using the m_APInt pattern matcher to extract the shift amount.
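
A minimal sketch of a non-uniform case (hypothetical types; each lane tests a
single bit of the source, though the exact and/icmp form InstCombine emits may
differ):

  %sh = lshr <2 x i8> %x, <i8 3, i8 6>
  %r  = trunc <2 x i8> %sh to <2 x i1>
=>
  %m = and <2 x i8> %x, <i8 8, i8 64>
  %r = icmp ne <2 x i8> %m, zeroinitializer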

Differential Revision: https://reviews.llvm.org/D83035
2020-07-02 16:56:33 +01:00
Guillaume Chatelet 8dbafd24d6 [Alignment][NFC] Transition and simplify calls to DL::getABITypeAlignment
This patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790

Differential Revision: https://reviews.llvm.org/D82977
2020-07-02 11:28:02 +00:00
Roman Lebedev 381054a989
[InstCombine] visitBitCast(): do not crash on weird `bitcast <1 x i8*> to i8*`
Even if we know that the RHS of a bitcast is a pointer,
we can't assume the LHS is, because it might be
a single-element vector of pointers.
2020-06-25 00:58:53 +03:00
Sanjay Patel d50366d29f [InstCombine] improve matching for sext-lshr-trunc patterns, part 2
Similar to rG42f488b63a04

This is intended to preserve the logic of the existing transform,
but remove unnecessary restrictions on uses and types.

https://rise4fun.com/Alive/oS0

  Name: narrow input
  Pre: C1 <= width(C1) - 24
  %B = sext i8 %A
  %C = lshr %B, C1
  %r = trunc %C to i24
  =>
  %s = ashr i8 %A, trunc(umin(C1, 7))
  %r = sext i8 %s to i24

  Name: wide input
  Pre: C1 <= width(C1) - 24
  %B = sext i24 %A
  %C = lshr %B, C1
  %r = trunc %C to i8
  =>
  %s = ashr i24 %A, trunc(umin(C1, 23))
  %r = trunc i24 %s to i8
2020-06-08 14:41:50 -04:00
Sanjay Patel 42f488b63a [InstCombine] improve matching for sext-lshr-trunc patterns
This is intended to preserve the logic of the existing transform,
but remove unnecessary restrictions on uses and types.

https://rise4fun.com/Alive/pYfR

  Pre: C1 <= width(C1) - 8
  %B = sext i8 %A
  %C = lshr %B, C1
  %r = trunc %C to i8
   =>
  %r = ashr i8 %A, trunc(umin(C1, 7))
2020-06-08 11:55:30 -04:00
Sanjay Patel af7587d755 [InstCombine] reduce code duplication in visitTrunc(); NFC 2020-06-08 11:15:44 -04:00
Christopher Tetreault 8f8029b458 [SVE] Eliminate calls to default-false VectorType::get() from InstCombine
Reviewers: efriedma, david-arm, fpetrogalli, spatel

Reviewed By: david-arm

Subscribers: tschuett, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D80334
2020-05-29 15:31:31 -07:00
David Sherwood f254f1d94e [SVE] Remove getNumElements() warnings in InstCombiner::visitBitCast
Whilst trying to compile this test to assembly:

  CodeGen/aarch64-sve-intrinsics/acle_sve_reinterpret.c

I discovered some warnings were firing in InstCombiner::visitBitCast
due to calls to getNumElements() for scalable vector types. These
calls only really made sense for fixed width vectors so I have fixed
up the code appropriately.

Differential Revision: https://reviews.llvm.org/D80559
2020-05-29 08:00:08 +01:00
Serge Pavlov 4d20e31f73 [FPEnv] Intrinsic llvm.roundeven
This intrinsic implements the IEEE-754 operation roundToIntegralTiesToEven:
it rounds to the nearest integer value, rounding halfway
cases to even. The intrinsic covers the previously missing case of the
IEEE-754 rounding operations, so LLVM now provides full support for the
rounding operations defined by the standard.
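
A minimal usage sketch (hypothetical function; the constrained-FP variant of
the intrinsic is not shown):

  declare double @llvm.roundeven.f64(double)

  define double @round_ties_to_even(double %x) {
    %r = call double @llvm.roundeven.f64(double %x)
    ret double %r
  }

For example, 2.5 rounds to 2.0 while 3.5 rounds to 4.0.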

Differential Revision: https://reviews.llvm.org/D75670
2020-05-26 19:24:58 +07:00
Sanjay Patel c048a02b5b [InstCombine] fold FP trunc into exact itofp
Similar to D79116 and rGbfd512160fe0 - if the 1st cast
is exact, then we can go directly to the destination
type because there is no double-rounding.
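
A minimal sketch (hypothetical widths; every i16 value is exactly representable
in both double and float, so no double-rounding can occur):

  %d = sitofp i16 %x to double
  %f = fptrunc double %d to float
=>
  %f = sitofp i16 %x to float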
2020-05-24 09:30:19 -04:00
Sanjay Patel 7eed772a27 [PatternMatch] abbreviate vector inst matchers; NFC
Readability is not reduced with these opcodes/match lines,
so reduce odds of awkward wrapping from 80-col limit.
2020-05-24 09:19:47 -04:00
Sanjay Patel bfd512160f [InstCombine] improve analysis of FP->int->FP to eliminate fpextend
This was originally in D79116.
Converting from a narrow-enough FP source value to integer and
back to FP guarantees that the conversion to FP is exact because
of UB/poison-on-overflow.

This was suggested in PR36617:
https://bugs.llvm.org/show_bug.cgi?id=36617#c19
2020-05-17 09:06:57 -04:00