Commit Graph

7573 Commits

Author SHA1 Message Date
Simon Pilgrim 64687f2cc3 [X86][SSE] canonicalizeShuffleWithBinOps - add PERMILPS/PERMILPD + PERMPD/PERMQ + INSERTPS handling.
Bail if the INSERTPS would introduce zeros across the binop.
2021-03-16 13:52:08 +00:00
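
A minimal standalone sketch (not LLVM code) of why that early-out matters: once the shuffle writes zeros into a lane, pushing it through a binop is only safe if op(0,0) == 0. The lane-wise equality compare below is a hypothetical counterexample, chosen purely for illustration.

// Illustrative only: models shuffle(binop(x,y)) vs binop(shuffle(x),shuffle(y))
// when the "shuffle" zeroes a lane (as INSERTPS/PSHUFB can).
#include <array>
#include <cassert>
#include <cstdint>

using V4 = std::array<int32_t, 4>;

// A shuffle that keeps lanes 0..2 and writes zero into lane 3.
static V4 shuffleZeroLane3(const V4 &v) { return {v[0], v[1], v[2], 0}; }

// A binop where op(0,0) != 0: lane-wise "equal" producing all-ones masks.
static V4 cmpeq(const V4 &a, const V4 &b) {
  V4 r;
  for (int i = 0; i < 4; ++i)
    r[i] = (a[i] == b[i]) ? -1 : 0;
  return r;
}

int main() {
  V4 x = {1, 2, 3, 4}, y = {1, 9, 3, 4};
  V4 before = shuffleZeroLane3(cmpeq(x, y));                  // lane 3 becomes 0
  V4 after = cmpeq(shuffleZeroLane3(x), shuffleZeroLane3(y)); // lane 3 becomes -1
  assert(before[3] != after[3]); // pushing the zeroing shuffle through miscompiles
}
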
Simon Pilgrim 772155793b [X86][SSE] isHorizontalBinOp - ensure we clear any unused source operands to improve HADD/SUB matching
Our shuffle matching for HADD/SUB patterns wasn't clearing repeated ops in 'fake unary' style shuffle masks (unpack(x,x) etc.), preventing matching of add(fakeunary(),fakeunary()) style patterns.
2021-03-15 16:24:29 +00:00
Simon Pilgrim 814339454d [X86][SSE] canonicalizeShuffleWithBinOps - handle target shuffles.
Fold SHUFFLE(BINOP(SHUFFLE(X),SHUFFLE(Y))) -> BINOP(SHUFFLE'(X),SHUFFLE'(Y)) style patterns as well as the existing shuffles of constants.
2021-03-15 15:01:29 +00:00
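
A minimal standalone sketch (not LLVM code) of the identity behind this canonicalization: for a pure permutation (no zeroing), the outer shuffle commutes with the binop lane-for-lane, so it can be pushed onto both operands and merged with any inner shuffles there.

// Illustrative only: shuffle(binop(x,y)) == binop(shuffle(x),shuffle(y)).
#include <array>
#include <cassert>
#include <cstdint>

using V4 = std::array<int32_t, 4>;
using Mask = std::array<int, 4>;

static V4 shuffle(const V4 &v, const Mask &m) {
  return {v[m[0]], v[m[1]], v[m[2]], v[m[3]]};
}
static V4 add(const V4 &a, const V4 &b) {
  return {a[0] + b[0], a[1] + b[1], a[2] + b[2], a[3] + b[3]};
}

int main() {
  V4 x = {10, 20, 30, 40}, y = {1, 2, 3, 4};
  Mask m = {3, 1, 2, 0}; // an arbitrary permutation
  assert(shuffle(add(x, y), m) == add(shuffle(x, m), shuffle(y, m)));
}
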
Simon Pilgrim 07232f4507 [X86][SSE] canonicalizeShuffleWithBinOps - add X86ISD::PSHUFB handling.
Recommit rGcd938ab162b0ac560dd0e9fee290980c7e0e47e5 with an early-out if the pshufb would introduce zeros across the binop.
2021-03-15 12:43:30 +00:00
Simon Pilgrim 75a184dacf Revert rG9ba577eca2e339726bfaad4e615c6324a705b292 "[X86][SSE] canonicalizeShuffleWithBinOps - handle target shuffles. NFCI."
Sorry this wasn't supposed to be committed yet (and certainly not tagged as NFCI....)
2021-03-15 12:23:44 +00:00
Simon Pilgrim 9ba577eca2 [X86][SSE] canonicalizeShuffleWithBinOps - handle target shuffles. NFCI.
Fold SHUFFLE(BINOP(SHUFFLE(X),SHUFFLE(Y))) -> BINOP(SHUFFLE'(X),SHUFFLE'(Y)) style patterns as well as the existing shuffles of constants.
2021-03-15 11:59:25 +00:00
Simon Pilgrim 6878be5dc3 [X86][SSE] Attempt to merge single-op hops for slow targets.
For slow-hop targets, see if any single-op hops are duplicating work already done on another (dual-op) hop, which can sometimes occur as isHorizontalBinOp tries to find potential duplicates (but can't merge them itself). If so, reuse the other hop and shuffle the result.
2021-03-15 09:30:20 +00:00
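
A minimal standalone sketch (not LLVM code, assuming the usual HADDPS lane order) of the reuse described above: a single-op hop hadd(x,x) only repeats lanes already computed by a dual-op hop hadd(x,y), so it can be rebuilt as a shuffle of the existing hop.

// Illustrative only: hadd(x,x) == shuffle(hadd(x,y), {0,1,0,1}).
#include <array>
#include <cassert>

using V4 = std::array<float, 4>;

// Scalar model of HADDPS: {a0+a1, a2+a3, b0+b1, b2+b3}.
static V4 hadd(const V4 &a, const V4 &b) {
  return {a[0] + a[1], a[2] + a[3], b[0] + b[1], b[2] + b[3]};
}

int main() {
  V4 x = {1, 2, 3, 4}, y = {5, 6, 7, 8};
  V4 dual = hadd(x, y);                             // already computed elsewhere
  V4 single = hadd(x, x);                           // the redundant single-op hop
  V4 reused = {dual[0], dual[1], dual[0], dual[1]}; // shuffle(dual, {0,1,0,1})
  assert(single == reused);
}
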
Simon Pilgrim 6cb7dddaf4 [X86][AVX] Insert zeros byte elements into 256/512-bit vectors using shuffle/and
Avoid extracting/inserting subvectors which makes it more difficult for shuffle combining to merge them together.
2021-03-12 15:16:36 +00:00
Simon Pilgrim 33dcdd414c [X86] Provide lighter weight getTargetShuffleMask wrapper. NFCI.
Most callers to getTargetShuffleMask don't use the IsUnary flag.
2021-03-12 15:16:35 +00:00
Simon Pilgrim bc5e9ec2dc Revert rGcd938ab162b0ac560dd0e9fee290980c7e0e47e5 "[X86] canonicalizeShuffleWithBinOps - add X86ISD::PSHUFB handling."
Investigating an issue reported by @bkramer, possibly when the PSHUFB mask generates zero elements.
2021-03-11 13:14:00 +00:00
Simon Pilgrim 77394c12a4 [X86] Don't attempt to fold sub(C1, xor(X, C2)) with opaque constants
Fixes PR49451
2021-03-11 12:06:40 +00:00
Simon Pilgrim d0884541cc [X86] canonicalizeShuffleWithBinOps - add binary shuffle handling 2021-03-09 13:57:03 +00:00
Simon Pilgrim f71cee136d [X86] Break if-else chain. NFCI.
Both if blocks affect control flow - we don't need the else.

Fixes clang-tidy warning.
2021-03-08 11:44:31 +00:00
Simon Pilgrim cd938ab162 [X86] canonicalizeShuffleWithBinOps - add X86ISD::PSHUFB handling. 2021-03-07 12:56:35 +00:00
Simon Pilgrim 772a501bf4 [X86] canonicalizeShuffleWithBinOps - shuffle oneuse constants.
We can freely shuffle all-ones/all-zeros constants, but we can also freely shuffle other constants as long as they only have one use.
2021-03-07 11:17:03 +00:00
Alexey Lapshin cf7cdaff64 [X86][VARARG] Avoid spilling xmm registers for va_start.
This patch is extracted from D69372.
It fixes https://bugs.llvm.org/show_bug.cgi?id=42219.

Under noimplicitfloat, the compiler must not generate floating-point code
unless it is asked to do so directly. This rule is currently broken for
variable function arguments: although the compiler correctly guards the block
of code that copies xmm vararg parameters behind a check of %al, it does not
protect the spills of those xmm registers. Such spills are therefore emitted
in unguarded areas and can break code that does not expect floating-point
state. The problem shows up at -O0, where the fast register allocator spills
virtual registers at basic block boundaries and does not protect those spills
with additional control-flow modifications. To resolve this, do not copy the
incoming physical xmm registers into virtual registers; instead, store the
incoming physical xmm registers directly to memory.

Differential Revision: https://reviews.llvm.org/D80163
2021-03-06 15:25:47 +03:00
Simon Pilgrim d7b8cb4d57 [X86] X86ISelLowering.cpp - try to use for-range loops. NFCI. 2021-03-05 11:09:14 +00:00
Simon Pilgrim 7cbc5df438 [X86] X86TargetLowering::isSafeMemOpType - break if-else chain. NFCI.
All if-else blocks return - fixes clang-tidy warning.
2021-03-04 12:15:08 +00:00
Simon Pilgrim 1584e55a26 [X86] canonicalizeShuffleWithBinOps - handle general unaryshuffle(binop(x,c)) patterns not just xor(x,-1)
Generalize the shuffle(not(x)) -> not(shuffle(x)) fold to handle any binop with 0/-1.

Hopefully we can further generalize this to help push target unary/binary shuffles through binops, similar to what we do in DAGCombiner::visitVECTOR_SHUFFLE.
2021-03-04 10:44:38 +00:00
Simon Pilgrim aa4afebbf9 [X86] Fold scalar_to_vector(x) -> extract_subvector(broadcast(x),0) iff broadcast(x) exists
Add handling for reusing an existing broadcast(x) to a wider vector.
2021-03-03 15:50:37 +00:00
Benjamin Kramer 10c256ccaf Revert "[X86] Fold shuffle(not(x),undef) -> not(shuffle(x,undef))"
This reverts commit 925093d88a.

Causes an infinite loop when compiling some shuffles:

$ cat bugpoint-reduced-simplified.ll
target triple = "x86_64-unknown-linux-gnu"

define void @foo() {
entry:
  %0 = load i8, i8* undef, align 1
  %broadcast.splatinsert = insertelement <16 x i8> poison, i8 %0, i32 0
  %1 = icmp ne <16 x i8> %broadcast.splatinsert, zeroinitializer
  %2 = shufflevector <16 x i1> %1, <16 x i1> undef, <16 x i32> zeroinitializer
  %wide.load = load <16 x i8>, <16 x i8>* undef, align 1
  %3 = icmp ne <16 x i8> %wide.load, zeroinitializer
  %4 = and <16 x i1> %3, %2
  %5 = zext <16 x i1> %4 to <16 x i8>
  store <16 x i8> %5, <16 x i8>* undef, align 1
  ret void
}

$ llc < bugpoint-reduced-simplified.ll
<timeout>
2021-03-02 11:24:07 +01:00
Simon Pilgrim 925093d88a [X86] Fold shuffle(not(x),undef) -> not(shuffle(x,undef))
Move NOT out to expose more AND -> ANDN folds
2021-03-01 14:47:39 +00:00
Simon Pilgrim ab3ea27b6f [X86][AVX] Reuse existing VBROADCAST(x) for SCALAR_TO_VECTOR(x)
Similar to what we already do for BROADCASTs of different vector sizes - if we're going to broadcast it anyway, we might as well reuse it.
2021-02-28 11:37:27 +00:00
Craig Topper 993f4d8ffa [X86] Fix a couple comments that said LHS where they meant RHS. NFC 2021-02-27 17:14:17 -08:00
James Y Knight 6de6455752 Use getAlign() on atomicrmw/cmpxchg instructions, now that it's available.
These locations were missed as part of adding alignment to the
instructions, and were still making their own alignment assumptions.
2021-02-26 15:06:15 -05:00
Simon Pilgrim ed1f45bce9 [X86][AVX] SimplifyDemandedBitsForTargetNode - add basic X86ISD::VBROADCAST handling.
Simplify through to the scalar/vector source operand.
2021-02-26 16:13:14 +00:00
Simon Pilgrim 7ac4c956af [X86] Remove unnecessary custom lowering of vXi1 SADDSAT/SSUBSAT/UADDSAT/USUBSAT
As discussed on D97478. The removal of the custom tag causes some changes in the add/sub-overflow expansion as it no longer expands to sat-arith codegen.
2021-02-26 12:10:23 +00:00
Simon Pilgrim aefe8f2f6c [DAG] Fold vXi1 multiplies -> and
This allows us to remove X86 custom lowering of vXi1 MUL, which helps simplify a load of mask math.

Mentioned in D97478 post review.
2021-02-26 11:46:12 +00:00
Simon Pilgrim 40b8b4a466 [X86] Remove unnecessary custom lowering of v16i1/v32i1 ADD/SUB
These were missed in D97478
2021-02-26 11:46:11 +00:00
Craig Topper ceaedfb5fc [X86] Remove custom lowering of vXi1 ADD/SUB now that they are canonicalized to XOR in getNode.
Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D97478
2021-02-25 08:52:41 -08:00
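
A minimal standalone sketch (not LLVM code) of the i1 identities that make the vXi1 lowering removals above safe: over a single bit, multiplication is AND and addition/subtraction (mod 2) are XOR.

// Illustrative only: i1 arithmetic identities, checked exhaustively.
#include <cassert>

int main() {
  for (unsigned a = 0; a <= 1; ++a)
    for (unsigned b = 0; b <= 1; ++b) {
      assert(((a * b) & 1) == (a & b)); // vXi1 MUL -> AND
      assert(((a + b) & 1) == (a ^ b)); // vXi1 ADD -> XOR
      assert(((a - b) & 1) == (a ^ b)); // vXi1 SUB -> XOR
    }
}
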
Simon Pilgrim 8b82669d56 [X86][SSE] Move unaryshuffle(xor(x,-1)) -> xor(unaryshuffle(x),-1) fold into helper. NFCI.
We should be able to extend this "canonicalizeShuffleWithBinOps" to handle more generic binop cases where either/both operands can be cheaply shuffled.
2021-02-25 10:56:23 +00:00
Simon Pilgrim b568d3d6c9 [X86] Add vector support to sub(C1, xor(X, C2)) -> add(xor(X, ~C2), C1+1) fold. 2021-02-21 21:51:27 +00:00
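
A minimal standalone sketch (not LLVM code) of the identity behind the fold above, checked exhaustively at 8 bits: C1 - (X ^ C2) == (X ^ ~C2) + (C1 + 1), which holds in two's complement because -Y == ~Y + 1 and ~(X ^ C2) == X ^ ~C2.

// Illustrative only: brute-force check of sub(C1, xor(X, C2)) == add(xor(X, ~C2), C1+1).
#include <cassert>
#include <cstdint>

int main() {
  for (unsigned c1 = 0; c1 < 256; ++c1)
    for (unsigned c2 = 0; c2 < 256; ++c2)
      for (unsigned x = 0; x < 256; ++x) {
        uint8_t lhs = uint8_t(c1 - (x ^ c2));
        uint8_t rhs = uint8_t((x ^ ~c2) + (c1 + 1));
        assert(lhs == rhs);
      }
}
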
Simon Pilgrim 3ab32c94a4 [X86] Replace explicit constant handling in sub(C1, xor(X, C2)) -> add(xor(X, ~C2), C1+1) fold. NFCI.
NFC cleanup before adding vector support - rely on the SelectionDAG to handle everything for us.
2021-02-21 21:40:32 +00:00
Simon Pilgrim bae04a3e2d [X86][AVX] canonicalizeLaneShuffleWithRepeatedOps - remove unnecessary BITCASTs.
In conjunction with the 'vperm2x128(bitcast(x),bitcast(y),c) -> bitcast(vperm2x128(x,y,c))' fold in combineTargetShuffle, this should remove any unnecessary bitcasts around vperm2x128 lane shuffles.
2021-02-21 18:40:32 +00:00
Simon Pilgrim a6a258f1da [X86][AVX] Fold concat(extract_subvector(v0,c0), extract_subvector(v1,c1)) -> vperm2x128
Fixes regression exposed by removing bitcasts across logic-ops in D96206.

Differential Revision: https://reviews.llvm.org/D96206
2021-02-21 14:50:43 +00:00
Simon Pilgrim 2885d1251f [X86] Fold bitcast(logic(bitcast(X), Y)) --> logic'(X, bitcast(Y)) for int-int bitcasts
Extend the existing combine that handles bitcasting for fp-logic ops to also help remove logic ops across bitcasts to/from the same integer types.

This helps improve AVX512 predicate handling for D/Q logic ops and also allows DAGCombine's scalarizeExtractedBinop to remove some annoying gpr->simd->gpr transfers.

The concat_vectors regression in pr40891.ll will be addressed in a followup commit on this patch.

Differential Revision: https://reviews.llvm.org/D96206
2021-02-21 14:40:54 +00:00
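
A minimal standalone sketch (not LLVM code) of why the int-int case is sound: a pure reinterpreting bitcast between same-width integer types does not move any bits, so bitwise logic commutes with it. The <2 x i32> / i64 pairing below is just an example.

// Illustrative only: bitcast(and(bitcast(X), Y)) == and(X, bitcast(Y)).
#include <array>
#include <cassert>
#include <cstdint>
#include <cstring>

using V2I32 = std::array<uint32_t, 2>;

static uint64_t bitcastToI64(const V2I32 &v) {
  uint64_t r;
  std::memcpy(&r, v.data(), sizeof(r));
  return r;
}
static V2I32 bitcastToV2I32(uint64_t v) {
  V2I32 r;
  std::memcpy(r.data(), &v, sizeof(v));
  return r;
}

int main() {
  V2I32 x = {0x12345678u, 0x9ABCDEF0u};
  uint64_t y = 0x0F0F0F0FF0F0F0F0ull;
  uint64_t asI64 = bitcastToI64(x) & y;       // logic done on the i64 side
  V2I32 yv = bitcastToV2I32(y);
  V2I32 asV2 = {x[0] & yv[0], x[1] & yv[1]};  // same logic done lane-wise
  assert(bitcastToI64(asV2) == asI64);
}
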
Simon Pilgrim 761bbed264 [DAG] foldSubToUSubSat - fold sub(a,trunc(umin(zext(a),b))) -> usubsat(a,trunc(umin(b,SatLimit)))
This moves the last custom x86 USUBSAT fold to generic DAGCombine.

Completes PR40111

Differential Revision: https://reviews.llvm.org/D96703
2021-02-20 12:02:07 +00:00
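
A minimal standalone sketch (not LLVM code) of the identity behind foldSubToUSubSat, checked exhaustively with i8 values and an i16 'b' (so 255 plays the role of SatLimit).

// Illustrative only: a - trunc(umin(zext(a), b)) == usubsat(a, trunc(umin(b, 255))).
#include <algorithm>
#include <cassert>
#include <cstdint>

static uint8_t usubsat8(uint8_t a, uint8_t b) { return a > b ? a - b : 0; }

int main() {
  for (unsigned a = 0; a < 256; ++a)
    for (unsigned b = 0; b < 65536; ++b) {
      uint8_t lhs = uint8_t(a - std::min<unsigned>(a, b)); // never underflows
      uint8_t rhs = usubsat8(uint8_t(a), uint8_t(std::min<unsigned>(b, 255)));
      assert(lhs == rhs);
    }
}
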
Simon Pilgrim 2258b367db [X86][AVX] getFauxShuffleMask - decode VBROADCAST(EXTRACT_VECTOR_ELT(V,0))
Handle the case where we're broadcasting a scalar extracted from another vector.
2021-02-19 11:06:53 +00:00
Wang, Pengfei c98644c2ec [X86] Fix a codegen crash in getSetCCResultType
This patch fixes some crashes coming from
X86ISelLowering::getSetCCResultType, which would occasionally return
an EVT constructed from an invalid MVT, which has a null Type pointer.

This patch refers to D95434.

Differential Revision: https://reviews.llvm.org/D97036
2021-02-19 17:30:10 +08:00
Simon Pilgrim 05c64ea672 [DAG] Fold shuffle(bop(shuffle(x,y),shuffle(z,w)),bop(shuffle(a,b),shuffle(c,d))) (REAPPLIED)
Fold shuffle(bop(shuffle(x,y),shuffle(z,w)),bop(shuffle(a,b),shuffle(c,d))) -> bop(shuffle(x,y),shuffle(z,w)),bop(shuffle(a,b),shuffle(c,d))

Attempt to fold from a shuffle of a pair of binops to a binop of shuffles, as long as one/both of the binop sources are also shuffles that can be merged with the outer shuffle. This should guarantee that we remove one binop without introducing any additional shuffles.

Technically there's potential for a merged shuffle's lowering to be poorer than the original shuffle, but it could also be better, and I'm not seeing any regressions as long as we keep the 'don't merge splats' rule already present in MergeInnerShuffle.

This expands and generalizes an existing X86 combine and attempts to merge either of each binop's sources (with an on-the-fly commutation of the shuffle mask) - we couldn't do that in the x86 version as it had to stay in a form that DAGCombine's MergeInnerShuffle would still recognise.

Fixes issue raised by @saugustine in rG5aa8f4c0843a where we were failing to replace null shuffle operands from MergeInnerShuffle to UNDEFs.

Differential Revision: https://reviews.llvm.org/D96345
2021-02-17 11:42:43 +00:00
Sterling Augustine 5aa8f4c084 Revert "[DAG] Fold shuffle(bop(shuffle(x,y),shuffle(z,w)),bop(shuffle(a,b),shuffle(c,d)))"
This reverts commit 5dfba562dd.

That commit causes an assertion failure with the following repro:

typedef long b __attribute__((__vector_size__(16)));
b *d;
b e;
b __attribute__((__always_inline__)) c(b h, b i) {
  return (__attribute__((__vector_size__(8 * sizeof(short)))) short)h + i;
}
j() {
  b k, l, m, n, o[6], p, q;
  m = d[5];
  b r = m;
  b s = f(r, 8);
  q = s;
  l = d[1];
  p = l;
  t(q);
  n = c(m, l);
  o[1] = c(s, f(p, 8));
  k = __builtin_shufflevector(n, o[1], 0, 2);
  e = __builtin_ia32_psrlwi128(k, j);
}

./bin/clang -cc1 -triple x86_64-grtev4-linux-gnu -emit-obj -O1 -std=c99 test.c
2021-02-16 12:48:15 -08:00
Simon Pilgrim 5dfba562dd [DAG] Fold shuffle(bop(shuffle(x,y),shuffle(z,w)),bop(shuffle(a,b),shuffle(c,d)))
Fold shuffle(bop(shuffle(x,y),shuffle(z,w)),bop(shuffle(a,b),shuffle(c,d))) -> bop(shuffle(x,y),shuffle(z,w)),bop(shuffle(a,b),shuffle(c,d))

Attempt to fold from a shuffle of a pair of binops to a binop of shuffles, as long as one/both of the binop sources are also shuffles that can be merged with the outer shuffle. This should guarantee that we remove one binop without introducing any additional shuffles.

Technically there's potential for a merged shuffle's lowering to be poorer than the original shuffle, but it could also be better, and I'm not seeing any regressions as long as we keep the 'don't merge splats' rule already present in MergeInnerShuffle.

This expands and generalizes an existing X86 combine and attempts to merge either of each binop's sources (with an on-the-fly commutation of the shuffle mask) - we couldn't do that in the x86 version as it had to stay in a form that DAGCombine's MergeInnerShuffle would still recognise.

Differential Revision: https://reviews.llvm.org/D96345
2021-02-16 15:46:34 +00:00
Simon Pilgrim 4841a225b7 [DAG] Move basic USUBSAT pattern matches from X86 to DAGCombine
Begin transitioning the X86 vector code to recognise sub(umax(a,b) ,b) or sub(a,umin(a,b)) USUBSAT patterns to make it more generic and available to all targets.

This initial patch just moves the basic umin/umax patterns to DAG, removing some vector-only checks on the way - these are some of the patterns that the legalizer will try to expand back to so we can be reasonably relaxed about matching these pre-legalization.

We can handle the trunc(sub(..)) variants as well, which helps with patterns where we were promoting to a wider type to detect overflow/saturation.

The remaining x86 code requires some cleanup first - some of it isn't actually tested etc. I also need to resurrect D25987.

Differential Revision: https://reviews.llvm.org/D96413
2021-02-12 18:22:57 +00:00
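
A minimal standalone sketch (not LLVM code) of the two basic patterns the commit above moves to DAGCombine, checked exhaustively at 8 bits: both sub(umax(a,b), b) and sub(a, umin(a,b)) equal usubsat(a, b).

// Illustrative only: brute-force check of the basic USUBSAT patterns.
#include <algorithm>
#include <cassert>
#include <cstdint>

static uint8_t usubsat8(uint8_t a, uint8_t b) { return a > b ? a - b : 0; }

int main() {
  for (unsigned a = 0; a < 256; ++a)
    for (unsigned b = 0; b < 256; ++b) {
      assert(uint8_t(std::max(a, b) - b) == usubsat8(a, b)); // sub(umax(a,b), b)
      assert(uint8_t(a - std::min(a, b)) == usubsat8(a, b)); // sub(a, umin(a,b))
    }
}
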
Simon Pilgrim eb31c3c5cb Revert rGe1172959226689a "[X86][AVX] canonicalizeLaneShuffleWithRepeatedOps - merge VPERMILPD ops with different low/high masks."
Revert this while I investigate a downstream breakage report.
2021-02-10 10:26:44 +00:00
Simon Pilgrim 89d9ff8229 [X86][SSE] foldShuffleOfHorizOp - add SHUFPS v4f32 handling
Fold shufps(hop(x,y),hop(z,w)) -> permute(hop(x,z)) - this is very similar to the equivalent unpack fold.

I did start trying to convert foldShuffleOfHorizOp to handle generic shuffle masks but we're relying on a lot of special cases at the moment.
2021-02-09 14:18:45 +00:00
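
A minimal standalone sketch (not LLVM code, assuming the usual HADDPS/SHUFPS lane order) of one instance of the fold above: a SHUFPS that takes the low half of hadd(x,y) and the low half of hadd(z,w) reproduces hadd(x,z) exactly, so one hop and the shuffle become redundant.

// Illustrative only: shufps(hadd(x,y), hadd(z,w), 0,1,0,1) == hadd(x,z).
#include <array>
#include <cassert>

using V4 = std::array<float, 4>;

// Scalar model of HADDPS: {a0+a1, a2+a3, b0+b1, b2+b3}.
static V4 hadd(const V4 &a, const V4 &b) {
  return {a[0] + a[1], a[2] + a[3], b[0] + b[1], b[2] + b[3]};
}
// SHUFPS-style shuffle: two lanes from a, then two lanes from b.
static V4 shufps(const V4 &a, const V4 &b, int i0, int i1, int i2, int i3) {
  return {a[i0], a[i1], b[i2], b[i3]};
}

int main() {
  V4 x = {1, 2, 3, 4}, y = {5, 6, 7, 8}, z = {9, 10, 11, 12}, w = {13, 14, 15, 16};
  V4 shuffled = shufps(hadd(x, y), hadd(z, w), 0, 1, 0, 1);
  assert(shuffled == hadd(x, z));
}
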
Simon Pilgrim 598ceb25d4 [X86][AVX] Fold extract_subvector(splat, c) -> extract_subvector(splat, 0)
We already do this for VBROADCASTs; extend it to any splat that SelectionDAG::isSplatValue recognises as well.
2021-02-07 11:42:41 +00:00
Craig Topper 6f4f0efd89 [X86] Don't pass a 1 to the second argument of ISD::FP_ROUND in LowerFCOPYSIGN.
I don't think we have any reason to believe the FP_ROUND here doesn't change the value.

Found while trying to see if we still need the fp128 block in CanCombineFCOPYSIGN_EXTEND_ROUND.
Removing that check caused this FP_ROUND to fire for fp128 which introduced a libcall expansion that asserted for this being a 1.

Reviewed By: RKSimon, pengfei

Differential Revision: https://reviews.llvm.org/D96098
2021-02-06 10:29:01 -08:00
Simon Pilgrim e117295922 [X86][AVX] canonicalizeLaneShuffleWithRepeatedOps - merge VPERMILPD ops with different low/high masks.
Now that PR48908 has been dealt with, we can handle v4f64 permute cases by extracting the low/high lane VPERMILPD masks and creating a new mask based on which lanes are referenced by the VPERM2F128 mask.
2021-02-06 15:58:02 +00:00
Craig Topper 11ef356d9e [TargetLowering] Use Align in allowsMisalignedMemoryAccesses.
Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D96097
2021-02-04 19:22:06 -08:00
Simon Pilgrim 31b85e1c0b [X86] Use VT::changeVectorElementType helper where possible. NFCI. 2021-02-04 15:03:56 +00:00