Commit Graph

1416 Commits

Author SHA1 Message Date
Rosie Sumpter e5edc1b5ee [AArch64][SVE] Ensure PTEST operands have type nxv16i1
Currently any legal predicate type will be pattern-matched when
creating a PTEST instruction. This could be a problem in the future
since PTEST always uses the .B specifier for the operand, but it is
not guaranteed that the extra lanes of unpacked types (e.g. nxv4i1)
are zero. This patch ensures the operands of PTEST have type nxv16i1,
with the undef lanes set to zero.
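
A minimal IR sketch of the widening this relies on, using the existing
convert.to.svbool intrinsic (value names illustrative):

    ; widen an nxv4i1 predicate to nxv16i1 with the extra lanes zeroed
    %wide = call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv4i1(<vscale x 4 x i1> %p)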

Differential Revision: https://reviews.llvm.org/D129282
2022-07-12 09:27:59 +01:00
David Green 4334cbd49b [AArch64] Remove incorrect use of DemandElts
This call to computeKnownBits was passing in a 0xff mask, as if it
expected the mask to be used as DemandBits rather than as a DemandElts mask.
2022-07-08 11:38:00 +01:00
Florian Hahn afdedd405e
[AArch64] Try to re-use extended operand for SETCC with vector ops.
Try to re-use an already extended operand for SetCC with vector operands
feeding an extended select. Doing so avoids requiring another full
extension of the SET_CC result when lowering the select.

This improves lowering for certain extend/cmp/select patterns. For
example, with v16i8 this replaces the 6 instructions for the extra
extension with 4 separate selects.

This improves the generated code for loops like the one below in
combination with D96522.

    #include <stdint.h>

    int foo(uint8_t *p, int N) {
      unsigned long long sum = 0;
      for (int i = 0; i < N; i++, p++) {
        unsigned int v = *p;
        sum += (v < 127) ? v : 256 - v;
      }
      return sum;
    }

https://clang.godbolt.org/z/Wco866MjY

On the AArch64 cores I have access to, the patch improves performance of
the vector loop by ~10%.

This could be generalized in follow-ups, but the initial version
targets one of the more important cases in combination with D96522.

Alive2 modeling:
* sext EQ https://alive2.llvm.org/ce/z/5upBvb
* sext NE https://alive2.llvm.org/ce/z/zbEcJp
* zext EQ https://alive2.llvm.org/ce/z/_xMwof
* zext NE https://alive2.llvm.org/ce/z/5FwKfc
* zext unsigned predicate: https://alive2.llvm.org/ce/z/iEwLU3
* sext signed predicate: https://alive2.llvm.org/ce/z/aMBega

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D120481
2022-07-07 16:50:00 -07:00
Bradley Smith 60d6be5dd3 [LegalizeTypes] Replace vecreduce_xor/or/and with vecreduce_add/umax/umin if not legal
This is done during type legalization since the target representation of
these nodes may not be valid until after type legalization, and after
type legalization the fact that these are dealing with i1 types may be
lost.
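
For i1 elements the replacements are exact: add on i1 is xor (a mod-2
sum), and umax/umin on i1 are or/and. A hedged illustration using the
IR reduce intrinsics (the combine itself acts on the corresponding ISD
nodes):

    ; for <N x i1> these pairs compute the same value:
    %x = call i1 @llvm.vector.reduce.xor.v16i1(<16 x i1> %v)  ; == add-reduce (parity)
    %o = call i1 @llvm.vector.reduce.or.v16i1(<16 x i1> %v)   ; == umax-reduce
    %a = call i1 @llvm.vector.reduce.and.v16i1(<16 x i1> %v)  ; == umin-reduce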

Differential Revision: https://reviews.llvm.org/D128996
2022-07-07 09:33:54 +00:00
Sander de Smalen 5d4f6ce229 [AArch64][SVE] Zero other lanes when doing OR reduction on unpacked predicate using ptest.
When the predicate vector is unpacked, we cannot assume anything about the
values in the other lanes. We have to make sure we use the correct
predicate where we know that the other lanes have been zeroed.

Reviewed By: RosieSumpter

Differential Revision: https://reviews.llvm.org/D129081
2022-07-06 16:12:44 +00:00
Sander de Smalen 95e08824fa [AArch64] Add support for various operations on nxv1i1 types.
The supported operations are:
* Logical operations (and, or, xor, bic)
* Logical reductions (and, or, xor, [us]min, [us]max)
* Conversions to/from svbool_t
* Predicate count (CNTP)

Reviewed By: paulwalker-arm

Differential Revision: https://reviews.llvm.org/D128835
2022-07-06 15:57:11 +00:00
David Sherwood 77b13a57a9 [AArch64][SME] Add SME addha/va intrinsics
This patch adds the following new SME intrinsics:

  @llvm.aarch64.sme.addva
  @llvm.aarch64.sme.addha
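
A hedged usage sketch; the operand shape (tile index, row and column
predicates, source vector) is assumed from the instruction's form:

    call void @llvm.aarch64.sme.addha.nxv4i32(i32 0, <vscale x 4 x i1> %pn,
                                              <vscale x 4 x i1> %pm,
                                              <vscale x 4 x i32> %zn)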

Differential Revision: https://reviews.llvm.org/D127861
2022-07-05 09:47:17 +01:00
Sander de Smalen bf89d24f53 [AArch64] NFC: Move safe predicate casting to a separate function.
This patch puts the code to safely bitcast a predicate, and possibly zero
any undefined lanes when doing a widening cast, into one place and merges
the functionality with lowerConvertToSVBool.

This is some cleanup inspired by D128665.

Reviewed By: paulwalker-arm

Differential Revision: https://reviews.llvm.org/D128926
2022-07-04 10:32:54 +00:00
Sander de Smalen 690db16422 [AArch64] Make nxv1i1 types a legal type for SVE.
One motivation to add support for these types is the LD1Q/ST1Q
instructions in SME, for which we have defined a number of load/store
intrinsics which at the moment still take a `<vscale x 16 x i1>` predicate
regardless of their element type.

This patch adds basic support for the nxv1i1 type such that it can be
passed/returned from functions, as well as enough support to handle the
existing tests that result in an nxv1i1 type. It also adds support for splats.
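
A minimal IR sketch of what now works (illustrative):

    define <vscale x 1 x i1> @pass(<vscale x 1 x i1> %p) {
      ret <vscale x 1 x i1> %p
    }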

Other operations (e.g. insert/extract subvector, logical ops, etc) will be
supported in follow-up patches.

Reviewed By: paulwalker-arm, efriedma

Differential Revision: https://reviews.llvm.org/D128665
2022-07-01 15:11:13 +00:00
Matt Devereau 5166345f50 [SVE][AArch64] Refine hasSVEArgsOrReturn
As described in aapcs64 (https://github.com/ARM-software/abi-aa/blob/2022Q1/aapcs64/aapcs64.rst#scalable-vector-registers)
AAVPCS is used only when registers z0-z7 take an SVE argument. This fixes the case where floats occupy the lower bits
of registers z0-z7 but SVE arguments in registers greater than z7 cause a function to use AAVPCS where it should use AAPCS.

This is fixed by moving SVE function deduction from
AArch64RegisterInfo::hasSVEArgsOrReturn to
AArch64TargetLowering::LowerFormalArguments, where physical register
lowering is more accurate.

Differential Revision: https://reviews.llvm.org/D127209
2022-07-01 13:24:55 +00:00
Matt Devereau 018a0dd5c8 [AArch64][SVE] Create AArch64ISD node for DUPQLANE128
Create an AArch64ISD node instead of emitting machine node DUP_ZZI_Q.
This allows a simpler DAG combine for work previously attempted
in https://reviews.llvm.org/D128503

Differential Revision: https://reviews.llvm.org/D128902
2022-07-01 11:46:24 +00:00
Sander de Smalen fbefc62a96 [AArch64][SME] Sink tile offset operands into the loop for load/store instructions.
This helps ISel decompose the generic offset for the tile into a base + offset.

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D128508
2022-06-28 10:28:36 +01:00
David Sherwood 054faac9f9 [AArch64][SME] Add SVE2 psel, uclamp, sclamp and revd IR intrinsics
When the SME feature is enabled we also gain access to a few extra
SVE2 instructions. This patch adds LLVM IR intrinsics to make use
of these new instructions:

  @llvm.aarch64.sve.psel
  @llvm.aarch64.sve.revd
  @llvm.aarch64.sve.sclamp
  @llvm.aarch64.sve.uclamp
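
A hedged usage sketch for sclamp; the argument order (value, low bound,
high bound) is an assumption:

    %r = call <vscale x 4 x i32> @llvm.aarch64.sve.sclamp.nxv4i32(<vscale x 4 x i32> %val,
                                                                  <vscale x 4 x i32> %lo,
                                                                  <vscale x 4 x i32> %hi)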

Differential Revision: https://reviews.llvm.org/D128332
2022-06-28 10:25:06 +01:00
David Sherwood f916ee0fb1 [AArch64][SME] Add SME outer product intrinsics
This patch adds the following intrinsics to support the SME ACLE:

  * @llvm.aarch64.sme.mopa: Non-widening outer product + accumulate
  * @llvm.aarch64.sme.mops: Non-widening outer product + subtract
  * @llvm.aarch64.sme.mopa.wide: Widening outer product + accumulate
  * @llvm.aarch64.sme.mops.wide: Widening outer product + subtract
  * @llvm.aarch64.sme.smopa.wide: Widening signed sum of outer product + accumulate
  * @llvm.aarch64.sme.smops.wide: Widening signed sum of outer product + subtract
  * @llvm.aarch64.sme.umopa.wide: Widening unsigned sum of outer product + accumulate
  * @llvm.aarch64.sme.umops.wide: Widening unsigned sum of outer product + subtract
  * @llvm.aarch64.sme.sumopa.wide: Widening signed by unsigned sum of outer product + accumulate
  * @llvm.aarch64.sme.sumops.wide: Widening signed by unsigned sum of outer product + subtract
  * @llvm.aarch64.sme.usmopa.wide: Widening unsigned by signed sum of outer product + accumulate
  * @llvm.aarch64.sme.usmops.wide: Widening unsigned by signed sum of outer product + subtract
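
A hedged sketch of a non-widening single-precision accumulate; the
operand shape (tile index, two predicates, two source vectors) is
assumed from the instruction's form:

    call void @llvm.aarch64.sme.mopa.nxv4f32(i32 0, <vscale x 4 x i1> %pn,
                                             <vscale x 4 x i1> %pm,
                                             <vscale x 4 x float> %zn,
                                             <vscale x 4 x float> %zm)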

Differential Revision: https://reviews.llvm.org/D127956
2022-06-28 09:41:44 +01:00
David Green 03c65c0d32 [AArch64] Convert vector add(ext, ext) into ext(add(ext, ext))
Given a vector add or sub from extends that needs more than one 'step'
(i.e. i8 to i32 or i16 to i64), we can transform the sequence to
sext(add(ext, ext)), to allow the add(ext, ext) to become a single uaddl
and a larger extend, producing fewer instructions in total.
https://alive2.llvm.org/ce/z/S2T4k-
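
A hedged IR illustration of the two-step case (zext shown; i8 sums
cannot overflow i16, so the split is exact):

    %a32 = zext <8 x i8> %a to <8 x i32>
    %b32 = zext <8 x i8> %b to <8 x i32>
    %s   = add <8 x i32> %a32, %b32
    ; becomes a single uaddl feeding a wider extend:
    %a16 = zext <8 x i8> %a to <8 x i16>
    %b16 = zext <8 x i8> %b to <8 x i16>
    %s16 = add <8 x i16> %a16, %b16           ; uaddl
    %s2  = zext <8 x i16> %s16 to <8 x i32>   ; wider extend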

Differential Revision: https://reviews.llvm.org/D128426
2022-06-24 10:04:28 +01:00
Paul Walker e8716179eb [SVE] Make ISD::SPLAT_VECTOR a legal operation.
The implication of this patch is that AArch64ISD::DUP no longer
supports scalable vectors.

Differential Revision: https://reviews.llvm.org/D128265
2022-06-23 00:42:47 +01:00
David Sherwood aa0a413df8 [AArch64][SME] Add some SME PSTATE setting/query intrinsics
This patch adds support for:

* Querying the PSTATE.SM state with @llvm.aarch64.sme.get.pstatesm
* Reading/writing the TPIDR2 register with new
@llvm.aarch64.sme.get.tpidr2 and @llvm.aarch64.sme.set.tpidr2
intrinsics.
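
Hedged usage sketches (i64 signatures assumed):

    %sm = call i64 @llvm.aarch64.sme.get.pstatesm()   ; nonzero in streaming mode
    %t2 = call i64 @llvm.aarch64.sme.get.tpidr2()
    call void @llvm.aarch64.sme.set.tpidr2(i64 %t2)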

Tests added here:

  CodeGen/AArch64/sme-get-pstatesm.ll
  CodeGen/AArch64/sme-read-write-tpidr2.ll

Differential Revision: https://reviews.llvm.org/D127957
2022-06-22 10:26:45 +01:00
Paul Walker 7b285ae0e8 [SVE] Lower "unpredicated" sabd/uabd intrinsics to ISD::ABDS/U.
This enables an existing transformation that when combined with an
add will emit saba/uaba instructions.
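
A hedged sketch of the targeted pattern (all-active predicate; names
illustrative):

    %pg = call <vscale x 4 x i1> @llvm.aarch64.sve.ptrue.nxv4i1(i32 31)
    %d  = call <vscale x 4 x i32> @llvm.aarch64.sve.sabd.nxv4i32(<vscale x 4 x i1> %pg,
                                                                 <vscale x 4 x i32> %a,
                                                                 <vscale x 4 x i32> %b)
    %r  = add <vscale x 4 x i32> %acc, %d   ; the add lets this become saba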

Differential Revision: https://reviews.llvm.org/D128198
2022-06-22 00:02:51 +01:00
David Green 3f81841474 [AArch64] Add Extract(DUP(C)) as a canonical constant.
As a follow-up to D128144, this adds extract(DUP(C)) as a canonical
constant to prevent it from being transformed back into a BUILD_VECTOR,
leading to an infinite loop.
2022-06-21 09:51:22 +01:00
David Green c0ecbfa4fd [AArch64] Known bits for AArch64ISD::DUP
An AArch64ISD::DUP is just a splat, where the known bits for each lane
are the same as the input. This teaches that to computeKnownBitsForTargetNode.

Problems arise for constants though, as a constant BUILD_VECTOR can be
lowered to an AArch64ISD::DUP, which SimplifyDemandedBits would then
turn back into a constant BUILD_VECTOR, leading to an infinite cycle.
This is prevented by adding an isTargetCanonicalConstantNode hook that
blocks the conversion back into a BUILD_VECTOR.

Differential Revision: https://reviews.llvm.org/D128144
2022-06-20 19:11:57 +01:00
David Sherwood 013358632e [AArch64][SME] Add the zero intrinsic
The SME zero instruction takes a mask as an input declaring which
64-bit element tiles should be zeroed. There is a 1:1 mapping between
the zero intrinsic and the instruction; however, we also want to make
the register allocator aware that some tile registers are being
written to.

We can actually just use the custom inserter for a pseudo instruction
to correctly mark all the appropriate registers in the mask as
implicitly defined by the operation.
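
A hedged example call; the i32 immediate mask is an assumption, with
each bit selecting a 64-bit element tile:

    call void @llvm.aarch64.sme.zero(i32 255)   ; zero all eight 64-bit tiles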

Differential Revision: https://reviews.llvm.org/D127843
2022-06-20 14:27:59 +01:00
Simon Pilgrim e4a124dda5 [DAG] Fold (srl (shl x, c1), c2) -> and(shl/srl(x, c3), m)
Similar to the existing (shl (srl x, c1), c2) fold
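
A worked i32 instance with c1 = 8 and c2 = 16, so c3 = 8 and m = 0xffff:

    %t  = shl i32 %x, 8
    %r  = lshr i32 %t, 16
    ; folds to:
    %s  = lshr i32 %x, 8
    %r2 = and i32 %s, 65535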

Part of the work to fix the regressions in D77804

Differential Revision: https://reviews.llvm.org/D125836
2022-06-20 08:37:38 +01:00
David Sherwood 6f6fa5aa10 [AArch64][SME] Add SME cntsb/h/w/d intrinsics
These intrinsics return the number of elements in a streaming
vector, for example aarch64.sme.cntsw returns the number of
32-bit elements. When in streaming mode these are equivalent
to aarch64.sve.cntb/h/w/d with an input value of 1.
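
A hedged usage sketch (i64 return assumed):

    %n = call i64 @llvm.aarch64.sme.cntsw()   ; 32-bit elements per streaming vector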

I have implemented these intrinsics using the rdsvl instruction
and added tests here:

  CodeGen/AArch64/SME/sme-intrinsics-rdsvl.ll

Differential Revision: https://reviews.llvm.org/D127853
2022-06-16 10:50:25 +01:00
David Sherwood 5fa2416ea0 [AArch64][SME] Add SME read/write intrinsics that map to the mova instruction
This patch adds implementations for the read/write SME ACLE intrinsics:

  @llvm.aarch64.sme.read.horiz
  @llvm.aarch64.sme.read.vert
  @llvm.aarch64.sme.write.horiz
  @llvm.aarch64.sme.write.vert

These all map to the SME mova instruction.

Differential Revision: https://reviews.llvm.org/D127414
2022-06-15 10:31:07 +01:00
David Sherwood bd61664167 [AArch64][SME] Add ldr/str (fill/spill) intrinsics
This patch adds implementations for the fill/spill SME ACLE intrinsics:

    @llvm.aarch64.sme.ldr
    @llvm.aarch64.sme.str

Differential Revision: https://reviews.llvm.org/D127317
2022-06-14 13:58:22 +01:00
Rosie Sumpter 2c4e44752d [AArch64][SME] Add load/store intrinsics
This patch adds implementations for the load/store SME ACLE intrinsics:
  - @llvm.aarch64.sme.ld1*
  - @llvm.aarch64.sme.st1*

Differential Revision: https://reviews.llvm.org/D127210
2022-06-14 11:11:22 +01:00
Guillaume Chatelet d1a27d0b9c [NFC][Alignment] Use proper version of getAlign 2022-06-13 12:59:38 +00:00
David Green 338fd211e7 [AArch64] Generate FADDP from shuffled fadd
As a follow-up to D126686, this does the same fold for floating-point
add and shuffle. In this case it is limited, via the reassoc flag, to
either x[0]+x[1] or x[1]+x[0] for both result[0] and result[1].
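
A hedged IR illustration (v2f32; the reassoc flag is required):

    %sw = shufflevector <2 x float> %x, <2 x float> poison, <2 x i32> <i32 1, i32 0>
    %r  = fadd reassoc <2 x float> %x, %sw
    ; both lanes hold x[0]+x[1], so a single faddp of %x with itself suffices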

Differential Revision: https://reviews.llvm.org/D127087
2022-06-11 14:16:37 +01:00
Ahmed Bougacha c68b469e07 [AArch64][SVE] Don't crash on pre-legalizer types in extload combine.
This was assuming the vector types were MVTs, but they don't have to be.

Note that the concrete output of the test isn't very useful, since it's
dominated by nonsensical calling convention lowering for the weird types.

Differential Revision: https://reviews.llvm.org/D126505
2022-06-09 10:33:21 -07:00
Paul Walker a1121c31d8 [SVE] Fix incorrect code generation for bitcasts of unpacked vector types.
Bitcasting between unpacked scalable vector types of different
element counts is not a NOP because the live elements are laid out
differently.
               01234567
e.g. nxv2i32 = XX??XX??
     nxv4f16 = X?X?X?X?

Differential Revision: https://reviews.llvm.org/D126957
2022-06-08 10:30:07 +01:00
Guillaume Chatelet 0788186182 [Alignment][NFC] Remove usage of MemSDNode::getAlignment
I can't remove the function just yet as it is used in the generated .inc files.
I would also like to provide a way to compare alignment with TypeSize since it came up a few times.

Differential Revision: https://reviews.llvm.org/D126910
2022-06-07 13:52:20 +00:00
David Green 4ea1b43527 [AArch64] Generate ADDP from shuffled add
This adds a fold of add(x, shuffle(x, <1,0,3,2,5,4,...>)) into
shuffle(addp(x, x), <0,0,1,1,2,2,...>). The ADDP instruction takes two
vectors and returns one, adding adjacent pairs. So we match x in a
custom combine as it is lowered from a v8i32. The original code
would be 2 rev64 and 2 add, with the new code being a single addp
with a zip1/zip2 shuffle, producing smaller code.
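
A hedged IR illustration of the matched pattern (v4i32 shown):

    %sw = shufflevector <4 x i32> %x, <4 x i32> poison, <4 x i32> <i32 1, i32 0, i32 3, i32 2>
    %r  = add <4 x i32> %x, %sw
    ; lanes are [x0+x1, x1+x0, x2+x3, x3+x2]; addp(x, x) gives [x0+x1, x2+x3, ...],
    ; so shuffle(addp(x, x), <0,0,1,1>) reproduces %r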

Differential Revision: https://reviews.llvm.org/D126686
2022-06-06 11:39:51 +01:00
Paul Walker 2dde272db7 [SVE] Refactor sve-bitcast.ll to include all combinations for legal types.
Patch enables custom lowering for MVT::nxv4bf16 because otherwise
the refactored test file triggers a selection failure.

The reason for the refactoring is to highlight cases where the
generated code is wrong.
2022-06-03 12:09:19 +01:00
Paul Walker 48ea26a387 [SVE] Fixed custom lowering of ISD::INSERT_SUBVECTOR.
LowerINSERT_SUBVECTOR emits AArch64ISD::UUNPK## when lowering
scalable vector floating point INSERT_SUBVECTOR. However, these
nodes only make sense for integer types and thus isel patterns do
not exist for floating point, which leads to isel failures.

This patch ensures floating point operands are cast to integer
before the core lowering takes place.
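
A hedged IR sketch of the kind of operation that previously failed to
select (written with the current llvm.vector.insert spelling; at the
time the intrinsic carried an experimental prefix):

    %r = call <vscale x 4 x float> @llvm.vector.insert.nxv4f32.nxv2f32(
             <vscale x 4 x float> %vec, <vscale x 2 x float> %sub, i64 0)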

Fixes: #55037

Differential Revision: https://reviews.llvm.org/D126487
2022-06-02 14:51:04 +01:00
Paul Walker 1fe4953d89 [SVE] Remove custom lowering of scalable vector MGATHER & MSCATTER operations.
Differential Revision: https://reviews.llvm.org/D126255
2022-06-02 11:19:52 +01:00
Sander de Smalen 9c38fc111b [AArch64] Remove references to Streaming SVE from target features.
Following discussion on D120261 and D121208 it seems better to remove the
concept of Streaming SVE from the subtarget/assembler predicates and
instead reason about 'SVE' and 'SME' as its higher level features, rather
than trying to model this runtime mode through explicit feature flags.

This patch is largely NFC.

Reviewed By: paulwalker-arm, david-arm

Differential Revision: https://reviews.llvm.org/D125977
2022-05-31 16:25:01 +02:00
David Green 9a3144d078 [AArch64] Reuse larger DUP if available
If both a v2i32 DUP(x) and a v4i32 DUP(x) node exists, we can re-use the
larger node using a vector extract to obtain the smaller. This comes up
in the smull/smlal code, but needs a small fixup to allow the smull2
code in tryExtendDUPToExtractHigh/performAddSubLongCombine to still
match smull2 extracts.

Differential Revision: https://reviews.llvm.org/D126449
2022-05-29 19:42:13 +01:00
Florian Hahn 786c687810
[AArch64] Add support for FMA intrinsics to shouldSinkOperands.
If the fma operates on a legal vector type, the indexed variants can be
used, if the second operand is a splat of a valid index.
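
A hedged IR sketch of an operand worth sinking (names illustrative):

    %s = shufflevector <4 x float> %m, <4 x float> poison, <4 x i32> zeroinitializer
    %r = call <4 x float> @llvm.fma.v4f32(<4 x float> %a, <4 x float> %s, <4 x float> %acc)
    ; the splat of lane 0 permits the indexed form: fmla v.4s, v.4s, vm.s[0]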

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D126234
2022-05-27 10:37:03 +01:00
Zongwei Lan ad73ce318e [Target] use getSubtarget<> instead of static_cast<>(getSubtarget())
Differential Revision: https://reviews.llvm.org/D125391
2022-05-26 11:22:41 -07:00
Florian Hahn 0cc981e021
[AArch64] implement isReassocProfitable, disable for (u|s)mlal.
Currently reassociating add expressions can lead to failing to select
(u|s)mlal. Implement isReassocProfitable to skip reassociating
expressions that can be lowered to (u|s)mlal.

The same issue exists for the *mlsl variants as well, but the DAG
combiner doesn't use the isReassocProfitable hook before reassociating.
To be fixed in a follow-up commit as this requires DAGCombiner changes
as well.

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D125895
2022-05-23 09:39:00 +01:00
David Green 6ef5e242f2 [AArch64] Fix assumptions on input type of tryCombineFixedPointConvert
It is possible for the input type to not be v2i64 or v4i32, so weaken
the assertion to a return, fixing the crash in the new test.

Fixes #55606
2022-05-23 08:55:54 +01:00
Paul Walker 258dac43d6 [SVE] Enable use of 32bit gather/scatter indices for fixed length vectors
Differential Revision: https://reviews.llvm.org/D125193
2022-05-22 12:32:30 +01:00
Paul Walker 216f546c84 [SVE] Refactor lowering for fixed length MGATHER/MSCATTER.
Lower fixed length MGATHER/MSCATTER operations to scalable vector
equivalents, which are then lowered to SVE specific nodes. This
two stage process is in preparation for making scalable vector
MGATHER/MSCATTER operations legal.

Differential Revision: https://reviews.llvm.org/D125192
2022-05-21 10:14:45 +01:00
Rahul Anand R 534ea8bca5 [AArch64] Generate AND in place of CSEL for predicated CTTZ
This patch implements a target-specific optimization that replaces
the cmp and csel from cttz with an and mask.
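
A hedged sketch of one instance of the idea (i32; on AArch64 cttz
lowers to rbit+clz, and clz of zero is 32):

    %z = call i32 @llvm.cttz.i32(i32 %x, i1 false)
    %c = icmp eq i32 %x, 0
    %r = select i1 %c, i32 0, i32 %z
    ; %z is 32 exactly when %x == 0, so the cmp+csel can become:
    ; %r = and i32 %z, 31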

Recommitted with a fix for truncated value sizes.

Differential Revision: https://reviews.llvm.org/D123782
2022-05-20 13:41:32 +01:00
David Green 602f81ec33 [AArch64] Fix zero element TBL indices
A TBL instruction will fill out-of-range values with 0's, something used
in D121139 to turn tbl2 with a zero input into tbl1s. This works OK for
v16i8, but for v8i8 the input is still treated as a v16i8, so
out-of-range values (like a lane index of 8) would end up loading values
from the top half of the input register. Clean this up by detecting
the out-of-range indices and making sure they remain out of range
after adjustment. There is also a fix for swapped indices of 64-bit
input vectors, which could be incorrectly adjusted if the zero vector
was the first operand.

Fixes #55545

Differential Revision: https://reviews.llvm.org/D125865
2022-05-19 13:54:35 +01:00
Jay Foad 6bec3e9303 [APInt] Remove all uses of zextOrSelf, sextOrSelf and truncOrSelf
Most clients only used these methods because they wanted to be able to
extend or truncate to the same bit width (which is a no-op). Now that
the standard zext, sext and trunc allow this, there is no reason to use
the OrSelf versions.

The OrSelf versions additionally have the strange behaviour of allowing
extending to a *smaller* width, or truncating to a *larger* width, which
are also treated as no-ops. A small amount of client code relied on this
(ConstantRange::castOp and MicrosoftCXXNameMangler::mangleNumber) and
needed rewriting.

Differential Revision: https://reviews.llvm.org/D125557
2022-05-19 11:23:13 +01:00
David Green 4c6a070a2c [AArch64] Teach perfect shuffles tables about D-lane movs
Similar to D123386, this adds D-Movs to the AArch64 perfect shuffle
tables, lowering the costs a little more. This is broadly an
improvement, especially if you ignore mov v0.16b, v2.16b type
moves that are often artefacts of the calling convention.

The D register movs are encoded as (0x4 | LaneIdx), and to generate a D
register move we are required to bitcast into a higher type, but it is
otherwise very similar to the S-lane movs already supported.

Differential Revision: https://reviews.llvm.org/D125477
2022-05-17 18:16:45 +01:00
Simon Pilgrim d40b7f0d5a [DAG] Fold (shl (srl x, c), c) -> and(x, m) even if srl has other uses
If we're using shift pairs to mask, then relax the one use limit if the shift amounts are equal - we'll only be generating a single AND node.
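
A worked i32 instance with equal shift amounts of 8:

    %t = lshr i32 %x, 8
    %u = shl i32 %t, 8
    ; equals a single mask:
    %r = and i32 %x, -256   ; 0xffffff00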

AArch64 has a couple of regressions due to this, so I've enforced the existing one use limit inside an AArch64TargetLowering::shouldFoldConstantShiftPairToMask callback.

Part of the work to fix the regressions in D77804

Differential Revision: https://reviews.llvm.org/D125607
2022-05-17 13:40:11 +01:00
Paul Walker ee8aa351e4 [AArch64] Use ADDV for boolean xor reductions.
NEON does not have native support for xor reductions. However, when
reducing predicate vectors the operation is synonymous with an add
reduction that is supported.
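
A hedged IR illustration (on i1, add is xor, i.e. a mod-2 sum):

    %r = call i1 @llvm.vector.reduce.xor.v16i1(<16 x i1> %p)
    ; equivalent to an add reduction of the predicate, which maps to ADDV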

Differential Revision: https://reviews.llvm.org/D125605
2022-05-16 22:34:12 +01:00
Paul Walker 7dd05ba9ed [SelectionDAG] Remove duplicate "is scaled" information from gather/scatter SDNodes.
During early gather/scatter enablement two different approaches
were taken to represent scaled indices:

* A Scale operand whereby byte_offsets = Index * Scale
* An IndexType whereby byte_offsets = Index * sizeof(MemVT.ElementType)

Having multiple representations is bad as shown by this patch which
fixes instances where the two are out of sync. The dedicated scale
operand is more flexible and pervasive so this patch removes the
UNSCALED values from IndexType. This means all indices are scaled
but the scale can be one, hence unscaled. SDNodes now use the scale
operand to answer the "isScaledIndex" question.

I toyed with the idea of keeping the UNSCALED enums and helper
functions, but because they would have no uses and would force SDNodes
to validate the set of supported values, I figured it best to remove
them. We can re-add them if there's a real need. For similar reasons
I've kept the IndexType enum where a bool could be used, as I think
being explicit looks better.

Depends On D123347

Differential Revision: https://reviews.llvm.org/D123381
2022-05-16 20:47:52 +01:00