Commit Graph

64867 Commits

Author SHA1 Message Date
Mark Schimmel 417f8ea4ba [ARC] ARCRegisterInfo cleanup prior to adding core register pairs (ARC32) and 64-bit core registers (ARC64)
Differential Revision: https://reviews.llvm.org/D11108
2021-10-07 13:01:49 -07:00
Jay Foad 5b8befdd02 [X86] Special-case ADD of two identical registers in convertToThreeAddress
X86InstrInfo::convertToThreeAddress would convert this:

  %1:gr32 = ADD32rr killed %0:gr32(tied-def 0), %0:gr32, implicit-def dead $eflags

to this:

  undef %2.sub_32bit:gr64 = COPY killed %0:gr32
  undef %3.sub_32bit:gr64_nosp = COPY %0:gr32
  %1:gr32 = LEA64_32r killed %2:gr64, 1, killed %3:gr64_nosp, 0, $noreg

Note that in the ADD32rr, %0 was used twice and the first use had a kill
flag, which is what MachineInstr::addRegisterKilled does.

In the converted code, each use of %0 is copied to a new reg, and the
first COPY inherits the kill flag from the ADD32rr. This causes
machine verification to fail (if you force it to run after
TwoAddressInstructionPass) because the second COPY uses %0 after it is
killed. Note that machine verification is currently disabled after
TwoAddressInstructionPass but this is a step towards being able to
enable it.

Fix this by not inserting more than one COPY from the same source
register.
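
A minimal sketch of the deduplication idea, with assumed local names (this
is illustrative, not the verbatim X86InstrInfo change):

```
// Cache the new 64-bit vreg per source register so each source is
// COPY'd at most once; only the first COPY may inherit the kill flag.
DenseMap<Register, Register> CopiedSrcs;
auto getCopiedReg = [&](Register Src, bool IsKill) -> Register {
  if (Register R = CopiedSrcs.lookup(Src))
    return R; // second use of %0 reuses the first COPY
  Register NewReg = MRI.createVirtualRegister(&X86::GR64RegClass);
  BuildMI(*MBB, MI, MI.getDebugLoc(), TII.get(TargetOpcode::COPY))
      .addReg(NewReg, RegState::DefineNoRead, X86::sub_32bit)
      .addReg(Src, getKillRegState(IsKill));
  CopiedSrcs[Src] = NewReg;
  return NewReg;
};
```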

Differential Revision: https://reviews.llvm.org/D110829
2021-10-07 19:50:27 +01:00
Craig Topper c4803bd416 [RISCV] Handle vector of pointer in getTgtMemIntrinsic for strided load/store.
getScalarSizeInBits() doesn't work if the scalar type is a pointer.
For that we need to go through DataLayout.
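
A sketch of that pattern, with assumed variable names:

```
// Pointer scalars report 0 from getScalarSizeInBits(), so query
// DataLayout for the actual size instead.
Type *EltTy = DataTy->getScalarType();
unsigned EltSize = EltTy->isPointerTy()
                       ? DL.getTypeSizeInBits(EltTy).getFixedValue()
                       : EltTy->getScalarSizeInBits();
```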
2021-10-07 10:11:56 -07:00
Mark Schimmel 20c074ee96 [ARC] Add option to ARCOptAddrMode to disable the pass and diagnose errors
Fixed formatting issues reported by clang-format

Differential Revision: https://reviews.llvm.org/D111255
2021-10-07 09:07:07 -07:00
Bradley Smith 5be266db7a [AArch64][SVE] Improve VECTOR_SPLICE codegen for VL > 128-bit
Differential Revision: https://reviews.llvm.org/D111135
2021-10-07 15:28:55 +00:00
Jack Andersen bd4dad87f4 [MachineInstr] Move MIParser's DBG_VALUE RegState::Debug invariant into MachineInstr::addOperand
Based on the reasoning of D53903, register operands of DBG_VALUE are
invariably treated as RegState::Debug operands. This change enforces
this invariant as part of MachineInstr::addOperand so that all passes
emit this flag consistently.
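
A minimal sketch of such an enforcement point, assuming it sits at the top
of addOperand (the actual patch may differ in detail):

```
// Normalize the flag up front so every pass gets it right by construction.
MachineOperand NewMO = Op; // addOperand copies its operand anyway
if (isDebugInstr() && NewMO.isReg() && !NewMO.isDef())
  NewMO.setIsDebug(true);
```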

RegState::Debug is inconsistently set on DBG_VALUE registers throughout
LLVM. This runs the risk of a filtering iterator like
MachineRegisterInfo::reg_nodbg_iterator processing these operands
erroneously when they were not parsed from MIR sources.

This issue was observed in the development of the llvm-mos fork which
adds a backend that relies on physical register operands much more than
existing targets. Physical RegUnit 0 has the same numeric encoding as
$noreg (indicating an undef for DBG_VALUE). Allowing debug operands into
the machine scheduler correlates $noreg with RegUnit 0 (i.e. a collision
of register numbers with different zero semantics). Eventually, this
causes an assert where DBG_VALUE instructions are prohibited from
participating in live register ranges.

Reviewed By: MatzeB, StephenTozer

Differential Revision: https://reviews.llvm.org/D110105
2021-10-07 16:08:52 +01:00
Michael Forster 53801a59eb Fix two unused-variable warnings. 2021-10-07 15:17:58 +02:00
Chen Zheng 1bf05fbc98 [PowerPC] refactor rewriteLoadStores for reusing; nfc
This is split from https://reviews.llvm.org/D108750.
Refactor rewriteLoadStores() so that we can reuse the outlined
functions.

Reviewed By: jsji

Differential Revision: https://reviews.llvm.org/D110314
2021-10-07 12:59:20 +00:00
David Green 73346f5848 [ARM] Introduce a MQPRCopy
Currently when creating tail predicated loops, we need to validate that
all the live-outs of a loop will be equivalent with and without tail
predication, and if they are not we cannot legally create a
tail-predicated loop, leaving expensive vctp and vpst instructions in
the loop. These notably can include register-allocation instructions
like stack loads and stores, and copies lowered from COPYs to MVE_VORRs.

Instead of trying to prove this is valid late in the pipeline, this
patch introduces a MQPRCopy pseudo instruction that COPY is lowered to.
This can then either be converted to a MVE_VORR where possible, or to a
couple of VMOVD instructions if not. This way they do not behave
differently within and outside of tail-predicated regions, and we can
know by construction that they are always valid. The idea is that we can
do the same with stack load and stores, converting them to VLDR/VSTR or
VLDM/VSTM where required to prove tail predication is always valid.
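
A hedged sketch of how the pseudo might be expanded late (the opcodes follow
the description above; operand details are simplified assumptions):

```
if (CanUseMVE) {
  // One MVE_VORR acts as a whole-Q-register move.
  BuildMI(MBB, MI, DL, TII->get(ARM::MVE_VORR), Dst)
      .addReg(Src)
      .addReg(Src)
      .add(predOps(ARMCC::AL));
} else {
  // Otherwise copy each 64-bit half with a VMOVD.
  for (unsigned Sub : {ARM::dsub_0, ARM::dsub_1})
    BuildMI(MBB, MI, DL, TII->get(ARM::VMOVD), TRI->getSubReg(Dst, Sub))
        .addReg(TRI->getSubReg(Src, Sub))
        .add(predOps(ARMCC::AL));
}
```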

This does unfortunately mean inserting multiple VMOVD instructions,
instead of a single MVE_VORR, but my experiments show it to be an
improvement in general.

Differential Revision: https://reviews.llvm.org/D111048
2021-10-07 12:52:12 +01:00
Cullen Rhodes 14cb138b15 [AArch64][SME] Update DUP (predicate) instruction
Changes in architecture revision 00eac1:
  * Renamed to PSEL.
  * Copies whole source register.
  * Element type suffix removed from destination.
  * Element index no longer optional and '#' prefix has been removed.

The reference can be found here:
https://developer.arm.com/documentation/ddi0602/2021-09

Depends on D111212.

Reviewed By: kmclaughlin

Differential Revision: https://reviews.llvm.org/D111213
2021-10-07 08:55:11 +00:00
Cullen Rhodes 42ba79b7b0 [AArch64][SME] Update tile slice index offset
Changes in architecture revision 00eac1:
  * Tile slice index offset no longer prefixed with '#'.
  * The syntax for 128-bit (.Q) ZA tile slice accesses must now include
    an explicit zero index.

The reference can be found here:
https://developer.arm.com/documentation/ddi0602/2021-09

Reviewed By: david-arm

Differential Revision: https://reviews.llvm.org/D111212
2021-10-07 08:55:10 +00:00
Itay Bookstein 40ec1c0f16 [IR][NFC] Rename getBaseObject to getAliaseeObject
To better reflect the meaning of the now-disambiguated {GlobalValue,
GlobalAlias}::getBaseObject after breaking off GlobalIFunc::getResolverFunction
(D109792), the function is renamed to getAliaseeObject.
2021-10-06 19:33:10 -07:00
Craig Topper 58b68e70eb [X86] Don't use popcnt for parity if only bits 7:0 of the input can be non-zero.
Without popcnt we had a special case for using the parity flag from a single i8 test instruction if only bits 7:0 could be non-zero. That special case is still useful when we have popcnt.

To reach this special case, we enable custom lowering of parity for i16/i32/i64 even when popcnt is enabled. The check for POPCNT being enabled is now after the special case in LowerPARITY.
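
Roughly, the new ordering looks like this (the helper names are
hypothetical):

```
// LowerPARITY, reordered (sketch): try the 8-bit PF trick first.
if (DAG.MaskedValueIsZero(X, APInt::getBitsSetFrom(VT.getSizeInBits(), 8)))
  return lowerParityViaPF(X, DAG);     // single TEST, read the parity flag
if (Subtarget.hasPOPCNT())
  return lowerParityViaPopcnt(X, DAG); // popcnt + mask with 1
```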

Fixes PR52093

Differential Revision: https://reviews.llvm.org/D111249
2021-10-06 14:39:57 -07:00
Wolfgang Pieb f53d05135e [UBSAN][PS4] For the PS4 target, emit the ud2 opcode for ubsan traps.
Reviewed By: probinson

Differential Revision: https://reviews.llvm.org/D111172
2021-10-06 11:49:36 -07:00
Stefan Pintilie 740086596c [PowerPC] Fix issue with lowering byval parameters.
Lowering of byval parameters with sizes that cannot be represented by a single
store requires multiple stores to properly cover the exact size of the
parameter.

Sizes that cannot be handled with a single store are 3, 5, 6, and
7 bytes. It is not correct to simply perform an 8 byte store for these
elements because then the store would be larger than the element and alias
analysis would assume that this is undefined behaviour and return NoAlias
for them.

This patch adds the correct stores so that the size of the store is not larger
than the size of the element.
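
For instance, a 7-byte parameter becomes a 4-byte, a 2-byte, and a 1-byte
store. A minimal sketch of that decomposition (ByValSize and the
store-emitting helper are illustrative):

```
// Split an odd byval size into non-overlapping power-of-two stores,
// e.g. 7 bytes -> 4 + 2 + 1.
uint64_t Remaining = ByValSize, Offset = 0;
for (uint64_t Size = 8; Size >= 1; Size /= 2)
  while (Remaining >= Size) {
    emitStore(Size, Offset); // hypothetical helper: one store of Size bytes
    Offset += Size;
    Remaining -= Size;
  }
```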

Reviewed By: nemanjai, #powerpc

Differential Revision: https://reviews.llvm.org/D108795
2021-10-06 13:19:15 -05:00
Shivam Gupta 7a189333ed [NFC] Add doxygen comment for hasFp in RISCVFrameLowering.cpp 2021-10-06 22:35:28 +05:30
Pengxuan Zheng b0045f5595 [ARM] Fix a bug in finding a pair of extracts to create VMOVRRD
D100244 missed a check on the ResNo of the extract's operand 0 when finding a
pair of extracts to combine into a VMOVRRD (extract(x, n); extract(x, n+1) ->
VMOVRRD(extract x, n/2)). As a result, it can incorrectly pair an extract(x, n)
with another extract(x:3, n+1) for example. This patch fixes the bug by adding
the proper check on ResNo.

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D111188
2021-10-06 10:03:32 -07:00
Simon Pilgrim 21661607ca [llvm] Replace report_fatal_error(std::string) uses with report_fatal_error(Twine)
As described on D111049, we're trying to remove the <string> dependency from error handling and replace uses of report_fatal_error(const std::string&) with the Twine() variant which can be forward declared.
2021-10-06 12:04:30 +01:00
Nathan Sidwell c11e7b59d2 [X86][NFC] structure-return simplification
The X86 backend only needs to know whether structure return is via an
sret pointer.  This removes the categorization enumeration and
adjusts, templatizes and renames the related functions.

Differential Revision: https://reviews.llvm.org/D109966
2021-10-06 03:12:48 -07:00
Simon Pilgrim 0776924a17 [CostModel][X86] getCmpSelInstrCost - treat BAD_PREDICATEs the same as the worst case cost predicates for ICMP/FCMP instructions
As suggested on D111024, we should treat getCmpSelInstrCost calls without a specific predicate as matching the worst case predicate cost.

These regressions will be addressed with a mixture of D111024 and fixing other specific getCmpSelInstrCost calls to have realistic predicates.
2021-10-06 10:14:56 +01:00
Jonas Paulsson 3562076dfc [SystemZ] Temporarily revert memcmp and memcpy patches
These seem to cause test failures in compiler-rt.

Revert "[SystemZ] Implement memcmp of variable length with CLC."
This reverts commit 7a4e9a0c73.

Revert "[SystemZ] Implement memcpy of variable length with MVC."
This reverts commit c6c13c58ee.
2021-10-06 11:05:18 +02:00
David Spickett fc36fb4d23 Revert "Second Recommit "[AArch64] Split bitmask immediate of bitwise AND operation""
This reverts commit 13f3c39f36.

Due to test failures in stage 2 clang tests on AArch64 bots.
2021-10-06 08:39:48 +00:00
Paulo Matos 0c7495848a [WebAssembly] Fix call_indirect on funcrefs
The current implementation of funcrefs is broken since it puts
the funcref itself on the stack before the call_indirect. Instead, what
should be on the stack is the constant 0, which is the index at which
we store the funcref in __funcref_call_table.

Reviewed By: tlively

Differential Revision: https://reviews.llvm.org/D111152
2021-10-06 10:11:53 +02:00
Paulo Matos 91fe069c35 [WebAssembly] De-duplicate WasmAddressSpace and refactor reftype predicates
This is a non-functional change to remove the duplicate
WasmAddressSpace enum and refactor reftype predicates by moving them
to the Utilities source file.

Reviewed By: tlively

Differential Revision: https://reviews.llvm.org/D111144
2021-10-06 09:56:23 +02:00
Carl Ritson adf7043a9f [AMDGPU] Only remove branches in SIInstrInfo::removeBranch
Without this change _term instructions can be removed during
critical edge splitting.

Reviewed By: foad

Differential Revision: https://reviews.llvm.org/D111126
2021-10-06 10:34:26 +09:00
Heejin Ahn 3ec1760d91 [WebAssembly] Remove WasmTagType
This removes `WasmTagType`. `WasmTagType` contained an attribute and a
signature index:
```
struct WasmTagType {
  uint8_t Attribute;
  uint32_t SigIndex;
};
```

Currently the attribute field is not used; it is reserved for future use
and always 0. Also, it is a little weird that this class contains
`SigIndex` as a property, because the tag type's signature index is
not an inherent property of a tag but rather a reference to another
section that changes after linking. This also makes tag handling in the
linker awkward: tag-related methods take both `WasmTagType`
and `WasmSignature` even though `WasmTagType` contains a signature
index, because the signature index changes during linking and so carries
no useful info at that point. This instead moves `SigIndex` to
`struct WasmTag` itself, as we did for `struct WasmFunction` in D111104.
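
A sketch of the resulting shape (field set inferred from the description
above):

```
// After this change the tag carries its signature index directly,
// mirroring what D111104 did for struct WasmFunction.
struct WasmTag {
  uint32_t Index;
  uint32_t SigIndex; // moved here from the removed WasmTagType
};
```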

In this CL, in lib/MC and lib/Object, this now treats tag types in the
same way as function types. Also in YAML, this removes `struct Tag`,
because now it only contains the tag index. Also tags set `SigIndex` in
`WasmImport` union, as functions do.

I think this makes things simpler and makes tag handling more in line
with function handling. These two share similar properties in that both
of them have signatures, but they are kind of nominal so having the same
signature doesn't mean they are the same element.

Also a drive-by fix: the reserved 'attribute' field's encoding changed
from uleb32 to uint8 a while ago. This was fixed in lib/MC and
lib/Object but not in YAML. This doesn't change object files because the
field's value is always 0 and that value is encoded identically either
way.

This is effectively NFC; I didn't mark it as such just because it
changed YAML test results.

Reviewed By: sbc100, tlively

Differential Revision: https://reviews.llvm.org/D111086
2021-10-05 17:11:22 -07:00
Simon Pilgrim 2e5daac217 [llvm] Update report_fatal_error calls from raw_string_ostream to use Twine(OS.str())
As described on D111049, we're trying to remove the <string> dependency from error handling and replace uses of report_fatal_error(const std::string&) with the Twine() variant which can be forward declared.

We can use the raw_string_ostream::str() method to perform the implicit flush() and return a reference to the std::string container that we can then wrap inside Twine().
2021-10-05 18:42:12 +01:00
Jonas Paulsson 7a4e9a0c73 [SystemZ] Implement memcmp of variable length with CLC.
Following the same pattern as memset/memcpy, this patch implements a variable
length memcmp with a CLC loop followed by an EXRL instruction.

Review: Ulrich Weigand

Differential Revision: https://reviews.llvm.org/D107380
2021-10-05 18:20:36 +02:00
Jonas Paulsson c6c13c58ee [SystemZ] Implement memcpy of variable length with MVC.
Instead of making a memcpy libcall, emit an MVC loop and an EXRL instruction
the same way as is already done for memset 0.

Review: Ulrich Weigand

Differential Revision: https://reviews.llvm.org/D106874
2021-10-05 17:14:41 +02:00
Peter Waller be26e6ff73 [AArch64][SVE] Remove redundant PTEST following PNEXT/PFIRST
PNEXT and PFIRST set the NZCV flags, so the subsequent PTEST can be
optimized away in AArch64InstrInfo::optimizePTestInstr.

See-also: https://reviews.llvm.org/D93292

Differential Revision: https://reviews.llvm.org/D110177
2021-10-05 15:10:48 +00:00
Matthew Devereau 2ac1999937 [AArch64][SVE] Propagate math flags from intrinsics to instructions
Retain floating-point math flags inside instCombineSVEVectorBinOp
2021-10-05 15:39:13 +01:00
Roman Lebedev 3f9b235482
[X86][Costmodel] Load/store i64/f64 Stride=6 VF=8 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/1jfGddcre - for intels `Block RThroughput: =36.0`; for ryzens, `Block RThroughput: =12.0`
So could pick cost of `36`

For store we have:
https://godbolt.org/z/ao9srMT8r - for intels `Block RThroughput: =30.0`; for ryzens, `Block RThroughput: =12.0`
So we could pick cost of `30`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.
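
These measurements feed X86's per-factor interleaving cost tables; a sketch
of the kind of entry involved (the table name and layout are assumptions):

```
// {InterleaveFactor, VectorType, Cost}, with the cost taken from the
// Block RThroughput numbers above.
static const CostTblEntry AVX2InterleavedLoadTbl[] = {
    {6, MVT::v8i64, 36}, // stride-6 i64/f64 load, VF=8
};
```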

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D111094
2021-10-05 16:58:58 +03:00
Roman Lebedev e2784c5d8c
[X86][Costmodel] Load/store i64/f64 Stride=6 VF=4 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/rc8jYxW6M - for intels `Block RThroughput: =18.0`; for ryzens, `Block RThroughput: =6.0`
So could pick cost of `18`.

For store we have:
https://godbolt.org/z/9PhPEr65G - for intels `Block RThroughput: =15.0`; for ryzens, `Block RThroughput: =6.0`
So we could pick cost of `15`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D111093
2021-10-05 16:58:58 +03:00
Roman Lebedev 3960693048
[X86][Costmodel] Load/store i64/f64 Stride=6 VF=2 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/onese7rec - for intels `Block RThroughput: =6.0`; for ryzens, `Block RThroughput: =3.0`
So could pick cost of `6`.

For store we have:
https://godbolt.org/z/bMd7dddnT - for intels `Block RThroughput: =8.0`; for ryzens, `Block RThroughput: <=6.0`
So we could pick cost of `8`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D111092
2021-10-05 16:58:58 +03:00
Roman Lebedev 79d6d12d95
[X86][Costmodel] Load/store i32/f32 Stride=6 VF=16 interleaving costs
This one required quite a bit of assembly surgery, but I think it's in the right ballpark.

The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/na97Kb96o - for intels `Block RThroughput: <=64.0`; for ryzens, `Block RThroughput: <=32.0`
So could pick cost of `64`.

For store we have:
https://godbolt.org/z/GG1WeoKar - for intels `Block RThroughput: =66.0`; for ryzens, `Block RThroughput: <=27.5`
So we could pick cost of `66`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D111091
2021-10-05 16:58:58 +03:00
Roman Lebedev 2996a2b50f
[X86][Costmodel] Load/store i32/f32 Stride=6 VF=8 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/jK85GWKaK - for intels `Block RThroughput: =31.0`; for ryzens, `Block RThroughput: <=17.0`
So could pick cost of `31`.

For store we have:
https://godbolt.org/z/hPWWhEEf9 - for intels `Block RThroughput: =33.0`; for ryzens, `Block RThroughput: <=13.8`
So we could pick cost of `33`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D111089
2021-10-05 16:58:57 +03:00
Roman Lebedev d51532d8aa
[X86][Costmodel] Load/store i32/f32 Stride=6 VF=4 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/szEj1ceee - for intels `Block RThroughput: =15.0`; for ryzens, `Block RThroughput: <=8.8`
So could pick cost of `15`.

For store we have:
https://godbolt.org/z/81bq4fTo1 - for intels `Block RThroughput: =12.0`; for ryzens, `Block RThroughput: <=10.0`
So we could pick cost of `12`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D111087
2021-10-05 16:58:57 +03:00
Roman Lebedev 764fd5f463
[X86][Costmodel] Load/store i32/f32 Stride=6 VF=2 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/aec96Thee - for intels `Block RThroughput: =6.0`; for ryzens, `Block RThroughput: <=3.3`
So could pick cost of `6`.

For store we have:
https://godbolt.org/z/aec96Thee - for intels `Block RThroughput: =9.0`; for ryzens, `Block RThroughput: <=3.0`
So we could pick cost of `9`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D111083
2021-10-05 16:58:57 +03:00
Roman Lebedev c800119c46
[X86][Costmodel] Load/store i64/f64 Stride=4 VF=8 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/3M3hbq7n8 - for intels `Block RThroughput: =20.0`; for ryzens, `Block RThroughput: =8.0`
So could pick cost of `20`.

For store we have:
https://godbolt.org/z/zvnPYWTx7 - for intels `Block RThroughput: =20.0`; for ryzens, `Block RThroughput: =8.0`
So we could pick cost of `20`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D111076
2021-10-05 16:58:57 +03:00
Roman Lebedev 000ce0bfd5
[X86][Costmodel] Load/store i64/f64 Stride=4 VF=4 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/MTKdzjvnr - for intels `Block RThroughput: =8.0`; for ryzens, `Block RThroughput: <=4.0`
So could pick cost of `8`.

For store we have:
https://godbolt.org/z/cMYEvqoah - for intels `Block RThroughput: =8.0`; for ryzens, `Block RThroughput: <=4.0`
So we could pick cost of `8`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D111075
2021-10-05 16:58:57 +03:00
Roman Lebedev dcc2b0d933
[X86][Costmodel] Load/store i64/f64 Stride=4 VF=2 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/z197317d1 - for intels `Block RThroughput: =6.0`; for ryzens, `Block RThroughput: =2.0`
So could pick cost of `6`.

For store we have:
https://godbolt.org/z/8dzszjf9q - for intels `Block RThroughput: =6.0`; for ryzens, `Block RThroughput: <=4.0`
So we could pick cost of `6`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D111073
2021-10-05 16:58:57 +03:00
Roman Lebedev 7d91037fd2
[X86][Costmodel] Load/store i32/f32 Stride=4 VF=16 interleaving costs
This one required quite a bit of assembly surgery, but the trend continues, so I think this is right.

The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/EKWdj8cKT - for intels `Block RThroughput: <=32.0`; for ryzens, `Block RThroughput: <=24.0`
So could pick cost of `32`.

For store we have:
https://godbolt.org/z/zj4bb9P75 - for intels `Block RThroughput: =32.0`; for ryzens, `Block RThroughput: <=16.0`
So we could pick cost of `32`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D111064
2021-10-05 16:58:57 +03:00
Roman Lebedev 4aee1e5b93
[X86][Costmodel] Load/store i32/f32 Stride=4 VF=8 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/a6rxMG6ec - for intels `Block RThroughput: =16.0`; for ryzens, `Block RThroughput: <=12.0`
So could pick cost of `16`.

For store we have:
https://godbolt.org/z/ced1bdqc9 - for intels `Block RThroughput: =16.0`; for ryzens, `Block RThroughput: <=8.0`
So we could pick cost of `16`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D111063
2021-10-05 16:58:57 +03:00
Roman Lebedev 3c2e22b795
[X86][Costmodel] Load/store i32/f32 Stride=4 VF=4 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/avq1oz98W - for intels `Block RThroughput: =8.0`; for ryzens, `Block RThroughput: =4.0`
So could pick cost of `8`.

For store we have:
https://godbolt.org/z/89PGMc1qs - for intels `Block RThroughput: =6.0`; for ryzens, `Block RThroughput: <=6.0`
So we could pick cost of `6`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D111061
2021-10-05 16:58:57 +03:00
Roman Lebedev b6234c1edf
[X86][Costmodel] Load/store i32/f32 Stride=4 VF=2 interleaving costs
Finally, we are getting to the heavy-hitter stuff!

The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/7crGWoar6 - for intels `Block RThroughput: =4.0`; for ryzens, `Block RThroughput: <=2.0`
So could pick cost of `4`.

For store we have:
https://godbolt.org/z/T8aq3MszM - for intels `Block RThroughput: =5.0`; for ryzens, `Block RThroughput: <=2.0`
So we could pick cost of `5`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D111060
2021-10-05 16:58:56 +03:00
kpyzhov 095c48fdf3 [AMDGPU] Use "hostcall" module flag instead of searching for ockl_hostcall_internal() declaration.
The current way of detecting hostcalls, by looking for the "ockl_hostcall_internal()" function in the module, does not seem reliable enough. LTO may rename the "ockl_hostcall_internal()" function when an application is compiled with "-fgpu-rdc", causing the MetadataStreamer pass to fail to detect hostcalls, so it does not set the "hidden_hostcall_buffer" kernel argument.
This change adds a new module flag, "hostcall", that can be used to detect whether GPU functions use host calls for printf.
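
A hedged sketch of consuming the flag (flag name as described above; the
exact metadata representation may differ):

```
// Detect hostcall usage from module metadata rather than a function-name
// lookup that LTO renaming can defeat.
bool UsesHostcall = false;
if (auto *Flag = mdconst::extract_or_null<ConstantInt>(
        M.getModuleFlag("hostcall")))
  UsesHostcall = Flag->getZExtValue() != 0;
```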

Differential revision: https://reviews.llvm.org/D110337
2021-10-05 09:56:04 -04:00
Hsiangkai Wang 80a6456306 [RISCV] Update to vlm.v and vsm.v according to v1.0-rc1.
vle1.v  -> vlm.v
vse1.v  -> vsm.v

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D106044
2021-10-05 21:49:54 +08:00
Jay Foad 9ce4f37206 [AMDGPU][GlobalISel] Fix legalization of G_UMULH
Scalarize before narrowing because the narrowing implementation does not
work on vectors. This matches what we do for regular G_MUL.
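
In LegalizerInfo terms the ordering looks roughly like this (the clamp
bounds are illustrative):

```
// Rules fire in order: scalarize vectors before any scalar narrowing,
// since the narrowing path cannot handle vector types.
const LLT S32 = LLT::scalar(32);
getActionDefinitionsBuilder(G_UMULH)
    .legalFor({S32})
    .scalarize(0)              // vectors -> scalars first
    .clampScalar(0, S32, S32); // then narrow/widen the scalars
```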

Differential Revision: https://reviews.llvm.org/D111129
2021-10-05 10:56:02 +01:00
Tim Northover 5f65ee260d AArch64+GISel: legalize vector remainder operations. 2021-10-05 10:20:10 +01:00
Amara Emerson 3fe475367c [AArch64][GlobalISel] Legalize G_VECREDUCE_AND.
These are handled identically to the already handled G_VECREDUCE_OR instructions.
2021-10-05 00:39:29 -07:00
Amara Emerson 8bde5e58c0 Delay outgoing register assignments to last.
The delayed stack protector feature which is currently used for SDAG (and thus
allows for more commonly generating tail calls) depends on being able to extract
the tail call into a separate return block. To do this it also has to extract
the vreg->physreg copies that set up the call's arguments, since if it doesn't
then the call inst ends up using undefined physregs in its new spliced block.

SelectionDAG implementations can do this because they delay emitting register
copies until *after* the stack arguments are set up. GISel however just
processes and emits the arguments in IR order, so stack arguments always end up
last, and thus this breaks the code that looks for any register arg copies that
precede the call instruction.

This patch adds a thunk argument to the assignValueToReg() and custom assignment
hooks. For outgoing arguments, register assignments use this return param to
return a thunk that does the actual generating of the copies. We collect these
until all the outgoing stack assignments have been done and then execute them,
so that the copies (and perhaps some artifacts like G_SEXTs) are placed after
any stores.
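
A minimal sketch of the mechanism, with assumed names for the thunk
container and registers:

```
// Assignment handlers now return a thunk instead of emitting the COPY.
SmallVector<std::function<void()>, 8> DelayedOutgoingRegAssignments;
// While assigning an outgoing register argument:
DelayedOutgoingRegAssignments.push_back([=, &MIRBuilder]() {
  MIRBuilder.buildCopy(PhysReg, ExtendedVReg); // emitted late, after stores
});
// After all outgoing stack assignments are done:
for (auto &EmitCopy : DelayedOutgoingRegAssignments)
  EmitCopy();
```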

Differential Revision: https://reviews.llvm.org/D110610
2021-10-04 12:33:20 -07:00
Simon Pilgrim e8477045f6 [X86][SLM] Fix BSR/BSF port usage
Both ports are required for BitScan ops. Update the port usage and distributed throughput based off the most recent llvm-exegesis captures (PR36895) and what Intel AoM / Agner reports as well.
2021-10-04 19:52:53 +01:00
David Green 30dc53db36 [AArch64] Disable AArch64StorePairSuppress under optsize
AArch64StorePairSuppress will prevent the creation of LDP's based on
scheduling info. This shouldn't apply when optimizing for size though,
where the size decrease should be considered more important.

Differential Revision: https://reviews.llvm.org/D110809
2021-10-04 18:28:15 +01:00
Simon Pilgrim bf30c48419 [X86] SimplifyDemandedVectorEltsForTargetNode - simplify PMADDWD for known zero elements
Noticed while investigating the regressions in D110995 - if the RHS element is already zero, then we don't need the corresponding LHS element.

Technically we could also recheck RHS once we have LHS's known zeros, but I haven't seen any missed opportunities from that yet.
2021-10-04 14:36:45 +01:00
Roman Lebedev cef0a693b6
[X86][Costmodel] Load/store i64/f64 Stride=3 VF=16 interleaving costs
This required a huge amount of assembly surgery, but I think this is about right.

The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/z11crMEcj - for intels `Block RThroughput: =20.0`; for ryzens, `Block RThroughput: <=18.0`
So could pick cost of `25`.

For store we have:
https://godbolt.org/z/eqT4ze3j4 - for intels `Block RThroughput: =24.0`; for ryzens, `Block RThroughput: <=16.0`
So we could pick cost of `24`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D111031
2021-10-04 14:35:17 +03:00
Roman Lebedev ede0611e79
[X86][Costmodel] Load/store i64/f64 Stride=3 VF=8 interleaving costs
This one required quite a bit of assembly surgery.

The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/oYWv4cTnK - for intels `Block RThroughput: =10.0`; for ryzens, `Block RThroughput: <=8.0`
So pick cost of `10`.

For store we have:
https://godbolt.org/z/33GMhrsG9 - for intels `Block RThroughput: =12.0`; for ryzens, `Block RThroughput: <=8.0`
So pick cost of `12`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D111027
2021-10-04 14:35:01 +03:00
Roman Lebedev eb9a694c17
[X86][Costmodel] Load/store i64/f64 Stride=3 VF=4 interleaving costs
This one required quite a bit of assembly surgery.

The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/Tce3osvcz - for intels `Block RThroughput: =5.0`; for ryzens, `Block RThroughput: <=4.0`
So pick cost of `5`.

For store we have:
https://godbolt.org/z/oc3arEcnE - for intels `Block RThroughput: =6.0`; for ryzens, `Block RThroughput: <=4.0`
So pick cost of `6`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D111026
2021-10-04 14:34:47 +03:00
Roman Lebedev d3bbe781ea
[X86][Costmodel] Load/store i64/f64 Stride=3 VF=2 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/sz5qdKnr4 - for intels `Block RThroughput: =1.0`; for ryzens, `Block RThroughput: <=1.0`
So pick cost of `1`.

For store we have:
https://godbolt.org/z/Kzdjff63v - for intels `Block RThroughput: =4.0`; for ryzens, `Block RThroughput: <=3.0`
So pick cost of `4`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D111025
2021-10-04 14:34:33 +03:00
Roman Lebedev 4ca5bc07af
[X86][Costmodel] Load/store i32/f32 Stride=3 VF=16 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/5fqrh4qqo - for intels `Block RThroughput: =14.0`; for ryzens, `Block RThroughput: <=12.0`
So pick cost of `14`.

For store we have:
https://godbolt.org/z/5fqrh4qqo - for intels `Block RThroughput: =22.0`; for ryzens, `Block RThroughput: <=16.0`
So pick cost of `22`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D111022
2021-10-04 14:34:19 +03:00
Roman Lebedev 198aa84973
[X86][Costmodel] Load/store i32/f32 Stride=3 VF=8 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/zdz5Ga6fs - for intels `Block RThroughput: =7.0`; for ryzens, `Block RThroughput: <=6.0`
So pick cost of `7`.

For store we have:
https://godbolt.org/z/qn71513ac - for intels `Block RThroughput: =11.0`; for ryzens, `Block RThroughput: <=8.0`
So pick cost of `11`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D111021
2021-10-04 14:34:05 +03:00
Roman Lebedev a93411c3af
[X86][Costmodel] Load/store i32/f32 Stride=3 VF=4 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/d8PdhEszo - for intels `Block RThroughput: =3.0`; for ryzens, `Block RThroughput: <=3.0`
So pick cost of `3`.

For store we have:
https://godbolt.org/z/WojonfG5n - for intels `Block RThroughput: =5.0`; for ryzens, `Block RThroughput: <=3.0`
So pick cost of `5`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D111020
2021-10-04 14:34:03 +03:00
Roman Lebedev 3e93fcdfc8
[X86][Costmodel] Load/store i32/f32 Stride=3 VF=2 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/z8qa14bs3 - for intels `Block RThroughput: =3.0`; for ryzens, `Block RThroughput: =1.5`
So pick cost of `3`.

For store we have:
https://godbolt.org/z/GYGajoc4K - for intels `Block RThroughput: <=4.0`; for ryzens, `Block RThroughput: <=2.0`
So pick cost of `4`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D111019
2021-10-04 14:31:50 +03:00
Stefan Pintilie 4fc2f4979c [PowerPC] Fix __builtin_ppc_load2r to return short instead of int.
This patch fixes the return value of the builtin __builtin_ppc_load2r to
correctly return short instead of int.

Reviewed By: nemanjai, #powerpc

Differential Revision: https://reviews.llvm.org/D110771
2021-10-04 06:17:02 -05:00
Jay Foad 566690b067 [APFloat] Remove BitWidth argument from getAllOnesValue
There's no need to pass this in explicitly because it is
trivially available from the semantics.
2021-10-04 11:32:16 +01:00
Jay Foad a9bceb2b05 [APInt] Stop using soft-deprecated constructors and methods in llvm. NFC.
Stop using APInt constructors and methods that were soft-deprecated in
D109483. This fixes all the uses I found in llvm, except for the APInt
unit tests which should still test the deprecated methods.

Differential Revision: https://reviews.llvm.org/D110807
2021-10-04 08:57:44 +01:00
Roman Lebedev 67f1ee2e38
[X86][Costmodel] Load/store i16 Stride=3 VF=32 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/rMaYr67hz - for intels `Block RThroughput: =56.0`; for ryzens, `Block RThroughput: <=17.8`
So pick cost of `56`.

For store we have:
https://godbolt.org/z/eMsbKqnvv - for intels `Block RThroughput: <=54.0`; for ryzens, `Block RThroughput: <=15.0`
So pick cost of `54`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D111018
2021-10-03 23:40:35 +03:00
Roman Lebedev 3cbc0a07f9
[X86][Costmodel] Load/store i16 Stride=3 VF=16 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/1T6MMzeh3 - for intels `Block RThroughput: =28.0`; for ryzens, `Block RThroughput: <=8.5`
So pick cost of `28`.

For store we have:
https://godbolt.org/z/1T6MMzeh3 - for intels `Block RThroughput: <=27.0`; for ryzens, `Block RThroughput: <=7.0`
So pick cost of `27`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D111017
2021-10-03 23:40:21 +03:00
Roman Lebedev 72f8a9244a
[X86][Costmodel] Load/store i16 Stride=3 VF=8 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/Mh9MnnT8W - for intels `Block RThroughput: =9.0`; for ryzens, `Block RThroughput: <=2.3`
So pick cost of `9`.

For store we have:
https://godbolt.org/z/Mh9MnnT8W - for intels `Block RThroughput: <=12.0`; for ryzens, `Block RThroughput: <=3.3`
So pick cost of `12`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D111016
2021-10-03 23:40:05 +03:00
Roman Lebedev 04f1469cb4
[X86][Costmodel] Load/store i16 Stride=3 VF=4 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/sP4j1173f - for intels `Block RThroughput: =7.0`; for ryzens, `Block RThroughput: <=3.0`
So pick cost of `7`.

For store we have:
https://godbolt.org/z/sP4j1173f - for intels `Block RThroughput: =6.0`; for ryzens, `Block RThroughput: <=2.0`
So pick cost of `6`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D111015
2021-10-03 23:39:51 +03:00
Roman Lebedev 8e8fb77aa4
[X86][Costmodel] Load/store i16 Stride=3 VF=2 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/xnE988aej - for intels `Block RThroughput: =5.0`; for ryzens, `Block RThroughput: <=2.5`
So pick cost of `5`.

For store we have:
https://godbolt.org/z/rMGT31Tnh - for intels `Block RThroughput: =4.0`; for ryzens, `Block RThroughput: <=2.0`
So pick cost of `4`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D111014
2021-10-03 23:39:36 +03:00
Roman Lebedev a5e5883ef5
[X86][Costmodel] Load/store i8 Stride=6 VF=32 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/c1jjKqP7b - for intels `Block RThroughput: <=82.0`; for ryzens, `Block RThroughput: <=26.0`
So pick cost of `82`.

For store we have:
https://godbolt.org/z/YM4ErY8x7 - for intels `Block RThroughput: <=90.0`; for ryzens, `Block RThroughput: <=25.5`
So pick cost of `90`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D111013
2021-10-03 23:39:22 +03:00
Roman Lebedev bd5ba437fd
[X86][Costmodel] Load/store i8 Stride=6 VF=16 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/Gz8hhqfTM - for intels `Block RThroughput: <=43.0`; for ryzens, `Block RThroughput: <=14.0`
So pick cost of `43`.

For store we have:
https://godbolt.org/z/9vrdssYa8 - for intels `Block RThroughput: <=27.0`; for ryzens, `Block RThroughput: <=12.0`
So pick cost of `27`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D111012
2021-10-03 23:39:08 +03:00
Roman Lebedev 0b27f9c088
[X86][Costmodel] Load/store i8 Stride=6 VF=8 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/v98qPTTf6 - for intels `Block RThroughput: =18.0`; for ryzens, `Block RThroughput: =6.0`
So pick cost of `18`.

For store we have:
https://godbolt.org/z/rn5T9E8q6 - for intels `Block RThroughput: <=16.0`; for ryzens, `Block RThroughput: <=4.5`
So pick cost of `16`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D111011
2021-10-03 23:38:54 +03:00
Roman Lebedev 6fe4cce558
[X86][Costmodel] Load/store i8 Stride=6 VF=4 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/4sWhs396o - for intels `Block RThroughput: =14.0`; for ryzens, `Block RThroughput: <=7.0`
So pick cost of `14`.

For store we have:
https://godbolt.org/z/4sWhs396o - for intels `Block RThroughput: =9.0`; for ryzens, `Block RThroughput: <=3.0`
So pick cost of `9`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D111010
2021-10-03 23:38:40 +03:00
Roman Lebedev 396b95e5c9
[X86][Costmodel] Load/store i8 Stride=6 VF=2 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/jvj6jzns5 - for intels `Block RThroughput: =6.0`; for ryzens, `Block RThroughput: <=3.0`
So pick cost of `6`.

For store we have:
https://godbolt.org/z/ros7eebMP - for intels `Block RThroughput: =7.0`; for ryzens, `Block RThroughput: <=3.0`
So pick cost of `7`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D111008
2021-10-03 23:38:10 +03:00
David Green 20b1a16a69 [ARM] Mark <= -1 immediate constant as cheap
A <= -1 constant on a compare can be converted to a < 0 operation, which
is usually cheap. If we mark the constant as cheap, preventing hoisting,
we allow that fold to happen even across different blocks.
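
The fold in question, for reference (valid for any signed integer):

```
// (x <= -1) is exactly (x < 0): a plain sign-bit test, so no -1
// constant ever needs to be materialized or hoisted.
bool isNegative(int x) { return x <= -1; }
```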

Differential Revision: https://reviews.llvm.org/D109360
2021-10-03 19:30:08 +01:00
Simon Pilgrim 164cc2781f [X86] Split Cannonlake + Icelake Tuning. NFC
The Ice/Tiger/RocketLake specs were inheriting the tuning settings from CannonLake, a previous architecture. We shouldn't have this dependency, so I've copied the current tuning settings so we can make future adjustments to both CNL + ICL etc. more easily.
2021-10-03 18:38:47 +01:00
Simon Pilgrim b85bf520dc [CostModel][X86] X86TTIImpl::getCmpSelInstrCost - try to use Predicate argument directly first (PR48337)
There's still a lot of cases where getCmpSelInstrCost fails to specify a predicate, once those are in place we should be able to remove the fallback to the Instruction argument entirely.
2021-10-03 17:16:51 +01:00
Dávid Bolvanský fb84aa2a8f Fixed warnings in target/parser code produced by -Wbitwise-instead-of-logical 2021-10-03 15:04:01 +02:00
Kazu Hirata c1e32b3fc0 [Target] Migrate from getNumArgOperands to arg_size (NFC)
Note that getNumArgOperands is considered a legacy name.  See
llvm/include/llvm/IR/InstrTypes.h for details.
2021-10-02 12:06:29 -07:00
Simon Pilgrim 7cae0daee6 [X86][Atom] Fix BSR/BSF uops + port usage
Both ports are required for BitScan ops. Update the uops counts + port usage based off the most recent llvm-exegesis captures (PR36895) and what Intel AoM / Agner reports as well.
2021-10-02 19:09:44 +01:00
Craig Topper 33d20977b7 Revert "[RISCV] Add an GPR def to the Zvlseg SPILL/RELOAD pseudos"
This reverts commit 1f16191906.

We're seeing some issues with this internally. It seems that when
the spill is created by register allocation, the GPR doesn't get
allocated and an assertion fires during virtual register rewriting.

The .mir test case contains the spill before register allocation so
register allocation sees it as any other instruction.
2021-10-02 10:44:11 -07:00
Simon Pilgrim 9452ec722c [X86][SSE] Fix typo + infinite-loop in HOP(HOP'(X,X),HOP'(Y,Y)) fold (PR52040)
PR52040 identified several issues with the HOP(HOP'(X,X),HOP'(Y,Y)) -> HOP(PERMUTE(HOP'(X,Y)),PERMUTE(HOP'(X,Y))) slow-HOP fold.

Not only was there a copy+paste typo when accessing the inner HOP operands, but the (unnecessary) ReplaceAllUsesOfValueWith call was missing one-use checks.

Now that we have better shuffle combines of HOPs we can just return a new HOP() sequence and not use ReplaceAllUsesOfValueWith at all - this actually improved pair_sum_v8i32_v4i32 codegen as it kicks off further shuffle combines.
2021-10-02 15:31:12 +01:00
Simon Pilgrim bb42cc2090 [X86] decomposeMulByConstant - decompose legal vXi32 multiplies on SlowPMULLD targets and all vXi64 multiplies
X86's decomposeMulByConstant never permits mul decomposition to shift+add/sub if the vector multiply is legal.

Unfortunately this isn't great for SSE41+ targets, which have PMULLD for vXi32 multiplies, but it is often quite slow. This patch proposes to allow decomposition if the target has the SlowPMULLD flag (i.e. Silvermont). We also always decompose legal vXi64 multiplies - even the latest IceLake has really poor latencies for PMULLQ.
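
For context, the decomposition in question is the standard shift+add
strength reduction, e.g. for a multiply by 5:

```
// x * 5 rewritten as (x << 2) + x; cheap shifts/adds replace the slow
// pmulld (vXi32 on Silvermont-class cores) or pmullq (vXi64) multiply.
unsigned mulBy5(unsigned x) { return (x << 2) + x; }
```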

Differential Revision: https://reviews.llvm.org/D110588
2021-10-02 12:35:25 +01:00
Simon Pilgrim 8e7f6039fa [X86] Atom SSE shift-by-variable takes 2uops/3uops not 1uop
Based off the most recent llvm-exegesis captures (PR36895) and what Intel AoM / Agner / InstLatX64 reports as well.
2021-10-02 12:28:41 +01:00
Roman Lebedev acb459574a
[X86][Costmodel] Load/store i8 Stride=4 VF=32 interleaving costs
While we already model this tuple, the load cost is divergent from reality, so fix it.

The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/zWMhhnPYa - for intels `Block RThroughput: =56.0`; for ryzens, `Block RThroughput: <=24.0`
So pick cost of `56`.

For store we have:
https://godbolt.org/z/vnqqjWx51 - for intels `Block RThroughput: =12.0`; for ryzens, `Block RThroughput: <=4.0`
So pick cost of `12`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110971
2021-10-02 13:40:21 +03:00
Roman Lebedev 0e71ae6da8
[X86][Costmodel] Load/store i8 Stride=4 VF=16 interleaving costs
While we already model this tuple, the values are divergent from reality, so fix them.

The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/TrGW7cKsE - for intels `Block RThroughput: =24.0`; for ryzens, `Block RThroughput: <=12.0`
So pick cost of `24`.

For store we have:
https://godbolt.org/z/Mh7qaqEfe - for intels `Block RThroughput: =8.0`; for ryzens, `Block RThroughput: <=4.0`
So pick cost of `8`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110970
2021-10-02 13:40:21 +03:00
Roman Lebedev 74e4a0e327
[X86][Costmodel] Load/store i8 Stride=4 VF=8 interleaving costs
While we already model this tuple, the values are divergent from reality, so fix them.

The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/v7746Wcf7 - for intels `Block RThroughput: =12.0`; for ryzens, `Block RThroughput: <=6.0`
So pick cost of `12`.

For store we have:
https://godbolt.org/z/aEeEohEbP - for intels `Block RThroughput: =4.0`; for ryzens, `Block RThroughput: <=2.0`
So pick cost of `4`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110969
2021-10-02 13:40:20 +03:00
Roman Lebedev ae08362cb8
[X86][Costmodel] Load/store i8 Stride=4 VF=4 interleaving costs
While we already model this tuple, the store cost is divergent from reality, so fix it.

The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/1n4bPh7Tn - for intels `Block RThroughput: =4.0`; for ryzens, `Block RThroughput: <=2.0`
So pick cost of `4`.

For store we have:
https://godbolt.org/z/r8K9sveqo - for intels `Block RThroughput: =4.0`; for ryzens, `Block RThroughput: <=2.0`
So pick cost of `4`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110968
2021-10-02 13:40:20 +03:00
Roman Lebedev 935b9693ae
[X86][Costmodel] Load/store i8 Stride=4 VF=2 interleaving costs
While we already model this tuple, the values are divergent from reality, so fix them.

The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/KP6nn36zs - for intels `Block RThroughput: =4.0`; for ryzens, `Block RThroughput: <=2.0`
So pick cost of `4`.

For store we have:
https://godbolt.org/z/ov95zhrq6 - for intels `Block RThroughput: =4.0`; for ryzens, `Block RThroughput: <=2.0`
So pick cost of `4`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110966
2021-10-02 13:40:20 +03:00
Roman Lebedev 448c939839
[X86][Costmodel] Load/store i8 Stride=3 VF=32 interleaving costs
For VF=16, costs are correct.
For VF=32, load cost is divergent.

The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/qKjevqf4W - for intels `Block RThroughput: <=14.0`; for ryzens, `Block RThroughput: <=4.5`
So pick cost of `14`.

For store we have:
https://godbolt.org/z/xTssTq319 - for intels `Block RThroughput: =13.0`; for ryzens, `Block RThroughput: <=5.5`
So pick cost of `13`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110961
2021-10-02 13:39:15 +03:00
Roman Lebedev d1460c88a6
[X86][Costmodel] Load/store i8 Stride=3 VF=8 interleaving costs
While we already model this tuple, the values are divergent from reality, so fix them.

The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/1jeocxj55 - for intels `Block RThroughput: =6.0`; for ryzens, `Block RThroughput: <=3.0`
So pick cost of `6`.

For store we have:
https://godbolt.org/z/fr7xfa3K5 - for intels `Block RThroughput: =6.0`; for ryzens, `Block RThroughput: <=2.0`
So pick cost of `6`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110960
2021-10-02 13:39:15 +03:00
Roman Lebedev f1df2d8eaf
[X86][Costmodel] Load/store i8 Stride=3 VF=4 interleaving costs
While we already model this tuple, the values are divergent from reality, so fix them.

The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/obWz3PrfK - for intels `Block RThroughput: =3.0`; for ryzens, `Block RThroughput: <=1.5`
So pick cost of `3`.

For store we have:
https://godbolt.org/z/orjPshn3h - for intels `Block RThroughput: =4.0`; for ryzens, `Block RThroughput: <=2.0`
So pick cost of `4`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110958
2021-10-02 13:39:10 +03:00
Roman Lebedev 8a3c64c3a2
[X86][Costmodel] Load/store i8 Stride=3 VF=2 interleaving costs
While we already model this tuple, the values are divergent from reality, so fix them.

The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3.

For load we have:
https://godbolt.org/z/WYscYMcW4 - for intels `Block RThroughput: =3.0`; for ryzens, `Block RThroughput: <=1.5`
So pick cost of `3`.

For store we have:
https://godbolt.org/z/e9qvYdbbs - for intels `Block RThroughput: =4.0`; for ryzens, `Block RThroughput: <=2.0`
So pick cost of `4`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110956
2021-10-02 13:39:05 +03:00
Amara Emerson f41a9cf859 [AArch64][GlobalISel] Lower G_SMULH/G_UMULH unless its one of the supported types.
s32 was also incorrectly marked as a supported type, and was causing fallbacks
because we don't support it.
2021-10-01 22:15:23 -07:00
Jessica Paquette 96843d220d [AArch64][GlobalISel] Change G_ANYEXT fed by scalar G_ICMP to G_ZEXT
This is a common pattern:

```
    %icmp:_(s32) = G_ICMP intpred(eq), ...
    %ext:_(s64) = G_ANYEXT %icmp(s32)
    %and:_(s64) = G_AND %ext, 1
```

Here's an example: https://godbolt.org/z/T13f6o8zE

This pattern appears because of the following combine in the
LegalizationArtifactCombiner:

```
// zext(trunc x) - > and (aext/copy/trunc x), mask
```

Which kicks in when we widen the result of G_ICMP from 1 bit to 32 bits.

We know that, on AArch64, a scalar G_ICMP will produce 0 or 1. So the result
of `%ext` will always be 0 or 1 as well.

We have some KnownBits combines which eliminate redundant G_ANDs with masks.
These combines don't kick in with G_ANYEXT.

So, if we replace the G_ANYEXT with G_ZEXT in this situation, the KnownBits
based combines can remove the redundant G_AND.
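
A hedged sketch of the rewrite in MIPatternMatch terms (the scalar-G_ICMP
predicate is a hypothetical helper):

```
// G_ANYEXT of a scalar G_ICMP result -> G_ZEXT, so the KnownBits-based
// combines can then delete the redundant G_AND.
Register Src;
if (mi_match(Dst, MRI, m_GAnyExt(m_Reg(Src))) &&
    isScalarICmpResult(Src, MRI)) {
  Builder.buildZExt(Dst, Src); // new def of Dst...
  MI.eraseFromParent();        // ...replaces the old G_ANYEXT
}
```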

I wasn't sure if it would be more appropriate to

* Take this route
* Put this in the LegalizationArtifactCombiner.
* Allow 64 bit G_ICMP destinations

I decided on this route because

1) It's simple

2) I'm not sure if philosophically-speaking, we should be handling non-artifact
instructions + target-specific details like TargetBooleanContents in the
LegalizationArtifactCombiner

3) There is a lot of existing code which assumes we only have 32 bit G_ICMP
destinations. So, adding support for 64-bit destinations seems rather invasive
right now. I think that adding support for 64-bit destinations, or modelling
G_ICMP as ADDS/SUBS/etc is probably cleaner long term though.

This gives minor code size savings on all CTMark benchmarks.
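
As a rough C++ sketch of the chosen route (my own illustration, not the committed AArch64 combine; the function name and structure are assumptions):

```cpp
#include "llvm/CodeGen/GlobalISel/MachineIRBuilder.h"
#include "llvm/CodeGen/MachineRegisterInfo.h"
#include "llvm/CodeGen/TargetOpcodes.h"

using namespace llvm;

// If a scalar G_ANYEXT is fed by a scalar G_ICMP, rewrite it as G_ZEXT.
// A scalar G_ICMP yields 0 or 1 on AArch64, so zero-extension is exact
// and the known-bits combines can then drop the masking G_AND.
static bool anyExtOfICmpToZExt(MachineInstr &MI, MachineRegisterInfo &MRI,
                               MachineIRBuilder &B) {
  if (MI.getOpcode() != TargetOpcode::G_ANYEXT)
    return false;
  Register Src = MI.getOperand(1).getReg();
  if (MRI.getType(Src).isVector()) // vector compares produce -1, not 1
    return false;
  MachineInstr *Def = MRI.getVRegDef(Src);
  if (!Def || Def->getOpcode() != TargetOpcode::G_ICMP)
    return false;
  B.setInstrAndDebugLoc(MI);
  B.buildZExt(MI.getOperand(0).getReg(), Src);
  MI.eraseFromParent();
  return true;
}
```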

Differential Revision: https://reviews.llvm.org/D110959
2021-10-01 15:01:20 -07:00
Daniil Fukalov 47d6274d4c [NFC][AMDGPU] Reduce includes dependencies, part 2
1. Split out some parts of the R600 target into separate modules/headers.
2. Reduced some include lists in headers.
3. Minor cleanup of forward declarations, redundant includes and flags in
   GCNSubtarget.

Reviewed By: foad

Differential Revision: https://reviews.llvm.org/D109351
2021-10-01 17:50:20 +03:00
Roman Lebedev 3e260efdfc
[X86][Costmodel] Load/store i64/f64 Stride=2 VF=16 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3

For load we have:
https://godbolt.org/z/1WMTojvfW - for intels `Block RThroughput: =16.0`; for ryzens, `Block RThroughput: <=8.0`
So pick cost of `16`.

For store we have:
https://godbolt.org/z/1WMTojvfW - for intels `Block RThroughput: =16.0`; for ryzens, `Block RThroughput: <=16.0`
So pick cost of `16`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110840
2021-10-01 17:48:14 +03:00
Roman Lebedev abd37de63e
[X86][Costmodel] Load/store i64/f64 Stride=2 VF=8 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3

For load we have:
https://godbolt.org/z/PGYbYKPq8 - for intels `Block RThroughput: =8.0`; for ryzens, `Block RThroughput: <=4.0`
So pick cost of `8`.

For store we have:
https://godbolt.org/z/PGYbYKPq8 - for intels `Block RThroughput: =8.0`; for ryzens, `Block RThroughput: <=8.0`
So pick cost of `8`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110838
2021-10-01 17:48:14 +03:00
Roman Lebedev 71bc31b907
[X86][Costmodel] Load/store i64/f64 Stride=2 VF=4 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3

For load we have:
https://godbolt.org/z/j5co1qWEW - for intels `Block RThroughput: =4.0`; for ryzens, `Block RThroughput: <=2.0`
So pick cost of `4`.

For store we have:
https://godbolt.org/z/j5co1qWEW - for intels `Block RThroughput: =4.0`; for ryzens, `Block RThroughput: <=4.0`
So pick cost of `4`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110837
2021-10-01 17:48:14 +03:00
Roman Lebedev 612e5b05a2
[X86][Costmodel] Load/store i64/f64 Stride=2 VF=2 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3

For load we have:
https://godbolt.org/z/8a1cfGeMn - for intels `Block RThroughput: =2.0`; for ryzens, `Block RThroughput: =1.0`
So pick cost of `2`.

For store we have:
https://godbolt.org/z/jMdcM47bx - for intels `Block RThroughput: =2.0`; for ryzens, `Block RThroughput: <=2.0`
So pick cost of `2`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110835
2021-10-01 17:48:14 +03:00
Roman Lebedev ea76cb87ee
[X86][Costmodel] Load/store i32/f32 Stride=2 VF=32 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3

Here, for the `store` pattern, we are starting to see spilling,
so accurate modelling may be problematic,
although if I drop the spilling, the measurements don't change.

For load we have:
https://godbolt.org/z/1oTTnncbx - for intels `Block RThroughput: =16.0`; for ryzens, `Block RThroughput: <=8.0`
So pick cost of `16`.

For store we have:
https://godbolt.org/z/1oTTnncbx - for intels `Block RThroughput: =16.0`; for ryzens, `Block RThroughput: =8.0`
So pick cost of `16`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110761
2021-10-01 17:48:14 +03:00
Roman Lebedev 80cd8da78d
[X86][Costmodel] Load/store i32/f32 Stride=2 VF=16 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3

For load we have:
https://godbolt.org/z/M9eev3xe8 - for intels `Block RThroughput: =8.0`; for ryzens, `Block RThroughput: <=4.0`
So pick cost of `8`.

For store we have:
https://godbolt.org/z/M9eev3xe8 - for intels `Block RThroughput: =8.0`; for ryzens, `Block RThroughput: =4.0`
So pick cost of `8`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110756
2021-10-01 17:48:14 +03:00
Roman Lebedev 3a0643e9c2
[X86][Costmodel] Load/store i32/f32 Stride=2 VF=8 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3

For load we have:
https://godbolt.org/z/n8aMKeo4E - for intels `Block RThroughput: =4.0`; for ryzens, `Block RThroughput: <=2.0`
So pick cost of `4`.

For store we have:
https://godbolt.org/z/n8aMKeo4E - for intels `Block RThroughput: =4.0`; for ryzens, `Block RThroughput: =2.0`
So pick cost of `4`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110755
2021-10-01 17:48:13 +03:00
Roman Lebedev b12aeaec9a
[X86][Costmodel] Load/store i32/f32 Stride=2 VF=4 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3

For load we have:
https://godbolt.org/z/EM5Ean7bd - for intels `Block RThroughput: =2.0`; for ryzens, `Block RThroughput: =1.0`
So pick cost of `2`.

For store we have:
https://godbolt.org/z/EM5Ean7bd - for intels `Block RThroughput: =2.0`; for ryzens, `Block RThroughput: <=2.0`
So pick cost of `2`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110754
2021-10-01 17:48:13 +03:00
Roman Lebedev f44d9009c2
[X86][Costmodel] Load/store i32/f32 Stride=2 VF=2 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3

For load we have:
https://godbolt.org/z/4rY96hnGT - for intels `Block RThroughput: =2.0`; for ryzens, `Block RThroughput: =1.0`
So pick cost of `2`.

For store we have:
https://godbolt.org/z/vbo37Y3r9 - for intels `Block RThroughput: =1.0`; for ryzens, `Block RThroughput: =0.5`
So pick cost of `1`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110753
2021-10-01 17:48:13 +03:00
Fraser Cormack 52c60459f5 [RISCV][NFC] Reformat a line of frame lowering code 2021-10-01 14:25:47 +01:00
Fraser Cormack fcaa64d947 [RISCV][NFC] Add closing parentheses to frame layout comments 2021-10-01 11:58:55 +01:00
Matthew Devereau f085a9db8b [AArch64][SVE] Replace fmul, fadd and fsub LLVM IR intrinsics with LLVM IR binary ops
Replacing fmul and fadd intrinsics with their binary ops results
in more succinct AArch64 SVE output, e.g.:

4:   65428041 	fmul	z1.h, p0/m, z1.h, z2.h
8:   65408020 	fadd	z0.h, p0/m, z0.h, z1.h
->
4:   65620020   fmla    z0.h, p0/m, z1.h, z2.h
2021-10-01 11:24:46 +01:00
Krasimir Georgiev 685f1bfd0a Revert "[LoopVectorize] Permit vectorisation of more select(cmp(), X, Y) reduction patterns"
It appears to cause stage2 clang build failures, e.g.,
https://lab.llvm.org/buildbot/#/builders/74/builds/7145.

This reverts commit 1fb37334bd.
2021-10-01 11:39:43 +02:00
David Sherwood 1fb37334bd [LoopVectorize] Permit vectorisation of more select(cmp(), X, Y) reduction patterns
This patch adds further support for vectorisation of loops that involve
selecting an integer value based on a previous comparison. Consider the
following C++ loop:

  int r = a;
  for (int i = 0; i < n; i++) {
    if (src[i] > 3) {
      r = b;
    }
    src[i] += 2;
  }

We should be able to vectorise this loop because all we are doing is
selecting between two states - 'a' and 'b' - both of which are loop
invariant. This just involves building a vector of values that contain
either 'a' or 'b', where the final reduced value will be 'b' if any lane
contains 'b'.

The IR generated by clang typically looks like this:

  %phi = phi i32 [ %a, %entry ], [ %phi.update, %for.body ]
  ...
  %pred = icmp ugt i32 %val, i32 3
  %phi.update = select i1 %pred, i32 %b, i32 %phi

We already detect min/max patterns, which also involve a select + cmp.
However, with the min/max patterns we are selecting loaded values (and
hence loop variant) in the loop. In addition we only support certain
cmp predicates. This patch adds a new pattern matching function
(isSelectCmpPattern) and new RecurKind enums - SelectICmp & SelectFCmp.
We only support selecting values that are integer and loop invariant,
however we can support any kind of compare - integer or float.

Tests have been added here:

  Transforms/LoopVectorize/AArch64/sve-select-cmp.ll
  Transforms/LoopVectorize/select-cmp-predicated.ll
  Transforms/LoopVectorize/select-cmp.ll

Differential Revision: https://reviews.llvm.org/D108136
2021-10-01 08:41:03 +01:00
Yonghong Song 3562ad3ebe BPF: implement isLegalAddressingMode() properly
Latest upstream llvm caused the kernel bpf selftests to emit the
following warnings:

  In file included from progs/profiler3.c:6:
  progs/profiler.inc.h:489:2: warning: loop not unrolled:
    the optimizer was unable to perform the requested transformation;
    the transformation might be disabled or specified as part of an unsupported
    transformation ordering [-Wpass-failed=transform-warning]
          for (int i = 0; i < MAX_PATH_DEPTH; i++) {
          ^

Further bisecting shows that this SimplifyCFG patch ([1]) changed
the condition for folding a branch to a common dest. This caused
some unroll pragmas to not be honored in selftests/bpf.

Patch [1] tests getUserCost() as the condition for
performing a certain basic block folding transformation.
For the above example, before the loop unroll pass, the control flow
looks like:
    cond_block:
       branch target: body_block, cleanup_block
    body_block:
       branch target: cleanup_block, end_block
    end_block:
       branch target: cleanup_block, end10_block
    end10_block:
       %add.ptr = getelementptr i8, i8* %payload.addr.0, i64 %call2
       %inc = add nuw nsw i32 %i.0, 1
       branch target: cond_block

In the above, %call2 is an unknown scalar.

Before patch [1], end10_block would be folded into end_block, forming
code like
    cond_block:
       branch target: body_block, cleanup_block
    body_block:
       branch target: cleanup_block, end_block
    end_block:
       branch target: cleanup_block, cond_block
and the compiler is happy to perform unrolling.

With patch [1], getUserCost(), which calls getGEPCost(), which calls
isLegalAddressingMode() in TargetLoweringBase.cpp, considers IR
  %add.ptr = getelementptr i8, i8* %payload.addr.0, i64 %call2
is free, so the above basic block folding transformation is not performed
and unrolling does not happen.

For the BPF target, the IR
  %add.ptr = getelementptr i8, i8* %payload.addr.0, i64 %call2
is not free, and we don't have a ld/st instruction addressing mode of the form 'r+r'.

This patch implements a BPF hook for isLegalAddressingMode(), which is
identical to the Mips isLegalAddressingMode() implementation, where
address patterns like 'r+r', 'r+r+i' or '2*r' are not allowed.
When testing the kernel bpf selftests, all 'loop not unrolled' warnings
are gone and all selftests run successfully.
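
A minimal sketch of such a hook (my paraphrase of the Mips-style logic described above, not the committed BPF code; the struct stands in for TargetLoweringBase::AddrMode):

```cpp
// Stand-in for TargetLoweringBase::AddrMode: base global, base register,
// and a scale applied to an index register.
struct AddrModeSketch {
  bool HasBaseGV = false;
  bool HasBaseReg = false;
  long Scale = 0;
};

// Reject 'r+r', 'r+r+i' and '2*r' style addressing, as on Mips.
static bool isLegalAddressingModeSketch(const AddrModeSketch &AM) {
  if (AM.HasBaseGV)
    return false;          // no global is ever allowed as a base
  switch (AM.Scale) {
  case 0:
    return true;           // 'r+i' or plain 'i'
  case 1:
    return !AM.HasBaseReg; // a lone index reg is fine; 'r+r' is not
  default:
    return false;          // '2*r' and larger scales are not encodable
  }
}
```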

  [1] https://reviews.llvm.org/D108837

Differential Revision: https://reviews.llvm.org/D110789
2021-09-30 16:41:15 -07:00
Craig Topper a21c557955 [RISCV] Remove Zbproposedc extension
This consists of 3 compressed instructions, c.not, c.neg, and c.zext.w.
I believe these have been picked up by the Zce effort using different
encodings. I don't think it makes sense to keep them in bitmanip. It
will eventually cause a conflict if/when Zce is implemented in llvm.

Differential Revision: https://reviews.llvm.org/D110871
2021-09-30 14:23:05 -07:00
Albion Fung 4195ed9959 [PowerPC] Improved codegen related to xscvdpsxws/xscvdpuxws
This patch removes the unnecessary mf/mtvsr generated in conjunction
with xscvdpsxws/xscvdpuxws.

Differential revision: https://reviews.llvm.org/D109902
2021-09-30 14:31:00 -05:00
Stanislav Mekhanoshin 244aa7f735 [AMDGPU] move hasAGPRs/hasVGPRs into header
It is now very simple and can go right into the header,
allowing the optimizer to combine callers, such as isVGPRClass
and similar.

It does not need anything from the TRI itself anymore, so
make it a static class member along with the callers.

Differential Revision: https://reviews.llvm.org/D110762
2021-09-30 10:02:02 -07:00
Kazu Hirata f631173d80 [llvm] Migrate from arg_operands to args (NFC)
Note that arg_operands is considered a legacy name.  See
llvm/include/llvm/IR/InstrTypes.h for details.
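
For illustration, the migration looks roughly like this (my own minimal example, not taken from the patch):

```cpp
#include "llvm/IR/InstrTypes.h"

// Count call-site arguments using args() instead of the legacy
// arg_operands() spelling.
static unsigned countArgs(llvm::CallBase &CB) {
  unsigned N = 0;
  for (llvm::Value *A : CB.args()) { // was: CB.arg_operands()
    (void)A;
    ++N;
  }
  return N;
}
```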
2021-09-30 08:51:21 -07:00
David Green f9aa8623fe [ARM] Add more MVE intrinsics to sink splats to
This adds a few more unpredicated intrinsics to sink splats to, in order
to create more qr instruction variants. Notably this includes
saddsat/uaddsat but also some of the unpredicated mve intrinsics.

Differential Revision: https://reviews.llvm.org/D110333
2021-09-30 14:41:23 +01:00
Jingu Kang 13f3c39f36 Second Recommit "[AArch64] Split bitmask immediate of bitwise AND operation"
This reverts the revert commit c07f709969 with
bug fixes.

Differential Revision: https://reviews.llvm.org/D109963
2021-09-30 09:27:08 +01:00
Ruiling Song 52785989e9 AMDGPU: Broadcast scalar boolean to vector boolean explicitly
This is used to fix wrong code generation of s_add_co_select_user in
test/CodeGen/AMDGPU/expand-scalar-carry-out-select-user.ll

  s_addc_u32 s4, s6, 0
  s_cselect_b64 vcc, 1, 0    <-- vcc set as 0x1 if SCC==1
  v_mov_b32_e32 v1, s4
  s_cmp_gt_u32 s6, 31
  v_cndmask_b32_e32 v1, 0, v1, vcc

If the s_addc_u32 sets SCC, then we will get the value 0x1 in VCC.
The v_cndmask will do per-thread selection with VCC as the condition
register. As only the first bit of VCC is set, only the first
thread/lane in the destination register can get the correct result, and
only if the very first lane is active. In fact, we should broadcast the
value to all active lanes of the final register.

The idea here is to do this broadcast to a vector boolean explicitly,
instead of lowering it into a COPY from SCC, which would be interpreted as
selecting between 0/1.

This is used to replace D109754.

Reviewed-by: foad, alex-t

Differential Revision: https://reviews.llvm.org/D109889
2021-09-30 10:15:01 +08:00
Amara Emerson 1c0e8a98e4 [AArch64][GlobalISel] Widen G_BUILD_VECTOR source & dest element types to s8. 2021-09-29 15:11:30 -07:00
Ricky Taylor e1e3b6ee72 [M68k] Avoid UB in disassembler
When reading 32 bits, a 32-bit shift would be executed.

This is undefined behaviour, but in this case we can just replace the
entire scratch value to avoid it.
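
A small C++ illustration of the hazard and the chosen fix (my own sketch, not the disassembler code itself):

```cpp
#include <cstdint>

// Shifting a uint32_t by 32 is undefined behaviour in C++, so the
// full-word case replaces the scratch value outright instead of shifting.
static uint32_t appendBits(uint32_t Scratch, uint32_t Bits, unsigned NumBits) {
  if (NumBits >= 32)
    return Bits; // replace the entire scratch value; no shift executed
  return (Scratch << NumBits) | (Bits & ((1u << NumBits) - 1u));
}
```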

Differential Revision: https://reviews.llvm.org/D110769
2021-09-29 22:07:14 +01:00
Stefan Pintilie fb4e44c4e7 [PowerPC] The builtins load8r and store8r are Power 7 plus.
This patch makes sure that the builtins __builtin_ppc_load8r and
__builtin_ppc_store8r are only available for Power 7 and up.
Currently the builtins seem to produce incorrect code if used for
Power 6 or before.

Reviewed By: nemanjai, #powerpc

Differential Revision: https://reviews.llvm.org/D110653
2021-09-29 14:34:40 -05:00
Roman Lebedev 2d42a192e0
[X86][Costmodel] Load/store i8 Stride=2 VF=32 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3

For load we have:
https://godbolt.org/z/xz6x7c35P - for intels `Block RThroughput: =6.0`; for ryzens, `Block RThroughput: <=2.5`
So pick cost of `6`.

For store we have:
https://godbolt.org/z/xz6x7c35P - for intels `Block RThroughput: =4.0`; for ryzens, `Block RThroughput: <=2.0`
So pick cost of `4`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110709
2021-09-29 21:52:45 +03:00
Roman Lebedev bac60c55e0
[X86][Costmodel] Load/store i8 Stride=2 VF=16 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3

For load we have:
https://godbolt.org/z/a9hv4z47v - for intels `Block RThroughput: =4.0`; for ryzens, `Block RThroughput: =2.0`
So pick cost of `4`.

For store we have:
https://godbolt.org/z/6GfPn1b79 - for intels `Block RThroughput: =3.0`; for ryzens, `Block RThroughput: <=2.0`
So pick cost of `3`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110708
2021-09-29 21:52:45 +03:00
Roman Lebedev 1962185671
[X86][Costmodel] Load/store i8 Stride=2 VF=8 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3

Identical to VF=2.

For load we have:
https://godbolt.org/z/4TEbdzbMM - for intels `Block RThroughput: =2.0`; for ryzens, `Block RThroughput: <=1.0`
So pick cost of `2`.

For store we have:
https://godbolt.org/z/MYfzGPf3Y - for intels `Block RThroughput: =1.0`; for ryzens, `Block RThroughput: <=0.5`
So pick cost of `1`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110705
2021-09-29 21:52:45 +03:00
Roman Lebedev 08face1f9a
[X86][Costmodel] Load/store i8 Stride=2 VF=4 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3

Identical to VF=2.

For load we have:
https://godbolt.org/z/sGE41GYo7 - for intels `Block RThroughput: =2.0`; for ryzens, `Block RThroughput: <=1.0`
So pick cost of `2`.

For store we have:
https://godbolt.org/z/ba5r3s9xa - for intels `Block RThroughput: =1.0`; for ryzens, `Block RThroughput: <=0.5`
So pick cost of `1`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110704
2021-09-29 21:52:45 +03:00
Roman Lebedev 7d52628eb0
[X86][Costmodel] Load/store i8 Stride=2 VF=2 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3

For load we have:
https://godbolt.org/z/caKqjr9hb - for intels `Block RThroughput: =2.0`; for ryzens, `Block RThroughput: <=1.0`
So pick cost of `2`.

For store we have:
https://godbolt.org/z/6TTn3eKj8 - for intels `Block RThroughput: =1.0`; for ryzens, `Block RThroughput: <=0.5`
So pick cost of `1`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110702
2021-09-29 21:52:44 +03:00
Jay Foad f9b68304a2 [AMDGPU] Enable machine verification after AMDGPUISelDAGToDAG
This was introduced in D32628 but it does not seem to be required any
more. At least it does not show any problems in check-llvm in an
LLVM_ENABLE_EXPENSIVE_CHECKS build.

Differential Revision: https://reviews.llvm.org/D110692
2021-09-29 18:47:19 +01:00
Kazu Hirata 9a640a1cb8 [AArch64] Remove redundant declaration createAArch64ObjectTargetStreamer (NFC)
Note that createAArch64ObjectTargetStreamer is declared in
AArch64TargetStreamer.h and defined in AArch64TargetStreamer.cpp.

Identified with readability-redundant-declaration.
2021-09-29 09:08:41 -07:00
David Green e9adcbde31 [AArch64] Model Cortex-A55 Q register NEON instructions
Cortex-A55 has 2 64bit NEON vector units, meaning a 128bit instruction
requires taking both units (and can only be issued as the first
instruction in a dual issue pair). This patch models that by splitting
the WriteV SchedWrite into two - the WriteVd that reads/writes only
64bit operands, and the WriteVq that reads/writes 128bit registers. The
A55 schedule then uses this distinction to model the WriteVq as taking
both resource units, and starting a Schedule Group and WriteVd as taking
one as before.

I believe this is more correct, even if it does not lead to much better
performance.

Differential Revision: https://reviews.llvm.org/D108766
2021-09-29 16:55:31 +01:00
Jay Foad 9886f21bc1 [MSP430] Recognize Bi as an indirect branch in analyzeBranch. NFC.
Recognize Bi as an unconditional branch, just like JMP. This allows
machine verification to run after MSP430BranchSelector without failing
this assertion:

virtual bool llvm::MSP430InstrInfo::analyzeBranch(llvm::MachineBasicBlock &, llvm::MachineBasicBlock *&, llvm::MachineBasicBlock *&, SmallVectorImpl<llvm::MachineOperand> &, bool) const: Assertion `I->getOpcode() == MSP430::JCC && "Invalid conditional branch"' failed.

Note that machine verification is currently disabled after
addPreEmitPass passes because of problems on other targets, so this is
currently NFC.

Differential Revision: https://reviews.llvm.org/D110691
2021-09-29 16:43:11 +01:00
Simon Pilgrim 676f2809b5 [CostModel][AArch64] Don't dereference CostTblEntry before null check.
Fix static analysis warning that we check for null Entry after dereferencing it.

I don't think this can actually happen as i8/i16 should legalize to use the i32 path which should return a cost - but I'd rather play it safe than rely on an implicit type legalization.
2021-09-29 16:35:29 +01:00
David Green 8a645fc44b [AArch64] Enable type promotion for AArch64
This enables the type promotion pass for AArch64, which acts as a
CodeGenPrepare pass to promote illegal integers to legal ones,
especially useful for removing extends that would otherwise require
cross-basic-block analysis.

I have enabled this generally, for both ISel and GlobalISel. In some
quick experiments it appeared to help GlobalISel remove extra extends in
places too, but that might just be missing optimizations that are better
left for later. We can disable it again if required.

In my experiments, this can improvement performance in some cases, and
codesize was a small improvement. SPEC was a very small improvement,
within the noise. Some of the test cases show extends being moved out of
loops, often when the extend would be part of a cmp operand, but that
should reduce the latency of the instruction in the loop on many cpus.
The signed-truncation-check tests are increasing as they are no longer
matching specific DAG combines.

We also hope to add some additional improvements to the pass in the near
future, to capture more cases of promoting extends through phis that
have come up in a few places lately.

Differential Revision: https://reviews.llvm.org/D110239
2021-09-29 15:13:12 +01:00
Nemanja Ivanovic 09b67aa1c3 [PowerPC] Implement builtin for vbpermd
The instruction has similar semantics to vbpermq but for doublewords.
It was added in Power9 and the ABI documents the builtin.

Differential revision: https://reviews.llvm.org/D107899
2021-09-29 06:34:31 -05:00
Sander de Smalen 87bcbd61b5 [AArch64][SVE] Fix extract_subvector patterns for unpacked fp types.
The patterns added in D110163 were incorrect, since they used the wrong
element widths for their shuffles.

Example for nxv2f16 extract_subvector(nxv8f16 %in, 6):
  <a|b|c|d|e|f|g|h>
               ^^^
           extract g and h.

  => UUNPKHI .h -> .s results in:
  <e  |f  |g  |h  >

  => UUNPKHI .s -> .d results in:
  <g      |h      >

Reviewed By: david-arm

Differential Revision: https://reviews.llvm.org/D110523
2021-09-29 11:14:49 +01:00
Martin Storsjö d6216e2cd1 [X86] Fix handling of i128<->fp on Windows
On Windows, i128 arguments are passed as indirect arguments, and
they are returned in xmm0.

This is mostly fixed up by `WinX86_64ABIInfo::classify` in Clang, making
the IR functions return v2i64 instead of i128, and making the arguments
indirect. However for cases where libcalls are generated in the target
lowering, the lowering uses the default x86_64 calling convention for
i128, where they are passed/returned as a register pair.

Add custom lowering logic, similar to the existing logic for i128
div/mod (added in 4a406d32e9),
manually making the libcall (while overriding the return type to
v2i64 or passing the arguments as pointers to arguments on the stack).
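
For reference, the kind of source that exercises these lowerings (my own minimal example; on x86_64 the casts typically lower to compiler-rt libcalls such as __floattidf and __fixdfti, which on Windows need the custom handling described above):

```cpp
// i128 <-> fp conversions; the Windows lowering must return the i128 in
// xmm0 (as v2i64) and pass i128 arguments indirectly.
double i128_to_double(__int128 X) { return static_cast<double>(X); }
__int128 double_to_i128(double D) { return static_cast<__int128>(D); }
```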

X86CallingConv.td doesn't seem to handle i128 at all, otherwise
the windows specific behaviours would ideally be implemented as
overrides there, in generic code, handling these cases automatically.

This fixes https://bugs.llvm.org/show_bug.cgi?id=48940.

Differential Revision: https://reviews.llvm.org/D110413
2021-09-29 13:05:59 +03:00
Amara Emerson e6ed880e47 [AArch64][GlobalISel] Make some vector G_SMULH/G_UMULH legal. 2021-09-29 02:35:29 -07:00
hsmahesha c0735cb9f1 [AMDGPU] Do not internalize ASan device library functions.
ASan device library functions (those starting with the prefix __asan_)
are at the moment undergoing undesired optimizations due to
internalization. Hence, in order to avoid such undesired optimizations
on ASan device library functions, do not internalize them in the first
place.

Reviewed By: yaxunl

Differential Revision: https://reviews.llvm.org/D110468
2021-09-29 07:19:02 +05:30
Sterling Augustine c07f709969 Revert "Recommit "[AArch64] Split bitmask immediate of bitwise AND operation""
This reverts commit 73a196a11c.

Causes crashes as reported in https://reviews.llvm.org/D109963
2021-09-28 18:02:06 -07:00
Jessica Paquette 241c7b1473 [AArch64][GlobalISel] Run overlapping_and after legalization
When we have code with truncates, those truncates may be changed into G_ANDs
with constants. These may, in turn, feed into other G_AND instructions.

Running this combine post-legalize allows us to optimize examples like this one:

https://godbolt.org/z/zrGY4dfEW

SDAG currently optimizes the example above so that there is only one `and`.
GISel doesn't optimize it, because the G_AND we'd optimize here is translated
as a G_TRUNC. Later, that G_TRUNC is turned into a G_AND during legalization.

Differential Revision: https://reviews.llvm.org/D110667
2021-09-28 17:13:34 -07:00
Roman Lebedev b6b7860954
[X86][Costmodel] Load/store i16 Stride=6 VF=16 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3

For this tuple, measuring becomes problematic since there's a lot of spilling going on,
but apparently all these memory ops do not affect the worst-case estimate at all here.

For load we have:
https://godbolt.org/z/5qGb9odP6 - for intels `Block RThroughput: <=106.0`; for ryzens, `Block RThroughput: <=34.8`
So pick cost of `106`.

For store we have:
https://godbolt.org/z/KrWcv4Ph7 - for intels `Block RThroughput: =58.0`; for ryzens, `Block RThroughput: <=20.5`
So pick cost of `58`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110593
2021-09-28 19:15:08 +03:00
Roman Lebedev 24e42f7d28
[X86][Costmodel] Load/store i16 Stride=6 VF=8 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3

For load we have:
https://godbolt.org/z/3Tc5s897j - for intels `Block RThroughput: =39.0`; for ryzens, `Block RThroughput: <=13.5`
So pick cost of `39`.

For store we have:
https://godbolt.org/z/fo1h9E67e - for intels `Block RThroughput: =21.0`; for ryzens, `Block RThroughput: <=12.0`
So pick cost of `21`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110592
2021-09-28 19:15:07 +03:00
Roman Lebedev b3011bcc78
[X86][Costmodel] Load/store i16 Stride=6 VF=4 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3

For load we have:
https://godbolt.org/z/1Wcaf9c7T - for intels `Block RThroughput: =9.0`; for ryzens, `Block RThroughput: <=4.5`
So pick cost of `9`.

For store we have:
https://godbolt.org/z/1Wcaf9c7T - for intels `Block RThroughput: =15.0`; for ryzens, `Block RThroughput: <=6.0`
So pick cost of `15`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110591
2021-09-28 19:15:01 +03:00
Roman Lebedev aa93c55889
[X86][Costmodel] Load/store i16 Stride=6 VF=2 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3

For load we have:
https://godbolt.org/z/bhscej4WM - for intels `Block RThroughput: =13.0`; for ryzens, `Block RThroughput: <=7.0`
So pick cost of `13`.

For store we have:
https://godbolt.org/z/Yf4Pfnxbq - for intels `Block RThroughput: =10.0`; for ryzens, `Block RThroughput: <=3.5`
So pick cost of `10`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110590
2021-09-28 19:14:56 +03:00
Quinn Pham 70391b3468 [PowerPC] FP compare and test XL compat builtins.
This patch is in a series of patches to provide builtins for
compatibility with the XL compiler. This patch adds builtins for compare
exponent and test data class operations on floating point values.

Reviewed By: #powerpc, lei

Differential Revision: https://reviews.llvm.org/D109437
2021-09-28 11:01:51 -05:00
Kazu Hirata 9e4f1f9265 [SystemZ] Remove redundant declaration SystemZMnemonicSpellCheck (NFC)
Note that SystemZMnemonicSpellCheck is defined in
SystemZGenAsmMatcher.inc, which SystemZAsmParser.cpp includes.

Identified with readability-redundant-declaration.
2021-09-28 08:38:05 -07:00
David Green fdd8c10959 [ARM] Delay reverting WLS in arm-block-placement
As we have to split blocks, we may be left in an invalid loop state
after a WLS is reverted to a DLS. Instead, remember the WLSs that could
not be fixed and revert them after finishing processing all other loops.

Differential Revision: https://reviews.llvm.org/D110567
2021-09-28 15:38:29 +01:00
Jingu Kang 73a196a11c Recommit "[AArch64] Split bitmask immediate of bitwise AND operation"
This reverts the revert commit f85d8a5bed
with bug fixes.

Original message:

    MOVi32imm + ANDWrr ==> ANDWri + ANDWri
    MOVi64imm + ANDXrr ==> ANDXri + ANDXri

    The mov pseudo instruction could be expanded to multiple mov instructions later.
    In this case, try to split the constant operand of the mov instruction into two
    bitmask immediates. This makes only two AND instructions instead of multiple
    mov + and instructions.

    Added a peephole optimization pass on MIR level to implement it.

    Differential Revision: https://reviews.llvm.org/D109963
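
A worked example of the split (my own numbers, not from the commit):

```cpp
#include <cassert>
#include <cstdint>

int main() {
  // 0x0000ff0f is not a single AArch64 bitmask immediate (two separate
  // runs of ones), but it is the AND of two values that are:
  uint32_t C = 0x0000ff0fu;
  uint32_t A = 0x0000ffffu; // 16 contiguous ones
  uint32_t B = 0xffffff0fu; // a wrapping (rotated) run of 28 ones
  assert((A & B) == C);     // so 'mov+and' can become two ANDWri
  return 0;
}
```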
2021-09-28 15:26:29 +01:00
David Green 2c53215e99 [ARM] Skip debug info in recomputeVPTBlockMask
The ARMLowOverheadLoops pass recalculates VPT block masks when it
converts VCMP's inside VPT blocks into VPT's. The function to do so
doesn't seem to handle debug info though, leading to invalid block
creation or asserts at compile time. Make sure the function skips any
debug info between the MVE instructions it inspects.

Differential Revision: https://reviews.llvm.org/D110564
2021-09-28 14:58:13 +01:00
hyeongyu kim 86bf234d0b [IR] Change the default value of InsertElement to poison (1/4)
This patch is for fixing potential insertElement-related bugs like D93818.
```
V = UndefValue::get(VecTy);
for(...)
  V = Builder.CreateInsertElement(V, Elt, Idx);
=>
V = PoisonValue::get(VecTy);
for(...)
  V = Builder.CreateInsertElement(V, Elt, Idx);
```
Like above, this patch changes the placeholder V to poison.
The patch will be separated into several commits.

Reviewed By: aqjune

Differential Revision: https://reviews.llvm.org/D110311
2021-09-28 22:29:16 +09:00
Jingu Kang f85d8a5bed Revert "[AArch64] Split bitmask immediate of bitwise AND operation"
This reverts commit 864b206796.

Reverting due to error on buildbots.
2021-09-28 13:28:09 +01:00
Jingu Kang 864b206796 [AArch64] Split bitmask immediate of bitwise AND operation
MOVi32imm + ANDWrr ==> ANDWri + ANDWri
MOVi64imm + ANDXrr ==> ANDXri + ANDXri

The mov pseudo instruction could be expanded to multiple mov instructions later.
In this case, try to split the constant operand of the mov instruction into two
bitmask immediates. This makes only two AND instructions instead of multiple
mov + and instructions.

Added a peephole optimization pass on MIR level to implement it.

Differential Revision: https://reviews.llvm.org/D109963
2021-09-28 11:57:43 +01:00
Liu, Chen3 57e8f840b6 [X86][FP16] Fix a bug when Combine the FADD(A, FMA(B, C, 0)) to FMA(B, C, A).
This bug was introduced by D109953. The operand order of the generated FMA
was wrong.

Differential Revision: https://reviews.llvm.org/D110606
2021-09-28 11:38:53 +08:00
Jozef Lawrynowicz 6cfb4d46ba [llvm-readobj] Support dumping of MSP430 ELF attributes
The MSP430 ABI supports build attributes for specifying
the ISA, code model, data model and enum size in ELF object files.

Differential Revision: https://reviews.llvm.org/D107969
2021-09-28 00:56:11 +03:00
Roman Lebedev 2a7a768dad
[X86][Costmodel] Load/store i16 Stride=4 VF=32 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3

For this tuple, measuring becomes problematic since there's a lot of spilling going on,
but apparently all these memory ops do not affect the worst-case estimate at all here.

For load we have:
https://godbolt.org/z/zP4hd8MT6 - for intels `Block RThroughput: =150.0`; for ryzens, `Block RThroughput: <=59`
So pick cost of `150`.

For store we have:
https://godbolt.org/z/vKb8zTK8E - for intels `Block RThroughput: =32.0`; for ryzens, `Block RThroughput: <=24.0`
So pick cost of `64`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110548
2021-09-27 22:20:01 +03:00
Roman Lebedev ee5a050e2e
[X86][Costmodel] Load/store i16 Stride=4 VF=16 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3

For load we have:
https://godbolt.org/z/Wd9cKab83 - for intels `Block RThroughput: =75.0`; for ryzens, `Block RThroughput: <=29.5`
So pick cost of `75`. (note that `# 32-byte Reload` does not affect throughput there.)

For store we have:
https://godbolt.org/z/Wd9cKab83 - for intels `Block RThroughput: =32.0`; for ryzens, `Block RThroughput: <=12.0`
So pick cost of `32`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110543
2021-09-27 22:20:01 +03:00
Roman Lebedev 5615d6a6dd
[X86][Costmodel] Load/store i16 Stride=4 VF=8 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3

For load we have:
https://godbolt.org/z/dd8T5P471 - for intels `Block RThroughput: =33.0`; for ryzens, `Block RThroughput: <=14.5`
So pick cost of `33`.

For store we have:
https://godbolt.org/z/zPxcKWhn4 - for intels `Block RThroughput: =10.0`; for ryzens, `Block RThroughput: <=6.0`
So pick cost of `10`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110541
2021-09-27 22:20:01 +03:00
Roman Lebedev df2b42d12e
[X86][Costmodel] Load/store i16 Stride=4 VF=4 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3

For load we have:
https://godbolt.org/z/rnsf639Wh - for intels `Block RThroughput: =17.0`; for ryzens, `Block RThroughput: <=7.5`
So pick cost of `17`.

For store we have:
https://godbolt.org/z/565KKrcY6 - for intels `Block RThroughput: =6.0`; for ryzens, `Block RThroughput: =2.0`
So pick cost of `6`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110537
2021-09-27 22:20:01 +03:00
Roman Lebedev 45caac91c4
[X86][Costmodel] Load/store i16 Stride=4 VF=2 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3

For load we have:
https://godbolt.org/z/5EYc6r9nh - for intels `Block RThroughput: =6.0`; for ryzens, `Block RThroughput: <=3.0`
So pick cost of `6`.

For store we have:
https://godbolt.org/z/z61e5d6GE - for intels `Block RThroughput: =2.0`; for ryzens, `Block RThroughput: <=1.0`
So pick cost of `2`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110536
2021-09-27 22:20:01 +03:00
Praveen Velliengiri e90b512c4d [AMDGPU] Change ASAN init/fini kernels linkage to external.
The HSA runtime fails to find the symbols for the Init and Fini kernels, as
they are marked with internal linkage. Change the linkage to external
to fix those errors.

Differential Revision: https://reviews.llvm.org/D110054
2021-09-27 11:50:37 -06:00
Quinn Pham 682e15f371 [PowerPC] Fix td pattern for P10 VSLDBI and VSRDBI
This patch fixes the pattern for the P10 instructions Vector Shift Left
Double by Bit Immediate VN-form and Vector Shift Right Double by Bit
Immediate VN-form. The third argument should be a target constant (`timm`)
instead of an `i32` because an immediate is expected.

Reviewed By: lei

Differential Revision: https://reviews.llvm.org/D109920
2021-09-27 12:36:18 -05:00
Craig Topper a2a07e8db3 [RISCV] Fold store of vmv.x.s to a vse with VL=1.
This can avoid a loss of decoupling with the scalar unit on cores
with decoupled scalar and vector units.

We should support FP too, but those use extract_element and not a
custom ISD node so it is a little different. I also left a FIXME
in the test for i64 extract and store on RV32.

Reviewed By: frasercrmck

Differential Revision: https://reviews.llvm.org/D109482
2021-09-27 09:54:46 -07:00
Craig Topper 933182e948 [RISCV] Improve support for forming widening multiplies when one input is a scalar splat.
If one input of a fixed vector multiply is a sign/zero extend and
the other operand is a splat of a scalar, we can use a widening
multiply if the scalar value has sufficient sign/zero bits.

Reviewed By: frasercrmck

Differential Revision: https://reviews.llvm.org/D110028
2021-09-27 09:37:07 -07:00
Kazu Hirata b68a62b3a9 [Lanai] Remove redundant declaration getTheLanaiTarget (NFC)
Note that getTheLanaiTarget is declared in
TargetInfo/LanaiTargetInfo.h, which LanaiDisassembler.cpp includes.

Identified with readability-redundant-declaration.
2021-09-27 08:58:27 -07:00
Sebastian Neubauer bf980930e5 [AMDGPU] Ignore KILLs when forming clauses
KILL instructions are sometimes present and prevent hard
clauses from being formed.

Fix this by ignoring all meta instructions in clauses.
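
A sketch of the fix (illustrative, not the committed code):

```cpp
#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/MachineInstr.h"

// When scanning for instructions to clause together, skip meta
// instructions so they neither join nor break a hard clause.
static void scanForClause(llvm::MachineBasicBlock &MBB) {
  for (llvm::MachineInstr &MI : MBB) {
    if (MI.isMetaInstruction())
      continue; // KILL, IMPLICIT_DEF, DBG_VALUE, ... have no encoding
    // ... consider MI for the current clause ...
  }
}
```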

Differential Revision: https://reviews.llvm.org/D106042
2021-09-27 16:33:52 +02:00
Roman Lebedev 7424deb743
[X86][Costmodel] Load/store i16 Stride=2 VF=32 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3

For load we have:
https://godbolt.org/z/q6GbK89br - for intels `Block RThroughput: =18.0`; for ryzens, `Block RThroughput: <=7.0`
So pick cost of `18`.

For store we have:
https://godbolt.org/z/Yzfoo5TnW - for intels `Block RThroughput: =8.0`; for ryzens, `Block RThroughput: <=4.0`
So pick cost of `8`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110507
2021-09-27 14:21:12 +03:00
Roman Lebedev a5113e9445
[X86][Costmodel] Load/store i16 Stride=2 VF=16 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3

For load we have:
https://godbolt.org/z/Y1E7qnjz8 - for intels `Block RThroughput: =9.0`; for ryzens, `Block RThroughput: <=3.5`
So pick cost of `9`.

For store we have:
https://godbolt.org/z/Y1E7qnjz8 - for intels `Block RThroughput: =4.0`; for ryzens, `Block RThroughput: <=2.0`
So pick cost of `4`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110506
2021-09-27 14:20:11 +03:00
Roman Lebedev 70c90cc5bd
[X86][Costmodel] Load/store i16 Stride=2 VF=8 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3

For load we have:
https://godbolt.org/z/e5YE99a4P - for intels `Block RThroughput: =6.0`; for ryzens, `Block RThroughput: =2.0`
So pick cost of `6`.

For store we have:
https://godbolt.org/z/3vM4KsE1n - for intels `Block RThroughput: =3.0`; for ryzens, `Block RThroughput: <=2.0`
So pick cost of `3`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110505
2021-09-27 14:18:29 +03:00
Roman Lebedev 49e532aa52
[X86][Costmodel] Load/store i16 Stride=2 VF=4 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3

For load we have:
https://godbolt.org/z/1j3nf3dro - for intels `Block RThroughput: =2.0`; for ryzens, `Block RThroughput: <=1.0`
So pick cost of `2`.

For store we have:
https://godbolt.org/z/4n1zvP37j - for intels `Block RThroughput: =1.0`; for ryzens, `Block RThroughput: <=0.5`
So pick cost of `1`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110504
2021-09-27 14:15:25 +03:00
David Green bb2d23dcd4 [ARM] Improve detection of fallthrough when aligning blocks
We align non-fallthrough branches under Cortex-M at O3 to lead to fewer
instruction fetches. This improves that for the block after a LE or
LETP. These blocks will still have terminating branches until the
LowOverheadLoops pass is run (as they are not handled by analyzeBranch,
the branch is not removed until later), so canFallThrough will return
false. These extra branches will eventually be removed, leaving a
fallthrough, so treat them as such and don't add unnecessary alignments.

Differential Revision: https://reviews.llvm.org/D107810
2021-09-27 11:21:21 +01:00
Simon Pilgrim 468ff703e1 [X86] combineVectorHADDSUB - remove the broken HOP(x,x) merging code (PR51974)
This intention of this code turns out to be superfluous as we can handle this with shuffle combining, and it has a critical flaw in that it doesn't check for dependencies.

Fixes PR51974
2021-09-27 10:41:22 +01:00
Fraser Cormack d48f6df1f8 [RISCV] Create the correct mask type when lowering EXTRACT_VECTOR_ELT
This particular case was creating a `VMSET_VL` using the old
fixed-length type in order to pass a mask to other custom nodes
operating on the scalable container type. This kind of thing wasn't
caught for us; I only noticed when experimenting with odd-length
vectors, where it was trying to generate an invalid `v3i1` MVT.

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D110420
2021-09-27 09:43:40 +01:00
Freddy Ye 902ec6142a [X86][ISel] Lowering FROUND(f16) and FROUNDEVEN(f16)
When AVX512FP16 is enabled, FROUND(f16) cannot be handled by the type
legalizer, and no libcall in libm is available for fround(f16) yet.
FROUNDEVEN(f16) has a corresponding instruction in AVX512FP16.

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D110312
2021-09-27 13:35:03 +08:00
Simon Pilgrim c0eff50fc5 [X86][SSE] combineMulToPMADDWD - enable sext_extend_vector_inreg(vXi16) -> zext_extend_vector_inreg(vXi16) fold
The plan is to allow combineMulToPMADDWD to match illegal vector types (as long as they're still pow2), which should allow us to start removing the 128-bit limit on more of the PMADDWD combines.
2021-09-26 19:37:23 +01:00
Simon Pilgrim ed3e4917b3 [X86] Fold PACK(*_EXTEND_VECTOR_INREG, UNDEF) -> *_EXTEND_VECTOR_INREG
For 128-bit vectors, we can remove a PACK of a EXTEND_VECTOR_INREG node and just create a smaller extension to the result/packed type.
2021-09-26 19:37:22 +01:00
Simon Pilgrim 3fe9767204 [X86] Fold ADD(VPMADDWD(X,Y),VPMADDWD(Z,W)) -> VPMADDWD(SHUFFLE(X,Z), SHUFFLE(Y,W))
Merge addition of VPMADDWD nodes if each element pair doesn't use the upper element in each pair (i.e. it's zero) - we can generalize this to either element in the pair if we one day create VPMADDWD with zero lower elements.

There are still a number of issues with extending/shuffling with 256/512-bit VPMADDWD nodes so this initially only works for v2i32/v4i32 cases - I'm working on removing all these limitations but there's still a bit of yak shaving to go.....
2021-09-26 18:08:29 +01:00
Kazu Hirata c4ae4a745d [RISCV] Remove redundant declaration RISCVMnemonicSpellCheck (NFC)
Note that RISCVMnemonicSpellCheck is defined in
RISCVGenAsmMatcher.inc, which RISCVAsmParser.cpp includes.

Identified with readability-redundant-declaration.
2021-09-26 09:26:57 -07:00
Roman Lebedev d9413f46b3
[X86][Costmodel] Load/store i16 VF=2 interleaving costs
The only sched models for CPUs that support avx2
but not avx512 are: haswell, broadwell, skylake, zen1-3

For load we have:
https://godbolt.org/z/M8vEKs5jY - for intels `Block RThroughput: =2.0`;
                                  for ryzens, `Block RThroughput: <=1.0`
So pick cost of `2`.

For store we have:
https://godbolt.org/z/Kx1nKz7je - for intels `Block RThroughput: =1.0`;
                                  for ryzens, `Block RThroughput: <=0.5`
So pick cost of `1`.

I'm directly using the shuffling asm the llc produced,
without any manual fixups that may be needed
to ensure sequential execution.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D103144
2021-09-26 19:13:23 +03:00
Simon Pilgrim 3538ee763d [CostModel][X86] Improve AVX1/AVX2 v16i32->v16i16/v16i8 truncation costs (PR51972)
Based off worst case btver2 (AVX1) and haswell (AVX2) llvm-mca reports
2021-09-26 13:43:46 +01:00
Simon Pilgrim 8c83bd3bd4 [CostModel][X86] Adjust vXi32 multiply costs if it can be performed using PMADDWD
Update the costs to match the codegen from combineMulToPMADDWD - not only can we use PMADDWD if it's zero-extended, but also if it's a constant or sign-extended from a vXi16 (which can be replaced with a zero-extension).
2021-09-25 16:28:48 +01:00
Simon Pilgrim eb7c78c2c5 [X86][SSE] combineMulToPMADDWD - mask off upper bits of sign-extended vXi32 constants
If we are multiplying by a sign-extended vXi32 constant, then we can mask off the upper 16 bits to allow folding to PMADDWD and make use of its implicit sign-extension from i16
2021-09-25 15:50:45 +01:00
Simon Pilgrim 2a4fa0c27c [X86][SSE] combineMulToPMADDWD - enable sext(v8i16) -> zext(v8i16) fold on sub-128 bit vectors 2021-09-25 15:50:45 +01:00
Kazu Hirata 44c401bdc3 [Mips] Remove redundant declarations (NFC)
Note that identical declarations immediately precede what's being
removed in this patch.

Identified with readability-redundant-declaration.
2021-09-25 07:41:11 -07:00
Simon Pilgrim f5a26ccae2 [X86][SSE] combineMulToPMADDWD - enable sext(v8i16) -> zext(v8i16) fold on pre-SSE41 targets
We already do this on SSE41 targets, where we have sext/zext instructions; now that combineShiftToPMULH handles SSE2 targets, we can enable this here as well.
2021-09-25 14:35:31 +01:00
Simon Pilgrim 4c72b10f0a [X86] X86FastISel::fastMaterializeConstant - break if-else chain to fix llvm-else-after-return warning. NFCI
All previous if-else cases return
2021-09-25 14:31:14 +01:00
Simon Pilgrim a25f25c3b7 [X86] combineShiftToPMULH - relax from ISA from SSE41 to SSE2
With improved shuffle combines (in particular canonicalizeShuffleWithBinOps), we can now usefully perform this on any SSE2+ target.

We should be able to remove this entirely and just use DAGCombiner's combineShiftToMULH if we can someday get it to support illegal (pre-widened) types.
2021-09-25 14:08:03 +01:00
David Green 883758ed48 [ARM] Fix Arm block placement creating branches after jump tables.
Given:
 - A jump table
 - Which jumps to the next block
 - The next block ends in a WLS
 - Where the WLS conditionally jumps to a block earlier in the program.

The Arm block placement pass would attempt to move the block containing
the WLS earlier, as the WLS instruction can only branch forward. In
doing so it would add a branch from the jumptable block to the WLS
block, thinking it previously fell-through.

This in itself would be fine, if a little inefficient, but the constant
island pass expects all instructions after a jump-table branch to have
been removed by analyzeBranch. So it gets confused and can assign the
same labels to multiple jump table blocks.

I've changed the condition to the same as used in analyzeBranch.
2021-09-25 11:32:25 +01:00
Jim Lin ed687c0211 [RISCV] Fix incorrect operand type of inst alias for InstR4
Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D110381
2021-09-25 11:25:12 +08:00
Craig Topper 715cf6ffb9 [RISCV] Add another isel optimization for (and (shl X, c2), c1).
Where c1 is a shifted mask with 32-c2 leading zeros and c3 trailing
zeros and c3>c2. We can select it as (slli (srliw X, c3-c2), c3).
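
A quick self-contained check of the identity with illustrative constants c2=2 and c3=4 (my own numbers):

```cpp
#include <cassert>
#include <cstdint>

int main() {
  // c1 has 32-c2 = 30 leading zeros and c3 = 4 trailing zeros:
  // bits [33:4] set, i.e. 0x3FFFFFFF0.
  uint64_t X = 0x123456789abcdef0ull;
  uint64_t Lhs = (X << 2) & 0x3FFFFFFF0ull;
  // srliw: logically shift the low 32 bits, then sign-extend the 32-bit
  // result (the sign bit is always 0 here since c3 > c2 implies shift > 0).
  uint64_t Srliw = uint64_t(int64_t(int32_t(uint32_t(X) >> (4 - 2))));
  uint64_t Rhs = Srliw << 4;
  assert(Lhs == Rhs);
  return 0;
}
```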
2021-09-24 15:10:25 -07:00
Stanislav Mekhanoshin cf74ef134c [AMDGPU] Limit promote alloca max size in functions
Non-entry functions have 32 caller-saved VGPRs available. If we
promote an alloca to consume more registers, we will have to spill
CSRs. There is no reason to eliminate one scratch access only to get
another scratch access instead.

Differential Revision: https://reviews.llvm.org/D110372
2021-09-24 13:38:39 -07:00
Anirudh Prasad a9ae2436fc [SystemZ][z/OS] Introduce the GOFFMCAsmInfo Interface for z/OS
- This patch adds in the GOFFMCAsmInfo interfaces for the z/OS target.
- This patch decouples the previously existing SystemZMCAsmInfo interface for the ELF target and the z/OS target.
- This patch also removes a small test in SystemZAsmLexerTest.cpp. This is because the test is set up for the s390x-ibm-linux (SystemZ ELF) triple, and the test checks a function which is overridden only for the z/OS target. We can't change the test to use a z/OS triple outright because there is still missing support which prevents the test from running successfully (an assert in AsmParser.cpp due to missing GOFFAsmParser support).

Reviewed By: uweigand, abhina.sreeskantharajan

Differential Revision: https://reviews.llvm.org/D110077
2021-09-24 16:25:41 -04:00
Anirudh Prasad ebe06910ce [NFC] Replace hard-coded usages of SystemZ::R15D with SpecialRegisters API
This patch replaces hard-coded usages of SystemZ::R15D with calls to the getStackPointerRegister function. Uses in the LowerCall function are left untouched to avoid merge conflicts with an expected upcoming patch.

Reviewed By: uweigand

Differential Revision: https://reviews.llvm.org/D109702
2021-09-24 15:20:57 -04:00
Anirudh Prasad e09a1dc475 [SystemZ][z/OS] Add GOFF Support to the DataLayout
- This patch adds GOFF mangling support to the LLVM data layout string. A corresponding line has been added to the data layout section of the language reference documentation.
- Furthermore, this patch also sets the right data layout string for the z/OS target in the SystemZ backend.

Reviewed By: uweigand, Kai, abhina.sreeskantharajan, MaskRay

Differential Revision: https://reviews.llvm.org/D109362
2021-09-24 14:09:01 -04:00
Victor Huang 6e1aaf18af [PowerPC] Mark splat immediate instructions as rematerializable
This patch marks splat immediate instructions XXSPLTIW and XXSPLTIDP as
rematerializable to prevent MachineLICM from moving them out of loops.

Reviewed By: lei, amy

Differential revision: https://reviews.llvm.org/D108823
2021-09-24 12:03:34 -05:00
Stanislav Mekhanoshin 082e22f3d7 [AMDGPU] Always reserve flat scratch SGPR for architected flat scratch
With architected flat scratch it becomes read-only. We must always
reserve an SGPR pair for it even if we do not use scratch at all, since
an attempt to write to SGPRs mapped to FLAT_SCRATCH results in a
memory violation.

This is not needed from GFX10 onwards with architected flat scratch,
since the special SGPRs do not carve space out of the normal SGPRs.

Differential Revision: https://reviews.llvm.org/D110376
2021-09-24 09:46:31 -07:00
Simon Pilgrim d8fc9f8727 [X86][SSE] combineMulToPMADDWD - replace sext(v8i16) -> zext(v8i16)
As suggested on D108522, if we're sign extending a v4i16 source before multiplying as a v4i32, then we can replace that with a zero extension and rely on the implicit sign-extension of PMADDWD.
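
For context, the per-lane semantics of pmaddwd that make this work (my own illustration):

```cpp
#include <cstdint>

// pmaddwd multiplies pairs of signed 16-bit elements (implicitly
// sign-extending them) and adds the two 32-bit products per lane. A vXi32
// multiply whose inputs fit in 16 bits can pair each element with a zero
// and reuse this, which is why a zero-extended source is good enough.
static int32_t pmaddwdLane(int16_t A0, int16_t A1, int16_t B0, int16_t B1) {
  return int32_t(A0) * int32_t(B0) + int32_t(A1) * int32_t(B1);
}
```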
2021-09-24 16:42:01 +01:00
Sanjay Patel 09e71c367a [x86] convert logic-of-FP-compares to FP logic-of-vector-compares
This is motivated by the examples and discussion in:
https://llvm.org/PR51245
...and related bugs.

By using vector compares and vector logic, we can convert 2 'set'
instructions into 1 'movd' or 'movmsk' and generally improve
throughput / reduce the instruction count.

Unfortunately, we don't have a complete vector compare ISA before
AVX, so I left SSE-only out of this patch. Ie, we'd need extra logic
ops to simulate the missing predicates for SSE 'cmpp*', so it's not
as clearly a win.
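
An illustrative source pattern this targets (my own example, not from the commit):

```cpp
// Two scalar FP compares combined with a logic op; with AVX this can
// become one vector compare plus a single mask extraction.
static bool bothLess(double X, double Y, double A, double B) {
  return (X < Y) & (A < B); // '&' keeps both compares, no short-circuit
}
```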

Differential Revision: https://reviews.llvm.org/D110342
2021-09-24 11:38:19 -04:00
Hsiangkai Wang 7d39a8a921 [RISCV] (1/2) Add the tail policy argument to builtins/intrinsics.
Add the tail policy argument to LLVM IR intrinsics. There are two policies for tail elements. Tail agnostic means users do not care about the values in the tail elements and tail undisturbed means the values in the tail elements need to be kept after the operation. In order to let users control the tail policy, we add an additional argument at the end of the argument list.

For unmasked operations, we have no maskedoff and the tail policy is always tail agnostic. If users want to keep tail elements under unmasked operations, they could use all one mask in the masked operations to do it. So, we only add the additional argument for masked operations for most cases. There are exceptions listed below.

In this patch, we do not handle the following cases to reduce the complexity of the patch. There could be two separate patches for them.

* Use dest argument to control tail policy
vmerge.vvm/vmerge.vxm/vmerge.vim (add _t builtins with additional dest argument)
vfmerge.vfm (add _t builtins with additional dest argument)
vmv.v.v (add _t builtins with additional dest argument)
vmv.v.x (add _t builtins with additional dest argument)
vmv.v.i (add _t builtins with additional dest argument)
vfmv.v.f (add _t builtins with additional dest argument)
vadc.vvm/vadc.vxm/vadc.vim (add _t builtins with additional dest argument)
vsbc.vvm/vsbc.vxm (add _t builtins with additional dest argument)

* Always has tail argument for masked/unmasked intrinsics
Vector Single-Width Integer Multiply-Add Instructions (add _t and _mt builtins)
Vector Widening Integer Multiply-Add Instructions (add _t and _mt builtins)
Vector Single-Width Floating-Point Fused Multiply-Add Instructions (add _t and _mt builtins)
Vector Widening Floating-Point Fused Multiply-Add Instructions (add _t and _mt builtins)
Vector Reduction Operations (add _t and _mt builtins)
Vector Slideup Instructions (add _t and _mt builtins)
Vector Slidedown Instructions (add _t and _mt builtins)

Discussion: https://github.com/riscv/rvv-intrinsic-doc/pull/101

Differential Revision: https://reviews.llvm.org/D105092
2021-09-24 17:09:50 +08:00
Simon Pilgrim dade83c02a [X86][SLM] Fix ADDQ/SUBQ/CMPEQQ throughput to account for running on either port.
Testing on an SLM box suggests these can run on either port, but the throughput is 4cy on either (including MMX versions). Confirmed with Intel AoM / Agner / InstLatX64.
2021-09-24 10:06:14 +01:00
Jonas Paulsson ea92283449 [SystemZ] Implement ISD::BITCAST for fp128 -> i128.
The type legalizer has by default no method of doing this bitcast other than
storing and reloading the value from the stack.

This patch implements a custom lowering of this operation using extractions
of subregs (z13 and earlier using FP128 register pairs), or of vector
elements (with 'vector enhancements 1' using VR128 FP registers).

Review: Ulrich Weigand

Differential Revision: https://reviews.llvm.org/D110346
2021-09-24 10:26:45 +02:00
Amara Emerson 661ab70314 [AArch64][GlobalISel] Fix crash in the extend(extract_vector_elt) optimization.
It was assuming that GPR extends could only have destination sizes of 32 or 64
bits, but for AArch64 we allow < 32 bits even without matching size physregs.
2021-09-23 23:07:16 -07:00
Christudasan Devadasan 7a62a5b56d [AMDGPU] Legalize initialized LDS variables
We don't allow an initializer for LDS variables
and there is an early abort during instruction
selection. This patch legalizes them by ignoring
the init values. During assembly emission, proper
error reporting already exists for such instances.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D109901
2021-09-23 22:53:20 -04:00
Craig Topper 40b230f685 [RISCV] Limit transformAddImmMulImm to prevent an infinite loop.
This fixes an issue reported in D108607.
2021-09-23 15:53:11 -07:00
Vang Thao 1443ba6163 [AMDGPU] Propagate defining src reg for AGPR to AGPR Copys
On targets that do not support AGPR to AGPR copying directly, try to find the
defining accvgpr_write and propagate its source vgpr register to the copies
before register allocation so the source vgpr register does not get clobbered.

The postrapseudos pass also attempts to propagate the defining accvgpr_write but
if the register to propagate is clobbered, it will give up and create new
temporary vgpr registers instead.

Reviewed By: rampitec

Differential Revision: https://reviews.llvm.org/D108830
2021-09-23 15:17:53 -07:00
Craig Topper 70f50114f3 [RISCV] Add another isel optimization for (and (shl x, c2), c1)
Turn (and (shl x, c2), c1) -> (slli (srli x, c3-c2), c3) if c1 is a
shifted mask with no leading zeros and c3 trailing zeros where c3
is greater than c2.
2021-09-23 14:18:07 -07:00
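A brute-force C++ sanity check of the identity (illustrative constants: c1 has no leading zeros and c3 = 8 trailing zeros, with c2 = 3 < c3):

  #include <cassert>
  #include <cstdint>

  int main() {
    const unsigned c2 = 3, c3 = 8;
    const uint64_t c1 = ~0ULL << c3; // shifted mask, no leading zeros
    const uint64_t xs[] = {0, 1, 0x0123456789ABCDEFULL, ~0ULL};
    for (uint64_t x : xs)
      assert(((x << c2) & c1) == ((x >> (c3 - c2)) << c3)); // srli + slli
    return 0;
  }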
Nico Weber 3fa43da7a3 [llvm] Fix a copy-pasto
We should use IMAGE_REL_I386_SECREL in the i386 section of this file.

IMAGE_REL_I386_SECREL and IMAGE_REL_AMD64_SECREL have the same
numeric value 0xB, so this doesn't change behavior.
2021-09-23 15:34:01 -04:00
Craig Topper 4a69551d66 [RISCV] Add more isel optimizations for (and (shr x, c2), c1).
Turn (and (shr x, c2), c1) -> (slli (srli x, c2+c3), c3) if c1 is a
shifted mask with c2 leading zeros and c3 trailing zeros.

When the number of leading zeros is c2+32 we can use SRLIW in place of SRLI.
2021-09-23 11:29:04 -07:00
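The analogous C++ check for the shr variant (illustrative constants: c1 has c2 = 4 leading zeros and c3 = 8 trailing zeros):

  #include <cassert>
  #include <cstdint>

  int main() {
    const unsigned c2 = 4, c3 = 8;
    const uint64_t c1 = (~0ULL >> (c2 + c3)) << c3; // shifted mask
    const uint64_t xs[] = {0, 1, 0x0123456789ABCDEFULL, ~0ULL};
    for (uint64_t x : xs)
      assert(((x >> c2) & c1) == ((x >> (c2 + c3)) << c3)); // srli + slli
    return 0;
  }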
Sanjay Patel 74ba4b769a [x86] move combiner state check into convertIntLogicToFPLogic(); NFC
This function can be adapted to solve bugs like PR51245,
but it could require differentiating the combiner timing
between the existing and new transforms.
2021-09-23 14:28:22 -04:00
Thomas Lively 2f519825ba [WebAssembly] Add prototype relaxed SIMD fma/fms instructions
Add experimental clang builtins, LLVM intrinsics, and backend definitions for
the new {f32x4,f64x2}.{fma,fms} instructions in the relaxed SIMD proposal:
https://github.com/WebAssembly/relaxed-simd/blob/main/proposals/relaxed-simd/Overview.md.
Do not allow these instructions to be selected without explicit user opt-in.

Differential Revision: https://reviews.llvm.org/D110295
2021-09-23 11:01:36 -07:00
Piotr Sobczak 2ac53fffae [AMDGPU] Avoid processing functions in amdgpu-propagate-attributes pass for shaders
The pass amdgpu-propagate-attributes ("Early/Late propagate attributes
from kernels to functions") is currently run also for shaders, where
it does nothing. Modify the check so the pass only processes functions
for kernels.

Differential Revision: https://reviews.llvm.org/D109961
2021-09-23 16:46:56 +02:00
Simon Pilgrim c931d35216 [CostModel][X86] Increase i64 mul cost from 1 to 2
Only the most recent CPUs really support 1cy 64-bit multiplies, and the X64 cost table represents a realistic worst case. The 1cy value was also discouraging vectorization when most vXi64 PMULDQ expansions aren't actually slower than scalarization.

Noticed while investigating PR51436.
2021-09-23 14:48:21 +01:00
Jim Lin fbacf5ad38 [RISCV] Add missing op type OPERAND_UIMM2, OPERAND_UIMM3 and OPERAND_UIMM7 for verifyInstruction
Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D110307
2021-09-23 19:30:46 +08:00
Fraser Cormack e7c879a69d [RISCV][VP] Add support for VP_REDUCE_* operations
This patch adds codegen support for lowering the vector-predicated
reduction intrinsics to RVV instructions. The process is similar to that
of the other reduction intrinsics, save for the fact that every VP
reduction has a start value. We reuse the existing custom "VL" nodes,
adding extra patterns where required to handle non-true masks.

To support these nodes, the `RISCVISD::VECREDUCE_*_VL` nodes have been
given an explicit "merge" operand. This is to facilitate the VP
reductions, where we must be careful to ensure that even if no operation
is performed (when VL=0) we still produce the start value. The RVV
reductions don't update the destination register under these conditions,
so we tie the splatted start value to the output register.

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D107657
2021-09-23 11:11:05 +01:00
Jay Foad 6cef28ed2d [TII] Remove the MFI argument to convertToThreeAddress. NFC.
This simplifies the API and addresses a FIXME in
TwoAddressInstructionPass::convertInstTo3Addr.

Differential Revision: https://reviews.llvm.org/D110229
2021-09-23 08:58:46 +01:00
Liu, Chen3 76656ec8ec [X86][FP16] Combine the FADD(A, FMA(B, C, 0)) to FMA(B, C, A)
This patch is to support transform something like
_mm512_add_ph(acc, _mm512_fmadd_pch(a, b, _mm512_setzero_ph()))
to _mm512_fmadd_pch(a, b, acc).

Differential Revision: https://reviews.llvm.org/D109953
2021-09-23 15:37:08 +08:00
Mikael Holmen e7b169a8ae [AMDGPU] Fix gcc warnings about unused variables [NFC] 2021-09-23 08:08:00 +02:00
Usman Nadeem 3b12282b0e [AArch64][SVE][InstCombine] Eliminate redundant chains of tuple get/set
Differential Revision: https://reviews.llvm.org/D109667

Change-Id: I06a3c28e3658ecda109a3a1b73265828274ab2ea
2021-09-22 20:59:46 -07:00
Wang, Pengfei ebec077e07 [X86][FP16] Change the order of the operands in complex FMA intrinsics to allow swap between the mul operands.
Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D109658
2021-09-23 11:02:48 +08:00
Zhi An Ng 1552179ac0 [WebAssembly] Add relaxed-simd feature
This currently only defines a constant, but in the future it will be used
to gate builtins for experimenting with and prototyping the relaxed-simd
proposal (https://github.com/WebAssembly/relaxed-simd/).

Differential Revision: https://reviews.llvm.org/D110111
2021-09-22 14:52:50 -07:00
Craig Topper f0a422f935 [RISCV] Add fcvt.s.w(u)/fcvt.d.w(u)/fcvt.h.w(u) to hasAllNBitUsers
These instructions only read the lower 32 bits of their input.
2021-09-22 14:24:26 -07:00
Craig Topper b33a1cc05b [RISCV] Optimize vp.store with an all ones mask to avoid a vmset.
We can use riscv_vse intrinsic instead of riscv_vse_mask. The code here
is based on similar code for handling masked.scatter and vp.scatter.

Reviewed By: frasercrmck

Differential Revision: https://reviews.llvm.org/D110206
2021-09-22 09:12:47 -07:00
Hongtao Yu d9b511d8e8 [CSSPGO] Set PseudoProbeInserter as a default pass.
Currently PseudoProbeInserter is a pass conditioned on a target switch. It works well with a single clang invocation. It doesn't work so well when the backend is called separately (i.e., through the linker or llc), where the user always has to pass -pseudo-probe-for-profiling explicitly. I'm making the pass a default pass that requires no command line arg to trigger, but it will actually run only when the CU comes with `llvm.pseudo_probe_desc` metadata.

Reviewed By: wenlei

Differential Revision: https://reviews.llvm.org/D110209
2021-09-22 09:09:48 -07:00
Simon Pilgrim b1f38a27f0 [Target][CodeGen] Remove default CostKind arguments on inner/impl TTI overrides
Based off a discussion on D110100, we should be avoiding default CostKinds whenever possible.

This initial patch removes them from the 'inner' target implementation callbacks - these should only be used by the main TTI calls, so this should guarantee that we don't cause changes in CostKind by missing it in an inner call. This exposed a few missing arguments in getGEPCost and reduction cost calls that I've cleaned up.

Differential Revision: https://reviews.llvm.org/D110242
2021-09-22 15:28:08 +01:00
Sander de Smalen 6375ca4059 [AArch64][SVE] Add extract_subvector patterns for unpacked fp16 and bfloat types.
Reviewed By: david-arm

Differential Revision: https://reviews.llvm.org/D110163
2021-09-22 14:25:17 +01:00
Tim Northover 3a00e58c2f AArch64: use indivisible cmpxchg for 128-bit atomic loads at O0
Like normal atomicrmw operations, at -O0 the simple register-allocator can
insert spills into the LL/SC loop if it's expanded and visible when regalloc
runs. This can cause the operation to never succeed by repeatedly clearing the
monitor. Instead expand to a cmpxchg, which has a pseudo-instruction for -O0.
2021-09-22 14:20:43 +01:00
David Green 02cd8a6b91 [ARM] Allow smaller VMOVL in tail predicated loops
This allows VMOVL in tail predicated loops so long as the vector
size the VMOVL is extending into is less than or equal to the size of
the VCTP in the tail predicated loop. These cases represent a
sign-extend-inreg (or zero-extend-inreg), which needn't block tail
predication as in https://godbolt.org/z/hdTsEbx8Y.

For this a vecsize has been added to the TSFlag bits of MVE
instructions, which stores the size of the elements that the MVE
instruction operates on. In the case of multiple sizes (such as an
MVE_VMOVLs8bh that extends from i8 to i16), the largest size is
chosen. The sizes are encoded as 00 = i8, 01 = i16, 10 = i32 and 11 =
i64, which often (but not always) comes from the instruction encoding
directly. A unit test was added, and although only a subset of the
vecsizes are currently used, the rest should be useful for other cases.

Differential Revision: https://reviews.llvm.org/D109706
2021-09-22 12:07:52 +01:00
Sander de Smalen ab3607c0ed [AArch64][SVE] Add missing load/store patterns for unpacked bfloat vectors.
Reviewed By: c-rhodes

Differential Revision: https://reviews.llvm.org/D110063
2021-09-22 09:45:33 +01:00
Jay Foad 0205806d0f [AMDGPU] Convert mac/fmac to mad/fma when folding output modifiers
Use of output modifiers forces VOP3 encoding for a VOP2 mac/fmac
instruction, so we might as well convert it to the more flexible VOP3-
only mad/fma form.

With this change, the only way we should emit VOP3-encoded mac/fmac is
if regalloc chooses registers that require the VOP3 encoding, e.g. sgprs
for both src0 and src1. In all other cases the mac/fmac should either be
converted to mad/fma or shrunk to VOP2 encoding.

Differential Revision: https://reviews.llvm.org/D110156
2021-09-22 09:36:34 +01:00
Jay Foad 3828ea6181 [AMDGPU] Divergence-driven instruction selection for mul i32
Differential Revision: https://reviews.llvm.org/D109881
2021-09-22 09:36:34 +01:00
Matt Arsenault ec55dcedce AMDGPU: Refactor getWavesPerEU to separate flat workgroup size query
Add an overload to pass the flat workgroup range in separately. This
will allow the attributor to use the assumed value for
amdgpu-flat-workgroup-sizes when inferring amdgpu-waves-per-eu.
2021-09-21 22:57:17 -04:00
Chen Zheng ffa9fa9ed2 [PowerPC] prepare for update form with non-const increment.
This is a follow-up of D105872. Now we are able to prepare for update
form with non-const increment.

Reviewed By: jsji

Differential Revision: https://reviews.llvm.org/D106032
2021-09-22 02:54:28 +00:00
Usman Nadeem 645b8f5365 [AArch64][SVE] Add patterns to generate ADR instruction
Differential Revision: https://reviews.llvm.org/D109665

Change-Id: I9d2928688b80b804a16f52928e2057749ec2c0b2
2021-09-21 15:50:49 -07:00
Arthur Eubanks e42234383e Make DiagnosticInfoResourceLimit's limit param required
And always print it.

This makes some LLVM diagnostics match up better with Clang's diagnostics.

Updated some AMDGPU uses of DiagnosticInfoResourceLimit and now we print
better diagnostics for those.

Reviewed By: dblaikie

Differential Revision: https://reviews.llvm.org/D110204
2021-09-21 15:27:58 -07:00
Kirill Stoimenov 2649999579 [asan] Fixed a bug causing a crash when redzone optimization kicked in on X86 with -asan-optimize-callbacks flag on.
This change adds the ASan intrinsic to the list which sets hasCopyImplyingStackAdjustment.

Reviewed By: eugenis

Differential Revision: https://reviews.llvm.org/D110012
2021-09-21 22:26:03 +00:00
Craig Topper b81e26c7f4 Recommit "[X86] Clear kill flags when rewriting SETCC uses in flag copy lowering."
This time with the right bug number.

When we rewrite the setcc we replace set old setcc output register
with the new CondReg. But since CondReg can be shared by other
replacements, we don't know if the kill flags for the old register
are valid for CondReg. So be conservative and remove them.

The test case has a SETCCr and a SETCCm on the same condition so
they end up sharing the same CondReg. The SETCCr had one use with
a kill flag. This kill flag isn't valid after the replacement because
CondReg needs a live range extending to the later SETCCm replacement.

Fixes PR51903.
2021-09-21 14:59:25 -07:00
Craig Topper 51a82e051e Revert "[X86] Clear kill flags when rewriting SETCC uses in flag copy lowering."
This reverts commit 7550f146ff.

I botched the bug number.
2021-09-21 14:33:44 -07:00
Craig Topper 7550f146ff [X86] Clear kill flags when rewriting SETCC uses in flag copy lowering.
When we rewrite the setcc we replace set old setcc output register
with the new CondReg. But since CondReg can be shared by other
replacements, we don't know if the kill flags for the old register
are valid for CondReg. So be conservative and remove them.

The test case has a SETCCr and a SETCCm on the same condition so
they end up sharing the same CondReg. The SETCCr had one use with
a kill flag. This kill flag isn't valid after the replacement because
CondReg needs a live range extending to the later SETCCm replacement.

Fixes PR51908.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D110046
2021-09-21 14:29:46 -07:00
alex-t 1a33294652 [AMDGPU] Filtering out the inactive lanes bits when lowering copy to SCC
Normally, given that the DA results are kept consistent over the selection DAG, uniform comparisons get selected to S_CMP_* but divergent ones to V_CMP_*. Sometimes, for the sake of efficiency, SSA subgraphs may be converted to VALU to avoid repeatedly copying data back and forth. Hence we have to be able to maintain correctness when passing the i1 from a VALU to a SALU context and vice versa.

VALU operations only process the active lanes of the VGPR and ignore inactive ones.
Active lanes correspond to 1 bit in the EXEC mask register.
SALU represents an i1 as just one bit, but VALU as 64 bits: 0/1 and 0/(0xffffffffffffffff & EXEC) respectively.
SALU uses the one-bit conditional flag SCC, but VALU uses VCC, which is a pair of 32-bit SGPRs.

To expose SCC to the VALU context we need to convert the one-bit boolean value to the appropriate 64bit.
To return back to the SALU context we need to do the opposite.

To correctly convert 64bit VALU boolean to either 0 or 1 we need to filter out the bits corresponding to the inactive lanes.

Reviewed By: piotr

Differential Revision: https://reviews.llvm.org/D109900
2021-09-21 21:19:31 +03:00
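A rough C++ model of the two conversions described above, for a 64-lane wave; treating EXEC and the VALU boolean as plain 64-bit masks is an illustrative simplification:

  #include <cstdint>

  // SALU -> VALU: expand the one-bit SCC so every active lane sees a 1.
  uint64_t sccToValuBool(bool Scc, uint64_t Exec) { return Scc ? Exec : 0; }

  // VALU -> SALU: filter out the inactive-lane bits before forming SCC.
  bool valuBoolToScc(uint64_t Vcc, uint64_t Exec) { return (Vcc & Exec) != 0; }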
Brendon Cahoon cbdf624bb8 [AMDGPU] Correctly merge alias.scope and noalias metadata for memops
When adding alias.scope and noalias metadata to a memcpy function,
the alias.scope and noalias metadata from the operands are merged.
The rule for merging alias.scope is to take the intersection of
the domains and the union of the scopes within those domains.
The rule for merging noalias is to take the intersection.

The bug is that AMDGPULowerModuleLDS was using concatenation for
both alias.scope and noalias. For example, suppose f1 and f2 are added
to the LDS structure and there is a memcpy(f2, f1, sizeof(f1)).
Then concatenation creates noalias metadata for the memcpy that
includes both {f1, f2}. That means that the memcpy is assumed
not to alias a prior load of f2, which enables the optimizer to
remove a load of f2 that occurs after the memcpy.

The function MDNode::getMostGenericAliasScope defines the semantics
for alias.scope. There is a function, combineMetadata in Local.cpp,
that uses intersect for noalias.

Differential Revision: https://reviews.llvm.org/D110049
2021-09-21 13:02:01 -05:00
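A small C++ model of the two merge rules (illustrative; scopes and domains are reduced to integers, and scopeDomain maps each scope to its domain):

  #include <map>
  #include <set>

  using ScopeSet = std::set<int>;
  std::map<int, int> scopeDomain; // scope -> domain it belongs to

  // alias.scope: intersect the domains, union the scopes within them.
  ScopeSet mergeAliasScope(const ScopeSet &A, const ScopeSet &B) {
    std::set<int> DomA, DomB;
    for (int S : A) DomA.insert(scopeDomain[S]);
    for (int S : B) DomB.insert(scopeDomain[S]);
    ScopeSet Out;
    for (int S : A) if (DomB.count(scopeDomain[S])) Out.insert(S);
    for (int S : B) if (DomA.count(scopeDomain[S])) Out.insert(S);
    return Out;
  }

  // noalias: plain intersection, never concatenation.
  ScopeSet mergeNoAlias(const ScopeSet &A, const ScopeSet &B) {
    ScopeSet Out;
    for (int S : A) if (B.count(S)) Out.insert(S);
    return Out;
  }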
Craig Topper 7c975665b4 [RISCV] Make some arrays of constants 'static const'. NFC
This helps the compiler generate better code.
2021-09-21 10:52:47 -07:00
Amy Kwan 2af57b6099 [PowerPC] Add prefix load pattern for fpext to v2f64
This patch adds a prefixed load pattern involving v2f32 fpext to v2f64, where we
are dealing with a value whose offset fits into a 34-bit signed immediate.
A reduced test case is also added to this patch; the pattern is exercised by
the big endian CHECKs of the newly added test.

Differential Revision: https://reviews.llvm.org/D109887
2021-09-21 12:45:24 -05:00
Craig Topper aeb63d464f [RISCV] Teach RISCVTargetLowering::shouldSinkOperands to sink splats for and/or/xor.
This requires a minor change to CodeGenPrepare to ensure that
shouldSinkOperands will be called for And.

Reviewed By: frasercrmck

Differential Revision: https://reviews.llvm.org/D110106
2021-09-21 10:07:29 -07:00
Dmitry Preobrazhensky 3500e7d2b0 [AMDGPU][MC][GFX7][GFX10] Corrected image_atomic_fcmpswap
Differential Revision: https://reviews.llvm.org/D109616
2021-09-21 18:06:02 +03:00
Ben Shi b3052013b4 [RISCV] Optimize (add (mul x, c0), c1)
Optimize (add (mul x, c0), c1) -> (ADDI (MUL (ADDI, c1/c0), c0), c1%c0),
if c1/c0 and c1%c0 are simm12, while c1 is not.

Optimize (add (mul x, c0), c1) -> (MUL (ADDI, c1/c0), c0),
if c1%c0 is zero, and c1/c0 is simm12 while c1 is not.

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D108607
2021-09-21 14:13:14 +00:00
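A worked C++ example of the first rewrite (illustrative constants: c1 = 4097 is not a simm12, but c1/c0 = 40 and c1%c0 = 97 both are):

  #include <cassert>
  #include <cstdint>

  int main() {
    const int64_t c0 = 100, c1 = 4097;
    const int64_t xs[] = {0, 7, -123456};
    for (int64_t x : xs)
      // (add (mul x, c0), c1) == (ADDI (MUL (ADDI x, c1/c0), c0), c1%c0)
      assert(x * c0 + c1 == (x + c1 / c0) * c0 + c1 % c0);
    return 0;
  }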
Dmitry Preobrazhensky b8e7f53208 [AMDGPU][MC][GFX10] Enabled dlc for FLAT and GLOBAL atomics
Differential Revision: https://reviews.llvm.org/D109614
2021-09-21 16:23:20 +03:00
Jonas Paulsson a48b43f981 [SystemZ] Emit EXRL target instructions before text section is ended.
SystemZ adds the EXRL target instructions in the end of each file. This must
be done before debug info emission since that may end the text section, and
therefore this is now done in emitConstantPools() (instead of in
emitEndOfAsmFile).

Review: Ulrich Weigand

Differential Revision: https://reviews.llvm.org/D109513
2021-09-21 14:32:28 +02:00
Nicholas Guy 9e4d72675f [AArch64] Improve schedule modelling on the Cortex-A55
Enables the FuseAddress feature in the Cortex-A55 scheduling model

Differential Revision: https://reviews.llvm.org/D109323
2021-09-21 13:03:34 +01:00
Jay Foad 598bebeaa6 [AMDGPU] Prefer fmac over fma when selecting FMA_W_CHAIN
FMA_W_CHAIN is used when lowering fdiv f32. Prefer to select it to fmac
if there are no source modifiers, just like we do for other mad/mac and
fma/fmac cases.

Differential Revision: https://reviews.llvm.org/D110074
2021-09-21 11:57:45 +01:00
Jay Foad 86dcb59206 [AMDGPU] Prefer v_fmac over v_fma only when no source modifiers are used
v_fmac with source modifiers forces VOP3 encoding, but it is strictly
better to use the VOP3-only v_fma instead, because $dst and $src2 are
not tied so it gives the register allocator more freedom and avoids a
copy in some cases.

This is the same strategy we already use for v_mad vs v_mac and
v_fma_legacy vs v_fmac_legacy.

Differential Revision: https://reviews.llvm.org/D110070
2021-09-21 11:57:45 +01:00
Cullen Rhodes b23d22f7d5 [PowerPC] NFC: Remove unused tblgen template args
Identified in D109359.

Reviewed By: nemanjai

Differential Revision: https://reviews.llvm.org/D109715
2021-09-21 08:24:16 +00:00
Yonghong Song ea72b0319d BPF: make 32bit register spill with 64bit alignment
In llvm, for non-alu32 mode, the stack alignment is 64bit so only one
64bit spill per 64bit slot. For alu32 mode, the stack alignment
is 32bit, so it is possible to have two 32bit spills per
64bit slot.

Currently, bpf kernel verifier does not preserve register states
for 32bit spills. That is, one 32bit register may hold a constant
value or a bounded range before spill. After reload from the
stack, the information is lost and sometimes this may cause
verifier failure. For 64bit register spill, the verifier
indeed tries to preserve the register state for reloading.

The current verifier can be modestly changed to handle one
32bit spill per 64bit stack slot with state-preserving reload.
Handling two 32bit spills per 64bit stack slot will require
substantial changes.

This patch changes stack alignment for alu32 to be 64bit.
This way, for any 64bit slot in alu32 mode, only one
32bit or 64bit register value can be saved. Together
with previous-mentioned verifier enhancement, 32bit
spill can be handled with state preserving.

Note that llvm stack slot coalescing
seems to do only adjacent packing, which may leave some holes
in the stack. For example,
   stack slot 8   <== 8 bytes
   stack slot 4   <== 8 bytes with 4 byte hole
   stack slot 8   <== 8 bytes
   stack slot 4   <== 4 bytes

Differential Revision: https://reviews.llvm.org/D109073
2021-09-20 21:00:25 -07:00
Kazu Hirata 85b4b21c8b [llvm] Use make_early_inc_range (NFC) 2021-09-20 19:30:02 -07:00
Amara Emerson 4ceea77409 [X86] Rename the X86WinAllocaExpander pass and related symbols to "DynAlloca". NFC.
For x86 Darwin, we have a stack checking feature which re-uses some of this
machinery around stack probing on Windows. Renaming this to be more appropriate
for a generic feature.

Differential Revision: https://reviews.llvm.org/D109993
2021-09-20 16:19:28 -07:00
Jacob Lambert dc6e8dfdfe [AMDGPU][NFC] Correct typos in lib/Target/AMDGPU/AMDGPU*.cpp files. Test commit for new contributor. 2021-09-20 14:48:50 -07:00
Craig Topper a95ba81073 [RISCV] Teach RISCVTargetLowering::shouldSinkOperands to sink splats for FMA.
If either of the multiplicands is a splat, we can sink it to use
vfmacc.vf or similar.
2021-09-20 11:49:50 -07:00
Craig Topper 04ab6c85ef [RISCV] Teach RISCVTargetLowering::shouldSinkOperands to sink splats for FAdd/FSub/FMul/FDiv. 2021-09-20 10:25:46 -07:00
Craig Topper d85e347a28 [RISCV] Add a pass to recognize VLS strided loads/store from gather/scatter.
For strided accesses the loop vectorizer seems to prefer creating a
vector induction variable with a start value of the form
<i32 0, i32 1, i32 2, ...>. This value will be incremented each
loop iteration by a splat constant equal to the length of the vector.
Within the loop, arithmetic using splat values will be done on this
vector induction variable to produce indices for a vector GEP.

This pass attempts to dig through the arithmetic back to the phi
to create a new scalar induction variable and a stride. We push
all of the arithmetic out of the loop by folding it into the start,
step, and stride values. Then we create a scalar GEP to use as the
base pointer for a strided load or store using the computed stride.
Loop strength reduce will run after this pass and can do some
cleanups to the scalar GEP and induction variable.

Reviewed By: frasercrmck

Differential Revision: https://reviews.llvm.org/D107790
2021-09-20 09:39:44 -07:00
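A sketch of the kind of source loop this targets (illustrative): the vectorizer would index A with a <0,1,2,...> induction scaled by 3, and the pass folds that back into a scalar base pointer plus a constant byte stride.

  // Stride-3 access that can become a strided load instead of a gather.
  void gather3(float *A, float *B, int N) {
    for (int i = 0; i < N; ++i)
      B[i] = A[3 * i]; // base A, stride 3 * sizeof(float)
  }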
David Green 3f90df22f1 [ARM] MVE reverse shuffles.
The vectorizer can sometimes make reverse shuffles from indices that
count down. In MVE, we don't have a 128bit rev instruction, but we can
select this to a VREV64 with some lane movs to swap the two halves.

Ideally this would use VMOVD's, but only gets as far as VMOVS's at the
moment.

Differential Revision: https://reviews.llvm.org/D69510
2021-09-20 13:48:01 +01:00
Simon Pilgrim 4ab7c0d3fa [X86] X86TargetTransformInfo - remove unnecessary if-else after early exit. NFCI.
(style) Break the if-else chain as they all return.
2021-09-20 12:53:17 +01:00
Petar Avramovic e4c46ddd91 [GlobalISel] Improve elimination of dead instructions in legalizer
Add eraseInstr(s) utility functions. Before deleting an instruction
collects its use instructions. After deletion deletes use instructions
that became trivially dead.
This patch clears all dead instructions in existing legalizer mir tests.

Differential Revision: https://reviews.llvm.org/D109154
2021-09-20 13:00:58 +02:00
Tim Northover 13aa102e07 AArch64: use ldp/stp for 128-bit atomic load/store in v8.4 onwards
v8.4 says that normal 128-bit loads/stores are single-copy atomic if
they're properly aligned (which all LLVM atomics are) so we no longer need to
do a full RMW operation to guarantee we got a clean read.
2021-09-20 09:50:11 +01:00
David Spickett 92c9b28347 Revert "[AArch64][SVE] Teach cost model that masked loads/stores are cheap"
This reverts commit 734708e04f.

Due to build failures on the 2 stage SVE VLS bot.
https://lab.llvm.org/buildbot/#/builders/176/builds/908/steps/11/logs/stdio
2021-09-20 08:45:18 +00:00
Simon Pilgrim 0e89ff8195 [X86] SimplifyDemandedBits - only narrow a broadcast source if we only have one use.
Helps with the regression noted on D109065 - don't truncate a broadcast source if the source has multiple uses.
2021-09-19 22:53:30 +01:00
Kazu Hirata 84b07c9b3a [llvm] Use pop_back_val (NFC) 2021-09-19 13:44:23 -07:00
Simon Pilgrim f855ef2601 [X86][Atom] Fix FP uops + port usage
Both ports are required in most cases. Update the uops counts + port usage based off the most recent llvm-exegesis captures (PR36895) and what Intel AoM / Agner / InstLatX64 reports as well.

Noticed while trying to improve fp costs for vectorization via the D103695 helper script.
2021-09-19 20:39:20 +01:00
Simon Pilgrim b7342e3137 [X86] Fold SHUFPS(shuffle(x),shuffle(y),mask) -> SHUFPS(x,y,mask')
We can combine unary shuffles into either of SHUFPS's inputs and adjust the shuffle mask accordingly.

Unlike general shuffle combining, we can be more aggressive and handle multiuse cases as we're not going to accidentally create additional shuffles.
2021-09-19 20:39:19 +01:00
Simon Pilgrim cf8fac7d07 [X86][Atom] Specific uops for all IMUL/IDIV instructions
Based off a mixture of llvm-exegesis captures (PR36895) and Intel AoM / Agner / InstLatX64 reports.
2021-09-19 16:58:52 +01:00
Roman Lebedev 5f2fe48d06
[X86][TLI] SimplifyDemandedVectorEltsForTargetNode(): don't break apart broadcasts from which not just the 0'th elt is demanded
Apparently this has no test coverage before D108382,
but D108382 itself shows a few regressions that this fixes.

It doesn't seem worthwhile breaking apart broadcasts,
assuming we want the broadcasted value to be present in several elements,
not just the 0'th one.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D108411
2021-09-19 17:38:32 +03:00
Roman Lebedev 07f1d8f0ca
[X86] lowerShuffleAsDecomposedShuffleMerge(): if both inputs are broadcastable/identities, canonicalize broadcasts as such
Split off from D108253.
Broadcast is simpler than any other shuffle we might produce
to do what we want to do here, so prefer it.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D108382
2021-09-19 17:35:37 +03:00
Roman Lebedev 0852313e47
[NFC] combineX86ShufflesRecursively(): actually address nits for previous patch 2021-09-19 17:29:18 +03:00
Roman Lebedev 1e72ca94e5
[X86] combineX86ShufflesRecursively(): call SimplifyMultipleUseDemandedVectorElts() on after finishing recursing
This was suggested in https://reviews.llvm.org/D108382#inline-1039018,
and it avoids regressions in that patch.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D109065
2021-09-19 17:24:58 +03:00
David Green 1da52ef294 [ARM] Add VGETLANEu patterns for v4f16 and v8f16
These were apparently missing, having no pattern that could convert a
VGETLANEu of a v4f16 to an i32. Added bf16 whilst here, following the
same code.
2021-09-19 14:25:21 +01:00
Simon Pilgrim e381d8b243 [X86][Atom] Fix (U)COMISS/SD uops, latency and throughput
Both ports are required, for reg and mem variants - we can also use the WriteFComX class directly and remove the unnecessary InstRW overrides. Matches what Intel AoM / Agner / InstLatX64 report as well.
2021-09-19 12:44:44 +01:00
Ben Shi dee5a8ca32 [RISCV] Optimize (add (shl x, c0), (shl y, c1)) with SH*ADD
Optimize (add (shl x, c0), (shl y, c1)) ->
         (SLLI (SH*ADD x, y), c1), if c0-c1 == 1/2/3.

Reviewed By: craig.topper, luismarques

Differential Revision: https://reviews.llvm.org/D108916
2021-09-19 16:35:12 +08:00
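A quick C++ check of the identity (illustrative constants: c0 = 5, c1 = 3, so c0-c1 = 2 selects SH2ADD):

  #include <cassert>
  #include <cstdint>

  int main() {
    const unsigned c0 = 5, c1 = 3;
    const uint64_t vs[] = {0, 0x1234, ~0ULL};
    for (uint64_t x : vs)
      for (uint64_t y : vs)
        // (add (shl x, c0), (shl y, c1)) == (SLLI (SH2ADD x, y), c1)
        assert((x << c0) + (y << c1) == (((x << (c0 - c1)) + y) << c1));
    return 0;
  }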
Roman Lebedev 6a2c2263fb
[X86] Improve i8 all-ones element insertion in pre-SSE4.1
Should avoid some regressions in D109065

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D109989
2021-09-18 22:24:06 +03:00
David Green cb5e3f7959 [ARM] Prevent large integer VQDMULH pattern crashes
Put a limit on the size of constant integers we test when looking for
VQDMULH, to prevent it from crashing from values more than 64bits.
2021-09-18 18:47:02 +01:00
Usman Nadeem 757384abff [AArch64][SVE][InstCombine] Fold redundant zip1/2(uzp1/2) operations
zip1(uzp1(A, B), uzp2(A, B)) --> A
    zip2(uzp1(A, B), uzp2(A, B)) --> B

Differential Revision: https://reviews.llvm.org/D109666

Change-Id: I4a6578db2fcef9ff71ad0e77b9fe08354e6dbfcd
2021-09-17 15:24:46 -07:00
Roman Lebedev 358df06f4e
[X86] Improve `matchBinaryShuffle()`'s `BLEND` lowering with per-element all-zero/all-ones knowledge
We can use `OR` instead of `BLEND` if either the element we are not picking is zero (or masked away);
or the element we are picking overwhelms (e.g. it's all-ones) whatever the element we are not picking:
https://alive2.llvm.org/ce/z/RKejao

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D109726
2021-09-17 19:13:33 +03:00
Simon Pilgrim db23f27786 [X86] X86PreTileConfig - Use const-ref iterator in for-range loop. NFCI.
Avoid unnecessary copies, reported by MSVC static analyzer.
2021-09-17 14:04:53 +01:00
Simon Pilgrim 5ebe95e256 [X86][Atom] Fix integer shuffles uops, latency and throughput
The MMX pack/unpck shuffles don't need an override - they have the same behaviour as other shuffles (Port0 only).
The SSE pslldq/psrldq shuffles don't need an override - they have the same behaviour as other shuffles (Port0 only).
The SSE pshufb shuffles use 4uops (+1 load).

Noticed the pslldq/psrldq issue while trying to improve reduction costs via the D103695 helper script, and fixed the others while reviewing. Confirmed with Intel AoM / Agner / InstLatX64.
2021-09-17 12:11:54 +01:00
Jonas Paulsson 1a5ab3e97c [SystemZ] Recognize .machine directive in parser.
The .machine directive can be used in assembly files to specify the ISA for
the instructions following it.

Review: Ulrich Weigand
Differential Revision: https://reviews.llvm.org/D109660
2021-09-17 12:03:54 +02:00
Petar Avramovic d477a7c2e7 GlobalISel/Utils: Refactor integer/float constant match functions
Rework getConstantVRegValWithLookThrough in order to make it clear whether we
are matching an integer/float constant only or any constant (default).
Add helper functions that get DefVReg and APInt/APFloat from constant instr
getIConstantVRegValWithLookThrough: integer constant, only G_CONSTANT
getFConstantVRegValWithLookThrough: float constant, only G_FCONSTANT
getAnyConstantVRegValWithLookThrough: either G_CONSTANT or G_FCONSTANT

Rename getConstantVRegVal and getConstantVRegSExtVal to getIConstantVRegVal
and getIConstantVRegSExtVal. These now only match G_CONSTANT as described
in comment.

Relevant matchers now return both DefVReg and APInt/APFloat.

Replace existing uses of getConstantVRegValWithLookThrough and
getConstantVRegVal with new helper functions. Any constant match is
only required in:
ConstantFoldBinOp: for constant argument that was bit-cast of float to int
getAArch64VectorSplat: AArch64::G_DUP operands can be any constant
amdgpu select for G_BUILD_VECTOR_TRUNC: operands can be any constant

In other places use integer only constant match.

Differential Revision: https://reviews.llvm.org/D104409
2021-09-17 11:22:13 +02:00
Chen Zheng 80584f0056 Revert "[PowerPC][ELF] make sure local variable space does not overlap with parameter save area"
This causes miscompile issues on PowerPC Linux.

This reverts commit 324bd467a2.
2021-09-17 08:07:18 +00:00
Jacob Lambert 4c1023b4b7 [AMDGPU] NFC: Fixing small spelling errors in AMDGPU header files
Nonfunctional commit fixing several minor spelling errors in llvm/lib/Target/AMDGPU header files.
Testing workflow as a new contributor.

Differential Revision: https://reviews.llvm.org/D109733
2021-09-16 13:03:09 -07:00
Craig Topper 73e5b9ea90 [RISCV] Select (srl (sext_inreg X, i32), uimm5) to SRAIW if only lower 32 bits are used.
SimplifyDemandedBits can turn srl into sra if the bits being shifted
in aren't demanded. This patch can recover the original sra in some cases.

I've renamed the tablegen class for detecting W users since the "overflowing operator"
term I originally borrowed from Operator.h does not include srl.

Reviewed By: luismarques

Differential Revision: https://reviews.llvm.org/D109162
2021-09-16 11:03:35 -07:00
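A C++ check that the low 32 bits agree (illustrative; assumes the usual arithmetic right shift for signed values):

  #include <cassert>
  #include <cstdint>

  int main() {
    const unsigned c = 5; // uimm5 shift amount
    const int32_t ws[] = {0, 0x12345678, (int32_t)0x80000001};
    for (int32_t w : ws) {
      uint64_t Sext = (uint64_t)(int64_t)w;    // (sext_inreg X, i32)
      uint32_t ViaSrl = (uint32_t)(Sext >> c); // low 32 bits of the SRLI
      uint32_t ViaSraiw = (uint32_t)(w >> c);  // what SRAIW leaves there
      assert(ViaSrl == ViaSraiw);
    }
    return 0;
  }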
Vang Thao 106959acc1 [AMDGPU] Inline non-kernel functions using extern lds
In https://reviews.llvm.org/D100481, forced inlining of all non-kernel
functions using lds was disabled since the AMDGPULowerModuleLDS pass now handles
static lds. However, that pass does not handle extern lds, so non-kernel
functions using extern lds must still be inlined.

Reviewed By: hsmhsm, arsenm

Differential Revision: https://reviews.llvm.org/D109773
2021-09-16 10:58:51 -07:00
Kazu Hirata cfc7402419 [llvm] Use drop_begin (NFC) 2021-09-16 08:46:26 -07:00
Doug Gregor a773db7d76 Add a command-line flag to control the Swift extended async frame info.
Introduce a new command-line flag `-swift-async-fp={auto|always|never}`
that controls how code generation sets the Swift extended async frame
info bit. There are three possibilities:

* `auto`: determines how to set the bit based on the deployment target, either
statically or dynamically via `swift_async_extendedFramePointerFlags`.
* `always`: the default, always set the bit statically, regardless of deployment
target.
* `never`: never set the bit, regardless of deployment target.

Patch by Doug Gregor <dgregor@apple.com>

Reviewed By: doug.gregor

Differential Revision: https://reviews.llvm.org/D109392
2021-09-16 06:57:45 -07:00
Alexandros Lamprineas 1bd5ea968e [ARM] Mitigate the cve-2021-35465 security vulnurability.
Recently a vulnerability was found in the implementation of the VLLDM
instruction in the Arm Cortex-M33, Cortex-M35P and Cortex-M55. If the
VLLDM instruction is abandoned due to an exception when it is partially
completed, it is possible for a subsequent non-secure handler to access
and modify the partially restored register values. This vulnerability is
identified as CVE-2021-35465.

The mitigation sequence varies between v8-m and v8.1-m as follows:

v8-m.main
---------
mrs        r5, control
tst        r5, #8       /* CONTROL_S.SFPA */
it         ne
.inst.w    0xeeb00a40   /* vmovne s0, s0 */
1:
vlldm      sp           /* Lazy restore of d0-d16 and FPSCR. */

v8.1-m.main
-----------
vscclrm    {vpr}        /* Clear VPR. */
vlldm      sp           /* Lazy restore of d0-d16 and FPSCR. */

More details on
developer.arm.com/support/arm-security-updates/vlldm-instruction-security-vulnerability

Differential Revision: https://reviews.llvm.org/D109157
2021-09-16 12:56:43 +01:00
Alexandros Lamprineas 61f25daa8d [ARM][CMSE] Clear the secure fp-registers when using softfp abi.
When expanding the non-secure call instruction we are emitting code
to clear the secure floating-point registers only if the targeted
architecture has floating-point support. The potential problem is
when the source code containing non-secure calls is built with
-mfloat-abi=soft but some other part of the system has been built
with -mfloat-abi=softfp (soft and softfp are compatible as they use
the same procedure calling standard). In this case floating-point
registers could leak to the non-secure state, as the code built with
-mfloat-abi=soft won't have cleared them, assuming no floating point
has been used.

Differential Revision: https://reviews.llvm.org/D109153
2021-09-16 12:56:43 +01:00
Cullen Rhodes 17f1ccc759 [AArch64][SVE] NFC: Remove unnecessary if 2021-09-16 11:26:46 +00:00
Simon Pilgrim 1ef62cb200 [X86] SimplifyDemandedVectorEltsForTargetNode - add PSADBW handling
Peek through PSADBW operands to handle non demanded elements.
2021-09-16 11:28:31 +01:00
Jay Foad 128a49727a [AMDGPU] Fix upcoming TableGen warnings on unused template arguments. NFC.
The warning is implemented by D109359 which is still in review.

Differential Revision: https://reviews.llvm.org/D109826
2021-09-16 09:07:18 +01:00
Jessica Paquette c8b3d7d6d6 [AArch64][GlobalISel] Ensure atomic loads always get assigned GPR destinations
The default register bank selection code for G_LOAD assumes that we ought to
use an FPR when the load is cast to a float/double.

For atomics, this isn't true; we should always use GPRs.

Without this patch, we crash in the following example:

https://godbolt.org/z/MThjas441

Also make the code a little more stylistically consistent while we're here.

Also test some other weird cast combinations.

Differential Revision: https://reviews.llvm.org/D109771
2021-09-15 17:05:09 -07:00
Ahmed Bougacha e159d3cbfc [AArch64][GlobalISel] Use MI::getIntrinsicID in more spots. NFC.
There's technically a difference in the logic used by these
findIntrinsicID and MachineInstr::getIntrinsicID, but it shouldn't
be a meaningful difference here, with G_INTRINSIC instructions.
getIntrinsicID's "first non-def" logic should be correct for those.
2021-09-15 16:45:34 -07:00
Simon Pilgrim 0767e43d87 [CostModel][X86] Adjust bitreverse/ctpop/ctlz/cttz AVX2+ costs based on llvm-mca reports
Based off the worse case numbers generated by D103695, the AVX2/512 bit reversing/counting costs were higher than necessary (based off instruction counts instead of actual throughput).
2021-09-15 13:04:40 +01:00
Martin Storsjö b33a43e57c [ARM] Move fetching of ARMSubtarget into the scopes that need it. NFC.
This was requested in D38253, but missed back then.

Differential Revision: https://reviews.llvm.org/D109046
2021-09-15 15:03:20 +03:00
David Green a2332d5332 [ARM] Prevent continuous folding of SUBC
Under some situations under Thumb1, we could be stuck in an infinite
loop recombining the same instruction. This puts a limit on that, not
combining SUBC with SUBE repeatedly.
2021-09-15 11:23:32 +01:00
Simon Pilgrim dcba994184 [X86] combineX86ShuffleChain - ensure we only peek through bitcasts to vectors (PR51858)
When searching for hidden identity shuffles (added at rG41146bfe82aecc79961c3de898cda02998172e4b), only peek through bitcasts to the source operand if it is a vector type as well.
2021-09-15 10:21:05 +01:00
Simon Atanasyan 533471ff2f [MIPS] Remove unused tblgen template args. NFC
Identified in D109359.
2021-09-15 12:16:07 +03:00
Cullen Rhodes 18655140d6 [NVPTX] NFC: Remove unused imm type intrinsic arg
Identified in D109359.

Reviewed By: tra

Differential Revision: https://reviews.llvm.org/D109755
2021-09-15 08:56:51 +00:00
Xiang1 Zhang 1f1c71aeac [X86][InlineAsm] Use mem size information (*word ptr) for "global variable + registers" memory expression in inline asm.
Differential Revision: https://reviews.llvm.org/D109739
2021-09-15 16:11:14 +08:00
Matt Arsenault f12174204c AMDGPU: Rename attributor class for uniform-work-group-size
This isn't really an AMDGPU specific attribute and could be moved to
generic code. It's also important to include the word uniform in the
name.
2021-09-14 19:49:08 -04:00
David Tenty 26b8031774 [CMake][AIX] Disable visibility options in build
Visibility options currently have limited support on AIX and may cause warnings or errors
depending on the build compiler used.

Reviewed By: ZarkoCA

Differential Revision: https://reviews.llvm.org/D108467
2021-09-14 16:05:12 -04:00
Heejin Ahn 468c4409f6 Revert "[WebAssembly] Rethrow longjmp in EH handling if EmSjLj is enabled"
This reverts commit b7b4ebbcfa.

Reason: This breaks several code-size tests in the Emscripten test suite
because this exports `emscripten_longjmp` for programs that didn't do it
before.
2021-09-14 12:59:42 -07:00
Joe Nash 3ce1b9631a [AMDGPU] Switch PostRA sched to MachineSched
Use GCNHazardRecognizer in postra sched.
Updated tests for the new schedules.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D109536

Change-Id: Ia86ba2ae168f12fb34b4d8efdab491f84d936cde
2021-09-14 15:11:27 -04:00
Sam Clegg ef8c9135ef [WebAssembly] Allow import and export of TLS symbols between DSOs
We previously had a limitation that TLS variables could not
be exported (and therefore could also not be imported).  This
change removes that limitation.

Differential Revision: https://reviews.llvm.org/D108877
2021-09-14 06:47:37 -07:00
Amy Kwan 5041a485b9 [PowerPC] Exploit Prefixed Load/Stores using the refactored Load/Store Implementation
This patch exploits the prefixed load and store instructions utilizing the
refactored load/store implementation introduced in D93370.

Prefixed load and store instructions are emitted whenever we are loading or
storing a value with an offset that fits into a 34-bit signed immediate.
Patterns for the prefixed load and stores are added in this patch, as well as
the implementation that detects when we are loading and storing a value with an
offset that fits in 34-bits.

Differential Revision: https://reviews.llvm.org/D96075
2021-09-14 08:39:49 -05:00
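The fits-in-34-bits test reduces to a signed-range check; a minimal sketch using LLVM's isInt helper (the function name here is made up for illustration):

  #include "llvm/Support/MathExtras.h"
  #include <cstdint>

  // True iff Offset fits a 34-bit signed immediate: [-2^33, 2^33 - 1].
  static bool fitsPrefixedOffset(int64_t Offset) {
    return llvm::isInt<34>(Offset);
  }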
David Green 5a6dfbb8cd [ARM] Teach DemandedVectorElts about VMOVN lanes
The class of instructions that write to narrow top/bottom lanes only
demand the even or odd elements of the input lanes. Which means that a
pair of VMOVNT; VMOVNB demands no lanes from the original input. This
teaches that to instcombine from the target hooks available through
ARMTTIImpl.

Differential Revision: https://reviews.llvm.org/D109325
2021-09-14 11:05:31 +01:00
Tim Northover f287405419 AArch64: fix indentation of ProcAppleA14. NFC. 2021-09-14 10:04:15 +01:00
Cullen Rhodes 6fbc167c0a [WebAssembly] NFC: Remove unused tblgen template args
Identified in D109359.

Reviewed By: aheejin

Differential Revision: https://reviews.llvm.org/D109689
2021-09-14 08:26:15 +00:00
Cullen Rhodes 742cf3996e [AArch64] NFC: Use 'asm' in SIMDScalarCPY
Fixes a warning identified in D109359. The mnemonic is also mov, not
cpy.

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D109573
2021-09-14 08:26:15 +00:00
Heejin Ahn e85ed44373 [WebAssembly] Fix a typo in comments 2021-09-14 00:45:02 -07:00
Chen Zheng 946e69d253 [PowerPC] prepare more loop load/store instructions
The PPCLoopInstrFormPrep pass can now prepare load/store instructions
in a loop whose increment is not a constant integer.

Reviewed By: jsji

Differential Revision: https://reviews.llvm.org/D105872
2021-09-14 05:00:48 +00:00
Matt Arsenault c305513cc2 AMDGPU: Fix assert with indirect call with known required inputs
The attributor can determine that some indirect calls do not require
special inputs. The special inputs will still be present in the ABI,
so we need to allocate the registers and pass undefs.
2021-09-13 22:54:11 -04:00
Brendon Cahoon 42dace9c5b [Hexagon] Use getTypeAllocSize to compute difference between objects
The code was using getTypeStoreSize to calculate the difference
between consecutive objects. The calculation was incorrect due
to padding that is added between consecutive objects. The
getTypeAllocSize includes the padding amount. For example,
if the type is [19 x i8], the difference between consecutive
objects is 32 bytes, not 19 bytes.

A second case for getTypeAllocSize is needed when computing
the pointer values for the vector accesses. The calculation needs
to account for the padding as well.

Differential Revision: https://reviews.llvm.org/D109403
2021-09-13 19:04:59 -05:00
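The underlying API distinction, shown with a hedged, self-contained example (x86_fp80 on x86-64 is a simple case where the two sizes visibly differ: 10 bytes stored, 16 bytes of spacing between consecutive objects):

  #include "llvm/IR/DataLayout.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/Type.h"

  int main() {
    llvm::LLVMContext Ctx;
    llvm::DataLayout DL("e-m:e-i64:64-f80:128-n8:16:32:64-S128"); // x86-64
    llvm::Type *T = llvm::Type::getX86_FP80Ty(Ctx);
    uint64_t Store = DL.getTypeStoreSize(T); // 10: bytes a store writes
    uint64_t Alloc = DL.getTypeAllocSize(T); // 16: includes alignment padding
    return Store < Alloc ? 0 : 1;
  }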
Ankit Aggarwal a72763af67 [Hexagon] Handle bitcast of i64/i128 -> v64i1/v128i1 2021-09-13 18:52:30 -05:00
Heejin Ahn c55b6c593b [WebAssembly] Handle _setjmp and _longjmp in SjLj
In some platforms `_setjmp` and `_longjmp` are used instead of `setjmp`
and `longjmp`. This CL adds support for them.

Fixes https://github.com/emscripten-core/emscripten/issues/14999.

Reviewed By: dschuff

Differential Revision: https://reviews.llvm.org/D109669
2021-09-13 14:20:04 -07:00
Heejin Ahn b7b4ebbcfa [WebAssembly] Rethrow longjmp in EH handling if EmSjLj is enabled
This is a fix on top of D106525's Case 2. In D106525, in
`runEHOnFunction`, which handles Emscripten EH, we rethrow `longjmp` only
when the module has any usage of `setjmp` or `longjmp`. But now that Wasm
object files are linked using wasm-ld, the module this pass sees is not
the whole program, and even if this module does not contain any
`longjmp`, another file can contain it and can be linked with the
current module. This enables the rethrowing of longjmp whenever
Emscripten SjLj is enabled, regardless of whether it is used in this
module or not.

Reviewed By: dschuff

Differential Revision: https://reviews.llvm.org/D109670
2021-09-13 14:15:25 -07:00
Tim Northover 5d070c8259 SwiftAsync: use runtime-provided flag for extended frame if back-deploying
When back-deploying Swift async code we can't always toggle the flag showing an
extended frame is present because it will confuse unwinders on systems released
before this feature. So in cases where the code might run there, we `or` in a
mask provided by the runtime (as an absolute symbol) telling us whether the
unwinders can cope.

When deploying only for newer OSs, we can still hard-code the bit-set for
greater efficiency.
2021-09-13 13:54:46 +01:00
Cullen Rhodes 1d771e19fd [AArch64] NFC: Remove unused template args
Identified in D109359.

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D109491
2021-09-13 10:39:33 +00:00
David Truby 915e9e76bf [llvm][sve] Lowering for VLS masked extending loads
This extends the custom lowering for extending loads on
fixed length vectors in SVE to support masked extending loads.

The existing tests for correct behaviour of masked extending loads
exhibit bad code generation due to the legalisation of i1 vectors.
They have been left as-is and new tests have been added that do not
exhibit this behaviour.

Differential Revision: https://reviews.llvm.org/D108200
2021-09-13 11:13:25 +01:00
Cullen Rhodes 97a6d76694 [Hexagon] NFC: Remove unused tblgen template args
Identified in D109359.

Reviewed By: kparzysz

Differential Revision: https://reviews.llvm.org/D109604
2021-09-13 10:09:08 +00:00
Cullen Rhodes 9e435c96de [Lanai] NFC: Remove unused tblgen template arg 'OpNode'
Identified in D109359.

Reviewed By: jpienaar

Differential Revision: https://reviews.llvm.org/D109606
2021-09-13 10:09:08 +00:00
Jay Foad 477b9bc9f7 [AMDGPU] Minor cleanup after D109483. NFC. 2021-09-13 10:27:15 +01:00
Jim Lin f29336104d [RISCV] Rename prefix `FeatureExt*` to `FeatureStdExt*` for all sub-extension
Rename prefix `FeatureExt*` to `FeatureStdExt*` for all sub-extension for consistency

Reviewed By: HsiangKai, asb

Differential Revision: https://reviews.llvm.org/D108187
2021-09-13 16:24:15 +08:00
Simon Pilgrim 65ad09da0e [X86][SLM] Fix DIVPD/DIVPS/RCPPS/RSQRTPS/SQRTPD/SQRTPS/DPPD/DPPS uops, latency and throughput
The packed variants of the instructions had been modelled as the same as the scalar variants.

Reported during a run of llvm-exegesis on a cheap SLM box and matches what Agner / InstLatX64 report as well.
2021-09-13 08:36:43 +01:00
Arthur Eubanks f94a118a6e [NFC] Avoid using pointee types in PPCISelLowering
A cmpxchg's new value type is the same as the pointer operand's pointee type.
2021-09-12 17:37:35 -07:00
Craig Topper 283879793d [RISCV] Initial support .insn directive for the assembler.
This allows for a custom encoding to be emitted. It can also be
used with inline assembly to allow the custom instruction to be
register allocated like other instructions.

I initially started from SystemZ's implementation, but some of
the formats allow operands to be specified in multiple ways so I
had to add support for matching different operand class lists for
the same format. That implementation is a simplified version of
what is emitted by tablegen for regular instructions.

I've left out the compressed formats. And I haven't supported the
named opcodes like LUI or OP_IMM_32. Those can be added in future
patches.

Documentation can be found here https://sourceware.org/binutils/docs-2.37/as/RISC_002dV_002dFormats.html

Reviewed By: jrtc27, MaskRay

Differential Revision: https://reviews.llvm.org/D108602
2021-09-12 15:56:12 -07:00
Nikita Popov 45c467346a [LAA] Pass access type to getPtrStride()
Pass the access type to getPtrStride(), so it is not determined
from the pointer element type. Many cases still fetch the element
type at a higher level though, so this only partially addresses
the issue.
2021-09-11 19:16:49 +02:00
Simon Pilgrim df975e4590 [X86][SLM] Fix PSAD/MPSAD uops, latency and throughput
Noticed while trying to improve generic reduction costs via the D103695 helper script. Confirmed with Intel AoM / Agner / InstLatX64.
2021-09-11 11:44:09 +01:00
Simon Pilgrim 484944ac3b [X86][SLM] Fix HADD/HSUB uops, latency and throughput
Noticed while trying to improve generic reduction costs via the D103695 helper script. Confirmed with Intel AoM / Agner / InstLatX64.
2021-09-11 11:44:09 +01:00
Simon Pilgrim 51d04e2268 [X86][SLM] Swap LoadLat and LoadUOps in the SLMWriteResPair<> helper. NFC.
We set the LoadUOps argument a lot more frequently that LoadLat, by swapping them we can simplify a number of declarations.
2021-09-11 11:44:09 +01:00
Jessica Paquette 4e408aae2c [AArch64][GlobalISel] Select full-fp16 s16 G_FCONSTANT as a constant pool load
When we have full-fp16 support, we should (manually select) s16 G_FCONSTANT to
a constant pool load.

Add support for that to `emitLoadFromConstantPool` + the existing constant
selection code.

Also tidy up the constant selection code a little. There were some out-of-date
comments + some dead code.

Differential Revision: https://reviews.llvm.org/D108957
2021-09-10 19:36:34 -07:00
Usman Nadeem ab111e982f Revert "Revert "[AArch64][SVE][InstCombine] Canonicalize aarch64_sve_dup_x intrinsic to IR splat operation""
This reverts commit eee7d225de.
Effectively relanding 98c37247d8
after fixing the failing tests.

Change-Id: I5d7461aeb820a2d5f1895457d824a8de4d316ee5
2021-09-10 18:11:24 -07:00
Johannes Doerfert c09fbbdcfb Reapply "[GlobalOpt][FIX] Do not embed initializers into AS!=0 globals"
This reapplies commit 7dbba3376f, or, put
differently, this reverts commit d9a8d20827.

The test now requires the amdgpu and nvptx backends explicitly as it
won't work properly without them.
2021-09-10 15:22:56 -05:00
Mark Schimmel 7c82db3634 [ARC] Improve code generated for i32 ADDC/ADDE and SUBC/SUBE
This change improves the code generated for long long addition and subtraction

Differential Revision: https://reviews.llvm.org/D109615
2021-09-10 13:04:08 -07:00
Usman Nadeem eee7d225de Revert "[AArch64][SVE][InstCombine] Canonicalize aarch64_sve_dup_x intrinsic to IR splat operation"
This reverts commit 98c37247d8.
2021-09-10 13:01:48 -07:00
Usman Nadeem 98c37247d8 [AArch64][SVE][InstCombine] Canonicalize aarch64_sve_dup_x intrinsic to IR splat operation
Differential Revision: https://reviews.llvm.org/D109118

Change-Id: I47adc1984a54bea02bf5a0a767b765afe7e16aa3
2021-09-10 12:52:14 -07:00
Kazu Hirata c9fca53af1 [CodeGen, Target] Use pred_empty and succ_empty (NFC) 2021-09-10 11:11:31 -07:00
Huihui Zhang da4a2fd832 [AArch64ISelLowering] Fix null pointer access in performSVEAndCombine.
When combining 'and' of an unsigned unpack and shuffle instruction,
bail early if shuffle is not constructed from a constant integer.

Reviewed By: paulwalker-arm

Differential Revision: https://reviews.llvm.org/D109556
2021-09-10 10:36:43 -07:00
Johannes Doerfert d9a8d20827 Revert "[GlobalOpt][FIX] Do not embed initializers into AS!=0 globals"
This reverts commit 7dbba3376f.

There seems to be a problem with the tests, investigating now:
  https://lab.llvm.org/buildbot/#/builders/61/builds/14574
2021-09-10 12:23:08 -05:00
Johannes Doerfert 7dbba3376f [GlobalOpt][FIX] Do not embed initializers into AS!=0 globals
Not all address spaces support initializers for globals and we can
therefore not set them without checking if they are allowed. This
patch adds a hook into TTI to check if an AS allows non-undef
initializers. We disable it for all but address space 0 by default,
NVPTX and AMDGPU targets allow all but address space 3.

Reviewed By: tra

Differential Revision: https://reviews.llvm.org/D109337
2021-09-10 12:08:50 -05:00
David Green deefeffb5d [ARM] Remove unused tblgen arguments. NFC
As per D109359, this removes or makes use of some of the existing unused
NEON and base ARM tblgn arguments.
2021-09-10 18:03:54 +01:00
Craig Topper 1b736bda3b [RISCV] Enable CGP to sink splat operands of Add/Sub/Mul/Shl/LShr/AShr
LICM may have pulled out a splat, but with .vx instructions we
can fold it into an operation.

This patch enables CGP to reverse the LICM transform and move the
splat back into the loop.

I've started with the commutable integer operations and shifts, but we can
extend this with more operations in future patches.

Reviewed By: frasercrmck

Differential Revision: https://reviews.llvm.org/D109394
2021-09-10 09:04:01 -07:00
Craig Topper 6c7cadb8c1 [RISCV] Teach vsetvli insertion that stores don't use the policy bits in vtype.
This can avoid a vsetvl after a tail undisturbed operation.

Differential Revision: https://reviews.llvm.org/D109549
2021-09-10 09:03:20 -07:00
David Green 6b7cdb40da [ARM] Remove unused tblgen arguments. NFCI
As per D109359, this removes or makes use of some of the existing unused
MVE tblgn arguments.
2021-09-10 15:06:31 +01:00
Serge Bazanski 788e7b3b8c [Lanai] implement wide immediate support
This fixes LanaiTTIImpl::getIntImmCost to return valid costs for i128
(and wider) values. Previously any immediate wider than
64 bits would cause Lanai llc to crash.

A regression test is also added that exercises this functionality.

Reviewed By: jpienaar

Differential Revision: https://reviews.llvm.org/D107091
2021-09-10 10:54:43 +00:00
Serge Bazanski 231bfaab31 [Lanai] fix MC / objdump
D78776 removed is{Call,Branch,UnconditionalBranch} guards in objdump
before calling MCInstrAnalysis::evaluateBranch. This is fine for other
architectures as they gracefully handle evaluateBranch being called on
non-branches. However, the Lanai MCInstrAnalysis implementation didn't
and that change caused it to crash.

This inserts the same guards back into Lanai's evaluateBranch
implementation and adds a smoke test that exercises `llc | objdump` so
this kind of regression is hopefully caught next time.

Reviewed By: jpienaar, MaskRay

Differential Revision: https://reviews.llvm.org/D107593
2021-09-10 10:46:13 +00:00
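The restored guard amounts to the following pattern, shown here as a hedged sketch around the generic MCInstrAnalysis interface rather than the actual Lanai code:

  #include "llvm/MC/MCInst.h"
  #include "llvm/MC/MCInstrAnalysis.h"

  // Bail out gracefully for non-branches instead of letting a target's
  // evaluateBranch crash on them.
  static bool safeEvaluateBranch(const llvm::MCInstrAnalysis &MIA,
                                 const llvm::MCInst &Inst, uint64_t Addr,
                                 uint64_t Size, uint64_t &Target) {
    if (!MIA.isBranch(Inst) && !MIA.isCall(Inst) &&
        !MIA.isUnconditionalBranch(Inst))
      return false;
    return MIA.evaluateBranch(Inst, Addr, Size, Target);
  }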
Florian Hahn 5d1a6d0d1a
[ARM] Remove unnecessary use of replaceSymbolicStrideSCEV (NFC).
When passing an empty strides map, there's nothing to replace for
replaceSymbolicStrideSCEV and it just returns the SCEV for Ptr. There
should be no need to call the function.

Reviewed By: SjoerdMeijer

Differential Revision: https://reviews.llvm.org/D109462
2021-09-10 10:47:26 +02:00
hsmahesha 0c28814015 Revert "[AMDGPU] Split entry basic block after alloca instructions."
This reverts commit 98f4713122.

Without any (theoretical/practical) guarantee that all the allocas within
the *entry* basic block are clustered together at the beginning of the block,
this patch is doomed to fail. Hence reverting it.
2021-09-10 10:23:51 +05:30
Yonghong Song e52617c31d BPF: change BTF_KIND_TAG format
Previously we had the following binary representation:
    struct btf_type { name, info, type }
    struct btf_tag { __u32 component_idx; }
If the tag points to a struct/union/var/func type, we will have
   kflag = 1, component_idx = 0
If the tag points to a struct/union member or func argument, we will have
   kflag = 0, component_idx = 0, ..., vlen - 1

This rather complicates the interface, since both kflag and
component_idx are needed to determine a tag's legality and index.

This patch simplifies the interface by removing kflag involvement.
   component_idx = (u32)-1 : tag pointing to a type
   component_idx = 0 ... vlen - 1 : tag pointing to a member or argument
and kflag is always 0 and there is no need to check.

Differential Revision: https://reviews.llvm.org/D109560
2021-09-09 19:03:57 -07:00
Matt Arsenault 0197cd0bd4 AMDGPU: Optimize amdgpu-no-* attributes
This allows clobbering a few extra registers in the fixed ABI, and
avoids some workitem ID packing instructions.
2021-09-09 18:24:28 -04:00
Matt Arsenault db4963d080 AMDGPU: Use attributor to propagate uniform-work-group-size
Drop the legacy version in AMDGPUAnnotateKernelFeatures. This has the
side effect of now respecting the linkage, and not changing externally
visible functions.
2021-09-09 18:24:28 -04:00
Matt Arsenault 722b8e0e5a AMDGPU: Invert ABI attribute handling
Previously we assumed all callable functions did not need any
implicitly passed inputs, and added attributes to functions to
indicate when they were necessary. Requiring attributes for
correctness is pretty ugly, and it makes supporting indirect and
external calls more complicated.

This inverts the direction of the attributes, so an undecorated
function is assumed to need all implicit inputs. This enables
AMDGPUAttributor by default to mark when functions are proven to not
need a given input. This strips the equivalent functionality from the
legacy AMDGPUAnnotateKernelFeatures pass.

However, AMDGPUAnnotateKernelFeatures is not fully removed at this
point although it should be in the future. It is still necessary for
the two hacky amdgpu-calls and amdgpu-stack-objects attributes, which
would be better served by a trivial analysis on the IR during
selection. Additionally, AMDGPUAnnotateKernelFeatures still
redundantly handles the uniform-work-group-size attribute to be
removed in a future commit.

At this point when not using -amdgpu-fixed-function-abi, we are still
modifying the ABI based on these newly negated attributes. In the
future, this option will be removed and the locations for implicit
inputs will always be fixed. We will then use the new attributes to
avoid passing the values when unnecessary.
2021-09-09 18:24:28 -04:00
Amy Kwan 351a0d8a90 [PowerPC] Update PC-Relative Load/Store Patterns to use the refactored Load/Store Implementation
This patch updates the PC-Relative load and store patterns to utilize the
refactored load/store implementation introduced in D93370.

PC-Relative implementation has been added to PPCISelLowering.cpp, and also the
patterns in PPCInstrPrefix.td have been updated and no longer require AddedComplexity.
All existing test cases pass with this update.

Differential Revision: https://reviews.llvm.org/D95116
2021-09-09 15:38:42 -05:00
Craig Topper 9af8f1b18e [SelectionDAG] Add isZero/isAllOnes methods to ConstantSDNode.
Soft deprecate isNullValue/isAllOnesValue and update in-tree
callers. This matches the changes to the APInt interface from
D109483.

Reviewed By: lattner

Differential Revision: https://reviews.llvm.org/D109535
2021-09-09 13:28:30 -07:00
Jameson Nash e20f69f612 [Aarch64] Correct register class for pseudo instructions
This constrains the Mov* and similar pseudo instructions to take
the GPR64common register class rather than GPR64. GPR64 includes XZR,
which is invalid here because these pseudo instructions expand
into an adrp/add pair sharing a destination register. XZR is invalid
on add, and attempting to encode it will instead increment the stack
pointer, causing crashes (downstream report at [1]). The test case
there reproduces on LLVM 11, but I do not have a test case that
reaches this code path on main, since it is being masked by
improved dead code elimination introduced in D91513. Nevertheless,
this seems like a good thing to fix in case there are other cases
that dead code elimination doesn't clean up (e.g. if `optnone` is
used and the optimization is skipped).

I think it would be worth auditing uses of GPR64 in pseudo
instructions to see if there are any similar issues, but I do not
have a high enough view of the backend or knowledge of the
AArch64 architecture to do this quickly.

[1] https://github.com/JuliaLang/julia/issues/39818

Reviewed By: t.p.northover

Differential Revision: https://reviews.llvm.org/D97435
2021-09-09 14:31:49 -04:00
Artem Belevich d99a83b4e5 [NVPTX] Simplify and generalize constant printer.
This allows handling i128 values and fixes
https://bugs.llvm.org/show_bug.cgi?id=51789.

Differential Revision: https://reviews.llvm.org/D109458
2021-09-09 11:30:19 -07:00
Chris Lattner d51da74889 [CodeGen] Use DAG.getAllOnesConstant where possible to simplify code. NFC. 2021-09-09 10:22:51 -07:00
Craig Topper 124bcc1a13 [X86] Disable muloti4 libcalls for x86-64.
This library function only exists in compiler-rt not libgcc. So
this would fail to link unless we were linking with compiler-rt.

This is consistent with the recent removal of calls to mulodi4 on
32-bit targets like D108928.

I suppose maybe we could keep the libcalls for platforms like
Darwin that use compiler-rt exclusively?

Reviewed By: nickdesaulniers, MaskRay

Differential Revision: https://reviews.llvm.org/D109385
2021-09-09 10:03:15 -07:00
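A sketch of IR that would previously lower to a `__muloti4` call on x86-64 and must now be expanded inline (function name hypothetical):

```
declare { i128, i1 } @llvm.smul.with.overflow.i128(i128, i128)

define i1 @mul_overflows(i128 %a, i128 %b) {
  %r = call { i128, i1 } @llvm.smul.with.overflow.i128(i128 %a, i128 %b)
  %ov = extractvalue { i128, i1 } %r, 1
  ret i1 %ov
}
```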
Chris Lattner 735f46715d [APInt] Normalize naming on keep constructors / predicate methods.
This renames the primary methods for creating a zero value to `getZero`
instead of `getNullValue` and renames predicates like `isAllOnesValue`
to simply `isAllOnes`.  This achieves two things:

1) This starts standardizing predicates across the LLVM codebase,
   following (in this case) ConstantInt.  The word "Value" doesn't
   convey anything of merit, and is missing in some of the other things.

2) Calling an integer "null" doesn't make any sense.  The original sin
   here is mine and I've regretted it for years.  This moves us to calling
   it "zero" instead, which is correct!

APInt is widely used and I don't think anyone is keen to take massive source
breakage on anything so core, at least not all in one go.  As such, this
doesn't actually delete any entry points; it "soft deprecates" them with a
comment.

Included in this patch are changes to a bunch of the codebase, but there are
more.  We should normalize SelectionDAG and other APIs as well, which would
make the API change more mechanical.

Differential Revision: https://reviews.llvm.org/D109483
2021-09-09 09:50:24 -07:00
Neumann Hon 0782e55c26 [SystemZ] [NFC] Add SystemZELFFrameLowering and SystemZXPLINKFrameLowering classes.
This patch adds class SystemZFrameLowering which is a SystemZ-specific class
detailing special registers used by calling conventions on the target.
SystemZELFFrameLowering and SystemZXPLINKFrameLowering implement this class
for ELF and XPLINK64 respectively. Previous functionality in SystemZFrameLowering
is moved to SystemZELFFrameLowering. SystemZXPLINKFrameLowering can then be
implemented in future patches.

Reviewed By: uweigand, Kai

Differential Revision: https://reviews.llvm.org/D108777
2021-09-09 12:23:40 -04:00
Simon Pilgrim c31a202233 [X86][AVX] Add missing X86ISD::VBROADCAST(v2f64 -> v4f64) isel pattern for AVX1 targets
As discussed on the ticket, I'm intending to add additional 128->256 patterns when we have test coverage, but this addresses a known crash.

Differential Revision: https://reviews.llvm.org/D109434
2021-09-09 12:16:23 +01:00
Bradley Smith 8089f9ed5a [AArch64][SVE] Add missing patterns for unpredicated subr intrinsics
Differential Revision: https://reviews.llvm.org/D109369
2021-09-09 10:28:37 +00:00
Cullen Rhodes d42f76fd36 [AArch64][SVE] NFC: Remove unused template args
For sve_fp_3op_p_zds_zx we have zero patterns downstream but the
intrinsic args can be added again if/when the patterns are implemented.

Identified in D109359.

Reviewed By: sdesmalen

Differential Revision: https://reviews.llvm.org/D109429
2021-09-09 07:10:57 +00:00
Cullen Rhodes 5b848a35d2 [AArch64][SVE] NFC: Use stepvector directly in index multiclasses
Also fixes a couple of warnings identified in D109359:

  SVEInstrFormats.td:5099:59: warning: unused template argument: sve_int_index_ri::step_vector
  SVEInstrFormats.td:5133:59: warning: unused template argument: sve_int_index_rr::step_vector

Reviewed By: david-arm

Differential Revision: https://reviews.llvm.org/D109422
2021-09-09 07:10:57 +00:00
Alexander Pivovarov 4bc8dbe0ca [RISCV] Add SiFive cores E and S series
Add SiFive cores E20, E21, E24, E34, S21, S54 and S76

Differential Revision: https://reviews.llvm.org/D109260
2021-09-08 23:59:04 -07:00
Yvan Roux 261cbe98c3 [RISCV] Fix Machine Outliner jump table handling.
Don't outline machine instructions which are using jump table indexes
since they are materialized as local labels (like the already handled
case of constant pools).

Reviewed By: paquette

Differential Revision: https://reviews.llvm.org/D109436
2021-09-09 07:32:30 +02:00
Jessica Paquette 22a64d4a14 [MachineOutliner][AArch64] Ensure LR is live-in when inserting reg-save calls
Similar to other code which handles creating the function frame.

If LR isn't live-in to the block that we're inserting the call into, we'll get
a MachineVerifier error.
2021-09-08 17:44:27 -07:00
Craig Topper a574f0e0c3 [RISCV] Disable use of i128 shift libcalls on RV32.
Since i128 isn't a legal C type on RV32, I don't believe
libgcc implements these functions for RV32. compiler-rt
does implement them because i128 support is enabled
in order to handle long double.

This is consistent with 32-bit X86 and ARM.

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D109383
2021-09-08 14:26:07 -07:00
Akira Hatanaka dea6f71af0 [ObjC][ARC] Use the addresses of the ARC runtime functions instead of
integer 0/1 for the operand of bundle "clang.arc.attachedcall"

https://reviews.llvm.org/D102996 changes the operand of bundle
"clang.arc.attachedcall". This patch makes changes to llvm that are
needed to handle the new IR.

This should make it easier to understand what the IR is doing and also
simplify some of the passes as they no longer have to translate the
integer values to the runtime functions.

Differential Revision: https://reviews.llvm.org/D103000
2021-09-08 11:58:03 -07:00
Kirill Stoimenov 3f875134a7 [asan] Fixed the jump to use the 4-byte offset version.
This should have been the 4-byte version in the first place. Unfortunately there is no easy way to add a test, as both the 1-byte and 4-byte versions are printed as 'jmp' in the assembly code.

Reviewed By: kda

Differential Revision: https://reviews.llvm.org/D109453
2021-09-08 17:58:12 +00:00
Wouter van Oortmerssen a99fb86c65 [WebAssembly] Change WebAssemblyMCLowerPrePass to ModulePass
It was previously a FunctionPass, which subverted its purpose of collecting ALL symbols before MC lowering, since the results depended on how LLVM schedules function passes.
Fixes https://bugs.llvm.org/show_bug.cgi?id=51555

Differential Revision: https://reviews.llvm.org/D109202
2021-09-08 10:47:43 -07:00
Craig Topper aca14c8cf1 [RISCV] Remove unused tablegen template parameters. NFC
Identified in D109359
2021-09-08 10:01:42 -07:00
Craig Topper b04c09c07c [RISCV] Use V0 instead of VMV0 for mask vectors in isel patterns.
This is consistent with the RVV intrinsic patterns. This has been
shown to prevent some "ran out of registers" errors in our internal
testing.

Unfortunately, there are some regressions on LMUL=8 tests in here.
I think the lack of registers with LMUL=8 just makes it very hard
to schedule correctly.

Reviewed By: frasercrmck

Differential Revision: https://reviews.llvm.org/D109245
2021-09-08 09:46:21 -07:00
Roman Lebedev 0852f8706b [X86] X86DAGToDAGISel::matchBitExtract(): support 'num high bits to clear' pattern
Currently, we only deal with the case where we can match
the number of low bits to be kept, i.e.:
```
x & ((1 << y) - 1)
```
will extract low `y` bits of `x`.

But what will
```
x & (-1 >> y)
```
do?

Logically, it will extract `bitwidth(x) - y` low bits, i.e.:
```
x & ~(-1 << (bitwidth(x)-y))
```
... except we can't do such a transformation in IR in general,
because if we wanted to extract all the bits `(-1 >> 0)` is fine,
but `-1 << bitwidth(x)` would be `poison`: https://alive2.llvm.org/ce/z/BKJZfw.

Yet, here with BMI's BEXTR and BMI2's BZHI we don't have any such problems with edge-cases.
So what we can do is: https://alive2.llvm.org/ce/z/gm5M2B

As briefly discussed with @craig.topper, this appears to be not worse than what we'd end up with currently (a pair of shifts):
* https://godbolt.org/z/nsPb8bejs (direct data dependency, sequential execution)
* https://godbolt.org/z/7bj3zeh1d (no direct data dependency, parallel execution)

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D107923
2021-09-08 19:27:08 +03:00
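The newly matched pattern, as IR (function name hypothetical); with BMI2 this can select to BZHI rather than a shift pair:

```
; x & (-1 >> y) keeps the low (32 - y) bits of x.
define i32 @clear_high_bits(i32 %x, i32 %y) {
  %mask = lshr i32 -1, %y
  %r = and i32 %x, %mask
  ret i32 %r
}
```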
Craig Topper 1f16191906 [RISCV] Add an GPR def to the Zvlseg SPILL/RELOAD pseudos
The expansion of these pseudos creates ADD instructions. Those
ADDs modify a GPR so that it no longer contains the same value
as the input base pointer. Therefore, I believe we should have a
GPR as a Def on these instructions and expansion should get the
destination register for the ADDs from that operand.

At least in our tests here this works out so that register
scavenging picks the same register as the base pointer.

Reviewed By: frasercrmck

Differential Revision: https://reviews.llvm.org/D109405
2021-09-08 09:23:33 -07:00
Cullen Rhodes 89786c2b99 [AArch64][SME] Fix imm bug in mov vector to tile aliases
Also fixes a warning mentioned in D109359.

Reviewed By: sdesmalen

Differential Revision: https://reviews.llvm.org/D109363
2021-09-08 07:42:16 +00:00
Sander de Smalen 981f7d563a [AArch64] Implement extract_subvector for predicates.
This patch implements extract_subvector for predicate types when
the input type is more than twice the size of the subvector that
is being extracted.

Reviewed By: CarolineConcatto

Differential Revision: https://reviews.llvm.org/D109314
2021-09-08 08:18:34 +01:00
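A sketch of a qualifying extract, using the 2021-era experimental intrinsic name (function name hypothetical); the result is a quarter of the input size, i.e. more than a factor-of-two reduction:

```
declare <vscale x 4 x i1> @llvm.experimental.vector.extract.nxv4i1.nxv16i1(<vscale x 16 x i1>, i64)

define <vscale x 4 x i1> @extract_quarter(<vscale x 16 x i1> %p) {
  ; extract 4 predicate lanes starting at (element-aligned) index 8
  %r = call <vscale x 4 x i1> @llvm.experimental.vector.extract.nxv4i1.nxv16i1(<vscale x 16 x i1> %p, i64 8)
  ret <vscale x 4 x i1> %r
}
```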
Justin Latimer b0d4d969e2 [AVR] Add support for the tinyAVR 0-series and tinyAVR 1-series
Reviewed By: Dylan McKay, Ben Shi

Differential Revision: https://reviews.llvm.org/D103136
2021-09-08 02:35:26 +00:00
Ben Shi f0460fa4eb [AArch64] Improve target hook function to decide folding (mul (add x, c1), c2)
Prevent the folding if it leads to worse code.

Reviewed By: dmgreen, kda

Differential Revision: https://reviews.llvm.org/D108871
2021-09-08 01:51:26 +00:00
Wang, Pengfei 9d7d34c769 [X86][MS] Fix the alignment mismatch of vector variable arguments on Win32
The alignment of vector variable arguments on the callee side is 4, which
is consistent with MSVC. But the caller aligns them to the size of the vector
arguments, which results in runtime failures. This patch fixes the problem by
trimming the alignment to 4 bytes for variable arguments on Win32.

Fixed vector arguments are passed by pointer on Win32. So they don't have
the problem.

I didn't find a doc in MSDN for this calling convention, so I did several
experiments here: https://godbolt.org/z/n1zn1Gx1z

Reviewed By: rnk

Differential Revision: https://reviews.llvm.org/D108887
2021-09-08 09:26:44 +08:00
Heejin Ahn a1d522939c [WebAssembly] Error out on indirect uses of setjmp
Both Wasm & Emscripten SjLj handling has a restriction that `setjmp`
cannot be called indirectly. I thought we had been erroring out on
indirect uses of `setjmp`, but a recent CL disrupted the logic and
we no longer error out.

We currently
1. Collect functions that contain `setjmp` calls in `SetjmpUsers`. This
   only counts direct calls:
   8f77dc459e/llvm/lib/Target/WebAssembly/WebAssemblyLowerEmscriptenEHSjLj.cpp (L869-L878)
2. Run `runSjLjOnFunction` only on those `SetjmpUsers`. Within
   `runSjLjOnFunction`, if we see an indirect use of `setjmp`, we error
   out:
   8f77dc459e/llvm/lib/Target/WebAssembly/WebAssemblyLowerEmscriptenEHSjLj.cpp (L1218-L1221)

So if there are only indirect setjmp calls within the module,
`SetjmpUsers` will be empty, and `runSjLjOnFunction` is not even entered
once. And the indirect `setjmp` call will error out at link time. So in
this CL we check for the indirect uses of `setjmp` upfront before we
enter `runSjLjOnFunction`.

Also this currently errors out on `invoke @setjmp`, which can only occur
when using Wasm EH + Wasm SjLj within a function. We recently added Wasm
SjLj support but we don't support using Wasm EH + Wasm SjLj in the same
function yet. We plan to add this support very soon, so I don't think
it's worth creating another test file just for this. (This is an error
test so it needs its own file)

Reviewed By: dschuff

Differential Revision: https://reviews.llvm.org/D109375
2021-09-07 15:52:58 -07:00
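A sketch of an indirect use that the upfront check now rejects (function name hypothetical); once the address of setjmp escapes, the transformation cannot track its call sites:

```
declare i32 @setjmp(i8*) returns_twice

define i8* @escape_setjmp() {
  ; Taking the address of setjmp is an indirect use and now errors out.
  ret i8* bitcast (i32 (i8*)* @setjmp to i8*)
}
```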
Irina Dobrescu 7023cefe61 [AArch64][Global ISel] Add sext/zext of vector extract improvements
This patch adds improvements for sext/zext of a vector extract in Global
ISel.

For example, this piece of code:

define i64 @si64(<4 x i32> %0, i32 %1) {
  %3 = extractelement <4 x i32> %0, i64 1
  %s = sext i32 %3 to i64
  ret i64 %s
}

Used to have this lowering:
si64:
  mov s0, v0.s[1]
  fmov w8, s0
  sxtw x0, w8
  ret

Whereas this patch makes it lower to this:
si64:
  smov x0, v0.h[0]
  ret

Differential Revision: https://reviews.llvm.org/D108137
2021-09-07 21:17:51 +01:00
Elliot Saba ae8507b0df [X86] Don't clobber EBX in stackprobes
On X86, the stackprobe emission code chooses the `R11D` register, which
is illegal on i686.  This ends up wrapping around to `EBX`, which does
not get properly callee-saved within the stack probing prologue,
clobbering the register for the callers.

We fix this by explicitly using `EAX` as the stack probe register.

Reviewed By: pengfei

Differential Revision: https://reviews.llvm.org/D109203
2021-09-07 15:00:44 -04:00
Nick Desaulniers d0eeb64be5 [X86ISelLowering] avoid emitting libcalls to __mulodi4()
Similar to D108842, D108844, and D108926.

__has_builtin(builtin_mul_overflow) returns true for 32b x86 targets,
but Clang is deferring to compiler RT when encountering long long types.
This breaks ARCH=i386 + CONFIG_BLK_DEV_NBD=y builds of the Linux kernel
that are using builtin_mul_overflow with these types for these targets.

If the semantics of __has_builtin mean "the compiler resolves these,
always" then we shouldn't conditionally emit a libcall.

This will still need to be worked around in the Linux kernel in order to
continue to support these builds of the Linux kernel for this
target with older releases of clang.

Link: https://bugs.llvm.org/show_bug.cgi?id=28629
Link: https://bugs.llvm.org/show_bug.cgi?id=35922
Link: https://github.com/ClangBuiltLinux/linux/issues/1438

Reviewed By: lebedev.ri, RKSimon

Differential Revision: https://reviews.llvm.org/D108928
2021-09-07 10:44:54 -07:00
Simon Pilgrim 9eda472112 [X86] X86InstrAVX512.td - remove unused template parameters. NFC.
Identified in D109359
2021-09-07 17:38:20 +01:00
Kazu Hirata 5648f7170e [Analysis, Target, Transforms] Construct SmallVector with iterator ranges (NFC) 2021-09-07 09:19:33 -07:00
Kazu Hirata 5c6338de16 [RISCV] Fix "set but not used" warnings 2021-09-07 09:19:31 -07:00
Victor Huang 4a226529e2 [PowerPC] Fixed the crash due to early if conversion with fixed CR fields
This patch adds a fix to perform early if-conversion to select only when the
conditional branch is not using a physical register, to prevent the crash
when expanding the ISEL instruction.

Reviewed By: lei, kamaub, PowerPC

Differential revision: https://reviews.llvm.org/D108302
2021-09-07 10:51:03 -05:00
Simon Pilgrim f8d2cd1428 [X86] Add missing domain to avx512_ord_cmp_sae comis sae patterns
It doesn't appear to be possible to generate this from tests atm, but it matches what we do in sse12_ord_cmp

Fixes unused template arg identified in D109359
2021-09-07 16:20:21 +01:00
Jinsong Ji 042a6564d3 [PowerPC] Guard XSRSP in P8 for FastISel
This is exposed by enabling FastISel on 64-bit AIX.
We are generating XSRSP regardless of the arch,
which may be wrong when -mcpu=pwr7.

The fix is to guard the generation on P8 only.

Reviewed By: qiucf

Differential Revision: https://reviews.llvm.org/D109365
2021-09-07 15:17:51 +00:00
Sander de Smalen bd576e5ac0 [AArch64][SVE] Improve extract_subvector for predicates.
Using PUNPKLO/HI instead of ZIP1/ZIP2, because that avoids
having to generate a predicate with all lanes inactive (PFALSE).

Reviewed By: CarolineConcatto

Differential Revision: https://reviews.llvm.org/D109312
2021-09-07 15:49:29 +01:00
Peter Smith e63455d5e0 [MC] Use local MCSubtargetInfo in writeNops
On some architectures such as Arm and X86 the encoding for a nop may
change depending on the subtarget in operation at the time of
encoding. This change replaces the per-module MCSubtargetInfo retained
by the target's AsmBackend in favour of passing through the local
MCSubtargetInfo in operation at the time.

On Arm using the architectural NOP instruction can have a performance
benefit on some implementations.

For Arm I've deleted the copy of the AsmBackend's MCSubtargetInfo to
limit the chances of this causing problems in the future. I've not
done this for other targets such as X86 as there is more frequent use
of the MCSubtargetInfo and it looks to be for stable properties that
we would not expect to vary per function.

This change required threading STI through MCNopsFragment and
MCBoundaryAlignFragment.

I've attempted to take into account the in tree experimental backends.

Differential Revision: https://reviews.llvm.org/D45962
2021-09-07 15:46:19 +01:00
Peter Smith 5e71839f77 [MC] Add MCSubtargetInfo to MCAlignFragment
In preparation for passing the MCSubtargetInfo (STI) through to writeNops
so that it can use the STI in operation at the time, we need to record the
STI in operation when an MCAlignFragment may write nops as padding. The
STI is currently unused, a further patch will pass it through to
writeNops.

There are many places that can create an MCAlignFragment, in most cases
we can find out the STI in operation at the time. In a few places this
isn't possible as we are in initialisation or finalisation, or are
emitting constant pools. When possible I've tried to find the most
appropriate existing fragment to obtain the STI from, when none is
available use the per module STI.

For constant pools we don't actually need to use EmitCodeAlign as the
constant pools are data anyway so falling through into it via an
executable NOP is no better than falling through into data padding.

This is a prerequisite for D45962 which uses the STI to emit the
appropriate NOP for the STI. Which can differ per fragment.

Note that this involves an interface change to InitSections. It is now
called initSections and requires a SubtargetInfo as a parameter.

Differential Revision: https://reviews.llvm.org/D45961
2021-09-07 15:46:19 +01:00
Michael Liao 640beb38e7 [amdgpu] Enable selection of `s_cselect_b64`.
Differential Revision: https://reviews.llvm.org/D109159
2021-09-07 10:45:07 -04:00
Mirko Brkusanin 6c4b634da6 [AMDGPU][GlobalISel] Legalize G_MUL for non-standard types
Legalizing G_MUL for non-standard types (like i33) generated an error. This
patch uses minScalar and maxScalar instead of clampScalar. It also uses a new
rule: instead of widening to the next power of 2, widen to the next multiple of
the passed argument (32 in this case), so instead of widening i65 to i128, we
widen it to i96.

Patch by: Mateja Marjanovic

Differential Revision: https://reviews.llvm.org/D109228
2021-09-07 16:33:24 +02:00
Mirko Brkusanin 5263bf583a [AMDGPU][GlobalISel] Legalization of G_ROTL and G_ROTR
Add implementation for the legalization of G_ROTL and G_ROTR machine
instructions. They are very similar to funnel shift instructions, the only
difference is funnel shifts have 3 operands, whereas rotate instructions have
two operands, the first being the register that is being rotated and the second
being the number of shifts. The legalization of G_ROTL/G_ROTR is just lowering
them into funnel shift instructions if they are legal.

Patch by: Mateja Marjanovic

Differential Revision: https://reviews.llvm.org/D105347
2021-09-07 16:33:24 +02:00
Simon Pilgrim 0d48ee2774 [X86] X86InstrSSE.td - remove unused template parameters. NFC.
Identified in D109359
2021-09-07 15:13:05 +01:00
Simon Pilgrim b50a60c234 [X86] X86InstrVecCompiler.td - remove unused template parameters. NFC.
Identified in D109359
2021-09-07 14:46:08 +01:00
Simon Pilgrim fb38795062 [X86] X86InstrFMA.td - remove unused template parameters. NFC.
Identified in D109359
2021-09-07 14:46:07 +01:00
Sander de Smalen 448d47f743 [AArch64][SVE] Implement all-inactive predicate with PFALSE.
Instead of using a WHILE XZR, XZR instruction, just emit a PFALSE.

Reviewed By: david-arm

Differential Revision: https://reviews.llvm.org/D109311
2021-09-07 14:29:02 +01:00
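The IR-level equivalent (function name hypothetical): an all-false scalable predicate, which now selects to a single PFALSE:

```
define <vscale x 16 x i1> @all_inactive() {
  ret <vscale x 16 x i1> zeroinitializer
}
```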
Mirko Brkusanin 36527cbe02 [AMDGPU][GlobalISel] Legalize memcpy family of intrinsics
Legalize G_MEMCPY, G_MEMMOVE, G_MEMSET and G_MEMCPY_INLINE.

Corresponding intrinsics are replaced by a loop that uses loads/stores in
the AMDGPULowerIntrinsics pass, unless their length is a constant lower than
MemIntrinsicExpandSizeThresholdOpt (default 1024). Any G_MEM* instruction that
reaches the legalizer should have a constant length argument and should be expanded
into appropriate number of loads + stores.

Differential Revision: https://reviews.llvm.org/D108357
2021-09-07 12:24:07 +02:00
Fraser Cormack a823bdf3ab [RISCV][VP] Custom lower VP_STORE and VP_LOAD
This patch adds support for the vector-predicated `VP_STORE` and
`VP_LOAD` nodes. We do this in the same way we lower `MSTORE` and
`MLOAD`: to regular load/store instructions via intrinsics.

One necessary change was made to `SelectionDAGLegalize` so that
`VP_STORE` nodes' operation actions are taken from the stored "value"
operands, in the same vein as `STORE` or `MSTORE`.

Reviewed By: craig.topper, rogfer01

Differential Revision: https://reviews.llvm.org/D108999
2021-09-07 10:53:25 +01:00
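A sketch of a vector-predicated load as it reaches the RISC-V backend, using the typed-pointer mangling of the period (names hypothetical); %m and %evl become the mask and the AVL for the emitted vsetvli/vle sequence:

```
declare <vscale x 2 x i32> @llvm.vp.load.nxv2i32.p0nxv2i32(<vscale x 2 x i32>*, <vscale x 2 x i1>, i32)

define <vscale x 2 x i32> @vpload(<vscale x 2 x i32>* %p, <vscale x 2 x i1> %m, i32 %evl) {
  %v = call <vscale x 2 x i32> @llvm.vp.load.nxv2i32.p0nxv2i32(<vscale x 2 x i32>* %p, <vscale x 2 x i1> %m, i32 %evl)
  ret <vscale x 2 x i32> %v
}
```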
Fraser Cormack f4dee8cb82 [RISCV][VP] Custom lower VP_SCATTER and VP_GATHER
This patch adds support for the `VP_SCATTER` and `VP_GATHER` nodes by
lowering them to RVV's `vsox`/`vlux` instructions, respectively. This
process is almost identical to the existing `MSCATTER`/`MGATHER` support.

One extra change was made to `SelectionDAGLegalize` so that
`VP_SCATTER`'s operation action is derived from its stored "value"
operand rather than its return type (which is always the chain).

Reviewed By: craig.topper, rogfer01

Differential Revision: https://reviews.llvm.org/D108987
2021-09-07 10:43:07 +01:00
Andrew Wei da9ed3dc71 [AArch64] Avoid adding duplicate implicit operands when expanding pseudo insts.
When expanding pseudo insts, we use BuildMI to create a new machine instr,
which adds implicit operands by default, and transferImpOps also copies implicit
operands from the old ones. As a result, duplicate implicit operands are added
to the same inst. Sometimes this can cause correctness issues, as in the inst below:
    renamable $w18 = nsw SUBSWrr renamable $w30, renamable $w14, implicit-def dead $nzcv
After expanding, it will become
    $w18 = SUBSWrs renamable $w13, renamable $w14, 0, implicit-def $nzcv, implicit-def dead $nzcv
A redundant implicit-def $nzcv is added, but the dead flag is missing.

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D109069
2021-09-07 17:11:58 +08:00
Ben Shi 63ca9371c7 [ARM] Implement target hook function to decide folding (mul (add x, c1), c2)
Prevent the folding in DAGCombine if it leads to worse code.

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D109124
2021-09-07 15:42:43 +08:00
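A sketch of the fold being gated (constants chosen for illustration): (mul (add x, 7), 3) can become (add (mul x, 3), 21), and the hook rejects the rewrite when materializing the folded constant is no cheaper:

```
define i32 @mul_add(i32 %x) {
  %add = add i32 %x, 7
  %mul = mul i32 %add, 3
  ret i32 %mul
}
```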
Craig Topper da3ef8b756 [X86] Handle inverted inputs when matching VPTERNLOG from 2 binary ops.
This is a more general version of D109273. Though it doesn't
peek through bitcasts or rearrange broadcasts.

Reviewed By: LuoYuanke

Differential Revision: https://reviews.llvm.org/D109295
2021-09-06 17:44:52 -07:00
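One possible shape of the new match (assuming an AVX512VL target; names hypothetical): two binary ops where one leaf is inverted, with the inversion folded into the VPTERNLOG immediate:

```
define <4 x i32> @ternlog_inverted(<4 x i32> %a, <4 x i32> %b, <4 x i32> %c) {
  %nota = xor <4 x i32> %a, <i32 -1, i32 -1, i32 -1, i32 -1>
  %and = and <4 x i32> %nota, %b
  %r = or <4 x i32> %and, %c
  ret <4 x i32> %r
}
```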
Fangrui Song 76529b4468 [X86] Simplify condition guarding emitCalleeSavedFrameMoves. NFC 2021-09-06 15:54:02 -07:00
Fangrui Song 4f1e410a1b [X86] Simplify two hasFP(F). NFC 2021-09-06 15:47:40 -07:00
Victor Campos 79f9c79aaf [AArch64][MC] Merge FeaturePMU into FeaturePerfMon
FeaturePMU was created in AArch64 to accommodate one missing system
register, PMMIR_EL1, in commit ffcd7698ae.

However, the Performance Monitors extension already had a target
feature, which is called FeaturePerfMon. Therefore, FeaturePMU is
redundant.

This patch removes FeaturePMU and merges its contents into
FeaturePerfMon.

Reviewed By: dnsampaio

Differential Revision: https://reviews.llvm.org/D109246
2021-09-06 14:56:49 +01:00
David Truby b297531ece [AArch64][sve] Prevent incorrect function call on fixed width vector
The isEssentiallyExtractHighSubvector function currently calls
getVectorNumElements on a type that in specific cases might be scalable.
Since this function only has correct behaviour at the moment on scalable
types anyway, the function can just return false when given a fixed type.

Differential Revision: https://reviews.llvm.org/D109163
2021-09-06 14:25:03 +01:00
Tianqing Wang 12fa608af4 [X86] Add CRC32 feature.
d8faf03807 implemented general-regs-only for X86 by disabling all features
with vector instructions. But the CRC32 instruction in SSE4.2 ISA, which uses
only GPRs, also becomes unavailable. This patch adds a CRC32 feature for this
instruction and allows it to be used with general-regs-only.

Reviewed By: pengfei

Differential Revision: https://reviews.llvm.org/D105462
2021-09-06 17:24:30 +08:00
Fangrui Song 0e03450ae4 [AArch64] Remove an unneeded !NeedsWinCFI check. NFC 2021-09-05 21:02:56 -07:00
guopeilin 5f48c144c5 [AArch64][GlobalISel] Use ZExtValue for zext(xor) when invert tb(n)z
Currently, we use SExtValue to decide whether to invert tbz or tbnz.
However, for the case zext (xor x, c), we should use ZExt rather
than SExt; otherwise we will generate totally opposite branches.

Reviewed By: paquette

Differential Revision: https://reviews.llvm.org/D108755
2021-09-06 11:12:07 +08:00
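A sketch of the kind of pattern involved (constants chosen for illustration): sign- and zero-extension of the xor constant agree below the source width but disagree above it, so using getSExtValue when testing a high bit flips the branch sense:

```
define i32 @tbz_zext_xor(i8 %x) {
  %xor = xor i8 %x, -1
  %ext = zext i8 %xor to i32
  %bit = and i32 %ext, 256      ; tests bit 8, above the original i8 width
  %cmp = icmp eq i32 %bit, 0
  br i1 %cmp, label %zero, label %one
zero:
  ret i32 0
one:
  ret i32 1
}
```

Here ZExt of the i8 constant -1 gives 0xFF (bit 8 clear), while SExt gives 0xFFFFFFFF (bit 8 set), which would select the opposite of the required tb(n)z.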
Simon Pilgrim f114ef3731 [CostModel][X86] Add generic costs for vXi32 MUL -> v2Xi16 PMADDDW folds
Based on the improved fold in D108522.

This should eventually allow us to replace the SLM-only cost patterns with generic versions.
2021-09-05 16:08:11 +01:00
Shivam Gupta 5449d2da65 [NFC] Run clang-format on llvm/lib/Target/AVR/
The current inconsistency confuses contributors about which coding guidelines
to follow. It would be better to make it consistent using the clang-format tool.

Reviewed By: mhjacobson

Differential Revision: https://reviews.llvm.org/D109270
2021-09-04 20:05:15 +05:30
Simon Pilgrim 2005ae15a6 [X86][SLM] WriteVecIMul instructions only take 1uop (REAPPLIED)
The xmm variants have half the throughput (and +1cy latency) of the mmx variants, but are still 1uop.

I still need to do more thorough testing of SLM on test-suite before fixing the obvious bad numbers for WritePMULLD.

But this helps the D103695 helper script get to more accurate numbers for vXi32 multiplies of extended operands (i.e. we can use PMADDWD, PMULLW/PMULHW etc). Matches what Intel AoM / Agner / llvm-exegesis reports.
2021-09-04 15:03:56 +01:00
Simon Pilgrim ac51d69208 Revert rG994da657076900f5ad7fe593c3b5e5f89ab3d53d "[X86][SLM] WriteVecIMul instructions only take 1uop"
This changed some codegen tests that I forgot about in my rebase, I'll recommit shortly with a fix.
2021-09-04 13:39:10 +01:00
Simon Pilgrim 994da65707 [X86][SLM] WriteVecIMul instructions only take 1uop
The xmm variants have half the throughput (and +1cy latency) of the mmx variants, but are still 1uop.

I still need to do more thorough testing of SLM on test-suite before fixing the obvious bad numbers for WritePMULLD.

But this helps the D103695 helper script get to more accurate numbers for vXi32 multiplies of extended operands (i.e. we can use PMADDWD, PMULLW/PMULHW etc). Matches what Intel AoM / Agner / llvm-exegesis reports.
2021-09-04 13:21:34 +01:00
Simon Pilgrim c6371020a8 [X86][SLM] RMW instructions don't require an extra uop
For RMW instructions, the load and store hold the MEC for an extra cycle, but within the same single uop. This is alluded to in the Intel AOM:

"The MEC also owns the MEC RSV, which is responsible for scheduling of all loads and stores. Load and
store instructions go through addresses generation phase in program order to avoid on-the-fly memory
ordering later in the pipeline. Therefore, an unknown address will stall younger memory instructions."

Noticed while trying to get a cheap SLM test box up and running with llvm-exegesis - RMW arithmetic is always 1uop - and matches what Agner / InstLatX64 report as well.
2021-09-04 13:21:34 +01:00
Simon Pilgrim da965a77d5 [X86][SLM] Fix MUL uops, latency and throughput
These were all set to the same best case mul i32 values (which seems to be the only version of MUL that SLM actually performs well with).

Noticed while trying to improve multiplication costs for vectorization via the D103695 helper script. Confirmed with Intel AoM / Agner / InstLatX64.
2021-09-04 13:21:34 +01:00
Simon Pilgrim 7d062d2c47 [X86][Atom] MUL/DIV instructions require both ports, not either.
Noticed while trying to improve multiplication costs for vectorization via the D103695 helper script. Confirmed with Intel AoM.
2021-09-04 11:58:09 +01:00
Simon Pilgrim 0d0f39b0f3 [X86][Atom] Add missing UOps override to AtomWriteResPair multiclass
Make it easier to describe microcoded instructions.
2021-09-04 11:58:09 +01:00
Nikita Popov 66a54af967 [WebAssembly] Support opaque pointers in AddMissingPrototypes
The change here is basically the same as in D108880: Rather than
looking at bitcasts, look at calls and their function type. We
still need to look through bitcasts to find those calls.

The change in llvm/test/CodeGen/WebAssembly/add-prototypes-conflict.ll
is due to different visitation order. add-prototypes-opaque-ptrs.ll
is a copy of add-prototypes.ll with -force-opaque-pointers.

Differential Revision: https://reviews.llvm.org/D109256
2021-09-04 11:25:42 +02:00
Kevin Athey c7f50a445e Revert "[AArch64] Implement target hook function to decide folding (mul (add x, c1), c2)"
This reverts commit 095bea23d0.

Broke buildbot: https://lab.llvm.org/buildbot/#/builders/5/builds/11411
2021-09-03 18:08:58 -07:00
Ben Shi 095bea23d0 [AArch64] Implement target hook function to decide folding (mul (add x, c1), c2)
Prevent the folding if it leads to worse code.

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D108871
2021-09-04 07:24:23 +08:00
Stanislav Mekhanoshin d0c064715c [AMDGPU] Small cleanup in optimizeCompareInstr. NFC. 2021-09-03 11:31:40 -07:00
David Green adfd12e6d1 [ARM] Add patterns for store(fptosisat(..))
As an extension to D107866, this adds store(fptosisat(..)) patterns,
similar to the existing fptosi patterns, to prevent unnecessarily moving
into gpr regs where we can use fp stores directly.

Differential Revision: https://reviews.llvm.org/D108378
2021-09-03 19:22:11 +01:00
David Green f37e132263 [ARM] Add VFP lowering for fptosi.sat
This extends D107865 to the VFP instructions, lowering llvm.fptosi.sat
and llvm.fptoui.sat to VCVT instructions that inherently perform the
saturate.

Differential Revision: https://reviews.llvm.org/D107866
2021-09-03 18:11:08 +01:00
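The lowered intrinsic, for reference (function name hypothetical); VCVT saturates inherently, so no explicit clamping sequence is needed:

```
declare i32 @llvm.fptosi.sat.i32.f32(float)

define i32 @to_int_sat(float %f) {
  %r = call i32 @llvm.fptosi.sat.i32.f32(float %f)
  ret i32 %r
}
```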
Craig Topper 75620fadf5 [RISCV] Change how we encode AVL operands in vector pseudoinstructions to use GPRNoX0.
This patch changes the register class to avoid accidentally setting
the AVL operand to X0 through MachineIR optimizations.

There are cases where we really want to use X0, but we can't get that
past the MachineVerifier with the register class as GPRNoX0. So I've
used a 64-bit -1 as a sentinel for X0. All other immediate values should
be uimm5. I convert it to X0 at the earliest possible point in the VSETVLI
insertion pass to avoid touching the rest of the algorithm. In
SelectionDAG lowering I'm using a -1 TargetConstant to hide it from
instruction selection and treat it differently than if the user
used -1. A user -1 should be selected to a register since it doesn't
fit in uimm5.

This is the rest of the changes started in D109110. As mentioned there,
I don't have a failing test from MachineIR optimizations anymore.

Reviewed By: frasercrmck

Differential Revision: https://reviews.llvm.org/D109116
2021-09-03 09:19:25 -07:00
Simon Pilgrim 6ba0b9f68a [X86][SLM] Fix PBLENDVB uops and throughput
SLM PBLENDVB is just as bad as BLENDVPD/PS - so model it as such, fixing the rr vs rm uops diff as well. The Intel AoM appears to have a copy+paste typo with PBLENDW, it doesn't match Agner or InstLatX64.

Noticed while investigating some of the weird discrepancies reported by the D103695 helper script (SLM had much better vector shift throughputs than it should).
2021-09-03 11:31:29 +01:00
Florian Mayer abf8ed8a82 [hwasan] Support more complicated lifetimes.
This is important as with exceptions enabled, non-POD allocas often have
two lifetime ends: the exception handler, and the normal one.

Reviewed By: eugenis

Differential Revision: https://reviews.llvm.org/D108365
2021-09-03 10:29:50 +01:00
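A sketch of the two-ended lifetime shape now supported (names hypothetical): with exceptions enabled, the alloca's lifetime ends on both the normal path and the EH cleanup path:

```
declare void @llvm.lifetime.start.p0i8(i64, i8*)
declare void @llvm.lifetime.end.p0i8(i64, i8*)
declare void @may_throw()
declare i32 @__gxx_personality_v0(...)

define void @f() sanitize_hwaddress personality i8* bitcast (i32 (...)* @__gxx_personality_v0 to i8*) {
entry:
  %buf = alloca [16 x i8]
  %p = bitcast [16 x i8]* %buf to i8*
  call void @llvm.lifetime.start.p0i8(i64 16, i8* %p)
  invoke void @may_throw()
          to label %cont unwind label %lpad

cont:                               ; normal-path lifetime end
  call void @llvm.lifetime.end.p0i8(i64 16, i8* %p)
  ret void

lpad:                               ; EH-path lifetime end
  %lp = landingpad { i8*, i32 } cleanup
  call void @llvm.lifetime.end.p0i8(i64 16, i8* %p)
  resume { i8*, i32 } %lp
}
```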
Cullen Rhodes dc5dd77ac7 [AArch64][SME] Support NEON vector to GPR integer moves in streaming mode
A small subset of the NEON instruction set is legal in streaming mode.
This patch adds support for the following vector to integer move
instructions:

  0x00 1110 0000 0001 0010 11xx xxxx xxxx # SMOV W|Xd,Vn.B[0]
  0x00 1110 0000 0010 0010 11xx xxxx xxxx # SMOV W|Xd,Vn.H[0]
  0100 1110 0000 0100 0010 11xx xxxx xxxx # SMOV Xd,Vn.S[0]
  0000 1110 0000 0001 0011 11xx xxxx xxxx # UMOV Wd,Vn.B[0]
  0000 1110 0000 0010 0011 11xx xxxx xxxx # UMOV Wd,Vn.H[0]
  0000 1110 0000 0100 0011 11xx xxxx xxxx # UMOV Wd,Vn.S[0]
  0100 1110 0000 1000 0011 11xx xxxx xxxx # UMOV Xd,Vn.D[0]

Only the zero index variants are legal; all other indexes are illegal.
To support this, new instructions are defined specifically for zero
index which is hardcoded, along with an implicit 'VectorIndex0' operand.
Since the index operand is implicit and takes no bits in the encoding,
custom decoding is required to add the operand.

I'm not sure if this is the best approach but the predicate constraint
on a subset of an operand is unusual. Would be interested to hear some
alternatives.

The instructions are predicated on 'HasNEONorStreamingSVE', i.e. they're
enabled by either +neon or +streaming-sve. This follows on from the work
in D106272 to support the subset of SVE(2) instructions that are legal
in streaming mode.

Depends on D107902.

Reviewed By: sdesmalen

Differential Revision: https://reviews.llvm.org/D107903
2021-09-03 07:59:17 +00:00
Cullen Rhodes 1dcd900d1d [AArch64][ISel] NFC: DAG.getMachineFunction() -> MF
Reviewed By: sdesmalen

Differential Revision: https://reviews.llvm.org/D109135
2021-09-03 07:59:17 +00:00
Amara Emerson 6d9505b8e0 [AArch64][GlobalISel] Support for folding G_ROTR as shifted operands.
This allows selection like: eor w0, w1, w2, ror #8

Saves 500 bytes on ClamAV -Os, which is 0.1%.

Differential Revision: https://reviews.llvm.org/D109206
2021-09-02 21:37:24 -07:00
Qiu Chaofan d0f9553ef5 [PowerPC] Enable fast-isel on AIX 64 subtarget
This patch basically enables fast-isel for AIX 64-bit subtarget
(previously enabled only for ELF 64). The initial motivation is to
introduce branch folding to AIX generated code for correct debug
behavior. I also saw some compiling time improvement in a few LLVM
test-suite benchmarks. (toast, dbms, cjpeg, burg, etc.)

Reviewed By: jsji

Differential Revision: https://reviews.llvm.org/D98844
2021-09-03 11:33:45 +08:00
Matt Arsenault 79bcd4a7db AMDGPU: Remove FeatureLocalMemorySize0
There's no reason to make this an explicit feature, since it's implied
by the lack of a feature with a size.
2021-09-02 22:43:01 -04:00
Alexander Pivovarov 6cd4b508a8 [RISCV] Add SiFive core S51
Add SiFive core s51 as rv64imac RocketModel

Reviewed-By: MaskRay, evandro
Differential Revision: https://reviews.llvm.org/D108886
2021-09-02 18:45:25 -07:00
Alexander Pivovarov 1104e3258b Fix typo in RISCVMatInt.cpp comments 2021-09-02 18:11:09 -07:00
Stanislav Mekhanoshin 78fbd1aa3d [AMDGPU] Process any power of 2 in optimizeCompareInstr
Differential Revision: https://reviews.llvm.org/D109201
2021-09-02 17:39:17 -07:00
Stanislav Mekhanoshin 2cfda6a691 [AMDGPU] Fold immediates in the optimizeCompareInstr
Peephole works before the first SIFoldOperands so most of
the immediates are in registers.

Differential Revision: https://reviews.llvm.org/D109186
2021-09-02 17:23:26 -07:00
Sam Clegg c32884c482 [WebAssembly] Rename WrapperPIC -> WrapperREL. NFC
This ISD node/wrapper represents an address which is relative to a base
address and therefore lowers to `i32.const` rather than `global.get`.

Use this wrapper type for TLS-relative addresses, paving the way for the
non-REL wrapper to be used for external TLS addresses once those are
supported.

Differential Revision: https://reviews.llvm.org/D109179
2021-09-02 20:04:34 -04:00
Kirill Stoimenov cf53c6c971 [asan] Fixed link error by setting jump symbol to R_X86_64_PLT32.
Fixing this link error:
ld: error: relocation R_X86_64_PC32 cannot be used against symbol __asan_report_load...; recompile with -fPIC

Reviewed By: vitalybuka

Differential Revision: https://reviews.llvm.org/D109183
2021-09-02 21:50:56 +00:00
Sam Clegg 4664590d53 [WebAssembly] Remove redundant SDTypeProfile. NFC
I added this back in https://reviews.llvm.org/D54647 but it wasn't
actually needed.

Differential Revision: https://reviews.llvm.org/D109176
2021-09-02 15:21:22 -04:00
Sam Clegg ad2f94f398 [WebAssembly] Fix names of WebAssemblyWrapper SDNodes. NFC
Other platforms all use CamelCase as normal for these wrapper nodes.

Differential Revision: https://reviews.llvm.org/D109172
2021-09-02 13:54:44 -04:00
Heejin Ahn 28780e59f6 [WebAssembly] Add Wasm SjLj support
This adds support for SjLj using Wasm exception handling instructions:
https://github.com/WebAssembly/exception-handling/blob/master/proposals/exception-handling/Exceptions.md

This does not yet support the mixed use of EH and SjLj within a
function. It will be added in a follow-up CL.

This currently passes all SjLj Emscripten tests for wasm0/1/2/3/s,
except for the below:
- `test_longjmp_standalone`: Uses Node
- `test_dlfcn_longjmp`: Uses NodeRAWFS
- `test_longjmp_throw`: Mixes EH and SjLj
- `test_exceptions_longjmp1`: Mixes EH and SjLj
- `test_exceptions_longjmp2`: Mixes EH and SjLj
- `test_exceptions_longjmp3`: Mixes EH and SjLj

Reviewed By: dschuff, tlively

Differential Revision: https://reviews.llvm.org/D108960
2021-09-02 10:51:02 -07:00
Nick Desaulniers 6860b136b9 [MipsISelLowering] avoid emitting libcalls to __multi3
Similar to D108842 and D108844.

__has_builtin(builtin_mul_overflow) returns true for 32b MIPS targets,
but Clang is deferring to compiler RT when encountering long long types.
This breaks MIPS malta_defconfig builds of the Linux kernel that are
using __builtin_mul_overflow with these types for these targets.

If the semantics of __has_builtin mean "the compiler resolves these,
always" then we shouldn't conditionally emit a libcall.

This will still need to be worked around in the Linux kernel in order to
continue to support malta_defconfig builds of the Linux kernel for this
target with older releases of clang.

Link: https://bugs.llvm.org/show_bug.cgi?id=28629
Link: https://github.com/ClangBuiltLinux/linux/issues/1438

Reviewed By: rengolin

Differential Revision: https://reviews.llvm.org/D108926
2021-09-02 10:41:37 -07:00
Craig Topper 3e89cc5cda [X86] Remove isel predicates for xgetbv/xsetbv instructions so they can work on Windows.
https://reviews.llvm.org/D56686  was supposed to allow these to
work on Windows without needing to enable the xsave feature to
match MSVC. It seems this didn't work because the backend isel
patterns would still block it.

This patch removes the predicates from the isel patterns.

Fixes PR51706.

Reviewed By: pengfei

Differential Revision: https://reviews.llvm.org/D109097
2021-09-02 10:25:02 -07:00