Commit Graph

Weining Lu 904a87ace3 [LoongArch] Use `end namespace xxx` style comment. NFC 2022-07-26 15:01:29 +08:00
Kazu Hirata 3f3930a451 Remove redundant virtual specifiers (NFC)
Identified with clang-tidy's modernize-use-override.
2022-07-25 23:00:59 -07:00
Xiang Li 57006b14fa [DirectX backend] [NFC] Add DXILOpBuilder to generate DXIL operations
A new helper class DXILOpBuilder is added to create DXIL op function calls.

The TableGen backend for DXILOperation will create a table of DXIL op
function parameter types. When creating a DXIL op function, these
parameter types will be used to create the function type.

Reviewed By: bogner

Differential Revision: https://reviews.llvm.org/D130291
2022-07-25 21:49:59 -07:00
Luo, Yuanke 5fb4134210 [X86][DAGISel] Don't widen shuffle element with AVX512
Currently the X86 shuffle lowering widens the element type of a shuffle
when adjacent mask elements have consecutive values. For the example below,

  %t2 = add nsw <16 x i32> %t0, %t1
  %t3 = sub nsw <16 x i32> %t0, %t1
  %t4 = shufflevector <16 x i32> %t2, <16 x i32> %t3,
                      <16 x i32> <i32 16, i32 17, i32 2, i32 3, i32 4,
                       i32 5, i32 6, i32 7, i32 8, i32 9, i32 10,
                       i32 11, i32 12, i32 13, i32 14, i32 15>

  ret <16 x i32> %t4

The compiler would transform the shuffle to
  %t4 = shufflevector <8 x i64> %t2, <8 x i64> %t3,
                      <8 x i32> <i32 8, i32 1, i32 2, i32 3, i32 4,
                                 i32 5, i32 6, i32 7>
This may lose the opportunity to let ISel select a mask instruction when
AVX512 is enabled.

This patch prevents the transform when the AVX512 feature is enabled.
Thanks to Simon for the idea.

Differential Revision: https://reviews.llvm.org/D129537
2022-07-26 11:56:03 +08:00
Craig Topper 45944e7cf4 [RISCV] Refactor translateSetCCForBranch to prepare for D130508. NFC.
D130508 handles more constants than just 1 or -1. We need to extract
the constant instead of relying on isOneConstant or isAllOnesConstant.
2022-07-25 15:54:54 -07:00
Andrew Brown 3696a789d2 [WebAssembly] Use `localexec` as default TLS model for non-Emscripten targets
Only Emscripten supports dynamic linking with threads. To use
thread-local storage for other targets, this change defaults to the
`localexec` model.
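
A minimal IR sketch (mine, not from the patch): with this default, a plain
thread-local global on a non-Emscripten wasm target is lowered as if it
carried an explicit model:

  @counter = thread_local(localexec) global i32 0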

Differential Revision: https://reviews.llvm.org/D130053
2022-07-25 13:25:46 -07:00
Matt Arsenault cb0c71e8b1 AMDGPU: Adjust register allocation priority values down
Set the priorities consistently to the number of registers in the tuple
minus 1. Previously we started at 1, and also tried to give SGPRs higher
values than VGPRs. There's no point in assigning SGPRs higher values
now that those are allocated in a separate regalloc run.

This avoids overflowing the 5 bits used for the class priority in the
allocation heuristic for 32-element tuples, and avoids some cases where
smaller registers unexpectedly got prioritized over larger ones.
2022-07-25 15:47:15 -04:00
Craig Topper 1db6d6dcd8 [RISCV] Teach RISCVCodeGenPrepare to optimize (zext (abs(i32 X, i1 1))).
abs(i32 X, i1 1) always produces a non-negative result. The 'i1 1'
means an INT_MIN input produces poison. If the result is sign extended,
InstCombine will convert it to zext. This does not produce ideal
code for RISCV.

This patch reverses the zext back to sext which can be folded
into a subw or negw. Ideally we'd do this in SelectionDAG, but
we lose the INT_MIN poison flag when llvm.abs becomes ISD::ABS.
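
A minimal IR sketch (not from the patch; the function name is mine) of the
pattern and the rewrite:

  declare i32 @llvm.abs.i32(i32, i1)

  define i64 @abs_ext(i32 %x) {
    %a = call i32 @llvm.abs.i32(i32 %x, i1 1)
    %e = zext i32 %a to i64
    ret i64 %e
  }
  ; RISCVCodeGenPrepare rewrites the zext back to
  ;   %e = sext i32 %a to i64
  ; which can fold into subw/negw on RV64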

Reviewed By: reames

Differential Revision: https://reviews.llvm.org/D130412
2022-07-25 09:36:41 -07:00
Craig Topper 00060a7b97 [X86] Custom type legalize v2i32 smulo/umulo to use a single pmuldq/pmuludq.
With SSE4.1 and above we were using 3 multiply instructions. This
was due to type legalization widening to v4i32 and the low half
being done with pmulld while the high half used two pmuldq/pmuludq.

Instead of that, we can use a single pmuludq/pmuldq to calculate
the full product at once, extract the high and low bits, and compare
them to check for overflow.
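
A minimal IR sketch (not from the patch; the function name is mine) of the
node being custom type legalized:

  declare { <2 x i32>, <2 x i1> } @llvm.smul.with.overflow.v2i32(<2 x i32>, <2 x i32>)

  define { <2 x i32>, <2 x i1> } @smulo_v2i32(<2 x i32> %a, <2 x i32> %b) {
    %r = call { <2 x i32>, <2 x i1> }
        @llvm.smul.with.overflow.v2i32(<2 x i32> %a, <2 x i32> %b)
    ret { <2 x i32>, <2 x i1> } %r
  }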

I've restricted SMULO to sse4.1 to get pmuldq. We can probably
do a fixup to pmuludq on earlier targets, but that's for another day.

I was going through my git stash and found an early version of this patch
from a year or two ago so I went ahead and finished it.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D130432
2022-07-25 09:12:35 -07:00
Bradley Smith 953a98ef8d [AArch64][SVE] Fold target specific ext/trunc nodes into loads/stores
Due to the way fixed-length SVE lowering works, we sometimes introduce
ext/trunc nodes very late; these nodes then immediately get converted
into target-specific nodes (UUNPKLO/UZP1) before they get a chance to be
folded into a load/store.

This patch introduces target-specific DAG combines for these nodes so that
we can still create extending loads/truncating stores out of them.
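
A minimal IR sketch (mine, not from the patch) of the kind of truncating
store the combine recovers; under fixed-length SVE lowering the trunc
becomes a UZP1 that previously blocked the fold:

  define void @trunc_store(<8 x i16> %v, ptr %p) {
    %t = trunc <8 x i16> %v to <8 x i8>
    store <8 x i8> %t, ptr %p
    ret void
  }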

Differential Revision: https://reviews.llvm.org/D128065
2022-07-25 15:24:05 +00:00
Cullen Rhodes c04ff587dc [AArch64] Combine setcc (iN (bitcast (vNi1 X))) with vecreduce_or
Reviewed By: paulwalker-arm

Differential Revision: https://reviews.llvm.org/D130163
2022-07-25 12:14:33 +00:00
David Stuttard b14d7bf750 AMDGPU: Turn off force init 16 input SGPRS for pal
PAL uses a different mechanism for user SGPRs.

Differential Revision: https://reviews.llvm.org/D129566
2022-07-25 10:52:46 +01:00
jacquesguan d8800ead62 [RISCV] Scalarize binop followed by extractelement.
This patch adds shouldScalarizeBinop to the RISCV target in order to convert an extract element of a vector binary operation into an extract element followed by a scalar binary operation.
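
A minimal IR sketch (not from the patch; the function name is mine) of the
transform:

  define i32 @extract_of_add(<4 x i32> %a, <4 x i32> %b) {
    %v = add <4 x i32> %a, %b
    %e = extractelement <4 x i32> %v, i32 0
    ret i32 %e
  }
  ; with shouldScalarizeBinop returning true, this becomes
  ;   %ea = extractelement <4 x i32> %a, i32 0
  ;   %eb = extractelement <4 x i32> %b, i32 0
  ;   %e  = add i32 %ea, %eb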

Differential Revision: https://reviews.llvm.org/D129545
2022-07-25 17:23:31 +08:00
Rosie Sumpter 034a27e688 [AArch64] Add f16 fpimm patterns
This patch recognizes f16 immediates as legal and adds the necessary
patterns. This allows the fadda folding introduced in 05d424d165
to be applied to the f16 cases.

Differential Revision: https://reviews.llvm.org/D129989
2022-07-25 09:08:10 +01:00
Cullen Rhodes 836f790bb1 [AArch64][SVE] Add patterns to select masked add/sub instructions
When lowering add(a, select(mask, b, splat(0))) the sel instruction can
be removed by using predicated add/sub instructions.
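
A minimal IR sketch (mine, not from the patch) of the add(a, select(mask,
b, splat(0))) pattern that now selects a predicated add:

  define <vscale x 4 x i32> @masked_add(<vscale x 4 x i32> %a,
                                        <vscale x 4 x i32> %b,
                                        <vscale x 4 x i1> %mask) {
    %sel = select <vscale x 4 x i1> %mask, <vscale x 4 x i32> %b,
                  <vscale x 4 x i32> zeroinitializer
    %r = add <vscale x 4 x i32> %a, %sel
    ret <vscale x 4 x i32> %r
  }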

Reviewed By: paulwalker-arm

Differential Revision: https://reviews.llvm.org/D129751
2022-07-25 07:22:05 +00:00
Kazu Hirata 9d5a544d34 [Hexagon] Remove isLateInstrFeedsEarlyInstr (NFC)
The last use was removed on May 3, 2017 in commit
2af5037d34.

This patch also removes isLateResultInstr and isEarlySourceInstr as
they become dead once we remove isLateInstrFeedsEarlyInstr.
2022-07-24 22:55:14 -07:00
Kazu Hirata 95a932fb15 Remove redundant override specifiers (NFC)
Identified with modernize-use-override.
2022-07-24 22:28:11 -07:00
Kazu Hirata b5188591a0 [llvm] Remove redundant virtual specifiers (NFC)
Identified with modernize-use-override.
2022-07-24 21:50:35 -07:00
Kazu Hirata acf648b5e9 Use llvm::less_first and llvm::less_second (NFC) 2022-07-24 16:21:29 -07:00
Kazu Hirata bafeb63448 [Hexagon] Remove unused declaration CanReturnSmallStruct (NFC)
The declaration was introduced without a corresponding definition on
Dec 12, 2011 in commit 1213a7a57f.
2022-07-24 14:48:09 -07:00
Kazu Hirata 49f72cb5bd [Hexagon] Remove unused declaration SelectZeroExtend (NFC)
The corresponding definition was removed on Jan 23, 2018 in commit
3780a0e1fa.
2022-07-24 14:48:08 -07:00
Simon Pilgrim 0708771cce [DAG] MaskedVectorIsZero - don't bother with (-1).isSubsetOf mask check. NFC.
Just use KnownBits::isZero() to ensure all the bits are known zero.
2022-07-24 13:12:21 +01:00
Simon Pilgrim 69d1e805ce [X86] combineAndnp - remove unused variable. NFC. 2022-07-24 11:32:44 +01:00
Simon Pilgrim ce81a0df67 [X86][SSE] Enable X86ISD::ANDNP constant folding 2022-07-24 11:07:34 +01:00
Simon Pilgrim 293899c64b [X86] Don't assume an AND/ANDNP element is undef/undemanded just because one element is undef
For mask ops like these, the other operand's corresponding element might be zero (making the result zero), so we must still demand all the bits and that element.

This appears to be what D128570 was trying to fix - both sides of the funnel shift mask of the vXi64 (legalized to v2Xi32) were incorrectly simplifying the upper 32-bit halves to undef, resulting in bad folds later on.

I intend to address the test case regressions, but this close to the release branch I'd prefer to get a fix in first.
2022-07-24 10:53:38 +01:00
Kazu Hirata 068d5066b3 [Hexagon] Remove unused declaration getByteVectorTy (NFC)
The declaration was introduced without a corresponding definition on
Sep 7, 2020 in commit f5d07a05bb.
2022-07-23 19:40:44 -07:00
Craig Topper 9adc00a9d0 [RISCV] Add a continue to reduce nesting. NFC 2022-07-23 17:36:12 -07:00
Kazu Hirata ae998555ba [AMDGPU] Remove a redundant variable (NFC)
ArrayRef has operator[], so we don't need to access the contents via
data().
2022-07-23 12:29:05 -07:00
Fangrui Song c17450a094 [AMDGPU] Change DEBUG_TYPE from isel to amdgpu-isel
to match all other *ISelDAGToDAG.cpp
2022-07-23 11:32:02 -07:00
Kazu Hirata 1cc7f5bede Use static_assert instead of assert (NFC)
Identified with misc-static-assert.
2022-07-23 09:22:27 -07:00
Simon Pilgrim 676a03d8a5 [X86] matchBinaryShuffle - limit SHUFFLE(X,Y) -> OR(X,Y) cases to where X + Y are the same width as the result
Minor bit of prep work toward not unnecessarily widening shuffle operands in combineX86ShufflesRecursively; instead, only widen in combineX86ShuffleChain if we actually find a match - see Issue #45319
2022-07-23 16:56:45 +01:00
Dmitri Gribenko aba43035bd Use llvm::sort instead of std::sort where possible
llvm::sort is beneficial even when we use the iterator-based overload,
since it can optionally shuffle the elements (to detect
non-determinism). However, llvm::sort is not usable everywhere, for
example in compiler-rt.

Reviewed By: nhaehnle

Differential Revision: https://reviews.llvm.org/D130406
2022-07-23 15:19:05 +02:00
Kjetil Kjeka ff1920d106 [NVPTX] Promote i24, i40, i48 and i56 to next power-of-two register when passing
Today llc will crash when attempting to use non-power-of-two integer types as
function arguments or returns. This patch enables passing non-standard integer
values in functions by promoting them before store and truncating after load.
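
A minimal IR sketch (mine, not from the patch): an i24 argument and return
value that llc previously crashed on, now promoted to i32 around the
store/load used to pass it:

  define i24 @add_i24(i24 %a, i24 %b) {
    %r = add i24 %a, %b
    ret i24 %r
  }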

The main motivation for implementing this change is that Rust casts small
structs (smaller than pointer size) into an integer of the same size. As an
example, if a struct contains three u8 fields then it will be passed as an
i24. This patch is a step towards enabling Rust compilation to PTX while
retaining the target-independent optimizations.

More context can be found in https://github.com/llvm/llvm-project/issues/55764

Differential Revision: https://reviews.llvm.org/D129291
2022-07-22 14:14:12 -07:00
Arnold Schwaighofer 58e6ee0e1f llvm.swift.async.context.addr cannot be modeled as NoMem because we don't want it to be CSE'd across async suspends
An async suspend models the split between two partial async functions.
`llvm.swift.async.context.addr ` will have a different value in the two
partial functions so it is not correct to generally CSE the instruction.
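
A hedged IR sketch (mine, not from the patch; the suspend point is elided
to a comment and the signature is sketched from memory) of why CSE would
be wrong:

  declare ptr @llvm.swift.async.context.addr()

  define void @f() {
    %a0 = call ptr @llvm.swift.async.context.addr()
    ; ... async suspend: execution resumes in another partial function ...
    %a1 = call ptr @llvm.swift.async.context.addr() ; may differ from %a0
    ret void
  }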

rdar://97336162

Differential Revision: https://reviews.llvm.org/D130201
2022-07-22 11:50:58 -07:00
Simon Pilgrim 939cf9b1be [AArch64] Use neon instructions for i64/i128 ISD::PARITY calculation
As noticed on D129765 and reported on Issue #56531, AArch64 targets can use the NEON ctpop + add-reduce instructions to speed up scalar ctpop instructions, but we fail to do this for parity calculations.

I'm not sure where the cutoff should be for specific CPUs, but i64 (+ i128 special case) shows a definite reduction in instruction count. i32 is about the same (but scalar <-> neon transfers are probably more costly?), and sub-i32 promotion looks to be a definite regression compared to parity expansion optimized for those widths.
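
A minimal IR sketch (mine, not from the patch) of how i64 parity is
formed from ctpop, which the NEON expansion now speeds up:

  declare i64 @llvm.ctpop.i64(i64)

  define i64 @parity_i64(i64 %x) {
    %c = call i64 @llvm.ctpop.i64(i64 %x)
    %p = and i64 %c, 1
    ret i64 %p
  }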

Differential Revision: https://reviews.llvm.org/D130246
2022-07-22 17:24:17 +01:00
Shubham Narlawar f55dbfbd9d [AArch64] Move SeparateConstOffsetFromGEPPass before LSR and enable EnableGEPOpt by default.
GEPs across basic blocks were not getting split because EnableGEPOpt was
turned off by default. Hence, EarlyCSE missed the opportunity to eliminate
common parts of GEPs (see the sketch below). This can be achieved by simply
turning the GEP pass on.
 - This patch moves SeparateConstOffsetFromGEPPass() just before LSR.
 - It enables EnableGEPOpt by default.
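
A minimal IR sketch (mine, not from the patch) of the split that exposes
the CSE opportunity:

  define void @store_off1(ptr %p, i64 %i) {
    %k = add i64 %i, 1
    %g = getelementptr inbounds i32, ptr %p, i64 %k
    store i32 0, ptr %g
    ret void
  }
  ; SeparateConstOffsetFromGEP splits %g roughly into
  ;   %v = getelementptr inbounds i32, ptr %p, i64 %i  ; variable part, CSE-able
  ;   %g = getelementptr inbounds i32, ptr %v, i64 1   ; constant part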

Resolves - https://github.com/llvm/llvm-project/issues/50528

Added a unit test.

Differential Revision: https://reviews.llvm.org/D128582
2022-07-22 15:20:53 +01:00
Petar Avramovic 8de1f04c77 [AMDGPU] gfx11 Fix VOP3 dot instructions
Fix src modifiers for operands with bf16 type.
op_sel[0:1] are ignored.

Differential Revision: https://reviews.llvm.org/D129084
2022-07-22 11:43:35 +02:00
Cullen Rhodes bf268a05cd [AArch64] Emit vector FP cmp when LE is used with fast-math
Reviewed By: paulwalker-arm

Differential Revision: https://reviews.llvm.org/D130093
2022-07-22 07:53:55 +00:00
Fangrui Song 9742166935 [LoongArch] Support load/store of dso_local PIC global values
lowerGlobalAddress added by D128427 can be used for PIC. The actual condition is
that the global value needs to be dso_local (a dso_preemptable one needs GOT
indirection).

load-store.ll has UB due to out-of-bounds load/store. Fix the UB in the variable
test and add an array test. Note: NOPIC array index is currently wrong.

Reviewed By: wangleiat

Differential Revision: https://reviews.llvm.org/D129977
2022-07-21 19:37:56 -07:00
Phoebe Wang 02fe96b240 [X86][FP16] Do not split FP64->FP16 to FP64->FP32->FP16
Truncation from double to half is not always identical to truncating to float first and then to half. https://godbolt.org/z/56s9517hd

On the other hand, expanding to float and then to double is always identical to expanding to double directly. https://godbolt.org/z/Ye8vbYPnY
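
A minimal IR sketch (mine, not from the patch) contrasting the two
lowerings:

  define half @trunc_direct(double %x) {
    %h = fptrunc double %x to half
    ret half %h
  }
  ; not equivalent for all inputs to the split version, which can
  ; round twice:
  ;   %f = fptrunc double %x to float
  ;   %h = fptrunc float %f to half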

Reviewed By: RKSimon, skan

Differential Revision: https://reviews.llvm.org/D130151
2022-07-22 08:36:05 +08:00
Ilia Diachkov b8e1544b9d [SPIRV] add SPIRVPrepareFunctions pass and update other passes
The patch adds the SPIRVPrepareFunctions pass, which modifies function
signatures containing aggregate arguments and/or return values before
IR translation. Information about the original signatures is stored in
metadata. It is used during call lowering to restore correct SPIR-V types
of function arguments and return values. This pass also substitutes some
LLVM intrinsic calls with function calls, generating the necessary functions
in the module, as the SPIRV translator does.

The patch also includes changes in other modules, fixing errors and
enabling many SPIR-V features that were omitted earlier. Fifteen LIT tests
are also added to demonstrate the new functionality.

Differential Revision: https://reviews.llvm.org/D129730

Co-authored-by: Aleksandr Bezzubikov <zuban32s@gmail.com>
Co-authored-by: Michal Paszkowski <michal.paszkowski@outlook.com>
Co-authored-by: Andrey Tretyakov <andrey1.tretyakov@intel.com>
Co-authored-by: Konrad Trifunovic <konrad.trifunovic@intel.com>
2022-07-22 04:00:48 +03:00
Craig Topper ab2348a6fa [RISCV] Add sext.b/h and zext.b/h/w to RISCVInstrInfo::foldMemoryOperandImpl.
We can always fold zext.b since it is just andi. The others require
Zba/Zbb.

Reviewed By: reames

Differential Revision: https://reviews.llvm.org/D130302
2022-07-21 14:54:58 -07:00
Martin Storsjö 606348cc72 [MinGW] Don't currently set visibility=hidden when building for MinGW
If we build the Target libraries with -fvisibility=hidden, then
LLVM_EXTERNAL_VISIBILITY must also be able to override it back
to default visibility.

Currently, the LLVM_EXTERNAL_VISIBILITY define is a no-op for
mingw targets, so we set CMAKE_CXX_VISIBILITY_PRESET correspondingly.

This unbreaks the mingw dylib build, if the compiler actually
takes hidden visibility into account (e.g. after D130121).

(Later, once hidden visibility can be used for MinGW targets, we
can make LLVM_EXTERNAL_VISIBILITY and LLVM_LIBRARY_VISIBILITY expand
to actual attributes, and reverse this commit.)

Differential Revision: https://reviews.llvm.org/D130200
2022-07-21 23:16:33 +03:00
David Sherwood f15b6b2907 [AArch64] Add target hook for preferPredicateOverEpilogue
This patch adds the AArch64 hook for preferPredicateOverEpilogue,
which currently returns true if SVE is enabled and one of the
following conditions (non-exhaustive) is met:

1. The "sve-tail-folding" option is set to "all", or
2. The "sve-tail-folding" option is set to "all+noreductions"
and the loop does not contain reductions,
3. The "sve-tail-folding" option is set to "all+norecurrences"
and the loop has no first-order recurrences.

Currently the default option is "disabled", but this will be
changed in a later patch.

I've added new tests to show the options behave as expected here:

  Transforms/LoopVectorize/AArch64/sve-tail-folding-option.ll

Differential Revision: https://reviews.llvm.org/D129560
2022-07-21 17:20:06 +01:00
Ivan Kosarev 4b9dbbdb09 [AMDGPU][MC][NFC] Refine SMEM load definitions.
Reviewed By: dp

Differential Revision: https://reviews.llvm.org/D130009
2022-07-21 14:56:56 +01:00
Ivan Kosarev 75950be836 [AMDGPU][NFC] Validate G_MERGE_VALUES as we match zero-extended 32-bit scalars.
Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D130001
2022-07-21 14:49:57 +01:00
Matt Arsenault 5a5439cb73 AMDGPU: Refine user-sgpr-init16-bug
It only applies to gfx1100 and gfx1102, and only for wave32.
2022-07-21 08:57:00 -04:00
Thomas Symalla fd64a857ee [AMDGPU] Combine s_or_saveexec, s_xor instructions.
This patch merges a consecutive sequence of

  s_or_saveexec s_o, s_i
  s_xor exec, exec, s_o

into a single

  s_andn2_saveexec s_o, s_i

instruction. This patch also cleans up the SIOptimizeExecMasking pass a bit.

Reviewed By: nhaehnle

Differential Revision: https://reviews.llvm.org/D129073
2022-07-21 14:16:37 +02:00
Matt Devereau cd3d7bf15d [AArch64][SVE] Add DAG-Combine to push bitcasts from floating point loads after DUPLANE128
This patch lowers
  duplane128(insert_subvector(undef, bitcast(op(128bitsubvec)), 0), 0)
to
  bitcast(duplane128(insert_subvector(undef, op(128bitsubvec), 0), 0)).

This enables floating-point loads to match patterns added in
https://reviews.llvm.org/D130010

Differential Revision: https://reviews.llvm.org/D130013
2022-07-21 11:00:10 +00:00
Matt Devereau e0fbd990c9 [AArch64][SVE] Add ISel pattern to lower DUPLANE128 to LD1RQD
Following on from https://reviews.llvm.org/D128902, lower DUPLANE128 to LD1RQD
for integer load types during instruction selection.

Differential Revision: https://reviews.llvm.org/D130010
2022-07-21 10:56:43 +00:00