Commit Graph

68207 Commits

Jay Foad 9383b09858 [AMDGPU][GlobalISel] Fix subtarget checks for combining to v_med3_i16
Differential Revision: https://reviews.llvm.org/D130243
2022-07-21 11:41:31 +01:00
Zi Xuan Wu (Zeson) 08db089124 [CSKY] Fix the testcase error due to the verifyInstructionPredicates
- Test cases for arches that only have 16-bit instructions, such as ck801/ck802, need
to be compiled with -mattr=+btst16
- Fix the GPR copy instruction with MOV16 for 16-bit-only arches.
2022-07-21 15:53:50 +08:00
Craig Topper add17fc8e4 [RISCV] Combine (select_cc (srl (and X, 1<<C), C), 0, eq/ne, true, false)
(srl (and X, 1<<C), C) is the form we receive for testing bit C.
An earlier combine removed the setcc so it wasn't there to match
when we created the SELECT_CC. This doesn't happen for BR_CC because
generic DAG combine rebuilds the setcc if it is used by BRCOND.

We can shift X left by XLen-1-C to put the bit to be tested in the
MSB, and use a signed compare with 0 to test the MSB.
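
A rough C sketch of the equivalence (assuming XLen=64; the function names are illustrative):

```
#include <assert.h>
#include <stdint.h>

/* Testing bit C of x: the masked form needs the constant 1<<C
   materialized, while shifting the bit into the MSB lets a signed
   compare with zero read it instead. */
static int bit_test_masked(uint64_t x, unsigned c) {
    return (x & ((uint64_t)1 << c)) != 0;
}

static int bit_test_msb(uint64_t x, unsigned c) {
    return (int64_t)(x << (63 - c)) < 0; /* shift left by XLen-1-C */
}

int main(void) {
    for (unsigned c = 0; c < 64; ++c)
        assert(bit_test_masked(0xA5A5A5A5DEADBEEFull, c) ==
               bit_test_msb(0xA5A5A5A5DEADBEEFull, c));
    return 0;
}
```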
2022-07-20 22:32:11 -07:00
Craig Topper 7dda6c71b1 [RISCV] Refactor the common combines for SELECT_CC and BR_CC into a helper function.
The only difference between the combines were the calls to getNode
that include the true/false values for SELECT_CC or the chain
and branch target for BR_CC.

Wrap the rest of the code into a helper that reads LHS, RHS, and
CC and outputs new values and a bool if a new node needs to be
created.
2022-07-20 21:18:07 -07:00
Craig Topper 8983db15a3 [RISCV] Optimize (brcond (seteq (and X, 1 << C), 0))
If C > 10, this will require a constant to be materialized for the
And. To avoid this, we can shift X left by XLen-1-C bits to put the
tested bit in the MSB, then we can do a signed compare with 0 to
determine if the MSB is 0 or 1. Thanks to @reames for the suggestion.

I've implemented this inside of translateSetCCForBranch which is
called when setcc+brcond or setcc+select is converted to br_cc or
select_cc during lowering. It doesn't make sense to do this for
general setcc since we lack a sgez instruction.

I've tested bits 10, 11, 31, 32, 63 and a couple of bits between 11 and 31
and between 32 and 63 for both i32 and i64 where applicable. Select
has some deficiencies where we receive (and (srl X, C), 1) instead.
This doesn't happen for br_cc due to the call to rebuildSetCC in the
generic DAGCombiner for brcond. I'll explore improving select in a
future patch.

Reviewed By: reames

Differential Revision: https://reviews.llvm.org/D130203
2022-07-20 18:40:49 -07:00
Craig Topper 31b8939ded [RISCV] Recognize bexti from (srl (and X, 1<<C), C).
This is the form we get for (zext (setne (and X, 1<<C))). We only
had bexti patterns for the alternative form (and (srl X, C), 1).
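
The two forms extract the same bit; a tiny C illustration (assuming XLen=64, names illustrative):

```
#include <stdint.h>

/* Both compute bit C of x; bexti implements this extract directly. */
uint64_t zext_setne_form(uint64_t x, unsigned c) { return (x & ((uint64_t)1 << c)) >> c; }
uint64_t srl_and_form(uint64_t x, unsigned c)    { return (x >> c) & 1; }
```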
2022-07-20 15:03:52 -07:00
Arthur Eubanks bc9b964f8f [NFC] Suppress unused variable warning in non-assert builds 2022-07-20 12:26:16 -07:00
Joe Nash dc850fbf3b [AMDGPU] NFC. Assert that mask is full with VOPC DPP
VOPC DPP should not be formed when the row_mask and bank_mask are not
0xf (full) because the resulting VOP DPP would have different semantics
than the MOV DPP followed by VOP. Existing checks in GCNDPPCombine cover
this case but for different reasons, so assert the property for
future-proofing.

Reviewed By: nhaehnle

Differential Revision: https://reviews.llvm.org/D130101
2022-07-20 13:23:03 -04:00
David Green 4704da1374 [ARM] Fix Thumb2 compare being emitted in ExpandCMP_SWAP
Given a patch like D129506, using instructions not valid for the current
target feature set becomes an error. This fixes an issue in
ARMExpandPseudo::ExpandCMP_SWAP where Thumb2 compares were used in
Thumb1Only code, such as thumbv8m.baseline targets.

Differential Revision: https://reviews.llvm.org/D129695
2022-07-20 12:04:22 +01:00
Alexandros Lamprineas 051738b08c Reland "[AArch64] Add a tablegen pattern for UZP2."
Converts concat_vectors((trunc (lshr)), (trunc (lshr))) to UZP2
when the shift amount is half the width of the vector element.

Prioritize the ADDHN(2), SUBHN(2) patterns over UZP2.
Fixes https://github.com/llvm/llvm-project/issues/52919

Differential Revision: https://reviews.llvm.org/D130061
2022-07-20 09:47:32 +01:00
David Sherwood 79660d339e [LoopVectorize][AArch64] Add TTI hook preferPredicatedReductionSelect
By default if SVE is enabled we want the select instruction used for
reductions to be inside the loop, rather than outside. This makes it
possible for the backend to fold the select into the operation to
produce a single predicated add, fadd, etc.

Differential Revision: https://reviews.llvm.org/D129763
2022-07-20 09:33:29 +01:00
Kazu Hirata 0387da6f4f Use value instead of getValue (NFC) 2022-07-19 21:18:26 -07:00
Haohai Wen d946fb8d95 [X86] Make sure load size is not larger than stack slot
Reviewed By: LuoYuanke

Differential Revision: https://reviews.llvm.org/D130084
2022-07-20 12:17:44 +08:00
chenglin.bi d337c1f256 [AArch64] Use SUBXrx64 for dynamic stack to refer to sp
When we lower the dynamic stack, we need to subtract the pattern `x15 << 4` from sp.
A subtract instruction with an arith shifted register (SUBXrs) can't refer to sp, so for now we need two extra movs like:

```
mov x0, sp
sub x0, x0, x15, lsl #4
mov sp, x0
```
If we want to refer to sp in subtract instruction like this:
```
sub	sp, sp, x15, lsl #4
```
We must use the arith extended register version (SUBXrx).
So in this patch, when we find a sub with sp as the src0 operand, we try to select SUBXrx64.

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D129932
2022-07-20 11:46:10 +08:00
Kazu Hirata 41ae78ea3a Use has_value instead of hasValue (NFC) 2022-07-19 20:15:44 -07:00
Sanjay Patel f0dd12ec5c [x86] use zero-extending load of a byte outside of loops too (2nd try)
The first attempt missed changing test files for tools
(update_llc_test_checks.py).

Original commit message:

This implements the main suggested change from issue #56498.
Using the shorter (non-extending) instruction with only
-Oz ("minsize") rather than -Os ("optsize") is left as a
possible follow-up.

As noted in the bug report, the zero-extending load may have
shorter latency/better throughput across a wide range of x86
micro-arches, and it avoids a potential false dependency.
The cost is an extra instruction byte.

This could cause perf ups and downs from secondary effects,
but I don't think it is possible to account for those in
advance, and that will likely also depend on exact micro-arch.
This does bring LLVM x86 codegen more in line with existing
gcc codegen, so if problems are exposed they are more likely
to occur for both compilers.

Differential Revision: https://reviews.llvm.org/D129775
2022-07-19 21:27:08 -04:00
Sanjay Patel 95401b0153 Revert "[x86] use zero-extending load of a byte outside of loops too"
This reverts commit 9d1ea1774c.
There are tests of update_llc_test_checks.py that missed being updated.
2022-07-19 17:37:22 -04:00
Johannes Doerfert bf789b1957 [Attributor] Replace AAValueSimplify with AAPotentialValues
For the longest time we used `AAValueSimplify` and
`genericValueTraversal` to determine "potential values". This was
problematic for many reasons:
- We recomputed the result a lot as there was no caching for the 9
  locations calling `genericValueTraversal`.
- We added the idea of "intra" vs. "inter" procedural simplification
  only as an afterthought. `genericValueTraversal` did offer an option
  but `AAValueSimplify` did not. Thus, we might end up with "too much"
  simplification in certain situations and then gave up on it.
- Because `genericValueTraversal` was not a real `AA` we ended up with
  problems like the infinite recursion bug (#54981) as well as code
  duplication.

This patch introduces `AAPotentialValues` and replaces the
`AAValueSimplify` uses with it. `genericValueTraversal` is folded into
`AAPotentialValues` as are the instruction simplifications performed in
`AAValueSimplify` before. We further distinguish "intra" and "inter"
procedural simplification now.

`AAValueSimplify` was not deleted as we haven't ported the
re-materialization of instructions yet. There are other differences from
the former handling; e.g., we may not fold trivially foldable
instructions right now (`add i32 1, 1` is not folded to `i32 2`),
but if an operand would be simplified to `i32 1` we would still fold it.

We are also even more aware of function/SCC boundaries in CGSCC passes,
which is good even if some tests look like they regress.

Fixes: https://github.com/llvm/llvm-project/issues/54981

Note: A previous version was flawed and consequently reverted in
      6555558a80.
2022-07-19 16:24:42 -05:00
Sanjay Patel 9d1ea1774c [x86] use zero-extending load of a byte outside of loops too
This implements the main suggested change from issue #56498.
Using the shorter (non-extending) instruction with only
-Oz ("minsize") rather than -Os ("optsize") is left as a
possible follow-up.

As noted in the bug report, the zero-extending load may have
shorter latency/better throughput across a wide range of x86
micro-arches, and it avoids a potential false dependency.
The cost is an extra instruction byte.

This could cause perf ups and downs from secondary effects,
but I don't think it is possible to account for those in
advance, and that will likely also depend on exact micro-arch.
This does bring LLVM x86 codegen more in line with existing
gcc codegen, so if problems are exposed they are more likely
to occur for both compilers.

Differential Revision: https://reviews.llvm.org/D129775
2022-07-19 16:43:47 -04:00
Yusra Syeda 6fb27bc2e3 [SystemZ][z/OS] Introduce CCAssignToRegAndStack to calling convention
Differential Revision: https://reviews.llvm.org/D127328
2022-07-19 13:55:25 -04:00
Jon Chesterfield 3a20597776 [amdgpu] Implement lds kernel id intrinsic
Implement an intrinsic for use when lowering LDS variables to different
addresses from different kernels. This will allow kernels that cannot
reach an LDS variable to avoid wasting space for it.

There are a number of implicit arguments accessed by intrinsics already,
so this implementation closely follows the existing handling. It is slightly
novel in that this SGPR is written by the kernel prologue.

It is necessary in the general case to put variables at different addresses
such that they can be compactly allocated, and thus necessary for an
indirect function call to have some means of determining where a
given variable was allocated. Claiming an arbitrary SGPR into which
an integer can be written by the kernel (in this implementation, based
on metadata associated with that kernel) and passing it on to
indirect call sites is sufficient to determine the variable address.

The intent is to emit a __const array of LDS addresses and index into it.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D125060
2022-07-19 17:46:19 +01:00
Jon Chesterfield 2224bbcd74 [nfc][amdgpu] LDS. Move selection logic up the stack. 2022-07-19 17:20:19 +01:00
Joe Nash b28bb8cc9c [AMDGPU] Remove old operand from VOPC DPP
For most DPP instructions, the old operand stores the value that was in
the current lane before the DPP operation, and is tied to the
destination. For VOPC DPP, this is unnecessary and incorrect.

There appears to have been a latent bug related to D122737 with
SIInstrInfo::isOperandLegal. If you checked if a register operand was legal
when the InstructionDesc expected an immediate, it reported that it was valid.
The fix is necessary for, and tested in, this patch.

Reviewed By: foad, rampitec

Differential Revision: https://reviews.llvm.org/D130040
2022-07-19 09:35:05 -04:00
David Green 6cb9529001 [ARM] Remove VBICimm if no cleared bits are demanded
If none of the bits of a VBICimm are demanded, we can remove the node
entirely using the input operand instead.

Differential Revision: https://reviews.llvm.org/D129966
2022-07-19 11:53:47 +01:00
Simon Pilgrim 0f6b0461b0 [DAG] SimplifyDemandedBits - relax "xor (X >> ShiftC), XorC --> (not X) >> ShiftC" to match only demanded bits
The "xor (X >> ShiftC), XorC --> (not X) >> ShiftC" fold is currently limited to the XOR mask being a shifted all-bits mask, but we can relax this to only need to match under the demanded bits.

This helps expose more bit extraction/clearing patterns and fixes the PowerPC testCompares*.ll regressions from D127115

Alive2: https://alive2.llvm.org/ce/z/fl7T7K

Differential Revision: https://reviews.llvm.org/D129933
2022-07-19 10:59:07 +01:00
Abinav Puthan Purayil 9fa425c1ab [AMDGPU] Set amdgpu-memory-bound if a basic block has dense global memory access
AMDGPUPerfHintAnalysis doesn't set the memory bound attribute if
FuncInfo::InstCost outweighs MemInstCost even if we have a basic block
with relatively high global memory access. GCNSchedStrategy could revert
optimal scheduling in favour of occupancy which seems to degrade
performance for some kernels. This change introduces the
HasDenseGlobalMemAcc metric in the heuristic that makes the analysis
more conservative in these cases.

This fixes SWDEV-334259/SWDEV-343932

Differential Revision: https://reviews.llvm.org/D129759
2022-07-19 15:16:28 +05:30
David Spickett 5d14873249 [llvm][AArch64] Add missing FPCR, H and B registers to Codeview mapping
Fixes https://github.com/llvm/llvm-project/issues/56484

H registers are 16 bit views of AArch64's Neon registers and
B are the 8 bit views.

msvc does not support 16 bit float (some mention in DirectX but I
couldn't find a way to get to it) so for lack of a better reference
I'm using:
85c9b41b33/server/references/dia/include/cvconst.h
(the other microsoft-pdb repo is no longer up to date)

Luckily clang does support fp16 so a test is added for that.

There is no 8 bit float type so I had to get creative with the
test case. We're not testing for correct debug info here just
that we can select the B register and not crash in the process.

For FPCR it's never going to be passed as an argument so I've
not added a test for it. It is included to keep our list looking
the same as the reference.

Reviewed By: majnemer

Differential Revision: https://reviews.llvm.org/D129774
2022-07-19 09:33:13 +00:00
Cullen Rhodes f7b2d4aac6 [AArch64] Add patterns to fold zext(cmpeq(x, splat(0)))
Reviewed By: paulwalker-arm

Differential Revision: https://reviews.llvm.org/D129626
2022-07-19 08:14:38 +00:00
Rosie Sumpter 05d424d165 [AArch64][SVE] Fold fadda(ptrue, x, select(mask, y, -0.0)) into fadda(mask, x, y)
This patch adds an SVE pattern to recognize the use of a select with an
fadda in the form fadda(ptrue, x, select(mask, y, -0.0)). In this case
the select can be folded away, with the select mask used as the
predicate for fadda. This improves the codegen when vectorizing loops
with ordered fp reductions.

Differential Revision: https://reviews.llvm.org/D129623
2022-07-19 08:31:51 +01:00
Bing1 Yu e01bf5a3e2 [X86] Promote v32f16 fadd into v32f32 fadd when targeting AVX512 without AVX512FP16
Reviewed By: pengfei

Differential Revision: https://reviews.llvm.org/D130059
2022-07-19 14:37:50 +08:00
Kazushi (Jam) Marukawa 469044cfd3 [VE] Support load/store/spill of vector mask registers
Support load/store/spill of vector mask registers and add regression
tests.

Reviewed By: efocht

Differential Revision: https://reviews.llvm.org/D129415
2022-07-19 10:29:21 +09:00
zhongyunde bddf20735e [AArch64][NFC] Setting the default of a subfeature to true is more readable
Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D129960
2022-07-19 09:00:00 +08:00
ksyx 3198364e6e [RISCV][Clang] Add support for Zmmul extension
This patch implements the recently ratified extension Zmmul, a subextension
of M (Integer Multiplication and Division) consisting of only the
multiplication part of it.

Differential Revision: https://reviews.llvm.org/D103313
Reviewed By: craig.topper, jrtc27, asb
2022-07-18 20:26:08 -04:00
Matt Arsenault 8d0383eb69 CodeGen: Remove AliasAnalysis from regalloc
This was stored in LiveIntervals, but not actually used for anything
related to LiveIntervals. It was only used in one check for if a load
instruction is rematerializable. I also don't think this was entirely
correct, since it was implicitly assuming constant loads are also
dereferenceable.

Remove this and rely only on the invariant+dereferenceable flags in
the memory operand. Set the flag based on the AA query upfront. This
should have the same net benefit, but has the possible disadvantage of
making this AA query nonlazy.

Preserve the behavior of assuming pointsToConstantMemory implying
dereferenceable for now, but maybe this should be changed.
2022-07-18 17:23:41 -04:00
Stanislav Mekhanoshin 523a99c0eb [AMDGPU] Support for gfx940 fp8 smfmac
Differential Revision: https://reviews.llvm.org/D129908
2022-07-18 12:12:41 -07:00
Stanislav Mekhanoshin 2695f0a688 [AMDGPU] Support for gfx940 fp8 mfma
Differential Revision: https://reviews.llvm.org/D129906
2022-07-18 11:49:56 -07:00
Stanislav Mekhanoshin 9fa5a6b7e8 [AMDGPU] Support for gfx940 fp8 conversions
Differential Revision: https://reviews.llvm.org/D129902
2022-07-18 11:48:43 -07:00
Mubariz Afzal c444f03787 Reland "[SystemZ][z/OS] Fix f32 variadic argument assertion"
This patch relands the fix for the f32 vararg assertion on z/OS that was previously reverted due to the testcase failing on non-z/OS platforms. It is now passing.

The tablegen lines that specify the XPLINK64 calling convention for promoting an f32 vararg to an f64 are effectively overwritten by the following tablegen line, which bitcasts an f64 vararg to an i64 (so that it can be used in the GPRs). Thus it becomes a bitcast from f32 to i64. We don't handle bitcasts for f32s, and so this causes an assertion to be thrown.

We fix this by simplifying the tablegen lines to explicitly show this behaviour, and allow the f32 in the bitcast case by first promoting it to an f64.
2022-07-18 14:25:17 -04:00
Craig Topper 0b02752899 [RISCV] Optimize (seteq (i64 (and X, 0xffffffff)), C1)
(and X, 0xffffffff) requires 2 shifts in the base ISA. Since we
know the result is being used by a compare, we can use a sext_inreg
instead of an AND if we also modify C1 to have 33 sign bits instead
of 32 leading zeros. This can also improve the generated code for
materializing C1.
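
A small C sketch of why the rewrite is sound (the constants are illustrative):

```
#include <assert.h>
#include <stdint.h>

int main(void) {
    uint64_t c1 = 0x80001000ull;            /* 32 leading zeros */
    int64_t c1_sext = (int64_t)(int32_t)c1; /* 33 sign bits     */
    uint64_t xs[] = {0xDEADBEEF80001000ull, 0x80001000ull, 0x12345ull};

    for (int i = 0; i < 3; ++i) {
        /* (and X, 0xffffffff) == C1: the mask costs two shifts... */
        int before = (xs[i] & 0xffffffffull) == c1;
        /* ...while sext_inreg(X, 32) == sext(C1) needs no mask.   */
        int after = (int64_t)(int32_t)xs[i] == c1_sext;
        assert(before == after);
    }
    return 0;
}
```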

Reviewed By: reames

Differential Revision: https://reviews.llvm.org/D129980
2022-07-18 10:54:45 -07:00
Igor Kudrin 32eed8828e Reapply "[NVPTX] Use the mask() operator to initialize packed structs with pointers"
The original patch revealed an issue of reading incorrect values on BE hosts.
That is now changed to use `endian::read32le()` and `endian::read64le()`.

Original commit message:

The current implementation assumes that all pointers used in the
initialization of an aggregate are aligned according to the pointer size
of the target; that might not be so if the object is packed. In that
case, an array of .u8 should be used and pointers should be decorated
with the mask() operator.

The operator was introduced in PTX ISA 7.1, so an error is issued if the
case is detected for an earlier version.

Differential Revision: https://reviews.llvm.org/D127504
2022-07-18 20:56:26 +04:00
Craig Topper 7c0b9b379b [RISCV] Add isel patterns for ineg+setge/le/uge/ule.
setge/le/uge/ule selected by themselves require an xori with 1.
If we're negating the setcc, we can fold the xori with the neg
to create an addi with -1.

This works because xori X, 1 is equivalent to 1 - X if X is either
0 or 1. So we're doing -(1 - X), which is X - 1, i.e. X + (-1).
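
A quick C check of that arithmetic (x restricted to 0 and 1, as a setcc result always is):

```
#include <assert.h>

int main(void) {
    for (int x = 0; x <= 1; ++x) {
        assert((x ^ 1) == 1 - x);  /* xori X, 1 computes 1 - X...        */
        assert(-(x ^ 1) == x - 1); /* ...so neg of it is X-1: addi X, -1 */
    }
    return 0;
}
```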

This improves the code for selecting between 0 and -1 based on a
condition for some conditions.

Reviewed By: asb

Differential Revision: https://reviews.llvm.org/D129957
2022-07-18 09:55:01 -07:00
Craig Topper d7f2a63371 [RISCV] Fold stack reload into sext.w by using lw instead of ld.
We can use lw to load 4 bytes from the stack and sign extend them
instead of loading all 8 bytes.

Reviewed By: asb

Differential Revision: https://reviews.llvm.org/D129948
2022-07-18 09:09:17 -07:00
Igor Kudrin 1e451369d2 Revert "[NVPTX] Use the mask() operator to initialize packed structs with pointers"
The new test fails on BE hosts.

This reverts commit 04e978ccba.
2022-07-18 20:08:39 +04:00
Petar Avramovic c287bc4841 [AMDGPU][MC][GFX11] AsmParser for op_sel for VOP3 dpp opcodes
Parse op_sel for *_e64_dpp VOP3 opcodes.
Depends on D129637 and setting of VOP3_OPSEL in dpp pseudos.

Differential Revision: https://reviews.llvm.org/D129767
2022-07-18 15:08:52 +02:00
Simon Pilgrim 4f10516325 [AArch64] isDesirableToCommuteWithShift - add explicit ShiftLHS variable instead of altering an incoming argument variable
As discussed on D129995, altering the 'N' variable to point to shift's source value was confusing.
2022-07-18 13:28:07 +01:00
Simon Pilgrim 259c36e7c1 [DAG] Add asserts to isDesirableToCommuteWithShift overrides to ensure it's being called from a shift. NFC. 2022-07-18 13:11:24 +01:00
Simon Pilgrim 07ab0cb4e7 [DAG] Add missing asserts to shouldFoldConstantShiftPairToMask overrides to ensure a shl/srl pair is used. NFC. 2022-07-18 13:11:23 +01:00
Benjamin Kramer 9234a7c0df [X86][FP16] Don't crash when lowering SELECT on fp16 vectors
This is a regression from f187948162
2022-07-18 13:41:00 +02:00
Igor Kudrin 04e978ccba [NVPTX] Use the mask() operator to initialize packed structs with pointers
The current implementation assumes that all pointers used in the
initialization of an aggregate are aligned according to the pointer size
of the target; that might not be so if the object is packed. In that
case, an array of .u8 should be used and pointers should be decorated
with the mask() operator.

The operator was introduced in PTX ISA 7.1, so an error is issued if the
case is detected for an earlier version.

Differential Revision: https://reviews.llvm.org/D127504
2022-07-18 04:08:59 -07:00
Igor Kudrin e27f9214c0 [NVPTX][NFC] Simplify printing initialization of aggregates
This simplifies NVPTXAsmPrinter::AggBuffer and its usage.
It is also a preparation for D127504.

Differential Revision: https://reviews.llvm.org/D129773
2022-07-18 04:08:59 -07:00
Ivan Kosarev 432cbd7827 [AMDGPU][CodeGen] Support (register + immediate) SMRD offsets.
Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D129381
2022-07-18 11:29:31 +01:00
Ivan Kosarev 9c66c02e2e [AMDGPU][CodeGen] Match SMRDs with constant bases and register offsets.
Saves some add instructions on a couple Rage 2 shaders and is also a
prerequisite for a coming-soon change matching (register + immediate)
offsets.

Reviewed By: foad, arsenm

Differential Revision: https://reviews.llvm.org/D129095
2022-07-18 11:18:23 +01:00
esmeyi 28b1ba1c07 [PowerPC] Add an ISEL pattern for i32 MULLI.
We added the following ISEL pattern for i64 immediates in D87384; this patch adds the same for i32.
`mul with (2^N * int16_imm) -> MULLI + RLWINM`
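
For instance, with the illustrative constant 96 = 3 * 2^5, sketched in C:

```
#include <assert.h>

int main(void) {
    int x = 12345;
    /* Multiply by the small immediate first (MULLI), then shift left
       by N (an RLWINM form): x * (2^N * imm) == (x * imm) << N. */
    assert(x * 96 == (x * 3) << 5);
    return 0;
}
```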

Reviewed By: shchenz

Differential Revision: https://reviews.llvm.org/D129708
2022-07-18 04:40:51 -04:00
Abinav Puthan Purayil d96361d714 [AMDGPU] Add the uses_dynamic_stack field to the kernel descriptor and the kernel metadata map
This change introduces the dynamic stack boolean field to code-object-v3
and above under the code properties of the kernel descriptor and under
the kernel metadata map of NT_AMDGPU_METADATA. This field corresponds to
the is_dynamic_callstack field of amd_kernel_code_t.

Differential Revision: https://reviews.llvm.org/D128344
2022-07-18 10:07:13 +05:30
jacquesguan 2b11174079 [RISCV][NFC] Use more Arrayref in TargetLowering functions.
This patch replaces some foreach loops with ArrayRef, and replaces some repeated literal arrays with a variable.

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D125656
2022-07-18 03:33:45 +00:00
jacquesguan bd228a1772 [RISCV] Extend use of SHXADD instructions in RVV spill/reload code.
This patch extends D124824. It uses SHXADD+SLLI to emit 3, 5, or 9 multiplied by a power of 2.

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D129179
2022-07-18 10:53:19 +08:00
Kazu Hirata 7094ab4ee7 [llvm] Modernize bool literals (NFC)
Identified with modernize-use-bool-literals.
2022-07-17 18:08:51 -07:00
Kazu Hirata 1dc8038dad [AVR] Remove redundant void (NFC)
Identified with modernize-redundant-void-arg.
2022-07-17 18:08:50 -07:00
Fangrui Song d955497112 [RISCV] Simplify lowerGlobalAddress. NFC 2022-07-17 15:42:45 -07:00
Kazu Hirata 8dfdb80f72 Ensure newlines at the end of files (NFC) 2022-07-17 15:37:45 -07:00
David Green cb806ce2aa [ARM] Guard VMOVH and VINS patterns.
These instructions are only available when fp is available, so cannot be
used with just +mve. Add predicates to ensure we fall-back under the
right circumstances.
2022-07-17 21:26:49 +01:00
Craig Topper decf385c27 [RISCV] Teach targetShrinkDemandedConstant to handle OR and XOR.
We were only handling AND before, but SimplifyDemandedBits can
also call it for OR and XOR.
2022-07-17 12:36:33 -07:00
Neumann Hon e8f9a74fbf [SystemZ][z/OS] Implement detection and handling for XPLink Leaf procedures.
This PR adds support for creating leaf functions when no CSRs are used, no function calls are made, no stack frame is acquired, and no try/catch/throw statements are present.

Reviewed By: uweigand

Differential Revision: https://reviews.llvm.org/D129687
2022-07-17 14:30:33 -04:00
Craig Topper 8cc483099a [RISCV] Teach RISCVCodeGenPrepare to optimize (i64 (and (zext/sext (i32 X), C1)))
If X is known positive by a dominating condition, we can fill in
ones in the upper bits of C1 if that would allow it to become a
simm12, allowing the use of ANDI.
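
A C sketch of the idea; the constant 0xFFFFFFF0 is illustrative (once the upper bits are filled with ones it becomes -16, which fits in a simm12):

```
#include <assert.h>
#include <stdint.h>

int main(void) {
    int32_t x = 0x12345678;                 /* known positive i32    */
    uint64_t wide = (uint64_t)(uint32_t)x;  /* widened; upper bits 0 */

    uint64_t c1 = 0xFFFFFFF0ull;            /* not a simm12 as-is    */
    int64_t filled = (int64_t)(c1 | 0xFFFFFFFF00000000ull); /* == -16 */

    /* Filling ones into the upper bits of C1 is harmless because the
       upper bits of the widened X are known to be zero. */
    assert((wide & c1) == (wide & (uint64_t)filled));
    return 0;
}
```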

This pattern often occurs in unrolled loops where the induction
variable has been widened.

To get the best benefit from this, I had to move the pass above
ConstantHoisting which is in addIRPasses. Otherwise the AND constant
is often hoisted away from the AND.

Reviewed By: asb

Differential Revision: https://reviews.llvm.org/D129888
2022-07-17 11:00:56 -07:00
Craig Topper 73f766ca9a [RISCV] Remove unnecessary use of IRBuilder from RISCVCodeGenPrepare.
We're creating a single instruction to replace another instruction.
We can insert using the InsertBefore operand of the constructor.
Then copy the debug location.
2022-07-17 10:59:54 -07:00
Craig Topper ee6267c443 [RISCV] Remove Gather/Scatter Opt from the O0 pipeline. 2022-07-17 10:58:33 -07:00
Carl Ritson 547e3cba7d [AMDGPU] Improve liveness copying in si-optimize-exec-masking-pre-ra
Further improve liveness copying for CC register post optimization
by mirroring live internal splits.
This fixes a bug in register allocation when CC register liveness
is extended across branches instead of being split.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D129557
2022-07-17 17:34:05 +09:00
Kazu Hirata deac0ac523 [AMDGPU] Use default member initialization (NFC)
Identified with modernize-use-default-member-init.
2022-07-16 12:44:35 -07:00
Kazu Hirata 6cbfffb3a3 [AMDGPU] Declare TableRef in terms of ArrayRef (NFC) 2022-07-16 10:56:20 -07:00
Simon Pilgrim a5d0122f75 [DAG] Canonicalize non-inlane shuffle -> AND if all non-inlane referenced elements are known zero
As mentioned on D127115, this patch attempts to recognise shuffle masks that could be simplified to an AND mask - we already have a similar transform that will fold AND -> 'clear mask' shuffle, but this patch handles cases where the referenced elements are not from the same lane indices but are known to be zero.

Differential Revision: https://reviews.llvm.org/D129150
2022-07-16 11:38:24 +01:00
Kazu Hirata 5605a1eedd Use drop_begin (NFC) 2022-07-15 23:58:11 -07:00
Phoebe Wang f187948162 [X86][FP16] Enable vector support for FP16 emulation
This is a follow-up of D107082, which enables vector support according to the psABI.

Reviewed By: skan

Differential Revision: https://reviews.llvm.org/D127982
2022-07-16 09:38:58 +08:00
Jon Chesterfield eda2bcad02 [nfc][amdgpu] Remove dead variable and function 2022-07-15 23:56:43 +01:00
Vang Thao 67357739c6 [AMDGPU] Add remarks to output some resource usage
Add analysis remarks to output kernel name, register usage, occupancy,
scratch usage, spills, and LDS information.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D123878
2022-07-15 11:01:53 -07:00
Dmitry Preobrazhensky 185c36de73 [AMDGPU][MC][NFC] Remove unnecessary code
Differential Revision: https://reviews.llvm.org/D129766
2022-07-15 13:17:36 +03:00
Dmitry Preobrazhensky 2a6532d542 [AMDGPU][MC][GFX11] Correct disassembly of *_e64_dpp opcodes which support op_sel
These opcodes cannot be disassembled because the op_sel operand is missing - it must be added manually.
See https://github.com/llvm/llvm-project/issues/56512 for detailed issue analysis.

Differential Revision: https://reviews.llvm.org/D129637
2022-07-15 13:11:59 +03:00
Fangrui Song 3c849d0aef Modernize Optional::{getValueOr,hasValue} 2022-07-15 01:20:39 -07:00
Lian Wang dca821d80a [RISCV] Add cost model for vector.reverse mask operation
Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D128784
2022-07-15 06:58:57 +00:00
Rainer Orth 09c1ee1086 [Sparc] Don't claim JIT support on SPARC for now
Until D118450 <https://reviews.llvm.org/D118450> lands, there's no JIT
support on SPARC, but the backend claims otherwise, leading to various
testsuite failures.

This patch corrects this.

Tested on `sparcv9-sun-solaris2.11`.

Differential Revision: https://reviews.llvm.org/D129349
2022-07-15 08:18:40 +02:00
Phoebe Wang 190518da4b [X86] Use generic tuning for "x86-64" if "tune-cpu" is not specified
This is an alternative to D129154. See discussions on https://discourse.llvm.org/t/fast-scalar-fsqrt-tuning-in-x86/63605

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D129647
2022-07-15 10:05:08 +08:00
Craig Topper 79016f6eef [RISCV] Refine the heuristics for our custom (mul (and X, C2), C1) isel.
Prefer to use SLLI instead of zext.w/zext.h in more cases. SLLI
might be better for compression.
2022-07-14 18:24:10 -07:00
Craig Topper bc0d656558 [RISCV] Fix mistake in RISCVTTIImpl::getIntImmCostInst.
zext.w requires Zba not Zbb. The test was also wrong, but had the
correct comment.
2022-07-14 16:42:35 -07:00
jeff 8a12f20ef7 [AMDGPU] Update the mechanism used to check for cycles and add edges in power-sched mutation 2022-07-14 16:24:13 -07:00
Craig Topper 9913ea490a [RISCV] Make TuneSiFive7 depend on TuneNoDefaultUnroll instead of listing it for every SiFive7 CPU 2022-07-14 15:57:30 -07:00
Alexander Timofeev 2e29b0138c [AMDGPU] Lowering VGPR to SGPR copies to v_readfirstlane_b32 if profitable.
Since the divergence-driven instruction selection has been enabled for AMDGPU,
all the uniform instructions are expected to be selected to SALU form, except those not having one.
VGPR to SGPR copies appear in MIR to connect value producers and consumers. This change implements an algorithm
that strikes a reasonable tradeoff between the profit achieved from keeping the uniform instructions in SALU form
and the overhead introduced by the data transfer between the VGPRs and SGPRs.

Reviewed By: rampitec

Differential Revision: https://reviews.llvm.org/D128252
2022-07-14 23:59:02 +02:00
Craig Topper 1a8468ba61 [RISCV] Add a RISCV specific CodeGenPrepare pass.
Initial optimization is to convert (i64 (zext (i32 X))) to
(i64 (sext (i32 X))) if the dominating condition for the basic block
guaranteed the sign bit of X is zero.

This frequently occurs in loop preheaders where a signed induction
variable that can never be negative has been widened. There will be
a dominating check that the 32-bit trip count isn't negative or zero.
The check here is not restricted to that specific case though.

A i32->i64 sext is cheaper than zext on RV64 without the Zba
extension. Later optimizations can often remove the sext from the
preheader basic block because the dominating block also needs a sext to
evaluate the greater than 0 check.
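
The underlying identity, as a minimal C sketch:

```
#include <assert.h>
#include <stdint.h>

int main(void) {
    /* If a dominating condition proves x >= 0, zext and sext agree,
       and the i32->i64 sext is the cheaper one on RV64 without Zba. */
    int32_t x = 41; /* sign bit known zero */
    assert((int64_t)x == (int64_t)(uint32_t)x);
    return 0;
}
```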

Reviewed By: asb

Differential Revision: https://reviews.llvm.org/D129732
2022-07-14 10:20:59 -07:00
Fraser Cormack d1a5669f5e [RISCV] Disable subregister liveness by default
We previously enabled subregister liveness by default when compiling
with RVV. This has been shown to cause miscompilations where RVV
register operand constraints are not met. A test was added for this in
D129639 which explains the issue in more detail.

Until this issue is fixed in some way, we should not be enabling
subregister liveness unless the user asks for it.

Reviewed By: craig.topper, rogfer01, kito-cheng

Differential Revision: https://reviews.llvm.org/D129646
2022-07-14 17:04:10 +01:00
Nikita Popov dcf4b733ef [SCEVExpander] Make CanonicalMode handing in isSafeToExpand() more robust (PR50506)
isSafeToExpand() for addrecs depends on whether the SCEVExpander
will be used in CanonicalMode. At least one caller currently gets
this wrong, resulting in PR50506.

Fix this by a) making the CanonicalMode argument on the freestanding
functions required and b) adding member functions on SCEVExpander
that automatically take the SCEVExpander mode into account. We can
use the latter variant nearly everywhere, and thus make sure that
there is no chance of CanonicalMode mismatch.

Fixes https://github.com/llvm/llvm-project/issues/50506.

Differential Revision: https://reviews.llvm.org/D129630
2022-07-14 14:41:51 +02:00
Jay Foad e45aa230ad [AMDGPU] Update LiveVariables after killing an immediate def
D114999 added code to kill an immediate def if it was folded into its
only use by convertToThreeAddress. This patch updates LiveVariables when
that happens in order to fix verification failures exposed by D129213.

Differential Revision: https://reviews.llvm.org/D129661
2022-07-14 10:49:41 +01:00
Cullen Rhodes 055b409cea [AArch64][NFC] Drop 'V' from ASIMD FP convert, other, D/Q-form regex
In the Cortex A57 Optimization Guide [1] VCVTAU (AArch32) is incorrectly
listed in the AArch64 instructions for instruction groups:

  - ASIMD FP convert, other, D-form
  - ASIMD FP convert, other, Q-form

It's meant to be FCVTAU; this will be fixed in future releases of the guide.

[1] https://developer.arm.com/documentation/uan0015/b
2022-07-14 09:32:20 +00:00
Weining Lu 9b87ad33c1 [LoongArch] Implement OR combination to generate bstrins.w/d
Differential Revision: https://reviews.llvm.org/D129357
2022-07-14 17:20:43 +08:00
David Green 3e0bf1c7a9 [CodeGen] Move instruction predicate verification to emitInstruction
D25618 added a method to verify the instruction predicates for an
emitted instruction, through verifyInstructionPredicates added into
<Target>MCCodeEmitter::encodeInstruction. This is a very useful idea,
but the implementation inside MCCodeEmitter made it only fire for object
files, not assembly which most of the llvm test suite uses.

This patch moves the code into the <Target>_MC::verifyInstructionPredicates
method, inside the InstrInfo. This allows it to be called from other
places, such as in this patch where it is called from the
<Target>AsmPrinter::emitInstruction methods which should trigger for
both assembly and object files. It can also be called from other places
such as verifyInstruction, but that is not done here (it tends to catch
errors earlier, but in reality just shows all the mir tests that have
incorrect feature predicates). The interface was also simplified
slightly, moving computeAvailableFeatures into the function so that it
does not need to be called externally.

The ARM, AMDGPU (but not R600), AVR, Mips and X86 backends all currently
show errors in the test-suite, so have been disabled with FIXME
comments.

Recommitted with some fixes for the leftover MCII variables in release
builds.

Differential Revision: https://reviews.llvm.org/D129506
2022-07-14 09:33:28 +01:00
Jannik Silvanus e5c4cde451 [AMDGPU] SIMachineScheduler: Add support for several MachineScheduler features
The SI machine scheduler inherits from ScheduleDAGMI.
This patch adds support for a few features that are implemented
in ScheduleDAGMI (or its base classes) that were missing so far
because their support is implemented in overridden functions.

* Support cl::opt -view-misched-dags
  This option allows to open a graphical window of the scheduling DAG.

* Support cl::opt -misched-print-dags
  This option allows to print the scheduling DAG in text form.

* After constructing the scheduling DAG, call postprocessDAG()
  to apply any registered DAG mutations.
  Note that currently there are no mutations defined in AMDGPUTargetMachine.cpp
  in case SIScheduler is used.
  Still add this to avoid surprises in the future in case mutations are added.

Differential Revision: https://reviews.llvm.org/D128808
2022-07-14 09:45:31 +02:00
Kazu Hirata 611ffcf4e4 [llvm] Use value instead of getValue (NFC) 2022-07-13 23:11:56 -07:00
owenca 18a910dfba [llvm] Make lib/Target/BPF/BTF.h self-contained 2022-07-13 20:54:26 -07:00
Zi Xuan Wu (Zeson) 033324db6f [CSKY] Fix the br target operand type in td
The br target operand should be of type Operand<OtherVT> instead of Operand<iPTR>.
2022-07-14 11:27:31 +08:00
Yonghong Song 6e6c1efe04 [BPF] Handle anon record for CO-RE relocations
When doing experiments in the kernel, for the kernel data structure sockptr_t
in a CO-RE operation, I hit an assertion error. The sockptr_t definition
and usage look like below:
  #pragma clang attribute push (__attribute__((preserve_access_index)), apply_to = record)
  typedef struct {
        union {
                void    *kernel;
                void    *user;
        };
        unsigned is_kernel : 1;
  } sockptr_t;
  #pragma clang attribute pop
  int test(sockptr_t *arg) {
    return arg->is_kernel;
  }
The assertion error looks like
  clang: ../lib/Target/BPF/BPFAbstractMemberAccess.cpp:878: llvm::Value*
   {anonymous}::BPFAbstractMemberAccess::computeBaseAndAccessKey(llvm::CallInst*,
   {anonymous}::BPFAbstractMemberAccess::CallInfo&, std::__cxx11::string&,
   llvm::MDNode*&): Assertion `TypeName.size()' failed.

In this particular case, the clang frontend attaches the debuginfo metadata associated
with the anon structure to the preserve_access_index IR intrinsic. But the first
debuginfo type has to be a named type so libbpf can have a sound start to
do CO-RE relocation.

Besides the above approach of using a pragma to push the attribute, the below typedef/struct
definition can have preserve_access_index directly applied to the anon struct.
  typedef struct {
        union {
                void    *kernel;
                void    *user;
        };
        unsigned is_kernel : 1;
  } __attribute__((preserve_access_index)) sockptr_t;

This patch fixes the issue by preprocessing function argument/return types
and local variable types used by other CO-RE intrinsics. For any
   typedef struct/union { ... } typedef_name
an association of <anon struct/union, typedef> is recorded to replace
the IR intrinsic metadata 'anon struct/union' to 'typedef'.
It is possible that two different 'typedef' types may have identical
anon struct/union type. For such a case, the association will be
<anon struct/union, nullptr> to indicate the invalid case.

Differential Revision: https://reviews.llvm.org/D129621
2022-07-13 15:16:16 -07:00
Craig Topper 257755530a [RISCV] Fold (sra (sext_inreg (shl X, C1), i32), C2) -> (sra (shl X, C1+32), C2+32).
The former pattern will select as slliw+sraiw while the latter
will select as slli+srai. This can enable the slli+srai to be
compressed.
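
A C sketch of the equivalence for XLen=64 (C1 and C2 illustrative, both < 32; >> of a negative int64_t is assumed arithmetic, as on common targets):

```
#include <assert.h>
#include <stdint.h>

int main(void) {
    uint64_t x = 0x00000000F00DF00Dull;
    unsigned c1 = 3, c2 = 7;

    /* (sra (sext_inreg (shl X, C1), i32), C2) -- slliw+sraiw */
    int64_t a = (int64_t)(int32_t)(x << c1) >> c2;
    /* (sra (shl X, C1+32), C2+32) -- slli+srai, compressible */
    int64_t b = (int64_t)(x << (c1 + 32)) >> (c2 + 32);

    assert(a == b);
    return 0;
}
```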

Differential Revision: https://reviews.llvm.org/D129688
2022-07-13 14:34:17 -07:00
Philip Reames dde2a7fb6d [RISCV] Exploit fact that vscale is always power of two to replace urem sequence
When doing scalable vectorization, the loop vectorizer uses a urem in the computation of the vector trip count. The RHS of that urem is a (possibly shifted) call to @llvm.vscale.

vscale is effectively the number of "blocks" in the vector register. (That is, types such as <vscale x 8 x i8> and <vscale x 1 x i8> both fill one 64 bit block, and vscale is essentially how many of those blocks there are in a single vector register at runtime.)

We know from the RISCV V extension specification that VLEN must be a power of two between ELEN and 2^16. Since our block size is 64 bits, there must be a power-of-two number of blocks. (For everything other than VLEN<=32, but that's already broken.)

It is worth noting that the AArch64 SVE specification explicitly allows non-power-of-two sizes for the vector registers and thus can't claim that vscale is a power of two by this logic.
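
A C sketch of the rewrite this enables (the vscale value is illustrative):

```
#include <assert.h>
#include <stdint.h>

int main(void) {
    uint64_t vscale = 4;      /* always a power of two on RISC-V V */
    uint64_t vf = vscale * 8; /* (possibly shifted) vscale         */
    uint64_t n = 1234;        /* scalar trip count                 */

    /* Because vf is a power of two, the urem in the vector
       trip-count computation can become a cheap mask. */
    assert(n % vf == (n & (vf - 1)));
    return 0;
}
```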

Differential Revision: https://reviews.llvm.org/D129609
2022-07-13 10:54:47 -07:00
David Green 95252133e1 Revert "Move instruction predicate verification to emitInstruction"
This reverts commit e2fb8c0f4b as it does
not build for Release builds, and some buildbots are giving more warnings
than I saw locally. Reverting to fix those issues.
2022-07-13 13:28:11 +01:00