Commit Graph

6750 Commits

Author SHA1 Message Date
Nikita Popov 3ed643ea76 [AMDGPUPromoteAlloca] Make compatible with opaque pointers
This mainly changes the handling of bitcasts to not check the types
being cast from/to -- we should only care about the actual
load/store types. The GEP handling is also changed to not care about
types, and just make sure that we get an offset corresponding to
a vector element.

This was a bit of a struggle for me, because this code seems to be
pretty sensitive to small changes. The end result seems to produce
strictly better results for the existing test coverage though,
because we can now deal with more situations involving bitcasts.

Differential Revision: https://reviews.llvm.org/D121371
2022-03-11 09:20:51 +01:00
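
A hedged C++ sketch of the type-free GEP handling described above; the
helper name and surrounding structure are illustrative, not the actual
patch:

#include "llvm/ADT/APInt.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/Operator.h"
using namespace llvm;

// Instead of inspecting pointee types, accumulate the GEP's constant byte
// offset and require it to land exactly on an element of the vector type
// chosen for the promoted alloca.
static bool gepIndexesVectorElement(const GEPOperator *GEP,
                                    FixedVectorType *VecTy,
                                    const DataLayout &DL, unsigned &EltIdx) {
  unsigned BW = DL.getIndexTypeSizeInBits(GEP->getType());
  APInt Offset(BW, 0);
  if (!GEP->accumulateConstantOffset(DL, Offset))
    return false; // non-constant offset: cannot promote
  uint64_t EltSize = DL.getTypeAllocSize(VecTy->getElementType());
  if (EltSize == 0 || Offset.urem(EltSize) != 0)
    return false; // offset falls inside an element
  EltIdx = Offset.udiv(EltSize).getZExtValue();
  return EltIdx < VecTy->getNumElements();
}
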
Joe Nash c8e6d68a9f [AMDGPU] Use subreg encoding instead of reassign
The HWEncoding for these 64-bit registers should be the same as the
encoding for the previously defined low halves of the registers. So
reuse that value instead of repeating the assignment. NFC.

Reviewed By: foad

Differential Revision: https://reviews.llvm.org/D121391
2022-03-10 12:50:29 -05:00
Nico Weber a278250b0f Revert "Cleanup codegen includes"
This reverts commit 7f230feeea.
Breaks CodeGenCUDA/link-device-bitcode.cu in check-clang,
and many LLVM tests, see comments on https://reviews.llvm.org/D121169
2022-03-10 07:59:22 -05:00
alex-t d159b4444c [AMDGPU] Enable divergence predicates for negative inline constant subtraction
We have a pattern that undoes the sub x, c -> add x, -c canonicalization, since c is more likely
to be an inline immediate than -c. This patch enables it to select the scalar or vector subtraction by the input node's divergence.

Reviewed By: foad

Differential Revision: https://reviews.llvm.org/D121360
2022-03-10 15:03:22 +03:00
serge-sans-paille 7f230feeea Cleanup codegen includes
Preprocessed lines to build llvm on my setup:
after:  1061034926
before: 1063332844

Differential Revision: https://reviews.llvm.org/D121169
2022-03-10 10:00:30 +01:00
Changpeng Fang 0f20a35b9e AMDGPU: Set up User SGPRs for queue_ptr only when necessary
Summary:
  In general, we need queue_ptr for aperture bases and trap handling,
and user SGPRs have to be set up to hold queue_ptr. In the current implementation,
user SGPRs are set up unnecessarily for some cases. If the target has aperture
registers, queue_ptr is not needed to reference aperture bases. For trap
handling, if the target supports getDoorbellID, queue_ptr is also not necessary.
Further, code object version 5 introduces a new kernel ABI which passes queue_ptr
as an implicit kernel argument, so user SGPRs are no longer necessary for
queue_ptr. Based on the trap handling document:
https://llvm.org/docs/AMDGPUUsage.html#amdgpu-trap-handler-for-amdhsa-os-v4-onwards-table,
llvm.debugtrap does not need queue_ptr, so we remove queue_ptr support for
llvm.debugtrap in the backend.

Reviewers: sameerds, arsenm

Fixes: SWDEV-307189

Differential Revision: https://reviews.llvm.org/D119762
2022-03-09 10:14:05 -08:00
Stanislav Mekhanoshin 33fb23f728 [AMDGPU] Merge flat with global in the SILoadStoreOptimizer
Flat can be merged with flat global since the address cast is a no-op.
A combined memory operation needs to be promoted to flat.

Differential Revision: https://reviews.llvm.org/D120431
2022-03-09 10:04:37 -08:00
Jay Foad c7218164c4 [AMDGPU] Remove HasAtomicFaddInstsGFX90X and HasAtomicFaddInstsGFX940
These compound predicates are not required, since we can use a
combination of setting the SubtargetPredicate (to a subtarget
predicate like isGFX940Plus) and OtherPredicates (to a list of feature
predicates like HasAtomicFaddInsts) instead. NFC.

Differential Revision: https://reviews.llvm.org/D121289
2022-03-09 18:02:21 +00:00
Vang Thao 28322c2514 [AMDGPU] Add scheduler pass to rematerialize trivial defs
Add a new pass in the pre-ra AMDGPU scheduler to check if sinking trivially
rematerializable defs that only have one use outside of the defining block
will increase occupancy. If we can determine that occupancy can be increased,
then rematerialize only the minimum number of defs required to increase
occupancy. Also re-schedule all regions that had occupancy matching the
previous minimum occupancy using the new occupancy.

This is based off of the discussion in https://reviews.llvm.org/D117562.

The logic to determine which defs we should collect and whether sinking would
be beneficial is mostly the same. The main differences are that we no longer
limit it to immediate defs and that the def and use do not have to be part of
a loop.

Reviewed By: rampitec

Differential Revision: https://reviews.llvm.org/D119475
2022-03-09 09:34:33 -08:00
Venkata Ramanaiah Nalamothu 04fff547e2 [AMDGPU] Move call clobbered return address registers s[30:31] to callee saved range
Currently the return address ABI registers s[30:31], which fall in the call
clobbered register range, are added as live-ins on the function entry to
preserve their value when we have calls, so that they get saved and restored
around the calls.

But the DWARF unwind information (CFI) needs to track where the return address
resides in a frame, and the above approach makes it difficult to track the
return address when the CFI information is emitted during frame lowering,
due to the need to understand the control flow.

This patch moves the return address ABI registers s[30:31] into callee saved
registers range and stops adding live-in for return address registers, so that
the CFI machinery will know where the return address resides when CSR
save/restore happen during the frame lowering.

Doing the above poses an issue: the return instruction now uses the undefined
register `sgpr30_sgpr31`. This is resolved by hiding the return address register
use by the return instruction behind the `SI_RETURN` pseudo instruction, which
doesn't take any input operands, until the `SI_RETURN` pseudo gets lowered to
`S_SETPC_B64_return` during `expandPostRAPseudo()`.

As an added benefit, this patch simplifies overall return instruction handling.

Note: The AMDGPU CFI changes are there only in the downstream code and another
version of this patch will be posted for review for the downstream code.

Reviewed By: arsenm, ronlieb

Differential Revision: https://reviews.llvm.org/D114652
2022-03-09 12:18:02 +05:30
Stanislav Mekhanoshin 9eabea3968 [AMDGPU] Set noclobber metadata on loads instead of cast to constant
A load via a pointer cast to constant address space will return true from
pointsToConstantMemory, which is not necessarily correct.

Fixes: SWDEV-326463

Differential Revision: https://reviews.llvm.org/D121172
2022-03-07 23:13:02 -08:00
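
A minimal sketch of the load-annotation approach described above, assuming
the existing "amdgpu.noclobber" metadata kind; the helper name is
illustrative:

#include "llvm/IR/Instructions.h"
#include "llvm/IR/Metadata.h"
using namespace llvm;

// Annotate the load itself instead of casting its pointer operand to the
// constant address space, which would make pointsToConstantMemory
// over-report.
static void markNoClobber(LoadInst &LI) {
  LI.setMetadata("amdgpu.noclobber", MDNode::get(LI.getContext(), {}));
}
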
Christudasan Devadasan 0d849b8249 AMDGPU: Skip folding REG_SEQUENCE if found unknown regclasses for its users
Use TII::getRegClass to return a valid regclass, or a nullptr
if the RC is unknown for a given OpIdx. This fixes a potential
crash when getting the RC from a variadic instruction.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D120813
2022-03-08 10:11:57 +05:30
Jacob Lambert 5160447f58 [AMDGPU] Add gfx10 assembler directive to specify shared VGPR count
Reviewed By: rampitec

Differential Revision: https://reviews.llvm.org/D105507
2022-03-07 14:27:41 -08:00
Stanislav Mekhanoshin 932f628121 [AMDGPU] new gfx940 fp atomics
Differential Revision: https://reviews.llvm.org/D121028
2022-03-07 12:32:02 -08:00
Stanislav Mekhanoshin e7b362d75d [AMDGPU] Add v_mov_b64 gfx940 opcode
Differential Revision: https://reviews.llvm.org/D121023
2022-03-07 12:07:12 -08:00
Stanislav Mekhanoshin 8992b50e2f [AMDGPU] gfx940 uses new names for coherency bits
Differential Revision: https://reviews.llvm.org/D120855
2022-03-07 11:50:07 -08:00
Austin Kerbow 0c0636f782 [AMDGPU] Fix uninitialized value after 8d0c34fd4f 2022-03-07 11:32:01 -08:00
Stanislav Mekhanoshin 2c830c8fab [AMDGPU] gfx940: support V_FMAMK_F32 and V_FMAAK_F32
Differential Revision: https://reviews.llvm.org/D120769
2022-03-07 11:31:01 -08:00
Austin Kerbow 8d0c34fd4f [AMDGPU] Omit unnecessary waitcnt before barriers
It is not necessary to wait for all outstanding memory operations before
barriers on hardware that can back off of the barrier in the event of an
exception when traps are enabled. Add a new subtarget feature which
tracks which HW has this ability.

Reviewed By: #amdgpu, rampitec

Differential Revision: https://reviews.llvm.org/D120544
2022-03-07 08:23:53 -08:00
Jay Foad d7d4ed0847 [AMDGPU] Tweak predicates for image_bvh_intersect_ray instructions
Don't override SubtargetPredicate since that is already set in the
base classes for the appropriate subtarget like MIMG_gfx10. Use
OtherPredicates instead for consistency with the way we handle
features like HasImageInsts and HasExtendedImageInsts. NFC.

Differential Revision: https://reviews.llvm.org/D120909
2022-03-04 12:05:23 +00:00
Aakanksha 840695814a [AMDGPU] Add gfx1036 target
Differential Revision: https://reviews.llvm.org/D120846
2022-03-02 23:26:38 +00:00
Stanislav Mekhanoshin 35ec58d8c0 [AMDGPU] gfx940 removes all image instructions
Differential Revision: https://reviews.llvm.org/D120763
2022-03-02 13:55:26 -08:00
Stanislav Mekhanoshin 2e2e64df4a [AMDGPU] Add gfx940 target
This is target definition only.

Differential Revision: https://reviews.llvm.org/D120688
2022-03-02 13:54:48 -08:00
Jay Foad 5ddfedc956 [AMDGPU] Fix deleting of move-immediate instructions after folding
SIInstrInfo::FoldImmediate tried to delete move-immediate instructions
after folding them into their only use. This did not work because it was
checking hasOneNonDBGUse after doing the fold, at which point there
should be no uses. This seems to have no effect on codegen, it just
means less stuff for DCE to clean up later.

Differential Revision: https://reviews.llvm.org/D120815
2022-03-02 16:11:16 +00:00
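
The ordering bug in miniature, as a hedged sketch (foldImmediateInto is a
hypothetical stand-in for the target's folding logic, not the actual code):

#include "llvm/CodeGen/MachineInstr.h"
#include "llvm/CodeGen/MachineRegisterInfo.h"
using namespace llvm;

// Hypothetical stand-in for the target's immediate-folding routine.
bool foldImmediateInto(MachineInstr &UseMI, Register Reg, MachineInstr &DefMI);

// Query the use count *before* folding: once the fold succeeds, Reg has no
// non-debug uses left, so a post-fold hasOneNonDBGUse check can never fire
// and the move-immediate would never be erased.
bool foldAndMaybeErase(MachineInstr &DefMI, MachineInstr &UseMI, Register Reg,
                       MachineRegisterInfo &MRI) {
  bool HadOneUse = MRI.hasOneNonDBGUse(Reg);
  if (!foldImmediateInto(UseMI, Reg, DefMI))
    return false;
  if (HadOneUse)
    DefMI.eraseFromParent(); // safe: the single use was just folded away
  return true;
}
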
Jay Foad 8bed52c9eb [AMDGPU] Make more use of madmk/fmamk instructions
In convertToThreeAddress handle VOP2 mac/fmac instructions with a
literal src0 operand, since these are prime candidates for
converting to madmk/fmamk.

Previously this would only happen if src0 (or src1) was a register
defined by a move-immediate instruction, but in many cases these
operands have already been folded because SIFoldOperands runs
before TwoAddressInstructionPass.

Differential Revision: https://reviews.llvm.org/D120736
2022-03-02 10:22:10 +00:00
Mircea Trofin cb2160760e [nfc][codegen] Move RegisterBank[Info].h under CodeGen
This wraps up from D119053. The 2 headers are moved as described,
fixed file headers and include guards, updated all files where the old
paths were detected (simple grep through the repo), and `clang-format`-ed it all.

Differential Revision: https://reviews.llvm.org/D119876
2022-03-01 21:53:25 -08:00
Abinav Puthan Purayil 8b4ab01c38 [AMDGPU] Select no-return atomic ops in BUFInstructions.td
This change adds the selection of no-return buffer_* instructions in
tblgen. The motivation for this is to get the no-return atomic isel
working without relying on post-isel hooks so that GlobalISel can start
selecting them (once GlobalISelEmitter allows no return atomic patterns
like how DAGISel does).

This change handles the selection of no-return mubuf_atomic_cmpswap in
tblgen without changing the extract_subreg generation for the return
variant. This handling was done by the post-isel hook.

Differential Revision: https://reviews.llvm.org/D120538
2022-03-02 08:25:28 +05:30
Jay Foad 289339140e [AMDGPU] Handle legacy multiply-accumulate opcodes in convertToThreeAddress
Handle V_MAC_LEGACY_F32 and V_FMAC_LEGACY_F32 in
convertToThreeAddress, to avoid the need for an extra mov
instruction in some cases.

Differential Revision: https://reviews.llvm.org/D120704
2022-03-01 16:58:00 +00:00
Jay Foad 9ac3a85047 [AMDGPU] Disentangle MFMA handling in convertToThreeAddress. NFC.
Move MFMA handling to the top of convertToThreeAddress and pull
IsF16 calculation out of the switch. I think this makes it clearer
exactly which mac/fmac opcodes are handled, since they are now
listed in the switch with minimal extra clutter.

Differential Revision: https://reviews.llvm.org/D120703
2022-03-01 16:56:56 +00:00
Jay Foad 68895098d1 [AMDGPU] Preserve src2_modifiers in convertToThreeAddress
Found by code inspection. I don't think it makes a difference with
current codegen, because if any source modifiers were present we
would have selected mad/fma instead of mac/fmac in the first place.

Differential Revision: https://reviews.llvm.org/D120709
2022-03-01 14:48:25 +00:00
Stanislav Mekhanoshin 517171ce20 [AMDGPU] Extend SILoadStoreOptimizer to handle flat load/stores
TODO: merge flat with global promoting to flat.

Differential Revision: https://reviews.llvm.org/D120351
2022-02-28 11:27:30 -08:00
Carl Ritson 2bbe6506d4 [AMDGPU] Remove redundant isVALU in SIPreEmitPeephole. NFC
Remove redundant isVALU call added in D120202.
2022-02-27 16:09:20 +09:00
Benjamin Kramer 1de11fe360 Use RegisterInfo::regsOverlaps instead of checking aliases
This is both less code and faster since it doesn't have to expand all
the sub & superreg sets. NFCI.
2022-02-26 20:32:12 +01:00
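
The substitution in miniature, as a sketch (the exact alias checks it
replaces varied per call site):

#include "llvm/CodeGen/TargetRegisterInfo.h"
using namespace llvm;

// Before: walk MCRegAliasIterator(Def, &TRI, /*IncludeSelf=*/true) and
// compare each alias against Use. After: one query that covers identity
// and sub/super-register overlap without expanding the alias sets.
bool clobbers(const TargetRegisterInfo &TRI, MCRegister Def, MCRegister Use) {
  return TRI.regsOverlap(Def, Use);
}
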
Jameson Nash c4b1a63a1b mark getTargetTransformInfo and getTargetIRAnalysis as const
Seems like this can be const, since Passes shouldn't modify it.

Reviewed By: wsmoses

Differential Revision: https://reviews.llvm.org/D120518
2022-02-25 14:30:44 -05:00
Changpeng Fang ca62b1db9f [AMDGPU][NFC]: Emit metadata for hidden_heap_v1 kernarg
Summary:
  Emit metadata for hidden_heap_v1 kernarg

Reviewers:
  sameerds, b-sumner

Fixes:
  SWDEV-307188

Differential Revision:
  https://reviews.llvm.org/D119027
2022-02-25 10:45:35 -08:00
Aakanksha bf60a1c546 Avoid comparisons between types of different widths in a loop condition to prevent the loop from behaving unexpectedly
This change fixes the code violations flagged in the AMD compute CodeQL scan.
Query description: "Comparisons between types of different widths in a loop
condition can cause the loop to behave unexpectedly."

Differential Revision: https://reviews.llvm.org/D120355
2022-02-25 17:30:12 +00:00
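
A self-contained illustration of the flagged pattern (not the actual
scanned code):

#include <cstddef>
#include <vector>

void walk(const std::vector<int> &V) {
  // Flagged: 32-bit 'unsigned' counter vs 64-bit 'size_t' bound. If V.size()
  // ever exceeds UINT_MAX, 'I' wraps around and the loop never terminates.
  for (unsigned I = 0; I < V.size(); ++I) { /* ... */ }

  // Fixed: counter and bound have the same width.
  for (size_t I = 0; I < V.size(); ++I) { /* ... */ }
}
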
Carl Ritson 565af157ef [AMDGPU] Extend pre-emit peephole to redundantly masked VCC
Extend the pre-emit peephole for S_CBRANCH_VCC[N]Z to eliminate
redundant S_AND operations against EXEC for V_CMP results in VCC.
These occur after register allocation when VCC has been
selected as the comparison destination.

Reviewed By: rampitec

Differential Revision: https://reviews.llvm.org/D120202
2022-02-25 10:18:31 +09:00
Jay Foad 05d79e3562 [AMDGPU] Divergence-driven instruction selection for bitreverse
Differential Revision: https://reviews.llvm.org/D119702
2022-02-24 20:21:59 +00:00
Stanislav Mekhanoshin 3279e44063 [AMDGPU] Extend SILoadStoreOptimizer to handle global stores
TODO: merge flat load/stores.
TODO: merge flat with global promoting to flat.

Differential Revision: https://reviews.llvm.org/D120346
2022-02-24 11:09:51 -08:00
Stanislav Mekhanoshin cefa1c5ca9 [AMDGPU] Fix combined MMO in load-store merge
Loads and stores can be out of order in the SILoadStoreOptimizer.
When combining the MachineMemOperands of two instructions, the operands
are passed to combineKnownAdjacentMMOs in IR order. At the moment it
picks the first operand and just replaces its offset and size. This
essentially loses alignment information and may generally result in an
incorrect base pointer being used.

Use the base pointer that comes first in memory address order instead,
and only adjust the size.

Differential Revision: https://reviews.llvm.org/D120370
2022-02-24 10:47:57 -08:00
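
A hedged sketch of the idea (the helper is illustrative): build the merged
operand from whichever MMO is lower in memory address order, adjusting only
the size, so its base pointer and alignment stay valid:

#include "llvm/CodeGen/MachineFunction.h"
#include "llvm/CodeGen/MachineMemOperand.h"
using namespace llvm;

// Lo must be the operand at the lower address; offset 0 preserves Lo's
// base pointer and alignment while the size grows to cover both accesses.
const MachineMemOperand *mergeAdjacentMMOs(MachineFunction &MF,
                                           const MachineMemOperand *Lo,
                                           const MachineMemOperand *Hi) {
  uint64_t CombinedSize = Lo->getSize() + Hi->getSize();
  return MF.getMachineMemOperand(Lo, /*Offset=*/0, CombinedSize);
}
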
Bill Wendling a5bbc6ef99 [NFC] Remove unnecessary "#include"s from header files 2022-02-23 01:20:48 -08:00
Stanislav Mekhanoshin 9e055c0fff [AMDGPU] Extend SILoadStoreOptimizer to handle global saddr loads
This adds handling of the _SADDR forms to the GLOBAL_LOAD combining.

TODO: merge global stores.
TODO: merge flat load/stores.
TODO: merge flat with global promoting to flat.

Differential Revision: https://reviews.llvm.org/D120285
2022-02-22 09:01:43 -08:00
Stanislav Mekhanoshin ba17bd2674 [AMDGPU] Extend SILoadStoreOptimizer to handle global loads
There can be situations where global and flat loads and stores are not
combined by the vectorizer, in particular if their address spaces
differ in the IR but they end up as the same class of instructions after
selection. For example, a divergent load from constant address space
ends up being the same global_load as a load from global address space.

TODO: merge global stores.
TODO: handle SADDR forms.
TODO: merge flat load/stores.
TODO: merge flat with global promoting to flat.

Differential Revision: https://reviews.llvm.org/D120279
2022-02-22 08:42:36 -08:00
Thomas Symalla 380ff31d83 [AMDGPU] Fix typo in comment [NFC]
This replaces "V_MOB_B32" with "V_MOV_B32" in a comment.
2022-02-22 13:27:26 +01:00
Stanislav Mekhanoshin dc0981562e [AMDGPU] Remove redundant check in the SILoadStoreOptimizer
Differential Revision: https://reviews.llvm.org/D120268
2022-02-21 15:04:44 -08:00
Jay Foad 359a792f9b [AMDGPU] SILoadStoreOptimizer: avoid unbounded register pressure increases
Previously when combining two loads this pass would sink the
first one down to the second one, putting the combined load
where the second one was. It would also sink any intervening
instructions which depended on the first load down to just
after the combined load.

For example, if we started with this sequence of
instructions (code flowing from left to right):

  X A B C D E F Y

After combining loads X and Y into XY we might end up with:

  A B C D E F XY

But if B, D and F depended on X, we would get:

  A C E XY B D F

Now if the original code had some short disjoint live ranges
from A to B, C to D and E to F, in the transformed code
these live ranges will be long and overlapping. In this way
a single merge of two loads could cause an unbounded
increase in register pressure.

To fix this, change the way that loads are moved in
order to merge them so that:
- The second load is moved up to the first one. (But when
  merging stores, we still move the first store down to the
  second one.)
- Intervening instructions are never moved.
- Instead, if we find an intervening instruction that would
  need to be moved, give up on the merge. But this case
  should now be pretty rare because normal stores have no
  outputs, and normal loads only have address register
  inputs, but these will be identical for any pair of loads
  that we try to merge.

As well as fixing the unbounded register pressure increase
problem, moving loads up and stores down seems like it
should usually be a win for memory latency reasons.

Differential Revision: https://reviews.llvm.org/D119006
2022-02-21 10:51:14 +00:00
Jay Foad 57baa14d74 [AMDGPU] Rename AMDGPUCFGStructurizer to R600MachineCFGStructurizer
Previously the name of the class (AMDGPUCFGStructurizer) did not
match the name of the file (AMDILCFGStructurizer).

Standardize on the name R600MachineCFGStructurizer by analogy with
AMDGPUMachineCFGStructurizer.

Differential Revision: https://reviews.llvm.org/D120128
2022-02-18 15:08:25 +00:00
Sebastian Neubauer 6527b2a4d5 [AMDGPU][NFC] Fix typos
Fix some typos in the amdgpu backend.

Differential Revision: https://reviews.llvm.org/D119235
2022-02-18 15:05:21 +01:00
Sebastian Neubauer 1f0aadfa62 [AMDGPU] Fix kill flag on overlapping sgpr copy
Same as on vgpr copies, we cannot kill the source register if it
overlaps with the destination register. Otherwise, the kill of the
source register will also count as a kill for the destination register.

Differential Revision: https://reviews.llvm.org/D120042
2022-02-18 14:36:00 +01:00
Jay Foad 69ab233a15 [AMDGPU] Return better Changed status from SIFoldOperands
Differential Revision: https://reviews.llvm.org/D120023
2022-02-18 10:35:48 +00:00
Jay Foad 768e6faba8 [AMDGPU] Return better Changed status from SILowerControlFlow
Differential Revision: https://reviews.llvm.org/D120025
2022-02-18 10:09:22 +00:00
Jay Foad d86dcb7ea5 [AMDGPU] Return better Changed status from SIOptimizeExecMasking
Differential Revision: https://reviews.llvm.org/D120024
2022-02-18 10:09:21 +00:00
Stanislav Mekhanoshin b0aa1946df [AMDGPU] Promote recursive loads from kernel argument to constant
Unclobbered pointer load chains are now promoted to global. It
is possible to promote these loads themselves into the constant
address space. The loaded pointers still need to point to global
because we need to be able to store through that pointer, and because
an actual load from it may occur after a clobber.

Differential Revision: https://reviews.llvm.org/D119886
2022-02-17 11:07:03 -08:00
Jay Foad c08896d292 [AMDGPU] Return better Changed status from SILowerI1Copies
Differential Revision: https://reviews.llvm.org/D119946
2022-02-17 09:38:57 +00:00
Jay Foad 78ebb1dd24 [AMDGPU] Return better Changed status from SIAnnotateControlFlow
Differential Revision: https://reviews.llvm.org/D119945
2022-02-17 09:38:57 +00:00
Jay Foad 1822a5ecdd [AMDGPU] Return better Changed status from AMDGPUPerfHintAnalysis
Differential Revision: https://reviews.llvm.org/D119944
2022-02-17 09:31:42 +00:00
Jay Foad 77e793d025 [AMDGPU] Return better Changed status from AMDGPUAnnotateUniformValues
Differential Revision: https://reviews.llvm.org/D119943
2022-02-17 09:31:42 +00:00
Matt Arsenault 3884cb9235 AMDGPU: Always reserve VGPR for AGPR copies on gfx908
Just because there aren't AGPRs in the original program doesn't mean
the register allocator can't choose to use them (unless we were to
forcibly reserve all AGPRs if there weren't any uses). This happens in
high pressure situations and introduces copies to avoid spills.

In this test, the allocator ends up introducing a copy from SGPR to
AGPR which requires an intermediate VGPR. I don't believe it would
introduce a copy from AGPR to AGPR in this situation, since it would
be trying to use an intermediate with a different class.

Theoretically this is also broken on gfx90a, but I have been unable to
come up with a testcase.
2022-02-16 18:48:18 -05:00
Jacob Lambert 7470244475 [AMDGPU] Add agpr_count to metadata and AsmParser
gfx90a allows the number of ACC registers (AGPRs) to be set
independently of the VGPR registers. For both HSA and PAL metadata, we
now include an "agpr_count" key to report the number of AGPRs set for
supported devices (gfx90a, gfx908, as determined by hasMAIInsts()).
This is collected from SIProgramInfo.NumAccVGPR for both HSA and PAL.
The AsmParser also now recognizes ".kernel.agpr_count" for supported
devices.

Differential Revision: https://reviews.llvm.org/D116140
2022-02-16 15:17:23 -08:00
Jacob Lambert 0bad7cb565 Hoist getTotalNumVGPRs into AMDGPUBaseInfo for use in both codegen and MC
Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D119912
2022-02-16 11:04:08 -08:00
Dmitry Preobrazhensky 6655c5a6bb [AMDGPU][MC][GFX10] Added an alias for HW_REG_HW_ID1
Enabled HW_REG_HW_ID as an alias for HW_REG_HW_ID1. This is required for compatibility with existing code.

Differential Revision: https://reviews.llvm.org/D119939
2022-02-16 19:45:44 +03:00
Shao-Ce SUN 2aed07e96c [NFC][MC] remove unused argument `MCRegisterInfo` in `MCCodeEmitter`
Reviewed By: skan

Differential Revision: https://reviews.llvm.org/D119846
2022-02-16 13:10:09 +08:00
Matt Arsenault 898dc8a4b1 AMDGPU: Use subtarget in class instead of querying function 2022-02-15 21:28:12 -05:00
Stanislav Mekhanoshin 29a0e0a9e5 [AMDGPU] Do not define GET_INSTRINFO_SCHED_ENUM
Autogenerated names are too long and break compilation on Windows,
while we do not need this enum at all.

Differential Revision: https://reviews.llvm.org/D119869
2022-02-15 13:00:54 -08:00
Jay Foad a65b9dd049 [AMDGPU] Divergence-driven instruction selection for bfm patterns
Differential Revision: https://reviews.llvm.org/D119706
2022-02-15 10:49:18 +00:00
Jay Foad f72d8897ac [AMDGPU] Honor !invariant.load metadata on load-like intrinsics
Differential Revision: https://reviews.llvm.org/D119739
2022-02-15 09:16:57 +00:00
Jay Foad cb199e0fca [MC] Define and use MCRegisterInfo::regsOverlap
Separate MCRegisterInfo::regsOverlap out from
TargetRegisterInfo::regsOverlap. This is useful in the AMDGPU AsmParser
where we only have access to MCRegisterInfo.

Differential Revision: https://reviews.llvm.org/D119533
2022-02-14 20:46:02 +00:00
Joe Nash c87c61c52c [AMDGPU] Fix AGPR offset for waitcnt
An enum value stores the offset between AGPR ranges and VGPR
ranges in the internal storage of SIInsertWaitcnts. It said 226 when
it should say 256, causing some portion of the ranges to overlap. That
in turn causes 'aliasing' between the registers, potentially inserting
waitcnts that are not required.

Reviewed By: rampitec

Differential Revision: https://reviews.llvm.org/D119749
2022-02-14 15:16:21 -05:00
alex-t c23198ec13 [AMDGPU] Divergence-driven abs instruction selection
This change enables selection of "abs" SDNodes by the node divergence.

Reviewed By: rampitec

Differential Revision: https://reviews.llvm.org/D119581
2022-02-14 21:36:32 +03:00
Stanislav Mekhanoshin c7eb846345 [AMDGPU] Merge AMDGPULDSUtils into AMDGPUMemoryUtils
Differential Revision: https://reviews.llvm.org/D119502
2022-02-11 10:32:24 -08:00
Jay Foad b59ad64ead [TableGen][AMDGPU] Allow empty register classes
Remove ARTIFICIAL_VGPR which only existed to make VReg_1 not empty.

Differential Revision: https://reviews.llvm.org/D119552
2022-02-11 17:30:04 +00:00
Sebastian Neubauer a5d4f82b73 [AMDGPU] Make enable-flat-scratch a subtarget feature
Use a subtarget feature instead of a command line argument to reduce
global state.
We want to enable flat scratch for graphics in some cases and this
doesn't work well with command line options.

Differential Revision: https://reviews.llvm.org/D119425
2022-02-11 18:23:07 +01:00
Sameer Sahasrabuddhe d8f99bb6e0 [AMDGPU] replace hostcall module flag with function attribute
The module flag to indicate use of hostcall is insufficient to catch
all cases where hostcall might be in use by a kernel. This is now
replaced by a function attribute that gets propagated to top-level
kernel functions via their respective call-graph.

If the attribute "amdgpu-no-hostcall-ptr" is absent on a kernel, the
default behaviour is to emit kernel metadata indicating that the
kernel uses the hostcall buffer pointer passed as an implicit
argument.

The attribute may be placed explicitly by the user, or inferred by the
AMDGPU attributor by examining the call-graph. The attribute is
inferred only if the function is not being sanitized, and the
implicitarg_ptr does not result in a load of any byte in the hostcall
pointer argument.

Reviewed By: jdoerfert, arsenm, kpyzhov

Differential Revision: https://reviews.llvm.org/D119216
2022-02-11 22:51:56 +05:30
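
A minimal sketch of how the attribute is consumed when deciding whether to
emit the hostcall-buffer metadata (the function name is illustrative):

#include "llvm/IR/Function.h"
using namespace llvm;

// Absence of the attribute is the conservative default: the kernel is
// assumed to use the hostcall buffer pointer, so the metadata is emitted.
bool needsHostcallPtr(const Function &Kernel) {
  return !Kernel.hasFnAttribute("amdgpu-no-hostcall-ptr");
}
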
Julien Pages dcb2da13f1 [AMDGPU] Add a new intrinsic to control fp_trunc rounding mode
Add a new llvm.fptrunc.round intrinsic to precisely control
the rounding mode when converting from f32 to f16.

Differential Revision: https://reviews.llvm.org/D110579
2022-02-11 12:08:23 -05:00
Mirko Brkusanin 5ff35ba8ae [AMDGPU][GlobalISel] Fix insert point in FoldableFneg combine
The newly created fneg was built after some of its uses in some cases.
Now it will be built immediately after the instruction whose dst it negates.

Differential Revision: https://reviews.llvm.org/D119459
2022-02-11 12:09:40 +01:00
serge-sans-paille 06943537d9 Cleanup MCParser headers
As usual with that header cleanup series, some implicit dependencies now need to
be explicit:

llvm/MC/MCParser/MCAsmParser.h no longer includes llvm/MC/MCParser/MCAsmLexer.h

Preprocessed lines to build llvm on my setup:
after:  1068185081
before: 1068324320

So no compile time benefit to expect, but we still get the looser coupling
between files which is great.

Discourse thread: https://discourse.llvm.org/t/include-what-you-use-include-cleanup
Differential Revision: https://reviews.llvm.org/D119359
2022-02-11 10:39:29 +01:00
Stanislav Mekhanoshin 290e5722e8 [AMDGPU] Improve clobbering checks in the kernel argument promotion
Use the same MSSA clobbering checks as in AMDGPUAnnotateUniformValues.
Kernel argument promotion needs exactly the same information, so factor
out the utility function isClobberedInFunction.

Differential Revision: https://reviews.llvm.org/D119480
2022-02-10 14:51:47 -08:00
alex-t d88a146f2b [AMDGPU] Missed sign/zero extend patterns for divergence-driven instruction selection
This change includes tablegen patterns that were missed by https://reviews.llvm.org/D110950 and https://reviews.llvm.org/D76230

Reviewed By: foad

Differential Revision: https://reviews.llvm.org/D119302
2022-02-10 19:36:12 +03:00
Simon Pilgrim 8de7297374 [AMDGPU] Pull out repeated getVecSize() calls. NFC.
This is guaranteed to be evaluated so we can avoid repeated calls.

Helps the static analyzer as it couldn't recognise that each getVecSize() would return the same value.
2022-02-10 16:31:36 +00:00
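
A generic illustration of the hoist (hypothetical type, not the AMDGPU
code):

// Evaluating the call once both avoids repeated work and makes it obvious
// to the static analyzer that the value cannot change between uses.
struct Kernel {
  unsigned NumElts = 4;
  unsigned getVecSize() const { return NumElts; }
};

unsigned footprint(const Kernel &K) {
  const unsigned VecSize = K.getVecSize(); // evaluated once
  return VecSize * VecSize + VecSize;      // previously three separate calls
}
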
Jay Foad e34623b165 [AMDGPU] Rename DSAtomicCmpXChg to DSAtomicCmpXChgSwapped. NFC.
This is just a reminder that the operands are swapped compared with all
the other CmpXChg instructions.

Differential Revision: https://reviews.llvm.org/D119421
2022-02-10 14:54:44 +00:00
Abinav Puthan Purayil 29bd3fadbc [AMDGPU] Select no-return atomic ops in FLATInstructions.td.
This change adds the selection for the no-return global_* and flat_*
instructions in tblgen.  The motivation for this is to get the no-return atomic
isel working without relying on post-isel hooks so that GlobalISel can start
selecting them (once GlobalISelEmitter allows no return atomic patterns like
how DAGISel does).

Differential Revision: https://reviews.llvm.org/D119227
2022-02-10 09:26:37 +05:30
Abinav Puthan Purayil f4e8cf25af [AMDGPU] Select no-return ds_* atomic ops in tblgen.
SelectionDAG relies on MachineInstr's HasPostISelHook for selecting the
no-return atomic ops. GlobalISel, at the moment, doesn't handle
HasPostISelHook.

This change adds the selection for no-return ds_* atomic ops in tblgen
so that it can work with both GlobalISel and SelectionDAG. I couldn't
add the predicates for GlobalISel in this change since there's a
restriction in GlobalISelEmitter that disallows selecting generic
atomic ops that return with instructions that don't return.

We can't remove the HasPostISelHook code that selects the no return
atomic ops in SelectionDAG yet since we still need to cover selections
in FLATInstructions.td, BUFInstructions.td.

Differential Revision: https://reviews.llvm.org/D115881
2022-02-10 09:26:37 +05:30
Jay Foad 476bb2d94e [AMDGPU] Remove dead code from shrinkScalarLogicOp
It looks like this code has been dead since shrinkScalarLogicOp
was introduced in svn r348601.
2022-02-09 17:07:12 +00:00
Jay Foad db28a45617 [AMDGPU] Remove irrelevant comments on V_BFE_I32 instructions
These comments explain the encoding of the immediate operand of S_BFE_*
which is not relevant for V_BFE_I32.
2022-02-09 12:06:29 +00:00
serge-sans-paille ef736a1c39 Cleanup LLVMMC headers
There are a few relevant forward declarations in there that may require
downstream users to add explicit includes:

llvm/MC/MCContext.h no longer includes llvm/BinaryFormat/ELF.h, llvm/MC/MCSubtargetInfo.h, llvm/MC/MCTargetOptions.h
llvm/MC/MCObjectStreamer.h no longer include llvm/MC/MCAssembler.h
llvm/MC/MCAssembler.h no longer includes llvm/MC/MCFixup.h, llvm/MC/MCFragment.h

Counting preprocessed lines required to rebuild llvm-project on my setup:
before: 1052436830
after:  1049293745

Which is significant and backs up the change in addition to the usual benefits of
decreasing coupling between headers and compilation units.

Discourse thread: https://discourse.llvm.org/t/include-what-you-use-include-cleanup
Differential Revision: https://reviews.llvm.org/D119244
2022-02-09 11:09:17 +01:00
Sameer Sahasrabuddhe c6a6b57902 [AMDGPU] [NFC] Fix incorrect use of bitwise operator.
Differential Revision: https://reviews.llvm.org/D119308
2022-02-08 22:12:54 -05:00
Stanislav Mekhanoshin aeaf85b9c2 [AMDGPU] Select VGPR versions of MFMA if possible
We can select the _vgprcd versions of MAI instructions and have no
AGPRs, leaving the whole budget for VGPRs, if:

1. This is a kernel;
2. It has no calls;
3. It runs on at least 2 waves, thus having no more than 256 VGPRs;
4. There is no inline asm requesting AGPRs.

Differential Revision: https://reviews.llvm.org/D117253
2022-02-08 10:19:41 -08:00
Matt Arsenault f2c99ea47d AMDGPU: Use reserved VGPR for AGPR spills to memory
Previously would reuse the VGPR used for large frame offsets with the
one needed for copying from the AGPR. Fix this by reusing the register
we already reserved for handling AGPR to AGPR copies.
2022-02-08 11:26:59 -05:00
Matt Arsenault 8b2ca766f0 AMDGPU: Reserve v32 if we may need to copy between AGPRs on gfx908
We need to guarantee cheap copies between AGPRs, and unfortunately
gfx908 cannot directly do this. Theoretically we could set the
scavenger up with an emergency spill slot, but it also feels
unreasonable to pay that cost for what was assumed to be a simple and
cheap copy. Pick a register that doesn't conflict with any ABI
registers.

This does not address the same issue when copying from SGPR to AGPR
for gfx90a (this coincidentally fixes it for gfx908), but that's less
interesting since the register allocator shouldn't be proactively
introducing such copies.

One edge case I'm worried about is respecting the VGPR budget implied
by amdgpu-waves-per-eu. If the theoretical upper bound of a function
is 32 VGPRs, this will force the actual count to be 33.

This is also broken if inline assembly uses/defs something in v32. The
coalescer will eliminate the intermediate vreg between the def and
use, and the introduced copy will clobber the user value.

(cherry picked from commit 3335784ac2d587ff4eac04586e189532ae8b2607)
2022-02-08 11:14:52 -05:00
Nikita Popov 997027347d [AMDGPURewriteOutArguments] Don't use pointer element type
Instead of using the pointer element type, look at how the pointer
is actually being used in store instructions, while looking through
bitcasts. This makes the transform compatible with opaque pointers
and a bit more general.

It's worth noting that I have dropped the 3-vector to 4-vector
shufflevector special case, because this is now handled in a
different way: If the value is actually used as a 4-vector, then
we're directly going to use that type, instead of shuffling to a
3-vector in between.

Differential Revision: https://reviews.llvm.org/D119237
2022-02-08 16:10:41 +01:00
Simon Pilgrim fd2bb51f1e [ADT] Add APInt/MathExtras isShiftedMask variant returning mask offset/length
In many cases, calls to isShiftedMask are immediately followed with checks to determine the size and position of the bitmask.

This patch adds variants of APInt::isShiftedMask, isShiftedMask_32 and isShiftedMask_64 that return these values as additional arguments.

I've updated a number of cases that were either performing separate size/position calculations or had created their own local wrapper versions of these.

Differential Revision: https://reviews.llvm.org/D119019
2022-02-08 12:04:13 +00:00
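
A hedged usage sketch of the new variant (parameter names are assumed from
the description above):

#include <cstdint>
#include "llvm/Support/MathExtras.h"
using namespace llvm;

void describe(uint64_t Imm) {
  unsigned MaskIdx, MaskLen;
  // One call replaces the separate position (countTrailingZeros) and size
  // (popcount) computations previously done at call sites.
  if (isShiftedMask_64(Imm, MaskIdx, MaskLen)) {
    // e.g. Imm == 0x0ff0 yields MaskIdx == 4, MaskLen == 8.
  }
}
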
Sameer Sahasrabuddhe 02a2e46ff0 [AMDGPU] [NFC] refactor the AMDGPU attributor
Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D119087
2022-02-07 21:45:32 -05:00
Carl Ritson a1fb307b4b [AMDGPU] Allow hoisting of some VALU compare instructions
Conservatively allow hoisting/sinking of VALU comparisons.
If the result of a comparison is masked with exec, narrowing the
set of active lanes, then it is safe to hoist it, as the masking
instruction will never be hoisted.

Heuristically this is also true for sinking, as we do not expect
the result of a sunk comparison that is masked with exec to be
used outside of the loop.

Reviewed By: rampitec

Differential Revision: https://reviews.llvm.org/D118975
2022-02-08 11:27:23 +09:00
Vang Thao 570471199b [AMDGPU] Fix debug values in scheduler not placed correctly when reverting
Debug position data is cleared after ScheduleDAGMILive::schedule() because it
also calls placeDebugValues(). Make it so the data is not cleared after the
initial call to placeDebugValues, since we will call it again after reverting
a schedule.

Secondly, since we skip debug instructions when reverting the schedule on
AMDGPU, all debug instructions are now moved to the end of the scheduling
region. RegionEnd points to the beginning of this chunk of debug instructions,
since it was not incremented when a debug instruction was skipped. RegionBegin
may also point to the same debug instruction if Unsched.front() is a debug
instruction, thus shrinking the region to 1. Fix RegionBegin and RegionEnd so
that they point to the current beginning and end before calling
placeDebugValues(), since both vars will be used as reference points to move
debug instructions back.

Reviewed By: rampitec

Differential Revision: https://reviews.llvm.org/D119022
2022-02-07 11:01:13 -08:00
Matt Arsenault 31973062ec AMDGPU: Fix clobbering SCC when expanding large offset spill pseudos
If we had a large offset which required materializing in a register,
we would emit an s_add_i32, clobbering SCC. Start checking if SCC is
live, and instead use a VGPR offset. For MUBUF, we switch to using
offen. We would do this anyway in a normal load/store with a frame
index, but not for spills.

The same problem still exists in other contexts where we expand frame
indices.

The nasty edge case is when SGPRs are spilled to memory at a large
frame offset where SCC is also clobbered. This requires a second
scavenging index, and also required several patches in the scavenger
to correctly handle multiple recursive scavenge indexes.

An even nastier edge case we still don't support is if we don't have
any free SGPRs. If SCC is live and we don't have any free SGPRs to
save exec, we have no way of flipping exec back and forth without also
clobbering SCC.

Fixes: SWDEV-309419
2022-02-07 10:02:03 -05:00
Kazu Hirata 3a3cb929ab [llvm] Use = default (NFC) 2022-02-06 22:18:35 -08:00
Ruiling Song 0719c43735 AMDGPU: Don't clobber source register for V_SET_INACTIVE_*
The WWM register has unmodeled register liveness. For v_set_inactive_*,
clobbering the source register is dangerous because it will overwrite the
inactive lanes. When the source vgpr is dead at v_set_inactive_lane,
the inactive lanes may not really be dead. This may cause common
optimizations to go wrong.

For example in a simple if-then cfg in Machine IR:
bb.if:
  %src =

bb.then:
  %src1 = COPY %src
  %dst = V_SET_INACTIVE %src1(tied-def 0), %inactive

bb.end
  ... = PHI [0, %bb.then] [%src, %bb.if]

The register coalescer will think it is safe to optimize "%src1 = COPY %src"
in bb.then. And at the same time, there is no interference for the PHI in
bb.end. The source and destination values of the PHI will be assigned
the same register. The single PHI register will be overwritten by the
v_set_inactive, then we would get wrong value in bb.end.

With this change, we will copy the content of the source register before
setting inactive lanes after register allocation. Yes, this will sacrifice
the WWM code generation a little, but I don't have a better idea for doing
things correctly.

Differential Revision: https://reviews.llvm.org/D117482
2022-02-06 12:38:26 +08:00
Matt Arsenault 8b8b491379 AMDGPU/GlobalISel: Fix assertions on invalid addrspacecasts
Fixes some asserts on invalid situations and starts directly emitting
the error.
2022-02-04 17:28:49 -05:00
Matt Arsenault 4622afa94c AMDGPU: Convert AMDGPUResourceUsageAnalysis to a Module pass
This is more precise in the face of indirect calls and aliases, still
assuming the call target is defined somewhere in the current module.

This sometimes changes the order the functions are printed, and also
changes the point where context errors are printed relative to
stdout. This also likely has negative consequences for compile time
and memory usage.
2022-02-04 15:56:04 -05:00
Matt Arsenault 935abab65c AMDGPU: Use module level register maximums for unknown callees
Compute the theoretical register budget based on the IR function
signature/attributes, and use the global maximum register budgets for
unknown callees.

This should fix the kernel reported register usage in the presence of
indirect calls. The previous fix in
2b08f6af62 was incorrect because it was
only taking the maximum in the known call graph, and missing something
that was either outside of it or codegened later.

This fixes a second case I discovered where calls to aliases also did
not work as expected. CallGraphAnalysis misses these, so functions
called through aliases were not codegened ahead of callers as
expected. CallGraphAnalysis should probably be fixed to understand
this case, and there's likely a bug with IPRA here. This fixes
numerous failures in the conformance test at -O0.
2022-02-04 15:56:03 -05:00