Commit Graph

4784 Commits

Author SHA1 Message Date
Petar Avramovic dc3fbe293f GlobalISel: Fix infinite loop in legalization artifact combiner
ArtifactValueFinder keeps trying to combine g_unmerge_values in some cases.
The fix is to skip the combine attempt for dead defs.

Differential Revision: https://reviews.llvm.org/D106879
2021-08-02 12:58:10 +02:00
Carl Ritson a441de6d94 [AMDGPU][GlobalISel] Add missing default mapping for BVH intrinsics
Application of the default mapping to BVH intrinsics was missing.
Copy parts of the SelectionDAG test to the GlobalISel test, as these
would have caught this error.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D107211
2021-08-02 12:43:38 +09:00
Eli Friedman bdd55b2f18 Fix the default alignment of i1 vectors.
Currently, the default alignment is much larger than the actual size of
the vector in memory.  Fix this to use a sane default.

For SVE, temporarily remove lowering of load/store operations for
predicates with less than 16 elements. The layout the backend was
assuming for SVE predicates with less than 16 elements doesn't agree
with the frontend. More work probably needs to be done here.

This change is, strictly speaking, not backwards-compatible at the
bitcode level. But probably nobody is actually depending on that; i1
vectors in memory are rare, and the code that does use them probably
ends up forcing the alignment to something sane anyway.  If we think
this is a concern, I can restrict this to scalable vectors for now
(where it's actually causing issues for me at the moment).

Differential Revision: https://reviews.llvm.org/D88994
2021-07-31 14:09:59 -07:00
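A rough illustration of the intent behind the commit above, under the assumption that a sane default ties the ABI alignment of an <N x i1> vector to its packed in-memory size (the helpers below are invented sketches, not LLVM's DataLayout logic):

#include <cstdint>

// Hypothetical helper: round a value up to the next power of two.
static uint64_t powerOf2Ceil(uint64_t X) {
  uint64_t P = 1;
  while (P < X)
    P <<= 1;
  return P;
}

// Sketch: an <N x i1> vector occupies ceil(N/8) bytes in memory, so a sane
// default alignment is that storage size rounded up to a power of two,
// rather than something far larger than the vector itself.
static uint64_t defaultI1VectorAlign(uint64_t NumElements) {
  uint64_t StoreBytes = (NumElements + 7) / 8; // bits packed into bytes
  return powerOf2Ceil(StoreBytes);
}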
Matt Arsenault ebc17a0d68 GlobalISel: Scalarize unaligned vector stores
This has the same problems and limitations as the load path.
2021-07-31 10:37:15 -04:00
Matt Arsenault 43c7cb9a3c AMDGPU/GlobalISel: Check some remarks for failed legalizations
The load/store tests are giant and have some cases that fail in them,
but it's hard to tell which ones are really failing. Check the remarks
to make it easier to track.
2021-07-31 10:37:15 -04:00
Jay Foad 6e712fdf52 [AMDGPU] Autogenerate checks in kernel-args.ll
Differential Revision: https://reviews.llvm.org/D107052
2021-07-30 21:25:51 +01:00
Matt Arsenault e46badd4e9 GlobalISel: Have lowerLoad scalarize unaligned vectors
This could be smarter by picking an ideal type, or at least splitting
the vector in half first. It also handles lowering of non-power-of-2,
non-extending vector loads.

Currently this just avoids failing to legalize some odd vector AMDGPU
tests, but is a step towards removing the split logic from the
NarrowScalar logic.
2021-07-30 13:23:29 -04:00
Matt Arsenault f19226dda5 GlobalISel: Have load lowering handle some unaligned accesses
The code for splitting an unaligned access into 2 pieces is
essentially the same as for splitting a non-power-of-2 load for
scalars. It would be better to pick an optimal memory access size and
directly use it, but splitting in half is what the DAG does.

As-is this fixes handling of some unaligned sextload/zextloads for
AMDGPU. In the future this will help drop the ugly abuse of
narrowScalar to handle splitting unaligned accesses.
2021-07-30 12:55:58 -04:00
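The splitting strategy described above can be pictured with a tiny standalone sketch (plain C++, not the GlobalISel legalizer; it assumes a little-endian access of at most 8 bytes with an even size): split the access in half, load both halves, and recombine.

#include <cstdint>
#include <cstring>

// Sketch: emulate splitting one unaligned load of TotalBytes (<= 8, even)
// into two loads of half the size, then recombining little-endian, which is
// essentially how a non-power-of-2 scalar load is split as well.
static uint64_t loadBySplittingInHalf(const uint8_t *Addr, unsigned TotalBytes) {
  unsigned Half = TotalBytes / 2;
  uint64_t Lo = 0, Hi = 0;
  std::memcpy(&Lo, Addr, Half);        // first half at the original address
  std::memcpy(&Hi, Addr + Half, Half); // second half at Addr + Half
  return Lo | (Hi << (8 * Half));      // recombine the two pieces
}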
Matt Arsenault 05ecd7a2ac AMDGPU/GlobalISel: Fix tests using illegal copies to physregs
These are unlegalizable and introduce spurious failures. Ideally the
verifier would reject them. Also avoid some weird G_INSERTs.
2021-07-30 12:37:29 -04:00
Matt Arsenault faccf427df AMDGPU/GlobalISel: Remove special case lowering for non-pow-2 stores
We end up with extra copies from buildAnyExtOrTrunc if these are
lowered after the register types are legalized.
2021-07-30 12:37:29 -04:00
Mirko Brkusanin 971f4173f8 [AMDGPU][GlobalISel] Insert an and with exec before s_cbranch_vccnz if necessary
While v_cmp will AND inactive lanes with 0, that is not the case for logical
operations.

This fixes a Vulkan CTS test that would hang otherwise.

Differential Revision: https://reviews.llvm.org/D105709
2021-07-29 11:20:49 +02:00
RamNalamothu 1a8c57179a [AMDGPU] We would need FP if there are calls and caller-save VGPR spills
Since https://reviews.llvm.org/D98319, determineCalleeSavesSGPR() needs
to consider caller-save VGPR spills as well when anticipating whether we
require a frame pointer (FP).

Fixes: SWDEV-295978

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D106758
2021-07-28 11:12:55 +05:30
Matt Arsenault d7d2e4545e AMDGPU/GlobalISel: Fix selecting G_SEXTLOAD/G_ZEXTLOAD pre-gfx9
The patterns with m0 glue were failing to import.
2021-07-27 15:56:42 -04:00
Matt Arsenault 82ab1ae54e AMDGPU/GlobalISel: Fix wrong addrspace in test MMOs 2021-07-27 15:56:41 -04:00
Matt Arsenault 74c65906bc AMDGPU/GlobalISel: Add a few tests for unaligned truncating stores 2021-07-27 15:56:41 -04:00
Matt Arsenault 9b1bcaea4e AMDGPU: Update tests for lower i1 change
I forgot to squash the test updates for b32d3d9e81
2021-07-27 12:14:58 -04:00
Matt Arsenault b32d3d9e81 AMDGPU: Treat IMPLICIT_DEF like a constant lanemask source
This is partially a workaround: SILowerI1Copies does not understand
unstructured loops. In an unstructured loop, this would result in
inserting instructions that merge a mask register in the same block
where it was defined.
2021-07-27 11:44:38 -04:00
Jay Foad dc4ca0dbbc [GlobalISel] Constant fold G_SITOFP and G_UITOFP in CSEMIRBuilder
Differential Revision: https://reviews.llvm.org/D104528
2021-07-27 11:27:58 +01:00
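The fold itself is just doing the int-to-float conversion at compile time when the operand is a known constant; a minimal sketch of the arithmetic (standalone C++, not the CSEMIRBuilder API):

#include <cstdint>

// Sketch: folding G_SITOFP / G_UITOFP with a constant operand amounts to
// converting the known integer value and materializing an FP constant.
static double foldSITOFP(int64_t Imm)  { return static_cast<double>(Imm); }
static double foldUITOFP(uint64_t Imm) { return static_cast<double>(Imm); }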
Johannes Doerfert 2aaf038efd [Attributor] Update check lines for all AMDGPU attributor tests
I thought there was only one when I pushed
cdb4cfe8b3; these should be all of them (in the
CodeGen/AMDGPU folder).
2021-07-27 00:55:26 -05:00
Johannes Doerfert cdb4cfe8b3 [Attributor][FIX] Update AMDGPU attributor test
The test contains UB and should be improved; for now we update the check
lines to pass it.
2021-07-27 00:23:47 -05:00
Carl Ritson fbaa35e169 [AMDGPU] Add SelectionDAG support for insert_subvector on v4f64
Enable custom insert_subvector for larger vector types.
This is necessary now that SelectionDAG can attempt v3f64 insert
to v4f64, etc.

Reviewed By: foad

Differential Revision: https://reviews.llvm.org/D105385
2021-07-27 10:11:34 +09:00
Amara Emerson c658b472f3 [GlobalISel] Add a constant folding combine.
Use it in the AArch64 post-legal combiner. These don't always get folded
because, when the instructions are created, the constants are obscured by
artifacts.

Differential Revision: https://reviews.llvm.org/D106776
2021-07-26 14:53:33 -07:00
Michael Liao b0402a35fc [amdgpu] Add 64-bit PC support when expanding unconditional branches.
Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D106445
2021-07-26 14:50:30 -04:00
Jay Foad 59f6865231 [AMDGPU][GISel] Fix MMO for raw/struct buffer access with non-constant offset
Codegen for the raw/struct buffer access intrinsics would update the
offset in the MMO to reflect the combined offset, if it was known to be
constant. If the combined offset was not known to be constant, or if
there was an index, it would set the offset in the MMO to 0. This is
unsafe because it makes it look like the access does not alias with
another access with a fixed non-zero offset.

Fix these cases by setting the pointer in the MMO to null, to reflect
the fact that we do not have any known IR value pointer + constant
offset for the access.

D106284 did this for SelectionDAG. This is the corresponding fix for
GlobalISel.

Differential Revision: https://reviews.llvm.org/D106451
2021-07-26 14:27:30 +01:00
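The rule described in the commit above can be sketched as follows (hypothetical types and names, not the real MachineMemOperand API):

#include <cstdint>

// Hypothetical stand-in for a memory operand's aliasing information.
struct MemOperandInfo {
  const void *PtrValue; // known IR base pointer, or nullptr if unknown
  int64_t Offset;       // constant offset from PtrValue
};

// Sketch: only record pointer + offset when the combined offset is a known
// constant and there is no index; otherwise drop the pointer entirely so
// the access cannot be mistaken for one at a fixed zero offset.
static MemOperandInfo buildBufferMMO(const void *BasePtr, bool OffsetIsConstant,
                                     bool HasIndex, int64_t CombinedOffset) {
  if (OffsetIsConstant && !HasIndex)
    return {BasePtr, CombinedOffset};
  return {nullptr, 0}; // unknown address: no IR value pointer, offset 0
}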
Jay Foad 683b9ed0d5 [AMDGPU] Pre-commit global-isel test case for D106451
This test case shows the scheduler wrongly reordering two buffer
accesses that might alias.
2021-07-26 14:27:30 +01:00
Jay Foad 9ac10658ae [AMDGPU] Fix MMO for raw/struct buffer access with non-constant offset
Codegen for the raw/struct buffer access intrinsics would update the
offset in the MMO to reflect the combined offset, if it was known to be
constant. If the combined offset was not known to be constant, or if
there was an index, it would set the offset in the MMO to 0. This is
unsafe because it makes it look like the access does not alias with
another access with a fixed non-zero offset.

Fix these cases by setting the pointer in the MMO to null, to reflect
the fact that we do not have any known IR value pointer + constant
offset for the access.

Differential Revision: https://reviews.llvm.org/D106284
2021-07-26 14:27:30 +01:00
Simon Pilgrim 34dc4f24f2 Revert rG939291041bb35b8088e3b61be2b8b3bc950f64a7 "[AMDGPU] Regenerate wave32.ll test checks"
This still breaks buildbots
2021-07-25 15:59:26 +01:00
Simon Pilgrim 939291041b [AMDGPU] Regenerate wave32.ll test checks
To simplify diff in future patch
2021-07-25 15:13:09 +01:00
Simon Pilgrim 54e5ced7e6 [AMDGPU] Regenerate mul24 test checks
To simplify diffs in future patch
2021-07-25 15:13:09 +01:00
Simon Pilgrim 9591abd74e [AMDGPU] Regenerate global-load-saddr-to-vaddr test checks
To simplify diff in future patch
2021-07-25 14:05:10 +01:00
Simon Pilgrim 00e37c1cd4 [AMDGPU] Regenerate ctpop16 test checks
To simplify diff in future patch
2021-07-25 14:05:09 +01:00
Simon Pilgrim 249ef1fa82 [AMDGPU] Regenerate half test checks
To simplify diff in future patch
2021-07-25 14:05:08 +01:00
Simon Pilgrim 97d2277b37 [AMDGPU] Regenerate anyext test checks
To simplify diff in future patch
2021-07-25 14:05:08 +01:00
Kuter Dinel 96709823ec [AMDGPU] Deduce attributes with the Attributor
This patch introduces a pass that uses the Attributor to deduce AMDGPU-specific attributes.

Reviewed By: jdoerfert, arsenm

Differential Revision: https://reviews.llvm.org/D104997
2021-07-24 06:07:15 +03:00
Carl Ritson 7d4baf25aa [AMDGPU] Add maximum NSA size limit ISA feature
Add maximum NSA size limit as an ISA feature.
Use this to reduce NSA usage on GFX10.1 to avoid stability issues
with 4- and 5-dword NSA instructions.
Maintain use of longer NSA instructions on GFX10.3.

Note: this also contains some minor fixes for GlobalISel which
did not work correctly with non-NSA form instructions on GFX10.

Reviewed By: foad

Differential Revision: https://reviews.llvm.org/D103348
2021-07-23 16:16:06 +09:00
Carl Ritson 6efb3220b4 [AMDGPU] Add VReg_192/VReg_224 support for MIMG instructions
Allow MIMG instructions to be selected with 6/7 VGPRs for vaddr.
Previously these were rounded up to VReg_256; avoiding that saves VGPRs.

Reviewed By: foad

Differential Revision: https://reviews.llvm.org/D103800
2021-07-22 10:42:15 +09:00
Carl Ritson 9dcd75f86f [AMDGPU] Allow frontends to disable null export for pixel shaders
Disable null export (for kills) when a frontend defines a pixel
shader as not exporting, using the amdgpu-color-export and
amdgpu-depth-export function attributes.
This allows the generation of export-free pixel shaders.

Reviewed By: foad

Differential Revision: https://reviews.llvm.org/D105683
2021-07-22 10:20:46 +09:00
Stanislav Mekhanoshin c54c76037b Prevent dead uses in register coalescer after rematerialization
The coalescer does not check whether register uses are available
at the point of rematerialization. If it attempts to rematerialize
an instruction with such uses, it can end up with a use without a def.

LiveRangeEdit does such check during rematerialization, so just
call LiveRangeEdit::allUsesAvailableAt() to avoid the problem.

Differential Revision: https://reviews.llvm.org/D106396
2021-07-21 15:19:55 -07:00
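Conceptually the added guard is: every register read by the instruction being rematerialized must still hold the same value at the new insertion point. A simplified standalone sketch (invented names; the liveness query is passed in as an assumption, it is not the LiveRangeEdit API):

#include <functional>
#include <vector>

// Sketch of the guard: refuse to rematerialize unless every register read
// by the defining instruction still carries the same value at the new
// location; otherwise we would create a use without a reaching def.
// `ValueAvailableAt(Reg, NewIdx)` is a hypothetical liveness query.
static bool allUsesAvailableAt(
    const std::vector<unsigned> &UsedRegs, unsigned NewIdx,
    const std::function<bool(unsigned, unsigned)> &ValueAvailableAt) {
  for (unsigned Reg : UsedRegs)
    if (!ValueAvailableAt(Reg, NewIdx))
      return false;
  return true;
}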
Stanislav Mekhanoshin fe197ef9f1 [AMDGPU] Mark relevant rematerializable VOP3 instructions
Differential Revision: https://reviews.llvm.org/D106110
2021-07-21 14:44:13 -07:00
Stanislav Mekhanoshin 9625ca5b60 [AMDGPU] Mark relevant rematerializable VOP2 instructions
Differential Revision: https://reviews.llvm.org/D106023
2021-07-21 14:24:59 -07:00
Stanislav Mekhanoshin 4eb24817ec [AMDGPU] Mark all relevant VOP1 instructions rematerializable
Differential Revision: https://reviews.llvm.org/D105919
2021-07-21 14:05:32 -07:00
Stanislav Mekhanoshin d01b34ed31 [AMDGPU] Move perfhint analysis
This is an SCC pass; moving it to the end of the SCC PM saves one
Function PM. This requires the analysis to take memory access width
into account, since the pass is now placed after the
load/store optimizer (D105651).

Differential Revision: https://reviews.llvm.org/D105652
2021-07-21 13:06:49 -07:00
Stanislav Mekhanoshin a397c1c82f [AMDGPU] Tune perfhint analysis to account access width
A function with fewer memory instructions but wider accesses
is the same as a function with more but narrower accesses
in terms of memory boundedness. In fact, without this change the pass
would give different answers before and after vectorization.

Differential Revision: https://reviews.llvm.org/D105651
2021-07-21 12:46:10 -07:00
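The idea can be pictured with a small sketch (invented scoring, not the pass's actual heuristic): weigh memory operations by the bytes they access instead of merely counting them, so vectorizing four 4-byte loads into one 16-byte load does not change the answer.

#include <cstddef>
#include <vector>

// Hypothetical per-instruction summary.
struct MemAccess {
  unsigned Bytes; // width of the access in bytes
};

// Sketch: score memory boundedness by total bytes moved per instruction,
// rather than by the number of memory instructions.
static double memoryBoundednessScore(const std::vector<MemAccess> &Accesses,
                                     std::size_t TotalInsts) {
  if (TotalInsts == 0)
    return 0.0;
  unsigned long long Bytes = 0;
  for (const MemAccess &A : Accesses)
    Bytes += A.Bytes;
  return static_cast<double>(Bytes) / static_cast<double>(TotalInsts);
}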
Sebastian Neubauer b642d01fa8 [AMDGPU] Improve killed check for vgpr optimization
The killed flag is not always set. E.g. when a variable is used in a
loop, it is never marked as killed, although it is unused in following
basic blocks. Also, we try to deprecate kill flags and not use them.

Check if the register is live in the endif block. If not, consider it
killed in the then and else blocks.

The vgpr-liverange tests have two new tests with loops
(pre-committed, so the diff is visible).
I also needed to change the subtarget to gfx10.1; otherwise calls
do not work.

Differential Revision: https://reviews.llvm.org/D106291
2021-07-21 15:24:59 +02:00
Sebastian Neubauer aba1f157ca [AMDGPU] Precommit vgpr-liverange tests 2021-07-21 15:24:59 +02:00
Sebastian Neubauer 2b08f6af62 [AMDGPU] Improve register computation for indirect calls
First, collect the register usage in each function, then apply the
maximum register usage of all functions to functions with indirect
calls.

This is more accurate than guessing the maximum register usage without
looking at the actual usage.

As before, assume that indirect calls will hit a function in the
current module.

Differential Revision: https://reviews.llvm.org/D105839
2021-07-20 13:48:50 +02:00
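A simplified sketch of that scheme (invented types, not the actual AMDGPU resource-usage analysis): collect per-function usage, take the module-wide maximum, and apply it to any function containing an indirect call.

#include <algorithm>
#include <string>
#include <unordered_map>

// Hypothetical per-function register usage summary.
struct RegUsage {
  unsigned NumVGPRs = 0;
  unsigned NumSGPRs = 0;
  bool HasIndirectCall = false;
};

// Sketch: assume an indirect call may reach any function in the module, so
// callers with indirect calls get the module-wide maximum usage.
static void applyIndirectCallUsage(std::unordered_map<std::string, RegUsage> &Funcs) {
  RegUsage ModuleMax;
  for (const auto &KV : Funcs) {
    ModuleMax.NumVGPRs = std::max(ModuleMax.NumVGPRs, KV.second.NumVGPRs);
    ModuleMax.NumSGPRs = std::max(ModuleMax.NumSGPRs, KV.second.NumSGPRs);
  }
  for (auto &KV : Funcs)
    if (KV.second.HasIndirectCall) {
      KV.second.NumVGPRs = std::max(KV.second.NumVGPRs, ModuleMax.NumVGPRs);
      KV.second.NumSGPRs = std::max(KV.second.NumSGPRs, ModuleMax.NumSGPRs);
    }
}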
Jay Foad 0821c8824b [AMDGPU] Pre-commit test case for D106284
This test case shows the scheduler wrongly reordering two buffer
accesses that might alias.
2021-07-20 12:05:33 +01:00
Stanislav Mekhanoshin 9dc2636623 [AMDGPU] Disable LDS lowering for GFX shaders
Apparently these need external LDS symbols to remain.

Fixes: SC1-3279

Differential Revision: https://reviews.llvm.org/D106288
2021-07-20 02:55:25 -07:00
Matt Arsenault c9ec807b11 CodeGen: Make MachineOptimizationRemarkEmitterPass a CFG analysis
This avoids rerunning it a few times.
2021-07-19 21:08:26 -04:00
Matt Arsenault 67d6132463 GlobalISel: Preserve memory types for implicit sret load/stores 2021-07-19 11:52:42 -04:00
Matt Arsenault 9236125ec8 GlobalISel: Preserve LLT when bitcasting loads and stores
This also avoids improperly legalizing some truncating vector stores.
2021-07-19 11:30:14 -04:00
Simon Pilgrim 5643be96bc [DAG] Enable foldSelectOfBinops on select(setcc(),binop(),binop()) calls 2021-07-18 18:38:59 +01:00
Carl Ritson c7f2f81f5e [AMDGPU] Tidy SReg/SGPR definitions using template class
Use a multiclass to consistently define SReg/SGPR/TTMP register classes.
Add missing TTMP registers for 96b, 160b, 192b, 224b.

Reviewed By: foad

Differential Revision: https://reviews.llvm.org/D105800
2021-07-17 11:26:46 +09:00
Matt Arsenault 51f115b078 AMDGPU/GlobalISel: Add a few tests for struct arguments
Test structs with pointers and vectors of pointers since this stresses
a future patch.
2021-07-16 20:20:55 -04:00
Matt Arsenault 27addb85a6 AMDGPU/GlobalISel: Fix some incorrect memory types in tests 2021-07-16 20:20:55 -04:00
Matt Arsenault 3ceb92295e AMDGPU/GlobalISel: Preserve more memory types 2021-07-16 08:57:26 -04:00
Matt Arsenault 21a0ef8d19 AMDGPU/GlobalISel: Redo kernel argument load handling
This avoids relying on G_EXTRACT on unusual types, and also properly
decomposes structs into multiple registers. This also preserves the
LLTs in the memory operands.
2021-07-16 08:56:54 -04:00
Matt Arsenault a81a7a9ad8 AMDGPU/GlobalISel: Fix incorrect memory types in test 2021-07-15 19:11:40 -04:00
Matt Arsenault e91da668d0 GlobalISel: Track argument pointeriness with arg flags
Since we're still building on top of the MVT based infrastructure, we
need to track the pointer type/address space on the side so we can end
up with the correct pointer LLTs when interpreting CCValAssigns.
2021-07-15 19:11:40 -04:00
Fangrui Song aa3df8ddcd [test] Avoid llvm-readelf/llvm-readobj one-dash long options and deprecated aliases (e.g. --file-headers) 2021-07-15 10:26:21 -07:00
Stanislav Mekhanoshin c46d99e4ba [AMDGPU] Refine -O0 and -O1 passes.
Differential Revision: https://reviews.llvm.org/D105579
2021-07-15 09:51:54 -07:00
Kuter Dinel a7749c3f79 [AMDGPU] Use update_test_checks.py script for annotate kernel features tests.
This patch makes the annotate-kernel-features tests use the update_test_checks.py
script, which makes it easy to update the tests.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D105864
2021-07-15 03:13:37 +03:00
Stanislav Mekhanoshin 76b7d3432e [AMDGPU] Add TII::isIgnorableUse() to allow VOP rematerialization
Any def of EXEC prevents rematerialization of any VOP instruction
because of the physreg use. Create a callback to check whether the
physreg use can be ignored to allow rematerialization.

Differential Revision: https://reviews.llvm.org/D105836
2021-07-14 13:03:58 -07:00
Matt Arsenault 47269da5d8 GlobalISel: Handle lowering non-power-of-2 extloads 2021-07-14 11:54:11 -04:00
Jay Foad 372bb08252 [AMDGPU] Check llc-pipeline.ll with -match-full-lines -strict-whitespace
This prevents breaking the indentation that shows the structure of the
pass managers.

Differential Revision: https://reviews.llvm.org/D105891
2021-07-14 16:33:50 +01:00
Djordje Todorovic df686842bc [RemoveRedundantDebugValues] Add a Pass that removes redundant DBG_VALUEs
This new MIR pass removes redundant DBG_VALUEs.

After the register allocator is done, more precisely, after
the Virtual Register Rewriter, we end up having duplicated
DBG_VALUEs, since some virtual registers are rewritten
into the same physical register as existing DBG_VALUEs.
Each DBG_VALUE should indicate a variable assignment (at least before
LiveDebugValues), but this is clobbered for function
parameters during SelectionDAG, since it generates new DBG_VALUEs
after COPY instructions even though the parameter has no new assignment.
For example, if we had a DBG_VALUE $regX as an entry debug value
representing the parameter, then a COPY followed by a DBG_VALUE $virt_reg,
and after the virtregrewrite the $virt_reg gets rewritten into $regX,
we'd end up with a redundant DBG_VALUE.

This breaks the definition of the DBG_VALUE, since some analysis passes
might be built on top of that premise, and this patch tries to fix
the MIR with respect to that.

This first patch performs a backward scan, trying to detect a sequence of
consecutive DBG_VALUEs and removing all DBG_VALUEs describing one
variable except the last one.

For example:

(1) DBG_VALUE $edi, !"var1", ...
(2) DBG_VALUE $esi, !"var2", ...
(3) DBG_VALUE $edi, !"var1", ...
 ...

in this case, we can remove (1).

Combined with the forward scan that will be introduced in the next patch
of this stack, the statistics show that RemoveRedundantDebugValues
removes 15032 instructions, using gdb-7.11 as a testbed.

Differential Revision: https://reviews.llvm.org/D105279
2021-07-14 04:29:42 -07:00
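The backward scan can be sketched in plain C++ over a flattened view of a block (hypothetical record type, not the MIR API): within a run of consecutive debug values, only the last entry per variable is kept.

#include <string>
#include <unordered_set>
#include <vector>

// Hypothetical flattened view of a block: each record is either a debug
// value for a named variable or a regular instruction.
struct Record {
  bool IsDebugValue;
  std::string Variable; // only meaningful when IsDebugValue is true
  bool Erased = false;
};

// Sketch of the backward scan: walking a run of consecutive DBG_VALUEs from
// the bottom up, the first record seen for a variable is the live one; any
// earlier record for the same variable in that run is redundant.
static void removeRedundantDebugValues(std::vector<Record> &Block) {
  std::unordered_set<std::string> SeenInRun;
  for (auto It = Block.rbegin(); It != Block.rend(); ++It) {
    if (!It->IsDebugValue) {
      SeenInRun.clear(); // a real instruction ends the consecutive run
      continue;
    }
    if (!SeenInRun.insert(It->Variable).second)
      It->Erased = true; // an earlier DBG_VALUE for the same variable
  }
}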
Sebastian Neubauer 4359b870b1 [AMDGPU] Init scratch only if necessary
If no scratch or flat instructions are used, we do not need to
initialize the flat scratch hardware register.

Differential Revision: https://reviews.llvm.org/D105920
2021-07-14 10:45:22 +02:00
Sebastian Neubauer a12e551882 [AMDGPU] Precommit flat-scratch-init.ll test 2021-07-14 10:45:22 +02:00
Ruiling Song d9b9fdd91b [AMDGPU] Don't handle export done when unify exit nodes
This patch aims to revert the changes introduced by D70781, D71192, and D76364.

D70781 was introduced to fix a hardware hang where we did not insert exp-
null-done for a kill inside an infinite loop. At that time we had not added
exp-null-done for kill early termination, but I believe that as of now we will
always add the exp-null-done for the early-termination case in SILateBranchLowering.

D71192 was introduced to handle the only_kill case, which has also been
handled by the kill early termination work.

D76364 was used to fix a regression from D71192, where we cleared the done
bit of the export in the existing program and did not let the normal return
block branch to the new unified return block.

With this change, we just trust that frontends have set up exp-done correctly,
which is true for all existing frontends. The backend only inserts
exp-null-done for the kill cases, which is handled in SILateBranchLowering.cpp.

Reviewed by: critson

Differential Revision: https://reviews.llvm.org/D105610
2021-07-14 14:54:37 +08:00
Ruiling Song 1d9585c8c1 [NFC][AMDGPU] autogenerate kill-infinite-loop.ll checks
This would help us to track the assembly changes to these tests.

Reviewed by: foad

Differential Revision: https://reviews.llvm.org/D105609
2021-07-14 14:52:31 +08:00
Ruiling Song 40e3df2a1b [RegisterCoalescer] Resolve conflict based on liveness of subregister
Currently we are resolving lane/subregister conflicts by visiting
instructions sequentially in the current block to see whether there is any
use of the tainted lanes. To save compile time, we do no further
checking in successor blocks. This sounds reasonable without subregister liveness.

But since we have added subregister liveness tracking capability to the
register coalescer, we can easily determine whether we have a subregister
liveness conflict by checking subranges. This helps coalesce more
COPYs for targets that enable subregister liveness tracking.

Reviewed by: arsenm, qcolombet

Differential Revision: https://reviews.llvm.org/D104509
2021-07-14 14:43:22 +08:00
Matt Arsenault 3191ac27e3 AMDGPU: Try to fix test failure with EXPENSIVE_CHECKS
The machine verifier is enabled by default for EXPENSIVE_CHECKS, so
its extra pass runs would pollute the output here.
2021-07-13 19:34:14 -04:00
Matt Arsenault eebe841a47 RegAlloc: Allow targets to split register allocation
AMDGPU normally spills SGPRs to VGPRs. Previously, since all register
classes were handled at the same time, this was problematic: we did not
know ahead of time how many registers would need to be reserved to
handle the spilling. If no VGPRs were left for spilling, we would have
to try to spill to memory. If the spilled SGPRs were required for exec
mask manipulation, it is highly problematic because the lanes active
at the point of spill are not necessarily the same as at the restore
point.

Avoid this problem by fully allocating SGPRs in a separate regalloc
run from VGPRs. This way we know the exact number of VGPRs needed, and
can reserve them for a second run.  This fixes the most serious
issues, but it is still possible using inline asm to make all VGPRs
unavailable. Start erroring in the case where we ever would require
memory for an SGPR spill.

This is implemented by giving each regalloc pass a callback which
reports if a register class should be handled or not. A few passes
need some small changes to deal with leftover virtual registers.

In the AMDGPU implementation, a new pass is introduced to take the
place of PrologEpilogInserter for SGPR spills emitted during the first
run.

One disadvantage of this is currently StackSlotColoring is no longer
used for SGPR spills. It would need to be run again, which will
require more work.

Error if the standard -regalloc option is used. Introduce new separate
-sgpr-regalloc and -vgpr-regalloc flags, so the two runs can be
controlled individually. PBQP is not currently supported, so this also
prevents using the unhandled allocator.
2021-07-13 18:49:29 -04:00
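The filtering mechanism can be pictured with a simplified standalone sketch (invented names; the real patch threads a callback into the existing allocator passes): each run only touches the register classes its predicate accepts, so SGPRs can be fully allocated before the VGPR run starts.

#include <functional>
#include <vector>

// Hypothetical register-class and virtual-register descriptions.
enum class RegClassKind { SGPR, VGPR };
struct VirtReg {
  RegClassKind RC;
  bool Allocated = false;
};

// Each allocator run receives a predicate saying which register classes it
// should handle; everything else is left virtual for a later run.
using RegClassFilter = std::function<bool(RegClassKind)>;

static void runRegAlloc(std::vector<VirtReg> &VRegs, const RegClassFilter &ShouldAllocate) {
  for (VirtReg &VR : VRegs)
    if (ShouldAllocate(VR.RC))
      VR.Allocated = true; // placeholder for the actual allocation work
}

// Sketch of the split: allocate SGPRs first (so the number of VGPRs needed
// for SGPR spills is known), then allocate VGPRs in a second run.
static void splitRegAlloc(std::vector<VirtReg> &VRegs) {
  runRegAlloc(VRegs, [](RegClassKind RC) { return RC == RegClassKind::SGPR; });
  runRegAlloc(VRegs, [](RegClassKind RC) { return RC == RegClassKind::VGPR; });
}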
Matt Arsenault 222fde1eec GlobalISel: Use extension instead of merge with undef in common case
This fixes not respecting signext/zeroext in these cases. In the
anyext case, this avoids a larger merge with undef and should be a
better canonical form.

This should also handle this if a merge is needed, but I'm not aware
of a case where that can happen. In a future change this will also
allow AMDGPU to drop some custom code without introducing regressions.
2021-07-13 11:04:47 -04:00
Sebastian Neubauer ad2c66ec5d [AMDGPU] Optimize VGPR LiveRange in waterfall loops
The loops are run exactly once per lane, so VGPRs do not need to be
saved. Use the SIOptimizeVGPRLiveRange pass to add phi nodes that take
undef when coming from the loop.

There is still a shortcoming:
Return values from a function call in the loop are copied because their
live range conflicts with the live range of arguments, even if arguments
are only IMPLICIT_DEF after the phi insertion.

Differential Revision: https://reviews.llvm.org/D105192
2021-07-13 12:15:08 +02:00
Sebastian Neubauer 9d72c0ad43 [AMDGPU] Mark waterfall loops as SI_WATERFALL_LOOP
This way, they can be detected later, e.g. by the
SIOptimizeVGPRLiveRange pass.

Differential Revision: https://reviews.llvm.org/D105467
2021-07-13 12:15:08 +02:00
Stanislav Mekhanoshin d46d534dbb [AMDGPU] Make some VOP1 instructions rematerializable
This is a pilot change to verify the logic. The rest will be
done the same way, at least for the rest of VOP1.

Differential Revision: https://reviews.llvm.org/D105742
2021-07-12 23:43:45 -07:00
Amara Emerson 58a2cb5143 [GlobalISel] Add a new artifact combiner for unmerge which looks through general artifact expressions.
The original motivation for this was to implement moreElementsVector of shuffles
on AArch64, which resulted in complex sequences of artifacts like unmerge(unmerge(concat...))
which the combiner couldn't handle. It seemed here that the better option,
instead of writing ever-more-complex combines, was to have a way to find
the original "non-artifact" source registers for a given definition, walking
through arbitrary expressions of unmerge/concat/insert. As long as the bits
aren't extended or truncated, this is a pretty simple algorithm that avoids
the need for lots of combines and instead jumps straight to the final result
we want.

I've only used this new technique in 2 places within tryCombineUnmerge; using it
in more general situations resulted in infinite loops in AMDGPU. So for now
it's used when we would otherwise fail to combine, and that seems to work.

In order to support looking through G_INSERTs, I also had to add it as an
artifact in isArtifact(), which caused a whole lot of issues in tests. AMDGPU
started infinite looping since full legalization of G_INSERT doesn't seem to
be there. To work around this, I've temporarily added a CLI option to use the
old behaviour so that the MIR tests will still run and terminate.

Other minor changes include no longer making >128b G_MERGE/UNMERGE legal.
We never had isel support for that anyway, and it was a remnant of the legacy
legalizer rules. However, being legal prevented the combiner from checking if
they were dead and deleting them.

Differential Revision: https://reviews.llvm.org/D104355
2021-07-09 22:35:00 -07:00
Stanislav Mekhanoshin 3e97d11df8 [AMDGPU] Added v_accvgpr_read_b32 rematerialization test. NFC. 2021-07-09 12:59:02 -07:00
Stanislav Mekhanoshin 4a3b055653 [AMDGPU] Fix flags of V_MOV_B64_PSEUDO
In particular it was not rematerializable.

Differential Revision: https://reviews.llvm.org/D105724
2021-07-09 12:49:28 -07:00
Stanislav Mekhanoshin b379ab4193 [AMDGPU] Add VOP rematerialization test. NFC. 2021-07-09 12:16:08 -07:00
Nikita Popov ff8b1b1b9c Reapply [IR] Don't mark mustprogress as type attribute
Reapply with fixes for clang tests.

-----

This is a simple enum attribute. Test changes are because enum
attributes are sorted before type attributes, so mustprogress is
now in a different position.
2021-07-09 20:57:44 +02:00
Nikita Popov 23dd750279 Revert "[IR] Don't mark mustprogress as type attribute"
This reverts commit 84ed3a794b.

A number of clang tests are also affected by this change. Revert
until I can update them.
2021-07-09 18:46:00 +02:00
Nikita Popov 84ed3a794b [IR] Don't mark mustprogress as type attribute
This is a simple enum attribute.

Test changes are because enum attributes are sorted before type
attributes.
2021-07-09 18:24:16 +02:00
Nico Weber 97c675d3d4 Revert "Revert "Temporarily do not drop volatile stores before unreachable""
This reverts commit 52aeacfbf5.
There isn't full agreement on a path forward yet, but there is agreement that
this shouldn't land as-is.  See discussion on https://reviews.llvm.org/D105338

Also reverts unreviewed "[clang] Improve `-Wnull-dereference` diag to be more in-line with reality"
This reverts commit f4877c78c0.

And all the related changes to tests:
This reverts commit 9a0152799f.
This reverts commit 3f7c9cc274.
This reverts commit 329f8197ef.
This reverts commit aa9f58cc2c.
This reverts commit 2df37d5ddd.
This reverts commit a72a441812.
2021-07-09 11:44:34 -04:00
Roman Lebedev 2df37d5ddd [NFC][Codegen] Harden a few tests to not rely that volatile store to null isn't erased 2021-07-09 13:30:42 +03:00
Stanislav Mekhanoshin e5b0fe1b83 [AMDGPU] Mark more SOP instructions as rematerializable
The rest of the SOP instructions implicitly set SCC and are not
suitable for rematerialization.

Differential Revision: https://reviews.llvm.org/D105670
2021-07-08 16:00:45 -07:00
Stanislav Mekhanoshin de5582be26 [AMDGPU] Fix more indention in llc-pipeline test. NFC. 2021-07-08 11:20:00 -07:00
Stanislav Mekhanoshin 9dae86ce56 [AMDGPU] Fix indention in llc-pipeline test. NFC. 2021-07-08 11:08:25 -07:00
Stanislav Mekhanoshin 74a5760d35 [AMDGPU] Set LoopInfo as preserved by SIAnnotateControlFlow
The pass does not change loops; it just adds calls.

Differential Revision: https://reviews.llvm.org/D105583
2021-07-08 09:34:43 -07:00
Stanislav Mekhanoshin 0fdb25cd95 [AMDGPU] Disable garbage collection passes
Differential Revision: https://reviews.llvm.org/D105593
2021-07-07 15:47:57 -07:00
Stanislav Mekhanoshin a0ab45799b [AMDGPU] Move atomic expand past infer address spaces
There are cases where the infer-address-spaces pass cannot yet
infer an address space in the opt pipeline, and then in the
llc pipeline it runs too late for the atomic-expand pass to
benefit from a specific address space.

Move the atomic expand pass past infer address spaces.

Fixes: SWDEV-293410

Differential Revision: https://reviews.llvm.org/D105511
2021-07-06 15:53:32 -07:00
Stanislav Mekhanoshin 5915d33874 [AMDGPU] Do not run IR optimizations at -O0
Differential Revision: https://reviews.llvm.org/D105515
2021-07-06 15:29:52 -07:00
Sebastian Neubauer db646de3ee [AMDGPU] Set optional PAL metadata
Set informational fields in the .shader_functions table.

Also correct the documentation: .scratch_memory_size and .lds_size are
integers.

Differential Revision: https://reviews.llvm.org/D105116
2021-07-06 11:58:00 +02:00
David Stuttard 83cb9632a1 [DAGCombiner] Add support for mulhi const folding in DAGCombiner
Differential Revision: https://reviews.llvm.org/D103323

Change-Id: I4ffaaa32301795ba8a339567a68e77fe0862b869
2021-07-05 12:01:26 +01:00
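The folded computation is just the high half of a double-width constant product; a minimal sketch for 32-bit operands (standalone C++, not the DAGCombiner code):

#include <cstdint>

// Sketch: constant folding an unsigned/signed mulhi of 32-bit operands
// returns the high 32 bits of the 64-bit product.
static uint32_t foldMulHiU32(uint32_t A, uint32_t B) {
  return static_cast<uint32_t>((static_cast<uint64_t>(A) * B) >> 32);
}

static int32_t foldMulHiS32(int32_t A, int32_t B) {
  int64_t Prod = static_cast<int64_t>(A) * static_cast<int64_t>(B);
  return static_cast<int32_t>(static_cast<uint64_t>(Prod) >> 32); // high half
}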
David Stuttard 4b125b23ba [DAGCombiner] Pre-commit test to demonstrate mulhi const folding
D103323 will fold this

Differential Revision: https://reviews.llvm.org/D105424

Change-Id: I64947215eb531fbd70b52a72203b39e43fefafcc
2021-07-05 11:34:38 +01:00
David Stuttard b8173c3178 [AMDGPU] Stop mulhi from doing 24 bit mul for uniform values
Added support to check whether the architecture supports s_mulhi, which is used
as part of the decision whether or not to use a VALU 24-bit mul (if the mulhi
gets transformed to a VALU op anyway, then we may as well use it).

This is an extension of the work in D97063

Differential Revision: https://reviews.llvm.org/D103321

Change-Id: I80b1323de640a52623d69ac005a97d06a5d42a14
2021-07-05 10:33:23 +01:00
Matt Arsenault 99c7e918b5 GlobalISel: Use LLT in call lowering callbacks
This preserves the memory type so the lowerings can rely on them.
2021-07-01 12:15:54 -04:00
Stanislav Mekhanoshin 661577e698 [AMDGPU] Fix immediate sign during V_MOV_B64_PSEUDO expansion
Creating a V_MOV_B32 with a zero-extended immediate source
prevented conversion to V_BFREV_B32.

Differential Revision: https://reviews.llvm.org/D105235
2021-07-01 09:00:29 -07:00
Matt Arsenault 28f2f66200 GlobalISel: Use LLT in memory legality queries
This enables proper lowering of non-byte sized loads. We still aren't
faithfully preserving memory types everywhere, so the legality checks
still only consider the size.
2021-06-30 17:44:13 -04:00