Commit Graph

4517 Commits

Author SHA1 Message Date
Teresa Johnson f9ca75f19b [Inliner] Inlining should honor nobuiltin attributes
Summary:
Final patch in series to fix inlining between functions with different
nobuiltin attributes/options, which was specifically an issue in LTO.
See discussion on D61634 for background.

The prior patch in this series (D67923) enabled per-Function TLI
construction that identified the nobuiltin attributes.

Here I have allowed inlining to proceed if the callee's nobuiltins are a
subset of the caller's nobuiltins, but not in the reverse case, which
should be conservatively correct. This is controlled by a new option,
-inline-caller-superset-nobuiltin, which is enabled by default.
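
To illustrate the subset rule described above, here is a minimal standalone sketch (not the actual Inliner/TargetLibraryInfo code; the helper name and the use of plain string sets are assumptions of this sketch): inlining is permitted only when every builtin the callee has disabled is also disabled in the caller.

```cpp
#include <set>
#include <string>

// Hypothetical helper: returns true when the callee's nobuiltins are a
// subset of the caller's, i.e. the caller disables at least everything
// the callee disables, so inlining cannot re-enable a builtin.
bool callerSupersetOfCalleeNobuiltins(
    const std::set<std::string> &CallerNoBuiltins,
    const std::set<std::string> &CalleeNoBuiltins) {
  for (const std::string &Name : CalleeNoBuiltins)
    if (!CallerNoBuiltins.count(Name))
      return false; // Callee disables a builtin the caller still allows.
  return true;
}
```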

Reviewers: hfinkel, gchatelet, chandlerc, davidxl

Subscribers: arsenm, jvesely, nhaehnle, mehdi_amini, eraman, hiraditya, haicheng, dexonsmith, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74162
2020-02-28 07:34:14 -08:00
Jay Foad 970558df94 [AMDGPU] Mark the scheduling model as complete 2020-02-28 13:35:55 +00:00
Jay Foad addcbc401c [AMDGPU] Update a comment missed in 74e2974ac6 2020-02-28 13:35:55 +00:00
Stanislav Mekhanoshin 6b813f2762 [AMDGPU] Enable runtime unroll for LDS
We want to unroll loops that access LDS even when the trip count is
only known at runtime, so that LDS operations can be combined.

Differential Revision: https://reviews.llvm.org/D75293
2020-02-27 12:59:35 -08:00
Reid Kleckner 465dca79b3 Avoid SmallString.h include in MD5.h, NFC
Saves 200 includes, which is mostly immaterial.
2020-02-26 09:10:24 -08:00
Nicolai Hähnle d6b05fccb7 Full fix for "AMDGPU/SIInsertSkips: Fix the determination of whether early-exit-after-kill is possible" (hopefully)
Properly preserve the MachineDominatorTree in all cases.

Change-Id: I54cf0c0a20934168a356920ba8ed5097a93c4131
2020-02-26 16:21:44 +01:00
Nicolai Hähnle 0aec4b418e Quick fix for bot failure on "AMDGPU/SIInsertSkips: Fix the determination of whether early-exit-after-kill is possible"
Apparently the dominator tree update is incorrect, will investigate.

Change-Id: Ie76f8d11b22a552af1f098c893773f3d85e02d4f
2020-02-26 16:02:22 +01:00
Nicolai Hähnle 0f1df48925 AMDGPU/SIInsertSkips: Fix the determination of whether early-exit-after-kill is possible
Summary:
The old code made some incorrect assumptions about the order in which
basic blocks are laid out in a function. This could lead to incorrect
early-exits, especially when kills occurred inside of loops.

The new approach is to check whether the point where the conditional
kill occurs dominates all reachable code. If that is the case, there
cannot be any other threads in the wave that are waiting to rejoin
at a later point in the CFG, i.e. if exec=0 at that point, then all
threads really are dead and we can exit the wave.
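
As a rough standalone sketch of that check (assumed CFG and dominator representation, not the actual SIInsertSkips code): the early exit is only taken when the kill block dominates every block reachable from it.

```cpp
#include <functional>
#include <queue>
#include <set>
#include <vector>

struct Block {
  std::vector<Block *> Succs; // CFG successors
};

// Dominates(A, B) is assumed to come from a dominator tree query.
bool canEarlyExitAfterKill(
    Block *KillBB, const std::function<bool(Block *, Block *)> &Dominates) {
  std::set<Block *> Visited;
  std::queue<Block *> Worklist;
  Worklist.push(KillBB);
  while (!Worklist.empty()) {
    Block *BB = Worklist.front();
    Worklist.pop();
    if (!Visited.insert(BB).second)
      continue;
    // A reachable block not dominated by the kill point could still be
    // reached by other threads in the wave, so exec=0 does not imply
    // the whole wave is dead.
    if (!Dominates(KillBB, BB))
      return false;
    for (Block *S : BB->Succs)
      Worklist.push(S);
  }
  return true;
}
```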

Make some other minor cleanups to the pass while we're at it.

v2: preserve the dominator tree

Reviewers: arsenm, cdevadas, foad, critson

Subscribers: kzhuravl, jvesely, wdng, yaxunl, dstuttard, tpr, t-tye, hiraditya, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74908

Change-Id: Ia0d2b113ac944ad642d1c622b6da1b20aa1aabcc
2020-02-26 15:30:42 +01:00
Scott Linder 481b1c8380 [AMDGPU] Implement wave64 DWARF register mapping
Summary:
Implement the DWARF register mapping described in
llvm/docs/AMDGPUUsage.rst

This is currently limited to wave64 VGPRs/AGPRs.

This also includes some minor changes in AMDGPUInstPrinter,
AMDGPUMCTargetDesc, and AMDGPUAsmParser to make generating CFI assembly
text and ELF sections possible to ease testing, although complete CFI
support is not yet implemented.

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74915
2020-02-25 14:00:01 -05:00
Matt Arsenault 86e13ec194 AMDGPU/GlobalISel: Use packed for G_ADD/G_SUB/G_MUL v2s16 2020-02-25 11:20:35 -05:00
Jay Foad 33cbd5ee08 AMDGPU/GlobalISel: Legalize s64 min/max by lowering
Reviewers: arsenm, rampitec

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, rovka, dstuttard, tpr, t-tye, hiraditya, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75108
2020-02-25 16:00:43 +00:00
Matt Arsenault fee41517fe AMDGPU/GlobalISel: Introduce post-legalize combiner
The current set of custom combines is only really useful after
legalization, so move them there. There is a lot of overlap in the
boilerplate here, but I think we do want a pretty different set of
combines before and after legalization. I think we will want a lot of
overlap between the post-legalize and a post-regbankselect combiner.
2020-02-24 22:12:12 -05:00
Matt Arsenault 0b46b078b6 AMDGPU/GlobalISel: Fix incorrect VOP3P fneg folding
We use some s32 values in VOP3P operands, and won't see any
intervening casts from a 32-bit fneg. Make sure it's really a packed
fneg before folding.
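
A standalone bit-level illustration of the distinction (not backend code): a scalar f32 fneg flips only bit 31, while a packed v2f16 fneg flips the sign bit of each 16-bit half, so the two must not be confused when folding.

```cpp
#include <cstdint>
#include <cstdio>

uint32_t fnegF32Bits(uint32_t Bits)   { return Bits ^ 0x80000000u; } // one sign bit
uint32_t fnegV2F16Bits(uint32_t Bits) { return Bits ^ 0x80008000u; } // both sign bits

int main() {
  uint32_t Packed = 0x3C003C00u; // v2f16 <1.0, 1.0>
  std::printf("%08X\n", (unsigned)fnegF32Bits(Packed));   // BC003C00: only the high half negated
  std::printf("%08X\n", (unsigned)fnegV2F16Bits(Packed)); // BC00BC00: both halves negated
}
```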
2020-02-24 21:20:35 -05:00
Jay Foad 0ed4744bb5 AMDGPU/GlobalISel: Lower 64-bit uaddo/usubo
Summary: Add more test cases for signed and unsigned add/sub with overflow.

Reviewers: arsenm, rampitec, kerbowa

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, rovka, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75051
2020-02-24 23:08:14 +00:00
Stanislav Mekhanoshin 4135077e26 [AMDGPU] use llvm_unreachable instead of default for rp set
GCC 9.2 seems to incorrectly issue a warning about an out-of-bounds
access. This situation cannot actually happen.
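
A standalone analogue of that change (illustrative enum and values, with __builtin_unreachable standing in for LLVM's llvm_unreachable macro): handling every enumerator explicitly and marking the fall-through point unreachable states that the index is always valid, rather than giving the compiler a defensive default path to reason about.

```cpp
#include <cassert>

enum PressureSet { SGPR, VGPR, AGPR }; // illustrative only

unsigned getPressureLimit(PressureSet PS) {
  switch (PS) {
  case SGPR: return 104; // illustrative values
  case VGPR: return 256;
  case AGPR: return 256;
  }
  // No 'default:' case: every enumerator is handled above, so this
  // point cannot be reached with a valid PressureSet.
  assert(false && "unexpected pressure set");
  __builtin_unreachable();
}
```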

Differential Revision: https://reviews.llvm.org/D75071
2020-02-24 12:02:12 -08:00
Matt Arsenault bf4933b4ea AMDGPU/GlobalISel: Remove dead code 2020-02-21 19:19:32 -05:00
Mark Searles d3e170c438 Revert "[AMDGPU] Don’t mark the .note section as ALLOC"
This reverts commit 977cd661cf.

It breaks OpenCL testing. The OpenCL runtime uses PT_LOAD information
to calculate memory for global variables. This commit should be relanded once
the OpenCL runtime stops relying on PT_LOAD information for calculating global
variable memory size.

Differential Revision: https://reviews.llvm.org/D74995
2020-02-21 16:08:30 -08:00
Jay Foad b72f1448ce AMDGPU/GlobalISel: Better code for one case of G_SHUFFLE_VECTOR on v2i16
Reviewers: arsenm

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, rovka, dstuttard, tpr, t-tye, hiraditya, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74987
2020-02-21 21:16:39 +00:00
Matt Arsenault 00955a62e4 AMDGPU/GlobalISel: Fix SALU mapping for v2s16 min/max
The legalizer helper functions are too awkward to use for the 3-5
step legalization needed here. This needs to be widened, scalarized,
and lowered, and we should avoid creating vector extends and
truncates. Manually do all of this and expand.
2020-02-21 14:02:16 -05:00
Matt Arsenault db06870dbd AMDGPU: Move dot intrinsic patterns to instruction def
I tried to use some of the new tablegen features to avoid creating
different operand list permutations, but I still don't see a way to
programmatically build a source pattern dag.

Also add GlobalISel tests, which now all import successfully.

Some of the fneg fold tests are incorrect and need to be fixed in a
future commit.
2020-02-21 13:35:40 -05:00
Matt Arsenault 4c1c9422a3 AMDGPU/GlobalISel: Select llvm.amdgcn.fdot2
I'm slightly worried about the generated checks, since they won't catch
incorrect modifiers being added at the end of the line.
2020-02-21 13:35:40 -05:00
Matt Arsenault dfce5fd50a AMDGPU/GlobalISel: Select VOP3P instructions
This only handles the basic cases. More work is needed to make better
use of op_sel.
2020-02-21 13:35:40 -05:00
Matt Arsenault 72eef820d5 AMDGPU/GlobalISel: Select G_SHUFFLE_VECTOR
G_SHUFFLE_VECTOR is legal since it theoretically may help match op_sel
for VOP3P instructions. Expand it in some other way in case it doesn't
fold into the use instructions.
2020-02-21 13:35:40 -05:00
Matt Arsenault 60023e3471 AMDGPU: Use default operand for VOP3P clamp
We don't use this, and matching from the def doesn't make much sense.

There are multiple tablegen bugs with default operand
handling. undef_tied_input should work to handle the vdst_in
correctly, but this breaks the operand register class constraint which
it should be able to infer.
2020-02-21 12:14:18 -05:00
Matt Arsenault 043ed2e22a AMDGPU/GlobalISel: Fix xnor matching
We should try the generated matchers before the manual selection. This
means the patterns are now handling the common cases, but the manual
selection code is not yet dead. It's still handling the non-s32/s64
cases (like v2s16 and v2s32). Currently tablegen doesn't have a nice
way to have a single pattern that covers multiple types.
2020-02-21 11:42:49 -05:00
Matt Arsenault ac7abe0ba9 AMDGPU/GlobalISel: Manually select G_BUILD_VECTOR_TRUNC
We have patterns for s_pack* selection, but they assume the inputs are
a build_vector with 16-bit inputs, not a truncating build
vector. Since there's still outstanding work for how to handle
mismatched result and source element vector operations, and since I'm
trying a different packed vector strategy than SelectionDAG, just
manually select this for now.
2020-02-21 10:34:11 -05:00
Matt Arsenault 79ff188add AMDGPU/GlobalISel: Legalize G_FPOW
There are a few differences from the DAG handling. First, the DAG
uses a primitive selection pattern instead of custom legalizing this
operation. Because of that, the new lowering makes use of source
modifiers while the DAG does not.

Also instead of promoting f16, try to use the f16 log/exp. There's no
f16 fmul_legacy, so widen just for the multiply, although I'm not sure
that's the best solution.
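
For reference, the identity behind the lowering is pow(x, y) = exp2(y * log2(x)) for x > 0; a minimal scalar sketch (standalone, not the legalizer code) follows. In the f16 path described above, log2/exp2 stay in half precision and only the multiply is widened.

```cpp
#include <cmath>

// Valid for X > 0; zero and negative bases need the extra handling the
// real lowering provides.
float fpowSketch(float X, float Y) {
  return std::exp2(Y * std::log2(X));
}
```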
2020-02-21 10:31:13 -05:00
Matt Arsenault fab4cdea39 AMDGPU/GlobalISel: Select llvm.amdgcn.fmul.legacy 2020-02-21 10:30:26 -05:00
Matt Arsenault b64aa8c715 AMDGPU/GlobalISel: Fix constant bus violation with source modifiers
This looked through copies to find the source modifiers, which may
have been SGPR->VGPR copies added to avoid potential constant bus
violations. Re-insert a copy to a VGPR if this happens.
2020-02-21 10:30:23 -05:00
Matt Arsenault 083717cf49 AMDGPU: Fix v2i64<->v4f32 bitcast
I'm not sure how to test the v2i64->v4f32 case since I can't think of
any v2i64 cases that won't legalize to v4i32.
2020-02-20 09:49:09 -05:00
Sebastian Neubauer 977cd661cf [AMDGPU] Don’t mark the .note section as ALLOC
Marking a section as ALLOC tells the ELF loader to load the section into memory.
As we do not want to load the notes into VRAM, the flag should not be there.

Differential Revision: https://reviews.llvm.org/D74600
2020-02-20 15:14:48 +01:00
Simon Pilgrim 6085593c12 [AMDGPU] simplifyI24 - replace GetDemandedBits with SimplifyMultipleUseDemandedBits
GetDemandedBits mostly just calls SimplifyMultipleUseDemandedBits now, but it does a very blunt constant simplification that SimplifyMultipleUseDemandedBits avoids.

If we need to demand bits from constants we should handle this through ShrinkDemandedConstant/targetShrinkDemandedConstant.

@arsenm confirmed that the sign extended immediates are better for code size.

Differential Revision: https://reviews.llvm.org/D74857
2020-02-20 12:03:08 +00:00
Matt Arsenault 4bb0c8f91c AMDGPU: Enable integer division bypass
We probably want this, and I've meant to turn this on for a long
time. SC actually emits a special case to early-out for a 1
denominator, which perhaps should also be considered.
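
The general bypass idea, as a standalone sketch with illustrative widths (bypassing a 64-bit unsigned divide with a 32-bit one when both operands fit; not the backend implementation):

```cpp
#include <cstdint>

uint64_t divWithBypass(uint64_t N, uint64_t D) {
  // A further special case could early-out when D == 1, as noted above.
  if (((N | D) >> 32) == 0)                     // both operands fit in 32 bits
    return uint64_t(uint32_t(N) / uint32_t(D)); // cheaper narrow divide
  return N / D;                                 // slow full-width path
}
```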
2020-02-19 17:50:19 -05:00
Matt Arsenault cbc3b3046f AMDGPU/GlobalISel: Remove outdated comment 2020-02-19 17:32:25 -05:00
Stanislav Mekhanoshin 03954a12ae [AMDGPU] Fix DS_WRITE_B32 patterns
The pattern uses VGPR_32.RegTypes, which includes 16-bit types. As a
result DS_WRITE_B32 may be generated for a "store i16", which is a
bug. The only reason we do not hit it now is the relative pattern
complexity and sorting; should the DS_WRITE_B16 pattern complexity
become higher, the bug would appear.

Differential Revision: https://reviews.llvm.org/D74868
2020-02-19 13:42:16 -08:00
Stanislav Mekhanoshin ada205e91e [AMDGPU] Fix assumption about LaneBitmask content
Yet another assumption about the actual contents of a LaneBitmask
is fixed.

Differential Revision: https://reviews.llvm.org/D74805
2020-02-19 09:07:11 -08:00
Matt Arsenault ff4639f060 AMDGPU/GlobalISel: Select MUBUF path for global atomic cmpxchg
I'm not sure why this isn't a pattern, but the DAG manually selects
this.
2020-02-19 06:19:22 -08:00
Simon Pilgrim 4af8db317d [AMDGPU] performCvtF32UByteNCombine - add SHL and SimplifyMultipleUseDemandedBits support
This is part of the work to remove SelectionDAG::GetDemandedBits and just use SimplifyMultipleUseDemandedBits.

Recent experiments raised some v_cvt_f32_ubyte*_e32 regressions, so I've added some additional abilities to performCvtF32UByteNCombine to help unpack byte data more aggressively.

We still don't remove all OR(SHL,SRL) patterns as some of the regenerated nodes don't get combined again, but we are getting closer.

Differential Revision: https://reviews.llvm.org/D74786
2020-02-19 11:45:57 +00:00
Stanislav Mekhanoshin dd4766451e [AMDGPU] Use generated RegisterPressureSets enum
Differential Revision: https://reviews.llvm.org/D74671
2020-02-18 10:34:03 -08:00
Sander de Smalen 8fbc925807 Add OffsetIsScalable to getMemOperandWithOffset
Summary:
Making `Scale` a `TypeSize` in AArch64InstrInfo::getMemOpInfo
has the effect that all places where this information is used
(notably, TargetInstrInfo::getMemOperandWithOffset) will need
to consider that Scale, and the Offset derived from it, may be scalable.

This patch adds a new operand `bool &OffsetIsScalable` to
TargetInstrInfo::getMemOperandWithOffset and fixes up all
the places where this function is used, to consider the
offset possibly being scalable.

In most cases this means bailing out, because the algorithm does not
(or cannot) support scalable offsets, for example in places where it
does some form of alias checking.
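
A self-contained sketch of that caller-side pattern (the struct and helper below are stand-ins, not the real TargetInstrInfo interface): fixed-offset disjointness reasoning simply gives up when either offset is scalable.

```cpp
#include <cstdint>

struct MemOpInfo {
  int64_t Offset = 0;            // byte offset from the base operand
  bool OffsetIsScalable = false; // true if the offset scales with vscale
};

// True when two same-base accesses are provably disjoint using
// fixed-offset reasoning alone.
bool provablyDisjoint(const MemOpInfo &A, int64_t WidthA,
                      const MemOpInfo &B, int64_t WidthB) {
  // Scalable offsets break the "known byte distance" assumption, so be
  // conservative and report that disjointness cannot be proven.
  if (A.OffsetIsScalable || B.OffsetIsScalable)
    return false;
  return A.Offset + WidthA <= B.Offset || B.Offset + WidthB <= A.Offset;
}
```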

Reviewers: rovka, efriedma, kristof.beyls

Reviewed By: efriedma

Subscribers: wuzish, kerbowa, MatzeB, arsenm, nemanjai, jvesely, nhaehnle, hiraditya, kbarton, javed.absar, asb, rbar, johnrusso, simoncook, sabuasal, niosHD, jrtc27, MaskRay, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, PkmX, jocewei, jsji, Jim, lenary, s.egerton, pzheng, sameer.abuasal, apazos, luismarques, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72758
2020-02-18 15:53:29 +00:00
Matt Arsenault 37c452a289 AMDGPU/GlobalISel: Adjust branch target when lowering loop intrinsic
This needs to steal the branch target like the other control flow
intrinsics.
2020-02-18 06:35:40 -08:00
Stanislav Mekhanoshin 8e760e1018 [TBLGEN] Inhibit generation of unneeded psets
Differential Revision: https://reviews.llvm.org/D74744
2020-02-17 15:38:08 -08:00
Matt Arsenault 5e8792453d AMDGPU/GlobalISel: Fix RegBankSelect for G_SHUFFLE_VECTOR 2020-02-17 15:11:25 -05:00
Matt Arsenault f742a28ae3 AMDGPU/GlobalISel: Custom lower 32-bit G_SDIV/G_SREM 2020-02-17 15:09:51 -05:00
Matt Arsenault e240b27d6d AMDGPU/GlobalISel: Allow arbitrary global values
Treat unknown address spaces as global
2020-02-17 11:32:28 -08:00
Matt Arsenault 54137bbaaf GlobalISel: Allow running localizer earlier
This required legal and regbankselected MIR for seemingly no
reason. For AMDGPU this wouldn't see legalized G_GLOBAL_VALUEs.
2020-02-17 11:24:06 -08:00
Matt Arsenault 96db12d507 AMDGPU/GlobalISel: Custom lower 32-bit G_UDIV/G_UREM
AMDGPUCodeGenPrepare expands this most of the time, but not always. We
will always at least need a fallback option here. This is the 3rd
implementation of the same expansion in the backend. Eventually I
would like to eliminate the IR expansion (and the DAG version
obviously).

Currently the new legalizer path produces a better result, since the
IR expansion results in extra operations which need to be combined
out. Notably, the IR expansion results in multiplies by 0.
2020-02-17 11:05:50 -08:00
Matt Arsenault 0e2eb357e0 GlobalISel: Extend narrowing to G_ASHR 2020-02-17 10:42:59 -08:00
Nikita Popov 98ed613ccc [IRBuilder] Avoid passing IRBuilder by value; NFC
I've fixed most of these before, but missed some occurrences
in targets I don't usually build.
2020-02-17 18:14:47 +01:00
Matt Arsenault 8550859535 GlobalISel: Extend shift narrowing to G_SHL 2020-02-17 09:13:37 -08:00