We can lower these as an or with the negative of the condition value. This appears to result in significantly less branch-y code on multiple common idioms (as seen in tests).
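For illustration only (not part of the patch), the identity being exploited is easy to check in plain C++, assuming a 0/1 condition and an all-ones select arm:
```
#include <cassert>
#include <cstdint>

// Branchy form: select between all-ones and x on a 0/1 condition.
static int32_t sel_branchy(bool c, int32_t x) { return c ? -1 : x; }

// Branchless form: or with the negated condition. -(int32_t)c is 0 when the
// condition is false and all-ones when it is true.
static int32_t sel_branchless(bool c, int32_t x) { return x | -(int32_t)c; }

int main() {
  const int32_t vals[] = {0, 1, -7, 0x7fffffff};
  for (int32_t x : vals) {
    assert(sel_branchy(true, x) == sel_branchless(true, x));
    assert(sel_branchy(false, x) == sel_branchless(false, x));
  }
}
```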
Differential Revision: https://reviews.llvm.org/D135316
This is a non-trivial property relied upon by D135396. I wrote this to convince myself it was true.
Differential Revision: https://reviews.llvm.org/D135403
Clang may optimize conditional tailcall blocks with the following layout:
cmp <condition>
je tailcall_target
ret
When retpoline is in place, indirect calls are converted into direct calls to a retpoline thunk. When these indirect calls are tail calls, they may be subject to the above-described optimization (there is no indirect JCC, but since the jump is now direct it can be made conditional). The above layout is non-ideal for the Linux kernel scenario because the branches into thunks may be patched back into indirect branches at runtime depending on the underlying CPU features, which would not be feasible if the binary is emitted with the optimized layout above.
Thus, prevent clang from emitting this layout if the CodeModel is Kernel.
Feature request from the respective kernel mailing list: https://lore.kernel.org/llvm/Yv3uI%2FMoJVctmBCh@worktop.programming.kicks-ass.net/
Reviewed By: nickdesaulniers, pengfei
Differential Revision: https://reviews.llvm.org/D134915
Optimization for using compressed beqz and bnez
If there is a pattern
```
br_cc val1 constval eq/neq place
select_cc val1 constval eq/neq trueval falseval
```
and constval does not fit in the compressed immediate format (6 bits) but fits in
the regular immediate format (12 bits), we can replace it with a non-compressed
subtract (addi with the negated constant) and a compressed c.beqz/c.bnez:
```
addi val val -constval
c.beqz val place
```
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D132839
This changes the default value used for mask policy from mask undisturbed to mask agnostic. In hardware, there may be a minor preference for ta/ma, but since this is only going to apply to instructions which don't use the mask policy bit, this is functionally mostly a nop. The main value is to make future changes to using MA when legal for masked instructions easier to review by reducing test churn.
The prior code was motivated by a desire to minimize state transitions between masked and unmasked code. This patch achieves the same effect using the demanded field logic (landed in afb45ff), and there are no regressions I spotted in the test diffs. (Given the size, I have only been able to skim.) I do want to call out that regressions are possible here; the demanded analysis only works on a block local scope right now, so e.g. a tight loop mixing masked and unmasked computation might see an extra vsetvli or two.
Differential Revision: https://reviews.llvm.org/D133803
Previously we would be unable to legalize V2S16 BUILD_VECTOR_TRUNC on GFX8 & below as the custom legalization was missing.
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D135149
The patch introduces reading kernel argument attributes from both
function-attached and module-level metadata during kernel argument lowering.
Two tests are added to show the improvement.
Differential Revision: https://reviews.llvm.org/D135106
Co-authored-by: Aleksandr Bezzubikov <zuban32s@gmail.com>
Co-authored-by: Michal Paszkowski <michal.paszkowski@outlook.com>
Co-authored-by: Andrey Tretyakov <andrey.tretyakov@mail.com>
Co-authored-by: Konrad Trifunovic <konrad.trifunovic@intel.com>
This patch allows the combines that fold extensions in binary operations
to have more than one use.
The approach here is pretty conservative: if all the users of an
extension can fold the extension, then the folding is done, otherwise we
don't fold.
This is the first step towards avoiding the one-use limitation.
As a result, we make a single fold/don't-fold decision for a whole web of
instructions. An instruction is part of the web of instructions as soon
as it consumes an extension that needs to be folded for all its users.
Because of how SDISel works a web of instructions can be visited over
and over. More precisely, if the folding happens, it happens for the
whole web and that's the end of it, but if the folding fails, the whole
web may be revisited when another member of the web is visited.
To avoid a compile time explosion in pathological cases, we bail out
earlier for webs that are bigger than a given threshold (arbitrarily set
at 18 for now.) This size can be changed using
`--riscv-lower-ext-max-web-size=<maxWebSize>`.
At the current time, I didn't see a better scheme for this, assuming we
want to stick with doing it in SDISel.
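As a rough, hypothetical sketch of the collect-the-web-with-a-threshold idea (the Node type and predicate below are invented for illustration; the real code operates on SDNodes in the RISC-V backend):
```
#include <cassert>
#include <cstddef>
#include <queue>
#include <set>
#include <vector>

// Hypothetical dataflow node for illustration; not the SDNode API.
struct Node {
  bool IsFoldableExtUser = true; // stands in for the real "can fold" check
  std::vector<Node *> Users;
};

// Gather the web of instructions connected through foldable extensions,
// giving up once the web grows past MaxWebSize (18 here, matching the
// patch's default threshold).
static bool collectWeb(Node *Root, std::set<Node *> &Web,
                       std::size_t MaxWebSize = 18) {
  std::queue<Node *> Worklist;
  Worklist.push(Root);
  while (!Worklist.empty()) {
    Node *N = Worklist.front();
    Worklist.pop();
    if (!Web.insert(N).second)
      continue;                 // already part of the web
    if (Web.size() > MaxWebSize)
      return false;             // bail out to bound compile time
    for (Node *U : N->Users) {
      if (!U->IsFoldableExtUser)
        return false;           // one user cannot fold: fold nothing
      Worklist.push(U);
    }
  }
  return true;                  // the whole web can fold
}

int main() {
  Node Ext, Add1, Add2;
  Ext.Users = {&Add1, &Add2};   // one extension feeding two users
  std::set<Node *> Web;
  assert(collectWeb(&Ext, Web) && Web.size() == 3);
}
```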
Differential Revision: https://reviews.llvm.org/D133739
This adds some missing tablegen patterns to handle trn1/trn2/zip1/zip2/uzp1/uzp2,
similar to the Arm handling in 5e1a9d319d, but via tablegen
patterns for the AArch64 backend.
This patch centralizes all the combines of add|sub|mul with extended
operands in one "framework".
The rationale for this change is to offer a one-stop-shop for all these
transformations so that, in the future, it is easier to make combine
decisions for a web of instructions (i.e., instructions connected
through s|zext operands).
Technically this patch is not NFC because the new version is more
powerful than the previous version.
In particular, it diverges in two cases:
- VWMULSU can now also be produced from `mul(splat, zext)`, whereas
previously only `mul(sext, splat)` were supported when `splat`s were
involved. (As demonstrated in rvv/fixed-vectors-vwmulsu.ll)
- VWSUB(U) can now also be produced from `sub(splat, ext)`, whereas
previously only `sub(ext, splat)` were supported when `splat`s were
involved. (As demonstrated in rvv/fixed-vectors-vwsub.ll)
If we wanted, we could block these transformations to make this
patch really NFC. For instance, we could do something similar to
`AllowSplatInVW_W`, which prevents the combines to form vw(add|sub)(u)_w
when the RHS is a splat.
Regarding the "framework" itself, the bulk of the patch is some
boilerplate code that abstracts away the actual extensions that are
present in the DAG. This allows us to handle `vwadd_w(ext a, b)` as if
it was a regular `add(ext a, ext b)`. Since the node `ext b` doesn't
actually exist in the DAG, we have a bunch of methods (all in the
NodeExtensionHelper class) that fake all that for us.
The other half of the change is around `CombineToTry` and
`CombineResult`. These helper structures respectively:
- Represent the kind of combines that can be applied to a node, and
- Store what needs to happen to do that combine.
This can be viewed as a two step approach:
- First, check if a pattern applies, and
- Second apply it.
The checks and the materialization of the combines are decoupled so that
in the future we can perform several checks and do all the related
applies in one go.
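A name-level sketch of that check/materialize split, using invented stand-ins for CombineToTry/CombineResult rather than the actual backend types:
```
#include <functional>
#include <optional>
#include <vector>

struct SDNodeStub {};                   // stand-in for the real node type

// What to do once a combine is known to apply (cf. CombineResult):
// the "apply" step is captured and deferred.
struct CombineResultSketch {
  std::function<void()> Materialize;
};

// A candidate combine: a check that either fails or yields a deferred
// result to materialize later (cf. CombineToTry).
using CombineToTrySketch =
    std::function<std::optional<CombineResultSketch>(SDNodeStub *)>;

// Step one: run the checks. Step two: only then materialize all the
// combines that applied, in one go.
static bool tryCombines(SDNodeStub *N,
                        const std::vector<CombineToTrySketch> &Candidates) {
  std::vector<CombineResultSketch> Planned;
  for (const auto &Candidate : Candidates)
    if (auto Res = Candidate(N))
      Planned.push_back(*Res);
  if (Planned.empty())
    return false;
  for (auto &Res : Planned)
    Res.Materialize();
  return true;
}

int main() {
  SDNodeStub N;
  CombineToTrySketch AlwaysApplies = [](SDNodeStub *) {
    return std::optional<CombineResultSketch>(CombineResultSketch{[] {}});
  };
  return tryCombines(&N, {AlwaysApplies}) ? 0 : 1;
}
```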
Differential Revision: https://reviews.llvm.org/D134703
For V_CMP_CLASS_F16_t16_e64 and V_CMPX_CLASS_F16_t16_e64,
https://reviews.llvm.org/D133723 changed the value type of src1 from i32 to i16.
These src1 operands are 16 bits, therefore need to be encoded as true16
operands. So the _e32 type was correctly set to VGPR_32_Lo128.
In _e64 form the operand class went from
VSrc_b32 to VSrc_b16. For some reason, we cannot encode inline literals for
VSrc_b16, see 5f5f566b26. In this phase of
the true16 implementation, VSrc_b16 and VSrc_b32 are still similar,
except for that quirk with inline literals. So set the operand class to regain
that functionality.
Reviewed By: dp, arsenm
Differential Revision: https://reviews.llvm.org/D134897
Setting up a lazy-save mechanism around calls is done during SelectionDAG
because calls to intrinsics may be expanded into an actual function call
(e.g. calls to @llvm.cos()), and maintaining an allowed-list in the SMEABI
pass is not feasible.
The approach for conditionally restoring the lazy-save based on the runtime
value of TPIDR2_EL0 is similar to how we handle conditional smstart/smstop.
We create a pseudo node which gets expanded into a conditional branch and
a call to __arm_tpidr2_restore(%tpidr2_object_ptr).
The lazy-save buffer and TPIDR2 block are only allocated once at the start
of the function. For each call, the TPIDR2 block is initialised, and at
the end of the call, a pseudo node (RestoreZA) is planted.
Patch by Sander de Smalen.
Differential Revision: https://reviews.llvm.org/D133900
If we have a constant aggregate, e.g., as an initializer, we usually
failed to extract the proper value/type from it. This patch provides the
size and offset information necessary to extract the right part of the
constant.
The new pass implements the following:
* Inserts code at the start of an arm_new_za function to
commit a lazy-save when the lazy-save mechanism is active.
* Adds a smstart intrinsic at the start of the function.
* Adds a smstop intrinsic at the end of the function.
Patch co-authored by kmclaughlin.
Differential Revision: https://reviews.llvm.org/D133896
This patch introduces a new AArch64 ISD node (OBSCURE_COPY) that can
be used when we want to prevent SVE object address calculations
from being rematerialised between a smstop/smstart and a call.
At the moment we use COPY to copy the frame index to a register,
which leads to problems because the "simple register coalescing"
pass understands the COPY instruction and attempts to rematerialise
an address calculation with 'addvl' between an smstop and a call.
When in streaming mode the 'addvl' instruction may have different
behaviour because the streaming SVE vector length is not guaranteed
to equal the normal SVE vector length.
The new ISD opcode OBSCURE_COPY gets lowered to a new pseudo
instruction also called OBSCURE_COPY. This ensures it cannot be
rematerialised and we expand this into a simple move very late in
the machine instruction pipeline.
A new test is added here:
CodeGen/AArch64/sme-streaming-interface.ll
Differential Revision: https://reviews.llvm.org/D134940
This makes sure that the instructions of the prologue match the
SEH opcodes.
Also remove a couple redundant cases of setting HasWinCFI; it was
already set unconditionally after the conditional cases.
Differential Revision: https://reviews.llvm.org/D135101
The patch selects VSELECT_VL/VP_MERGE_VL that uses VF(N)M(ACC|SAC) as its
true operand and the addend of the true operand as its false operand.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D135080
The allocation hints for copies of ACC registers assumed that we would only be
copying between VSRp and UACC registers. In reality it is also possible to copy
between UACC and ACC registers.
This patch adds a new case for the ACC copy to fix that issue.
Note that the test case added with this patch will hit an assert without the
fix.
Reviewed By: lei, amyk
Differential Revision: https://reviews.llvm.org/D134501
Once we are in the `Unreachable` state we want to disable type checking, but
we were unconditionally returning `true` here, which signals that an error
was encountered. Instead, unconditionally return `false` to signal no error.
Fixes: https://github.com/llvm/llvm-project/issues/56935
Differential Revision: https://reviews.llvm.org/D135195
As a result of making these legal, and tweaking the combine to allow vectors,
we generate vector G_SEXT_INREG during legalization.
The reason we want to make these legal in the first place is to allow for
more combine opportunities. Once those have been done, we can just lower them
back to shifts in the post-legalizer lowering.
This needs to be one commit otherwise we start causing tests to fail due to
incomplete support for selection etc.
This information is not preserved in MIR today. So this patch adds
information to RISCVMachineFunctionInfo when the vreg is created for
the argument.
Reviewed By: reames
Differential Revision: https://reviews.llvm.org/D134621
performCONDCombine removes the `and` with 0xff in patterns of the form
SUBS (and (add(..), 0xff), C)
under certain complex conditions. It doesn't come up often,
but in the lowering of usub.sat where the SUBS is both used as a
condition and as a value, the And is removed where it would only be
valid for the condition.
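For reference, a generic illustration of the saturating-subtract source idiom involved (the miscompiled case additionally has the and with 0xff coming from narrow types):
```
#include <cassert>

// Generic saturating-subtract idiom: the subtraction is used both as the
// branch condition (via the compare) and as the returned value.
static unsigned usub_sat(unsigned a, unsigned b) {
  return a > b ? a - b : 0;
}

int main() {
  assert(usub_sat(5, 3) == 2);
  assert(usub_sat(3, 5) == 0);
}
```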
Fixes #58109.
Differential Revision: https://reviews.llvm.org/D135043
AArch64LoadStoreOptimizer has a bunch of different guards to avoid
corrupting Windows SEH prologues/epilogues, but apparently we missed the
case of merging two instructions where the first instruction isn't part
of the epilogue, but the second instruction is.
Fixes issue discovered at https://reviews.llvm.org/D130049#3704064
Differential Revision: https://reviews.llvm.org/D134992
This code adds initial support for generating the HLSL resources
metadata entries. It has a lot of `FIXMEs` laying around because there
is a lot more work to do here, but this lays a solid groundwork and can
accurately handle some trivial cases.
I've filed a swath of issues covering the deficiencies here and left the
issues in comments so that we can easily follow them.
One big change to make sooner rather than later is to move some of this
code into a new libLLVMFrontendHLSL so that we can share it with the
Clang CodeGen layer.
Reviewed By: python3kgae
Differential Revision: https://reviews.llvm.org/D134682
There are a few changes mixed in here.
- Try to reuse the destination register from ADDI instead of always
  creating a virtual register. This way we lean on the register
  scavenger in fewer cases.
- Explicitly reuse the primary virtual register when possible. There's
  still a case where getVLENFactoredAmount and handling large
  fixed offsets can both create a secondary virtual register.
- Combine similar BuildMI calls by manipulating the Register variables.
There are still a couple early outs for ADDI, but overall I tried to
arrange the code into steps.
Reviewed By: reames
Differential Revision: https://reviews.llvm.org/D135009
The old code took two different paths based on whether there is
a scalable offset, but these two paths had some code in common.
The main difference between the two code paths was whether we needed
to create a GPR or not for the ADDI that gets created for RVVSpill.
If we had a scalable offset, the same GPR was used as the destination
for adding the scalable offset and the ADDI. To manage this, we now
cache the scratch register and reuse it if it has already been created.
This is a pre-patch for D135009.
Reviewed By: reames, frasercrmck
Differential Revision: https://reviews.llvm.org/D135092
The bitmask used to extract the bits assumed 16 bit elements and wasn't taking the size of the elements into account.
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D135156
One of the sources is the same size as the destination so that source
doesn't have an overlap with the destination register. By using the _TIED
form we avoid an early clobber constraint for that source.
This matches what was already done for intrinsics. ConvertToThreeAddress
will fix it if it can't stay tied.
We want to emit a masked setcc that preserves zeros in all of the bits
where the original mask is zero. To do this we need to pass the original
mask as the passthru operand as well. Otherwise, we'll use the mask agnostic
policy and replace the zeros with 1s on some CPUs.
Differential Revision: https://reviews.llvm.org/D135122
There are a few issues with the code we generate for atomic operations and the way we generate it:
- Hard coded CR0 for compares
- Order of operands for compares not conducive to
emitting compare-immediate or for CSE of compares
- Missing MachineMemOperand for st[bhwd]cx intrinsics
- Missing intrinsic properties for the same
- Unnecessary blocks with store conditional
instructions to clear reservation (which ends
up hindering performance)
- Move from CR instructions just to compare the
result of a store conditional with zero (even
though it is a record-form)
This patch aims to resolve all of those issues.
Differential revision: https://reviews.llvm.org/D134783
Noticed this falling back on CTMark at -Os (bullet).
Seems like we have no 1:1 matching for it, so match SDAG and just lower.
Add testcases for common legal cases as well.
Differential Revision: https://reviews.llvm.org/D135111
Before this patch the code in printScalarConstant was unable to handle
nested constant expressions like (gep (addrspacecast ptr)) and crashed
with:
LLVM ERROR: Unsupported expression in static initializer:
addrspacecast ([4 x i8] addrspace(1)* @ga to [4 x i8]*)
We can use lowerConstantForGV instead which is a customized version of
lowerConstant that supports generic() and nested expressions.
Differential Revision: https://reviews.llvm.org/D127878
These transforms were recently added (by me) in D134881. Looking at the code again, I realized we don't need the (and x, 0x1) portion of the pattern, we just need to know that the result of that sub-tree is either 0 or 1. Checking for this directly allows us to match slightly more broadly. The test changes are zext i1 arguments, but this could also kick in for e.g. shifts of high bits, or any other source of known bits.
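Two hypothetical examples of sub-trees whose result is provably 0 or 1 without an explicit (and x, 1):
```
#include <cassert>
#include <cstdint>

// A bool (zext i1) argument and a logical shift that leaves only the top bit:
// both are provably 0 or 1 with no explicit (and x, 1) in the IR.
static uint32_t from_bool(bool b) { return b; }
static uint32_t from_shift(uint32_t x) { return x >> 31; }

int main() {
  assert(from_bool(true) == 1 && from_bool(false) == 0);
  assert(from_shift(0x80000000u) == 1 && from_shift(0x7fffffffu) == 0);
}
```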
Differential Revision: https://reviews.llvm.org/D135081
This fixes a long standing cause of awful code generation when legalization creates
G_SEXT(G_FCMP(...)), for example due to promoting the condition of a vector G_SELECT.
Since on AArch64 vector compares sign-extend the condition value, there's no need
for this extra G_SEXT. Unfortunately by the time we get to post-legalization these
G_SEXTs have already been lowered into shifts, so this combine is a bit more
involved than I'd ideally like. Oh well.
Differential Revision: https://reviews.llvm.org/D135078
Vector support seems to work immediately, as long as we run the combine before
legalization (so the vector SELECTs don't get lowered) and the legalizer rules
are there to enable generation.
Differential Revision: https://reviews.llvm.org/D135047
If we cannot prove that the f16 operands of a buildvector are canonicalized, then we cannot lower into a V_PACK. In this scenario, we would previously lower into some combination of and(sdwa), shr, or. This patch allows for matching into V_PERM instead.
Change-Id: Ifa4a74fdb81ef44f22ba490c7fdf81ec8aebc945
We have a very common pattern of dispatching between BUILD_VECTOR and SPLAT_VECTOR creation repeated in many places in the code. Factor the pattern out into a utility function.
VMV_V_X_VL nodes should always have a passthru, a splat, and a VL.
We were sometimes missing the VL.
This went unnoticed because these cases were all selected into the
following node to form a .vx or .vi instruction. The ComplexPattern
that does this, doesn't check the VL operand. I've added an assert
to the ComplexPattern to catch if the operand is missing.
@qcolombet spotted some of these in D134703.
This captures the target shader model and pipeline stage into the DXIL
metadata for consumption by the DirectX runtime.
Reviewed By: python3kgae
Differential Revision: https://reviews.llvm.org/D134469
This is a refactor for another patch. For now we move the vreg
creation to the caller.
Reviewed By: frasercrmck
Differential Revision: https://reviews.llvm.org/D135008
Currently, AArch64 doesn't support vectorization for non-temporal loads because `isLegalNTLoad` is not implemented for the target.
This patch applies functionality similar to D73158, but for non-temporal loads.
Reviewed By: fhahn
Differential Revision: https://reviews.llvm.org/D131964
Before, the isPreLegalize() query in CombinerHelper only checked for the
presence of a LegalizerInfo object. This is problematic when we want to have
a combine actually check for legality in a pre-legalizer combine pass, since
if we pass a LegalizerInfo object to the constructor it causes the combines to
think that we're running *post* legalizer, which isn't true.
This change fixes it to instead check an explicit bool that is passed in to
signal whether the pass will be run before or after legalization.
Doing so exposed a bug in the extending loads combine, which only tried to check
the legality of candidate extending loads if a LegalizerInfo was present. Since we
only ran it pre-legalizer, and therefore with a null LegalizerInfo, the check never
actually ran. This change also fixes the legality checks to keep the tests passing.
Differential Revision: https://reviews.llvm.org/D135044
One of the sources is the same size as the destination so that source
doesn't have an overlap with the destination register. By using the _TIED
form we avoid an early clobber constraint for that source.
This matches what was already done for intrinsics. ConvertToThreeAddress
will fix it if it can't stay tied.
Change the costmodel to lower a = b * C where C = -(2^n - 2^m) to
lsl w8, w0, m
sub w0, w8, w0, lsl n
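A standalone sanity check of the arithmetic identity behind the lsl/sub sequence (illustration only, unrelated to the cost model code itself):
```
#include <cassert>
#include <cstdint>

int main() {
  // b * -(2^n - 2^m) == (b << m) - (b << n), matching the lsl/sub sequence
  // above. Shifts are done in unsigned arithmetic to avoid signed-shift UB;
  // two's complement wrap-around is assumed.
  const int n = 4, m = 1;                 // e.g. C = -(16 - 2) = -14
  const int32_t C = -((1 << n) - (1 << m));
  const int32_t vals[] = {1, 3, -5, 1000};
  for (int32_t b : vals) {
    uint32_t shifted = ((uint32_t)b << m) - ((uint32_t)b << n);
    assert(b * C == (int32_t)shifted);
  }
}
```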
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D134934
VALU use of an SGPR (pair) as mask followed by SALU write to the
same SGPR can cause incorrect execution of subsequent SALU reads
of the SGPR.
Reviewed By: foad, rampitec
Differential Revision: https://reviews.llvm.org/D134151
If True has a Chain result, the other operands of the vmerge may
depend on it through that Chain. We need to ensure it isn't a
predecessor of those operands.
Reviewed By: fakepaper56
Differential Revision: https://reviews.llvm.org/D134980
This stops reporting CostPerUse 1 for `R8`-`R15` and `XMM8`-`XMM31`.
This was previously done because instruction encodings require a REX
prefix when using them, resulting in longer instruction encodings. I
found that this regresses the quality of the register allocation as the
costs impose an ordering on eviction candidates. I also feel that there
is a bit of an impedance mismatch, as the actual costs occur when
encoding instructions using those registers, but the order of VReg
assignments is not primarily ordered by the number of Defs+Uses.
I did extensive measurements with the llvm-test-suite with SPEC2006 +
SPEC2017 included; internal services showed similar patterns. Generally
there are a lot of improvements but also a lot of regressions. But on
average the allocation quality seems to improve at the cost of a small
code size regression.
Results for measuring static and dynamic instruction counts:
Dynamic Counts (scaled by execution frequency) / Optimization Remarks:
Spills+FoldedSpills -5.6%
Reloads+FoldedReloads -4.2%
Copies -0.1%
Static / LLVM Statistics:
regalloc.NumSpills mean -1.6%, geomean -2.8%
regalloc.NumReloads mean -1.7%, geomean -3.1%
size..text mean +0.4%, geomean +0.4%
Static / LLVM Statistics:
regalloc.NumSpills mean -2.2%, geomean -3.1%
regalloc.NumReloads mean -2.6%, geomean -3.9%
size..text mean +0.6%, geomean +0.6%
Static / LLVM Statistics:
regalloc.NumSpills mean -3.0%
regalloc.NumReloads mean -3.3%
size..text mean +0.3%, geomean +0.3%
Differential Revision: https://reviews.llvm.org/D133902
When returning from a function with both SCS and PAC-RET enabled, we need to
authenticate the return address from the stack and then load from the SCS,
but this was happening in the reverse order when RETA[AB] were being used.
Fix it by disabling the use of RETA[AB] when SCS is enabled.
Fixes pr58072.
Differential Revision: https://reviews.llvm.org/D134931
1. Save typed pointer type for GlobalVariable/Function instead of the ObjectType.
This will allow using GlobalVariable/Function as a value.
2. Save target type for global ctors for Constant.
3. In DXILBitcodeWriter::getTypeID, check PointerMap first for Constant case.
Reviewed By: beanz
Differential Revision: https://reviews.llvm.org/D133283
Simplify and make the pair-wise relocation more precise. If either of
the symbol references are textual, the relocation must be delayed. If
the difference is across sections, delay it as well which partially
matches the behaviour of gas. We unfortunately do not handle the case
where the difference references a symbol that is not yet defined. In
such a case, we simply fail to resolve the difference, which should
hopefully not be too onerous (particularly since no other target
supports cross-section references and it is not clear if this was
intentional on the part of RISCV).
Differential Revision: https://reviews.llvm.org/D132262
Reviewed By: @MaskRay
This code is directly ported from the X86 backend which applies the same rewrite (along with several others). Planning on looking more closely at the other branchless variants from x86 to see if any are worth porting in future changes.
Motivation here is the coremark crc8 routine from https://github.com/eembc/coremark/blob/main/core_util.c#L165. This patch significantly reduces the number of unpredictable branches in the workload.
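As a loose illustration (not the CoreMark source and not necessarily the exact pattern this patch matches), this is the general flavor of branchless rewrite: a 0/1 condition becomes an all-zeros/all-ones mask.
```
#include <cassert>
#include <cstdint>

// Branchy form of a conditional XOR, as found in bitwise CRC inner loops.
static uint16_t step_branchy(uint16_t crc, bool bit) {
  return bit ? (uint16_t)(crc ^ 0x4002) : crc;
}

// Branchless rewrite: the 0/1 condition becomes an all-zeros/all-ones mask.
static uint16_t step_branchless(uint16_t crc, bool bit) {
  uint16_t mask = (uint16_t)-(uint16_t)bit;   // 0x0000 or 0xffff
  return (uint16_t)(crc ^ (0x4002 & mask));
}

int main() {
  const uint16_t crcs[] = {0, 0x1234, 0xffff};
  for (uint16_t crc : crcs) {
    assert(step_branchy(crc, true) == step_branchless(crc, true));
    assert(step_branchy(crc, false) == step_branchless(crc, false));
  }
}
```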
Differential Revision: https://reviews.llvm.org/D134881
Recognize more opcodes in the function.
Fixes some regressions introduced in D134857 for fdiv.f16 too.
Depends on D134857
Reviewed By: arsenm, foad
Differential Revision: https://reviews.llvm.org/D134862
Preparation patch for D134354 to make V2S16 G_BUILD_VECTOR legal.
Also removes RegBankInfo's scalarization of small BUILD_VECTORs,
replacing it with InstructionSelector logic instead.
This allows for V2S16 BUILD_VECTOR instructions to survive
all the way to ISel so we can select FMA/MAD_MIX instructions
in D134354.
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D134433
Using this helper makes working with neutral elements easier. Although I have
only found one case so far, I think it will see more use since so many
combines are related to neutral elements.
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D133866
This is a port of an existing optimization in AArch64 ISelLowering, handling
a case when the same input vector can be used for both ext inputs.
Differential Revision: https://reviews.llvm.org/D134891
In combineOr (X86ISelLowering.cpp) there is a DAG combine that rewrites
a "(0 - SetCC) | C" pattern into something simpler given that a LEA
can be used. Another requirement is that C has some specific value,
for example 1 or 7. When checking those requirements the code used a
32-bit unsigned variable to store the value of C. So for a 64-bit OR
this could miscompile in case any of the 32 most significant bits in
C were non zero.
This patch fixes the bug by using a large enough type for the
C value.
The faulty code seems to have been introduced by commit 9bceb8981d
(D131358).
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D134892
The decomposition of the constant 14 can be separated out from D132322.
Change the costmodel to lower a = b * C where C = 2^n - 2^m to
lsl w8, w0, n
sub w0, w8, w0, lsl m
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D134706
This change updates the costs to make constant pool loads match their actual cost, and adds the broadcast special case to avoid too many regressions. We really need more information about the constants being rematerialized, but this is an incremental improvement.
Differential Revision: https://reviews.llvm.org/D134746
k: A memory operand whose address is formed by a base register and
(optionally scaled) index register.
m: A memory operand whose address is formed by a base register and
offset that is suitable for use in instructions with the same
addressing mode as st.w and ld.w.
ZB: An address that is held in a general-purpose register. The offset
is zero.
ZC: A memory operand whose address is formed by a base register and
offset that is suitable for use in instructions with the same
addressing mode as ll.w and sc.w.
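A hypothetical usage sketch of the new constraints (GCC-style extended asm on a LoongArch target, loosely modeled on how such atomics are commonly written; exact operand handling is up to the compiler):
```
// "ZB" hands the asm a register-only address, which is what the am* atomic
// instructions expect. Illustration only, not part of the patch.
static inline int atomic_swap_w(int *p, int v) {
  int old;
  __asm__ __volatile__("amswap_db.w %0, %2, %1"
                       : "=&r"(old), "+ZB"(*p)
                       : "r"(v)
                       : "memory");
  return old;
}
```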
Differential Revision: https://reviews.llvm.org/D134638
This change sets
-amdgpu-assume-{external-call-stack-size | dynamic-stack-object-size}
options to zero by default for code object v5 and later. The runtime is
expected to adjust the scratch size if the amdhsa_uses_dynamic_stack bit
in the kernel descriptor is set.
Differential Revision: https://reviews.llvm.org/D128346
LoongArchELFObjectWriter::getRelocType checks IsPCRel for FK_Data_4
(for which we produce a R_LARCH_32_PCREL relocation if IsPCRel is set).
R_LARCH_32_PCREL is required for FDE relocation.
Differential Revision: https://reviews.llvm.org/D134715
The target selection DAG lowering information is needed for
SelectionDAGBuilder to lower a call like memcmp into an optimized
form.
Differential Revision: https://reviews.llvm.org/D134712
A few issues:
1. There was no legalizer test for G_PTRTOINT
2. Same clamping issue as in many other opcodes
3. AArch64 pointers can only be 64b, so in reality we always have to trunc or
extend with any size other than p0 anyway.
This seems to actually produce more correct selection for narrow types as well.
Differential Revision: https://reviews.llvm.org/D107588
This is intended to be equivalent to the s32 + s64 cases in
AArch64TargetLowering::LowerFCOPYSIGN.
Widen everything and then use G_BIT + a mask to handle the actual copysign
operation. Then, narrow back down to s32/s64.
I wasn't sure about what the best/most canonical INSERT_SUBREG-selectable
pattern is. I chose G_INSERT_VECTOR_ELT + an undef vector because it produces
reasonably okay codegen. (It doesn't produce INSERT_SUBREG right now though.)
If there's a better way to do this then I'm happy to change it.
We also have a couple codegen deficiencies with how we emit vector constants
right now. (We need a GISel equivalent to the tryAdvSIMDModImm64 stuff)
Differential Revision: https://reviews.llvm.org/D108725
This is necessary for custom-legalizing G_FCOPYSIGN.
This is equivalent to the BIT instruction (bitwise insert if true).
Add selection testcases for imported patterns.
Differential Revision: https://reviews.llvm.org/D108714
1. Save typed pointer type for GlobalVariable/Function instead of the ObjectType.
This will allow using GlobalVariable/Function as a value.
2. Save target type for global ctors for Constant.
3. In DXILBitcodeWriter::getTypeID, check PointerMap first for Constant case.
Reviewed By: beanz
Differential Revision: https://reviews.llvm.org/D133283
One of the conditions to flush the vmcnt counter in loop preheaders is: The loop
contains a use of a vgpr that is defined out of the loop. The code currently
checks if a waitcnt is needed by looking at the score of that vgpr in the score
brackets. This is not enough and may cause the generation of an unnecessary
vmcnt flush. This patch fixes that case.
Differential Revision: https://reviews.llvm.org/D130313
The association between kernel and struct is done by symbol name.
This doesn't work robustly for anonymous kernels as shown by the modified
test case.
An alternative association between function and struct can be constructed
if necessary, probably through metadata, but on the basis that we currently
miscompile anonymous kernels and that they are difficult to construct from
application code and difficult to call from the runtime, this patch makes
it a fatal error for now.
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D134741
Surprisingly these were getting legalized to something
zero initialized.
This fixes an infinite loop when combining some vector types.
Also fixes zero initializing some undef values.
SimplifyDemandedVectorElts / SimplifyDemandedBits are not checking
for the legality of the output undefs they are replacing unused
operations with. This resulted in turning vectors into undefs
that were later re-legalized back into zero vectors.
For gathers which load 8- and 16-bit data and then use that data
as an index, the index can be extended to 32 bits instead of
64 bits.
Differential Revision: https://reviews.llvm.org/D130692
This reverts commit 1c62af3e23.
The commit causes the test below to fail. Revert for now to get the bots
back to green.
Failing test:
llvm/test/Transforms/LoopVectorize/AArch64/masked-op-cost.ll
Currently, non-temporal loads over 256 bits are broken up inefficiently. For example, `v17i32` gets broken into 2 128-bit loads. It is better if we can use
256-bit loads instead.
Reviewed By: fhahn
Differential Revision: https://reviews.llvm.org/D133421
A kernel may have an associated struct for laying out LDS variables.
This patch puts that instance, if present, at a deterministic address by
allocating it at the same time as the module scope instance.
This is relatively likely to be where the instance was allocated anyway (~NFC)
but will allow later patches to calculate where a given field can be found,
which means a function which is only reachable from a single kernel will be
able to access a LDS variable with zero overhead. That will be particularly
helpful for applications that instantiate a function template containing LDS
variables once per kernel.
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D127052
fp16 and bf16 values can be used in GCC's inline assembly using the "t"
constraint, which means "VFP floating-point registers s0-s31" - fp16 and
bf16 values are stored in S registers too.
This change ensures that LLVM is compatible with GCC for programs that
use fp16 and the 't' constraint.
Fixes #57753
Differential Revision: https://reviews.llvm.org/D134553
Make MIMG NSA minimum addresses threshold an attribute that can
be set on a function or configured via command line.
This enables frontend tuning which allows increased NSA usage
where beneficial.
Reviewed By: foad
Differential Revision: https://reviews.llvm.org/D134780
Defines LoongArch registers for getExceptionPointerRegister() and
getExceptionSelectorRegister().
Differential Revision: https://reviews.llvm.org/D134709
These instructions are flag-setting, so the ptest is redundant. The
TableGen class wasn't setting the element size for the predicate, causing
the checks in AArch64InstrInfo::optimizePTestInstr to fail.
This is a purely NFC restructuring in advance of a change which actually exposes zero strides. This is mostly because I find this interface confusing each time I look at it.
I wasn't able to produce a test case for this because right now VWSUB is
only generated from VWSUB_W, and from there, to trigger the commutativity
bug, we would need a VWSUB where the splat value is on the LHS,
which is currently not matched.
Differential Revision: https://reviews.llvm.org/D134701
The motivation here is to enable a change I'm exploring in vectorizer to prefer base + offset_vector addressing for scatter/gather. The form the vectorizer would end up emitting would be a gep whose vector operand is an add of the scalar IV (splated) and the index vector. This change makes sure we can recognize that pattern as well as the current code structure. As a side effect, it might improve scatter/gathers from other sources.
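For illustration, a hypothetical source loop of that shape:
```
#include <cstddef>

// Hypothetical source loop: the vector form is a gather whose address is a
// single scalar base pointer plus a per-lane offset vector derived from the
// induction variable and the loaded indices.
float sum_indexed(const float *base, const int *idx, std::size_t n) {
  float sum = 0.0f;
  for (std::size_t i = 0; i < n; ++i)
    sum += base[i + idx[i]];
  return sum;
}
```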
Differential Revision: https://reviews.llvm.org/D134755
The previous commit 8b00b24f85 missed adding the `int_ceil` anchor for the
llvm.ceil.* section in LangRef.rst.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D134586
My original intent had been to reuse this for arithmetic instructions as well, but due to the availability of an immediate splat encoding there, we will need different heuristics. So specialize the existing code for the store case.
The initial table.get/set implementation would match and lower combinations
of GEP+load/store to table.get/set instructions. However, this is error
prone due to potential combinations of GEP+load/store we don't implement,
and due to load/store optimizations. By changing the code to use intrinsics, we
avoid both issues and simplify the code.
New builtins implemented:
* @llvm.wasm.table.get.externref
* @llvm.wasm.table.get.funcref
* @llvm.wasm.table.set.externref
* @llvm.wasm.table.set.funcref
Reviewed By: asb, tlively
Differential Revision: https://reviews.llvm.org/D134436
Add vp.maxnum and vp.minnum which are vector predicted intrinsics of llvm.maxnum
and llvm.minnum.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D134639
This feature implements support for making entries in the exception section
on XCOFF on the direct assembly path using the ".except" pseudo-op. It also
provides functionality to lower entries (comprised of language and reason
codes) into the exception section through the use of annotation metadata
attached to llvm.ppc.trap/trapd/tw/tdw intrinsics. Integrated assembler
support will be provided in another review. https://reviews.llvm.org/D133030
needs to merge first for LIT tests
Reviewed By: shchenz, RKSimon
Differential Revision: https://reviews.llvm.org/D132146
With opaque pointer support, the "ptr" type is introduced and thus BitCast is not necessary in some cases.
This work takes care of this change, and recognizes the new address patterns to do appropriate optimizations.
Reviewers: arsenm
Differential Revision: https://reviews.llvm.org/D134596
Disable FMAX/FMIN selection from select_cc in VEInstrInfo.td because of
the lack of NaN consideration. This patch removes such selection from
VEInstrInfo.td and lets llvm work on it in combineMinNumMaxNum.
Reviewed By: efocht
Differential Revision: https://reviews.llvm.org/D134595
Support smax/smin in VEInstrInfo.td. Remove obsolete patterns for
smax/smin. Add regression tests for smax/smin/umax/umin.
Reviewed By: efocht
Differential Revision: https://reviews.llvm.org/D134583
The `CodeGenPrepare` pass can sink bitwise `and` used by compare to
zero into the basic blocks where the users are. This operation is
guarded by lowering hook, which is disabled for ARM. In the ARM
architecture versions from v7-M up these two operations can be folded
into a `tst rN, #imm` instruction. Sinking the `and` can also enable
the cmov-to-bfi DAG combiner.
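A rough illustration (hypothetical source, not from the patch) of a mask whose compares against zero live in different blocks:
```
// The mask is computed once, but each compare against zero sits in a
// different block. Sinking the 'and' next to its users lets each block
// fold it into a single 'tst rN, #imm'.
int classify(unsigned x, int a, int b) {
  unsigned masked = x & 0x10;
  if (a > b)
    return masked ? a : b;   // use in one block
  return masked ? b : a;     // use in another block
}
```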
This patch fixes some benchmark regressions caused
by https://reviews.llvm.org/D129370, as well as scoring slightly better overall.
Reviewed By: dmgreen
Differential Revision: https://reviews.llvm.org/D134360
The commit D120104 enabled FeatureFuseAdrpAdd for -mcpu=generic,
allowing the linker to relax adrp;add pairs where possible. D132075
extended that to neoverse-n1; this patch extends it to all other Cortex
and Neoverse CPUs for the same reasons.
Differential Revision: https://reviews.llvm.org/D134521
This patch uses a unified interface for lowering GlobalAddress, ConstantPool,
BlockAddress and JumpTable.
This patch allows lowering addresses by using PC-relative addressing
for DSO-local symbols, and accessing the address through the global
offset table for DSO-preemptable symbols.
Remove the hardcoded `MinimumJumpTableEntries` for testing JumpTable lowering.
Also updated some test cases using ConstantPool, due to the addition of
relocation information.
Differential Revision: https://reviews.llvm.org/D134431
As the LoongArch port is largely modeled after RISCV it has the same
behavior of not accepting `generic` as a CPU name. For better
compatibility with consumers of LLVM (e.g. mesa) follow D121149's suit
and treat `generic` the same as an empty CPU name.
Differential Revision: https://reviews.llvm.org/D134412
As explained in D68559 the `fastcc` calling convention may be requested
under certain conditions, hence the need for supporting it. But unlike
RISCV we actually treat it exactly like ccc, without actually inventing
any performance hack right here. And CSKY does the same thing.
This is going to fix a few more test cases with native LoongArch builds.
Differential Revision: https://reviews.llvm.org/D134443
For the following pattern of IR (where %if terminates with a divergent branch),
divergence analysis will report %phi as uniform to help optimal code
generation.
```
%if
| \
| %then
| /
%endif: %phi = phi [ %uniform, %if ], [ %undef, %then ]
```
In the backend, %phi and %uniform will be assigned a scalar register.
But the %undef from %then will make the scalar register dead in %then.
This will likely cause the register to be overwritten in %then. To fix
the issue, we rewrite %undef as %uniform. For details, please refer to
the comment in AMDGPURewriteUndefForPHI.cpp. Currently there are no test
changes shown, but this is mandatory for later changes.
Reviewed by: sameerds
Differential Revision: https://reviews.llvm.org/D133840
This patch adds support for constraints `f`, `l`, `I`, `K` according
to [1]. The remaining constraints (`k`, `m`, `ZB`, `ZC`) will be added
later as they are a little more complex than the others.
f: A floating-point register (if available).
l: A signed 16-bit constant.
I: A signed 12-bit constant (for arithmetic instructions).
K: An unsigned 12-bit constant (for logic instructions).
For now, no need to support register alias (e.g. `$a0`) in llvm as
clang will correctly decode the usage of register name aliases into
their official names. And AFAIK, the not yet upstreamed `rustc` for
LoongArch will always use official register names (e.g. `$r4`).
[1] https://gcc.gnu.org/onlinedocs/gccint/Machine-Constraints.html
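A hypothetical usage sketch of the new constraints (GCC-style extended asm on a LoongArch target; illustration only):
```
// "f" picks a floating-point register, "I" accepts a signed 12-bit immediate.
static inline double fp_add(double a, double b) {
  double r;
  __asm__("fadd.d %0, %1, %2" : "=f"(r) : "f"(a), "f"(b));
  return r;
}

static inline int add_imm(int x) {
  int r;
  __asm__("addi.w %0, %1, %2" : "=r"(r) : "r"(x), "I"(100));
  return r;
}
```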
Differential Revision: https://reviews.llvm.org/D134157
This is a follow-on to https://reviews.llvm.org/D134073.
It renames a few fields to have consistent names, as well as renaming
operands to match the field names.
The encoder behavior is unchanged by this cleanup, but a few
instructions were previously being disassembled incorrectly, and have
been corrected by this change. All of the affected instructions were
missing disassembly tests, which are now added.
Differential Revision: https://reviews.llvm.org/D134185
This is a follow-on to https://reviews.llvm.org/D134073.
It renames a couple of fields to match their operands, as well as
introducing sub-operand names where required.
This change _only_ fixes the 'R600' half of the target, not the
'AMDGPU' half. Fixing the AMDGPU half will be a significantly more
difficult change (which I've not yet attempted.)
Differential Revision: https://reviews.llvm.org/D134078
This is a follow-on to https://reviews.llvm.org/D134073.
Lanai was almost clean: the only issue is that 'bit' behaves
differently than 'bits<1>', because only the 'bits' type preserves
unresolved references via 'keepUnsetBits()' in
TableGen/Record.h. Thus, use bits instead.
This issue _would_ have caused invalid instruction emission/decoding,
except that the PQ bits were being overridden after the fact by code in
'adjustPqBits' in MCTargetDesc/LanaiMCCodeEmitter.cpp, and
'PostOperandDecodeAdjust' in Disassembler/LanaiDisassembler.cpp.
Differential Revision: https://reviews.llvm.org/D134075
Fix regression from clang opencl test in builtins-fp-atomics-gfx90a.cl
test_flat_add_local_f64 caused by D130579
Revert a3becb333d.
Differential Revision: https://reviews.llvm.org/D134568
Very straightforward extension of the existing pattern matching pass to handle scalable types as well as fixed length types. The only extra bit beyond removing a bailout is recognizing stepvector.
Differential Revision: https://reviews.llvm.org/D134502
The code previously assumed fixed length vectors; make the relevant code conditional.
Having the lowering in place is necessary for an upcoming change to generalize scatter/gather matching to scalable vectors.
Differential Revision: https://reviews.llvm.org/D134489
Summary:
The existing undefined-bitfield-to-operand matching behavior is very
hard to understand, due to the combination of positional and named
matching. This can make it difficult to track down a bug in a target's
instruction definitions.
Over the last decade, folks have tried to work around this in various
ways, but it's time to finally ditch the positional matching. With
https://reviews.llvm.org/D131003, there are no longer cases that
_require_ positional matching, and it's time to start removing usage
and support for it.
Therefore: add a (default-false) option, and set it to true only in
those targets that require positional matching today. Subsequent
changes will start cleaning up additional in-tree targets.
NOTE TO OUT OF TREE TARGET MAINTAINERS:
If this change breaks your build, you may restore the previous
behavior simply by adding:
let useDeprecatedPositionallyEncodedOperands = 1;
to your target's InstrInfo tablegen definition. However, this is
temporary -- the option will be removed in the future.
If your target does not set 'decodePositionallyEncodedOperands', you
may thus start migrating to named operands. However, if you _do_
currently set that option, I recommend waiting until a subsequent
change lands, which adds decoder support for named sub-operands.
Differential Revision: https://reviews.llvm.org/D134073
These names can then be matched by name against 'bits' fields in a
record, to populate an instruction's encoding.
This does _not_ yet change DecoderEmitter to allow by-name matching of
sub-operands. Unlike the encoder, the decoder already defaulted to not
supporting positional matching, and backends had workarounds in place
for the missing decoding support.
Additionally, use this new capability to allow the ARM and AArch64
backends not to require any positional operand matching.
Differential Revision: https://reviews.llvm.org/D131003
The full complement of physical VGPRs for GFX11 is 50% more than GFX10.
Some subtargets have this, others stay the same as GFX10. This affects
occupancy calculations.
Differential Revision: https://reviews.llvm.org/D134522
This has the advantage of dealing with live EFLAGS, using LEA instead of
SUB if needed to avoid clobbering. That also respects feature "lea-sp".
We could allow unrolled stack probing from blocks with live-EFLAGS, if
canUseAsEpilogue learns when emitStackProbeInlineGeneric will be used.
Differential Revision: https://reviews.llvm.org/D134495
Remove manual selection for atomic fadd from global-isel.
Stop pre-isel translation to AtomicLoadFAdd/G_ATOMICRMW_FADD
which corresponds to llvm-ir's atomicrmw fadd instruction.
global and flat atomic fadd patterns changes:
Split rtn/no-rtn patterns
Add missing patterns or fix predicates
Remove atomicrmw patterns for v2f16 (atomic rmw doesn't support vectors).
Patterns now check the addrspace of the pointer; added patterns for the flat
intrinsic with a global addrspace pointer that select into the global atomic instruction.
buffer atomic fadd patterns changes:
Edit patterns to import into global-isel.
Remove gfx6/gfx7 _addr64 and _offset patterns.
Remove patterns that can't be reached (same pattern but different feature).
Differential Revision: https://reviews.llvm.org/D130579
Use same atomicrmw fadd expansion rules for gfx908, gfx940 and gfx11
as for gfx90a. Add missing globalisel legalizer support for flat
atomicrmw fadd f32 on gfx940 and gfx11.
Isel support for gfx11 will be added in D130579.
Differential Revision: https://reviews.llvm.org/D131560
Feature used by targets that have flat_atomic_add_f32 instruction
(gfx940 and gfx11). Remove isGFX940GFX11Plus.
Add hasFlatAtomicFaddF32Inst Subtarget check for codegen.
Differential Revision: https://reviews.llvm.org/D134532
The mul by constant costmodels handle power-of-2 constants, but not negated-power-of-2, despite the backends handling both.
This patch adds the OperandValueProperties::OP_NegatedPowerOf2 enum and wires it for use for basic mul cost analysis and SLP handling.
Fixes #50778
Differential Revision: https://reviews.llvm.org/D111968
This patch removes the AArch64 intrinsics svget/svset/svcreate from llvm.
It also implements the InstCombine for vector.extract that used to be in svget.
Depends on: D131547
Differential Revision: https://reviews.llvm.org/D131548
RISCV doesn't actually support a scaled form of indexed load and store. We previously handled this by forming the scaled SDNode, and then doing custom legalization during lowering. This patch instead adds a callback via TLI to prevent formation entirely.
This has two effects:
* First, the GEP gets expanded (and used). Instead of the shift being created with an SDLoc of the memory operation, it has the SDLoc of the GEP instruction. This avoids the scheduler perturbing IR order when there's no reason to.
* Second, we fix what appears to be a bug in index calculation with RV32. The rules for GEPs require index calculation be done in particular bitwidth, and it appears the custom legalization code got this wrong for the case where index type exceeds pointer width. (Or at least, I trust the generic GEP lowering to be correct a lot more.)
The DAGCombiner change to handle VPScatter/VPGather is technically separate, but is required to prevent a regression on those intrinsics.
Differential Revision: https://reviews.llvm.org/D134382
DXIL relies on a whole bunch of IR metadata constructs being populated
in the right shape. Rather than just hard coding or using complicated
arrangements of constant data strings, let's make first-class objects
that represent the metadata and manage reading and writing the
metadata from the module.
Reviewed By: python3kgae
Differential Revision: https://reviews.llvm.org/D134397
This patch uses structured bindings to simplify a couple of specific
cases when lowering RVV operations where we commonly declare two
SDValues and immediately 'tie' them to the mask and vector length.
There's also a couple places where we split vectors that structured
bindings make sense to use.
This patch tries to keep these sorts of changes minimal and to cases
where the returned types are commonly understood, rather than applying
this wholesale to the RISCV backend.
Reviewed By: reames
Differential Revision: https://reviews.llvm.org/D134442
Various bits of existing code assume the presence of one operand implies the presence of another. Add verifier rules to catch violations.
Differential Revision: https://reviews.llvm.org/D133810
The default fixed vector legalization is to unroll. The default
scalable vector legalization is to clamp in the FP domain. The
RVV vfcvt instructions have saturating behavior so we can use them
directly. The only difference is that the RVV instructions turn nan into
the max value, but the _SAT intrinsics want 0.
I'm only supporting 1 step of narrowing for now. I think we can
support more steps by using VNCLIP to saturate and narrow.
The only case that needs 2 steps of widening is f16->i64 which we can
do as f16->f32->i64.
Reviewed By: frasercrmck
Differential Revision: https://reviews.llvm.org/D134400
They're roughly ARMv8.6. This works in the .td file, but in
AArch64TargetParser.def, marking them v8.6 brings in support for the SM4
cryptographic extension, and we don't actually have that. So on the TargetParser side
they're marked as v8.5, with the extra features (BF16 and I8MM) added manually.
Finally, A16 supports the HCX extension in addition to v8.6. This has no
TargetParser implications.