We already have patterns for matching fadd(select(..., -0.0)),
but an upcoming patch will lead to patterns using +0.0 as the
identity instead of -0.0. I'm adding support for these patterns
now to avoid any regressions for MVE.
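For reference, a tiny standalone example of why -0.0 is the usual fadd identity
and +0.0 only works when the sign of zero can be ignored (e.g. under nsz):

    #include <cstdio>

    int main() {
      // x + (-0.0) == x for every x, including x == -0.0 ...
      std::printf("%g\n", -0.0 + -0.0);  // prints -0: identity preserved
      // ... but adding +0.0 loses the sign of a negative zero.
      std::printf("%g\n", -0.0 + 0.0);   // prints 0: not an identity for -0.0
      return 0;
    }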
Differential Revision: https://reviews.llvm.org/D127275
I can't remove the function just yet as it is used in the generated .inc files.
I would also like to provide a way to compare alignment with TypeSize since it came up a few times.
Differential Revision: https://reviews.llvm.org/D126910
The TC_RETURN/TCRETURNdi under Arm does not currently add the
register-mask operand when tail calling, which leads to registers
(like LR) not being 'used' by the return. This changes the code to
unconditionally set the register mask on the call, as opposed to
skipping it for tail calls.
I don't believe this will currently alter any codegen, but should glue
things together better post-frame lowering. It matches the AArch64 code
better.
Differential Revision: https://reviews.llvm.org/D125906
The reasoning for creating VSHLIMM/VSHRsIMM/VSHRuIMM nodes in a combine
- because matching i64 constants is difficult - does not apply for MVE,
as there are no v2i64 shifts. Delaying the creation of the nodes can
allow extra transforms on target-independent shl/shr.
Use the same enum as the other atomic instructions for consistency, in
preparation for addition of another strategy.
Introduce a new "Expand" option, since the store expansion does not
use cmpxchg. Alternatively, the existing CmpXChg strategy could be
renamed to Expand.
Regression from 2f497ec3; we should not try to generate ldrexd on
targets that don't have it.
Also, while I'm here, fix shouldExpandAtomicStoreInIR, for consistency.
That doesn't really have any practical effect, though. On Thumb targets
where we need to use __sync_* libcalls, there is no libcall for stores,
so SelectionDAG calls __sync_lock_test_and_set_8 anyway.
Without this patch, clang would generate calls to __sync_* routines on
targets where it does not make sense; we can't assume the routines exist
on unknown targets. Linux has special implementations of the routines
that work on old ARM targets; other targets have no such routines. In
general, atomic operations which aren't natively supported should go
through libatomic (__atomic_*) APIs, which can support arbitrary atomics
through locks.
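As a rough, self-contained illustration (not libatomic's actual implementation)
of how lock-based __atomic_* routines can support objects of any size:

    #include <cstddef>
    #include <cstring>
    #include <mutex>

    // One global lock; real implementations hash the object address into a
    // table of locks to reduce contention.
    static std::mutex g_atomic_lock;

    void sketch_atomic_load(const void *src, void *dst, std::size_t n) {
      std::lock_guard<std::mutex> guard(g_atomic_lock);
      std::memcpy(dst, src, n);
    }

    void sketch_atomic_store(void *dst, const void *src, std::size_t n) {
      std::lock_guard<std::mutex> guard(g_atomic_lock);
      std::memcpy(dst, src, n);
    }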
ARM targets older than v6, where this patch makes a difference, are rare
in practice, but not completely extinct. See, for example, discussion on
D116088.
This also affects Cortex-M0, but I don't think __sync_* routines
actually exist in any Cortex-M0 libraries. So in practice this just
leads to a slightly different linker error for those cases, I think.
Mechanically, this patch does the following:
- Ensures we run atomic expansion unconditionally; it never makes sense to
completely skip it.
- Fixes getMaxAtomicSizeInBitsSupported() so it returns an appropriate
number on all ARM subtargets.
- Fixes shouldExpandAtomicRMWInIR() and shouldExpandAtomicCmpXchgInIR() to
correctly handle subtargets that don't have atomic instructions.
Differential Revision: https://reviews.llvm.org/D120026
Reland of D120906 after sanitizer failures.
This patch aims to reduce a lot of the boilerplate around adding new subtarget
features. From the SubtargetFeatures tablegen definitions, a series of calls to
the macro GET_SUBTARGETINFO_MACRO are generated in
ARM/AArch64GenSubtargetInfo.inc. ARMSubtarget/AArch64Subtarget can then use
this macro to define bool members and the corresponding getter methods.
Some naming inconsistencies have been fixed to allow this, and one unused
member removed.
This implementation only applies to boolean members; in future both BitVector
and enum members could also be generated.
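A simplified, self-contained sketch of the pattern: the real invocations live in
the generated ARM/AArch64GenSubtargetInfo.inc and are pulled in with #include,
and the feature names below are just examples.

    // Stand-in for what tablegen emits, one invocation per SubtargetFeature.
    #define SKETCH_FEATURES(X)                        \
      X(HasMVEIntegerOps, false, hasMVEIntegerOps)    \
      X(HasLOB, false, hasLOB)

    class SubtargetSketch {
      // Expand once to define the bool members...
    #define GET_SUBTARGETINFO_MACRO(ATTRIBUTE, DEFAULT, GETTER) \
      bool ATTRIBUTE = DEFAULT;
      SKETCH_FEATURES(GET_SUBTARGETINFO_MACRO)
    #undef GET_SUBTARGETINFO_MACRO

    public:
      // ...and once more to define the corresponding getters.
    #define GET_SUBTARGETINFO_MACRO(ATTRIBUTE, DEFAULT, GETTER) \
      bool GETTER() const { return ATTRIBUTE; }
      SKETCH_FEATURES(GET_SUBTARGETINFO_MACRO)
    #undef GET_SUBTARGETINFO_MACRO
    };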
Differential Revision: https://reviews.llvm.org/D120906
While experimenting with different algorithms for std::sort
I discovered that combine-vmovdrr.ll fails if this sort is not
stable.
I suspect that the test is too stringent in its check--the resultant
code looks functionally identical to me under both stable and unstable
sorting, but a generic fix is quite a bit more difficult to implement.
Thanks to scw@google.com for finding the proper fix.
Differential Revision: https://reviews.llvm.org/D121870
Includes verifier changes checking the elementtype, clang codegen
changes to emit the elementtype, and ISel changes using the elementtype.
Basically the same as D120527.
Reviewed By: #opaque-pointers, nikic
Differential Revision: https://reviews.llvm.org/D121847
Internally to DAGCombiner the SDValues were passed by non-const
reference despite not being modified. They were then passed by
const reference to TLI.
This patch passes them by value which is consistent with the vast
majority of code.
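A minimal sketch of the style change (illustrative types, not the actual LLVM
signatures): SDValue is just a node pointer plus a result index, so passing it
by value is as cheap as a reference and avoids the mismatched by-ref/const-ref
signatures.

    // Hypothetical stand-in for SDValue: a pointer and an unsigned.
    struct ValueHandle {
      void *Node = nullptr;
      unsigned ResNo = 0;
    };

    // Before (sketch): passed by non-const reference despite not being modified.
    bool sameNodeByRef(ValueHandle &A, ValueHandle &B) { return A.Node == B.Node; }

    // After (sketch): passed by value, matching how most code handles it.
    bool sameNodeByValue(ValueHandle A, ValueHandle B) { return A.Node == B.Node; }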
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D120420
We have some recognition of SSAT and USAT from SELECT_CC at the moment.
This extends the matching to SMIN/SMAX which can help catch more cases,
either from min/max being the canonical form in instcombine or from some
expanded nodes like fp_to_si_sat.
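For reference, the kind of smax/smin clamp this now matches (a sketch; on a
target with SSAT the pair can be selected as something like ssat r0, #8, r0):

    #include <algorithm>
    #include <cstdint>

    // Clamp a 32-bit value into the signed 8-bit range using smax then smin.
    int32_t sat_to_i8_range(int32_t x) {
      return std::min(std::max(x, -128), 127);
    }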
Differential Revision: https://reviews.llvm.org/D119819
This patch is the first in a series of patches to upstream support for Apple's DriverKit. Once complete, it will allow targeting the DriverKit platform with Clang similarly to AppleClang.
This code was originally authored by JF Bastien.
Differential Revision: https://reviews.llvm.org/D118046
This function was added in D49837, but no setOperationAction call
was added with it. The code is equivalent to what is done by the
default ExpandIntRes_ABS implementation when ADDCARRY is supported.
Test case added to verify this. There was some existing coverage
from Thumb2 MVE tests, but they started from vectors.
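A sketch of the expansion in question, written at the C++ level; the 64-bit add
is what gets split into ADDCARRY pairs on a 32-bit target.

    #include <cstdint>

    int64_t abs_expanded(int64_t x) {
      // Arithmetic shift: 0 for non-negative x, all-ones for negative x
      // (arithmetic right shift on the targets discussed here).
      uint64_t sign = static_cast<uint64_t>(x >> 63);
      uint64_t ux = static_cast<uint64_t>(x);
      // (x + sign) ^ sign == |x| (modulo the usual INT64_MIN wraparound).
      return static_cast<int64_t>((ux + sign) ^ sign);
    }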
This way they get lowered through the ARMISD::BUILD_VECTOR, which can
produce more efficient D register moves.
Also helps D115653 not get stuck in a loop.
None of the external users actually touch these (they're purely used internally down the recursive call) - it's trivial to add another wrapper if anything ever does want to track known elements.
I have updated TargetLowering::isConstTrueVal to also consider
SPLAT_VECTOR nodes with constant integer operands. This allows the
optimisation to also work for targets that support scalable vectors.
Differential Revision: https://reviews.llvm.org/D117210
This reverts commit ef82063207.
- It conflicts with the existing llvm::size in STLExtras, which will now
never be called.
- Calling it without llvm:: breaks C++17 compat
Instead use either Type::getPointerElementType() or
Type::getNonOpaquePointerElementType().
This is part of D117885, in preparation for deprecating the API.
A 'CMOV 1, 0, CC, %cpsr, Cmp' is the same as a 'CSINC 0, 0, CC, Cmp',
and can be treated the same in IsCMPZCSINC added in D114013. This allows
us to remove the unnecessary CMOV in the same way that we could remove a
CSINC.
Differential Revision: https://reviews.llvm.org/D115188
This makes use of the code in D114013 to fold away unnecessary
CMPZ/CSINC starting from a CMOV, in a similar way to how we fold away
CSINV/CSINC/etc
Differential Revision: https://reviews.llvm.org/D115185
Some MVE instructions have qr variants that take a Q and R register,
splatting the R register for each lane. This is usually handled fine for
standard splats as we sink the splat into the loop and combine the
resulting dup into the qr instruction. It does not work for constant
splats though, as we generate a vmovimm or constant pool load instead.
This intercepts that, generating a vdup of the constant instead where we
can turn the result into a qr instruction variant.
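A minimal example of the kind of loop this targets (assuming it is vectorized
for MVE): the splatted constant 10 can now stay as a scalar in a GPR so the add
uses the qr form (e.g. vadd.i32 q0, q0, rN) instead of materializing a vector
constant.

    // The splatted constant 10 is the interesting part here.
    void add_ten(int *p, int n) {
      for (int i = 0; i < n; ++i)
        p[i] += 10;
    }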
Differential Revision: https://reviews.llvm.org/D115242
We can be in situations where `and 1` / zext nodes will not yet have been
simplified away, preventing us from detecting removable cmpz/csinc patterns.
This peeks through those nodes, allowing us to simplify more code.
Differential Revision: https://reviews.llvm.org/D115176
This patch implements PAC return address signing for armv8-m. It roughly
accomplishes the following things:
- PAC and AUT instructions are generated.
- They're part of the stack frame setup, so that shrink-wrapping can move them
inwards to cover only part of a function.
- The auth code generated by PAC is saved across subroutine calls so that AUT
can find it again to check.
- PAC is emitted before stacking registers (so that the SP it signs is the one
on function entry).
- The new pseudo-register ra_auth_code is mentioned in the DWARF frame data.
- With CMSE also in use: PAC is emitted before stacking FPCXTNS, and AUT
validates the corresponding value of SP.
- Correct unwind information is emitted when PAC is replaced by PACBTI.
- Tail calls are handled correctly.
Some notes:
We make the assembler accept the `.save {ra_auth_code}` directive that is
emitted by the compiler when it saves a register that contains a
return address authentication code.
For EHABI we need to have the `FrameSetup` flag on the instruction and
handle the `t2PACBTI` opcode (identically to `t2PAC`), so we can emit
`.save {ra_auth_code}`, instead of `.save {r12}`.
For PACBTI-M, the instruction which computes the return address PAC should use
the SP value before the adjustment for the argument register save area (used for
variadic functions and when a parameter is split between stack and registers),
but at the same time it should come after the instruction that saves FPCXT when
compiling a CMSE entry function.
This patch moves the varargs SP adjustment after the FPCXT save (they are never
enabled at the same time), so that in a follow-up patch the handling of the `PAC`
instruction can be placed between them.
The epilogue emission code is adjusted in a similar manner.
PACBTI-M code generation should not emit any instructions for the v6-m and
v8-m.base architectures or for A- and R-class cores. A diagnostic for such cases
will be handled separately in a future ticket.
A note on tail calls:
If the called function has four arguments that occupy registers `r0`-`r3`, the
only option for holding the function pointer itself is `r12`, but this register
is used to keep the PAC during the function prologue/epilogue, which would
clobber the function pointer.
When we do the tail call we would need the five registers (`r0`-`r3` and `r12`)
to hold six values - the four function arguments, the function pointer and the
PAC - which is obviously impossible.
One option would be to authenticate the return address before all callee-saved
registers are restored, so we have a scratch register to temporarily keep the
value of `r12`. The issue with this approach is that it violates a fundamental
invariant that PAC is computed using CFA as a modifier. It would also mean using
separate instructions to pop `lr` and the rest of the callee-saved registers,
which would offset the advantages of doing a tail call.
Instead, this patch disables indirect tail calls when the called function takes
four or more arguments and return address signing and authentication is enabled
for the caller function, conservatively assuming the caller function would spill
LR.
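A small example of the indirect tail call being disabled here (assuming return
address signing is enabled for the caller, e.g. via -mbranch-protection=pac-ret):
the four arguments need `r0`-`r3`, the function pointer needs a register of its
own, and `r12` is already holding the PAC.

    // With PAC in use, this cannot be emitted as an indirect tail call and
    // becomes a normal call followed by a return instead.
    int call4(int (*fn)(int, int, int, int), int a, int b, int c, int d) {
      return fn(a, b, c, d);
    }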
This patch is part of a series that adds support for the PACBTI-M extension of
the Armv8.1-M architecture, as detailed here:
https://community.arm.com/arm-community-blogs/b/architectures-and-processors-blog/posts/armv8-1-m-pointer-authentication-and-branch-target-identification-extension
The PACBTI-M specification can be found in the Armv8-M Architecture Reference
Manual:
https://developer.arm.com/documentation/ddi0553/latest
The following people contributed to this patch:
- Momchil Velikov
- Ties Stuij
Reviewed By: danielkiss
Differential Revision: https://reviews.llvm.org/D112429
MVE can treat v16i1, v8i1, v4i1 and v2i1 as different views onto the
same 16-bit VPR.P0 register, with v2i1 holding two 8-bit values for the
two halves. This was never treated as a legal type in llvm in the past
as there are not many 64-bit instructions and no 64-bit compares. There
are a few instructions that could use it though, notably a VSELECT (as
it can handle any size using the underlying v16i8 VPSEL), AND/OR/XOR for
similar reasons, some gathers/scatters, long multiplies and VCTP64
instructions.
This patch goes through and makes v2i1 a legal type, handling all the
cases that fall out of that. It also makes VSELECT legal for v2i64 as a
side benefit. A lot of the codegen changes as a result, usually in a way
that is a little better or a little worse, but still expensive. Costs
can change a little too in the process, again in a way that expensive
things remain expensive. A lot of the tests that changed are mainly to
ensure correctness - the code can hopefully be improved in the future
where it comes up in practice.
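A small example of a v2i64 select that benefits (a sketch using clang's vector
extensions; with MVE the compare result is a v2i1-style predicate and the blend
can become a single VPSEL):

    #include <cstdint>

    typedef int64_t v2i64 __attribute__((vector_size(16)));

    // Elementwise compare produces an all-ones/all-zeros mask per lane;
    // the mask-driven blend is the VSELECT.
    v2i64 select_max(v2i64 a, v2i64 b) {
      v2i64 mask = a > b;
      return (a & mask) | (b & ~mask);
    }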
The intrinsics currently keep using the v4i1 they previously used to
emulate a v2i1. This will be changed in a follow-up patch, but this one
was already large enough.
Differential Revision: https://reviews.llvm.org/D114449
This adds a fold in DAGCombine to create fptosi_sat from sequences for
smin(smax(fptosi(x))) nodes, where the min/max saturate the output of
the fp convert to a specific bitwidth (say INT_MIN and INT_MAX). Because
it is dealing with smin (or smax) in the DAG, they may currently be ISD::SMIN,
ISD::SETCC/ISD::SELECT, ISD::VSELECT or ISD::SELECT_CC nodes, which need
to be handled similarly.
A shouldConvertFpToSat method was added to control when converting may
be profitable. The original fptosi has less strict semantics
than fptosi_sat, with fewer values that need to produce defined
behaviour.
This especially helps on ARM/AArch64 where the vcvt instructions
naturally saturate the result.
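As a concrete instance of the pattern (a sketch; f is assumed to be in i32
range so the initial convert is well-defined at the source level):

    #include <algorithm>
    #include <cstdint>

    // fptosi followed by smax/smin clamping to the full i16 range; this is
    // the shape that can now become a single saturating convert.
    int16_t to_i16_sat(float f) {
      int32_t i = static_cast<int32_t>(f);
      i = std::max(i, static_cast<int32_t>(INT16_MIN));
      i = std::min(i, static_cast<int32_t>(INT16_MAX));
      return static_cast<int16_t>(i);
    }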
Differential Revision: https://reviews.llvm.org/D111976
It causes builds to fail with this assert:
llvm/include/llvm/ADT/APInt.h:990:
bool llvm::APInt::operator==(const llvm::APInt &) const:
Assertion `BitWidth == RHS.BitWidth && "Comparison requires equal bit widths"' failed.
See comment on the code review.
> This adds a fold in DAGCombine to create fptosi_sat from sequences for
> smin(smax(fptosi(x))) nodes, where the min/max saturate the output of
> the fp convert to a specific bitwidth (say INT_MIN and INT_MAX). Because
> it is dealing with smin(/smax) in DAG they may currently be ISD::SMIN,
> ISD::SETCC/ISD::SELECT, ISD::VSELECT or ISD::SELECT_CC nodes which need
> to be handled similarly.
>
> A shouldConvertFpToSat method was added to control when converting may
> be profitable. The original fptosi will have a less strict semantics
> than the fptosisat, with less values that need to produce defined
> behaviour.
>
> This especially helps on ARM/AArch64 where the vcvt instructions
> naturally saturate the result.
>
> Differential Revision: https://reviews.llvm.org/D111976
This reverts commit 52ff3b0093.
This adds a fold in DAGCombine to create fptosi_sat from sequences for
smin(smax(fptosi(x))) nodes, where the min/max saturate the output of
the fp convert to a specific bitwidth (say INT_MIN and INT_MAX). Because
it is dealing with smin (or smax) in the DAG, they may currently be ISD::SMIN,
ISD::SETCC/ISD::SELECT, ISD::VSELECT or ISD::SELECT_CC nodes, which need
to be handled similarly.
A shouldConvertFpToSat method was added to control when converting may
be profitable. The original fptosi has less strict semantics
than fptosi_sat, with fewer values that need to produce defined
behaviour.
This especially helps on ARM/AArch64 where the vcvt instructions
naturally saturate the result.
Differential Revision: https://reviews.llvm.org/D111976
Codegen from expanded vector operations can end up with unnecessary
CMPZ/CSINC, of the form:
CSXYZ A, B, C1 (CMPZ (CSINC 0, 0, C2, D), 0)
These can be converted to remove the CMPZ and CSINC, depending on the
condition.
if C1==NE -> CSXYZ A, B, C2, D
if C1==EQ -> CSXYZ A, B, NOT(C2), D
Differential Revision: https://reviews.llvm.org/D114013
This converts a vector SETCC([0,1,2,..], splat(n), ult) to vctp n, which
can take fewer instructions and avoids the need for constant pool loads.
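Lane by lane, both forms compute the same mask; a 4-lane sketch of what
vctp32 n and the setcc form both produce:

    #include <array>

    // Lane i is active exactly when i < n, i.e. [0,1,2,3] ult splat(n).
    std::array<bool, 4> lane_mask(unsigned n) {
      std::array<bool, 4> m{};
      for (unsigned i = 0; i < 4; ++i)
        m[i] = i < n;
      return m;
    }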
Differential Revision: https://reviews.llvm.org/D114177
If we only demand bits from one half of a rotation pattern, see if we can simplify to a logical shift.
For the ARM/AArch64 rev16/32 patterns, I had to drop a fold to prevent srl(bswap()) -> rotr(bswap) -> srl(bswap) infinite loops. I've replaced this with an isel PatFrag which should do the same task.
Reapplied with a fix for the AArch64 rev patterns to match the ARM fix.
https://alive2.llvm.org/ce/z/iroxki (rol -> shl by amt iff demanded bits has at least as many trailing zeros as the shift amount)
https://alive2.llvm.org/ce/z/4ez_U- (ror -> shl by revamt iff demanded bits has at least as many trailing zeros as the reverse shift amount)
https://alive2.llvm.org/ce/z/cD7dR- (ror -> lshr by amt iff demanded bits has at least as many leading zeros as the shift amount)
https://alive2.llvm.org/ce/z/_XGHtQ (rol -> lshr by revamt iff demanded bits has at least as many leading zeros as the reverse shift amount)
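A worked instance of the first rule above: with a rotate amount of 8 and a
demanded-bits mask of 0xFFFFFF00 (8 trailing zeros), the wrapped-around low
byte is never observed, so the rotate can be replaced by a plain shift.

    #include <cstdint>

    uint32_t rot_then_mask(uint32_t x) {
      uint32_t rot = (x << 8) | (x >> 24);  // rotl(x, 8)
      return rot & 0xFFFFFF00u;
    }

    uint32_t shl_then_mask(uint32_t x) {
      return (x << 8) & 0xFFFFFF00u;        // same result for every x
    }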
Differential Revision: https://reviews.llvm.org/D114354