Prefer `MCFixupKind` where possible and add getTargetKind() to
convert to `unsigned` when needed rather than scattering cast
operators around the place.
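As a rough sketch of the pattern (stand-in enum and class here, not the actual MCFixup interface):
```
#include <cassert>

// Stand-in for MCFixupKind: generic kinds first, then a range reserved for
// target-specific kinds starting at FirstTargetFixupKind.
enum FixupKind : unsigned {
  FK_Data_4,
  FK_Data_8,
  FirstTargetFixupKind = 128,
};

class Fixup {
  FixupKind Kind;

public:
  explicit Fixup(FixupKind K) : Kind(K) {}

  // Preferred: work with the typed kind.
  FixupKind getKind() const { return Kind; }

  // Escape hatch for target code that indexes tables by the raw value,
  // instead of writing static_cast<unsigned>(F.getKind()) at every use.
  unsigned getTargetKind() const { return Kind; }
};

int main() {
  Fixup F(static_cast<FixupKind>(FirstTargetFixupKind + 3));
  unsigned Idx = F.getTargetKind() - FirstTargetFixupKind; // table index
  assert(Idx == 3);
  return 0;
}
```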
Differential Revision: https://reviews.llvm.org/D59890
llvm-svn: 369720
Local symbols in the indirect symbol table contain the value
`INDIRECT_SYMBOL_LOCAL` and the corresponding __pointers entry must
contain the address of the target.
In r349060, I added support for local symbols in the indirect symbol
table, which was checking if the symbol `isDefined` && `!isExternal` to
determine if the symbol is local or not.
It turns out that `isDefined` will return false if the use of the
symbol comes before its definition, and we'll again generate `.long 0`,
which refers to the symbol at address 0x0.
Instead of doing that, use GlobalValue::hasLocalLinkage() to check if
the symbol is local.
Differential Revision: https://reviews.llvm.org/D66563
llvm-svn: 369671
The patch introduces a MakeLibCallOptions struct, as suggested by @efriedma on D65497.
The struct contains argument flags which will be passed to the makeLibCall function.
The patch should not have any functional changes.
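As a rough illustration of the options-struct pattern (hypothetical names; the real MakeLibCallOptions carries different flags):
```
#include <cstdio>

// Hypothetical sketch: a small struct replaces a growing list of bool
// parameters, with chainable setters so call sites stay self-describing.
struct LibCallOptions {
  bool IsSigned = false;
  bool DoesNotReturn = false;

  LibCallOptions &setIsSigned(bool V = true) { IsSigned = V; return *this; }
  LibCallOptions &setDoesNotReturn(bool V = true) { DoesNotReturn = V; return *this; }
};

void makeLibCall(const char *Name, const LibCallOptions &Opts) {
  std::printf("%s: signed=%d noreturn=%d\n", Name, Opts.IsSigned, Opts.DoesNotReturn);
}

int main() {
  // Adding a new flag later only touches the struct, not every caller.
  makeLibCall("__divdi3", LibCallOptions().setIsSigned());
  return 0;
}
```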
Differential Revision: https://reviews.llvm.org/D65795
llvm-svn: 369622
Add a GlobalISel equivalent for the logical_imm32_XFORM and logical_imm64_XFORM
SDNodeXForms in AArch64InstrFormats.td.
- Add select-logical-imm.mir, which contains tests for each imported pattern.
- Update select-pr32733.mir and select-scalar-shift-imm.mir, since they now
select instructions of this form.
Differential Revision: https://reviews.llvm.org/D66162
llvm-svn: 369465
This adds GlobalISel equivalents for the following from AArch64InstrFormats:
- arith_shifted_reg32
- arith_shifted_reg64
And partial support for
- logical_shifted_reg32
- logical_shifted_reg64
The only thing missing for the logical cases is support for rotates. Other than
the missing support, the transformation is identical for the arithmetic shifted
register and the logical shifted register.
Lots of tests here:
- Add select-arith-shifted-reg.mir to show that we correctly select add and
sub instructions which use this pattern.
- Add select-logical-shifted-reg.mir to cover patterns which are not shared
between the arithmetic and logical cases.
- Update addsub-shifted.ll to show that we correctly fold shifts into
adds/subs.
- Update eon.ll to show that we can select the eon instruction by folding xors.
Differential Revision: https://reviews.llvm.org/D66163
llvm-svn: 369460
Summary:
Simplify the API using Optional<> and address comments in
https://reviews.llvm.org/D66165
Reviewers: vitalybuka
Subscribers: hiraditya, llvm-commits, ostannard, pcc
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66317
llvm-svn: 369300
Summary:
MTE provides instructions to update memory tags and data at the same
time. This change makes use of those to generate more compact code for
stack variable tagging + initialization.
We collect memory store and memset instructions following an alloca or a
lifetime.start call, and replace them with the corresponding MTE
intrinsics. Since the intrinsics work on 16-byte aligned chunks, the
stored values are combined as necessary.
Reviewers: pcc, vitalybuka, ostannard
Subscribers: srhines, javed.absar, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66167
llvm-svn: 369297
Recommit with fixes for mac builders.
Summary:
AArch64InstrInfo::getInstSizeInBytes is incorrectly treating meta
instructions (e.g. CFI_INSTRUCTION) as normal instructions and
giving them a size of 4.
This results in branch relaxation calculating block sizes wrong.
Branch relaxation also considers alignment and thus a single
mistake can result in later blocks being incorrectly sized even
when they themselves do not contain meta instructions.
The net result is we might not relax a branch whose destination is
not within range.
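A minimal standalone sketch of the kind of check this implies, using a stand-in instruction type rather than the real MachineInstr/AArch64InstrInfo API (and ignoring pseudo-instructions that expand to more than one machine instruction):
```
#include <cassert>

// Stand-in for a machine instruction; real code would query MachineInstr.
struct Inst {
  bool IsMeta; // e.g. CFI_INSTRUCTION, DBG_VALUE, IMPLICIT_DEF
};

// Ordinary AArch64 instructions occupy 4 bytes, but meta instructions emit
// no machine code at all. Counting them as 4 bytes inflates block sizes and
// can make branch relaxation skip a branch that is really out of range.
unsigned getInstSizeInBytes(const Inst &I) {
  return I.IsMeta ? 0 : 4;
}

int main() {
  assert(getInstSizeInBytes({/*IsMeta=*/true}) == 0);
  assert(getInstSizeInBytes({/*IsMeta=*/false}) == 4);
  return 0;
}
```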
Reviewers: nickdesaulniers, peter.smith
Reviewed By: peter.smith
Subscribers: javed.absar, kristof.beyls, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66337
> llvm-svn: 369111
llvm-svn: 369133
Changes:
There was a condition for `!NeedsFrameRecord` missing in the assert. The
assert in question has changed to:
+ assert((!RPI.isPaired() || !NeedsFrameRecord || RPI.Reg2 != AArch64::FP ||
+         RPI.Reg1 == AArch64::LR) &&
+        "FrameRecord must be allocated together with LR");
This addresses PR43016.
llvm-svn: 369122
Summary:
AArch64InstrInfo::getInstSizeInBytes is incorrectly treating meta
instructions (e.g. CFI_INSTRUCTION) as normal instructions and
giving them a size of 4.
This results in branch relaxation calculating block sizes wrong.
Branch relaxation also considers alignment and thus a single
mistake can result in later blocks being incorrectly sized even
when they themselves do not contain meta instructions.
The net result is we might not relax a branch whose destination is
not within range.
Reviewers: nickdesaulniers, peter.smith
Reviewed By: peter.smith
Subscribers: javed.absar, kristof.beyls, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66337
llvm-svn: 369111
Now that we've moved to C++14, we no longer need the llvm::make_unique
implementation from STLExtras.h. This patch is a mechanical replacement
of (hopefully) all the llvm::make_unique instances across the monorepo.
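For illustration, a typical call site changes like this (generic example, not a specific LLVM call site):
```
#include <memory>
#include <string>

int main() {
  // Before: auto S = llvm::make_unique<std::string>("hello");
  // After, using the C++14 standard library directly:
  auto S = std::make_unique<std::string>("hello");
  return S->size() == 5 ? 0 : 1;
}
```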
llvm-svn: 369013
This patch changes the location of the frame-record (FP, LR) to the
bottom of the callee-saved area. According to the AAPCS the location of
the frame-record within the stackframe is unspecified (section 5.2.3 The
Frame Pointer), so the compiler should be free to choose a different
location.
The reason for changing the location of the frame-record is to prepare
the frame for allocating an SVE area below the callee-saves. This way the
compiler can use the VL-scaled addressing modes to directly access SVE
objects from the frame-pointer.
        :       :
        | stack |         | stack |
        | args  |         | args  |
        +-------+         +-------+
        |  x30  |         |  x19  |
        |  x29  |         |  x20  |
  FP -> |- - - -|         |  x21  |
        |  x19  |   ==>   |  x22  |
        |  x20  |         |- - - -|
        |  x21  |         |  x30  |
        |  x22  |         |  x29  |
        +-------+         +-------+ <- FP
        |///////|         |///////| // realignment gap
        |- - - -|         |- - - -|
        |spills/|         |spills/|
        | locals|         | locals|
  SP -> +-------+         +-------+ <- SP
Things to point out:
- The algorithm to find a paired register should be prevented from
accidentally pairing some callee-saved register with LR that is not
FP, since they should always be paired together when the frame
has a frame-record.
- For Darwin platforms the location of the frame-record is unchanged,
since the unwind encoding does not allow for encoding this position
dynamically and other tools currently depend on the former layout.
Reviewers: efriedma, rovka, rengolin, thegameg, greened, t.p.northover
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D65653
llvm-svn: 368987
Implement this single atomic load instruction so that we can compile stack
protector code.
Differential Revision: https://reviews.llvm.org/D66245
llvm-svn: 368923
Currently shufflemasks get emitted as any other constant, and you end
up with a bunch of virtual registers of G_CONSTANT with a
G_BUILD_VECTOR. The AArch64 selector then asserts on anything that
doesn't fit this pattern. This isn't an ideal representation; moving away
from it avoids legalizing the mask and leaves fewer opportunities for a
representational error.
Rather than invent a new shuffle mask operand type, similar to what
ShuffleVectorSDNode does, just track the original IR Constant mask
operand. I don't completely like the idea of adding another link to
the IR, but MIR is already quite dependent on IR constants,
and this will allow sharing the shuffle mask utility functions with
the IR.
llvm-svn: 368704
Currently we can't keep any state in the selector object that we get from the
subtarget. As a result we have to plumb all our variables through
multiple functions. This change makes it non-const and adds a virtual init()
method to allow further state to be captured for each target.
AArch64 makes use of this in this patch to cache a call to hasFnAttribute()
which is expensive to call, and is used on each selection of G_BRCOND.
Differential Revision: https://reviews.llvm.org/D65984
llvm-svn: 368652
Summary:
Targets often have instructions that can sign-extend certain cases faster
than the equivalent shift-left/arithmetic-shift-right. Such cases can be
identified by matching a shift-left/shift-right pair but there are some
issues with this in the context of combines. For example, suppose you can
sign-extend 8-bit up to 32-bit with a target extend instruction.
%1:_(s32) = G_SHL %0:_(s32), i32 24 # (I've inlined the G_CONSTANT for brevity)
%2:_(s32) = G_ASHR %1:_(s32), i32 24
%3:_(s32) = G_ASHR %2:_(s32), i32 1
would reasonably combine to:
%1:_(s32) = G_SHL %0:_(s32), i32 24
%2:_(s32) = G_ASHR %1:_(s32), i32 25
which no longer matches the special case. If your shifts and extend are
equal cost, this would break even as a pair of shifts but if your shift is
more expensive than the extend then it's cheaper as:
%2:_(s32) = G_SEXT_INREG %0:_(s32), i32 8
%3:_(s32) = G_ASHR %2:_(s32), i32 1
It's possible to match the shift-pair in ISel and emit an extend and ashr.
However, this is far from the only way to break this shift pair and make
it hard to match the extends. Another example is that with the right
known-zeros, this:
%1:_(s32) = G_SHL %0:_(s32), i32 24
%2:_(s32) = G_ASHR %1:_(s32), i32 24
%3:_(s32) = G_MUL %2:_(s32), i32 2
can become:
%1:_(s32) = G_SHL %0:_(s32), i32 24
%2:_(s32) = G_ASHR %1:_(s32), i32 23
All upstream targets have been configured to lower it to the current
G_SHL, G_ASHR pair but will likely want to make it legal in some cases to
handle their faster cases.
To follow-up: Provide a way to legalize based on the constant. At the
moment, I'm thinking that the best way to achieve this is to provide the
MI in LegalityQuery but that opens the door to breaking core principles
of the legalizer (legality is not context sensitive). That said, it's
worth noting that looking at other instructions and acting on that
information doesn't violate this principle in itself. It's only a
violation if, at the end of legalization, a pass that checks legality
without being able to see the context would say an instruction might not be
legal. That's a fairly subtle distinction so to give a concrete example,
saying %2 in:
%1 = G_CONSTANT 16
%2 = G_SEXT_INREG %0, %1
is legal is in violation of that principle if the legality of %2 depends
on %1 being constant and/or being 16. However, legalizing to either:
%2 = G_SEXT_INREG %0, 16
or:
%1 = G_CONSTANT 16
%2:_(s32) = G_SHL %0, %1
%3:_(s32) = G_ASHR %2, %1
depending on whether %1 is constant and 16 does not violate that principle
since both outputs are genuinely legal.
Reviewers: bogner, aditya_nandakumar, volkan, aemerson, paquette, arsenm
Subscribers: sdardis, jvesely, wdng, nhaehnle, rovka, kristof.beyls, javed.absar, hiraditya, jrtc27, atanasyan, Petar.Avramovic, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61289
llvm-svn: 368487
All TLS access on Darwin is in the "general dynamic" form where we call
a function to resolve the address, so implementation is pretty simple.
llvm-svn: 368418
I've now needed to add an extra parameter to this call twice recently. Not only
is the signature getting extremely unwieldy, but just updating all of the
callsites and implementations is a pain. Putting the parameters in a struct
sidesteps both issues.
llvm-svn: 368408
Summary:
The A64 assembly language does not require the '#' character to
introduce constant immediate operands. Avoid the '#' since the AArch64
asm parser does not accept '#' before the lane specifier and rejects the
following:
__asm__ ("fmla v2.4s, v0.4s, v1.s[%0]" :: "I"(0x1))
Fix a test to not expect the '#' and add a new test case with the above
asm.
Fixes: https://github.com/android-ndk/ndk/issues/1036
Reviewers: peter.smith, kristof.beyls
Subscribers: javed.absar, hiraditya, llvm-commits, srhines
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65550
llvm-svn: 368320
Prevent the LoadStoreOptimizer from pairing any load/store instructions with
instructions from the prologue/epilogue if the CFI information has encoded the
operations as separate instructions. This would otherwise lead to a mismatch
of the actual prologue size from the size as recorded in the Windows CFI.
Reviewers: efriedma, mstorsjo, ssijaric
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D65817
llvm-svn: 368164
https://reviews.llvm.org/D65698
This adds a KnownBits analysis pass for GISel. This was done as a
pass (compared to static functions) so that we can add other features
such as caching queries (within a pass and across passes) in the future.
This patch only adds the basic pass boilerplate and implements a lazy,
non-caching known-bits implementation (ported from SelectionDAG). I've
also hooked up the AArch64PreLegalizerCombiner pass to use this - there
should be no compile time regression as the analysis is lazy.
llvm-svn: 368065
Summary:
This is tested by D61289 but has been pulled into a separate patch at
a reviewers request.
Reviewers: bogner, aditya_nandakumar, volkan, aemerson, paquette, arsenm, rovka
Reviewed By: arsenm
Subscribers: javed.absar, hiraditya, wdng, kristof.beyls, Petar.Avramovic, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61321
llvm-svn: 368063
Refactor emitFrameOffset to accept a StackOffset struct as its offset argument.
This method currently only supports byte offsets (MVT::i8) but will be extended
in a later patch to support scalable offsets (MVT::nxv1i8) as well.
Reviewers: thegameg, rovka, t.p.northover, efriedma, greened
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D61436
llvm-svn: 368049
To support spilling/filling of scalable vectors we need a more generic
representation of a stack offset than simply 'int'.
For this we introduce the StackOffset struct, which comprises multiple
offsets sized by their respective MVTs. Byte-offsets will thus be a simple
tuple such as { offset, MVT::i8 }. Adding two byte-offsets will result in a
byte offset { offsetA + offsetB, MVT::i8 }. When two offsets have different
types, we can canonicalise them to use the same MVT, as long as their
runtime sizes are guaranteed to have the same size-ratio as they would have
at compile-time.
When we have both scalable- and fixed-size objects on the stack, we can
create an offset that is:
({ offset_fixed, MVT::i8 } + { offset_scalable, MVT::nxv1i8 })
The struct also contains a getForFrameOffset() method that is specific to
AArch64 and decomposes the frame-offset to be used directly in instructions
that operate on the stack or index into the stack.
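A much-simplified standalone sketch of the idea (fixed and scalable parts tracked as plain byte counts here; the real class works in terms of MVTs and the decomposition is AArch64-specific):
```
#include <cassert>

// Simplified model: a stack offset is a fixed byte count plus a multiple of
// the scalable vector length in bytes, which is unknown at compile time.
struct StackOffset {
  long long FixedBytes = 0;    // the { offset, MVT::i8 } part
  long long ScalableBytes = 0; // the { offset, MVT::nxv1i8 } part

  StackOffset operator+(const StackOffset &RHS) const {
    return {FixedBytes + RHS.FixedBytes, ScalableBytes + RHS.ScalableBytes};
  }

  // Decompose for addressing: the scalable part is later scaled by the
  // runtime vector length, the fixed part is a plain byte offset.
  void getForFrameOffset(long long &ByteOff, long long &VLScaledOff) const {
    ByteOff = FixedBytes;
    VLScaledOff = ScalableBytes;
  }
};

int main() {
  StackOffset CalleeSaves{16, 0}; // fixed-size area
  StackOffset SVEArea{0, 32};     // scalable area
  StackOffset Total = CalleeSaves + SVEArea;
  long long Fixed = 0, Scaled = 0;
  Total.getForFrameOffset(Fixed, Scaled);
  assert(Fixed == 16 && Scaled == 32);
  return 0;
}
```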
Note: This patch adds StackOffset as an AArch64-only concept, but we would
like to make this a generic concept/struct that is supported by all
interfaces that take or return stack offsets (currently as 'int'). Since
that would be a bigger change that is currently pending on D32530 landing,
we thought it makes sense to first show/prove the concept in the AArch64
target before proposing to roll this out further.
Reviewers: thegameg, rovka, t.p.northover, efriedma, greened
Reviewed By: rovka, greened
Differential Revision: https://reviews.llvm.org/D61435
llvm-svn: 368024
FastISel has done this since the initial arm64 port was upstreamed, so
it seems there are no issues with doing this at -O0 for very small memcpys.
Gives a 0.2% geomean code size improvement on CTMark.
Differential Revision: https://reviews.llvm.org/D65758
llvm-svn: 367919
Patch D56593 by @courbet results in calls to `bcmp()` in some cases, should
the target support it, unless `TTI::MemCmpExpansionOptions()`
is overridden by the target.
In a proprietary benchmark we see a performance drop of about 12% on PNG
compression before this patch, though it passes all tests.
This patch mirrors X86 for AArch64 and initializes
`TTI::MemCmpExpansionOptions()` to then expand calls to `bcmp()` when
appropriate. No tuning of the parameters was performed, but, at this point,
it's enough to recover the performance drop above.
This problem also exists on ARM. Once a consensus is reached for AArch64, we
can work to fix ARM as well.
Authors:
- Evandro Menezes (@evandro) <e.menezes@samsung.com>
- Brian Rzycki (@brzycki) <b.rzycki@samsung.com>
Differential revision: https://reviews.llvm.org/D64805
llvm-svn: 367898
Summary:
The Arm Neoverse N1 Software Optimization Guide [1], Section "4.8 Branch
instruction alignment" states:
"Consider aligning subroutine entry points and branch targets to 32B
boundaries, within the bounds of the code-density requirements of the
program."
This patch sets the preferred function alignment on Neoverse N1 to 2^4=16B.
This was already the case in some of the latest Cortex-A CPUs. Benchmarking
in previous Cortex-A CPUs suggested that 16B alignment is already better
than the default. See commit d04ee305.
The reason we don't set it to 32B right now (as the optimisation guide
suggests) is that this will impact code size and perhaps the instruction
cache performance. Therefore we need benchmark numbers first.
I have also added testing for A75 and A76 that we were missing.
[1] https://developer.arm.com/docs/swog309707/latest
Reviewers: fhahn, greened, samparker, dmgreen
Reviewed By: dmgreen
Subscribers: dmgreen, javed.absar, kristof.beyls, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65654
llvm-svn: 367894
Summary:
This patch adds initial support for the SVE calling convention such that
SVE types can be passed as arguments and return values to/from a
subroutine.
The SVE AAPCS states [1]:
z0-z7 are used to pass scalable vector arguments to a subroutine,
and to return scalable vector results from a function. If a
subroutine takes arguments in scalable vector or predicate
registers, or if it is a function that returns results in such
registers, it must ensure that the entire contents of z8-z23 are
preserved across the call. In other cases it need only preserve the
low 64 bits of z8-z15, as described in §5.1.2.
p0-p3 are used to pass scalable predicate arguments to a subroutine
and to return scalable predicate results from a function. If a
subroutine takes arguments in scalable vector or predicate
registers, or if it is a function that returns results in these
registers, it must ensure that p4-p15 are preserved across the call.
In other cases it need not preserve any scalable predicate register
contents.
SVE predicate and data registers are passed indirectly (i.e. spilled to the
stack with their address passed instead) if they exceed the registers used for argument
passing defined by the PCS referenced above. Until SVE stack support is merged
we can't spill SVE registers to the stack, so currently an llvm_unreachable is
used where we will eventually handle this.
[1] https://static.docs.arm.com/100986/0000/100986_0000.pdf
Reviewed By: ostannard
Differential Revision: https://reviews.llvm.org/D65448
llvm-svn: 367859
We process 2 elements at a time and expect the number of elements to be
even. Similar to D60690.
Reviewers: dmgreen, samparker, t.p.northover
Reviewed By: dmgreen
Differential Revision: https://reviews.llvm.org/D65400
llvm-svn: 367831
Summary:
This patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790
Reviewers: courbet, jfb, jakehehrlich
Reviewed By: jfb
Subscribers: wuzish, jholewinski, arsenm, dschuff, nemanjai, jvesely, nhaehnle, javed.absar, sbc100, jgravelle-google, hiraditya, aheejin, kbarton, asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, jrtc27, MaskRay, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, dexonsmith, PkmX, jocewei, jsji, s.egerton, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65514
llvm-svn: 367828
This is an old commit that exposed a bug in the GISel importer, which caused
non-truncating stores to be selected for truncating store patterns. Now that
the importer has been fixed in r367737, this can go back in.
llvm-svn: 367739
These cases can come up when the extending loads combiner doesn't combine a
zext(load) to a zextload op, due to some other operation being in between, which
then gets simplified at a later stage.
Differential Revision: https://reviews.llvm.org/D65360
llvm-svn: 367723
Add an equivalent ComplexRendererFns function for SelectNegArithImmed. This
allows us to select immediate adds of -1 by turning them into subtracts.
Update select-binop.mir to show that the pattern works.
Differential Revision: https://reviews.llvm.org/D65460
llvm-svn: 367700
Fix an issue where the compiler still allocates an emergency spill slot even
though it already decided to spill an extra callee-save register to use
as a scratch register.
Reviewers: gberry, thegameg, mstorsjo, t.p.northover
Reviewed By: thegameg
Differential Revision: https://reviews.llvm.org/D65504
llvm-svn: 367540
Summary:
This will make it possible to improve IPRA by taking into account
register usage in indirect calls.
NFC yet; this is just laying the groundwork to start building
up patches to take advantage of the information for improved register
allocation.
Reviewers: aditya_nandakumar, volkan, qcolombet, arsenm, rovka, aemerson, paquette
Subscribers: sdardis, wdng, javed.absar, hiraditya, jrtc27, atanasyan, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65488
llvm-svn: 367476
This feature instructs the backend to allow locally defined global variable
addresses to contain a pointer tag in bits 56-63 that will be ignored by
the hardware (i.e. TBI), but may be used by an instrumentation pass such
as HWASAN. It works by adding a MOVK instruction to the regular ADRP/ADD
sequence that sets bits 48-63 to the corresponding bits of the global, with
the linker bounds check disabled on the ADRP instruction to prevent the tag
from causing a link failure.
This implementation of the feature omits the MOVK when loading from or storing
to a global, which is sufficient for TBI. If the same approach is extended
to MTE, assuming that 0 is not configured as a catch-all tag, we will most
likely also need the MOVK in this case in order to avoid a tag mismatch.
Differential Revision: https://reviews.llvm.org/D65364
llvm-svn: 367475
This makes the field wider than MachineOperand::SubReg_TargetFlags so that
we don't end up silently truncating any higher bits. We should still catch
any bits truncated from the MachineOperand field as a consequence of the
assertion in MachineOperand::setTargetFlags().
Differential Revision: https://reviews.llvm.org/D65465
llvm-svn: 367474
Summary:
* Loads and stores in SVE2 are gather/scatter, not contiguous; fix this by
renaming the multiclasses to reflect it and also update comments.
* Remove aliases from load/store multiclasses that reflect the behaviour
of the original form.
* Fix bug in scatter store implementation, vector list should be used as
input, not output.
Reviewed By: sdesmalen
Differential Revision: https://reviews.llvm.org/D65392
llvm-svn: 367398
Summary:
* Clarify comment with SVE2 for predicated shifts and move next to other
shift instructions.
* Clarify comments for various instructions.
* Move FCVTX instruction next to other fp conversions.
* Move FLOGB next to other fp instructions and fix its description.
* Remove "cons" from non-constructive multiclass for bitwise shift-right
and accumulate instructions.
Reviewed By: sdesmalen
Differential Revision: https://reviews.llvm.org/D65390
llvm-svn: 367396
Summary:
This patch fixes a bug in the following instructions that should have been
implemented as destructive. A destructive instruction is an instruction where
one of the source registers also acts as the destination register. Therefore,
the contents of the source register, when the instruction begins execution, are
replaced by the result of the instruction when the instruction completes
execution [1]:
* SRI/SLI
* EORBT/EORTB
* TBX
* Narrowing top instructions
* FP convert precision instructions
These changes are non-functional from the assembler/disassembler point of view
but are necessary for correct codegen.
[1] https://static.docs.arm.com/ddi0584/ae/DDI0584A_e_SVE_supp_armv8A.pdf
Reviewed By: sdesmalen
Differential Revision: https://reviews.llvm.org/D65389
llvm-svn: 367394
Add partial instruction selection for intrinsics like this:
```
declare i32 @llvm.aarch64.stlxr(i64, i32*)
```
(This only handles the case where a G_ZEXT is feeding the intrinsic.)
Also make sure that the added store instruction actually has the memory op from
the original G_STORE.
Update select-stlxr-intrin.mir and arm64-ldxr-stxr.ll.
Differential Revision: https://reviews.llvm.org/D65355
llvm-svn: 367163
Summary:
The bitperm feature flag is now prefixed with SVE2, as it is for all other SVE2
extensions.
Patch by Maciej Gabka.
Reviewers: sdesmalen, rovka, chill, SjoerdMeijer, rengolin
Reviewed By: SjoerdMeijer, rengolin
Differential Revision: https://reviews.llvm.org/D65327
llvm-svn: 367124
Embedded Trace Extension and Trace Buffer Extension are optional
future architecture extensions.
(cf. https://developer.arm.com/architectures/cpu-architecture/a-profile/exploration-tools)
Their system registers are documented here:
https://developer.arm.com/docs/ddi0601/a
ETE shares register names with ETM. One exception is the ETE
TRCEXTINSELR0 register, which has the same encoding as the ETM
TRCEXTINSELR register (but different semantics). This patch treats
them as aliases: the assembler will accept both names, emitting
identical encoding, and the disassembler will keep disassembling
to TRCEXTINSELR.
Differential Revision: https://reviews.llvm.org/D63707
llvm-svn: 367093
... for the vector forms of `{SQ,UQ,}{INC,DEC}P` instructions. Also continue
supporting the existing behaviour of not requiring an explicit size
specifier. The preferred disassembly is *with* the specifier.
This is implemented by redefining instruction forms to require vector predicates
with explicit size and adding aliases, which allow a predicate with no size.
Differential Revision: https://reviews.llvm.org/D65145
llvm-svn: 367019
Before, we weren't able to select things like this for G_GEP:
add x0, x8, #8
And instead we'd materialize the 8.
This teaches GISel to do that. It gives some considerable code size savings
on 252.eon -- about 4%!
Differential Revision: https://reviews.llvm.org/D65248
llvm-svn: 366959
Throughout the LegalizerInfo we currently assume that the target
has NEON and FP target features available. Fixing it will require a refactor of
the whole thing, so until then make sure we fall back.
Works around PR42734
Differential Revision: https://reviews.llvm.org/D65244
llvm-svn: 366957
Summary:
This was originally reported in D62818.
https://rise4fun.com/Alive/oPH
InstCombine does the opposite fold, in hope that `C l>>/<< Y` expression
will be hoisted out of a loop if `Y` is invariant and `X` is not.
But as can be seen from the diffs here, if it didn't get hoisted,
the produced assembly is almost universally worse.
Much like with my recent "hoist add/sub by/from const" patches,
we should get an almost universal win if we hoist the constant;
there is almost always an "and/test by imm" instruction,
but "shift of imm" not so much, so we may avoid having to
materialize the immediate, and thus need one less register.
And since we now shift not by constant, but by something else,
the live-range of that something else may reduce.
Special care needs to be applied not to disturb the x86 `BT` / hexagon `tstbit`
instruction patterns, and to avoid getting into an endless combine loop.
Reviewers: RKSimon, efriedma, t.p.northover, craig.topper, spatel, arsenm
Reviewed By: spatel
Subscribers: hiraditya, MaskRay, wuzish, xbolva00, nikic, nemanjai, jvesely, wdng, nhaehnle, javed.absar, tpr, kristof.beyls, jsji, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D62871
llvm-svn: 366955
If we have a G_MUL, and either the LHS or the RHS of that mul is the legal
shift value for a load addressing mode, we can fold it into the load.
This gives some code size savings on some SPEC tests. The best are around 2%
on 300.twolf and 3% on 254.gap.
Differential Revision: https://reviews.llvm.org/D65173
llvm-svn: 366954
This introduces a new family of combiner helper routines that re-use the
target specific cost model from SelectionDAG, and generate inline implementations
of the memcpy family of intrinsics.
The combines are only enabled at optimization levels higher than -O0, and give
very substantial performance improvements.
Differential Revision: https://reviews.llvm.org/D65167
llvm-svn: 366951
Fix an off-by-one error which made us not look at the last element of the
zero vector. This caused a miscompile in 188.ammp.
Differential Revision: https://reviews.llvm.org/D65168
llvm-svn: 366930
We need to be able to load and store s128 for memcpy inlining, where we want to
generate Q register mem ops. Making these legal also requires that we add some
support in other instructions. Regbankselect should also know about these since
they have no GPR register class that can hold them, so need special handling to
live on the FPR bank.
Differential Revision: https://reviews.llvm.org/D65166
llvm-svn: 366857
When we select the XRO variants of loads, we can pull in very specific shifts
(of the size of an element). E.g.
```
ldr x1, [x2, x3, lsl #3]
```
This teaches GISel to handle these when they're coming from shifts
specifically.
This adds a new addressing mode function, `selectAddrModeShiftedExtendXReg`
which recognizes this pattern.
This also packs this up with `selectAddrModeRegisterOffset` into
`selectAddrModeXRO`. This is intended to be equivalent to `selectAddrModeXRO`
in AArch64ISelDAGtoDAG.
Also update load-addressing-modes to show that all of the cases here work.
Differential Revision: https://reviews.llvm.org/D65119
llvm-svn: 366819
Sometimes, you can end up with cross-bank copies between same-sized GPRs and
FPRs, which feed into G_STOREs. When these copies feed only into stores, they
aren't necessary; we can just store using the original register bank.
This provides some minor code size savings for some floating point SPEC
benchmarks. (Around 0.2% for 453.povray and 450.soplex)
This issue doesn't seem to show up due to regbankselect or anything similar. So,
this patch introduces an early select function, `contractCrossBankCopyIntoStore`
which performs the contraction when possible. The selector then continues
normally and selects the correct store opcode, eliminating needless copies
along the way.
Differential Revision: https://reviews.llvm.org/D65024
llvm-svn: 366625
I plan on adding memcpy optimizations in the GlobalISel pipeline, but we can't
do that unless we delay lowering to actual function calls. This patch changes
the translator to generate G_INTRINSIC_W_SIDE_EFFECTS for these functions, and
then have each target specify, via the new custom legalizer hook for intrinsics,
that they want it expanded to a libcall.
Differential Revision: https://reviews.llvm.org/D64895
llvm-svn: 366516
Add support for folding G_GEPs into loads of the form
```
ldr reg, [base, off]
```
when possible. This can save an add before the load. Currently, this is only
supported for loads of 64 bits into 64 bit registers.
Add a new addressing mode function, `selectAddrModeRegisterOffset` which
performs this folding when it is profitable.
Also add a test for addressing modes for G_LOAD.
Differential Revision: https://reviews.llvm.org/D64944
llvm-svn: 366503
There doesn't seem to be a practical reason for these instructions to have
different restrictions on the types of relocations that they may be used
with, notwithstanding the language in the ELF AArch64 spec that implies that
specific relocations are meant to be used with specific instructions.
For example, we currently forbid the first instruction in the following
sequence, despite it currently being used by clang to generate a global
reference under -mcmodel=large:
movz x0, #:abs_g0_nc:foo
movk x0, #:abs_g1_nc:foo
movk x0, #:abs_g2_nc:foo
movk x0, #:abs_g3:foo
Therefore, allow MOVK/MOVN/MOVZ to accept the union of the set of relocations
that they currently accept individually.
Differential Revision: https://reviews.llvm.org/D64466
llvm-svn: 366461
Implement IR intrinsics for stack tagging. Generated code is very
unoptimized for now.
Two special intrinsics, llvm.aarch64.irg.sp and llvm.aarch64.tagp are
used to implement a tagged stack frame pointer in a virtual register.
Differential Revision: https://reviews.llvm.org/D64172
llvm-svn: 366360
The jcvt intrinsic defined in ACLE [1] is available when __ARM_FEATURE_JCVT is defined.
This change introduces the AArch64 intrinsic, wires it up to the instruction and a new clang builtin function.
The __ARM_FEATURE_JCVT macro is now defined when an Armv8.3-A or higher target is used.
I've implemented the target detection logic in Clang so that this feature is enabled for architectures from armv8.3-a onwards (so -march=armv8.4-a also enables this, for example).
make check-all didn't show any new failures.
[1] https://developer.arm.com/docs/101028/latest/data-processing-intrinsics
Differential Revision: https://reviews.llvm.org/D64495
llvm-svn: 366197
Split AArch64FrameLowering::resolveFrameIndexReference into two parts:
* Finding frame offset for the index.
* Finding base register and offset to that register.
The second part will be used to implement a virtual frame pointer in
armv8.5 MTE stack instrumentation lowering.
Reviewers: pcc, vitalybuka, hctim, ostannard
Subscribers: javed.absar, kristof.beyls, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64171
llvm-svn: 365958
Since we have distinct types for pointers and scalars, G_INTTOPTRs can sometimes
obstruct attempts to find constant source values. These usually come about when
we try to do some kind of null pointer check. Teaching getConstantVRegValWithLookThrough
about this operation allows the CBZ/CBNZ optimization to catch more cases.
This change also improves the case where we can't find a constant source at all.
Previously we would emit a cmp, cset and tbnz for that. Now we try to just emit
a cmp and conditional branch, saving an instruction.
The cumulative code size improvement of this change plus D64354 is 5.5% geomean
on arm64 CTMark -O0.
Differential Revision: https://reviews.llvm.org/D64377
llvm-svn: 365690
Some minor cleanup.
The `getOpcodeDef` function in Utils does the same thing as `findMIFromReg`. It also looks
through copies, which `findMIFromReg` didn't.
Delete `findMIFromReg` and use `getOpcodeDef` instead. This only happens in
`tryOptVectorDup` right now.
Update opt-shuffle-splat to show that we can look through the copies now, too.
Differential Revision: https://reviews.llvm.org/D64520
llvm-svn: 365684
There are a few places where we walk over copies throughout
AArch64InstructionSelector.cpp. In Utils, there's a function that does exactly
this which we can use instead.
Note that the utility function works with the case where we run into a COPY
from a physical register. We've run into bugs with this a couple times, so using
it should defend us from similar future bugs.
Also update opt-fold-compare.mir to show that we still handle physical registers
properly.
Differential Revision: https://reviews.llvm.org/D64513
llvm-svn: 365683
Rework the TTI cache and software prefetching APIs to prepare for the
introduction of a general system model. Changes include:
- Marking existing interfaces const and/or override as appropriate
- Adding comments
- Adding BasicTTIImpl interfaces that delegate to a subtarget
implementation
- Adding a default "no information" subtarget implementation
Only a handful of targets use these interfaces currently: AArch64,
Hexagon, PPC and SystemZ. AArch64 already has a custom subtarget
implementation, so its custom TTI implementation is migrated to use
the new facilities in BasicTTIImpl to invoke its custom subtarget
implementation. The custom TTI implementations continue to exist for
the other targets with this change. They are not moved over to
subtarget-based implementations.
The end goal is to have the default subtarget implementation defer to
the system model defined by the target. With this change, the default
subtarget implementation essentially returns "no information" for
these interfaces. None of the existing users of TTI will hit that
implementation because they define their own custom TTI
implementations and won't use the BasicTTIImpl implementations.
Once system models are in place for the targets that use these
interfaces, their custom TTI implementations can be removed.
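A generic sketch of the delegation pattern being set up (hypothetical class and hook names, not the actual TTI/subtarget interfaces):
```
#include <cstdio>

// Default subtarget-level model: "no information".
struct SubtargetModel {
  virtual ~SubtargetModel() = default;
  virtual unsigned getCacheLineSize() const { return 0; }    // 0 = unknown
  virtual unsigned getPrefetchDistance() const { return 0; } // 0 = none
};

// A target with real numbers overrides the hooks.
struct TunedCPUModel : SubtargetModel {
  unsigned getCacheLineSize() const override { return 64; }
  unsigned getPrefetchDistance() const override { return 1024; }
};

// TTI-style wrapper: the generic implementation just forwards to whichever
// subtarget model the target provides.
struct BasicTTI {
  const SubtargetModel &ST;
  explicit BasicTTI(const SubtargetModel &S) : ST(S) {}
  unsigned getCacheLineSize() const { return ST.getCacheLineSize(); }
  unsigned getPrefetchDistance() const { return ST.getPrefetchDistance(); }
};

int main() {
  SubtargetModel Default;
  TunedCPUModel Tuned;
  std::printf("%u %u\n", BasicTTI(Default).getCacheLineSize(),
              BasicTTI(Tuned).getCacheLineSize()); // prints "0 64"
  return 0;
}
```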
Differential Revision: https://reviews.llvm.org/D63614
llvm-svn: 365676