This is an addition to the existing Statistical Profiling extension,
introducing an extra system register that is enabled by the new 'spe-eef'
subtarget feature.
Patch written by Simon Tatham.
Reviewed By: ostannard
Differential Revision: https://reviews.llvm.org/D92391
This introduces asm support for the Branch Record Buffer extension, through
the new 'brbe' subtarget feature. It consists of a new set of system registers
that enable the handling of branch records.
Patch written by Simon Tatham.
Reviewed By: ostannard
Differential Revision: https://reviews.llvm.org/D92389
This is split off from D91718 and adds a new target hook
supportsScalableVectors that can be queried to check if scalable vectors
are supported by the backend. For AArch64 this returns true if SVE is
enabled.
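For reference, a minimal sketch of what the AArch64 override described above might look like (the hook name comes from this patch; the surrounding class and subtarget field names are assumptions):
```
// Sketch only: report scalable-vector support when SVE is enabled.
bool AArch64TTIImpl::supportsScalableVectors() const {
  return ST->hasSVE();
}
```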
Reviewed By: david-arm
Differential Revision: https://reviews.llvm.org/D93060
The LD/STD-like instructions are selected only when the load/store
alignment is >= 4, to handle the case where the offset might not be
known (i.e. relocations). That meant we had to select the X-Form load
for `%0 = load i64, i64* %arrayidx, align 2`. In fact, we can still select
the D-Form load if the offset is known, so we now only query the load/store
alignment when we don't know whether the offset is a multiple of 4.
Reviewed By: jji, Nemanjai
Differential Revision: https://reviews.llvm.org/D93099
We worked with @rogfer01 from BSC to develop this patch.
Authored-by: Roger Ferrer Ibanez <rofirrim@gmail.com>
Co-Authored-by: ShihPo Hung <shihpo.hung@sifive.com>
Co-Authored-by: Monk Chiang <monk.chiang@sifive.com>
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D93366
Define vlse/vsse intrinsics and lower to V instructions.
We worked with @rogfer01 from BSC to develop this patch.
Authored-by: Roger Ferrer Ibanez <rofirrim@gmail.com>
Co-Authored-by: Zakk Chen <zakk.chen@sifive.com>
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D93445
On PPC, the vector pair instructions are independent of MMA.
This patch renames the vector pair LLVM intrinsics and Clang builtins to replace the _mma_ prefix with _vsx_ in their names.
We also move the vector pair type/intrinsic/builtin tests into their own files.
Differential Revision: https://reviews.llvm.org/D91974
The PPCCTRLoop pass has been moved to HardwareLoops,
so the associated comments and some now-unused code are obsolete.
Reviewed By: #powerpc, nemanjai
Differential Revision: https://reviews.llvm.org/D93336
This extends the command-line support for the 'armv8.7-a' architecture
name to the ARM target.
Based on a patch written by Momchil Velikov.
Reviewed By: ostannard
Differential Revision: https://reviews.llvm.org/D93231
This introduces command-line support for the 'armv8.7-a' architecture name
(and an alias without the '-', as usual), and for the 'ls64' extension name.
Based on patches written by Simon Tatham.
Reviewed By: ostannard
Differential Revision: https://reviews.llvm.org/D91776
This adds support for the v8.7-A LD64B/ST64B Accelerator extension
through a subtarget feature called "ls64". It adds four 64-byte
load/store instructions with an operand in the new GPR64x8 register
class, and one system register that's part of the same extension.
Based on patches written by Simon Tatham.
Reviewed By: ostannard
Differential Revision: https://reviews.llvm.org/D91775
This adds a GPR64x8 register class that will be needed as the data
operand to the LD64B/ST64B family of instructions in the v8.7-A
Accelerator Extension, which load or store a contiguous range of eight
x-regs. It has to be its own register class so that register allocation
will have visibility of the full set of registers actually read/written
by the instructions, which will be needed when we add intrinsics and/or
inline asm access to this piece of architecture.
Patch written by Simon Tatham.
Reviewed By: ostannard
Differential Revision: https://reviews.llvm.org/D91774
This introduces support for the v8.7-A architecture through a new
subtarget feature called "v8.7a". It adds two new "WFET" and "WFIT"
instructions, the nXS limited-TLB-maintenance qualifier for DSB and TLBI
instructions, a new CPU id register, ID_AA64ISAR2_EL1, and the new
HCRX_EL2 system register.
Based on patches written by Simon Tatham and Victor Campos.
Reviewed By: ostannard
Differential Revision: https://reviews.llvm.org/D91772
This enables the capturing of multiple required features in the AArch64
AsmParser's SysAlias error messages.
Reviewed By: ostannard
Differential Revision: https://reviews.llvm.org/D92388
This removes the general forms of the AArch64 MSR and MRS instructions
from the same decoding table that contains many more specific
instructions that supersede them. They're now in a separate decoding
table of their own, called "Fallback", which is only consulted in the
event of the main decoder table failing to produce an answer.
This should avoid decoding conflicts on future specialized instructions
in the MSR space.
Patch written by Simon Tatham.
Reviewed By: ostannard
Differential Revision: https://reviews.llvm.org/D91771
This was needed in an earlier version of D92645, but isn't now - and I've just noticed that it was potentially flawed depending on the relevant widths of the broadcasted and extracted subvectors.
Subvector broadcasts are only load instructions, yet X86ISD::SUBV_BROADCAST treats them more generally, requiring a lot of fallback tablegen patterns.
This initial patch replaces constant vector lowering inside lowerBuildVectorAsBroadcast with direct X86ISD::SUBV_BROADCAST_LOAD loads which helps us merge a number of equivalent loads/broadcasts.
As well as general plumbing/analysis additions for SUBV_BROADCAST_LOAD, I needed to wrap SelectionDAG::makeEquivalentMemoryOrdering so it can handle result chains from non-generic LoadSDNode nodes.
Later patches will continue to replace X86ISD::SUBV_BROADCAST usage.
Differential Revision: https://reviews.llvm.org/D92645
X86 and AArch64 expand it as a libcall inside the target, and PowerPC also
wants to expand it as a libcall for P8. This patch implements the expansion
in the legalizer to share the logic, and removes the target-specific code
from X86/AArch64 to avoid duplication.
Reviewed By: Craig Topper
Differential Revision: https://reviews.llvm.org/D91331
Define vector widening mul intrinsics and lower them to V instructions.
We worked with @rogfer01 from BSC to develop this patch.
Authored-by: Roger Ferrer Ibanez <rofirrim@gmail.com>
Co-Authored-by: Hsiangkai Wang <kai.wang@sifive.com>
Differential Revision: https://reviews.llvm.org/D93381
Define vector mul/div/rem intrinsics and lower them to V instructions.
We worked with @rogfer01 from BSC to develop this patch.
Authored-by: Roger Ferrer Ibanez <rofirrim@gmail.com>
Co-Authored-by: Hsiangkai Wang <kai.wang@sifive.com>
Differential Revision: https://reviews.llvm.org/D93380
If users want to use vector floating-point instructions, they need to
additionally specify the 'F' extension.
Differential Revision: https://reviews.llvm.org/D93282
Define vle/vse intrinsics and lower to V instructions.
We worked with @rogfer01 from BSC to develop this patch.
Authored-by: Roger Ferrer Ibanez <rofirrim@gmail.com>
Co-Authored-by: Zakk Chen <zakk.chen@sifive.com>
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D93359
The ABI explains that %fs:(%eax) zero-extends %eax to 64 bits and adds
that to the TLS base address, but the TLS base address need not be at the
start of the TLS block; TLS references may use negative offsets.
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D93158
... so just ensure that we pass the DomTreeUpdater into it.
Fixes DomTree preservation for a large number of tests,
all of which are marked as such so that they do not regress.
Similar to D77853. Change ADRP to print the target address in hex, instead of the raw immediate.
The behavior is similar to GNU objdump but we also include `0x`.
Note: GNU objdump is not consistent about whether it emits `0x` for different architectures. We emit `0x` consistently for all targets.
```
GNU objdump: adrp x16, 10000000
Old llvm-objdump: adrp x16, #0
New llvm-objdump: adrp x16, 0x10000000
```
`adrp Xd, 0x...` assembles to a relocation referencing `*ABS*+0x10000` which is not intended. We need to use a linker or use yaml2obj.
The main test is `test/tools/llvm-objdump/ELF/AArch64/pcrel-address.yaml`
Differential Revision: https://reviews.llvm.org/D93241
Since these are all working on reduction patterns, actually use that term in the function name to make them easier to search for.
At some point we're likely to start working with the ISD::VECREDUCE_* opcodes directly in the x86 backend, but that is still some way off.
SUMMARY:
In order for the runtime on AIX to find the compact unwind section (EHInfo table),
we need to set the following in the traceback table:
- the 6th byte's longtbtable field to true, to signal that an extended TB table flag is present;
- the extended TB table flag to 0x08, to signal that exception handling info is present;
- the offset between the ehinfo TC entry and the TOC base, emitted after all other optional portions of the traceback table.
The patch is authored by Jason Liu.
Reviewers: David Tenty, Digger Lin
Differential Revision: https://reviews.llvm.org/D92766
Calling Instruction::copyFastMathFlags() assumes the instruction is an
FPMathOperator. Avoid calling the function for instructions
that are not instances of FPMathOperator.
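A minimal sketch of the guarded pattern, assuming a generic transform that clones flags from one instruction onto another (the helper name is illustrative, not from the patch):
```
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Operator.h"
using namespace llvm;

// Only copy fast-math flags when the destination can actually carry them;
// calling copyFastMathFlags() on a non-FP instruction would violate the
// assumption described above.
static void copyFMFIfLegal(Instruction &To, const Instruction &From) {
  if (isa<FPMathOperator>(&To))
    To.copyFastMathFlags(&From);
}
```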
I think the global_load/store_dword_addtid instructions support
switching off the scalar address.
Add assembler and disassembler support for this.
Differential Revision: https://reviews.llvm.org/D93288
Refine tablegen pattern for vector load/store, and follow
D93012 to separate masked and unmasked definitions for
pseudo load/store instructions.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D93284
The REX prefix is needed to allow linker relaxations: even if the
instruction we emit may not need it, the linker may change it to a
different instruction which does need it.
Define vfadd/vfsub/vfrsub intrinsics and lower to V instructions.
We worked with @rogfer01 from BSC to develop this patch.
Authored-by: Roger Ferrer Ibanez <rofirrim@gmail.com>
Co-Authored-by: Hsiangkai Wang <kai.wang@sifive.com>
Differential Revision: https://reviews.llvm.org/D93291
This patch enables the Clang type __vector_pair and its associated LLVM
intrinsics even when MMA is disabled. With this patch, the type is now controlled
by the PPC paired-vector-memops option. The builtins and intrinsics will be
renamed to drop the mma prefix in another patch.
Differential Revision: https://reviews.llvm.org/D91819
- Clarify documentation on initializing scratch.
- Rename compute_pgm_rsrc2 field for enabling scratch from
ENABLE_SGPR_PRIVATE_SEGMENT_WAVEFRONT_OFFSET to
ENABLE_PRIVATE_SEGMENT to match hardware definition.
Differential Revision: https://reviews.llvm.org/D93271
MVE has a dual lane vector move instruction, capable of moving two
general purpose registers into lanes of a vector register. They look
like one of:
vmov q0[2], q0[0], r2, r0
vmov q0[3], q0[1], r3, r1
They only accept these lane indices though (and only insert into an
i32), either moving lanes 1 and 3, or 0 and 2.
This patch adds some tablegen patterns for them, selecting from vector
insert_elements. Because the insert_elements are known to be
canonicalized to ascending order, there are several patterns that we need
to select. These lane indices are:
3 2 1 0 -> vmovqrr 31; vmovqrr 20
3 2 1 -> vmovqrr 31; vmov 2
3 1 -> vmovqrr 31
2 1 0 -> vmovqrr 20; vmov 1
2 0 -> vmovqrr 20
With the top one being the most common. All other potential patterns of
lane indices will be matched by a combination of these and the
individual vmov pattern already present. This does mean that we are
selecting several machine instructions at once due to the need to
re-arrange the inserts, but in this case there is nothing else that will
attempt to match an insert_vector_elt node.
Differential Revision: https://reviews.llvm.org/D92553
Indirect sibling calls need to use %r1 to hold the target address.
This is currently hard-coded in many places. This is not only
unnecessary, but makes future changes in this area difficult.
This patch now encodes the target address as operand without
hard coding a register in most places throughout the MI back-end.
Code generation still always uses %r1, but this is now decided
solely in one place in SystemZTargetLowering::LowerCall.
NFC intended.
Define vwadd/vwaddu/vwsub/vwsubu intrinsics and lower to V instructions.
Authored-by: Roger Ferrer Ibanez <rofirrim@gmail.com>
Co-Authored-by: Hsiangkai Wang <kai.wang@sifive.com>
Differential Revision: https://reviews.llvm.org/D93108
AddPromotedToType is being used to legalise INT_TO_FP operations
when the source is a predicate. The point where this introduces
vector extends might cause problems in the future so this patch
falls back to manual promotion within custom lowering.
Differential Revision: https://reviews.llvm.org/D90093
As discussed on D92645, we don't do a good job of recognising when we don't require the full width of a ymm/zmm build vector because the upper elements are undef/zero.
This commit allows us to make use of implicit zeroing of upper elements with AVX instructions, which we emulate in DAG with an INSERT_SUBVECTOR into the bottom of an undef/zero vector of the original type.
This exposed a limitation in getTargetConstantBitsFromNode, which didn't extract bits from INSERT_SUBVECTORs of different element widths; a fix for that is included as well to prevent a couple of regressions.
Support atomic exchange and atomic compare and exchange instructions.
Change CAS and TS1AM instructions for ISel patterns. Add selectADDRzi
pattern for them. Add TS1AM pseudo instruction also for better ISel.
Add shouldExpandAtomicRMWInIR() function to expand all atomicrmw
instructions except atomicrmw xchg. Add custom lower for i8/i16
atomicrmw xchg. Modify replaceFI to support CAS/TS1AM instructions
which use "reg+disp" operands instead of "reg+imm+disp" operands.
And, add several regression tests to check the correctness.
Reviewed By: simoll
Differential Revision: https://reviews.llvm.org/D93161
Summary:
If a store defines (must alias) a load, it clobbers the load.
Fixes: SWDEV-258915
Reviewers: arsenm
Differential Revision: https://reviews.llvm.org/D92951
The X86-64 ABI defines va_list as
```
typedef struct {
  unsigned int gp_offset;
  unsigned int fp_offset;
  void *overflow_arg_area;
  void *reg_save_area;
} va_list[1];
```
This means the size, alignment, and reg_save_area offset will depend on
whether we are in LP64 or in ILP32 mode, so this commit adds the checks.
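As a rough illustration of how the layout shifts with pointer width (the numbers assume the struct above with natural alignment; they are not taken from the patch):
```
#include <cstddef>
#include <cstdio>

// Re-creation of the ABI struct purely to show the size/offset difference.
struct X86_64VaList {
  unsigned int gp_offset;
  unsigned int fp_offset;
  void *overflow_arg_area;
  void *reg_save_area;
};

int main() {
  // LP64 build:        sizeof == 24, reg_save_area at offset 16.
  // ILP32 (x32) build: sizeof == 16, reg_save_area at offset 12.
  std::printf("sizeof=%zu reg_save_area@%zu\n", sizeof(X86_64VaList),
              offsetof(X86_64VaList, reg_save_area));
}
```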
Additionally, the VAARG_64 pseudo-instruction assumed 64-bit pointers, so
this commit adds a VAARG_X32 pseudo-instruction that behaves just like
VAARG_64, except for assuming 32-bit pointers.
Some of these changes were originally done by
Michael Liao <michael.hliao@gmail.com>.
Fixes https://bugs.llvm.org/show_bug.cgi?id=48428.
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D93160
My bot runs VS 2019, but it could not compile this code.
Message:
[55/2465] Building CXX object lib\Target\Hexagon\CMakeFiles\LLVMHexagonCodeGen.dir\HexagonVectorCombine.cpp.obj
FAILED: lib/Target/Hexagon/CMakeFiles/LLVMHexagonCodeGen.dir/HexagonVectorCombine.cpp.obj
...
C:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Tools\MSVC\14.23.28105\include\map(71): error C2976: 'std::map': too few template arguments
C:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Tools\MSVC\14.23.28105\include\map(71): note: see declaration of 'std::map'
The version in the path, 14.23, corresponds to _MSC_VER 1923, so raise
the version floor to 1924.
I have not tested with versions between 1924 and 1928 (latest), but the
latest works with the variadic version.
This moves the vtype decoding and printing to RISCVBaseInfo. This keeps all of
the decoding code in the same area as the encoding code. This will make it
easier to change the decoding for the 1.0 spec in the future.
We're now sharing the printing with the debug output for operands in the
assembler. This also fixes that debug output to include the tail and mask
agnostic bits. Since the printing code works on the vtype immediate value, we
now encode the immediate during parsing and store just the immediate in the
operand.
- New function SDValue getBackchainAddress() used by
lowerDYNAMIC_STACKALLOC() and lowerSTACKRESTORE() to properly handle the
backchain offset also with packed-stack.
- Make a common function getBackchainOffset() for the computation of the
backchain offset and use in some places (NFC).
Review: Ulrich Weigand
Differential Revision: https://reviews.llvm.org/D93171
- Once an instruction is simplified, foldable candidates from it should
be invalidated or skipped as the operand index is no longer valid.
Differential Revision: https://reviews.llvm.org/D93174
If a function happens to:
- call setjmp
- do a 16-byte stack allocation
- call a function that sets up a stack frame and longjmp's back
The stack pointer that is restored by setjmp will no longer point to a valid
back chain. According to the ABI, stack accesses in such a function are to be
frame pointer based - so it is an error (quite obviously) to restore the stack
from the back chain.
We already restore the stack from the frame pointer when there are calls to
fast_cc functions. We just need to also do that when there are calls to setjmp.
This patch simply does that.
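For illustration, the problematic shape looks roughly like this (function names are hypothetical):
```
#include <csetjmp>

static std::jmp_buf Buf;
void g(char *);          // sets up its own stack frame and longjmps back to Buf

void f() {
  if (setjmp(Buf))
    return;              // resumed here after the longjmp
  char A[16];            // 16-byte stack allocation
  g(A);                  // never returns normally
}
```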
This was pointed out by the Julia team.
Differential revision: https://reviews.llvm.org/D92906
D82227 has added a proper check to limit PHI vectorization to the
maximum vector register size. That unfortunately resulted in at
least a couple of regressions on SystemZ and x86.
This change reverts PHI handling from D82227 and replaces it with
a more general check in SLPVectorizerPass::tryToVectorizeList().
Moving it to tryToVectorizeList() allows restarting vectorization
if the initial chunk fails.
However, this function is more general and handles not only PHIs
but everything SLP handles. If the vectorization factor were
limited to the maximum vector register size, it would limit much
more vectorization than before, leading to further regressions.
Therefore a new TTI callback getMaximumVF() is added, with the
default 0 to preserve the current behavior and limit nothing. Targets
can then decide what is better for them.
The callback gets ElementSize just like the similar getMinimumVF()
function, plus the main opcode of the chain. The latter is to avoid
regressions, at least on AMDGPU: we can have loads and stores
up to 128 bits wide, and <2 x 16>-bit vector math on some
subtargets, while the rest should not be vectorized. I.e. we need
to differentiate based on the element size and the operation itself.
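A hedged sketch of how a target might override the new hook along the lines described (the class name, widths and policy here are illustrative, not the actual AMDGPU implementation):
```
// Cap the SLP vectorization factor based on element size and opcode.
// Returning 0 keeps the default "no limit" behaviour.
unsigned MyTTIImpl::getMaximumVF(unsigned ElemWidth, unsigned Opcode) const {
  if (Opcode == Instruction::Load || Opcode == Instruction::Store)
    return 128 / ElemWidth;          // loads/stores up to 128 bits wide
  return ElemWidth == 16 ? 2 : 1;    // <2 x 16>-bit math, otherwise scalar
}
```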
Differential Revision: https://reviews.llvm.org/D92059
We have this subtarget feature so it makes sense to use it here. This is
NFC because it's always defined by default on GFX8+.
Differential Revision: https://reviews.llvm.org/D93202
Add andm, orm, xorm, eqvm, nndm, negm, pcvm, lzvm, and tovm intrinsic
instructions, a few pseudo instructions to expand logical intrinsic
using VM512, a mechanism to expand such pseudo instructions, and
regression tests. Also, assign vector mask types and vector mask
register classes correctly. This is required to use VM512 registers
as function arguments.
Reviewed By: simoll
Differential Revision: https://reviews.llvm.org/D93093
Changes in this patch:
- Minor changes to the LowerVECREDUCE_SEQ_FADD function added by @cameron.mcinally
to also work for scalable types
- Added TableGen patterns for FP reductions with unpacked types (nxv2f16, nxv4f16 & nxv2f32)
- Asserts added to expandFMINNUM_FMAXNUM & expandVecReduceSeq for scalable types
Reviewed By: cameron.mcinally
Differential Revision: https://reviews.llvm.org/D93050
A vpt block that just contains either VPST;VCTP or VPT;VCTP will become
invalid once the VCTP is removed. This fixes the first case by removing
the now-empty block, and bails out for the second, as we have no simple
way of converting a VPT to a VCMP.
Differential Revision: https://reviews.llvm.org/D92369
These parameters set a default value of 0, so I believe they
should include a 0 suffix. This allows for versions which do not
set a default value in future.
Reviewed By: foad
Differential Revision: https://reviews.llvm.org/D93187
The runtime library has two families of library implementations, for ppc_fp128 and fp128.
For IBM long double (ppc_fp128), functions are suffixed with 'l', e.g. sqrtl. For
IEEE long double (fp128), they are suffixed with "ieee128" or "f128".
We were missing mappings for several libcalls for IEEE long double.
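A hedged sketch of the kind of mapping involved, as it might appear in the target's lowering setup (the RTLIB libcall enums exist, but the symbol names here are illustrative):
```
// When long double is IEEE quad, point fp128 libcalls at the f128-suffixed
// symbols rather than the ppc_fp128 "l"-suffixed ones.
setLibcallName(RTLIB::SQRT_F128, "sqrtf128");
setLibcallName(RTLIB::LOG_F128, "logf128");
```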
Reviewed By: qiucf
Differential Revision: https://reviews.llvm.org/D91675
Add a new goal, MustReduceRegisterPressure, for the machine combiner pass.
PowerPC will use this new goal to do some register-pressure-related optimization.
Reviewed By: spatel
Differential Revision: https://reviews.llvm.org/D92068
If a faux shuffle uses smaller shuffle inputs, try to recursively combine with those inputs directly instead of widening them immediately. Then widen all smaller inputs at the bottom of the recursion.
This will still mean we're generating nodes on the fly (PR45974) even if we don't combine to a new shuffle but it does help AVX2+ targets combine across xmm/ymm/zmm types, mainly as variable shuffles.
This recommits a87fccb3ff with a fix to mark the destination operand
of the marker instruction as def, to fix a machine verifier failure.
This reverts the revert commit c0f2cea7c0.
Default expansion leads to repeated extensions/truncations to/from vXi16 which shuffle combining and demanded elts can't completely unravel.
Better just to promote (any_extend) the input and perform a vXi16 reduction.
We'll be able to remove a lot of this if we ever get decent legalization support for reduction intrinsics in SelectionDAG.
Some of the pattern matching in PPCInstrVSX.td and node lowering involving vectors assumes 64-bit mode. This patch disables some of the unsafe pattern matching and lowering of BUILD_VECTOR in 32-bit mode.
Reviewed By: Xiangling_L
Differential Revision: https://reviews.llvm.org/D92789
The getPayload/getMask/getPassThrough functions should return values
that could be composed into a masked load/store without any additional
type casts. The previous fix violated that.
Instead, convert scalar mask to a vector right before rescaling.
AlignVectors treats all loaded/stored values as vectors of bytes,
and masks as corresponding vectors of booleans, so make getMask
produce a 1-element vector for scalars from the start.
The ABI demands a data16 prefix for lea in 64-bit LP64 mode, but not in
64-bit ILP32 mode. In both modes this prefix would ordinarily be
ignored, but the instructions may be changed by the linker to
instructions that are affected by the prefix.
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D93157
This adds some basic MVE masked load/store costs, notably changing the
cost of legal loads/stores to the MVECostFactor and the cost of
scalarized instructions to 8*NumElts.
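A minimal sketch of the cost rule as described (not the actual cost-model code; names are illustrative):
```
// Legal masked loads/stores cost one MVE cost factor; anything that has to
// be scalarized is charged 8 per element.
unsigned maskedMemOpCost(bool IsLegal, unsigned NumElts, unsigned MVECostFactor) {
  return IsLegal ? MVECostFactor : 8 * NumElts;
}
```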
Differential Revision: https://reviews.llvm.org/D86538
The performance improvement on LBM previously achieved with improved software
prefetching (36d4421) was recently lost with e00f189. There is now one
memory access in the loop that LoopDataPrefetch cannot handle (whereas before
there was none), which the heuristic rejects.
This patch adds a small margin by allowing 1 non-prefetched memory access for
every 32 prefetched ones, so that the heuristic doesn't bail in this type of
case.
Review: Ulrich Weigand
Differential Revision: https://reviews.llvm.org/D92985
Summary:
Speculative fix for a link failure on bots; the clang-ppc64le-rhel bot fails on link: http://lab.llvm.org:8011/#/builders/57/builds/2307/steps/6/logs/stdio
PPCAsmPrinter.cpp:(.text._ZN12_GLOBAL__N_116PPCAIXAsmPrinter19emitFunctionBodyEndEv+0x2f8): undefined reference to `llvm::XCOFF::getNameForTracebackTableLanguageId(llvm::XCOFF::TracebackTable::LanguageID)'
PPCAsmPrinter.cpp:(.text._ZN12_GLOBAL__N_116PPCAIXAsmPrinter19emitFunctionBodyEndEv+0x2170): undefined reference to `llvm::XCOFF::parseParmsType(unsigned int, unsigned int)'
SUMMARY:
1. Added a new option, -xcoff-traceback-table, to control whether to generate the traceback table for a function.
2. Implemented the functionality to emit the traceback table of a function.
Reviewers: hubert.reinterpretcast, Jason Liu
Differential Revision: https://reviews.llvm.org/D92398
This transform was added at:
c63799fc52
From what I see, it's the first demanded elements transform that adds
a new instruction using the IRBuilder. There are similar folds in
the generic demanded bits chunk of instcombine that also use the
InsertPointGuard code pattern.
The tests here would assert/crash because the new instruction was
being added at the start of the demanded elements analysis rather
than at the instruction that is being replaced.
This patch adds support for lowering function calls with the
rv_marker attribute. The goal is to expand such calls to the
following sequence of instructions:
BL @fn
mov x29, x29
This sequence of instructions triggers Objective-C runtime optimizations,
hence we want to ensure no instructions get moved in between them.
This patch achieves that by adding a new CALL_RVMARKER ISD node,
which gets turned into the BLR_RVMARKER pseudo, which eventually gets
expanded into the sequence mentioned above. The sequence is then marked
as an instruction bundle, to avoid anything being moved in between.
@ahatanak is working on using this attribute in the front- & middle-end.
Together with the front- & middle-end changes, this should address
PR31925 for AArch64.
Reviewed By: t.p.northover
Differential Revision: https://reviews.llvm.org/D92569
Add simple pass for removing redundant vsetvli instructions within a basic block. This handles the case where the AVL register and VTYPE immediate are the same and no other instructions that change VTYPE or VL are between them.
There are going to be more opportunities for improvement in this space as we develop more complex tests.
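A hedged sketch of the redundancy condition as described (not the pass's actual data structures):
```
// Within one basic block, a vsetvli is removable when it asks for the same
// AVL register and the same VTYPE immediate as the previous one, and nothing
// in between changes VL or VTYPE.
struct VSetVLIRequest {
  unsigned AVLReg;
  unsigned VTypeImm;
};

static bool isRedundant(const VSetVLIRequest &Prev, const VSetVLIRequest &Cur) {
  return Prev.AVLReg == Cur.AVLReg && Prev.VTypeImm == Cur.VTypeImm;
}
```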
Differential Revision: https://reviews.llvm.org/D92679
Although this was something that I was hoping we would not have to do,
this patch makes t2DoLoopStartTP a terminator in order to keep it at the
end of its block, so no extra MVE instructions are allowed between it and
the end. Since t2DoLoopStartTP also starts a tail predication region,
it is also marked as having side effects. The t2DoLoopStart is still
not a terminator, giving it the extra scheduling freedom that can be
helpful, but now that we have a TP version the two can be treated
differently.
Differential Revision: https://reviews.llvm.org/D91887
The compiler is making no effort to preserve upper elements. To do so would require another source operand tied with the destination and a different intrinsic interface to give control of this source to the programmer.
This patch changes the tail policy to agnostic so that the CPU doesn't need to make an effort to preserve them.
This is consistent with the RVV intrinsic spec here https://github.com/riscv/rvv-intrinsic-doc/blob/master/rvv-intrinsic-rfc.md#configuration-setting
Differential Revision: https://reviews.llvm.org/D93080
This CL changes the asm syntax for section flags, making them more like ELF
(previously "passive" was the only option). Now we also allow "G" to designate
COMDAT group sections. In these sections we set the appropriate comdat flag on
function symbols, and also avoid auto-creating a new section for them.
This also adds asm-based tests for the changes D92691 to go along with
the direct-to-object tests.
Differential Revision: https://reviews.llvm.org/D92952
This is a reland of rG4564553b8d8a with a fix to the lit pipeline in
llvm/test/MC/WebAssembly/comdat.ll
This CL changes the asm syntax for section flags, making them more like ELF
(previously "passive" was the only option). Now we also allow "G" to designate
COMDAT group sections. In these sections we set the appropriate comdat flag on
function symbols, and also avoid auto-creating a new section for them.
This also adds asm-based tests for the changes D92691 to go along with
the direct-to-object tests.
Differential Revision: https://reviews.llvm.org/D92952
Use RegisterClass::contains instead of going through getMinimalPhysRegClass
and hasSuperClassEq.
Remove the special case for NoRegister. It's identical to the
handling for any other register that isn't VRM2/M4/M8.
The loop-based probing done for stack clash protection altered R1D which
corrupted the backchain value to be stored after the probing was done.
By using R0D instead for the loop exit value, R1D is not modified.
Review: Ulrich Weigand.
Differential Revision: https://reviews.llvm.org/D92803
Inline asm can contain constructs like .bytes which may have arbitrary size.
In some cases, this causes us to miscalculate the size of blocks and therefore
offsets, causing us to incorrectly compress a JT.
To be safe, just bail out of the whole thing if we find any inline asm.
Fixes PR48255
Differential Revision: https://reviews.llvm.org/D92865
There is an in-progress proposal for the following pseudo-instructions
in the assembler, to complement the existing `sext.w` rv64i instruction:
- sext.b
- sext.h
- zext.b
- zext.h
- zext.w
The `.b` and `.h` variants are available with rv32i and rv64i, and `zext.w` is
only available with `rv64i`.
These are implemented primarily as pseudo-instructions, as these instructions
expand to multiple real instructions. In the case of `zext.b`, this expands to a
single rv32/64i instruction, so it is implemented with an InstAlias (like
`sext.w` is on rv64i).
The proposal is available here: https://github.com/riscv/riscv-asm-manual/pull/61
Reviewed By: asb
Differential Revision: https://reviews.llvm.org/D92793
If SETUNE isn't legal, UO can use the NOT of the SETO expansion.
Removes some complex isel patterns. Most of the test changes are
from using XORI instead of SEQZ.
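Semantically, the expansion amounts to the following (a plain C++ restatement of the condition, not the DAG code):
```
// Ordered means neither operand is NaN; unordered is its logical negation.
bool isOrdered(double A, double B)   { return A == A && B == B; }
bool isUnordered(double A, double B) { return !isOrdered(A, B); }
```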
Differential Revision: https://reviews.llvm.org/D92008
This patch changes performMSCATTERCombine to also promote the indices of
masked gathers where the element type is i8 or i16, and adds various tests
for gathers with illegal types.
Reviewed By: sdesmalen
Differential Revision: https://reviews.llvm.org/D91433
We currently have problems with the way that low overhead loops are
specified, with LR being spilled between the t2LoopDec and the t2LoopEnd
forcing the entire loop to be reverted late in the backend. As they will
eventually become a single instruction, this patch introduces a
t2LoopEndDec which is the combination of the two, combined before
register allocation to make sure this does not fail.
Unfortunately this instruction is a terminator that produces a value
(and also branches - it only produces the value around the branching
edge). So this needs some adjustment to phi elimination and the register
allocator to make sure that we do not spill this LR def around the loop
(needing to put a spill after the terminator). We treat the loop very
carefully, making sure that there is nothing else like calls that would
break its ability to use LR. For that, this adds an
isUnspillableTerminator to opt into the new behaviour.
There is a chance that this could cause problems, and so I have added an
escape option in case. But I have not seen any problems in the testing
that I've tried, and not reverting Low overhead loops is important for
our performance. If this does work then we can hopefully do the same for
t2WhileLoopStart and t2DoLoopStart instructions.
This patch also contains the code needed to convert or revert the
t2LoopEndDec in the backend (which just needs a subs; bne) and the code
pre-ra to create them.
Differential Revision: https://reviews.llvm.org/D91358
Both ds_read_b128 and ds_read2_b64 are valid for 128bit 16-byte aligned
loads but the one that will be selected is determined either by the order in
tablegen or by the AddedComplexity attribute. Currently ds_read_b128 has
priority.
While ds_read2_b64 has lower alignment requirements, we cannot always
restrict ds_read_b128 to 16-byte alignment because of unaligned-access-mode
option. This was causing ds_read_b128 to be selected for 8-byte aligned
loads regardless of the chosen access mode.
To resolve this we use two patterns for selecting ds_read_b128. One
requires alignment of 16-byte and the other requires
unaligned-access-mode option.
Same goes for ds_write2_b64 and ds_write_b128.
Differential Revision: https://reviews.llvm.org/D92767
The phi created in a low overhead loop seems to get created with a default
register class. Copies are then inserted between the low
overhead loop pseudo instructions (which produce/consume GPRlr
registers) and the phi holding the induction. This patch removes
those as a step towards attempting to make t2LoopDec and t2LoopEnd a
single instruction, and appears useful in its own right as shown in the
tests.
Differential Revision: https://reviews.llvm.org/D91267
This patch implements amx programming model that discussed in llvm-dev
(http://lists.llvm.org/pipermail/llvm-dev/2020-August/144302.html).
Thanks to Hal for the good suggestion on the RA. The fast RA is not in the patch yet.
This patch implements 7 components.
1. The C interface to the end user.
2. The AMX intrinsics in LLVM IR.
3. Transform load/store <256 x i32> to AMX intrinsics or split the
type into two <128 x i32>.
4. The Lowering from AMX intrinsics to AMX pseudo instruction.
5. Insert pseudo ldtilecfg and build the def-use chain between ldtilecfg and the amx
instructions.
6. The register allocation for tile register.
7. Morph AMX pseudo instruction to AMX real instruction.
Change-Id: I935e1080916ffcb72af54c2c83faa8b2e97d5cb0
Differential Revision: https://reviews.llvm.org/D87981
Rather than creating a series of associated calls and ensuring that
everything is lined up, use a table driven approach that ensures that
the two always stay in sync.
Avoids spurious newlines showing up in the output when emitting assembly
via MC.
Reviewed By: MaskRay, arsenm
Differential Revision: https://reviews.llvm.org/D92690
Pretty sure we meant to be checking signed 32-bit immediates here
rather than unsigned 32-bit. I suspect I messed this up because
in MathExtras.h we have isIntN and isUIntN so isIntN differs in
signedness depending on whether you're using APInt or plain integers.
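A small sketch of the two range checks for a 32-bit immediate (these helpers do exist in MathExtras.h; the wrapper names are illustrative):
```
#include "llvm/Support/MathExtras.h"

// isIntN checks the signed N-bit range, isUIntN the unsigned one.
static bool fitsSigned32(int64_t Imm)    { return llvm::isIntN(32, Imm); }
static bool fitsUnsigned32(uint64_t Imm) { return llvm::isUIntN(32, Imm); }
```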
This fixes a case where we didn't fold a constant created
by shrinkAndImmediate. Since shrinkAndImmediate doesn't topologically
sort constants it creates, we can fail to convert the Constant
to a TargetConstant. This leads to very strange behavior later.
Fixes PR48458.
Revert part of https://reviews.llvm.org/D92084 to make it simpler to
start consuming the EndOfStatement token within AMDGPU's
ParseInstruction in a future patch. This also brings us back to what
every other target currently does.
A future change to move the position back to the end of the statement
would likely need to audit all of the AMDGPUOperand SMLoc ranges, and
determine the SMLoc for the last character of the last operand.
Reviewed By: dp
Differential Revision: https://reviews.llvm.org/D92960
Add builtins required to implement vcmla and rotated variants from
the ACLE
Reviewed By: t.p.northover
Differential Revision: https://reviews.llvm.org/D92929
Add vfmk intrinsic instructions, a few pseudo instructions to expand
vfmk intrinsic using VM512 correctly, and regression tests.
Reviewed By: simoll
Differential Revision: https://reviews.llvm.org/D92758