It turns out that BranchFolder and IfCvt do not like unanalyzable
branches that fall through. This means that removing the unconditional
branches from the end of tail-predicated instructions can run into
asserts and verifier issues.
This effectively reverts 372eb2bbb6, but
adds handling for t2DoLoopEndDec instructions, which are not branches
and so can be safely skipped.
In most cases, the dup(*ext) pattern can be rearranged to perform
the extension on the vector side, allowing for further vector-specific
optimisations to be made. However, the initial checks for this conversion
were insufficient, allowing invalid encodings to be attempted (causing
compilation to fail).
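For illustration, a hypothetical AArch64 sketch of the rearrangement
(register choices are made up, not taken from the patch):
```
// before: extend on the scalar side, then duplicate
sxtb  w8, w0
dup   v0.8h, w8
// after: duplicate first, then extend on the vector side
dup   v0.8b, w0
sshll v0.8h, v0.8b, #0
```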
Differential Revision: https://reviews.llvm.org/D94778
Specify LHS/RHS operands in matchShuffleWithUNPCK's calls to isTargetShuffleEquivalent, and handle VBROADCAST/VBROADCAST_LOAD matching in IsElementEquivalent
In MipsDelaySlotFiller, when replacing an old call-branch with
the compact branch instruction, an assertion was triggered by erasing
the old call while its CSInfo was unhandled.
The problem was reported in PR48695.
This patch fixes it by moving the call site info from the old call
instruction to its replacement.
Patch by Nikola Tesic
Differential revision: https://reviews.llvm.org/D94685
A bug in the system assembler can assemble the xxspltd extended
mnemonic into the wrong instruction (extracting the wrong element).
Emit the full xxpermdi with all operands to work around the problem.
Differential Revision: https://reviews.llvm.org/D94419
Original patch by @rogfer01.
This patch supports vector truncates, which on RVV must be done in a
series of instructions truncating by one power-of-two at a time. This is
done through custom-lowering and a custom node to avoid LLVM
re-combining the split TRUNCATE nodes.
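As a sketch (the vsetvli configuration between steps is elided and the
register choices are illustrative), truncating i32 elements to i8 takes
two narrowing steps:
```
vnsrl.wi v26, v28, 0    # i32 -> i16 (narrowing shift by 0 truncates)
vnsrl.wi v25, v26, 0    # i16 -> i8
```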
Authored-by: Roger Ferrer Ibanez <rofirrim@gmail.com>
Co-Authored-by: Fraser Cormack <fraser@codeplay.com>
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D94796
The vcompress intrinsic is defined such that it requires a tail
undisturbed policy. This patch makes it so we can use the tail
agnostic policy if the user has passed vundefined to the dest
operand.
We need to do something similar for masked policy, but we need
annotation of which instructions use the mask policy first.
Not sure if this is sufficient for scheduling or if we'll need to
select different pseudos that don't have a tied def.
Reviewed By: evandro
Differential Revision: https://reviews.llvm.org/D94566
Reassociate some patterns to generate more fma instructions and reduce
register pressure.
Reviewed By: jsji
Differential Revision: https://reviews.llvm.org/D92071
Fix PR48742: the D75203 assembler optimization locates MCRelaxableFragments
between two MCSymbols and relaxes some of them to reduce the size of an
MCAlignFragment. A -g build has more MCSymbols and may therefore produce
different assembler output (e.g. a jmp's MCRelaxableFragment may be 5 bytes
with -O1 but 2 bytes with -O1 -g).
`.p2align 4, 0x90` is common due to loops. For a larger program with a
lot of temporary labels, an assembly output difference is all but
inevitable. The cost seems to outweigh the benefits, so we default to
-x86-pad-for-align=false until the heuristic is improved.
Reviewed By: skan
Differential Revision: https://reviews.llvm.org/D94542
If the previous block in a function does not fall through, the nops
added to align the next block will never be executed, so we can freely
(except for code size) align more branches. This happens in the constant
islands pass (as it cannot happen later) and only at aggressive
optimization levels, as it does increase code size.
Differential Revision: https://reviews.llvm.org/D94394
This treats low-overhead loop branches the same as jump tables and
indirect branches in analyzeBranch - they cannot be analyzed, but the
direct branches at the end of the block may be removed. This helps
remove the unnecessary branches earlier, which can help produce better
codegen (and change block layout in a number of cases).
Differential Revision: https://reviews.llvm.org/D94392
According to "9. Vector Memory Alignment Constraints" in V
specification, the alignment of vector memory access is aligned to the
size of the element. In our current implementation, we support ELEN up
to 64. We could assume the alignment of vector registers is 64 under the
assumption.
Differential Revision: https://reviews.llvm.org/D94751
SimplifyDemandedBits can remove set bits from the immediates of
instructions like AND/OR/XOR. This can prevent them from being
efficiently codegened on RISCV.
This adds an initial version that tries to keep or form 12 bit
sign extended immediates for AND operations to enable use of ANDI.
If that doesn't work we'll try to create a 32 bit sign extended immediate
to use LUI+ADDIW.
More optimizations are possible for different size immediates or
different operations. But this is a good starting point that already
has test coverage.
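For illustration (a made-up constant, RV64 shown): AND with 0xFFFFFFF0
keeps a 12-bit sign-extended immediate and stays a single ANDI, while a
demanded-bits-shrunk immediate such as 0xFFFF0 no longer fits and must
be materialized first:
```
andi  a0, a0, -16       # immediate fits in 12 bits, one instruction
# versus:
lui   a1, 256           # 256 << 12 = 0x100000
addiw a1, a1, -16       # 0x100000 - 16 = 0xFFFF0
and   a0, a0, a1
```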
Reviewed By: frasercrmck
Differential Revision: https://reviews.llvm.org/D94628
The TripCount for a predicated vector loop body will be
ceil(ElementCount/Width). This alters the conversion of an
active.lane.mask to a VCTP intrinsic to match.
Differential Revision: https://reviews.llvm.org/D94608
If this will help us fold shuffles together, then push the shuffle through the merged binops.
Ideally this would be performed in DAGCombiner::visitVECTOR_SHUFFLE, but getting an efficient and legal merged shuffle can be tricky - on SSE we can be confident that shuffles of vectors with 32/64-bit elements should easily fold.
Not all machine loops will have a predecessor, so the pass needs to
check for one before continuing.
Reviewed By: dmgreen
Differential Revision: https://reviews.llvm.org/D94780
In order to limit the number of combinations of REINTERPRET_CAST,
whilst at the same time prevent overlap with BITCAST, this patch
establishes the following rules:
1. The operand and result element types must be the same.
2. The operand and/or result type must be an unpacked type.
Differential Revision: https://reviews.llvm.org/D94593
I noticed in D94450 that there were quite a few places where we generate
the sequence:
```
xN <- comparison ...
xN <- xor xN, 1
bnez xN, symbol
```
Given we know the XOR will be used by BRCOND, which only looks at the
lowest bit, I think we can remove the XOR and just invert the branch
condition in these cases.
The case mostly seems to come up in floating point tests, where there is often
more logic to combine the results of multiple SETCCs, rather than a single
(BRCOND (SETCC ...) ...).
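A sketch of the intended effect on RISC-V (instruction choices
illustrative):
```
# before
feq.s a0, fa0, fa1
xori  a0, a0, 1
bnez  a0, .LBB0_2
# after: fold the xor into an inverted branch
feq.s a0, fa0, fa1
beqz  a0, .LBB0_2
```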
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D94535
For the ARM hard-float calling convention, calls to variadic functions
need to be treated differently, even if only the fixed arguments are
provided.
This fixes GCC-C-execute-pr68390 in the test-suite, which is failing on
the ARM GlobalISel bot.
Basic support for A64FX was added in D75594, but its scheduling model
was missing. This commit adds the scheduling model and also amends/adds
some subtarget parameters of A64FX.
The A64FX Microarchitecture Manual, the source of information for this
commit, is on GitHub.
https://github.com/fujitsu/A64FX/
Differential Revision: https://reviews.llvm.org/D93791
In order to import patterns for these, we need to define new ops that can map to
the AArch64ISD::[SU]ITOF nodes. We then transform fpr->fpr variants of the generic
opcodes to these custom opcodes in preisel-lowering. We have to do it
here and not in the PostLegalizer combiner because this has to run after
regbankselect.
Differential Revision: https://reviews.llvm.org/D94702
G_[US]ITOFP users of loads on AArch64 can operate on both gpr and fpr banks for scalars.
Because of this, if their source is a load, then that load can be assigned to an fpr
bank and therefore avoid having to do a cross bank copy via a gpr->fpr conversion.
Differential Revision: https://reviews.llvm.org/D94701
Some FP compares expand to a sequence ending with (xor X, 1) to invert the result. If
the consumer is a select_cc we can likely get rid of this xor by fixing
up the select_cc condition.
This patch combines (select_cc (xor X, 1), 0, setne, trueV, falseV) ->
(select_cc X, 0, seteq, trueV, falseV) if we can prove X is 0/1.
Reviewed By: lenary
Differential Revision: https://reviews.llvm.org/D94546
Legacy AIX assembly might not support all extended mnemonics. Add a
feature bit to control their generation in MC, and avoid generating
them by default on AIX.
Reviewed By: sfertile
Differential Revision: https://reviews.llvm.org/D94458
MCTargetDesc includes headers from Utils and Utils includes headers
from MCTargetDesc. So from a library layering perspective it makes sense
for them to be in the same library. I guess the other option might be to
move the tablegen includes from RISCVMCTargetDesc.h to RISCVBaseInfo.h
so that RISCVBaseInfo.h didn't need to include RISCVMCTargetDesc.h.
Everything else that depends on Utils also depends on MCTargetDesc so
having one library seemed simpler.
Differential Revision: https://reviews.llvm.org/D93168
Note -x86-use-fsrm-for-memcpy is still disabled by default and there's no
default behavior change.
Differential Revision: https://reviews.llvm.org/D94436
This reverts commit dda60035e9.
This commit caused failures to compile some sources, erroring out
with "error in backend: Cannot select: t85: v2i32 = AArch64ISD::DUP t15",
see https://reviews.llvm.org/D91271 for the full reproduction case.
This introduces the ARMv8.7-A LS64 extension's intrinsics for 64-byte
atomic loads and stores: `__arm_ld64b`, `__arm_st64b`, `__arm_st64bv`,
and `__arm_st64bv0`. These are selected into the LS64 instructions
LD64B, ST64B, ST64BV and ST64BV0, respectively.
Based on patches written by Simon Tatham.
Reviewed By: tmatheson
Differential Revision: https://reviews.llvm.org/D93232
After 49142991a6, clang detects that MUL may be uninitialized. Set it to nullptr to suppress this check.
Adding an assert to check that it is ultimately set fails two test cases. Since this is not a new issue, leave the assertion commented out until a code owner can fix the bug. The two failing test cases are noted in the assertion comment.
This patch custom lowers ISD::VSCALE into a csrr vlenb followed
by a shift right by 3 followed by a multiply by the scale amount.
I've added computeKnownBits support to indicate that the csrr vlenb
always produces 3 trailing zero bits, so the shift right is "exact".
This allows the shift and multiply sequence to be nicely optimized
into a single shift or removed completely when the scale amount is
a power of 2.
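For example, assuming a power-of-2 scale amount of 4 (a sketch):
```
csrr a0, vlenb      # vector register length in bytes
srli a0, a0, 3      # exact: vlenb has 3 trailing zero bits
slli a0, a0, 2      # multiply by the scale amount 4
# ...which DAG combine can fold to:
# csrr a0, vlenb
# srli a0, a0, 1
```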
The non power of 2 case multiplying by 24 is still producing
suboptimal code. We could remove the right shift and use a
multiply by 3. Hopefully we can improve DAG combine to fix that
since it's not unique to this sequence.
This replaces D94144.
Reviewed By: HsiangKai
Differential Revision: https://reviews.llvm.org/D94249
The generated code for the split fp128 load/stores was missing a small yet important adjustment to the pointer metadata being fed into `getStore` and `getLoad`, making it out of sync with the effective memory address.
This problem often resulted in instructions being scheduled in the wrong order.
I also took this chance to clean up some "wrong" uses of `getAlignment` as done in D77687.
Thanks @jrtc27 for finding the problem and providing a patch.
Patch by LemonBoy and Jessica Clarke (jrtc27)
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D94345
Blocks can be laid out such that a t2WhileLoopStart branches backwards. This is forbidden by the architecture and so it fails to be converted into a low-overhead loop. This new pass checks for these cases and moves the target block, fixing any fall-through that would then be broken.
Differential Revision: https://reviews.llvm.org/D92385
See if we can remove the shuffle by re-sorting a HOP chain so that the HOP args are pre-shuffled.
This initial version just handles (the most common) v4i32/v4f32 hadd/hsub reduction patterns - future work can extend this to v8i16 types plus PACK chains (2f64 HADD/HSUB should already be handled in the half-lane combine code later on).
Add support for G_FCONSTANT of FP128 (Quadruple precision) type.
It replaces the constant by emitting a load with a constant pool entry.
Reviewed By: aemerson
Differential Revision: https://reviews.llvm.org/D94437
This fixes double printing of insertion debug messages in the
legalizer.
Try to cleanup usage of observers. Currently the use of observers is
pretty hard to follow and it's not clear what is responsible for
them. Observers are referenced in 3 places:
1. In the MachineFunction
2. In the MachineIRBuilder
3. In the LegalizerHelper
The observers in the MachineFunction and MachineIRBuilder are both
called only on insertions, and are redundant with each other. The
source of the double printing was that the same observer was added to
both the MachineFunction and the MachineIRBuilder. One of these
needs to be removed. Arguably observers in general should be fully
removed from one or the other, but it may be useful to have a local
observer in the MachineIRBuilder that is not added to the function's
observers. Alternatively, the wrapper observer could manage a local
observer in one place.
The LegalizerHelper only ever calls the observer on changing/changed
instructions, and never insertions. Logically these are two different
types of observers, for changes and for insertions.
Additionally, some places used the GISelObserverWrapper when they only
needed a single observer they could use directly.
Setting the observer in the LegalizerHelper constructor is not
flexible enough if the LegalizerHelper is constructed anywhere outside
the one used by the legalizer. AMDGPU calls the LegalizerHelper in
RegBankSelect, and needs to use a local observer to apply the regbank
to newly created instructions. Currently it accomplishes this by
constructing a local MachineIRBuilder. I'm trying to move the
MachineIRBuilder to be owned/maintained by the RegBankSelect pass
itself, but the locally constructed LegalizerHelper would reset the
observer.
Mips also has a special case use of the LegalizationArtifactCombiner
in applyMappingImpl; I think we do need to run the artifact combiner
during RegBankSelect, but in a more consistent way outside of
applyMappingImpl.
Following on from D91255, this patch is responsible for sinking relevant mul
operands to the same block so that umull/smull instructions can be correctly
generated by the mul combine implemented in the aforementioned patch.
Differential revision: https://reviews.llvm.org/D91271
canonicalizeShuffleMaskWithHorizOp currently only supports shuffles with 1 or 2 sources, but PR41813 will require us to support higher numbers of sources.
This patch just generalizes the initial setup stages to ensure all src ops are the same type and opcode and then will continue to early out if we have more than 2 sources.
rG73a44f437bf1 resulted in 256-bit packss/packus ops with additional shuffles that shuffle combining can sometimes try to convert back into a truncation.
This commit extends SVEIntrinsicOpts::optimizeConvertFromSVBool to
identify and remove longer chains of redundant SVE reinterpret
intrinsics. For example, the following chain of redundant SVE
reinterprets is now recognised as redundant:
%a = <vscale x 2 x i1>
%1 = <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool(<vscale x 2 x i1> %a)
%2 = <vscale x 4 x i1> @llvm.aarch64.sve.convert.from.svbool(<vscale x 16 x i1> %1)
%3 = <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool(<vscale x 4 x i1> %2)
%4 = <vscale x 4 x i1> @llvm.aarch64.sve.convert.from.svbool(<vscale x 16 x i1> %3)
%5 = <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool(<vscale x 4 x i1> %4)
%6 = <vscale x 2 x i1> @llvm.aarch64.sve.convert.from.svbool(<vscale x 16 x i1> %5)
ret <vscale x 2 x i1> %6
and will be replaced with:
ret <vscale x 2 x i1> %a
Eliminating these can sometimes mean emitting fewer unnecessary
loads/stores when lowering to assembly.
Differential Revision: https://reviews.llvm.org/D94074
The isVMOVNOriginalMask was previously only checking for two input
shuffles that could be better expanded as vmovn nodes. This expands that
to single input shuffles that will later be legalized to multiple
vectors.
Differential Revision: https://reviews.llvm.org/D94189
Add pseudo instruction to allow early termination of pixel shader
anywhere based on the value of SCC. The intention is to use this
when a mask of live lanes is updated, e.g. live lanes in WQM pass.
This facilitates early termination of shaders even when EXEC is
incomplete, e.g. in non-uniform control flow.
Reviewed By: foad
Differential Revision: https://reviews.llvm.org/D88777
Previously, instructions which could be expressed as VOP3 in addition
to another encoding had a _e64 suffix on the tablegen record name, while
those only available as VOP3 did not. With this patch, all VOP3s will
have the _e64 suffix. The assembly does not change, only the MIR.
Reviewed By: foad
Differential Revision: https://reviews.llvm.org/D94341
This seems to only have overridden cold handling, which we probably
shouldn't do. As far as I can tell the wrapper library functions are
still inlined as appropriate.
This makes sure that assembly output actually can be assembled.
Set the correct MCExpr relocations specifier VK_PAGEOFF - and also
set VK_PAGE consistently even though it's not visible in the assembly
output.
Differential Revision: https://reviews.llvm.org/D94365
The custom expansion of select operations in the RISC-V backend
interferes with the matching of cmov instructions. Legalizing
select when the Zbt extension is available solves that problem.
Reviewed By: lenary, craig.topper
Differential Revision: https://reviews.llvm.org/D93767
We can use a 0 immediate to avoid needing to materialize 0 into
an FPR first.
Reviewed By: frasercrmck
Differential Revision: https://reviews.llvm.org/D94459
The instruction printer needs subtarget feature bits to change its
behavior based on those features.
Most of the other popular targets were updated back in 2015 in
https://reviews.llvm.org/rGb46d0234a6969; we should update this target
too.
Reviewed By: sfertile
Differential Revision: https://reviews.llvm.org/D94449
PowerPC cores like the e200z759n3 [1] using an efpu2 support only
single-precision hardware floating point instructions. The
single-precision instructions efs* and evfs* are identical to the SPE
float instructions, while efd* and evfd* instructions trigger a
not-implemented exception.
This patch introduces a new command line option -mefpu2 which leads to
single-hardware / double-software code generation.
[1] Core reference:
https://www.nxp.com/files-static/32bit/doc/ref_manual/e200z759CRM.pdf
Differential revision: https://reviews.llvm.org/D92935
Adapted from D54696 by @nikic.
This patch improves lowering of saturating float to
int conversions, FP_TO_[SU]INT_SAT, for X86.
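These conversions correspond to the saturating conversion intrinsics,
e.g. (a sketch in generic IR):
```
%s = call i32 @llvm.fptosi.sat.i32.f32(float %x)
%u = call i32 @llvm.fptoui.sat.i32.f32(float %x)
```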
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D86079
We can't easily treat ASHR as a faux shuffle, but if it was just feeding a PACKSS then it was likely being used as sign-extension for a truncation, so just peek through and adjust the mask accordingly.
v16i32 -> v16i16/v8i16 truncation is now good enough using PACKSS/PACKUS + shuffle combining that it's no longer necessary to early-out on pre-AVX512BW targets.
This was noticed while looking at completing PR40111 and moving combineSubToSubus to DAGCombine entirely.
SSE2 truncation codegen has improved over the past few years (mainly due to better shuffle lowering/combining and computeKnownBits) - it's no longer necessary to early-out from v8i32/v8i64 truncations.
This was noticed while looking at completing PR40111 and moving combineSubToSubus to DAGCombine entirely.
After placing markers, we remove some unnecessary branches, but so far
that handled only the simplest case. This change removes more of the
unnecessary branches.
Reviewed By: dschuff, tlively
Differential Revision: https://reviews.llvm.org/D94047
In ST mode, flat scratch instructions have neither an sgpr nor a vgpr
for the address. This led to an assertion when inserting hard clauses.
Differential Revision: https://reviews.llvm.org/D94406
Updating `ScopeTops` is something we frequently do in CFGStackify, so
this factors it out as a function. This also makes a few utility
functions templated so that they are not dependent on input vector
types and simplifies function parameters.
Reviewed By: tlively
Differential Revision: https://reviews.llvm.org/D94046
This ensures every single terminate pad is a single BB in the form of:
```
%exn = catch $__cpp_exception
call @__clang_call_terminate(%exn)
unreachable
```
This is a preparation for HandleEHTerminatePads pass, which will be
added in a later CL and will run after CFGStackify. That pass duplicates
terminate pads with a `catch_all` instruction, and duplicating it
becomes simpler if we can ensure every terminate pad is a single BB.
Reviewed By: dschuff, tlively
Differential Revision: https://reviews.llvm.org/D94045
Define the `vfclass` IR intrinsics for the respective V instructions.
Authored-by: Roger Ferrer Ibanez <rofirrim@gmail.com>
Co-Authored-by: Evandro Menezes <evandro.menezes@sifive.com>
Differential Revision: https://reviews.llvm.org/D94356
Original patch by @rogfer01.
This patch adds ISel patterns for the above operations to the
corresponding vector/vector and vector/scalar RVV instructions, as well
as extra patterns to match operand-swapped scalar/vector vfrsub and
vfrdiv.
Authored-by: Roger Ferrer Ibanez <rofirrim@gmail.com>
Co-Authored-by: Fraser Cormack <fraser@codeplay.com>
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D94408
Before this patch there was generic mapping from vector_extract
to G_EXTRACT_VECTOR_ELT added in SelectionDAGCompat.td. That
mapping is now replaced by a mapping from extractelt instead.
The reasoning is that vector_extract is marked as deprecated,
so it is assumed that a majority of targets will use extractelt
and not vector_extract (and that the long term solution for all
targets would be to use extractelt).
Targets like AArch64 that still use vector_extract can add an
additional mapping from the deprecated vector_extract as target
specific tablegen definitions. Such a mapping is added for AArch64
in this patch to avoid breaking tests.
When adding the extractelt => G_EXTRACT_VECTOR_ELT mapping we
triggered some new code paths in GlobalISelEmitter, ending up in
an assert when trying to import a pattern containing EXTRACT_SUBREG
for ARM. Therefore this patch also adds a "failedImport" warning
for that situation (instead of hitting the assert).
Differential Revision: https://reviews.llvm.org/D93416
Original patch by @rogfer01.
All ordered comparisons except ONE are supported natively, and all
unordered comparisons except UNE are expanded into sequences involving
explicit NaN checks and mask arithmetic.
Additionally, we expand GT, OGT, GE, OGE to their swapped-operand versions, and
pattern-match those back to the "original", swapping operands once more. This
way we catch both operations and both "vf" and "fv" forms with fewer patterns.
Also add support for floating-point splat_vector, with an optimization for
splatting fpimm0.
Authored-by: Roger Ferrer Ibanez <rofirrim@gmail.com>
Co-Authored-by: Fraser Cormack <fraser@codeplay.com>
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D94242
The VOP3 and VOP DPP subroutines used to generate input operands and
asm strings were essentially copy-pasted several times. They are
deduplicated to reduce the maintenance burden and allow faster
development.
Reviewed By: dp
Differential Revision: https://reviews.llvm.org/D94102
The result cannot be widened, unfortunately, because widening vNi1
would depend on the context in which it appears (i.e. the type alone
is not sufficient to tell if it needs to be widened).
If vpermf128/vpermi128 is acting on 2 similar 'inlane' ops, then try to perform the vpermf128 first which will allow us to merge the ops.
This will help us fix one of the regressions in D56387
This adds uses for the locals introduced for the load/store optimizer's new debug messages. Those locals are only used in debug statements and otherwise create unused-variable warnings.
Differential Revision: https://reviews.llvm.org/D94398
This reapplies commit rG80dee7965dffdfb866afa9d74f3a4a97453708b2.
[X86][SSE] Fold unpack(hop(),hop()) -> permute(hop())
UNPCKL/UNPCKH only uses one op from each hop, so we can merge the hops and then permute the result.
REAPPLIED with a fix for unary unpacks of HOP.
Changes in this patch:
- When lowering floating-point masked gathers, cast the result of the
gather back to the original type with reinterpret_cast before returning.
- Added patterns for reinterpret_casts from integer to floating point, and
concat_vector patterns for bfloat16.
- Tests for various legalisation scenarios with floating point types.
Reviewed By: sdesmalen, david-arm
Differential Revision: https://reviews.llvm.org/D94171
This patch fixes a bug that could result in miscompiles (at least
in an OOT target). The problem could be seen by adding checks that
the DominatorTree used in BasicAliasAnalysis and ValueTracking was
valid (e.g. by adding DT->verify() call before every DT dereference
and then running all tests in test/CodeGen).
The problem was that the LegacyPassManager calculated "last user"
incorrectly for passes such as the DominatorTree when not telling
the pass manager that there was a transitive dependency between
the different analyses. And then it could happen that an incorrect
dominator tree was used when doing alias analysis (which was a pretty
serious bug as the alias analysis result could be invalid).
Fixes: https://bugs.llvm.org/show_bug.cgi?id=48709
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D94138
We did not have specific costs for larger than legal truncates that were
not otherwise cheap (where they were next to stores, for example). As
MVE does not have a dedicated instruction for them (and we do not use
loads/stores yet), they should be expensive as they get expanded to a
series of lane moves.
Differential Revision: https://reviews.llvm.org/D94260
The Pseudo class sets isCodeGenOnly=1 which causes the asm strings
in the pseudos to be ignored. I think this is why the aliases are
needed at all.
Reviewed By: frasercrmck
Differential Revision: https://reviews.llvm.org/D94024
This patch moves all but the BaseInstr to bits in TSFlags.
For the index fields, we can just use a bit to indicate their presence.
The locations of the operands are well defined.
This reduces the llc binary by about 32K on my build. It also
removes the binary search of the table from the custom inserter.
Instead we just check that the SEW op is present.
Reviewed By: rogfer01
Differential Revision: https://reviews.llvm.org/D94375
We check unsafe-fp-math for sqrt but not for fpow, which is
inconsistent. As the direction is to remove this global option, we need
to remove the unsafe-fp-math check for sqrt and update the test with
afn fast-math flags.
Reviewed By: Spatel
Differential Revision: https://reviews.llvm.org/D93891
This aligns the mask with the position of the bits in TSFlags, which is
a little more logical.
I might be adding more fields to TSFlags, and some might be single bits
where just ANDing with a mask to test the bit would make sense.
While there, rename TargetFlags in validateInstruction to reflect that
it's just the constraint bits.
We currently have about 7000 opcodes in the RISCVGenInstrInfo.inc
enum. We can use uint16_t to store these values. We would need to
grow by nearly 9x before we run out of space so this should last
for a little while.
This reduces the llc binary by 32K.
Original patch by @rogfer01.
The RVV integer comparison instructions are defined in such a way that
many LLVM operations are defined by using the "opposite" comparison
instruction and swapping the operands. This is done in this patch in
most cases, except for the mappings where the immediate range must be
adjusted to accommodate:
va < i --> vmsle{u}.vi vd, va, i-1, vm
va >= i --> vmsgt{u}.vi vd, va, i-1, vm
That is left for future optimization; this patch supports all operations
but in the case of the missing mappings the immediate will be moved to
a scalar register first.
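For instance, since there is no vmsgt.vv, a signed greater-than
comparison uses the opposite instruction with its operands swapped
(register choices illustrative):
```
# va > vb  ==>  vb < va
vmslt.vv v0, v9, v8
```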
Since there are so many condition codes and operand cases to check, it
was decided to reduce the test burden by only testing the "vscale x 8"
vector types.
Authored-by: Roger Ferrer Ibanez <rofirrim@gmail.com>
Co-Authored-by: Fraser Cormack <fraser@codeplay.com>
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D94168
The TableGen immAllOnesV and immAllZerosV helpers implicitly wrapped the
ISD::isBuildVectorAll(Ones|Zeros) helper functions. This was inhibiting
their use for targets such as RISC-V which use ISD::SPLAT_VECTOR. In
particular, RISC-V had to define its own 'vnot' fragment.
In order to extend the scope of these nodes to include support for
ISD::SPLAT_VECTOR, two new ISD predicate functions have been introduced:
ISD::isConstantSplatVectorAll(Ones|Zeros). These effectively supersede
the older "isBuildVector" predicates, which are now simple wrappers for
the new functions. They pass a defaulted boolean toggle which preserves
the old behaviour. It is hoped that in time all call-sites can be ported
to the "isConstantSplatVector" functions.
While the use of ISD::isBuildVectorAll(Ones|Zeros) has not changed, the
behaviour of the TableGen immAll(Ones|Zeros)V **has**. To test the new
functionality, the custom RISC-V TableGen fragment has been removed and
replaced with the built-in 'vnot'. To test their use as pattern-roots, two
splat patterns have been updated accordingly.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D94223
This is a first change needed to fix a crash in which the emergency
spill slot ends up being out of reach. This happens when we run the
register scavenger after we have eliminated the frame indexes. The fix
for the actual crash will come in a later change.
This change removes an extra stack size increase we do in
RISCVFrameLowering::determineFrameLayout.
We don't have to change the size of the stack here as
PEI::calculateFrameObjectOffsets is already doing this with the right
size, accounting for the extra alignment.
Differential Revision: https://reviews.llvm.org/D89237
This removes unreachable EH pads in LateEHPrepare. This is not for
optimization but for preparation for CFGStackify. In CFGStackify, we
determine where to place the `try` marker by computing the nearest common
dominator of all predecessors of an EH pad, but when an EH pad does not
have a predecessor, it becomes tricky. We can insert an empty dummy BB
before the EH pad and place the `try` there, but removing unreachable EH
pads is simpler.
This moves an existing exception label test from eh-label.mir to
exception.mir and adds a new test there.
This also adds some comments to existing methods.
Reviewed By: dschuff, tlively
Differential Revision: https://reviews.llvm.org/D94044
- Updates InstPrinter to handle `catch_all`.
- Makes `rethrow` condition an early exit from the function to make the
rest simpler.
- Unify label and catch counters. They don't need to be counted
separately and this will help `delegate` instruction later.
- Removes `LastSeenEHInst` field. This was first introduced to handle
when there are more than one `catch` blocks per `try`, but this was
not implemented correctly and not being used at the moment anyway.
- Reenables all tests in cfg-stackify-eh.ll that don't deal with unwind
destination mismatches, which will be handled in a later CL.
Reviewed By: dschuff, tlively, aardappel
Differential Revision: https://reviews.llvm.org/D94043
This removes `exnref` type and `br_on_exn` instruction. This is
effectively NFC because most uses of these were already removed in the
previous CLs.
Reviewed By: dschuff, tlively
Differential Revision: https://reviews.llvm.org/D94041
This implements basic instructions for the new spec.
- Adds new versions of instructions: `catch`, `catch_all`, and `rethrow`
- Adds support for instruction selection for the new instructions
- `catch` needs a custom routine for the same reason `throw` needs one,
to encode `__cpp_exception` tag symbol.
- Updates `WebAssembly::isCatch` utility function to include `catch_all`
and changes code that compares an instruction's opcode with `catch` to
use that function.
- LateEHPrepare
- Previously in LateEHPrepare we added a `catch` instruction to both
`catchpad`s (for user catches) and `cleanuppad`s (for destructors).
In the new version `catch` is generated from `llvm.catch` intrinsic
in instruction selection phase, so we only need to add `catch_all`
to the beginning of cleanup pads.
- `catch` is generated from instruction selection, but we need to
hoist the `catch` instruction to the beginning of every EH pad,
because `catch` can be in the middle of the EH pad or even in a
split BB from it after various code transformations.
- Removes `addExceptionExtraction` function, which was used to
generate `br_on_exn` before.
- CFGStackify: Deletes `fixUnwindMismatches` function. Running this
function on the new instruction causes crashes, and the new version
will be added in a later CL, whose contents will be completely
different. So deleting the whole function will make the diff easier to
read.
- Reenables all disabled tests in exception.ll and eh-lsda.ll and a
single basic test in cfg-stackify-eh.ll.
- Updates existing tests to use the new assembly format. And deletes
`br_on_exn` instructions from the tests and FileCheck lines.
Reviewed By: dschuff, tlively
Differential Revision: https://reviews.llvm.org/D94040
Clang generates `wasm.get.exception` and `wasm.get.ehselector`
intrinsics, which respectively return a caught exception value (a
pointer to some C++ exception struct) and a selector (an integer value
that tells which C++ `catch` clause the current exception matches, or
does not match any).
WasmEHPrepare is a pass that does some IR-level preparation before
instruction selection. Previously one of the things we did in this pass
was to convert `wasm.get.exception` intrinsic calls to
`wasm.extract.exception` intrinsics. Their semantics were the same
except `wasm.extract.exception` did not have a token argument. We
maintained these two separate intrinsics with the same semantics because
instruction selection couldn't handle token arguments. This
`wasm.extract.exception` intrinsic was later converted to
`extract_exception` instruction in instruction selection, which was a
pseudo instruction to implement `br_on_exn`. `br_on_exn` pushed
an extracted value onto the value stack after the `end` instruction of a
`block`, but LLVM does not have a way of modeling that kind of behavior,
so this pseudo instruction was used to pull an extracted value out of
thin air, like this:
```
block $l0
...
br_on_exn $cpp_exception $l0
...
end
extract_exception ;; pushes values onto the stack
```
In the new spec, we don't need this pseudo instruction anymore because
`catch` itself returns a value and we don't have `br_on_exn` anymore. In
the spec `catch` returns multiple values (like `br_on_exn`), but here we
assume it only returns a single i32, which is sufficient to support C++.
So this renames `wasm.get.exception` intrinsic to `wasm.catch`. Because
this CL does not yet contain instruction selection for `wasm.catch`
intrinsic, all `RUN` lines in exception.ll, eh-lsda.ll, and
cfg-stackify-eh.ll, and a single `RUN` line in wasm-eh.cpp (which is an
end-to-end test from C++ source to assembly) fail. So this CL
temporarily disables those `RUN` lines, and for those test files without
any valid remaining `RUN` lines, adds a dummy `RUN` line to make them
pass. These tests will be reenabled in later CLs.
Reviewed By: dschuff, tlively
Differential Revision: https://reviews.llvm.org/D94039
1. Break a MUL with a specific constant into an SLLI and an ADD/SUB on
riscv32 with the M extension.
2. Break a MUL with a specific constant into two SLLIs and an ADD/SUB,
if the constant needs a pair of LUI/ADDI to construct.
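A sketch of case 1 with a made-up constant: multiplying by 9 becomes a
shift and an add.
```
# a0 * 9 = (a0 << 3) + a0
slli a1, a0, 3
add  a0, a1, a0
```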
Reviewed by: craig.topper
Differential Revision: https://reviews.llvm.org/D93619
Treat a non-atomic volatile load and store as a relaxed atomic at
system scope for the address spaces accessed. This will ensure all
relevant caches will be bypassed.
A volatile atomic is not changed and still only bypasses caches up to
the level specified by the SyncScope operand.
Differential Revision: https://reviews.llvm.org/D94214
The ISel patterns we have for truncating to i1's under MVE do not seem
to be correct. Instead custom lower to icmp(ne, and(x, 1), 0).
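In IR terms the custom lowering is equivalent to (a sketch for a v4i32
source):
```
%m = and <4 x i32> %x, <i32 1, i32 1, i32 1, i32 1>
%r = icmp ne <4 x i32> %m, zeroinitializer
```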
Differential Revision: https://reviews.llvm.org/D94226
The `wasm_rethrow_in_catch` intrinsic and builtin are used to rethrow
an exception when the exception is caught but there is no
matching clause within the current `catch`. For example,
```
try {
foo();
} catch (int n) {
...
}
```
If the caught exception does not correspond to C++ `int` type, it should
be rethrown. This intrinsic/builtin pair was named `rethrow_in_catch`
because at the time I thought there would be another intrinsic for C++'s
`throw` keyword, which rethrows an exception. It turned out that `throw`
keyword doesn't require wasm's `rethrow` instruction, so we rename
`rethrow_in_catch` to just `rethrow` here.
Reviewed By: dschuff, tlively
Differential Revision: https://reviews.llvm.org/D94038
Support the pack_f32p and pack_f32a intrinsic instructions, with regression tests.
Reviewed By: simoll
Differential Revision: https://reviews.llvm.org/D94296
This implements vp_add, vp_and for the VE target by lowering them to the
VVP_* layer. We also add helper functions for VP SDNodes (isVPSDNode,
getVPMaskIdx, getVPExplicitVectorLengthIdx).
Reviewed By: kaz7
Differential Revision: https://reviews.llvm.org/D93766
Clean up ISel patterns for LSV and LVS before upstreaming more
hand-written ISel patterns.
Reviewed By: simoll
Differential Revision: https://reviews.llvm.org/D94291
Fixes a crash caused by D91255, when LLVMTy is null when
calling changeExtendedVectorElementType.
Differential Revision: https://reviews.llvm.org/D94234
We do this mostly to be able to test the insert_vector_elt isel
patterns. Without this, most single element insertions show up as
`BUILD_VECTOR` in the backend.
Reviewed By: kaz7
Differential Revision: https://reviews.llvm.org/D93759
Define the `vfsqrt` IR intrinsics for the respective V instructions.
Authored-by: Roger Ferrer Ibanez <rofirrim@gmail.com>
Co-Authored-by: Evandro Menezes <evandro.menezes@sifive.com>
Differential Revision: https://reviews.llvm.org/D93745
There are only two used in the IR optimization pipeline.
Port these and add them to the default pipeline.
Similar to https://reviews.llvm.org/D93863.
I added -mtriple to some tests since under the new PM, the passes are
only available when the TargetMachine is specified.
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D93930
In https://reviews.llvm.org/D88138 this was incorrectly added with
registerOptimizerLastEPCallback(), when it should be
registerLoopOptimizerEndEPCallback(), matching the legacy PM's
EP_LoopOptimizerEnd.
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D93929
The loop index was shadowing the container name.
It seems that we can just not use a for-range loop here since there is
an induction variable anyway.
Differential Revision: https://reviews.llvm.org/D94254
A struct in C passed by value did not get debug information. Such values are currently
lowered to a Wasm local even in -O0 (not to an alloca like on other archs), which becomes
a Target Index operand (TI_LOCAL). The DWARF writing code was not emitting locations
for TIs specifically if the location is a single range (not a list).
In addition, the ExplicitLocals pass which removes the ARGUMENT pseudo instructions did
not update the associated DBG_VALUEs, and couldn't even find these values since the code
assumed such instructions are adjacent, which is not the case here.
Also fixed asm printing of TIs, as needed by a test.
Differential Revision: https://reviews.llvm.org/D94140
Make the sequence of passes to select and rewrite instructions to
physical registers be a target callback. This is to prepare to allow
targets to split register allocation into multiple phases.
There are various hacks working around limitations in
handleAssignments, and the logical split between different parts isn't
correct. Start separating the type legalization to satisfy going
through the DAG infrastructure from the code required to split into
register types. The type splitting should be moved to generic code.
Don't directly dereference a dyn_cast<> - use cast<> so we assert for the correct type.
Also, simplify the for loop to a range loop.
Fixes clang static analyzer warning.
This patch fixes a bug introduced in the patch:
https://reviews.llvm.org/D93030
This patch moves the test for scalable vectors to be the first check
performed. This avoids the AArch64 Gather and Scatter cost model
computing the number of vector elements for something that is not a
vector and therefore crashing.
The patterns that want to use 'vnot' use a custom PatFrag. This is
because 'vnot' uses immAllOnesV which implicitly uses BUILD_VECTOR
rather than SPLAT_VECTOR.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D94078
The PPCSubTarget variable has been replaced with the Subtarget variable. This
removes the remaining instances of PPCSubTarget as they are no longer necessary.
nxvXi1 types are legal with the V extension and that's the result
vmseq/vmsne/vmslt/etc instructions return.
No test cases yet because the setcc isel patterns aren't in
and we'll need more than basic tests to observe this. I locally
tested that this plus D947078, D94168, D94142, and D94149
was enough to be able to handle the overflow result from
llvm.sadd.overflow.
The CSR lists control which registers are spilled and reloaded in the
prologue and epilogue. The stack pointer is managed explicitly, and
should never be pushed or popped. Remove it from these lists. This
affected regcall and preserve_all / preserve_most.
Differential Revision: https://reviews.llvm.org/D94118
Performing this rearrangement allows for existing patterns
to match cases where the vector may be built after an extend,
instead of before.
Differential Revision: https://reviews.llvm.org/D91255
BRB IALL: Invalidate the Branch Record Buffer
BRB INJ: Branch Record Injection into the Branch Record Buffer
Parser changes based on work by Simon Tatham.
These are two-word mnemonics. The assembly parser works by special-casing
the mnemonic in order to parse the second word as a plain identifier token.
Reviewed by: MarkMurrayARM
Differential Revision: https://reviews.llvm.org/D93899
The new Power10 instruction vsrq was being given the wrong shift vector.
The original code assumed that the shift would be found in bits 121 to 127.
This is not correct. The shift is found in bits 57 to 63.
This can be fixed by swapping the first and second double words.
Reviewed By: nemanjai, #powerpc
Differential Revision: https://reviews.llvm.org/D94113
Same as a9b6440edd, use zanyext to treat any_extends as zero extends
during lowering to create addw/addl/subw/subl nodes.
Differential Revision: https://reviews.llvm.org/D93835
Similar to 78d8a821e2 but for ARM, this handles any_extend whilst
creating MULL nodes, treating them as zextends.
Differential Revision: https://reviews.llvm.org/D93834
This adds an extra tablegen PatFrag, zanyext, which matches either any
extend or zext and uses that in the aarch64 backend to handle any
extends in addw/addl/subw/subl patterns.
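A sketch of what such a PatFrags definition looks like (details
illustrative, not necessarily the exact in-tree definition):
```
def zanyext : PatFrags<(ops node:$src),
                       [(zext node:$src),
                        (anyext node:$src)]>;
```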
Differential Revision: https://reviews.llvm.org/D93833
Demanded bits may turn a sext or zext into an anyext if the top bits are
not needed. This currently prevents the lowering to instructions like
mull, addl and addw. This patch fixes the mull generation by keeping it
simple and treating them like zextends.
Differential Revision: https://reviews.llvm.org/D93832
Extend PEI to emit a DWARF expression for StackOffsets that have
a fixed and scalable component. This means the expression that needs
to be added is either:
<base> + offset
or:
<base> + offset + scalable_offset * scalereg
where for SVE, the scale reg is the Vector Granule Dwarf register, which
encodes the number of 64bit 'granules' in an SVE vector and which
the debugger can evaluate at runtime.
Reviewed By: jmorse
Differential Revision: https://reviews.llvm.org/D90020
There is no test coverage for the mulhs or mulhu patterns as I can't
get the DAGCombiner to generate them for scalable vectors. There are a
few places that still need updating for that to work. I left the
patterns in regardless as they are correct.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D94073
If the return values can't be lowered to registers,
SelectionDAG performs the sret demotion. This patch
contains the basic implementation of sret demotion
for the GlobalISel pipeline.
Furthermore, targets should bring relevant changes
during lowerFormalArguments, lowerReturn and
lowerCall to make use of this feature.
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D92953
This patch updates X86InstCombineIntrinsic.cpp to use the newly updated CreateShuffleVector.
The tests are updated because the updated CreateShuffleVector uses poison value for the second vector.
If I didn't miss something, the masks in the tests choose elements from the first vector only; therefore the tests have equivalent behavior.
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D94059
This is to avoid unnecessary analysis since amdgpu.noclobber is only
used for globals.
Reviewers: arsenm
Fixes: SWDEV-239161
Differential Revision: https://reviews.llvm.org/D94107
ComplexPatterns are kind of weird: they don't call any of the predicates on their operands, and their "complexity", used for tablegen ordering purposes in the matcher table, is hand-specified.
This started as an attempt to just use sext_inreg + SLOIPat to implement SLOIW just to have one less Select function. The matching for the or+shl is the same as long as you know the immediate is less than 32 for SLOIW. But that didn't work out because using uimm5 with SLOIPat didn't do anything if it was a ComplexPattern.
I realized I could just use a PatFrag with the opcodes I wanted to match and an immediate predicate would then evaluate correctly. This also computes the complexity just like any other pattern does. Then I just needed to check the constraints on the immediates in the predicate. Conveniently the predicate is evaluated after the fragment has been matched. So the structure has already been checked, we just need to find the constants.
I'll note that this is unusual; I didn't find any other targets looking through operands in a PatFrag predicate. There is a PredicateCodeUsesOperands feature that can be used to collect the operands into an array that is used by AMDGPU/VOP3Instructions.td. I believe that feature exists to handle commuted matching, but since the nodes here use constants, they aren't ever commuted.
Differential Revision: https://reviews.llvm.org/D91901
vmsltu.vi v0, v1, 0 is always false since there is no unsigned number
less than 0. vmsleu.vi v0, v1, -1 on the other hand is always true,
since -1 will be considered unsigned max and all numbers are <=
unsigned max.
A similar problem exists for vmsgeu.vi v0, v1, 0 which is always true,
but becomes vmsgtu.vi v0, v1, -1 which is always false.
To match the GNU assembler we'll emit vmsne.vv and vmseq.vv with
the same register for these cases instead.
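That is (a sketch):
```
vmsltu.vi v0, v1, 0     # always false: nothing is unsigned-< 0
# is emitted as the equally always-false:
vmsne.vv  v0, v1, v1
```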
I'm using AsmParserOnly pseudo instructions here because we can't
match an explicit immediate in an InstAlias. And we can't use a
AsmOperand for the zero because the output we want doesn't use an
immediate so there's nowhere to name the AsmOperand we want to use.
To keep the implementations similar I'm also handling signed with
pseudo instructions even though they don't have this issue. This
way we can avoid the special renderMethod that decremented by 1 so
the immediate we see for the pseudo instruction in processInstruction
is 0 and not -1. Another option might have been to have a different
simm5_plus1 operand for the unsigned case or just live with the
immediate being pre-decremented. I felt this way was clearer, but I'm
open to other opinions.
Reviewed By: frasercrmck
Differential Revision: https://reviews.llvm.org/D94035
This alias for andi x, 255 was recently added to the spec. If we
print it, code we output can't be compiled with -fno-integrated-as
unless the GNU assembler is also a version that supports the alias.
Reviewed By: lenary
Differential Revision: https://reviews.llvm.org/D93826
There are vmsle(u).vx and vmsle(u).vi instructions, but there is
only vmslt(u).vx and no vmslt(u).vi. vmslt(u).vi can be emulated
for some immediates by decrementing the immediate and using vmsle(u).vi.
To avoid the user needing to know about this, this patch does this
conversion.
The assembler does the same thing for vmslt(u).vi and vmsge(u).vi
pseudoinstructions. There is no vmsge(u).vx intrinsic or
instruction so this patch is limited to vmslt(u).
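For example (an illustrative immediate):
```
# there is no vmslt.vi; "va < 5" decrements the immediate instead:
vmsle.vi v0, v8, 4
```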
Reviewed By: frasercrmck
Differential Revision: https://reviews.llvm.org/D94070
It was removed in GFX10 GPUs, but LLVM could
generate it.
Reviewed By: rampitec, arsenm
Differential Revision: https://reviews.llvm.org/D94020
AVX512 has fast truncation ops, but if the truncation source is a concatenation of subvectors then its likely that we can use PACK more efficiently.
This is only guaranteed to work for truncations to 128/256-bit vectors as the PACK works across 128-bit sub-lanes, for now I've just disabled 512-bit truncation cases but we need to get them working eventually for D61129.
Use instead of the isa_and_nonnull<StoreInst> and use the StoreInst::getPointerOperand wrapper instead of a hardcoded Instruction::getOperand.
Looks cleaner and avoids a spurious clang static analyzer null dereference warning.
Convert it to v_fma_legacy_f32 if it is profitable to do so, just like
other mac instructions that are converted to their mad equivalents.
Differential Revision: https://reviews.llvm.org/D94010
Support EH_SJLJ_LONGJMP, EH_SJLJ_SETJMP, and EH_SJLJ_SETUP_DISPATCH
for SjLj exception handling. NC++ uses SjLj exception handling, so
implement it first. Add regression tests also.
Reviewed By: simoll
Differential Revision: https://reviews.llvm.org/D94071
CTLZ and CTPOP are lowered to CLZ and CNT instructions respectively.
CTTZ is not a native SVE operation but is instead lowered to:
CTTZ(V) => CTLZ(BITREVERSE(V))
In the case of fixed-length support using SVE we also lower CTTZ
operating on NEON-sized vectors because of its reliance on BITREVERSE,
which is also lowered to SVE instructions at these lengths.
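A sketch of the resulting SVE sequence (predicate/register choices
illustrative):
```
rbit z0.s, p0/m, z1.s   // reverse the bits of each element
clz  z0.s, p0/m, z0.s   // leading zeros of reversed = trailing zeros
```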
Differential Revision: https://reviews.llvm.org/D93607