Commit Graph

4619 Commits

Author SHA1 Message Date
David Green 5e1a9d319d [ARM] Add lowering for bf16 neon vtrn, vzip and vuzp.
These go via Dag2Dag, which works better when based on element sizes
rather than the exact element types.
2022-10-02 15:34:37 +01:00
David Green f2fde99461 [ARM] More bf16 shuffle handling, including perfect shuffles. 2022-10-02 14:31:51 +01:00
David Green 8193f0d1d2 [ARM] Add tablegen patterns for bf16 vrev 2022-10-02 13:42:14 +01:00
David Green 58369c8631 [ARM] Add tablegen patterns for bf16 vext
This adds missing tablegen patterns for VEXT, identical to the fp16
patterns as they only use baseline Neon operations.
Part of fixing #57770.
2022-10-02 12:45:58 +01:00
David Green 3651635eca [ARM][DAG] BF16 constant handling.
Much like f16 and f32, we shouldn't try to shrink bf16 to a smaller fp
constant. The code may not be optimal, but this allows us to legalize
bf16 constants under Arm without errors.
2022-10-02 11:51:08 +01:00
Filipp Zhinkin 945a1468c9 [ARM] Support all versions of AND, ORR, EOR and BIC in optimizeCompareInstr
Combine a compare with zero and any version of the AND, ORR, EOR and BIC instructions into the corresponding S-suffixed version.
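As a rough illustration (hypothetical C, not from the patch) of the pattern this enables:

```c
// The separate AND + CMP pair becomes a single flag-setting ANDS,
// letting optimizeCompareInstr delete the compare with zero.
int any_common_bits(unsigned a, unsigned b) {
  return (a & b) != 0;  // and r0, r0, r1 ; cmp r0, #0  ->  ands r0, r0, r1
}
```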

Related issue: https://github.com/llvm/llvm-project/issues/57122

Reviewed By: efriedma, samtebbs

Differential Revision: https://reviews.llvm.org/D131786
2022-10-01 12:41:37 +03:00
Archibald Elliott ff4027d152 [ARM] Support fp16/bf16 using t constraint
fp16 and bf16 values can be used in GCC's inline assembly using the "t"
constraint, which means "VFP floating-point registers s0-s31" - fp16 and
bf16 values are stored in S registers too.

This change ensures that LLVM is compatible with GCC for programs that
use fp16 and the 't' constraint.
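A minimal sketch of the now-supported usage (hypothetical code; assumes a target with fullfp16 so that vmul.f16 exists):

```c
// fp16 values live in S registers, so the "t" constraint (s0-s31) applies.
__fp16 square(__fp16 x) {
  __fp16 r;
  asm("vmul.f16 %0, %1, %1" : "=t"(r) : "t"(x));
  return r;
}
```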

Fixes #57753

Differential Revision: https://reviews.llvm.org/D134553
2022-09-28 14:48:21 +01:00
Momchil Velikov 6602110152 [ARM] Enable and/cmp0 folding
The `CodeGenPrepare` pass can sink a bitwise `and` used by a compare with
zero into the basic blocks where the users are. This operation is
guarded by a lowering hook, which is disabled for ARM. In the ARM
architecture versions from v7-M up, these two operations can be folded
into a `tst rN, #imm` instruction. Sinking the `and` can also enable
the cmov-to-bfi DAG combiner.
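As an illustrative sketch (not taken from the affected benchmarks), this is the shape of code that benefits:

```c
// On v7-M and up, the and + compare-with-zero pair folds into tst r0, #4.
int flag_set(unsigned status) {
  return (status & 0x4) != 0;
}
```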

This patch fixes some benchmark regressions caused
by https://reviews.llvm.org/D129370 as well as scoring slightly better overall.

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D134360
2022-09-26 11:31:23 +01:00
Momchil Velikov a412e9cd40 [ARM] Enable and/cmp0 folding - precommit test
Precommit test for D134360

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D134358
2022-09-26 11:05:49 +01:00
David Green 2b187effbd [ARM] Fix check lines in memcpy-inline.ll test. NFC
Commit c442698 updated the check lines in this file, but did so in a
way that removed a number of the existing checks, as the
update_llc_test_checks script does not understand all triples. This
fixes it up as needed to keep testing Thumb1 code.
2022-09-24 21:29:41 +01:00
Guillaume Chatelet c442698091 [NFC] update_llc_test_checks llvm/test/CodeGen/ARM/memcpy-inline.ll 2022-09-23 09:00:38 +00:00
Filipp Zhinkin fa67e281b2 [ARM] Add more tests on instructions fusion with comparison with zero; NFC
Baseline tests for D131786
2022-09-17 11:49:58 +03:00
Liqiang Tao 2e37557fde StackProtector: ensure stack checks are inserted before the tail call
The IR stack protector pass should insert stack checks before tail
calls, not only musttail calls, so that the attributes `sspreq` and
`tail call`, which are emitted by llvm-opt, can both be honored by
llvm-llc.

Reviewed By: compnerd

Differential Revision: https://reviews.llvm.org/D133860
2022-09-16 22:24:46 +08:00
Craig Topper 38ffa2bb96 [LegalizeTypes] Improve splitting for urem/udiv by constant for some constants.
For remainder:
If (1 << (BitWidth / 2)) % Divisor == 1, we can add the high and low halves
together and use a (BitWidth / 2) urem. If (BitWidth / 2) is a legal integer
type, this urem will be expanded by DAGCombiner using a multiply by a magic
constant. We do have to take into account that adding the high and low halves
together can produce a carry, making the result a (BitWidth / 2) + 1 bit
number, so we need to also add back in the carry from the first addition.

For division:
We can use the above trick to compute the remainder, subtract that
remainder from the dividend, then multiply by the multiplicative
inverse of the Divisor modulo (1 << BitWidth).

This is based on the section "Remainder by Summing Digits" in
Hacker's Delight.

The remainder trick is similar to a trick you may have learned for
determining if a decimal number is divisible by 3. You can add all the
digits together and see if the sum is divisible by 3. If you're not sure
if the sum is divisible by 3, you can add its digits together. This
can be repeated until you have a single decimal digit. If that digit
is 3, 6, or 9, then the original number is divisible by 3. This works
because 10 % 3 == 1.

gcc already does this same trick. There are additional tricks gcc
does for urem, as well as for srem, udiv, and sdiv, that I plan to add in
future patches.
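A hand-written sketch of the remainder expansion for Divisor == 3 (illustrative names; the patch builds the equivalent DAG nodes):

```c
#include <stdint.h>

// x = hi * 2^32 + lo and 2^32 % 3 == 1, so x % 3 == (hi + lo) % 3.
// The 32-bit add may carry, and 2^32 % 3 == 1 again, so add the carry back.
uint32_t urem64_by_3(uint64_t x) {
  uint32_t lo = (uint32_t)x;
  uint32_t hi = (uint32_t)(x >> 32);
  uint32_t sum = lo + hi;
  uint32_t carry = sum < lo;   // 1 if the addition wrapped
  return (sum + carry) % 3;    // 32-bit urem, lowered via magic multiply
}
```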

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D130862
2022-09-12 10:34:52 -07:00
Filipp Zhinkin c4d0509e3b [ARM] Add tests on instructions fusion with comparison with zero; NFC
Baseline tests for D131786
2022-09-08 20:24:32 +03:00
Matthias Gehre 2090e85fee [llvm/CodeGen] Enable the ExpandLargeDivRem pass for X86, Arm and AArch64
This adds the ExpandLargeDivRem pass to the default pass pipeline.
The limit at which it expands div/rem instructions is configured
via a new TargetTransformInfo hook (default: no expansion).
The X86, Arm and AArch64 backends implement this hook to expand div/rem
instructions with more than 128 bits.
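For example (hypothetical snippet using C23 _BitInt), a division like this is now expanded in-line rather than left to an unavailable libcall:

```c
// A 256-bit udiv: above the 128-bit threshold, so ExpandLargeDivRem
// rewrites it into an expanded loop-based implementation.
unsigned _BitInt(256) div256(unsigned _BitInt(256) a, unsigned _BitInt(256) b) {
  return a / b;
}
```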

Differential Revision: https://reviews.llvm.org/D130076
2022-09-06 15:32:04 +01:00
John Brawn e26cadcc32 [ARM] Constant pools need 4-byte alignment if we only have tADR
When the only ADR instruction we have is the 16-bit Thumb one, all
constant pool entries need to be 4-byte aligned, as tADR's offset is
a multiple of 4.

It looks like previously there happened to be no situations in which
we encountered a constant pool entry with alignment less than 4, so
failing to do this didn't cause any problems, but the expansion of
cttz to a table added by D128911 does use a constant pool with
alignment 1, so we now need to handle it correctly.

Differential Revision: https://reviews.llvm.org/D133199
2022-09-06 11:36:12 +01:00
Daniil Fukalov b4e1b0e00d [LiveIntervals] Split live intervals on any dead def
Each dead def of the same virtual register must be split into multiple
virtual registers with separate live intervals to avoid a MachineVerifier
error.

Partially fixes https://github.com/llvm/llvm-project/issues/56050 and
https://github.com/llvm/llvm-project/issues/56051

Reviewed By: qcolombet

Differential Revision: https://reviews.llvm.org/D130477
2022-09-02 20:00:22 +03:00
Alex Richardson df00dac828 [ARM] Use getSymbolPreferLocal() in GetARMGVSymbol
This allows relaxing some relocations to symbol+offset instead of emitting
a relocation against a symbol.

Reviewed By: MaskRay

Differential Revision: https://reviews.llvm.org/D131433
2022-08-26 09:34:06 +00:00
Alex Richardson 0483b00875 Mark the $local function begin symbol as a function
While this does not matter for most targets, when building for Arm Morello,
we have to mark the symbol as a function and add size information, so that
LLD can correctly evaluate relocations against the local symbol.
Since Morello is an out-of-tree target, I tried to reproduce this with
in-tree backends, and with the previous reviews applied this results in
a noticeable difference when targeting Thumb.

Background: Morello uses a method similar to Thumb, where the encoding
mode is specified in the LSB of the symbol. If we don't mark the target
as a function, the relocation will not have the LSB set and calls will
end up using the wrong encoding mode (which will almost certainly crash).

Reviewed By: MaskRay

Differential Revision: https://reviews.llvm.org/D131429
2022-08-26 09:34:04 +00:00
Stephen Long 525af9f8eb [MC] Omit fill value if it's zero when emitting code alignment
Previously, we were generating zeroes when generating code alignments for AArch64, but now we should omit the value and let the assembler choose to generate nops or zeroes.

Reviewed By: efriedma, MaskRay

Differential Revision: https://reviews.llvm.org/D132508
2022-08-25 10:07:33 -07:00
Alvin Wong c0214db51a [llvm] Mark CFGuard fn ptr symbol as DSO local and add tests for mingw
For the mingw target, if a symbol is not marked DSO local, a `.refptr`
is generated for it. This makes CFG check calls use an extra pointer
dereference, which adds extra overhead compared to the MSVC version,
so mark the CFG guard check function pointer as DSO local to prevent
this. This should have no effect on the MSVC target.

Also adapt the existing cfguard tests to run for mingw targets, so that
this change is checked.

Reviewed By: rnk

Differential Revision: https://reviews.llvm.org/D132331
2022-08-23 23:39:39 +03:00
Alan Zhao 8c8cfaaf0a Revert "[ARM] Use getSymbolPreferLocal() in GetARMGVSymbol"
This reverts commit 6db15a82cc.

Reverted because this breaks official Chrome builds targeting Android on
arm: https://crbug.com/1354305

Repro: https://drive.google.com/file/d/1pgQI2adwx3DJJqIYvMY4i249ouHU0rmu/view?usp=sharing
2022-08-22 16:16:37 -04:00
Eli Friedman cfd2c5ce58 Untangle the mess which is MachineBasicBlock::hasAddressTaken().
There are two different senses in which a block can be "address-taken".
There can be a BlockAddress involved, which means we need to map the
IR-level value to some specific block of machine code.  Or there can be
constructs inside a function which involve using the address of a basic
block to implement certain kinds of control flow.

Mixing these together causes a problem: if target-specific passes mark
random blocks "address-taken" and we have a BlockAddress, we can't
actually tell which MachineBasicBlock corresponds to the BlockAddress.

So split this into two separate bits: one for BlockAddress, and one for
the machine-specific bits.
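A small example of the first sense, using GCC's labels-as-values extension (illustrative only):

```c
// &&resume produces an IR-level BlockAddress, so codegen must know
// exactly which MachineBasicBlock the label maps to.
void *resume_point(void) {
  return &&resume;
resume:
  return (void *)0;
}
```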

Discovered while trying to sort out related stuff on D102817.

Differential Revision: https://reviews.llvm.org/D124697
2022-08-16 16:15:44 -07:00
David Green dfc95bab07 [DAG] Ensure more Legal BUILD_VECTOR elements types in shuffle->And combine
This is a followup to D131350, which caused another problem for i64
types being split into i32 on i32 targets. This patch tries to make sure
that either illegal types are OK, or that the element types of a
buildvector are legal and at least as large as the original elements.

Differential Revision: https://reviews.llvm.org/D131883
2022-08-15 14:41:45 +01:00
Filipp Zhinkin 1626ee6a95 [DAGCombine] Hoist shifts out of a logic operations tree.
Hoist and combine shift operations from logic operations tree:
logic (logic (SH x0, s), y), (logic (SH x1, s), z)  --> logic (SH (logic x0, x1), s), (logic y, z)

The transformation improves code generated for some cases related to the issue https://github.com/llvm/llvm-project/issues/49541.
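In C terms the transform looks roughly like this (hypothetical example):

```c
// Two shifts by the same amount s feeding an OR tree...
unsigned before(unsigned x0, unsigned x1, unsigned y, unsigned z, unsigned s) {
  return ((x0 << s) | y) | ((x1 << s) | z);
}

// ...collapse into one shift of the combined value.
unsigned after(unsigned x0, unsigned x1, unsigned y, unsigned z, unsigned s) {
  return ((x0 | x1) << s) | (y | z);
}
```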

Correctness:
https://alive2.llvm.org/ce/z/pVqVgY
https://alive2.llvm.org/ce/z/YVvT-q
https://alive2.llvm.org/ce/z/W5zTBq
https://alive2.llvm.org/ce/z/YfJsvJ
https://alive2.llvm.org/ce/z/3YSyDM
https://alive2.llvm.org/ce/z/Bs2kzk
https://alive2.llvm.org/ce/z/EoQpzU
https://alive2.llvm.org/ce/z/Jnc_5H
https://alive2.llvm.org/ce/z/_LP6k_
https://alive2.llvm.org/ce/z/KvZNC9

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D131189
2022-08-12 12:42:16 +03:00
Pengxuan Zheng 9bb6622423 [ARM] Do not use LOAD_STACK_GUARD with ROPI/RWPI
ROPI/RWPI are not supported with LOAD_STACK_GUARD currently.

Reviewed By: nickdesaulniers, rengolin

Differential Revision: https://reviews.llvm.org/D131427
2022-08-09 14:59:08 -07:00
Filipp Zhinkin ea323a4bd5 [X86][ARM] Update tests for bitwise logic trees of shifts; NFC
Baseline tests for D131189.
2022-08-09 20:57:14 +03:00
Alex Richardson 6db15a82cc [ARM] Use getSymbolPreferLocal() in GetARMGVSymbol
This allows relaxing some relocations to STT_SECTION symbol+offset
instead of emitting a relocation against a symbol.

Reviewed By: MaskRay

Differential Revision: https://reviews.llvm.org/D131433
2022-08-09 09:53:47 +00:00
Alex Richardson fa210dd67b [Thumb] Baseline test for incorrect relocation with -ffunction-sections
When calling a dso_local function, we end up creating a call against the
.Lfoo$local label. This might be converted to a relocation against a
section if there is such a matching one (which is a lot more likely with
-ffunction-sections) and then the LSB (Thumb flag) will be lost.

I originally noticed this with Morello LLVM (which uses the LSB to indicate
a C64 encoding mode function). The missing LSB meant that ld.lld would
insert a thunk that switches encoding mode which then resulted in errors
at runtime since functions were being entered with the wrong encoding mode.
Since the Morello backend is not upstream, I looked if any in-tree
backends could also be affected by the missing STT_FUNC flag and noticed
that Thumb is also affected (although the bug is rather difficult to
trigger - it currently requires inline assembly).

Reviewed By: MaskRay

Differential Revision: https://reviews.llvm.org/D131432
2022-08-09 09:53:47 +00:00
Alex Richardson 9a2b14afa0 [ARM] Emit local aliases (.Lfoo$local) for functions
ARMAsmPrinter::emitFunctionEntryLabel() was not calling the base class
function so the $local alias was not being emitted. This should not have
any functional effect right now since ARM does not generate different code
for the $local symbols, but it could be improved in the future.

Reviewed By: MaskRay

Differential Revision: https://reviews.llvm.org/D131392
2022-08-09 09:53:47 +00:00
Alex Richardson 7925341d93 [ARM] Add a baseline test for D131392
We should be emitting .Lfoo$local aliases for dso_local functions.
2022-08-09 09:53:47 +00:00
Alex Richardson 67b075319b [ARM] Add a baseline elf-preemption test
This is based on the RISC-V elf-preemption.ll (converted to opaque
pointers) and is useful for test coverage for the patch series starting
with D131392.
2022-08-09 09:53:46 +00:00
Filipp Zhinkin 6c52f82d77 [X86][ARM] Add tests for bitwise logic trees of shifts; NFC
Baseline tests for D131189.
2022-08-08 21:09:14 +03:00
Shubham Narlawar ab4fc87a9d [DAG] Emit table lookup from TargetLowering::expandCTTZ()
This patch emits a table lookup in expandCTTZ.

Context -
https://reviews.llvm.org/D113291 transforms a set of IR instructions into
the cttz intrinsic, but there are some targets which do not support CTTZ
or CTLZ. Hence, I generate a table lookup in TargetLowering::expandCTTZ().
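For reference, the classic multiply-and-lookup form such a table expansion corresponds to (the well-known de Bruijn sequence from Bit Twiddling Hacks; the emitted DAG differs in detail):

```c
static const int table[32] = {
   0,  1, 28,  2, 29, 14, 24,  3, 30, 22, 20, 15, 25, 17,  4,  8,
  31, 27, 13, 23, 21, 19, 16,  7, 26, 12, 18,  6, 11,  5, 10,  9};

// Count trailing zeros of a 32-bit value; assumes v != 0.
int cttz32(unsigned v) {
  return table[((v & -v) * 0x077CB531u) >> 27];
}
```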

Differential Revision: https://reviews.llvm.org/D128911
2022-08-08 12:08:05 +01:00
Simon Pilgrim e5e93b6130 [DAG] FoldConstantArithmetic - add initial support for undef elements in bitcasted binop constant folding
FoldConstantArithmetic can fold constant vectors hidden behind bitcasts (e.g. vXi64 -> v2Xi32 on 32-bit platforms), but currently bails if either vector contains undef elements. These undefs can often occur due to SimplifyDemandedBits/VectorElts calls recognising that the upper bits are often unnecessary (e.g. funnel-shift/rotate implicit-modulo and AND masks).

This patch adds a basic 'FoldValueWithUndef' handler that will attempt to constant fold if one or both of the ops are undef - so far this just handles the AND and MUL cases where we always fold to zero.

The RISCV codegen increase is interesting - it looks like the BUILD_VECTOR lowering was loading a constant pool entry but now (with all elements defined constant) it can materialize the constant instead?

Differential Revision: https://reviews.llvm.org/D130839
2022-08-08 11:53:56 +01:00
David Green 061e0189a3 [DAG] Ensure Legal BUILD_VECTOR elements types in shuffle->And combine
D129150 added a combine from shuffles to And that creates a BUILD_VECTOR
of constant elements. We need to ensure that the elements are of a legal
type, to prevent asserts during lowering.

Fixes #56970.

Differential Revision: https://reviews.llvm.org/D131350
2022-08-08 09:47:55 +01:00
David Green f8d976171f [ARM] Regenerate vector_store.ll tests. NFC 2022-08-07 12:46:28 +01:00
Lucas Prates ba9caf9170 [Arm] Fix parsing and emission of Tag_also_compatible_with eabi attribute
According to the ABI for the Arm Architecture, the value for the
Tag_also_compatible_with eabi attribute is represented by an NTBS entry.
This string value, in turn, is composed of a pair of tag+value encoded
in one of two formats:
- ULEB128: tag, ULEB128: value, 0.
- ULEB128: tag, NTBS: data.
(See section 3.3.7.3, "Secondary compatibility tag", in the Addenda to, and Errata in, the ABI for the Arm Architecture.)

Currently the Arm assembly parser and streamer ignore the encoding of
the attribute's NTBS value, which can result in incorrect attributes
being emitted in both assembly and object file outputs.

This patch fixes these issues by properly handling the value's encoding.
An update to llvm-readobj to properly handle the attribute's value will be
covered by a separate patch.

Patch by Victor Campos and Lucas Prates.

Reviewed By: vhscampos

Differential Revision: https://reviews.llvm.org/D129500
2022-08-01 13:28:01 +01:00
Simon Pilgrim 69d5a038b9 [DAG] Enable ISD::SRL SimplifyMultipleUseDemandedBits handling inside SimplifyDemandedBits
This patch allows SimplifyDemandedBits to call SimplifyMultipleUseDemandedBits in cases where the ISD::SRL source operand has other uses, enabling us to peek through the shifted value if we don't demand all the bits/elts.

This is another step towards removing SelectionDAG::GetDemandedBits and just using TargetLowering::SimplifyMultipleUseDemandedBits.

There are a few cases where we end up with extra register moves which I think we can accept in exchange for the increased ILP.

Differential Revision: https://reviews.llvm.org/D77804
2022-07-28 14:10:44 +01:00
Simon Pilgrim 529bd4f352 [DAG] SimplifyDemandedBits - don't early-out for multiple use values
SimplifyDemandedBits currently early-outs for multi-use values beyond the root node (just returning the knownbits), which is missing a number of optimizations as there are plenty of cases where we can still simplify when initially demanding all elements/bits.

@lenary has confirmed that the test cases in aea-erratum-fix.ll need refactoring and the current codegen increase is not a major concern.

Differential Revision: https://reviews.llvm.org/D129765
2022-07-27 10:54:06 +01:00
Nikita Popov dc84eeb62b [ARM] Test more atomic sizes with +atomics-32 feature (NFC)
Check that 8-bit and 16-bit atomics also work as expected. Also
fix the alignment on the 64-bit tests -- testing unaligned atomics
wasn't intended here.
2022-07-27 11:33:49 +02:00
Nikita Popov b1b1086973 [ARM] Add target feature to force 32-bit atomics
This adds a +atomics-32 target feature, which instructs LLVM to assume
that lock-free 32-bit atomics are available for this target, even
if they usually wouldn't be.

If only atomic loads/stores are used, then this won't emit libcalls.
If atomic CAS is used, then the user is responsible for providing
any necessary __sync implementations (e.g. by masking interrupts
for single-core privileged use cases).
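A sketch of the load/store-only case (hypothetical; with +atomics-32 on a thumbv6m target this should lower to a plain load rather than a libcall):

```c
#include <stdatomic.h>

// A lock-free 32-bit atomic load: no __atomic_* libcall needed.
int read_flag(const _Atomic int *flag) {
  return atomic_load_explicit(flag, memory_order_relaxed);
}
```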

See https://reviews.llvm.org/D120026#3674333 for context on this
change. The tl;dr is that the thumbv6m target in Rust has
historically made only atomic load/store available, which is
incompatible with the change from D120026, which switched these to
use libatomic.

Differential Revision: https://reviews.llvm.org/D130480
2022-07-27 10:00:31 +02:00
Simon Tatham 5c396be575 [llvm-objdump,ARM] Fix further test failures.
Further test-failure fallout from D130358. There were a handful of
uses of llvm-objdump in the CodeGen tests as well, which have taken me
longer to get to because more things had to be built.
2022-07-26 11:35:16 +01:00
David Green 4704da1374 [ARM] Fix Thumb2 compare being emitted in ExpandCMP_SWAP
Given a patch like D129506, using instructions not valid for the current
target feature set becomes an error. This fixes an issue in
ARMExpandPseudo::ExpandCMP_SWAP where Thumb2 compares were used in
Thumb1Only code, such as thumbv8m.baseline targets.

Differential Revision: https://reviews.llvm.org/D129695
2022-07-20 12:04:22 +01:00
Simon Pilgrim 9fc347aa4e [DAG] PromoteIntRes_BUILD_VECTOR - extend constant boolean vectors according to target BooleanContents
PromoteIntRes_BUILD_VECTOR currently always ANY_EXTENDs build vector operands, but if this is a constant boolean vector we're losing the useful ability to keep the vector matching the BooleanContents mode used by the target.

This patch extends constant boolean vectors according to target BooleanContents, allowing a number of additional all-bits folds (notably XOR -> NOT conversions) to occur.

Differential Revision: https://reviews.llvm.org/D129641
2022-07-20 10:49:31 +01:00
David Green e22576455f [ARM] Update atomic tests for D129695. NFC 2022-07-19 19:36:08 +01:00
Simon Pilgrim d8888e14a0 Revert rG14364200821f7b2d97edf6e78160c514800d3ec6 "[ARM] Regenerate reg_sequence.ll test checks"
Breaks on some apple machines
2022-07-16 17:32:58 +01:00
Simon Pilgrim 1436420082 [ARM] Regenerate reg_sequence.ll test checks 2022-07-16 17:10:35 +01:00
Simon Pilgrim a5d0122f75 [DAG] Canonicalize non-inlane shuffle -> AND if all non-inlane referenced elements are known zero
As mentioned on D127115, this patch attempts to recognise shuffle masks that could be simplified to an AND mask - we already have a similar transform that will fold AND -> 'clear mask' shuffle, but this patch handles cases where the referenced elements are not from the same lane indices but are known to be zero.
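A rough illustration using clang's vector builtins (hypothetical; the combine itself operates on the equivalent DAG nodes):

```c
typedef int v4i __attribute__((vector_size(16)));

// Lanes 6 and 4 pull from the all-zero operand, and not from matching
// lane positions, yet the whole shuffle is just x & {-1, 0, -1, 0}.
v4i keep_even_lanes(v4i x) {
  v4i zero = {0, 0, 0, 0};
  return __builtin_shufflevector(x, zero, 0, 6, 2, 4);
}
```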

Differential Revision: https://reviews.llvm.org/D129150
2022-07-16 11:38:24 +01:00