Commit Graph

63852 Commits

Heejin Ahn aa0b0fbbe6 [WebAssembly] Use `SDValue::getConstantOperandVal` (NFC)
Reviewed By: tlively

Differential Revision: https://reviews.llvm.org/D107499
2021-08-04 21:15:23 -07:00
Matt Jacobson 75abeb64ce [AVR] emit 'MCSA_Global' references to '__do_global_ctors' and '__do_global_dtors'
Emit references to '__do_global_ctors' and '__do_global_dtors' to allow
constructor/destructor routines to run.

Reviewed by: MaskRay

Differential Revision: https://reviews.llvm.org/D107133
2021-08-05 10:37:36 +08:00
Yonghong Song e52946b9ab BPF: avoid NE/EQ loop exit condition
Kuniyuki Iwashima reported in [1] that the llvm compiler may
convert a loop exit condition from "i < bound" to "i != bound", where
"i" is the loop index variable and "bound" is the upper bound.
If "bound" is not a constant, the verifier must assume "i != bound"
can remain true indefinitely, which causes a verifier failure since,
to the verifier, this is an infinite loop.

The fix is to avoid transforming "i < bound" into "i != bound".
In llvm, the transformation is done by the IndVarSimplify pass. The
compiler checks the cost of the loop update (i = i + 1) and, if the
cost is low enough, may transform "i < bound" into "i != bound".
This patch implements getArithmeticInstrCost() in the BPF
TargetTransformInfo class to return a higher cost for such an
operation, which prevents the transformation for the test case
added in this patch.
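
For illustration, a minimal C++ sketch of the kind of loop affected (names
hypothetical):

    // A loop whose exit test the verifier can bound:
    long sum(const int *a, long bound) {
      long s = 0;
      for (long i = 0; i < bound; i++) // IndVarSimplify may rewrite this
        s += a[i];                     // exit test to 'i != bound'
      return s;
    }
    // For a non-constant 'bound', the BPF verifier cannot prove that the
    // 'i != bound' form terminates, so it rejects the program.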

 [1] https://lore.kernel.org/netdev/1994df05-8f01-371f-3c3b-d33d7836878c@fb.com/

Differential Revision: https://reviews.llvm.org/D107483
2021-08-04 16:54:16 -07:00
Jessica Paquette ca2e053652 [AArch64][GlobalISel] Legalize wide vector G_PHIs
Clamp the max number of elements when legalizing G_PHI. This allows us to
legalize some common fallbacks like 4 x s64.

Here's an example: https://godbolt.org/z/6YocsEYTd

Had to add -global-isel-abort=0 to legalize-phi.mir to account for the
G_EXTRACT_VECTOR_ELT from the 32 x s8 G_PHI.
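
As a sketch (assumed shape, not the verbatim upstream rule), such a clamp is
expressed in the GlobalISel LegalizerInfo API roughly as:

    // Hypothetical excerpt from AArch64LegalizerInfo: cap vector G_PHIs at
    // two 64-bit elements, so e.g. 4 x s64 is broken into 2 x s64 pieces.
    getActionDefinitionsBuilder(G_PHI)
        .legalFor({s32, s64, v4s32, v2s64})
        .clampMaxNumElements(0, s64, 2);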

Differential Revision: https://reviews.llvm.org/D107508
2021-08-04 16:48:59 -07:00
Heejin Ahn 31a71a393f [WebAssembly] Make result of 'catch' inst variadic
`catch` instruction can have any number of result values depending on
its tag, but so far we have only needed a single i32 return value for
C++ exceptions, so the instruction was specified that way. But using the
instruction for SjLj handling requires multiple return values.

This makes `catch` instruction's results variadic and moves selection of
`throw` and `catch` instruction from ISelLowering to ISelDAGToDAG.
Moving `catch` to ISelDAGToDAG is necessary because I am not aware of
a good way to do instruction selection for variadic output instructions
in TableGen. This also moves `throw` because 1. `throw` and `catch`
share the same utility function and 2. there is really no reason we
should do that in ISelLowering in the first place. What we do is mostly
the same in both places, and moving them to ISelDAGToDAG allows us to
remove unnecessary mid-level nodes for `throw` and `catch` in
WebAssemblyISD.def and WebAssemblyInstrInfo.td.

This also adds handling for the new `catch` instruction to AsmTypeCheck.

Reviewed By: dschuff, tlively

Differential Revision: https://reviews.llvm.org/D107423
2021-08-04 14:05:33 -07:00
Fangrui Song 9c19b36f1c [X86] Remove -x86-experimental-pref-loop-alignment in favor of -align-loops 2021-08-04 13:23:57 -07:00
Michael Liao 5edc886e90 [amdgpu] Add an enhanced conversion from i64 to f32.
Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D107187
2021-08-04 15:33:12 -04:00
Reshabh Sharma dce35ef104 Revert "[AMDGPU] Handle functions in llvm's global ctors and dtors list"
This reverts commit d42e70b3d3.
2021-08-04 23:33:31 +05:30
Craig Topper 643ce70a64 [RISCV] Remove the _COMMUTABLE and _TA versions of FMA and wide FMA vector instructions.
Use a tail policy operand instead. Inspired by the work in D105092,
but without the intrinsic interface changes.

Reviewed By: frasercrmck

Differential Revision: https://reviews.llvm.org/D106512
2021-08-04 10:39:50 -07:00
Jessica Paquette d9279843b1 [AArch64][GlobalISel] Widen G_PHI before clamping it during legalization
This allows us to handle weird types like s88; we first widen to s128, then
clamp back down to s64.

https://godbolt.org/z/9xqbP46Mz

Also this makes it possible for GISel to legalize the case in pr48188.ll. It
now does the same thing as SDAG, although regalloc chooses different registers.

Differential Revision: https://reviews.llvm.org/D107417
2021-08-04 10:25:14 -07:00
Jessica Paquette 7d97de60b3 [AArch64][GlobalISel] Widen G_FPTO*I before clamping
Going through our legalization rules and doing some cleanup.

Widening and then clamping is usually easier than clamping and then widening.

This allows us to legalize some weird types like s88.
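
The ordering can be sketched with the LegalizerInfo API (assumed shape, not
the verbatim rule):

    // Hypothetical excerpt: widen the result to the next power of 2 first
    // (s88 -> s128), then clamp it back into the legal range (-> s64).
    getActionDefinitionsBuilder({G_FPTOSI, G_FPTOUI})
        .widenScalarToNextPow2(0)
        .clampScalar(0, s32, s64);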

Differential Revision: https://reviews.llvm.org/D107413
2021-08-04 10:19:26 -07:00
Andrea Di Biagio 7a1a35a1d1 [X86][SchedModel] Add missing ReadAdvance for some arithmetic ops (PR51318 and PR51322).
This fixes a bug where implicit uses of EFLAGS were not marked as ReadAdvance in
the RM/MR variants of ADC/SBB (PR51318).

This also fixes the absence of ReadAdvance for the register operand of
RMW arithmetic instructions (PR51322).

Differential Revision: https://reviews.llvm.org/D107367
2021-08-04 17:50:22 +01:00
Bradley Smith d9cc5d84e4 [AArch64][SVE] Combine bitcasts of predicate types with vector inserts/extracts of loads/stores
An insert subvector that is inserting the result of a vector predicate
sized load into undef at index 0, whose result is cast to a predicate
type, can be combined into a direct predicate load. Likewise, the same
applies to extract subvector, but in reverse.

The purpose of this optimization is to clean up cases that will be
introduced in a later patch where casts to/from predicate types from i8
types will use insert subvector, rather than going through memory early.

This optimization is done in SVEIntrinsicOpts rather than InstCombine to
re-introduce scalable loads as late as possible, to give other
optimizations the best chance possible to do a good job.

Differential Revision: https://reviews.llvm.org/D106549
2021-08-04 15:51:14 +00:00
Simon Wallis 9269752671 [AArch64] Fix assert AArch64TargetLowering::ReplaceNodeResults
The failing assertion was:

  Don't know how to custom expand this
  UNREACHABLE executed at llvm-project/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp:16788

The fix is to provide missing expansions for:
  case ISD::STRICT_FP_TO_UINT:
  case ISD::STRICT_FP_TO_SINT:

A test case is provided.

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D107452
2021-08-04 16:18:19 +01:00
Reshabh Sharma d42e70b3d3 [AMDGPU] Handle functions in llvm's global ctors and dtors list
This patch introduces a new code object metadata field, ".kind"
which is used to add support for init and fini kernels.

HSAStreamer will use function attributes, "device-init" and
"device-fini" to distinguish between init and fini kernels from
the regular kernels and will emit metadata with ".kind" set to
"init" and "fini" respectively.

To reduce the number of init and fini kernels, the ctors and
dtors present in the llvm's global.ctors and global.dtors lists
are called from a single init and fini kernel respectively.

Reviewed by: yaxunl

Differential Revision: https://reviews.llvm.org/D105682
2021-08-04 19:53:33 +05:30
Roman Lebedev 35c0848b57 [NFC][X86] combineX86ShuffleChain(): hoist Mask variable higher up
Having `NewMask` outside of an if and rebinding `BaseMask` `ArrayRef`
to it is confusing. Instead, just move the `Mask` vector higher up,
and change the code that earlier had no access to it but now does
to use `Mask` instead of `BaseMask`.

This has no other intentional changes.
2021-08-04 17:15:12 +03:00
Roman Lebedev 916cdc3d4b [NFC][X86] combineX86ShuffleChain(): rename inner Mask to avoid future shadowing
I want to hoist `Mask` variable higher up,
but then it would clash with this one.
So let's rename this one first.

There are no other intentional changes here other than said rename.
2021-08-04 17:15:12 +03:00
Tomas Matheson 40650f27b5 [ARM][atomicrmw] Fix CMP_SWAP_32 expand assert
This assert is intended to ensure that the high registers are not
selected when it is passed to one of the thumb UXT instructions. However
it was triggering even for 32 bit where no UXT instruction is emitted.

Fixes PR51313.

Differential Revision: https://reviews.llvm.org/D107363
2021-08-04 15:02:02 +01:00
Roman Lebedev f819e4c7d0 [X86] combineX86ShuffleChain(): canonicalize mask elts picking from splats
Given a shuffle mask, if it is picking from an input that is splat
given the current granularity of the shuffle, then adjust the mask
to pick from the same lane of the input as the mask element is in.
This may result in a shuffle being simplified into a blend.

I believe this is correct given that the splat detection matches the one
just above the new code.

My basic thought is that we might be able to get fewer regressions
by handling multiple insertions of the same value into a vector
if we form broadcasts+blend here, as opposed to D105390,
but I have not really thought this through,
and have not tried implementing it yet.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D107009
2021-08-04 16:55:04 +03:00
Simon Pilgrim 8cd40ece70 [X86] Rename X86 tuning feature flag FeatureHasFastGather -> FeatureFastGather
Match the naming style used by the other 'FeatureFast/FeatureSlow' tuning flags.
2021-08-04 13:07:50 +01:00
Simon Pilgrim 17e8ac0703 [X86] Move FeatureFastBEXTR from bdver2 features to tuning
Noticed while looking at the feature flag renaming suggested in D107370
2021-08-04 13:07:49 +01:00
Serge Pavlov 0c28a7c990 Revert "Introduce intrinsic llvm.isnan"
This reverts commit 16ff91ebcc.
Several errors were reported, mainly concerning test-suite execution
time. Reverted for investigation.
2021-08-04 17:18:15 +07:00
Simon Pilgrim fc8dee1ebb [X86] Split Subtarget ISA / Security / Tuning Feature Flags Definitions. NFC
Our list of slow/fast tuning feature flags has become pretty extensive and is randomly interleaved with ISA and Security (Retpoline etc.) flags, not even based on when the ISAs/flags were introduced, making it tricky to locate them. Plus we started treating tuning flags separately some time ago, so this patch tries to group the flags to match.

I've left them mostly in the same order within each group - I'm happy to rearrange them further if there are specific ISA or Tuning flags that you think should be kept closer together.

Differential Revision: https://reviews.llvm.org/D107370
2021-08-04 11:16:36 +01:00
Tim Northover d7b0e5525a X86: fix frame offset calculation with mandatory tail calls
If there's a region of the stack reserved for potential tail call arguments
(only the case when we guarantee tail calls will be honoured), this is right
next to the incoming stored return address, not necessarily next to the
callee-saved area, so combining the two into a single figure leads to incorrect
offsets in some edge cases.
2021-08-04 10:02:42 +01:00
Serge Pavlov 16ff91ebcc Introduce intrinsic llvm.isnan
Clang has the builtin function '__builtin_isnan', which implements the C
library function 'isnan'. This function is now implemented entirely in
clang codegen, which expands the function into a set of IR operations.
There are three mechanisms by which the expansion can be made.

* The most common mechanism is using an unordered comparison made by
  instruction 'fcmp uno'. This simple solution is target-independent
  and works well in most cases. It is, however, not suitable if floating
  point exceptions are tracked. The corresponding IEEE 754 operation and C
  function must never raise an FP exception, even if the argument is a
  signaling NaN. Compare instructions usually do not have this
  property; they raise an 'invalid' exception in such cases. So this
  mechanism is unsuitable when exception behavior is strict. In
  particular, it could result in unexpected trapping if the argument is a SNaN.

* Another solution was implemented in https://reviews.llvm.org/D95948.
  It is used in the cases when raising FP exceptions by 'isnan' is not
  allowed. This solution implements 'isnan' using integer operations.
  It solves the problem of exceptions but offers a single solution for all
  targets, even though some of them could do the check more efficiently.

* The solution implemented by https://reviews.llvm.org/D96568 introduced a
  hook 'clang::TargetCodeGenInfo::testFPKind', which injects target
  specific code into IR. Currently only SystemZ implements this hook, and
  it generates a call to a target specific intrinsic function.

Although these mechanisms allow 'isnan' to be implemented with enough
efficiency, expanding 'isnan' in clang has drawbacks:

* The operation 'isnan' is hidden behind generic integer operations or
  target-specific intrinsics. It complicates analysis and can prevent
  some optimizations.

* IR can be created by tools other than clang; in this case treatment
  of 'isnan' has to be duplicated in each such tool.

Another issue with the current implementation of 'isnan' comes from the
use of the options '-ffast-math' or '-fno-honor-nans'. If such an option
is specified, 'fcmp uno' may be optimized to 'false'. That is a valid
optimization in general, but it results in 'isnan' always returning
'false'. For example, in some libc++ implementations the following code
returns 'false':

    std::isnan(std::numeric_limits<float>::quiet_NaN())

The options '-ffast-math' and '-fno-honor-nans' imply that FP operation
operands are never NaNs. This assumption, however, should not be applied
to functions that check FP number properties, including 'isnan'. If
such a function returns an expected result instead of actually making the
check, it becomes useless in many cases. The option '-ffast-math' is
often used for performance critical code, as it can speed up execution
at the expense of manual treatment of corner cases. If 'isnan' returns
an assumed result, a user cannot use it in the manual treatment of NaNs
and has to invent replacements, like making the check using integer
operations. There is a discussion in https://reviews.llvm.org/D18513#387418
which also expresses the opinion that limitations imposed by
'-ffast-math' should apply only to 'math' functions but not to
'tests'.
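
A runnable C++ reproducer of that pitfall (behavior depends on the
compiler and flags):

    #include <cmath>
    #include <cstdio>
    #include <limits>

    int main() {
      float qnan = std::numeric_limits<float>::quiet_NaN();
      // Prints 1 normally; under '-ffast-math' the check may be folded
      // to 'false' and print 0, which is the problem described above.
      std::printf("%d\n", std::isnan(qnan) ? 1 : 0);
    }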

To overcome these drawbacks, this change introduces a new IR intrinsic
function 'llvm.isnan', which realizes the check as specified by the
IEEE-754 and C standards in a target-agnostic way. During IR
transformations it does not undergo undesirable optimizations. It reaches
instruction selection, where it is lowered in a target-dependent way. The
lowering can vary depending on options like '-ffast-math' or '-ffp-model'
so that the resulting code satisfies the requested semantics.

Differential Revision: https://reviews.llvm.org/D104854
2021-08-04 15:27:49 +07:00
hsmahesha 596e61c332 [AMDGPU] Ignore call graph node which does not have function info.
While collecting reachable callees (from kernels), ignore call graph nodes
that do not have an associated function, or whose associated function is
not a definition.

Reviewed By: rampitec

Differential Revision: https://reviews.llvm.org/D107329
2021-08-04 10:22:33 +05:30
Heejin Ahn 9bd02c433b [WebAssembly] Misc. cosmetic changes in EH (NFC)
- Rename `wasm.catch` intrinsic to `wasm.catch.exn`, because we are
  planning to add a separate `wasm.catch.longjmp` intrinsic which
  returns two values.
- Rename several variables
- Remove an unnecessary parameter from `canLongjmp` and `isEmAsmCall`
  from LowerEmscriptenEHSjLj pass
- Add `-verify-machineinstrs` in a test for a safety measure
- Add more comments + fix some errors in comments
- Replace `std::vector` with `SmallVector` for cases likely with small
  number of elements
- Renamed `EnableEH`/`EnableSjLj` to `EnableEmEH`/`EnableEmSjLj`: We are
  soon going to add `EnableWasmSjLj`, so this makes the distinction
  clearer

Reviewed By: tlively

Differential Revision: https://reviews.llvm.org/D107405
2021-08-03 21:03:46 -07:00
Arthur Eubanks ad25344620 [MC][CodeGen] Emit constant pools earlier
Previously we would emit constant pool entries for ldr inline asm at the
very end of AsmPrinter::doFinalization(). However, if we're emitting
dwarf aranges, that would end all sections with aranges. Then if we have
constant pool entries to be emitted in those same sections, we'd hit an
assert that the section has already been ended.

We want to emit constant pool entries before emitting dwarf aranges.
This patch splits out arm32/64's constant pool entry emission into its
own MCTargetStreamer virtual method.

Fixes PR51208

Reviewed By: MaskRay

Differential Revision: https://reviews.llvm.org/D107314
2021-08-03 20:55:31 -07:00
Jessica Paquette 5643736378 [AArch64][GlobalISel] Widen G_SELECT before clamping it
This allows us to handle the s88 G_SELECTs:

https://godbolt.org/z/5s18M4erY

Weird types like this can result in weird merges.

Widening to s128 first and then clamping down avoids that situation.

Differential Revision: https://reviews.llvm.org/D107415
2021-08-03 18:31:17 -07:00
Evandro Menezes 63a5ac4e0d [RISCV] Add scheduling resources for V
Add the scheduling resources for the V extension instructions.

Differential Revision: https://reviews.llvm.org/D98002
2021-08-03 15:47:51 -05:00
David Green bd07c2e266 [AArch64] Prefer fmov over orr v.16b when copying f32/f64
This changes the lowering of f32 and f64 COPY from a 128bit vector ORR to
an fmov of the appropriate type. At least on some CPUs with 64bit NEON
data paths this is expected to be faster, and shouldn't be slower on any
CPU that treats fmov as a register rename.

Differential Revision: https://reviews.llvm.org/D106365
2021-08-03 17:25:40 +01:00
Craig Topper deaeb16d88 [RISCV] Indicate that RISCVMergeBaseOffsetOpt preserves the CFG.
Return false from runOnFunction if nothing changed. Curiously
we already returned a bool from detectAndFoldOffset, but didn't
use it.

Fix a couple breaks after returns that I saw while auditing
detectAndFoldOffset.

Differential Revision: https://reviews.llvm.org/D107303
2021-08-03 08:32:36 -07:00
Simon Pilgrim d3917bbfc6 [X86] Add title comment to separate the "CPU Families" features from the other subtarget features. NFCI.
Hopefully we can get rid of these some day...
2021-08-03 12:53:57 +01:00
Fraser Cormack cba6aab971 [RISCV] Support simple fractional steps in matching VID sequences
This patch extends the optimization of VID-sequence BUILD_VECTORs
introduced in D104921 to include simple fractional steps composed of a
separated integer numerator and denominator.

A notable limitation in this sequence detection is that only sequences
with steps N/1 or 1/D are found, meaning that the step between elements
and the frequency with which it changes is consistent across the whole
sequence. Fractional steps such as 2/3 won't be matched as those would
involve more complex tracking of state or some level of backtracking.

As it stands, however, this patch is sufficient to match common
interleave-type shuffle indices, for example matching `<0,0,1,1>` (or
commonly `<0,u,1,u>` or `<u,0,u,1>`) to an index sequence divided by 2.

While the optimization is relatively `undef`-tolerant, due to greedy
pattern-matching there are even some simple patterns which confuse the
sequence detection into identifying either a suboptimal sequence or no
sequence at all.

Currently only fractional-step sequences identified as having a
power-of-two denominator are actually lowered to RVV instructions. This
is to avoid introducing divisions into the generated code.
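
As a sketch, a power-of-two denominator lets the sequence be formed from
VID with a shift instead of a division (scalar model per output lane; the
RVV spelling in the comment is ours):

    // Element i of a step-1/2 sequence is i / 2, computable as a right
    // shift (vid.v followed by vsrl.vi, in RVV terms).
    unsigned vidHalf(unsigned Lane) { return Lane >> 1; } // 0,0,1,1,2,2,...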

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D106533
2021-08-03 10:38:24 +01:00
Jason Molenda 0d8cd4e2d5 [AArch64InstPrinter] Change printAddSubImm to comment imm value when shifted
Add a comment when there is a shifted value,
    add x9, x0, #291, lsl #12 ; =1191936
but not when the immediate value is unshifted,
    subs x9, x0, #256 ; =256
where the comment adds nothing for the reader.

Differential Revision: https://reviews.llvm.org/D107196
2021-08-03 02:28:46 -07:00
Cullen Rhodes a02bbeeae7 [AArch64][AsmParser] NFC: Use helpers in matrix tile list parsing 2021-08-03 08:13:01 +00:00
Jay Foad 40202b13b2 [AMDGPU] Legalize operands of V_ADDC_U32_e32 and friends
These instructions have an implicit use of vcc which counts towards the
constant bus limit. Pre gfx10 this means that the explicit operands
cannot be sgprs. Use the custom inserter hook to call legalizeOperands
to enforce that restriction.

Fixes https://bugs.llvm.org/show_bug.cgi?id=51217

Differential Revision: https://reviews.llvm.org/D106868
2021-08-03 09:04:52 +01:00
Paulo Matos d3a0a65bf0 Reland: "[WebAssembly] Add new pass to lower int/ptr conversions of reftypes"
Add a new pass, LowerRefTypesIntPtrConv, to generate a debugtrap
instruction for an inttoptr and ptrtoint of a reference type instead
of erroring, since calling these instructions on non-integral pointers
has since been allowed (see ac81cb7e6).

Differential Revision: https://reviews.llvm.org/D107102
2021-08-03 09:20:51 +02:00
jacquesguan 7900ee0b61 [RISCV] Teach VSETVLI insertion to merge the unused VSETVLI with the one that needs to be inserted after it.
If a vsetvli instruction is not compatible with the next vector instruction,
and there is nothing else that may update or use VL/VTYPE, we can merge it
with the next vsetvli instruction that should be inserted for the vector
instruction.

This commit only merges VTYPE with the former vsetvli instruction when it
has the same VL.

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D106857
2021-08-03 12:06:59 +08:00
Roman Lebedev 6f6e9a867f [BasicTTIImpl][LoopUnroll] getUnrollingPreferences(): emit ORE remark when advising against unrolling due to a call in a loop
I'm not sure this is the best way to approach this, but the situation is
otherwise rather hard to detect unless we explicitly call it out when
refusing to advise unrolling.

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D107271
2021-08-03 00:57:26 +03:00
Jessica Paquette bd13c8e610 [AArch64][GlobalISel] Emit extloads for ZExt/SExt values in assignValueToAddress
When a value is expected to be extended, we should emit an extended load rather
than a normal G_LOAD.

Add checklines to arm64-abi.ll which show that we now emit the correct loads.

For ease of comparison: https://godbolt.org/z/8WvY6EfdE

Differential Revision: https://reviews.llvm.org/D107313
2021-08-02 14:48:44 -07:00
Paulo Matos 245f2ee647 Revert "[WebAssembly] Add new pass to lower int/ptr conversions of reftypes"
This reverts commit ce1c59dea6.
2021-08-02 20:12:25 +02:00
Paulo Matos ce1c59dea6 [WebAssembly] Add new pass to lower int/ptr conversions of reftypes
Add a new pass, LowerRefTypesIntPtrConv, to generate a trap
instruction for an inttoptr and ptrtoint of a reference type instead
of erroring, since calling these instructions on non-integral pointers
has since been allowed (see ac81cb7e6).

Differential Revision: https://reviews.llvm.org/D107102
2021-08-02 19:40:00 +02:00
Thomas Lively 417e500668 [WebAssembly] Compute known bits for SIMD bitmask intrinsics
This optimizes out the mask when the result of a bitmask is interpreted as an i8
or i16 value. Resolves PR50507.

Differential Revision: https://reviews.llvm.org/D107103
2021-08-02 09:52:34 -07:00
David Green c423a586a7 [ARM] Remove setPreservesCFG from ARMBlockPlacement
As of 2829391840 it no longer preserves the CFG, needing to
split blocks in order to add DLS instructions.
2021-08-02 14:15:45 +01:00
Irina Dobrescu b01417d3c5 [AArch64] Optimise min/max lowering in ISel
Differential Revision: https://reviews.llvm.org/D106561
2021-08-02 13:40:21 +01:00
Carl Ritson 675c942373 [AMDGPU] Disable NSA for BVH instructions when appropriate
Check maximum NSA size when selecting NSA or non-NSA BVH instructions.

Differential Revision: https://reviews.llvm.org/D103230
2021-08-02 20:09:26 +09:00
Simon Pilgrim 7397dcb403 [TTI] Add basic SK_InsertSubvector shuffle mask recognition
This patch adds an initial ShuffleVectorInst::isInsertSubvectorMask helper to recognize 2-op shuffles where the lowest elements of one of the sources are being inserted into the "in-place" other operand. This includes "concat_vectors" patterns, as can be seen in the Arm shuffle cost changes. It also helped fix an x86 issue with irregular/length-changing SK_InsertSubvector costs - I'm hoping this will help with D107188.

This doesn't currently attempt to work with 1-op shuffles that could either be a "widening" shuffle or a self-insertion.

The self-insertion case is tricky, but we currently always match this with the existing SK_PermuteSingleSrc logic.

The widening case will be addressed in a follow-up patch that treats the cost as 0.

Masks with a high number of undef elts will still struggle to match optimal subvector widths - it's currently bounded by the minimum-width possible insertion, whilst some cases would benefit from wider (pow2?) subvectors.

Differential Revision: https://reviews.llvm.org/D107228
2021-08-02 11:23:44 +01:00
Simon Pilgrim 0579050116 Fix MSVC signed/unsigned comparison warning. NFCI. 2021-08-02 11:23:43 +01:00
David Green 2829391840 [ARM] Revert WLSTP to DLSTP if the target block is out of range
If the block target for a WLSTP instruction is known to be out of range,
and cannot be fixed by the ARMBlockPlacementPass, we can relax it to a
DLSTP (and cmp/branch) to still allow the creation of tail predicated
loops. That is what this patch does, adding extra revert code to the
fallback path of ARMBlockPlacementPass.

Due to the code produced when reverting, this creates a DLSTP between a
Bcc and a Br. As a DLS isn't necessarily a terminator we need to split
the block to move the DLS/Br into.

Differential Revision: https://reviews.llvm.org/D104709
2021-08-02 10:59:52 +01:00
Cullen Rhodes 7ed0120d84 [AArch64][AsmParser] NFC: Parser.Lex() -> Lex()
Reviewed By: tmatheson

Differential Revision: https://reviews.llvm.org/D107146
2021-08-02 09:48:41 +00:00
Carl Ritson a441de6d94 [AMDGPU][GlobalISel] Add missing default mapping for BVH intrinsics
Application of default mapping to BVH intrinsics was missing.
Copy parts of SelectionDAG test to GlobalISel test as these would
have indicated this error.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D107211
2021-08-02 12:43:38 +09:00
Hsiangkai Wang 8b33839f01 [RISCV] Rename vector inline constraint from 'v' to 'vr' and 'vm' in IR.
Differential Revision: https://reviews.llvm.org/D107139
2021-08-01 05:58:17 +08:00
Eli Friedman bdd55b2f18 Fix the default alignment of i1 vectors.
Currently, the default alignment is much larger than the actual size of
the vector in memory.  Fix this to use a sane default.

For SVE, temporarily remove lowering of load/store operations for
predicates with less than 16 elements. The layout the backend was
assuming for SVE predicates with less than 16 elements doesn't agree
with the frontend. More work probably needs to be done here.

This change is, strictly speaking, not backwards-compatible at the
bitcode level. But probably nobody is actually depending on that; i1
vectors in memory are rare, and the code that does use them probably
ends up forcing the alignment to something sane anyway.  If we think
this is a concern, I can restrict this to scalable vectors for now
(where it's actually causing issues for me at the moment).

Differential Revision: https://reviews.llvm.org/D88994
2021-07-31 14:09:59 -07:00
Craig Topper 593059b328 [RISCV] Rename RISCVISD::FCVT_W_RV64 to FCVT_W_RTZ_RV64. NFC
fcvt.w(u) supports multiple rounding modes, but the ISD node
doesn't encode that. So name it to match the rounding mode it uses.
2021-07-31 11:14:59 -07:00
David Green 15a1d7e839 [ARM] Switch order of creating VADDV and VMLAV.
It can be beneficial to try the larger VMLAV patterns before
VADDV, in case both may match the same code.
2021-07-31 16:28:52 +01:00
Matt Arsenault ebc17a0d68 GlobalISel: Scalarize unaligned vector stores
This has the same problems and limitations as the load path.
2021-07-31 10:37:15 -04:00
Alexandros Lamprineas 7d940432c4 [AArch64] Legalize MVT::i64x8 in DAG isel lowering
This patch legalizes the Machine Value Type introduced in D94096 for loads
and stores. A new target hook named getAsmOperandValueType() is added which
maps i512 to MVT::i64x8. GlobalISel falls back to DAG for legalization.

Differential Revision: https://reviews.llvm.org/D94097
2021-07-31 09:51:28 +01:00
David Green 69cdadddec [ARM] Distribute reductions based on ascending load offset
This distributes reductions based on the relative offset of loads, if
one is found from their operands. Given chains of reductions this will
then sort them in ascending load order, which in turn can help simple
prefetches latch on to increasing strides more easily.

Differential Revision: https://reviews.llvm.org/D106569
2021-07-30 19:50:07 +01:00
Matt Arsenault faccf427df AMDGPU/GlobalISel: Remove special case lowering for non-pow-2 stores
We end up with extra copies from buildAnyExtOrTrunc if these are
lowered after the register types are legalized.
2021-07-30 12:37:29 -04:00
David Green 532d05b714 [ARM] Attempt to distribute reductions
This adds a combine for adds of reductions, distributing them so that
they occur sequentially to enable better use of accumulating VADDVA
instructions. It combines:
  add(X, add(vecreduce(Y), vecreduce(Z))) ->
    add(add(X, vecreduce(Y)), vecreduce(Z))
and
  add(add(A, reduce(B)), add(C, reduce(D))) ->
    add(add(add(A, C), reduce(B)), reduce(D))

These together distribute the adds so that more reductions can be
selected to VADDVA.

Differential Revision: https://reviews.llvm.org/D106532
2021-07-30 14:48:31 +01:00
David Green 4b56306762 [ARM] Turn vecreduce_add(add(x, y)) into vecreduce(x) + vecreduce(y)
Under MVE we can use VADDV/VADDVA's to perform integer add reductions,
so it can be beneficial to use more reductions than summing subvectors
and reducing once. Especially for VMLAV/VMLAVA the mul can be
incorporated into the reduction, producing less instructions.

Some of the test cases currently get larger due to extra integer adds,
but will be improved in a followup patch.
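
A source-level sketch of the shape this targets (illustrative only):

    // sum(a[i] + b[i]) can instead be computed as sum(a[i]) + sum(b[i]),
    // letting each reduction be selected to VADDV/VADDVA under MVE.
    int sumOfAdds(const short *a, const short *b, int n) {
      int s = 0;
      for (int i = 0; i < n; i++)
        s += a[i] + b[i]; // vecreduce_add(add(x, y))
      return s;
    }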

Differential Revision: https://reviews.llvm.org/D106531
2021-07-30 10:10:41 +01:00
Cullen Rhodes 3a349d2269 [AArch64][SME] Introduce feature for streaming mode
The Scalable Matrix Extension (SME) introduces a new execution mode
called Streaming SVE mode. In streaming mode a substantial subset of the
SVE and SVE2 instruction set is available, along with new outer product,
load, store, extract and insert instructions that operate on the new
architectural register state for the matrix.

To support streaming mode this patch introduces a new subtarget feature
+streaming-sve. If enabled, the subset of SVE(2) instructions are
available. The existing behaviour for SVE(2) remains unchanged, the
subset of instructions that are legal in streaming mode are enabled if
either +sve[2] or +streaming-sve is specified. Instructions that are
illegal in streaming mode remain predicated on +sve[2].

The SME target feature has been updated to imply +streaming-sve rather
than +sve.

The following changes are made to the SVE(2) tests:
  * For instructions that are legal in streaming mode:
    - added RUN line to verify +streaming-sve enables the instruction.
    - updated diagnostic to 'instruction requires: streaming-sve or sve'.
  * For instructions that are illegal in streaming-mode:
    - added RUN line to verify +streaming-sve does not enable the
      instruction.

SVE(2) instructions that are legal in streaming mode have:

  if !HaveSVE[2]() && !HaveSME() then UNDEFINED;

at the top of the pseudocode in the XML.

The reference can be found here:
https://developer.arm.com/documentation/ddi0602/2021-06/SVE-Instructions

Reviewed By: sdesmalen, david-arm

Differential Revision: https://reviews.llvm.org/D106272
2021-07-30 07:30:45 +00:00
Tarindu Jayatilaka 7a797b2902 Take OptimizationLevel class out of Pass Builder
Pulled out the OptimizationLevel class from PassBuilder in order to be able to access it from within the PassManager and avoid include conflicts.

Reviewed By: mtrofin

Differential Revision: https://reviews.llvm.org/D107025
2021-07-29 21:57:23 -07:00
Stefan Pintilie 754520a2bf [PowerPC] Fix issue where hint was providing the incorrect register class.
Register hints when copying to a UACC register do not always produce VSRp
registers. This patch makes sure that we do not produce hints in cases
where the subregister of the UACC is not a VSRp.

Reviewed By: nemanjai, #powerpc

Differential Revision: https://reviews.llvm.org/D107101
2021-07-29 21:10:45 -05:00
Mark Schimmel e622c99f30 [ARC] Add norm/normh instructions with disassembly tests
Add disassembler support for the NORM and NORMH instructions. These instructions
only exist when the ARC processor is configured with the "norm" extension.

Differential Revision: https://reviews.llvm.org/D107118
2021-07-29 17:54:52 -07:00
Ben Shi bb6fddb63c [RISCV] Optimize mul in the zba extension with SH*ADD
This patch does the following optimization of mul with a constant.

(mul x, 11) -> (SH1ADD (SH2ADD x, x), x)
(mul x, 19) -> (SH1ADD (SH3ADD x, x), x)
(mul x, 13) -> (SH2ADD (SH1ADD x, x), x)
(mul x, 21) -> (SH2ADD (SH2ADD x, x), x)
(mul x, 37) -> (SH2ADD (SH3ADD x, x), x)
(mul x, 25) -> (SH3ADD (SH1ADD x, x), x)
(mul x, 41) -> (SH3ADD (SH2ADD x, x), x)
(mul x, 73) -> (SH3ADD (SH3ADD x, x), x)
(mul x, 27) -> (SH1ADD (SH3ADD x, x), (SH3ADD x, x))
(mul x, 45) -> (SH2ADD (SH3ADD x, x), (SH3ADD x, x))
(mul x, 81) -> (SH3ADD (SH3ADD x, x), (SH3ADD x, x))
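
For example, the first mapping as an illustrative scalar model (SHnADD
computes (rs1 << n) + rs2):

    // (mul x, 11) -> (SH1ADD (SH2ADD x, x), x)
    unsigned long mul11(unsigned long x) {
      unsigned long t = (x << 2) + x; // SH2ADD x, x : t = 5*x
      return (t << 1) + x;            // SH1ADD t, x : 11*x
    }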

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D107065
2021-07-30 08:36:28 +08:00
Thomas Johnson cc238a6e03 [ARC] Add additional mov immediate instruction formats with a fix for u6 decoding
Differential Revision: https://reviews.llvm.org/D107088
2021-07-29 16:41:55 -07:00
Adrian Prantl c5d84d2eb3 GlobalISel/AArch64: don't optimize away redundant branches at -O0
This patch prevents GlobalISel from optimizing out redundant branch
instructions when compiling without optimizations.

The motivating example is code like the following common pattern in
Swift, where users expect to be able to set a breakpoint on the early
exit:

public func f(b: Bool) {
  guard b else {
    return // I would like to set a breakpoint here.
  }
  ...
}

The patch modifies two places in GlobalISEL: The first one is in
IRTranslator.cpp where the removal of redundant branches is made
conditional on the optimization level. The second one is in
AArch64InstructionSelector.cpp where an -O0 *only* optimization is
being removed.

Disabling these optimizations increases code size at -O0 by
~8%. However, doing so improves debuggability, and debug builds are
the primary reason why developers compile without optimizations. We
thus concluded that this is the right trade-off.

rdar://79515454

This tentatively reapplies the patch without modifications; the LLDB
test that has blocked this from landing previously has since been
modified to hopefully no longer be sensitive to this change.

Differential Revision: https://reviews.llvm.org/D105238
2021-07-29 16:04:22 -07:00
David Green d4a2daa919 [ARM] Define a couple more ssub indexes. NFC
Same as 91bd3ad128, this doesn't really
change anything but gives the registers better names than the ones
tablegen would define, and fills in the missing gaps.
2021-07-29 23:00:35 +01:00
Bradley Smith 191831e380 [AArch64][SVE] Fix incorrect mask type when lowering fixed type SVE gather/scatter
An incorrect mask type when lowering an SVE gather/scatter was causing
a codegen fault which manifested as the incorrect predicate size being
used for an SVE gather/scatter (e.g. p0.b rather than p0.d).

Fixes PR51182.

Differential Revision: https://reviews.llvm.org/D106943
2021-07-29 11:22:17 +00:00
Cullen Rhodes 08d92dbbff [AArch64][AsmParser] NFC: Parser.getTok() -> getTok()
Reviewed By: david-arm

Differential Revision: https://reviews.llvm.org/D106949
2021-07-29 10:18:54 +00:00
Amara Emerson da61ab8475 [AArch64][GlobalISel] More widenToNextPow2 changes, this time for arithmetic/bitwise ops. 2021-07-29 03:02:29 -07:00
Mirko Brkusanin 971f4173f8 [AMDGPU][GlobalISel] Insert an and with exec before s_cbranch_vccnz if necessary
While v_cmp will AND inactive lanes with 0, that is not the case for logical
operations.

This fixes a Vulkan CTS test that would hang otherwise.

Differential Revision: https://reviews.llvm.org/D105709
2021-07-29 11:20:49 +02:00
Fraser Cormack 02dd4b59bc [RISCV] Optimize floating-point "dominant value" BUILD_VECTORs
This patch aims to improve the performance of BUILD_VECTORs which are
identified as containing a dominant element. Given that most
floating-point constants themselves require a load from the constant
pool, it was possible for the optimization to actually increase the
number of individual loads on small vectors. The exception is the zero
constant -- +0.0 -- which can be materialized efficiently.

While this optimization could do with a proper cost model to weigh the
benefits of a single vector load vs. the manipulation of individual
elements -- even for integer vectors, which often require several
instructions to materialize -- without a concrete RVV implementation to
work with, any heuristic is likely to be both obtuse and inaccurate.

Until then, this patch fixes at least one known obvious deficiency.

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D106963
2021-07-29 09:22:34 +01:00
Ben Shi 264b8e2a20 [RISCV] Optimize mul in the zba extension with SH*ADD
This patch makes the following optimizations when the
immediate multiplier is not a simm12:

(mul x, (power_of_2 + 2)) => (SH1ADD x, (SLLI x, bits))
(mul x, (power_of_2 + 4)) => (SH2ADD x, (SLLI x, bits))
(mul x, (power_of_2 + 8)) => (SH3ADD x, (SLLI x, bits))
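
An illustrative scalar model of the first mapping (constant chosen for
illustration):

    // (mul x, 34), with 34 = 2^5 + 2 -> (SH1ADD x, (SLLI x, 5))
    unsigned long mul34(unsigned long x) {
      unsigned long t = x << 5; // SLLI x, 5 : 32*x
      return (x << 1) + t;      // SH1ADD x, t : 2*x + 32*x = 34*x
    }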

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D106648
2021-07-29 09:46:41 +08:00
Jessica Paquette 5a333dc5da [AArch64][GlobalISel] Improve legalization for odd-type G_LOAD
Swap the order of widening so that we widen to the next power-of-2 first when
legalizing G_LOAD.

Also, provide a minimum type for the power of 2 to disallow s2 + s1. Clamping
ought to disallow s2 and s1, but I think it's better to be explicit about the
expected minimum size.

We probably need a similar change for G_STORE, but it seems to be a bit more
finicky. So, let's just handle G_LOAD for now.

Differential Revision: https://reviews.llvm.org/D107013
2021-07-28 17:19:14 -07:00
Jessica Paquette c0a41c3d3b [AArch64][GlobalISel] Improve legalization for odd-sized G_ICMP/G_CONSTANT
We were handling types like s88 like this:

1) clamp to the range
2) widen to the next power of 2

This isn't desirable because it causes an odd breakdown for types like s88.
If we widen to the next power of 2 (s128) first, then we get a clean breakdown
when we clamp back to s64.

Differential Revision: https://reviews.llvm.org/D106998
2021-07-28 15:31:33 -07:00
Patrick Holland dbed061bf1 [MCA] Moving the target specific CustomBehaviour impl. from /tools/llvm-mca/ to /lib/Target/.
Differential Revision: https://reviews.llvm.org/D106775
2021-07-28 11:23:18 -07:00
Fangrui Song 6da3d8b19c [llvm] Replace LLVM_ATTRIBUTE_NORETURN with C++11 [[noreturn]]
[[noreturn]] can be used since Oct 2016 when the minimum compiler requirement was bumped to GCC 4.8/MSVC 2015.

Note: the definition of LLVM_ATTRIBUTE_NORETURN is kept for now.
2021-07-28 09:31:14 -07:00
Craig Topper 3106f85945 [RISCV] Fix grammar in a comment. NFC 2021-07-28 09:09:26 -07:00
Craig Topper 54588bcc05 [RISCV] Restrict performANY_EXTENDCombine to prevent an infinite loop.
The sign_extend we insert here can get turned into a zero_extend if
the sign bit is known zero. This can enable a setcc combine that
shrinks compares with zero_extend. This reduces the use count of
the zero_extend allowing other combines to turn it back into an
any_extend.

This restricts the combine to only cases where the result is used
by a CopyToReg. This works for my original motivating case. I
hope the CopyToReg use will prevent any converted extends from
turning back into an any_extend.

Reviewed By: luismarques

Differential Revision: https://reviews.llvm.org/D106754
2021-07-28 09:05:45 -07:00
Sanjay Patel 4c41caa287 [x86] improve CMOV codegen by pushing add into operands, part 3
In this episode, we are trying to avoid an x86 micro-arch quirk where complex
(3 operand) LEA potentially costs significantly more than simple LEA. So we
simultaneously push and pull the math around the CMOV to balance the operations.

I looked at the debug spew during instruction selection and decided against
trying a later DAGToDAG transform -- it seems very difficult to match if the
trailing memops are already selected and managing the creation of extra
instructions at that level is always tricky.

Differential Revision: https://reviews.llvm.org/D106918
2021-07-28 09:10:33 -04:00
Simon Pilgrim 124d586382 [X86][AVX] Move VPERM2F128 defs above VINSERTF128 defs. NFC.
This will be necessary for a future patch to lower VINSERTF128 custom folds to VPERM2F128
2021-07-28 14:02:17 +01:00
David Green 41cedb1c9a [LV][ARM] Tighten up MLA reduction costing
This makes a couple of changes to the costing of MLA reduction patterns,
to more accurately cost various patterns that can come up from
vectorization.

 - The Arm implementation of getExtendedAddReductionCost is altered to
   only provide costs for legal or smaller types. Larger than legal types
   need to be split, which currently does not work very well, especially
   for predicated reductions where the predicate may be legal but needs to
   be split. Currently we limit it to legal or smaller input types.
 - The getReductionPatternCost has learnt that reduce(ext(mul(ext, ext)))
   is a pattern that can come up, and can be treated the same as
   reduce(mul(ext, ext)) providing the extension types match.
 - And it has been adjusted to not count the ext in reduce(mul(ext, ext))
   as part of a reduce(mul) pattern.

Together these changes help to more accurately cost the mla reductions
in cases such as where the extend types don't match or the extend
opcodes are different, picking better vector factors that don't result
in expanded reductions.
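
A sketch of source code that produces the reduce(ext(mul(ext, ext)))
shape mentioned above (types chosen for illustration):

    // The i8 operands are extended and multiplied, and the product is
    // extended again when accumulated into the wider sum.
    long long dot(const signed char *a, const signed char *b, int n) {
      long long s = 0;
      for (int i = 0; i < n; i++)
        s += (int)a[i] * (int)b[i]; // mul(ext, ext), then ext for the sum
      return s;                     // -> reduce(ext(mul(ext, ext)))
    }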

Differential Revision: https://reviews.llvm.org/D106166
2021-07-28 12:50:58 +01:00
RamNalamothu 1a8c57179a [AMDGPU] We would need FP if there are calls and caller-save VGPR spills
Since https://reviews.llvm.org/D98319, determineCalleeSavesSGPR() needs
to consider caller save VGPR spills as well while anticipating if we
require FP.

Fixes: SWDEV-295978

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D106758
2021-07-28 11:12:55 +05:30
Xiang1 Zhang 3223d41017 [X86] Fix lowering to illegal type in LowerINSERT_VECTOR_ELT
Differential Revision: https://reviews.llvm.org/D106780
2021-07-28 08:16:59 +08:00
Xiang1 Zhang 2ca3937131 Revert "[X86] Fix lowering to illegal type in LowerINSERT_VECTOR_ELT"
This reverts commit 6ff73efea9.
2021-07-28 08:12:29 +08:00
Xiang1 Zhang 6ff73efea9 [X86] Fix lowering to illegal type in LowerINSERT_VECTOR_ELT 2021-07-28 08:08:30 +08:00
Krzysztof Parzyszek 64d5b6e373 [Hexagon] Fix resetting dead registers in DBG_VALUE_LISTs
This fixes https://llvm.org/PR51229.
2021-07-27 18:36:28 -05:00
Nemanja Ivanovic 778932c673 [PowerPC] Turn deprecated altivec prefetch instrs to nops on AIX
The dst/dstt/dstst/dststt instructions are nops on all PowerPC
cores that AIX supports. The AIX assembler also does not accept
these mnemonics. Turn them into nops on AIX (similar to dstall).
2021-07-27 15:50:02 -05:00
Sanjay Patel 156ba620b3 [x86] update stale code comment; NFC
The transform was generalized with:
1ce05ad619
2021-07-27 16:45:52 -04:00
Matt Arsenault d7d2e4545e AMDGPU/GlobalISel: Fix selecting G_SEXTLOAD/G_ZEXTLOAD pre-gfx9
The patterns for the m0 glue cases were failing to import.
2021-07-27 15:56:42 -04:00
Amara Emerson a11d9a1f48 [AArch64][GlobalISel] Fix constraining LDXPX intrinsic selection.
Causes a fallback because of the lack of regclasses on vregs; in a build
without asserts we instead end up crashing later in codegen.
2021-07-27 12:13:56 -07:00
Craig Topper 3852b8c70f [RISCV] Select vector shl by 1 to a vector add.
A vector add may be faster than a vector shift.
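
The per-element equivalence being exploited (trivial scalar model):

    // x << 1 == x + x, so a vector shift-left-by-one can be selected as
    // a vector add.
    unsigned shlOne(unsigned x) { return x + x; }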

Reviewed By: frasercrmck

Differential Revision: https://reviews.llvm.org/D106689
2021-07-27 10:57:28 -07:00
Matt Arsenault b32d3d9e81 AMDGPU: Treat IMPLICIT_DEF like a constant lanemask source
This is partially a workaround. SILowerI1Copies does not understand
unstructured loops. This would result in inserting instructions to
merge a mask register in the same block where it was defined in an
unstructured loop.
2021-07-27 11:44:38 -04:00
Thomas Lively 33786576fd [WebAssembly] Codegen for extmul SIMD instructions
Replace the clang builtins and LLVM intrinsics for the SIMD extmul instructions
with normal codegen patterns.
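
A scalar model of what one lane of an extmul computes (signed variant
shown; the new patterns simply select the instruction from the
equivalent mul-of-extends IR):

    // Widen both operands first, then multiply: the product cannot
    // overflow the wider lane.
    int extmulLane(short a, short b) {
      return (int)a * (int)b;
    }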

Differential Revision: https://reviews.llvm.org/D106724
2021-07-27 08:41:30 -07:00
Anirudh Prasad a8cfa4b9bd [SystemZ][z/OS] Initial code to generate assembly files on z/OS
- This patch consists of the bare basic code needed in order to generate some assembly for the z/OS target.
- Only the .text and the .bss sections are added for now.
- The relevant MCSectionGOFF/Symbol interfaces have been added. This enables us to print out the GOFF machine code sections.
- This patch enables us to add simple lit tests wherever possible, and contributes to the testing coverage for the z/OS target.
- Further improvements and additions will be made in future patches.

Reviewed By: tmatheson

Differential Revision: https://reviews.llvm.org/D106380
2021-07-27 11:29:15 -04:00
Tres Popp d225de60c9 Revert "[X86][AVX] Add getBROADCAST_LOAD helper function. NFCI."
This reverts commit 1cfecf4fc4.

This commit broke LLVM code generated through XLA by removing a
conditional on Ld->getExtensionType() == ISD::NON_EXTLOAD

This is not a perfect revert. The new function is left as other uses of
it exist now.
2021-07-27 16:55:50 +02:00
Tres Popp 70fa9479b2 Revert "Revert "[X86][AVX] Add getBROADCAST_LOAD helper function. NFCI.""
This reverts commit d7bbb1230a.

There were follow up uses of a deleted method and I didn't run the
tests. Undo the revert, so I can do it properly.
2021-07-27 16:48:31 +02:00
Tres Popp d7bbb1230a Revert "[X86][AVX] Add getBROADCAST_LOAD helper function. NFCI."
This reverts commit 1cfecf4fc4.

This commit broke LLVM code generated through XLA by removing a
conditional on Ld->getExtensionType() == ISD::NON_EXTLOAD
2021-07-27 16:22:25 +02:00
Fraser Cormack 172487fe4c [RISCV] Add support for vector saturating add/sub operations
This patch adds support for lowering the saturating vector add/sub
intrinsics to RVV instructions, for both fixed-length and
scalable-vector forms alike.

Note that some of the DAG combines are still not triggering for the
scalable-vector tests. These require a bit more work in the DAGCombiner
itself.
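
A scalar model of the saturating semantics being lowered (unsigned add
shown, matching @llvm.uadd.sat):

    // Clamp to the type maximum instead of wrapping on overflow.
    unsigned char uaddSat(unsigned char a, unsigned char b) {
      unsigned s = (unsigned)a + b;
      return s > 0xFF ? 0xFF : (unsigned char)s;
    }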

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D106651
2021-07-27 10:04:14 +01:00
Cullen Rhodes 2e27c4e1f1 [AArch64][SME] Add zero instruction
This patch adds the zero instruction for zeroing a list of 64-bit
element ZA tiles. The instruction takes a list of up to eight tiles
ZA0.D-ZA7.D, which must be in order, e.g.

  zero {za0.d,za1.d,za2.d,za3.d,za4.d,za5.d,za6.d,za7.d}
  zero {za1.d,za3.d,za5.d,za7.d}

The assembler also accepts 32-bit, 16-bit and 8-bit element tiles which
are mapped to corresponding 64-bit element tiles in accordance with the
architecturally defined mapping between different element size tiles,
e.g.

  * Zeroing ZA0.B, or the entire array name ZA, is equivalent to zeroing
    all eight 64-bit element tiles ZA0.D to ZA7.D.
  * Zeroing ZA0.S is equivalent to zeroing ZA0.D and ZA4.D.

The preferred disassembly of this instruction uses the shortest list of
tile names that represent the encoded immediate mask, e.g.

  * An immediate which encodes 64-bit element tiles ZA0.D, ZA1.D, ZA4.D and
    ZA5.D is disassembled as {ZA0.S, ZA1.S}.
  * An immediate which encodes 64-bit element tiles ZA0.D, ZA2.D, ZA4.D and
    ZA6.D is disassembled as {ZA0.H}.
  * An all-ones immediate is disassembled as {ZA}.
  * An all-zeros immediate is disassembled as an empty list {}.

This patch adds the MatrixTileList asm operand and related parsing to support
this.

Depends on D105570.

The reference can be found here:
https://developer.arm.com/documentation/ddi0602/2021-06

Reviewed By: david-arm

Differential Revision: https://reviews.llvm.org/D105575
2021-07-27 08:35:45 +00:00
David Green 54c91c0c74 [ARM] Implement isLoad/StoreFromStackSlot for MVE stack stores accesses
This implements isLoadFromStackSlot and isStoreToStackSlot for the MVE
MVE_VSTRWU32 and MVE_VLDRWU32 instructions. They behave the same as many
other loads/stores, expecting a FI in Op1 and a zero offset in Op2. At the
same time this alters VLDR_P0_off and VSTR_P0_off to use the same code
too, as they too should be returning VPR in Op0, take a FI in Op1 and
zero offset in Op2.

Differential Revision: https://reviews.llvm.org/D106797
2021-07-27 09:11:58 +01:00
Craig Topper 2ea9db0c49 [AArch64] Fix -Wparentheses warning with gcc 5.4. NFC 2021-07-26 21:08:56 -07:00
Carl Ritson fbaa35e169 [AMDGPU] Add SelectionDAG support for insert_subvector on v4f64
Enable custom insert_subvector for larger vector types.
This is necessary now that SelectionDAG can attempt v3f64 insert
to v4f64, etc.

Reviewed By: foad

Differential Revision: https://reviews.llvm.org/D105385
2021-07-27 10:11:34 +09:00
Nemanja Ivanovic 9654cfd5bb [PowerPC] Fix materialization of SP float values on Power10
All floating point values in registers are in double precision
representation. In order to materialize the correct single precision
value, we need to convert the APFloat that represents the value
to double precision first.

Reviewed By: amyk, NeHuang

Differential Revision: https://reviews.llvm.org/D106812
2021-07-26 19:43:10 -05:00
Jon Roelofs f2e8e46d78 Revert "[AArch64][GlobalISel] Legalize ctpop s128"
This reverts commit 97e95fea53.

It broke test/CodeGen/Mips/GlobalISel/llvm-ir/ctpop.ll. Not sure why I didn't see that.
2021-07-26 17:06:43 -07:00
Jon Roelofs 97e95fea53 [AArch64][GlobalISel] Legalize ctpop s128
Differential revision: https://reviews.llvm.org/D106494
2021-07-26 16:33:50 -07:00
Masoud Ataei 45951ad323 [PowerPC] Add pwr7 and pwr10 support to IBM MASSV pass on AIX
Before, MASSV only supported P8 and P9, on both AIX and Linux. This patch
adds MASSV support for P7 and P10, on AIX only.

Differential Revision: https://reviews.llvm.org/D106678
2021-07-26 23:21:38 +00:00
Amara Emerson 172051a1f4 [AArch64][GlobalISel] Add identity combines to post-legal combiner.
We see some shifts of zero emitted during legalization.

Differential Revision: https://reviews.llvm.org/D106816
2021-07-26 15:17:11 -07:00
Amara Emerson c658b472f3 [GlobalISel] Add a constant folding combine.
Use it in the AArch64 post-legal combiner. These don't always get folded
because, when the instructions are created, the constants are obscured by
artifacts.

Differential Revision: https://reviews.llvm.org/D106776
2021-07-26 14:53:33 -07:00
Heejin Ahn c285a11efd [WebAssembly] Make Emscripten EH work with Emscripten SjLj
When Emscripten EH mixes with Emscripten SjLj, we are not currently
handling some of them correctly. There are three cases:
1. The current function calls `setjmp` and there is an `invoke` to a
   function that can either throw or longjmp. In this case, we have to
   check both for exception and longjmp. We are currently handling this
   case correctly:
   0c0eb76782/llvm/lib/Target/WebAssembly/WebAssemblyLowerEmscriptenEHSjLj.cpp (L1058-L1090)
   When inserting routines for functions that can longjmp, which we do
   only for setjmp-calling functions, we check if the function was
   previously an `invoke` and handle it correctly.

2. The current function does NOT call `setjmp` and there is an `invoke`
   to a function that can either throw or longjmp. Because there is no
   `setjmp` call, we haven't been doing any check for functions that can
   longjmp. But in that case, for `invoke`, we only check for an
   exception and if it is not an exception we reset `__THREW__` to 0,
   which can silently swallow the longjmp:
   0c0eb76782/llvm/lib/Target/WebAssembly/WebAssemblyLowerEmscriptenEHSjLj.cpp (L70-L80)
   This CL fixes this.

3. The current function calls `setjmp` and there is no `invoke`. Because
   it is not an `invoke`, we haven't been doing any check for functions
   that can throw, and only insert longjmp-checking routines for
   functions that can longjmp. But in that case, if a longjmpable
   function throws, we only check for a longjmp so if it is not a
   longjmp we reset `__THREW__` to 0, which can silently swallow the
   exception:
   0c0eb76782/llvm/lib/Target/WebAssembly/WebAssemblyLowerEmscriptenEHSjLj.cpp (L156-L169)
   This CL fixes this.

To do that, this moves around some code, so we register necessary
functions for both EH and SjLj and precompute some data (the set of
functions that contains `setjmp`) before doing actual EH or SjLj
transformation.

This CL makes 2nd and 3rd tests in
https://github.com/emscripten-core/emscripten/pull/14732 work.

Reviewed By: dschuff

Differential Revision: https://reviews.llvm.org/D106525
2021-07-26 13:48:31 -07:00
Lei Huang 64a15817a0 [PowerPC]Add addex instruction definition and MC tests
Add td definitions and asm/disasm tests for the addex instruction introduced in
ISA 3.0.

Reviewed By: nemanjai, amyk, NeHuang

Differential Revision: https://reviews.llvm.org/D106666
2021-07-26 14:55:38 -05:00
Lei Huang 2d788959ed [PowerPC] Add implicit-def RM to instructions mtfsb[01]
This is a followup patch for D105930 to add implicit-def of RM for
mtfsb[01] instructions as per review comments.

Reviewed By: nemanjai

Differential Revision: https://reviews.llvm.org/D106603
2021-07-26 14:07:08 -05:00
Michael Liao b0402a35fc [amdgpu] Add 64-bit PC support when expanding unconditional branches.
Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D106445
2021-07-26 14:50:30 -04:00
Amara Emerson 6af8d36054 [AArch64][GlobalISel] Post-legalize combine s64 = G_MERGE s32, 0 -> G_ZEXT.
These are generated as a byproduct of legalization.

Differential Revision: https://reviews.llvm.org/D106768
2021-07-26 10:58:04 -07:00
Amara Emerson 0d41d21929 [AArch64][GlobalISel] Enable some select combines after legalization.
The legalizer generates selects for some operations, which can have constant
condition values, resulting in lots of dead code if it's not folded away.

Differential Revision: https://reviews.llvm.org/D106762
2021-07-26 10:40:32 -07:00
Amara Emerson dec34104bf [GlobalISel] Add combine for merge(unmerge) and use AArch64 postlegal-combiner.
Differential Revision: https://reviews.llvm.org/D106761
2021-07-26 10:37:31 -07:00
Heejin Ahn 6b9aba43a2 [WebAssembly] Improve pseudocode in LowerEmscriptenEHSjLj
Both `__THREW__` and `__threwValue` are global variables, and we have
been distinguishing the global variable `__THREW__` and the loaded value
`%__THREW__.val` in comments but not doing it for `__threwValue`. Made
the pseudocode comments consistent for both variables.

Reviewed By: dschuff

Differential Revision: https://reviews.llvm.org/D106524
2021-07-26 10:13:28 -07:00
Paul Walker 3b77e2737c [SVE] Use reg+reg addressing mode for immediate offsets.
For the reg+imm SVE addressing mode, imm is implicitly scaled by VL,
making it impractical for truly immediate offsets.  However, if
the offset can be unscaled based on the storage element type we
can use the reg+reg SVE addressing mode and thus either reduce the
number of generated add instructions or replace them with a mov
instruction that can be hoisted from the hot code path.

Differential Revision: https://reviews.llvm.org/D106744
2021-07-26 16:24:16 +01:00
Bradley Smith 81eafb8a37 [AArch64][SVE] Break false dependencies for inactive lanes of unary operations
Differential Revision: https://reviews.llvm.org/D105889
2021-07-26 15:01:21 +00:00
Ulrich Weigand 8cd8120a7b [SystemZ] Add support for new cpu architecture - arch14
This patch adds support for the next-generation arch14
CPU architecture to the SystemZ backend.

This includes:
- Basic support for the new processor and its features.
- Detection of arch14 as host processor.
- Assembler/disassembler support for new instructions.
- New LLVM intrinsics for certain new instructions.
- Support for low-level builtins mapped to new LLVM intrinsics.
- New high-level intrinsics in vecintrin.h.
- Indicate support by defining  __VEC__ == 10304.

Note: No currently available Z system supports the arch14
architecture.  Once new systems become available, the
official system name will be added as a supported -march name.
2021-07-26 16:57:28 +02:00
Jay Foad 59f6865231 [AMDGPU][GISel] Fix MMO for raw/struct buffer access with non-constant offset
Codegen for the raw/struct buffer access intrinsics would update the
offset in the MMO to reflect the combined offset, if it was known to be
constant. If the combined offset was not known to be constant, or if
there was an index, it would set the offset in the MMO to 0. This is
unsafe because it makes it look like the access does not alias with
another access with a fixed non-zero offset.
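
A sketch of the problematic shape (my reconstruction, not a test from
the patch):

    declare float @llvm.amdgcn.raw.buffer.load.f32(<4 x i32>, i32, i32, i32)
    declare void @llvm.amdgcn.raw.buffer.store.f32(float, <4 x i32>, i32, i32, i32)

    define void @f(<4 x i32> %rsrc, i32 %off, float %v) {
      ; variable offset: the MMO used to claim a constant offset of 0 here,
      ; which made this load look disjoint from the fixed-offset store below
      %x = call float @llvm.amdgcn.raw.buffer.load.f32(<4 x i32> %rsrc, i32 %off, i32 0, i32 0)
      call void @llvm.amdgcn.raw.buffer.store.f32(float %v, <4 x i32> %rsrc, i32 4, i32 0, i32 0)
      ret void
    }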

Fix these cases by setting the pointer in the MMO to null, to reflect
the fact that we do not have any known IR value pointer + constant
offset for the access.

D106284 did this for SelectionDAG. This is the corresponding fix for
GlobalISel.

Differential Revision: https://reviews.llvm.org/D106451
2021-07-26 14:27:30 +01:00
Jay Foad 9ac10658ae [AMDGPU] Fix MMO for raw/struct buffer access with non-constant offset
Codegen for the raw/struct buffer access intrinsics would update the
offset in the MMO to reflect the combined offset, if it was known to be
constant. If the combined offset was not known to be constant, or if
there was an index, it would set the offset in the MMO to 0. This is
unsafe because it makes it look like the access does not alias with
another access with a fixed non-zero offset.

Fix these cases by setting the pointer in the MMO to null, to reflect
the fact that we do not have any known IR value pointer + constant
offset for the access.

Differential Revision: https://reviews.llvm.org/D106284
2021-07-26 14:27:30 +01:00
David Green 010f8e3057 [ARM] Ensure correct regclass in distributing postinc
The register class required for some MVE loads/stores is more
constrained than the register we use when creating postinc. Make sure we
constrain the register class to keep the code correct.
2021-07-26 14:26:38 +01:00
Tim Northover a487a49acc AArch64: support i128 (& larger) returns in GlobalISel 2021-07-26 14:16:35 +01:00
Caroline Concatto bf28111ebd [AArch64][SVE] Remove vector_splice from AddedComplexity pattern
The pattern for vector_splice with an index equal to or greater than
zero was misplaced in the AddedComplexity = 1 pattern in the AArch64
tablegen file. This patch fixes it by removing the vector_splice pattern
from inside AddedComplexity = 1.
2021-07-26 13:35:51 +01:00
Caroline Concatto 0bfc26e3a4 [SVE][AArch64] Improve code generation for vector_splice for Imm > 0
This patch implements vector_splice in tablegen for all cases when the
Immediate is positive and lower than the known minimum value of
a scalable vector.
Vector_splice can be implemented using SVE instruction EXT.
For instance :
    @llvm.experimental.vector.splice(Vector_1, Vector_2, Imm)
    @llvm.experimental.vector.splice(<A,B,C,D>, <E,F,G,H>, 1) ==> <B, C, D, E>
        EXT  Vector_1, Vector_2, Imm              // Vector_1 = B, C, D + Vector_2 = E

Depends on D105633

Differential Revision: https://reviews.llvm.org/D106273
2021-07-26 11:45:46 +01:00
Caroline Concatto 73e4e9cd00 [AArch64][SVE] Improve code generation for vector_splice for Imm == -1
This patch implements vector_splice in tablegen for:
  a) when the immediate is equal to -1 (Imm == -1) and uses:
       INSR  +  LASTB
For instance :
@llvm.experimental.vector.splice(Vector_1, Vector_2, -1)
@llvm.experimental.vector.splice(<A,B,C,D>, <E,F,G,H>, -1) ==> <D, E, F, G>
    LASTB  RegLast, Vector_1                 // RegLast = D
    INSR   Res, (Vector_2 >> 1), RegLast     // Res = <D, E, F, G>

Differential Revision: https://reviews.llvm.org/D105633
2021-07-26 11:25:01 +01:00
Simon Pilgrim c8472db0a8 [X86][AVX] Prefer vinsertf128 to vperm2f128 on AVX1 targets
Splatting the lower xmm with vinsertf128 is at least as quick as vperm2f128, and a lot faster on some AMD targets.

First step towards PR50053
2021-07-26 11:11:56 +01:00
Cullen Rhodes e6ff9179ce [AArch64][AsmParser] NFC: Parser.getTok().getLoc() -> getLoc()
Reviewed By: tmatheson

Differential Revision: https://reviews.llvm.org/D106635
2021-07-26 09:36:34 +00:00
David Sherwood 0aff1798b5 [Analysis] Add simple cost model for strict (in-order) reductions
I have added a new FastMathFlags parameter to getArithmeticReductionCost
to indicate what type of reduction we are performing:

  1. Tree-wise. This is the typical fast-math reduction that involves
  continually splitting a vector up into halves and adding each
  half together until we get a scalar result. This is the default
  behaviour for integers, whereas for floating point we only do this
  if reassociation is allowed.
  2. Ordered. This now allows us to estimate the cost of performing
  a strict vector reduction by treating it as a series of scalar
  operations in lane order. This is the case when FP reassociation
  is not permitted. For scalable vectors this is more difficult
  because at compile time we do not know how many lanes there are,
  and so we use the worst case maximum vscale value (see the sketch
  below).
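
For example (my sketch, not one of the new tests), this reduction lacks
the 'reassoc' flag and so must be costed as the ordered form:

    declare float @llvm.vector.reduce.fadd.v4f32(float, <4 x float>)

    define float @strict_sum(float %start, <4 x float> %v) {
      ; in-order (strict) reduction: costed as a chain of scalar fadds
      ; rather than a log2-depth tree of vector adds and shuffles
      %r = call float @llvm.vector.reduce.fadd.v4f32(float %start, <4 x float> %v)
      ret float %r
    }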

I have also fixed getTypeBasedIntrinsicInstrCost to pass in the
FastMathFlags, which meant fixing up some X86 tests where we always
assumed the vector.reduce.fadd/mul intrinsics were 'fast'.

New tests have been added here:

  Analysis/CostModel/AArch64/reduce-fadd.ll
  Analysis/CostModel/AArch64/sve-intrinsics.ll
  Transforms/LoopVectorize/AArch64/strict-fadd-cost.ll
  Transforms/LoopVectorize/AArch64/sve-strict-fadd-cost.ll

Differential Revision: https://reviews.llvm.org/D105432
2021-07-26 10:26:06 +01:00
Simon Pilgrim 1cfecf4fc4 [X86][AVX] Add getBROADCAST_LOAD helper function. NFCI.
Begin replacing individual getMemIntrinsicNode calls and setup (for X86ISD::VBROADCAST_LOAD + X86ISD::SUBV_BROADCAST_LOAD opcodes) with this getBROADCAST_LOAD helper.
2021-07-25 20:37:58 +01:00
Kyungwoo Lee 6530ea4095 [AArch64] Fix Local Deallocation for Homogeneous Prolog/Epilog
The stack adjustment for local deallocation was incorrectly ported.

Reviewed By: MaskRay

Differential Revision: https://reviews.llvm.org/D106760
2021-07-25 10:51:11 -07:00
Simon Pilgrim b95f66ad78 [X86][SSE] LowerRotate - perform modulo on the amount splat source directly.
If the rotation amount is a known splat, perform the modulo on the splat source, and then perform the splat. That way the amount-extension performed later by LowerScalarVariableShift can fold the splats away without any multiple-use issues.

Fixes one of the concerns raised on D104156
2021-07-25 17:30:32 +01:00
Sanjay Patel 1ce05ad619 [x86] improve CMOV codegen by pushing add into operands, part 2
This is a minimal extension of D106607 to allow folding for
2 non-zero constants that can be materialized as immediates.

In the reduced test examples, we save 1 instruction by rolling
the constants into LEA/ADD. In the motivating test from the bullet
benchmark, we absorb both of the constant moves into add ops via
LEA magic, so we reduce by 2 instructions.
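
A rough sketch of the shape being folded (mine, not taken from the
tests):

    define i64 @sel_add(i64 %x, i1 %c) {
      ; before: two constant moves, a cmov between them, then an add
      ; after (assumed): %x+10 and %x+20 formed via LEA/ADD, cmov picks one
      %s = select i1 %c, i64 10, i64 20
      %r = add i64 %x, %s
      ret i64 %r
    }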

Differential Revision: https://reviews.llvm.org/D106684
2021-07-25 10:05:41 -04:00
Simon Pilgrim 15b883f457 [X86][AVX] Adjust AllowBWIVPERMV3 tolerance to account for VariableCrossLaneShuffleDepth
As noticed on D105390 - we were hardwiring the depth limit for combining to VPERMI2W/VPERMI2B instructions. Not only had we made the limit too low, we hadn't accounted for slow/fast shuffles via the VariableCrossLaneShuffleDepth control.
2021-07-25 14:05:11 +01:00
Amara Emerson acbc0c5f0e [AArch64][GlobalISel] Widen non-pow-2 types for shifts before clamping.
For types like s96, we don't want to clamp to s64, we want to first widen to
s128 and then narrow it. Otherwise we end up with impossible to legalize types.
2021-07-24 15:50:43 -07:00
Craig Topper c63dbd8501 [RISCV] Custom lower (i32 (fptoui/fptosi X)).
I stumbled onto a case where our (sext_inreg (assertzexti32 (fptoui X)), i32)
isel pattern can cause an fcvt.wu and an fcvt.lu to be emitted if
the assertzexti32 has an additional user. If we added a one-use check
it would just cause an fcvt.lu followed by a sext.w when we only need
a fcvt.wu to satisfy both users.
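
A sketch of the problematic shape (my reconstruction, not the exact
test):

    define i64 @two_users(float %x, i32* %p) {
      ; one i32 fptoui with two users: ideally a single fcvt.wu.s feeds both
      %conv = fptoui float %x to i32
      store i32 %conv, i32* %p
      %ext = sext i32 %conv to i64
      ret i64 %ext
    }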

To mitigate this I've added custom isel and new ISD opcodes for
fcvt.wu. This allows us to know that the value started life as a
conversion to i32 without needing to match multiple nodes.
ComputeNumSignBits has been taught that these new nodes produce 33
sign bits. To
prevent regressions when we need to zero extend the result of an
(i32 (fptoui X)), I've added a DAG combine to convert it to an
(i64 (fptoui X)) before type legalization. In most cases this would
happen in InstCombine, but a zero_extend can be created for function
returns or arguments.

To keep everything consistent I've added new nodes for fptosi as well.

Reviewed By: luismarques

Differential Revision: https://reviews.llvm.org/D106346
2021-07-24 10:50:43 -07:00
Ayke van Laethem 4d7f5c0a85 [AVR] Only support sp, r0 and r1 in llvm.read_register
Most other registers are allocatable and therefore cannot be used.

This issue was flagged by the machine verifier, because reading other
registers is considered reading from an undefined register.
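
For reference, a minimal use of the intrinsic with one of the registers
that remains supported (my sketch, not from the patch):

    ; reads the 16-bit stack pointer; named registers other than
    ; sp, r0 and r1 are now rejected because they are allocatable
    declare i16 @llvm.read_register.i16(metadata)

    define i16 @read_sp() {
      %sp = call i16 @llvm.read_register.i16(metadata !0)
      ret i16 %sp
    }

    !0 = !{!"sp"}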

Differential Revision: https://reviews.llvm.org/D96969
2021-07-24 14:03:27 +02:00
Ayke van Laethem 41f905b211 [AVR] Fix rotate instructions
This patch fixes some issues with the RORB pseudo instruction.

  - A minor issue in which the instructions were said to use the SREG,
    which is not true.
  - An issue with the BLD instruction, which did not have an output operand.
  - A major issue in which invalid instructions were generated. The fix
    also reduces RORB from 4 to 3 instructions, so it's also a small
    optimization.

These issues were flagged by the machine verifier.

Differential Revision: https://reviews.llvm.org/D96957
2021-07-24 14:03:26 +02:00
Ayke van Laethem 6aa9e746eb [AVR] Expand large shifts early in IR
This patch makes sure shift instructions such as this one:

    %result = shl i32 %n, %amount

are expanded into a loop just before the IR to SelectionDAG conversion,
so that calls to non-existent library functions such as __ashlsi3 are
avoided. The generated code is currently pretty bad but there's a lot of
room for improvement: the shift itself can be done in just four
instructions.

Differential Revision: https://reviews.llvm.org/D96677
2021-07-24 14:03:26 +02:00
Ayke van Laethem 431a941465 [AVR] Improve 8/16 bit atomic operations
There were some serious issues with atomic operations. This patch should
fix the biggest issues.

For details on the issue take a look at this Compiler Explorer sample:
https://godbolt.org/z/n3ndhn

Code:

    void atomicadd(_Atomic char *val) {
        *val += 5;
    }

Output:

    atomicadd:
        movw    r26, r24
        ldi     r24, 5     ; 'operand' register
        in      r0, 63
        cli
        ld      r24, X     ; load value
        add     r24, r26   ; value += X
        st      X, r24     ; store value back
        out     63, r0
        ret                ; return the wrong value (in r24)

There are various problems with this.

 - The value to add (5) is stored in r24. However, the value to add to
   is loaded in the same register: r24.
 - The `add` instruction adds half of the pointer to the loaded value,
   instead of (attempting to) add the operand with value 5.
 - The output value of the cmpxchg instruction (which is not used in
   this code sample) is the new value with 5 added, not the old value.
   The LangRef specifies that it has to be the old value, before the
   operation.

This patch fixes the first two and leaves the third problem to be fixed
at a later date. I believe atomics were mostly broken before this patch,
with this patch they should become usable as long as you ignore the
output of the atomic operation. In particular it fixes the following
things:

 - It sets the earlyclobber flag for the input ('$operand' operand) so
   that the register allocator puts it in a different register than the
   output value.
 - It fixes a number of issues with the pseudo op expansion pass, for
   example now it adds the $operand field instead of the pointer. This
   fixes most machine instruction verifier issues (other flagged issues
   are unrelated to atomics).

Differential Revision: https://reviews.llvm.org/D97127
2021-07-24 14:03:26 +02:00
Ayke van Laethem 8544ce80f8 [AVR] Set R31R30 as clobbered after ADJCALLSTACKDOWN
In most cases, using R31R30 is fine because the call (which always
precedes ADJCALLSTACKDOWN) will clobber R31R30 anyway. However, in some
rare cases the register allocator might insert an instruction between
the call and the ADJCALLSTACKDOWN instruction and expect the register
pair to be live afterwards. I think this happens as a result of
rematerialization. Therefore, to fix this, the instruction needs to have
Defs set to R31R30.

Setting the Defs field does have the effect of making the instruction
look dead, which it certainly is not. This is fixed by setting
hasSideEffects to true.

Differential Revision: https://reviews.llvm.org/D97745
2021-07-24 14:03:26 +02:00
Ayke van Laethem feda08b70a [AVR] Do not chain stores in call frame setup
Previously, AVRTargetLowering::LowerCall attempted to keep stack stores
in order with chains. Perhaps this worked in the past, but it does not
work now: it appears that the SelectionDAG legalization phase removes
these chains. Therefore, I've removed these chains entirely to match
X86 (which, similar to AVR, also prefers to use push instructions over
stack-relative stores to set up a call frame). With this change, all the
stack stores are in a somewhat reasonable order.

Differential Revision: https://reviews.llvm.org/D97853
2021-07-24 14:03:26 +02:00
Alexander Belyaev edb05d555e [llvm] Inline getAssociatedFunction() in LLVM_DEBUG.
Function* F is used only inside LLVM_DEBUG, so it causes an unused
variable warning in Release builds.
2021-07-24 11:49:21 +02:00
Amara Emerson 5ec0f051c8 [GlobalISel] Add GUnmerge, GMerge, GConcatVectors, GBuildVector abstractions. NFC.
Use these to slightly simplify some code in the artifact combiner.
2021-07-23 22:32:26 -07:00
Kuter Dinel 96709823ec [AMDGPU] Deduce attributes with the Attributor
This patch introduces a pass that uses the Attributor to deduce AMDGPU specific attributes.

Reviewed By: jdoerfert, arsenm

Differential Revision: https://reviews.llvm.org/D104997
2021-07-24 06:07:15 +03:00
Thomas Lively 85157c0079 [WebAssembly] Codegen for pmin and pmax
Replace the clang builtins and LLVM intrinsics for {f32x4,f64x2}.{pmin,pmax}
with standard codegen patterns. Since wasm_simd128.h uses an integer vector as
the standard single vector type, the IR for the pmin and pmax intrinsic
functions contains bitcasts that would not be there otherwise. Add extra codegen
patterns that can still select the pmin and pmax instructions in the presence of
these bitcasts.
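
For reference, a sketch of the standard pattern that now selects
f32x4.pmin (pmin is defined as b < a ? b : a):

    define <4 x float> @pmin(<4 x float> %a, <4 x float> %b) {
      ; fcmp+select maps onto f32x4.pmin; the wasm_simd128.h version has
      ; the same shape with bitcasts to/from the integer vector type
      %cmp = fcmp olt <4 x float> %b, %a
      %r = select <4 x i1> %cmp, <4 x float> %b, <4 x float> %a
      ret <4 x float> %r
    }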

Differential Revision: https://reviews.llvm.org/D106612
2021-07-23 14:49:21 -07:00
Thomas Lively 39c0e4afce [WebAssembly][NFC] Simplify SIMD bitconvert pattern
Differential Revision: https://reviews.llvm.org/D106680
2021-07-23 14:43:48 -07:00
Craig Topper 5edccc4581 [RISCV] Avoid using x0,x0 vsetvli for vmv.x.s and vfmv.f.s unless we know the sew/lmul ratio is constant.
Since we're changing VTYPE, we may change VLMAX which could
invalidate the previous VL. If we can't tell if it is safe we
should use an AVL of 1 instead of keeping the old VL.

This is a quick fix. We may want to thread VL to the pseudo
instruction instead of making up a value. That will require ISD
opcode changes and changes to the C intrinsic interface.

This fixes the issue raised in D106286.

Reviewed By: frasercrmck

Differential Revision: https://reviews.llvm.org/D106403
2021-07-23 09:12:05 -07:00
Craig Topper cc6d302c91 [X86] Fix a bug in TEST with immediate creation
This code tries to form a TEST from CMP+AND with an optional
truncate in between. If we looked through the truncate, we may
have extra bits in the AND mask that shouldn't participate in
the checks. Normally SimplifyDemandedBits takes care of this, but
the AND may have another user. So manually mask out any extra bits.
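
A sketch of the shape involved (my reconstruction, not the actual test
case):

    define i1 @cmp_and_trunc(i64 %x, i64* %p) {
      ; bit 32 of the mask lies outside the truncated width and must not
      ; participate in the TEST; the store keeps the AND alive as a second user
      %a = and i64 %x, 4294967298    ; bits 1 and 32 set
      store i64 %a, i64* %p
      %t = trunc i64 %a to i32
      %c = icmp eq i32 %t, 0
      ret i1 %c
    }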

Fixes PR51175.

Differential Revision: https://reviews.llvm.org/D106634
2021-07-23 09:03:53 -07:00
Benjamin Kramer dd70cd089a [llvm][sve] Silence unused variable warning in Release builds. NFC 2021-07-23 16:16:35 +02:00
Sanjay Patel f060aa1cf3 [x86] improve CMOV codegen by pushing add into operands
This is not the transform direction we want in general,
but by the time we have a CMOV, we've already tried
everything else that could be better.
The transform increases the uses of the other add operand,
but that is safe according to Alive2:
https://alive2.llvm.org/ce/z/Yn6p-A
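
A minimal sketch of the direction (mine, not from the tests):

    define i64 @push_add(i64 %x, i64 %y, i1 %c) {
      ; (add (select c, %x, %y), 8) -> select c, (add %x, 8), (add %y, 8);
      ; the adds become LEAs and the cmov no longer needs a trailing add
      %s = select i1 %c, i64 %x, i64 %y
      %r = add i64 %s, 8
      ret i64 %r
    }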

We could probably extend this to other binops (not just add).
This is the motivating pattern discussed in:
https://llvm.org/PR51069

The test with i8 shows a missed fold because there's a trunc
sitting in front of the add. That can be handled with a small
follow-up.

Differential Revision: https://reviews.llvm.org/D106607
2021-07-23 09:39:32 -04:00
David Truby 1528a4d400 [llvm][sve] Lowering for VLS truncating stores
This adds custom lowering for truncating stores when operating on
fixed length vectors in SVE. It also includes a DAG combine to
fold extends followed by truncating stores into non-truncating
stores in order to prevent this pattern appearing once truncating
stores are supported.
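
For illustration (my sketch, assuming a fixed SVE vector width is
specified for the function):

    define void @store_trunc(<8 x i32> %v, <8 x i16>* %p) {
      ; lowers to a single truncating store (st1h of .s elements) instead
      ; of a separate trunc followed by a plain store
      %t = trunc <8 x i32> %v to <8 x i16>
      store <8 x i16> %t, <8 x i16>* %p
      ret void
    }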

Currently truncating stores are not used in certain cases where
the size of the vector is larger than the target vector width.

Differential Revision: https://reviews.llvm.org/D104471
2021-07-23 14:04:55 +01:00
Simon Pilgrim 71d0fd3564 [X86][AVX] lowerV2X128Shuffle - attempt to recognise broadcastf128 subvector load
As noticed on PR50053 we were failing to recognise when a shuffle of a load was really a subvector broadcast load.
2021-07-23 13:10:38 +01:00
David Green 38986c6782 [AArch64] Add worst case shuffle costs
This adds some missing single source shuffle costs for AArch64, of i16
and i8 vectors. v4i16 are the same as v4i32 with a worst case cost of 3
coming from the perfect shuffle tables. The larger vector sizes expand
into a constant pool, plus a load (and adrp) and a tbl. I arbitrarily
chose 8 for the cost to be expensive but not too expensive.

Differential Revision: https://reviews.llvm.org/D106241
2021-07-23 09:01:58 +01:00
Sebastian Neubauer 2f15319968 [AMDGPU] Fix running ResourceUsageAnalysis
Clear the map when running the analysis multiple times.
The assertion that should ensure that every function is only
analyzed once triggered sometimes (once every ~70 compiles of some
graphics pipelines) when two functions of subsequent runs were allocated
at the same address.

Differential Revision: https://reviews.llvm.org/D106452
2021-07-23 09:25:15 +02:00
Carl Ritson 7d4baf25aa [AMDGPU] Add maximum NSA size limit ISA feature
Add maximum NSA size limit as an ISA feature.
Use this to reduce NSA usage on GFX10.1 to avoid stability issues
with 4- and 5-dword NSA instructions.
Maintain use of longer NSA instructions on GFX10.3.

Note: this also contains some minor fixes for GlobalISel which
did not work correctly with non-NSA form instructions on GFX10.

Reviewed By: foad

Differential Revision: https://reviews.llvm.org/D103348
2021-07-23 16:16:06 +09:00
Cullen Rhodes fde7550094 [AArch64][AsmParser] NFC: when creating a token IsSuffix=false should be default
Reviewed By: david-arm

Differential Revision: https://reviews.llvm.org/D106568
2021-07-23 06:36:06 +00:00
Hsiangkai Wang 4b2dd318dd [RISCV] Add FrameSetup/FrameDestroy flag to prologue/epilog instructions.
Differential Revision: https://reviews.llvm.org/D105086
2021-07-23 11:35:19 +08:00
Vitaly Buka 44ba8c691c [NFC][asan] Always pass Dominator Trees into forAllReachableExits 2021-07-22 18:01:38 -07:00
Thomas Johnson 51d8e67e88 [ARC] Add tablegen definition for the Find Leading Set (FLS) instruction
Differential Revision: https://reviews.llvm.org/D106602
2021-07-22 17:42:25 -07:00
Paulo Matos 46667a1003 [WebAssembly] Implementation of global.get/set for reftypes in LLVM IR
Reland of 31859f896.

This change implements new DAG nodes GLOBAL_GET/GLOBAL_SET, and
lowering methods for load and stores of reference types from IR
globals. Once the lowering creates the new nodes, tablegen pattern
matches those and converts them to Wasm global.get/set.

Reviewed By: tlively

Differential Revision: https://reviews.llvm.org/D104797
2021-07-22 22:07:24 +02:00
Simon Pilgrim 4185c5502c [CostModel][X86] Adjust shift SSE4 legalized costs based on llvm-mca reports.
Update shl/lshr/ashr costs based on the worst case costs from the script in D103695 - many of the 128-bit shifts (usually where integer multiplies aren't used) have similar behaviour to AVX1 so we can merge them.
2021-07-22 20:07:32 +01:00
Simon Pilgrim d073b19dbf [X86] Fix SLM FP<->INT throughputs.
Noticed while trying to clean up the shift costs model for SSE4 targets using the script in D103695 - SLM double-pumps all the 128-bit vector conversion ops and only uses the FP0 pipe - numbers taken from Intel AOM + Agner.
2021-07-22 19:39:04 +01:00
Thomas Johnson 1cda1e6186 [ARC] Add disassembly for the conditioned RSUB immediate instruction
Differential Revision: https://reviews.llvm.org/D106497
2021-07-22 11:34:39 -07:00
David Green c9cebda772 [AArch64] Adjust the cost of integer sum reductions
This changes the cost to (LT.first-1) * cost(add) + 2, where the cost of
an add is assumed to be 1. This brings it inline with the other
reductions.

Differential Revision: https://reviews.llvm.org/D106240
2021-07-22 18:19:54 +01:00
Simon Pilgrim e1bdb57958 [CostModel][X86] Adjust shift SSE legalized costs based on llvm-mca reports.
Update shl/lshr/ashr costs based on the worst case costs from the script in D103695.
2021-07-22 18:12:49 +01:00
Victor Huang 26ea4a4432 [PowerPC] Add PowerPC "__stbcx" builtin and intrinsic for XL compatibility
This patch is in a series of patches to provide builtins for compatibility
with the XL compiler. This patch adds the builtin and intrinsic for "__stbcx".

Reviewed By: nemanjai, #powerpc

Differential revision: https://reviews.llvm.org/D106484
2021-07-22 10:48:46 -05:00
Cullen Rhodes 00e87e1c5b [AArch64][SME] Improve diagnostic for vector select register
Reviewed By: sdesmalen

Differential Revision: https://reviews.llvm.org/D106540
2021-07-22 13:46:40 +00:00
Fraser Cormack b115c038d2 [RISCV] Fix a crash when lowering split float arguments
Lowering certain float vectors without legal vector types could cause a
crash due to a bad interaction between passing floats via GPRs and
argument splitting. Split vector floats appear just like scalar floats.
Under certain situations we choose to pass these float arguments via
GPRs and use an XLenVT location and set the 'BCvt' info to track how
they must be converted back to floating-point values. However, later
logic for handling split arguments may take over, in which case we lose
the previous information and set the 'Indirect' info, thus incorrectly
lowering to integer types.

I don't believe that we would have come across the notion of split
floating-point arguments before. This patch addresses the issue by
updating the lowering so that split arguments are only passed indirectly
when they are scalar integer types.

This has some change to how we lower some larger illegal float vectors,
as can be seen in 'fastcc-float.ll' where the vector is now passed
partly in registers and partly on the stack.

Reviewed By: luismarques

Differential Revision: https://reviews.llvm.org/D102852
2021-07-22 09:55:26 +01:00
Fraser Cormack 7b3a69bc16 [RISCV] Lower more BUILD_VECTOR sequences to RVV's VID
This relands a6ca88e908 which was originally
reverted due to overflow bugs in e3fa2b1eab.

This patch teaches the compiler to identify a wider variety of
`BUILD_VECTOR`s which form integer arithmetic sequences, and to lower
them to `vid.v` with modifications for non-unit steps and non-zero
addends.
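
For example (my sketch, not from the tests), a sequence with step 2 and
addend 1:

    define <4 x i32> @stride2_plus1() {
      ; assumed lowering shape:  vid.v   v8         ; <0,1,2,3>
      ;                          vadd.vv v8, v8, v8 ; <0,2,4,6> (step 2)
      ;                          vadd.vi v8, v8, 1  ; <1,3,5,7> (addend 1)
      ret <4 x i32> <i32 1, i32 3, i32 5, i32 7>
    }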

The sequences handled by this optimization must either be monotonically
increasing or decreasing. Consecutive elements holding the same value
indicate a fractional step which, while simple mathematically,
becomes more complex to handle both in the realm of lossy integer
division and in the presence of `undef`s.

For example, a common "interleaving" shuffle index will be lowered by
LLVM to both `<0,u,1,u,2,...>` and `<u,0,u,1,u,...>` `BUILD_VECTOR`
nodes. Either of these would ideally be lowered to `vid.v` shifted right
by 1. Detection of this sequence in presence of general `undef` values
is more complicated, however: `<0,u,u,1,>` could match either
`<0,0,0,1,>` or `<0,0,1,1,>` depending on later values in the sequence.
Both are possible, so backtracking or multiple passes is inevitable.

Sticking to monotonic sequences keeps the logic simpler as it can be
done in one pass. Fractional steps will likely be a separate
optimization in a future patch.

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D104921
2021-07-22 09:36:12 +01:00
Ben Shi 9e5c5afc7e [RISCV] Optimize multiplication in the zba extension with SH*ADD
This patch make the following optimization.

(mul x, 3 * power_of_2) -> (SLLI (SH1ADD x, x), bits)
(mul x, 5 * power_of_2) -> (SLLI (SH2ADD x, x), bits)
(mul x, 9 * power_of_2) -> (SLLI (SH3ADD x, x), bits)
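
Taking the first pattern as a worked example (mine):

    define i64 @mul24(i64 %x) {
      ; 24 = 3 * 8, so:  sh1add a0, a0, a0   ; a0 = x + (x << 1) = 3*x
      ;                  slli   a0, a0, 3    ; a0 = (3*x) << 3 = 24*x
      %r = mul i64 %x, 24
      ret i64 %r
    }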

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D105796
2021-07-22 10:28:41 +08:00
Carl Ritson 6efb3220b4 [AMDGPU] Add VReg_192/VReg_224 support for MIMG instructions
Allow MIMG instructions to be selected with 6/7 VGPRs for vaddr.
Previously these were rounded up to VReg_256; avoiding that saves VGPRs.

Reviewed By: foad

Differential Revision: https://reviews.llvm.org/D103800
2021-07-22 10:42:15 +09:00
Carl Ritson 9dcd75f86f [AMDGPU] Allow frontends to disable null export for pixel shaders
Disable null export (for kills) when a frontend defines a pixel
shader as not exporting, using the amdgpu-color-export and
amdgpu-depth-export function attributes.
This allows the generation of export-free pixel shaders.

Reviewed By: foad

Differential Revision: https://reviews.llvm.org/D105683
2021-07-22 10:20:46 +09:00
Thomas Lively 8af333cf1a [WebAssembly] Replace @llvm.wasm.popcnt with @llvm.ctpop.v16i8
Use the standard target-independent intrinsic to take advantage of standard
optimizations.

Differential Revision: https://reviews.llvm.org/D106506
2021-07-21 16:45:54 -07:00
Jessica Paquette c75a2bbe08 [AArch64][GlobalISel] Change | -> || in an if
I wrote the wrong type of OR by mistake.
2021-07-21 14:57:31 -07:00
Stanislav Mekhanoshin fe197ef9f1 [AMDGPU] Mark relevant rematerializable VOP3 instructions
Differential Revision: https://reviews.llvm.org/D106110
2021-07-21 14:44:13 -07:00
Stanislav Mekhanoshin 9625ca5b60 [AMDGPU] Mark relevant rematerializable VOP2 instructions
Differential Revision: https://reviews.llvm.org/D106023
2021-07-21 14:24:59 -07:00
David Green ba42f6a4b5 [ARM] Pass SelectionDAG to methods that don't require DCI. NFC
In these methods DCI is never used, only the DAG from it. Pass the DAG
directly, cleaning up the code a little.
2021-07-21 22:11:09 +01:00
Stanislav Mekhanoshin 4eb24817ec [AMDGPU] Mark all relevant VOP1 instructions rematerializable
Differential Revision: https://reviews.llvm.org/D105919
2021-07-21 14:05:32 -07:00
Stanislav Mekhanoshin d01b34ed31 [AMDGPU] Move perfhint analysis
This is an SCC pass; moving it to the end of the SCC PM saves one
Function PM. This needs the analysis to take into account
memory access width since it is now placed after the
load/store optimizer (D105651).

Differential Revision: https://reviews.llvm.org/D105652
2021-07-21 13:06:49 -07:00
Jessica Paquette d0af732bd0 [AArch64][GlobalISel] Widen s2 and s4 G_IMPLICIT_DEF + G_FREEZE
These had

```
.clampScalar(0, s1, 64)
.widenScalarToNextPow2(0, 8)
```

If you have s2 or s4, then `widenScalarToNextPow2` does nothing.

This changes the `widenScalarToNextPow2` rule to use s8 as the minimum type
instead, allowing us to correctly widen s2 and s4.

This does not impact s1, since it's marked as legal already.

Differential Revision: https://reviews.llvm.org/D106413
2021-07-21 12:59:20 -07:00
Stanislav Mekhanoshin a397c1c82f [AMDGPU] Tune perfhint analysis to account access width
A function with fewer memory instructions but wider accesses
is the same as a function with more but narrower accesses
in terms of memory boundness. In fact the pass would give
different answers before and after vectorization without
this change.

Differential Revision: https://reviews.llvm.org/D105651
2021-07-21 12:46:10 -07:00
Craig Topper a467c08570 [RISCV] Cleanup comment around vector tail policy handling. NFC
vmv.x.s and reductions don't ignore tail policy anymore.
2021-07-21 12:45:08 -07:00
Eli Friedman 0ca46a1757 [SelectionDAG] Fix the representation of ISD::STEP_VECTOR.
The existing rule about the operand type is strange.  Instead, just say
the operand is a TargetConstant with the right width.  (Legalization
ignores TargetConstants, so it doesn't matter if that width is legal.)
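
For context, a minimal IR source of ISD::STEP_VECTOR (my sketch); the
node's step operand is what now becomes a TargetConstant:

    declare <vscale x 2 x i64> @llvm.experimental.stepvector.nxv2i64()

    define <vscale x 2 x i64> @step() {
      ; produces <0, 1, 2, ...>; strided sequences are built from this
      ; node with the step carried as its (Target)constant operand
      %s = call <vscale x 2 x i64> @llvm.experimental.stepvector.nxv2i64()
      ret <vscale x 2 x i64> %s
    }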

Highlights:

1. I had to substantially rewrite the AArch64 isel patterns to expect a
TargetConstant.  Nothing too exotic, but maybe a little hairy. Maybe
worth considering a target-specific node with some dagcombines instead
of this complicated nest of isel patterns.
2. Our behavior on RV32 for vectors of i64 has changed slightly. In
particular, we correctly preserve the width of the arithmetic through
legalization.  This changes the DAG a bit. Maybe room for
improvement here.
3. I explicitly defined the behavior around overflow. This is necessary
to make the DAGCombine transforms legal, and I don't think it causes any
practical issues.

Differential Revision: https://reviews.llvm.org/D105673
2021-07-21 10:58:40 -07:00
Thomas Lively 1a57ee1276 [WebAssembly] Codegen for v128.load{32,64}_zero
Replace the experimental clang builtins and LLVM intrinsics for these
instructions with normal instruction selection patterns. The wasm_simd128.h
intrinsics header was already using portable code for the corresponding
intrinsics, so now it produces the correct instructions.

Differential Revision: https://reviews.llvm.org/D106400
2021-07-21 09:02:12 -07:00
Eric Astor 69551486fd [ms] [llvm-ml] Restrict implicit RIP-relative addressing to named-variable references
ML64.EXE applies implicit RIP-relative addressing only to memory references that include a named-variable reference.

Reviewed By: mstorsjo

Differential Revision: https://reviews.llvm.org/D105372
2021-07-21 11:49:58 -04:00
Quinn Pham e002d251dd [PowerPC] Floating Point Builtins for XL Compat.
This patch is in a series of patches to provide
builtins for compatibility with the XL compiler.
This patch adds builtins related to floating point
operations.

Reviewed By: #powerpc, nemanjai, amyk, NeHuang

Differential Revision: https://reviews.llvm.org/D103986
2021-07-21 08:33:39 -05:00
Sebastian Neubauer b642d01fa8 [AMDGPU] Improve killed check for vgpr optimization
The killed flag is not always set. E.g. when a variable is used in a
loop, it is never marked as killed, although it is unused in following
basic blocks. Also, we try to deprecate kill flags and not use them.

Check if the register is live in the endif block. If not, consider it
killed in the then and else blocks.

The vgpr-liverange tests have two new tests with loops
(pre-committed, so the diff is visible).
I also needed to change the subtarget to gfx10.1, otherwise calls
do not work.

Differential Revision: https://reviews.llvm.org/D106291
2021-07-21 15:24:59 +02:00
Jay Foad 3ed29f960c [AMDGPU] NFC refactoring in isel for buffer access intrinsics
Rename getBufferOffsetForMMO to updateBufferMMO and pass in the MMO to
be updated, in preparation for the bug fix in D106284.

Call updateBufferMMO consistently for all buffer intrinsics, even the
ones that use setBufferOffsets to decompose a combined offset
expression.

Add a getIdxEn helper function.

Differential Revision: https://reviews.llvm.org/D106354
2021-07-21 11:12:49 +01:00
Cullen Rhodes 008c755d76 [AArch64][SME] Support .arch and .arch_extension assembler directives
Reviewed By: david-arm

Differential Revision: https://reviews.llvm.org/D105566
2021-07-21 08:40:27 +00:00
Tim Northover 19d2e42be2 ARM: don't return by popping PC if we have to adjust the stack afterwards.
In mandatory tail calling conventions we might have to deallocate stack
space used by our arguments before return. This happens after popping
CSRs, so the pop cannot be turned into the return itself in this case.

The else branch here was already a nop, so it has been removed as a tidy-up.
2021-07-21 09:35:14 +01:00
Tim Northover 291e0daa6e AArch64: support 8 & 16-bit atomic operations in GlobalISel
We have SelectionDAG patterns for 8 & 16-bit atomic operations, but they
assume the value types will have been legalized to 32-bits. So this adds
the ability to widen them, in both the AArch64-specific and the generic
GISel infrastructure. 2021-07-21 09:35:14 +01:00
2021-07-21 09:35:14 +01:00
Cullen Rhodes 2d80bbd939 [AArch64][SME] Add mova instructions
This patch adds the mova instruction to insert/extract an SVE vector
register to/from a ZA tile vector.

The preferred MOV aliases are also implemented.

Depends on D105572.

The reference can be found here:
https://developer.arm.com/documentation/ddi0602/2021-06

Reviewed By: david-arm, CarolineConcatto

Differential Revision: https://reviews.llvm.org/D105574
2021-07-21 08:20:01 +00:00
Cullen Rhodes 6c32cfe85c [AArch64][SME] Add ldr and str instructions
The reference can be found here:
https://developer.arm.com/documentation/ddi0602/2021-06

Reviewed By: kmclaughlin

Differential Revision: https://reviews.llvm.org/D105573
2021-07-21 08:17:13 +00:00
Tianqing Wang bec4a8157d [X86] Update MachineLoopInfo in CMOV conversion.
If a CMOV is in a loop and is converted to branches, CMOV conversion wouldn't
add the newly created basic blocks to loop info. Since the candidates are
collected based on loops, instructions in these basic blocks will be ignored.

Reviewed By: pengfei

Differential Revision: https://reviews.llvm.org/D104623
2021-07-21 10:53:46 +08:00
Albion Fung 2fd1520247 [PowerPC] Implemented mtmsr, mfspr, mtspr Builtins
Implemented builtins for mtmsr, mfspr, mtspr on PowerPC;
the patch is intended for XL Compatibility.

Differential revision: https://reviews.llvm.org/D106130
2021-07-20 17:51:00 -05:00