Commit Graph

56350 Commits

Stanislav Mekhanoshin dd4766451e [AMDGPU] Use generated RegisterPressureSets enum
Differential Revision: https://reviews.llvm.org/D74671
2020-02-18 10:34:03 -08:00
Matt Arsenault f4d3765fd9 CodeGen: Move undef_tied_input declaration
This doesn't belong in ARM specific code since it's generally
recognized by tablegen.
2020-02-18 10:33:10 -08:00
Mikhail Maltsev 63809d365e [ARM,MVE] Add vbrsrq intrinsics family
Summary:
This patch adds a new MVE intrinsics family, `vbrsrq`: vector bit
reverse and shift right. The intrinsics are compiled into the VBRSR
instruction. Two new LLVM IR intrinsics were also added: arm.mve.vbrsr
and arm.mve.vbrsr.predicated.

Reviewers: simon_tatham, dmgreen, ostannard, MarkMurrayARM

Reviewed By: simon_tatham

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D74721
2020-02-18 17:31:21 +00:00
Sean Fertile 3126b556d1 [PowerPC][NFC] Add defines to help create the SpillSlot arrays.
Create preprocessor defines for callee saved floating-point register spill
slots, vector register spill slots, and both 32-bit and 64-bit general
purpose register spill slots. This is an NFC refactor to prepare for
adding ABI compliant callee saves and restores for AIX.
2020-02-18 11:52:04 -05:00
Andrew Wei 4ca753f4e3 [RISCV] Implement mayBeEmittedAsTailCall for tail call optimization
Implement the TargetLowering callback mayBeEmittedAsTailCall for RISC-V,
which lets CodeGenPrepare duplicate return instructions to enable tail-call
optimization.

Differential Revision: https://reviews.llvm.org/D73699
2020-02-18 23:56:42 +08:00
Sander de Smalen 8fbc925807 Add OffsetIsScalable to getMemOperandWithOffset
Summary:
Making `Scale` a `TypeSize` in AArch64InstrInfo::getMemOpInfo
has the effect that all places where this information is used
(notably, TargetInstrInfo::getMemOperandWithOffset) will need
to consider Scale (and the derived Offset) possibly being scalable.

This patch adds a new operand `bool &OffsetIsScalable` to
TargetInstrInfo::getMemOperandWithOffset and fixes up all
the places where this function is used, to consider the
offset possibly being scalable.

In most cases this means bailing out, because the algorithm does not
(or cannot) support scalable offsets in places where it does some form
of alias checking, for example.
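
As an illustration, here is roughly what the bail-out looks like for a
caller (the helper below is hypothetical; the getMemOperandWithOffset
signature follows this patch):

  #include "llvm/CodeGen/TargetInstrInfo.h"

  using namespace llvm;

  // Hypothetical caller: only reason about the access when the offset is a
  // fixed byte count, and bail out when it must be scaled by the runtime
  // vector length (as for SVE).
  static bool getFixedMemOffset(const MachineInstr &MI,
                                const TargetInstrInfo &TII,
                                const TargetRegisterInfo *TRI,
                                int64_t &Offset) {
    const MachineOperand *BaseOp;
    bool OffsetIsScalable;
    if (!TII.getMemOperandWithOffset(MI, BaseOp, Offset, OffsetIsScalable,
                                     TRI))
      return false;
    // E.g. alias checks downstream assume a compile-time byte offset.
    return !OffsetIsScalable;
  }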

Reviewers: rovka, efriedma, kristof.beyls

Reviewed By: efriedma

Subscribers: wuzish, kerbowa, MatzeB, arsenm, nemanjai, jvesely, nhaehnle, hiraditya, kbarton, javed.absar, asb, rbar, johnrusso, simoncook, sabuasal, niosHD, jrtc27, MaskRay, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, PkmX, jocewei, jsji, Jim, lenary, s.egerton, pzheng, sameer.abuasal, apazos, luismarques, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72758
2020-02-18 15:53:29 +00:00
Djordje Todorovic 2bf44d11cb Revert "Reland "[DebugInfo] Enable the debug entry values feature by default""
This reverts commit rGa82d3e8a6e67.
2020-02-18 16:38:11 +01:00
Kazushi (Jam) Marukawa 5526786a56 [VE] TLS codegen
Summary:
Codegen and tests for thread-local storage.
This implements only the general dynamic model due to limitations in nld 2.26.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D74718
2020-02-18 16:09:12 +01:00
Luke Geeson 4518aab289 [AArch64] Add Cortex-A34 Support for clang and llvm
This patch upstreams support for the AArch64 Armv8-A CPU Cortex-A34.

In detail, this adds support for:
 - mcpu option in clang
 - AArch64 Target Features in clang
 - llvm AArch64 TargetParser definitions

Details of the CPU can be found here:
https://developer.arm.com/ip-products/processors/cortex-a/cortex-a34

Reviewers: SjoerdMeijer

Reviewed By: SjoerdMeijer

Subscribers: SjoerdMeijer, kristof.beyls, hiraditya, cfe-commits,
llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D74483

Change-Id: Ida101fc544ca183a0a0e61a1277c8957855fde0b
2020-02-18 14:56:16 +00:00
Matt Arsenault 37c452a289 AMDGPU/GlobalISel: Adjust branch target when lowering loop intrinsic
This needs to steal the branch target like the other control flow
intrinsics.
2020-02-18 06:35:40 -08:00
Djordje Todorovic a82d3e8a6e Reland "[DebugInfo] Enable the debug entry values feature by default"
This patch enables the debug entry values feature.

  - Remove the (CC1) experimental -femit-debug-entry-values option
  - Enable it for x86, arm and aarch64 targets
  - Resolve the test failures
  - Leave the llc experimental option for targets that do not
    support the CallSiteInfo yet

Differential Revision: https://reviews.llvm.org/D73534
2020-02-18 14:41:08 +01:00
Kerry McLaughlin d4576080da [AArch64][SVE] Add remaining SVE2 intrinsics for widening DSP operations
Summary:
Implements the following intrinsics:
 - llvm.aarch64.sve.[s|u]mullb_lane
 - llvm.aarch64.sve.[s|u]mullt_lane
 - llvm.aarch64.sve.sqdmullb_lane
 - llvm.aarch64.sve.sqdmullt_lane
 - llvm.aarch64.sve.[s|u]addwb
 - llvm.aarch64.sve.[s|u]addwt
 - llvm.aarch64.sve.[s|u]shllb
 - llvm.aarch64.sve.[s|u]shllt
 - llvm.aarch64.sve.[s|u]subwb
 - llvm.aarch64.sve.[s|u]subwt

Reviewers: sdesmalen, dancgr, efriedma, c-rhodes, rengolin

Reviewed By: sdesmalen

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, cameron.mcinally, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73903
2020-02-18 10:28:00 +00:00
Mikhail Maltsev 58f66f8af0 [ARM,CDE] Cosmetic changes, additional driver tests
Summary:
This is a follow-up patch addressing post-commit comments in
https://reviews.llvm.org/D74044:
* Add more Clang driver tests (-march=armv8.1m.main and -march=armv8.1m.main+mve.fp)
* Clang-format a chunk in ARMAsmParser.cpp
* Add a missing copyright header to ARMInstrCDE.td

Reviewers: SjoerdMeijer, simon_tatham, dmgreen

Reviewed By: SjoerdMeijer

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D74732
2020-02-18 10:23:09 +00:00
Simon Tatham c32af4447f [ARM,MVE] Add the vmovnbq,vmovntq intrinsic family.
Summary:
These are in some sense the inverse of vmovl[bt]q: they take a vector
of n wide elements and truncate each to half its width. So they only
write half a vector's worth of output data, and therefore they also
take an 'inactive' parameter to provide the other half of the data in
the output vector. So vmovnb overwrites the even lanes of 'inactive'
with the narrowed values from the main input, and vmovnt overwrites
the odd lanes.

LLVM had existing codegen which generates these MVE instructions in
response to IR that takes two vectors of wide elements, or two vectors
of narrow ones. But in this case, we have one vector of each. So my
clang codegen strategy is to narrow the input vector of wide elements
by simply reinterpreting it as the output type, and then we have two
narrow vectors and can represent the operation as a vector shuffle
that interleaves lanes from both of them.
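
As a sketch of that strategy, written against today's IRBuilder API for one
illustrative type combination (this is not the actual clang CGBuiltin code):

  #include "llvm/IR/IRBuilder.h"

  using namespace llvm;

  // Illustrative vmovnb-style codegen for <4 x i32> -> <8 x i16>.
  static Value *emitVmovnbLike(IRBuilder<> &B, Value *Inactive, Value *Wide) {
    // Reinterpret the wide vector as the narrow result type; in
    // little-endian MVE the truncated wide values then sit in the even lanes.
    Type *NarrowTy = FixedVectorType::get(B.getInt16Ty(), 8);
    Value *Narrow = B.CreateBitCast(Wide, NarrowTy);
    // Interleave: even lanes from the narrowed input, odd lanes kept from
    // 'Inactive' (mask indices 8..15 select from the second operand).
    int Mask[] = {0, 9, 2, 11, 4, 13, 6, 15};
    return B.CreateShuffleVector(Narrow, Inactive, Mask);
  }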

Even so, not all the cases I needed ended up being selected as a
single MVE instruction, so I've added a couple more patterns that spot
combinations of the 'MVEvmovn' and 'ARMvrev32' SDNodes which can be
generated as a VMOVN instruction with operands swapped.

This commit adds the unpredicated forms only.

Reviewers: dmgreen, miyuki, MarkMurrayARM, ostannard

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D74337
2020-02-18 09:34:50 +00:00
Simon Tatham 5e97940cd2 [ARM,MVE] Add the vmovlbq,vmovltq intrinsic family.
Summary:
These intrinsics take a vector of 2n elements, and return a vector of
n wider elements obtained by sign- or zero-extending every other
element of the input vector. They're represented in IR as a
shufflevector that extracts the odd or even elements of the input,
followed by a sext or zext.

Existing LLVM codegen already matches this pattern and generates the
VMOVLB instruction (which widens the even-index input lanes). But no
existing isel rule was generating VMOVLT, so I've added some. However,
the new rules currently only work in little-endian MVE, because the
pattern they expect from isel lowering includes a bitconvert which
doesn't have the right semantics in big-endian.

The output of one existing codegen test is improved by those new
rules.

This commit adds the unpredicated forms only.

Reviewers: dmgreen, miyuki, MarkMurrayARM, ostannard

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D74336
2020-02-18 09:34:50 +00:00
Simon Tatham 9dcc1667ab [ARM] Allow `ARMVectorRegCast` to match bitconverts too. (NFC)
Summary:
When we start putting instances of `ARMVectorRegCast` in complex isel
patterns, it will be awkward that they're often turned into the more
standard `bitconvert` in little-endian mode. We'd rather not have to
write separate isel patterns for the two endiannesses, matching
different but equivalent cast operations.

This change aims to fix that awkwardness in advance, by turning the
Tablegen record `ARMVectorRegCast` from a simple `SDNode` instance
into a `PatFrags` that can match either kind of cast – with a
predicate that prevents it matching a bitconvert in the big-endian
case, where bitconvert isn't semantically identical.

No existing code generation should be affected by this change, but it
will enable the patterns introduced by D74336 to work in both
endiannesses.

Reviewers: dmgreen

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74716
2020-02-18 09:34:50 +00:00
Simon Tatham 68b49f7ef4 [ARM,MVE] Add intrinsics vclzq and vclsq.
Summary:
vclzq maps nicely to the existing target-independent @llvm.ctlz IR
intrinsic. But vclsq ('count leading sign bits') has no corresponding
target-independent intrinsic, so I've made up @llvm.arm.mve.vcls.

This commit adds the unpredicated forms only.

Reviewers: dmgreen, miyuki, MarkMurrayARM, ostannard

Reviewed By: miyuki

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D74335
2020-02-18 09:34:50 +00:00
Simon Tatham c8b3196e54 [ARM,MVE] Add intrinsics for FP rounding operations.
Summary:
This adds the unpredicated forms of six different MVE intrinsics which
all round a vector of floating-point numbers to integer values,
leaving them still in FP format, differing only in rounding mode and
exception settings.

Five of them map to existing target-independent intrinsics in LLVM IR,
such as @llvm.trunc and @llvm.rint. The sixth, mapping to the `vrintn`
instruction, is done by inventing a target-specific intrinsic.

(`vrintn` behaves the same as `vrintx` in terms of the output value:
the side effects on the FPSCR flags are the only difference between
the two. But ACLE specifies separate user-callable intrinsics for the
two, so the side effects matter enough to make sure we generate the
right one of the two instructions in each case.)

Reviewers: dmgreen, miyuki, MarkMurrayARM, ostannard

Reviewed By: miyuki

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D74333
2020-02-18 09:34:50 +00:00
Craig Topper e90dc7c48b [X86] Move avx512 code that forces zeros to the false side of vselects above a check for legal types.
This helps this transform occur earlier so we can fold the not
with setcc. If we delay it until after type legalization we might
have introduced instructions to widen the mask if the vselect was
widened. This can prevent the not from making it to the setcc.

We could of course add more DAG combines to handle that, but
moving this earlier is easier.
2020-02-17 22:24:21 -08:00
Craig Topper b0840934a7 [X86] Use isScalarFPTypeInSSEReg to simplify code in LowerSELECT. NFC 2020-02-17 19:43:57 -08:00
Jim Lin fa75bffbbb [XCore][NFC] Remove trailing space 2020-02-18 10:32:58 +08:00
Craig Topper 3f4490d384 [X86] Add one use check to '0-x == y --> x+y == 0' in EmitCmp.
I failed to copy it when I moved this in
b62de210cf
2020-02-17 18:16:42 -08:00
Stanislav Mekhanoshin 8e760e1018 [TBLGEN] Inhibit generation of unneeded psets
Differential Revision: https://reviews.llvm.org/D74744
2020-02-17 15:38:08 -08:00
Craig Topper 68400a2308 [X86] Add missing isel pattern for BLCFILL producing flags. 2020-02-17 13:20:13 -08:00
Matt Arsenault 5e8792453d AMDGPU/GlobalISel: Fix RegBankSelect for G_SHUFFLE_VECTOR 2020-02-17 15:11:25 -05:00
Matt Arsenault f742a28ae3 AMDGPU/GlobalISel: Custom lower 32-bit G_SDIV/G_SREM 2020-02-17 15:09:51 -05:00
Matt Arsenault e240b27d6d AMDGPU/GlobalISel: Allow arbitrary global values
Treat unknown address spaces as global
2020-02-17 11:32:28 -08:00
Craig Topper 43e948c4b7 [X86] Change how the alignment for the stack object is created in LowerFLT_ROUNDS_.
We don't need FrameInfo's concept of the stack alignment. We just
need to tell it the desired alignment, which in this case is 2.
2020-02-17 11:27:34 -08:00
Craig Topper b62de210cf [X86] Move '0-x == y --> x+y == 0' and similar combines to EmitCmp.
AArch64 handles this pattern in its lowering code by emitting CMN.
ARM handles it as an isel pattern.
2020-02-17 11:27:34 -08:00
Matt Arsenault 54137bbaaf GlobalISel: Allow running localizer earlier
This required legal and regbankselected MIR for seemingly no
reason. For AMDGPU this wouldn't see legalized G_GLOBAL_VALUEs.
2020-02-17 11:24:06 -08:00
Matt Arsenault 96db12d507 AMDGPU/GlobalISel: Custom lower 32-bit G_UDIV/G_UREM
AMDGPUCodeGenPrepare expands this most of the time, but not always. We
will always at least need a fallback option here. This is the 3rd
implementation of the same expansion in the backend. Eventually I
would like to eliminate the IR expansion (and the DAG version
obviously).

Currently the new legalizer path produces a better result, since the
IR expansion results in extra operations which need to be combined
out. Notably, the IR expansion results in multiplies by 0.
2020-02-17 11:05:50 -08:00
Matt Arsenault 0e2eb357e0 GlobalISel: Extend narrowing to G_ASHR 2020-02-17 10:42:59 -08:00
John Brawn 594a89f727 [FPEnv][ARM] Don't call mutateStrictFPToFP when lowering
mutateStrictFPToFP can delete the node and replace it with another with the same
value which can later cause problems, and returning the result of
mutateStrictFPToFP doesn't work because SelectionDAGLegalize expects that the
returned value has the same number of results as the original. Instead handle
things by doing the mutation manually.

Differential Revision: https://reviews.llvm.org/D74726
2020-02-17 18:19:25 +00:00
Mikhail Maltsev 489f62e801 [ARM,MVE] Add vector-scalar intrinsics
Summary:
This patch adds vector-scalar variants to the following families of
MVE intrinsics:
* vaddq
* vsubq
* vmulq
* vqaddq
* vqsubq
* vhaddq
* vhsubq
* vqdmulhq
* vqrdmulhq

The vector-scalar variants perform a splat operation on the scalar
operand and then perform the same operations as their vector-vector
counterparts. Code generation is done accordingly (using LLVM IR 'insert'
and 'shuffle' operations which are later converted into an ARMvdup
SDNode).
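
The splat step corresponds to IRBuilder's CreateVectorSplat helper, which
emits exactly that insert + shuffle pair; a minimal sketch assuming an
integer vaddq (not the actual clang code):

  #include "llvm/IR/IRBuilder.h"

  using namespace llvm;

  // Sketch of vaddq(vector, scalar) codegen: splat the scalar, then reuse
  // the vector-vector form. CreateVectorSplat emits an insertelement plus
  // a zero-mask shufflevector, later matched as an ARMvdup SDNode.
  static Value *emitVectorScalarAdd(IRBuilder<> &B, Value *Vec,
                                    Value *Scalar) {
    unsigned NumElts =
        cast<FixedVectorType>(Vec->getType())->getNumElements();
    return B.CreateAdd(Vec, B.CreateVectorSplat(NumElts, Scalar));
  }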

Reviewers: simon_tatham, dmgreen, MarkMurrayARM, ostannard

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D74620
2020-02-17 17:47:05 +00:00
Nikita Popov 80397d2d12 [IRBuilder] Delete copy constructor
D73835 will make IRBuilder no longer trivially copyable. This patch
deletes the copy constructor in advance, to separate out the breakage.

Currently, the IRBuilder copy constructor is usually used by accident,
not by intention.  In rG7c362b25d7a9 I've fixed a number of cases where
functions accepted IRBuilder rather than IRBuilder &, thus performing
an unnecessary copy. In rG5f7b92b1b4d6 I've fixed cases where an
IRBuilder was copied, while an InsertPointGuard should have been used
instead.

The only non-trivial use of the copy constructor is the
getIRBForDbgInsertion() helper, for which I separated construction and
setting of the insertion point in this patch.
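
In short, the two surviving patterns look like this (helper names are
hypothetical; InsertPointGuard is the real RAII type):

  #include "llvm/IR/IRBuilder.h"

  using namespace llvm;

  // Builders are now passed by reference; a by-value parameter used to
  // compile as a silent (and usually accidental) copy.
  static void emitHelper(IRBuilder<> &B);

  // Where code copied a builder just to restore the insertion point later,
  // use InsertPointGuard instead of the deleted copy constructor.
  static void emitAt(IRBuilder<> &B, Instruction *IP) {
    IRBuilderBase::InsertPointGuard Guard(B); // restores on scope exit
    B.SetInsertPoint(IP);
    // ... emit instructions before IP ...
  }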

Differential Revision: https://reviews.llvm.org/D74693
2020-02-17 18:14:48 +01:00
Nikita Popov 98ed613ccc [IRBuilder] Avoid passing IRBuilder by value; NFC
I've fixed most of these before, but missed some occurrences
in targets I don't usually build.
2020-02-17 18:14:47 +01:00
Matt Arsenault 8550859535 GlobalISel: Extend shift narrowing to G_SHL 2020-02-17 09:13:37 -08:00
Matt Arsenault d9e8b2cbcc AMDGPU/GlobalISel: Skip DAG hack passes on selected functions
The way fallback to SelectionDAG works is somewhat surprising to
me. When the fallback path is enabled, the entire set of SelectionDAG
selector passes is added to the pass pipeline, and each one needs to
check if the function was selected. This results in the surprising
behavior of running SIFixSGPRCopies for example, but only if
-global-isel-abort=2 is used.

SIAddIMGInitPass is also added in addInstSelector, but I'm not sure
why we have this pass or if it should be added somewhere else for
GlobalISel.
2020-02-17 08:33:17 -08:00
Matt Arsenault 78d455adf0 GlobalISel: Add combine to narrow G_LSHR
Produce an unmerge to a narrower type and introduce a narrower shift
if needed. I wasn't sure if there was a better way to parameterize the
target's preferred shift type for the GICombineRule, so manually call
the combine helper.
2020-02-17 08:04:52 -08:00
Matt Arsenault 86813e2768 AMDGPU/GlobalISel: Select llvm.amdgcn.s.buffer.load
Doesn't try to fail on the dlc bit pre-gfx10 like the DAG lowering
does.
2020-02-17 08:02:40 -08:00
Mikhail Maltsev dd4d093762 [ARM] Add initial support for Custom Datapath Extension (CDE)
Summary:
This patch adds assembly-level support for a new Arm M-profile
architecture extension, Custom Datapath Extension (CDE).

A brief description of the extension is available at
https://developer.arm.com/architectures/instruction-sets/custom-instructions

The latest specification for CDE is currently a beta release and is
available at
https://static.docs.arm.com/ddi0607/aa/DDI0607A_a_armv8m_arm_supplement_cde.pdf

CDE allows chip vendors to add custom CPU instructions.  The CDE
instructions re-use the same encoding space as existing coprocessor
instructions (such as MRC, MCR, CDP etc.). Each coprocessor in range
cp0-cp7 can be configured as either general purpose (GCP) or custom
datapath (CDEv1).  This configuration is defined by the CPU vendor and
is provided to LLVM using 8 subtarget features: cdecp0 ... cdecp7.

The semantics of CDE instructions are implementation-defined, but the
instructions are guaranteed to be pure (that is, they are stateless,
they do not access memory or any registers except their explicit
inputs/outputs).

CDE requires the CPU to support at least Armv8.0-M mainline
architecture. CDE includes 3 sets of instructions:
* Instructions that operate on general purpose registers and NZCV
  flags
* Instructions that operate on the S or D register file (require
  either FP or MVE extension)
* Instructions that operate on the Q register file (require MVE)

The user-facing names that can be specified on the command line are
the same as the 8 subtarget feature names. For example:

    $ clang -target arm-none-none-eabi -march=armv8m.main+cdecp0+cdecp3

tells the compiler that the coprocessors 0 and 3 are configured as
CDEv1 and the remaining coprocessors are configured as GCP (which is
the default).

Reviewers: simon_tatham, ostannard, dmgreen, eli.friedman

Reviewed By: simon_tatham

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D74044
2020-02-17 15:39:16 +00:00
Matt Arsenault 5fdc9851d0 AMDGPU/GlobalISel: Run the localizer pass
While looking at the output on real sized programs, there is a lot of
extra SGPR spilling compared to the DAG path. This seems to largely be
from all constants being SGPRs in the entry block.
2020-02-17 07:38:12 -08:00
Sander de Smalen a7a96c726e [AArch64] Implement passing SVE vectors by ref for AAPCS.
Summary:
This patch implements the part of the calling convention
where SVE Vectors are passed by reference. This means the
caller must allocate stack space for these objects and
pass the address to the callee.

Reviewers: efriedma, rovka, cameron.mcinally, c-rhodes, rengolin

Reviewed By: efriedma

Subscribers: tschuett, kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71216
2020-02-17 15:20:28 +00:00
Benjamin Kramer f4c59c0f97 [wasm] Unbreak after 5fc5c7db38. NFCI. 2020-02-17 15:49:49 +01:00
Benjamin Kramer 5fc5c7db38 Strength reduce vectors into arrays. NFCI. 2020-02-17 15:37:35 +01:00
Matt Arsenault e5805529bf AMDGPU/GlobalISel: Select v2s32->v2s16 G_TRUNC
It would be nice if there was a way to avoid the tied operand, but as
far as I can tell there isn't a way to use OR with op_sel to achieve
this.
2020-02-17 09:20:13 -05:00
Matt Arsenault 361f2a7818 AMDGPU/GlobalISel: Handle sbfe/ubfe intrinsic
Try to handle arbitrary scalar BFEs by packing the operands. The DAG
gives up on non-constant arguments. We're still missing any constant
folding, so we end up with pretty ugly code most of the time. Also
handle the 64-bit scalar case, which the DAG doesn't try to do.
2020-02-17 09:20:13 -05:00
Kerry McLaughlin 633db60f3e [AArch64][SVE] Add SVE index intrinsic
Summary:
Implements the @llvm.aarch64.sve.index intrinsic, which
takes a scalar base and step value.

This patch also adds the printSImm function to AArch64InstPrinter
to ensure that immediates of type i8 & i16 are printed correctly.

Reviewers: sdesmalen, andwar, efriedma, dancgr, cameron.mcinally, rengolin

Reviewed By: cameron.mcinally

Subscribers: tatyana-krasnukha, tschuett, kristof.beyls, hiraditya, rkruppe, arphaman, psnobl, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74550
2020-02-17 10:30:11 +00:00
Sjoerd Meijer e5043cd3c2 [AArch64] Fix small typos in the target description. NFC.
Patch by Tamas Petz.

Differential Revision: https://reviews.llvm.org/D74603
2020-02-17 10:13:47 +00:00
QingShan Zhang 113df90388 [PowerPC] Add the missing InstrAliasing for 64-bit rotate instructions
We have the InstAlias rules for the 32-bit rotates but are missing the
64-bit ones. For reference, the existing 32-bit aliases are:
  Rotate left immediate: rotlwi ra,rs,n  ->  rlwinm ra,rs,n,0,31
  Rotate left:           rotlw ra,rs,rb  ->  rlwnm ra,rs,rb,0,31

Differential Revision: https://reviews.llvm.org/D72676
2020-02-17 05:42:49 +00:00
Michael Liao 487fcc8d3d Fix `-Wpedantic` warning. NFC. 2020-02-17 00:18:01 -05:00
Craig Topper dd0b18e1ec [X86] Disable load folding for X86ISD::ADD with 128 as an immediate.
It can be turned into a sub with -128 instead as long as the
carry flag isn't used.
2020-02-16 20:52:51 -08:00
Craig Topper 464729cf7c [X86] Remove unnecessary check for null SDValue. NFC 2020-02-16 20:25:24 -08:00
Matt Arsenault 295bbea3ed AMDGPU/GlobalISel: Fix non-power-of-2 G_SITOFP/G_UITOFP
This wouldn't work for s33-s63 sources.
2020-02-16 22:48:57 -05:00
Matt Arsenault 044d40ed46 AMDGPU/GlobalISel: Move lambdas to normal function
These aren't using any local state
2020-02-16 22:48:32 -05:00
Zheng Chen 04377a81ae [PowerPC] Set instruction count as the first priority of LSR.
On PowerPC, set instruction count as the first priority of LSR by default.
Add an option, ppc-lsr-no-insns-cost, to return to the default LSR cost model.

Reviewed By: steven.zhang, jsji

Differential Revision: https://reviews.llvm.org/D72683
2020-02-16 21:04:55 -05:00
Craig Topper 20c5968e09 [X86] Increase latency of port5 masked compares and kshift/kadd/kunpck instructions in SKX scheduler model
Uops.info shows these as 4 cycle latency.
2020-02-16 16:59:37 -08:00
Craig Topper 272d35aef5 [X86] Separate floating point handling out of EmitCmp and emitFlagsForSetcc.
Both of those functions only have a single caller starting
at LowerSETCC. Just handle floating point directly in LowerSETCC.

This removes the need to pass Chain and IsSignaling all the way
down.
2020-02-16 10:51:05 -08:00
Craig Topper d26f11108b [X86] Split X86ISD::CMP into an integer and FP opcode. 2020-02-16 10:10:19 -08:00
Eric Astor ee2c0f76d7 [ms] [llvm-ml] Add a draft MASM parser
Summary:
Many directives are unavailable, and support for others may be limited.

This first draft has preliminary support for:
    - conditional directives (including errors),
    - data allocation (unsigned types up to 8 bytes, and ALIGN),
    - equates/variables (numeric and text),
    - and procedure directives (without parameters),
as well as COMMENT, ECHO, INCLUDE, INCLUDELIB, PUBLIC, and EXTERN. Text variables (aka text macros) are expanded in-place wherever the identifier occurs.

We deliberately ignore all ml.exe processor directives.

Prominent features not yet supported:
    - structs
    - macros (both procedures and functions)
    - procedures (with specified parameters)
    - substitution & expansion operators

Conditional directives are complicated by the fact that "ifdef rax" is a valid way to check if a file is being assembled for a 64-bit x86 processor; we add support for "ifdef <register>" in general, which requires adding a tryParseRegister method to all MCTargetAsmParsers. (Some targets require backtracking in the non-register case.)

Reviewers: rnk, thakis

Reviewed By: thakis

Subscribers: kerbowa, merge_guards_bot, wuzish, arsenm, dschuff, jyknight, dylanmckay, sdardis, nemanjai, jvesely, mgorny, sbc100, jgravelle-google, hiraditya, aheejin, kbarton, fedor.sergeev, asb, rbar, johnrusso, simoncook, sabuasal, niosHD, jrtc27, MaskRay, zzheng, edward-jones, atanasyan, rogfer01, MartinMosbeck, brucehoult, the_o, PkmX, jocewei, jsji, Jim, s.egerton, pzheng, sameer.abuasal, apazos, luismarques, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72680
2020-02-16 12:30:46 -05:00
Nikita Popov 7c362b25d7 [IRBuilder] Fix unnecessary IRBuilder copies; NFC
Fix a few cases where an IRBuilder is passed to a helper function
by value, while a by reference pass was intended.
2020-02-16 17:57:18 +01:00
Simon Pilgrim b85df2e185 [X86] combineX86ShuffleChain - add support for combining 512-bit shuffles to PALIGNR 2020-02-16 16:13:26 +00:00
Simon Pilgrim c9c1c2b335 [X86] combineX86ShuffleChain - add support for combining 512-bit shuffles to bit shifts 2020-02-16 16:13:25 +00:00
Sanjay Patel e48b536be6 [x86] form broadcast of scalar memop even with >1 use
The unseen logic diff occurs because MayFoldLoad() is defined like this:

static bool MayFoldLoad(SDValue Op) {
  return Op.hasOneUse() && ISD::isNormalLoad(Op.getNode());
}

The test diffs here all seem ok to me on screen/paper, but it's hard to know
if that will lead to universally better perf for all targets. For example,
if a target implements broadcast from mem as multiple uops, we would have to
weigh the potential reduction of instructions and register pressure vs.
possible increase in number of uops. I don't know if we can make a truly
informed decision on this at compile-time.

The motivating case that I'm looking at in PR42024:
https://bugs.llvm.org/show_bug.cgi?id=42024
...resembles the diff in extract-concat.ll, but we're not going to change the
larger example there without at least 1 other fix.

Differential Revision: https://reviews.llvm.org/D74088
2020-02-16 10:32:56 -05:00
Fangrui Song 46788a21f9 [X86][AsmPrinter] PrintSymbolOperand: prefer to lower ELF MO_GlobalAddress to .Lfoo$local 2020-02-15 13:45:29 -08:00
Craig Topper e5b3ae4b34 [X86] Merge two switches together to simplify some code. NFC 2020-02-15 12:55:51 -08:00
Craig Topper c3c20c83f3 [X86] Fix typo in comment. NFC 2020-02-15 12:48:19 -08:00
Simon Pilgrim 34a054ce71 [X86] combineX86ShuffleChain - add support for combining to X86ISD::ROTLI
Refactors matchShuffleAsBitRotate to allow use by both lowerShuffleAsBitRotate and matchUnaryPermuteShuffle.
2020-02-15 20:04:54 +00:00
Craig Topper 3f7649799b [X86] Move combineIncDecVector logic from Select to PreprocessISelDAG.
This allows it to work properly with masked inc/dec for avx512. Those
would have a vselect as the root node so didn't get a chance to call
combineIncDecVector.

This also simplifies the logic because we don't have to manage
the topological ordering.
2020-02-15 09:59:12 -08:00
Fangrui Song 549b436beb [MC] De-capitalize MCStreamer::Emit{Bundle,Addrsig}* etc
So far, all non-COFF-related Emit* functions have been de-capitalized.
2020-02-15 09:11:48 -08:00
Pavel Iliin dc0b815989 [AArch64][FIX] Correct register live range during pseudo expansion.
This commit fixes the broken tests after
commit b6a9fe2099
on the expensive check builder:
http://lab.llvm.org:8011/builders/llvm-clang-x86_64-expensive-checks-ubuntu/builds/2884
2020-02-15 12:16:56 +00:00
David Green da147ef0a5 [AArch64] Fixup kill flags on BSL generation
This hopefully fixes up the expensive checks bot.
2020-02-15 11:44:23 +00:00
Fangrui Song 774971030d [MCStreamer] De-capitalize EmitValue EmitIntValue{,InHex} 2020-02-14 23:08:40 -08:00
Fangrui Song 1dc16c752d [MC] Add MCSection::NonUniqueID and delete one MCContext::getELFSection overload 2020-02-14 20:25:52 -08:00
Matt Arsenault 8d8d46b57a AMDGPU/GlobalISel: Fix missing impdef of scc on boolean bit ops 2020-02-14 22:35:30 -05:00
Fangrui Song 6d2d589b06 [MC] De-capitalize another set of MCStreamer::Emit* functions
Emit{ValueTo,Code}Alignment Emit{DTP,TP,GP}* EmitSymbolValue etc
2020-02-14 19:26:52 -08:00
Fangrui Song a55daa1461 [MC] De-capitalize some MCStreamer::Emit* functions 2020-02-14 19:11:53 -08:00
Shiva Chen 1cae2f9d19 [RISCV] Correct the CallPreservedMask for the function call in an interrupt handler
CallPreservedMask is used to describe the register liveness after a
function call. The function call in an interrupt handler should use the same
CallPreservedMask as normal functions, so that only callee-saved registers
can live through the function call.
2020-02-15 09:14:04 +08:00
Matt Arsenault 65dbdc329f AMDGPU: Don't preserve analyses with div64 IR expansion
The dominator tree needs to be updated, but that isn't handled now.
2020-02-14 20:06:02 -05:00
Matt Arsenault dc3e499dd4 AMDGPU/GlobalISel: Fix G_EXTRACT of 96-bit results
This would assert on an unhandled size in getRegSplitParts.
2020-02-14 15:57:40 -08:00
Matt Arsenault 60fea2713d AMDGPU/GlobalISel: Improve 16-bit bswap
Match the new DAG behavior and use v_perm_b32 when available. Also
does better on SI/CI by expanding 16-bit swaps. Also fix
non-power-of-2 cases.
2020-02-14 15:57:39 -08:00
Stanislav Mekhanoshin 922197d664 [TBLGEN] Allow to override RC weight
Differential Revision: https://reviews.llvm.org/D74509
2020-02-14 15:49:52 -08:00
Craig Topper 8dc659c131 [Hexagon] Add an explicit makeArrayRef to pacify gcc 5.5
The array seemed to have decayed to a pointer before the ArrayRef
constructor got called so there was no size information available.
2020-02-14 13:51:39 -08:00
Austin Kerbow 07824e65bf [AMDGPU] Always enable XNACK feature when support is explicitly requested
Differential Revision: https://reviews.llvm.org/D74630
2020-02-14 11:58:58 -08:00
Matt Arsenault 9ec668606b AMDGPU: Add option to disable CGP division expansion
The division expansions in AMDGPUCodeGenPrepare can't be relied on for
correctness, since they punt to later optimization and possibly
legalization in some cases. We still need a way to be able to write
tests for the legalizer versions of the expansion. This is mostly for
GlobalISel, since the optimizations it is expecting aren't
implemented.

The interaction with the flag to expand 64-bit division in the IR is
pretty confusing, but these flags have different purposes.
2020-02-14 11:37:07 -08:00
Matt Arsenault 34d9a16e54 AMDGPU: Add option to expand 64-bit integer division in IR
I didn't realize we were already expanding 24/32-bit division here.
Use the available IntegerDivision utilities. This uses loops,
so produces significantly smaller code than the inline DAG expansion.
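
The utilities in question live in llvm/Transforms/Utils/IntegerDivision.h;
a minimal sketch of how an IR-level expansion can invoke them (the wrapper
function is hypothetical):

  #include <cassert>

  #include "llvm/IR/Instructions.h"
  #include "llvm/Transforms/Utils/IntegerDivision.h"

  using namespace llvm;

  // Hypothetical wrapper: replace a udiv/sdiv instruction in place with the
  // loop-based shift/subtract expansion instead of leaving it for isel.
  static bool expandDivInIR(BinaryOperator *Div) {
    assert(Div->getOpcode() == Instruction::UDiv ||
           Div->getOpcode() == Instruction::SDiv);
    return expandDivision(Div);
  }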

This now requires width reductions of 64-bit divisions before
introducing the expanded loops.

This helps work around missing legalization in GlobalISel for
division, which are the only remaining core instructions that didn't
work at all.

I think this is plausibly a better implementation than exists in the
DAG, although turning it on by default misses out on the constant
value optimizations and also needs benchmarking.
2020-02-14 11:16:08 -08:00
Craig Topper 391cc4dd41 [X86] Use ZERO_EXTEND instead of SIGN_EXTEND in the fast isel handling of convert_from_fp16. 2020-02-14 10:57:12 -08:00
Craig Topper fc0c72b2df [X86] Add AVX512 support to the fast isel code for Intrinsic::convert_from_fp16/convert_to_fp16. 2020-02-14 10:57:11 -08:00
Matt Arsenault bfbfa18591 GlobalISel: Lower s64->s16 G_FPTRUNC
This is more or less directly ported from the AMDGPU custom lowering
for FP_TO_FP16. I made a few minor fixups (using G_UNMERGE_VALUES
instead of creating shift/trunc to extract the two halves, and zexting
an inverted compare instead of select_cc).

This also does not include the fast math expansion the DAG which
converts to f32 and then to f16. I think that belongs in a
pre-legalize combine instead.
2020-02-14 10:46:58 -08:00
Volkan Keles 187686a22f [GlobalISel] LegalizationArtifactCombiner: Fix a bug in tryCombineMerges
As with the COPY instructions explained in D70616, we don't check the constraints
when combining G_UNMERGE_VALUES. Use the same logic used in D70616 to check
if registers can be replaced, or a COPY instruction needs to be built.

https://reviews.llvm.org/D70564
2020-02-14 10:45:58 -08:00
Brian Cain bf3b86bc2f [Hexagon] v67+ HVX register pairs should support either direction
Assembler now permits pairs like 'v0:1', which are encoded
differently from the odd-first pairs like 'v1:0'.

The compiler will require more work to leverage these new register
pairs.
2020-02-14 12:43:43 -06:00
Matt Arsenault 8c2c0b3637 AMDGPU: Improve i16/v2i16 bswap 2020-02-14 09:53:22 -08:00
Craig Topper 7badb38918 [X86] Fix copy/paste mistake in comment. NFC 2020-02-14 09:47:50 -08:00
Matt Arsenault a257bde420 AMDGPU/GlobalISel: Handle G_BSWAP 2020-02-14 09:09:44 -08:00
Pavel Iliin b6a9fe2099 [AArch64] Add BIT/BIF support.
This patch adds generation of the SIMD bitwise insert BIT/BIF instructions.
In the absence of GCC-like functionality for optimal constraint satisfaction
during register allocation, the bitwise insert and select patterns are matched
to a pseudo bitwise-select BSP instruction whose def is not tied.
It is expanded after register allocation, with the def tied
to BSL/BIT/BIF depending on the operand registers.
This gets rid of redundant moves.

Reviewers: t.p.northover, samparker, dmgreen

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D74147
2020-02-14 14:19:39 +00:00
Simon Pilgrim 2492075add [X86][SSE] lowerShuffleAsBitRotate - lower to vXi8 shuffles to ROTL on pre-SSSE3 targets
Without PSHUFB we are better off using ROTL (expanding to OR(SHL,SRL)) than using the generic v16i8 shuffle lowering - but if we can widen to v8i16 or more then the existing shuffles are still the better option.

REAPPLIED: Original commit rG11c16e71598d was reverted at rGde1d90299b16 as it wasn't accounting for later lowering. This version emits ROTLI or the OR(VSHLI/VSRLI) directly to avoid the issue.
2020-02-14 11:55:18 +00:00
Kazushi (Jam) Marukawa 60431bd728 [VE] Support for PIC (global data and calls)
Summary: Support for PIC with tests for global variables and function calls.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D74536
2020-02-14 09:50:02 +01:00
Sam Parker fd01b2f4a6 [NFC][ARM] Convert some pointers to references. 2020-02-14 08:29:01 +00:00
Fangrui Song bcd24b2d43 [AsmPrinter][MCStreamer] De-capitalize EmitInstruction and EmitCFI* 2020-02-13 22:08:55 -08:00
Liu, Chen3 ec89335c47 [X86] Fix the bug that _mm_mask_cvtsepi64_epi32 generates a result without
zeroing the upper 64 bits.

Differential Revision: https://reviews.llvm.org/D74552
2020-02-14 09:26:06 +08:00
Fangrui Song 1d49eb00d9 [AsmPrinter] De-capitalize all AsmPrinter::Emit* but EmitInstruction
Similar to rL328848.
2020-02-13 17:06:24 -08:00
Thomas Lively 918e90559b [WebAssembly] Make stack pointer args inhibit tail calls
Summary:
Also make return calls terminator instructions so epilogues are
inserted before them rather than after them. Together, these changes
make WebAssembly's tail call optimization more stack-safe.

Reviewers: aheejin, dschuff

Subscribers: sbc100, jgravelle-google, hiraditya, sunfish, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73943
2020-02-13 16:43:53 -08:00
Fangrui Song 0bc77a0f0d [AsmPrinter] De-capitalize some AsmPrinter::Emit* functions
Similar to rL328848.
2020-02-13 13:38:33 -08:00
Craig Topper c2e8a421ac [X86] Don't widen 128/256-bit strict compares with vXi1 result to 512-bits on KNL.
If we widen the compare we might trigger a spurious exception from
the garbage data.

We have two choices here. Explicitly force the upper bits to zero.
Or use a legacy VEX vcmpps/pd instruction and convert the XMM/YMM
result to mask register.

I've chosen to go with the second option. I'm not sure which is
really best. In some cases we could get rid of the zeroing since
the producing instruction probably already zeroed it. But we lose
the ability to fold a load. So which is best is dependent on
surrounding code.

Differential Revision: https://reviews.llvm.org/D74522
2020-02-13 13:26:40 -08:00
Fangrui Song 0dce409cee [AsmPrinter] De-capitalize Emit{Function,BasicBlock]* and Emit{Start,End}OfAsmFile 2020-02-13 13:22:49 -08:00
Thomas Lively e252293d06 [WebAssembly] Add cbrt function signatures
Summary:
Fixes a crash in the backend where optimizations produce calls to the
cbrt runtime functions. Fixes PR 44227.

Reviewers: aheejin

Subscribers: dschuff, sbc100, jgravelle-google, hiraditya, sunfish, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74259
2020-02-13 13:18:42 -08:00
Matt Arsenault 5adbf7d57f AMDGPU/GlobalISel: Make G_TRUNC legal
This is required to be legal. I'm not sure how we were getting away
without defining any rules for it.
2020-02-13 15:25:52 -05:00
Frederic Bastien 019ab61e25 [NVPTX, LSV] Move the LSV optimization pass to later when the graph is cleaner
This allows it to recognize more loads as consecutive when the loads' addresses are complex at the start.

Differential Revision: https://reviews.llvm.org/D74444
2020-02-13 12:15:38 -08:00
Matt Arsenault bfe3779459 AMDGPU: Use v_perm_b32 to implement bswap
Also greatly improve i64 lowering. LegalizeIntegerTypes does the
correct narrowing if i64 isn't legal. Just work around this for
SelectionDAG by making i64 legal and splitting in the patterns.
2020-02-13 09:45:31 -08:00
John Brawn 0ec5797296 [ARM] Fix infinite loop when lowering STRICT_FP_EXTEND
If the target has FP64 but not FP16 then we have custom lowering for FP_EXTEND
and STRICT_FP_EXTEND with type f64. However if the extend is from f32 to f64 the
current implementation will cause an infinite loop for STRICT_FP_EXTEND due to
emitting a merge_values of the original node which after replacement becomes a
merge_values of itself.

Fix this by not doing anything for f32 to f64 extend when we have FP64, though
for STRICT_FP_EXTEND we have to do the strict-to-nonstrict mutation as that
doesn't happen automatically for opcodes with custom lowering.

Differential Revision: https://reviews.llvm.org/D74559
2020-02-13 16:12:50 +00:00
Sean Fertile b2d1e002ca [PowerPC][NFC] Small cleanup to restore CR field code in PPCFrameLowering.
Skip the loop over the CalleeSavedInfos in 'restoreCalleeSavedRegisters' when
the register is a CR field and we are not targeting 32-bit ELF. This is safe
because:
1) The helper function 'restoreCRs' returns if the target is not 32-bit ELF,
   making all the code in the loop related to CR fields dead for every other
   subtarget. This code is only called on ELF right now, but the patch
   to extend it for AIX also needs to skip 'restoreCRs'.
2) The loop will not otherwise modify the iterator, so the iterator
   manipulations at the bottom of the loop end up setting 'I' to its
   current value.

This simplification allows us to remove one argument from 'restoreCRs'.

Also add a helper function to determine if a register is one of the
callee saved condition register fields.
2020-02-13 09:50:28 -05:00
Qiu Chaofan 87c773082a [PowerPC] Exploit VSX rounding instrs for rint
Exploit the native VSX rounding instructions, x(v|s)r(d|s)pic, which do
rounding using the current rounding mode.

According to the C standard library, rint may raise the INEXACT exception
while nearbyint won't.
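
A minimal C++ illustration of that difference:

  #include <cmath>

  // Both round to an integral value using the current rounding mode; only
  // rint may additionally raise the FE_INEXACT floating-point exception
  // when the result differs from the argument.
  double roundMayRaise(double X) { return std::rint(X); }
  double roundQuiet(double X) { return std::nearbyint(X); }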

Reviewed By: lkail

Differential Revision: https://reviews.llvm.org/D72685
2020-02-13 20:59:50 +08:00
Daniel Kiss d5a186a600 [AArch64] Fix BTI landing pad generation.
In some cases a BTI landing pad was inserted even when a compatible
instruction was already there. Meta instructions do not count in this case,
therefore skip them in the check for the first instructions in the function.

Differential revision: https://reviews.llvm.org/D74492
2020-02-13 10:44:34 +00:00
Kerry McLaughlin 671cbc1fbb [AArch64][SVE] Add mul/mla/mls lane & dup intrinsics
Summary:
Implements the following intrinsics:
 - @llvm.aarch64.sve.dup
 - @llvm.aarch64.sve.mul.lane
 - @llvm.aarch64.sve.mla.lane
 - @llvm.aarch64.sve.mls.lane

Reviewers: c-rhodes, sdesmalen, dancgr, efriedma, rengolin

Reviewed By: sdesmalen

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74222
2020-02-13 10:32:59 +00:00
David Green 9d4c597541 [ARM] Fix ReconstructShuffle for bigendian
Simon pointed out that this function is doing a bitcast, which can be
incorrect for big endian. That makes the lowering of VMOVN in MVE
wrong, but the function is shared between Neon and MVE so both can
be incorrect.

This attempts to fix things by using the newly added VECTOR_REG_CAST
instead of the BITCAST. As it may now be used on Neon, I've added the
relevant patterns for it there too. I've also added a quick dag combine
for it to remove them where possible.

Differential Revision: https://reviews.llvm.org/D74485
2020-02-13 09:56:46 +00:00
Yonghong Song 61bd33e37b [BPF] explicit warning of not supporting dynamic stack allocation
Currently, BPF does not support dynamic stack allocation.
For a program like below:
  extern void bar(int *);
  void foo(int n) {
    int a[n];
    bar(a);
  }

The current error message looks like:
  unimplemented operand
  UNREACHABLE executed at /.../llvm/lib/Target/BPF/BPFISelLowering.cpp:199!

Let us make the error message explicit so it will be clear to the user
what the problem is. With this patch, the error message looks like:
  fatal error: error in backend: Unsupported dynamic stack allocation
  ...

Differential Revision: https://reviews.llvm.org/D74521
2020-02-12 20:43:06 -08:00
Austin Kerbow 5db0b2521c [AMDGPU][GlobalISel] Handle 64-byte EltSize in getRegSplitParts
Reviewers: arsenm

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, rovka, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74518
2020-02-12 19:11:52 -08:00
Amy Huang de1d90299b Revert "[X86][SSE] lowerShuffleAsBitRotate - lower to vXi8 shuffles to ROTL on pre-SSSE3 targets"
This reverts commit 11c16e7159 because it
causes a crash in chromium code. See
https://reviews.llvm.org/rG11c16e71598d51f15b4cfd0f719c4dabcc0bebf7.
2020-02-12 17:00:37 -08:00
Matt Arsenault d1b393d92c AMDGPU/GlobalISel: Select G_CTTZ_ZERO_UNDEF
Directly select this rather than going through the intermediate
instruction, which may provide some combine value in the future.
2020-02-12 16:19:46 -08:00
Matt Arsenault 045a8921d7 AMDGPU/GlobalISel: Select G_CTLZ_ZERO_UNDEF
Directly select this rather than going through the intermediate
instruction, which may provide some combine value in the future.
2020-02-12 16:19:45 -08:00
Matt Arsenault e174c278ca AMDGPU/GlobalISel: Fix mapping G_ICMP with constrained result
When SI_IF is inserted, it constrains the source register with a
register class, which was quite likely a G_ICMP. This was incorrectly
treating it as a scalar, and then applyMappingImpl would end up
producing invalid MIR since this was unexpected.

Also fix not using all VGPR sources for vcc outputs.
2020-02-12 16:19:45 -08:00
Jay Foad 32aac25637 [KnownBits] Introduce anyext instead of passing a flag into zext
Summary:
This was a very odd API, where you had to pass a flag into a zext
function to say whether the extended bits really were zero or not. All
callers passed in a literal true or false.

I think it's much clearer to make the function name reflect the
operation being performed on the value we're tracking (rather than on
the KnownBits Zero and One fields), so zext means the value is being
zero extended and new function anyext means the value is being extended
with unknown bits.

NFC.
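
A before/after sketch of the API (the 64-bit target width is illustrative):

  #include "llvm/Support/KnownBits.h"

  using namespace llvm;

  // Before: Known.zext(64, /*ExtendedBitsAreKnownZero=*/false);
  // After, the name states what happens to the value being tracked:
  static KnownBits widenTo64(const KnownBits &Known, bool IsZeroExtend) {
    return IsZeroExtend ? Known.zext(64)    // new high bits known zero
                        : Known.anyext(64); // new high bits unknown
  }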

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74482
2020-02-12 19:06:53 +00:00
Jessica Paquette 45417b7aa7 [AArch64][GlobalISel] Properly implement widening for TB(N)Z
When we have to widen to a 64-bit register, we have to emit a SUBREG_TO_REG.

Add a general-purpose widening helper which emits the correct SUBREG_TO_REG
instruction based off of a desired size and add a testcase.

Also remove some asserts which are technically incorrect in `emitTestBit`.

- p0 doesn't count as a scalar type, so we need to check `!Ty.isVector()`
instead

- Whenever we have an s1, the Size/Bit checks are too conservative, so just
remove them

Replace these asserts with less conservative ones where applicable.

Differential Revision: https://reviews.llvm.org/D74427
2020-02-12 09:24:58 -08:00
Simon Pilgrim ff307c8120 [X86] combineFneg - generalize FMA negations with isNegatibleForFree/getNegatedExpression
This has a really interesting side effect: it improves some UMAX/UMIN reduction code which had redundant XOR(SHUFFLE(XOR(X,SIGNMASK)),SIGNMASK) patterns, since getNegatibleCost recognises them as FNEG(SHUFFLE(FNEG(X))). We have a lot of FNEG patterns bitcasted to the integer domain for XOR signbit twiddling, which is similar to what we do to allow UMAX/UMIN to be lowered using SMAX/SMIN.

Differential Revision: https://reviews.llvm.org/D74231
2020-02-12 16:07:27 +00:00
Danilo Carvalho Grael fc8d033e96 [AArch64][SVE] Add addsub carry long intrinsics
Summary:
Add intrinsics for the following instructions:
- adclb, adclt, sbclb, sbclt

Reviewers: kmclaughlin, c-rhodes, sdesmalen, efriedma, rengolin

Reviewed By: kmclaughlin

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74328
2020-02-12 10:49:10 -05:00
Victor Huang caa10988be [PowerPC] Add options for PPC to enable/disable using non-volatile CR
An option is added for PowerPC to disable use of non-volatile CR
register fields and avoid CR spilling in the prologue.

Differential Revision: https://reviews.llvm.org/D69835
2020-02-12 09:23:11 -06:00
Anil Mahmud ab4d606421 [PowerPC] Add support for intrinsic llvm.ppc.eieio
Add support for the intrinsic llvm.ppc.eieio to emit the instruction eieio.

Differential Revision: https://reviews.llvm.org/D69066
2020-02-12 09:02:17 -06:00
Anil Mahmud b413e5c309 [PowerPC] Add support for intrinsics llvm.ppc.dcbfl and llvm.ppc.dcbflp
Added support for the intrinsics llvm.ppc.dcbfl and llvm.ppc.dcbflp.
These will be used for emitting cache control instructions dcbfl and dcbflp
which are actually mnemonics for using dcbf instruction with different
immediate arguments.

dcbfl ra, rb -> dcbf ra, rb, 1
dcbflp ra, rb -> dcbf ra, rb, 3

Differential Revision: https://reviews.llvm.org/D68411
2020-02-12 09:02:17 -06:00
Matt Arsenault fa61e200e5 AMDGPU/GlobalISel: Widen non-power-of-2 load results
Load extra bits if suitably aligned. This allows using widened
3-vector loads on SI, and fixes legalization for <9 x s32> (which LSV
apparently forms frequently on lowered kernel argument lists).

Fix incorrectly treating these as legal on SI. This should emit a
64-bit store and a 32-bit store.

I think all of the load and store rules are just about complete, but
due for a rewrite.
2020-02-12 09:35:10 -05:00
Hans Wennborg a19de32095 Fix unused function warning (PR44808) 2020-02-12 15:12:48 +01:00
Simon Pilgrim 9eb426c88c [TargetLowering] Add NegatibleCost enum for isNegatibleForFree return codes
The isNegatibleForFree/getNegatedExpression methods currently rely on a raw char value to indicate whether a negation is beneficial or not.

This patch replaces the char return value with an NegatibleCost enum to more clearly demonstrate what is implied.

It also renames isNegatibleForFree to getNegatibleCost to more accurately reflect what's going on.
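
The enum's approximate shape (see the review for the exact definition):

  // Replaces the raw char return of isNegatibleForFree; values ordered
  // from most to least profitable negation.
  enum class NegatibleCost {
    Cheaper,   // negated expression is cheaper than the original
    Neutral,   // negated expression costs the same
    Expensive, // negated expression is more expensive
  };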

Differential Revision: https://reviews.llvm.org/D74221
2020-02-12 11:51:42 +00:00
Jay Foad e9900b1fbf [AMDGPU] Add one more pass to LLVMInitializeAMDGPUTarget 2020-02-12 11:19:14 +00:00
Djordje Todorovic 97ed706a96 Revert "[DebugInfo] Enable the debug entry values feature by default"
This reverts commit rG9f6ff07f8a39.

Found a test failure on clang-with-thin-lto-ubuntu buildbot.
2020-02-12 11:59:04 +01:00
Djordje Todorovic 9f6ff07f8a [DebugInfo] Enable the debug entry values feature by default
This patch enables the debug entry values feature.

  - Remove the (CC1) experimental -femit-debug-entry-values option
  - Enable it for x86, arm and aarch64 targets
  - Resolve the test failures
  - Leave the llc experimental option for targets that do not
    support the CallSiteInfo yet

Differential Revision: https://reviews.llvm.org/D73534
2020-02-12 10:25:14 +01:00
Nicolai Hähnle ab2f610f38 AMDGPU: llvm.amdgcn.writelane is a source of divergence
Summary:
Consider:

  %r = call i32 @llvm.amdgcn.writelane(i32 0, i32 1, i32 2)

This produces a value that is 0 on lane 1, and 2 everywhere else; i.e.,
it is divergent.

Reported-by: Marek Olsak <Marek.Olsak@amd.com>

Reviewers: arsenm, foad, mareko

Subscribers: kzhuravl, jvesely, wdng, yaxunl, dstuttard, tpr, t-tye, hiraditya, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74400
2020-02-12 09:12:56 +01:00
Kazushi (Jam) Marukawa 42a16dacda [VE] Bit operator isel
Summary: Isel and tests for bswap, brev, ctpop, ctlz, cttz, rotl, rotr

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D74304
2020-02-12 09:02:13 +01:00
Craig Topper 746395a446 [X86] Remove unnecessary hasSideEffects = 0, mayLoad = 1 from an instruction with a pattern. NFC 2020-02-11 23:26:29 -08:00
Craig Topper 3988b7046a [X86] Correct the predicate on some patterns for 128 and 256 EVEX versions of VCVTPS2PH.
These should require AVX512VL not AVX512F. The legacy VEX patterns
will match first unless AVX512VL is enabled so this doesn't cause
a functional issue.
2020-02-11 23:26:29 -08:00
Craig Topper 0daf9b8e41 [X86][LegalizeTypes] Add SoftPromoteHalf support STRICT_FP_EXTEND and STRICT_FP_ROUND
This adds a strict version of FP16_TO_FP and FP_TO_FP16 and uses
them to implement soft promotion for the half type. This is
enough to provide basic support for __fp16 with strictfp.

Add the necessary X86 support to use VCVTPS2PH/VCVTPH2PS when F16C
is enabled.
2020-02-11 22:30:04 -08:00
Matt Arsenault 6d4ebada79 AMDGPU: Use conditions directly in division expansion
This was creating a select on true/false values, and then comparing
that later. This produced more work for later combines, which can be
avoided by just using the boolean values. This was copied from the
original DAG expansion, which also has the same problem. This doesn't
have a observable change using SelectionDAG, but since GlobalISel is
missing these optimizations, the final code was noticeably longer.
2020-02-11 23:11:30 -05:00
Austin Kerbow 3a312c3ee5 [AMDGPU][GlobalISel] Refactor selectDS1Addr1Offset/selectDS64Bit4ByteAligned
Differential Revision: https://reviews.llvm.org/D74261
2020-02-11 16:57:13 -08:00
Matt Arsenault b30e122333 AMDGPU: Don't expand more special div cases in IR
These have nicer expansions implemented in the DAG. Ideally we would
either directly implement all of these special expansions, or stop
expanding division in the IR.
2020-02-11 19:01:06 -05:00
Matt Arsenault 86f9117d47 AMDGPU: Don't report 2-byte alignment as fast
This is apparently worse than 1-byte alignment. This does not attempt
to decompose 2-byte aligned wide stores, but will stop trying to
produce them.

Also fix bug in LoadStoreVectorizer which was decreasing the alignment
and vectorizing stack accesses. It was assuming a stack object was an
alloca that could have its base alignment changed, which is not true
if the pointer is derived from a function argument.
2020-02-11 18:35:00 -05:00
Justin Lebar 1bd6123b78 Use std::foo_t rather than std::foo in LLVM.
Summary: C++14 migration. No functional change.
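
For example, the migration replaces the verbose `typename std::foo<...>::type`
spelling with the C++14 `_t` alias templates:

  #include <type_traits>

  // C++11 spelling:
  template <typename T>
  typename std::enable_if<std::is_integral<T>::value, T>::type before(T X) {
    return X;
  }

  // Equivalent C++14 spelling used after this change:
  template <typename T>
  std::enable_if_t<std::is_integral<T>::value, T> after(T X) {
    return X;
  }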

Reviewers: bkramer, JDevlieghere, lebedev.ri

Subscribers: MatzeB, hiraditya, jkorous, dexonsmith, arphaman, kadircet, lebedev.ri, usaxena95, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D74384
2020-02-11 15:12:51 -08:00
Matt Arsenault f734ce0488 AMDGPU: Fix crash on v3i15 kernel arguments
This was split into 3 i15 arguments. The i15 piece needs to be rounded
to a simple MVT for the memory type.
2020-02-11 18:11:39 -05:00
Matt Arsenault 92c62582fc AMDGPU: Directly use rcp intrinsic in idiv expansions
Since natural fdiv lowering is now more conservative even with
denormals disabled, we get a slower expansion from just a plain
1.0/fdiv. Directly emit the rcp intrinsic when using it to implement
integer division to avoid a pointlessly complex sequence.
2020-02-11 18:11:39 -05:00
Matt Arsenault b87e3e2d0d AMDGPU: Don't create potentially dead rcp declarations
This will introduce unused declarations if this doesn't reach any of
the paths that will really use it.
2020-02-11 18:11:39 -05:00
Craig Topper 846d0ac43e [X86] Don't disable code in combineHorizontalPredicateResult just because we have avx512
We aren't doing a good job of optimizing AVX512 outside of this code. So remove the bail out for AVX512 and replace with a FIXME. This at least gets us the AVX2 codegen.

Differential Revision: https://reviews.llvm.org/D74431
2020-02-11 14:36:29 -08:00
Krzysztof Parzyszek 61ca996e79 [Hexagon] Don't generate short vectors in ISD::SELECT in preprocessing
Selection DAG preprocessing runs long after legalization, so make sure
that the types can be handled by the selection code.
2020-02-11 15:27:33 -06:00
lewis-revill 07f7c00208 [RISCV] Add support for save/restore of callee-saved registers via libcalls
This patch adds the support required for using the __riscv_save and
__riscv_restore libcalls to implement a size-optimization for prologue
and epilogue code, whereby the spill and restore code of callee-saved
registers is implemented by common functions to reduce code duplication.

Logic is also included to ensure that if both this optimization and
shrink wrapping are enabled then the prologue and epilogue code can be
safely inserted into the basic blocks chosen by shrink wrapping.

Differential Revision: https://reviews.llvm.org/D62686
2020-02-11 21:23:03 +00:00
Jay Foad 9df0c264d4 [AMDGPU] Fix implicit operands for ENTER_WWM pseudo
Summary:
SIInstrInfo::expandPostRAPseudo converts ENTER_WWM in-place into an
S_OR_SAVEEXEC instruction that needs certain implicit operands. Without
this patch I get errors like this that make it harder to use -stop-after
to bisect the pass pipeline:

$ llc -march=amdgcn test/CodeGen/AMDGPU/wqm.ll -stop-after=postrapseudos -o - | sed -E 's/ (from|into) custom "TargetCustom[0-9]+"//' | llc -march=amdgcn -x=mir
error: <stdin>:1295:70: missing implicit register operand 'implicit-def $scc'
    renamable $sgpr2_sgpr3 = S_OR_SAVEEXEC_B64 -1, implicit-def $exec
                                                                     ^

Note that this error is currently only generated by MIParser but it
comes with a FIXME comment:

// FIXME: Move the implicit operand verification to the machine verifier.

Reviewers: critson, arsenm, rampitec, nhaehnle

Subscribers: kzhuravl, jvesely, wdng, yaxunl, dstuttard, tpr, t-tye, hiraditya, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74428
2020-02-11 20:11:41 +00:00
Craig Topper d7de7ac370 [X86] Raise the latency for VectorImul from 4 to 5 in Skylake scheduler models
Based on uops.info these should have 5 cycle latency as they did on Haswell/Broadwell. I have no additional internal information from Intel.

This was also shown as a discrepancy in the spreadsheet that was sent with an early llvm-dev post about llvm-exegesis.
It also matches Agner Fog.

Differential Revision: https://reviews.llvm.org/D74357
2020-02-11 11:24:25 -08:00
Stanislav Mekhanoshin 453a8f3af7 [AMDGPU] Remove AMDGPURegisterInfo
R600 and GCN do not have anything in common in terms of register
file organization anymore.

Differential Revision: https://reviews.llvm.org/D74426
2020-02-11 11:13:38 -08:00
Jordan Rupprecht 734f086b42 [NFC] Fix unused var in release builds 2020-02-11 10:10:52 -08:00
Yonghong Song 29bc5dd194 [BPF] implement isTruncateFree and isZExtFree in BPFTargetLowering
Currently, isTruncateFree() and isZExtFree() callbacks return false
as they are not implemented in the BPF backend. This may cause suboptimal
code generation. For example, if the load in the context of zero extension
has more than one use, the pattern zextload{i8,i16,i32} will
not be generated. Rather, the load will be matched first and
then the result is zero extended.

For example, in the test together with this commit, we have
   I1: %0 = load i32, i32* %data_end1, align 4, !tbaa !2
   I2: %conv = zext i32 %0 to i64
   ...
   I3: %2 = load i32, i32* %data, align 4, !tbaa !7
   I4: %conv2 = zext i32 %2 to i64
   ...
   I5: %4 = trunc i64 %sub.ptr.lhs.cast to i32
   I6: %conv13 = sub i32 %4, %2
   ...

I1 and I2 will match to one zextloadi32 DAG node, where SUBREG_TO_REG is
used to convert a 32-bit register to a 64-bit one. During code generation,
SUBREG_TO_REG is a noop.

The %2 in I3 is used in both I4 and I6. If isTruncateFree() is false,
the current implementation will generate a SLL_ri and SRL_ri
for the zext part during lowering.

This patch implements isTruncateFree() in the BPF backend, so for the
above example, I3 and I4 will generate a zextloadi32 DAG node, with
SUBREG_TO_REG generated during lowering to Machine IR.

isZExtFree() is also implemented as it should help code gen as well.

This patch also enables the change in https://reviews.llvm.org/D73985,
since it will no longer kick in and generate a MOV_32_64 machine instruction.
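
For reference, a minimal sketch (not the exact BPF patch; MyTargetLowering is an illustrative class name) of how a backend reports free truncation and zero extension to the DAG combiner:

  #include "llvm/CodeGen/TargetLowering.h"
  using namespace llvm;

  bool MyTargetLowering::isTruncateFree(Type *Ty1, Type *Ty2) const {
    if (!Ty1->isIntegerTy() || !Ty2->isIntegerTy())
      return false;
    // Truncating i64 to i32 is just a subregister read on this target.
    return Ty1->getIntegerBitWidth() == 64 && Ty2->getIntegerBitWidth() == 32;
  }

  bool MyTargetLowering::isZExtFree(EVT VT1, EVT VT2) const {
    // 32-bit ALU ops implicitly zero the upper half of the 64-bit register.
    return VT1 == MVT::i32 && VT2 == MVT::i64;
  }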

Differential Revision: https://reviews.llvm.org/D74101
2020-02-11 09:59:19 -08:00
Nikita Popov 5eb19bf4a2 [X86CmovConversion] Make heuristic for optimized cmov depth more conservative (PR44539)
Fix/workaround for https://bugs.llvm.org/show_bug.cgi?id=44539.
As discussed there, this pass makes some overly optimistic
assumptions, as it does not have access to actual branch weights.

This patch makes the computation of the depth of the optimized cmov
more conservative, by assuming a distribution of 75/25 rather than
50/50 and placing the weights to get the more conservative result
(larger depth). The fully conservative choice would be
std::max(TrueOpDepth, FalseOpDepth), but that would break at least
one existing test (which may or may not be an issue in practice).
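
Concretely, a hypothetical illustration of the 75/25 weighting (variable names assumed, not taken from the patch):

  #include <algorithm>

  unsigned weightedDepth(unsigned TrueOpDepth, unsigned FalseOpDepth) {
    unsigned MaxD = std::max(TrueOpDepth, FalseOpDepth);
    unsigned MinD = std::min(TrueOpDepth, FalseOpDepth);
    return (3 * MaxD + MinD) / 4; // a 50/50 split would be (MaxD + MinD) / 2
  }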

Differential Revision: https://reviews.llvm.org/D74155
2020-02-11 17:33:11 +01:00
Eric Astor 8d5bf0422b [ms] [llvm-ml] Add support for attempted register parsing
Summary:
Add a new method (tryParseRegister) that attempts to parse a register specification.

MASM allows the use of IFDEF <register>, as well as IFDEF <symbol>. To accommodate this, we make it possible to check whether a register specification can be parsed at the current location, without failing the entire parse if it can't.
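
A rough sketch of the idea (all names here are illustrative, not the actual llvm-ml API): the probe reports "no match" instead of raising a parse error, so IFDEF handling can fall back to treating the token as a plain symbol.

  #include <string>

  enum OperandMatchResult { MatchOperand_Success, MatchOperand_NoMatch };

  unsigned matchRegisterName(const std::string &Tok); // hypothetical lookup, 0 = unknown

  OperandMatchResult tryParseRegister(const std::string &Tok, unsigned &RegNo) {
    unsigned Reg = matchRegisterName(Tok);
    if (Reg == 0)
      return MatchOperand_NoMatch; // not a register; caller keeps parsing
    RegNo = Reg;
    return MatchOperand_Success;
  }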

Reviewers: thakis

Reviewed By: thakis

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73486
2020-02-11 10:45:33 -05:00
Jonas Paulsson 0311e28e9c [SystemZ] Bugfix in emitSelect()
When more than one SelectPseudo instruction is handled a new MBB is
returned. This must not be done if that would result in leaving an undhandled
isel pseudo behind in the original MBB.

Fixes https://bugs.llvm.org/show_bug.cgi?id=44849.

Review: Ulrich Weigand

Differential Revision: https://reviews.llvm.org/D74352
2020-02-11 10:41:01 -05:00
Sjoerd Meijer 6b0ed508fa [ARM][MVE] Tail-Predication: recognise (again) active lanes IR pattern
A small IR change in calculating the active lanes resulted in no longer
recognising tail-predication. Now recognise both an 'add' and 'or' in
the expression that calculates the active lanes.

Differential Revision: https://reviews.llvm.org/D74394
2020-02-11 15:18:18 +00:00
Andrew Wei db875f6655 [RISCV] Optimize seteq/setne pattern expansions for better code size
ADDI (C.ADDI) may achieve better code size than XORI, since XORI has no
compressed form in the C extension. This patch transforms two such patterns
and produces almost equivalent results; for example, a seteq against a
constant can use addi rd, rs, -imm (compressible) instead of xori rd, rs, imm.

Differential Revision: https://reviews.llvm.org/D71774
2020-02-11 22:45:15 +08:00
Simon Pilgrim fa620fc8e2 [X86] combineConcatVectorOps - reuse IsSplat and remove duplicate code. NFC. 2020-02-11 13:37:57 +00:00
Simon Pilgrim 11c16e7159 [X86][SSE] lowerShuffleAsBitRotate - lower vXi8 shuffles to ROTL on pre-SSSE3 targets
Without PSHUFB we are better off using ROTL (expanding to OR(SHL,SRL)) than using the generic v16i8 shuffle lowering - but if we can widen to v8i16 or more then the existing shuffles are still the better option.
2020-02-11 12:21:03 +00:00
Mirko Brkusanin 5ba931a84a [Mips] Add intrinsics for 4-byte and 8-byte MSA loads/stores.
New intrinsics are implemented for cases where we need to port SIMD code from
other architectures and only load or store portions of MSA registers.

The following intrinsics are added, which only load/store element 0 of a
vector (a usage sketch follows the list):
v4i32 __builtin_msa_ldrq_w (const void *, imm_n2048_2044);
v2i64 __builtin_msa_ldr_d (const void *, imm_n4096_4088);
void __builtin_msa_strq_w (v4i32, void *, imm_n2048_2044);
void __builtin_msa_str_d (v2i64, void *, imm_n4096_4088);
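
A hedged usage sketch (assumes a MIPS target compiled with MSA enabled):

  #include <msa.h>

  v2i64 load_first_element(const void *p) {
    return __builtin_msa_ldr_d(p, 0);   // loads only element 0 of the vector
  }

  void store_first_element(v2i64 v, void *p) {
    __builtin_msa_str_d(v, p, 0);       // stores only element 0
  }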

Differential Revision: https://reviews.llvm.org/D73644
2020-02-11 11:47:30 +01:00
Kerry McLaughlin e7755f9e4f [AArch64][SVE] Add SVE2 intrinsics for complex integer dot product
Summary:
Implements the following intrinsics:
 - @llvm.aarch64.sve.cdot
 - @llvm.aarch64.sve.cdot.lane

Reviewers: sdesmalen, efriedma, dancgr, c-rhodes, rengolin

Reviewed By: efriedma

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73687
2020-02-11 10:28:31 +00:00
Jay Foad b06a13f541 [AMDGPU] Fix non-deterministic iteration order
Summary:
As far as I know this did not affect code generation, but it did affect
the order of -debug-only=si-wqm output and the naming of autonamed
values in -print-after=si-wqm output.

Reviewers: arsenm, rampitec, nhaehnle

Subscribers: kzhuravl, jvesely, wdng, yaxunl, dstuttard, tpr, t-tye, hiraditya, mgrang, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74317
2020-02-11 09:19:30 +00:00
Craig Topper 798305d29b [X86] Custom lower ISD::FP16_TO_FP and ISD::FP_TO_FP16 on f16c targets instead of using isel patterns.
We need to use vector instructions for these operations. Previously
we handled this with isel patterns that used extra instructions
and copies to handle the conversions.

Now we use custom lowering to emit the conversions. This allows
them to be pattern matched and optimized on their own. For
example we can now emit vpextrw to store the result if it's going
directly to memory.

I've forced the upper elements going into VCVTPH2PS to zero to keep some
code similar. Zeroes will be needed for strictfp. I've added a
DAG combine for (fp16_to_fp (fp_to_fp16 X)) to avoid extra
instructions in between to be closer to the previous codegen.
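
The combine in spirit (a simplified sketch; the real code also accounts for legality and strictfp):

  #include "llvm/CodeGen/SelectionDAG.h"
  using namespace llvm;

  static SDValue foldFP16RoundTrip(SDNode *N) {
    if (N->getOpcode() != ISD::FP16_TO_FP)
      return SDValue();
    SDValue Op = N->getOperand(0);
    // (fp16_to_fp (fp_to_fp16 X)) -> X when X already has the result type.
    if (Op.getOpcode() == ISD::FP_TO_FP16 &&
        Op.getOperand(0).getValueType() == N->getValueType(0))
      return Op.getOperand(0);
    return SDValue();
  }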

This is a step towards strictfp support for f16 conversions.
2020-02-10 22:01:48 -08:00
diggerlin 09d26b79d2 [NFC] Refactor the tuple of symbol information with structure for llvm-objdump
SUMMARY:

Refactor the std::tuple<uint64_t, StringRef, uint8_t> into a structure.
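
Illustrative shape of the replacement (struct and field names assumed, not verbatim from the patch):

  #include <cstdint>
  #include "llvm/ADT/StringRef.h"

  struct SymbolInfoTy {
    uint64_t Addr;          // was std::get<0>
    llvm::StringRef Name;   // was std::get<1>
    uint8_t Type;           // was std::get<2>
  };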

Reviewers: daltenty
Subscribers: wuzish, nemanjai, hiraditya

Differential Revision: https://reviews.llvm.org/D74240
2020-02-10 19:23:01 -05:00
Xiangling Liao 660b0d7f7b [AIX] Enable frame pointer for AIX and add related test suite
This patch:
   - enables the frame pointer for AIX;
   - updates some of the red zone comments;
   - adds/updates testcases;

Differential Revision: https://reviews.llvm.org/D72454
2020-02-10 15:43:41 -05:00
diggerlin aa86311e62 [AIX][XCOFF] Support Mergeable2ByteCString and Mergeable4ByteCString
SUMMARY:
The patch enables support for Mergeable2ByteCString and Mergeable4ByteCString.

Reviewers: daltenty
Subscribers: wuzish, nemanjai, hiraditya

Differential Revision: https://reviews.llvm.org/D74164
2020-02-10 14:45:54 -05:00
Jonas Paulsson fcdb99e0b5 [SystemZ] Add a subtarget cache like some other targets already have.
With this, each function is compiled with a SystemZSubtarget initialized
from the function's attributes.
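
This follows the caching pattern other targets use; a minimal sketch (helper names and constructor details assumed, not verbatim from the patch):

  #include "llvm/ADT/StringMap.h"
  #include <memory>

  // Functions with identical target-cpu/target-features attributes share
  // one SystemZSubtarget instance.
  mutable llvm::StringMap<std::unique_ptr<SystemZSubtarget>> SubtargetMap;

  const SystemZSubtarget *
  SystemZTargetMachine::getSubtargetImpl(const llvm::Function &F) const {
    std::string CPU = cpuAttrOf(F);      // hypothetical helpers reading the
    std::string FS = featureAttrOf(F);   // "target-cpu"/"target-features" attrs
    auto &I = SubtargetMap[CPU + FS];
    if (!I)
      I = std::make_unique<SystemZSubtarget>(TargetTriple, CPU, FS, *this);
    return I.get();
  }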

Review: Ulrich Weigand.

Differential Revision: https://reviews.llvm.org/D74086
2020-02-10 13:10:58 -05:00
Matt Arsenault 7af7b96a9b AMDGPU: Move R600 test compatibility hack
Instead of handling the r600 intrinsics on amdgcn, handle the amdgcn
intrinsics on r600.
2020-02-10 10:02:06 -08:00
Simon Pilgrim f319074824 [X86] combineConcatVectorOps - combine X86ISD::PACKSS ops 2020-02-10 17:48:02 +00:00
Simon Pilgrim 74c0f98cf5 [X86] combineConcatVectorOps - combine X86ISD::VPERMI ops 2020-02-10 17:48:01 +00:00
Simon Pilgrim 2463b8c97d [X86] combineConcatVectorOps - combine VSHLI/VSRAI/VSRLI ops
Non-AVX512BW targets failed to concatenate 256-bit shifts back to 512-bits (split during 512-bit shuffle lowering as they don't have v32i16/v64i8 types).
2020-02-10 16:59:09 +00:00
Stanislav Mekhanoshin ed3527c648 [AMDGPU] Split R600 and GCN subregs
These are generated and do not need to have the same values.
We are defining separate subregs for R600 and GCN but then
using AMDGPU subregs on R600.

Differential Revision: https://reviews.llvm.org/D74248
2020-02-10 08:29:56 -08:00
Simon Pilgrim 06617c4522 [X86] Add lowerShuffleAsBitRotate (PR44379)
As noted on PR44379, we didn't attempt to lower vector shuffles using bit rotations on XOP/AVX512F targets.

This patch lowers to uniform ISD::ROTL nodes - ROTR isn't supported by XOP and they are interchangeable for constant values anyway.

There might be cases where targets without ISD::ROTL support would benefit from this (expanding to SRL+SHL+OR), which I'll investigate in a future patch.

REAPPLIED rGe82e17d4d4ca after reversion at rG39eade73a567 - fixed offset matching in matchShuffleAsBitRotate.
2020-02-10 16:16:56 +00:00
Luke Geeson a67db83681 [AArch64] Make Read Write System Registers Read Only
This patch makes the following System Registers Read Only:

 - CurrentEL
 - ICH_MISR_EL2
 - PMBIDR_EL1
 - PMSIDR_EL1

as found in:
https://developer.arm.com/docs/ddi0595/e/aarch64-system-registers

Relative line numbers were also added to the tests so we get more
informative error messages on failure.

Change-Id: I963b4f01ca5737b58f9e8e7abe9ca1d99e328758
2020-02-10 14:34:24 +00:00
Kai Nacke 34946dfd79 [SystemZ] Add implementation for the intrinsic llvm.read_register
This change implements the llvm intrinsic llvm.read_register for
the SystemZ platform which returns the value of the specified
register
(http://llvm.org/docs/LangRef.html#llvm-read-register-and-llvm-write-register-intrinsics).
This implementation returns the value of the stack register, and
can be extended to return the value of other registers. The
implementation for this intrinsic exists on various other platforms
including Power, x86, ARM, etc., but was missing on SystemZ.

Reviewers: uweigand

Differential Revision: https://reviews.llvm.org/D73378
2020-02-10 08:19:10 -05:00
Hans Wennborg ea9850b6c7 Fix an unused variable warning 2020-02-10 14:08:18 +01:00
Kerry McLaughlin 92a7875092 [AArch64][SVE] SVE2 intrinsics for complex integer arithmetic
Summary:
Adds the following SVE2 intrinsics:
 - cadd & sqcadd
 - cmla & sqrdcmlah
 - saddlbt, ssublbt & ssubltb

Reviewers: sdesmalen, dancgr, efriedma, cameron.mcinally, c-rhodes, rengolin

Reviewed By: sdesmalen

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73636
2020-02-10 12:14:56 +00:00
Simon Pilgrim 39eade73a5 Revert rGe82e17d4d4cac8b2df00094e80d5e1cb22795664 - [X86] Add lowerShuffleAsBitRotate (PR44379)
As noted on PR44379, we didn't attempt to lower vector shuffles using bit rotations on XOP/AVX512F targets.

This patch lowers to uniform ISD::ROTL nodes - ROTR isn't supported by XOP and they are interchangeable for constant values anyway.

There might be cases where targets without ISD::ROTL support would benefit from this (expanding to SRL+SHL+OR), which I'll investigate in a future patch.

Also, non-AVX512BW targets fail to concatenate 256-bit rotations back to 512-bits (split during shuffle lowering as they don't have v32i16/v64i8 types).
---
Internal shuffle tests indicate there's a bug somewhere that I haven't been able to track down yet.
2020-02-10 12:14:26 +00:00
Kerry McLaughlin e299a08149 [AArch64][SVE] SVE2 intrinsics for character match & histogram generation
Summary:
Implements the following intrinsics:
 - @llvm.aarch64.sve.histcnt
 - @llvm.aarch64.sve.histseg
 - @llvm.aarch64.sve.match
 - @llvm.aarch64.sve.nmatch

Reviewers: c-rhodes, sdesmalen, dancgr, efriedma, rengolin

Reviewed By: c-rhodes

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74117
2020-02-10 11:08:00 +00:00
Kerry McLaughlin 5e1d7bb679 [AArch64][SVE] Add SVE2 intrinsics for widening DSP operations
Summary:
Implements the following intrinsics:

 - @llvm.aarch64.sve.[s|u]abalb
 - @llvm.aarch64.sve.[s|u]abalt
 - @llvm.aarch64.sve.[s|u]addlb
 - @llvm.aarch64.sve.[s|u]addlt
 - @llvm.aarch64.sve.[s|u]sublb
 - @llvm.aarch64.sve.[s|u]sublt
 - @llvm.aarch64.sve.[s|u]abdlb
 - @llvm.aarch64.sve.[s|u]abdlt
 - @llvm.aarch64.sve.sqdmullb
 - @llvm.aarch64.sve.sqdmullt
 - @llvm.aarch64.sve.[s|u]mullb
 - @llvm.aarch64.sve.[s|u]mullt

Reviewers: sdesmalen, dancgr, efriedma, cameron.mcinally, rengolin

Reviewed By: sdesmalen

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73719
2020-02-10 10:37:59 +00:00
Djordje Todorovic 3a4dc577c9 [CSInfo] Fix the assertions regarding updating the CSInfo
The call site info was not updated correctly when deleting
corresponding call instructions.

Differential Revision: https://reviews.llvm.org/D73700
2020-02-10 10:55:06 +01:00
Kai Nacke a5040d5ec9 [SystemZ] Disable vector ABI when using option -march=arch[8|9|10]
When specifying -march=arch[8|9|10], those CPU types do NOT support
the vector extension. In this case the vector ABI must be disabled.
The generated data layout should NOT contain 64-v128.

Reviewers: uweigand

Differential Revision: https://reviews.llvm.org/D74146
2020-02-10 04:14:05 -05:00
Djordje Todorovic 68908993eb [CSInfo] Use isCandidateForCallSiteEntry() when updating the CSInfo
Use isCandidateForCallSiteEntry() when updating the call site info.
This should mostly be an NFC change, but some parts now ensure that
moveCallSiteInfo() and copyCallSiteInfo() operate on call site entry
candidates (both Src and Dest should be call site entry candidates).

Differential Revision: https://reviews.llvm.org/D74122
2020-02-10 10:03:14 +01:00
Sebastian Neubauer 8756869170 [AMDGPU] Add a16 feature to gfx10
Based on D72931

This adds a new feature called A16 which is enabled for gfx10.
gfx9 keeps the R128A16 feature so it can share all the instruction encodings
with gfx7/8.

Differential Revision: https://reviews.llvm.org/D73956
2020-02-10 09:04:23 +01:00
Craig Topper 06ba969c9d [X86] Make (insert_vector_elt (v8i16 zerovec), i16 %x, 0) generate the same code as (v8i16 (build_vector %x, 0, 0, 0, 0, 0, 0, 0)).
Instead of using an insrw to element 0, use movzx and movd.

Same for v16i8.
2020-02-09 21:52:11 -08:00
Craig Topper 05d44204fa [X86] Use MOVZX instead of MOVSX in f16_to_fp isel patterns.
Using sign extend forces the adjacent element to either all zeros
or all ones. But all ones is a NaN, so that doesn't seem like a
great idea.

Trying to work on supporting this with strict FP, where a NaN would
definitely be bad.
2020-02-09 20:39:52 -08:00
Shiva Chen 64f417200e [RISCV] Fix incorrect FP base CFI offset for variable argument functions
When the FP exists, the FP base CFI directive offset should take the size of variable arguments into account.

Differential Revision: https://reviews.llvm.org/D73862
2020-02-10 11:56:08 +08:00
Matt Arsenault 312a9d1b83 GlobalISel: Fix narrowScalar for G_{CTLZ|CTTZ}_ZERO_UNDEF
Narrow these for 64-bit VALU for AMDGPU.
2020-02-09 19:02:38 -05:00
Matt Arsenault c437f6c687 AMDGPU/GlobalISel: Split 64-bit G_CTPOP in RegBankSelect 2020-02-09 18:39:33 -05:00
Matt Arsenault 6135f5eda4 GlobalISel: Fix narrowing of G_CTLZ/G_CTTZ
The result type is separate from the source type.
2020-02-09 18:11:43 -05:00
Matt Arsenault 2126c70e3a AMDGPU/GlobalISel: Don't mis-select vector index on a constant
Vector indexing with a constant index should be folded out in the
legalizer, but this was accidentally falling through. This would
produce the indexing operation with $noreg. Handle this case as a
dynamic index just in case a bug like this happens again in the
future.
2020-02-09 18:02:37 -05:00
Matt Arsenault f4a38c114e AMDGPU/GlobalISel: Look through casts when legalizing vector indexing
We were failing to find constants that were casted. I feel like the
artifact combiner should have folded the constant in the trunc before
the custom lowering, but that doesn't happen.
2020-02-09 18:02:10 -05:00
Matt Arsenault 00115d767f AMDGPU: Remove dead kill handling
At one point a custom node was used for kill handling, but now the
intrinsic is directly selected. Remove leftover pattern machinery.
2020-02-09 17:59:24 -05:00
Matt Arsenault 6e1770821f AMDGPU: Fix SI_IF lowering when the save exec reg has terminator uses
Reverts part of 6524a7a2b9. Since that
commit, the expansion was ignoring the actual save exec register
produced by the instruction, and looking at other instructions. I do
not understand why it was looking at other instructions, but relying
on this scan was wrong.

Fixes verifier errors after SI_IF is tail duplicated, which should be
correct to do. The results were fed into a phi, which was lowered to
the S_MOV_B64_term instructions.
2020-02-09 17:59:19 -05:00
Simon Pilgrim 29e646fe65 [X86] combineConcatVectorOps - combine VROTLI/VROTRI ops
Fix issue mentioned on rGe82e17d4d4ca - non-AVX512BW targets failed to concatenate 256-bit rotations back to 512-bits (split during shuffle lowering as they don't have v32i16/v64i8 types).
2020-02-09 21:50:10 +00:00
Craig Topper 656d66f5fc [X86] Use custom isel for (X86sbb_flag 0, 0) so we can use 32-bit SBB for i8/i16.
We were using MOV32r0 and an extract_subreg as an input. By using
custom isel we can move the extract_subreg to after the SBB instead
of on the input.
2020-02-09 13:19:35 -08:00
Craig Topper e1cbfecdb8 [X86] Add flag result VT to a MOV32r0 created in X86DAGToDAGISel::Select
The flag isn't used, but I believe this matches the MOV32r0 that
would be created by the table emitter. This should allow this node
to be CSEed with any others created by the table.
2020-02-09 13:19:21 -08:00
Simon Pilgrim e82e17d4d4 [X86] Add lowerShuffleAsBitRotate (PR44379)
As noted on PR44379, we didn't attempt to lower vector shuffles using bit rotations on XOP/AVX512F targets.

This patch lowers to uniform ISD::ROTL nodes - ROTR isn't supported by XOP and they are interchangeable for constant values anyway.

There might be cases where targets without ISD::ROTL support would benefit from this (expanding to SRL+SHL+OR), which I'll investigate in a future patch.

Also, non-AVX512BW targets fail to concatenate 256-bit rotations back to 512-bits (split during shuffle lowering as they don't have v32i16/v64i8 types).
2020-02-09 21:15:03 +00:00
Craig Topper dd262222b4 [X86] Use MVT::i32 for the type of a MOV32r0 created in X86DAGToDAGISel::Select.
Not sure if this really matters. The VT isn't really used after
this point. At best it might affect CSE.
2020-02-09 11:57:42 -08:00
Craig Topper dbcc1392b3 [X86] Remove isel patterns that include a vselect/X86selects and a strict FP node.
A vselect+strictfp node is not equivalent to a masked operation.
The exceptions of the strictfp node are not masked by a vselect
after it so we can't match it to a masked operation.

We already had a hack in IsLegalToFold to prevent these patterns from
matching. This patch removes that hack and removes the patterns.
2020-02-09 11:45:54 -08:00
Simon Pilgrim 29621b2534 [X86] Rename matchShuffleAsRotate - matchShuffleAsByteRotate. NFCI.
A matchShuffleAsBitRotate variant will be added soon and we need to make the difference more obvious.
2020-02-09 18:35:50 +00:00
Simon Pilgrim 3ec6de07e9 Fix signed/unsigned warning. 2020-02-09 13:35:03 +00:00
Simon Pilgrim 644d56b432 [X86] Recognise ROTLI/ROTRI rotations as faux shuffles
Allows us to combine rotations with shuffles.

One of many things necessary to fix PR44379 (lowering shuffles to rotations)
2020-02-09 12:25:49 +00:00
serge_sans_paille e67cbac812 Support -fstack-clash-protection for x86
Implement protection against the stack clash attack [0] through inline stack
probing.

Probe stack allocation every PAGE_SIZE during frame lowering or dynamic
allocation to make sure the page guard, if any, is touched when touching the
stack, in a similar manner to GCC[1].
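
Conceptually (a C-level illustration only; the patch emits this during frame lowering, not as source), probing a large allocation looks like the following, assuming a 4096-byte page:

  #include <cstddef>

  constexpr std::size_t kPageSize = 4096;

  // Touch one byte per page so a guard page, if present, always faults;
  // the loop never skips past an untouched page.
  void probe_stack(char *base, std::size_t size) {
    for (std::size_t off = 0; off < size; off += kPageSize)
      base[off] = 0;
  }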

This extends the existing `probe-stack' mechanism with a special value `inline-asm'.
Technically the former uses a function call before stack allocation while this
patch provides inlined stack probes and chunk allocation.

Only implemented for x86.

[0] https://www.qualys.com/2017/06/19/stack-clash/stack-clash.txt
[1] https://gcc.gnu.org/ml/gcc-patches/2017-07/msg00556.html

This is a recommit of 39f50da2a3 with proper LiveIn
declaration, better option handling and more portable testing.

Differential Revision: https://reviews.llvm.org/D68720
2020-02-09 10:42:45 +01:00
serge-sans-paille 4546211600 Revert "Support -fstack-clash-protection for x86"
This reverts commit 0fd51a4554.

Failures:

http://lab.llvm.org:8011/builders/llvm-clang-win-x-armv7l/builds/4354
2020-02-09 10:06:31 +01:00
serge_sans_paille 0fd51a4554 Support -fstack-clash-protection for x86
Implement protection against the stack clash attack [0] through inline stack
probing.

Probe stack allocation every PAGE_SIZE during frame lowering or dynamic
allocation to make sure the page guard, if any, is touched when touching the
stack, in a similar manner to GCC[1].

This extends the existing `probe-stack' mechanism with a special value `inline-asm'.
Technically the former uses a function call before stack allocation while this
patch provides inlined stack probes and chunk allocation.

Only implemented for x86.

[0] https://www.qualys.com/2017/06/19/stack-clash/stack-clash.txt
[1] https://gcc.gnu.org/ml/gcc-patches/2017-07/msg00556.html

This is a recommit of 39f50da2a3 with proper LiveIn
declaration, better option handling and more portable testing.

Differential Revision: https://reviews.llvm.org/D68720
2020-02-09 09:35:42 +01:00
Craig Topper e629674176 [X86] Add more scalar intrinsic instructions to isNonFoldablePartialRegisterLoad.
I think this covers most if not all of the scalar intrinsic
instructions.
2020-02-08 20:41:36 -08:00
Fangrui Song ee3f13b81d Fix -Wunused-lambda-capture for -DLLVM_ENABLE_ASSERTIONS=off builds after 6556c615f3 2020-02-08 19:03:58 -08:00
Craig Topper 0152b106ae [X86] Add the recently added (V)CVTSS2SI/CVTSD2SI instructions used for LRINT/LLRINT to the load folding tables. 2020-02-08 17:54:48 -08:00
Craig Topper d643a39aba [X86] Use any_fadd/sub/mul/div/sqrt with the AVX512 scalar_*_patterns.
Making sure not to use them with patterns for masked instructions.

Also fix FMA patterns that were matching strict_fma+x86selects to
masked instructions.
2020-02-08 15:54:40 -08:00
Craig Topper eeb63944e4 [LegalizeTypes][ARM][AArch64][PowerPC][RISCV][X86] Use BUILD_PAIR to return expanded integer results from ReplaceNodeResults instead of just returning two results.
Remove code from LegalizeTypes that allowed this to work.

We were already using BUILD_PAIR for this in some places so this
standardizes on a single way to do this.
2020-02-08 09:52:31 -08:00
Simon Pilgrim 4aa7b9cc96 [X86] X86InstComments - add FMA4 comments
These typically match the FMA3 equivalents, although the multiply operands sometimes get flipped due to the FMA3 permute variants.
2020-02-08 17:02:00 +00:00
Simon Pilgrim 10417ad2e4 [X86] Standardize BROADCAST enum names (PR31079)
Tweak EVEX implementation names so it matches the other variants by adding the 'r' prefix. Oddly some of the subvec broadcast ops already matched.
2020-02-08 16:55:00 +00:00
Simon Pilgrim 0ed79e9b8f [X86] Standardize VPSLLDQ/VPSRLDQ enum names (PR31079)
Tweak EVEX implementation names so it matches the other variants
2020-02-08 14:54:44 +00:00
serge-sans-paille 658495e6ec Revert "Support -fstack-clash-protection for x86"
This reverts commit e229017732.

Failures:

http://lab.llvm.org:8011/builders/llvm-clang-x86_64-expensive-checks-debian/builds/2604
http://lab.llvm.org:8011/builders/llvm-clang-win-x-aarch64/builds/4308
2020-02-08 14:26:22 +01:00
Victor Campos af2a384581 Revert "[ARM] Improve codegen of volatile load/store of i64"
This reverts commit 60e0120c91.
2020-02-08 13:18:45 +00:00
serge_sans_paille e229017732 Support -fstack-clash-protection for x86
Implement protection against the stack clash attack [0] through inline stack
probing.

Probe stack allocation every PAGE_SIZE during frame lowering or dynamic
allocation to make sure the page guard, if any, is touched when touching the
stack, in a similar manner to GCC[1].

This extends the existing `probe-stack' mechanism with a special value `inline-asm'.
Technically the former uses a function call before stack allocation while this
patch provides inlined stack probes and chunk allocation.

Only implemented for x86.

[0] https://www.qualys.com/2017/06/19/stack-clash/stack-clash.txt
[1] https://gcc.gnu.org/ml/gcc-patches/2017-07/msg00556.html

This is a recommit of 39f50da2a3 with better option
handling and more portable testing

Differential Revision: https://reviews.llvm.org/D68720
2020-02-08 13:31:52 +01:00
Benjamin Kramer e4230a9f6c ArrayRef'ize spillCalleeSavedRegisters. NFCI. 2020-02-08 12:19:23 +01:00
Simon Pilgrim 7f5b3fa73c [X86][SSE] Add X86ISD::FRCP handling to isNegatibleForFree
Peek through X86ISD::FRCP nodes to see if there is a negatible input.
2020-02-08 10:56:27 +00:00
Simon Pilgrim 4229f12a22 [TargetLowering] Remove isDesirableToCombineBuildVectorToShuffleTruncate target hook. NFC.
This hasn't been used for years. Its original implementation, D35700, had bugs that caused the reversion of most of the code, and since then x86 shuffle lowering/combining has handled most cases and can deal with the rest as well.
2020-02-08 08:55:51 +00:00
Sam Clegg caeb6cfbc2 [WebAssembly] Fix signature of __powitf2 libcall
Add tests for @llvm.powi.f64/f128.

See: https://llvm.org/docs/LangRef.html#llvm-powi-intrinsic

Differential Revision: https://reviews.llvm.org/D74274
2020-02-07 20:30:47 -08:00
Heejin Ahn 5b5cbfe135 [WebAssembly] Add debug info to insts in Emscripten SjLj
Summary:
This makes sure all newly created instructions in Emscripten SjLj have
appropriate debug info attached. Fixes
https://github.com/emscripten-core/emscripten/issues/9797.

Reviewers: kripken

Subscribers: dschuff, aprantl, sbc100, jgravelle-google, hiraditya, sunfish, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74269
2020-02-07 19:08:39 -08:00
Huihui Zhang 6556c615f3 Reland "[AMDGPU] Fix data race on RegisterBank initialization." 2020-02-07 14:18:48 -08:00
Huihui Zhang ae39105466 Reland "[ARM] Fix data race on RegisterBank initialization."
Update the lambda
static auto InitializeRegisterBankOnce = [this](const auto &TRI) {
to
static auto InitializeRegisterBankOnce = [&]() {

Capture by reference instead of passing an argument, as passing the
argument caused compile errors on some buildbots.
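
The underlying idiom, as a simplified sketch (LLVM types assumed): C++ guarantees thread-safe initialization of function-local statics, so the setup body runs exactly once even when several threads reach it concurrently.

  void initializeOnce(const llvm::TargetRegisterInfo &TRI) {
    static bool Initialized = [&]() {
      // ... one-time RegisterBank setup using the first caller's TRI ...
      return true;
    }();
    (void)Initialized; // avoid unused-variable warnings
  }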
2020-02-07 14:01:06 -08:00
Huihui Zhang 2491fd0e6f Reland "[AArch64] Fix data race on RegisterBank initialization."
Update the lambda
static auto InitializeRegisterBankOnce = [this](const auto &TRI) {
to
static auto InitializeRegisterBankOnce = [&]() {

Capture by reference instead of passing an argument, as passing the
argument caused compile errors on some buildbots.
2020-02-07 13:13:55 -08:00
Nemanja Ivanovic 26bf877ec5 [PowerPC] Fix spilling of vector registers in PEI of EH aware functions
On little endian targets prior to Power9, we spill vector registers using a
swapping store (i.e. stxvd2x saves the vector with the two doublewords in
big endian order regardless of endianness). This is generally not a problem
since we restore them using the corresponding swapping load (lxvd2x). However
if the restore is done by the unwinder, the vector register contains data in
the incorrect order.

This patch fixes that by using Altivec loads/stores for vector saves and
restores in PEI (which keep the order correct) under those specific conditions:
- EH aware function
- Subtarget requires swaps for VSX memops (Little Endian prior to Power9)

Differential revision: https://reviews.llvm.org/D73692
2020-02-07 14:41:52 -06:00
Nico Weber b03c3d8c62 Revert "Support -fstack-clash-protection for x86"
This reverts commit 4a1a0690ad.
Breaks tests on mac and win, see https://reviews.llvm.org/D68720
2020-02-07 14:49:38 -05:00
Changpeng Fang 884acbb9e1 AMDGPU: Enhancement on FDIV lowering in AMDGPUCodeGenPrepare
Summary:
  The accuracy limit to use rcp is adjusted to 1.0 ulp from 2.5 ulp.
Also, afn instead of arcp is used to allow inaccurate rcp to be used.

Reviewers:
  arsenm

Differential Revision: https://reviews.llvm.org/D73588
2020-02-07 11:46:23 -08:00
Jessica Paquette 609a489e05 [AArch64][GlobalISel] Reland SLT/SGT TBNZ optimization
The issue in the previous commits was that we swapped the LHS and RHS while
looking for the constant. In SLT/SGT, the constant must be on the RHS, or the
optimization is invalid.

Move the swapping logic after the check for the SLT/SGT case and update tests.

Original commits:

d78cefb160
a373841407
2020-02-07 11:15:25 -08:00
Changpeng Fang 6370c7c13e AMDGPU: Limit the search in finding the instruction pattern for v_swap generation.
Summary:
  The current implementation of matchSwap in SIShrinkInstructions searches the entire
use_nodbg_operands set to find the possible pattern for generating a v_swap instruction.
This approach leads to O(N^3) compile time for SIShrinkInstructions.

But in reality, the matching pattern only exists within nearby instructions in the
same basic block. This work limits the search to a maximum of 16 instructions, and has
linear compile-time consumption.

Reviewers:
  rampitec, arsenm

Differential Revision: https://reviews.llvm.org/D74180
2020-02-07 11:06:33 -08:00
serge_sans_paille 4a1a0690ad Support -fstack-clash-protection for x86
Implement protection against the stack clash attack [0] through inline stack
probing.

Probe stack allocation every PAGE_SIZE during frame lowering or dynamic
allocation to make sure the page guard, if any, is touched when touching the
stack, in a similar manner to GCC[1].

This extends the existing `probe-stack' mechanism with a special value `inline-asm'.
Technically the former uses a function call before stack allocation while this
patch provides inlined stack probes and chunk allocation.

Only implemented for x86.

[0] https://www.qualys.com/2017/06/19/stack-clash/stack-clash.txt
[1] https://gcc.gnu.org/ml/gcc-patches/2017-07/msg00556.html

This is a recommit of 39f50da2a3 with correct option
flags set.

Differential Revision: https://reviews.llvm.org/D68720
2020-02-07 19:54:39 +01:00
Sean Fertile 88073d40c7 [PowerPC] Create a FixedStack object for CR save in linkage area.
hasReservedSpillSlot returns a dummy frame index of '0' on PPC64 for the
non-volatile condition registers, which leads to the CalleeSavedInfo
either referencing an unrelated stack object, or an invalid object if
there are no stack objects. The latter case causes the mir-printer to
crash due to assertions that checks if the frame index referenced by a
CalleeSavedInfo is valid.

To fix the problem, create an immutable FixedStack object at the correct offset
in the linkage area of the previous stack frame (ie SP + positive offset).

Differential Revision: https://reviews.llvm.org/D73709
2020-02-07 13:33:44 -05:00
Craig Topper 278578744a [X86] Handle SETB_C32r/SETB_C64r in flag copy lowering the same way we handle SBB
Previously we took the restored flag in a GPR, extended it to 32 or 64 bits, then used it as an input to a sub from 0. This required creating a zero extend and creating a 0.

This patch changes this to just use an ADD with 255 to restore the carry flag and keep the SETB_C32r/SETB_C64r, exactly like we handle SBB, which is what SETB becomes.
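
To see why the ADD works (illustrative arithmetic, not code from the patch): the restored flag byte is 0 or 1, and adding 255 carries out of 8 bits exactly when it was 1.

  #include <cstdint>

  bool carryFromAdd255(std::uint8_t flag) {
    // flag == 0: 0 + 255 = 255, fits in 8 bits   -> CF = 0
    // flag == 1: 1 + 255 = 256, overflows 8 bits -> CF = 1
    return unsigned(flag) + 255u > 255u;
  }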

Differential Revision: https://reviews.llvm.org/D74152
2020-02-07 10:31:19 -08:00
Petar Avramovic 7df5fc9e03 [GlobalISel] Add buildMerge with SrcOp initializer list
Allows more flexible use of buildMerge in places where
use operands are available as SrcOp, since it does not
require explicit conversion to Register.
Simplify code with the new buildMerge; a usage sketch follows.
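
A usage sketch under assumed names (Lo and Hi stand for previously built values; anything convertible to SrcOp can be listed inline, with no Register conversion needed):

  // Merge two 32-bit pieces into a 64-bit value.
  auto Merge = MIRBuilder.buildMerge(LLT::scalar(64), {Lo, Hi});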

Differential Revision: https://reviews.llvm.org/D74223
2020-02-07 18:43:45 +01:00
Sanjay Patel de6f7eb47e [x86] don't create an unused constant vector
Noticed while scanning through debug spew. Creating unused
nodes is inefficient and makes following the debug output harder.
2020-02-07 12:05:02 -05:00
Simon Pilgrim c96001035d [X86] isNegatibleForFree - allow pre-legalized FMA negation
As long as the FMA operation is legal (which we can proxy for the FMA3/FMA4 variants as well), we don't have to wait for the LegalOperations stage.
2020-02-07 17:04:17 +00:00
Matt Arsenault 2f885cbe90 AMDGPU/GlobalISel: Fix move s.buffer.load to VALU
We were executing this in a waterfall loop as a placeholder, but this
should really be converted to a MUBUF load. Also execute in a
waterfall loop if the resource isn't an SGPR. This is a case where the
DAG handling was wrong because doing the right thing was too hard.

Currently, this will mishandle 96-bit loads. There's currently no way
to track the original memory size with an MMO, so these loads will be
widened and the resulting memory size will be 128 bits.
2020-02-07 07:19:01 -08:00
Matt Arsenault 3b198518ad GlobalISel: Fix narrowing of G_CTPOP
The result type is separate from the source type. Tests will be
included in a future AMDGPU patch which uses this from
RegBankSelect/applyMappingImpl.
2020-02-07 06:58:00 -08:00
Matt Arsenault 8de2dad9e0 GlobalISel: Fix lowering of G_CTLZ/G_CTTZ
The type passed to lower was invalid, so I'm not sure how this was
even working before. The source and destination type also do not have
to match, so make sure to use the right ones.
2020-02-07 06:54:12 -08:00
Momchil Velikov a2531081b3 [AArch64] Predictably disassemble system registers with the same encoding
The registers TRCEXTINSELR and TRCEXTINSELR0 are distinct registers,
defined by separate extension specifications (ETM and ETE,
respectively), yet they use the same encoding in MSR/MRS.

When performing a system register lookup by encoding, we would
essentially return a random one, depending on the number, relative
position in the TableGen file, whether the TableGen records for system
registers are named or not, and, if they are named, depending on
record (not register!) name as well.

This patch works around the issue by explicitly checking for the
TRCEXTINSELR/TRCEXTINSELR0 encoding and always returning TRCEXTINSELR.

Differential Revision: https://reviews.llvm.org/D74074
2020-02-07 12:19:57 +00:00
serge-sans-paille f6d98429fc Revert "Support -fstack-clash-protection for x86"
This reverts commit 39f50da2a3.

The -fstack-clash-protection is being passed to the linker too, which
is not intended.

Reverting and fixing that in a later commit.
2020-02-07 11:36:53 +01:00
Guillaume Chatelet f85d3408e6 [NFC] Introduce an API for MemOp
Summary: This patch introduces an API for MemOp in order to simplify and tighten the client code.

Reviewers: courbet

Subscribers: arsenm, nemanjai, jvesely, nhaehnle, hiraditya, kbarton, jsji, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73964
2020-02-07 11:32:27 +01:00
Pierre van Houtryve e8c3a6c260 [ARM][ASMParser] Refuse equal RdHi/RdLo for s/umlal, smlsl, s/umull, umaal
Differential Revision: https://reviews.llvm.org/D74120
2020-02-07 10:05:20 +00:00
serge_sans_paille 39f50da2a3 Support -fstack-clash-protection for x86
Implement protection against the stack clash attack [0] through inline stack
probing.

Probe stack allocation every PAGE_SIZE during frame lowering or dynamic
allocation to make sure the page guard, if any, is touched when touching the
stack, in a similar manner to GCC[1].

This extends the existing `probe-stack' mechanism with a special value `inline-asm'.
Technically the former uses function call before stack allocation while this
patch provides inlined stack probes and chunk allocation.

Only implemented for x86.

[0] https://www.qualys.com/2017/06/19/stack-clash/stack-clash.txt
[1] https://gcc.gnu.org/ml/gcc-patches/2017-07/msg00556.html

Differential Revision: https://reviews.llvm.org/D68720
2020-02-07 10:56:15 +01:00
Pierre van Houtryve 56d81d4580 [Target][AArch64] Remove non-existing system registers ICH_VSEIR_EL2 & ICC_SEIEN_EL1 from AArch64 backend
Differential Revision: https://reviews.llvm.org/D74118
2020-02-07 09:44:41 +00:00
Craig Topper ae4e49868a [X86] Turn vXi1 any_extends into sign_extends in PreprocessISelDAG and remove some isel patterns.
Similar to what we do for other vector any_extends, but instead
of zero_extend we need to use sign_extend.
2020-02-06 21:32:53 -08:00
Craig Topper 3f62028f2f [X86] Use SelectionDAG::getAllOnesConstant to simplify some code. NFC 2020-02-06 21:32:53 -08:00