Commit Graph

Matt Arsenault 5dafcb9b11 AMDGPU/GlobalISel: Use and instead of BFE with inline immediate
Zext from s1 is the only case where this should do anything with the
current legal extensions.
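
A standalone illustration (editor's sketch, not the GlobalISel code itself)
of why the two are equivalent: zero-extending a 1-bit value is exactly a
mask of the low bit, so an AND with the inline immediate 1 can stand in for
a width-1 BFE.
```
#include <cassert>
#include <cstdint>

// Zero-extending an s1 value just masks the low bit, so a BFE
// (bitfield extract) of offset 0 and width 1 reduces to AND with 1.
uint32_t zextS1ViaBfe(uint32_t X) { return (X >> 0) & ((1u << 1) - 1); }
uint32_t zextS1ViaAnd(uint32_t X) { return X & 1u; }

int main() {
  for (uint32_t X : {0u, 1u, 42u, 0xFFFFFFFFu})
    assert(zextS1ViaBfe(X) == zextS1ViaAnd(X));
}
```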

llvm-svn: 364760
2019-07-01 13:22:06 +00:00
Simon Atanasyan ceb9da5bc7 [mips] Add missing schedinfo for MSA and ASE instructions
llvm-svn: 364757
2019-07-01 13:21:05 +00:00
Simon Atanasyan c0121bf874 [mips] Add missing schedinfo for atomic instructions
llvm-svn: 364756
2019-07-01 13:20:56 +00:00
Simon Atanasyan 3a10810b7a [mips] Add missing schedinfo for ADJCALLSTACKDOWN, ADJCALLSTACKUP
llvm-svn: 364755
2019-07-01 13:20:48 +00:00
Florian Hahn 33c8c0ea27 [AMDGPU] Call isLoopExiting for blocks in the loop.
isLoopExiting should only be called for blocks in the loop. A follow-up
patch makes this requirement an assertion.

I've updated the usage here to only match actual exit blocks. Previously,
it would also match blocks not in the loop.

Reviewers: arsenm, nhaehnle

Reviewed By: nhaehnle

Differential Revision: https://reviews.llvm.org/D63980

llvm-svn: 364750
2019-07-01 12:36:44 +00:00
Fangrui Song 92e78b7bed [RISCV] Add break; to the last switch case
As suggested by jrtc27 in the post-commit review of D60528.

llvm-svn: 364746
2019-07-01 11:41:07 +00:00
Simon Pilgrim 172fe5dd19 [X86] CombineShuffleWithExtract - updated description comments. NFCI.
CombineShuffleWithExtract no longer requires that both shuffle operands are extract_subvectors, nor that they come from the same type or the same size.

llvm-svn: 364745
2019-07-01 11:33:45 +00:00
Sam Parker 98722691b0 [ARM] WLS/LE Code Generation
Backend changes to enable WLS/LE low-overhead loops for armv8.1-m:
1) Use TTI to communicate to the HardwareLoop pass that we should try
   to generate intrinsics that guard the loop entry, as well as setting
   the loop trip count (modelled in the sketch below).
2) Lower the BRCOND that uses said intrinsic to an Arm-specific node:
   ARMWLS.
3) In ISelDAGToDAG, select the node to a new pseudo instruction:
   t2WhileLoopStart.
4) Add support in ArmLowOverheadLoops to handle the new pseudo
   instruction.
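
A minimal scalar model of the WLS semantics being targeted (an editor's
illustration with made-up names, not the generated code):
```
// One operation both guards loop entry when the trip count may be zero
// and sets the trip count that the loop-end (LE) branch consumes.
int sumWithWhileLoopStart(const int *P, unsigned N) {
  int Sum = 0;
  if (N != 0) {           // the entry guard WLS provides
    unsigned Count = N;   // the trip count WLS sets
    do {
      Sum += *P++;
    } while (--Count);    // the LE-style decrement-and-branch
  }
  return Sum;
}
```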

Differential Revision: https://reviews.llvm.org/D63816

llvm-svn: 364733
2019-07-01 08:21:28 +00:00
Craig Topper 29fff0797b [X86] Improve the type checking fast-isel handling of vector bitcasts.
We had a bunch of vector size legality checks for the source type
based on feature flags, but we didn't check the destination type at
all beyond ensuring that it was a "simple" type. But this allowed
the destination to be i128 which isn't legal.

This commit changes the code to use TLI's isTypeLegal logic in
place of all the subtarget checks. It additionally checks
that the source and dest are vectors.
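
A sketch of the shape of the new check (an approximation against public
LLVM APIs, not the verbatim fast-isel code; `bitcastIsHandled` is a
hypothetical helper name):
```
#include "llvm/CodeGen/TargetLowering.h"
#include "llvm/CodeGen/ValueTypes.h"

// Both source and destination must be legal vector types; this rejects
// cases like a bitcast producing i128.
static bool bitcastIsHandled(const llvm::TargetLowering &TLI,
                             llvm::EVT SrcVT, llvm::EVT DstVT) {
  if (!SrcVT.isVector() || !DstVT.isVector())
    return false;
  return TLI.isTypeLegal(SrcVT) && TLI.isTypeLegal(DstVT);
}
```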

Fixes 42452

llvm-svn: 364729
2019-07-01 07:09:34 +00:00
Craig Topper 4ca81a9b99 [X86] Add a DAG combine to replace vector loads feeding a v4i32->v2f64 CVTSI2FP/CVTUI2FP node with a vzload.
But only when the load isn't volatile.

This improves load folding during isel, where we only have vzload
and scalar_to_vector+load patterns. We can't have full vector load
isel patterns because of the same volatile-load issue.

Also add some missing masked cvtsi2fp/cvtui2fp with vzload patterns.

llvm-svn: 364728
2019-07-01 07:09:31 +00:00
Craig Topper d1728f8987 [X86] Add MOVHPDrm/MOVLPDrm patterns that use VZEXT_LOAD.
We already had patterns that used scalar_to_vector+load. But we can
also have a vzload.

Found while investigating combining scalar_to_vector+load to vzload.

llvm-svn: 364726
2019-07-01 07:09:23 +00:00
Fangrui Song 78ee2fbf98 Cleanup: llvm::bsearch -> llvm::partition_point after r364719
llvm-svn: 364720
2019-06-30 11:19:56 +00:00
Craig Topper 725a8a5dc4 [X86] Custom lower AVX masked loads to masked load and vselect instead of selecting a maskmov+vblend during isel.
AVX masked loads only support 0 as the value for masked off elements.
So we need an extra blend to support other values. Previously we
expanded the masked load to two instructions with isel patterns.
With this patch we now insert the vselect during lowering and it
will be separately selected as a blend.
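
A scalar model of the new lowering (illustrative only; the function name is
made up):
```
#include <array>

// The masked load yields 0 in masked-off lanes, and the vselect inserted
// during lowering blends in the pass-through (selected later as a blend).
std::array<int, 8> maskedLoadWithPassthru(const int *Ptr,
                                          const std::array<bool, 8> &Mask,
                                          const std::array<int, 8> &Passthru) {
  std::array<int, 8> Loaded{}; // AVX maskmov: masked-off lanes become 0
  for (int I = 0; I < 8; ++I)
    if (Mask[I])
      Loaded[I] = Ptr[I];
  std::array<int, 8> Result;
  for (int I = 0; I < 8; ++I)
    Result[I] = Mask[I] ? Loaded[I] : Passthru[I]; // the extra vselect
  return Result;
}
```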

llvm-svn: 364718
2019-06-30 06:46:37 +00:00
Matt Arsenault 0d45209757 AMDGPU/GlobalISel: RegBankSelect for update.dpp
llvm-svn: 364701
2019-06-29 00:44:36 +00:00
Matt Arsenault fd82cf4f4d AMDGPU/GlobalISel: RegBankSelect for atomic.inc/atomic.dec
llvm-svn: 364699
2019-06-29 00:39:20 +00:00
Matt Arsenault adb1f21e52 AMDGPU/GlobalISel: RegBankSelect for some DS intrinsics
llvm-svn: 364698
2019-06-29 00:33:13 +00:00
Matt Arsenault b416d5fc8b AMDGPU/GlobalISel: RegBankSelect for some easy intrinsics
llvm-svn: 364697
2019-06-29 00:29:56 +00:00
Matt Arsenault 5ea3c9adb2 AMDGPU/GlobalISel: RegBankSelect for icmp/fcmp intrinsics
llvm-svn: 364696
2019-06-29 00:28:52 +00:00
Matt Arsenault 6aafb3068f AMDGPU/GlobalISel: RegBankSelect for amdgcn.div.fmas
llvm-svn: 364695
2019-06-29 00:25:53 +00:00
Matt Arsenault ade5162432 AMDGPU/GlobalISel: RegBankSelect for some simple leaf intrinsics
llvm-svn: 364694
2019-06-29 00:22:28 +00:00
Wouter van Oortmerssen 319c87d94f [WebAssembly] Assembler: support .int16/32/64 directives.
Reviewers: sbc100

Subscribers: dschuff, jgravelle-google, aheejin, sunfish, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63959

llvm-svn: 364689
2019-06-28 22:20:33 +00:00
Sanjay Patel 9126c84f50 [x86] remove stale comment about cmov; NFC
The cmov node used to sometimes return a glue result (and that's what
'flag' meant in this context), but that was removed with D38664.

llvm-svn: 364687
2019-06-28 21:45:55 +00:00
Wouter van Oortmerssen fc222e23ca [WebAssembly] Assembler: Allow offsets and p2align in symbol load.
Reviewers: sbc100

Subscribers: dschuff, jgravelle-google, aheejin, sunfish, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63951

llvm-svn: 364682
2019-06-28 20:31:13 +00:00
Wouter van Oortmerssen 597ba18008 [WebAssembly] Assembler: Improve section parsing.
Reviewers: sbc100

Subscribers: dschuff, jgravelle-google, aheejin, sunfish, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63947

llvm-svn: 364681
2019-06-28 20:29:16 +00:00
Brad Smith 4b733ca617 Default to Secure PLT on PPC for musl libc.
This matches the default settings of clang.

llvm-svn: 364675
2019-06-28 19:48:31 +00:00
Simon Pilgrim 978a08c885 [X86] CombineShuffleWithExtract - recurse through EXTRACT_SUBVECTOR chain
llvm-svn: 364667
2019-06-28 17:57:32 +00:00
Dmitry Preobrazhensky e1eb25ff3e [AMDGPU][MC] Fix 2 for sanitizer failure in 364645
llvm-svn: 364656
2019-06-28 16:28:46 +00:00
Sam Tebbs e39e958da3 [ARM] Add support for the MVE long shift instructions
MVE adds the lsll, lsrl and asrl instructions, which perform a shift on a
64-bit value separated into two 32-bit registers.

The Expand64BitShift function is modified to accept ISD::SHL, ISD::SRL and
ISD::SRA and convert them into the appropriate ARMISD opcode. An SHL is
converted into an lsll, an SRL is converted into an lsrl for the immediate
form and into a negation and lsll for the register form, and an SRA is
converted into an asrl.

test/CodeGen/ARM/shift_parts.ll is added to test the logic of emitting these
instructions.
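
An illustrative model of the lsll operation on a register pair (editor's
sketch; assumes a shift amount strictly between 0 and 32, while the real
instruction handles the full range):
```
#include <cassert>
#include <cstdint>

// A 64-bit left shift performed on a value held as two 32-bit halves,
// the way MVE's long shifts operate on a register pair.
void lsllModel(uint32_t &Lo, uint32_t &Hi, unsigned Amt) {
  assert(Amt > 0 && Amt < 32 && "wider shifts need extra handling");
  Hi = (Hi << Amt) | (Lo >> (32 - Amt)); // bits carried out of the low half
  Lo <<= Amt;
}
```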

Differential Revision: https://reviews.llvm.org/D63430

llvm-svn: 364654
2019-06-28 15:43:31 +00:00
Dmitry Preobrazhensky d12966c088 [AMDGPU][MC] Fix for sanitizer failure in 364645
llvm-svn: 364651
2019-06-28 15:22:47 +00:00
Dmitry Preobrazhensky 1d572ce395 [AMDGPU][MC] Enabled constant expressions as operands of sendmsg
See bug 40820: https://bugs.llvm.org/show_bug.cgi?id=40820

Reviewers: artem.tamazov, arsenm

Differential Revision: https://reviews.llvm.org/D62735

llvm-svn: 364645
2019-06-28 14:14:02 +00:00
Simon Pilgrim a54e1a0f01 [X86] CombineShuffleWithExtract - only require 1 source to be EXTRACT_SUBVECTOR
We were requiring that both shuffle operands were EXTRACT_SUBVECTORs, but we can relax this to only require one of them to be.

Also, we shouldn't bother attempting this if both operands are from the lowest subvector (or not EXTRACT_SUBVECTOR at all).

llvm-svn: 364644
2019-06-28 12:24:49 +00:00
David Green 9dbdfe6b78 [ARM] Add MVE mul patterns
This simply adds integer and floating point VMUL patterns for MVE, the same
as we already have for add and sub.

Differential Revision: https://reviews.llvm.org/D63866

llvm-svn: 364643
2019-06-28 11:44:03 +00:00
David Green 2883944035 [ARM] Mark math routines as non-legal for MVE
This adds handling and tests for a number of floating point math routines,
which have no MVE instructions.

Differential Revision: https://reviews.llvm.org/D63725

llvm-svn: 364641
2019-06-28 11:17:38 +00:00
David Green ff70cbc895 [ARM] MVE patterns for VABS and VNEG
This simply adds the required patterns for fp neg and abs.

Differential Revision: https://reviews.llvm.org/D63861

llvm-svn: 364640
2019-06-28 10:25:35 +00:00
David Green eb7080ac6e [ARM] Widening loads and narrowing stores
MVE has instructions to widen as it loads, and narrow as it stores. This adds
the required patterns and legalisation to make them work, including specifying
that they are legal, patterns to select them, and test changes.

Patch by David Sherwood.

Differential Revision: https://reviews.llvm.org/D63839

llvm-svn: 364636
2019-06-28 09:47:55 +00:00
Simon Tatham 29ff1b4f46 [ARM] Fix integer UB in MVE load/store immediate handling.
llvm-svn: 364635
2019-06-28 09:28:39 +00:00
David Green 07e53fee14 [ARM] MVE loads and stores
This fills in the gaps for basic MVE loads and stores, allowing unaligned
access and adding far too many tests. These will become important as
narrowing/expanding and pre/post inc are added. Big endian might still not be
handled very well, because we have not yet added bitcasts (and I'm not sure how
we want it to work yet). I've included the alignment code anyway, which maps
onto our current patterns. We plan to return to that later.

Code written by Simon Tatham, with additional tests from me and Mikhail
Maltsev.

Differential Revision: https://reviews.llvm.org/D63838

llvm-svn: 364633
2019-06-28 08:41:40 +00:00
Dylan McKay 2bc48f503a [AVR] Don't look for the TargetFrameLowering in the FrameLowering implementation
c.f. r364349

llvm-svn: 364632
2019-06-28 08:35:21 +00:00
David Green fc4102417b [ARM] Mark div and rem as expand for MVE
We don't have vector operations for these, so they need to be expanded for both
integer and float.

Differential Revision: https://reviews.llvm.org/D63595

llvm-svn: 364631
2019-06-28 08:18:55 +00:00
David Green 62889b0ea5 [ARM] Select MVE fp add and sub
As with integer arithmetic, we can add simple floating-point MVE addition and
subtraction patterns.

Initial code by David Sherwood

Differential Revision: https://reviews.llvm.org/D63257

llvm-svn: 364629
2019-06-28 07:41:09 +00:00
David Green be05b85db9 [ARM] Select MVE add and sub
This adds the first few patterns for MVE code generation, adding simple integer
add and sub patterns.

Initial code by David Sherwood

Differential Revision: https://reviews.llvm.org/D63255

llvm-svn: 364627
2019-06-28 07:21:11 +00:00
David Green 8be372b190 [ARM] MVE vector shuffles
This patch adds the necessary shuffle vector and buildvector support for ARM MVE.
It essentially adds support for VDUP, VREVs and some VMOVs, which are often
required by other code (like upcoming patches).

This mostly uses the same code from Neon that already generated
NEONvdup/NEONvduplane/NEONvrev's. These have been renamed to ARMvdup/etc and
moved to ARMInstrInfo as they are common to both architectures. Most of the
selection code seems to be applicable to both, but NEON does have some more
instructions making some parts specific.

Most code originally by David Sherwood.

Differential Revision: https://reviews.llvm.org/D63567

llvm-svn: 364626
2019-06-28 07:08:42 +00:00
Craig Topper cbb88a5169 [X86] Connect the output chain properly when combining vzext_movl+load into vzext_load.
llvm-svn: 364625
2019-06-28 06:58:50 +00:00
Craig Topper e832adea0f [X86] Remove some duplicate patterns that already exist as part of their instruction definition. NFC
llvm-svn: 364623
2019-06-28 05:03:47 +00:00
Zi Xuan Wu 588a170970 [NFC][PowerPC] Move XS*QP series instruction apart from XS*QPO series in position of td file
llvm-svn: 364620
2019-06-28 02:51:03 +00:00
Stanislav Mekhanoshin 07fd88d735 [AMDGPU] Packed thread ids in function call ABI
Differential Revision: https://reviews.llvm.org/D63851

llvm-svn: 364619
2019-06-28 01:52:13 +00:00
Kai Luo c6fe8436e8 [PowerPC][NFC] Use `|=` to update `Simplified` flag
llvm-svn: 364617
2019-06-28 01:38:42 +00:00
Matt Arsenault 1178dc3d0b AMDGPU/GlobalISel: Convert to using Register
llvm-svn: 364616
2019-06-28 01:16:46 +00:00
Sanjay Patel a95ca2b5ff [x86] prevent crashing from select narrowing with AVX512
llvm-svn: 364585
2019-06-27 20:16:58 +00:00
Jinsong Ji c627aa2fa9 [PowerPC][NFC] Remove unused (and unsupported) fusion feature bits.
The FeatureFusion bits were first introduced in
https://reviews.llvm.org/rL253724 for add/load integer fusion on P8.
The only use of `hasFusion` was https://reviews.llvm.org/rL255319.

However, this was removed later in https://reviews.llvm.org/rL280440.

So there is now no reference to fusion anywhere in the code.

Leaving it there is misleading and confusing, so remove it for now.
We can always add it back if we ever support fusion in the future.

llvm-svn: 364581
2019-06-27 19:35:11 +00:00
Wouter van Oortmerssen bfd3f69480 [WebAssembly] AsmParser: better atomic inst detection
Summary:
Previously missed atomic.notify.

Fixes https://bugs.llvm.org/show_bug.cgi?id=40728

Reviewers: aheejin

Subscribers: sbc100, jgravelle-google, sunfish, jfb, llvm-commits, dschuff

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63747

llvm-svn: 364576
2019-06-27 18:58:26 +00:00
Wouter van Oortmerssen 6b3f56b65f [WebAssembly] Fix p2align in assembler.
Summary:
- Match the syntax output by InstPrinter.
- Fix it always emitting 0 for align. Had to work around the fact that
  the opcode is not available for GetDefaultP2Align while parsing.
- Update tests that were erroneously happy with a p2align of 0.

Fixes https://bugs.llvm.org/show_bug.cgi?id=40752

Reviewers: aheejin, sbc100

Subscribers: jgravelle-google, sunfish, jfb, llvm-commits, dschuff

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63633

llvm-svn: 364570
2019-06-27 18:11:15 +00:00
Simon Pilgrim 1fd1c60979 [X86] combineX86ShufflesRecursively - merge shuffles with more than 2 inputs
We already had the infrastructure for this, but were waiting for the fix for a number of regressions, which were handled by the recent shuffle(extract_subvector(),extract_subvector()) -> extract_subvector(shuffle()) shuffle combines.

llvm-svn: 364569
2019-06-27 17:30:51 +00:00
Nicolai Haehnle 32ef9292be AMDGPU: Make fixing i1 copies robust against re-ordering
Summary:
The new test case led to incorrect code.

Change-Id: Ief48b227e97aa662dd3535c9bafb27d4a184efca

Reviewers: arsenm, david-salinas

Subscribers: kzhuravl, jvesely, wdng, yaxunl, dstuttard, tpr, t-tye, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63871

llvm-svn: 364566
2019-06-27 16:56:44 +00:00
Simon Pilgrim e9a2f4fe2c Use getConstantOperandAPInt instead of getConstantOperandVal for comparisons.
getConstantOperandAPInt avoids any large integer issues - these are unlikely, but the fuzzers do like to mess around.

llvm-svn: 364564
2019-06-27 16:46:00 +00:00
Simon Pilgrim 74343eba37 [X86] getTargetVShiftByConstNode - reduce variable scope. NFCI.
Fixes cppcheck warning.

llvm-svn: 364561
2019-06-27 16:33:44 +00:00
Sam Tebbs 8747c5f482 [ARM] Fix formatting issue in ARMISelLowering.cpp
Fix a formatting error in ARMISelLowering.cpp::Expand64BitShift. My test
commit after receiving write access.

llvm-svn: 364560
2019-06-27 16:28:28 +00:00
Roland Froese 9f7f5858fe Recommit [PowerPC] Update P9 vector costs for insert/extract element
Recommit patch D60160 after regression fix patch D63463.

llvm-svn: 364557
2019-06-27 16:20:24 +00:00
Jinsong Ji 157b073fa5 [PowerPC][HTM] Fix disassembling buffer overflow for tabortdc and others
This was reported in https://bugs.llvm.org/show_bug.cgi?id=41751
llvm-mc aborted when disassembling tabortdc.

This patch tries to clean up the TM-related DAGs.

* Fixes the problem by removing the explicit output of cr0 and making it an
  implicit def.
* Updates the int_ppc_tbegin pattern to accommodate the implicit def of cr0.
* Updates the TCHECK operand and int_ppc_tcheck accordingly.
* Adds some builtin and disassembly tests.
* Removes the unused CRRC0/crrc0.

Differential Revision: https://reviews.llvm.org/D61935

llvm-svn: 364544
2019-06-27 14:11:31 +00:00
Simon Atanasyan e9ec0b6f09 [mips] Mark pseudo select instructions by the `hasNoSchedulingInfo` tag
llvm-svn: 364540
2019-06-27 13:41:30 +00:00
Simon Atanasyan 7c83f0705a [mips] Add new items to the list of features unsupported by P5600
llvm-svn: 364539
2019-06-27 13:41:23 +00:00
Djordje Todorovic 71d3869f60 [Backend] Keep call site info valid through the backend
Handle call instruction replacements and deletions in order to preserve
valid state of the call site info of the MachineFunction.

NOTE: If the call site info is enabled for a new target, the assertion in
MachineFunction::DeleteMachineInstr() should help to locate places
where updateCallSiteInfo() should be called in order to preserve the valid
state of the call site info.

([10/13] Introduce the debug entry values.)

Co-authored-by: Ananth Sowda <asowda@cisco.com>
Co-authored-by: Nikola Prica <nikola.prica@rt-rk.com>
Co-authored-by: Ivan Baev <ibaev@cisco.com>

Differential Revision: https://reviews.llvm.org/D61062

llvm-svn: 364536
2019-06-27 13:10:29 +00:00
Simon Tatham 1a3dc8f678 [ARM] Fix bogus assertions in copyPhysReg v8.1-M cases.
The code to generate register move instructions in and out of VPR and
FPSCR_NZCV had assertions checking that the other register involved
was a GPR _pair_, instead of a single GPR as it should have been.

Reviewers: miyuki, ostannard

Subscribers: javed.absar, kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63865

llvm-svn: 364534
2019-06-27 12:41:12 +00:00
Simon Tatham ffb2b347ff [ARM] Fix handling of zero offsets in LOB instructions.
The BF and WLS/WLSTP instructions have various branch-offset fields
occupying different positions and lengths in the instruction encoding,
and all of them were decoded at disassembly time by the function
DecodeBFLabelOffset() which returned SoftFail if the offset was zero.

In fact, it's perfectly fine and not even a SoftFail for most of those
offset fields to be zero. The only one that can't be zero is the 4-bit
field labelled `boff` in the architecture spec, occupying bits {26-23}
of the BF instruction family. If that one is zero, the encoding
overlaps other instructions (WLS, DLS, LETP, VCTP), so it ought to be
a full Fail.

Fixed by adding an extra template parameter to DecodeBFLabelOffset
which controls whether a zero offset is accepted or rejected. Adjusted
existing tests (only in error messages for bad disassemblies); added
extra tests to demonstrate zero offsets being accepted in all the
right places, and a few demonstrating rejection of zero `boff`.
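
The shape of the fix, sketched (editor's simplification; the real decoder
returns MCDisassembler::DecodeStatus and takes more parameters):
```
// A template parameter selects whether a zero offset decodes
// successfully or must fail outright (the `boff` case).
enum class Decode { Fail, SoftFail, Success };

template <bool ZeroPermitted>
Decode decodeBFLabelOffset(unsigned Offset) {
  if (Offset == 0)
    return ZeroPermitted ? Decode::Success : Decode::Fail;
  return Decode::Success;
}
```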

Reviewers: DavidSpickett, ostannard

Subscribers: javed.absar, kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63864

llvm-svn: 364533
2019-06-27 12:41:07 +00:00
Simon Tatham e5ce56fb95 [ARM] Make coprocessor number restrictions consistent.
Different versions of the Arm architecture disallow the use of generic
coprocessor instructions like MCR and CDP on different sets of
coprocessors. This commit centralises the check of the coprocessor
number so that it's consistent between assembly and disassembly, and
also updates it for the new restrictions in Arm v8.1-M.

New tests added that check all the coprocessor numbers; old tests
updated, where they used a number that's now become illegal in the
context in question.

Reviewers: DavidSpickett, ostannard

Subscribers: javed.absar, kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63863

llvm-svn: 364532
2019-06-27 12:40:55 +00:00
Simon Tatham 02449f9c3c [ARM] Tighten restrictions on use of SP in v8.1-M CSEL.
In the `CSEL Rd,Rm,Rn` instruction family (also including CSINC, CSINV
and CSNEG), the architecture lists it as CONSTRAINED UNPREDICTABLE
(i.e. SoftFail) to use SP in the Rd or Rm slot, but outright illegal
to use it in the Rn slot, not least because some encodings of that
form are used by MVE instructions such as UQRSHLL.

MC was treating all three slots the same, as SoftFail. So the only
reason UQRSHLL was disassembled correctly at all was because the MVE
decode table is separate from the Thumb2 one and takes priority; if
you turned off MVE, then encodings such as `[0x5f,0xea,0x0d,0x83]`
would disassemble as spurious CSELs.

Fixed by inventing another version of the `GPRwithZR` register class,
which disallows SP completely instead of just SoftFailing it.

Reviewers: DavidSpickett, ostannard

Subscribers: javed.absar, kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63862

llvm-svn: 364531
2019-06-27 12:40:40 +00:00
Simon Pilgrim c5cff5d3d1 [X86] getFauxShuffle - add DemandedElts as a filter
This is currently benign but will be used in the future based on the elements referenced by the parent shuffle(s).

llvm-svn: 364530
2019-06-27 12:35:52 +00:00
Simon Atanasyan 8c35c43816 [mips] Add GPR_64 predicate to some mov[zn] instructions
llvm-svn: 364527
2019-06-27 12:08:17 +00:00
Simon Atanasyan bf5fc620d9 [mips] Fix indentation and split long lines. NFC
llvm-svn: 364526
2019-06-27 12:08:10 +00:00
Simon Atanasyan 3b184cf7e1 [mips] Reformat MSA instruction definitions. NFC
llvm-svn: 364525
2019-06-27 12:08:03 +00:00
Simon Pilgrim 90e121fbe6 [X86][AVX] SimplifyDemandedVectorElts - combine PERMPD(x) -> EXTRACTF128(X)
If we only use the bottom lane, see if we can simplify this to extract_subvector - which is always at least as quick as PERMPD/PERMQ.

llvm-svn: 364518
2019-06-27 11:16:03 +00:00
Djordje Todorovic 7eeeb5947e [ISEL][X86] Tracking of registers that forward call arguments
While lowering calls, collect info about registers that forward arguments
into the following function frame. We store such info in the MachineFunction
of the call. This is used very late when dumping DWARF info about
call site parameters.

([9/13] Introduce the debug entry values.)

Co-authored-by: Ananth Sowda <asowda@cisco.com>
Co-authored-by: Nikola Prica <nikola.prica@rt-rk.com>
Co-authored-by: Ivan Baev <ibaev@cisco.com>

Differential Revision: https://reviews.llvm.org/D60715

llvm-svn: 364516
2019-06-27 10:51:15 +00:00
Diana Picus 253b53b2ec [AArch64 GlobalISel] Cleanup CallLowering. NFCI
Now that lowerCall and lowerFormalArgs have been refactored, we can
simplify splitToValueTypes.

Differential Revision: https://reviews.llvm.org/D63552

llvm-svn: 364513
2019-06-27 09:24:30 +00:00
Diana Picus 43fb5ae50c [GlobalISel] Accept multiple vregs for lowerCall's args
Change the interface of CallLowering::lowerCall to accept several
virtual registers for each argument, instead of just one.  This is a
follow-up to D46018.

CallLowering::lowerReturn was similarly refactored in D49660 and
lowerFormalArguments in D63549.

With this change, we no longer pack the virtual registers generated for
aggregates into one big lump before delegating to the target. Therefore,
the target can decide itself whether it wants to handle them as separate
pieces or use one big register.

ARM and AArch64 have been updated to use the passed in virtual registers
directly, which means we no longer need to generate so many
merge/extract instructions.

NFCI for AMDGPU, Mips and X86.

Differential Revision: https://reviews.llvm.org/D63551

llvm-svn: 364512
2019-06-27 09:18:03 +00:00
Diana Picus 8138996128 [GlobalISel] Accept multiple vregs for lowerCall's result
Change the interface of CallLowering::lowerCall to accept several
virtual registers for the call result, instead of just one.  This is a
follow-up to D46018.

CallLowering::lowerReturn was similarly refactored in D49660 and
lowerFormalArguments in D63549.

With this change, we no longer pack the virtual registers generated for
aggregates into one big lump before delegating to the target. Therefore,
the target can decide itself whether it wants to handle them as separate
pieces or use one big register.

ARM and AArch64 have been updated to use the passed in virtual registers
directly, which means we no longer need to generate so many
merge/extract instructions.

NFCI for AMDGPU, Mips and X86.

Differential Revision: https://reviews.llvm.org/D63550

llvm-svn: 364511
2019-06-27 09:15:53 +00:00
Diana Picus c3dbe23977 [GlobalISel] Accept multiple vregs in lowerFormalArgs
Change the interface of CallLowering::lowerFormalArguments to accept
several virtual registers for each formal argument, instead of just one.
This is a follow-up to D46018.

CallLowering::lowerReturn was similarly refactored in D49660. lowerCall
will be refactored in the same way in follow-up patches.

With this change, we forward the virtual registers generated for
aggregates to CallLowering. Therefore, the target can decide itself
whether it wants to handle them as separate pieces or use one big
register. We also copy the pack/unpackRegs helpers to CallLowering to
facilitate this.

ARM and AArch64 have been updated to use the passed in virtual registers
directly, which means we no longer need to generate so many
merge/extract instructions.

AArch64 seems to have had a bug when lowering e.g. [1 x i8*], which was
put into a s64 instead of a p0. Added a test-case which illustrates the
problem more clearly (it crashes without this patch) and fixed the
existing test-case to expect p0.

AMDGPU has been updated to unpack into the virtual registers for
kernels. I think the other code paths fall back for aggregates, so this
should be NFC.

Mips doesn't support aggregates yet, so it's also NFC.

x86 seems to have code for dealing with aggregates, but I couldn't find
the tests for it, so I just added a fallback to DAGISel if we get more
than one virtual register for an argument.

Differential Revision: https://reviews.llvm.org/D63549

llvm-svn: 364510
2019-06-27 08:54:17 +00:00
Diana Picus 69ce1c1319 [GlobalISel] Allow multiple VRegs in ArgInfo. NFC
Allow CallLowering::ArgInfo to contain more than one virtual register.
This is useful when passes split aggregates into several virtual
registers, but need to also provide information about the original type
to the call lowering. Used in follow-up patches.
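
The rough shape of the change (a simplified sketch, not the verbatim
CallLowering::ArgInfo declaration):
```
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/Type.h"

// An argument now carries a list of virtual registers, one per split
// piece of the value, plus the original IR type for context.
struct ArgInfoSketch {
  llvm::SmallVector<unsigned, 4> Regs; // previously a single virtual register
  llvm::Type *Ty;                      // the original (pre-split) IR type
};
```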

Differential Revision: https://reviews.llvm.org/D63548

llvm-svn: 364509
2019-06-27 08:50:53 +00:00
Jay Foad 8479240b0a [AMDGPU] Fix +DumpCode to print an entry label for the first function
Summary:
The +DumpCode attribute is a horrible hack in AMDGPU to embed the
disassembly of the generated code into the elf file. It is used by LLPC
to implement an extension that allows the application to read back the
disassembly of the code.

It tries to print an entry label at the start of every function, but
that didn't work for the first function in the module because
DumpCodeInstEmitter wasn't initialised until EmitFunctionBodyStart,
which is too late.

Change-Id: I790d73ddf4f51fd02ab32529380c7cb7c607c4ee

Reviewers: arsenm, tpr, kzhuravl

Reviewed By: arsenm

Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63712

llvm-svn: 364508
2019-06-27 08:19:28 +00:00
Mikael Holmen 7b81b61368 Silence gcc warning after r364458
Without the fix gcc 7.4.0 complains with

../lib/Target/X86/X86ISelLowering.cpp: In function 'bool getFauxShuffleMask(llvm::SDValue, llvm::SmallVectorImpl<int>&, llvm::SmallVectorImpl<llvm::SDValue>&, llvm::SelectionDAG&)':
../lib/Target/X86/X86ISelLowering.cpp:6690:36: error: enumeral and non-enumeral type in conditional expression [-Werror=extra]
             int Idx = (ZeroMask[j] ? SM_SentinelZero : (i + j + Ofs));
                        ~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
cc1plus: all warnings being treated as errors

llvm-svn: 364507
2019-06-27 08:16:18 +00:00
Craig Topper 9153501f07 [X86] Remove (vzext_movl (scalar_to_vector (load))) matching code from selectScalarSSELoad.
I think this will be turned into a vzext_load during DAG combine.

llvm-svn: 364499
2019-06-27 05:52:00 +00:00
Craig Topper 9ea5a32251 [X86] Teach selectScalarSSELoad to not narrow volatile loads.
llvm-svn: 364498
2019-06-27 05:51:56 +00:00
Kang Zhang 490bc46541 [NFC][PowerPC] Improve the for loop in Early Return
Summary:

In `PPCEarlyReturn.cpp`
```
for (MachineFunction::iterator I = MF.begin(); I != MF.end();) {
  MachineBasicBlock &B = *I++;
  if (processBlock(B))
    Changed = true;
}
```
The above code can be improved to:
```
for (MachineFunction::iterator I = MF.begin(), E = MF.end(); I != E;) {
  MachineBasicBlock &B = *I++;
  Changed |= processBlock(B);
}
```

Reviewed By: hfinkel

Differential Revision: https://reviews.llvm.org/D63800

llvm-svn: 364496
2019-06-27 03:39:09 +00:00
Eli Friedman ab1d73ee32 [ARM] Don't reserve R12 on Thumb1 as an emergency spill slot.
The current implementation of ThumbRegisterInfo::saveScavengerRegister
is bad for two reasons: one, it's buggy, and two, it blocks using R12
for other optimizations.  So this patch gets rid of it, and adds the
necessary support for using an ordinary emergency spill slot on Thumb1.

(Specifically, I think saveScavengerRegister was broken by r305625, and
nobody noticed for two years because the codepath is almost never used.
The new code will also probably not be used much, but it now has better
tests, and if we fail to emit a necessary emergency spill slot we get a
reasonable error message instead of a miscompile.)

A rough outline of the changes in the patch:

1. Gets rid of ThumbRegisterInfo::saveScavengerRegister.
2. Modifies ARMFrameLowering::determineCalleeSaves to allocate an
emergency spill slot for Thumb1.
3. Implements useFPForScavengingIndex, so the emergency spill slot isn't
placed at a negative offset from FP on Thumb1.
4. Modifies the heuristics for allocating an emergency spill slot to
support Thumb1.  This includes fixing ExtraCSSpill so we don't try to
use "lr" as a substitute for allocating an emergency spill slot.
5. Allocates a base pointer in more cases, so the emergency spill slot
is always accessible.
6. Modifies ARMFrameLowering::ResolveFrameIndexReference to compute the
right offset in the new cases where we're forcing a base pointer.
7. Ensures we never generate a load or store with an offset outside of
its frame object.  This makes the heuristics more straightforward.
8. Changes Thumb1 prologue and epilogue emission so it never uses
register scavenging.

Some of the changes to the emergency spill slot heuristics in
determineCalleeSaves affect ARM/Thumb2; hopefully, they should allow
the compiler to avoid allocating an emergency spill slot in cases
where it isn't necessary. The rest of the changes should only affect
Thumb1.

Differential Revision: https://reviews.llvm.org/D63677

llvm-svn: 364490
2019-06-26 23:46:51 +00:00
Matt Arsenault c0cad98363 AMDGPU: Assert SPAdj is 0
llvm-svn: 364473
2019-06-26 20:56:18 +00:00
Matt Arsenault 6a87e0fc6a [AMDGPU] Fix Livereg computation during epilogue insertion
The LivePhysRegs calculated in order to find a scratch register in the
epilogue code wrongly used the 'LiveIns' set. Instead, it should use the
'LiveOuts' set. For the liveness, we also consider the operands of the
terminator (return) instruction, which is the insertion point for the
scratch-exec-copy instruction.

Patch by Christudasan Devadasan

llvm-svn: 364470
2019-06-26 20:35:18 +00:00
Craig Topper 3d12971e1c [X86] Rework the logic in LowerBuildVectorv16i8 to make better use of any_extend and break false dependencies. Other improvements
This patch rewrites the loop iteration to only visit every other element,
starting with element 0, and we work on the "even" element and the "next"
element at the same time. The "First" logic has been moved to the bottom of
the loop and doesn't run on every element. I believe it could create dangling
nodes previously, since we didn't check if we were going to use
SCALAR_TO_VECTOR for the first insertion. I got rid of the "First" variable
and just do a null check on V, which should be equivalent. We also no longer
use undef as the starting V for vectors with no zeroes, to avoid false
dependencies. This matches v8i16.

I've changed all the extends and OR operations to use MVT::i32, since that's
what they'll be promoted to anyway. I've tried to use zero_extend only when
necessary and any_extend otherwise. This resulted in some improvements in
tests where we are now able to promote aligned (i32 (extload i8)) to a
32-bit load.
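
An illustration of the pairing step (standalone editor's sketch, not the
DAG code; `pairBytes` is a made-up name):
```
#include <cstdint>

// The even byte and the following odd byte are combined in i32 (the type
// i8/i16 operations are promoted to anyway) with one extend, one shift
// and one OR per pair.
uint32_t pairBytes(uint8_t Even, uint8_t Odd) {
  return (uint32_t)Even | ((uint32_t)Odd << 8);
}
```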

Differential Revision: https://reviews.llvm.org/D63702

llvm-svn: 364469
2019-06-26 20:16:19 +00:00
Craig Topper afa58b6ba1 [X86] Remove isTypePromotionOfi1ZeroUpBits and its helpers.
This was trying to optimize concat_vectors with zero of setcc or
kand instructions. But I think it produced the same code we
produce for a concat_vectors with 0 even if it doesn't come from
one of those operations.

llvm-svn: 364463
2019-06-26 19:45:48 +00:00
Simon Pilgrim dfe079ffbf [X86][SSE] getFauxShuffleMask - handle OR(x,y) where x and y have no overlapping bits
Create a per-byte shuffle mask based on the computeKnownBits of each operand:
if for each byte we have a known zero in one operand (or both), then it can be
safely blended.
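
A scalar illustration of why disjoint known bits make OR act as a blend
(editor's sketch):
```
#include <cassert>
#include <cstdint>

int main() {
  // If for every byte at least one operand is known zero, OR picks each
  // byte wholly from one operand, i.e. it acts as a per-byte blend that
  // a shuffle mask can express.
  uint32_t X = 0x12345678 & 0x00FF00FF; // only bytes 0 and 2 may be nonzero
  uint32_t Y = 0x9ABCDEF0 & 0xFF00FF00; // only bytes 1 and 3 may be nonzero
  assert((X | Y) == 0x9A34DE78);        // a byte-wise select of X and Y
}
```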

Fixes PR41545

llvm-svn: 364458
2019-06-26 18:21:26 +00:00
Ryan Taylor 9ab812d475 [AMDGPU] Fix for branch offset hardware workaround
Summary:
This fixes a hardware bug that makes a branch offset of 0x3f unsafe.
It replaces the 32-bit branch with offset 0x3f with a 64-bit
instruction that includes the same 32-bit branch and the encoding
for an s_nop 0 to follow. The relaxer then modifies the offsets
accordingly.

Change-Id: I10b7aed99d651f8159401b01bb421f105fa6288e

Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63494

llvm-svn: 364451
2019-06-26 17:34:57 +00:00
Ulrich Weigand 4c86dd9032 Allow matching extend-from-memory with strict FP nodes
This implements a small enhancement to https://reviews.llvm.org/D55506

Specifically, while we were able to match strict FP nodes for
floating-point extend operations with a register as source, this
did not work for operations with memory as source.

That is because, for regular operations, this is represented as
a combined "extload" node (which is a variant of a load SD node),
but there is no equivalent using a strict FP operation.

However, it turns out that even in the absence of an extload
node, we can still just match the operations explicitly, e.g.
   (strict_fpextend (f32 (load node:$ptr)))

This patch implements that method to match the LDEB/LXEB/LXDB
SystemZ instructions even when the extend uses a strict-FP node.

llvm-svn: 364450
2019-06-26 17:19:12 +00:00
Thomas Lively 7663e0cd7d [WebAssembly] Omit wrap on i64x2.{shl,shr*} ISel when possible
Summary:
Since the WebAssembly SIMD shift instructions take i32 operands, we
truncate the i64 operand of <2 x i64> shifts during ISel. When the i64
operand is sign-extended from i32, this CL makes it so that the sign
extension is dropped instead of a wrap instruction being added.
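
A scalar illustration of why the sign extension is droppable (editor's
sketch, not the ISel code):
```
#include <cstdint>

// A 64-bit shift only consumes the low 6 bits of its amount, and sign
// extending an i32 amount to i64 never changes bits 0..31, so the
// extension cannot affect the result.
uint64_t shiftLeftModel(uint64_t V, int32_t Amt32) {
  int64_t Extended = (int64_t)Amt32; // the extend the combine removes
  return V << (Extended & 63);       // identical to using (Amt32 & 63)
}
```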

Reviewers: dschuff, aheejin

Subscribers: sbc100, jgravelle-google, hiraditya, sunfish, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63615

llvm-svn: 364446
2019-06-26 16:19:59 +00:00
Thomas Lively a1d97a960e [WebAssembly] Implement tail calls and unify tablegen call classes
Summary:
Implements direct and indirect tail calls enabled by the 'tail-call'
feature in both DAG ISel and FastISel. Updates existing call tests and
adds new tests including a binary encoding test.

Reviewers: aheejin

Subscribers: dschuff, sbc100, jgravelle-google, hiraditya, sunfish, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D62877

llvm-svn: 364445
2019-06-26 16:17:15 +00:00
Simon Pilgrim 435ee9fb1f [X86][SSE] X86TargetLowering::isCommutativeBinOp - add PMULDQ
Allows narrowInsertExtractVectorBinOp to reduce vector size instead of the more restricted SimplifyDemandedVectorEltsForTargetNode

llvm-svn: 364434
2019-06-26 14:58:11 +00:00
Simon Pilgrim 6b687bf681 [X86][SSE] X86TargetLowering::isCommutativeBinOp - add PCMPEQ
Allows narrowInsertExtractVectorBinOp to reduce vector size

llvm-svn: 364432
2019-06-26 14:40:49 +00:00
Simon Pilgrim b13c6f1a9d [X86][SSE] X86TargetLowering::isBinOp - add PCMPGT
Allows narrowInsertExtractVectorBinOp to reduce vector size

llvm-svn: 364431
2019-06-26 14:34:41 +00:00
Simon Pilgrim 24f96a0eee [X86] shouldScalarizeBinop - never scalarize target opcodes.
We have (almost) no target opcodes that have scalar/vector equivalents - for now assume we can't scalarize them (we can add exceptions if we need to).

llvm-svn: 364429
2019-06-26 14:21:29 +00:00
Matt Arsenault 5f798f1346 AMDGPU: Fix unused variable
llvm-svn: 364426
2019-06-26 13:48:04 +00:00
Matt Arsenault e0b8443460 AMDGPU: Check MRI for callee saved regs instead of TRI
This should be the same, but MRI does allow dynamically changing the CSR
set, although that is currently not used.

llvm-svn: 364425
2019-06-26 13:39:29 +00:00
Roman Lebedev 13889145f0 [X86][Codegen] X86DAGToDAGISel::matchBitExtract(): consistently capture lambdas by value
llvm-svn: 364420
2019-06-26 12:19:52 +00:00
Roman Lebedev fbb2e40d5c [X86] X86DAGToDAGISel::matchBitExtract(): pattern c: truncation awareness
Summary:
The one thing of note here is that the 'bitwidth' constant (32/64) was previously pessimistic.
Given `x & (-1 >> (C - z))`, we were taking `C` to be `bitwidth(x)`, but in reality
we want the `(-1 >> (C - z))` pattern to mean "low z bits must be all-ones".
And for that, `C` should be `bitwidth(-1 >> (C - z))`, i.e. of the shift operation itself.
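
A concrete rendering of the corrected reading (standalone editor's sketch;
`keepLowBits` is a hypothetical name):
```
#include <cassert>
#include <cstdint>

// For x & (-1 >> (C - z)) to mean "keep the low z bits", C must be the
// bit width of the shifted -1 (here 32), not the bit width of x.
uint32_t keepLowBits(uint32_t X, unsigned Z) {
  assert(Z >= 1 && Z <= 32);
  return X & (UINT32_MAX >> (32 - Z));
}

int main() { assert(keepLowBits(0xFFFF0000u, 20) == 0x000F0000u); }
```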

The last pattern, D, does not seem to exhibit any of these truncation issues,
although it has the opposite problem: if we extract low bits (no shift) from i64
and then truncate to i32, we fail to shrink this 64-bit extraction into a
32-bit extraction.

Reviewers: RKSimon, craig.topper, spatel

Reviewed By: RKSimon

Subscribers: llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D62806

llvm-svn: 364419
2019-06-26 12:19:47 +00:00