Commit Graph

19933 Commits

Diana Picus a3a0cccb2c [ARM] GlobalISel: Add support for G_SUB
Support G_SUB throughout the GlobalISel pipeline. It is exactly the same
as G_ADD, nothing fancy.

llvm-svn: 300546
2017-04-18 12:35:28 +00:00
Kristof Beyls a4e79cca77 Revert "[GlobalISel] Support vector-of-pointers in LLT"
This reverts r300535 and r300537.
The newly added tests in test/CodeGen/AArch64/GlobalISel/arm64-fallback.ll
produce slightly different output when LLVM is built with different compilers.
E.g., depending on the compiler LLVM is built with, either one of the following
can be produced:

remark: <unknown>:0:0: unable to legalize instruction: %vreg0<def>(p0) = G_EXTRACT_VECTOR_ELT %vreg1, %vreg2; (in function: vector_of_pointers_extractelement)
remark: <unknown>:0:0: unable to legalize instruction: %vreg2<def>(p0) = G_EXTRACT_VECTOR_ELT %vreg1, %vreg0; (in function: vector_of_pointers_extractelement)

Non-determinism like this is clearly a bad thing, so reverting this until
I can find and fix the root cause of the non-determinism.

llvm-svn: 300538
2017-04-18 09:26:36 +00:00
Diana Picus e2626bb7c2 [ARM] Check for correct HW div when lowering divmod
For subtargets that use the custom lowering for divmod, e.g. gnueabi,
we used to check if the subtarget has hardware divide and then lower to
a div-mul-sub sequence if true, or to a libcall if false.

However, judging by the usage of hasDivide vs hasDivideInARMMode, it
seems that hasDivide only refers to Thumb. For instance, in the
ARMTargetLowering constructor, the code that specifies whether to use
libcalls for (S|U)DIV looks like this:

bool hasDivide = Subtarget->isThumb() ? Subtarget->hasDivide()
                                      : Subtarget->hasDivideInARMMode();

In the case of divmod for arm-gnueabi, using only hasDivide() to
determine what to do means that instead of lowering to __aeabi_idivmod
to get the remainder, we lower to div-mul-sub and then further lower the
div to __aeabi_idiv. Even worse, if we have hardware divide in ARM but
not in Thumb, we generate a libcall instead of using it (this is not an
issue in practice since AFAICT none of the cores that we support have
hardware divide in ARM but not Thumb).

This patch fixes the code dealing with custom lowering to take into
account the mode (Thumb or ARM) when deciding whether or not hardware
division is available.
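
A hypothetical example (not from the patch): on an arm-gnueabi target
without the relevant hardware divide, the remainder below should now be
computed via the __aeabi_idivmod libcall instead of a div-mul-sub
sequence that then calls __aeabi_idiv.

int rem(int a, int b)
{
  return a % b;   // lowered through the divmod libcall on such subtargets
}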

Differential Revision: https://reviews.llvm.org/D32005

llvm-svn: 300536
2017-04-18 08:32:27 +00:00
Kristof Beyls fb73eb0324 [GlobalISel] Support vector-of-pointers in LLT
This fixes PR32471.

As comment 10 on that bug report highlights
(https://bugs.llvm.org//show_bug.cgi?id=32471#c10), there are quite a
few different defensible design tradeoffs that could be made, including
not representing pointers at all in LLT.

I decided to go for representing vector-of-pointer as a concept in LLT,
while keeping the size of the LLT type 64 bits (this is an increase from
48 bits before). My rationale for keeping pointers explicit is that on
some targets it's probably very handy to have the distinction between
pointer and non-pointer (e.g. 68K has a different register bank for
pointers IIRC). If we keep a scalar pointer, it is probably easiest to
also have a vector-of-pointers to keep LLT relatively conceptually clean
and orthogonal, while we don't have a very strong reason to break that
orthogonality. Once we gain more experience on the use of LLT, we can
of course reconsider this direction.
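
As a toy illustration only (this is not the actual LLT encoding, just a
sketch of why 64 bits give enough room once vector-of-pointers is a
first-class kind): a single 64-bit descriptor can hold a kind tag,
address space, element count and scalar size side by side.

#include <cstdint>

// Hypothetical packed type descriptor; field names and widths are made up.
struct ToyLLT {
  uint64_t Kind : 2;              // scalar, pointer, vector, vector-of-pointers
  uint64_t AddressSpace : 22;     // only meaningful for pointer kinds
  uint64_t NumElements : 16;      // only meaningful for vector kinds
  uint64_t ScalarSizeInBits : 24; // 2 + 22 + 16 + 24 = 64 bits total
};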

Rejecting vector-of-pointer types in the IRTranslator is also an option
to avoid the crash reported in PR32471, but that is only a very
short-term solution; it also needs quite a few code tweaks in places
and is probably fragile. Therefore I didn't consider this the best
option.

llvm-svn: 300535
2017-04-18 08:12:45 +00:00
Haicheng Wu 68f82a31d3 Change the testcase tail-merge-after-mbp.ll to tail-merge-after-mbp.mir
Differential Revision: https://reviews.llvm.org/D32037

llvm-svn: 300506
2017-04-17 22:22:38 +00:00
Jacob Gravelle 0bb7541233 [WebAssembly] Fix WebAssemblyOptimizeReturned after r300367
Summary:
Refactoring changed paramHasAttr(1 + i) to paramHasAttr(0); fix that to
paramHasAttr(i).
Add more tests to WebAssemblyOptimizeReturned that catch that
regression.

Reviewers: dschuff

Subscribers: jfb, sbc100, llvm-commits

Differential Revision: https://reviews.llvm.org/D32136

llvm-svn: 300502
2017-04-17 21:40:28 +00:00
Matt Arsenault a3566f2149 AMDGPU: Use MachineRegisterInfo to find max used register
Avoid looping through the program to determine register counts.
This avoids needing to look at regmask operands.

Also fixes some counting errors with flat_scr when there
are no stack objects.

llvm-svn: 300482
2017-04-17 19:48:30 +00:00
Tim Northover 46e36f0953 AArch64: put nonlazybind special handling behind a flag for now.
It's basically a terrible idea anyway but objc_msgSend gets emitted like that.
We can decide on a better way to deal with it in the unlikely event that anyone
actually uses it.

llvm-svn: 300474
2017-04-17 18:18:47 +00:00
Tim Northover 879a0b2e1b AArch64: support nonlazybind
It's almost certainly not a good idea to actually use it in most cases (there's
a pretty large code size overhead on AArch64), but we can't do those
experiments until it's supported.

llvm-svn: 300462
2017-04-17 17:27:56 +00:00
Benjamin Kramer f5f593b674 [X86] Remove special handling for 16 bit for A asm constraints.
Our 16 bit support is assembler-only + the terrible hack that is
.code16gcc. Simply using 32 bit registers does the right thing for the
latter.

Fixes PR32681.

llvm-svn: 300429
2017-04-16 20:13:08 +00:00
Dimitry Andric 909b3376ba Use correct registers for "A" inline asm constraint
Summary:
In PR32594, inline assembly using the 'A' constraint on x86_64 causes
llvm to crash with a "Cannot select" stack trace.  This is because
`X86TargetLowering::getRegForInlineAsmConstraint` hardcodes that 'A'
means the EAX and EDX registers.

However, on x86_64 it means the RAX and RDX registers, and on 16-bit x86
(ia16?) it means the old AX and DX registers.

Add new register classes in `X86RegisterInfo.td` to support these cases,
and amend the logic in `getRegForInlineAsmConstraint` to cope with
different subtargets.  Also add a test case, derived from PR32594.
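
For context, a classic (32-bit only) use of the "A" constraint, which binds
a 64-bit value to the EDX:EAX pair; this snippet is illustrative and is not
the test case added by the patch:

// Valid on 32-bit x86 only: "=A" places the result in EDX:EAX.
unsigned long long read_tsc(void) {
  unsigned long long tsc;
  __asm__ volatile("rdtsc" : "=A"(tsc));
  return tsc;
}

On x86_64 the same constraint names RAX and RDX instead, which is what
getRegForInlineAsmConstraint now has to account for.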

Reviewers: craig.topper, qcolombet, RKSimon, ab

Reviewed By: ab

Subscribers: ab, emaste, royger, llvm-commits

Differential Revision: https://reviews.llvm.org/D31902

llvm-svn: 300404
2017-04-15 22:15:01 +00:00
Stanislav Mekhanoshin eff0bc7839 [AMDGPU] set read_only access qualifier for pointers
If a kernel's pointer argument is known to be readonly,
set the access qualifier accordingly. This allows the runtime not to
flush caches before dispatches.

Differential Revision: https://reviews.llvm.org/D32091

llvm-svn: 300362
2017-04-14 19:11:40 +00:00
Simon Pilgrim 5a22eaa2bf [X86][SSE] Update MOVNTDQA non-temporal loads to generic implementation (LLVM)
MOVNTDQA non-temporal aligned vector loads can be correctly represented using generic builtin loads, allowing us to remove the existing x86 intrinsics.

Clang companion patch: D31766.

Differential Revision: https://reviews.llvm.org/D31767

llvm-svn: 300325
2017-04-14 15:05:35 +00:00
Andrew V. Tischenko 4e7bcd5216 Fix for PR#30562: Selection DAG error: Detected cycle in SelectionDAG.
Patch by Dinar Temirbulatov

llvm-svn: 300314
2017-04-14 09:17:09 +00:00
Andrew V. Tischenko 75745d0c3e This patch closes PR#32216: Better testing of schedule model instruction latencies/throughputs.
The details are here: https://reviews.llvm.org/D30941

llvm-svn: 300311
2017-04-14 07:44:23 +00:00
Stanislav Mekhanoshin 86b0a5465b [AMDGPU] added SIInstrInfo::getAddNoCarry() helper
Addressed rest of post submit comments from D31993.

Differential Revision: https://reviews.llvm.org/D32057

llvm-svn: 300288
2017-04-14 00:33:44 +00:00
Adam Nemet c5779460f4 [AArch64] Avoid partial register writes on lane 0 of BUILD_VECTOR for i8/i16/f16
This further improves Ahmed's change in rL299482.  See the new comment for the
rationale.

The patch recovers most of the regression for bzip2 after D31965. We're down
to +2.68% from +6.97%.

Differential Revision: https://reviews.llvm.org/D32028

llvm-svn: 300276
2017-04-13 23:32:47 +00:00
Konstantin Zhuravlyov d24aeb20fc AMDGPU/GFX9: Do not use v_pack_b32_f16 when packing
Differential Revision: https://reviews.llvm.org/D31819

llvm-svn: 300275
2017-04-13 23:17:00 +00:00
Alexei Starovoitov 56db145164 [bpf] Fix memory offset check for loads and stores
If the offset cannot fit into the instruction, an addition to the
pointer is emitted before the actual access. However, BPF offsets are
16-bit, but LLVM considers them, for the purposes of this check,
to be 32 bits long.

This causes the following program:

int bpf_prog1(void *ign)
{
  volatile unsigned long t = 0x8983984739ull;
  return *(unsigned long *)((0xffffffff8fff0002ull) + t);
}

To generate the following (wrong) code:

0: 18 01 00 00 39 47 98 83 00 00 00 00 89 00 00 00
   r1 = 590618314553ll
2: 7b 1a f8 ff 00 00 00 00 *(u64 *)(r10 - 8) = r1
3: 79 a1 f8 ff 00 00 00 00 r1 = *(u64 *)(r10 - 8)
4: 79 10 02 00 00 00 00 00 r0 = *(u64 *)(r1 + 2)
5: 95 00 00 00 00 00 00 00 exit

Fix it by changing the offset check to 16-bit.
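
A minimal sketch of the kind of range check described (whether the patch
uses this exact helper is an assumption on my part):

#include <cstdint>
#include "llvm/Support/MathExtras.h"

// BPF load/store instructions only have a signed 16-bit offset field, so
// anything outside that range must be added to the pointer up front.
static bool offsetFitsBPFImm(int64_t Offset) {
  return llvm::isInt<16>(Offset); // the old check effectively allowed 32 bits
}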

Patch by Nadav Amit <nadav.amit@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Differential Revision: https://reviews.llvm.org/D32055

llvm-svn: 300269
2017-04-13 22:24:13 +00:00
Stanislav Mekhanoshin d026f79bd3 [AMDGPU] Combine DS operations with offsets bigger than byte
In many cases DS operations can be combined even if their offsets do not
fit into the 8-bit encoding. What it takes is adjusting the base address.

Differential Revision: https://reviews.llvm.org/D31993

llvm-svn: 300227
2017-04-13 17:53:07 +00:00
Krzysztof Parzyszek c87c48019c [Hexagon] Unxfail passing tests
r300198 fixed a problem that caused two tests to be xfailed. Unxfail
these tests now, since they are passing.

llvm-svn: 300203
2017-04-13 16:05:35 +00:00
Wei Ding 74da350b85 AMDGPU : Fix common dominator of two incoming blocks terminates with uniform branch issue.
Differential Revision: http://reviews.llvm.org/D31350

llvm-svn: 300142
2017-04-12 23:51:47 +00:00
Matt Arsenault 0d0d6c2f25 AMDGPU: Fix invalid copies when copying i1 to phys reg
Insert a VReg_1 virtual register so the i1 workaround pass
can handle it.

llvm-svn: 300113
2017-04-12 21:58:23 +00:00
Stanislav Mekhanoshin c90347d760 [AMDGPU] Generate range metadata for workitem id
If the workgroup size is known, inform LLVM about the range returned by
the local id and local size queries.

Differential Revision: https://reviews.llvm.org/D31804

llvm-svn: 300102
2017-04-12 20:48:56 +00:00
Igor Breger 3b97ea39e7 [GlobalIsel][X86] support G_CONSTANT selection.
Summary: [GlobalISel][X86] support G_CONSTANT selection. Add regbank select tests.

Reviewers: zvi, guyblank

Reviewed By: guyblank

Subscribers: llvm-commits, dberris, rovka, kristof.beyls

Differential Revision: https://reviews.llvm.org/D31974

llvm-svn: 300057
2017-04-12 12:54:54 +00:00
Sam Kolton aff8341da2 [AMDGPU] SDWA: make pass global
Summary: Remove checks for basic blocks.

Reviewers: vpykhtin, rampitec, arsenm

Subscribers: kzhuravl, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye

Differential Revision: https://reviews.llvm.org/D31935

llvm-svn: 300040
2017-04-12 09:36:05 +00:00
Kannan Narayanan acb089e12a [AMDGPU] Add a new pass to insert waitcnts. Leave under an option for testing.
Based on comments in https://reviews.llvm.org/D31161.

llvm-svn: 300023
2017-04-12 03:25:12 +00:00
Matt Arsenault 9ac40026dd AMDGPU: Insert wait at start of callee functions
llvm-svn: 300000
2017-04-11 22:29:31 +00:00
Matt Arsenault efa9f4b210 AMDGPU: Refactor SIMachineFunctionInfo slightly
Prepare for handling non-entry functions.

llvm-svn: 299999
2017-04-11 22:29:28 +00:00
Matt Arsenault e622dc3803 AMDGPU: Refactor argument lowering
Split into smaller functions and prepare for handling
non-entry functions.

llvm-svn: 299998
2017-04-11 22:29:24 +00:00
Matt Arsenault fe78ffba92 AMDGPU: Fix folding reg_sequence into copy to phys reg
This was producing an illegal reg_sequence defining
a physical register with virtual register inputs.

llvm-svn: 299997
2017-04-11 22:29:19 +00:00
Zvi Rackover f720c036f4 [DAGCombine] Add more test cases for shuffle of splat. NFC.
Tests added contain splat-masks with undef elements.

llvm-svn: 299988
2017-04-11 21:16:59 +00:00
Easwaran Raman ddb9ae192a [x86] Relax the check in areLoadsFromSameBasePtr
Check if the scale operand is identical (doesn't have to be 1) and
do not check the chain operand.

Differential revision: https://reviews.llvm.org/D31833

llvm-svn: 299986
2017-04-11 21:05:02 +00:00
Justin Bogner 20dd36a48a MIR: Allow parsing of empty machine functions
If you run llc -stop-after=codegenprepare and feed the resulting MIR
to llc -start-after=codegenprepare, you'll have an empty machine
function since we haven't run any isel yet. Of course, this only works
if the MIRParser believes you that this is okay.

This is essentially a revert of r241862 with a fix for the problem it
was papering over.

llvm-svn: 299975
2017-04-11 19:32:41 +00:00
Davide Italiano 8455f7d623 [X86] Create the correct ADC/SBB SDNode when lowering add.
Differential Revision:  https://reviews.llvm.org/D31911

llvm-svn: 299973
2017-04-11 19:11:20 +00:00
Yaxun Liu e95df719e1 [AMDGPU] Add A5 to data layout for amdgiz environment
Differential Revision: https://reviews.llvm.org/D31589

llvm-svn: 299964
2017-04-11 17:18:13 +00:00
Diana Picus 1314a2889c GlobalISel: Allow legalizing G_FADD to a libcall
Use the same handling in the generic legalizer code as for the other
libcalls (G_FREM, G_FPOW).

Enable it on ARM for float and double so we can test it.

llvm-svn: 299931
2017-04-11 10:52:34 +00:00
Volkan Keles 64ad85f8ba [GlobalISel] LegalizerInfo: Enable legalization of non-power-of-2 types
Summary: Legalize only if the type is marked as Legal or Custom. If not, return Unsupported as LegalizerHelper is not able to handle non-power-of-2 types right now.

Reviewers: qcolombet, aditya_nandakumar, dsanders, t.p.northover, kristof.beyls, javed.absar, ab

Reviewed By: kristof.beyls, ab

Subscribers: dberris, rovka, igorb, llvm-commits

Differential Revision: https://reviews.llvm.org/D31711

llvm-svn: 299929
2017-04-11 10:10:14 +00:00
Sam Parker 4fc5f3c02e [SelectionDAG] Check CALLSEQ_BEGIN nodes in DelayForLiveRegs
A fix for the bug reported in PR30911.

The issue arises when multiple CALLSEQ_BEGIN nodes are unscheduled as
the last node to be unscheduled will gain access to the CallResource
register. But when a node is being picked, only CALLSEQ_END nodes are
checked against the CallResource and have their chains evaluated.
This then means that other CALLSEQ_BEGIN nodes can be scheduled
before the existing call sequence has been finalised. This patch adds
a check against the FrameSetup nodes in DelayForLiveRegs to prevent
this from happening.

Differential Revision: https://reviews.llvm.org/D31536

llvm-svn: 299926
2017-04-11 08:43:32 +00:00
Hal Finkel cef9e52736 [PowerPC] multiply-with-overflow might use the CTR register
Check the legality of ISD::[US]MULO to see whether
Intrinsic::[us]mul_with_overflow will legalize into a function call (and, thus,
will use the CTR register).  Fixes PR32485.
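
For reference, a source-level construct that produces such an intrinsic
(illustrative, not taken from the bug report): the overflow-checked multiply
builtin lowers to llvm.umul.with.overflow, which may in turn become a
libcall where ISD::UMULO is not legal.

// Returns true if a * b overflows; the product is written to *out.
bool mul_overflows(unsigned long long a, unsigned long long b,
                   unsigned long long *out) {
  return __builtin_mul_overflow(a, b, out);
}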

Patch by Tim Neumann!

Differential Revision: https://reviews.llvm.org/D31790

llvm-svn: 299910
2017-04-11 02:03:17 +00:00
Sanjay Patel 8f2001164a [ARM, x86] add tests to show possible improvement for bool math; NFC
llvm-svn: 299897
2017-04-10 23:26:31 +00:00
Kyle Butt 7e8be28661 CodeGen: BlockPlacement: Don't always tail-duplicate with no other successor.
The math works out such that it can actually be counter-productive. The probability
calculations correctly handle the case where the alternative has 0 probability,
so rely on those calculations.

Includes a test case that demonstrates the problem.

llvm-svn: 299892
2017-04-10 22:28:22 +00:00
Kyle Butt ee51a20164 CodeGen: BlockPlacement: Minor probability changes.
Qin may be large, and Succ may be more frequent than BB. Take these both into
account when deciding if tail-duplication is profitable.

llvm-svn: 299891
2017-04-10 22:28:18 +00:00
Kyle Butt a12bd756e4 CodeGen: BranchFolding: Merge identical blocks, even if they are short.
Merge identical blocks when doing so doesn't reduce fallthrough. It is common for
the blocks created from critical edge splitting to be identical. We would like
to merge these blocks whenever doing so would not reduce fallthrough.

llvm-svn: 299890
2017-04-10 22:28:12 +00:00
Matt Arsenault f10061ec70 Add address space mangling to lifetime intrinsics
In preparation for allowing allocas to have non-0 addrspace.

llvm-svn: 299876
2017-04-10 20:18:21 +00:00
Simon Pilgrim b6702eaec3 [X86][MMX] Add fast-isel support for MMX non-temporal writes
Differential Revision: https://reviews.llvm.org/D31754

llvm-svn: 299852
2017-04-10 16:58:07 +00:00
Diana Picus 3ff82c8cb7 [ARM] GlobalISel: Support G_FPOW for float and double
Legalize to a libcall.

llvm-svn: 299841
2017-04-10 09:27:39 +00:00
Matt Arsenault dd8fd9dcfd AMDGPU: Actually write nops for writeNopData
Before, this just wrote 0s, which ends up looking like a
v_cndmask_b32 v0, s0, v0, vcc. Write out an encoded s_nop instead.

llvm-svn: 299816
2017-04-08 21:28:38 +00:00
Eli Friedman 75631c97ba [ARM] Prefer BIC over BFC in ARM mode.
BIC is generally faster, and it can put the output in a different
register from the input.

We already do this in Thumb2 mode; not sure why the equivalent fix
never got applied to ARM mode.

Differential Revision: https://reviews.llvm.org/D31797

llvm-svn: 299803
2017-04-07 22:01:23 +00:00
Aditya Nandakumar eb80a51b52 [GlobalISel]: Fix bug where we can report GISelFailure on erased instructions
The original instruction might get legalized and erased and expanded
into intermediate instructions and the intermediate instructions might
fail legalization. This ends up reporting GISelFailure on the erased
instruction.
Instead report GISelFailure on the intermediate instruction which failed
legalization.

Reviewed by: ab

llvm-svn: 299802
2017-04-07 21:49:30 +00:00
Petr Hosek c3a9e6db38 [AArch64] Allow global register asm("x18") or asm("w18") under -ffixed-x18
When using -ffixed-x18, the x18 (or w18) register can safely be used
with the "global register variable" GCC extension, but the backend
fails to recognize it.

Patch by Roland McGrath.

Differential Revision: https://reviews.llvm.org/D31793

llvm-svn: 299799
2017-04-07 20:41:58 +00:00
Simon Dardis f7e4388e3b Revert "[SelectionDAG] Enable target specific vector scalarization of calls and returns"
This reverts commit r299766. This change appears to have broken the MIPS
buildbots. Reverting while I investigate.

Revert "[mips] Remove usage of debug only variable (NFC)"

This reverts commit r299769. Follow up commit.

llvm-svn: 299788
2017-04-07 17:25:05 +00:00
Stanislav Mekhanoshin 478b81982f [AMDGPU] Unroll more to eliminate phis and conditions
Increase the threshold to unroll a loop which contains an "if" statement
whose condition is defined by a PHI belonging to the loop. This may help
to eliminate the if region and potentially even the PHI itself, saving on
both divergence and registers used for the PHI.

Add a small bonus for each such "if" statement.
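
A made-up example of the shape being targeted (illustrative only): the
branch condition is carried by a loop PHI, so once the loop is unrolled
both the branch and the PHI can be eliminated.

int sum_alternate(const int *x) {
  int acc = 0;
  bool take = false;            // becomes a PHI of the loop
  for (int i = 0; i < 8; ++i) { // small, known trip count
    if (take)                   // "if" whose condition is defined by that PHI
      acc += x[i];
    take = !take;
  }
  return acc;
}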

Differential Revision: https://reviews.llvm.org/D31693

llvm-svn: 299779
2017-04-07 16:26:28 +00:00
Dehao Chen 58fa724494 Use PMADDWD to expand reduction in a loop
Summary:
PMADDWD can help improve 8/16-bit integer multiply-add operation performance for cases like:

for (int i = 0; i < count; i++)
  a += x[i] * y[i];
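
A slightly more complete version of that loop (an illustrative sketch, not
code from the patch): 16-bit inputs accumulated into 32 bits is the shape
that maps onto PMADDWD.

int dot16(const short *x, const short *y, int count) {
  int a = 0;
  for (int i = 0; i < count; i++)
    a += x[i] * y[i];   // 16-bit multiplies, accumulated into a 32-bit sum
  return a;
}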

Reviewers: wmi, davidxl, hfinkel, RKSimon, zvi, mkuper

Reviewed By: mkuper

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D31679

llvm-svn: 299776
2017-04-07 15:41:52 +00:00
Igor Breger 2953788c36 [GlobalISel] implement narrowing for G_CONSTANT.
Summary: [GlobalISel] implement narrowing for G_CONSTANT.

Reviewers: bogner, zvi, t.p.northover

Reviewed By: t.p.northover

Subscribers: llvm-commits, dberris, rovka, kristof.beyls

Differential Revision: https://reviews.llvm.org/D31744

llvm-svn: 299772
2017-04-07 14:41:59 +00:00
Petar Jovanovic bc54eb89ad [mips][msa] Fix generation of bm(n)zi and bins[lr]i instructions
We have two cases here, the first one being the following instruction
selection from the builtin function:
bm(n)zi builtin -> vselect node -> bins[lr]i machine instruction

In case bm(n)zi has an immediate which has either its high or low bits
set, a bins[lr]i instruction can be selected through the selectVSplatMask[LR]
function. The function counts the number of bits set, and that value is
passed to the bins[lr]i instruction as its immediate, which in turn
copies the immediate modulo the size of the element in bits plus 1 as per the
spec, which is where we get the off-by-one error.

The other case is:
bins[lr]i -> vselect node -> bsel.v

In this case, a bsel.v instruction gets selected with a mask having one bit
less set than required.

Patch by Stefan Maksimovic.

Differential Revision: https://reviews.llvm.org/D30579

llvm-svn: 299768
2017-04-07 13:31:36 +00:00
Simon Dardis 6470ff0b24 [SelectionDAG] Enable target specific vector scalarization of calls and returns
By turning getRegisterType, getNumRegisters, and getVectorBreakdown into
target hooks, backends can request that LLVM scalarize vector types for calls
and returns.

The MIPS vector ABI requires that vector arguments and returns are passed in
integer registers. With SelectionDAG's new hooks, the MIPS backend can now
handle LLVM-IR with vector types in calls and returns. E.g.
'call @foo(<4 x i32> %4)'.

Previously these cases would be scalarized for the MIPS O32/N32/N64 ABI for
calls and returns if vector types were not legal. If vector types were legal,
a single 128-bit vector argument would be assigned to a single 32-bit / 64-bit
integer register.

By teaching the MIPS backend to inspect the original types, it can now
implement the MIPS vector ABI which requires a particular method of
scalarizing vectors.

Previously, the MIPS backend relied on clang to scalarize types such as "call
@foo(<4 x float> %a)" into "call @foo(i32 inreg %1, i32 inreg %2, i32 inreg %3,
i32 inreg %4)".

This patch enables the MIPS backend to take either form for vector types.

Reviewers: zoran.jovanovic, jaydeep, vkalintiris, slthakur

Differential Revision: https://reviews.llvm.org/D27845

llvm-svn: 299766
2017-04-07 13:03:52 +00:00
Jonas Paulsson cad72efee6 [SystemZ] Check for presence of vector support in SystemZISelLowering
A test case was found with llvm-stress that caused DAGCombiner to crash
when compiling for an older subtarget without vector support.

SystemZTargetLowering::combineTruncateExtract() should do nothing for older
subtargets.

This check was placed in canTreatAsByteVector(), which also helps in a few
other places.

Review: Ulrich Weigand
llvm-svn: 299763
2017-04-07 12:35:11 +00:00
Diana Picus fed80723c0 [ARM] GlobalISel: Test hard float properly
It turns out -float-abi=hard doesn't set the hard float calling
convention for libcalls. We need to use a hard float triple instead
(e.g. gnueabihf).

llvm-svn: 299761
2017-04-07 12:04:24 +00:00
Sam Kolton 6e79529db4 [AMDGPU] Move SiShrinkInstruction and SDWAPeephole to SSAOptimization passes
Summary:
The difference between PreRegAlloc() and MachineSSAOptimization() is that the former is run even at the -O0 optimization level. In my understanding, SIShrinkInstructions and SDWAPeephole shouldn't run when optimizations are disabled.
With this change the order of passes will not change.

Reviewers: arsenm, vpykhtin, rampitec

Subscribers: qcolombet, kzhuravl, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye

Differential Revision: https://reviews.llvm.org/D31705

llvm-svn: 299757
2017-04-07 10:53:12 +00:00
Diana Picus 3c608448e1 [ARM] GlobalISel: Support frem for 64-bit values
Legalize to a libcall.

llvm-svn: 299756
2017-04-07 10:50:02 +00:00
Diana Picus a5bab61a8d [ARM] GlobalISel: Support frem for 32-bit values
Legalize to a libcall.
On this occasion, also start allowing soft float subtargets. For the
moment G_FREM is the only legal floating point operation for them.
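
For reference, frem in the IR has the semantics of C's fmod family, and the
libcall legalization turns G_FREM into exactly those calls; a conceptual
sketch (illustrative only):

#include <math.h>

// What a legalized G_FREM amounts to on a soft-float ARM subtarget.
float  frem_f32(float a, float b)   { return fmodf(a, b); } // 32-bit case
double frem_f64(double a, double b) { return fmod(a, b); }  // 64-bit case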

llvm-svn: 299753
2017-04-07 09:41:39 +00:00
Konstantin Zhuravlyov 4b3847e865 AMDGPU/GFX9: Fix shared and private aperture queries
Differential Revision: https://reviews.llvm.org/D31786

llvm-svn: 299727
2017-04-06 23:02:33 +00:00
Eli Friedman 5fba1e53f2 Turn on -addr-sink-using-gep by default.
The new codepath has been in the tree for years, and there isn't any
reason to use two codepaths here.

Differential Revision: https://reviews.llvm.org/D30596

llvm-svn: 299723
2017-04-06 22:42:18 +00:00
Michael Kuperstein 6129887d21 [X86] Revert r299387 due to AVX legalization infinite loop.
llvm-svn: 299720
2017-04-06 22:33:25 +00:00
Matt Arsenault 21a438255d AMDGPU: Diagnose illegal SGPR to VGPR copies
This is possible in ways that are not compiler bugs,
so stop asserting on them.

This emits an extra error when emitting objects when it
can't encode the new pseudo, but I'm not sure that matters.

llvm-svn: 299712
2017-04-06 21:09:53 +00:00
Matt Arsenault 5cf4271883 AMDGPU: Replace fp16SrcZerosHighBits with a whitelist
FCOPYSIGN is lowered to bit operations which don't clear the high
bits.

llvm-svn: 299708
2017-04-06 20:58:30 +00:00
Huihui Zhang 98240e9643 [SelectionDAG] [ARM CodeGen] Fix chain information of LowerMUL
In LowerMUL, the chain information is not preserved for the newly
created Load SDNode.

For example, a Store may alias with one of the operands of the MUL.
The Load for that operand needs to be scheduled before the Store.
The dependence is recorded in the chain of the Store, in a TokenFactor.
However, when lowering MUL, the SDNodes for the new Loads for
VMULL are not updated in the TokenFactor for the Store. Thus the
chain is not preserved for the lowered VMULL.
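
An illustrative sketch of the ordering constraint at stake (not the original
reproducer; the real issue shows up when LowerMUL rebuilds the loads feeding
a vector VMULL):

void mul_then_store(short *p, const short *q, int *out) {
  int product = (int)p[0] * (int)q[0]; // load of p[0] feeds the multiply
  p[0] = 0;                            // store that may alias that load; the
  *out = product;                      // chain must keep it after the load
}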

llvm-svn: 299701
2017-04-06 20:22:51 +00:00
Yi Kong 5e7059b702 Revert "[ARM] Add Kryo to available targets"
This reverts commit 942d6e6f58bf7e63810dd7cbcbce1fdfa5ebc6d4.

Build breakage.

llvm-svn: 299689
2017-04-06 19:16:14 +00:00
Nirav Dave 974f7c23ae [SDAG] Fix visitAND optimization to deal with vector extract case again.
Summary:
Fix case elided by rL298920.

Fixes PR32545.

Reviewers: eli.friedman, RKSimon

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D31759

llvm-svn: 299688
2017-04-06 19:05:41 +00:00
Yi Kong 2b622b1fc1 [ARM] Add Kryo to available targets
Summary:
Host CPU detection now supports Kryo, so we need to recognize it in the ARM
target.

Reviewers: mcrosier, t.p.northover, rengolin, echristo, srhines

Reviewed By: t.p.northover, echristo

Subscribers: aemerson

Differential Revision: https://reviews.llvm.org/D31775

llvm-svn: 299674
2017-04-06 18:10:08 +00:00
Stanislav Mekhanoshin ea57c38521 [AMDGPU] Eliminate barrier if workgroup size is not greater than wavefront size
If a workgroup size is known to be not greater than the wavefront size,
the s_barrier instruction is not needed since all threads are guaranteed
to reach the same point at the same time.

Differential Revision: https://reviews.llvm.org/D31731

llvm-svn: 299659
2017-04-06 16:48:30 +00:00
Sam Kolton 9fa169601f [AMDGPU] Resubmit SDWA peephole: enable by default
Reviewers: vpykhtin, rampitec, arsenm

Subscribers: qcolombet, kzhuravl, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye

Differential Revision: https://reviews.llvm.org/D31671

llvm-svn: 299654
2017-04-06 15:03:28 +00:00
Simon Pilgrim 77d3c770d3 [X86][MMX] Test showing failure to create MMX non-temporal store
llvm-svn: 299640
2017-04-06 10:32:30 +00:00
David Green 1b4b59a415 [ARM] Remove a dead ADD during the creation of TBBs
During the optimisation of jump tables in the constant island pass,
an extra ADD could be left over, now dead but not removed.

Differential Revision: https://reviews.llvm.org/D31389

llvm-svn: 299634
2017-04-06 08:32:47 +00:00
Ivan Krasin d4f70c70b9 Revert r299536. [AMDGPU] SDWA peephole: enable by default.
Reason: breaks multiple bots:

http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-fast/builds/3988
http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-bootstrap/builds/1173

Original Review URL: https://reviews.llvm.org/D31671

llvm-svn: 299583
2017-04-05 19:58:12 +00:00
Krzysztof Parzyszek 2182b4b7b3 [Hexagon] Use -mattr to select HVX mode in a testcase, NFC
llvm-svn: 299582
2017-04-05 19:46:37 +00:00
Adam Nemet d5ffdd3605 [DAGCombine] Support FMF contract in fused multiple-and-sub too
This is a follow-on to r299096 which added support for fmadd.

Subtract does not have the case where, with two multiply operands, we commute in
order to fuse with the multiply that has the fewer uses.

llvm-svn: 299572
2017-04-05 17:58:48 +00:00
Renato Golin edfeb773fd [ARM] Try to re-enable MachineBranchProb.ll for ARM/AArch64
Commit r298799 changed code that made the XFAIL on MachineBranchProb.ll
irrelevant, but some configurations still failed. I can't reproduce it
locally, so I'm hoping that enabling this will tell me if some
configurations will really fail or if they were just too slow.

llvm-svn: 299558
2017-04-05 16:27:11 +00:00
Nirav Dave aa65a2beb8 [SystemZ] Prevent Merging Bitcast with non-normal loads
Fixes PR32505.

Reviewers: uweigand, jonpa

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D31609

llvm-svn: 299552
2017-04-05 15:42:48 +00:00
Sanjay Patel b2f1621bb1 [DAGCombiner] add and use TLI hook to convert and-of-seteq / or-of-setne to bitwise logic+setcc (PR32401)
This is a generic combine enabled via target hook to reduce icmp logic as discussed in:
https://bugs.llvm.org/show_bug.cgi?id=32401

It's likely that other targets will want to enable this hook for scalar transforms, 
and there are probably other patterns that can use bitwise logic to reduce comparisons.

Note that we are missing an IR canonicalization for these patterns, and we will probably
prefer the pair-of-compares form in IR (shorter, more likely to fold).
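
The pair-of-compares shape in question, at the source level (a sketch of the
PR32401 pattern, not an exhaustive list of what the hook covers):

// and-of-seteq: may now be lowered as (a | b) == 0 on targets that opt in.
bool both_zero(unsigned a, unsigned b) { return a == 0 && b == 0; }

// or-of-setne: may now be lowered as (a | b) != 0.
bool either_nonzero(unsigned a, unsigned b) { return a != 0 || b != 0; }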

Differential Revision: https://reviews.llvm.org/D31483

llvm-svn: 299542
2017-04-05 14:09:39 +00:00
Jonas Paulsson 38a2da92bc [DAGCombiner] Don't make a BUILD_VECTOR with operands of illegal type.
When DAGCombiner visits a SIGN_EXTEND_INREG of a BUILD_VECTOR with
constant operands, a new BUILD_VECTOR node will be created with the
transformed constants.

Llvm-stress found a case where the new BUILD_VECTOR had constant operands
of an illegal type, because the (legal) element type is in fact not a legal
scalar type.

This patch changes this so that the new BUILD_VECTOR has the same operand
type as the old one.

Review: Eli Friedman, Nirav Dave
https://bugs.llvm.org//show_bug.cgi?id=32422

llvm-svn: 299540
2017-04-05 13:45:37 +00:00
Sam Kolton 34e29784fb [AMDGPU] SDWA peephole: enable by default
Reviewers: vpykhtin, rampitec, arsenm

Subscribers: qcolombet, kzhuravl, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye

Differential Revision: https://reviews.llvm.org/D31671

llvm-svn: 299536
2017-04-05 12:00:45 +00:00
Ahmed Bougacha ec8b1fb539 [X86] Relax assert in broadcast-of-subvector lowering.
Before r294774, there was a problem when lowering broadcasts to use
128-bit subvectors.

When we looked through a bitcast to find the broadcast input, we'd keep
using the original type, so you'd end up with things like:
  (v8f32 (broadcast
    (v4f32 (extract_subvector
      (v8i32 V),
      ...))
    ))

r294774 fixed it to always emit subvectors with the scalar type of the
original source.

It also introduced some asserts, to check that we use scalars with
the same size, and vectors with the same number of elements.

The scalar size equality is checked earlier when looking through bitcasts,
and is a useful assert.

However, the number of elements doesn't have to be identical: we're always
going to extract a 128-bit subvector, and we can have different size
inputs if we looked through a concat_vector to find a 256-bit source.

Relax the overzealous assert.

Replace it with a check of the original source vector being 256 or 512
bits.  If it's 128 bits, we can't extract_subvector from it.

Fixes PR32371.

llvm-svn: 299490
2017-04-05 00:14:39 +00:00
Ahmed Bougacha d3c03a5ddd [AArch64] Avoid partial register deps on insertelt of load into lane 0.
This improves upon r246462: that prevented FMOVs from being emitted
for the cross-class INSERT_SUBREGs by disabling the formation of
INSERT_SUBREGs of LOAD.  But the ld1.s that we started selecting
caused us to introduce partial dependencies on the vector register.

Avoid that by using SCALAR_TO_VECTOR: it's a first-class citizen that
is folded away by many patterns, including the scalar LDRS that we
want in this case.

Credit goes to Adam for finding the issue!

llvm-svn: 299482
2017-04-04 22:55:53 +00:00
Evgeniy Stepanov 12de7b2446 Change section flag character for SHF_LINK_ORDER to "o".
GAS uses "m" as a compatibility alias for "M" (SHF_MERGE).

"o" is free, except on ia64, where it already means SHF_LINK_ORDER.

llvm-svn: 299479
2017-04-04 22:35:08 +00:00
Petr Hosek 9eb0a1e09b [AArch64][Fuchsia] Allow -mcmodel=kernel for --target=aarch64-fuchsia
This mode is just like -mcmodel=small except that it moves the
thread pointer from TPIDR_EL0 to TPIDR_EL1.

Patch by Roland McGrath.

Differential Revision: https://reviews.llvm.org/D31624

llvm-svn: 299462
2017-04-04 19:51:53 +00:00
Matt Arsenault 3333968771 Verifier: Check some amdgpu calling convention restrictions
llvm-svn: 299457
2017-04-04 18:43:11 +00:00
Matt Arsenault 3e90f84806 AMDGPU: Remove legacy export intrinsic
llvm-svn: 299444
2017-04-04 16:34:39 +00:00
Matt Arsenault 236da200f1 AMDGPU: Remove legacy image intrinsics
llvm-svn: 299443
2017-04-04 16:34:35 +00:00
Michael Zuckerman 88fb171015 [X86][LLVM] Converting __mm{|256|512}_movm_epi{8|16|32|64} LLVMIR call into generic intrinsics.
This patch is part one of two reviews, one for clang and the other for LLVM.
The patch deletes the back-end intrinsics and adds support for them in the auto upgrade.

Differential Revision: https://reviews.llvm.org/D31393

llvm-svn: 299432
2017-04-04 13:32:14 +00:00
Daniel Sanders bee5739a7c [tablegen][globalisel] Add support for nested instruction matching.
Summary:
Lift the restrictions that prevented the tree walking introduced in the
previous change and add support for patterns like:
  (G_ADD (G_MUL (G_SEXT $src1), (G_SEXT $src2)), $src3) -> SMADDWrrr $dst, $src1, $src2, $src3
Also adds support for G_SEXT and G_ZEXT to support these cases.

One particular aspect of this that I should draw attention to is that I've
tried to be overly conservative in determining the safety of matches that
involve non-adjacent instructions and multiple basic blocks. This is intended
to be used as a cheap initial check and we may add a more expensive check in
the future. The current rules are:
* Reject if any instruction may load/store (we'd need to check for intervening
  memory operations).
* Reject if any instruction has implicit operands.
* Reject if any instruction has unmodelled side-effects.
See isObviouslySafeToFold().

Reviewers: t.p.northover, javed.absar, qcolombet, aditya_nandakumar, ab, rovka

Reviewed By: ab

Subscribers: igorb, dberris, llvm-commits, kristof.beyls

Differential Revision: https://reviews.llvm.org/D30539

llvm-svn: 299430
2017-04-04 13:25:23 +00:00
Simon Dardis 0a47edb153 [mips] Deal with empty blocks in the mips hazard scheduler
This patch teaches the hazard scheduler how to handle empty blocks
when searching for the next real instruction while dealing with forbidden
slots.

Reviewers: slthakur

Differential Revision: https://reviews.llvm.org/D31293

llvm-svn: 299427
2017-04-04 11:28:53 +00:00
Oren Ben Simhon 568fb197da [X86] Add 64 bit pattern matching for PSADBW
The PSADBW pattern currently supports the 32-bit IR pattern and only the GLT (greater than) comparison.
The patch extends the pattern to also catch the 64-bit IR pattern and includes all other comparison types (not only GLT).
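
The kind of loop this pattern matching targets, sketched in C (illustrative;
the patch is about recognising the 64-bit-accumulator variant and the other
comparison forms of this idiom):

long long sad(const unsigned char *a, const unsigned char *b, int n) {
  long long sum = 0;                  // 64-bit accumulator
  for (int i = 0; i < n; ++i) {
    int d = a[i] - b[i];
    sum += d > 0 ? d : -d;            // absolute difference via a comparison
  }
  return sum;
}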

Differential Revision: https://reviews.llvm.org/D31577

llvm-svn: 299425
2017-04-04 10:23:18 +00:00
Sanjay Patel a4546efbc8 add/move codegen tests for and/or of setcc; NFC
llvm-svn: 299396
2017-04-03 22:45:46 +00:00
Matt Arsenault b600e138cc AMDGPU: Remove llvm.SI.vs.load.input
llvm-svn: 299391
2017-04-03 21:45:13 +00:00
Matt Arsenault c82768290d DAG: Fix missing legalization for any_extend_vector_inreg operands
llvm-svn: 299389
2017-04-03 21:28:13 +00:00
Simon Pilgrim af33757b5d [X86][SSE]] Lower BUILD_VECTOR with repeated elts as BUILD_VECTOR + VECTOR_SHUFFLE
It can be costly to transfer from the GPRs to the XMM registers and can prevent load merging.

This patch splits vXi16/vXi32/vXi64 BUILD_VECTORs that use the same operand in multiple elements into a BUILD_VECTOR with only a single insertion of each of those elements and then performs a unary shuffle to duplicate the values.

There are a couple of minor regressions this patch unearths due to some missing MOVDDUP/BROADCAST folds that I will address in a future patch.

Note: Now that vector shuffle lowering and combining is pretty good we should be reusing that instead of duplicating so much in LowerBUILD_VECTOR - this is the first of several patches to address this.

Differential Revision: https://reviews.llvm.org/D31373

llvm-svn: 299387
2017-04-03 21:06:51 +00:00
Amjad Aboud 0389f62879 x86 interrupt calling convention: re-align stack pointer on 64-bit if an error code was pushed
The x86_64 ABI requires that the stack is 16 byte aligned on function calls. Thus, the 8-byte error code, which is pushed by the CPU for certain exceptions, leads to a misaligned stack. This results in bugs such as Bug 26413, where misaligned movaps instructions are generated.

This commit fixes the misalignment by adjusting the stack pointer in these cases. The adjustment is done at the beginning of the prologue generation by subtracting another 8 bytes from the stack pointer. These additional bytes are popped again in the function epilogue.
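
A sketch of the kind of handler affected (hypothetical code using the x86
interrupt attribute; such handlers are typically built with
-mgeneral-regs-only, and the exact parameter types follow the GCC/Clang
documentation rather than anything in this patch):

struct interrupt_frame;  // opaque CPU-pushed frame

// For exceptions that push an error code, the CPU leaves RSP 8 bytes short
// of 16-byte alignment; the prologue now subtracts another 8 bytes.
__attribute__((interrupt))
void page_fault_handler(struct interrupt_frame *frame,
                        unsigned long long error_code) {
  (void)frame;
  (void)error_code;
}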

Fixes Bug 26413

Patch by Philipp Oppermann.

Differential Revision: https://reviews.llvm.org/D30049

llvm-svn: 299383
2017-04-03 20:28:45 +00:00
Jun Bum Lim dee5565869 [CodeGenPrep] move aarch64-type-promotion to CGP
Summary:
Move the aarch64-type-promotion pass into the existing type promotion framework in CGP.
This change also supports forking sexts when a new sext is required for promotion.
Note that this change is based on D27853 and I am submitting it early to provide a better idea of D27853.

Reviewers: jmolloy, mcrosier, javed.absar, qcolombet

Reviewed By: qcolombet

Subscribers: llvm-commits, aemerson, rengolin, mcrosier

Differential Revision: https://reviews.llvm.org/D28680

llvm-svn: 299379
2017-04-03 19:20:07 +00:00