Commit Graph

39274 Commits

Author SHA1 Message Date
Tim Northover 61c16142b4 GlobalISel: extend add widening to SUB, MUL, OR, AND and XOR.
These are the operations that are trivially identical. Division is omitted for
now because you need to use the correct sign/zero extension.
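
As a rough C sketch of why this holds (hypothetical example, not the GlobalISel code): the low bits of an add/sub/mul/and/or/xor result do not depend on how the operands were extended, but a division result does, so division needs the matching sign/zero extension.

  #include <stdint.h>
  #include <stdio.h>

  int main(void) {
      int8_t a = -8, b = 2;
      uint32_t wa = (uint8_t)a;            /* widen with the "wrong" (zero) extension: 248 */
      printf("%d\n", (int8_t)(wa * b));    /* low 8 bits still give -16, matching i8 mul   */
      printf("%d\n", (int8_t)(wa / b));    /* 124 instead of -4: sdiv needs sign extension */
      return 0;
  }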

llvm-svn: 277775
2016-08-04 21:39:49 +00:00
Tim Northover 9656f1476c GlobalISel: implement narrowing for G_ADD.
llvm-svn: 277769
2016-08-04 20:54:13 +00:00
Yaxun Liu 86c052238a [OpenCL] Add missing tests for getOCLTypeName
Adding missing tests for OCL type names for half, float, double, char, short, long, and unknown.

Patch by Aaron En Ye Shi.

Differential Revision: https://reviews.llvm.org/D22964

llvm-svn: 277759
2016-08-04 19:45:00 +00:00
Tim Northover 2f32e7f0ac AArch64: don't assume all i128s are BUILD_PAIRs
It leads to a crash when they're not. I'm *sure* I've made this mistake before,
at least once.

llvm-svn: 277755
2016-08-04 19:32:28 +00:00
Derek Schuff 732636d901 [WebAssembly] Check return value of getRegForValue in FastISel
Previously, FastISel for WebAssembly wasn't checking the return value of
`getRegForValue` in certain cases, which would generate instructions
referencing NoReg. This patch fixes this behavior.

Patch by Dominic Chen

Differential Revision: https://reviews.llvm.org/D23100

llvm-svn: 277742
2016-08-04 18:01:52 +00:00
Krzysztof Parzyszek 04c0796e37 [Hexagon] Validate register class when doing bit simplification
llvm-svn: 277740
2016-08-04 17:56:19 +00:00
Simon Pilgrim 3dbce52c16 [X86][SSE] Rename target shuffle unary permute matching function. NFCI.
In preparation for adding a binary permute matching function.

llvm-svn: 277737
2016-08-04 17:16:50 +00:00
Alina Sbirlea 6f937b1144 LoadStoreVectorizer: Remove TargetBaseAlign. Keep alignment for stack adjustments.
Summary:
TargetBaseAlign is no longer required since LSV checks if the target allows misaligned accesses.
A constant defining a base alignment is still needed for stack accesses where alignment can be adjusted.

The previous patch (D22936) was reverted because tests were failing. This patch also fixes the cause of those failures:
- x86 failing tests either did not have the right target or did not have the right alignment.
- NVPTX failing tests did not have the right alignment.
- AMDGPU failing test (merge-stores) should allow vectorization with the given alignment but the target info
  considers <3xi32> a non-standard type and gives up early. This patch removes the condition and only checks
  for a maximum size allowed and relies on the next condition checking for %4 for correctness.
  This should be revisited to include 3xi32 as an MVT type (on arsenm's non-immediate todo list).

Note that checking the sizeInBits of an MVT is undefined (it leads to an assertion failure),
so we need to create an EVT, hence the interface change in allowsMisaligned to include the Context.

Reviewers: arsenm, jlebar, tstellarAMD

Subscribers: jholewinski, arsenm, mzolotukhin, llvm-commits

Differential Revision: https://reviews.llvm.org/D23068

llvm-svn: 277735
2016-08-04 16:38:44 +00:00
Simon Pilgrim c2370b810d [X86][SSE] Split off shuffle mask canonicalization from lowerVectorShuffle. NFCI.
The new function now returns true if the shuffle should be commuted.

This will allow target shuffle combines to share the code.

llvm-svn: 277728
2016-08-04 14:21:32 +00:00
Krzysztof Parzyszek 7773c58458 [Hexagon] Clear kill flags from modified registers in peephole optimizer
llvm-svn: 277727
2016-08-04 14:17:16 +00:00
Nikolai Bozhenov f679530ba1 [X86] Heuristic to selectively build Newton-Raphson SQRT estimation
On modern Intel processors, hardware SQRT is in many cases faster than RSQRT
followed by Newton-Raphson refinement. The patch introduces a simple heuristic
to choose between the hardware SQRT instruction and Newton-Raphson software
estimation.

The patch treats scalars and vectors differently. The heuristic is that for
scalars the compiler should optimize for latency while for vectors it should
optimize for throughput. It is based on the assumption that throughput bound
code is likely to be vectorized.

Basically, the patch disables scalar NR for big cores and disables NR completely
for Skylake. Firstly, scalar SQRT has shorter latency than NR code in big cores.
Secondly, vector SQRT has been greatly improved in Skylake and has better
throughput compared to NR.
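
For reference, a minimal C sketch of the Newton-Raphson refinement being traded off here (hypothetical helper; y0 stands in for a hardware rsqrt estimate such as RSQRTSS produces):

  #include <math.h>
  #include <stdio.h>

  static float nr_sqrt(float x, float y0) {
      float y1 = y0 * (1.5f - 0.5f * x * y0 * y0);   /* one NR step refines rsqrt(x) */
      return x * y1;                                 /* sqrt(x) ~= x * rsqrt(x)      */
  }

  int main(void) {
      float x = 2.0f;
      float y0 = 1.0f / sqrtf(x) + 0.01f;            /* stand-in for a coarse estimate */
      printf("%f vs %f\n", nr_sqrt(x, y0), sqrtf(x));
      return 0;
  }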

Differential Revision: https://reviews.llvm.org/D21379

llvm-svn: 277725
2016-08-04 12:47:28 +00:00
Hrvoje Varga 846bdb746d [mips][microMIPS] Implement CFC1, CFC2, CTC1 and CTC2 instructions
Differential Revision: https://reviews.llvm.org/D22347

llvm-svn: 277719
2016-08-04 11:22:52 +00:00
Simon Pilgrim 5d5ca9c0cb [X86][SSE] Add initial costs for vector CTTZ/CTLZ
llvm-svn: 277716
2016-08-04 10:51:41 +00:00
Simon Pilgrim 8ae6dad49b [X86][SSE] Don't decide when to scalarize CTTZ/CTLZ for performance at lowering - this is what cost models are for
Improved CTTZ/CTLZ costings will be added shortly

llvm-svn: 277713
2016-08-04 10:14:39 +00:00
Simon Dardis 57f4ae4625 [mips] Enable tail calls by default
Enable tail calls by default for (micro)MIPS(64).

microMIPS is slightly more tricky than doing it for MIPS(R6) or microMIPSR6.
microMIPS has two instruction encodings: 16bit and 32bit along with some
restrictions on the size of the instruction that can fill the delay slot.
For safe tail calls for microMIPS, the delay slot filler attempts to find
a correct size instruction for the delay slot of TAILCALL pseudos.

Reviewers: dsanders, vkalintris

Subscribers: jfb, dsanders, sdardis, llvm-commits

Differential Revision: https://reviews.llvm.org/D21138

llvm-svn: 277708
2016-08-04 09:17:07 +00:00
Dean Michael Berris 7e9abea2ae [XRay] Align entry and return sleds to 2 byte boundaries
This should ensure that we can atomically write two bytes (on top of the
retq and the one past it) and have those two bytes not straddle cache
lines.

We also move the label past the alignment instruction so that we can refer
to the actual first instruction, as opposed to potential padding before the
aligned instruction.

Update the tests to allow us to reflect the new order of assembly.

Reviewers: rSerge, echristo, majnemer

Subscribers: llvm-commits, mehdi_amini

Differential Revision: https://reviews.llvm.org/D23101

llvm-svn: 277701
2016-08-04 07:37:28 +00:00
Guozhi Wei 9584d18d48 [PPC] Handling CallInst in PPCBoolRetToInt
This patch fixes pr25548.

The current implementation of PPCBoolRetToInt doesn't handle CallInst correctly, so it fails to do the intended optimization when there is a CallInst with parameters. This patch fixes that.

llvm-svn: 277655
2016-08-03 21:43:51 +00:00
Bruno Cardoso Lopes 3fcf832cce Revert "[ARM] Constant Materialize: imms with specific value can be encoded into mov.w"
This reverts commit r277610 / d619aa8878c3dafcc0d29a46517f63ff3209fdd4.

This makes subtarget-no-movt.ll fail in
http://lab.llvm.org:8080/green/job/clang-stage1-cmake-RA-incremental_check/26892.

llvm-svn: 277654
2016-08-03 21:26:21 +00:00
Simon Pilgrim 898f030f70 [X86][SSE] Enable target shuffle combining to combine multiple shuffle inputs.
We currently only support combining target shuffles that consist of a single source input (plus elements known to be undef/zero).

This patch generalizes the recursive combining of the target shuffle to collect all the inputs, merging any duplicates along the way, into a full set of src ops and its shuffle mask.

We uncover a number of cases where we have failed to combine a unary shuffle because the input has been duplicated and separated during lowering.

This will allow us to combine to 2-input shuffles in a future patch.

Differential Revision: https://reviews.llvm.org/D22859

llvm-svn: 277631
2016-08-03 19:08:24 +00:00
Krzysztof Parzyszek 23ee12e173 [Hexagon] Generate COPY/REG_SEQUENCE more aggressively for vectors
llvm-svn: 277626
2016-08-03 18:35:48 +00:00
Krzysztof Parzyszek 623afbdbd7 [Hexagon-ish] Add function to print cell map contents in bit tracker
llvm-svn: 277622
2016-08-03 18:13:32 +00:00
Weiming Zhao 57dc4cf0e1 [ARM] Constant Materialize: imms with specific value can be encoded into mov.w
Summary: Thumb2 supports encoding immediates with specific patterns into mov.w by splatting the low 8 bits into other bytes.
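
A small C sketch of the splat shapes meant here (hypothetical helper, illustrating the encodable patterns rather than the backend code):

  /* Nonzero if v has one of the Thumb2 byte-splat modified-immediate shapes:
   * 0xXYXYXYXY, 0x00XY00XY or 0xXY00XY00 for some byte XY. */
  static int isThumb2SplatImm(unsigned v) {
      unsigned b0 = v & 0xFFu;               /* low byte    */
      unsigned b1 = (v >> 8) & 0xFFu;        /* second byte */
      return v == b0 * 0x01010101u           /* XY XY XY XY */
          || v == b0 * 0x00010001u           /* 00 XY 00 XY */
          || v == b1 * 0x01000100u;          /* XY 00 XY 00 */
  }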

Reviewers: john.brawn, jmolloy

Subscribers: jmolloy, aemerson, rengolin, samparker, llvm-commits

Differential Revision: https://reviews.llvm.org/D23090

llvm-svn: 277610
2016-08-03 17:05:23 +00:00
Benjamin Kramer 0e4b7646c1 Hexagon: Use llvm_unreachable. NFC.
llvm-svn: 277605
2016-08-03 15:51:10 +00:00
Krzysztof Parzyszek ed4e7827bb [Hexagon] Do not check alignment for unsized types in isLegalAddressingMode
When the same base address is used to load two different data types, LSR
would assume a memory type of "void". This type is not sized and has no
alignment information. Checking for it causes a crash.

llvm-svn: 277601
2016-08-03 15:06:18 +00:00
Igor Breger c59b3a2236 [AVX512] Add aliases for vcvttss2si{l|q}, vcvttsd2si{l|q}, vcvttss2usi{l|q}, vcvttsd2usi{l|q} instructions.
Differential Revision: http://reviews.llvm.org/D23111

llvm-svn: 277586
2016-08-03 10:58:05 +00:00
Dean Michael Berris 0b8f6c8777 [XRay] Make the xray_instr_map section specification more correct
Summary:
We also add a test to show what currently happens when we create a
section per function and emit an xray_instr_map. This illustrates the
relationship (or lack thereof) between the per-function section and the
xray_instr_map section.

We also change the code generation slightly so that we don't always
create group sections, but rather only do so if the function that the
table is associated with is in a group.

Also in this change:

  - Remove the "merge" flag on the xray_instr_map section.
  - Test that we're generating the right table for comdat and non-comdat functions.

Reviewers: echristo, majnemer

Subscribers: llvm-commits, mehdi_amini

Differential Revision: https://reviews.llvm.org/D23104

llvm-svn: 277580
2016-08-03 07:21:55 +00:00
Derek Schuff 5629ec141f [WebAssembly] Remove unnecessary subtarget checks in peephole pass
Leftover from D22686; the passes can handle all the instructions
unconditionally; only isel needs to care whether to generate them.

llvm-svn: 277549
2016-08-02 23:31:56 +00:00
Derek Schuff 39bf39f35c [WebAssembly] Initial SIMD128 support.
Kicks off the implementation of wasm SIMD128 support (spec:
https://github.com/stoklund/portable-simd/blob/master/portable-simd.md),
adding support for add, sub, mul for i8x16, i16x8, i32x4, and f32x4.

The spec is WIP, and might change in the near future.

Patch by João Porto

Differential Revision: https://reviews.llvm.org/D22686

llvm-svn: 277543
2016-08-02 23:16:09 +00:00
Tim Northover 765777ce67 ARM: only form SMMLS when SUBE flags unused.
In this particular example we wouldn't want the smmls anyway (the value is
actually unused), but in general smmls does not provide the required flags
register so if that SUBE result is used we can't replace it.

llvm-svn: 277541
2016-08-02 23:12:36 +00:00
Matt Arsenault 979902b3ff AMDGPU: fdiv -1, x -> rcp -x
llvm-svn: 277535
2016-08-02 22:25:04 +00:00
Krzysztof Parzyszek 824d347d2d [Hexagon] Recognize vcombine in copy propagation
llvm-svn: 277528
2016-08-02 21:49:20 +00:00
Artem Belevich db4bc667af [NVPTX] remove unnecessary named metadata update that happens to break debug info.
Also added test case to verify IR changes done by NVPTXGenericToNVVM pass.

Differential Revision: https://reviews.llvm.org/D22837

llvm-svn: 277520
2016-08-02 20:58:24 +00:00
Tim Northover 1021d89398 AArch64: properly calculate cmpxchg status in FastISel.
We were relying on the misleadingly-named $status result to actually be the
status. Actually it's just a scratch register that may or may not be valid (and
is the inverse of the real status anyway). Success can be determined by
comparing the value loaded against the one we wanted to see for "cmpxchg
strong" loops like this.

Should fix PR28819.

llvm-svn: 277513
2016-08-02 20:22:36 +00:00
Nicolai Haehnle 8a482b33fe AMDGPU: Stay in WQM for non-intrinsic stores
Summary:
Two types of stores are possible in pixel shaders: stores to memory that are
explicitly requested at the API level, and stores that are an implementation
detail of register spilling or lowering of arrays.

For the first kind of store, we must ensure that helper pixels have no effect
and hence WQM must be disabled. The second kind of store must always be
executed, because the written value may be loaded again in a way that is
relevant for helper pixels as well -- and there are no externally visible
effects anyway.

This is a candidate for the 3.9 release branch.

Reviewers: arsenm, tstellarAMD, mareko

Subscribers: arsenm, kzhuravl, llvm-commits

Differential Revision: https://reviews.llvm.org/D22675

llvm-svn: 277504
2016-08-02 19:31:14 +00:00
Nicolai Haehnle bef0e90cf1 AMDGPU: Track physical registers in SIWholeQuadMode
Summary:
There are cases where uniform branch conditions are computed in VGPRs, and
we didn't correctly mark those as WQM.

The stray change in basic-branch.ll is because invoking the LiveIntervals
analysis leads to the detection of a dead register that would otherwise not
be seen at -O0.

This is a candidate for the 3.9 branch, as it fixes a possible hang.

Reviewers: arsenm, tstellarAMD, mareko

Subscribers: arsenm, llvm-commits, kzhuravl

Differential Revision: https://reviews.llvm.org/D22673

llvm-svn: 277500
2016-08-02 19:17:37 +00:00
Krzysztof Parzyszek 962932c2e2 [Hexagon] Prefer _io over _rr for 64-bit store with constant offset
Identify patterns where the address is aligned to an 8-byte boundary,
but the base address and the constant offset are both proper multiples
of 4. In such cases, extract Base+4 into a separate instruction, and
use S2_storerd_io instead of S4_storerd_rr.

llvm-svn: 277497
2016-08-02 18:50:05 +00:00
Krzysztof Parzyszek 74daece192 [Hexagon] Remove unused option
llvm-svn: 277496
2016-08-02 18:39:32 +00:00
Krzysztof Parzyszek 3e409e127e [Hexagon] Improvements to address mode checks in TargetLowering
- Implement getOptimalMemOpType.
- Check BaseOffset in isLegalAddressingMode.

llvm-svn: 277494
2016-08-02 18:34:31 +00:00
Nirav Dave 8601ac11aa [MC] Fix Intel Operand assembly parsing for .set ids
Recommitting after fixing overaggressive fastpath return in parsing.

Fix Intel syntax so that special-case identifier operands that refer to a
constant (e.g. .set <ID> n) are interpreted as immediates rather than memory
operands during parsing.

An associated commit fixing the clang test will be committed shortly.

Reviewers: rnk

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D22585

llvm-svn: 277489
2016-08-02 17:56:03 +00:00
Ahmed Bougacha ad30db32e6 [AArch64][GlobalISel] Mark basic binops/memops as legal.
We currently use and test these, and select most of them. Mark them
as legal even though we don't go through the full ir->asm flow yet.

This doesn't currently have standalone tests, but the verifier will
soon learn to check that the regbankselect/select tests are legal.

llvm-svn: 277471
2016-08-02 15:10:28 +00:00
Dan Gohman c558fe203f [WebAssembly] Remove a README.txt entry that is now implemented.
llvm-svn: 277467
2016-08-02 14:53:44 +00:00
Sam Parker 18bc3a002e [ARM] Improve smul* and smla* isel for Thumb2
Added (sra (shl x, 16), 16) to the sext_16_node PatLeaf for ARM to
simplify some pattern matching. This has allowed several patterns
for smul* and smla* to be removed as well as making it easier to add
the matching for the corresponding instructions for Thumb2 targets.
Also added two Pat classes that are predicated on Thumb2 with the
hasDSP flag and UseMulOps flags. Updated the smul codegen test with
the wider range of patterns plus the ThumbV6 and ThumbV6T2 targets.
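
A tiny C sketch of the node being matched (hypothetical helper): (sra (shl x, 16), 16) computes the same value as sign-extending the low 16 bits of x on the usual arithmetic-right-shift targets.

  static int sext16_via_shifts(int x) {
      return (int)((unsigned)x << 16) >> 16;   /* same value as (int)(short)x */
  }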

Differential Revision: https://reviews.llvm.org/D22908

llvm-svn: 277450
2016-08-02 12:44:27 +00:00
NAKAMURA Takumi 3f704497fa HexagonVectorPrint.cpp: Fix r277370. Don't use getInstrVecReg() in the expression of assert(). It has side effects.
llvm-svn: 277448
2016-08-02 11:59:16 +00:00
Simon Dardis 6c3591d33e [mips] Update the P5600 scheduler for isComplete = 1
These changes update the schedule model for the P5600 and include the
rest of the MSA and MIPS32R5 instruction sets.

Reviewers: dsanders, vkalintris

Differential Revision: https://reviews.llvm.org/D21835

llvm-svn: 277441
2016-08-02 10:32:00 +00:00
Bernard Ogden 849f737155 [ARM] Some saturation instructions not DSP-only
Summary:
Commit 276701 requires that targets have the DSP extensions to use
certain saturating instructions. This requires some corrections.

For ARM ISA the instructions in question are available in all v6*
architectures.

For Thumb2, the instructions in question are available from v6T2.
SSAT and USAT are part of the base architecture while SSAT16 and
USAT16 require the DSP extensions.

Reviewers: rengolin

Subscribers: aemerson, rengolin, samparker, llvm-commits

Differential Revision: https://reviews.llvm.org/D23010

llvm-svn: 277439
2016-08-02 10:04:03 +00:00
Igor Breger f44b79d08e [AVX512] Don't use i128 masked gather/scatter/load/store. Do a more accurate dataWidth check.
Differential Revision: http://reviews.llvm.org/D23055

llvm-svn: 277435
2016-08-02 09:15:28 +00:00
Matt Arsenault 6f1ae3c7db AArch64: Assert on branch displacement bits
llvm-svn: 277434
2016-08-02 08:56:52 +00:00
Matt Arsenault 5b54971ff9 AArch64: Consolidate branch inversion logic
llvm-svn: 277431
2016-08-02 08:30:06 +00:00
Matt Arsenault e8da145493 AArch64: BranchRelaxation cleanups
Move some logic into TII.

llvm-svn: 277430
2016-08-02 08:06:17 +00:00
Matt Arsenault f7065e15f8 AArch64: Fix end iterator dereference
Not all blocks have terminators. I'm not sure how this wasn't
crashing before.

llvm-svn: 277427
2016-08-02 07:20:09 +00:00
Craig Topper 9433f975d0 [AVX-512] Mark VADDPS/PD and VMULPS/PD as commutable. This necessitated adding itineraries to all of the instructions that use the avx512_fp_binop_p class.
llvm-svn: 277422
2016-08-02 06:16:53 +00:00
Craig Topper 553535848f [AVX-512] Use SSE_MUL_ITINS_S/SSE_DIV_ITINS_S for the scalar FMUL/FDIV instructions to match SSE/AVX.
llvm-svn: 277421
2016-08-02 06:16:51 +00:00
Craig Topper 05948fb36c [AVX-512] Correct ExeDomain for many AVX-512 instructions.
llvm-svn: 277416
2016-08-02 05:11:15 +00:00
Hans Wennborg 7a3a49b18a Revert r276895 "[MC][X86] Fix Intel Operand assembly parsing for .set ids"
This caused PR28805. Adding a regression test.

llvm-svn: 277402
2016-08-01 23:00:01 +00:00
Derek Schuff c64d7655b2 [WebAssembly] Support CFI for WebAssembly target
Summary: This patch implements CFI for WebAssembly. It modifies the
LowerTypeTest pass to pre-assign table indexes to functions that are
called indirectly, and lowers type checks to test against the
appropriate table indexes. It also modifies the WebAssembly backend to
support a special ".indidx" assembly directive that propagates the table
index assignments out to the linker.

Patch by Dominic Chen

Differential Revision: https://reviews.llvm.org/D21768

llvm-svn: 277398
2016-08-01 22:25:02 +00:00
Derek Schuff f41f67d3d9 [WebAssembly] Add asm.js-style exception handling support
Summary: This patch includes asm.js-style exception handling support for
WebAssembly. The WebAssembly MVP does not have any support for
unwinding or non-local control flow. In order to support C++ exceptions,
emscripten currently uses JavaScript exceptions along with some support
code (written in JavaScript) that is bundled by emscripten with the
generated code.
This scheme lowers exception-related instructions for wasm such that
wasm modules can be compatible with emscripten's existing scheme and
share the support code.

Patch by Heejin Ahn

Differential Revision: https://reviews.llvm.org/D22958

llvm-svn: 277391
2016-08-01 21:34:04 +00:00
Krzysztof Parzyszek 317d42c1ea [Hexagon] Tidy up some code, NFC: reapply r277372 with a fix
llvm-svn: 277383
2016-08-01 20:31:50 +00:00
Krzysztof Parzyszek d978ae239e Revert r277372, it is causing buildbot failures
llvm-svn: 277374
2016-08-01 20:00:33 +00:00
Krzysztof Parzyszek 1f72abb56b [Hexagon] Tidy up some code, NFC
llvm-svn: 277372
2016-08-01 19:46:21 +00:00
Ron Lieberman 8123b966cb [Hexagon] Generate vector printing instructions
llvm-svn: 277370
2016-08-01 19:36:39 +00:00
Evandro Menezes 82e245a202 [AArch64] Add support for Samsung Exynos M2 (NFC).
llvm-svn: 277364
2016-08-01 18:39:45 +00:00
Krzysztof Parzyszek 8fb181ca5b Replace MachineInstr* with MachineInstr& in TargetInstrInfo, NFC
There were a few cases introduced with the modulo scheduler.

llvm-svn: 277358
2016-08-01 17:55:48 +00:00
Krzysztof Parzyszek ddafa2cd5f [Hexagon] Check for offset overflow when reserving scavenging slots
Scavenging slots were only reserved when pseudo-instruction expansion in
frame lowering created new virtual registers. It is possible to still
need a scavenging slot even if no virtual registers were created, in cases
where the stack is large enough to overflow instruction offsets.

llvm-svn: 277355
2016-08-01 17:15:30 +00:00
Daniel Sanders b3ae33c7a6 [mips][fastisel] Correct argument lowering for (f64, f64, i32) and similar.
Summary:
Allocating an AFGR64 shadows two GPR32's instead of just one.

This fixes an LNT regression detected by our internal buildbots.

Reviewers: sdardis

Subscribers: dsanders, sdardis, llvm-commits

Differential Revision: https://reviews.llvm.org/D23012

llvm-svn: 277348
2016-08-01 15:32:51 +00:00
Valery Pykhtin 902db3101b [AMDGPU] refactor DS instruction definitions. NFC.
Differential revision: https://reviews.llvm.org/D22522

llvm-svn: 277344
2016-08-01 14:21:30 +00:00
Simon Pilgrim 46f119a59f [X86] Use implicit masking of SHLD/SHRD shift double instructions
Similar to the regular shift instructions, SHLD/SHRD only use the bottom bits of the shift value
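
A minimal C sketch of the 32-bit SHLD semantics being relied on (hypothetical helper, not the actual lowering): the count is masked to the operand width, so an explicit mask in the input is redundant.

  static unsigned shld32(unsigned hi, unsigned lo, unsigned count) {
      count &= 31;                     /* done implicitly by the instruction */
      if (count == 0)
          return hi;
      return (hi << count) | (lo >> (32 - count));
  }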

llvm-svn: 277341
2016-08-01 12:11:43 +00:00
Simon Pilgrim 2ddeee1784 Fixed MSVC out of range shift warning
llvm-svn: 277333
2016-08-01 09:40:38 +00:00
Diana Picus ab5a4c7dbb [AArch64] Return the correct size for TLSDESC_CALLSEQ
The branch relaxation pass is computing the wrong offsets because it assumes
TLSDESC_CALLSEQ eats up 4 bytes, when in fact it is lowered to an instruction
sequence taking up 16 bytes. This can become a problem in huge files with lots
of TLS accesses, as it may slowly move branch targets out of the range computed
by the branch relaxation pass.

Fixes PR24234 https://llvm.org/bugs/show_bug.cgi?id=24234

Differential Revision: https://reviews.llvm.org/D22870

llvm-svn: 277331
2016-08-01 08:38:49 +00:00
Craig Topper c48c029610 [AVX-512] Fix duplicate column in AVX512 execution dependency table that was preventing VMOVDQU32/VMOVDQA32 from being recognized. Fix a bug in the code that stops execution dependency fix from turning operations on 32-bit integer element types into operations on 64-bit integer element types.
llvm-svn: 277327
2016-08-01 07:55:33 +00:00
Hrvoje Varga 00d96ee7b9 [mips] Clang generates unaligned offset for MSA instruction st.d
Differential Revision: https://reviews.llvm.org/D19475

llvm-svn: 277323
2016-08-01 06:46:20 +00:00
Diana Picus 850043b25a [AArch64] Register passes so they can be run by llc
Initialize all AArch64-specific passes in the TargetMachine so they can be run
by llc. This can lead to conflicts in opt with some command line options that
share the same name as the pass, so I took this opportunity to do some cleanups:
* rename all relevant command line options from "aarch64-blah" to
  "aarch64-enable-blah" and update the tests accordingly
* run clang-format on their declarations
* move all these declarations to a common place (the TargetMachine) as opposed
  to having them scattered around (AArch64BranchRelaxation and
  AArch64AddressTypePromotion were the only offenders)

llvm-svn: 277322
2016-08-01 05:56:57 +00:00
Craig Topper 749a111f1e [AVX-512] Teach X86InstrInfo::getLargestLegalSuperClass to inflate to FR32X/FR64X if AVX512 is supported and VR128X/VR256X if VLX is supported.
Had to update a stack folding test to clobber the other 16 registers since this now made them get used instead of spilling.

llvm-svn: 277321
2016-08-01 05:31:50 +00:00
Craig Topper 3946176314 [AVX-512] Use FR32X/FR64X/VR128X/VR256X register classes in addRegisterClass if AVX512(for FR32X/FR64) or VLX(for VR128X/VR256) is supported. This is a minimal requirement to be able to allocate all 32 registers.
llvm-svn: 277319
2016-08-01 04:29:13 +00:00
Craig Topper da50eec26d [X86] Move mask register handling into the main switch of getLoadStoreRegOpcode. No functional change intended.
llvm-svn: 277318
2016-08-01 04:29:11 +00:00
Craig Topper c0097dc7d0 [X86] Simplify code for determing GR or FR reg classes by querying for super classes instead of manually listing individual classes.
llvm-svn: 277306
2016-07-31 20:20:08 +00:00
Craig Topper 7afdc0fb25 [AVX512] Always use EVEX encodings for 128/256-bit move instructions in getLoadStoreRegOpcode if VLX is supported.
llvm-svn: 277305
2016-07-31 20:20:05 +00:00
Craig Topper 4c53e60360 [AVX512] Add VLX packed move instructions to the execution dependency fix pass and update tests.
llvm-svn: 277304
2016-07-31 20:20:01 +00:00
Craig Topper eb1cc981a5 [AVX512] Move FR32X/FR64X handling in getLoadStoreRegOpcode into the main switch. No functional change intended.
llvm-svn: 277303
2016-07-31 20:19:55 +00:00
Craig Topper 338ec9a0cb [AVX512] Stop treating VR512 specially in getLoadStoreRegOpcode and use the regular switch which already tried to handle it, but was unreachable. This has the added benefit of enabling aligned loads/stores if the stack is aligned.
llvm-svn: 277302
2016-07-31 20:19:53 +00:00
Craig Topper 2a6bbb8203 [AVX512] Add X86::VR512RegClassID to X86RegisterInfo::getLargestLegalSuperClass.
llvm-svn: 277301
2016-07-31 20:19:50 +00:00
Simon Pilgrim 6be48e4aa7 [X86] Improve 64-bit shifts on 32-bit targets (PR14593)
As discussed on PR14593, this patch adds support for lowering to SHLD/SHRD from the patterns generated by DAGTypeLegalizer::ExpandShiftWithKnownAmountBit.

Differential Revision: https://reviews.llvm.org/D23000

llvm-svn: 277299
2016-07-31 19:50:45 +00:00
Craig Topper 00d34ed64f [AVX-512] Don't let ExeDependencyFix pass convert VPANDD/Q to VPANDPS/PD unless DQI instructions are supported. Same for ANDN, OR, and XOR.
Thanks to Igor Breger for pointing out my mistake.

llvm-svn: 277292
2016-07-31 17:15:07 +00:00
Elena Demikhovsky 6e9b16054f AVX-512: Removed AssertZext node before TRUNCATE
Removed AssertZext node, which was inserted between X86ISD::SETCC and "truncate to i1".

Differential Revision: https://reviews.llvm.org/D22850

llvm-svn: 277289
2016-07-31 06:48:01 +00:00
Davide Italiano d08e18fc7d [HexagonConstPropagation] Remove dead code.
llvm-svn: 277285
2016-07-30 22:07:21 +00:00
Davide Italiano 892d9f06d0 [HexagonBitSimplify] Remove dead code.
llvm-svn: 277284
2016-07-30 22:07:18 +00:00
Davide Italiano 3ebda7ed88 [ARMConstantIslandPass] Remove dead code.
llvm-svn: 277283
2016-07-30 22:07:15 +00:00
Simon Pilgrim 5e0d6b509a Strip trailing whitespace
llvm-svn: 277280
2016-07-30 20:53:21 +00:00
Simon Pilgrim 8bbd3650a6 [X86] Use peekThroughOneUseBitcasts helper function
llvm-svn: 277279
2016-07-30 20:51:26 +00:00
Simon Pilgrim cf49fa3251 [X86][SSE] Let 64-bit targets use the fast 2i32-2f32 UINT_TO_FP conversion as well as 32-bit
The 2i32-2i64 legalization means that we can use the slightly quicker double bits + fptrunc approach for the same results
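
For reference, a C sketch of the "double bits + fptrunc" trick assumed here (hypothetical helper): place the u32 in the low mantissa bits of 2^52, subtract 2^52 to get the exact value as a double, then truncate to float.

  static float u32_to_f32_via_double(unsigned x) {
      union { unsigned long long u; double d; } bias = { 0x4330000000000000ULL };     /* 2^52 */
      union { unsigned long long u; double d; } v    = { 0x4330000000000000ULL | x };
      return (float)(v.d - bias.d);    /* exact u32 as double, then fptrunc */
  }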

llvm-svn: 277271
2016-07-30 14:06:59 +00:00
Benjamin Kramer afff73cb5a [Hexagon] Perform bit arithmetic on unsigned to avoid accidentally shifting negative values.
Found by ubsan.

llvm-svn: 277268
2016-07-30 13:25:37 +00:00
Benjamin Kramer 205159c628 [X86] Fix lifetime of SMRange temporaries.
Found by asan -fsanitize-address-use-after-scope.

llvm-svn: 277266
2016-07-30 11:31:24 +00:00
Benjamin Kramer 22ff865a83 [AMDGPU] Fix lifetime of SmallVector temporaries.
Found by asan -fsanitize-address-use-after-scope.

llvm-svn: 277265
2016-07-30 11:31:16 +00:00
Matt Arsenault 749035b7b1 AMDGPU: Fix shouldConvertConstantLoadToIntImm behavior
This should really be true for any immediate, not just
inline ones.

llvm-svn: 277260
2016-07-30 01:40:36 +00:00
Matt Arsenault d2141b6030 AMDGPU: Set s_setpc_b64 as a terminator
llvm-svn: 277259
2016-07-30 01:40:34 +00:00
Matt Arsenault dc744412ad AMDGPU: Remove unused pattern
llvm-svn: 277258
2016-07-30 01:40:30 +00:00
Tim Northover 5fb414d870 GlobalISel: support translation of intrinsic calls.
These come in two variants for now: G_INTRINSIC and G_INTRINSIC_W_SIDE_EFFECTS.
We may decide to split the latter up with finer-grained restrictions later, if
necessary.

llvm-svn: 277224
2016-07-29 22:32:36 +00:00
Krzysztof Parzyszek f0b34a5c57 [Hexagon] Referencify MachineInstr in HexagonInstrInfo, NFC
llvm-svn: 277220
2016-07-29 21:49:42 +00:00
Michael Kuperstein f396b4c40d [X86] Match PSADBW in straight-line code
Up until now, we only had code to match PSADBW patterns that look like what
comes out of the loop vectorizer - a partial reduction inside the loop body
that gets fed into a horizontal operation in a different basic block.

This adds support for straight-line patterns, like those generated by the
SLP vectorizer.
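
A C-level sketch of the computation psadbw performs on one group of eight byte lanes (an illustration of the pattern being matched, not the matcher itself):

  static unsigned sad8(const unsigned char a[8], const unsigned char b[8]) {
      unsigned sum = 0;
      for (int i = 0; i < 8; ++i) {
          int d = (int)a[i] - (int)b[i];
          sum += (unsigned)(d < 0 ? -d : d);   /* absolute difference per byte */
      }
      return sum;                              /* horizontal add of the differences */
  }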

Differential Revision: https://reviews.llvm.org/D22889

llvm-svn: 277219
2016-07-29 21:45:51 +00:00
Simon Pilgrim f107ffa8f0 [X86][AVX] Fix VBROADCASTF128 selection bug (PR28770)
Support for lowering to VBROADCASTF128 etc. in D22460 was not correctly ensuring that the only users of the 128-bit vector load were the insertions of the vector into the lower/upper subvectors.

llvm-svn: 277214
2016-07-29 21:05:10 +00:00
Tim Northover 6b3bd61283 CodeGen: add new "intrinsic" MachineOperand kind.
This will be used during GlobalISel, where we need a more robust and readable
way to write tests than a simple immediate ID.

llvm-svn: 277209
2016-07-29 20:32:59 +00:00
Simon Pilgrim 7c85862b17 Fixed MSVC out of range shift warning
llvm-svn: 277195
2016-07-29 18:43:59 +00:00
Krzysztof Parzyszek 3e137e3429 Revert r277178, the actual change had already been applied
Will submit another patch with the testcase only.

llvm-svn: 277180
2016-07-29 17:50:47 +00:00
Krzysztof Parzyszek 68fe439d06 [Hexagon] Misaligned loads and stores are not fast
The DAG combiner tries to merge stores to adjacent vector wide memory
locations by creating stores which are integral multiples of the vector
width. Discourage this by informing it that this is slow. This should
not affect legalization passes, because all of them ignore the "Fast"
argument.

Patch by Pranav Bhandarkar.

llvm-svn: 277178
2016-07-29 17:45:16 +00:00
Tim Northover a51575ffa2 CodeGen: improve MachineInstrBuilder & MachineIRBuilder interface
For MachineInstrBuilder, having to manually use RegState::Define is ugly and
makes register definitions clunkier than they need to be, so this adds two
convenience functions: addDef and addUse.

For MachineIRBuilder, we want to avoid BuildMI's first-reg-is-def rule because
it's hidden away and causes bugs. So this patch switches buildInstr to
returning a MachineInstrBuilder and adding *all* operands via addDef/addUse.

NFC.

llvm-svn: 277176
2016-07-29 17:43:52 +00:00
Ahmed Bougacha 6db3cfe2da [AArch64][GlobalISel] Select G_XOR.
llvm-svn: 277173
2016-07-29 16:56:25 +00:00
Ahmed Bougacha 7adfac56b3 [AArch64][GlobalISel] Select G_LOAD/G_STORE.
Mostly straightforward as we ignore addressing modes and just
use the base + unsigned immediate offset (always 0) variants.

This currently fails to select extloads because we have yet to
agree on a representation.

llvm-svn: 277171
2016-07-29 16:56:16 +00:00
Brendon Cahoon 254f889dc5 MachinePipeliner pass that implements Swing Modulo Scheduling
Software pipelining is an optimization for improving ILP by
overlapping loop iterations. Swing Modulo Scheduling (SMS) is
an implementation of software pipelining that attempts to
reduce register pressure and generate efficient pipelines with
a low compile-time cost.

This implementation of SMS is a target-independent back-end pass.
When enabled, the pass should run just prior to the register
allocation pass, while the machine IR is in SSA form. If the pass
is successful, then the original loop is replaced by the optimized
loop. The optimized loop contains one or more prolog blocks, the
pipelined kernel, and one or more epilog blocks.

This pass is enabled for Hexagon only. To enable for other targets,
a couple of target specific hooks must be implemented, and the
pass needs to be called from the target's TargetMachine
implementation.

Differential Revision: http://reviews.llvm.org/D16829

llvm-svn: 277169
2016-07-29 16:44:44 +00:00
Krzysztof Parzyszek 0bd55a7608 [Hexagon] Custom lower VECTOR_SHUFFLE and EXTRACT_SUBVECTOR for HVX
If the mask of a vector shuffle has alternating odd or even numbers,
starting with 1 or 0 respectively, up to the largest possible index
for the given type in the given HVX mode (single or double), we can
generate a vpacko or vpacke instruction, respectively.

E.g.
  %42 = shufflevector <32 x i16> %37, <32 x i16> %41,
                      <32 x i32> <i32 1, i32 3, ..., i32 63>
  is %42.h = vpacko(%41.w, %37.w)

Patch by Pranav Bhandarkar.

llvm-svn: 277168
2016-07-29 16:44:27 +00:00
Krzysztof Parzyszek 0006e1afdd [Hexagon] Improve balancing of address calculation
Rebalances address calculation trees and applies Hexagon-specific
optimizations to the trees to improve instruction selection.

Patch by Tobias Edler von Koch.

llvm-svn: 277151
2016-07-29 15:15:35 +00:00
David L Kreitzer 8b959e5cfa Avoid unnecessary 32-bit to 64-bit zero extensions following
32-bit CMOV instructions on x86_64. The 32-bit CMOV implicitly
zero extends.
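
A tiny C sketch of the pattern being cleaned up (hypothetical example): a 32-bit register write, including a 32-bit CMOV, already clears the upper 32 bits on x86_64, so the separate zero extension is free.

  static unsigned long select_then_zext(int c, unsigned a, unsigned b) {
      unsigned r = c ? a : b;      /* may lower to a 32-bit CMOV             */
      return (unsigned long)r;     /* zero extension the hardware gives free */
  }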

Differential Revision: https://reviews.llvm.org/D22941

llvm-svn: 277148
2016-07-29 15:09:54 +00:00
Krzysztof Parzyszek 22ae7df6f4 Fix license information in the file header
llvm-svn: 277145
2016-07-29 14:04:17 +00:00
Krzysztof Parzyszek 0005a7284f Add missing files to r277143
llvm-svn: 277144
2016-07-29 13:59:55 +00:00
Krzysztof Parzyszek e95e95521c [Hexagon] Implement DFA based hazard recognizer
The post register allocator scheduler can generate poor schedules
because the scoreboard hazard recognizer is unable to identify
hazards for Hexagon precisely. Instead, Hexagon should use a DFA
based hazard recognizer.

Patch by Brendon Cahoon.

llvm-svn: 277143
2016-07-29 13:59:09 +00:00
Daniel Sanders cbaca42a03 Re-commit: [mips][fastisel] Handle 0-4 arguments without SelectionDAG.
Summary:
Implements fastLowerArguments() to avoid the need to fall back on
SelectionDAG for 0-4 argument functions that don't do tricky things like
passing double in a pair of i32's.

This allows us to move all except one test to -fast-isel-abort=3. The
remaining one has function prototypes of the form 'i32 (i32, double, double)'
which requires floats to be passed in GPR's.

The previous commit had an uninitialized variable that caused the incoming
argument region to have undefined size. This has been fixed.

Reviewers: sdardis

Subscribers: dsanders, llvm-commits, sdardis

Differential Revision: https://reviews.llvm.org/D22680

llvm-svn: 277136
2016-07-29 12:27:28 +00:00
Simon Pilgrim cb780b32a3 [X86][SSE] Optimize the truncation of vector comparison results with PACKSS
We currently default to using either generic shuffles or MASK+PACKUS/PACKSS to truncate all integer vectors. For vector comparisons, we know that the result will be either all ones or all zeros in every element, which can be efficiently truncated by directly using PACKSS to repeatedly halve the size of each element.

Due to the limited input values (-1 or 0) we don't need to account for vector element size, so for simplicity we just use the PACKSS(vXi16,vXi16) implementation in all cases. Additionally for AVX2 PACKSS of 256bit data we must perform a PERMQ shuffle to reorder the data into the correct order. I did investigate performing a single shuffle after all the PACKSS calls but the need to cross 128bit lanes makes this difficult to achieve efficiently.

We avoid performing this on AVX512 as it should have better alternative truncation instructions.
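
A scalar C sketch of why signed saturation preserves these masks (one lane shown; not the vector lowering itself): every compare-result lane is 0 or -1, and saturating it to the narrower type still yields 0 or -1.

  static signed char packss_lane(short v) {    /* one PACKSSWB lane */
      if (v > 127)  return 127;
      if (v < -128) return -128;
      return (signed char)v;                   /* 0 -> 0, -1 -> -1  */
  }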

Differential Revision: https://reviews.llvm.org/D22814

llvm-svn: 277132
2016-07-29 10:23:10 +00:00
Simon Pilgrim 0aaf6ba248 Fixed MSVC out of range shift warning
llvm-svn: 277130
2016-07-29 10:03:39 +00:00
Sjoerd Meijer a3de1262d7 Fix for commit rL277126 that broke a build.
llvm-svn: 277129
2016-07-29 09:57:37 +00:00
Prakhar Bahuguna d1233e857e [Thumb] Emit Thumb move in both Thumb modes for struct_byval predicates
Summary:
The MOV/MOVT instructions being chosen for struct_byval predicates was
conditional only on Thumb2, resulting in an ARM MOV/MOVT instruction
being incorrectly emitted in Thumb1 mode. This is especially apparent
with v8-m.base targets. This patch ensures that Thumb instructions are
emitted in both Thumb modes.

Reviewers: rengolin, t.p.northover

Subscribers: llvm-commits, aemerson, rengolin

Differential Revision: https://reviews.llvm.org/D22865

llvm-svn: 277128
2016-07-29 09:16:46 +00:00
Jacques Pienaar da704adc2f [lanai] Update for Target API (TargetRegistry::RegisterMCAsmBackend) change
llvm-svn: 277127
2016-07-29 08:50:23 +00:00
Sjoerd Meijer 0eb96ed0de TargetInstrInfo: add virtual function getInstSizeInBytes
This adds a target hook getInstSizeInBytes to TargetInstrInfo that a lot of
subclasses already implement.

Differential Revision: https://reviews.llvm.org/D22885

llvm-svn: 277126
2016-07-29 08:16:16 +00:00
Craig Topper e4f868ea16 [AVX512] Mark EVEX VMOVSSrm and VMOVSDrm as canFoldAsLoad and isReMaterializable.
llvm-svn: 277120
2016-07-29 06:06:04 +00:00
Craig Topper 5625d24977 [AVX512] Copy the patterns that recognize scalar arimetic operations inserting into the lower element of a packed vector from AVX/SSE so that we can use EVEX encoded instructions.
llvm-svn: 277119
2016-07-29 06:06:00 +00:00
David Majnemer d536f2328e [ConstantFolding] Teach the folder how to fold ConstantVector
A ConstantVector can have ConstantExpr operands and vice versa.
However, the folder had no ability to fold ConstantVectors which, in
some cases, was an optimization barrier.

Instead, rephrase the folder in terms of Constants instead of
ConstantExprs and teach callers how to deal with failure.

llvm-svn: 277099
2016-07-29 03:27:26 +00:00
Craig Topper c7de3a1018 [AVX512] Remove the intrinsic forms of VMOVSS/VMOVSD. We don't need two different forms of 'rr' and 'rm'. This matches SSE/AVX.
I'm not convinced the patterns for the rm_Int were correct anyway. It had a tied source that shouldn't exist for the unmasked version. The load form of MOVSS always zeros the most significant bits. I've left the patterns off the masked load instructions as I'm not sure what the correct pattern should be and we don't have any tests currently. Nor do we implement masked scalar load intrinsics in clang currently.

llvm-svn: 277098
2016-07-29 02:49:08 +00:00
Changpeng Fang 26fb9d268b AMDGPU/SI: Don't handle a loop if there is no loop at all for a terminator BB.
Differential Revision: http://reviews.llvm.org/D22021

Reviewed by: arsenm

llvm-svn: 277073
2016-07-28 23:01:45 +00:00
Krzysztof Parzyszek 6400dec5ab Fix build breaks after r277028
llvm-svn: 277031
2016-07-28 20:25:21 +00:00
Krzysztof Parzyszek 167d918225 [Hexagon] Implement MI-level constant propagation
llvm-svn: 277028
2016-07-28 20:01:59 +00:00
Krzysztof Parzyszek c43644d332 [Hexagon] Insert CFI instructions before throwing calls
Normally, CFI instructions should be inserted after allocframe, but
if allocframe is in the same packet with a call, the CFI instructions
should be inserted before that packet.

llvm-svn: 277020
2016-07-28 19:13:46 +00:00
Matthias Braun 941a705b7b MachineFunction: Return reference for getFrameInfo(); NFC
getFrameInfo() never returns nullptr so we should use a reference
instead of a pointer.

llvm-svn: 277017
2016-07-28 18:40:00 +00:00
Ahmed Bougacha 8550509b64 [AArch64][GlobalISel] Select G_BR.
This is the first unsized instruction we support; move down the
'sized' check to binops.

llvm-svn: 277007
2016-07-28 17:15:15 +00:00
Ahmed Bougacha d7748d6491 [AArch64][GlobalISel] Select GPR G_SUB.
llvm-svn: 277003
2016-07-28 16:58:35 +00:00
Ahmed Bougacha 61a7928dde [AArch64][GlobalISel] Select GPR G_AND.
llvm-svn: 277002
2016-07-28 16:58:31 +00:00
Ahmed Bougacha 46c05fc861 [GlobalISel] Remove types on selected insts instead of using LLT().
LLT() has a particular meaning: it's one invalid type. But we really
want selected instructions to have no type whatsoever.

Also verify that types don't linger after ISel, and enable the verifier
on the AArch64 select test.

llvm-svn: 277001
2016-07-28 16:58:27 +00:00
Wei Ding 07e03712d3 AMDGPU : Add intrinsics for compare with the full wavefront result
Differential Revision: http://reviews.llvm.org/D22482

llvm-svn: 276998
2016-07-28 16:42:13 +00:00
Sjoerd Meijer 89217f8835 TargetInstrInfo: rename GetInstSizeInBytes to getInstSizeInBytes. NFC
Differential Revision: https://reviews.llvm.org/D22925

llvm-svn: 276997
2016-07-28 16:32:22 +00:00
Daniel Sanders b23005ead4 [mips] Fix a warning that occurs on some gcc 4.9.2's but not all of them.
llvm-svn: 276993
2016-07-28 15:59:06 +00:00
Daniel Sanders 6e74651658 Revert r276982 and r276984: [mips][fastisel] Handle 0-4 arguments without SelectionDAG
It seems that the stack offset in callabi.ll varies between machines. I'll look
into it.

llvm-svn: 276989
2016-07-28 15:37:42 +00:00
Craig Topper 7e27885f69 [X86] Remove CustomInserter for FMA3 instructions. Looks like since we got full commuting support for FMAs after this was added, the coalescer can now get this right on its own.
Differential Revision: https://reviews.llvm.org/D22799

llvm-svn: 276987
2016-07-28 15:28:56 +00:00
Daniel Sanders 313755d9ef [mips] Reword debug message as should have been done before committing r276982
llvm-svn: 276984
2016-07-28 15:13:23 +00:00
Daniel Sanders e0b529f619 [mips][fastisel] Handle 0-4 arguments without SelectionDAG.
Summary:
Implements fastLowerArguments() to avoid the need to fall back on
SelectionDAG for 0-4 argument functions that don't do tricky things like
passing double in a pair of i32's.

This allows us to move all except one test to -fast-isel-abort=3. The
remaining one has function prototypes of the form 'i32 (i32, double, double)'
which requires floats to be passed in GPR's.

Reviewers: sdardis

Subscribers: dsanders, llvm-commits, sdardis

Differential Revision: https://reviews.llvm.org/D22680

llvm-svn: 276982
2016-07-28 14:55:28 +00:00
Tom Stellard 19f4301099 AMDGPU/SI: Don't use reserved VGPRs for SGPR spilling
Summary:
We were using reserved VGPRs for SGPR spilling and this was causing
some programs with a workgroup size of 1024 to use more than 64
registers, which is illegal.

Reviewers: arsenm, mareko, nhaehnle

Subscribers: nhaehnle, arsenm, llvm-commits, kzhuravl

Differential Revision: https://reviews.llvm.org/D22032

llvm-svn: 276980
2016-07-28 14:30:43 +00:00
Nicolai Haehnle 3b572002a2 AMDGPU: add execfix flag to SI_ELSE
Summary:
SI_ELSE is lowered into two parts:

s_or_saveexec_b64 dst, src (at the start of the basic block)

s_xor_b64 exec, exec, dst (at the end of the basic block)

The idea is that dst contains the exec mask of the preceding IF block. It can
happen that SIWholeQuadMode decides to switch from WQM to Exact mode inside
the basic block that contains SI_ELSE, in which case it introduces an instruction

s_and_b64 exec, exec, s[...]

which masks out bits that can correspond to both the IF and the ELSE paths.
So the resulting sequence must be:

s_or_saveexec_b64 dst, src

s_and_b64 exec, exec, s[...] <-- added by SIWholeQuadMode
s_and_b64 dst, dst, exec <-- added by SILowerControlFlow

s_xor_b64 exec, exec, dst

Whether to add the additional s_and_b64 dst, dst, exec is currently determined
via the ExecModified tracking. With this change, it is instead determined by
an additional flag on SI_ELSE which is set by SIWholeQuadMode.

Finally: It also occurred to me that an alternative approach for the long run
is for SILowerControlFlow to unconditionally emit

s_or_saveexec_b64 dst, src

...

s_and_b64 dst, dst, exec
s_xor_b64 exec, exec, dst

and have a pass that detects and cleans up the "redundant AND with exec"
pattern where possible. This could be useful anyway, because we also add
instructions

s_and_b64 vcc, exec, vcc

before s_cbranch_scc (in moveToALU), and those are often redundant. I have
some pending changes to how KILL is lowered that could also benefit from
such a cleanup pass.

In any case, this current patch could help in the short term with the whole
ExecModified business.

Reviewers: tstellarAMD, arsenm

Subscribers: arsenm, llvm-commits, kzhuravl

Differential Revision: https://reviews.llvm.org/D22846

llvm-svn: 276972
2016-07-28 11:39:24 +00:00
Zijiao Ma e56a53a9b3 Add unittests to {ARM | AArch64}TargetParser.
Add unittests to {ARM | AArch64}TargetParser, and along the way correct the following problems:
1. Correct an indexing problem in AArch64TargetParser. The architecture enumeration
 was shared across ARM and AArch64 in the original implementation, but the code used an
 index offset by the ARM entries, which indexed into the array incorrectly. The fix gives
 AArch64 its own arch enum; otherwise we would do a lot of slow iterating.
2. Correct a spelling error in a parameter of llvm::AArch64::getArchExtName.
3. Correct a writing mistake in llvm::ARM::parseArchISA.

Differential Revision: https://reviews.llvm.org/D21785

llvm-svn: 276957
2016-07-28 06:11:18 +00:00
Matt Arsenault edc7dcb2aa AMDGPU: Turn dead checks into asserts
llvm-svn: 276946
2016-07-28 00:32:05 +00:00
Matt Arsenault fe26775992 AMDGPU: Remove analyzeImmediate
This no longer uses the more complicated classification
of constants.

llvm-svn: 276945
2016-07-28 00:32:02 +00:00
Krzysztof Parzyszek 06a2b6b1ee [Hexagon] Find speculative loop preheader in hardware loop generation
Before adding a new preheader block, check if there is a candidate block
where the loop setup could be placed speculatively. This will be off by
default.

llvm-svn: 276919
2016-07-27 21:20:54 +00:00
Michael Kuperstein e7605e28f9 [X86] Factor out another piece of the SAD combine. NFCI.
llvm-svn: 276918
2016-07-27 20:59:51 +00:00
Krzysztof Parzyszek dc42164e39 [Hexagon] Add option to bisect spill slot optimization
llvm-svn: 276917
2016-07-27 20:58:43 +00:00
Krzysztof Parzyszek 5241b8efcf [Hexagon] Do not optimize volatile stack spill slots
llvm-svn: 276916
2016-07-27 20:50:42 +00:00
Krzysztof Parzyszek fae7986bf3 [Hexagon] Handle extended versions of restore routines
llvm-svn: 276903
2016-07-27 18:47:25 +00:00
Duncan P. N. Exon Smith e921088c71 XCore: Avoid implicit iterator conversions, NFC
Avoid implicit conversions from MachineInstrBundleIterator to
MachineInstr*, mainly by preferring MachineInstr& over MachineInstr*.

llvm-svn: 276899
2016-07-27 18:14:38 +00:00
Nirav Dave 06a99a46e2 [MC][X86] Fix Intel Operand assembly parsing for .set ids
Fix Intel syntax so that special-case identifier operands that refer to a constant
(e.g. .set <ID> n) are interpreted as immediates rather than memory operands during parsing.

Reviewers: rnk

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D22585

llvm-svn: 276895
2016-07-27 17:39:41 +00:00
Krzysztof Parzyszek a34d639549 [Hexagon] Add saved callee-saved registers as live-in in non-wrapped blocks
The callee-saved registers that are saved in a function are not pristine,
and so they can be defined and used. In case of shrink-wrapping though,
there are blocks that are outside of the save/restore range, and in those
blocks the saved registers must be treated as pristine. To avoid any uses
of these registers, add them as live-in in all those blocks.
This was already done for blocks reaching function exits after restore;
add code that does the same for blocks reached from the function entry
before save.

llvm-svn: 276886
2016-07-27 16:26:39 +00:00
Reid Kleckner 46cb48c74a Remove MCAsmInfo.h include from TargetOptions.h
TargetOptions wants the ExceptionHandling enum. Move that to
MCTargetOptions.h to avoid transitively including Dwarf.h everywhere in
clang. Now you can add a DWARF tag without a full rebuild of clang
semantic analysis.

llvm-svn: 276883
2016-07-27 16:03:57 +00:00
Diana Picus c65d8bdcf2 Typo fix. NFC
llvm-svn: 276879
2016-07-27 15:13:25 +00:00
Ahmed Bougacha 6756a2c953 [GlobalISel] Introduce an instruction selector.
And implement it for AArch64, supporting x/w ADD/OR.

Differential Revision: https://reviews.llvm.org/D22373

llvm-svn: 276875
2016-07-27 14:31:55 +00:00
Ahmed Bougacha 5e402eec7b [AArch64] Mark various *Info classes as 'final'. NFC.
llvm-svn: 276874
2016-07-27 14:31:46 +00:00
Ahmed Bougacha b8459d14ef [AArch64] Define AArch64RegisterInfo as a class, not a struct. NFC.
llvm-svn: 276873
2016-07-27 14:31:40 +00:00
Daniel Sanders c5537427c2 [mips][ias] Check '$rs = $rd' constraints when both registers are in AsmText.
Summary:
This is one possible solution to the problem of ignoring constraints that Simon
raised in D21473 but it's a bit of a hack.

The integrated assembler currently ignores violations of the tied register
constraints when the operands involved in a tie are both present in the AsmText.
For example, 'dati $rs, $rt, $imm' with the '$rs = $rt' will silently replace
$rt with $rs. So 'dati $2, $3, 1' is processed as if the user provided
'dati $2, $2, 1' without any diagnostic being emitted.

This is difficult to solve properly because there are multiple parts of the
matcher that are silently forcing these constraints to be met. Tied operands are
rendered to instructions by cloning previously rendered operands but this is
unnecessary because the matcher was already instructed to render the operand it
would have cloned. This is also unnecessary because earlier code has already
replaced the MCParsedOperand with the one it was tied to (so the parsed input
is matched as if it were 'dati <RegIdx 2>, <RegIdx 2>, <Imm 1>'). As a result,
it looks like fixing this properly amounts to a rewrite of the tied operand
handling which affects all targets.

This patch however, merely inserts a checking hook just before the
substitution of MCParsedOperands and the Mips target overrides it. It's not
possible to accurately check the registers are the same this early (because
numeric registers haven't been bound to a register class yet) so it cheats a
bit and checks that the tokens that produced the operand are lexically
identical. This works because tied registers need to have the same register
class but it does have a flaw. It will reject 'dati $4, $a0, 1' for violating
the constraint even though $a0 ends up as the same register as $4.

Reviewers: sdardis

Subscribers: dsanders, llvm-commits, sdardis

Differential Revision: https://reviews.llvm.org/D21994

llvm-svn: 276867
2016-07-27 13:49:44 +00:00
Nemanja Ivanovic 9163ca0fc7 [PowerPC] Fix typo in PPCHazardRecognizers.cpp
Fixes PR28731.

llvm-svn: 276865
2016-07-27 13:24:54 +00:00
Duncan P. N. Exon Smith e5a22f44b8 PowerPC: Avoid implicit iterator conversions, NFC
Avoid implicit conversions from MachineInstrBundleIterator to
MachineInstr* in the PowerPC backend, mainly by preferring MachineInstr&
over MachineInstr* when a pointer isn't nullable and using range-based
for loops.

There was one piece of questionable code in PPCInstrInfo::AnalyzeBranch,
where a condition checked a pointer converted from an iterator for
nullptr.  Since this case is impossible (moreover, the code above
guarantees that the iterator is valid), I removed the check when I
changed the pointer to a reference.

Despite that case, there should be no functionality change here.

llvm-svn: 276864
2016-07-27 13:24:16 +00:00
Renato Golin 80e58869f8 [ARM] Set a non-conflicting comment character for assembly in MSVC mode
Currently, for ARMCOFFMCAsmInfoMicrosoft, no comment character is set, thus the
default, '#', is used.

The hash character doesn't work as comment character in ARM assembly, since '#'
is used for immediate values.

The comment character is set to ';', which is the comment character used by MS
armasm.exe. (The microsoft armasm.exe uses a different directive syntax than
what LLVM currently supports though, similar to ARM's armasm.)

This allows inline assembly with immediate constants to be built (and brings the
assembly output from clang -S closer to being possible to assemble).

A test is added that verifies that ';' is correctly interpreted as comments in
this mode, and verifies that assembling code that includes literal constants
with a '#' works.

Patch by Martin Storsjö.

llvm-svn: 276859
2016-07-27 12:31:58 +00:00
Matt Arsenault e3862cdc93 AMDGPU: Use rcp for fdiv 1, x with fpmath metadata
Using rcp should be OK for safe math usually, so this
should not be replacing the original fdiv.

llvm-svn: 276823
2016-07-26 23:25:44 +00:00
Matt Arsenault c6b69a9114 AMDGPU: Use implicit_def for selecting anyext
llvm-svn: 276819
2016-07-26 23:06:33 +00:00
Matt Arsenault 60a750fa69 AMDGPU/R600: Remove dead custom inserters
The intrinsics for these were removed, so this is dead.

llvm-svn: 276805
2016-07-26 21:03:38 +00:00
Matt Arsenault b06db8f666 AMDGPU: Minor AsmPrinter cleanups
llvm-svn: 276804
2016-07-26 21:03:36 +00:00
Krzysztof Parzyszek 2a480599bb [Hexagon] Post-increment loads/stores enhancements
- Generate vector post-increment stores more aggressively.
- Predicate post-increment and vector stores in early if-conversion.

llvm-svn: 276800
2016-07-26 20:30:30 +00:00
Michael Kuperstein 2dc08f7df8 [X86] Split out absdiff detection from SAD combine. NFC.
Preparation for supporting PSADBW emission for straight-line code.

llvm-svn: 276798
2016-07-26 20:01:29 +00:00
Krzysztof Parzyszek 57c3ddddec [Hexagon] Gracefully handle reg class mismatch in HexagonLoopReschedule
llvm-svn: 276793
2016-07-26 19:17:13 +00:00
Krzysztof Parzyszek 6eba5b8c37 [Hexagon] Rerun bit tracker on new instructions in RIE
Consider this case:
  vreg1 = A2_zxth vreg0   (1)
  ...
  vreg2 = A2_zxth vreg1   (2)

Redundant instruction elimination could delete the instruction (1)
because the user (2) only cares about the low 16 bits. Then it could
delete (2) because the input is already zero-extended. The problem
is that the properties allowing each individual instruction to be
deleted depend on the existence of the other instruction, so either
one can be deleted, but not both.
The existing check for this situation in RIE was insufficient. The
fix is to update all dependent cells when an instruction is removed
(replaced via COPY) in RIE.

llvm-svn: 276792
2016-07-26 19:08:45 +00:00
Krzysztof Parzyszek 1adca30c39 [Hexagon] Bitwise operations for insert/extract word not simplified
Change the bit simplifier to generate REG_SEQUENCE instructions in
addition to COPY, which will handle cases of word insert/extract.

llvm-svn: 276787
2016-07-26 18:30:11 +00:00
Krzysztof Parzyszek 29c567a3f0 [Hexagon] Add support for proper handling of H and L constraints
H -> High part of reg pair.
L -> Low part of reg pair.

Patch by Sundeep Kushwaha.

llvm-svn: 276773
2016-07-26 17:31:02 +00:00
Matt Arsenault 52ef4019fd AMDGPU: Make AMDGPUMachineFunction fields private
ABIArgOffset is a problem because properly setting the
KernArgSize requires that the reserved area before the
real kernel arguments be correctly aligned, which requires
fixing clover.

llvm-svn: 276766
2016-07-26 16:45:58 +00:00
Matt Arsenault 32fc527c65 AMDGPU: Add fp legacy instruction intrinsics
This could use some additional optimization work
to use mad/mac legacy.

llvm-svn: 276764
2016-07-26 16:45:45 +00:00
Daniel Sanders 94ed30a401 [mips] Fix typos in spelling of lowerRETURNADDR.
The first letter was mistakenly capitalized.

llvm-svn: 276753
2016-07-26 14:46:11 +00:00
Krzysztof Parzyszek 3b4682f6ba [Hexagon] Update store offset when not packetizing it with allocframe
When the packetizer wants to put a store to a stack slot in the same
packet with an allocframe, it updates the store offset to reflect the
value of SP before it is updated by allocframe. If the store cannot
be packetized with the allocframe after all, the offset needs to be
updated back to the previous value.

llvm-svn: 276749
2016-07-26 14:24:46 +00:00
Oliver Stannard 1c6e591457 [ARM] Improve error messages for .arch_extension directive
- More informative message when extension name is not an identifier token.
- Stop parsing directive if extension is unknown (avoid duplicate error
  messages).
- Report unsupported extensions with a source location, rather than
  report_fatal_error.

Differential Revision: https://reviews.llvm.org/D22806

llvm-svn: 276748
2016-07-26 14:24:43 +00:00
Oliver Stannard 2171828a49 [ARM] Implement -mimplicit-it assembler option
This option, compatible with gas's -mimplicit-it, controls the
generation/checking of implicit IT blocks in ARM/Thumb assembly.

This option allows two behaviours that were not possible before:
- When in ARM mode, emit a warning when assembling a conditional
  instruction that is not in an IT block. This is enabled with
  -mimplicit-it=never and -mimplicit-it=thumb.
- When in Thumb mode, automatically generate IT instructions when an
  instruction with a condition code appears outside of an IT block. This
  is enabled with -mimplicit-it=thumb and -mimplicit-it=always.

The default option is -mimplicit-it=arm, which matches the existing
behaviour (allow conditional ARM instructions outside IT blocks without
warning, and error if a conditional Thumb instruction is outside an IT
block).

The general strategy for generating IT blocks in Thumb mode is to keep a
small list of instructions which should be in the IT block, and only
emit them when we encounter something in the input which means we cannot
continue the block.  This could be caused by:
- A non-predicable instruction
- An instruction with a condition not compatible with the IT block
- The IT block already contains 4 instructions
- A branch-like instruction (including ALU instructions with the PC as
  the destination), which cannot appear in the middle of an IT block
- A label (branching into an IT block is not legal)
- A change of section, architecture, ISA, etc
- The end of the assembly file.

Some of these, such as change of section and end of file, are parsed
outside of the ARM asm parser, so I've added a new virtual function to
AsmParser to ensure any previously-parsed instructions have been
emitted. The ARM implementation of this flushes the currently pending IT
block.

We now have to try instruction matching up to 3 times, because we cannot
know if the current IT block is valid before matching, and instruction
matching changes depending on the IT block state (due to the 16-bit ALU
instructions, which set the flags iff not in an IT block). In the common
case of not having an open implicit IT block and the instruction being
matched not needing one, we still only have to run the matcher once.

I've removed the ITState.FirstCond variable, because it does not store
any information that isn't already represented by CurPosition. I've also
updated the comment on CurPosition to accurately describe its meaning
(which this patch doesn't change).

Differential Revision: https://reviews.llvm.org/D22760

llvm-svn: 276747
2016-07-26 14:19:47 +00:00
Simon Pilgrim 019e102426 [X86][SSE] Fixed issue with memory folding of (v)cvtsd2ss intrinsics
Fixed typo in the intrinsic definitions of (v)cvtsd2ss with memory folding.

This was only unearthed when rL276102 started using the intrinsic again.

llvm-svn: 276740
2016-07-26 10:41:28 +00:00
Simon Dardis 68a204ddc1 [mips] MIPS64R6 compact branch support
MIPS64R6 compact branch support. As the MIPS LLVM backend uses distinct
MachineInstrs for certain 32 and 64 bit instructions (e.g. BEQ & BEQ64) that
map to the same instruction, extend compact branch support for the
corresponding 64bit branches.

Reviewers: dsanders

Differential Revision: https://reviews.llvm.org/D20164

llvm-svn: 276739
2016-07-26 10:25:07 +00:00
Simon Pilgrim 28c7d7093d Fixed spelling in comment
llvm-svn: 276738
2016-07-26 09:55:31 +00:00
Simon Dardis 273fc26b79 [mips] sgtu, s[rl]l, sra, dnegu, neg instruction aliases
Add the instruction alias sgtu (register form only), two operand forms of
s[rl]l and sra, and missing single/two operand forms of dnegu/neg.

Reviewers: dsanders

Differential Revision: https://reviews.llvm.org/D22752

llvm-svn: 276736
2016-07-26 09:13:46 +00:00
Craig Topper 79011a660e [X86] Remove isCommutable=1 from instructions that also load. Commuting such an instruction isn't useful as it would unfold the load. The exception is FMA3 instructions.
llvm-svn: 276733
2016-07-26 08:06:18 +00:00
Craig Topper 26000f8d90 [AVX512] Don't mark ADDSSZr_Int or MULSSZr_Int as commutable. The intrinsics have one of their arguments indicated as passing through the high bits and we can't commute that.
llvm-svn: 276732
2016-07-26 08:06:14 +00:00
Renato Golin 32b165f561 [ARM] Saturation instructions are DSP-only
The saturation instructions appeared in v6T2, with DSP extensions, but they
were being accepted / generated on any core once saturation detection was
introduced in the back-end. This commit restricts their usage to DSP-enabled
cores only.
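
An illustrative C clamp of the kind the back-end's saturation detection can
turn into SSAT (a sketch, not part of the patch; the function name is made up):

  /* Clamp to the signed 8-bit range; on a DSP-capable core this can now
     become "ssat r0, #8, r0", and stays compare/select code otherwise. */
  int clamp_s8(int x) {
    if (x > 127)
      x = 127;
    if (x < -128)
      x = -128;
    return x;
  }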

Fixes PR28607.

llvm-svn: 276701
2016-07-25 22:25:25 +00:00
David Blaikie bef810ff95 [WebAssembly] Update for Target API (TargetRegistry::RegisterMCAsmBackend) change
llvm-svn: 276694
2016-07-25 21:41:42 +00:00
Tim Northover e2e0067352 GlobalISel[AArch64]: support pointer types in argument lowering.
They're basically i64 for AArch64, but we'll leave them intact for stranger
targets. Also add some tests for the (very few) other cases we can handle right
now.

llvm-svn: 276689
2016-07-25 21:01:17 +00:00
Jan Vesely b64c8925e9 AMDGPU: Remove read_workdim intrinsic
Differential revision: https://reviews.llvm.org/D22732

llvm-svn: 276682
2016-07-25 20:17:02 +00:00
Matt Arsenault 2fa171c43a AMDGPU: Make skip threshold an option
llvm-svn: 276680
2016-07-25 19:48:29 +00:00
Matt Arsenault cdae95bef2 AMDGPU: Delete dead code
llvm-svn: 276675
2016-07-25 19:06:25 +00:00
Joel Jones 373d7d30dd [MC] Provide an MCTargetOptions to implementors of MCAsmBackendCtorTy, NFC
Some targets, notably AArch64 for ILP32, have different relocation encodings
based upon the ABI. This is an enabling change, so a future patch can use the
ABIName from MCTargetOptions to choose which relocations to use. Tested using
check-llvm.

The corresponding change to clang is in: http://reviews.llvm.org/D16538

Patch by: Joel Jones

Differential Revision: https://reviews.llvm.org/D16213

llvm-svn: 276654
2016-07-25 17:18:28 +00:00
Elena Demikhovsky 64e5f929d0 AVX-512: Fixed [US]INT_TO_FP selection for i1 vectors.
It failed with assertion before this patch.

Differential Revision: https://reviews.llvm.org/D22735

llvm-svn: 276648
2016-07-25 16:51:00 +00:00
Krzysztof Parzyszek 080bebd212 [Hexagon] Add target feature to generate long calls
llvm-svn: 276638
2016-07-25 14:42:11 +00:00
Sam Parker d5ca0a65b5 [ARM] Improve longMAC codegen test
Added thumb targets and dataflow checks to the longMAC test.

Differential Revision: https://reviews.llvm.org/D22684

llvm-svn: 276629
2016-07-25 10:11:00 +00:00
Simon Dardis 618975206e [mips] Optimize materialization of i64 constants
Avoid MipsAnalyzeImmediate usage if the constant fits in a 32-bit
integer. This allows us to generate the same instructions for the
materialization of the same constants regardless the width of their
type.

Patch by: Vasileios Kalintiris

Contributions by: Simon Dardis

Reviewers: Daniel Sanders

Differential Revision: https://reviews.llvm.org/D21689

llvm-svn: 276628
2016-07-25 09:57:28 +00:00
Sam Parker 73f5c785c1 [ARM] Small refactor of Thumb2 SMLA insts
Follow up to r276624. Changes bits 22-20 to be parameters to
instruction class.

Differential Revision: https://reviews.llvm.org/D22562

llvm-svn: 276626
2016-07-25 09:29:24 +00:00
Sam Parker 68c71cd1e4 [ARM] Enable ISel of SMMLS for ARM and Thumb2
Use ISelDAGToDAG to recognise the SMMLS instruction pattern.

Differential Revision: https://reviews.llvm.org/D22562

llvm-svn: 276624
2016-07-25 09:20:20 +00:00
Craig Topper ce415ff9c5 [AVX512] Add load folding support for the unmasked forms of the FMA instructions.
llvm-svn: 276615
2016-07-25 07:20:35 +00:00
Craig Topper 318e40b6f7 [AVX512] Add some additional patterns so that we can fold broadcast loads in the first argument of an FMADD/FMSUB/FNMADD/FNMSUB/FMADDSUB/FMSUBADD node. Also add patterns to support all combinations of the broadcast input and the preserved input for masked versions.
llvm-svn: 276614
2016-07-25 07:20:31 +00:00
Craig Topper 6bcbf5338c [AVX512] Cleanup FMA operand order in patterns to match the VEX versions and to really be 213, 231, and 132.
llvm-svn: 276613
2016-07-25 07:20:28 +00:00
Simon Pilgrim 381a0ade5a [X86] Add 'FeatureSlowSHLD' to cpu 'bdver4'
As with all AMD CPUs, Excavator has poor SHLD/SHRD performance. Also added bdver3 to the test as it was missing.

llvm-svn: 276569
2016-07-24 16:00:53 +00:00
Craig Topper 2dca3b287b [X86] Make the FMA3 instruction names consistent between VEX and EVEX encoded versions.
This places the 132/213/231 form number in front of the SS/SD/PS/PD. Move the Y for 256-bit versions to be after the PS/PD. Change the AVX512 scalar forms to include a Z in their name. This new format should be consistent with the general naming of instructions.

llvm-svn: 276559
2016-07-24 08:26:38 +00:00
Craig Topper 05629d05c7 [X86] Replace CodeGenOnly VPSRAVW/D/Q_Int instructions with patterns since the operand types exactly match the normal VPSRAVW/D/Q instructions.
llvm-svn: 276555
2016-07-24 07:32:45 +00:00
Craig Topper 8152b9cd96 [X86] Fix typo in comment.
llvm-svn: 276528
2016-07-23 16:44:08 +00:00
Chandler Carruth 488cb137a9 Fix a GCC error due to this member name also being a type name. This
should fix the build with GCC 4.9 at least. Not sure if this is the
right name or fix, but I've followed up on the original commit.

llvm-svn: 276522
2016-07-23 07:50:05 +00:00
Craig Topper b6519db90d [AVX512] Implement commuting support for EVEX encoded FMA3 instructions.
llvm-svn: 276521
2016-07-23 07:16:56 +00:00
Craig Topper 6172b0b3e9 [X86] Make one of the FMA3 commuting methods static. Remove a call to isFMA3 just to get the IsIntrisic flag, instead get it during the first call and pass it along. NFC
llvm-svn: 276520
2016-07-23 07:16:53 +00:00
Craig Topper ca8f5f309c [X86] Fix switch statement indentation per coding standards.
llvm-svn: 276519
2016-07-23 07:16:50 +00:00
Matt Arsenault b40d8600ca AMDGPU: Delete dead code
This has been dead since r269479

llvm-svn: 276518
2016-07-23 07:07:14 +00:00
Tom Stellard b8253c88b6 Revert "[AMDGPU] Emit read-only data to .rodata for hsa"
This reverts commit r276298.

Data stored in .rodata can have a negative offset from .text, but we
don't support negative values in relocations yet.

This caused a regression in one of the amp conformance tests:
5_Data_Cont/5_2_a_v/5_2_3_m/Assignment/Test.02.01

llvm-svn: 276498
2016-07-22 23:46:40 +00:00
George Burgess IV 8a457ddfd8 Fix include case. NFC.
llvm-svn: 276465
2016-07-22 20:15:19 +00:00
Tim Northover 33b07d6725 GlobalISel: implement legalization pass, with just one transformation.
This adds the actual MachineLegalizeHelper to do the work and a trivial pass
wrapper that legalizes all instructions in a MachineFunction. Currently the
only transformation supported is splitting up a vector G_ADD into one acting on
smaller vectors.

llvm-svn: 276461
2016-07-22 20:03:43 +00:00
Krzysztof Parzyszek 3c89bb09d5 [Hexagon] Make HexagonCodeGen depend on Scalar
Hexagon backend uses LoopDataPrefetch pass that is defined in Scalar.

llvm-svn: 276441
2016-07-22 17:23:46 +00:00
Matt Arsenault 3c07c813c0 AMDGPU: Fix groupstaticsize for large LDS
The size can exceed s_movk_i32's limit, and we don't
want to use it this early since it inhibits optimizations.

This should probably be merged to the release branch.

llvm-svn: 276438
2016-07-22 17:01:33 +00:00
Matt Arsenault 8d718dcfda AMDGPU: Add HSA dispatch id intrinsic
llvm-svn: 276437
2016-07-22 17:01:30 +00:00
Matt Arsenault f9245b75c0 AMDGPU: Delete more dead code
Remove dead code from r600 intrinsic removal.
Remove unset members, rename StackSize to be less ambiguous.

llvm-svn: 276436
2016-07-22 17:01:25 +00:00
Matt Arsenault 7fb961f3e6 AMDGPU: Fix i1 fp_to_int
R600's i1 fp_to_uint selected but was incorrect according to
what instcombine constant folds to.

llvm-svn: 276435
2016-07-22 17:01:21 +00:00
Matt Arsenault d40ded6681 AMDGPU: Don't reinvent transferSuccessorsAndUpdatePHIs
llvm-svn: 276434
2016-07-22 17:01:15 +00:00
Krzysztof Parzyszek 047149f745 [RDF] Make the graph construction/use less expensive
- FuncNode::findBlock traverses the function every time. Avoid using it,
  and keep a cache of block addresses in DataFlowGraph instead.
- The operator[] in the map of definition stacks was very slow. Replace
  the map with unordered_map.

llvm-svn: 276429
2016-07-22 16:09:47 +00:00
Krzysztof Parzyszek d3d0a4bda3 [Hexagon] Use loop data prefetch on Hexagon
llvm-svn: 276422
2016-07-22 14:22:43 +00:00
Simon Pilgrim ea0d4f9962 [X86][AVX] Added support for lowering to VBROADCASTF128/VBROADCASTI128 (reapplied)
As reported on PR26235, we don't currently make use of the VBROADCASTF128/VBROADCASTI128 instructions (or the AVX512 equivalents) to load+splat a 128-bit vector to both lanes of a 256-bit vector.

This patch enables lowering from subvector insertion/concatenation patterns and auto-upgrades the llvm.x86.avx.vbroadcastf128.pd.256 / llvm.x86.avx.vbroadcastf128.ps.256 intrinsics to match.
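
A minimal illustrative version of the pattern in C intrinsics (a sketch, not
part of the patch; the function name is made up):

  #include <immintrin.h>

  /* Load a 128-bit vector and splat it into both lanes of a 256-bit vector;
     this insert/concat pattern can now lower to vbroadcastf128. */
  __m256 splat_128(const float *p) {
    __m128 lo = _mm_loadu_ps(p);
    return _mm256_insertf128_ps(_mm256_castps128_ps256(lo), lo, 1);
  }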

We could possibly investigate using VBROADCASTF128/VBROADCASTI128 to load repeated constants as well (similar to how we already do for scalar broadcasts).

Reapplied with fix for PR28657 - removed intrinsic definitions (clang companion patch to be be submitted shortly).

Differential Revision: https://reviews.llvm.org/D22460

llvm-svn: 276416
2016-07-22 13:58:44 +00:00
Benjamin Kramer 5ba0e20315 Revert "[X86][AVX] Added support for lowering to VBROADCASTF128/VBROADCASTI128"
It caused PR28657.

This reverts commit r276281.

llvm-svn: 276405
2016-07-22 11:03:10 +00:00
Sjoerd Meijer 5c0ef83386 This refactoring of ARM machine block size computation creates two utility
functions so that the size computation is available not only in ConstantIslands
but in other passes as well.

Differential Revision: https://reviews.llvm.org/D22640

llvm-svn: 276399
2016-07-22 08:39:12 +00:00
Hrvoje Varga 2db00ce4b6 [mips][microMIPS] Implement SLT, SLTI, SLTIU, SLTU microMIPS32r6 instructions
Differential Revision: https://reviews.llvm.org/D19906

llvm-svn: 276397
2016-07-22 07:18:33 +00:00
Craig Topper 52e2e8381b [AVX512] Add ExeDomain to vector extend and truncate instructions.
llvm-svn: 276394
2016-07-22 05:46:44 +00:00
Craig Topper f4151bea72 [AVX512] Add initial support for the Execution Domain fixing pass to change some EVEX instructions.
llvm-svn: 276393
2016-07-22 05:00:52 +00:00
Craig Topper 5ec33a9411 [AVX512] Fix the ExeDomain for some packed fp instructions.
llvm-svn: 276392
2016-07-22 05:00:42 +00:00
Craig Topper 0b90756b0a [AVX512] Add load folding for some AVX512VL logic and arithmetic instructions.
llvm-svn: 276391
2016-07-22 05:00:39 +00:00
Craig Topper ab13b33ded [AVX512] Update X86InstrInfo::foldMemoryOperandCustom to handle the EVEX encoded instructions too.
llvm-svn: 276390
2016-07-22 05:00:35 +00:00
David Majnemer 1182dd8ed9 [AArch64] Cleanup sign extend in genAlternativeCodeSequence
Use the machinery in MathExtras instead of rolling it by hand.

This fixes PR28624.

llvm-svn: 276366
2016-07-21 23:46:56 +00:00
Douglas Katzman 26cfb6a254 [Sparc]: Fix bug in LowerSTORE due to r275592
llvm-svn: 276362
2016-07-21 23:28:54 +00:00
Michael Kuperstein c523333bbf [X86] Do not use AND8ri8 in AVX512 pattern
This variant is (as documented in the TD) for disassembler use only, and should
not be used in patterns - it is longer, and is broken on 64-bit.

llvm-svn: 276347
2016-07-21 22:24:08 +00:00
Akira Hatanaka b8d2873d93 [AArch64][Inline-Asm] Return the 32-bit floating point register class
when constraint "w" is used on a 32-bit operand.

This enables compiling the following code, which used to error out in
the backend:

void foo1(int a) {
  asm volatile ("sqxtn h0, %s0\n" : : "w"(a):);
}

Fixes PR28633.

llvm-svn: 276344
2016-07-21 21:39:05 +00:00
Konstantin Zhuravlyov 3c0d8d22fe [AMDGPU] Emit read-only data to .rodata for hsa
Differential Revision: https://reviews.llvm.org/D22538

llvm-svn: 276298
2016-07-21 15:59:23 +00:00
Konstantin Zhuravlyov 155626238b AMDGPU/SI: Add support for R_AMDGPU_ABS32
Differential Revision: https://reviews.llvm.org/D21646

llvm-svn: 276294
2016-07-21 15:29:19 +00:00
Geoff Berry 4ff2e36d32 [AArch64] Load/store opt: Don't count transient instructions towards search limits.
Summary:
This change also changes findMatchingInsn and
findMatchingUpdateInsnForward to take DBG_VALUE opcodes into account
when tracking register defs and uses, which could potentially inhibit
these optimizations in the presence of debug information.

Reviewers: mcrosier

Subscribers: aemerson, rengolin, mcrosier, llvm-commits

Differential Revision: https://reviews.llvm.org/D22582

llvm-svn: 276293
2016-07-21 15:20:25 +00:00
Simon Pilgrim 88e0940d3b [X86][SSE] Allow folding of store/zext with PEXTRW of 0'th element
Under normal circumstances we prefer the higher performance MOVD to extract the 0'th element of a v8i16 vector instead of PEXTRW.

But as detailed on PR27265, this prevents the SSE41 implementation of PEXTRW from folding the store of the 0'th element. Additionally it prevents us from making use of the fact that the (SSE2) reg-reg version of PEXTRW implicitly zero-extends the i16 element to the i32/i64 destination register.

This patch only preferentially lowers to MOVD if we will not be zero-extending the extracted i16 and will not prevent a store from being folded (on SSE41).
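
An illustrative extract-and-store in C intrinsics (a sketch, not part of the
patch; the function name is made up):

  #include <emmintrin.h>

  /* Extract lane 0 of a v8i16 and store it; with SSE41 the extract can now
     fold into a "pextrw $0, %xmm0, (%rdi)" store instead of going via MOVD. */
  void store_lane0(short *p, __m128i v) {
    *p = (short)_mm_extract_epi16(v, 0);
  }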

Fix for PR27265.

Differential Revision: https://reviews.llvm.org/D22509

llvm-svn: 276289
2016-07-21 14:54:17 +00:00
Simon Pilgrim b11bdd95f6 [X86][SSE] Pull out duplicate EXTRW lowering code. NFCI.
As requested on D22509, I've pulled out the v8i16 extraction lowering as the SSE41 and pre-SSE41 implementations are effectively the same.

llvm-svn: 276285
2016-07-21 14:30:17 +00:00
Simon Pilgrim c8e20b1150 [X86][AVX] Added support for lowering to VBROADCASTF128/VBROADCASTI128
As reported on PR26235, we don't currently make use of the VBROADCASTF128/VBROADCASTI128 instructions (or the AVX512 equivalents) to load+splat a 128-bit vector to both lanes of a 256-bit vector.

This patch enables lowering from subvector insertion/concatenation patterns and auto-upgrades the llvm.x86.avx.vbroadcastf128.pd.256 / llvm.x86.avx.vbroadcastf128.ps.256 intrinsics to match.

We could possibly investigate using VBROADCASTF128/VBROADCASTI128 to load repeated constants as well (similar to how we already do for scalar broadcasts).

Differential Revision: https://reviews.llvm.org/D22460

llvm-svn: 276281
2016-07-21 14:10:54 +00:00
Sam Kolton 6c4efcadfe [AMDGPU] Some code cleaning in SIRegisterInfo.td
Reviewers: tstellarAMD, vpykhtin

Subscribers: arsenm, kzhuravl

Differential Revision: https://reviews.llvm.org/D22620

llvm-svn: 276274
2016-07-21 13:29:57 +00:00
Matt Arsenault f0ba86a4d5 AMDGPU: Fix phis from blocks split due to register indexing
llvm-svn: 276257
2016-07-21 09:40:57 +00:00
Matthias Braun ca8210a952 X86InstrInfo: No need for liveness analysis in classifyLEAReg()
classifyLEAReg() deals with switching operands from 32bit to 64bit in
order to use a LEA64_32 instruction (for three address code goodness).
It currently performs a liveness analysis to determine the kill/undef
flag for the newly added operand. This should not be necessary:

- If the previous operand had a kill flag, then the 32bit part of the
  register gets killed, this will kill the super register as well.
- If the previous operand had an undef flag then we didn't care what
  value we read, just use the same flag on the new operand.
  (No matter what an operand with an undef flag won't affect liveness)

This makes the code independent of the presence of kill flags because it
avoids a call to MachineBasicBlock::computeRegisterLiveness().

Differential Revision: http://reviews.llvm.org/D22283

llvm-svn: 276222
2016-07-21 00:33:38 +00:00
Justin Lebar cd564c6b46 [NVPTX] Enable the load-store vectorizer on nvptx.
Reviewers: tra

Subscribers: jholewinski, arsenm, asbirlea

Differential Revision: https://reviews.llvm.org/D22592

llvm-svn: 276196
2016-07-20 22:11:36 +00:00
Geoff Berry 24c81e8d7c [AArch64] Register AArch64LoadStoreOptimizer so it can be run by llc -run-pass. NFCI.
llvm-svn: 276193
2016-07-20 21:45:58 +00:00
Artem Belevich 7e9c9a6582 [NVPTX] Renamed NVPTXLowerKernelArgs -> NVPTXLowerArgs. NFC.
After r276153 the pass applies to both kernels and regular functions.

Differential Revision: https://reviews.llvm.org/D22583

llvm-svn: 276189
2016-07-20 21:44:07 +00:00
Ahmed Bougacha a0cdd79070 [AArch64][FastISel] Select -O0 legal cmpxchg.
At -O0, cmpxchg survives AtomicExpand: it's mostly straightforward
to select it in fast-isel, and let the pseudo be expanded later.

extractvalues on the result are the tricky part: the generic logic
only works for legal types (and it would be painful to make it
support illegal types), so we can only support i32/i64 cmpxchg.
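
A small C compare-and-swap of the kind that now stays in fast-isel at -O0
(a sketch, not part of the patch; the function name is made up):

  /* A 32-bit cmpxchg whose success flag and old value are both used;
     at -O0 this can now be selected in fast-isel as the pseudo. */
  int cas32(int *p, int expected, int desired) {
    int ok = __atomic_compare_exchange_n(p, &expected, desired, /*weak=*/0,
                                         __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
    return ok ? 0 : expected;
  }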

llvm-svn: 276183
2016-07-20 21:12:32 +00:00
Ahmed Bougacha b0674d1143 [AArch64][FastISel] Select atomic stores into STLR.
llvm-svn: 276182
2016-07-20 21:12:27 +00:00
Tim Northover 62ae568bbb GlobalISel: implement low-level type with just size & vector lanes.
This should be all the low-level instruction selection needs to determine how
to implement an operation, with the remaining context taken from the opcode
(e.g. G_ADD vs G_FADD) or other flags not based on type (e.g. fast-math).

llvm-svn: 276158
2016-07-20 19:09:30 +00:00
Artem Belevich 74158b5061 [NVPTX] deal with all aggregate return types.
Fixes a crash in llvm_unreachable when a function has array return type.

Differential Revision: https://reviews.llvm.org/D22524

llvm-svn: 276154
2016-07-20 18:39:52 +00:00
Artem Belevich b2e76a5e7a [NVPTX] Improve lowering of byval args of device functions.
Avoid unnecessary spills of byval arguments of device functions to
local space on SASS level and subsequent pointer conversion to generic
address space that follows. Instead, make a local copy in IR, provide
a way to access arguments directly, and let LLVM optimize the copy away
when possible.

Differential Revision: https://reviews.llvm.org/D21421

llvm-svn: 276153
2016-07-20 18:39:47 +00:00
Yaxun Liu 4b1d9f7f18 AMDGPU: Fix bug causing crash due to invalid opencl version metadata.
Differential Revision: https://reviews.llvm.org/D22526

llvm-svn: 276119
2016-07-20 14:38:06 +00:00
Simon Pilgrim 1b4f511aaa [X86][SSE] Add cost model values for CTPOP of vectors
This patch adds costs for the vectorized implementations of CTPOP; the default values were seriously underestimating the cost of these and were encouraging vectorization on targets where serialized use of POPCNT would be much better.

Differential Revision: https://reviews.llvm.org/D22456

llvm-svn: 276104
2016-07-20 10:41:28 +00:00
Diana Picus f345d40ae2 [ARM] Skip inline asm memory operands in DAGToDAGISel
Retry r275776 (no changes, we suspect the issue was with another commit).

The current logic for handling inline asm operands in DAGToDAGISel interprets
the operands by looking for constants, which should represent the flags
describing the kind of operand we're dealing with (immediate, memory, register
def etc). The operands representing actual data are skipped only if they are
non-const, with the exception of immediate operands which are skipped explicitly
when a flag describing an immediate is found.

The oversight is that memory operands may be const too (e.g. for device drivers
reading a fixed address), so we should explicitly skip the operand following a
flag describing a memory operand. If we don't, we risk interpreting that
constant as a flag, which is definitely not intended.
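
An illustrative driver-style read in C (a sketch, not part of the patch; the
address and function name are made up):

  /* The "m" operand below is fed a constant address; the operand that
     follows the memory-operand flag is now skipped rather than misread
     as another flag. */
  int read_fixed(void) {
    int v;
    __asm__ volatile("ldr %0, %1"
                     : "=r"(v)
                     : "m"(*(volatile int *)0x40021000));
    return v;
  }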

Fixes PR26038

Differential Revision: https://reviews.llvm.org/D22103

llvm-svn: 276101
2016-07-20 09:48:24 +00:00
Craig Topper 09b99c3a75 [AVX512] Add a missing NoVLX to give priority to the AVX512 version of the pattern.
llvm-svn: 276088
2016-07-20 05:05:50 +00:00
Craig Topper 7a95428f7c [X86] Use 'HasAVX1Only' to properly give priority to the AVX2 version without relying on file ordering.
llvm-svn: 276087
2016-07-20 05:05:48 +00:00
Craig Topper cde436a663 [X86] Create some multiclasses to reduce the repeated patterns for VEXTRACT(F/I)128/VINSERT(I/F)128. NFC
llvm-svn: 276086
2016-07-20 05:05:46 +00:00
Craig Topper 55913ead3d [X86] Create some wrapper multiclasses to create AVX and SSE shift instructions with less repeated code. NFC
llvm-svn: 276085
2016-07-20 05:05:44 +00:00
David Majnemer 5d26127752 Revert "Disable this-return argument forwarding on ARM/AArch64"
Inference of the 'returned' attribute was fixed in r276008, let's try
turning the backend support back on.

This reverts commit r275677.

llvm-svn: 276081
2016-07-20 04:13:01 +00:00
Matt Arsenault a1fe17c9ad AMDGPU: Change fdiv lowering based on !fpmath metadata
If 2.5 ulp is acceptable, denormals are not required, and the
operation isn't a reciprocal (which will already be handled), replace
it with a faster fdiv.

Simplify the lowering tests by using per function
subtarget features.

llvm-svn: 276051
2016-07-19 23:16:53 +00:00
Evandro Menezes 238fa76574 [AArch64] Properly validate the reciprocal estimation.
Add check for legal data types when expanding into a Newton series.

Differential Revision: https://reviews.llvm.org/D22267

llvm-svn: 276041
2016-07-19 22:31:11 +00:00
Davide Italiano 63e5968033 [AMDGPU] Remove spurious line (should've been removed in r276029).
llvm-svn: 276030
2016-07-19 21:16:30 +00:00
Davide Italiano 1576e38598 [AMDGPU] Remove dead code.
LGTM'd by Matt Arsenault.

llvm-svn: 276029
2016-07-19 21:10:49 +00:00
Tim Northover 554fbd05e8 ARM: move feature for Thumb2 pkhbt/pkhtb onto architectures.
There's not much functional change, but it really is an architectural feature
(on v6T2, v7A, v7R and v7EM) rather than something each CPU implements
individually.

The main functional change is the default behaviour you get when specifying
only "-triple".

llvm-svn: 276013
2016-07-19 19:49:13 +00:00
Matt Arsenault 03006fd3c4 AMDGPU: Only use legal inline immediates with kill pseudo
Only whether the value is negative or positive matters, so use
a constant that doesn't require an instruction to
materialize.

These should really just emit the write exec directly,
but for now stick with the kill pseudo-terminator.

llvm-svn: 275988
2016-07-19 16:27:56 +00:00
Simon Pilgrim 0ea8d275cc [X86][SSE] Reimplement SSE fp2si conversion intrinsics instead of using generic IR
D20859 and D20860 attempted to replace the SSE (V)CVTTPS2DQ and VCVTTPD2DQ truncating conversions with generic IR instead.

It turns out that the behaviour of these intrinsics is different enough from generic IR that this will cause problems: INF/NAN/out-of-range values are guaranteed to result in a 0x80000000 value, which plays havoc with constant folding that converts them to either zero or UNDEF. This is also an issue with the scalar implementations (which were already generic IR and what I was trying to match).
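
An illustrative C snippet of the affected conversions (a sketch, not part of
the patch; the function name is made up):

  #include <emmintrin.h>
  #include <math.h>

  /* (v)cvttps2dq: the NaN, -inf and out-of-range lanes all produce the
     "integer indefinite" value 0x80000000 in hardware, which generic IR
     constant folding would instead turn into zero or undef. */
  __m128i trunc_lanes(void) {
    __m128 v = _mm_set_ps(1e30f, -INFINITY, NAN, 42.5f);
    return _mm_cvttps_epi32(v);
  }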

This patch changes both scalar and packed versions back to using x86-specific builtins.

It also deals with the other scalar conversion cases that are runtime rounding mode dependent and can have similar issues with constant folding.

A companion clang patch is at D22105

Differential Revision: https://reviews.llvm.org/D22106

llvm-svn: 275981
2016-07-19 15:07:43 +00:00
Sam Parker 6ca4bbb00d [ARM] Refactor Thumb2 Mul and Mla instr descs
Recommitting after r274347 was reverted. This patch introduces some
classes to refactor the 3 and 4 register Thumb2 multiplication
instruction descriptions, plus improved tests for some of those
instructions.

Differential Revision: https://reviews.llvm.org/D21929

llvm-svn: 275979
2016-07-19 14:44:05 +00:00
Pankaj Gode 1bfca191da [AArch64] PredictableSelectIsExpensive for Vulcan.
Adding PredictableSelectIsExpensive for Vulcan

Differential Revision: https://reviews.llvm.org/D22448

llvm-svn: 275978
2016-07-19 14:30:21 +00:00
Peter Smith cbcecca538 Add support for tlsldm assembler operator to ARM target
The standard local dynamic model for TLS on ARM systems needs two 
relocations:
- R_ARM_TLS_LDM32 (module idx)
- R_ARM_TLS_LDO32 (offset of object from origin of module TLS block)
    
In GNU style assembler we use symbol(tlsldm) and symbol(tlsldo) to
produce these relocations.
    
llvm-mc for ARM supports symbol(tlsldo) but does not support symbol(tlsldm).
This patch wires up the existing symbol(tlsldm) to R_ARM_TLS_LDM32.
    
TLS for ARM is defined in Addenda to, and Errata in, the ABI for the
ARM Architecture
    
Differential Revision: https://reviews.llvm.org/D22461

llvm-svn: 275977
2016-07-19 14:15:33 +00:00
Daniel Sanders 3878412875 [mips][ias] R_MIPS_GOT_(PAGE|OFST) do not need symbols
Reviewers: sdardis

Subscribers: dsanders, llvm-commits, sdardis

Differential Revision: https://reviews.llvm.org/D22458

llvm-svn: 275968
2016-07-19 10:58:06 +00:00
Daniel Sanders 6a73883c48 [mips] Correct label prefixes for N32 and N64.
Summary:
N32 and N64 follow the standard ELF conventions (.L) whereas O32 uses its own
($).

This fixes the majority of object differences between -fintegrated-as and
-fno-integrated-as.

Reviewers: sdardis

Subscribers: dsanders, sdardis, llvm-commits

Differential Revision: https://reviews.llvm.org/D22412

llvm-svn: 275967
2016-07-19 10:49:03 +00:00
Elena Demikhovsky 2c0780b8e5 AVX-512: Fixed BT instruction selection.
The condition expression (a >> n) & 1 is converted to a "bt a, n" instruction. It works on all Intel targets.
But on AVX-512 it was broken because the expression is modified to (truncate (a >> n) to i1).

I added the new sequence (truncate (a >> n) to i1) to the BT pattern.
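
The idiom in question, as a small C function (a sketch, not part of the patch;
the function name is made up):

  /* The (a >> n) & 1 test that is converted to a "bt a, n" instruction. */
  int bit_is_set(unsigned a, unsigned n) {
    return (a >> n) & 1;
  }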

Differential Revision: https://reviews.llvm.org/D22354

llvm-svn: 275950
2016-07-19 07:14:21 +00:00
Craig Topper d6ca1dc45e [AVX512] Give priority to EVEX encoded PSHUFB over the VEX versions.
llvm-svn: 275942
2016-07-19 02:00:38 +00:00
Craig Topper 592dc30708 [X86] Remove superfluous parameter from a multiclass. All instantiations passed the same value.
llvm-svn: 275941
2016-07-19 02:00:35 +00:00
Craig Topper 6189d3ecd4 [X86] Rename VINSERTzrr to use a capital Z to match other instructions. NFC
llvm-svn: 275939
2016-07-19 01:26:19 +00:00
Matt Arsenault fe358066ea AMDGPU/SI: Fix SI scheduler refcount issue
Without this fix, releaseSuccessors when InOrOutBlock is
false could release SUs outside the schedule BasicBlock.

Patch by Axel Davy

llvm-svn: 275935
2016-07-19 00:35:22 +00:00
Matt Arsenault cb540bc03c AMDGPU: Expand register indexing pseudos in custom inserter
This is to help moveSILowerControlFlow to before regalloc.
There are a couple of tradeoffs with this. The complete CFG
is visible to more passes, the loop body avoids an extra copy of m0,
vcc isn't required, and immediate offsets can be shrunk into s_movk_i32.

The disadvantage is the register allocator doesn't understand that
the single lane's vector is dead within the loop body, so an extra
register is used to outlive the loop block when expanding the
VGPR -> m0 loop. This also now results in worse waitcnt insertion
before the loop instead of after for pending operations at the point
of the indexing, but that should be fixed by future improvements to
cross block waitcnt insertion.

v_movreld_b32's operands are now modeled more correctly since vdst
is not a true output. This is kind of a hack to treat vdst as a
use operand. Extra checking is required in the verifier since
I can't seem to get tablegen to emit an implicit operand for a
virtual register.

llvm-svn: 275934
2016-07-19 00:35:03 +00:00
Artem Belevich 9f97dcb018 [NVPTX] Make sure we adjust alignment at all call sites
.. including calls from kernel functions that were
ignored by mistake before.

llvm-svn: 275920
2016-07-18 21:58:48 +00:00
Artem Belevich 052b1ed2fd [NVPTX] Force minimum alignment of 4 for byval arguments of device-side functions.
Taking the address of a byval variable in PTX is legal, but currently runs
into miscompilation by ptxas on sm_50+ (NVIDIA issue 1789042).
Work around the issue by enforcing minimum alignment on byval arguments
of device functions.

The change is a no-op on SASS level for sm_3x where ptxas already aligns
local copy by at least 4.

Differential Revision: https://reviews.llvm.org/D22428

llvm-svn: 275893
2016-07-18 19:54:56 +00:00
Vitaly Buka c93e10fcbb Revert "[ARM] Skip inline asm memory operands in DAGToDAGISel"
Breaks asan, see https://reviews.llvm.org/D22103

This reverts commit r275776.

llvm-svn: 275890
2016-07-18 19:44:01 +00:00
Matt Arsenault 210b7cf3e2 AMDGPU: Remove pointless dyn_cast_or_null
This is already cast above, so it is non-null.

llvm-svn: 275881
2016-07-18 19:00:07 +00:00
Matt Arsenault b51dcb97bb AMDGPU: Fix missing switch case warning
llvm-svn: 275873
2016-07-18 18:40:51 +00:00
Matt Arsenault c96e1deffa AMDGPU: Add intrinsic for s_flbit_i32/v_ffbh_i32
llvm-svn: 275871
2016-07-18 18:35:05 +00:00
Matt Arsenault 4c519d3518 AMDGPU/R600: Replace barrier intrinsics
llvm-svn: 275870
2016-07-18 18:34:59 +00:00
Matt Arsenault efb24540b1 AMDGPU: Remove dead check in AMDGPUPromoteAlloca
This is currently only called with GEP users. A direct
alloca would only happen with current typed pointers
for arrays, which is a perverse case.

Also fix crashes on 0 x and 1 x arrays.

llvm-svn: 275869
2016-07-18 18:34:53 +00:00
Matt Arsenault 2e08e181a7 AMDGPU: Remove dead code and redundant check
Non-intrinsic calls aren't really handled, and this
IntrinsicInst dyn_cast checks for the function for us.

llvm-svn: 275868
2016-07-18 18:34:48 +00:00
Krzysztof Parzyszek 14412ef07a [Hexagon] Handle returning small structures by value
This is not compliant with the official ABI, but allows experimentation
with calling conventions.

llvm-svn: 275825
2016-07-18 17:36:46 +00:00
Krzysztof Parzyszek 4661a958d8 [Hexagon] Revert r275822: mistake in commit message
llvm-svn: 275824
2016-07-18 17:34:49 +00:00
Simon Pilgrim c941f6b329 [X86][AVX] Add target shuffle decode support for VBROADCAST
Currently we only decode broadcasts from a vector of the same size.

llvm-svn: 275823
2016-07-18 17:32:59 +00:00
Krzysztof Parzyszek 5948ea78b9 [Hexagon] Handle returning small structures by value
This is compliant with the official ABI, but allows experimentation with
calling conventions.

llvm-svn: 275822
2016-07-18 17:30:41 +00:00
Krzysztof Parzyszek 2be7eadba3 [Hexagon] Misc changes to HexagonMachineScheduler, NFC
- Remove duplicated code.
- Convert loop to range-for.

llvm-svn: 275806
2016-07-18 16:15:15 +00:00
Krzysztof Parzyszek 786333ffcc [Hexagon] Enable .cur formation in MISched for Hexagon V60
Schedule a load and its use in the same packet in MISched. Previously,
isResourceAvailable was returning false for dependences in the same
packet, which prevented MISched from packetizing a load and its use in
the same packet for v60.

Patch by Ikhlas Ajbar.

llvm-svn: 275804
2016-07-18 16:05:27 +00:00
Krzysztof Parzyszek f05dc4d5dd [Hexagon] Add verbose debugging mode to Hexagon MI Scheduler
Patch by Sergei Larin.

llvm-svn: 275799
2016-07-18 15:47:25 +00:00
Nemanja Ivanovic d3c284f645 [PowerPC] Remove redundant direct moves when extracting integers and converting to FP
This patch corresponds to review:
https://reviews.llvm.org/D21354

We use direct moves for extracting integer elements from vectors. We also use
direct moves when converting integers to FP. When these operations are chained,
we get a direct move out of a VSR followed by a direct move back into a VSR.
These are redundant - all we need to do is line up the element and convert.

llvm-svn: 275796
2016-07-18 15:30:00 +00:00
Krzysztof Parzyszek 393b37937b [Hexagon] Use timing class info as tie-breaker in machine scheduler
Patch by Sirish Pande.

llvm-svn: 275794
2016-07-18 15:17:10 +00:00
Krzysztof Parzyszek 3467e9d0a9 [Hexagon] HexagonMachineScheduler should account for resources
The machine scheduler needs to account for available resources
more accurately in order to avoid scheduling an instruction that
forces a new packet to be created.

This occurs in two ways: First, an instruction without an available
resource may have a large priority due to other metrics and be
scheduled when there are other instructions with available resources.
Second, an instruction with a non-zero latency may become available
prematurely. In both these cases, we attempt to change the priority
in order to allow a better instruction to be scheduled.

Patch by Brendon Cahoon.

llvm-svn: 275793
2016-07-18 14:52:13 +00:00
Krzysztof Parzyszek 748d3efec6 [Hexagon] Fix zero latency instructions with multiple predecessors
An instruction may have multiple predecessors that are candidates
for using .cur. However, only one of them can use .cur in the
packet. When this case occurs, we need to make sure that only
one of the dependences gets a 0 latency value.

Patch by Brendon Cahoon.

llvm-svn: 275790
2016-07-18 14:23:10 +00:00
Simon Dardis d32a2d30cb [inlineasm] Propagate operand constraints to the backend
When SelectionDAGISel transforms a node representing an inline asm
block, memory constraint information is not preserved. This can cause
constraints to be broken when a memory offset is of the form:

offset + frame index

when the frame is resolved.

By propagating the constraints all the way to the backend, targets can
enforce memory operands of inline assembly to conform to their constraints.

For MIPSR6, some instructions had their offsets reduced to 9 bits from
16 bits such as ll/sc. This becomes problematic when using inline assembly
to perform atomic operations, as an offset can be generated that is too big to
encode in the instruction.

Reviewers: dsanders, vkalintris

Differential Revision: https://reviews.llvm.org/D21615

llvm-svn: 275786
2016-07-18 13:17:31 +00:00
Nicolai Haehnle bef1ceb815 AMDGPU: Disable AMDGPUPromoteAlloca pass for shader calling conventions.
Summary:
The work item intrinsics are not available for the shader
calling conventions. And even if we did hook them up, most
shader stages have some extra restrictions on the amount
of available LDS.

Reviewers: tstellarAMD, arsenm

Subscribers: nhaehnle, arsenm, llvm-commits, kzhuravl

Differential Revision: https://reviews.llvm.org/D20728

llvm-svn: 275779
2016-07-18 09:02:47 +00:00
Diana Picus 73ed44d328 [ARM] Skip inline asm memory operands in DAGToDAGISel
The current logic for handling inline asm operands in DAGToDAGISel interprets
the operands by looking for constants, which should represent the flags
describing the kind of operand we're dealing with (immediate, memory, register
def etc). The operands representing actual data are skipped only if they are
non-const, with the exception of immediate operands which are skipped explicitly
when a flag describing an immediate is found.

The oversight is that memory operands may be const too (e.g. for device drivers
reading a fixed address), so we should explicitly skip the operand following a
flag describing a memory operand. If we don't, we risk interpreting that
constant as a flag, which is definitely not intended.

Fixes PR26038

Differential Revision: https://reviews.llvm.org/D22103

llvm-svn: 275776
2016-07-18 07:35:14 +00:00
Craig Topper a3c55f5915 [AVX512] Add EVEX versions of scalar ADD/SUB/MUL/DIV to load folding tables.
llvm-svn: 275775
2016-07-18 06:49:32 +00:00
Diana Picus 774d157a5d [ARM] Honour ABI for rem under -O0 for EABI, GNUEABI, Android and Musl
At higher optimization levels, we generate the libcall for DIVREM_Ix, which is
fine: aeabi_{u|i}divmod. At -O0 we generate the one for REM_Ix, which is the
default {u}mod{q|h|s|d}i3.

This commit makes sure that we don't generate REM_Ix calls for ABIs that
don't support them (i.e. where we need to use DIVREM_Ix instead). This is
achieved by bailing out of FastISel, which can't handle non-double multi-reg
returns, and letting the legalization infrastructure expand the REM_Ix calls.
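
An illustrative C function affected by this (a sketch, not part of the patch;
the function name is made up):

  /* On the listed ABIs, and without hardware divide, this now ends up as a
     call to __aeabi_idivmod at -O0 as well, rather than the __modsi3 libcall
     those ABIs do not provide. */
  int rem32(int a, int b) {
    return a % b;
  }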

It also updates the divmod-eabi.ll test to run under -O0 as well, and adds some
Windows checks to it to make sure we don't break things for it.

Fixes PR27068

Differential Revision: https://reviews.llvm.org/D21926

llvm-svn: 275773
2016-07-18 06:48:25 +00:00
Craig Topper 16a0744955 [AVX512] Add KADD/KAND/KOR/KXOR to X86InstrInfo::isAssociativeAndCommutative.
llvm-svn: 275771
2016-07-18 06:14:59 +00:00
Craig Topper 463f949a3a [X86] Add VPMULLW/D/Q instructions to X86InstrInfo::isAssociativeAndCommutative.
llvm-svn: 275770
2016-07-18 06:14:57 +00:00
Craig Topper 1af6cc00dc [X86] Add VPADD instructions to X86InstrInfo::isAssociativeAndCommutative.
llvm-svn: 275769
2016-07-18 06:14:54 +00:00
Craig Topper ba9b93d7f2 [X86] Add floating point packed logical ops to X86InstrInfo::isAssociativeAndCommutative.
llvm-svn: 275768
2016-07-18 06:14:50 +00:00
Craig Topper 3a99de4067 [X86] Add AVX512 instructions to X86InstrInfo::isAssociativeAndCommutative.
llvm-svn: 275767
2016-07-18 06:14:47 +00:00
Craig Topper fe5a6dc581 [X86] Add more AVX512 instructions to X86InstrInfo::isHighLatencyDef. Also add all packed fp division instructions.
llvm-svn: 275766
2016-07-18 06:14:45 +00:00
Craig Topper f7a06c29bc [X86] Add AVX512 load opcodes and a couple AVX load opcodes to X86InstrInfo::areLoadsFromSameBasePtr.
llvm-svn: 275765
2016-07-18 06:14:43 +00:00
Craig Topper 650a15e2b3 [X86] Add more opcodes to isFrameLoadOpcode/isFrameStoreOpcode. Mainly AVX-512 related.
llvm-svn: 275764
2016-07-18 06:14:39 +00:00
Craig Topper 5c913e84df [AVX512] Use VMOVAPSZ128rr/VMOVAPS256rr for VR128X/VR256X physreg moves when VLX is supported.
Ideally we would use VEX encoded moves instead of EVEX if the high 16 registers aren't referenced, but this is a good first step.

llvm-svn: 275763
2016-07-18 06:14:34 +00:00
Craig Topper 53f3d1b4d0 [X86] Fix 80-column violations. NFC
llvm-svn: 275762
2016-07-18 06:14:26 +00:00
Simon Pilgrim 285d9e4d60 Strip trailing whitespace
llvm-svn: 275726
2016-07-17 19:02:27 +00:00
Simon Pilgrim 1be1222293 [X86][SSE] lowerVectorShuffleAsPermuteAndUnpack tidyup. NFCI.
Moved unpack type determination into TryUnpack lambda.

Added missing comment describing lowerVectorShuffleAsPermuteAndUnpack call.

llvm-svn: 275708
2016-07-17 15:48:25 +00:00
Guy Blank 3357ba36e2 test commit
llvm-svn: 275703
2016-07-17 12:10:35 +00:00
Hal Finkel 04b5330ccd Disable this-return argument forwarding on ARM/AArch64
r275042 reverted function-attribute inference for the 'returned' attribute
because the feature triggered self-hosting failures on ARM and AArch64. James
Molloy determined that the this-return argument forwarding feature, which
directly ties the returned input argument to the returned value, was the cause.
It seems likely that this forwarding code contains, or triggers, a subtle bug.
Disabling for now until we can track that down.

llvm-svn: 275677
2016-07-16 07:07:29 +00:00
Yaxun Liu a711cc7951 Re-commit [AMDGPU] Add metadata for runtime
Attempting to fix lit test failure on ppc.

llvm-svn: 275676
2016-07-16 05:09:21 +00:00
Craig Topper 8093437f2e [AVX512] Remove CodeGenOnly VBROADCAST m_Int instructions. They can be implemented with patterns selecting existing instructions. NFC
llvm-svn: 275671
2016-07-16 03:42:59 +00:00
Matthias Braun 8f456fb18f ARM: Initialize LoadStore passes in TargetMachine
Initializing them in LLVMInitializeARMTarget() makes them visible early
enough for "llc -run-pass usage".

This required the pass to be renamed from "arm-load-store-opt" to
"arm-ldst-opt", because there already exists an arm-load-store-opt
cl::opt switch which would now clash with the passname getting added as
a switch in opt. On the bright side the pass name now matches the
DEBUG_TYPE name. Renamed "arm-prera-load-store-opt" to
"arm-repra-ldst-opt" as well for consistency.

llvm-svn: 275661
2016-07-16 02:24:10 +00:00
Duncan P. N. Exon Smith 670900bb9e Reapply "Mips: Avoid implicit iterator conversions, NFC"
This reverts commit r275562, effectively reapplying r275141.  Doug
Gilmore reported that there was an error when bisecting the Mips
buildbot failure, and that r275141 was not to blame after all.  Here is
the green build:
https://dmz-portal.mips.com/bb/builders/LLVM%20with%20integrated%20assembler%20and%20fPIC%20and%20-O0/builds/803

llvm-svn: 275643
2016-07-15 23:09:47 +00:00
Junmo Park 45513a8eca Minor code cleanups. NFC.
llvm-svn: 275637
2016-07-15 22:42:52 +00:00
Jacques Pienaar e2f0699d64 [lanai] Small cleanup: remove/comment out unused args
llvm-svn: 275636
2016-07-15 22:38:32 +00:00
Matt Arsenault 73d2f8954a AMDGPU: Fix verifier error from partially undef copy
In this situation:

%VGPR2<def> = BUFFER_LOAD_DWORD_OFFSET %SGPR8_SGPR9_SGPR10_SGPR11,
%VGPR7<def,tied3> = V_MAC_F32_e32 %VGPR0<undef>, %VGPR1<kill>, %VGPR7<kill,tied0>, %EXEC<imp-use>
%VGPR3_VGPR4_VGPR5_VGPR6<def> = COPY %VGPR0_VGPR1_VGPR2_VGPR3
%VGPR4<def> = COPY %VGPR2

The copy for VGPR1 -> VGPR4 was an error from reading undefined VGPR1,
but VGPR4 is defined immediately after this copy.

llvm-svn: 275635
2016-07-15 22:32:02 +00:00
Alexei Starovoitov cfb51f54ba BPF: Use official ELF e_machine value
The same value for EM_BPF is being propagated to glibc,
elfutils, and binutils.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
llvm-svn: 275633
2016-07-15 22:27:55 +00:00
Jacques Pienaar e650319772 [lanai] Fix build by updating calls to getLoad & getStore.
rL275592 removed the boolean parameters of SelectionDAG::getLoad and getStore, updating Lanai backend's calls to these functions.

llvm-svn: 275631
2016-07-15 22:18:33 +00:00
Krzysztof Parzyszek 408e300933 [Hexagon] Handle instruction latency for 0 or 2 cycles
The Hexagon schedulers need to handle instructions with a latency
of 0 or 2 more accurately. The problem, in v60, is that a dependence
between two instructions with a 2 cycle latency can use a .cur version
of the source to achieve a 0 cycle latency when the use is in the
same packet. Any other use must be at least 2 packets later, or a
stall occurs. In other words, the compiler does not want to schedule
the dependent instructions 1 cycle later.

To achieve this, the latency adjustment code allows only a single
dependence to have a zero latency. All other instructions have the
other value, which is typically 2 cycles. We use a heuristic to
determine which instruction gets the 0 latency.

The Hexagon machine scheduler was also changed to increase the cost
associated with 0 latency dependences than can be scheduled in the
same packet.

Patch by Brendon Cahoon.

llvm-svn: 275625
2016-07-15 21:34:02 +00:00
Matt Arsenault a65e6b8335 AMDGPU: Remove brev intrinsic
llvm-svn: 275620
2016-07-15 21:27:13 +00:00
Matt Arsenault 82e5e1e564 AMDGPU: Fix TargetPrefix for remaining r600 intrinsics
llvm-svn: 275619
2016-07-15 21:27:08 +00:00
Matt Arsenault 11d3e21f2b AMDGPU: Remove AMDGPU.ldexp
llvm-svn: 275618
2016-07-15 21:26:56 +00:00
Matt Arsenault 09b2c4aee8 AMDGPU: Remove legacy rsq.clamped intrinsic
Mesa still has a use of llvm.AMDGPU.rsq.f64 remaining.

Also fix mismatch with non-IEEE rsq selecting to IEEE rsq.

llvm-svn: 275617
2016-07-15 21:26:52 +00:00
Matt Arsenault d7f4414543 AMDGPU/R600: Delete dead code.
Dead or the same as the base implementation.

llvm-svn: 275616
2016-07-15 21:26:46 +00:00
Nico Weber 8d66df15f4 Teach fast isel about the win64 calling convention.
This mostly just works.

Vectorcall rets are still not supported.

The win64_eh test change is because fast isel doesn't use rsi for temporary
computations, so it doesn't need to be pushed. The test case I'm changing was
originally added to test pushes, but by now there are other test cases in that
file exercising that code path.

https://reviews.llvm.org/D22422

llvm-svn: 275607
2016-07-15 20:18:37 +00:00
Krzysztof Parzyszek 6c715e1483 [Hexagon] Make MI scheduler check for stalls in previous packet on v60
Patch by Ikhlas Ajbar.

llvm-svn: 275606
2016-07-15 20:16:03 +00:00
Nemanja Ivanovic 62fba48e0f [PowerPC] Set kill flag for scratch register when spilling the link register
This fixes PR 28526.

llvm-svn: 275603
2016-07-15 19:56:32 +00:00
Derek Schuff 1a946e412c Fix calls to SelectionDAG::getStore
It was refactored in r275592. NFC

llvm-svn: 275601
2016-07-15 19:35:43 +00:00
Vitaly Buka 7f64844481 Revert "[AMDGPU] Add metadata for runtime"
This reverts commit r275566.

llvm-svn: 275599
2016-07-15 19:14:57 +00:00
Krzysztof Parzyszek 36b0f93294 [Hexagon] Replace postprocessDAG with a more elaborate DAG mutation
llvm-svn: 275598
2016-07-15 19:09:37 +00:00
Justin Lebar 9c375817ac [SelectionDAG] Get rid of bool parameters in SelectionDAG::getLoad, getStore, and friends.
Summary:
Instead, we take a single flags arg (a bitset).

Also add a default 0 alignment, and change the order of arguments so the
alignment comes before the flags.

This greatly simplifies many callsites, and fixes a bug in
AMDGPUISelLowering, wherein the order of the args to getLoad was
inverted.  It also greatly simplifies the process of adding another flag
to getLoad.

Reviewers: chandlerc, tstellarAMD

Subscribers: jholewinski, arsenm, jyknight, dsanders, nemanjai, llvm-commits

Differential Revision: http://reviews.llvm.org/D22249

llvm-svn: 275592
2016-07-15 18:27:10 +00:00
Justin Lebar 0af80cd6f0 [CodeGen] Take a MachineMemOperand::Flags in MachineFunction::getMachineMemOperand.
Summary:
Previously we took an unsigned.

Hooray for type-safety.

Reviewers: chandlerc

Subscribers: dsanders, llvm-commits

Differential Revision: http://reviews.llvm.org/D22282

llvm-svn: 275591
2016-07-15 18:26:59 +00:00
Krzysztof Parzyszek 9be66737c1 [Hexagon] Add a scheduling DAG mutation
- Remove output dependencies on USR_OVF register.
- Update chain edge latencies between v60 vector loads/stores.

llvm-svn: 275586
2016-07-15 17:48:09 +00:00
Krzysztof Parzyszek 771c34513a [Hexagon] Update instruction itineraries
llvm-svn: 275578
2016-07-15 16:58:34 +00:00
Krzysztof Parzyszek f24f468e6d [Hexagon] Fixes/changes to instruction selection
- Add patterns for rr/abs addressing modes.
- Set addrMode to PostInc where necessary.
- Misc fixes.

llvm-svn: 275574
2016-07-15 16:29:02 +00:00
Krzysztof Parzyszek bba0bf7d37 [Hexagon] Improve patterns with stack-based addressing
- Treat bitwise OR with a frame index as an ADD wherever possible, fold it
  into addressing mode.
- Extend patterns for memops to allow memops with frame indexes as address
  operands.

llvm-svn: 275569
2016-07-15 15:35:52 +00:00
Yaxun Liu b3d17690eb [AMDGPU] Add metadata for runtime
Added emission of metadata to the ELF for the runtime.

The runtime requires certain information (metadata) about kernels to be able to execute and query them. Such information is emitted to an ELF section as a key-value pair stream.

Differential Revision: https://reviews.llvm.org/D21849

llvm-svn: 275566
2016-07-15 14:58:21 +00:00
Jacques Pienaar 71c30a14b7 Rename AnalyzeBranch* to analyzeBranch*.
Summary: NFC. Rename AnalyzeBranch/AnalyzeBranchPredicate to analyzeBranch/analyzeBranchPredicate to follow LLVM coding style and be consistent with TargetInstrInfo's analyzeCompare and analyzeSelect.

Reviewers: tstellarAMD, mcrosier

Subscribers: mcrosier, jholewinski, jfb, arsenm, dschuff, jyknight, dsanders, nemanjai

Differential Revision: https://reviews.llvm.org/D22409

llvm-svn: 275564
2016-07-15 14:41:04 +00:00
Daniel Sanders db5e666304 Revert r275141 - Mips: Avoid implicit iterator conversions, NFC
It appears to have caused some failures in our buildbots.

llvm-svn: 275562
2016-07-15 13:54:20 +00:00
Simon Pilgrim 2683ad54ad [X86][AVX2] Improve lowerShuffleAsRepeatedMaskAndLanePermute permutation of 64-bit sub-lanes
As discussed on PR28136, lowerShuffleAsRepeatedMaskAndLanePermute was attempting to match repeated masks at the 128-bit level and then permute the resultant lanes at the 128-bit (AVX1) or 64-bit (AVX2) sub-lane level.

This change allows us to create the repeated masks at the sub-lane level (and then concat them together to create a 128-bit repeated mask) and then select which sub-lane to permute. This has no effect on the AVX1 codegen.

Fixes PR28136.

llvm-svn: 275543
2016-07-15 09:49:12 +00:00
James Molloy 8f16dffbb1 [ARM] Fix build after r275540
A rebase seemed so innocent before committing. Turns out someone changed a pointer to a reference in the meantime :(

llvm-svn: 275541
2016-07-15 08:12:44 +00:00
James Molloy b3326df56a [Thumb-1] Select post-increment load and store where possible
Thumb-1 doesn't have post-inc or pre-inc load or store instructions. However the LDM/STM instructions with writeback can function as post-inc load/store:

  ldm r0!, {r1}  @ load from r0 into r1 and increment r0 by 4

Obviously, this only works if the post increment is 4.

llvm-svn: 275540
2016-07-15 08:03:56 +00:00
James Molloy 2af08fa051 [ARM] Followup to r275537 addressing review comments
Address Chad's comment in D22216 which I missed due to tunnel vision on the "LGTM" comment.

llvm-svn: 275538
2016-07-15 07:57:35 +00:00
James Molloy a454a11d60 [ARM] Prefer indirect calls in minsize mode
... When we emit several calls to the same function in the same basic block.

An indirect call uses a "BLX r0" instruction which has a 16-bit encoding. If many calls are made to the same target, this can enable significant code size reductions.
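
An illustrative C case (a sketch, not part of the patch; the names are made up):

  void callee(int);

  /* Several calls to the same target in one block; when optimising for
     minimum size the callee's address can be materialised once and each
     call issued as a 16-bit "blx". */
  __attribute__((minsize)) void caller(int a, int b, int c) {
    callee(a);
    callee(b);
    callee(c);
  }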

llvm-svn: 275537
2016-07-15 07:55:21 +00:00
Matt Arsenault b91805ea2b AMDGPU: Fix not expanding control flow after some kill blocks
Also stop trying to insert skip blocks at end_cf. This
was inserting them at the end of the block which doesn't make
sense. The skip should be inserted at the beginning of the block
right after the end cf. Just remove this for now since no tests
seem to stress this and I think this can be handled more generally
later.

Fixes bug 28550

llvm-svn: 275510
2016-07-15 00:58:15 +00:00
Matt Arsenault fa5a86a403 AMDGPU: Fix trying to skip from a block with no successors
Found while reducing bug 28550

llvm-svn: 275509
2016-07-15 00:58:13 +00:00
Matt Arsenault 83ab049af2 AMDGPU: Fix splitting kill blocks with defs before kill
llvm-svn: 275508
2016-07-15 00:58:09 +00:00
Haicheng Wu f0b0127f60 [AArch64] Set COPY ZR isAsCheapAsAMove when needed.
If a subtarget has both ZCZeroing and CustomCheapAsMoveHandling features (now
only Kryo has both), set COPY (W|X)ZR isAsCheapAsAMove.

Differential Revision: http://reviews.llvm.org/D22360

llvm-svn: 275503
2016-07-15 00:27:01 +00:00
Simon Pilgrim 420b266d0a [X86][AVX2] Allow VPERMPD/VPERMQ shuffles to call combineShuffle (reapplied)
This improves the situation discussed in D19228 where we were forcing VPERMPD/VPERMQ where VPERM2F128/VPERM2I128 would have been better.

This was incorrectly reverted in rL275421 during triage of PR28552.

llvm-svn: 275497
2016-07-14 23:05:09 +00:00
Justin Lebar b49185dd5f s/constexpr/LLVM_CONSTEXPR in AArch64InstrInfo.cpp.
Yet again.

llvm-svn: 275463
2016-07-14 20:08:23 +00:00
Krzysztof Parzyszek ecea07c50e [Hexagon] Packetize function call arguments with tail call instructions
On Hexagon is it legal to packetize the instructions setting up call
arguments with the call instruction itself. This was already done,
except for tail calls. Make sure tail calls are handled as well.

llvm-svn: 275458
2016-07-14 19:30:55 +00:00
Evandro Menezes 77d470ff3c [AArch64] Adjust the scheduling model for Exynos-M1.
Enable use-postra-scheduler. (NFC)

llvm-svn: 275457
2016-07-14 19:25:46 +00:00
Justin Lebar 288b3376ae [CodeGen] Refactor MachineMemOperand::Flags's target-specific flags.
Summary:
Make the target-specific flags in MachineMemOperand::Flags real, bona
fide enum values.  This simplifies users, prevents various constants
from going out of sync, and avoids the false sense of security provided
by declaring static members in classes and then forgetting to define
them inside of cpp files.

Reviewers: MatzeB

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D22372

llvm-svn: 275451
2016-07-14 18:15:20 +00:00
Nirav Dave a6c7595d0f [X86][MC] Fix bracket expression parsing in intel-style assembly.
Only perform struct field check on Identifier tokens.

Fixes PR28547.

Reviewers: rnk

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D22361

llvm-svn: 275445
2016-07-14 17:37:05 +00:00
Justin Lebar a3b786a8c1 [CodeGen] Refactor MachineMemOperand's Flags enum.
Summary:
- Give it a shorter name (because we're going to refer to it often from
  SelectionDAG and friends).

- Split the flags and alignment into separate variables.

- Specialize FlagsEnumTraits for it, so we can do bitwise ops on it
  without losing type information.

- Make some enum values constants in MachineMemOperand instead.
  MOMaxBits should not be a valid Flag.

- Simplify some of the bitwise ops for dealing with Flags.

Reviewers: chandlerc

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D22281

llvm-svn: 275438
2016-07-14 17:07:44 +00:00
Tim Northover 6003fb58df ARM: fix vmov.i64 immediate validity check
Typo meant we were only checking the low byte (repeatedly).

llvm-svn: 275437
2016-07-14 17:04:34 +00:00
Nico Weber 5bb284226b Don't optimize movs to pushes in -O0 builds.
https://reviews.llvm.org/D22362

llvm-svn: 275431
2016-07-14 15:40:22 +00:00
Nico Weber ead8f8ffdd Delete some trailing whitespace.
llvm-svn: 275429
2016-07-14 15:07:44 +00:00
Ahmed Bougacha 85dc93c56b [X86] Decode MPX BND registers.
We were able to assemble, but not disassemble.

Note that fixupRMValue was truncating EA_REG_BND0-3 because we hit
the uint8_t max.  The control registers were already squarely above
it, but I don't think they ever go in .r/m, only in .reg.

I also did notice an extra REX.W in our encoding, but I think that's
fine.

llvm-svn: 275427
2016-07-14 14:53:21 +00:00
Ahmed Bougacha 4f7a5e20ae [X86] Don't mark addressing mode operands as "outs". NFC-ish.
Nothing in-tree can tell the difference, but it's incorrect: the
addressing mode registers aren't what's defined.

llvm-svn: 275426
2016-07-14 14:53:17 +00:00
Sam Kolton 7a2a323feb [AMDGPU] Assembler: fix row_bcast parsing
Summary: This change fixes bug 28538

Reviewers: tstellarAMD, vpykhtin

Subscribers: arsenm, kzhuravl

Differential Revision: https://reviews.llvm.org/D22355

llvm-svn: 275422
2016-07-14 14:50:35 +00:00
Nico Weber 3afaf16abc Revert r275411, it caused PR28552.
llvm-svn: 275421
2016-07-14 14:49:35 +00:00
Nico Weber ecdf45b1e6 Teach fast isel calls and rets about stdcall.
stdcall is callee-pop like thiscall, so the thiscall changes already did most
of the work for this.  This change only opts stdcall in and adds tests.

llvm-svn: 275414
2016-07-14 13:54:26 +00:00
Simon Pilgrim 534e3240e8 Remove trailing whitespace.
llvm-svn: 275412
2016-07-14 13:29:23 +00:00
Simon Pilgrim 3ecb6bdd5f [X86][AVX2] Allow VPERMPD/VPERMQ shuffles to call combineShuffle
This improves the situation discussed in D19228 where we were forcing VPERMPD/VPERMQ where VPERM2F128/VPERM2I128 would have been better.

llvm-svn: 275411
2016-07-14 13:28:43 +00:00
Daniel Sanders 46fe6550ac [mips] SelectionDAGISel subclasses now follow the optimization level.
Summary:
It was recently discovered that, for Mips's SelectionDAGISel subclasses,
all optimization levels caused SelectionDAGISel to behave like -O2.

This change adds the necessary plumbing to initialize the optimization level.

Reviewers: andrew.w.kaylor

Subscribers: andrew.w.kaylor, sdardis, dean, llvm-commits, vradosavljevic, petarj, qcolombet, probinson, dsanders

Differential Revision: https://reviews.llvm.org/D14900

llvm-svn: 275410
2016-07-14 13:25:22 +00:00
Simon Pilgrim 053d32906f [X86][AVX] Add support for narrowing 128-bit+ shuffle mask elements to 64-bits to allow combining
Primarily this is to allow blend with zero instead of having to use vperm2f128, but we can use this in the future to deal with AVX512 cases where we need to keep the original element size to correctly fold masked operations.

llvm-svn: 275406
2016-07-14 12:58:04 +00:00
Simon Pilgrim a76a8e50e5 [X86][AVX] Add VBROADCASTF128/VBROADCASTI128 shuffle comments support
llvm-svn: 275400
2016-07-14 12:07:43 +00:00
Simon Pilgrim b8c261c931 [X86][AVX2] VBROADCASTSSrr/VBROADCASTSSYrr require AVX2 not AVX
llvm-svn: 275391
2016-07-14 10:37:14 +00:00
Sjoerd Meijer 38c2cd0c14 This implements a more optimal algorithm for selecting a base constant in
constant hoisting. It not only takes into account the number of uses and the
cost of expressions in which constants appear, but now also the resulting
integer range of the offsets. Thus, the algorithm maximizes the number of uses
within an integer range that will enable more efficient code generation. On
ARM, for example, this will enable code size optimisations because fewer
negative offsets will be created. Negative offsets/immediates are not supported
by Thumb1, thus preventing more compact instruction encoding.

Differential Revision: http://reviews.llvm.org/D21183

llvm-svn: 275382
2016-07-14 07:44:20 +00:00
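As a rough illustration of the base-constant heuristic above (the constants and offsets below are made up for the example, not taken from the patch): two large constants that differ by a small amount can be materialized from one hoisted base plus cheap non-negative offsets, which Thumb1 can encode.

    // Hypothetical illustration only: 0x12340000 acts as the hoisted base;
    // the second constant is rebased as base + 4, a non-negative offset,
    // instead of forcing a negative offset from some other base choice.
    unsigned classify(unsigned x) {
      if (x == 0x12340000u) return 1;   // base
      if (x == 0x12340004u) return 2;   // base + 4
      return 0;
    }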
Craig Topper 6840f1150f [AVX512] Implement EXTLOAD lowering with patterns to select existing VPMOVZX instructions instead of creating CodeGenOnly instructions.
llvm-svn: 275378
2016-07-14 06:41:34 +00:00
Eli Friedman 17e8ea18e9 [X86] Fix stupid typo in isel lowering.
Apparently someone miscounted the number of zeros in the immediate.
Fixes https://llvm.org/bugs/show_bug.cgi?id=28544 .

llvm-svn: 275376
2016-07-14 05:48:25 +00:00
Matt Arsenault ca7f5701f8 AMDGPU/R600: Delete/rename intrinsics no longer used by mesa
Use the replacement pass to update the tests, and delete old names.

llvm-svn: 275375
2016-07-14 05:47:17 +00:00
Matt Arsenault 648e422bd9 AMDGPU/R600: Remove intrinsics with no tests and no users
Mesa removed this path, so nothing is using these anymore.

llvm-svn: 275372
2016-07-14 05:23:23 +00:00
Matt Arsenault 897eee4187 AMDGPU: Remove unused intrinsics
llvm-svn: 275371
2016-07-14 05:23:19 +00:00
Matt Arsenault 0bf9984bc8 AMDGPU: Remove dead code
llvm-svn: 275369
2016-07-14 05:23:08 +00:00
Dean Michael Berris 52735fc435 XRay: Add entry and exit sleds
Summary:
In this patch we implement the following parts of XRay:

- Supporting a function attribute named 'function-instrument' which currently only supports 'xray-always'. We should be able to use this attribute for other instrumentation approaches.
- Supporting a function attribute named 'xray-instruction-threshold' used to determine whether a function is instrumented with a minimum number of instructions (IR instruction counts).
- X86-specific nop sleds as described in the white paper.
- A machine function pass that adds the different instrumentation marker instructions at a very late stage.
- A way of identifying which return opcode is considered "normal" for each architecture.

There are some caveats here:

1) We don't handle PATCHABLE_RET on platforms other than x86_64 yet -- this means if IR used PATCHABLE_RET directly instead of a normal ret, instruction lowering for that platform might do the wrong thing. We think this should be handled at instruction selection time so that it is unpacked by default on platforms where XRay is not available yet.

2) The generated section for X86 is different from what is described in the white paper for the sole reason that LLVM allows us to do this neatly. We're taking the opportunity to deviate from the white paper in this respect to allow us to get richer information from the runtime library.

Reviewers: sanjoy, eugenis, kcc, pcc, echristo, rnk

Subscribers: niravd, majnemer, atrick, rnk, emaste, bmakam, mcrosier, mehdi_amini, llvm-commits

Differential Revision: http://reviews.llvm.org/D19904

llvm-svn: 275367
2016-07-14 04:06:33 +00:00
Nico Weber af7e8465e1 Teach fast isel about thiscall (and callee-pop) calls.
http://reviews.llvm.org/D22315

llvm-svn: 275360
2016-07-14 01:52:51 +00:00
Mehdi Amini cfed2564f7 Add EnableIPRA to TargetOptions, and move the cl::opt -enable-ipra to TargetMachine.cpp
Avoid exposing a cl::opt in a public header and instead promote this
option in the API.
Alternatively, we could land the cl::opt in CommandFlags.h so that
it is available to every tool, but we would still have to find an
option for clang.

llvm-svn: 275348
2016-07-13 23:39:46 +00:00
Nico Weber eb9488b151 Fix a TODO in X86CallFrameOptimization to not rely on a codegen artifact.
This happens to make X86CallFrameOptimization fire in -O0 / FastISel builds as well,
but it's not clear if the pass should run in that setup.

http://reviews.llvm.org/D22314

llvm-svn: 275320
2016-07-13 21:38:27 +00:00
Matt Arsenault f071102647 AMDGPU: Remove last AMDIL intrinsics
llvm-svn: 275309
2016-07-13 19:42:06 +00:00
Marek Olsak 0532c190f7 AMDGPU/SI: Emit the number of SGPR and VGPR spills
Summary:
v2: don't count SGPRs spilled to scratch twice

I think this is sufficient. It doesn't count private memory usage, which
happens often and uses scratch but isn't technically a spill. The private
memory usage can be computed by:
  [scratch_per_thread - vgpr_spills - a random multiple of SGPR spills].

The fact that SGPR spills add very high numbers to the scratch size makes that
computation a guessing game, but I don't have a solution to that.

Reviewers: tstellarAMD

Subscribers: arsenm, kzhuravl

Differential Revision: http://reviews.llvm.org/D22197

llvm-svn: 275288
2016-07-13 17:35:15 +00:00
Sanjay Patel 610a2f6525 [x86][SSE/AVX] optimize pcmp results better (PR28484)
We know that pcmp produces all-ones/all-zeros bitmasks, so we can use that behavior to avoid unnecessary constant loading.

One could argue that load+and is actually a better solution for some CPUs (Intel big cores) because shifts don't have the
same throughput potential as load+and on those cores, but that should be handled as a CPU-specific later transformation if
it ever comes up. Removing the load is the more general x86 optimization. Note that the uneven usage of vpbroadcast in the
test cases is filed as PR28505:
https://llvm.org/bugs/show_bug.cgi?id=28505

Differential Revision: http://reviews.llvm.org/D22225

llvm-svn: 275276
2016-07-13 16:04:07 +00:00
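A small intrinsic-level sketch of the idea behind the commit above (SSE2 only; the exact sequence the backend picks may differ): because pcmpgt yields all-ones or all-zeros per lane, a boolean 0/1 result can be formed with a logical shift instead of loading a constant mask.

    #include <emmintrin.h>

    // Before: AND the compare result with a loaded vector of 1s.
    __m128i cmp_and(__m128i a, __m128i b) {
      return _mm_and_si128(_mm_cmpgt_epi32(a, b), _mm_set1_epi32(1));
    }

    // After (equivalent): shift the all-ones/all-zeros mask right by 31,
    // avoiding the constant load entirely.
    __m128i cmp_shift(__m128i a, __m128i b) {
      return _mm_srli_epi32(_mm_cmpgt_epi32(a, b), 31);
    }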
Simon Pilgrim a99368fa35 [X86][AVX512] Add support for VPERMILPD/VPERMILPS variable shuffle mask comments
llvm-svn: 275272
2016-07-13 15:45:36 +00:00
Simon Pilgrim 48d8340760 [X86][AVX] Add support for target shuffle combining to VPERMILPS variable shuffle mask
Added AVX512F VPERMILPS shuffle decoding support

llvm-svn: 275270
2016-07-13 15:10:43 +00:00
Tom Stellard 418beb7671 AMDGPU/SI: Add support for R_AMDGPU_GOTPCREL
Reviewers: rafael, ruiu, tony-tye, arsenm, kzhuravl

Subscribers: arsenm, llvm-commits, kzhuravl

Differential Revision: http://reviews.llvm.org/D21484

llvm-svn: 275268
2016-07-13 14:23:33 +00:00
Simon Pilgrim 57548a6fa6 [X86][SSE] Check for lane crossing shuffles before trying to combine to PSHUFB
Removes a return-on-fail that was making it tricky to add other variable mask shuffles.

llvm-svn: 275262
2016-07-13 12:48:41 +00:00
Matt Arsenault 0056868c4a AMDGPU: Fold out no-op kill intrinsics
llvm-svn: 275253
2016-07-13 06:04:22 +00:00
Matt Arsenault 8dff86d878 AMDGPU: WQM cleanups
- Add new TTI instruction checks
- Don't use const for blocks that are mutated.
- Checking isBranch and isTerminator should be redundant

llvm-svn: 275252
2016-07-13 05:55:15 +00:00
Craig Topper ff1c327ebb [X86] Remove some seemingly unnecessary patterns that supported vector zext/sext with 256-bit source types producing a 256-bit result.
These patterns just extracted the source down to 128-bits to use the instructions. AVX512 seems to have blindly copied them over for VLX, but did not create similar patterns for 512-bit sources. So I'm hoping the backend can't actually produce these cases.

llvm-svn: 275240
2016-07-13 02:21:25 +00:00
Matt Arsenault 786724a22e AMDGPU: Follow up to r275203
I meant to squash this into it.

llvm-svn: 275220
2016-07-12 21:41:32 +00:00
Nemanja Ivanovic b43bb6141e [Power9] Add codegen for VSX word insert/extract instructions
This patch corresponds to review:
http://reviews.llvm.org/D20239

It adds exploitation of XXINSERTW and XXEXTRACTUW instructions that
are useful in some cases for inserting and extracting vector elements of
v4[if]32 vectors.

llvm-svn: 275215
2016-07-12 21:00:10 +00:00
Simon Pilgrim 6fa71da4a4 [X86][AVX] Add support for target shuffle combining to VPERM2F128/VPERM2I128
llvm-svn: 275212
2016-07-12 20:27:32 +00:00
Matthias Braun 96ec47db74 X86FixupBWInsts: No need for forward liveness analysis.
With r274952 and r275201 in place there are no cases left where a
forward liveness analysis yields different results than a backward one.
So we can remove the forward stepping logic.

Differential Revision: http://reviews.llvm.org/D22083

llvm-svn: 275204
2016-07-12 19:04:30 +00:00
Matt Arsenault 657f871a4e AMDGPU: Fix verifier error with kill intrinsic
Don't create a terminator in the middle of the block.
We should probably get rid of this intrinsic.

llvm-svn: 275203
2016-07-12 19:01:23 +00:00
Matt Arsenault 10531d1020 AMDGPU: Set isConvergent on v_cmpx* instructions
No test since these aren't used now, except for one place
in a pre-emit pass.

llvm-svn: 275200
2016-07-12 18:41:03 +00:00
Wei Ding 5b2636a152 AMDGPU: Add LLVM IR Intrinsic for v_lerp_u8
Differential Revision: http://reviews.llvm.org/D22239

llvm-svn: 275197
2016-07-12 18:02:14 +00:00
Haicheng Wu 711ca868fc [AArch64] Set FMOVS0 and FMOVD0 as isAsCheapAsAMove when needed.
If a subtarget has both ZCZeroing and CustomCheapAsMoveHandling features (now
only Kryo has both), set FMOVS0 and FMOVD0 isAsCheapAsAMove.

Differential Revision: http://reviews.llvm.org/D22256

llvm-svn: 275178
2016-07-12 15:31:41 +00:00
Nemanja Ivanovic eebbcb6d57 [PowerPC] Canonicalize applicable vector shift immediates as swaps
This patch corresponds to review:
http://reviews.llvm.org/D21358

Vector shifts that have the same semantics as a vector swap are canonicalized
as such to provide additional opportunities for swap removal optimization to
remove unnecessary swaps.

llvm-svn: 275168
2016-07-12 12:16:27 +00:00
Nicolai Haehnle 7968c34586 AMDGPU: Unify MOVRELSOffset and MOVRELDOffset
Summary:
Previously, constant index insertelements would be turned into SI_INDIRECT_DST,
which is bound to prevent some optimization opportunities. Worse, it misled
the heuristic that decides whether immediates should be lowered to S_MOV_B32
or V_MOV_B32 in a way that resulted in unnecessary v_readfirstlanes.

Reviewers: arsenm, tstellarAMD

Subscribers: arsenm, kzhuravl, llvm-commits

Differential Revision: http://reviews.llvm.org/D22217

llvm-svn: 275160
2016-07-12 08:12:16 +00:00
Craig Topper a6e6febe2c [AVX512] Remove masked logic op intrinsics and autoupgrade them to native IR.
llvm-svn: 275155
2016-07-12 05:27:53 +00:00
Duncan P. N. Exon Smith 7b4c18e8f3 X86: Avoid implicit iterator conversions, NFC
Avoid implicit conversions from MachineInstrBundleIterator to
MachineInstr*, mainly by preferring MachineInstr& over MachineInstr* and
using range-based for loops.

llvm-svn: 275149
2016-07-12 03:18:50 +00:00
Haicheng Wu 1e39574e9f [Kryo] Enable ZCZeroing feature
This feature uses immediate #0 to zero a register.

Differential Revision: http://reviews.llvm.org/D19985

llvm-svn: 275143
2016-07-12 02:04:01 +00:00
Duncan P. N. Exon Smith 98226e3d93 Hexagon: Avoid implicit iterator conversions, NFC
Avoid implicit iterator conversions from MachineInstrBundleIterator to
MachineInstr* in the Hexagon backend, mostly by preferring MachineInstr&
over MachineInstr* and switching to range-based for loops.

There's a long tail of API cleanup here, but I'm planning to leave the
rest to the Hexagon maintainers.  HexagonInstrInfo defines many of its
own predicates, and most of them still take MachineInstr*.  Some of
those actually check for nullptr, so I didn't feel comfortable changing
them to MachineInstr& en masse.

llvm-svn: 275142
2016-07-12 01:55:32 +00:00
Duncan P. N. Exon Smith fdd30c620d Mips: Avoid implicit iterator conversions, NFC
Avoid implicit conversions from MachineInstrBundleIterator to
MachineInstr* in the Mips backend, mainly by preferring MachineInstr&
over MachineInstr* when a pointer isn't nullable and using range-based
for loops.

llvm-svn: 275141
2016-07-12 01:47:02 +00:00
Duncan P. N. Exon Smith 4565ec0c1d SystemZ: Avoid implicit iterator conversions, NFC
Avoid implicit conversions from MachineInstrBundleIterator to
MachineInstr* in the SystemZ backend, mainly by preferring MachineInstr&
over MachineInstr* and using range-based for loops.

llvm-svn: 275137
2016-07-12 01:39:01 +00:00
Nico Weber c7bf646a99 Teach FastISel about thiscall (and, hence, about callee-pop).
http://reviews.llvm.org/D22115

llvm-svn: 275135
2016-07-12 01:30:35 +00:00
Matt Arsenault fc7e6a0a0e AMDGPU: Cleanup pseudoinstructions
llvm-svn: 275133
2016-07-12 00:23:17 +00:00
Matt Arsenault 840593e19d AMDGPU: Fix missing scc def on control flow pseudos
These are all expanded to instructions that include an scc def.

llvm-svn: 275132
2016-07-12 00:08:14 +00:00
Matt Arsenault e3742466b9 AMDGPU: Enable trackLivenessAfterRegAlloc
This has caught a number of bugs.

llvm-svn: 275131
2016-07-11 23:56:30 +00:00
Tim Northover 3e0361710a ARM: validate immediate branch targets in AsmParser.
Immediate branch targets aren't commonly used, but if they are we should make
sure they can actually be encoded. This means they must be divisible by 2 when
targeting Thumb mode, and by 4 when targeting ARM mode.

Also do a little naming cleanup while I was changing everything around anyway.

llvm-svn: 275116
2016-07-11 22:29:37 +00:00
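A minimal sketch of the encoding constraint described above (the helper name and shape are hypothetical, not the actual AsmParser code): immediate branch targets must be 2-byte aligned in Thumb mode and 4-byte aligned in ARM mode.

    #include <cstdint>

    // Hypothetical helper: returns true if an immediate branch target
    // offset can be encoded for the current instruction set.
    bool isEncodableBranchTarget(int64_t Offset, bool IsThumb) {
      const int64_t Align = IsThumb ? 2 : 4; // Thumb: multiple of 2, ARM: multiple of 4
      return Offset % Align == 0;
    }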
Nicolai Haehnle c06bfa1daa AMDGPU: Treat texture gather instructions more like other MIMG instructions
Summary:
Setting MIMG to 0 has a bunch of unexpected side effects, including that
isVMEM returns false which leads to incorrect treatment in the hazard
recognizer. The reason I noticed it is that it also leads to incorrect
treatment in VGPR-to-SGPR copies, which is one cause of the referenced bug.

The only reason why MIMG was set to 0 is to signal the special handling of
dmasks, but that can be checked differently.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=96877

Reviewers: arsenm, tstellarAMD

Subscribers: arsenm, kzhuravl, llvm-commits

Differential Revision: http://reviews.llvm.org/D22210

llvm-svn: 275113
2016-07-11 21:59:43 +00:00
Nicolai Haehnle f52c3cf272 AMDGPU: fix local stack slot allocation bugs
Summary:
The main bug fix here is using the 32-bit encoding of V_ADD_I32 in
materializeFrameBaseRegister and resolveFrameIndex, so that arbitrary
immediates work.

The second part is that we may now require the SegmentWaveByteOffset
even when there are initially no stack objects and VGPR spilling isn't
enabled, for stack slots that are allocated later. This means that some
bits become effectively dead and can be cleaned up.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=96602
Tested-by: Kai Wasserbäch <kai@dev.carbon-project.org>

Reviewers: arsenm, tstellarAMD

Subscribers: arsenm, llvm-commits, kzhuravl

Differential Revision: http://reviews.llvm.org/D21551

llvm-svn: 275108
2016-07-11 21:44:40 +00:00
Michael Kuperstein f0c59330e9 [X86] Make some cast costs more precise
Make some AVX and AVX512 cast costs more precise.
Based on part of a patch by Elena Demikhovsky (D15604).

Differential Revision: http://reviews.llvm.org/D22064

llvm-svn: 275106
2016-07-11 21:39:44 +00:00
Quentin Colombet fb82c7bc94 [X86] Fix tailcall return address clobber bug.
This bug (llvm.org/PR28124) was introduced by r237977, which refactored
the tail call  sequence to be generated in two passes instead of one.

Unfortunately, the stack adjustment produced by the first pass was not
recognized by X86FrameLowering::mergeSPUpdates() in all cases, causing
code such as the following, which clobbers the return address, to be
generated:

popl    %edi
popl    %edi
pushl   %eax
jmp     tailcallee              # TAILCALL

To fix the problem, the entire stack adjustment is performed in
X86ExpandPseudo::ExpandMI() for tail calls.

Patch by Magnus Lång <margnus1@gmail.com>

Differential Revision: http://reviews.llvm.org/D21325

llvm-svn: 275103
2016-07-11 21:03:03 +00:00
Michael Kuperstein cfbac5f361 [X86] Disable FixupSetCC for CodeGenOpt::None
It is an optimization pass and should not run at -O0, especially since Fast RA
will not do the required register coalescing anyway, so it's a loss even from
the optimization standpoint.

This also works around (but doesn't quite fix) PR28489.

llvm-svn: 275099
2016-07-11 20:40:44 +00:00
Zhan Jun Liau def708a0f9 [SystemZ] Recognize Load On Condition Immediate (LOCHI/LOGHI) opportunities
Summary: Add support for the z13 instructions LOCHI and LOCGHI which
conditionally load immediate values.  Add target instruction info hooks so
that if conversion will allow predication of LHI/LGHI.

Author: RolandF

Reviewers: uweigand

Subscribers: zhanjunl

Committing on behalf of Roland.

Differential Revision: http://reviews.llvm.org/D22117

llvm-svn: 275086
2016-07-11 18:45:03 +00:00
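For context on the commit above, this is the kind of source pattern that benefits (a sketch, not taken from the patch's tests): a select of a small immediate that if-conversion can now turn into a single conditional load-immediate instead of a branch.

    // With the new hooks, the compare plus this select can become
    // LOCHI (32-bit) / LOCGHI (64-bit) on z13 rather than a branch.
    int pickSmallImm(int cond, int fallback) {
      return cond ? 7 : fallback;
    }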
Nirav Dave 57033c6336 Add missing include from previous commit
llvm-svn: 275069
2016-07-11 14:32:57 +00:00
Nirav Dave 8603062ee4 Fix branch relaxation in 16-bit mode.
Thread MCSubtargetInfo through to the relaxInstruction function, allowing relaxation
to generate jumps with 16-bit sized immediates in 16-bit mode.

This fixes PR22097.

Reviewers: dwmw2, tstellarAMD, craig.topper, jyknight

Subscribers: jfb, arsenm, jyknight, llvm-commits, dsanders

Differential Revision: http://reviews.llvm.org/D20830

llvm-svn: 275068
2016-07-11 14:23:53 +00:00
Simon Pilgrim 832463eada [X86][SSE] Generalise target shuffle combine of shuffles using variable masks
At present the only shuffle with a variable mask that we recognise is PSHUFB, which influences whether it's worth the cost of mask creation/loading for a combined target shuffle with a variable mask. This change sets up the infrastructure to support other shuffles in the future but has no effect yet.

llvm-svn: 275059
2016-07-11 12:49:35 +00:00
Artem Tamazov 53c9de08d2 [AMDGPU][llvm-mc] Quickfix for r272748 to enable labels in branch instructions.
Fixes issue mentioned at:
  https://github.com/RadeonOpenCompute/LLVM-AMDGPU-Assembler-Extra/issues/13.
Lit tests added.

Differential Revision: http://reviews.llvm.org/D22133

llvm-svn: 275054
2016-07-11 12:07:18 +00:00
Zlatko Buljan cba9f80ba8 [mips][microMIPS] Implement LDC1, SDC1, LDC2, SDC2, LWC1, SWC1, LWC2 and SWC2 instructions and add CodeGen support
Differential Revision: http://reviews.llvm.org/D18824

llvm-svn: 275050
2016-07-11 07:41:56 +00:00
Elena Demikhovsky d84f337953 AVX-512: DAG lowering for scalar MIN/MAX commutable ops
DAG lowering was missing for the scalar FMINC, FMAXC nodes.
The nodes are generated only in the "unsafe-fp-math" mode.
Added tests.

llvm-svn: 275048
2016-07-11 06:08:06 +00:00
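For context, this is the kind of scalar code whose lowering was missing (a sketch; the commutable FMINC/FMAXC nodes are only formed under unsafe-fp-math, e.g. -ffast-math).

    // Under unsafe-fp-math this select can become a commutable scalar
    // min node (FMINC), which now has AVX-512 DAG lowering.
    float fastMin(float a, float b) {
      return a < b ? a : b;
    }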
Craig Topper 7ee070e7bc [AVX512] Add support for 512-bit ANDN now that all ones build vectors survive long enough to allow the matching.
llvm-svn: 275046
2016-07-11 05:36:53 +00:00
Craig Topper 516e14cd8e [AVX512] Use vpternlog with an immediate of 0xff to create 512-bit all one vectors.
llvm-svn: 275045
2016-07-11 05:36:48 +00:00
Craig Topper 8674849d6e [X86] Add the AVX512 SET0 pseudos to foldMemoryOperandImpl since they are marked for CanFoldAsLoad.
I don't really know how to test this.

llvm-svn: 275044
2016-07-11 05:36:41 +00:00
Simon Pilgrim ee4a33ae46 [X86][SSE] Relax type assertions for matchVectorShuffleAsInsertPS
Calls to matchVectorShuffleAsInsertPS only need to ensure the inputs are 128-bit vectors. Only lowerVectorShuffleAsInsertPS needs to ensure that they are v4f32.

llvm-svn: 275028
2016-07-10 22:26:05 +00:00
Jan Vesely 2fa28c330c AMDGPU/R600: Add implicitarg.ptr intrinsic
Differential Revision: http://reviews.llvm.org/D21622

llvm-svn: 275024
2016-07-10 21:20:29 +00:00
Simon Pilgrim 2191faa433 [X86][SSE] Add support for target shuffle combining to PSHUFLW/PSHUFHW
llvm-svn: 275022
2016-07-10 21:02:47 +00:00
Marcin Koscielnicki cf7cc724a7 [SystemZ] Utilize Test Data Class instructions.
This adds a new SystemZ-specific intrinsic, llvm.s390.tdc.f(32|64|128),
which maps straight to the test data class instructions.  A new IR pass
is added to recognize instructions that can be converted to TDC and
perform the necessary replacements.

Differential Revision: http://reviews.llvm.org/D21949

llvm-svn: 275016
2016-07-10 14:41:22 +00:00
Benjamin Kramer 4d09892e9a Give helper classes/functions internal linkage. NFC.
llvm-svn: 275014
2016-07-10 11:28:51 +00:00
Craig Topper 0b0954570a [AVX512] Add support for lowering to 512-bit SHUFPS.
llvm-svn: 275011
2016-07-10 05:55:53 +00:00
Simon Pilgrim 606126e848 [X86][SSE] Add support for target shuffle combining to INSERTPS
llvm-svn: 274990
2016-07-09 21:47:55 +00:00
Simon Pilgrim 59c6a211cd [X86][SSE] Use scaleShuffleMask helper. NFCI.
llvm-svn: 274988
2016-07-09 21:12:03 +00:00
Jacques Pienaar b32a912f72 [lanai] Treat .t as optional in assembly parser for RR operands and add predicate operand to ShiftRR
llvm-svn: 274980
2016-07-09 18:26:04 +00:00
Matt Arsenault 52a4d9b429 AMDGPU: Move R600 only pieces into R600 classes
llvm-svn: 274979
2016-07-09 18:11:15 +00:00
Matt Arsenault 48d70cb486 Revert "AMDGPU: Remove unused control flow intrinsic"
llvm-svn: 274978
2016-07-09 17:18:39 +00:00
NAKAMURA Takumi f35424b73f AMDGPU: Prune AMDGPUAsmParser in libdeps.
llvm-svn: 274970
2016-07-09 07:54:27 +00:00
Matt Arsenault dfec5ce032 AMDGPU: Fix fdiv lowering when f32 denormals supported
Also fix test not actually using function labels.

llvm-svn: 274969
2016-07-09 07:48:11 +00:00
Craig Topper 70610cf7b6 [X86] Remove and autoupgrade 512-bit non-temporal store intrinsics.
llvm-svn: 274966
2016-07-09 04:38:27 +00:00
Matt Arsenault 1322b6f8bb AMDGPU: Improve offset folding for register indexing
llvm-svn: 274954
2016-07-09 01:13:56 +00:00
Matt Arsenault 95c7897555 AMDGPU: Simplify isSchedulingBoundary
llvm-svn: 274953
2016-07-09 01:13:51 +00:00
Duncan P. N. Exon Smith be6092deec Lanai: Avoid implicit iterator conversions, NFC
Avoid implicit conversions from MachineInstrBundleIterator to
MachineInstr* in the Lanai backend.

llvm-svn: 274942
2016-07-08 22:11:30 +00:00
Matt Arsenault 8f0a92f0ba AMDGPU: Remove unused control flow intrinsic
llvm-svn: 274939
2016-07-08 21:39:44 +00:00
Duncan P. N. Exon Smith 8efc5b4f04 MSP430: Avoid implicit iterator conversions, NFC
Avoid implicit conversions from MachineInstrBundleIterator to
MachineInstr* in the MSP430 backend by preferring MachineInstr& over
MachineInstr* when a pointer isn't nullable.

llvm-svn: 274933
2016-07-08 21:19:46 +00:00
Duncan P. N. Exon Smith 68f499a6fa NVPTX: Avoid implicit iterator conversions, NFC
Avoid implicit conversions from MachineInstrBundleIterator to
MachineInstr* in the NVPTX backend, mainly by preferring MachineInstr&
over MachineInstr* when a pointer isn't nullable and using range-based
for loops.

There was one piece of questionable code in
NVPTXInstrInfo::AnalyzeBranch, where a condition checked a pointer
converted from an iterator for nullptr.  Since this case is impossible
(moreover, the code above guarantees that the iterator is valid), I
removed the check when I changed the pointer to a reference.

Despite that case, there should be no functionality change here.

llvm-svn: 274931
2016-07-08 21:10:58 +00:00
Duncan P. N. Exon Smith ab53fd9b50 AArch64: Avoid implicit iterator conversions, NFC
Avoid implicit conversions from MachineInstrBundleIterator to MachineInstr*
in the AArch64 backend, mainly by preferring MachineInstr& over
MachineInstr* when a pointer isn't nullable.

llvm-svn: 274924
2016-07-08 20:29:42 +00:00
Duncan P. N. Exon Smith 29c524983b ARM: Remove implicit iterator conversions, NFC
Remove remaining implicit conversions from MachineInstrBundleIterator to
MachineInstr* from the ARM backend.  In most cases, I made them less attractive
by preferring MachineInstr& or using a range-based for loop.

Once all the backends are fixed I'll make the operator explicit so that this
doesn't bitrot back.

llvm-svn: 274920
2016-07-08 20:21:17 +00:00
Duncan P. N. Exon Smith 811f2b378e Sparc: Avoid implicit iterator conversions, NFC
Remove the only implicit conversions from MachineInstrBundleIterator to
MachineInstr* in the Sparc backend.

llvm-svn: 274913
2016-07-08 19:41:40 +00:00
Duncan P. N. Exon Smith 500d046989 WebAssembly: Avoid implicit iterator conversions, NFC
Avoid implicit conversions from MachineInstrBundleIterator to
MachineInstr* in the WebAssembly backend by preferring MachineInstr&
over MachineInstr*.

llvm-svn: 274912
2016-07-08 19:36:40 +00:00
Simon Pilgrim 950419f948 [X86][AVX2] Add support for target shuffle combining to VPERMPD/VPERMQ
llvm-svn: 274908
2016-07-08 19:23:29 +00:00
Duncan P. N. Exon Smith 4d29511894 AMDGPU: Remove implicit iterator conversions, NFC
Remove remaining implicit conversions from MachineInstrBundleIterator to
MachineInstr* from the AMDGPU backend.  In most cases, I made them less
attractive by preferring MachineInstr& or using a range-based for loop.

Once all the backends are fixed I'll make the operator explicit so that
this doesn't bitrot back.

llvm-svn: 274906
2016-07-08 19:16:05 +00:00
Duncan P. N. Exon Smith 221847ef63 AMDGPU: Make infinite loop clear, NFC
Change a while loop that was checking for nullptr on an
iterator-to-pointer conversion to an infinite for loop.  Now it's clear
that the loop doesn't terminate.

The only change in behaviour is if an invalid iterator (holding nullptr)
was passed into AMDGPUCFGStructurizer::reversePredicateSetter.  There
are only two callers, and they both dereference the iterator before
sending it in, so rather than adding an early return to avoid the loop
I've just asserted (using a static_cast, to avoid an implicit conversion
to pointer).

llvm-svn: 274902
2016-07-08 19:00:17 +00:00
Duncan P. N. Exon Smith 25b132e93b Target: Avoid getFirstTerminator() => pointer, NFC
Stop using an implicit conversion from the return of
MachineBasicBlock::getFirstTerminator to MachineInstr*.  In two cases,
directly dereference to a MachineInstr& since later code assumes it's
valid.  In a third case, change to an iterator since later code checks
against MachineBasicBlock::end.

Although the fix for the third case avoids undefined behaviour, I expect
this doesn't cause a functionality change in practice (since the basic
block already has a terminator).

llvm-svn: 274898
2016-07-08 18:26:20 +00:00
Matt Arsenault b63f18c9c3 AMDGPU: Minor adjustment to r274817
The commit message is inaccurate: modifiesRegister
will check for partial defs of exec.

We currently don't ever emit partial defs of exec,
so it doesn't really matter.

llvm-svn: 274886
2016-07-08 17:06:48 +00:00
Zhan Jun Liau 7d4d436c74 [SystemZ] Add support for the .word directive.
Summary: Branch off the work to add support for the .word directive,
using addAliasForDirective.

Reviewers: koriakin

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D22142

llvm-svn: 274878
2016-07-08 16:50:02 +00:00
Zhan Jun Liau 3b4c3f4d51 [SystemZ] Add support for missing instructions
Summary:
Add support to allow the clang integrated assembler to recognize some
missing instructions needed for openssl.

Instructions are:
LM, LMH, LMY, STM, STMH, STMY, ICM, ICMH, ICMY, SLA, SLAK, TML, TMH, EX, EXRL.

Reviewers: uweigand

Subscribers: koriakin, llvm-commits

Differential Revision: http://reviews.llvm.org/D22050

llvm-svn: 274869
2016-07-08 16:18:40 +00:00
Chris Dewhurst 3202f065b8 [Sparc] Leon errata fix passes.
Errata fixes for various errata in different versions of the Leon variants of the Sparc 32 bit processor.

The nature of the errata are listed in the comments preceding the errata fix passes. Relevant unit tests are implemented for each of these.

Note: Running clang-format has changed a few other lines too, unrelated to the implemented errata fixes. These have been left in as this keeps the code formatting consistent.

Differential Revision: http://reviews.llvm.org/D21960

llvm-svn: 274856
2016-07-08 15:33:56 +00:00
Valery Pykhtin 68853ab2c5 [AMDGPU] fix ds_swizzle_b32 opcode for VI (bz 28371)
Differential Revision: http://reviews.llvm.org/D22049

llvm-svn: 274852
2016-07-08 15:12:46 +00:00
Pankaj Gode 5d118a1676 [AArch64] Macro fusion of simple ALU ops with branches for Broadcom's Vulcan
Support for the macro fusion of simple ALU ops with branches for the Vulcan sub-target.

Patch by Meador Inge <meadori@gmail.com>

Differential Revision: http://reviews.llvm.org/D22042

llvm-svn: 274837
2016-07-08 11:13:59 +00:00
Simon Pilgrim 828c731880 [X86][SSE] Accept any shuffle mask that is all zeroes
Until we have a better way to extract constants through bitcasted build vectors (and to handle undefs of partial lanes, etc.), at least accept build vectors that are all zeroes.

llvm-svn: 274833
2016-07-08 10:39:12 +00:00
Craig Topper f7bf6de0af [AVX512] Remove and autoupgrade a duplicate set of 512-bit masked shift intrinsics.
I'm not sure if clang ever used these builtin names or not.

llvm-svn: 274827
2016-07-08 06:14:47 +00:00
Matt Arsenault a74374a86b AMDGPU: Move si_mask_branch register operand to be a use
llvm-svn: 274818
2016-07-08 00:55:44 +00:00
Matt Arsenault d4a84b1ed2 AMDGPU: Cleanup. Use definesRegister instead of manual loop
Also this will be more precise since it will check
exec_lo/exec_hi writes.

llvm-svn: 274817
2016-07-08 00:55:39 +00:00
Saleem Abdulrasool eb059b0e0a ARM: support high registers in __builtin_longjmp on WoA
Windows on ARM uses a pure thumb-2 environment.  This means that it can select a
high register when doing a __builtin_longjmp.  We would use a tLDRi which would
truncate the register to a low register.  Use a t2LDRi12 to get the full
register file access.  Tweak the code to just load into PC, as that is an
interworking branch on all supported cores anyways.

llvm-svn: 274815
2016-07-08 00:48:22 +00:00
Jacques Pienaar 6d3eecc843 [lanai] Use peephole optimizer to generate more conditional ALU operations.
Summary:
* Similar to the ARM backend, use the peephole optimizer to generate more conditional ALU operations;
* Add predicated type with default always true to RR instructions in LanaiInstrInfo.td;
* Move LanaiSetflagAluCombiner into optimizeCompare;
* The ASM parser can currently only handle explicitly specified CC, so specify ".t" (true) where needed in the ASM test;
* Remove unused MachineOperand flags;

Reviewers: eliben

Subscribers: aemerson

Differential Revision: http://reviews.llvm.org/D22072

llvm-svn: 274807
2016-07-07 23:36:04 +00:00
Michael Kuperstein 3e3652aef2 Recommit r274692 - [X86] Transform setcc + movzbl into xorl + setcc
xorl + setcc is generally the preferred sequence due to the partial register
stall setcc + movzbl suffers from. As a bonus, it also encodes one byte smaller.
This fixes PR28146.

The original commit tried inserting an 8bit-subreg into a GR32 (not GR32_ABCD)
which was not appreciated by fast regalloc on 32-bit.

llvm-svn: 274802
2016-07-07 22:50:23 +00:00
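A tiny example of code that exercises the transform described above (illustrative only; exact register choices are up to the allocator): a boolean compare zero-extended to int, where clearing the destination with xorl before setcc avoids the partial-register stall of setcc followed by movzbl.

    // Old sequence (roughly):  cmpl ...; sete %al; movzbl %al, %eax
    // New sequence (roughly):  xorl %eax, %eax; cmpl ...; sete %al
    int isEqual(int a, int b) {
      return a == b;   // i1 result zero-extended to i32
    }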
Chad Rosier 112d0e996b [AArch64] Change the preferred alignment for char and short to word alignment.
The commit reinstates r273279, which was informally approved.

Original Review: http://reviews.llvm.org/D21414

This reverts commit ca632c91aaa7cafc50942f890c49f727a046ace1.

llvm-svn: 274790
2016-07-07 20:02:18 +00:00
Michael Kuperstein edb38a94f8 Revert r274692 to check whether this is what breaks windows selfhost.
llvm-svn: 274771
2016-07-07 16:55:35 +00:00
Justin Bogner a466cc33fa NVPTX: Remove the legacy ptx intrinsics
- Rename the ptx.read.* intrinsics to nvvm.read.ptx.sreg.* - some but
  not all of these registers were already accessible via the nvvm
  name.
- Rename ptx.bar.sync to nvvm.bar.sync, to match nvvm.bar0.

There's a fair amount of code motion here, but it's all very
mechanical.

llvm-svn: 274769
2016-07-07 16:40:17 +00:00
Chad Rosier 3972953efd Revert "[AArch64] Change the preferred alignment for char and short to word alignment"
This reverts commit r273279 as the change was not properly approved.

llvm-svn: 274768
2016-07-07 16:37:29 +00:00
Zhan Jun Liau a5d60afc09 [SystemZ] Fix regression when handling conditional calls
Summary:
A regression showed up in node.js when handling conditional calls.
Fix the regression by recognizing external symbols as a possible
operand type in CallJG.

Reviewers: koriakin

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D22054

llvm-svn: 274761
2016-07-07 15:34:46 +00:00
Valery Pykhtin af8b1bddbd [AMDGPU] fix ds_write_src2 encoding (bz26027)
Differential revision: http://reviews.llvm.org/D22041

llvm-svn: 274756
2016-07-07 14:23:38 +00:00
Rafael Espindola b34cba97b7 Don't crash trying to relax 32-bit loads on COFF.
Fixes pr28452.

llvm-svn: 274754
2016-07-07 14:00:07 +00:00
Diana Picus 575f2bb287 [ARM] Do not test for CPUs, use SubtargetFeatures. Also remove 1 flag
This is a follow-up for r273544.

The end goal is to get rid of the isSwift / isCortexXY / isWhatever methods.

This commit also removes a command line flag that isn't used in any of the tests:
check-vmlx-hazards. It can be replaced easily with the mattr mechanism, since
this is now a subtarget feature.

There is still some work left regarding FeatureExpandMLx. In the past MLx
expansion was enabled for subtargets with hasVFP2(), until r129775 [1] switched
from that to isCortexA9, without too much justification.

In spite of that, the code performing MLx expansion still contains calls to
isSwift/isLikeA9, although the results of those are pretty clear given that
we're only enabling it for the A9.

We should try to enable it for all targets that have FeatureHasVMLxHazards, as
it seems to be closely related to that behaviour, and if that is possible try to
clean up the MLx expansion pass from all calls to isWhatever. This will require
some performance testing, so it will be done in another patch.

[1] http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20110418/119725.html

Differential Revision: http://reviews.llvm.org/D21798

llvm-svn: 274742
2016-07-07 09:11:39 +00:00
Eric Christopher cd7194629b Use the class version of getPointerTy rather than getting back to
ourselves via a call through the DAG.

llvm-svn: 274721
2016-07-07 01:49:59 +00:00
Eric Christopher 317df66f15 Use the class definition for useSoftFloat.
llvm-svn: 274720
2016-07-07 01:49:57 +00:00
Eric Christopher 2454a3b4e7 Rename argument for consistency.
llvm-svn: 274717
2016-07-07 01:08:23 +00:00
Eric Christopher e0d09ba443 Remove the plumbing for isDarwinABI from EmitTailCallLoadFPAndRetAddr.
llvm-svn: 274716
2016-07-07 01:08:21 +00:00
Eric Christopher 606a268bed Use the MachineFunction that we've already queried for in the function.
llvm-svn: 274715
2016-07-07 01:08:19 +00:00
Eric Christopher 327e440c6c Remove the plumbing for isDarwinABI from the PrepareTailCall hierarchy.
llvm-svn: 274714
2016-07-07 01:08:17 +00:00
Eric Christopher ade4eed8a7 Remove the plumbing of 64-bitness from PrepareTailCall and functions
called by it.

llvm-svn: 274711
2016-07-07 00:39:32 +00:00
Eric Christopher c16ccbe731 Sink call to get the MachineFunction into EmitTailCallStoreFPAndRetAddr
and remove the argument.

llvm-svn: 274710
2016-07-07 00:39:30 +00:00
Eric Christopher b976a392e5 Remove unnecessary subtarget parameters in PPCTargetLowering.
llvm-svn: 274709
2016-07-07 00:39:27 +00:00
Junmo Park 384d376545 fix documentation comment. NFC.
llvm-svn: 274704
2016-07-06 23:18:58 +00:00
Junmo Park 5e4bd2e7c4 Minor code cleanup. NFC.
llvm-svn: 274702
2016-07-06 23:15:18 +00:00
Michael Kuperstein 1ef6c59b1d [X86] Transform setcc + movzbl into xorl + setcc
xorl + setcc is generally the preferred sequence due to the partial register
stall setcc + movzbl suffers from. As a bonus, it also encodes one byte smaller.

This fixes PR28146.

Differential Revision: http://reviews.llvm.org/D21774

llvm-svn: 274692
2016-07-06 21:56:18 +00:00
Matthias Braun ad0032a649 AArch64: Change modeling of zero cycle zeroing.
On CPUs with the zero cycle zeroing feature enabled "movi v.2d" should
be used to zero a vector register. This was previously done at
instruction selection time; however, the register coalescer sometimes
widened multiple vregs to the Q width because of that, leading to extra
spills. This patch leaves the decision on how to zero a register to the
AsmPrinter phase where it doesn't affect register allocation anymore.

This patch also sets isAsCheapAsAMove=1 on FMOVS0, FMOVD0.

This fixes http://llvm.org/PR27454, rdar://25866262

Differential Revision: http://reviews.llvm.org/D21826

llvm-svn: 274686
2016-07-06 21:39:33 +00:00
Matthias Braun 332bb5c236 AArch64: Replace a RegScavenger instance with LivePhysRegs
findScratchNonCalleeSaveRegister() just needs a simple liveness
analysis; use LivePhysRegs for that, as it is simpler and does not depend
on the kill flags.

This commit adds a convenience function available() to LivePhysRegs:
This function returns true if the given register is not reserved and
neither the register nor any of its aliases are alive.

Differential Revision: http://reviews.llvm.org/D21865

llvm-svn: 274685
2016-07-06 21:31:27 +00:00
Rafael Espindola a29971faeb Add initial support for R_386_GOT32X.
This adds it only for movl mov@GOT(%reg), %reg.

llvm-svn: 274678
2016-07-06 21:19:11 +00:00
Justin Lebar 6f9d01bbd5 [NVPTX] Add sm_60, sm_61, sm_62 targets to LLVM.
Reviewers: tra

Subscribers: jholewinski, llvm-commits

Differential Revision: http://reviews.llvm.org/D22068

llvm-svn: 274674
2016-07-06 21:06:10 +00:00
Justin Bogner a463537a36 NVPTX: Replace uses of cuda.syncthreads with nvvm.barrier0
Everywhere where cuda.syncthreads or __syncthreads is used, use the
properly namespaced nvvm.barrier0 instead.

llvm-svn: 274664
2016-07-06 20:02:45 +00:00
Justin Bogner b3745b6d24 NVPTX: Make the llvm.nvvm.shfl intrinsics and builtin names consistent
The intrinsics here use nvvm, but the builtins and tablegen variable
names were using ptx. Stick to the modern names here.

llvm-svn: 274662
2016-07-06 19:52:27 +00:00
Sanjay Patel 04b3496d9b [x86] fix cost of SINT_TO_FP for i32 --> float (PR21356, PR28434)
This is "cvtdq2ps" which does not appear to be particularly slow on any CPU
according to Agner's tables. Choosing "5" as a cost here as suggested in:
https://llvm.org/bugs/show_bug.cgi?id=21356
...but it seems very conservative given that the instruction is fully pipelined,
and I think these costs are supposed to model throughput.

Note that related costs are also most likely too high, but this fixes PR21356
and partly fixes PR28434.

llvm-svn: 274658
2016-07-06 19:15:54 +00:00
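For reference, a minimal example of the conversion whose cost entry is adjusted above (clang's vector extensions are used here purely for illustration, not taken from the patch's tests).

    typedef int   v4i32 __attribute__((vector_size(16)));
    typedef float v4f32 __attribute__((vector_size(16)));

    // A v4i32 -> v4f32 sitofp; on x86 this maps onto cvtdq2ps, the
    // instruction the new cost of 5 is modelled on.
    v4f32 toFloat(v4i32 x) {
      return __builtin_convertvector(x, v4f32);
    }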
Michael Kuperstein 1b62e0e91f [X86] Sort cast cost tables. NFC.
Cast cost tables are now sorted, for each cast type, lexicographically on
[source base type, source vector width, dest base type, base vector width].

llvm-svn: 274653
2016-07-06 18:26:48 +00:00
Elliot Colp bc2cfc2291 [SystemZ] Remove AND mask of bottom 6 bits when result is used for shift/rotate
On SystemZ, shift and rotate instructions only use the bottom 6 bits of the shift/rotate amount.
Therefore, if the amount is ANDed with an immediate mask that has all of the bottom 6 bits set, we
can remove the AND operation entirely.

Differential Revision: http://reviews.llvm.org/D21854

llvm-svn: 274650
2016-07-06 18:13:11 +00:00
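A short example of the redundant mask the combine above removes (a sketch; the actual tests live in the patch): since SystemZ shift instructions only look at the low 6 bits of the amount, the backend can drop the explicit AND below.

    unsigned long shiftLeft(unsigned long x, unsigned amt) {
      // The mask becomes a no-op at the SelectionDAG level on SystemZ:
      // the hardware already ignores all but the bottom 6 bits.
      return x << (amt & 63);
    }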
Simon Pilgrim 8ff7157513 [X86][SSE] Fixed typo in insertps lowering.
We were checking for 2 insertions (which is caught earlier in the pattern matching loop) instead of the case where we have no insertions.

Turns out this code never fires as we always try to lower to insertps after trying to lower to blendps, which would catch these cases - I'm about to make some changes to support combining to insertps which could cause this to fire so I don't want to remove it.

llvm-svn: 274648
2016-07-06 18:09:08 +00:00
Kit Barton f9d0a40573 Ensure all uses of permute instructions feed vector stores
There is a problem in VSXSwapRemoval where it is incorrectly removing permute instructions.
In the failing case, the permute feeds both a vector store and also a non-store instruction, so the permute cannot be removed.

The fix is to simply look at all the uses of the vector register defined by the permute and ensure that all the uses are vector store instructions.

This problem was reported in PR 27735 (https://llvm.org/bugs/show_bug.cgi?id=27735).

Test case based on the original problem reported.

Phabricator Review: http://reviews.llvm.org/D21802

llvm-svn: 274645
2016-07-06 18:03:52 +00:00
Michael Kuperstein aa71bdd3af [TTI] The cost model should not assume vector casts get completely scalarized
The cost model should not assume vector casts get completely scalarized, since
on targets that have vector support, the common case is a partial split up to
the legal vector size. So, when a vector cast  gets split, the resulting casts
end up legal and cheap.

Instead of pessimistically assuming scalarization, base TTI can use the costs
the concrete TTI provides for the split vector, plus a fudge factor to account
for the cost of the split itself. This fudge factor is currently 1 by default,
except on AMDGPU where inserts and extracts are considered free.

Differential Revision: http://reviews.llvm.org/D21251

llvm-svn: 274642
2016-07-06 17:30:56 +00:00
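A rough sketch of the revised costing shape described above (names and the default fudge value are illustrative, not the exact TTI code): price a wide cast as the sum of its legal-width pieces plus a small split overhead, rather than assuming full scalarization.

    // Hypothetical helper mirroring the idea: NumParts legal-width casts
    // plus a fudge factor for the split itself (1 by default; a target
    // such as AMDGPU, where insert/extract is free, could use 0).
    unsigned splitCastCost(unsigned NumParts, unsigned CostPerPart,
                           unsigned SplitOverhead = 1) {
      return NumParts * CostPerPart + SplitOverhead;
    }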
Sanjay Patel 9cc21ac412 fix typo; NFC
llvm-svn: 274636
2016-07-06 16:42:46 +00:00
Elena Demikhovsky ad0a56f3da Re-commit of 274613.
The previous commit failed to compile.
A minor change in one pattern in lib/Target/X86/X86InstrAVX512.td fixes the failure.

llvm-svn: 274626
2016-07-06 14:15:43 +00:00
Diana Picus b772e409ba [ARM] Do not test for CPUs, use SubtargetFeatures. Also remove 2 flags.
This is a follow-up for r273544.

The end goal is to get rid of the isSwift / isCortexXY / isWhatever methods.

This commit also removes two command-line flags that weren't used in any of the
tests: widen-vmovs and swift-partial-update-clearance. The former may be easily
replaced with the mattr mechanism, but the latter may not (as it is a subtarget
property, and not a proper feature).

Differential Revision: http://reviews.llvm.org/D21797

llvm-svn: 274620
2016-07-06 11:22:11 +00:00
Diana Picus 4879b050cc [ARM] Do not test for CPUs, use SubtargetFeatures (Part 3). NFCI
This is a follow-up for r273544 and r273853.

The end goal is to get rid of the isSwift / isCortexXY / isWhatever methods.
This commit also marks them as obsolete.

Differential Revision: http://reviews.llvm.org/D21796

llvm-svn: 274616
2016-07-06 09:22:23 +00:00
Elena Demikhovsky 02ced295aa Reverted 274613 due to compilation failure.
llvm-svn: 274615
2016-07-06 09:11:49 +00:00
Elena Demikhovsky 5a4f2476fd AVX-512: Optimization for patterns with i1 scalar type
The patch removes redundant kmov instructions (not all, we still have a lot of work here) and redundant "and" instructions after "setcc".
I use "AssertZero" marker between X86ISD::SETCC node and "truncate" to eliminate extra "and $1" instruction.
I also changed zext, aext and trunc patterns in the .td file. It allows to remove extra "kmov" instruictions.

This patch fixes https://llvm.org/bugs/show_bug.cgi?id=28173.

Fast ISel mode is not supported correctly for AVX-512. The ICMP/FCMP scalar instructions should return their result in a k-reg. This will be fixed in one of the next patches. I redirected handling of "cmp" to the DAG builder mode. (The code looks worse in one specific test case, but without this fix the new patch fails).

Differential revision: http://reviews.llvm.org/D21956

llvm-svn: 274613
2016-07-06 09:01:20 +00:00
Nicolai Haehnle e40530ea7b AMDGPU: Fix return of non-void-returning shaders
Summary:
Since "AMDGPU: Fix verifier errors in SILowerControlFlow", the logic that
ensures that a non-void-returning shader falls off the end of the last
basic block was effectively disabled, since SI_RETURN is now used.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=96731

Reviewers: arsenm, tstellarAMD

Subscribers: arsenm, kzhuravl, llvm-commits

Differential Revision: http://reviews.llvm.org/D21975

llvm-svn: 274612
2016-07-06 08:35:17 +00:00
Tim Northover 449c15e1bd AArch64: try to fix optimized build failure.
I think the Ops filled out by Regex::match contain pointers into the temporary
std::string returned by StringRef::upper. Its lifetime is extended by the call
to match, but only until the end of that call (not to the uses of Ops later
on).

llvm-svn: 274586
2016-07-05 23:15:58 +00:00
Simon Pilgrim 7643b337a2 [X86][AVX2] Simplified BROADCAST combining to avoid repeated matching attempts
llvm-svn: 274583
2016-07-05 22:41:04 +00:00
Manman Ren 39b37c0f9d Fix an ordering problem in r274431
llvm-svn: 274582
2016-07-05 22:24:44 +00:00
Matt Arsenault e8dbf791b1 AMDGPU: Remove unnecessary string usage in AsmPrinter
Registers are printed a lot, so don't create temporary
std::strings. Passing a char instead of a string to an ostream
saves a function call.

llvm-svn: 274581
2016-07-05 22:06:56 +00:00
Tim Northover e6ae6767d9 AArch64: TableGenerate system instruction operands.
The way the named arguments for various system instructions are handled at the
moment has a few problems:

  - Large-scale duplication between AArch64BaseInfo.h and AArch64BaseInfo.cpp
  - That weird Mapping class that I have no idea what I was on when I thought
    it was a good idea.
  - Searches are performed linearly through the entire list.
  - We print absolutely all registers in upper-case, even though some are
    canonically mixed case (SPSel for example).
  - The ARM ARM specifies sysregs in terms of 5 fields, but those are relegated
    to comments in our implementation, with a slightly opaque hex value
    indicating the canonical encoding LLVM will use.

This adds a new TableGen backend to produce efficiently searchable tables, and
switches AArch64 over to using that infrastructure.

llvm-svn: 274576
2016-07-05 21:23:04 +00:00
Balaram Makam d4acd7ed10 Revert r259387: "AArch64: Implement missed conditional compare sequences."
This reverts commit r259387 because it inserts illegal code after legalization
in some backends where, for example, the i64 OR type is illegal.

llvm-svn: 274573
2016-07-05 20:24:05 +00:00
Simon Pilgrim bec6543d17 [X86][AVX2] Add support for target shuffle combining to BROADCAST
Only support broadcast from vector register so far - memory folding support will have to wait.

llvm-svn: 274572
2016-07-05 20:11:29 +00:00
Simon Pilgrim 48adedffb7 [X86][AVX512] Fixed decoding of permd/permpd variable mask shuffles + enabled them for target shuffle combining
Corrected element mask masking to extract the bottom index bits (now matches the perm2 implementation but for unary inputs).

llvm-svn: 274571
2016-07-05 18:31:17 +00:00
Saleem Abdulrasool 4d950ef892 ARM: fix `-mlong-calls` for WoA
Not all code-paths set the relocation model to static for Windows.  This
currently breaks on Windows ARM with `-mlong-calls` when built with clang.
Loosen the assertion to what it was previously.  We would ideally ensure that
all configurations set Windows to the static relocation model.

llvm-svn: 274570
2016-07-05 18:30:52 +00:00
Tim Northover 01dff9d18a AArch64: use correct SDValue # when looking for bitfield placement.
The other use really does only care about the SDNode (it checks the
opcode against a whitelist), but bitFieldPlacement can be misled if
the node produces multiple results.

Patch by Ismail Badawi.

llvm-svn: 274567
2016-07-05 18:02:57 +00:00
Matt Arsenault ffc8275f2b AMDGPU: Fix folding SGPRs into madak/madmk src0
Because of the special immediate operand, the constant
bus is already used so SGPRs are never useful.

r263212 changed the name of the immediate operand, which
broke the verifier check for the restriction.

llvm-svn: 274564
2016-07-05 17:09:01 +00:00
Tom Stellard a4b746d808 AMDGPU/SI: Remove address space query functions from AMDGPUDAGToDAGISel
Summary:
These have been replaced with TableGen code (except for isConstantLoad,
which is still used for R600).  The queries were broken for cases
where MemOperand was a PseudoSourceValue.

Reviewers: arsenm

Subscribers: arsenm, kzhuravl, llvm-commits

Differential Revision: http://reviews.llvm.org/D21684

llvm-svn: 274561
2016-07-05 16:10:44 +00:00
Valery Pykhtin e65b39ec09 [AMDGPU] rename DS_1A1D_Off8_NORET to DS_1A2D_Off8_NORET as ds_write2xx instructions use 2 source registers. NFC.
llvm-svn: 274556
2016-07-05 15:15:28 +00:00
Simon Pilgrim 9769428e08 [X86][AVX512] Remove vector BROADCAST builtins.
llvm-svn: 274555
2016-07-05 14:49:58 +00:00
Michael Zuckerman bdc5f40dca [LLVM][INTRINSICS] adding intrinsics of CLFLUSHOPT
Differential Revision: http://reviews.llvm.org/D21789

llvm-svn: 274553
2016-07-05 14:42:12 +00:00
Sam Kolton a9cd6aa895 [AMDGPU] Assembler: Fix parsing error with floating-point literals passed to integer instructions
Differential Revision: http://reviews.llvm.org/D21972

llvm-svn: 274551
2016-07-05 14:01:11 +00:00
Daniel Sanders 976d938c1e [mips][ias] Remove k_PhysReg since it's not possible to create an operand of this kind.
Reviewers: sdardis

Subscribers: dsanders, sdardis, llvm-commits

Differential Revision: http://reviews.llvm.org/D21986

llvm-svn: 274547
2016-07-05 13:38:40 +00:00
James Molloy ae5ff990ae [Thumb] Reapply r272251 with a fix for PR28348 (mk 2)
The important thing I was missing was ensuring newly added constants were kept in topological order. Repositioning the node is correct if the constant is newly added (so it has no topological ordering) but wrong if it already existed - positioning it next in the worklist would break the topological ordering.

Original commit message:
  [Thumb] Select a BIC instead of AND if the immediate can be encoded more optimally negated

  If an immediate is only used in an AND node, it is possible that the immediate can be more optimally materialized when negated. If this is the case, we can negate the immediate and use a BIC instead;

    int i(int a) {
      return a & 0xfffffeec;
    }

  Used to produce:
      ldr r1, [CONSTPOOL]
      ands r0, r1
    CONSTPOOL: 0xfffffeec

  And now produces:
      movs    r1, #255
      adds    r1, #20  ; Less costly immediate generation
      bics    r0, r1

llvm-svn: 274543
2016-07-05 12:37:13 +00:00
Daniel Sanders 7b361a2cc3 Revert r274536: [mips][ias] Don't break apart and reconstruct StringRef's for k_Token. NFC.
It turns out that MSVC requires this.

llvm-svn: 274538
2016-07-05 10:44:24 +00:00
Daniel Sanders b2e0ca8e9c [mips][ias] Don't break apart and reconstruct StringRef's for k_Token. NFC.
llvm-svn: 274536
2016-07-05 10:10:36 +00:00
Nemanja Ivanovic 44513e545f [PowerPC] - Legalize vector types by widening instead of integer promotion
This patch corresponds to review:
http://reviews.llvm.org/D20443

It changes the legalization strategy for illegal vector types from integer
promotion to widening. This only applies to vectors whose element width is a
multiple of a byte, since we have hardware support for vectors with
1, 2, 4, 8 and 16 byte elements.
Integer promotion for vectors is quite expensive on PPC due to the sequence
of breaking apart the vector, extending the elements and reconstituting the
vector. Two of these operations are expensive.
This patch causes between minor and major improvements in performance on most
benchmarks. There are very few benchmarks whose performance regresses. These
regressions can be handled in a subsequent patch with a DAG combine (similar
to how this patch handles int -> fp conversions of illegal vector types).

llvm-svn: 274535
2016-07-05 09:22:29 +00:00
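To illustrate the two strategies contrasted above (an informal example using clang vector extensions, not from the patch): for an illegal type such as v4i8, widening pads the vector out to a legal width while keeping the i8 elements, whereas integer promotion would extend every element to a wider integer type and repack, which is the expensive sequence on PPC.

    typedef unsigned char v4i8 __attribute__((vector_size(4)));

    // An illegal v4i8 add: with widening this is legalized as (roughly) a
    // byte-element add on the low lanes of a legal-width vector; with
    // promotion each element would first be extended to a wider integer,
    // added, and truncated back.
    v4i8 add4(v4i8 a, v4i8 b) {
      return a + b;
    }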
Tom Stellard 4a105d73a9 AMDGPU/R600: Add PatFrags for selecting the correct vtx id for loads
This moves the r600 logic out of isGlobalLoad() and into the
TableGen files.

Differential Revision: http://reviews.llvm.org/D21710

llvm-svn: 274527
2016-07-05 00:12:51 +00:00
Tom Stellard 17a0ec5400 AMDGPU/SI: Remove hack for selecting < 32-bit loads to MUBUF instructions
Summary:
The isGlobalLoad() query was returning true for constant address space loads
with memory types less than 32-bits, which is wrong.  This logic has been
replaced with PatFrag in the TableGen files, to provide the same functionality.

Reviewers: arsenm

Subscribers: arsenm, kzhuravl, llvm-commits

Differential Revision: http://reviews.llvm.org/D21696

llvm-svn: 274521
2016-07-04 20:41:48 +00:00
Simon Pilgrim 3ad040909a [X86][AVX512] Add support for lowering shuffles to VSHUFPD
llvm-svn: 274520
2016-07-04 20:41:24 +00:00
Craig Topper 5d16cd9d63 [AVX512] Remove masked VPERMD/VPERMQ/VPERMILPS/VPERMILPD intrinsics. They were autoupgraded to native IR in r274506 and r274506.
llvm-svn: 274519
2016-07-04 19:58:38 +00:00
Jan Vesely 991dfd7b07 AMDGPU/R600: Add indentation to VTX and TEX fetch asm strings
These are printed as part of Fetch clauses.

Differential Revision: http://reviews.llvm.org/D21730

llvm-svn: 274517
2016-07-04 19:45:00 +00:00
James Molloy c3b4ed4a70 Revert "[Thumb] Reapply r272251 with a fix for PR28348"
This reverts commit r274510 - it made green dragon unhappy.

llvm-svn: 274512
2016-07-04 17:14:24 +00:00
James Molloy 9f019835ef [Thumb] Reapply r272251 with a fix for PR28348
We were using DAG->getConstant instead of DAG->getTargetConstant. This meant that we could inadvertently increase the use count of a constant if the stars aligned, which they did in this testcase. Increasing the use count of the constant could cause ISel to fall over (because DAGToDAG lowering assumed the constant had only one use!)

Original commit message:
  [Thumb] Select a BIC instead of AND if the immediate can be encoded more optimally negated

  If an immediate is only used in an AND node, it is possible that the immediate can be more optimally materialized when negated. If this is the case, we can negate the immediate and use a BIC instead;

    int i(int a) {
      return a & 0xfffffeec;
    }

  Used to produce:
      ldr r1, [CONSTPOOL]
      ands r0, r1
    CONSTPOOL: 0xfffffeec

  And now produces:
      movs    r1, #255
      adds    r1, #20  ; Less costly immediate generation
      bics    r0, r1

llvm-svn: 274510
2016-07-04 16:35:41 +00:00
Simon Pilgrim c804751a18 [X86] Add shuffle mask rescaling helper function. NFCI.
llvm-svn: 274476
2016-07-03 21:28:17 +00:00
Simon Pilgrim 8e84fcf118 [X86][AVX2] Merge unary permute matching behind the same V2.isUndef() condition. NFCI.
llvm-svn: 274474
2016-07-03 20:39:42 +00:00
Simon Pilgrim 7f096de0b8 [X86][AVX512] Add support for 512-bit shuffle lowering to VPERMPD/VPERMQ
llvm-svn: 274473
2016-07-03 19:50:06 +00:00
Simon Pilgrim 68ea80649b [X86][AVX512] Add support for VPERMPD/VPERMQ masked shuffle comments
llvm-svn: 274469
2016-07-03 18:40:24 +00:00
Simon Pilgrim a0d73835b2 [X86][AVX512] Add support for 512-bit shuffle decoding of VPERMPD/VPERMQ
llvm-svn: 274468
2016-07-03 18:27:37 +00:00
Simon Pilgrim 5080e7f56c [X86][AVX] Renamed VPERMILPI shuffle comment macros to be more specific
llvm-svn: 274467
2016-07-03 18:02:43 +00:00
Simon Pilgrim dbd6db0dc7 [X86][AVX512] Add support for VPALIGNR/PSHUFD/PSHUFHW/PSHUFLW masked shuffle comments
llvm-svn: 274466
2016-07-03 15:00:51 +00:00
Simon Pilgrim 598bdb6bfe [X86][AVX512] Add support for UNPCK masked shuffle comments
llvm-svn: 274464
2016-07-03 14:26:21 +00:00
Simon Pilgrim 1f59076196 [X86][AVX512] Add support for VPERM/VSHUF masked shuffle comments
llvm-svn: 274462
2016-07-03 13:55:41 +00:00
Simon Pilgrim 68f438a036 [X86][AVX512] Add support for PMOVZX masked shuffle comments
llvm-svn: 274461
2016-07-03 13:33:28 +00:00
Simon Pilgrim 7c2fbdc101 [X86][AVX512] Add support for masked shuffle comments
This patch adds support for including the avx512 mask register information in the mask/maskz versions of shuffle instruction comments.

This initial version just adds support for MOVDDUP/MOVSHDUP/MOVSLDUP to reduce the mass of test regenerations; other shuffle instructions can be added in due course.

Differential Revision: http://reviews.llvm.org/D21953

llvm-svn: 274459
2016-07-03 13:08:29 +00:00
Simon Pilgrim 129b720c18 [X86][AVX512] Add support for lowering shuffles to VPERMILPS
llvm-svn: 274458
2016-07-03 12:47:21 +00:00
Simon Pilgrim a7329dac6f Fix spelling.
llvm-svn: 274451
2016-07-02 20:21:39 +00:00
Simon Pilgrim 99e8a1aa0b [X86][AVX512] Add support for lowering shuffles to VPERMILPD
llvm-svn: 274450
2016-07-02 20:20:12 +00:00
Simon Pilgrim cde7c54baa [X86][AVX512] Add support for 512-bit PSHUFB lowering
llvm-svn: 274444
2016-07-02 18:14:31 +00:00
Simon Pilgrim 77dda7c2e0 [X86][AVX512] Converted the MOVDDUP/MOVSLDUP/MOVSHDUP masked intrinsics to generic IR
llvm-svn: 274443
2016-07-02 17:16:41 +00:00
Benjamin Kramer 4d9d2cc77f [Hexagon] Create global std::map lazily.
This could of course be a simple binary search with no global state
involved at all if someone cares enough. Just don't make everyone
linking the hexagon backend pay for it on process startup and shutdown.

llvm-svn: 274437
2016-07-02 13:05:12 +00:00
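A minimal sketch of the lazy-construction pattern suggested above; the table name and contents are made up, not the Hexagon code:

    // Hypothetical example: a function-local static is constructed on first
    // use, so merely linking the backend costs nothing at startup/shutdown.
    #include <map>
    #include <string>

    static const std::map<std::string, unsigned> &getOpcodeTable() {
      static const std::map<std::string, unsigned> Table = {
          {"add", 0}, {"sub", 1}, {"mul", 2}}; // made-up contents
      return Table;
    }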
Simon Pilgrim f040d8c061 [X86][AVX512] Add support for lowering shuffles to MOVDDUP/MOVSLDUP/MOVSHDUP
llvm-svn: 274436
2016-07-02 12:45:03 +00:00
Benjamin Kramer 3bc1edf95b Use arrays or initializer lists to feed ArrayRefs instead of SmallVector where possible.
No functionality change intended.

llvm-svn: 274431
2016-07-02 11:41:39 +00:00
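A small sketch of the pattern, assuming a callee that takes llvm::ArrayRef; the helper names are invented:

    // A local array or braced list binds to ArrayRef directly, so no
    // SmallVector needs to be materialized first.
    #include "llvm/ADT/ArrayRef.h"
    #include "llvm/ADT/SmallVector.h"

    static int sum(llvm::ArrayRef<int> Vals) {
      int S = 0;
      for (int V : Vals)
        S += V;
      return S;
    }

    static void demo() {
      llvm::SmallVector<int, 4> Tmp = {1, 2, 3}; // old style: owning temporary
      (void)sum(Tmp);
      int Vals[] = {1, 2, 3};                    // new style: plain array...
      (void)sum(Vals);
      (void)sum({4, 5, 6});                      // ...or an initializer list
    }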
Marcin Koscielnicki 32e8734e41 [SystemZ] Move misplaced SystemZ::TDC to non-memory opcode range.
llvm-svn: 274417
2016-07-02 02:20:40 +00:00
Matt Arsenault 7f681ac7a9 AMDGPU: Add feature for unaligned access
llvm-svn: 274398
2016-07-01 23:03:44 +00:00
Matt Arsenault 8af47a09e5 AMDGPU: Expand unaligned accesses early
Due to visit order problems, in the case of an unaligned copy
the legalized DAG fails to eliminate extra instructions introduced
by the expansion of both unaligned parts.

llvm-svn: 274397
2016-07-01 22:55:55 +00:00
Matt Arsenault 327bb5ad82 AMDGPU: Improve load/store of illegal types.
There was a combine before to handle the simple copy case.
Split this into handling loads and stores separately.

We might want to change how this handles some of the vector
extloads, since this can result in large code size increases.

llvm-svn: 274394
2016-07-01 22:47:50 +00:00
Krzysztof Parzyszek 1bba89612b [Hexagon] Revert r274381: that was actually wrong
llvm-svn: 274384
2016-07-01 20:45:19 +00:00
Krzysztof Parzyszek a17250d8e0 [Hexagon] Use MachineOperand::readsReg instead of isUse
llvm-svn: 274381
2016-07-01 20:28:30 +00:00
Matt Arsenault 105c2a204c AMDGPU/SI: Enable testing several variants for si scheduler
Enable testing different scheduling variants if sgpr usage
is very high. It was previously disabled because of a bug
in handleMove, but it has been fixed since.

Patch by Axel Davy

llvm-svn: 274372
2016-07-01 18:03:46 +00:00
Hans Wennborg a3bb5f1594 Revert r274347 "[ARM] Refactor Thumb2 mul instruction descs"
This caused PR28387: Assertion "#operands for dag node doesn't match .td file!"

llvm-svn: 274367
2016-07-01 17:26:42 +00:00
Dehao Chen ad2b4e1334 Do not count debug instructions when counting number of uses to reorder frame objects.
Summary: The code generation should be independent of the debug info.

Reviewers: zansari, davidxl, mkuper, majnemer

Subscribers: majnemer, llvm-commits

Differential Revision: http://reviews.llvm.org/D21911

llvm-svn: 274357
2016-07-01 15:40:25 +00:00
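A rough sketch of the idea above; the helper below is invented, not the pass code, but it shows the shape of counting frame-index uses while skipping debug instructions so that -g cannot change stack layout:

    // Invented helper: counts non-debug uses of one frame index.
    #include "llvm/CodeGen/MachineFunction.h"
    #include "llvm/CodeGen/MachineInstr.h"

    static unsigned countNonDebugUses(const llvm::MachineFunction &MF,
                                      int FrameIdx) {
      unsigned Count = 0;
      for (const llvm::MachineBasicBlock &MBB : MF)
        for (const llvm::MachineInstr &MI : MBB) {
          if (MI.isDebugValue())       // DBG_VALUE must not affect layout
            continue;
          for (const llvm::MachineOperand &MO : MI.operands())
            if (MO.isFI() && MO.getIndex() == FrameIdx)
              ++Count;
        }
      return Count;
    }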
Sam Parker 06692203ed [ARM] Refactor Thumb2 mul instruction descs
No functional changes. Just created wrapper classes around the 3-register
and 4-register mult and mac instruction classes.

Differential Revision: http://reviews.llvm.org/D21549

llvm-svn: 274347
2016-07-01 12:55:49 +00:00
Nikolay Haustov beb24f5b20 Resubmit r268719 - AMDGPU/SI: Add amdgpu_kernel calling convention. Part 2.
This was reverted in r268740 because of problems with the corresponding Clang change.
The Clang change was updated and resubmitted in r274220.

Check calling convention in AMDGPUMachineFunction::isKernel

This will be used for AMDGPU_HSA_KERNEL symbol type in output ELF.

Also, in the future unused non-kernels may be optimized.

Reviewers: tstellarAMD, arsenm

Subscribers: arsenm, joker.eph, llvm-commits

Differential Revision: http://reviews.llvm.org/D19917

llvm-svn: 274341
2016-07-01 10:00:58 +00:00
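A hedged sketch of a calling-convention check of this kind; whether isKernel tests exactly these conventions is an assumption, so treat the list as illustrative:

    // Illustrative check, not necessarily the exact set used by
    // AMDGPUMachineFunction::isKernel.
    #include "llvm/IR/CallingConv.h"
    #include "llvm/IR/Function.h"

    static bool looksLikeKernel(const llvm::Function &F) {
      switch (F.getCallingConv()) {
      case llvm::CallingConv::AMDGPU_KERNEL:
      case llvm::CallingConv::SPIR_KERNEL:
        return true;
      default:
        return false;
      }
    }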
Sam Kolton 5196b88f07 [AMDGPU] Assembler: support SDWA for VOPC instructions
Summary: dst_sel and dst_unused are disabled for VOPC as they have no effect on the result.

Reviewers: artem.tamazov, tstellarAMD, vpykhtin

Subscribers: arsenm, kzhuravl

Differential Revision: http://reviews.llvm.org/D21376

llvm-svn: 274340
2016-07-01 09:59:21 +00:00
NAKAMURA Takumi 566597330a Update libdeps; AMDGPUCodeGen requires LLVMVectorize.
llvm-svn: 274339
2016-07-01 09:55:23 +00:00
Craig Topper 2bd8b4b180 [CodeGen,Target] Remove the version of DAG.getVectorShuffle that takes a pointer to a mask array. Convert all callers to use the ArrayRef version. No functional change intended.
For the most part this simplifies all callers. There were two places in X86 that needed an explicit makeArrayRef to shorten a statically sized array.

llvm-svn: 274337
2016-07-01 06:54:47 +00:00
Matt Arsenault 908b9e26a6 AMDGPU: Add option to run the load/store vectorizer
llvm-svn: 274329
2016-07-01 03:33:52 +00:00
Duncan P. N. Exon Smith d26fdc83c9 CodeGen: Use MachineInstr& in LiveVariables API, NFC
Change all the methods in LiveVariables that expect non-null
MachineInstr* to take MachineInstr& and update the call sites.  This
clarifies the API, and designs away a class of iterator to pointer
implicit conversions.

llvm-svn: 274319
2016-07-01 01:51:32 +00:00
Matt Arsenault 0994bd57fb AMDGPU: Implement getLoadStoreVecRegBitWidth
llvm-svn: 274312
2016-07-01 00:56:27 +00:00
Duncan P. N. Exon Smith 632987296f Target: Remove unused arguments from overrideSchedPolicy, NFC
TargetSubtargetInfo::overrideSchedPolicy takes two MachineInstr*
arguments (begin and end) that invite implicit conversions from
MachineInstrBundleIterator.  One option would be to change their type to
an iterator, but since they don't seem to have been used since the API
was added in 2010, I'm deleting the dead code.

llvm-svn: 274304
2016-07-01 00:23:27 +00:00
Duncan P. N. Exon Smith e4f5e4f4d1 CodeGen: Use MachineInstr& in TargetLowering, NFC
This is a mechanical change to make TargetLowering API take MachineInstr&
(instead of MachineInstr*), since the argument is expected to be a valid
MachineInstr.  In one case, changed a parameter from MachineInstr* to
MachineBasicBlock::iterator, since it was used as an insertion point.

As a side effect, this removes a bunch of MachineInstr* to
MachineBasicBlock::iterator implicit conversions, a necessary step
toward fixing PR26753.

llvm-svn: 274287
2016-06-30 22:52:52 +00:00
David L Kreitzer 29711c0d83 Test commit.
llvm-svn: 274284
2016-06-30 21:43:11 +00:00
Matt Arsenault c1142725bd AMDGPU: Add m0 vgpr load loop block as successor
This shows up as a verifier error when I move this
earlier; I'm not sure why it didn't before.

llvm-svn: 274275
2016-06-30 20:49:28 +00:00
Rafael Espindola d86e8bb0ed Delete MCCodeGenInfo.
MC doesn't really care about CodeGen stuff, so this was just
complicating target initialization.

llvm-svn: 274258
2016-06-30 18:25:11 +00:00
Elliot Colp bda4cb6091 Test commit
llvm-svn: 274232
2016-06-30 14:42:47 +00:00
Rafael Espindola 222a9d09f3 Don't repeat names in comments. NFC.
llvm-svn: 274226
2016-06-30 12:44:52 +00:00
Rafael Espindola db6bd02185 Delete unused includes. NFC.
llvm-svn: 274225
2016-06-30 12:19:16 +00:00
Jonas Paulsson 25e193da4c [SystemZ] Let z13 also support FeatureMiscellaneousExtensions.
This processor feature had been left out of the z13 ProcessorModel
by mistake.

This time with updated test case. Thanks, Hans.

Reviewed by Ulrich Weigand.

llvm-svn: 274216
2016-06-30 07:13:56 +00:00
Pankaj Gode f4b25547cf [AArch64] Add Broadcom Vulcan scheduling model.
Adding scheduling model for new Broadcom Vulcan core (ARMv8.1A).

Differential Revision: http://reviews.llvm.org/D21728

llvm-svn: 274213
2016-06-30 06:42:31 +00:00
Craig Topper bc56e3ba53 Use ShuffleVectorSDNode::isSplat member method instead of static method isSplatMask where the mask came directly from getMask() on a shuffle node.
llvm-svn: 274208
2016-06-30 04:38:51 +00:00
Marcin Koscielnicki 68747ac78e [SystemZ] Split up PerformDAGCombine. [NFC]
This function is already a bit too long, and I'm about to make it worse.

llvm-svn: 274191
2016-06-30 00:08:54 +00:00
Duncan P. N. Exon Smith 9cfc75c214 CodeGen: Use MachineInstr& in TargetInstrInfo, NFC
This is mostly a mechanical change to make TargetInstrInfo API take
MachineInstr& (instead of MachineInstr* or MachineBasicBlock::iterator)
when the argument is expected to be a valid MachineInstr.  This is a
general API improvement.

Although it would be possible to do this one function at a time, that
would demand a quadratic amount of churn since many of these functions
call each other.  Instead I've done everything as a block and just
updated what was necessary.

This is mostly mechanical fixes: adding and removing `*` and `&`
operators.  The only non-mechanical change is to split
ARMBaseInstrInfo::getOperandLatencyImpl out from
ARMBaseInstrInfo::getOperandLatency.  Previously, the latter took a
`MachineInstr*` which it updated to the instruction bundle leader; now,
the latter calls the former either with the same `MachineInstr&` or the
bundle leader.

As a side effect, this removes a bunch of MachineInstr* to
MachineBasicBlock::iterator implicit conversions, a necessary step
toward fixing PR26753.

Note: I updated WebAssembly, Lanai, and AVR (despite being
off-by-default) since it turned out to be easy.  I couldn't run tests
for AVR since llc doesn't link with it turned on.

llvm-svn: 274189
2016-06-30 00:01:54 +00:00
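A tiny sketch of the signature style this series moves toward; the function below is invented, not a TargetInstrInfo method, and only illustrates how a reference documents the "must be valid" contract and blocks the iterator-to-pointer implicit conversion:

    // Invented example.
    #include "llvm/CodeGen/MachineInstr.h"

    // Before (sketch): unsigned latencyOf(const MachineInstr *MI);
    // After: the reference makes the non-null contract explicit.
    static unsigned latencyOf(const llvm::MachineInstr &MI) {
      return MI.isTransient() ? 0 : 1; // no null check needed here
    }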
Artem Belevich 4d5d7be8cc Revert r273313 "[NVPTX] Improve lowering of byval args of device functions."
The change causes llvm crash in some unoptimized builds.

llvm-svn: 274163
2016-06-29 20:51:15 +00:00
Nirav Dave 8e10380b73 Permit memory operands in ins/outs instructions
[x86] (PR15455) While (ins|outs)[bwld] instructions do not take %dx as a
memory operand, various unofficial references write them that way and objdump
disassembles to this format. Extend the special treatment already given to the
similar (in|out)[bwld] operations.

Reviewers: craig.topper, rnk, ab

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D18837

llvm-svn: 274152
2016-06-29 19:54:27 +00:00
Nico Weber 12fdf60b75 Revert r272251, it caused PR28348.
llvm-svn: 274141
2016-06-29 17:33:41 +00:00
Ahmed Bougacha 15a2f6d58c [X86] Lower blended PACKUSes using appropriate types.
When lowering two blended PACKUS, we used to disregard the types
of the PACKUS inputs, indiscriminately generating a v16i8 PACKUS.

This leads to non-selectable things like:
    (v16i8 (PACKUS (v4i32 v0), (v4i32 v1)))

Instead, check that the PACKUSes have the same type, and use that
as the final result type.

llvm-svn: 274138
2016-06-29 16:56:09 +00:00
Rafael Espindola a99ccfce1a Drop support for creating $stubs.
They have been created by ld64 itself since OS X 10.5.

llvm-svn: 274130
2016-06-29 14:59:50 +00:00
Marcin Koscielnicki 518cbc7cc3 [SystemZ] Add floating-point test data class instructions.
These are not used by CodeGen yet - ISD combiners creating the new node
will come in subsequent patches.

llvm-svn: 274108
2016-06-29 07:29:07 +00:00
Weiming Zhao 5410edddb1 [ARM] Fix 28282: cost computation for constant hoisting
Summary:
This fixes bug: https://llvm.org/bugs/show_bug.cgi?id=28282

Currently the cost model for constant hoisting checks the bit width of the data type of the constants.
However, the actual immediate value may be small enough that it does not need to be hoisted.
This patch checks the actual bit width needed by the constant instead.

Reviewers: t.p.northover, rengolin

Subscribers: aemerson, rengolin, llvm-commits

Differential Revision: http://reviews.llvm.org/D21668

llvm-svn: 274073
2016-06-28 22:30:45 +00:00
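A small sketch of the distinction, with an invented helper: the width that matters is the number of bits the value actually needs, not the width of its type.

    // Invented helper: a 64-bit-typed constant such as 42 only needs a few
    // bits and is cheap to materialize, so it should not be hoisted.
    #include "llvm/ADT/APInt.h"

    static unsigned bitsActuallyNeeded(const llvm::APInt &Imm) {
      return Imm.getMinSignedBits(); // value-based width, not type width
    }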
Dehao Chen 8cd84aaa6f Relax the clearance calculation for breaking partial register dependencies.
Summary: LLVM assumes that a large clearance will hide the partial register dependency penalty. But in our experiments, a clearance of 16 is too small. As the inserted XOR is normally fairly cheap, we should use a higher clearance threshold to more aggressively insert the XORs that are necessary to break partial register dependencies.

Reviewers: wmi, davidxl, stoklund, zansari, myatsina, RKSimon, DavidKreitzer, mkuper, joerg, spatel

Subscribers: davidxl, llvm-commits

Differential Revision: http://reviews.llvm.org/D21560

llvm-svn: 274068
2016-06-28 21:19:34 +00:00
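A rough sketch of the policy, with a made-up threshold value: a dependency-breaking XOR is inserted only when the clearance to the last write is below the threshold, so raising the threshold inserts XORs more aggressively.

    // Made-up constant and helper; the in-tree threshold lives in the
    // execution-dependency-fix logic.
    static const unsigned ClearanceThreshold = 64; // illustrative value

    static bool shouldBreakPartialRegDependency(unsigned Clearance) {
      return Clearance < ClearanceThreshold;
    }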
Zhan Jun Liau 347db3e18e [SystemZ] Use NILL instruction instead of NILF where possible
Summary: SystemZ shift instructions only use the low 6 bits of the shift
amount. When the result of an AND operation is used as a shift amount, this
means that we can use the NILL instruction (which operates on the low 16 bits)
rather than NILF (which operates on the low 32 bits) for a 16-bit savings in
instruction size.

Reviewers: uweigand

Subscribers: llvm-commits

Author: colpell
Committing on behalf of Elliot.

Differential Revision: http://reviews.llvm.org/D21686

llvm-svn: 274066
2016-06-28 21:03:19 +00:00
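A short C-style illustration of why the narrower mask is enough, under the assumption stated in the summary that only the low 6 bits of the shift amount are consumed:

    // Since the shift reads only Amt & 63, masking with a 16-bit immediate
    // (NILL) preserves the result just as well as a 32-bit one (NILF).
    static unsigned long shiftLeftMasked(unsigned long X, unsigned Amt) {
      return X << (Amt & 63); // 63 fits comfortably in the low 16 bits
    }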
Matthias Braun 0b9a07883d X86FrameLowering: Check subregs when deciding prolog kill flags
llvm-svn: 274057
2016-06-28 20:31:56 +00:00
Rafael Espindola b1556c42ce Use isPositionIndependent in a few more places.
I think this converts all the simple cases that really just care about
the generated code being position independent or not. The remaining
uses are a bit more complicated and are checking things like "is this
a library or executable" or "can this symbol be preempted".

llvm-svn: 274055
2016-06-28 20:13:36 +00:00