Commit Graph

44762 Commits

Matt Arsenault 57c37b2dcd AMDGPU: Fix producing saveexec when the copy is spilled
If the register from the copy from exec was spilled,
the copy before the spill was deleted, leaving a verifier
error about a spill of an undefined register and a miscompile.
Check for other use instructions of the copy register.
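
For illustration, a minimal sketch of such a use check (hedged; 'MRI',
'CopyReg' and 'SaveExec' are assumed names, not the actual code):

  // Before folding the copy from exec into a saveexec, make sure nothing
  // else (e.g. a spill inserted by the register allocator) uses the
  // copy's destination register.
  for (MachineInstr &UseMI : MRI.use_nodbg_instructions(CopyReg))
    if (&UseMI != &SaveExec)
      return; // another user exists; deleting the copy would orphan it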

llvm-svn: 318132
2017-11-14 02:16:54 +00:00
Hans Wennborg 08b34a017a Update some code.google.com links
llvm-svn: 318115
2017-11-13 23:47:58 +00:00
Matt Arsenault 4b7938c658 AMDGPU: Fix not converting d16 load/stores to offset
Fixes missed optimization with new MUBUF instructions.

llvm-svn: 318106
2017-11-13 23:24:26 +00:00
Matt Arsenault 4eea3f3da3 AMDGPU: Implement computeKnownBitsForTargetNode for mbcnt
llvm-svn: 318100
2017-11-13 22:55:05 +00:00
Evgeniy Stepanov 76d5ac4906 [arm] Fix Unnecessary reloads from GOT.
Summary:
This fixes PR35221.
Use pseudo-instructions to let MachineCSE hoist global address computation.

Subscribers: aemerson, javed.absar, kristof.beyls, llvm-commits, hiraditya

Differential Revision: https://reviews.llvm.org/D39871

llvm-svn: 318081
2017-11-13 20:45:38 +00:00
Craig Topper c314f461dd [X86] Allow X86ISD::Wrapper to be folded into the base of gather/scatter address
If the base of our gather corresponds to something contained in X86ISD::Wrapper we should be able to fold it into the address.

This patch refactors some of the address matching to more fully use the X86ISelAddressMode struct and the getAddressOperands helper. A new helper function matchVectorAddress is added to call matchWrapper or fall back to matchAddressBase.

We should also be able to support constant offsets from a wrapper, but I'll look into that in a future patch. We may even be able to completely reuse matchAddress here, but I wanted to start simple and work up to it.
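
A minimal sketch of the dispatch described above (hedged; ISel matchers
return false on success, and the body here is a paraphrase rather than
the exact code):

  bool X86DAGToDAGISel::matchVectorAddress(SDValue N, X86ISelAddressMode &AM) {
    if (N.getOpcode() == X86ISD::Wrapper)
      if (!matchWrapper(N, AM))
        return false; // matched the wrapped global/constant-pool base
    return matchAddressBase(N, AM);
  }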

Differential Revision: https://reviews.llvm.org/D39927

llvm-svn: 318057
2017-11-13 17:53:59 +00:00
Jan Vesely b17f32040c AMDGPU: Drop duplicate setOperationAction
These are set with other scalar int ops a few lines up

Differential Revision: https://reviews.llvm.org/D39928

llvm-svn: 318051
2017-11-13 16:46:07 +00:00
Uriel Korach 2aa707bdaa [X86] test/testn intrinsics lowering to IR. llvm part.
Remove builtins from llvm and add AutoUpgrade support.
Also add fast-isel tests for the TEST and TESTN instructions.

Differential Revision: https://reviews.llvm.org/D38736

llvm-svn: 318036
2017-11-13 12:51:18 +00:00
Momchil Velikov 842aa90192 [ARM] Place jump table as the first operand in additions
When generating table jump code for switch statements, place the jump
table label as the first operand in the various addition instructions
in order to enable addressing mode selectors to better match index
computation and possibly fold them into the addressing mode of the
table entry load instruction.
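
As a hedged sketch (all names here are assumptions for illustration),
the reordering amounts to emitting the jump-table address as the first
ADD operand:

  // Jump-table label first, index computation second, so addressing-mode
  // selection sees (table base + index) and can fold the index.
  SDValue Addr = DAG.getNode(ISD::ADD, DL, PTy, JTAddr, ScaledIndex);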

Differential revision: https://reviews.llvm.org/D39752

llvm-svn: 318033
2017-11-13 11:56:48 +00:00
Sander de Smalen 070a7ff1ad Test commit
llvm-svn: 318027
2017-11-13 09:57:20 +00:00
Jina Nahias 9a7f9f123c [x86][AVX512] Lowering shuffle i/f intrinsics to LLVM IR
This patch, together with a matching clang patch (https://reviews.llvm.org/D38672), implements the lowering of X86 shuffle i/f intrinsics to IR.

Differential Revision: https://reviews.llvm.org/D38671

Change-Id: I1e7d359a74743e995ec356237a85214ce55d3661
llvm-svn: 318026
2017-11-13 09:16:39 +00:00
Gadi Haber c9f2300652 [X86][SKX] Adding scheduling info of non-intrinsic + commutable SKX opcodes.
Updated the scheduling information of the SKX subtarget in the file X86SchedSkylakeServer.td under lib/Target/X86 to:
1. add regular opcodes in addition to the suffixed "_Int" opcodes
2. add the (V)MAXCPD/MAXCPS/MAXCSD/MAXCSS/MINCPD/MINCPS/MINCSD/MINCSS
    instructions that are equivalent to their counterparts without the 'C' as they are part of a hack to
    make floating point min/max commutable under fast math.

Reviewers: zvi, RKSimon, craig.topper
Differential Revision: https://reviews.llvm.org/D39833

Change-Id: Ie13702a5ce1b1a08af91ca637a52b6962881e7d6
llvm-svn: 318024
2017-11-13 08:42:07 +00:00
Craig Topper 1af2adb9f3 [X86] Limit NOPs to 7 bytes when 'slm' is spelled 'silvermont'.
We support two spellings for silvermont and we should accept both here.

llvm-svn: 318023
2017-11-13 08:17:30 +00:00
Craig Topper 75d71540f8 [X86] Use sse_load_f32/f64 to improve load folding of scalar vscalefss/sd, vrcp14ss/sd, vrsqrt14ss/sd instructions.
llvm-svn: 318022
2017-11-13 08:07:33 +00:00
Craig Topper ca8abedb2a [X86] Use sse_load_f32/f64 to improve load folding for scalar VFPCLASS intrinsics.
llvm-svn: 318019
2017-11-13 06:46:48 +00:00
Matt Arsenault e5e0c742df AMDGPU: Preserve nuw in shl add ptr combine
llvm-svn: 318017
2017-11-13 05:33:35 +00:00
Craig Topper d4f6094091 [X86] Fix SQRTSS/SQRTSD/RCPSS/RCPSD intrinsics to use sse_load_f32/sse_load_f64 to increase load folding opportunities.
llvm-svn: 318016
2017-11-13 05:25:24 +00:00
Matt Arsenault fbe9533509 AMDGPU: Fix multi-use shl/add combine
This was using a custom function that didn't handle the
addressing modes properly for private. Use
isLegalAddressingMode to avoid duplicating this.

Additionally, skip the combine if there is only one use,
since the standard combine will handle it.
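
A minimal sketch of the replacement check, assuming 'TLI', 'Offset',
'Ty' and 'AS' are in scope:

  // Ask the target hook instead of a bespoke offset check, so private
  // address space limits are respected.
  TargetLowering::AddrMode AM;
  AM.HasBaseReg = true;
  AM.BaseOffs = Offset;
  if (!TLI.isLegalAddressingMode(DAG.getDataLayout(), AM, Ty, AS))
    return SDValue(); // offset not legal here; skip the combine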

llvm-svn: 318013
2017-11-13 05:11:54 +00:00
Craig Topper 23493f3777 [X86] Attempt to fix signed and unsigned comparison warning.
llvm-svn: 318010
2017-11-13 02:19:13 +00:00
Craig Topper deee24b83c [X86] Use sse_load_f32/f64 in patterns for the memory forms of VRNDSCALESS/SD.
llvm-svn: 318009
2017-11-13 02:03:01 +00:00
Craig Topper 63157c4784 [X86] Use EVEX encoded VRNDSCALE instructions to implement the legacy round intrinsics.
The VRNDSCALE instructions implement a superset of the (V)ROUND instructions. They are equivalent if the upper 4 bits of the immediate are 0.

This patch lowers the legacy intrinsics to the VRNDSCALE ISD node and masks the upper bits of the immediate to 0. This allows us to take advantage of the larger register encoding space.
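
A hedged sketch of that masking step, assuming 'Imm' holds the legacy
round immediate:

  // imm[7:4] must be zero for VRNDSCALE to behave exactly like (V)ROUND.
  SDValue RC = DAG.getTargetConstant(Imm & 0xf, DL, MVT::i32);
  return DAG.getNode(X86ISD::VRNDSCALE, DL, VT, Op.getOperand(1), RC);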

We should maybe consider converting VRNDSCALE back to VROUND in the EVEX to VEX pass if the extended registers are not being used.

I notice some load folding opportunities being missed for the VRNDSCALESS/SD instructions that I'll try to fix in future patches.

llvm-svn: 318008
2017-11-13 02:03:00 +00:00
Craig Topper 0af48f1ad4 [X86] Split VRNDSCALE/VREDUCE/VGETMANT/VRANGE ISD nodes into versions with and without the rounding operand. NFCI
I want to reuse the VRNDSCALE node for the legacy SSE rounding intrinsics so that those intrinsics can use EVEX instructions. All of these nodes share tablegen multiclasses so I split them all so that they all remain similar in their implementations.

llvm-svn: 318007
2017-11-13 02:02:58 +00:00
Matt Arsenault e1cd482fda AMDGPU: Select d16 loads into low component of register
llvm-svn: 318005
2017-11-13 00:22:09 +00:00
Craig Topper b42a23ff8f [X86] Add an X86ISD::RANGES opcode to use for the scalar intrinsics.
This fixes a bug where we selected packed instructions for scalar intrinsics.

llvm-svn: 317999
2017-11-12 18:51:09 +00:00
Craig Topper 1382932c12 [X86] Remove some no longer needed intrinsic lowering code.
llvm-svn: 317997
2017-11-12 18:51:06 +00:00
Mandeep Singh Grang d104673257 [llvm] Remove redundant return [NFC]
Reviewers: davidxl, olista01, Eugene.Zelenko

Reviewed By: Eugene.Zelenko

Subscribers: sdardis, javed.absar, llvm-commits

Differential Revision: https://reviews.llvm.org/D39917

llvm-svn: 317995
2017-11-12 03:47:50 +00:00
Craig Topper ac250825c6 [X86] Use vrndscaleps/pd for 128/256 ffloor/ftrunc/fceil/fnearbyint/frint when avx512vl is enabled.
This matches what we do for scalar and 512-bit types.

llvm-svn: 317991
2017-11-11 21:44:51 +00:00
Simon Pilgrim 294b87b432 [X86] Attempt to match multiple binary reduction ops at once. NFCI
matchBinOpReduction currently matches against a single opcode, but we already have a case where we repeat calls to try to match against AND/OR, and I'll shortly be adding another case for SMAX/SMIN/UMAX/UMIN (D39729).

This NFCI patch alters matchBinOpReduction to try and pattern match against any of the provided list of candidate bin ops at once to save time.
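
A hedged sketch of the new calling convention (the exact signature may
differ; 'Extract' is an assumed name):

  // One call now covers what used to be repeated per-opcode calls; BinOp
  // reports which candidate actually matched.
  unsigned BinOp = 0;
  SDValue Src = matchBinOpReduction(Extract, BinOp, {ISD::AND, ISD::OR});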

Differential Revision: https://reviews.llvm.org/D39726

llvm-svn: 317985
2017-11-11 18:16:55 +00:00
Craig Topper 0ccec70ff5 [X86] Add scalar register class versions of VRNDSCALE instructions and rename the existing versions to _Int.
This is consistent with our normal implementation of scalar instructions.

While there, disable load folding for the patterns with IMPLICIT_DEF unless optimizing for size, which is also our standard practice.

llvm-svn: 317977
2017-11-11 08:24:15 +00:00
Craig Topper 80405076b0 [X86] Inline some SDNode operand multiclass operands that don't vary. NFC
llvm-svn: 317975
2017-11-11 08:24:12 +00:00
Craig Topper 4a63843706 [X86] Set the execution domain for VFPCLASS to SSEPackedSingle/Double.
llvm-svn: 317974
2017-11-11 06:57:44 +00:00
Craig Topper 1a093934a9 [X86] Set the execution domain for vptest instruction to the integer domain.
llvm-svn: 317973
2017-11-11 06:19:12 +00:00
Craig Topper 0eb4a43384 [X86] Correct the execution domain on ROUND/VROUND instructions.
llvm-svn: 317968
2017-11-11 02:26:05 +00:00
Craig Topper bf9b944ea7 [X86] Remove the default for one of the arguments to some tablegen multiclasses. NFC
No one ever uses this default and probably shouldn't since it sets the execution domain to generic.

llvm-svn: 317967
2017-11-11 02:26:02 +00:00
Krzysztof Parzyszek e8926438a9 Recommit r317904: [Hexagon] Create HexagonISelDAGToDAG.h, NFC
The Windows builder did not reconstruct the HexagonGenDAGISel.inc file
after the TableGen binary changed.

llvm-svn: 317921
2017-11-10 20:09:46 +00:00
Konstantin Zhuravlyov 27b0a033d8 AMDGPU/NFC: Split Processors.td into GCNProcessors.td and R600Processors.td
Differential Revision: https://reviews.llvm.org/D39880

llvm-svn: 317920
2017-11-10 20:01:58 +00:00
Krzysztof Parzyszek 79dae95f4a Revert "[Hexagon] Create HexagonISelDAGToDAG.h, NFC"
This reverts r317904: broke Windows build.

llvm-svn: 317916
2017-11-10 19:27:18 +00:00
Craig Topper bb001c6ddc [X86] Merge the template method selectAddrOfGatherScatterNode into selectVectorAddr. NFCI
Just need to initialize a couple variables differently based on the node type. No need for a whole separate template method.

llvm-svn: 317915
2017-11-10 19:26:04 +00:00
Mandeep Singh Grang 5f043ae2e1 [RISCV] Silence an unused variable warning in release builds [NFC]
Summary:
Also minor cleanups:
1. Avoided multiple calls to Fixup.getKind()
2. Avoided multiple calls to getFixupKindInfo()
3. Removed a redundant return.

Reviewers: asb, apazos

Reviewed By: asb

Subscribers: rbar, johnrusso, llvm-commits

Differential Revision: https://reviews.llvm.org/D39881

llvm-svn: 317908
2017-11-10 19:09:28 +00:00
Krzysztof Parzyszek 89765acc6c [Hexagon] Create HexagonISelDAGToDAG.h, NFC
llvm-svn: 317904
2017-11-10 18:39:45 +00:00
Jonas Paulsson 4b017e682d [RegAlloc, SystemZ] Increase number of LOCRs by passing "hard" regalloc hints.
* The method getRegAllocationHints() now returns bool instead of void. If
true is returned, regalloc (AllocationOrder) will *only* try to allocate the
hints, as opposed to merely trying them before non-hinted registers.

* TargetRegisterInfo::getRegAllocationHints() is implemented for SystemZ with
an increase in number of LOCRs.

In this case, it is desired to force the hints even though there is a slight
increase in spilling, because if a non-hinted register would be allocated,
the LOCRMux pseudo would have to be expanded with a jump sequence. The LOCR
(Load On Condition) SystemZ instruction must have both operands in either the
low or high part of the 64 bit register.
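
A minimal sketch of the new contract (hedged; the parameter list here is
abbreviated from memory and may not match exactly):

  bool SystemZRegisterInfo::getRegAllocationHints(
      unsigned VirtReg, ArrayRef<MCPhysReg> Order,
      SmallVectorImpl<MCPhysReg> &Hints, const MachineFunction &MF,
      const VirtRegMap *VRM, const LiveRegMatrix *Matrix) const {
    // ... push LOCRMux-friendly registers into Hints ...
    return true; // "hard" hints: regalloc tries only these
  }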

Reviewers: Quentin Colombet and Ulrich Weigand
https://reviews.llvm.org/D36795

llvm-svn: 317879
2017-11-10 08:46:26 +00:00
Craig Topper 1a0da2db5f [X86] Add support for combining FMADDSUB(A, B, FNEG(C))->FMSUBADD(A, B, C)
Support the opposite direction as well. Also add a TODO for not being able to combine FMSUB/FNMADD/FNMSUB with FNEG.
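
A minimal sketch of the forward direction of the fold (the real combine
also handles the reverse):

  // FMADDSUB(A, B, FNEG(C)) --> FMSUBADD(A, B, C)
  if (N->getOpcode() == X86ISD::FMADDSUB &&
      N->getOperand(2).getOpcode() == ISD::FNEG)
    return DAG.getNode(X86ISD::FMSUBADD, SDLoc(N), N->getValueType(0),
                       N->getOperand(0), N->getOperand(1),
                       N->getOperand(2).getOperand(0));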

llvm-svn: 317878
2017-11-10 08:22:37 +00:00
Yaxun Liu 35845f06a4 [AMDGPU] Fix pointer info for lowering load/store for r600 for amdgiz environment
r600 uses dummy pointer info for lowering load/store. Since dummy pointer info
assumes address space 0, this causes isel failure when temporary load/store SDNodes
are generated for amdgiz environment.

Since the offset is not constant, a FixedStack pseudo source value cannot be used
to create the pointer info. This patch creates the pointer info using an LLVM
undef value. At least this provides the correct address space so that isel can
be done correctly.
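
A hedged sketch of building such pointer info ('Ctx' and 'AllocaAS' are
assumed names):

  // An undef value typed in the right address space gives MachinePointerInfo
  // the correct address space even though the object is unknown.
  MachinePointerInfo PtrInfo(
      UndefValue::get(PointerType::get(Type::getInt8Ty(Ctx), AllocaAS)));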

Differential Revision: https://reviews.llvm.org/D39698

llvm-svn: 317862
2017-11-10 02:03:28 +00:00
Yaxun Liu 920cc2f813 [AMDGPU] Fix pointer info for pseudo source for r600
The pointer info for pseudo source for r600 is not correct when
alloca addr space is not 0, which causes invalid SDNode for r600---amdgiz.

This patch fixes that.

Differential Revision: https://reviews.llvm.org/D39670

llvm-svn: 317861
2017-11-10 01:53:24 +00:00
Ulrich Weigand d39e9dca1b [SystemZ] Add support for the "o" inline asm constraint
We don't really need any special handling of "offsettable"
memory addresses, but since some existing code uses inline
asm statements with the "o" constraint, add support for this
constraint for compatibility purposes.

llvm-svn: 317807
2017-11-09 16:31:57 +00:00
Simon Dardis c2d3e38ba6 [mips] Correct microMIPS's jump and add unconditional branch pseudo
Correct the definition of 'j' as being unavailable for microMIPS32R6 and
provide the 'b' assembly idiom for codegen purposes for microMIPS32r3.

Provide the necessary 'br' pattern for microMIPS32R6 as it no longer
incorrectly uses the 'j' instruction.

Reviewers: atanasyan

Differential Revision: https://reviews.llvm.org/D39741

llvm-svn: 317801
2017-11-09 16:02:18 +00:00
Alex Bradbury 8c345c5aa9 [RISCV] MC layer support for the standard RV32A instruction set extension
llvm-svn: 317791
2017-11-09 15:00:03 +00:00
Alex Bradbury a47514ce3f [RISCV] MC layer support for the standard RV32M instruction set extension
llvm-svn: 317788
2017-11-09 14:46:30 +00:00
Andrew V. Tischenko f8c75b8794 Sched model improving on btver2: JFPU01 resource, vtestp* for xmm.
Differential Revision: https://reviews.llvm.org/D39802

llvm-svn: 317785
2017-11-09 14:19:59 +00:00
Andrew V. Tischenko 3543f0a712 Add -print-schedule scheduling comments to inline asm.
Differential Revision: https://reviews.llvm.org/D39728

llvm-svn: 317782
2017-11-09 12:45:40 +00:00
Craig Topper 5bfa5ffe5e [X86] Give priority to EVEX FMA instructions over FMA4 instructions.
No existing processor has both so it doesn't really matter what we do here. But we were previously just relying on pattern order which gave FMA4 priority.

llvm-svn: 317775
2017-11-09 08:26:26 +00:00
Vitaly Buka bee1964d80 Fix "default label in switch which covers all enumeration values" warning
llvm-svn: 317771
2017-11-09 07:46:13 +00:00
Craig Topper 7a6e294a6c [X86] Make X86ISD::FMADDS3 isel patterns commutable.
This was missed when FMADDS3 was split from X86ISD::FMADDS3_RND.

llvm-svn: 317769
2017-11-09 06:17:05 +00:00
Marek Olsak 58410f37ff AMDGPU: Merge BUFFER_STORE_DWORD_OFFEN/OFFSET into x2, x4
Summary:
Only 56 shaders (out of 48486) are affected.

Totals from affected shaders (changed stats only):
SGPRS: 2420 -> 2460 (1.65 %)
Spilled VGPRs: 94 -> 112 (19.15 %)
Scratch size: 524 -> 528 (0.76 %) dwords per thread
Code Size: 187400 -> 184992 (-1.28 %) bytes

One DiRT Showdown shader spills 6 more VGPRs.
One Grid Autosport shader spills 12 more VGPRs.

The other 54 shaders only have a decrease in code size.
(I'm ignoring the SGPR noise)

Reviewers: arsenm, nhaehnle

Subscribers: kzhuravl, wdng, yaxunl, dstuttard, tpr, llvm-commits, t-tye

Differential Revision: https://reviews.llvm.org/D39012

llvm-svn: 317755
2017-11-09 01:52:55 +00:00
Marek Olsak 5cec64195c AMDGPU: Lower buffer store and atomic intrinsics manually
Summary:
Without this, SIMemoryLegalizer inserts s_waitcnt vmcnt(0) before every
buffer store and atomic instruction.

Reviewers: arsenm, nhaehnle

Subscribers: kzhuravl, wdng, yaxunl, dstuttard, tpr, llvm-commits, t-tye

Differential Revision: https://reviews.llvm.org/D39060

llvm-svn: 317754
2017-11-09 01:52:48 +00:00
Marek Olsak 4c421a2db2 AMDGPU: Merge BUFFER_LOAD_DWORD_OFFSET into x2, x4
Summary: Only 3 (out of 48486) shaders are affected.

Reviewers: arsenm, nhaehnle

Subscribers: kzhuravl, wdng, yaxunl, dstuttard, tpr, t-tye, llvm-commits

Differential Revision: https://reviews.llvm.org/D38951

llvm-svn: 317753
2017-11-09 01:52:36 +00:00
Marek Olsak 6a0548acaa AMDGPU: Merge BUFFER_LOAD_DWORD_OFFEN into x2, x4
Summary:
-9.9% code size decrease in affected shaders.

Totals (changed stats only):
SGPRS: 2151462 -> 2170646 (0.89 %)
VGPRS: 1634612 -> 1640288 (0.35 %)
Spilled SGPRs: 8942 -> 8940 (-0.02 %)
Code Size: 52940672 -> 51727288 (-2.29 %) bytes
Max Waves: 373066 -> 371718 (-0.36 %)

Totals from affected shaders:
SGPRS: 283520 -> 302704 (6.77 %)
VGPRS: 227632 -> 233308 (2.49 %)
Spilled SGPRs: 3966 -> 3964 (-0.05 %)
Code Size: 12203080 -> 10989696 (-9.94 %) bytes
Max Waves: 44070 -> 42722 (-3.06 %)

Reviewers: arsenm, nhaehnle

Subscribers: kzhuravl, wdng, yaxunl, dstuttard, tpr, llvm-commits, t-tye

Differential Revision: https://reviews.llvm.org/D38950

llvm-svn: 317752
2017-11-09 01:52:30 +00:00
Marek Olsak b953cc36e2 AMDGPU: Merge S_BUFFER_LOAD_DWORD_IMM into x2, x4
Summary:
Only constant offsets (*_IMM opcodes) are merged.
It reuses code for LDS load/store merging.
It relies on the scheduler to group loads.

The results are mixed; I think they are mostly positive. Most shaders are
affected, so here are total stats only:

 SGPRS: 2072198 -> 2151462 (3.83 %)
 VGPRS: 1628024 -> 1634612 (0.40 %)
 Spilled SGPRs: 7883 -> 8942 (13.43 %)
 Spilled VGPRs: 97 -> 101 (4.12 %)
 Scratch size: 1488 -> 1492 (0.27 %) dwords per thread
 Code Size: 60222620 -> 52940672 (-12.09 %) bytes
 Max Waves: 374337 -> 373066 (-0.34 %)

There is a 13.4% increase in SGPR spilling, and DiRT Showdown spills a few more
VGPRs (now 37), but there is a 12% decrease in code size.

These are the new stats for SGPR spilling. We already spill a lot of SGPRs,
so it's uncertain whether more spilling will make any difference, since
SGPRs are always spilled to VGPRs:

 SGPR SPILLING APPS   Shaders SpillSGPR AvgPerSh
 alien_isolation         2938       100      0.0
 batman_arkham_origins    589         6      0.0
 bioshock-infinite       1769         4      0.0
 borderlands2            3968        22      0.0
 counter_strike_glob..   1142        60      0.1
 deus_ex_mankind_div..   1410        79      0.1
 dirt-showdown            533         4      0.0
 dirt_rally               364      1163      3.2
 divinity                1052         2      0.0
 dota2                   1747         7      0.0
 f1-2015                  776      1515      2.0
 grid_autosport          1767      1505      0.9
 hitman                  1413       273      0.2
 left_4_dead_2           1762         4      0.0
 life_is_strange         1296        26      0.0
 mad_max                  358        96      0.3
 metro_2033_redux        2670        60      0.0
 payday2                 1362        22      0.0
 portal                   474         3      0.0
 saints_row_iv           1704         8      0.0
 serious_sam_3_bfe        392      1348      3.4
 shadow_of_mordor        1418        12      0.0
 shadow_warrior          3956       239      0.1
 talos_principle          324      1735      5.4
 thea                     172        17      0.1
 tomb_raider             1449       215      0.1
 total_war_warhammer      242        56      0.2
 ue4_effects_cave         295        55      0.2
 ue4_elemental            572        12      0.0
 unigine_tropics          210        56      0.3
 unigine_valley           278       152      0.5
 victor_vran             1262        84      0.1
 yofrankie                 82         2      0.0

Reviewers: arsenm, nhaehnle

Subscribers: kzhuravl, wdng, yaxunl, dstuttard, tpr, llvm-commits, t-tye

Differential Revision: https://reviews.llvm.org/D38949

llvm-svn: 317751
2017-11-09 01:52:23 +00:00
Marek Olsak ffadcb744b AMDGPU: Fold immediate offset into BUFFER_LOAD_DWORD lowered from SMEM
Summary:
-5.3% code size in affected shaders.

Changed stats only:

48486 shaders in 30489 tests
Totals:
SGPRS: 2086406 -> 2072430 (-0.67 %)
VGPRS: 1626872 -> 1627960 (0.07 %)
Spilled SGPRs: 7865 -> 7912 (0.60 %)
Code Size: 60978060 -> 60188764 (-1.29 %) bytes
Max Waves: 374530 -> 374342 (-0.05 %)

Totals from affected shaders:
SGPRS: 299664 -> 285688 (-4.66 %)
VGPRS: 233844 -> 234932 (0.47 %)
Spilled SGPRs: 3959 -> 4006 (1.19 %)
Code Size: 14905272 -> 14115976 (-5.30 %) bytes
Max Waves: 46202 -> 46014 (-0.41 %)

Reviewers: arsenm, nhaehnle

Subscribers: kzhuravl, wdng, yaxunl, dstuttard, tpr, llvm-commits, t-tye

Differential Revision: https://reviews.llvm.org/D38915

llvm-svn: 317750
2017-11-09 01:52:17 +00:00
Craig Topper 93e27d2ecc [X86] Make sure we don't read too many operands from X86ISD::FMADDS1/FMADDS3 nodes when doing FNEG combine.
r317453 added new ISD nodes without rounding modes, which were handled in an existing if/else chain. But all the previous nodes handled there included a rounding mode. The final code after this if/else chain expected an extra operand that isn't present for the new nodes.

llvm-svn: 317748
2017-11-09 01:06:47 +00:00
Craig Topper cfd510678f [X86] X86MaskedGatherSDNode shouldn't inherit from MaskedGatherScatterSDNode
The classof implementation in MaskedGatherScatterSDNode doesn't consider X86MaskedGatherSDNode, so it's misleading.

llvm-svn: 317733
2017-11-08 22:26:41 +00:00
Craig Topper 61f81f9637 [X86] Preserve memory refs when folding loads into divides.
This is similar to what we already do for multiplies. Without this we can't unfold and hoist an invariant load.

llvm-svn: 317732
2017-11-08 22:26:39 +00:00
Craig Topper 55029d811f [X86] Remove an if check on the result of a cast. NFC
cast takes a non-null input and produces a non-null output, so this if can never fail.
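
For illustration (hypothetical snippet): cast<> asserts instead of
returning null, so dyn_cast<> is the form that needs a check:

  auto *LD = cast<LoadSDNode>(N);              // never null; asserts on mismatch
  if (auto *MaybeLD = dyn_cast<LoadSDNode>(N)) // may be null; check is meaningful
    (void)MaybeLD;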

llvm-svn: 317731
2017-11-08 22:26:37 +00:00
Reid Kleckner 7adb2fdbba Revert "Correct dwarf unwind information in function epilogue for X86"
This reverts r317579, originally committed as r317100.

There is a design issue with marking CFI instructions duplicatable. Not
all targets support the CFIInstrInserter pass, and targets like Darwin
can't cope with duplicated prologue setup CFI instructions. The compact
unwind info emission fails.

When the following code is compiled for arm64 on Mac at -O3, the CFI
instructions end up getting tail duplicated, which causes compact unwind
info emission to fail:
  int a, c, d, e, f, g, h, i, j, k, l, m;
  void n(int o, int *b) {
    if (g)
      f = 0;
    for (; f < o; f++) {
      m = a;
      if (l > j * k > i)
        j = i = k = d;
      h = b[c] - e;
    }
  }

We get assembly that looks like this:
; BB#1:                                 ; %if.then
Lloh3:
	adrp	x9, _f@GOTPAGE
Lloh4:
	ldr	x9, [x9, _f@GOTPAGEOFF]
	mov	 w8, wzr
Lloh5:
	str		wzr, [x9]
	stp	x20, x19, [sp, #-16]!   ; 8-byte Folded Spill
	.cfi_def_cfa_offset 16
	.cfi_offset w19, -8
	.cfi_offset w20, -16
	cmp		w8, w0
	b.lt	LBB0_3
	b	LBB0_7
LBB0_2:                                 ; %entry.if.end_crit_edge
Lloh6:
	adrp	x8, _f@GOTPAGE
Lloh7:
	ldr	x8, [x8, _f@GOTPAGEOFF]
Lloh8:
	ldr		w8, [x8]
	stp	x20, x19, [sp, #-16]!   ; 8-byte Folded Spill
	.cfi_def_cfa_offset 16
	.cfi_offset w19, -8
	.cfi_offset w20, -16
	cmp		w8, w0
	b.ge	LBB0_7
LBB0_3:                                 ; %for.body.lr.ph

Note the multiple .cfi_def* directives. Compact unwind info emission
can't handle that.

llvm-svn: 317726
2017-11-08 21:31:14 +00:00
Alex Bradbury fa18b9e73c Set hasSideEffects=0 for PHI and fix affected passes
Previously, hasSideEffects was ? for TargetOpcode::PHI and would be inferred 
as 1. D37065 sets the previously inferred properties explicitly. This patch sets 
hasSideEffects=0 for PHI, as it is for G_PHI. MachineInstr::isSafeToMove has 
been updated so it still returns false for PHI.

Additionally, HexagonBitSimplify relied on a PHI node having the 
hasUnmodeledSideEffects property. This patch fixes that assumption.

Differential Revision: https://reviews.llvm.org/D37097

llvm-svn: 317721
2017-11-08 20:19:16 +00:00
Craig Topper 78a770402a [X86] Correct the implementation of BEXTR load folding to use the shift as the parent node and pass a separate root.
We were calling tryFoldLoad with the 'and' node as both the root and the parent node of the load. But the parent of the load should be the shift that precedes the 'and', while the 'and' node is correctly the root node.

To fix this I had to make tryFoldLoad take a separate use and root input. I've added a convenience version with the old signature to avoid updating the other call sites.
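
A hedged sketch of the corrected call site (the operand names here are
assumptions for illustration):

  // The root (the AND) and the load's true parent (the shift) are now
  // passed separately.
  bool Folded = tryFoldLoad(Node /* root: the AND */,
                            Shift /* parent of the load */,
                            Input, Base, Scale, Index, Disp, Segment);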

llvm-svn: 317720
2017-11-08 20:17:33 +00:00
Sam Clegg 6368442fb7 [WebAssembly] Update test expectations
I believe these were fixed in rL317707

Differential Revision: https://reviews.llvm.org/D39813

llvm-svn: 317718
2017-11-08 20:14:06 +00:00
Craig Topper e6094f9bd9 [X86] Don't call validateInstruction from MatchAndEmitInstruction when MatchingInlineAsm is set. The MCInst won't be populated.
Without this we can't parse gather instructions in MS inline asm blocks. The validateInstruction function was introduced in r316700 to check gather constraints.

llvm-svn: 317713
2017-11-08 19:38:48 +00:00
Dan Gohman 0828ba1e1e [WebAssembly] Call signExtend to get sign extended register
Patch by Jatin Bhateja!

Differential Revision: https://reviews.llvm.org/D39529

llvm-svn: 317710
2017-11-08 19:24:21 +00:00
Dan Gohman b465aa0504 [WebAssembly] Revise the strategy for inline asm.
Previously, an "r" constraint would mean the compiler provides a value
on WebAssembly's operand stack. This was tricky to use properly,
particularly since it isn't possible to declare a new local from within
an inline asm string.

With this patch, "r" provides the value in a WebAssembly local, and the
local index is provided to the inline asm string. This requires inline
asm to use get_local and set_local to read the register. This does
potentially result in larger code size, however inline asm should
hopefully be quite rare in WebAssembly.

This also means that the "m" constraint can no longer be supported, as
WebAssembly has nothing like a "memory operand" that includes an
implicit get_local.
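
A hypothetical example under the new model (the function name and asm
body are illustrative, not from the patch):

  int add_asm(int a, int b) {
    int r;
    asm("get_local %1\n\t"
        "get_local %2\n\t"
        "i32.add\n\t"
        "set_local %0"
        : "=r"(r)            // "r" now names a WebAssembly local
        : "r"(a), "r"(b));
    return r;
  }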

This fixes PR34599 for the wasm32-unknown-unknown-wasm target (though
not for the ELF target).

llvm-svn: 317707
2017-11-08 19:18:08 +00:00
Alex Bradbury a337675cdb [RISCV] Initial support for function calls
Note that this is just enough for simple function call examples to generate 
working code. Support for varargs etc follows in future patches.

Differential Revision: https://reviews.llvm.org/D29936

llvm-svn: 317691
2017-11-08 13:41:21 +00:00
Alex Bradbury 74913e1c70 [RISCV] Codegen for conditional branches
A good portion of this patch is the extra functions that needed to be 
implemented to support the test case. e.g. storeRegToStackSlot, 
loadRegFromStackSlot, eliminateFrameIndex.

Setting ISD::BR_CC to Expand may appear non-obvious on an architecture with 
branch+cmp instructions. However, I found it much easier to deal with matching 
the expanded form.
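
A minimal sketch of that choice, as it would appear in the target's
lowering setup (hedged):

  // Expand BR_CC into brcond(setcc ...), which is easier to pattern-match
  // against RISC-V's compare-and-branch instructions.
  setOperationAction(ISD::BR_CC, MVT::i32, Expand);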

I had to change simm13_lsb0 and simm21_lsb0 to inherit from the 
Operand<OtherVT> class rather than Operand<i32> in order to keep tablegen 
happy. This isn't a big deal, but it does seem a shame to lose the uniformity 
across immediate types when there's not an obvious benefit (I'm hoping a 
tablegen expert will educate me on what I'm missing here!).

Differential Revision: https://reviews.llvm.org/D29935

llvm-svn: 317690
2017-11-08 13:31:40 +00:00
Alex Bradbury ec8aa91305 [RISCV] Codegen support for memory operations on global addresses
Differential Revision: https://reviews.llvm.org/D39103

llvm-svn: 317688
2017-11-08 13:24:21 +00:00
Alex Bradbury cfa6291bb1 [RISCV] Codegen support for memory operations
This required the implementation of RISCVTargetInstrInfo::copyPhysReg. Support
for lowering global addresses follows in the next patch.

Differential Revision: https://reviews.llvm.org/D29934

llvm-svn: 317685
2017-11-08 12:20:01 +00:00
Alex Bradbury 0f0e1b54f0 [RISCV] Codegen support for materializing constants
Differential Revision: https://reviews.llvm.org/D39101

llvm-svn: 317684
2017-11-08 12:02:22 +00:00
Simon Dardis 789f7ca265 [mips] Guard indirect and tailcall pseudo instructions correctly.
Previously these pseudo instructions were not guarded by ISA, so their
selection was dependent on the ordering of the entries in the DAG matcher.

Reviewers: atanasyan

Differential Revision: https://reviews.llvm.org/D39723

llvm-svn: 317681
2017-11-08 11:13:44 +00:00
Alex Bradbury cc988415fe [NFCI] Ensure TargetOpcode::* are compatible with guessInstructionProperties=0
rL162640 introduced CodeGenTarget::guessInstructionProperties. If a target 
sets guessInstructionProperties=0 in its FooInstrInfo, tablegen will error if 
it has to guess properties from patterns. Unfortunately, 
guessInstructionProperties=0 can't be used with current upstream LLVM as 
instructions in the TargetOpcode namespace are always included and sometimes 
have inferred properties for mayLoad, mayStore, and hasSideEffects. This patch 
provides the simplest possible fix to this problem, setting default values for 
these fields in the TargetOpcode scope. There is no intended functional 
change, as the explicitly set properties should match what was previously 
inferred. A number of the instructions had hasSideEffects=1 inferred 
unintentionally. This patch makes it explicit, while future patches (such as 
D37097) correct the property.

Differential Revision: https://reviews.llvm.org/D37065

llvm-svn: 317674
2017-11-08 09:26:06 +00:00
Craig Topper 65e6d0b758 [X86] Add patterns to fold EVEX store with EVEX encoded vcvtps2ph instructions. Remove bad pattern that had v4f32 vcvtps2ph storing 128 bits.
llvm-svn: 317662
2017-11-08 04:00:31 +00:00
Craig Topper b832ee68b4 [X86] Allow legacy vcvtps2ph intrinsics to select EVEX encoded instructions. Rely on EVEX->VEX to convert back.
Missed store folding opportunities will be fixed in a subsequent commit.

llvm-svn: 317661
2017-11-08 04:00:30 +00:00
David Blaikie 3f833edc7c Target/TargetInstrInfo.h -> CodeGen/TargetInstrInfo.h to match layering
This header includes CodeGen headers, and is not, itself, included by
any Target headers, so move it into CodeGen to match the layering of its
implementation.

llvm-svn: 317647
2017-11-08 01:01:31 +00:00
Matt Arsenault 4709ab9124 AMDGPU: Set correct sched model on v_mad_u64_u32
llvm-svn: 317645
2017-11-08 00:48:25 +00:00
Sriraman Tallam 056b3fd6fb Attribute nonlazybind should not affect calls to functions with hidden visibility.
Differential Revision: https://reviews.llvm.org/D39625

llvm-svn: 317639
2017-11-08 00:01:05 +00:00
Justin Lebar da9e0bd3a2 [NVPTX] Implement __nvvm_atom_add_gen_d builtin.
Summary:
This just seems to have been an oversight.  We already supported the f64
atomic add with an explicit scope (e.g. "cta"), but not the scopeless
version.
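
A hypothetical use from CUDA device code:

  __device__ double bump(double *p) {
    // Atomic f64 add through a generic pointer; returns the old value.
    return __nvvm_atom_add_gen_d(p, 1.0);
  }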

Reviewers: tra

Subscribers: jholewinski, sanjoy, cfe-commits, llvm-commits, hiraditya

Differential Revision: https://reviews.llvm.org/D39638

llvm-svn: 317623
2017-11-07 22:10:54 +00:00
Graham Yiu 5cd044e8c8 Use new vector insert half-word and byte instructions when we see insertelement on '8 x i16' and '16 x i8' types. Also extended existing lit testcase to cover these cases.
Differential Revision: https://reviews.llvm.org/D34630

llvm-svn: 317613
2017-11-07 20:55:43 +00:00
Krzysztof Parzyszek 385a4e0489 [Hexagon] Make a test more flexible in HexagonLoopIdiomRecognition
An "or" that sets the sign-bit can be replaced with a "xor", if
the sign-bit was known to be clear before. With some changes to
instruction combining, the simple sign-bit check was failing.
Replace it with a more flexible one to catch more cases.
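
For illustration (hypothetical values): when x's sign bit is known
clear, setting it and flipping it agree:

  uint32_t withOr  = x | 0x80000000u;
  uint32_t withXor = x ^ 0x80000000u; // equals withOr when (x >> 31) == 0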

llvm-svn: 317592
2017-11-07 17:05:54 +00:00
Florian Hahn b936810833 [AArch64][SVE] Asm: Add support for (ADD|SUB)_ZZZ
Patch [5/5] in a series to add assembler/disassembler support for AArch64 SVE unpredicated ADD/SUB instructions.

Patch by Sander De Smalen.

Reviewed by: rengolin

Differential Revision: https://reviews.llvm.org/D39091

llvm-svn: 317591
2017-11-07 16:58:13 +00:00
Florian Hahn 91f11e5ad1 [AArch64][SVE] Asm: Add SVE (Z) Register definitions and parsing support
Patch [3/5] in a series to add assembler/disassembler support for AArch64 SVE unpredicated ADD/SUB instructions.

To summarise, this patch adds:

 * SVE register definitions
 * Methods to parse SVE register operands
 * Methods to print SVE register operands
 * RegKind SVEDataVector to distinguish it from other data types like scalar register or Neon vector.
 * k_SVEDataRegister and SVEDataRegOp to describe SVE registers (which will be extended by further patches with e.g. ElementWidth and the shift-extend type).


Patch by Sander De Smalen.

Reviewed by: rengolin

Differential Revision: https://reviews.llvm.org/D39089

llvm-svn: 317590
2017-11-07 16:45:48 +00:00
Florian Hahn d825bbdc41 [AArch64][SVE] Asm: Set SVE as unsupported feature for existing scheduler models.
Patch [4/5] in a series to add assembler/disassembler support for AArch64 SVE unpredicated ADD/SUB instructions.

We add SVE as an unsupported feature for CPUs that don't have SVE, to prevent errors from scheduler models complaining that they lack information for these instructions.

Patch by Sander De Smalen.

Reviewed by: rengolin

Differential Revision: https://reviews.llvm.org/D39090

llvm-svn: 317582
2017-11-07 15:03:11 +00:00
Petar Jovanovic e2a585dddc Reland "Correct dwarf unwind information in function epilogue for X86"
Reland r317100 with a minor fix regarding the ComputeCommonTailLength function in
BranchFolding.cpp. Skipping the top block of CFI instructions needs to be executed
at several more return points in ComputeCommonTailLength().

Original r317100 message:

"Correct dwarf unwind information in function epilogue for X86"

This patch aims to provide correct dwarf unwind information in function
epilogue for X86.

It consists of two parts. The first part inserts CFI instructions that set
appropriate cfa offset and cfa register in emitEpilogue() in
X86FrameLowering. This part is X86 specific.

The second part is platform independent and ensures that:

- CFI instructions do not affect code generation
- Unwind information remains correct when a function is modified by
  different passes. This is done in a late pass by analyzing information
  about cfa offset and cfa register in BBs and inserting additional CFI
  directives where necessary.

Changed CFI instructions so that they:

- are duplicable
- are not counted as instructions when tail duplicating or tail merging
- can be compared as equal

Added CFIInstrInserter pass:

- analyzes each basic block to determine cfa offset and register valid at
  its entry and exit
- verifies that outgoing cfa offset and register of predecessor blocks match
  incoming values of their successors
- inserts additional CFI directives at basic block beginning to correct the
  rule for calculating CFA

Having CFI instructions in function epilogue can cause incorrect CFA
calculation rule for some basic blocks. This can happen if, due to basic
block reordering, or the existence of multiple epilogue blocks, some of the
blocks have wrong cfa offset and register values set by the epilogue block
above them.

CFIInstrInserter is currently run only on X86, but can be used by any target
that implements support for adding CFI instructions in epilogue.

Patch by Violeta Vukobrat.

llvm-svn: 317579
2017-11-07 14:40:27 +00:00
Alexey Bataev e25a6fd390 [SLP] Fix PR35047: Fix default cost model for cast op in X86.
Summary:
The cost calculation for the default case on the X86 target does not always
follow the correct path because of a missing 4th argument in the
`BaseT::getCastInstrCost()` call. Added this missing parameter.

Reviewers: hfinkel, mkuper, RKSimon, spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D39687

llvm-svn: 317576
2017-11-07 14:23:44 +00:00
Florian Hahn c4422247b3 [AArch64][SVE] Asm: Replace 'IsVector' by 'RegKind' in AArch64AsmParser (NFC)
Patch [2/5] in a series to add assembler/disassembler support for AArch64 SVE unpredicated ADD/SUB instructions.

This is a non-functional change that adds RegKind as an alternative to 'isVector', preparing it for newer types (SVE data vectors and predicate vectors) that will be added in subsequent patches (the SVE data vector is added as part of this patch set).

Patch by Sander De Smalen.

Reviewed by: rengolin

Differential Revision: https://reviews.llvm.org/D39088

llvm-svn: 317569
2017-11-07 13:07:50 +00:00
Kristof Beyls af9814a1fc [GlobalISel] Enable legalizing non-power-of-2 sized types.
This changes the interface of how targets describe how to legalize, see
the below description.

1. Interface for targets to describe how to legalize.

In GlobalISel, the API in the LegalizerInfo class is the main interface
for targets to specify which types are legal for which operations, and
what to do to turn illegal type/operation combinations into legal ones.

For each operation the type sizes that can be legalized without having
to change the size of the type are specified with a call to setAction.
This isn't different to how GlobalISel worked before. For example, for a
target that supports 32 and 64 bit adds natively:

  for (auto Ty : {s32, s64})
    setAction({G_ADD, 0, Ty}, Legal);

or for a target that needs a library call for a 32 bit division:

  setAction({G_SDIV, s32}, Libcall);

The main conceptual change to the LegalizerInfo API, is in specifying
how to legalize the type sizes for which a change of size is needed. For
example, in the above example, how to specify how all types from i1 to
i8388607 (apart from s32 and s64 which are legal) need to be legalized
and expressed in terms of operations on the available legal sizes
(again, i32 and i64 in this case). Before, the implementation only
allowed specifying power-of-2-sized types (e.g. setAction({G_ADD, 0,
s128}, NarrowScalar)).  A worse limitation was that if you'd wanted to
specify how to legalize all the sized types as allowed by the LLVM-IR
LangRef, i1 to i8388607, you'd have to call setAction 8388607-3 times
and probably would need a lot of memory to store all of these
specifications.

Instead, the legalization actions that need to change the size of the
type are specified now using a "SizeChangeStrategy".  For example:

   setLegalizeScalarToDifferentSizeStrategy(
       G_ADD, 0, widenToLargerAndNarrowToLargest);

This example indicates that for type sizes for which there is a larger
size that can be legalized towards, do it by Widening the size.
For example, G_ADD on s17 will be legalized by first doing WidenScalar
to make it s32, after which it's legal.
The "NarrowToLargest" indicates what to do if there is no larger size
that can be legalized towards. E.g. G_ADD on s92 will be legalized by
doing NarrowScalar to s64.

Another example, taken from the ARM backend is:
   for (unsigned Op : {G_SDIV, G_UDIV}) {
     setLegalizeScalarToDifferentSizeStrategy(Op, 0,
         widenToLargerTypesUnsupportedOtherwise);
     if (ST.hasDivideInARMMode())
       setAction({Op, s32}, Legal);
     else
       setAction({Op, s32}, Libcall);
   }

For this example, G_SDIV on s8, on a target without a divide
instruction, would be legalized by first doing action (WidenScalar,
s32), followed by (Libcall, s32).

The same principle is also followed for when the number of vector lanes
on vector data types need to be changed, e.g.:

   setAction({G_ADD, LLT::vector(8, 8)}, LegalizerInfo::Legal);
   setAction({G_ADD, LLT::vector(16, 8)}, LegalizerInfo::Legal);
   setAction({G_ADD, LLT::vector(4, 16)}, LegalizerInfo::Legal);
   setAction({G_ADD, LLT::vector(8, 16)}, LegalizerInfo::Legal);
   setAction({G_ADD, LLT::vector(2, 32)}, LegalizerInfo::Legal);
   setAction({G_ADD, LLT::vector(4, 32)}, LegalizerInfo::Legal);
   setLegalizeVectorElementToDifferentSizeStrategy(
       G_ADD, 0, widenToLargerTypesUnsupportedOtherwise);

As currently implemented here, vector types are legalized by first
making the vector element size legal, followed by then making the number
of lanes legal. The strategy to follow in the first step is set by a
call to setLegalizeVectorElementToDifferentSizeStrategy, see example
above.  The strategy followed in the second step
"moreToWiderTypesAndLessToWidest" (see code for its definition),
indicating that vectors are widened to more elements so they map to
natively supported vector widths, or when there isn't a legal wider
vector, split the vector to map it to the widest vector supported.

Therefore, for the above specification, some example legalizations are:
  * getAction({G_ADD, LLT::vector(3, 3)})
    returns {WidenScalar, LLT::vector(3, 8)}
  * getAction({G_ADD, LLT::vector(3, 8)})
    then returns {MoreElements, LLT::vector(8, 8)}
  * getAction({G_ADD, LLT::vector(20, 8)})
    returns {FewerElements, LLT::vector(16, 8)}


2. Key implementation aspects.

How to legalize a specific (operation, type index, size) tuple is
represented by mapping intervals of integers representing a range of
size types to an action to take, e.g.:

       setScalarAction({G_ADD, LLT:scalar(1)},
                       {{1, WidenScalar},  // bit sizes [ 1, 31[
                        {32, Legal},       // bit sizes [32, 33[
                        {33, WidenScalar}, // bit sizes [33, 64[
                        {64, Legal},       // bit sizes [64, 65[
                        {65, NarrowScalar} // bit sizes [65, +inf[
                       });

Please note that most of the code to do the actual lowering of
non-power-of-2 sized types is currently missing, this is just trying to
make it possible for targets to specify what is legal, and how non-legal
types should be legalized.  Probably quite a bit of further work is
needed in the actual legalizing and the other passes in GlobalISel to
support non-power-of-2 sized types.

I hope the documentation in LegalizerInfo.h and the examples provided in the
various {Target}LegalizerInfo.cpp and LegalizerInfoTest.cpp explains well
enough how this is meant to be used.

This drops the need for LLT::{half,double}...Size().


Differential Revision: https://reviews.llvm.org/D30529

llvm-svn: 317560
2017-11-07 10:34:34 +00:00
Bjorn Steinbrink c02b237e46 [X86] Don't clobber reserved registers with stack adjustments
Summary:
Calls using invoke in funclet based functions are assumed to clobber
all registers, which causes the stack adjustment using pops to consider
all registers not defined by the call to be undefined, which can
unfortunately include the base pointer, if one is needed.

To prevent this (and possibly other hazards), skip reserved registers
when looking for candidate registers.

This fixes issue #45034 in the Rust compiler.
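
A minimal sketch of the fix, with assumed names:

  for (MCPhysReg Candidate : Order) {
    if (MRI.isReserved(Candidate))
      continue; // never pop into a reserved register such as the base pointer
    // ... otherwise Candidate may receive a popped stack slot ...
  }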

Reviewers: mkuper

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D39636

llvm-svn: 317551
2017-11-07 08:50:21 +00:00
Craig Topper e7fb300226 [X86] Add patterns to fold a 64-bit load into the EVEX vcvtph2ps instructions.
llvm-svn: 317548
2017-11-07 07:13:07 +00:00
Craig Topper 0231b1d445 [X86] Add patterns for folding a v16i8 with the VEX vcvtph2ps intrinsics.
Disable the peephole pass to prove that the pattern is working.

llvm-svn: 317547
2017-11-07 07:13:06 +00:00
Craig Topper cf8e6d0a76 [X86] Add support for using EVEX instructions for the legacy vcvtph2ps intrinsics.
Looks like there's some missed load folding opportunities for i64 loads.

llvm-svn: 317544
2017-11-07 07:13:03 +00:00
Craig Topper afc3c8206e [X86] Use IMPLICIT_DEF in VEX/EVEX vcvtss2sd/vcvtsd2ss patterns instead of a COPY_TO_REGCLASS.
ExeDepsFix pass should take care of making the registers match.

llvm-svn: 317542
2017-11-07 04:44:22 +00:00
Craig Topper 4ad81b51ed [X86] Remove 'Requires' from instructions with no patterns. NFC
llvm-svn: 317541
2017-11-07 04:44:21 +00:00
Matt Arsenault 6119f80034 AMDGPU: Remove redundant combine
This combine was already done in two places. The
generic combiner has already done this since
r217610, for adds (with a single use).

This one was added in r303641, and added support for handling
or as well. r313251 later added support to the generic
combine for or. It also turns out the isOrEquivalentToAdd
check is not necessary for this combine.

Additionally, we already reproduce this combine in yet
another place in the backend, although in that version
multiple uses of the add are still folded if it will
allow a fold into the addressing mode. That version needs
to be improved to understand ors though, as well as the
correct legal offsets for private.

llvm-svn: 317526
2017-11-07 00:06:32 +00:00
Craig Topper 428a4e6374 [X86] Make FeatureAVX512 imply FeatureF16C.
The EVEX to VEX pass is already assuming this is true under AVX512VL. We had special patterns to use zmm instructions if VLX and F16C weren't available.

Instead just make AVX512 imply F16C to make the EVEX to VEX behavior explicitly legal and remove the extra patterns.

All known CPUs with AVX512 have F16C, so this should be safe for now.

llvm-svn: 317521
2017-11-06 22:49:04 +00:00