Commit Graph

1344 Commits

Author SHA1 Message Date
Sebastian Pop e08d9c7c87 instr-combiner: sum up all latencies of the transformed instructions
We have found that -- when the selected subarchitecture has a scheduling model
and we are not optimizing for size -- the machine-instruction combiner uses an
overly simple algorithm to compute the cost of one of the two alternatives [before
and after running a combining pass on a section of code], and therefore it throws
away the combination results too often.

This fix can help any ISA that is able to combine instructions and for which at
least one subarchitecture has a scheduling model. As of now, it is only known to
definitely affect AArch64 subarchitectures with a scheduling model.
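
A worked illustration with made-up latencies (not taken from any real
scheduling model): if the original sequence is a mul of latency 4 followed by
a dependent add of latency 1, and the combined madd has latency 4, then the
summed costs are 4 + 1 = 5 versus 4, so the combined sequence is kept.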

Regression tested on AMD64/GNU-Linux, new test case tested to fail on an
unpatched compiler and pass on a patched compiler.

Patch by Abe Skolnik and Sebastian Pop.

llvm-svn: 289399
2016-12-11 19:39:32 +00:00
Matthias Braun 2c7d52a540 Move .mir tests to appropriate directories
test/CodeGen/MIR should contain tests that intend to test the MIR
printing or parsing. Tests that test something else should be in
test/CodeGen/TargetName even when they are written in .mir.

As a rule of thumb, only tests using "llc -run-pass none" should be in
test/CodeGen/MIR.

llvm-svn: 289254
2016-12-09 19:08:15 +00:00
Nirav Dave bedb5d906c Revert "In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled."
This reverts commit r289221 which appears to be triggering an assertion

llvm-svn: 289226
2016-12-09 17:18:24 +00:00
Nirav Dave fd51ff4fd8 In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled.
Retrying after fixing overly aggressive load-store forwarding optimization.

Simplify Consecutive Merge Store Candidate Search

Now that address aliasing is much less conservative, push through a
simplified store-merging search which only checks for parallel stores
through the chain subgraph. This is cleaner, as it separates the
non-interfering loads/stores from the store-merging logic.

When merging stores, search up the chain through a single load, and
find all possible stores by looking down through a load and a
TokenFactor to all stores visited. This improves the quality of the
output SelectionDAG and generally the output CodeGen (with some
exceptions).
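
As a rough illustration (hypothetical IR, not taken from an actual test),
two parallel constant stores reachable through the chain subgraph, e.g.

  store i32 0, i32* %p
  store i32 0, i32* %q    ; %q is %p + 4 and provably does not alias %p

can now be found by the simplified search and merged into a single wider
zero store when the target allows it.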

Additional Minor Changes:

   1. Finishes removing unused AliasLoad code
   2. Unifies the chain aggregation in the merged stores across
      code paths
   3. Re-add the Store node to the worklist after calling
      SimplifyDemandedBits.
   4. Increase GatherAllAliasesMaxDepth from 6 to 18. That number is
      arbitrary, but seemed sufficient to not cause regressions in
      tests.

This finishes the change Matt Arsenault started in r246307 and
jyknight's original patch.

Many tests required some changes as memory operations are now
reorderable. Some tests relying on the order were changed to use
volatile memory operations.

Noteworthy tests:

    CodeGen/AArch64/argument-blocks.ll -
      It's not entirely clear what the test_varargs_stackalign test is
      supposed to be asserting, but the new code looks right.

    CodeGen/AArch64/arm64-memset-inline.ll -
    CodeGen/AArch64/arm64-stur.ll -
    CodeGen/ARM/memset-inline.ll -

      The backend now generates *worse* code due to store merging
      succeeding, as we don't do a 16-byte constant-zero store efficiently.

    CodeGen/AArch64/merge-store.ll -
      Improved, but there still seems to be an extraneous vector insert
      from an element to itself?

    CodeGen/PowerPC/ppc64-align-long-double.ll -
      Worse code emitted in this case, due to the improved store->load
      forwarding.

    CodeGen/X86/dag-merge-fast-accesses.ll -
    CodeGen/X86/MergeConsecutiveStores.ll -
    CodeGen/X86/stores-merging.ll -
    CodeGen/Mips/load-store-left-right.ll -
      Restored correct merging of non-aligned stores

    CodeGen/AMDGPU/promote-alloca-stored-pointer-value.ll -
      Improved. Correctly merges buffer_store_dword calls

    CodeGen/AMDGPU/si-triv-disjoint-mem-access.ll -
      Improved. Sidesteps loading a stored value and
      merges two stores

    CodeGen/X86/pr18023.ll -
      This test has been removed, as it was asserting incorrect
      behavior. Non-volatile stores *CAN* be moved past volatile loads,
      and now are.

    CodeGen/X86/vector-idiv.ll -
    CodeGen/X86/vector-lzcnt-128.ll -
      It's basically impossible to tell what these tests are actually
      testing. But it looks like the code got better due to the memory
      operations being recognized as non-aliasing.

    CodeGen/X86/win32-eh.ll -
      Both loads of the securitycookie are now merged.

Reviewers: arsenm, hfinkel, tstellarAMD, jyknight, nhaehnle

Subscribers: wdng, nhaehnle, nemanjai, arsenm, weimingz, niravd, RKSimon, aemerson, qcolombet, dsanders, resistor, tstellarAMD, t.p.northover, spatel

Differential Revision: https://reviews.llvm.org/D14834

llvm-svn: 289221
2016-12-09 16:15:12 +00:00
Tim Northover b58346f2f2 GlobalISel: fall back gracefully for debug intrinsics.
Supporting them properly is a reasonably complex chunk of work, so to allow bot
testing before then we should at least be able to fall back to DAG ISel.

llvm-svn: 289150
2016-12-08 22:44:13 +00:00
Matthias Braun 9ee1a1df24 The few days mentioned in r267095 are over
llvm-svn: 289004
2016-12-08 00:16:42 +00:00
Tim Northover c53606ef02 GlobalISel: use correct builder for ConstantExprs.
ConstantExpr instances were emitting code into the current block rather than
the entry block. This meant they didn't necessarily dominate all uses, which is
clearly wrong.

llvm-svn: 288985
2016-12-07 21:29:15 +00:00
Tim Northover 05cc4859ad GlobalISel: simplify MachineIRBuilder interface.
MachineIRBuilder had weird before/after and beginning/end flags for the insert
point. Unfortunately the non-default settings mean that instructions will be
inserted in reverse order, which is almost never what anyone wants.

Really, I think we just want (like IRBuilder has) the ability to insert at any
C++ iterator-style point (i.e. before any instruction or before MBB.end()). So
this fixes MIRBuilders to behave like IRBuilders in this respect.

llvm-svn: 288980
2016-12-07 21:05:38 +00:00
Tim Northover 14ceb45fb4 GlobalISel: correctly handle small args via memory.
We were rounding size in bits down rather than up, leading to 0-sized slots for
i1 (assert!) and bugs for other types not byte-aligned.
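
For example, an i1 argument is 1 bit wide: rounding down gives 1 / 8 = 0
bytes (hence the zero-sized slot and the assert), while rounding up,
(1 + 7) / 8, gives the expected 1-byte slot.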

llvm-svn: 288848
2016-12-06 21:02:19 +00:00
Tim Northover 0a683e7bfd GlobalISel: fall back gracefully when we hit unhandled legalizer default.
llvm-svn: 288840
2016-12-06 19:02:15 +00:00
Tim Northover c1a23854f3 GlobalISel: handle G_SEQUENCE fallbacks gracefully.
There were two problems:
  + AArch64 was reusing random data from its binary op tables, which is
    complete nonsense for G_SEQUENCE.
  + Even when AArch64 gave up and said it couldn't handle G_SEQUENCE,
    the generic code asserted.

llvm-svn: 288836
2016-12-06 18:38:38 +00:00
Tim Northover f50f2f3d32 GlobalISel: allow G_SELECT instructions for pointers.
llvm-svn: 288835
2016-12-06 18:38:34 +00:00
Tim Northover 405e25cd6a GlobalISel: stop the legalizer from trying to handle oddly-sized types.
It'll almost immediately fail because it always tries to halve/double the size
until it finds a legal one. Unfortunately, this triggers an assertion
preventing the DAG fallback from being possible.

llvm-svn: 288834
2016-12-06 18:38:29 +00:00
Tim Northover 800638fd67 GlobalISel: avoid looking too closely at PHIs when we bail.
The function used to finish off PHIs by adding the relevant basic blocks can
fail if we're aborting and still don't actually have the needed
MachineBasicBlocks. So avoid trying in that case.

llvm-svn: 288727
2016-12-05 23:10:19 +00:00
Tim Northover b566848d68 GlobalISel: place constants correctly in the entry block.
When the entry block was empty after arg lowering, we were always placing
constants at the end. This is probably harmless while translating the same
block, but horribly wrong once its terminator has been translated. So switch to
inserting at the beginning.

llvm-svn: 288720
2016-12-05 22:40:13 +00:00
Tim Northover c0bd197c6b GlobalISel: handle pointer arguments that get assigned to the stack.
llvm-svn: 288717
2016-12-05 22:20:32 +00:00
Tim Northover cc35f90492 GlobalISel: translate constants larger than 64 bits.
llvm-svn: 288713
2016-12-05 21:54:17 +00:00
Tim Northover 9267ac5d47 GlobalISel: make G_CONSTANT take a ConstantInt rather than int64_t.
This makes it more similar to the floating-point constant, and also allows for
larger constants to be translated later. There's no real functional change in
this patch though, just syntax updates.
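
A sketch of the resulting MIR (syntax approximate):

  %0(s32) = G_CONSTANT 42       ; before: raw int64_t payload
  %0(s32) = G_CONSTANT i32 42   ; after: the ConstantInt carries its own type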

llvm-svn: 288712
2016-12-05 21:47:07 +00:00
Tim Northover 6ad7b9f837 GlobalISel: improve translation fallback for constants.
Returning 0 (NoReg) from getOrCreateVReg leads to unexpected situations later
in the translation. It's better to return a valid (if undefined) register and
let the rest of the instruction carry on as planned.

llvm-svn: 288709
2016-12-05 21:40:33 +00:00
Tim Northover d1fd383b28 GlobalISel: handle 1-element aggregates during ABI lowering.
llvm-svn: 288706
2016-12-05 21:25:33 +00:00
Matthias Braun a39c2ca44e testcase only works in a debug build
llvm-svn: 288567
2016-12-03 01:42:32 +00:00
Matthias Braun 1fbb0f6dd9 AArch64CollectLOH: Rewrite as block-local analysis.
Previously this pass was using up to 5% of compile time in some cases, which
is a bit much for what it is doing. The pass featured a full-blown
data-flow analysis which, in the default configuration, was restricted to a
single block.

This rewrites the pass under the assumption that we only ever work on a
single block. This is done in a single pass maintaining a state machine
per general purpose register to catch LOH patterns.

Differential Revision: https://reviews.llvm.org/D27329

llvm-svn: 288561
2016-12-03 00:52:56 +00:00
Geoff Berry 7ffce7be0c [AArch64] Fold more spilled/refilled COPYs.
Summary:
Make AArch64InstrInfo::foldMemoryOperandImpl more general by folding all
full COPYs between register classes of the same size that are either
spilled or refilled.
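
A hypothetical sketch of the kind of fold this enables (register names and
classes invented for illustration): when spilling

  %vreg0 = COPY %vreg1 ; %vreg0:FPR64, %vreg1:GPR64
  STR %vreg0

the spill can now store the source register directly:

  STR %vreg1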

Reviewers: MatzeB, qcolombet

Subscribers: aemerson, rengolin, mcrosier, llvm-commits

Differential Revision: https://reviews.llvm.org/D27271

llvm-svn: 288439
2016-12-01 23:43:55 +00:00
Matthias Braun 709a4cc238 RegisterCoalescer: Only coalesce complete reserved registers.
The coalescer eliminates copies from reserved registers of the form:
   %vregX = COPY %rY
in the case where %rY is a reserved register. However this turns out to
be invalid if only some of the subregisters are reserved (see also
https://reviews.llvm.org/D26648).

Differential Revision: https://reviews.llvm.org/D26687

llvm-svn: 288428
2016-12-01 22:39:51 +00:00
Tim Northover 5bb87b6769 AArch64: fix 128-bit cmpxchg at -O0 (again, again).
This time the issue is fortunately just a simple mistake rather than a horrible
design spectre. I thought SUBS/SBCS provided sufficient NZCV flags for
comparing two 64-bit values, but they don't.

The fix is slightly clunkier in AArch64 because we can't use conditional
execution to emit a pair of CMPs. Traditionally an "icmp ne i128" would map to
an EOR/EOR/ORR/CBNZ, but that uses more registers so it's easier to go with a
CSET/CINC/CBNZ combination. Slightly less efficient, but this is -O0 anyway.
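
One plausible shape of the comparison (illustrative only; the exact register
assignment will differ):

  cmp   x0, x2
  cset  w8, ne
  cmp   x1, x3
  cinc  w8, w8, ne
  cbnz  w8, .Lmismatch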

Thanks to Anton Korobeynikov for pointing out the issue.

llvm-svn: 288418
2016-12-01 21:31:59 +00:00
Matthias Braun 39c3c89cdc MCStreamer: Use "cfi" for CFI related temp labels.
Choosing a "cfi" name makes the intend a bit clearer in an assembly dump
and more importantly the assembly dumps are slightly more stable as the
numbers don't move around anymore when unrelated code calls
createTempSymbol() more or less often.
As they are temp labels the name doesn't influence the generated object
code.

Differential Revision: https://reviews.llvm.org/D27244

llvm-svn: 288290
2016-11-30 23:48:26 +00:00
Silviu Baranga aab65b155e [AArch64] Fix useful bits detection for BFM instructions
Summary:
When computing useful bits for a BFM instruction, we need
to take into consideration the case where both operands
of the BFM are equal and provide data that we need to track.

Not doing this can cause us to miss useful bits.
    
Fixes PR31138 (https://llvm.org/bugs/show_bug.cgi?id=31138)

Reviewers: t.p.northover, jmolloy

Subscribers: evandro, gberry, srhines, pirama, mcrosier, aemerson, llvm-commits, rengolin

Differential Revision: https://reviews.llvm.org/D27130

llvm-svn: 288253
2016-11-30 17:04:22 +00:00
Sanjay Patel 6f52fe9c8b [AArch64] use exact checks; NFC
llvm-svn: 288245
2016-11-30 15:00:43 +00:00
Sanjay Patel 47f7f30df9 [AArch64] allow and-not-compare transform to form 'bics'
This target hook was added with D19087:
https://reviews.llvm.org/D19087
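
Roughly, this lets a sequence like (illustrative; the pre-transform code
depends on the surrounding DAG)

  bic  w8, w0, w1
  cmp  w8, #0

be emitted as a single flag-setting instruction:

  bics wzr, w0, w1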

Differential Revision: https://reviews.llvm.org/D27221

llvm-svn: 288206
2016-11-29 22:28:58 +00:00
Sanjay Patel 09c5630818 [AArch64] add tests for bics; NFC
llvm-svn: 288183
2016-11-29 19:15:27 +00:00
Sanjay Patel 183f90ad04 [AArch64] add tests to show select transforms; NFC
llvm-svn: 288180
2016-11-29 18:35:04 +00:00
Geoff Berry 7c078fc035 [AArch64] Fold spills of COPY of WZR/XZR
Summary:
In AArch64InstrInfo::foldMemoryOperandImpl, catch more cases where the
COPY being spilled is copying from WZR/XZR, but the source register is
not in the COPY destination register's regclass.

For example, when spilling:

  %vreg0 = COPY %XZR ; %vreg0:GPR64common

without this change, the code in TargetInstrInfo::foldMemoryOperand()
and canFoldCopy() that normally handles cases like this would fail to
optimize since %XZR is not in GPR64common.  So the spill code generated
would be:

  %vreg0 = COPY %XZR
  STR %vreg0

instead of the new code generated:

  STR %XZR

Reviewers: qcolombet, MatzeB

Subscribers: mcrosier, aemerson, t.p.northover, llvm-commits, rengolin

Differential Revision: https://reviews.llvm.org/D26976

llvm-svn: 288176
2016-11-29 18:28:32 +00:00
John Brawn 150addb45c [DAGCombiner] Fix infinite loop in vector mul/shl combining
We have the following DAGCombiner transformations:
 (mul (shl X, c1), c2) -> (mul X, c2 << c1)
 (mul (shl X, C), Y) -> (shl (mul X, Y), C)
 (shl (mul x, c1), c2) -> (mul x, c1 << c2)
Usually the constant shift is optimised by SelectionDAG::getNode when it is
constructed, via SelectionDAG::FoldConstantArithmetic, but when we're dealing
with vectors and one of those vector constants contains an undef element,
FoldConstantArithmetic does not fold and we enter an infinite loop.
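
A hypothetical IR shape that triggers the loop (constants chosen purely for
illustration):

  %s = shl <2 x i32> %x, <i32 1, i32 undef>
  %m = mul <2 x i32> %s, <i32 4, i32 4>

Because the shift amount contains an undef lane, the shifted constant never
folds, so the combiner keeps flipping between the mul/shl forms.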

Fix this by making FoldConstantArithmetic use getNode to decide how to fold each
vector element, the same as FoldConstantVectorArithmetic does, and, rather than
adding the constant shift to the work list, only apply the transformation if it
has already been folded into a constant, since if it hasn't we're going to loop
endlessly. Additionally, add the missing NoOpaques to one of those
transformations, which I noticed while writing the tests for this.

Differential Revision: https://reviews.llvm.org/D26605

llvm-svn: 287766
2016-11-23 16:05:51 +00:00
Benjamin Kramer 68dd881697 Adjust arm64-irtranslator.ll test to changes from r287368
The test is currently broken, and this CL should fix it.

Patch by Adrian Kuegel!

Differential Revision: https://reviews.llvm.org/D26910

llvm-svn: 287536
2016-11-21 13:15:38 +00:00
Dean Michael Berris 31761f300d [XRay][AArch64] Implemented a test for the compile-time sleds emitted, and fixed a bug in the jump instruction
This patch adds a test for the assembly code emitted with XRay
instrumentation. It also fixes a bug: the operand of a jump instruction
must be the number of 4-byte instructions to jump over, not the number of
bytes.
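
For example (sizes purely illustrative), to skip a 32-byte sled the branch
operand must encode 8 (32 / 4 instructions), not 32.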

Author: rSerge

Reviewers: dberris, rengolin

Differential Revision: https://reviews.llvm.org/D26805

llvm-svn: 287516
2016-11-21 03:01:43 +00:00
Tom Stellard df613198c0 GlobalISel: Fix unconditional fallback when global isel abort is disabled
Reviewers: t.p.northover, ab, qcolombet

Subscribers: mehdi_amini, vkalintiris, wdng, dberris, llvm-commits, rovka

Differential Revision: https://reviews.llvm.org/D26765

llvm-svn: 287344
2016-11-18 14:14:35 +00:00
Geoff Berry 8301c645c8 [AArch64] Handle vector types in replaceZeroVectorStore.
Summary:
Extend replaceZeroVectorStore to handle more vector type stores and
floating point zero vectors, and set alignment more accurately on split
stores.

This is a follow-up change to r286875.

This change fixes PR31038.

Reviewers: MatzeB

Subscribers: mcrosier, aemerson, llvm-commits, rengolin

Differential Revision: https://reviews.llvm.org/D26682

llvm-svn: 287142
2016-11-16 19:35:19 +00:00
Matthias Braun 3d51cf0a2c AArch64: Use DeadRegisterDefinitionsPass before regalloc.
Doing this before register allocation reduces register pressure as we do
not even have to allocate a register for those dead definitions.
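
A sketch of the effect (illustrative): an instruction whose integer result is
dead can write the zero register instead of tying up a GPR, e.g.

  subs xzr, x0, x1    ; instead of: subs x8, x0, x1 with x8 dead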

Differential Revision: https://reviews.llvm.org/D26111

llvm-svn: 287076
2016-11-16 03:38:27 +00:00
Chad Rosier 201fc1ed26 [AArch64] Add support for Qualcomm's Falkor CPU.
Differential Revision: https://reviews.llvm.org/D26673

llvm-svn: 287036
2016-11-15 21:34:12 +00:00
Haicheng Wu faee2b71a7 [AArch64] Lower multiplication by a constant int to shl+add+shl
Lower a = b * C where C = (2^n + 1) * 2^m to

add     w0, w0, w0, lsl n
lsl     w0, w0, m
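
For example, with C = 40 = (2^2 + 1) * 2^3, i.e. n = 2 and m = 3:

  add     w0, w0, w0, lsl 2    ; w0 = b * 5
  lsl     w0, w0, 3            ; w0 = b * 40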

Differential Revision: https://reviews.llvm.org/D229245

llvm-svn: 287019
2016-11-15 20:16:48 +00:00
Evandro Menezes 9fc54826e0 [AArch64] Compute the Newton series for reciprocals natively
Implement the Newton series for the square root, its reciprocal, and the
reciprocal natively, using the specialized instructions in AArch64 to perform
each series iteration.
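
For reference, the underlying Newton-Raphson steps are the standard ones
(the mapping to FRECPE/FRECPS and FRSQRTE/FRSQRTS is my reading of the ISA,
not something this commit spells out):

  reciprocal:             x(n+1) = x(n) * (2 - a * x(n))
  reciprocal square root: x(n+1) = x(n) * (3 - a * x(n)^2) / 2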

Differential revision: https://reviews.llvm.org/D26518

llvm-svn: 286907
2016-11-14 23:29:01 +00:00
Tim Northover e33b175411 GlobalISel: add tests for G_ZEXT/G_SEXT to types smaller than 32-bits.
Support was accidentally added in r286407, but there were no tests at the time.

llvm-svn: 286903
2016-11-14 22:50:22 +00:00
Geoff Berry 526c50588d [AArch64] Split 0 vector stores into scalar store pairs.
Summary:
Replace a vector store of a splat of zeros with scalar stores of WZR/XZR.
The load/store optimizer pass will then merge them into store pairs.
This should be better than a movi to create the vector zero followed by
a vector store if the zero constant is not re-used, since one
instruction and one register live range will be removed.

For example, the final generated code should be:

  stp xzr, xzr, [x0]

instead of:

  movi v0.2d, #0
  str q0, [x0]

Reviewers: t.p.northover, mcrosier, MatzeB, jmolloy

Subscribers: aemerson, rengolin, llvm-commits

Differential Revision: https://reviews.llvm.org/D26561

llvm-svn: 286875
2016-11-14 19:39:04 +00:00
Chad Rosier 811e76dbcd [AArch64] Add test to show narrow zero store merging is disabled with strict align. NFC.
llvm-svn: 286617
2016-11-11 19:25:48 +00:00
Geoff Berry 25fa4999ff [AArch64] Fix bugs in isel lowering replaceSplatVectorStore.
Summary:
Fix an off-by-one indexing error in the loop checking that the inserted value
was a splat vector.

Add code to check that INSERT_VECTOR_ELT nodes constructing the splat
vector have the expected constant index values.

Reviewers: t.p.northover, jmolloy, mcrosier

Subscribers: aemerson, llvm-commits, rengolin

Differential Revision: https://reviews.llvm.org/D26409

llvm-svn: 286616
2016-11-11 19:25:20 +00:00
Adrian Prantl 554fd99dd5 Revert "Use private linkage for MergedGlobals variables" on Darwin.
This is a partial revert of r244615 (http://reviews.llvm.org/D11942),
which caused a major regression in debug info quality.

Turning the artificial __MergedGlobal symbols into private symbols
(l__MergedGlobal) means that the linker will not include them in the
symbol table of the final executable. Without a symbol table entry
dsymutil is not able to process the debug info for any of the
merged globals and thus drops the debug info for all of them.

This patch re-enables the old behavior for all MachO targets while
leaving all other targets unaffected.

rdar://problem/29160481
https://reviews.llvm.org/D26531

llvm-svn: 286607
2016-11-11 17:50:09 +00:00
Chad Rosier 10c7aaaee9 [AArch64] Enable merging of adjacent zero stores for all subtargets.
This optimization merges adjacent zero stores into a wider store.

e.g.,

strh wzr, [x0]
strh wzr, [x0, #2]
; becomes
str wzr, [x0]

e.g.,

str wzr, [x0]
str wzr, [x0, #4]
; becomes
str xzr, [x0]

Previously, this was only enabled for Kryo and Cortex-A57.

Differential Revision: https://reviews.llvm.org/D26396

llvm-svn: 286592
2016-11-11 14:10:12 +00:00
Matthias Braun 325cd2c98a ScheduleDAGInstrs: Add condjump deps to addSchedBarrierDeps()
addSchedBarrierDeps() is supposed to add use operands to the ExitSU
node. The current implementation adds uses for call/barrier instructions
and the MBB live-outs in all other cases. The use
operands of conditional jump instructions were missed.

Also added code to macrofusion to set the latencies between nodes to
zero to avoid problems with the fusing nodes lingering around in the
pending list now.

Differential Revision: https://reviews.llvm.org/D25140

llvm-svn: 286544
2016-11-11 01:34:21 +00:00
Matthias Braun f29b12dca8 ScheduleDAGInstrs: Ignore dependencies of constant physregs
There is no need to track dependencies for constant physregs, as they
don't change their value no matter in what order you read/write to them.

Differential Revision: https://reviews.llvm.org/D26221

llvm-svn: 286526
2016-11-10 23:46:44 +00:00
Matthias Braun 9d62c5571b RegisterCoalescer: Ignore interferences for constant physregs
When copying to/from a constant register, interferences can be ignored.

Also update the documentation for isConstantPhysReg() to make it more
obvious that this transformation is valid.

Differential Revision: https://reviews.llvm.org/D26106

llvm-svn: 286503
2016-11-10 21:22:47 +00:00